request rate limiting with HAProxy vs Nginx using Chef Solo
When the consumer base of an API or web site grows, the number of potential abusers eventually grows with it. Whether intentional or not, abuse can cause problems for legitimate consumers by slowing down performance or even taking down the servers. For a web site it's much easier to predict an absolute maximum requests per second/minute threshold: it wouldn't make sense for someone to load a page more than 2-3 times a second, even counting accidental refresh hits (F5). I'm speaking here about page views, not the concurrent calls to the backend (CSS files, JavaScript, multiple sections loaded from different paths using jQuery).
This is much harder to do for APIs, since consumers might be proxying their requests. In such a scenario it might be worth using the X-Forwarded-For header to get the client IP, white-listing legitimate consumers (this might even be necessary in the web site scenario for companies that proxy internet traffic for their employees), and delaying or rejecting requests that exceed the thresholds. Believe it or not, all of this is fairly easy to set up and comes at no extra cost if you are already using HAProxy or Nginx.
Companies that use HAProxy can be found here and here.
Companies that use Nginx can be found here and here, and here is how Nginx competes amongst other web servers.
To demonstrate the use of HAProxy and Nginx I'm using:
- vagrant
- Chef cookbooks for HAProxy (haproxy-1.5-dev19; the original can be found here) and Nginx
Although haproxy-1.5-dev19 is still in development, it is already used by major companies; some of them maintain their own branches to keep it in line with their upgrade policies.
The repository with all the infrastructure setup can be found at https://github.com/uldissturms/request-rate-limit.
In case you don't have the omnibus Vagrant plugin installed already, run:
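```
# installs Chef on the guest machines during provisioning
vagrant plugin install vagrant-omnibus
```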
Running `vagrant up` will then bring up two machines:
- 10.0.0.100 HAProxy
- 10.0.0.101 Nginx
It also exposes HAProxy's port 80 on host port 8081, so the web site can be accessed through http://localhost:8081/.
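For reference, a minimal sketch of what the Vagrantfile behind this looks like (the box name is an assumption and the Chef Solo provisioning blocks are omitted; the real file is in the repository above):

```
Vagrant.configure('2') do |config|
  # vagrant-omnibus installs Chef on the VMs before provisioning runs
  config.omnibus.chef_version = :latest

  config.vm.define :haproxy do |haproxy|
    haproxy.vm.box = 'precise64'
    haproxy.vm.network :private_network, ip: '10.0.0.100'
    # expose HAProxy's port 80 to the host as 8081
    haproxy.vm.network :forwarded_port, guest: 80, host: 8081
  end

  config.vm.define :nginx do |nginx|
    nginx.vm.box = 'precise64'
    nginx.vm.network :private_network, ip: '10.0.0.101'
  end
end
```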
Let's start by testing HAProxy. To test the performance of the web site we will use ApacheBench.
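Assuming a Debian-based box (the request count is arbitrary):

```
# ab ships with the apache2-utils package
sudo apt-get install -y apache2-utils
ab -n 10000 -c 10 http://localhost:8081/
```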
This will install ApacheBench and run it against our web server with 10 concurrent connections. Let's go ahead and try to open up an 11th one.
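Same benchmark, one extra concurrent connection:

```
ab -n 10000 -c 11 http://localhost:8081/
```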
Next, the HAProxy configuration is changed to cap each client at 10 concurrent connections.
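The exact rules live in the repository's cookbook; a minimal sketch of the idea (the frontend and backend names are placeholders):

```
frontend www
    bind *:80
    # track concurrent connections per source IP
    stick-table type ip size 200k expire 60s store conn_cur
    tcp-request connection track-sc1 src
    # reject anything beyond 10 concurrent connections from one IP
    tcp-request connection reject if { sc1_conn_cur gt 10 }
    default_backend web
```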
After the changes, when running in parallel, the 11th connection is immediately dropped.
Settings that prevent a client from holding a connection open for too long can also be applied.
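For instance (the 3-second value matches what we observe below):

```
defaults
    mode http
    # close connections from clients that stay idle for too long
    timeout client 3s
    # cut off clients that are slow to send a complete request
    timeout http-request 3s
```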
Once the 3 seconds have passed, we notice the connection being closed by the proxy.
Bursts can be used instead of dropping requests, so that consumers experience a slowdown instead of a service failure. While this is a great option to consider, it might also hide problems: if legitimate consumers aren't familiar with the burst set-up, API misuse will not result in an HTTP error. Monitoring and a close look should be applied.
Bursts can be set up as in this gist: https://gist.github.com/dsuch/5872245
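One way to slow abusers down rather than reject them is HAProxy's content inspection delay; a sketch of that approach (the threshold and delay are arbitrary, and not necessarily what the gist uses):

```
frontend www
    bind *:80
    # request rate per source IP over the last 10 seconds
    stick-table type ip size 200k expire 30s store http_req_rate(10s)
    tcp-request inspect-delay 3s
    tcp-request content track-sc1 src
    acl abuser sc1_http_req_rate gt 10
    # clients under the threshold are accepted immediately
    tcp-request content accept if !abuser
    # abusers are held until the inspect-delay expires, then served
    tcp-request content accept if WAIT_END
    default_backend web
```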
White-listing can be applied using the /usr/local/etc/whitelist.lst file.
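In the frontend, before any limiting rules, something along these lines (the file holds one IP or CIDR per line):

```
acl whitelisted src -f /usr/local/etc/whitelist.lst
# white-listed clients bypass the rate-limiting rules below
tcp-request connection accept if whitelisted
```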
Request rate limiting in Nginx is achieved using the limit_req module.
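A minimal sketch (the zone name, size and rate are assumptions):

```
http {
    # one shared zone keyed by client IP: 10MB of state, 1 request per second
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        listen 80;
        location / {
            # queue short bursts of up to 5 requests, reject the rest
            limit_req zone=one burst=5;
        }
    }
}
```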
White-listing can be achieved with the geo module. Let's white-list localhost and execute a request from the Nginx server itself.
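The usual pattern is to map white-listed addresses to an empty limiting key, which makes limit_req skip them (the zone name and rate are again assumptions):

```
# 0 = white-listed, 1 = subject to rate limiting
geo $limited {
    default   1;
    127.0.0.1 0;
}

# requests with an empty key are not counted against the zone
map $limited $limit_key {
    1 $binary_remote_addr;
    0 "";
}

limit_req_zone $limit_key zone=one:10m rate=1r/s;
```

With this in place, a curl to http://localhost/ from the Nginx box itself goes through unthrottled.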
You can specify nodelay to reject excessive requests immediately instead of throttling (queueing) them.
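That is just an extra flag on the same directive:

```
# burst slots are served immediately; anything above them gets rejected
limit_req zone=one burst=5 nodelay;
```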
Things I liked about HAProxy:
- endless options to extend the rules for request limiting - even HTTP status codes returned by the web server can be taken into account; let's say we want to restrict users that scrape our service by violently incrementing identifiers and fetching content - these will probably result in loads of 404s
- built-in user interface for monitoring server health
- already the top-most element in most architectures
- goes beyond HTTP and can load-balance any TCP service
- used by Stack Overflow and Twitter
Things I liked about Nginx:
- available through package manager
- easy to get started
- one of the most popular web servers
- used by Netflix, GitHub and Facebook
Things that would be interesting to try:
- load the logs from HAProxy into Elasticsearch using Logstash and see in Kibana how excessive traffic gets cut off
- set up Heartbeat for the HAProxy server and introduce a second web server - basically reduce SPOFs across the whole stack
- push the concurrent user count to its maximum
References:
- http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
- http://blog.serverfault.com/2010/08/26/1016491873/
- http://rohitishere1.github.io/2013/06/27/rate-limit-per-ip---nginx/