In short, a hit rate above 10000 hits/s is sustained for objects
smaller than 6 kB, and Gigabit/s rates are sustained for
objects larger than 40 kB.
In production, HAProxy has been installed several times as an emergency solution
when very expensive, high-end hardware load balancers suddenly failed on Layer 7
processing. Hardware load balancers process requests at the
packet level and have great difficulty supporting requests
split across multiple packets, as well as high response
times, because they do no buffering at all. Software
load balancers, on the other hand, use TCP buffering
and are insensitive to long requests and high response times. A
nice side effect of HTTP buffering is that it
increases the server's connection acceptance rate by reducing
session duration, which leaves room for new requests. New
benchmarks will be executed soon, and results will be
published. Depending on the hardware, expected rates are in the order of a few
tens of thousands of new connections/s with tens of thousands of simultaneous
connections.
There are 3 important factors used to measure a load balancer's performance :
- The session rate
This factor is very important, because it directly determines when the load
balancer will not be able to distribute all the requests it receives. It is
mostly dependent on the CPU.
Sometimes you will hear about requests/s or hits/s; these are the same as
sessions/s in HTTP/1.0, or in HTTP/1.1 with
keep-alive disabled. Requests/s with keep-alive enabled is not a meaningful
metric on its own, because keep-alive very often has to be disabled to offload
the servers under very high loads. This factor is
measured with varying object sizes, the fastest results generally coming from
empty objects (eg: HTTP 302, 304 or 404 response codes).
Session rates above 20000 sessions/s can be achieved on
Dual Opteron systems such as HP-DL145 running a carefully
patched Linux-2.4 kernel. Even Sun's cheapest X2100-M2 achieves 25000 sessions/s in a dual-core 1.8 GHz configuration.
- The session concurrency
This factor is tied to the previous one. Generally, the session rate
will drop when the number of concurrent sessions increases (except with the
epoll polling mechanism). The slower the servers, the higher
the number of concurrent sessions for the same session rate. If a load balancer
receives 10000 sessions per second and the servers respond in 100 ms, then the
load balancer will have 1000 concurrent sessions. This number is limited by the
amount of memory and the amount of file-descriptors the system can
handle. With 8 kB buffers, HAProxy will need about 16 kB per session, which
results in around 60000 sessions per GB of RAM. In practice, socket
buffers in the system also need some memory, and 20000 sessions per GB of RAM is
more reasonable. Layer 4 load balancers generally announce millions of
simultaneous sessions because they don't process any data so they don't need
any buffers. Moreover, they are sometimes designed to be used in Direct Server
Return mode, in which the load balancer only sees forward traffic, which
forces it to keep track of sessions long after their end to avoid cutting
them before they are closed.
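The arithmetic above follows directly from Little's law and the buffer sizes; a quick sketch, reusing the document's own example figures:

```python
# Back-of-the-envelope sizing, using the figures from the text above.
def concurrent_sessions(rate_per_s, response_time_s):
    """Little's law: concurrency = arrival rate x mean residence time."""
    return rate_per_s * response_time_s

def sessions_per_gb(buffer_kb=8):
    """HAProxy needs two buffers (request + response) per session."""
    per_session_kb = 2 * buffer_kb           # ~16 kB with 8 kB buffers
    return (1024 * 1024) // per_session_kb   # kB in a GB / kB per session

print(concurrent_sessions(10000, 0.100))   # 10000/s at 100 ms -> 1000.0
print(sessions_per_gb())                   # 65536, i.e. "around 60000"
```

As the text notes, system socket buffers eat into this theoretical figure, which is why 20000 sessions per GB is the practical estimate.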
- The data rate
This factor is generally inversely related to the session rate. It is measured
in Megabytes/s (MB/s), or sometimes in Megabits/s (Mbps). Highest data rates
are achieved with large objects to minimise the overhead caused by session
setup and teardown. Large objects generally increase session concurrency, and
high session concurrency with high data rate requires large amounts of memory
to support large windows. High data rates burn a lot of CPU and bus cycles on
software load balancers because the data has to be copied from the input
interface to memory and then back to the output device. Hardware load balancers
tend to directly switch packets from input port to output port for higher data
rate, but cannot process them and sometimes fail to touch a header or a cookie.
For reference, the Dual Opteron systems described above can saturate 2
Gigabit Ethernet links on large objects, and I know people who constantly
run between 3 and 4 Gbps of real traffic on 10-Gig NICs plugged into quad-core
servers.
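The memory cost of large windows can be sketched as follows; the 64 kB window size is an assumed figure for illustration, not a measured one:

```python
# Hypothetical sketch: memory consumed by socket buffers alone when large
# TCP windows are needed. The 64 kB per-direction window is an assumption.
def window_memory_gb(concurrent, window_kb=64):
    total_kb = concurrent * 2 * window_kb    # one window per direction
    return total_kb / (1024 * 1024)          # kB -> GB

print(round(window_memory_gb(20000), 2))   # 20000 sessions -> 2.44 GB
```

This is why high concurrency combined with high data rates quickly dominates a software load balancer's memory budget.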
A load balancer's performance related to these factors is generally announced for
the best case (eg: empty objects for session rate, large objects for data rate).
This is not because of a lack of honesty from the vendors, but because it is
not possible to tell exactly how a product will behave with every traffic
combination. So when those 3 limits are known, the customer should be aware
that he will generally stay below all of them. A good rule of thumb on software
load balancers is to consider an
average practical performance of half of maximal session and data rates for
average sized objects.
You might be interested in checking the 10-Gigabit/s page.
Being obsessed with reliability, I tried to do my best to ensure a total
continuity of service by design. It is more difficult to design something
reliable from the ground up in the short term, but in the long term it turns
out to be easier to maintain than broken code which tries to hide its own bugs
behind respawning processes and similar tricks.
In single-process programs, you have no right to fail : the smallest bug
will either crash your program, make it spin like mad, or freeze it. No such
bug has been found in the code or in production for the last 10 years.
HAProxy has been installed on Linux 2.4 systems serving millions of pages
every day,
and which have seen only one reboot in three years, for a complete OS upgrade.
Obviously, they were not directly exposed to the Internet, because they did not
receive any patches at all. The kernel was a heavily patched 2.4 with Robert
Love's jiffies64 patches to handle the timer wrap-around at 497 days (which
happened twice). On such systems, the software cannot fail without being
immediately noticed !
Right now, it's being used in several Fortune 500 companies around the world to
reliably serve millions of pages per day or relay huge amounts of money. Some
people even trust it so much that they use it as the default solution to solve
simple problems (and I often tell them that they do it the dirty way). Such
people sometimes still use versions 1.1 or 1.2, which see very limited
evolution and target mission-critical usage. HAProxy is really suited for such environments
because the indicators it returns provide a lot of valuable information about the application's
health, behaviour and defects, which are used to make it even more reliable.
Version 1.3 has now received far more testing than 1.1 and 1.2 combined, so
users are strongly encouraged to migrate to a stable 1.3 for mission-critical
usages.
As previously explained, most of the work is executed by the Operating System.
For this reason, a large part of the reliability involves the OS itself. Recent
versions of Linux 2.4 offer the highest level of stability; however, they
require a number of patches to achieve a high level of performance. Linux 2.6
includes the features needed to achieve this level of performance, but is not
yet as stable for such usages. The kernel needs at least one upgrade every
month to fix a bug or vulnerability. Some people prefer to run it on Solaris (or
do not have the choice). Solaris 8 and 9 are known to be really stable right now,
offering a level of performance comparable to Linux 2.4. Solaris 10 might show
performances closer to Linux 2.6, but with the same code stability problem. I
have too few reports from FreeBSD users, but it should be close to Linux 2.4 in
terms of performance and reliability. OpenBSD sometimes shows socket allocation
failures due to sockets staying in the FIN_WAIT2 state when a client suddenly
disappears. Also, I've noticed that hot reconfiguration does not work under
OpenBSD.
The reliability can significantly decrease when the system is pushed to its
limits. This is why finely tuning the sysctls is important. There is no
general rule, every system and every application will be specific. However, it is
important to ensure that the system will never run out of memory and
that it will never swap. A correctly tuned system must be able to run for
years at full load without slowing down or crashing.
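For illustration, a few Linux sysctl entries commonly adjusted for this purpose; the values below are arbitrary starting points, not recommendations :

```
# /etc/sysctl.conf -- illustrative values only, tune to your own workload
fs.file-max = 200000                  # file descriptors for the target concurrency
net.core.somaxconn = 10000            # deeper accept queue for connection bursts
net.ipv4.tcp_max_syn_backlog = 10000  # absorb SYN bursts without drops
vm.swappiness = 0                     # strongly discourage swapping the proxy out
```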
Security is an important concern when deploying a software load balancer. It is
possible to harden the OS, to limit the number of open ports and accessible
services, but the load balancer itself stays exposed. For this reason, I have been
very careful about programming style. The only vulnerability found so far
dates from early 2002 and lasted only one week; it was introduced when the
logs were reworked. It could be used to trigger bus errors and crash the
process, but it did not seem possible to execute code : the overflow was only
3 bytes long, too short to store a pointer (and a variable followed it).
In any case, much care is taken when writing code that manipulates headers.
Impossible state combinations are checked and rejected, and errors are handled
from the creation to the death of a session. A few people around the world have
reviewed the code and suggested cleanups for better clarity to ease auditing.
I routinely refuse patches that introduce suspicious processing or that do not
take enough care of abnormal conditions.
I generally suggest starting HAProxy as root because it
can then jail itself in a chroot and drop all of its privileges
before starting the instances. This is not possible if it is not started as
root because only root can execute chroot().
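The sequence described above can be sketched as follows; the jail path and IDs are placeholders, and HAProxy itself is written in C, so this is only an illustration of the idea :

```python
import os

def jail_and_drop(jail_dir, uid, gid):
    """Chroot into an (ideally empty, read-only) directory, then drop
    all privileges. Must be called as root, and the order matters :
    once setuid() has run, the process can no longer change its group
    or escape the jail."""
    os.chroot(jail_dir)       # only root may call chroot()
    os.chdir("/")             # make sure we really are inside the jail
    os.setgroups([])          # drop supplementary groups
    os.setgid(gid)            # group first...
    os.setuid(uid)            # ...then user, irreversibly
```

Calling this before handling any network traffic is what confines a potential exploit to an empty directory with no privileges.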
Logs provide a lot of information to help to maintain a satisfying security
level. They can only be sent over UDP because once chrooted, the
/dev/log UNIX socket is unreachable, and it must not be possible to
write to a file. The following information is particularly useful :
- the source IP and port of the requestor make it possible to find their
origin in firewall logs ;
- the session set-up date generally matches firewall logs, while the tear-down
date often matches proxy dates ;
- proper request encoding ensures the requestor cannot hide non-printable
characters, nor fool a terminal ;
- arbitrary request and response header and cookie captures help detect scan
attacks, proxies and infected hosts ;
- timers help differentiate hand-typed requests from browsers'.
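A minimal sketch of such a UDP syslog emission (the facility/severity values and port are conventional syslog defaults, not HAProxy's actual C implementation) :

```python
import socket

def send_syslog_udp(msg, host="127.0.0.1", port=514,
                    facility=16, severity=6):      # local0.info
    """Build and emit a minimal RFC 3164 datagram. UDP needs no /dev/log
    socket, so this keeps working from inside an empty chroot."""
    pri = facility * 8 + severity                  # <134> for local0.info
    datagram = ("<%d>%s" % (pri, msg)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, (host, port))
    return datagram

send_syslog_udp("haproxy[1234]: example log line")
```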
HAProxy also provides regex-based header control. Parts of the request, as
well as request and response headers can be denied, allowed, removed, rewritten, or
added. This is commonly used to block dangerous requests or encodings (eg: the
Apache Chunk exploit),
and to prevent accidental information leak from the server to the client.
Other features such as Cache-control checking ensure that no sensitive
information gets accidentally cached by an upstream proxy as a consequence of
a bug in the application server, for example.
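As an illustration only, here is a fragment in the 1.3/1.4-era configuration syntax; the directive arguments are examples, not recommendations :

```
frontend www
    bind :80
    # deny requests abusing chunked encoding (Apache chunk exploit era)
    reqideny    ^Transfer-Encoding:\ chunked
    # strip the server software banner before it reaches the client
    rspidel     ^Server:
    # keep upstream proxies from caching anything sensitive
    rspadd      Cache-Control:\ no-cache
    default_backend apps
```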
The source code is covered by GPL v2. Source code and pre-compiled binaries for
Linux/x86 and Solaris/Sparc can be downloaded right here :
- Development version (1.5) :
- Latest version (1.4) :
- Latest version (1.3) :
- Previous branch (1.2) :
- X-Forwarded-For support for Stunnel
Stunnel currently makes a perfect
complement to provide SSL client-side support to HAProxy. However, since
Stunnel is a proxy and has no knowledge of HTTP, the client's IP address was
lost, which is somewhat annoying. A few patches were available on the Net to
add the X-Forwarded-For header, but they introduced an undesirable buffer
overflow. So I gathered my courage and wrote a reliable and secure patch to
implement this useful feature. I sent it to Stunnel's authors but got no
feedback. So the patch is provided here for various versions from Stunnel-4.14
and above in the hope it will be useful to some people. At least it seems to
be the case, considering the number of people who send updates :-) Note that
this patch does not work with keep-alive, see send-proxy below for that.
Get the patches from Exceliance's public patch repository
- Send-proxy support for Stunnel (not needed since 4.45)
This patch contributed by Exceliance adds to stunnel the ability
to inform haproxy about the incoming connection (protocol, source, destination, ...). It's more flexible
than the X-Forwarded-For patch above, but requires haproxy 1.5-dev3 minimum with support for the
accept-proxy bind option. This feature has been merged into stunnel 4.45, so the patch is
not needed anymore starting from this version. Update: since the latest development
version of haproxy (1.5-dev12) supports native SSL, there is no need for stunnel in
front of haproxy at all anymore.
Get the patches from Exceliance's public patch repository
- Unix socket support for Stunnel (not needed since 4.46)
This patch contributed by Exceliance adds to stunnel the ability
to connect to haproxy over a UNIX stream socket instead of using TCP. Sometimes this can be more convenient
and/or more secure. It requires haproxy 1.5-dev3 minimum. Update: since the latest
development version of haproxy (1.5-dev12) supports native SSL, there is no need for
stunnel in front of haproxy at all anymore.
Get the patches from Exceliance's public patch repository
- Other Stunnel patches
There are other patches contributed to Stunnel by Exceliance, such as multi-process
SSL session synchronization, transparent binding and performance improvements. Please check them below.
Get the patches from Exceliance's public patch repository
- Various Patches :
- Logo :
If you are a happy user of haproxy and want to put a reference to it on your site,
simply copy the following HTML code where you feel appropriate on your site, it will
present the logo above to your visitors :
- Browsable directory
There are three types of documentation now : the Reference Manual which explains
how to configure HAProxy but which is outdated, the Architecture Guide which will
guide you through various typical setups, and the new Configuration Manual which
replaces the Reference Manual with a more explicit explanation of the configuration language.
- Reference Manual for version 1.5 (development) :
- Reference Manual for version 1.4 (stable) :
- Reference Manual for version 1.3 (stable) :
- Reference Manual for version 1.2 (old stable) :
- Reference Manual for version 1.1 (unmaintained) :
architecture.txt : Architecture Guide
Article on Load Balancing (HTML version) : worth reading for people who don't know what type of load balancer they need
In addition to Cyril's HTML converter above, an automated format converter is being developed by Pavel Lang. At the time of writing these lines, it is able to produce a PDF from the documentation, and some heavy work is ongoing to support other output formats. Please consult the
project's page for more information.
Here's an example
of what it is able to do on version 1.5 configuration manual.
If you think you don't have the time and skills to set up and maintain a free load
balancer, or if you're seeking commercial support to satisfy your customers or
your boss, you should contact Exceliance.
Another solution would be to use Exceliance's ALOHA appliances or the HAPEE distribution (see below).
The following products or projects use HAProxy :
- redWall Firewall
From the site : "redWall is a bootable CD-ROM Firewall. Its goal is to provide
a feature rich firewall solution, with the main goal, to provide a webinterface
for all the logfiles generated!"
- Exceliance's
ALOHA Load Balancer appliance
Exceliance is a French company that sells a complete haproxy-based solution embedding an optimized
and hardened version of Formilux packaged for ease of
use via a full-featured Web interface, reduced maintenance, and enhanced availability
through the use of VRRP for box fail-over, bonding for link fail-over, configuration
synchronization, SSL, transparent mode, etc...
(check differences between HAProxy and Aloha).
An evaluation version running in VMWare Player is available on the site. Since this is where I
work, a lot of features are created there :-)
- Exceliance's
HAPEE distribution
HAPEE is a 100%-software alternative to the ALOHA and to standard HAProxy, which runs on standard
distributions. It offers pre-patched add-ons (eg: stunnel, ...), system settings, commented
config files and command line completion to ease the setup of a complete HAProxy-based load
balancer, including VRRP and logging. It also comes with support contracts and assistance
tickets.
- Loadbalancer.org
This company based in the UK has recently added HAProxy to their load-balancing solution
in order to provide the basic layer 7 support that some customers were asking for. They're
also among the rare commercial product makers who admit to using HAProxy and who have donated
to the project.
- Snapt HAPROXY
Snapt develops graphical user interfaces for a few products, among which HAProxy.
They managed to build a dynamic configuration interface which allows the user to play
with a very wide range of settings, including ACLs, and to propose contextual choices
when additional options are required (eg: backend lists for some ACLs). They have an
online demo which is worth testing.
Some happy users have contributed code which may or may not be included. Others
spent a long time analysing the code, and there are some who maintain ports up to
date. The most difficult internal changes have been contributed in the form of
paid time by some big customers who can afford to pay a developer for several
months working on an opensource project. Unfortunately some of them do not want
to be listed, which is the case for the largest of them.
This table enumerates all known significant contributions,
as well as proposed fundings and features yet to be developed but waiting for spare
time.
Some contributions were developed but not merged, most often for lack of any sign of
interest from users, or simply because they overlap with pending changes in a way
that could make future compatibility harder to maintain.
- Geolocation support
Quite some time ago now, Cyril Bonté contacted me about a very interesting
feature he had developed, initially for 1.4, and which now supports both 1.4
and 1.5. This feature is geolocation, which many users have been asking for
for a long time, and this one does not require splitting the IP files by
country codes. In fact it's extremely easy and convenient to configure.
The feature was not merged yet because it does for a specific purpose (GeoIP)
what we want to have for a more general use (map converters, session variables, and
use of variables in the redirect URLs), which will allow the same features to
be implemented with more flexibility (eg: extract the IP from a header, or pass
the country code and/or AS number to a backend server, etc...). Cyril was very
receptive to these arguments and agreed to maintain his patchset out of tree
while waiting for the features to be implemented (Update: 1.5-dev20 with
maps now makes this possible). Cyril's code is well maintained and used in
production, so there is no risk in using it, except that the configuration
statements will change a bit once the generic features are integrated later.
The code and documentation are available here : https://github.com/cbonte/haproxy-patches/wiki/Geolocation
- sFlow support
Neil Mckee posted a patch to the list in early 2013, and unfortunately this patch
did not receive any sign of interest or feedback, which is sad considering the
amount of work that was done. I personally am clueless about sFlow and expressed
my skepticism to Neil about the benefits of sampling some HTTP traffic when you
can get much more detailed information for free with existing logs.
Neil kindly responded with the following elements :
I agree that the logging you already have in haproxy is more flexible and detailed,
and I acknowledge that the benefit of exporting sFlow-HTTP records is not immediately
obvious.
The value that sFlow brings is that the measurements are standard, and are designed to
integrate seamlessly with sFlow feeds from switches, routers, servers and applications to
provide a comprehensive end to end picture of the performance of large scale multi-tier
systems. So the purpose is not so much to troubleshoot haproxy in isolation, but to
analyze the performance of the whole system that haproxy is part of.
Perhaps the best illustration of this is the 1-in-N sampling feature.
If you configure sampling.http to be, say, 1-in-400 then you might
only see a handful of sFlow records per second from an haproxy
instance, but that is enough to tell you a great deal about what is
going on -- in real time. And the data will not bury you even if you
have a bank of load-balancers, hundreds of web-servers, a huge
memcache-cluster and a fast network interconnect all contributing
their own sFlow feeds to the same analyzer.
Even after that explanation, no discussion emerged on the subject on the list, so
I guess there is little interest among users for now. I suspect that sFlow is
probably more deployed among network equipment than application-layer equipment,
which could explain this situation. The code is large (though not huge) and I am not
convinced about the benefits of merging it and maintaining it if nobody shows even
a little bit of interest. Thus for now I prefer to leave it out of tree. Neil has
posted it on GitHub here :
https://github.com/sflow/haproxy.
Please, if you do use this patch, report your feedback to the mailing list, and invest
some time helping with the code review and testing.
Some older code contributions which possibly do not appear in the table above are still listed here.
- Application Cookies
Aleksandar Lazic and Klaus Wagner implemented this feature which
was merged in 1.2. It allows the proxy to learn cookies sent by the server
to the client, and to find them back in the URL to direct the client to the
right server. Learned cookies are automatically purged after some inactivity time.
- Least Connections load balancing algorithm
This patch for haproxy-1.2.14 was submitted by Oleksandr Krailo. It implements
a basic least-connections algorithm. I've not merged this version into 1.3 because
of scalability concerns, but I'm leaving it here for people who are tempted to
include it into version 1.2, and the patch is really clean.
- Soft Server-Stop
Aleksandar Lazic sent me this patch against 1.1.28 which in fact does two things.
The first interesting part allows one to write a file enumerating the servers
which have to be stopped, and then to send a signal to the running proxy telling
it to re-read the file and stop using these servers. This will not be merged into
mainline because it has indirect implications on security since the running
process will have to access a file on the file-system, while current version can
run in a chrooted, empty, read-only directory. What is really needed is a way to
send commands to the running process. However, I understand that some people
might need this feature, so it is provided here. The second part of the patch has
been merged. It allows both an active and a backup server to share the same
cookie. This may sound obvious, but it was not possible earlier.
Usage: Aleks says that you just have to write the server names that you
want to stop in the file, then kill -USR2 the running process. I have
not tested it though.
- Server Weight
Sébastien Brize sent me this patch against 1.1.27 which adds the
'weight' option to a server to provide smoother balancing between fast and slow
servers. It is available here because there may be other people looking for this
feature in version 1.1.
I did not include this change because it has a side effect : with high or
unequal weights, some servers might receive many consecutive requests. A
different concept providing smooth and fair balancing was implemented in
1.2.12, which also supports weighted hash load balancing.
Usage: specify "weight X" on a server line.
Note: configurations written with this patch applied will normally still
work with future 1.2 versions.
- IPv6 support for 1.1.27
I implemented IPv6 support on client side for 1.1.27, and merged it into
haproxy-1.2. The patch is still provided here for people who want to
experiment with IPv6 on HAProxy-1.1.
- Other patches
Please browse the directory for other useful
contributions.
If you don't need all of HAProxy's features and are looking for a simpler solution,
you may find what you need here :
-
Linux Virtual Servers (LVS)
Very fast layer 3/4 load balancing merged in Linux 2.4 and 2.6 kernels. Should
be coupled with Keepalived to monitor
servers. This generally is the solution embedded by default in most
IP-based load balancers.
-
Nginx ("engine X")
Nginx is an excellent piece of software. Initially it's a very fast and reliable
web server, but it has grown into a full-featured proxy which can also offer
load-balancing capabilities. Nginx's load balancing features are less advanced
than haproxy's but it can do a lot more things (eg: compression, caching), which
explains why they are very commonly found together. I strongly recommend it to
whoever needs a fast, reliable and flexible web server !
-
Pound
Pound can be seen as a complement to HAProxy. It supports SSL, and can direct
traffic according to the requested URL. Its code is very small and will stay
small for easy auditing. Its configuration file is very small too. However, it
does not support persistence, and the performance associated with its
multi-threaded model limits its usage to medium-sized sites only.
-
Pen
Pen is a very simple load balancer for TCP protocols. It supports source
IP-based persistence for up to 2048 clients, as well as IP-based ACLs. It uses
select() and supports higher loads than Pound, but will not scale very well to
thousands of simultaneous connections.
Feel free to contact me at for any questions or comments :
Some people regularly ask if it is possible to send donations, so I have set up a Paypal account for this.
Click here if you want to donate.
An IRC channel for haproxy has been opened on FreeNode (but don't look for me there, I'm not on it) :