
Debugging web application performance with HAProxy

Created by Christian Zagrodnick | Blog

This post is the start of a little series about using the HAProxy logs to find out where and why your web application “hangs”. The HAProxy log files are very useful in this area. Before we get started: HAProxy itself is almost never the problem. It is a pretty rock solid piece of software.

Are requests actually getting to HAProxy?

HAProxy logs every request. Whether it passes, errors, or times out, you name it: HAProxy logs the request, provided the request gets to it at all. Enter the HAProxy HTTP log format. A log line (taken from the manual) looks like this:

Feb  6 12:14:14 localhost haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {} {} "GET /index.html HTTP/1.1"
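For context, this format is what HAProxy's `option httplog` produces. A minimal configuration sketch that would yield such lines looks roughly like this (the syslog target and server address are placeholders, not from this post; the frontend and backend names match the example line):

```
global
    log 127.0.0.1:514 local0    # syslog target, placeholder address

defaults
    log global
    mode http
    option httplog              # enables the HTTP log format shown above

frontend http-in
    bind :80
    default_backend static

backend static
    server srv1 192.168.0.42:8080   # placeholder address
```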

That log line packs a lot of information. Let's go through some of the fields:

  • 10.0.1.2:33317 is the client IP and port. Depending on your configuration, that is either your front-end proxy (like nginx) or the real client IP.
  • [06/Feb/2009:12:14:14.655] is the TCP accept date. That is the date when the TCP connection was opened. The following timers are based on this date.
  • http-in: The frontend the request was processed through.
  • static/srv1: That is the backend (static) and the actual backend server (srv1) the request was processed by.
  • 10/0/30/69/109: Now it's getting interesting. Those are different timers, telling you how long certain stages of request processing have taken. They are, in order: Tq, Tw, Tc, Tr, Tt. The T stands for "time", and all values are given in milliseconds; Tt is the total session duration. I will come back to those.
  • 200: HTTP status code
  • 2750: Bytes sent to the client, including headers.
  • 1/1/1/1/0: At the time of logging, what was the overall connection state? In order: concurrent connections on the whole process, concurrent connections on the frontend, concurrent active and queued connections on the used backend, concurrent active connections on the used server, and the number of retries.
  • 0/0: When this request entered the server queue and the backend queue, how many requests were ahead of it in each?
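To pull the timers out of a log file for a quick look, an awk one-liner along these lines works (a sketch, not part of the original post; with the default HTTP log format, client address included, the timers are the tenth whitespace-separated field, so adjust the index if your format differs):

```shell
# Print the five timers of each request with labels. With the default
# HTTP log format (client address included) the timers are field 10;
# adjust the index if your log format differs. The here-document stands
# in for your real haproxy log file.
awk '{
    split($10, t, "/")
    printf "Tq=%s Tw=%s Tc=%s Tr=%s Tt=%s\n", t[1], t[2], t[3], t[4], t[5]
}' <<'EOF'
Feb  6 12:14:14 localhost haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {} {} "GET /index.html HTTP/1.1"
EOF
```

For the manual's example line this prints `Tq=10 Tw=0 Tc=30 Tr=69 Tt=109`.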

I have left out a few fields that are not important for us at the moment. Back to our topic: Are requests actually reaching HAProxy?

When you compare the logs of your frontend proxy or SSL terminator with the HAProxy log, every request should show up on both sides.

If you don't see requests flowing in, or you see things like the following, you should have a look at the frontend proxy configuration and the network in between.

Nov 26 07:08:16 localhost haproxy[20695]: [26/Nov/2015:07:08:16.154] http http/<NOSRV> -1/-1/-1/-1/410 400 187 - - PR-- 0/0/0/0/0 0/0 "<BADREQ>"

The timers (-1/-1/-1/-1/410) give some interesting hints here: a -1 means the request was aborted before it reached that stage. In this case the first timer, Tq, is -1: the connection was aborted before the complete HTTP headers were received. You can also see that this request was not handled by any backend server (<NOSRV>).
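To get a feel for how often this happens, counting the <NOSRV> lines is enough (a shell sketch, not from the original post; the here-document stands in for your real log file):

```shell
# Count requests that never reached a backend server: they carry
# <NOSRV> in the backend/server field. Feed in your real haproxy log
# instead of the here-document.
grep -c '<NOSRV>' <<'EOF'
Nov 26 07:08:16 localhost haproxy[20695]: [26/Nov/2015:07:08:16.154] http http/<NOSRV> -1/-1/-1/-1/410 400 187 - - PR-- 0/0/0/0/0 0/0 "<BADREQ>"
Nov 26 07:21:52 localhost haproxy[20695]: [26/Nov/2015:07:21:45.446] http app/app13 7476/0/1/4/7481 302 638 - - ---- 0/0/0/0/0 0/0 "GET / HTTP/1.1"
EOF
```

For these two sample lines it prints `1`.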

Nov 26 07:21:52 localhost haproxy[20695]: [26/Nov/2015:07:21:45.446] http app/app13 7476/0/1/4/7481 302 638 - - ---- 0/0/0/0/0 0/0 "GET / HTTP/1.1"

In this case Tq is huge: 7476 ms, over 7 seconds. This points to a broken network between the frontend proxy and HAProxy. The HTTP request headers are usually small and fit into a single TCP packet; if receiving them takes 7 seconds, nothing else will perform well either.
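Requests like this can be fished out of the log by filtering on Tq (a sketch, not from the original post; in these sample lines there is no client address field, so the timers are field 9, and with the default log format they are field 10):

```shell
# Print requests whose Tq exceeds one second. In these sample lines the
# timers are field 9 (no client address field); with the default HTTP
# log format they are field 10. Aborted stages (-1) fall through the
# > 1000 comparison and are skipped.
awk '{
    split($9, t, "/")
    if (t[1] + 0 > 1000)
        print "Tq=" t[1] "ms: " $0
}' <<'EOF'
Nov 26 07:08:16 localhost haproxy[20695]: [26/Nov/2015:07:08:16.154] http http/<NOSRV> -1/-1/-1/-1/410 400 187 - - PR-- 0/0/0/0/0 0/0 "<BADREQ>"
Nov 26 07:21:52 localhost haproxy[20695]: [26/Nov/2015:07:21:45.446] http app/app13 7476/0/1/4/7481 302 638 - - ---- 0/0/0/0/0 0/0 "GET / HTTP/1.1"
EOF
```

Here only the second line is printed, prefixed with `Tq=7476ms:`.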

Slow requests, overloaded applications

In the next part, we will look at overloaded applications and slow requests.

If you would like to stay up to date on what's happening at Flying Circus, subscribe to our newsletter.