The correct way to benchmark your web server

Keeping your web server running smoothly is crucial for a positive user experience. But how do you know it can handle the traffic you expect? Benchmarking is the answer! It allows you to simulate real-world load and measure your server’s performance. This post will guide you through using wrk, a powerful tool for benchmarking your web server.

wrk is a fast, user-friendly HTTP benchmarking tool. It’s lightweight and easy to install. Once you have wrk set up, you can start crafting your benchmark test. wrk offers various options to configure your test precisely. You can define the number of threads to simulate concurrent users, the total number of connections (which wrk divides evenly among the threads), and the test duration. wrk also lets you specify headers to send with each request, mimicking real user behavior.

Running a wrk test is straightforward: you provide the URL you want to benchmark along with the desired configuration options. wrk then bombards your server with simulated traffic, measuring critical metrics like request latency, throughput, and transfer rate. After the test concludes, wrk prints a detailed report with data points such as average latency, request rate, and socket errors. Analyzing this data helps you identify bottlenecks and areas for improvement in your web server’s performance.

By incorporating wrk into your routine, you gain valuable insights into your web server’s capabilities. This allows you to optimize your server’s configuration, ensuring it can handle peak traffic and deliver a seamless user experience.

Here’s an excerpt from the wrk project’s README on GitHub for quick reference:


Basic Usage

wrk -t12 -c400 -d30s http://127.0.0.1:8080/index.html

This runs a benchmark for 30 seconds, using 12 threads, and keeping 400 HTTP connections open.

Output:

Running 30s test @ http://127.0.0.1:8080/index.html
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   635.91us    0.89ms  12.92ms   93.69%
    Req/Sec    56.20k     8.07k   62.00k    86.54%
  22464657 requests in 30.00s, 17.76GB read
Requests/sec: 748868.53
Transfer/sec:    606.33MB

Command Line Options

    -c, --connections: total number of HTTP connections to keep open with
                       each thread handling N = connections/threads

    -d, --duration:    duration of the test, e.g. 2s, 2m, 2h

    -t, --threads:     total number of threads to use

    -s, --script:      LuaJIT script, see SCRIPTING

    -H, --header:      HTTP header to add to request, e.g. "User-Agent: wrk"

        --latency:     print detailed latency statistics

        --timeout:     record a timeout if a response is not received within
                       this amount of time.

Benchmarking Tips

The machine running wrk must have a sufficient number of ephemeral ports available and closed sockets should be recycled quickly. To handle the initial connection burst the server’s listen backlog should be greater than the number of concurrent connections being tested.
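On Linux, the kernel settings behind that advice can be adjusted with sysctl. The values below are illustrative starting points of my own, not recommendations from the wrk project:

```shell
# Widen the ephemeral port range available for outgoing connections
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Allow sockets stuck in TIME_WAIT to be reused for new outgoing connections
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
# Raise the cap on the listen backlog (the server must also request a
# larger backlog in its listen() call to benefit)
sudo sysctl -w net.core.somaxconn=4096
```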

A user script that only changes the HTTP method, path, adds headers or a body, will have no performance impact. Per-request actions, particularly building a new HTTP request, and use of response() will necessarily reduce the amount of load that can be generated.
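To illustrate the “cheap” case: a wrk Lua script whose body only assigns static request properties runs those assignments once per thread at startup, not per request. The endpoint and JSON body here are made up for the example:

```lua
-- Loaded once per wrk thread; these static assignments add essentially
-- no per-request overhead, unlike a custom request() or response() hook.
wrk.method = "POST"
wrk.headers["Content-Type"] = "application/json"
wrk.body   = '{"title": "Blue Train", "artist": "John Coltrane"}'
```

You would pass this to wrk with something like `-s post.lua` (the filename is arbitrary).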

Showtime 🚀

Write a program that serves a list of 100 albums over a REST-like JSON interface, then measure its performance with wrk.

Here’s the project in GitHub with full source code and implementations in more than 8 languages: web-service-albums-benchmark

And the results:

[Chart: benchmark results for each language implementation]

It’s clear that Rust and Go are the speed demons when it comes to this particular task. However, threading and concurrency can be a game-changer in terms of performance. The key takeaway is that your server’s capabilities play a significant role in determining the optimal approach. For instance, on a 4-core CPU, no more than 4 threads can truly run in parallel; piling on more mostly adds scheduling overhead, like trying to fit too many eggs in one basket. So, before making any changes to your tech stack, take some time to consider these factors and plan accordingly.

George Litos
Senior Software Engineer