Stephen Reese

Everyone likes a responsive website, and since I host a few, I look for ways to improve their speed. I was interested in benchmarking three configurations: HTTP, HTTPS, and HTTP behind a WAF. I have often used browsers and third-party online services such as Pingdom Website Speed Test and PageSpeed Insights to benchmark page and site performance, but began to look at other solutions.

The first tool I leveraged was Apache Bench, commonly known as ab. It allows me to run a quick test to determine the maximum requests per second (req/s). While fun, this is not a practical metric on its own, as there are a number of factors that must be considered when benchmarking a web service and understanding where weaknesses may present themselves.

HTTPS requests without keep-alives:

./httpd-2.4.12/support/ab -n 20000 -c 100 -f TLS1.2 -H "Accept-Encoding: gzip,deflate" https://www.rsreese.com/web-stack/

Server Software:        nginx
Server Hostname:        www.rsreese.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /web-stack/
Document Length:        2608 bytes

Concurrency Level:      100
Time taken for tests:   24.487 seconds
Complete requests:      20000
Failed requests:        0
Total transferred:      59560000 bytes
HTML transferred:       52160000 bytes
Requests per second:    816.77 [#/sec] (mean)
Time per request:       122.434 [ms] (mean)
Time per request:       1.224 [ms] (mean, across all concurrent requests)
Transfer rate:          2375.33 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       14   92  33.3     92     276
Processing:     5   30  20.9     26     149
Waiting:        1   16  14.2     11     146
Total:         30  122  33.4    121     300

Percentage of the requests served within a certain time (ms)
  50%    121
  66%    133
  75%    138
  80%    144
  90%    162
  95%    179
  98%    200
  99%    229
 100%    300 (longest request)

HTTPS requests with keep-alives; connection reuse provides a significant speedup:

./httpd-2.4.10/support/ab -n 20000 -k -c 200 -f TLS1.2 -H "Accept-Encoding: gzip,deflate" https://www.rsreese.com/web-stack/

Server Software:        nginx
Server Hostname:        www.rsreese.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      200
Time taken for tests:   3.697 seconds
Complete requests:      20000
Failed requests:        0
Keep-Alive requests:    19896
Total transferred:      59339480 bytes
HTML transferred:       51840000 bytes
Requests per second:    5409.08 [#/sec] (mean)
Time per request:       36.975 [ms] (mean)
Time per request:       0.185 [ms] (mean, across all concurrent requests)
Transfer rate:          15672.45 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    5  49.1      0     642
Processing:    21   31  26.9     27     504
Waiting:       21   29  14.6     26     254
Total:         21   37  64.3     27     687

Percentage of the requests served within a certain time (ms)
  50%     27
  66%     30
  75%     31
  80%     33
  90%     37
  95%     46
  98%    156
  99%    446
 100%    687 (longest request)

HTTP request benchmark, although the site no longer serves HTTP to external requests, with the exception of a 301 redirect to the respective HTTPS resource. The Apache Bench flags were adjusted slightly for increased concurrency in order to squeeze out a few more requests per second:
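The exact invocation was not captured in the output below; a command along these lines, with the hostname and counts taken from the report and the path to ab assumed from the earlier runs, is a reasonable reconstruction:

```shell
# Assumed invocation: 100000 requests at concurrency 400 against the
# plain-HTTP origin host (flags inferred from the report below).
./httpd-2.4.12/support/ab -n 100000 -c 400 -H "Accept-Encoding: gzip,deflate" http://origin.rsreese.com/web-stack/
```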

Server Software:
Server Hostname:        origin.rsreese.com
Server Port:            80

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      400
Time taken for tests:   17.345 seconds
Complete requests:      100000
Failed requests:        12
   (Connect: 0, Receive: 0, Length: 12, Exceptions: 0)
Total transferred:      284465860 bytes
HTML transferred:       259168896 bytes
Requests per second:    5765.28 [#/sec] (mean)
Time per request:       69.381 [ms] (mean)
Time per request:       0.173 [ms] (mean, across all concurrent requests)
Transfer rate:          16015.88 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   46 221.8      1    3015
Processing:     0   14  99.4      1    8059
Waiting:        0   12  79.2      1    8059
Total:          0   60 249.9      3    8060

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      5
  75%      8
  80%     12
  90%     35
  95%    213
  98%   1007
  99%   1012
 100%   8060 (longest request)

HTTP requests with keep-alives enabled:
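The command for this run was not shown either; a sketch, adding -k to the assumed invocation above with the request count from the report:

```shell
# Assumed invocation: same origin target, keep-alives (-k) enabled.
./httpd-2.4.12/support/ab -n 20000 -k -c 400 -H "Accept-Encoding: gzip,deflate" http://origin.rsreese.com/web-stack/
```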

Server Software:
Server Hostname:        origin.rsreese.com
Server Port:            80

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      400
Time taken for tests:   1.497 seconds
Complete requests:      20000
Failed requests:        0
Keep-Alive requests:    20000
Total transferred:      57020000 bytes
HTML transferred:       51840000 bytes
Requests per second:    13363.67 [#/sec] (mean)
Time per request:       29.932 [ms] (mean)
Time per request:       0.075 [ms] (mean, across all concurrent requests)
Transfer rate:          37206.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   4.0      0      34
Processing:    21   28   9.0     25     365
Waiting:       11   28   8.1     24     112
Total:         21   28   9.9     25     365

Percentage of the requests served within a certain time (ms)
  50%     25
  66%     27
  75%     31
  80%     33
  90%     41
  95%     53
  98%     57
  99%     59
 100%    365 (longest request)

HTTP WAF requests. A host running Apache and ModSecurity proxied requests to the origin server. While not practical, all of the base rules from Core Rule Set 2.2.9 were enabled:
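The ab invocation for the WAF run was also omitted; based on the report below (20000 requests, concurrency 200), something like this sketch would match:

```shell
# Assumed invocation: same test pointed at the ModSecurity/Apache proxy.
./httpd-2.4.12/support/ab -n 20000 -c 200 -H "Accept-Encoding: gzip,deflate" http://waf.rsreese.com/web-stack/
```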

Server Software:        Apache/2.4.10
Server Hostname:        waf.rsreese.com
Server Port:            80

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      200
Time taken for tests:   10.889 seconds
Complete requests:      20000
Failed requests:        0
Total transferred:      57520000 bytes
HTML transferred:       51840000 bytes
Requests per second:    1836.76 [#/sec] (mean)
Time per request:       108.888 [ms] (mean)
Time per request:       0.544 [ms] (mean, across all concurrent requests)
Transfer rate:          5158.70 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       21   22   1.6     21      44
Processing:    22   87  67.6     74     496
Waiting:       22   85  66.8     69     496
Total:         44  109  67.6     96     518

Percentage of the requests served within a certain time (ms)
  50%     96
  66%    121
  75%    138
  80%    151
  90%    206
  95%    249
  98%    296
  99%    326
 100%    518 (longest request)

The previous requests were from a Digital Ocean host to my Atlanta Linode. Want to see what a difference the network makes? Here are HTTPS requests from another Linode on the same network with no hops between the nodes:

Server Software:        nginx
Server Hostname:        www.rsreese.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      200
Time taken for tests:   2.644 seconds
Complete requests:      20000
Failed requests:        0
Keep-Alive requests:    19876
Total transferred:      59339380 bytes
HTML transferred:       51840000 bytes
Requests per second:    7564.85 [#/sec] (mean)
Time per request:       26.438 [ms] (mean)
Time per request:       0.132 [ms] (mean, across all concurrent requests)
Transfer rate:          21918.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    4  43.0      0     503
Processing:     1   22  26.6      5     165
Waiting:        1   22  26.7      5     165
Total:          1   26  56.1      5     586

Percentage of the requests served within a certain time (ms)
  50%      5
  66%     10
  75%     50
  80%     58
  90%     63
  95%     68
  98%     77
  99%     94
 100%    586 (longest request)

And another set for HTTP, 17K req/sec:

Server Software:
Server Hostname:        origin.rsreese.com
Server Port:            80

Document Path:          /web-stack/
Document Length:        2592 bytes

Concurrency Level:      400
Time taken for tests:   1.179 seconds
Complete requests:      20000
Failed requests:        0
Keep-Alive requests:    20000
Total transferred:      57000000 bytes
HTML transferred:       51840000 bytes
Requests per second:    16970.71 [#/sec] (mean)
Time per request:       23.570 [ms] (mean)
Time per request:       0.059 [ms] (mean, across all concurrent requests)
Transfer rate:          47232.94 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   1.8      0      17
Processing:     0   19  55.6      8     426
Waiting:        0   19  55.6      8     426
Total:          0   19  55.6      8     426

Percentage of the requests served within a certain time (ms)
  50%      8
  66%      9
  75%     10
  80%     11
  90%     14
  95%    203
  98%    213
  99%    224
 100%    426 (longest request)

While Apache Bench provides a quick-and-dirty analysis of some of our page capabilities, tsung is a benchmarking tool that can provide additional performance insights through its advanced configuration options. Below are configurations for HTTP and HTTPS respectively; the same HTTP configuration is used for the respective proxy and WAF benchmarks, with only the hostname changed. The configuration specifies that tsung runs locally, the target host, the interval for this phase (yes, you can have more than one), the user agents, of which we have two with a ratio defined, and finally the session, which in this case causes tsung to send as many requests as it can. Again, this is not realistic, just fun.

<?xml version="1.0"?>
<tsung loglevel="notice" version="1.0">
  <clients>
    <client host="localhost" use_controller_vm="true" maxusers="10000"/>
  </clients>
  <servers>
    <server host="origin.rsreese.com" port="80" type="tcp"/>
  </servers>
  <load>
    <arrivalphase phase="1" duration="1" unit="minute">
      <users maxnumber="10000" interarrival="0.05" unit="second"/>
    </arrivalphase>
  </load>
  <options>
    <option type="ts_http" name="user_agent">
      <user_agent probability="80">Mozilla/5.0 (Windows NT 6.1; rv:34.0) Gecko/20100101 Firefox/34.0</user_agent>
      <user_agent probability="20">Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36</user_agent>
    </option>
  </options>
  <sessions>
    <session name="web-stack" probability="100" type="ts_http">
      <for from="1" to="10000" var="i">
        <request><http url="/web-stack/" version="1.1" method="GET"/></request>
      </for>
    </session>
  </sessions>
</tsung>

Here we have the HTTPS configuration, identical except for the server element, which points at port 443 with type="ssl".

<?xml version="1.0"?>
<tsung loglevel="notice" version="1.0">
  <clients>
    <client host="localhost" use_controller_vm="true" maxusers="10000"/>
  </clients>
  <servers>
    <server host="www.rsreese.com" port="443" type="ssl"/>
  </servers>
  <load>
    <arrivalphase phase="1" duration="1" unit="minute">
      <users maxnumber="10000" interarrival="0.05" unit="second"/>
    </arrivalphase>
  </load>
  <options>
    <option type="ts_http" name="user_agent">
      <user_agent probability="80">Mozilla/5.0 (Windows NT 6.1; rv:34.0) Gecko/20100101 Firefox/34.0</user_agent>
      <user_agent probability="20">Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36</user_agent>
    </option>
  </options>
  <sessions>
    <session name="web-stack" probability="100" type="ts_http">
      <for from="1" to="10000" var="i">
        <request><http url="/web-stack/" version="1.1" method="GET"/></request>
      </for>
    </session>
  </sessions>
</tsung>

Run tsung and generate the reports. Optionally, multiple reports can be combined with tsplot. You may need sudo depending on your system's permissions.

$ tsung -f origin.xml start
$ cd results-directory
$ /usr/lib/tsung/bin/tsung_stats.pl
$ tsplot "HTTP" 20150418-1658/tsung.log "HTTPS" 20150418-1712/tsung.log -d combine2/

tsung provides useful reports and graphics. For the sake of brevity, I will not include the full report, just a few charts.

Request Count

Request Mean

Received Size

Sent Size

With this baseline, you can tailor the tsung configuration to include phases of increasing user load, along with multiple pages and actions. See the tsung documentation for details, and leave a comment below if you have any questions about this post.
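As a sketch of what ramping load could look like, the single arrivalphase used above might be replaced with several phases of increasing arrival rate; the durations and rates here are illustrative, not from the original runs:

```xml
<load>
  <!-- Illustrative ramp: each phase doubles the user arrival rate. -->
  <arrivalphase phase="1" duration="1" unit="minute">
    <users arrivalrate="10" unit="second"/>
  </arrivalphase>
  <arrivalphase phase="2" duration="1" unit="minute">
    <users arrivalrate="20" unit="second"/>
  </arrivalphase>
  <arrivalphase phase="3" duration="1" unit="minute">
    <users arrivalrate="40" unit="second"/>
  </arrivalphase>
</load>
```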

