Hosting Challenger 3

Here is a third comparison, in which we made an identical copy of the customer’s website content and database and then loaded it onto our hosting platform for testing.

So as not to muddy up the client’s analytics history, we did need to remove some plugins for HubSpot and Google Analytics.  Due to the asynchronous nature of these plugins, I don’t believe they were adding more than a couple of milliseconds of latency, but it’s worth mentioning for full transparency.

“Challenger 3” is hosted with another very large commodity hosting company that reels clients in by the truckload with offers of huge discounts for long-term commitments.  We don’t particularly like this business model, since customers usually don’t realize how bad they have it until after they’ve signed on the dotted line and committed to (multiple) years of poor service – but more on that later.  For now, on with the test!

The tests we ran were the same as for Challenger 1 and Challenger 2: first comparing network uptime and response, second comparing page requests, and third running the site under load.

Round 1 – Network Uptime

To save a bit of time I won’t rehash too much; you can take a look at Challenger 1 or Challenger 2 for details. I’ll just provide the results and some comments.

Challenger 3 – Network Response Time

The remote side’s network response was consistent.  A bit slow at 3 seconds, but the lack of outages is promising.

Mindpack Studios – Network Response Time

Note that the scales of the two images above are different. Our speed was fast, though it wasn’t as consistent as we’d like.  Late in the test I realized we had this client on a testing server rather than a production server, so that could have been the cause of the 300ms variation (that is one answer; gremlins are another).  Yet even with that in consideration, we’re pushing four times faster than the competition.
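
If you want to spot-check this kind of network response time yourself (outside of a monitoring service), a plain round-trip measurement from a Linux or macOS workstation is a rough stand-in; the hostname below is a placeholder, not the client’s actual domain:

ping -c 10 www.example.com

Keep in mind this measures raw network round-trip rather than the TCP/HTTP check a monitoring service performs, so treat it as a sanity check rather than a substitute for the graphs above.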

Round 2 – Page Speed Tests

Page speed tests connect to the remote site, download the full content of the page, then process the content and display it for review.  This test runs once every 30 minutes, and the graphs show the response time to download the full page.
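
If you’d like to approximate this kind of check yourself, curl can report the total transfer time for a single request. This is only a rough sketch of what the monitoring service does on its 30-minute schedule, and the hostname below is a placeholder:

curl -s -o /dev/null -w 'total: %{time_total}s  size: %{size_download} bytes\n' https://www.example.com/

Note that this grabs only the HTML document itself, not the images, scripts, and stylesheets a real browser (or a full-page monitoring check) would also download.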

Challenger 3 – Page Speed

The 4.3-second average response time stayed more consistent than the other Challengers’, but it is still considerably slower than what visitors would like to see.

Mindpack Studios – Page Speed

Note that the scales of the two images above are different. A 1.2-second average response time makes Mindpack roughly 3.6 times faster than the competition.  Again, there are a few blips, which are still puzzling on this end; we didn’t spend too much time digging through the client’s site, and there could be other plugins or modules causing hiccups at times.

Page Speed Comparison

For clarity, we created an overlay to show just how much faster Mindpack is.  Click the image to zoom in; the black graph represents Mindpack Studios’ average response times overlaid on the blue Challenger 3 graph.

Round 3 – Concurrency

Our concurrency tests use ApacheBench to simulate 20 people attempting to visit the website continuously until 500 requests are made.
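
For reference, the equivalent ApacheBench invocation looks something like the line below; the hostname is a placeholder rather than the client’s actual domain:

ab -n 500 -c 20 https://www.example.com/

The -n flag sets the total number of requests and -c sets how many are kept in flight at once, which is what simulates the 20 simultaneous visitors.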

The results here are startling.  

Shortly after starting the concurrency testing, the website just stopped responding.  We assumed a web application firewall (WAF) thought we were DoS-ing the site and was blocking us proactively.  Twenty simultaneous connections is hardly a denial-of-service attack, of course, but still, some WAFs are overly protective.

Only, instead of our testing workstation being blocked as we expected, it turned out the whole Internet couldn’t see the site. (!!!)

Within 30 seconds of starting the test, the Pingdom messages began coming in:

ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is down since 10/11/2017 05:21:06PM. 
Reason: Socket timeout, unable to connect to server
ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is UP again at 10/11/2017 05:24:06PM, after 3m of downtime.

We waited a bit, and then tried the concurrency test again.  And again, within 30 seconds, another report came in:

ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is down since 10/11/2017 05:30:06PM. 
Reason: Socket timeout, unable to connect to server
ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is UP again at 10/11/2017 05:32:06PM, after 2m of downtime.

Seriously unbelievable, but since we have to perform 3 tests, we gave it one last whirl:

ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is down since 10/11/2017 05:38:06PM. 
Reason: Could not find redirect location
ACP-14a - [client name redacted for privacy] (www.<redacted>.com) is UP again at 10/11/2017 05:39:06PM, after 1m of downtime.

It is unfortunate that our testing brought the site down so easily.  Twenty connections should never put a system into that kind of state, though it seems to be a regular scenario with commodity hosting: overloaded networks and servers, and close to zero pride in the product being sold.

The image above shows the outages graphically.

Challenger 3 – Concurrency

Concurrency Level:      20
Time taken for tests:   344.343 seconds
Complete requests:      500
Failed requests:        0
Non-2xx responses:      500
Total transferred:      300000 bytes
HTML transferred:       0 bytes
Requests per second:    1.45 [#/sec] (mean)
Time per request:       13773.740 [ms] (mean)
Time per request:       688.687 [ms] (mean, across all concurrent requests)
Transfer rate:          0.85 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       41   55   8.1     61      76
Processing:  7193 13590 1561.8  13836   18968
Waiting:     7193 13590 1561.8  13836   18968
Total:       7239 13645 1562.0  13884   19015

As soon as this site was under any type of load, I believe a WAF was getting involved, based on the 0 bytes of HTML transferred (see above).  That suggests the site was being heavily rate limited and requests weren’t coming back to the client with any page body.  Again, it would be perfectly fair for a WAF to block access if it thought we were an attacker, but the fact that the entire site came down suggests there is more at play here.
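
A quick way to see what those non-2xx responses actually are is to fire a single request during the test window and print the status code and body size. This is a hypothetical spot check rather than part of the formal test, and the hostname is a placeholder:

curl -s -o /dev/null -w '%{http_code} %{size_download}\n' https://www.example.com/

A 403 or 429 with a near-zero body would point at a WAF or rate limiter, while a timeout would line up with the Pingdom outage alerts.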

Mindpack Studios – Concurrency

Concurrency Level:      20
Time taken for tests:   13.244 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      20254000 bytes
HTML transferred:       20084000 bytes
Requests per second:    37.75 [#/sec] (mean)
Time per request:       529.753 [ms] (mean)
Time per request:       26.488 [ms] (mean, across all concurrent requests)
Transfer rate:          1493.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       12   18   2.2     18      29
Processing:   436  503  57.0    489     932
Waiting:      342  385  34.9    374     641
Total:        451  521  57.1    507     950

It’s not much of a comparison when the other side doesn’t even respond under concurrent load, but we’ve included the results of the Mindpack test to give some indication of what the response times should have looked like.

Conclusion

Early on in the test we noticed some promising results. The site benefited from consistent network response times, and even if the page was running a bit slow at 4 seconds, it was at least usable.  There weren’t any major spikes upwards of 10 or 15 seconds like we’ve seen in the past with other providers.  Additionally, although the site is slower than we personally would like to see, it’s hard to tell whether the 4-second delay is causing any bounces due to customer frustration.

That being said, the concurrency results are just bad – really bad.  As soon as the customer starts getting even minimal levels of traffic (say, 10 people visiting at the same time), they’ll see the site come down.  More than likely, this has already happened and they never noticed the customers they had already lost.

We’ve expressed a few times now that commodity hosting rate-limits sites so that one site can’t overload others on the same server.  In their quest to keep the phone from ringing and to squeeze maximum density out of legacy hardware, the big hosting companies far more often end up rate-limiting legitimate traffic just to keep one site from causing problems for the rest.

Shortly after a website owner sends out that e-mail to 100 or 500 users showcasing a new product or service, customers pour into the site by the dozens or hundreds.  During this most valuable time, when the customer is actually interested in the product you’re pushing and you actually have their attention for those 10 short seconds of their busy day, they see a spinning icon in their browser and a white page as they wait for the site to come up…

We’ve all been there, and we all just close the browser and carry on.