A couple of weeks ago Cloudflare, one of our competitors, published a blog post in which they claimed that their edge compute platform is roughly three times as fast as Compute@Edge. This nonsensical conclusion provides a great example of how statistics can be used to mislead. Read on for an analysis of Cloudflare’s testing methodology, and the results of a more scientific and useful comparison.
It has often been said that there are three kinds of untruths: lies, damned lies, and statistics. This is perhaps unfair: Some statistics are pretty sound. But these are not:
Where to begin? Citing Catchpoint this way makes the claim sound like an independent study from a third party. It's not. Catchpoint lets you configure its tools for your own needs: you could use them to build a test based on a fair and rigorous benchmark standard, or you could use them to tell the story you want to tell.
Cloudflare used a curated selection of Catchpoint nodes in its tests. There's no explanation of why this specific set of nodes was chosen, but it's worth noting that Cloudflare's infrastructure is not in exactly the same places as ours, and a biased choice of test locations can skew the results dramatically.
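To see how much node selection alone can move the numbers, here's a toy simulation. Every latency value below is invented for illustration; these are not real measurements of any provider. The point is purely statistical: if your test panel happens to sit near one provider's points of presence, the averages flip.

```python
import statistics

# Hypothetical round-trip latencies (ms) from ten test locations to two
# providers' nearest points of presence. All numbers are made up.
latency_a = [12, 15, 14, 90, 85, 16, 13, 95, 88, 14]  # provider A
latency_b = [45, 48, 44, 20, 18, 46, 47, 22, 19, 45]  # provider B

def mean_over(indices, samples):
    """Average latency over a chosen subset of test locations."""
    return statistics.mean(samples[i] for i in indices)

all_nodes = range(10)
# A "curated" panel that happens to include only locations close to
# provider A's infrastructure.
curated = [0, 1, 2, 5, 6, 9]

# Over all ten locations, provider B is faster on average (35.4 ms vs
# 44.2 ms). Over the curated panel, provider A suddenly looks more than
# three times as fast (14.0 ms vs ~45.8 ms) -- same data, different story.
print(mean_over(all_nodes, latency_a), mean_over(all_nodes, latency_b))
print(mean_over(curated, latency_a), mean_over(curated, latency_b))
```

Nothing in the underlying data changed between the two runs; only the choice of where to measure from did.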