Forget Mesos And OpenStack, Hashi Stack Is The New Next Platform

While a lot of software for creating and managing scale comes out of supercomputing centers, hyperscalers, and the largest public cloud builders, there is still plenty of innovation being done by people who need to tackle scale outside of these upper-echelon organizations. Two of them are Mitchell Hashimoto and Armon Dadgar, the co-founders of HashiCorp, who have spent more than a decade building what is turning out to be the likely commercial alternative to the Kubernetes stack – one that also supports Kubernetes if you really want to do that.

Like many open source projects that have made the leap to commercial success – and we are not saying that there are many of those – the first project in the Hashi Stack, called Vagrant, was a personal project of Hashimoto that created a kind of consistent configuration wrapper around application software that made it easier to package and update. Eventually Engine Yard – remember that platform cloud alternative to Red Hat’s original OpenShift and VMware’s Cloud Foundry? – sponsored Vagrant, which originally ran on Oracle’s VirtualBox hypervisor but which was expanded to include VMware’s ESXi, Red Hat’s KVM, and Microsoft’s Hyper-V hypervisors as well as the custom Xen hypervisor used by Amazon Web Services.
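To make that configuration-wrapper idea concrete, here is a minimal Vagrantfile sketch. It is only illustrative: the box name, provider settings, and provisioning command are assumptions for the example, not anything taken from the Engine Yard or Kiip setups described above.

# Minimal Vagrantfile sketch (Ruby DSL); values below are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"            # base machine image the wrapper packages

  # Provider-specific tuning; swap "virtualbox" for "vmware_desktop",
  # "hyperv", or another provider plugin as needed.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
    vb.cpus   = 2
  end

  # Provision the guest with a simple shell step so the environment is reproducible.
  config.vm.provision "shell", inline: "apt-get update -y"
end

Running vagrant up against a file like this brings up the same machine definition on any supported provider, which is the portability point the Vagrant project was making.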

Hashimoto and Dadgar both got their bachelor’s degrees in computer science from the University of Washington, and they worked together at Kiip, a mobile ad tech and data platform provider based in San Francisco that counts Coca-Cola, Kellogg’s, Procter & Gamble, McDonald’s, and Johnson & Johnson among its marquee customers. The Kiip ad engine was built in Python, Ruby, Bash, and Puppet, and when it was first turned on in 2010 (when Vagrant was still a side project for Hashimoto), it could process a measly 1 query per second at 200 milliseconds of average latency, which is right at the impatience limit of the human attention span. By the time they founded HashiCorp two years later, that Kiip system they left in the hands of their former employer had been revved up to 2,000 queries per second at a 20 millisecond average response time. That’s a 2,000X improvement in throughput and a 10X improvement in latency, which is not too shabby.
