
How to connect stateful workloads across Kubernetes clusters


Roman Chernobelskiy on May 17, 2021 · 12 minute read

One of the biggest selling points of Apache Cassandra™ is its shared-nothing architecture, which makes it an ideal choice for deployments that span multiple physical datacenters. So when our Cassandra-as-a-service single-region offering reached maturity, we naturally started looking into offering it cross-region and cross-cloud. One of the biggest challenges in providing a solution that spans multiple regions and clouds is configuring the network correctly so that Cassandra nodes in different datacenters can communicate with each other, even as individual nodes are added, replaced, or removed. From the start of the cloud journey at DataStax, we selected Kubernetes as our orchestration platform, so our search for a networking solution started there. While we’ve benefited immensely from the ecosystem and have our share of war stories, this time we chose to forge our own path, landing on ad-hoc overlay virtual application networks (how’s that for a buzzword soup?). In this post, we’ll go over how we arrived at our solution, give a technical overview of it, and walk through a hands-on example with the Cassandra operator.

About a year ago, several blog posts were published that inspired us on this journey. The first was the Nebula announcement from Slack. Reading it and then learning about Nebula’s architecture was a good introduction to the capabilities and feasibility of home-built overlay networks. The introduction to how Tailscale works was another good primer on the subject. Later, Linkerd published a post about service mirroring. While we use Istio in some capacity, Linkerd has always looked attractive because of its architecture of simple, pluggable components, and its post about service mirroring did not disappoint. Like a lot of great ideas, exposing a pod in a different cluster behind a Service IP looks obvious in hindsight. But Service IPs were not scalable enough for our use case, since kube-proxy programs every one of them on every node. So the idea was born: mint our own virtual IPs and expose them only to the relevant pods via sidecars.
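To make the service-mirroring idea concrete, here is a minimal sketch (not our production setup) of how a pod in a remote cluster can be exposed behind a local Service IP: a selector-less Service paired with a hand-managed Endpoints object that points at the remote node’s routable address. The names, port, and IP address below are made up for illustration. Note that kube-proxy programs each such ClusterIP on every node in the cluster, which is exactly the scaling cost mentioned above.

```yaml
# Hypothetical example: mirror a Cassandra node from a remote cluster ("dc2")
# behind a local Service IP. Because the Service has no selector, Kubernetes
# will not manage its Endpoints; we supply them ourselves.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-dc2-node-0        # stands in for a node in the remote datacenter
spec:
  ports:
    - name: internode
      port: 7000                    # Cassandra internode port
      targetPort: 7000
---
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-dc2-node-0        # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.42            # externally reachable address of the remote node (illustrative)
    ports:
      - name: internode
        port: 7000
```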
