Microservices, by their nature, are ephemeral. Pods and containers are created and removed on demand all the time. Every autoscale event, new application, code update, or rollback leads to the creation or deletion of swathes of pods and containers.

Kubernetes tracks these changes internally and keeps its own state consistent. But as pods are created or removed, the ingress proxy must be updated quickly so it knows where to send requests. The rate of change can be so high that there's a real danger the ingress proxy can't keep pace. That can lead to application failures and affect your ability to serve your customers.
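Conceptually, an ingress controller watches the Kubernetes API for pod endpoint events and reconciles the proxy's backend list as they arrive. The snippet below is a minimal, hypothetical sketch of that reconcile loop in plain Python; it is not Citrix controller code, and a real controller consumes events from the Kubernetes watch API and batches configuration updates to the proxy rather than processing an in-memory list.

```python
# Minimal sketch of the reconcile logic an ingress controller performs:
# consume pod endpoint ADDED/DELETED events and keep the proxy's backend
# set in sync. Hypothetical illustration only; real controllers stream
# events from the Kubernetes watch API.

def reconcile(backends, events):
    """Apply a stream of (event_type, pod_ip) tuples to the backend set."""
    for event_type, pod_ip in events:
        if event_type == "ADDED":
            backends.add(pod_ip)       # new pod: start routing traffic to it
        elif event_type == "DELETED":
            backends.discard(pod_ip)   # pod gone: stop routing to it
    return backends

backends = set()
events = [
    ("ADDED", "10.0.0.1"),
    ("ADDED", "10.0.0.2"),
    ("DELETED", "10.0.0.1"),  # e.g. a rollback removed this pod
]
reconcile(backends, events)
print(sorted(backends))  # ['10.0.0.2']
```

The danger described above is simply this loop (and the proxy updates behind it) falling behind the rate at which events arrive.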

Large Scale Ephemeral Pod Deployment

Large-scale ephemeral pod deployments are a reality for organizations that autoscale aggressively or roll out thousands of application code updates every day.

Take Uber, for example. At KubeCon 2019 in San Diego, Uber reported spinning up 1 million containers every day and creating 40,000 pods in 30 seconds across 8,000 nodes. That's a staggering rate, and it was inspiring for us to hear!
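For perspective, a quick back-of-the-envelope calculation on the figures above shows what those numbers mean per second:

```python
# Back-of-the-envelope rates implied by Uber's KubeCon 2019 figures.
containers_per_day = 1_000_000
burst_pods, burst_seconds = 40_000, 30

avg_per_sec = containers_per_day / (24 * 3600)  # sustained daily average
burst_per_sec = burst_pods / burst_seconds      # peak burst rate

print(f"average: {avg_per_sec:.1f} containers/s")  # average: 11.6 containers/s
print(f"burst:   {burst_per_sec:.0f} pods/s")      # burst:   1333 pods/s
```

An ingress proxy sitting in front of workloads like these has to absorb endpoint changes at burst rates of over a thousand pods per second.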

Creating pods is one thing, but at Citrix we wanted to see how the ingress proxy would handle these situations. Could it cope with the high rates of change we threw at it?

Yes, We Did It!

For Citrix ADC, the answer is an emphatic yes! We ran tests that created 50,000 pods across 1,000 nodes on Google Cloud Platform (GKE), with all the updates sent to the Citrix ingress proxy, Citrix ADC VPX. The results were astounding. A single instance of Citrix ADC VPX kept up with the configuration updates generated by 50,000 pod creations and stayed ahead of all the changes with ample room to spare.

Of course, we know that, in reality, customers will use multiple proxy instances to manage and scale their workloads. But it's supremely satisfying to validate that a single instance of Citrix ADC can support such large-scale ephemeral deployments.

Learn more about Citrix ADC, and keep an eye out for an upcoming technical blog post on our tests.