Autoscaling using KEDA

Scale workloads automatically based on the size of a RabbitMQ queue, and get on-demand processing for queued tasks. [Video: a sped-up example of autoscaling using KEDA with a RabbitMQ setup.] what & why Kubernetes is a great fit for autoscaling, and it already has a built-in system for doing it based on metrics-server data, like a pod's CPU usage. That's quite easy to do with the Horizontal Pod Autoscaler (HPA), and I made a demo system with it [here](/posts/kube-hpa). ...

May 16, 2022 · 6 min · 1249 words
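The post summarized above scales on queue length rather than CPU. A minimal sketch of what such a KEDA setup might look like (the deployment name `worker`, queue name `tasks`, and the `RABBITMQ_HOST` environment variable are assumptions for illustration, not taken from the post):

```yaml
# Hypothetical KEDA ScaledObject: scale the "worker" deployment
# based on the depth of a RabbitMQ queue named "tasks".
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker          # assumed Deployment name
  minReplicaCount: 0      # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: tasks        # assumed queue name
        mode: QueueLength       # scale on number of messages
        value: "20"             # target ~20 messages per replica
        hostFromEnv: RABBITMQ_HOST  # AMQP connection string from env
```

With `minReplicaCount: 0`, KEDA can deactivate the workload entirely when there is no work, which the built-in HPA cannot do on its own.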

Exploring Kube's Horizontal Pod Autoscaler

what & why Let’s say you have a scalable architecture (like a server/worker model), and you want autoscaling to happen automatically based on the workers’ CPU usage, which is useful in some scenarios. Kubernetes has a Horizontal Pod Autoscaler feature that we can use to do just that! how First, let’s talk requirements. You’ll need: a k8s cluster (k0s, minikube, or microk8s), kubectl installed and configured to talk to your cluster, and metrics-server deployed, which provides the metrics the autoscaling algorithm needs; check how to set it up with your particular provider. example architecture Here is an example architecture that can benefit from scaling: ...

July 27, 2021 · 5 min · 930 words
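The CPU-based autoscaling described in this post can be sketched as a standard `autoscaling/v2` HPA manifest (the deployment name `worker` and the 50% utilization target are assumptions for illustration, not values from the post):

```yaml
# Hypothetical HPA: keep average CPU utilization of the "worker"
# deployment's pods around 50%, scaling between 1 and 10 replicas.
# Requires metrics-server to be running in the cluster.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker          # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # assumed target percentage
```

Note that utilization is computed against each container's CPU *request*, so the target deployment must set resource requests for the HPA to work.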