- KEDA | Kubernetes Event-driven Autoscaling
With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. KEDA is a single-purpose, lightweight component that can be added to any Kubernetes cluster.
- KEDA Concepts
What is KEDA? KEDA is a tool that helps Kubernetes scale applications based on real-world events. It was created by Microsoft and Red Hat. With KEDA, you can scale your containers automatically depending on the workload, such as the number of messages in a queue or the volume of incoming requests.
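The queue-driven scaling described above can be sketched as a minimal ScaledObject. The Deployment name, queue name, and connection string below are placeholder assumptions, and the RabbitMQ scaler is just one example of an event source:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: queue-consumer         # hypothetical Deployment to scale
  minReplicaCount: 0             # allow scale-to-zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders        # hypothetical queue
        mode: QueueLength
        value: "5"               # target messages per replica
        host: amqp://guest:guest@rabbitmq.default.svc:5672/  # placeholder connection string
```

With this in place, KEDA adds replicas as the queue backlog grows and removes them, down to zero, as it drains.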
- KEDA | Getting Started
Welcome to the documentation for KEDA, the Kubernetes Event-driven Autoscaler. Use the navigation bar on the left to learn more about KEDA’s architecture and how to deploy and use KEDA.
- Scaling Deployments, StatefulSets & Custom Resources - KEDA
With KEDA you can scale any workload defined as a Custom Resource (for example, an ArgoRollout resource). The scaling behaves the same way as scaling an ordinary Kubernetes Deployment or StatefulSet.
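For a Custom Resource, the scaleTargetRef names the resource's apiVersion and kind explicitly; the target must expose the Kubernetes scale subresource. A sketch targeting a hypothetical Argo Rollout (resource name is a placeholder):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rollout-scaler              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1  # CR group/version instead of apps/v1
    kind: Rollout                     # CR kind instead of Deployment
    name: my-rollout                  # hypothetical Rollout name
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"                   # target average CPU utilization (%)
```

Apart from the explicit apiVersion and kind, the spec is identical to one scaling a Deployment.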
- Deploying KEDA
This command installs KEDA in a dedicated namespace (keda). You can customize the installation by passing additional configuration values with --set, allowing you to adjust parameters such as replica counts, scaling metrics, or logging levels.
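The snippet refers to "this command" without showing it; a typical Helm-based installation from the official kedacore chart repository looks like the following (the --set override shown is only an illustrative example):

```shell
# Add the official KEDA Helm repository and refresh the local index
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

# Install KEDA into its own namespace; --set overrides chart defaults
helm install keda kedacore/keda \
  --namespace keda --create-namespace \
  --set operator.replicaCount=2   # example override: run two operator replicas
```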
- Scalers - KEDA
A KEDA external scaler that can obtain metrics from an OTel Collector and use them for autoscaling.
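External scalers plug into KEDA through the generic `external` trigger type, which points at the gRPC service the scaler exposes. The address below is a placeholder for wherever the OTel-based scaler is deployed, and any further metadata keys are defined by that scaler's implementation:

```yaml
triggers:
  - type: external
    metadata:
      scalerAddress: otel-scaler.keda.svc:8080  # placeholder address of the external scaler's gRPC endpoint
      # additional keys here are passed through to the external scaler
```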
- AWS SQS Queue - KEDA
When identityOwner is set to operator, the only requirement is that the KEDA operator has the correct IAM permissions on the SQS queue. Additional authentication parameters are not required.
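A trigger using the operator's own IAM identity can be sketched as below; the queue URL and region are placeholders. Because identityOwner is operator, no TriggerAuthentication resource is needed:

```yaml
triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue  # placeholder queue
      queueLength: "5"           # target messages per replica
      awsRegion: us-east-1       # placeholder region
      identityOwner: operator    # use the KEDA operator's IAM permissions
```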
- Prometheus - KEDA
Set up GCP Workload Identity on the KEDA operator, then assign the Monitoring Viewer role (namely roles/monitoring.viewer) to the Google Service Account in Identity and Access Management (IAM).
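The role assignment step can be done with gcloud; the project ID and service-account name below are placeholder assumptions:

```shell
# Grant Monitoring Viewer to the Google Service Account the KEDA operator uses
# (my-project and keda-operator@... are placeholders)
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:keda-operator@my-project.iam.gserviceaccount.com" \
  --role "roles/monitoring.viewer"
```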