Autoscaling across AWS, Azure, GCP, Hetzner, and IONOS — with one model.
Define Node Pools with provider-specific settings. K8S Engine handles scaling logic, safe drain operations, and capacity management across all your infrastructure.
Why it matters
Most teams can "get Kubernetes running," but struggle with:
Scaling across mixed providers with different APIs
Consistent behavior in hybrid setups
Safe scale-down (cordon, drain, PodDisruptionBudgets) without impacting availability
Node Pools: the core concept
A Node Pool defines a group of nodes with shared characteristics and scaling behavior.
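As a rough sketch, a pool can be thought of as a record like the following. The field names here are illustrative only, not the actual K8S Engine API:

```python
from dataclasses import dataclass, field

@dataclass
class NodePool:
    """A group of nodes with shared characteristics and scaling behavior.

    Field names are illustrative, not the real K8S Engine schema.
    """
    name: str
    provider: str                 # e.g. "aws", "hetzner", "ionos"
    machine_type: str             # provider-specific instance/server type
    min_size: int = 0
    max_size: int = 10
    labels: dict = field(default_factory=dict)   # applied to every node in the pool
    taints: list = field(default_factory=list)   # keep general workloads off by default

# Example: a scale-to-zero burst pool on Hetzner next to steady-state capacity.
burst = NodePool(
    name="burst-cx32",
    provider="hetzner",
    machine_type="cx32",
    min_size=0,
    max_size=20,
    labels={"pool": "burst"},
    taints=["burst=true:NoSchedule"],
)
```

Everything that varies by provider (machine type, region, credentials) lives in the pool definition; the scaling behavior stays the same across providers.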
Scale up
When pods are pending due to insufficient capacity, K8S Engine increases desired pool size and provisions new nodes through Cluster API.
- Detects pending pods with resource requests
- Calculates optimal node count per pool
- Provisions nodes via provider API
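The sizing step amounts to: sum what the pending pods request, divide by what one node offers, and clamp to the pool's maximum. A simplified sketch (a hypothetical helper that ignores bin-packing, daemonset overhead, and per-pod limits):

```python
import math

def nodes_needed(pending_cpu_m, pending_mem_mi, node_cpu_m, node_mem_mi,
                 current, max_size):
    """Estimate how many nodes to add so pending pods fit.

    CPU in millicores, memory in MiB. Simplified: assumes requests pack
    perfectly and every node in the pool is identical.
    """
    by_cpu = math.ceil(pending_cpu_m / node_cpu_m)
    by_mem = math.ceil(pending_mem_mi / node_mem_mi)
    wanted = max(by_cpu, by_mem)          # the tighter resource wins
    # Never exceed the pool's configured maximum size.
    return min(current + wanted, max_size) - current

# Pending pods request 1500m CPU / 2 GiB total; nodes have 4 CPUs / 8 GiB:
print(nodes_needed(1500, 2048, 4000, 8192, current=2, max_size=5))  # → 1
```

The real calculation has to account for pods that cannot share a node, but the shape of the decision is the same.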
Scale down (safe by default)
When nodes are underutilized and safe to remove:
- Cordon → Drain → Respect PDBs → Terminate
- Cooldown windows prevent thrashing
- Guardrails prevent runaway scaling
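The sequence above can be sketched in a few lines. `k8s` and `cloud` stand for hypothetical client interfaces, not a real API; the point is the ordering and the two ways the operation backs off safely:

```python
import time

def scale_down(node, k8s, cloud, cooldown_s=300, state=None):
    """Sketch of cordon → drain → respect PDBs → terminate, with a cooldown."""
    state = {} if state is None else state
    # Cooldown window: refuse to remove a node too soon after the last
    # removal, which prevents scale-up/scale-down thrashing.
    last = state.get("last_removal")
    if last is not None and time.monotonic() - last < cooldown_s:
        return False
    k8s.cordon(node)                       # stop new pods landing here
    if not k8s.pdbs_allow_eviction(node):  # respect PodDisruptionBudgets
        k8s.uncordon(node)                 # back off rather than break availability
        return False
    k8s.drain(node)                        # evict pods gracefully
    cloud.terminate(node)                  # release the instance at the provider
    state["last_removal"] = time.monotonic()
    return True
```

Note that the function returns False without terminating anything whenever a guardrail trips; a skipped scale-down is always recoverable, a broken PDB is not.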
Multi-provider strategy
Define multiple pools
For example: on-prem steady-state + cloud burst capacity
Use labels, taints, and affinity
Steer workloads to specific pools or providers
Clear visibility
Every scale action is recorded and explainable
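Steering works with standard Kubernetes scheduling fields, nothing proprietary. A pod spec fragment (shown as a Python dict; the `pool` label and `burst` taint are example values matching a hypothetical burst pool):

```python
# Standard Kubernetes pod spec fields that pin a workload to one pool:
# a nodeSelector to target the pool's label, plus a toleration for the
# taint that keeps everything else off those nodes.
pod_spec = {
    "nodeSelector": {"pool": "burst"},   # only schedule on burst-pool nodes
    "tolerations": [{                    # allow scheduling despite the pool taint
        "key": "burst",
        "operator": "Equal",
        "value": "true",
        "effect": "NoSchedule",
    }],
}
```

Affinity rules work the same way for softer preferences, e.g. "prefer on-prem, fall back to cloud burst."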
What teams love
One UI and one API
Unified interface for scaling policies across all providers
Predictable behavior
Same scaling logic whether nodes are on AWS, bare metal, or edge
Fewer custom scripts
No more "autoscaler glue code" per environment
Easier capacity planning
Clear visibility into scaling decisions and utilization
One control plane. Any infrastructure.
Create Node Pools across AWS, Azure, Google Cloud, Hetzner, and IONOS. Use labels and taints to steer workloads where they need to run.