December 31, 2024

Kubernetes Node Autoscaling with Cluster Orchestrator


Node Autoscaling Problem in Kubernetes

Kubernetes enables dynamic scaling of workloads, but efficiently scaling the underlying nodes remains a challenge. Workloads experience fluctuating demand, and without a responsive scaling mechanism, clusters either run out of capacity (leading to failed pod scheduling) or become inefficient (leading to underutilized nodes and unnecessary costs). The key challenges in Kubernetes node autoscaling include:

  • Slow reaction to demand surges: Without automation, provisioning new nodes takes time, causing pod scheduling delays.
  • Over-provisioning or under-utilization: Manually setting node counts leads to inefficiencies.
  • Cluster resource fragmentation: Suboptimal node selection results in wasted resources.
  • Cost inefficiency: Running excess nodes increases cloud costs without delivering proportional benefits.

Cluster Autoscaler for Kubernetes Autoscaling

To address these challenges, the Cluster Autoscaler (CA) was introduced as a native solution for node autoscaling. The Cluster Autoscaler automatically adjusts the number of nodes in a cluster based on workload demand: it scales up by adding nodes when pods cannot be scheduled due to insufficient resources, and scales down by removing underutilized nodes to reduce costs. CA integrates with cloud providers to modify node pools dynamically and respects scheduling constraints such as taints, tolerations, and affinity rules. However, it has several limitations:

  • Reactive scaling: it only provisions nodes after pods have already failed to schedule.
  • Slow response times, due to its reliance on cloud provider APIs and managed node groups.
  • Inefficient bin packing, leading to resource fragmentation.
  • No multi-cluster awareness: it operates within a single cluster and cannot optimize workload distribution across clusters.
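As a concrete illustration, CA's scale-up trigger is simply a pod the scheduler marks Pending for lack of resources. A minimal sketch (the name, image, and resource values are illustrative): if the cluster cannot fit all the requested replicas below, the unschedulable pods stay Pending and Cluster Autoscaler grows the node pool.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # illustrative name
spec:
  replicas: 10
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
        - name: demo-app
          image: nginx:1.27
          resources:
            requests:
              cpu: "500m"     # CA sizes scale-up decisions from these
              memory: "512Mi" # requests, not from live usage
```

Note that CA reacts to the pods' `Unschedulable` condition, which is why scaling is inherently reactive: new capacity is requested only after scheduling has already failed.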

Karpenter for Kubernetes Autoscaling

Karpenter overcomes the limitations of Cluster Autoscaler by offering a faster, more flexible, and workload-aware approach to node autoscaling. Unlike CA, which relies on cloud provider-managed node groups, Karpenter provisions nodes directly through cloud APIs, significantly reducing the time it takes to bring up new instances. It eliminates the need for static node groups, enabling on-demand instance provisioning across diverse instance types, including both spot and on-demand instances, for cost optimization. However, Karpenter has its own shortcomings: it lacks sophisticated spot orchestration, effective bin packing, and visibility into the changes it makes to the cluster.
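For reference, Karpenter replaces static node groups with a declarative provisioning spec. A minimal sketch of a Karpenter v1 NodePool that allows both spot and on-demand capacity (names and limits are illustrative, and the `EC2NodeClass` reference assumes AWS; consult the Karpenter docs for your provider):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # let Karpenter pick either market
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                     # provider-specific node settings
  limits:
    cpu: 1000                             # cap total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

With a spec like this, Karpenter selects instance types per pending pod's requirements rather than scaling a fixed group, which is what makes its provisioning faster than CA's.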

Cluster Orchestrator for Kubernetes

Harness CCM Cluster Orchestrator is built on top of Karpenter and addresses its key shortcomings. It provides:

  • Out-of-the-box support for advanced spot orchestration with on-demand fallback and reverse fallback.
  • Harness’ Pod Evictor, paired with Karpenter’s consolidation, for efficient bin packing.
  • Dynamic spot/on-demand workload replica splitting for better cost optimization.
  • Visibility into cost savings generated by Cluster Orchestrator.
  • Commitment utilization guarantee by integrating with Harness’s Commitment Orchestration to maximize reserved capacity.

Spot Orchestration for Kubernetes Nodes with Dynamic Spot/On-Demand Workload Split Configuration

Cluster Orchestrator: Spot Preferences


Karpenter’s native spot orchestration is limited compared to Cluster Orchestrator. Karpenter relies on an SQS Queue, which must be set up and maintained by the user to monitor interruption signals. In contrast, Cluster Orchestrator works seamlessly out of the box, requiring no additional configuration.

Cluster Orchestrator employs a sophisticated strategy for managing spot nodes:

  • Node Selection Strategy: A configurable strategy (e.g., Cost-Optimized or Least Interrupted) determines the type of node to provision.
  • Cost-Optimized: Prefers cheaper nodes, even if they have higher interruption rates.
  • Least Interrupted: Prefers spot nodes with lower chances of interruption.
  • Fallback to On-Demand Nodes: If no suitable spot nodes are available based on pod requirements and the configured strategy, the orchestrator provisions an on-demand node as a fallback.
  • Reverse Fallback: When the spot market stabilizes, the on-demand node is converted back to a spot node, ensuring continuous cost-efficiency.
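The selection-and-fallback flow above can be sketched as follows. This is a hypothetical model, not the Cluster Orchestrator API; the offer data, threshold, and function names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SpotOffer:
    instance_type: str
    hourly_price: float
    interruption_rate: float  # fraction of instances reclaimed per hour

def pick_node(offers, strategy, max_interruption=0.10):
    """Return the spot offer to provision, or None to fall back to on-demand."""
    viable = [o for o in offers if o.interruption_rate <= max_interruption]
    if not viable:
        return None  # fallback: provision an on-demand node instead
    if strategy == "cost-optimized":
        # cheapest offer wins, even if it is interrupted more often
        return min(viable, key=lambda o: o.hourly_price)
    if strategy == "least-interrupted":
        # most stable offer wins, even if it costs more
        return min(viable, key=lambda o: o.interruption_rate)
    raise ValueError(f"unknown strategy: {strategy}")

offers = [
    SpotOffer("m5.large", 0.035, 0.08),
    SpotOffer("m5a.large", 0.031, 0.09),
    SpotOffer("c5.large", 0.033, 0.04),
]
print(pick_node(offers, "cost-optimized").instance_type)     # m5a.large
print(pick_node(offers, "least-interrupted").instance_type)  # c5.large
# Reverse fallback: periodically re-run pick_node; once it returns a viable
# spot offer again, the on-demand replacement node can be drained and swapped.
```

The two strategies differ only in the ranking key; the fallback and reverse-fallback behavior comes from re-evaluating the spot market over time.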

Cluster Orchestrator also allows users to split Kubernetes workload replicas between spot and on-demand nodes using percentage-based allocation. Users can:

  • Define what percentage of pods should run on on-demand vs. spot nodes.
  • Maintain a base capacity of on-demand pods to handle steady traffic while scaling additional workloads on cost-effective spot nodes.
  • Choose to apply the configuration to All Workloads in the cluster or only to Spot Ready workloads (those with more than one replica).

This feature is particularly beneficial for clusters requiring stable, uninterrupted workloads, while allowing excess capacity to leverage cheaper spot instances for cost savings.
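The percentage split can be modeled as simple arithmetic over the desired replica count. A sketch under assumed semantics (the function and parameter names are illustrative, not the product's API):

```python
import math

def split_replicas(total, spot_percent, min_on_demand=1):
    """Split workload replicas between spot and on-demand nodes.

    spot_percent: desired share of replicas on spot capacity (0-100).
    min_on_demand: base on-demand capacity kept for steady traffic.
    """
    spot = math.floor(total * spot_percent / 100)
    on_demand = total - spot
    if on_demand < min_on_demand:       # preserve the on-demand base capacity
        on_demand = min(min_on_demand, total)
        spot = total - on_demand
    return spot, on_demand

print(split_replicas(10, 80))   # (8, 2)
print(split_replicas(2, 100))   # (1, 1): base capacity wins over 100% spot
# A single-replica workload is not "Spot Ready": moving its only replica to
# spot would leave no stable copy, so the base capacity keeps it on-demand.
```

Rounding down the spot share means ties always favor on-demand capacity, which matches the goal of keeping a stable base for steady traffic.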

Efficient Bin Packing

Cluster Orchestrator: Bin Packing


Cluster Orchestrator enhances Karpenter’s consolidation features by implementing a Pod Evictor, which takes user-defined CPU and memory utilization thresholds to determine underutilized nodes. If a node’s resource usage falls below these thresholds, all its pods are evicted — but only if they can be scheduled on other nodes in the cluster, respecting Pod Disruption Budgets (PDBs) and other constraints. This approach significantly improves bin packing, reducing wasted resources.
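The eviction decision described above can be sketched as a two-part check: is the node below both utilization thresholds, and does the rest of the cluster have room for its pods? This is a simplified model (node data and thresholds are invented; PDB and affinity checks are omitted), not the Pod Evictor's actual implementation:

```python
def find_evictable_nodes(nodes, cpu_threshold=0.3, mem_threshold=0.3):
    """Return underutilized nodes whose pods can drain onto the remaining nodes.

    nodes: dict of name -> {"cpu_used", "cpu_cap", "mem_used", "mem_cap"}.
    A node qualifies only when BOTH utilizations fall below the thresholds
    AND the other nodes have enough spare capacity to absorb its pods.
    """
    evictable = []
    for name, n in nodes.items():
        if n["cpu_used"] / n["cpu_cap"] >= cpu_threshold:
            continue
        if n["mem_used"] / n["mem_cap"] >= mem_threshold:
            continue
        # spare capacity across every other node in the cluster
        spare_cpu = sum(m["cpu_cap"] - m["cpu_used"]
                        for o, m in nodes.items() if o != name)
        spare_mem = sum(m["mem_cap"] - m["mem_used"]
                        for o, m in nodes.items() if o != name)
        if n["cpu_used"] <= spare_cpu and n["mem_used"] <= spare_mem:
            evictable.append(name)  # drain it, then consolidation removes it
    return evictable

nodes = {
    "node-a": {"cpu_used": 0.5, "cpu_cap": 4, "mem_used": 1, "mem_cap": 16},
    "node-b": {"cpu_used": 3.0, "cpu_cap": 4, "mem_used": 10, "mem_cap": 16},
    "node-c": {"cpu_used": 2.0, "cpu_cap": 4, "mem_used": 6, "mem_cap": 16},
}
print(find_evictable_nodes(nodes))  # ['node-a']
```

The second check is what makes this safe: an underutilized node is only drained when its pods provably fit elsewhere, so eviction never creates new Pending pods.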

Commitment Utilization Guarantee

One of Karpenter’s limitations is its lack of awareness of cloud commitment-based discounts, which can lead to over-provisioning of on-demand or spot nodes — resulting in higher costs. Cluster Orchestrator integrates with Harness’s Commitment Orchestrator to ensure that reserved capacity is fully utilized before provisioning additional nodes, optimizing cloud spending.

By addressing Karpenter’s limitations, Harness CCM Cluster Orchestrator delivers a more cost-efficient, automated, and workload-aware node scaling solution for Kubernetes clusters.

Conclusion

Harness CCM Cluster Orchestrator builds upon Karpenter to deliver a smarter, more efficient, and cost-aware autoscaling solution for Kubernetes. By providing seamless spot orchestration, dynamic workload splitting, enhanced bin packing, and commitment-aware scaling, it eliminates the inefficiencies of both Cluster Autoscaler and Karpenter. Unlike traditional autoscalers, it doesn’t just react to pending pods — it proactively optimizes resource utilization, ensuring lower costs and higher efficiency with minimal manual intervention. For organizations looking to maximize cost savings, improve reliability, and simplify cluster scaling, Cluster Orchestrator is the superior choice over standalone Karpenter or Cluster Autoscaler.

Riyas P

Riyas P is a polyglot software engineer focused on building thoughtful, reliable, and impactful products. He has hands-on experience across Python, Go, Node.js, Java, React, and DevOps tools like Docker and Kubernetes. Passionate about developer experience and scalable systems, he enjoys solving meaningful problems beyond just writing code — always driven by purpose, empathy, and continuous learning.
