Understanding Kubernetes Cluster Autoscaler: Features, Limitations and Alternatives - Spot.io


There are different tools and mechanisms for scaling applications and provisioning resources in Kubernetes. Kubernetes's native horizontal and vertical pod autoscaling (HPA and VPA) handle scaling at the application level. However, when it comes to the infrastructure layer, Kubernetes does not scale nodes itself; instead, separate tools and mechanisms handle scaling of resources at the infrastructure level.

In this article, we'll explore two solutions that address infrastructure scaling automation for Kubernetes: the open source Cluster Autoscaler and Ocean by Spot.

Cluster Autoscaler overview and considerations

Cluster Autoscaler is an open-source project that automatically scales a Kubernetes cluster based on the scheduling status of pods and the resource utilization of nodes. If several pods are unschedulable because of insufficient resources, Cluster Autoscaler automatically adds nodes to the cluster using your cloud provider's auto scaling capabilities, for example Auto Scaling Groups (ASGs) and Spot Fleet on AWS, or the equivalent services on Microsoft Azure and Google Cloud.
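On AWS, for example, Cluster Autoscaler is typically pointed at its node groups via ASG tags and the upstream auto-discovery flag. A minimal sketch of the relevant container arguments (the cluster name "my-cluster" is a placeholder; see the upstream project for the full flag list):

```yaml
# Fragment of a Cluster Autoscaler Deployment spec (AWS)
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --balance-similar-node-groups
  # Discover any ASG carrying both tags below, instead of listing ASGs by name:
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
```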

Despite this simple approach to auto scaling, configuring Cluster Autoscaler for optimal use is complex. As a DIY solution, users need a good understanding of their pods' and containers' needs, and must be aware of the limitations (and related consequences) of Cluster Autoscaler:

  • Overprovisioning is common because Cluster Autoscaler looks at defined resource requests and limits, not at actual resource usage
  • Flexibility is limited: although mixed instance types can be used in a node group, the instances must have the same capacity (CPU and memory)
  • Customers that want to leverage different kinds of compute must manage multiple node pools, which is complex
  • With no fallback to on-demand instances, it cannot be used with spot instances without creating performance and availability risks
  • Auto Scaling Groups must be managed independently by the user
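To see why the first limitation matters, consider a container whose request is far above its typical usage; Cluster Autoscaler provisions (and keeps) node capacity for the request. A minimal sketch, with hypothetical names and numbers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oversized-request   # hypothetical example
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "2"          # Cluster Autoscaler sizes nodes for this request...
          memory: 4Gi
        # ...even if the container typically uses ~100m CPU / 256Mi,
        # so the extra capacity is paid for but never used.
```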

Default Solution vs Hands-off approach

These limitations of Cluster Autoscaler mean that, while it’s a good initial solution, it’s not always the best fit, especially when users are looking for strategies to take a more hands-off approach to infrastructure and reduce the cost of their cloud operations. 

What is Ocean and how does it work?

To help simplify and further automate infrastructure scaling for Kubernetes users, Spot offers Ocean, a fully managed data plane service that provides a serverless infrastructure engine for running containers.


Pod-driven infrastructure auto scaling 

Leveraging pod-driven auto scaling, Ocean dynamically allocates compute infrastructure based on container requirements declared in their YAML manifests, such as memory, CPU, disk and networking constraints. It is designed so that pods and workloads can take advantage of the underlying capabilities of cloud compute infrastructure, such as pricing model, lifecycle, performance and availability, without having to know anything about them.

Ocean does this by automating and optimizing infrastructure at three layers.

Ocean provides users with a number of features that enhance their ability to effectively and efficiently manage their container cluster resources, including the following:

  • With out-of-the-box nodes of varying types and sizes, users don’t have to configure or maintain individual scaling groups
  • Ocean dynamically scales infrastructure and allocates the best fit of instances based on scale, shape of pods and any labels, taints or tolerations
  • Events are monitored at the Kubernetes API server, affording levels of visibility and flexibility that can’t otherwise be achieved, ensuring dependable performance and fast scalability
  • Ocean maintains a scoring model for compute capacity markets to significantly reduce interruptions and efficiently leverage cloud pricing models (spot instance, on-demand, and reserved instances) for up to a 90% cost reduction
  • Ocean reduces waste by packing containers more efficiently, improving bin packing by 30-40%
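As an illustration of the second point, a pod that pins itself to specialized nodes via a nodeSelector and toleration gives Ocean enough information to launch a matching instance type. A sketch with hypothetical label and taint keys:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job              # hypothetical example
spec:
  nodeSelector:
    accelerator: nvidia      # hypothetical node label
  tolerations:
    - key: dedicated         # hypothetical taint key
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: train
      image: my-training-image   # placeholder
      resources:
        limits:
          nvidia.com/gpu: 1
```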

Comparing Ocean and Cluster Autoscaler

To help you make an informed decision between Ocean and Cluster Autoscaler, we have cataloged some architectural differences between the two, focusing on the common use case of Cluster Autoscaler with AWS ASGs.

| Feature | Cluster Autoscaler with AWS Auto Scaling Groups | Ocean by Spot |
| --- | --- | --- |
| Mixed instance types | Limited support: only one ASG is scaled per pod request, and mixed instance types within an ASG must have the same capacity (CPU and memory). | Supported. Ocean supports mixed instance types across all families by default; instead of managing multiple node pools, you run a single Ocean. |
| Availability Zone awareness | A single Auto Scaling Group cannot span multiple Availability Zones without consideration for rebalancing; alternatively, one Auto Scaling Group can be managed per Availability Zone. | Supported. The Kubernetes cluster is managed as a single entity covering all underlying instances, across multiple configurations, irrespective of AZ. |
| Persistent Volume Claim (PVC) awareness | Node groups must be configured with an Auto Scaling Group tied to a single Availability Zone. | Supported. Ocean reads the requirements of pending pods in real time; if a pod needs a particular volume, Ocean launches the instance in the required AZ. No additional management is needed. |
| Fallback to on-demand | Neither Cluster Autoscaler nor ASG/Spot Fleet can fall back to an on-demand instance. | Supported. Ocean falls back to on-demand instances when there is a shortage in spot capacity pools. |
| Scale down and reshuffling pods | Scales down when the CPU and memory requested by the pods on a node is below 50% of its allocatable capacity and all running pods on the node can be moved to another node. Ephemeral storage is accounted for in scale-down decisions. Does not scale down pods managed by the Horizontal Pod Autoscaler. | Ocean's scale-down applies all Cluster Autoscaler considerations plus instance size for bin-packing efficiency, resulting in roughly a 30% smaller cluster footprint compared to CA. Scale-down decisions respect pod disruption budgets, with no issues scaling down pods managed by the HPA. |
| Spot interruption handling | Requires installing the aws/spot-interruption-handler DaemonSet. | Available by default in the Spot SaaS platform; no extra in-cluster tools are needed. Interruptions are predicted and managed automatically. |
| Fast, high-performance auto scaling | Supports DIY over-provisioning to deliver workload-based headroom. | Supported. Ocean automatically calculates a cluster headroom parameter so clusters always have space for incoming pods without waiting for new nodes. |
| Infrastructure management | Auto Scaling Groups must be managed independently and associated with the cluster using labels. | Ocean scales infrastructure dynamically as needed. |
| GPU support | Requires setting up and labelling additional GPU-based node pools to support different types and sizes. | Supported out of the box. |
| Node template generation | The upstream autoscaler uses an existing node as the template for each node group; only the first node in the group is selected, which may or may not be up to date. | Ocean's Launch Specification provides a predictable source of truth for the node template, with a large set of optional properties that can be updated at will. |
| podAntiAffinity | If podAntiAffinity is configured, Cluster Autoscaler scales up only one node at a time. | Ocean continues to scale up in parallel for every constraint, including podAntiAffinity, delivering immediate infrastructure scaling. |
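The pod disruption budgets mentioned in the scale-down comparison are plain Kubernetes objects. For example, this (hypothetical) budget keeps at least two replicas of a web deployment available during any voluntary eviction, including autoscaler scale-down:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb        # hypothetical example
spec:
  minAvailable: 2      # never voluntarily evict below two ready pods
  selector:
    matchLabels:
      app: web         # must match the deployment's pod labels
```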

Trying Out Ocean

Like Cluster Autoscaler, Ocean works with the container engine of your choice. To see how it works, the following guide takes you through the steps to set up Ocean on top of EKS using eksctl.

To try out Ocean on other cloud providers, use the following links:

  • Google Cloud – Ocean on GKE
  • Microsoft Azure – Ocean on AKS

Prerequisites

1. Configure your AWS credentials
To use awscli environment variables, run the following commands:

$ export AWS_ACCESS_KEY_ID=<aws_access_key>
$ export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>

2. Set up Spot.io account (sign up for free) and credentials to AWS according to Spot IAM policy
To use environment variables, run the following commands:

$ export SPOTINST_TOKEN=<spotinst_token>
$ export SPOTINST_ACCOUNT=<spotinst_account>
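Before proceeding, it can help to verify that all four variables are actually set in the current shell. A small POSIX-sh sketch (the helper name check_env is ours, not part of any tool):

```shell
# check_env VAR...: print the names of any listed variables that are unset
# or empty, and return non-zero if any are missing.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v}"
    [ -n "$val" ] || missing="$missing $v"
  done
  [ -z "$missing" ] || { echo "missing:$missing"; return 1; }
}

# Usage: fail fast before invoking eksctl.
check_env AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY SPOTINST_TOKEN SPOTINST_ACCOUNT \
  || echo "Set the variables above before continuing."
```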

3. Install eksctl with Ocean integration

$ curl -sfL https://spotinst-public.s3.amazonaws.com/integrations/kubernetes/eksctl/eksctl.sh | sh 
$ sudo mv ./bin/eksctl /usr/local/bin && rm -rf ./bin

Steps

1.  Create a yaml file to describe the desired EKS cluster and Ocean configurations. For example, “myEKSwithOcean.yaml”: 

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-4
  region: eu-north-1

vpc:
  id: "vpc-0dd338ecf29863c55"       # (optional, must match VPC ID used for each subnet below)
  cidr: "192.168.0.0/16"            # (optional, must match CIDR used by the given VPC)
  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown
    private:
      eu-north-1a:
        id: "subnet-0b2512f8c6ae9bf30"
        cidr: "192.168.128.0/19"    # (optional, must match CIDR used by the given subnet)
      eu-north-1b:
        id: "subnet-08cb9a2ed60394ce3"
        cidr: "192.168.64.0/19"     # (optional, must match CIDR used by the given subnet)
      eu-north-1c:
        id: "subnet-00f71956cdec8f1dc"
        cidr: "192.168.0.0/19"      # (optional, must match CIDR used by the given subnet)

nodeGroups:
  - name: ng-1
    spotOcean:
      strategy:
        # Percentage of Spot instances to spin up from the desired capacity.
        spotPercentage: 100

        # Allow Ocean to utilize any available reserved instances first before
        # purchasing Spot instances.
        utilizeReservedInstances: true

        # Launch On-Demand instances when no Spot instances are available.
        fallbackToOnDemand: true

      autoScaler:
        # Enable the Ocean autoscaler.
        enabled: true

        # Cooldown period between scaling actions.
        cooldown: 300

        # Spare resource capacity management enabling fast assignment of Pods
        # without waiting for new resources to launch.
        headrooms:
          # Number of CPUs to allocate. CPUs are denoted in millicores, where
          # 1000 millicores = 1 vCPU.
          - cpuPerUnit: 2

            # Number of GPUs to allocate.
            gpuPerUnit: 0

            # Amount of memory (MB) to allocate.
            memoryPerUnit: 64

            # Number of units to retain as headroom, where each unit has the
            # defined CPU and memory.
            numOfUnits: 1

      compute:
        instanceTypes:
          # Instance types allowed in the Ocean cluster. Cannot be configured
          # if the blacklist is configured.
          whitelist: # OR blacklist
            - t2.large

2. Create the cluster and Ocean

$ eksctl create cluster --name prod --nodegroup-name standard-workers --spot-ocean

Alternatively, use the yaml config file created earlier (any values you omit fall back to predefined defaults):

$ eksctl create cluster -f myEKSwithOcean.yaml

Provisioning the cluster takes about 10-15 minutes, and once it’s complete, you can migrate existing worker nodes and create new Ocean-managed node groups. 

Next Steps

With Ocean, container infrastructure management is simplified, and you can be up and running with your Kubernetes cluster in just a few minutes. Explore the Ocean documentation library to learn more about what you can do with Ocean, or get started today with a free trial.

Sign up for our free trial to quickly and easily get started with Ocean for your containerized workloads.