Kubernetes Volume Autoscaler (with Prometheus)
This repository contains a service that automatically increases the size of a Persistent Volume Claim in Kubernetes when it's nearing full. Initially engineered on AWS EKS, it should support any Kubernetes cluster or cloud provider that supports dynamically resizing storage volumes in Kubernetes.
Keeping your volumes at a minimal size can help reduce cost, but having to manually scale them up can be painful and a waste of time for a DevOps / Systems Administrator.
Requirements
- Kubernetes 1.17+ Cluster
- kubectl binary installed and set up with your cluster
- The helm 3.0+ binary
- Prometheus installed on your cluster
- Using a StorageClass with allowVolumeExpansion == true
- Using a volume provisioner which supports dynamic volume expansion
- EKS default driver on 1.17+ does
- AWS EBS CSI driver also does
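A quick way to sanity-check a few of these requirements from your terminal (a minimal sketch; the grep below simply assumes your Prometheus service name contains "prometheus", so adjust it to match your install):
# Confirm the cluster is running Kubernetes 1.17+ and that kubectl and helm are set up
kubectl version
helm version
# Confirm Prometheus is running somewhere in the cluster (assumes the service name contains "prometheus")
kubectl get svc --all-namespaces | grep -i prometheus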
Prerequisites
As mentioned above, you must have a StorageClass which supports volume expansion, and the provisioner you're using must also support volume expansion. Ideally this is "hot" volume expansion, so your services never have to restart. The AWS EKS built-in provisioner kubernetes.io/aws-ebs supports this, and so does the ebs.csi.aws.com CSI driver. To check/enable this...
# First, check if your storage class supports volume expansion...
$ kubectl get storageclasses
NAME       PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard   kubernetes.io/aws-ebs   Delete          Immediate           false                  10d
# If ALLOWVOLUMEEXPANSION is not set to true, patch it to enable this
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
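If you'd like to double-check that the patch took effect, the following should now print true:
# Verify the storage class now allows volume expansion
kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'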
NOTE: The above storageclass comes with EKS, however, it only supports gp2, which is a largely deprecated and much slower storage type than gp3. I HIGHLY recommend that before using EKS you install the AWS EBS CSI driver to gain gp3 support and more future-proof support for Amazon's various storage volumes and their lifecycles.
If you do this, you can/should completely remove gp2 support and, after installing the above CSI driver, create a default storageclass for the new driver with best practices baked in, including...
- Retain-ing the volume if it was deleted (to prevent accidental data loss)
- Having all disks encrypted-at-rest by default, for compliance/security
- Using gp3 by default for faster disk bandwidth and IO
# For this, simply delete your old default StorageClass
kubectl delete storageclass standard
# Then apply/create a new default gp3 using the AWS EBS CSI driver you installed
kubectl apply -f https://raw.githubusercontent.com/DevOps-Nirvana/Kubernetes-Volume-Autoscaler/master/examples/gp3-default-encrypt-retain-allowExpansion-storageclass.yaml
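To double-check the result, list your storage classes again; the new class should show ALLOWVOLUMEEXPANSION as true and be marked "(default)" next to its name:
kubectl get storageclasses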
Installation with Helm
Now that your cluster has a StorageClass which supports expansion, you can install the Volume Autoscaler.
# First, setup this repo for your helm
helm repo add devops-nirvana https://devops-nirvana.s3.amazonaws.com/helm-charts/
# Example Install 1 - Using autodiscovery, must be in the same namespace as Prometheus
helm upgrade --install volume-autoscaler devops-nirvana/volume-autoscaler \
--namespace REPLACEME_WITH_PROMETHEUS_NAMESPACE
# Example 2 - Manually setting where Prometheus is
helm upgrade --install volume-autoscaler devops-nirvana/volume-autoscaler \
--namespace ANYWHERE_DOESNT_MATTER \
--set "prometheus_url=http://prometheus-server.namespace.svc.cluster.local"
# Example 3 - Recommended usage, automatically detect Prometheus and use slack notifications
helm upgrade --install volume-autoscaler devops-nirvana/volume-autoscaler \
--namespace REPLACEME_WITH_PROMETHEUS_NAMESPACE \
--set "slack_webhook_url=https://hooks.slack.com/services/123123123/4564564564/789789789789789789" \
--set "slack_channel=my-slack-channel-name"
Advanced helm usage...
# To update your local knowledge of remote repos, you may need to do this before upgrading...
helm repo update
# To preview what changes an upgrade will make before applying it, use the helm diff plugin - https://github.com/databus23/helm-diff
helm diff upgrade volume-autoscaler --allow-unreleased devops-nirvana/volume-autoscaler \
--namespace infrastructure \
--set "slack_webhook_url=https://hooks.slack.com/services/123123123/4564564564/789789789789789789" \
--set "slack_channel=my-slack-channel-name" \
--set "prometheus_url=http://prometheus-server.infrastructure.svc.cluster.local"
# To remove the service, simply run...
helm uninstall volume-autoscaler
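To inspect what is currently deployed, for example to see which values a release was installed with, a couple of standard helm commands (using the release name and the infrastructure namespace from the example above):
helm list --namespace infrastructure
helm get values volume-autoscaler --namespace infrastructure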
(Alternate) Installation with kubectl
# This simple installation will work as long as you put this in the same namespace as Prometheus
# The default namespace this yaml is hardcoded to is `infrastructure`. If you'd like to change
# the namespace you can run the first few commands below...
# IF YOU USE `infrastructure` AS THE NAMESPACE FOR PROMETHEUS SIMPLY...
kubectl --namespace infrastructure apply -f https://devops-nirvana.s3.amazonaws.com/volume-autoscaler/volume-autoscaler-1.0.1.yaml
# OR, IF YOU NEED TO CHANGE THE NAMESPACE...
# #1: Download the yaml...
wget https://devops-nirvana.s3.amazonaws.com/volume-autoscaler/volume-autoscaler-1.0.1.yaml
# #1: Or download with curl
curl https://devops-nirvana.s3.amazonaws.com/volume-autoscaler/volume-autoscaler-1.0.1.yaml -o volume-autoscaler-1.0.1.yaml
# #2: Then replace the namespace in the yaml, swapping `infrastructure` for your Prometheus namespace...
cat volume-autoscaler-1.0.1.yaml | sed 's/"infrastructure"/"PROMETHEUS_NAMESPACE_HERE"/g' > ./to_be_applied.yaml
# #3: If you wish to have slack notifications, edit this to_be_applied.yaml and embed your webhook on the value: line for SLACK_WEBHOOK and set the SLACK_CHANNEL as well accordingly
# #4: Finally, apply it...
kubectl --namespace REPLACEME_WITH_PROMETHEUS_NAMESPACE apply -f ./to_be_applied.yaml
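After applying, you can optionally confirm the Deployment rolled out. The Deployment name volume-autoscaler below is an assumption based on the static yaml's defaults; adjust it if yours differs.
kubectl --namespace REPLACEME_WITH_PROMETHEUS_NAMESPACE rollout status deployment/volume-autoscaler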
Validation
To confirm the Volume Autoscaler is working properly, this repo includes an example you can apply to your Kubernetes cluster: a PVC, and a pod which uses that PVC and constantly fills up its disk. To do this...
# Simply run this on your terminal
kubectl apply -f https://raw.githubusercontent.com/DevOps-Nirvana/Kubernetes-Volume-Autoscaler/master/examples/simple-pod-with-pvc.yaml
Then, if you'd like to follow along, follow the logs of your Volume Autoscaler to watch it detect the full disk and scale the volume up.
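For example (the namespace and Deployment name here are assumptions based on a default install; adjust them to match yours):
# Follow the Volume Autoscaler logs
kubectl --namespace REPLACEME_WITH_PROMETHEUS_NAMESPACE logs -f deployment/volume-autoscaler
# In another terminal, watch the example PVC get resized
kubectl get pvc --watch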
Per-Volume Configuration / Annotations
This controller also supports tweaking your volume-autoscaler configuration per-PVC with annotations. The annotations supported are...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-volume-claim
  annotations:
    # Scale up when the disk is above this percentage (out of 100) full
    volume.autoscaler.kubernetes.io/scale-above-percent: "80"  # 80 is the default value
    # How many intervals must pass above scale-above-percent before triggering an autoscale action
    volume.autoscaler.kubernetes.io/scale-after-intervals: "5"  # 5 is the default value
    # How much to scale a disk up by, as a percentage of the current size.
    # Eg: If this is set to "10" and the disk is 100GB, it will scale to 110GB
    # At larger disk sizes you may want to set this on your PVCs to something smaller, like "5" or "10"
    volume.autoscaler.kubernetes.io/scale-up-percent: "50"  # 50 (percent) is the default value
    # The smallest increment to scale up by. This helps when disks are very small, and helps hit the minimum increment per provider (this is 1GB on AWS)
    volume.autoscaler.kubernetes.io/scale-up-min-increment: "1000000000"  # 1GB by default (in bytes)
    # The largest disk size this tool is ever allowed to scale up to. This is 16TB by default, because that's the limit of AWS EBS
    volume.autoscaler.kubernetes.io/scale-up-max-size: "16000000000000"  # 16TB by default (in bytes)
    # How long (in seconds) to wait before scaling this volume again. For AWS EBS this is 6 hours, which is 21600 seconds, but for good measure we add an extra 10 minutes, so 22200
    volume.autoscaler.kubernetes.io/scale-cooldown-time: "22200"
    # If you want the autoscaler to completely ignore/skip this PVC, set this to "true"
    volume.autoscaler.kubernetes.io/ignore: "false"
    # Finally, do not set this yourself; if you see it, ignore it. This is how Volume Autoscaler keeps its "state"
    volume.autoscaler.kubernetes.io/last-resized-at: "123123123"  # This will be a Unix epoch timestamp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
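These annotations can also be set on an existing PVC without editing its manifest, for example (the PVC name is the sample from above and the value is just illustrative; substitute your own):
kubectl annotate pvc sample-volume-claim volume.autoscaler.kubernetes.io/scale-up-percent="20" --overwrite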
TODO
This todo list is mostly for the Author(s), but any contributions are also welcome. Please submit an Issue for bugs or requests, or a Pull Request if you've added some code.
- Make helm chart able to customize the prometheus label selector
- Add scale up max increment
- Make logs include fuller (simplified) data about disks (max size, usage, etc.) for debugging purposes
- Add dry-run as top-level arg to easily adjust, add to examples on this README
- Push to helm repo in a Github Action and push the static yaml as well
- Add tests coverage to ensure the software works as intended moving forward
- Do some load testing to see how well this software deals with scale (100+ PVs, 500+ PVs, etc)
- Figure out what type of Memory/CPU is necessary for 500+ PVs, see above
- Add verbosity levels for print statements, to be able to quiet things down in the logs
- Generate kubernetes EVENTS (add to rbac) so everyone knows we are doing things, to be a good controller
- Add badges to the README
- Listen/watch to events of the PV/PVC to monitor and ensure the resizing happens, log and/or slack it accordingly
- Test it and add working examples of using this on other cloud providers (Azure / Google Cloud)
- Make per-PVC annotations to (re)direct Slack to different webhooks and/or different channel(s)
- Discuss what the ideal "default" amount of time before scaling should be. Currently it is 5 minutes (5 intervals of 60 seconds)