Step-by-Step Guide: In-Place Vertical Scaling for Pod-Level Resources in Kubernetes v1.36
Introduction
Kubernetes v1.36 brings a powerful enhancement: the In-Place Pod-Level Resources Vertical Scaling feature has graduated to Beta, enabled by default via the InPlacePodLevelResourcesVerticalScaling feature gate. This allows you to update the aggregate pod resource budget (.spec.resources) for a running pod without always restarting containers. This guide walks you through the process step by step, from prerequisites to verification.
What You Need
- A Kubernetes cluster running v1.36 or later (the feature gate is enabled by default).
- kubectl configured to communicate with your cluster.
- Basic understanding of pod-level resources and cgroups.
- A running pod that uses pod-level resource limits (without individual container limits) to benefit from in-place resizing.
Step-by-Step Instructions
Step 1: Verify Cluster Version and Feature Gate
Confirm your cluster is on v1.36+ and the feature is active. Run:
kubectl version

(The --short flag was removed from kubectl in v1.28; plain kubectl version prints the client and server versions.) Check that the InPlacePodLevelResourcesVerticalScaling feature gate is enabled (it is by default). Ensure your nodes' container runtime supports in-place updates through the Container Runtime Interface (CRI); most modern runtimes do.
Step 2: Define a Pod with Pod-Level Resources
Create a YAML file like shared-pool-app.yaml. The pod must define spec.resources (the pod-level limit) and omit individual container limits so containers inherit the shared pool. Example:
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-app
spec:
  resources:
    limits:
      cpu: "2"
      memory: "4Gi"
  containers:
  - name: main-app
    image: my-app:v1
    resources: {}
    resizePolicy:
    - resourceName: "cpu"
      restartPolicy: "NotRequired"
  - name: sidecar
    image: logger:v1
    resources: {}
    resizePolicy:
    - resourceName: "cpu"
      restartPolicy: "NotRequired"

Note: The resizePolicy is set per container. Currently, pod-level resizePolicy is not supported; the Kubelet defers to individual container settings.
Step 3: Apply the Pod
Deploy the pod:
kubectl apply -f shared-pool-app.yaml

Wait for the pod to become Running. Verify it has no individual container limits:
kubectl describe pod shared-pool-app | grep -A5 Limits

You should see only the pod-level limits listed under Status or Spec.
Step 4: Initiate a Resize Operation
To double the CPU pool from 2 to 4 CPUs, use the resize subresource. This is the recommended way to perform in-place vertical scaling:
kubectl patch pod shared-pool-app --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

This triggers the Kubelet to adjust the cgroup limits. The resizePolicy determines whether containers restart: NotRequired (non-disruptive) updates cgroups dynamically via CRI; RestartContainer restarts the container.
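For larger or repeatable resizes, the same patch can be kept in a file and applied with kubectl's --patch-file flag. The sketch below is illustrative (the filename resize-patch.yaml and the memory value are assumptions, not from the original guide); the field layout mirrors the inline patch above:

```yaml
# resize-patch.yaml -- hypothetical patch file doubling the pod-level
# CPU pool and raising the memory pool in one operation.
spec:
  resources:
    limits:
      cpu: "4"
      memory: "8Gi"
```

Apply it with: kubectl patch pod shared-pool-app --subresource resize --patch-file resize-patch.yaml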
Step 5: Verify the Resize
Check the pod status for a ResizedPod condition or view the new limits:
kubectl describe pod shared-pool-app | grep -i -A2 "Pod Resources"
kubectl get pod shared-pool-app -o jsonpath='{.status.podResources}'

If containers were not restarted, you should see the new CPU limit (4) applied instantly. Use kubectl top pod shared-pool-app to monitor resource usage.
Step 6: Understand ResizePolicy Behavior
The Kubelet checks each container's resizePolicy:
- NotRequired: The Kubelet attempts a non-disruptive cgroup update via CRI. If the runtime supports it, no restart occurs.
- RestartContainer: The container is restarted to apply the new pod-level boundary safely.
If a container has no resizePolicy defined, the Kubelet defaults to NotRequired.
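As an illustration, a single container can mix per-resource policies, keeping CPU resizes live while forcing a restart for memory changes. This is a sketch; whether your workload tolerates live memory resizes depends on the application and runtime:

```yaml
# Sketch: mixed per-resource resize policies on one container.
resizePolicy:
- resourceName: "cpu"
  restartPolicy: "NotRequired"       # cgroup updated live via CRI, no restart
- resourceName: "memory"
  restartPolicy: "RestartContainer"  # container restarts to apply the new memory boundary
```

A common pattern is NotRequired for CPU (safe to change live) and RestartContainer for memory, since many runtimes cannot shrink a memory limit below current usage without disruption.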
Step 7: Monitor Node Stability and Kubelet Checks
When you apply a resize patch, the Kubelet performs a sequence of checks before applying changes:
- Feasibility: Verifies the requested resources don't exceed node capacity.
- Safety: Ensures the new pod-level limits are compatible with existing containers (e.g., no negative values).
- Inheritance: For containers without individual limits, recalculates effective boundaries from the new pod-level pool.
If any check fails, the resize is rejected and the pod remains unchanged. Monitor events:
kubectl get events --field-selector involvedObject.name=shared-pool-app

Tips for Success
- Start with non-disruptive policies: Use NotRequired for critical containers that must stay alive during resizing.
- Use the resize subresource: Always use --subresource resize rather than directly patching spec.resources, so the Kubelet handles the update correctly.
- Test in a non-production environment: Verify that your container runtime supports in-place cgroup updates (most do, including containerd and CRI-O).
- Combine with Horizontal Pod Autoscaler: In-place vertical scaling complements HPA by allowing you to adjust pod-level budgets when individual container limits are not defined.
- Watch for sidecar containers: If sidecars have strict resource requirements, define individual limits to avoid unexpected sharing.
- No pod-level resizePolicy yet: Currently, you must set resizePolicy on each container. A future release may introduce a pod-level default.
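Following the sidecar tip above, a sidecar can opt out of the shared pool by declaring its own limits. This is a sketch with illustrative values; note that, per the prerequisites, containers with individual limits no longer draw from the pod-level pool:

```yaml
# Sketch: a sidecar with explicit limits, carved out of the pod-level pool.
# The main-app container (with resources: {}) still inherits the shared budget.
- name: sidecar
  image: logger:v1
  resources:
    limits:
      cpu: "250m"
      memory: "256Mi"
```

This keeps the sidecar's footprint predictable while the main container continues to benefit from in-place pod-level resizing.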
With this guide, you can effectively leverage in-place vertical scaling for pod-level resources in Kubernetes v1.36, reducing downtime and simplifying resource management.