Optimizing Go Performance: Stack vs Heap Allocation for Slices
Introduction
Go developers constantly seek ways to make their programs faster. In recent releases (Go 1.23 and 1.24), the core team has focused on reducing a major source of performance overhead: heap allocations. Every time a Go program allocates memory on the heap, a substantial amount of code must run to satisfy that request. Additionally, heap allocations increase the load on the garbage collector, which—even with modern improvements like the Green Tea collector—still introduces significant overhead. To address this, the Go team has enhanced the compiler and runtime to perform more allocations on the stack whenever possible. Stack allocations are far cheaper—sometimes completely free—and they put no pressure on the garbage collector. They are automatically reclaimed when the stack frame is popped, and they promote cache-friendly memory reuse.

The Cost of Heap Allocation
Heap allocation involves several steps: finding a suitable block of memory, potentially triggering a garbage collection cycle, and eventually freeing the memory. The allocator must manage fragmentation and maintain data structures like free lists. Even a simple allocation can cause a chain of operations that slow down a hot code path. Moreover, each heap-allocated object adds to the collector’s workload during marking and sweeping. Reducing heap allocations directly improves throughput and latency.
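One way to make this cost visible is to count heap allocations around a code path with runtime.ReadMemStats. The sketch below is illustrative only: countAllocs is a helper name invented for this example, and exact counts vary by Go version and compiler optimizations.

```go
package main

import (
	"fmt"
	"runtime"
)

// countAllocs reports how many heap objects f allocates,
// a rough way to observe allocator work on a code path.
func countAllocs(f func()) uint64 {
	var before, after runtime.MemStats
	runtime.GC() // reduce noise from pending garbage
	runtime.ReadMemStats(&before)
	f()
	runtime.ReadMemStats(&after)
	return after.Mallocs - before.Mallocs
}

func main() {
	n := countAllocs(func() {
		s := make([]byte, 0)
		for i := 0; i < 1000; i++ {
			s = append(s, byte(i))
		}
		_ = s
	})
	fmt.Println(n > 0) // repeated append growth shows up as heap allocations
}
```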
Stack Allocation Advantages
Stack allocations are fundamentally different. When a function is called, a stack frame is created, and local variables are allocated by simply adjusting the stack pointer. No separate allocator call is needed. The memory is automatically freed when the function returns, and the data is often already in the CPU cache. This makes stack allocations orders of magnitude faster than heap allocations. The Go compiler tries to prove that an object’s lifetime does not outlive its function, and if so, it allocates it on the stack. This decision is called “escape analysis.”
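A minimal illustration of escape analysis (the type and function names are invented for this example): a value whose address never leaves its function stays on the stack, while a returned pointer forces the value onto the heap. Running go build -gcflags=-m on this file prints the compiler's escape decisions.

```go
package main

import "fmt"

type point struct{ x, y int }

// sumLocal's point never outlives the call,
// so the compiler keeps it on the stack.
func sumLocal() int {
	p := point{1, 2}
	return p.x + p.y
}

// newPoint returns a pointer that outlives the stack frame;
// escape analysis moves p to the heap
// (go build -gcflags=-m reports "moved to heap: p").
func newPoint() *point {
	p := point{3, 4}
	return &p
}

func main() {
	fmt.Println(sumLocal(), newPoint().x) // 3 3
}
```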
The Slice Growth Problem
A common pattern that causes many heap allocations is building a slice by repeatedly appending elements. Consider a function that reads tasks from a channel:
func process(c chan task) {
	var tasks []task
	for t := range c {
		tasks = append(tasks, t)
	}
	processAll(tasks)
}
On the first iteration, tasks has no backing array, so append allocates a backing store of size 1 on the heap. On the second iteration, the backing store is full, so a new store of size 2 is allocated, and the old one becomes garbage. On the third iteration, a new store of size 4 is allocated. For small slices this doubling continues (1, 2, 4, 8, 16, …); once the slice grows past a threshold (256 elements in current implementations), the runtime switches to a gentler growth factor of roughly 25%. During this startup phase, the program spends a lot of time in the allocator and generates many short-lived objects that the garbage collector must later reclaim. If the slice never grows large—for example, if it processes only a few tasks—the overhead of these repeated allocations can dominate the function’s cost.
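You can watch this growth schedule directly by printing the capacity each time append replaces the backing array. The exact sequence is an implementation detail and may differ across Go versions:

```go
package main

import "fmt"

func main() {
	var s []int
	prevCap := -1
	for i := 0; i < 10; i++ {
		s = append(s, i)
		if cap(s) != prevCap { // capacity changed: a new backing array was allocated
			fmt.Printf("len=%d cap=%d (new backing array)\n", len(s), cap(s))
			prevCap = cap(s)
		}
	}
	// typically prints caps 1, 2, 4, 8, 16 for the first ten appends
}
```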
Optimizing with Preallocation
The most effective way to avoid these startup allocations is to preallocate the slice with sufficient capacity. You can do this if you know the expected number of elements, or you can guess conservatively. The make function allows you to specify both length and capacity:
func process(c chan task) {
	tasks := make([]task, 0, 100) // capacity hint
	for t := range c {
		tasks = append(tasks, t)
	}
	processAll(tasks)
}
With a capacity hint of 100, the runtime allocates a single backing array with room for 100 elements on the heap (or on the stack, if the compiler can prove the array does not escape). This eliminates the repeated allocations and the resulting garbage. For many workloads, this simple change yields a significant performance improvement. Third-party linters such as makezero (commonly run via golangci-lint) can catch a related slice-initialization mistake: appending to a slice created with a nonzero length, which silently leaves zero-valued elements at the front.
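A quick way to confirm that a capacity hint avoids reallocation is to check that cap is unchanged after filling the slice. This sketch uses int elements in place of the article's task type:

```go
package main

import "fmt"

func main() {
	tasks := make([]int, 0, 100) // one backing array, allocated up front
	base := cap(tasks)
	for i := 0; i < 100; i++ {
		tasks = append(tasks, i) // stays within capacity: no reallocation
	}
	fmt.Println(cap(tasks) == base) // true: the backing array was never replaced
}
```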
Can Slices Be Allocated on the Stack?
The Go compiler now performs more aggressive escape analysis to determine whether a slice’s backing array can be placed on the stack. For slices whose size is known at compile time, the compiler may allocate the backing array directly on the stack. For example, if you declare var buf [64]byte and then slice it with buf[:], the array stays on the stack as long as the slice does not escape. Similarly, the preallocation optimization for growing slices introduced in Go 1.24 lets the compiler allocate the backing store for append on the stack in more situations, provided it can prove the slice does not escape.
But the safest way to benefit from stack allocation is to use fixed-size arrays when the size is known and small. If you need a dynamic slice, providing a good capacity hint helps the runtime avoid repeated heap allocations.
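Putting these two recommendations together, the sketch below builds a small scratch buffer from a fixed-size array; with go build -gcflags=-m you can check that the compiler reports the array does not escape. The function and variable names are invented for this example:

```go
package main

import "fmt"

// sum appends up to n small values into a slice backed by a
// fixed-size array. The slice view never leaves the function,
// so the compiler can keep buf on the stack
// (verify with: go build -gcflags=-m, look for "does not escape").
func sum(n int) int {
	var buf [64]byte
	s := buf[:0]
	for i := 0; i < n && i < len(buf); i++ {
		s = append(s, byte(i))
	}
	total := 0
	for _, b := range s {
		total += int(b)
	}
	return total
}

func main() {
	fmt.Println(sum(4)) // 0+1+2+3 = 6
}
```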
Conclusion
Reducing heap allocations is one of the most effective ways to speed up Go programs. The Go team’s focus on stack allocation—through improved escape analysis, preallocation hints, and compiler optimizations—has made it easier to write high-performance code. By understanding how slices grow and using capacity hints appropriately, developers can eliminate unnecessary garbage and reduce GC pressure. As a rule, always consider preallocating slices when the number of elements is known, and rely on the compiler to promote allocations to the stack when safe. Future Go releases will continue to push more allocations to the stack, making your programs faster with each update.