There is a class of security vulnerabilities that doesn't appear in your logic, your algorithms, or your API design. It lives in the space between when your program finishes using a value and when that memory is actually cleared. It's quiet, it doesn't crash anything, and the compiler won't warn you about it.
This article is about that gap — specifically how the stack and heap retain sensitive values after you think you're done with them, how an attacker can extract them, and how Rust gives you the tools to close the window completely.
We'll go hands-on: first reproducing the leak, then exploiting it in a controlled setting, then applying the correct mitigations with working code. The context throughout is cryptographic key material, which is the highest-stakes case — but the same principles apply to passwords, tokens, seeds, and any other secret your program handles.
The Fundamental Problem: Drop Does Not Mean Zero
When a variable goes out of scope in Rust, its Drop implementation (if any) runs and the memory is reclaimed: heap allocations are returned to the allocator, stack slots become available for reuse. But reclaimed is not the same as erased. The bytes that used to represent your private key are still sitting in memory, unchanged, until something else happens to write over them.
On the stack, this happens predictably. A stack frame is just a contiguous region of memory carved out by moving the stack pointer. When the frame is popped, the pointer moves back. The old values are still there — they're just "below" the stack pointer and considered available for the next allocation.
Let's prove this empirically.
Demo 1: Observing Residual Key Material
fn generate_and_drop_key() {
// Simulate a 32-byte secret key
let key: [u8; 32] = [
0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xBA, 0xBE,
0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
];
println!("Key in use at: {:p}", key.as_ptr());
// key goes out of scope here - dropped but NOT zeroed
}
fn observe_stack() {
// Allocate a new array in the same region the key just occupied
let stack_region: [u8; 32] = unsafe {
// Read whatever happens to be in this stack slot
// In a real exploit this is done via a buffer over-read or
// a use-after-free vulnerability
std::mem::MaybeUninit::<[u8; 32]>::uninit().assume_init()
};
print!("Stack region after key drop: ");
for byte in &stack_region {
print!("{:02x} ", byte);
}
println!();
}
fn main() {
generate_and_drop_key();
observe_stack();
}
Run this without optimizations (a debug build); in release mode the optimizer may keep the key in registers or reuse the stack slot differently. In many cases, especially when neither function is inlined, you will see the exact bytes of the key still present in the stack region. The output will look something like:
Stack region after key drop: de ad be ef ca fe ba be 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10 11 22 33 44 55 66 77 88
The key is gone from Rust's perspective. From the hardware's perspective, it's exactly where it was.
Note: the read above is technically undefined behavior, because we're observing uninitialized memory. A real adversary doesn't need to rely on UB: they use a concrete read primitive (a buffer over-read, an out-of-bounds access in unsafe code, a crash dump, a side channel) to reach the same residual data.
Demo 2: A Realistic Attack Surface — Use-After-Free in an Unsafe Context
Residual stack data is interesting, but a more realistic attack surface is when an allocator reuses heap memory that previously held key material. This happens in any program that allocates and deallocates Vec<u8> buffers for cryptographic operations.
use std::alloc::{alloc, dealloc, Layout};
fn sign_and_free() -> *mut u8 {
let layout = Layout::array::<u8>(64).unwrap();
unsafe {
let ptr = alloc(layout);
assert!(!ptr.is_null(), "allocation failed");
// Simulate writing a secret key into heap-allocated memory
let key_data: [u8; 64] = [0x42u8; 64]; // 64 bytes of "secret" key material
std::ptr::copy_nonoverlapping(key_data.as_ptr(), ptr, 64);
// Sign something, use the key...
println!("Key loaded at heap address: {:p}", ptr);
// Deallocate - common mistake: no zeroing before free
dealloc(ptr, layout);
ptr // Return the now-dangling pointer to observe residual data
}
}
fn allocate_in_same_region() -> Vec<u8> {
    // Request a same-sized buffer: the allocator will likely hand back the
    // block just freed. with_capacity leaves the bytes uninitialized, so
    // whatever was there before is still there (vec![0u8; 64] would wipe
    // the region with zeros and hide the leak).
    Vec::with_capacity(64)
}
fn main() {
let dangling_ptr = sign_and_free();
let _new_buffer = allocate_in_same_region();
// In a heap inspection attack (e.g., after a process crash dumps memory,
// or via a controlled heap spray), the attacker reads the new_buffer's
// backing memory at the address that used to hold key material.
// The allocator hasn't zeroed it. The key is still there.
unsafe {
print!("Heap region contents: ");
for i in 0..64 {
print!("{:02x} ", *dangling_ptr.add(i));
}
println!();
}
}
This pattern is not theoretical. It shows up in:
- Crash dumps: A process crash writes all memory to disk. Any secret that was allocated and freed without zeroing is on disk now.
- Memory inspection by co-tenant processes: In containerized environments without memory isolation, a privileged co-tenant can inspect /proc/<pid>/mem.
- Cold boot attacks: Physical access plus freezing RAM preserves contents after power-off. DRAM retains data for seconds to minutes at room temperature, much longer when cooled.
- Swap files: The OS can page out heap memory containing your unzeroed keys to disk, where they persist indefinitely.
Demo 3: Timing Oracles — The Subtler Leak
Memory retention is not the only way sensitive data leaks through operations. Comparison operations that short-circuit are a second major attack surface.
Consider MAC verification:
fn verify_mac_insecure(expected: &[u8], computed: &[u8]) -> bool {
// This is the most natural way to write this in Rust.
// It is also wrong for cryptographic use.
expected == computed
}
The == operator on byte slices returns false as soon as it finds the first differing byte. This means the function returns faster when the first byte is wrong than when the first 31 bytes are right and only the last byte differs.
An attacker who can make many verification requests and measure response times can use this timing difference to recover the expected MAC byte by byte — a timing oracle attack. The complexity is O(256 * N) guesses instead of O(256^N). For a 32-byte MAC, that's 8,192 requests instead of 2²⁵⁶.
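The byte-at-a-time recovery can be simulated deterministically by replacing wall-clock time with an instrumented comparator that reports how many bytes it inspected before short-circuiting. This is an illustrative sketch, not production attack code; `leaky_eq` and the 4-byte secret are made up for the demo:

```rust
// Instrumented short-circuiting comparison: returns equality plus the
// number of bytes inspected — a deterministic stand-in for timing.
fn leaky_eq(secret: &[u8], guess: &[u8]) -> (bool, usize) {
    let mut inspected = 0;
    for (s, g) in secret.iter().zip(guess.iter()) {
        inspected += 1;
        if s != g {
            return (false, inspected);
        }
    }
    (secret.len() == guess.len(), inspected)
}

fn main() {
    let secret = [0xAAu8, 0x17, 0x42, 0x05];
    let mut recovered = vec![0u8; secret.len()];
    // For each position, the candidate that pushes the comparator one
    // byte deeper has the correct prefix: 256 * N guesses in total.
    for pos in 0..secret.len() {
        let mut best = (0u8, 0usize);
        for candidate in 0..=255u8 {
            recovered[pos] = candidate;
            let (eq, inspected) = leaky_eq(&secret, &recovered);
            if eq {
                best = (candidate, inspected);
                break;
            }
            if inspected > best.1 {
                best = (candidate, inspected);
            }
        }
        recovered[pos] = best.0;
    }
    assert_eq!(recovered, secret);
    println!("recovered secret: {:02x?}", recovered);
}
```

With real timing, each "inspected count" becomes thousands of averaged response-time measurements, but the recovery loop is the same.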
Here's a controlled demonstration:
use std::time::{Duration, Instant};
fn verify_timing_leak(expected: &[u8], guess: &[u8]) -> (bool, Duration) {
let start = Instant::now();
let result = expected == guess;
let elapsed = start.elapsed();
(result, elapsed)
}
fn main() {
let secret_mac = vec![0xAAu8; 32];
// Guess with correct first byte, wrong rest
let mostly_wrong = {
let mut g = vec![0x00u8; 32];
g[0] = 0xAA; // First byte correct
g
};
// Guess with all wrong bytes
let all_wrong = vec![0x00u8; 32];
// In practice you'd average thousands of measurements.
// Even here you'll often see a nanosecond difference.
let (_, t1) = verify_timing_leak(&secret_mac, &mostly_wrong);
let (_, t2) = verify_timing_leak(&secret_mac, &all_wrong);
println!("Correct first byte: {:?}", t1);
println!("All wrong: {:?}", t2);
println!("Difference: {:?}", t1.checked_sub(t2));
}
This timing difference is usually single-digit nanoseconds, which is too small to measure reliably over a network. But with local access, or over a very low-latency connection, it's been demonstrated successfully in practice — including against real-world TLS implementations.
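The fix is a comparison whose running time does not depend on where the inputs first differ. In production code you would reach for the subtle crate's ConstantTimeEq; as a dependency-free sketch of the underlying idea, the XOR-accumulate pattern looks like this:

```rust
// Constant-time equality sketch: XOR every byte pair and OR the results
// into an accumulator. The loop always runs to the end, so execution
// time does not depend on the position of the first difference.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is usually public (e.g. a fixed-size MAC)
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    let expected = [0xAAu8; 32];
    let mut guess = [0xAAu8; 32];
    assert!(ct_eq(&expected, &guess));
    guess[31] = 0x00;
    assert!(!ct_eq(&expected, &guess));
    println!("comparison result is independent of mismatch position");
}
```

A hand-rolled version like this is a sketch only: a sufficiently aggressive optimizer could in principle reintroduce an early exit, which is why audited implementations such as subtle exist.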
Prevention Part 1: Zeroizing Memory with the zeroize Crate
The zeroize crate is the standard Rust solution for explicit memory erasure. It provides guaranteed zeroing that the compiler cannot optimize away — which is a critical distinction.
The naive approach:
// WRONG: the compiler may eliminate this as a "dead store"
// since the value is never read after writing
fn bad_zero(key: &mut [u8]) {
for b in key.iter_mut() {
*b = 0;
}
// Compiler sees: this write is followed by no read → dead store → can be removed
}
Modern compilers are very good at eliminating dead stores. If the key is never read after you zero it, the zeroing write has no observable effect on program behavior — so the optimizer removes it. Your zeroing code disappears at -O2.
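To make the distinction concrete, here is a minimal std-only sketch of the volatile-write approach that survives dead-store elimination. It illustrates the mechanism zeroize relies on; it is not the crate's actual source:

```rust
use std::sync::atomic::{compiler_fence, Ordering};

// Volatile stores are defined to have observable side effects, so the
// optimizer must emit every one of them even though nothing reads the
// buffer afterwards. The fence keeps later code from being reordered
// ahead of the wipe.
fn secure_zero(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { std::ptr::write_volatile(b, 0) };
    }
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut key = [0x42u8; 32];
    secure_zero(&mut key);
    assert!(key.iter().all(|&b| b == 0));
    println!("buffer wiped; the stores cannot be elided");
}
```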
The zeroize crate prevents this by performing the stores with core::ptr::write_volatile and following them with a compiler fence. Volatile writes are defined to be observable side effects, so the optimizer cannot prove them dead and must emit them on every target:
use zeroize::Zeroize;
fn sign_message(key_bytes: &[u8], message: &[u8]) -> Vec<u8> {
let mut working_key = key_bytes.to_vec();
// ... perform signing operation using working_key ...
let signature = simulate_signing(&working_key, message);
// Explicitly zero before drop
// This CANNOT be optimized away
working_key.zeroize();
signature
}
fn simulate_signing(_key: &[u8], message: &[u8]) -> Vec<u8> {
// Placeholder
message.to_vec()
}
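One caveat with growable buffers: zeroize() wipes only the allocation the Vec owns at that moment. If the Vec reallocated while it held secret bytes, the earlier, smaller allocation was already freed without being cleared. A std-only sketch of the hazard (the 0xAA bytes stand in for key material):

```rust
fn main() {
    // Start with a small buffer holding "secret" bytes.
    let mut buf: Vec<u8> = Vec::with_capacity(4);
    buf.extend_from_slice(&[0xAAu8; 4]);
    let old_ptr = buf.as_ptr();

    // Growing past capacity typically moves the buffer; the old 4-byte
    // allocation is then freed by the allocator without being zeroed.
    buf.extend_from_slice(&[0xBBu8; 64]);
    if buf.as_ptr() != old_ptr {
        println!("Vec reallocated; the old copy of the secret was freed unwiped");
    }

    // Zeroing here (or calling zeroize()) only touches the current allocation.
    for b in buf.iter_mut() {
        *b = 0;
    }
}
```

The practical rule: size secret buffers up front (Vec::with_capacity or a fixed-length array) so they never reallocate while holding key material.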
The ZeroizeOnDrop Derive Macro
For types that own secret data, ZeroizeOnDrop implements Drop automatically, making the zeroing impossible to forget:
use zeroize::{Zeroize, ZeroizeOnDrop};
#[derive(Zeroize, ZeroizeOnDrop)]
struct SigningKey {
key_bytes: Vec<u8>,
// All fields are zeroed when this struct drops,
// even if the drop happens due to a panic
}
impl SigningKey {
fn new(bytes: Vec<u8>) -> Self {
assert_eq!(bytes.len(), 32, "Key must be exactly 32 bytes");
Self { key_bytes: bytes }
}
fn sign(&self, message: &[u8]) -> Vec<u8> {
// use self.key_bytes...
message.to_vec() // placeholder
}
}
fn main() {
let raw_key = vec![0x42u8; 32];
{
let signing_key = SigningKey::new(raw_key);
let _sig = signing_key.sign(b"important message");
// signing_key drops here - key_bytes is zeroed before deallocation
}
// At this point, no trace of the key material remains in
// the heap memory that backed key_bytes.
}

