If you've been writing Rust for more than a week, you've almost certainly reached for .iter().map().filter().collect(). But do you actually know what's happening under the hood when you write that chain? And do you know how to use iterator adapters to write code that is not just readable, but genuinely composable and, in many cases, faster than the equivalent imperative loop?
This post is a deep dive into Rust's iterator system — what it is, how the compiler thinks about it, and how to wield it effectively.
## The Iterator Trait: Where Everything Starts
At its core, Rust's entire iterator system is built on one remarkably minimal trait:
```rust
pub trait Iterator {
    type Item;

    fn next(&mut self) -> Option<Self::Item>;

    // Dozens of provided methods follow...
}
```
That's it. You define Item and implement next, and you get access to the entire standard library's iterator adapter machinery for free. Every map, filter, flat_map, take, skip, zip, chain, fold, and collect is a provided method defined in terms of next.
This design is not just elegant — it has profound performance implications, which we'll get to shortly.
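To see how little machinery the trait really needs, here's a sketch of how a provided method like `fold` can be written purely in terms of `next` — `my_fold` is a stand-in for illustration, not the stdlib's actual implementation:

```rust
fn my_fold<I, B, F>(mut iter: I, init: B, mut f: F) -> B
where
    I: Iterator,
    F: FnMut(B, I::Item) -> B,
{
    let mut acc = init;
    // Drive the iterator manually; every provided method ultimately
    // boils down to a loop like this over `next`.
    while let Some(x) = iter.next() {
        acc = f(acc, x);
    }
    acc
}

fn main() {
    let sum = my_fold([1, 2, 3, 4].into_iter(), 0, |acc, x| acc + x);
    assert_eq!(sum, 10);
}
```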
## What Is an Iterator Chain?
An iterator chain is a sequence of iterator adapters stacked on top of a source iterator. Each adapter wraps the previous one, forming a lazy pipeline.
```rust
let data = vec![1u32, 2, 3, 4, 5, 6, 7, 8, 9, 10];

let result: Vec<u32> = data
    .iter()
    .filter(|&&x| x % 2 == 0) // keep even numbers
    .map(|&x| x * x)          // square them
    .take(3)                  // take the first 3
    .collect();

assert_eq!(result, vec![4, 16, 36]);
```
The key insight: nothing happens until collect() is called. The filter, map, and take calls each return a new struct that describes a transformation. Only when a consuming method (collect, for_each, fold, sum, any, all, etc.) drives the pipeline does any computation occur.
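The laziness is easy to observe directly. In this small sketch, a side-effecting closure inside map fires only when a consumer finally runs the pipeline:

```rust
fn main() {
    let data = vec![1u32, 2, 3];

    // Building the chain executes nothing: `map` just wraps the
    // source iterator and stores the closure.
    let lazy = data.iter().map(|&x| {
        println!("mapping {x}");
        x * 2
    });

    println!("chain built, nothing mapped yet");

    // Only now, when `collect` drives the pipeline, does the
    // closure run -- once per element.
    let doubled: Vec<u32> = lazy.collect();
    assert_eq!(doubled, vec![2, 4, 6]);
}
```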
## Under the Hood: Monomorphization and Zero-Cost Abstraction
Let's look at what the compiler actually generates. When you write:
```rust
fn process(v: &[u32]) -> Vec<u32> {
    v.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .collect()
}
```
The type of the chain before collect is something like:
```
Map<Filter<std::slice::Iter<'_, u32>, [closure]>, [closure]>
```
Each adapter is a concrete struct. filter returns a Filter<I, P> where I is the upstream iterator and P is the predicate. map returns a Map<I, F>. These types are fully resolved at compile time through monomorphization — the compiler generates specialized machine code for this exact combination of types and closures.
Here's a simplified view of how Map is implemented internally:
```rust
pub struct Map<I, F> {
    iter: I,
    f: F,
}

impl<B, I: Iterator, F: FnMut(I::Item) -> B> Iterator for Map<I, F> {
    type Item = B;

    fn next(&mut self) -> Option<B> {
        self.iter.next().map(|x| (self.f)(x))
    }
}
```
Because F is a type parameter (not a trait object), the compiler can inline the closure body directly into next. Because next is called in a loop by collect, the optimizer can further inline the entire chain into a single tight loop — often producing code that is identical to what you'd write by hand.
This is what Rust means by zero-cost abstraction: you pay no runtime cost for the layering. The abstractions dissolve at compile time.
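As a mental model, the optimized result of the process chain above is roughly this hand-written loop — a sketch of what the inliner tends to produce, not literal compiler output:

```rust
// The filter and map closures are inlined into a single pass over
// the slice; no intermediate iterator structs survive to runtime.
fn process_by_hand(v: &[u32]) -> Vec<u32> {
    let mut out = Vec::new();
    for &x in v {
        if x % 2 == 0 {
            out.push(x * x);
        }
    }
    out
}

fn main() {
    assert_eq!(process_by_hand(&[1, 2, 3, 4, 5]), vec![4, 16]);
}
```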
## Building Your Own Iterator
Understanding the trait means you can build custom iterators that plug seamlessly into the rest of the ecosystem. Here's a classic example — a Chunks iterator that yields fixed-size, non-overlapping chunks of a slice as borrowed subslices:
```rust
struct Chunks<'a, T> {
    slice: &'a [T],
    size: usize,
}

impl<'a, T> Chunks<'a, T> {
    fn new(slice: &'a [T], size: usize) -> Self {
        assert!(size > 0, "chunk size must be non-zero");
        Chunks { slice, size }
    }
}

impl<'a, T> Iterator for Chunks<'a, T> {
    type Item = &'a [T];

    fn next(&mut self) -> Option<Self::Item> {
        if self.slice.is_empty() {
            return None;
        }
        let end = self.size.min(self.slice.len());
        let chunk = &self.slice[..end];
        self.slice = &self.slice[end..];
        Some(chunk)
    }
}

// Usage — composes perfectly with the rest of the stdlib
fn main() {
    let data = vec![1, 2, 3, 4, 5, 6, 7];
    let sums: Vec<i32> = Chunks::new(&data, 3)
        .map(|chunk| chunk.iter().sum())
        .collect();
    println!("{:?}", sums); // [6, 15, 7]
}
```
Your custom type participates in the full iterator protocol. That means you can call .enumerate(), .zip(), .flat_map(), .peekable(), or anything else on it for free.
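One optional refinement worth knowing about: overriding `size_hint` lets consumers like `collect` preallocate exactly the right capacity. Here's a sketch using a hypothetical `Countdown` iterator (not from the post above) whose length is known up front:

```rust
// Counts down from an initial value to 1.
struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            let current = self.0;
            self.0 -= 1;
            Some(current)
        }
    }

    // Exact lower and upper bounds: `collect` into a Vec can
    // allocate once instead of growing incrementally.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.0 as usize, Some(self.0 as usize))
    }
}

fn main() {
    let v: Vec<u32> = Countdown(3).collect();
    assert_eq!(v, vec![3, 2, 1]);
}
```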
## Composability in Practice
The real power of iterator chains is how naturally they compose across function boundaries. Consider a data-processing pipeline for a simplified order book scenario:
```rust
#[derive(Debug, Clone)]
struct Order {
    id: u64,
    price: f64,
    quantity: u64,
    side: Side,
}

#[derive(Debug, Clone, PartialEq)]
enum Side { Buy, Sell }

fn best_bids(orders: &[Order], top_n: usize) -> Vec<(u64, f64)> {
    let mut bids: Vec<&Order> = orders
        .iter()
        .filter(|o| o.side == Side::Buy)
        .collect();

    // Sort descending by price (best bid = highest price).
    // `total_cmp` imposes a total order on f64, so this can't panic
    // on NaN the way `partial_cmp().unwrap()` would.
    bids.sort_unstable_by(|a, b| b.price.total_cmp(&a.price));

    bids.iter()
        .take(top_n)
        .map(|o| (o.id, o.price))
        .collect()
}

fn total_ask_volume(orders: &[Order], price_limit: f64) -> u64 {
    orders
        .iter()
        .filter(|o| o.side == Side::Sell && o.price <= price_limit)
        .map(|o| o.quantity)
        .sum()
}
```
Both functions are self-contained, readable, and compose cleanly. Apart from the mut binding that sorting requires, neither needs a loop variable, an index, or a hand-rolled accumulator — the iterator chain is the logic.
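Composition goes one step further when a function returns an iterator instead of a collected Vec: `impl Trait` in return position keeps the pipeline lazy across the function boundary. A small sketch (the `evens` helper here is hypothetical, not from the order-book example):

```rust
// Returning `impl Iterator` hands the caller a still-lazy pipeline:
// no allocation happens inside this function.
fn evens(v: &[u32]) -> impl Iterator<Item = u32> + '_ {
    v.iter().copied().filter(|x| x % 2 == 0)
}

fn main() {
    let data = vec![1, 2, 3, 4, 5, 6];

    // The caller keeps stacking adapters on the returned iterator,
    // with no intermediate Vec in between.
    let squares: Vec<u32> = evens(&data).map(|x| x * x).take(2).collect();
    assert_eq!(squares, vec![4, 16]);
}
```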


