The paper chase
The previous lesson's array had an answer for "where is element 999,999?" — a single multiplication. A linked list has no such answer. To find the seventh node, you read the first, which tells you where the second lives, which tells you where the third lives, and so on down the chain. It's a paper chase: every clue waits at the previous stop. This sounds like a worse data structure, and most of the time it is — but the moment you already hold a clue, splicing in a new one is two pointer rewrites, no matter how long the chain.
A node is a value plus a forwarding address
A linked-list node is two fields: a value, and the address of the next node. Nothing else. The list as a whole is a single pointer — head — that knows where the first node lives. Everything else is reached by following arrows.
type Node = { value: number; next: Node | null };
type List = { head: Node | null; size: number };
There is no slot 0, slot 1, slot 2. There is no formula. The second node sits wherever the heap allocator put it when you called new, and the only thing in the universe that knows that address is the first node's next field.
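To make the wiring concrete, here is a hand-built two-node chain using the types above; the values 10 and 20 are arbitrary:
const second: Node = { value: 20, next: null };  // lives wherever the allocator put it
const first: Node = { value: 10, next: second }; // first.next is the list's only route to second
const list: List = { head: first, size: 2 };     // the list itself only remembers where first is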
Operations
Empty list. `head` points at nothing.
function construct(): List {
  return { head: null, size: 0 };
}
State after construct: size 0, hops 0, writes 0.
The visualiser above starts empty and walks through the operations one at a time. Drive prepend a few times and watch what happens: a new node materialises, two arrows redraw, and head repoints. Two pointer writes. The list could be empty or it could be a million nodes long — the cost of prepend is the same.
We just took 3 hops to read index 2. If the list had 1,000,000 nodes and we asked for index 999,999, how many `cur = cur.next` hops would the CPU need?
If you said 999,999, you've already absorbed the punchline. There is no shortcut. Every index becomes a walk, and the walk costs one pointer dereference per node. Indexing in a linked list is not arithmetic; it is a literal traversal of the chain.
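A minimal sketch of what that walk looks like in code, using the Node and List types above; the name getAt is mine, not the visualiser's:
function getAt(list: List, index: number): Node | null {
  let cur = list.head;
  for (let hops = 0; hops < index && cur !== null; hops++) {
    cur = cur.next; // one dependent pointer dereference per hop
  }
  return cur; // null if the index runs past the end of the chain
}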
This is the trade. We gave up the address-arithmetic trick from arrays. In exchange, we got a structure where rearrangement is cheap — if you already hold the right pointer.
The list currently reads `20 → 10 → null` with size 2. Which of these matches the list immediately after `prepend(30)` completes?
prepend looks like this in code:
function prepend(list: List, value: number) {
  const node = { value, next: list.head }; // write 1: new node points at the old head
  list.head = node;                        // write 2: head repoints at the new node
  list.size++;
}
Two pointer writes. The first wires the new node to whatever the head used to be — preserving the rest of the list. The second repoints head itself. Notice there is no walk anywhere. The new node never had to "find its place." That's the property linked lists are actually good at.
The textbook's favourite half-truth
Open any data-structures textbook and you'll find some variation of the claim "linked-list insertion is O(1)." That sentence is doing more work than it admits. Watch what happens when we ask the visualiser for insertAfter(node, 99) versus insertAt(2, 77). Both end with the same number of pointer rewrites. They differ only in how we got to the node where the rewires happen.
insertAfter is given the predecessor pointer directly. No walk. The work is two assignments, the cost is constant, and the textbook's claim is honest.
insertAt(2, 77) is given an index. To find the node before index 2, the function walks two hops from head. Then it does the same two assignments. The walk dominates the cost — and the walk is O(n) in the worst case.
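Here is one way the two shapes might look against the types above; the exact signatures are my sketch, not the visualiser's API:
function insertAfter(list: List, prev: Node, value: number) {
  prev.next = { value, next: prev.next }; // two pointer writes, zero hops
  list.size++;
}

function insertAt(list: List, index: number, value: number) {
  if (index === 0) {
    list.head = { value, next: list.head }; // front insertion is just prepend
    list.size++;
    return;
  }
  let cur = list.head;
  for (let k = 0; k < index - 1 && cur !== null; k++) cur = cur.next; // the walk to the predecessor
  if (cur === null) throw new RangeError("index out of bounds");
  insertAfter(list, cur, value); // then the same two writes
}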
A textbook says "linked-list insertion is O(1)." Looking at the two insertions you just saw, when is that claim actually true?
Most linked-list code in production uses the insertAfter shape, not the insertAt shape. An LRU cache holds node pointers in a hash map and splices them around. A free-list allocator pulls and pushes nodes at the head, where it already has the pointer. The kernel's task list passes node pointers around, never indices. The "O(1) insertion" claim is real — but only when the position is something you already have, never something you ask for.
Starting list: `A → B → C → D → null`. Apply these in order: `insertAfter(B, X)`, then `deleteAt(0)`, then `prepend(Y)`. What is the final list, head to tail? (Use `→` between values, e.g. `A → B → null`.)
If you traced that one wrong, here is the walk-through: `insertAfter(B, X)` gives `A → B → X → C → D`; `deleteAt(0)` then removes A, the head of the current list, giving `B → X → C → D`; `prepend(Y)` gives `Y → B → X → C → D → null`. The most common stumble is misreading deleteAt(0): it removes whatever node is the head at the moment it runs. Indices are interpreted against the current list state, not the original.
Reversing in place: the canonical trick
Reversing a linked list is the single most common interview problem in any language, and the reason is that it is the cleanest demonstration of why pointer manipulation is its own skill. The structure looks deceptively simple — flip every next arrow — but a naive attempt loses the rest of the list the moment you flip the first arrow.
function reverse(list: List) {
  let prev: Node | null = null, cur = list.head;
  while (cur !== null) {
    const next = cur.next; // save the rest of the list
    cur.next = prev;       // flip
    prev = cur;            // advance prev
    cur = next;            // advance cur
  }
  list.head = prev;
}
The const next = cur.next line is the line that earns the algorithm. Without it, the moment we write cur.next = prev, we have severed our only path to the rest of the chain. Three cursors, four lines per iteration, O(n) time, O(1) extra space. It is one of the few interview problems where every pointer in the loop matters.
Drag the four operations into the correct order for a single iteration of the reverse loop, given `prev`, `cur`, and a reference `next`.
- 1. `const next = cur.next`
- 2. `cur.next = prev`
- 3. `prev = cur`
- 4. `cur = next`
The order is non-negotiable. Save first, flip second, then advance — and prev advances before cur, because prev = cur reads the current value of cur, not the next one. Get the order wrong and you either lose the rest of the list (flip-before-save) or never actually reverse anything (advance-before-flip).
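A quick trace of the loop on `10 → 20 → null`, to make the ordering concrete:
// First pass: prev = null, cur = node(10)
//   save:    next = node(20)        (the rest of the chain is held safely)
//   flip:    node(10).next = null   (10 now points at the old prev)
//   advance: prev = node(10), cur = node(20)
// Second pass flips 20's arrow, cur becomes null, and list.head = prev leaves 20 → 10 → null.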
Where the bugs hide
Pointer code has its own family of bugs, distinct from the off-by-ones that haunt array code. The single most common one is walking too far when you should have stopped at the predecessor.
Here is a four-step sequence claiming to delete the element at index 2 from `A → B → C → D → null`. One step is wrong. Click it.
To delete index 2, the cursor must stop at the node before index 2 — the predecessor. Walk one hop too far, and now you're standing on the doomed node with no way to reach the predecessor short of starting over from head. This is exactly the bug that ships in real deleteAt implementations. The fix is a tighter loop bound — k < i - 1, not k < i.
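A sketch of deleteAt with the tighter bound; the name and error handling are my own, but the k < i - 1 walk is the fix described above:
function deleteAt(list: List, index: number) {
  if (list.head === null) throw new RangeError("delete from an empty list");
  if (index === 0) {
    list.head = list.head.next; // the head has no predecessor; just repoint head
    list.size--;
    return;
  }
  let cur = list.head;
  for (let k = 0; k < index - 1 && cur.next !== null; k++) cur = cur.next; // stop on the predecessor
  if (cur.next === null) throw new RangeError("index out of bounds");
  cur.next = cur.next.next; // unlink the doomed node without ever standing on it
  list.size--;
}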
The other bug families worth naming:
- Losing `head`: any operation that mutates the head pointer needs to assign back to `list.head`. A function that walks a copy of `list.head` and rewires it locally has done nothing to the actual list.
- Forgetting `null` checks: `cur.next.next` is a footgun. If `cur.next` is `null`, the second `.next` throws. Write the bound as `cur !== null && cur.next !== null` whenever you reach two hops ahead.
- Cycles: assigning `node.next = head` without checking creates a loop. Walks turn into infinite walks. Cycle detection is its own topic (Floyd's algorithm) precisely because once a cycle exists, the structure is no longer a list.
The cache is not your friend
There is one more thing the textbook skips, and it is the reason real systems use arrays even for "insertion-heavy" workloads. Pointer chasing serialises memory loads. Each cur = cur.next waits for the previous load to finish before the next address is even known — the CPU's prefetcher cannot help, because there is nothing predictable about where the next node lives.
A million-element array iteration touches roughly 62,500 cache lines (sixteen 4-byte values per 64-byte line) and reads them in a tight prefetched stream. A million-node linked-list iteration touches up to a million cache lines, almost none of which the prefetcher saw coming. On the same hardware, the array traversal is often 10–50× faster, and the constants get worse, not better, as memory hierarchies grow.
Bjarne Stroustrup demonstrated this empirically in his 2012 GoingNative talk: insertions and deletions in the middle of a list, with the position found by linear search, are still faster on an array than on a linked list, up to surprisingly large sizes. The shifting cost loses to the cache cost. Chandler Carruth's CppCon talks make the same point with newer benchmarks.
For which of these workloads is a singly linked list the worst fit?
Pick a linked list when:
- You hold node pointers from somewhere else (hash map, iterator, intrusive struct) — the splice is genuinely O(1).
- You need stable iterators across mutations.
- You're writing a kernel, allocator, or embedded system where heap allocation has to be explicit and arrays-of-pointers aren't an option.
Reach for an array (or a deque) when:
- You read by position more than you splice.
- The data set is small or mid-sized: the range where, as the benchmarks above show, cache behaviour swamps any asymptotic advantage the list claims on paper.
- You don't already have node pointers and would have to walk to find them.
Where this shows up in real systems
Linked lists are everywhere in systems code, almost nowhere in application code. Three places worth knowing:
- The Linux kernel, in `include/linux/list.h`, defines an intrusive doubly linked list used for task lists, file descriptor tables, dirty page lists, and dozens of other places. The `hlist` variant, with a one-pointer head, is used for hash bucket chains. Every Linux process you have ever run is on a kernel-maintained list of this shape.
- Free-list allocators inside `malloc` implementations — glibc's `ptmalloc`, jemalloc, mimalloc — track unused memory blocks via singly linked lists per size class. Allocation is "pop the head"; freeing is "push the head". Both are genuinely O(1), with no walk involved.
- Hash table chaining. Java's `HashMap`, C++'s `std::unordered_map`, almost every textbook hash map — they all use a short singly linked list per bucket to handle collisions (sketched after this list). The lists are short by design (load factor under 1), so the cache penalty is bounded, and the list operations are always pointer-relative — exactly the regime where they shine.
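A toy sketch of that bucket chaining, reusing this lesson's Node type; the names Bucket and chainedInsert are illustrative, not taken from any real hash map:
type Bucket = { head: Node | null };

function chainedInsert(buckets: Bucket[], hash: number, value: number) {
  const bucket = buckets[hash % buckets.length]; // pick the bucket from the hash
  bucket.head = { value, next: bucket.head };    // push at the head: two writes, no walk
}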
The structure is fundamental. Trees are linked lists with two children. Graphs are linked lists where the chain branches. Hash buckets are arrays of linked lists. Once you have built a singly linked list, you have built half of the data-structure landscape — even if you rarely reach for the linked list itself in your application code.
What to learn next
- Doubly linked lists — what changes when each node knows its predecessor too. The data structure that LRU caches actually use.
- Stacks — a linked list with a single end and a strict access pattern. The first place "use a linked list" is genuinely the right answer in application code.
- Fast & slow pointers — a pattern that exploits linked-list structure to detect cycles, find midpoints, and split lists in one pass.