The list that scrolls forever
Open your phone's contacts and flick to the bottom. Item 4,217 appears in the same instant as item 3. The list isn't pre-loading. It isn't searching. The reason that scroll feels free at any depth is the array — and the trick is older and simpler than the phone in your hand.
One calculation, no traversal
An array is a row of identically-sized boxes glued end-to-end in memory. Pick a starting address, fix an element width, and every other slot is reachable by arithmetic alone:
address(i) = base + i × elementSize
That's the whole trick. The CPU doesn't walk the row to find slot i. It computes one number, issues one load, and the value comes back. Reading slot 0 and reading slot 999,999 cost the same.
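A minimal sketch of that calculation in JavaScript; addressOf, BASE and ELEMENT_SIZE are illustrative names, not anything an engine actually exposes:

const BASE = 0x1000;    // starting address of the array (illustrative)
const ELEMENT_SIZE = 4; // width of each element in bytes

// One multiply and one add: no loop, no traversal, no dependence on length.
function addressOf(i) {
  return BASE + i * ELEMENT_SIZE;
}

console.log(addressOf(2).toString(16));      // "1008"
console.log(addressOf(999999).toString(16)); // "3d18fc", still two operations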
[Interactive visualiser: an array of 8 contiguous slots, 4 bytes each, with step-through operations.]
function construct(capacity) {
  return new Array(capacity); // 8 slots, all empty
}
The visualiser above shows an 8-slot array, each cell 4 bytes wide, starting at address 0x1000. Step through set and get and watch the address strip resolve. The formula is the lesson — every box of values you'll ever scroll past on a screen relies on it.
We just read index 2 in one step. If the array had 1,000,000 elements and we asked for index 999,999, how many memory reads would the CPU need?
The count never enters the formula. There's no loop over i, no comparison, no branching. Index 999,999 is one multiply, one add, one load — same as index 2.
Base address is 0x1000. Element width is 4 bytes. What is the address of the element at index 7? (Answer in hex, e.g. 0x100C.)
If you got that one off by a factor of four, you're in good company. Forgetting to multiply by element size is the single most common mistake people make when they first reason about memory layout. It's also why typed arrays exist: the compiler needs to know the element width to do the maths for you.
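You can watch that multiply happen in JavaScript's typed arrays. A sketch (assuming little-endian hardware, which is what typed arrays see on virtually every modern machine):

const buffer = new ArrayBuffer(32);  // 8 slots of 4 bytes each
const ints = new Int32Array(buffer); // element width is 4, so ints[i] lives at byte i * 4

ints[7] = 42;

// Reading the same slot by hand means supplying the byte offset yourself:
// 7 * 4 = 28, which is 0x1C past the base.
const view = new DataView(buffer);
console.log(view.getInt32(28, true)); // 42 (true = little-endian)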
Indexing is free; rearranging is not
Random access is the array's gift. Rearrangement is the price. Because every slot's address depends on its position, you cannot insert a value in the middle without moving everything after it.
function insertAt(arr, length, i, value) {
  // Assumes capacity > length, so slot arr[length] is free to receive the shift.
  for (let k = length; k > i; k--) {
    arr[k] = arr[k - 1]; // shift right: each slot takes its left neighbour's value
  }
  arr[i] = value;
  return length + 1; // new logical length; the caller tracks it
}
Watch what happens when you ask the visualiser to insertAt(1, 99) on [10, 20, 30]. The values at indices 1 and 2 don't disappear — they slide one slot to the right. Only after the gap is open does 99 land at index 1.
The array currently holds [10, 20, 30, _, _, _, _, _] with length 3. Which slot layout matches the array immediately after insertAt(1, 99) completes?
That suffix-shift is the entire cost of the operation. Inserting near the end of a long array is cheap; inserting near the front is not.
Inserting at index 1 in an array of length 1,000,000 requires roughly how many element moves?
Delete is the mirror image — removing the element leaves a gap, and the suffix slides one slot to the left to close it. Same shape, same cost.
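A sketch of the companion operation (deleteAt is an illustrative name; the loop bound is the detail a later section returns to):

function deleteAt(arr, length, i) {
  for (let k = i; k < length - 1; k++) {
    arr[k] = arr[k + 1]; // shift left: each slot pulls from its right neighbour
  }
  arr[length - 1] = undefined; // clear the stale copy left behind at the end
  return length - 1;           // new logical length
}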
Starting from an empty array, drag these three operations into the order that produces the state [5, 7, 9] with length 3.
- 1. insertAt(0, 5)
- 2. insertAt(0, 9)
- 3. insertAt(0, 7)
If you ordered those wrong, the most common reason is reading insertAt(0, x) as "append at the front" instead of "shift everything right and place at index 0." The second framing is correct: each call shifts the entire current contents to the right, so the last call places the value that ends up at index 0.
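A quick trace with the insertAt from earlier (which returns the new length) makes the ordering visible:

const arr = new Array(8);
let len = 0;
len = insertAt(arr, len, 0, 9); // [9]
len = insertAt(arr, len, 0, 7); // [7, 9]    (9 shifted right)
len = insertAt(arr, len, 0, 5); // [5, 7, 9] (7 and 9 shifted right)

That is options 2, 3, then 1 in the exercise above.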
Where the bugs hide
The address-arithmetic trick is so robust that the bugs almost always live in the bookkeeping around it — the loop bounds, the stored length, the off-by-one when shifting.
Here is a four-step sequence that claims to delete the element at index 1 from [10, 20, 30, 40, _, _, _, _] with length 4. One step is wrong. Click it.
The shift-left loop is one of those rare loops where reading it three times still doesn't make the index relationship obvious. If you write arr[k] = arr[k + 1], your loop bound is k < length - 1, not k < length. Get either wrong and you'll either read past the end or leave a stale value behind.
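Side by side, the two bounds (the function names are illustrative; length is the logical element count, i the deletion index):

function shiftLeftCorrect(arr, length, i) {
  // k stops at length - 2, so arr[k + 1] never passes the last element.
  for (let k = i; k < length - 1; k++) arr[k] = arr[k + 1];
}

function shiftLeftBuggy(arr, length, i) {
  // k reaches length - 1, so arr[k + 1] reads arr[length]. In C that is a
  // read past the allocation; in JavaScript it quietly copies undefined
  // into the last live slot.
  for (let k = i; k < length; k++) arr[k] = arr[k + 1];
}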
When the array is the wrong answer
A static array is brilliant when:
- You read by position, often, and don't move things around.
- The size is known up front, or grows rarely.
- The values are dense — no holes you have to skip over.
It's the wrong answer when the workload is "constantly grow at the front" or "constantly delete in the middle of a long sequence." Each operation drags an entire suffix with it, and you've turned an O(1) data structure into an O(n)-per-step machine.
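You can feel that suffix-drag directly from JavaScript, since push grows at the back and unshift grows at the front. A rough sketch (absolute timings vary by engine, and JS arrays are dynamic rather than static, but the shift cost is the same idea):

function time(label, fn) {
  const t0 = performance.now(); // global in browsers and modern Node
  fn();
  console.log(label, (performance.now() - t0).toFixed(1), "ms");
}

time("push, grow at the back   ", () => {
  const a = [];
  for (let i = 0; i < 100000; i++) a.push(i); // no suffix to move
});

time("unshift, grow at the front", () => {
  const a = [];
  for (let i = 0; i < 100000; i++) a.unshift(i); // drags the whole array each call
});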
For which of these workloads is a static array the worst fit?
This is also why the next topic exists. Real programs rarely know their size up front, and the cost of "the array is full, allocate a bigger one" needs an honest treatment. That's dynamic arrays.
The hardware's favourite shape
Textbooks talk about arrays in terms of operations and Big-O. The reason production systems lean on them runs deeper than that. Modern CPUs prefetch sequential memory aggressively — once you read slot i, the next cache line is already on its way. A linear scan over a 10,000-element array will routinely beat a "more efficient" tree lookup of the same data, because the array fits in L2 and the tree's pointer hops blow out the cache.
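A crude way to feel this from JavaScript, with a linked list standing in for any pointer-chasing structure (engines add their own layers, so treat any timing of this as directional, not as a cache measurement):

const N = 10000;

// Layout 1: one contiguous block of 64-bit floats.
const flat = Float64Array.from({ length: N }, (_, i) => i);

// Layout 2: the same values as a chain of heap objects, one pointer hop each.
let head = null;
for (let i = N - 1; i >= 0; i--) head = { value: i, next: head };

let sumFlat = 0;
for (let i = 0; i < N; i++) sumFlat += flat[i]; // sequential, prefetch-friendly

let sumChain = 0;
for (let n = head; n !== null; n = n.next) sumChain += n.value; // hop per element

console.log(sumFlat === sumChain); // true: same work, very different memory traffic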
A few places this matters in real systems you have probably used:
- CPython's list: not a linked list. It's a contiguous array of pointers to Python objects. The C source lives at Objects/listobject.c. The "list" name is a historical accident.
- V8 arrays: when an array is "packed" (no holes, all the same element kind), V8 stores the values inline as a true contiguous block. arr[i] becomes one load. The moment you create a hole or mix types, V8 falls back to a slower dictionary representation — which is why delete arr[3] is a performance trap.
- Postgres heap pages: each 8 KB page begins with a small array of line pointers. Looking up a tuple within a page is array-indexing speed.
- NumPy ndarray: the entire library exposes the index arithmetic — strides, views, broadcasting — and lets numerical code exploit it directly.
- Game engines: entity component systems pack components into tightly-typed arrays per component type, specifically to keep cache lines moving. It's the same insight as the contacts app, scaled up.
The contacts list at the start was not a metaphor. It is the same data structure, used the same way, for the same reason: index arithmetic is the cheapest thing a computer can do, and arrays let you cash that in on every read.
What to learn next
- Dynamic arrays — what happens when "the array is full" stops being an error and starts being amortised growth.
- Hash maps — when you need O(1) lookup by a key, not by a position.
- Two pointers — the first pattern that exploits array layout for problem solving.