Big O Cheat Sheet

crackr.dev

Complexity Growth Rates

[Growth-rate chart: operations vs. n for O(1), O(log n), O(n), O(n log n), O(n²)]

Operations at n = 1,000

| Complexity | Operations |
|---|---|
| O(1) | 1 |
| O(log n) | ~10 |
| O(n) | 1,000 |
| O(n log n) | ~10,000 |
| O(n²) | 1,000,000 |
| O(2ⁿ) | ~10³⁰⁰ |
| O(n!) | ~10²⁵⁶⁷ |
Array / Dynamic Array

| Op | Avg | Worst |
|---|---|---|
| Access | O(1) | O(1) |
| Search | O(n) | O(n) |
| Insert end | O(1)* | O(n) |
| Insert mid | O(n) | O(n) |
| Delete | O(n) | O(n) |

*amortized. Space O(n). Cache-friendly, random access.


Hash Map / Hash Set

| Op | Avg | Worst |
|---|---|---|
| Search | O(1) | O(n) |
| Insert | O(1) | O(n) |
| Delete | O(1) | O(n) |

Worst = collisions. Space O(n).

#1 interview DS. O(1) lookup, counting, dedup, two-sum.
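The two-sum use case above is the canonical hash-map trick: trade O(n) space for a single O(n) pass. A minimal sketch (function name is illustrative):

```python
def two_sum(nums, target):
    # Hash map: value -> index. One pass, O(n) time, O(n) space.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return None  # no pair sums to target
```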


Sorting Algorithms

| Algorithm | Best | Avg | Worst | Space |
|---|---|---|---|---|
| Bubble | O(n) | O(n²) | O(n²) | O(1) |
| Selection | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion | O(n) | O(n²) | O(n²) | O(1) |
| Merge | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting | O(n+k) | O(n+k) | O(n+k) | O(k) |
| Radix | O(nk) | O(nk) | O(nk) | O(n+k) |

Python = Timsort O(n log n) stable. k = range of values.
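As one reference point from the table, merge sort is O(n log n) in every case at the cost of O(n) extra space. A textbook sketch:

```python
def merge_sort(arr):
    # O(n log n) time, O(n) extra space; stable.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```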

Linked List

| Op | Singly | Doubly |
|---|---|---|
| Access | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert head | O(1) | O(1) |
| Insert tail | O(n) | O(1) |
| Delete node | O(n) | O(1) |

Space O(n). Insert tail is also O(1) for a singly linked list if a tail pointer is kept. LRU cache, merge sorted lists.
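The "merge sorted lists" use case can be sketched with a minimal node class and a dummy head (names here are illustrative, not from the sheet):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sorted(a, b):
    # Classic O(n + m) merge of two sorted linked lists.
    dummy = tail = Node(0)
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b        # append whatever remains
    return dummy.next

# Usage: 1->3->5 merged with 2->4
m = merge_sorted(Node(1, Node(3, Node(5))), Node(2, Node(4)))
out = []
while m:
    out.append(m.val); m = m.next
```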


Stack & Queue

Stack (LIFO)

| Op | Time |
|---|---|
| Push / Pop | O(1) |
| Peek | O(1) |
| Search | O(n) |

Parens, DFS, undo, monotonic
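The "parens" use case above is the standard stack warm-up: push the expected closer for each opener, pop on each closer. A minimal sketch:

```python
def valid_parens(s):
    # Stack of expected closing brackets; O(n) time, O(n) space.
    pairs = {'(': ')', '[': ']', '{': '}'}
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(pairs[ch])
        elif not stack or stack.pop() != ch:
            return False          # wrong closer, or closer with empty stack
    return not stack              # leftover openers mean invalid
```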


Queue (FIFO)

| Op | Time |
|---|---|
| Enq / Deq | O(1) |
| Peek | O(1) |
| Search | O(n) |

BFS, scheduling, sliding window


Binary Trees (BST / Balanced)

| Op | BST Avg | BST Worst | Balanced |
|---|---|---|---|
| Search | O(log n) | O(n) | O(log n) |
| Insert | O(log n) | O(n) | O(log n) |
| Delete | O(log n) | O(n) | O(log n) |
| Min / Max | O(log n) | O(n) | O(log n) |

BST worst = skewed → linked list. Balanced = AVL / Red-Black. Space O(n).

Heap / Priority Queue

| Op | Time |
|---|---|
| Find min / max | O(1) |
| Insert | O(log n) |
| Extract | O(log n) |
| Heapify | O(n) |

Space O(n). Top-K, merge K lists, median, Dijkstra's.
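The Top-K use case maps directly onto Python's `heapq` (a min-heap): keep a heap of size k, so the whole scan is O(n log k). A sketch:

```python
import heapq

def top_k(nums, k):
    # Min-heap of size k: O(n log k) time, O(k) space.
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)   # evict the smallest of the k+1 kept so far
    return sorted(heap, reverse=True)
```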


Graph Algorithms

| Algorithm | Time | Space |
|---|---|---|
| BFS / DFS | O(V+E) | O(V) |
| Dijkstra | O(E log V) | O(V) |
| Bellman-Ford | O(V·E) | O(V) |
| Floyd-Warshall | O(V³) | O(V²) |
| Topo Sort | O(V+E) | O(V) |
| Kruskal MST | O(E log E) | O(V) |
| Prim MST | O(E log V) | O(V) |

BFS = shortest unweighted. Dijkstra = weighted (no negative edges).
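"BFS = shortest unweighted" can be sketched in a few lines with an adjacency-list dict and a deque (graph shape here is an assumption for illustration):

```python
from collections import deque

def bfs_distance(graph, start, goal):
    # Unweighted shortest path; O(V + E) time, O(V) space.
    seen = {start}
    q = deque([(start, 0)])
    while q:
        node, dist = q.popleft()
        if node == goal:
            return dist
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                q.append((nb, dist + 1))
    return -1  # goal unreachable
```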

Graph Representation

Adjacency List

| Op | Cost |
|---|---|
| Space | O(V+E) |
| Add edge | O(1) |
| Check edge | O(deg) |

Sparse graphs, most problems

Adjacency Matrix

| Op | Cost |
|---|---|
| Space | O(V²) |
| Add edge | O(1) |
| Check edge | O(1) |

Dense graphs, small V

Trie (Prefix Tree)

| Op | Time |
|---|---|
| Search / Insert | O(m) |
| Prefix search | O(m) |
| Delete | O(m) |

m = word length. Space O(n·m). Autocomplete, word search, spell check.
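A trie can be sketched with plain nested dicts, one level per character, which makes the O(m) walk explicit (the `'$'` end-of-word marker is an implementation choice, not part of the sheet):

```python
class Trie:
    def __init__(self):
        self.root = {}            # char -> child dict; '$' marks a word end

    def insert(self, word):       # O(m) per word
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True

    def search(self, word):       # O(m): exact word lookup
        node = self._walk(word)
        return node is not None and '$' in node

    def starts_with(self, prefix):  # O(m): prefix lookup
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node:
                return None
            node = node[ch]
        return node
```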


Union-Find (Disjoint Set)

| Op | Time |
|---|---|
| Find | O(α(n)) |
| Union | O(α(n)) |

α(n) ≈ O(1). Path compression + union by rank. Components, cycle detection, Kruskal's.
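Path compression plus union by rank, as named above, fits in a short class; `union` returning `False` is exactly the cycle check used in Kruskal's:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point nodes at their grandparent as we climb.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already connected -> edge would form a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra       # attach the shorter tree under the taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```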

Common Interview Patterns

| Pattern | Time | Use case |
|---|---|---|
| Two Pointers | O(n) | Sorted array, pair sum, palindrome |
| Sliding Window | O(n) | Subarray / substring problems |
| Binary Search | O(log n) | Sorted input, min/max answer |
| Prefix Sum | O(n) build / O(1) query | Range sums, subarray sum = K |
| Monotonic Stack | O(n) | Next greater element, histogram |
| Fast & Slow Ptr | O(n) | Cycle detect, middle of list |
| Merge Intervals | O(n log n) | Overlapping intervals |
| Backtracking | O(2ⁿ) | Combos, permutations, N-queens |
| DP (bottom-up) | O(n²) typical | Overlapping subproblems |
| Greedy | O(n log n) | Local optimal → global optimal |
| Topo Sort | O(V+E) | DAG ordering, prerequisites |

Bit Manipulation

| Trick | Expression |
|---|---|
| Power of 2? | n > 0 and n & (n - 1) == 0 |
| Even / odd? | n & 1 |
| Toggle bit k | n ^ (1 << k) |
| Lowest set bit | n & (-n) |
| Count bits | bin(n).count('1') |
| XOR unique | functools.reduce(operator.xor, arr) |
| Swap, no temp | a ^= b; b ^= a; a ^= b |
| All subsets | for m in range(1 << n) |
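Three of the tricks above, written out runnably (function names are illustrative):

```python
def is_power_of_two(n):
    # Clearing the lowest set bit leaves 0 only for powers of two.
    return n > 0 and n & (n - 1) == 0

def single_number(arr):
    # XOR cancels pairs, leaving the one element that appears once.
    acc = 0
    for x in arr:
        acc ^= x
    return acc

def subsets(items):
    # Bit i of the mask decides whether items[i] is included: 2^n subsets.
    n = len(items)
    return [[items[i] for i in range(n) if m >> i & 1] for m in range(1 << n)]
```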

Searching

| Method | Time | Use case |
|---|---|---|
| Linear | O(n) | Unsorted, single scan |
| Binary | O(log n) | Sorted array, monotonic function |
| Hash | O(1) avg | Exact match, membership test |
| BFS | O(V+E) | Shortest unweighted path |
| DFS | O(V+E) | Path exists, cycle detect |

Space Complexity Reference

| Structure | Space |
|---|---|
| Hash map (n keys) | O(n) |
| Recursion depth d | O(d) |
| 2D matrix n×m | O(n·m) |
| Adjacency list | O(V+E) |
| Adjacency matrix | O(V²) |
| BFS / DFS visited | O(V) |
| Merge sort extra | O(n) |
| Quick sort stack | O(log n) |
| Trie (n words) | O(n·m) |
| Bit manipulation | O(1) |

Big O Quick Rules

Single loop over n → O(n)

Nested loops → O(n²)

Halving each step → O(log n)

Sort + linear scan → O(n log n)

All subsets → O(2ⁿ)

All permutations → O(n!)

Drop constants: O(2n) → O(n)

Drop lower terms: O(n² + n) → O(n²)

Amortized: dynamic array append = O(1)*

Interview Pro Tips

Always state it: "O(n) time, O(1) space." Don't wait to be asked.

Brute force first: mention O(n²) then optimize. Shows trade-off thinking.

n² → n? Hash map. n² → n log n? Sorting.

Hidden costs: string concat in loop = O(n²), slice = O(n), sort ≠ free.

Space-time: hash map adds O(n) space to drop O(n²) → O(n) time.

The Complete Big O Notation Cheat Sheet

This Big O cheat sheet is a single-screen reference covering time and space complexity for every data structure and algorithm you will encounter in coding interviews. Whether you are preparing for Google, Meta, Amazon, or any top tech company, understanding Big O notation is the foundation of writing efficient code.

Our Big O notation cheat sheet covers arrays, linked lists, hash maps, stacks, queues, binary search trees, heaps, tries, graphs, and union-find. For algorithms, you will find all major sorting algorithms (merge sort, quick sort, heap sort, counting sort, radix sort), graph algorithms (BFS, DFS, Dijkstra, Bellman-Ford, Floyd-Warshall, topological sort, Kruskal, Prim), and the 12 most common coding interview patterns with their time and space complexity.

Unlike other big-o cheat sheets that require scrolling through long pages, this reference is designed to fit your entire screen in one view. Every data structure, every sorting algorithm, every graph algorithm, and every common pattern — visible at a glance.

Big O Cheat Sheet FAQ

What is a Big O cheat sheet?

A Big O cheat sheet is a quick-reference guide that lists the time and space complexity of common data structures and algorithms using Big O notation. It helps developers quickly compare the efficiency of different approaches during coding interviews and software design. This Big O notation cheat sheet covers arrays, linked lists, trees, graphs, sorting algorithms, and common coding patterns.

What is Big O notation?

Big O notation describes the upper bound of an algorithm's growth rate as input size increases. It tells you how an algorithm's time or space requirements scale. O(n) means linear growth, O(1) means constant time, and O(n²) means quadratic growth. It drops constants and lower-order terms — O(2n + 5) simplifies to O(n). This big-o cheat sheet uses standard Big O notation throughout.

What are the most common Big O complexities?

From fastest to slowest: O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n) linearithmic, O(n²) quadratic, O(2ⁿ) exponential, and O(n!) factorial. Most coding interview solutions should be between O(log n) and O(n²). If your solution is O(2ⁿ) or worse, there is almost always a dynamic programming or greedy approach that does better.

Why does Big O matter for coding interviews?

Interviewers use Big O to evaluate whether your solution scales for production inputs. Knowing Big O helps you pick the right data structure (hash map O(1) lookup vs array O(n) lookup) and articulate trade-offs. After writing code, always state your time and space complexity — it is often the difference between passing and failing.

What is the difference between best, average, and worst case?

Best case is the minimum operations (e.g., finding your target at index 0). Average case is the expected operations over all inputs. Worst case is the maximum operations (e.g., target is last element). Big O typically refers to worst case. This Big O cheat sheet shows average and worst case for data structures, and best/average/worst for sorting algorithms.

How do I memorize Big O for all data structures?

Focus on patterns rather than memorization: hash-based structures are O(1) average for insert/search/delete, tree-based structures are O(log n), and linked structures are O(n) for search. Sorting is O(n log n) for comparison-based, O(n+k) for counting-based. Use this Big O notation cheat sheet as a reference until the patterns become intuitive.

Ready to put Big O into practice?

Practice with an AI interviewer that tests your ability to choose optimal data structures and analyze complexity in real time.

Try Crackr free
