Array
| Op | Avg | Worst |
|---|---|---|
| Access | O(1) | O(1) |
| Search | O(n) | O(n) |
| Insert end | O(1)* | O(n) |
| Insert mid | O(n) | O(n) |
| Delete | O(n) | O(n) |
*amortized. Space O(n). Cache-friendly, random access.
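As a quick illustration, using Python's built-in `list` (a dynamic array):

```python
# Python's list is a dynamic array: append is O(1) amortized,
# while inserting in the middle shifts elements, costing O(n).
arr = [1, 2, 3]
arr.append(4)        # O(1) amortized: occasional resize, cheap on average
arr.insert(1, 99)    # O(n): shifts every element after index 1
first = arr[0]       # O(1) random access
```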
Hash Map
| Op | Avg | Worst |
|---|---|---|
| Search | O(1) | O(n) |
| Insert | O(1) | O(n) |
| Delete | O(1) | O(n) |
Worst = collisions. Space O(n).
#1 interview DS. O(1) lookup, counting, dedup, two-sum.
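A minimal two-sum sketch using a `dict` as the hash map (the function name is illustrative):

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, or None.
    One pass with a hash map: O(n) time, O(n) space."""
    seen = {}  # value -> index
    for i, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], i
        seen[x] = i
    return None
```

`two_sum([2, 7, 11, 15], 9)` returns `(0, 1)`; the hash map trades O(n) space for dropping the brute-force O(n²) scan to O(n).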
Sorting Algorithms
| Algorithm | Best | Avg | Worst | Space |
|---|---|---|---|---|
| Bubble | O(n) | O(n²) | O(n²) | O(1) |
| Selection | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion | O(n) | O(n²) | O(n²) | O(1) |
| Merge | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting | O(n+k) | O(n+k) | O(n+k) | O(k) |
| Radix | O(nk) | O(nk) | O(nk) | O(n+k) |
Python's built-in sort is Timsort: O(n log n), stable. For Counting sort, k = range of values; for Radix, k = number of digits.
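A minimal merge sort sketch for reference (in practice you would call the built-in `sorted`):

```python
def merge_sort(a):
    """O(n log n) time, O(n) space, stable."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stable)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]  # append whichever half remains
```

`merge_sort([5, 2, 4, 1, 3])` returns `[1, 2, 3, 4, 5]`.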
Linked List
| Op | Singly | Doubly |
|---|---|---|
| Access | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert head | O(1) | O(1) |
| Insert tail | O(n) | O(1) |
| Delete node | O(n) | O(1) |
Space O(n). LRU cache, merge sorted lists.
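A minimal singly linked list sketch (`Node`, `push_front`, and `search` are illustrative names):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def push_front(head, val):
    """O(1): the new node simply points at the old head."""
    return Node(val, head)

def search(head, target):
    """O(n): walk the chain node by node."""
    while head:
        if head.val == target:
            return True
        head = head.next
    return False
```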
Stack (LIFO)
Parens, DFS, undo, monotonic
Queue (FIFO)
BFS, scheduling, sliding window
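A short sketch of both: a list as a stack for parenthesis matching, and `collections.deque` as a queue (`balanced` is an illustrative name):

```python
from collections import deque

def balanced(s):
    """Stack-based parenthesis matching: O(n) time, O(n) space."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                  # push open bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                  # mismatch or underflow
    return not stack                          # leftovers mean unbalanced

q = deque([1, 2, 3])   # deque gives O(1) popleft; list.pop(0) is O(n)
q.append(4)            # enqueue
front = q.popleft()    # dequeue -> 1
```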
Binary Search Tree
| Op | Avg | Worst | Balanced |
|---|---|---|---|
| Search | O(log n) | O(n) | O(log n) |
| Insert | O(log n) | O(n) | O(log n) |
| Delete | O(log n) | O(n) | O(log n) |
| Min / Max | O(log n) | O(n) | O(log n) |
BST worst = skewed → linked list. Balanced = AVL / Red-Black. Space O(n).
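A minimal iterative BST search sketch (class and function names are illustrative):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bst_search(root, target):
    """O(log n) on a balanced tree; degrades to O(n) if skewed."""
    while root:
        if target == root.val:
            return True
        # BST invariant: smaller values left, larger values right
        root = root.left if target < root.val else root.right
    return False
```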
Heap (Priority Queue)
| Op | Time |
|---|---|
| Find min / max | O(1) |
| Insert | O(log n) |
| Extract | O(log n) |
| Heapify | O(n) |
Space O(n). Top-K, merge K lists, median, Dijkstra's.
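A top-K sketch using Python's `heapq`, which is a min-heap (`top_k` is an illustrative name):

```python
import heapq

def top_k(nums, k):
    """Largest k elements via a size-k min-heap: O(n log k) time, O(k) space."""
    heap = []
    for x in nums:
        heapq.heappush(heap, x)      # O(log k)
        if len(heap) > k:
            heapq.heappop(heap)      # evict the current smallest
    return sorted(heap, reverse=True)
```

`top_k([5, 1, 9, 3, 7], 2)` returns `[9, 7]`. For max-heap behavior, push negated values.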
Graph Algorithms
| Algorithm | Time | Space |
|---|---|---|
| BFS / DFS | O(V+E) | O(V) |
| Dijkstra | O(E log V) | O(V) |
| Bellman-Ford | O(V·E) | O(V) |
| Floyd-Warshall | O(V³) | O(V²) |
| Topo Sort | O(V+E) | O(V) |
| Kruskal MST | O(E log E) | O(V) |
| Prim MST | O(E log V) | O(V) |
BFS = shortest unweighted. Dijkstra = weighted (no negative edges).
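A Dijkstra sketch with `heapq`, assuming the graph is a dict mapping each node to a list of `(neighbor, weight)` pairs (`dijkstra` is an illustrative name):

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src. O(E log V) with a binary heap;
    weights must be non-negative."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale entry already superseded by a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```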
Adjacency List
Sparse graphs, most problems
Adjacency Matrix
Dense graphs, small V
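A quick sketch of both representations for a small undirected graph:

```python
edges = [(0, 1), (0, 2), (1, 2)]
n = 3

# Adjacency list: O(V + E) space, ideal for sparse graphs.
adj_list = [[] for _ in range(n)]
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

# Adjacency matrix: O(V^2) space, but O(1) edge lookup.
adj_matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    adj_matrix[u][v] = adj_matrix[v][u] = 1
```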
Trie
| Op | Time |
|---|---|
| Search / Insert | O(m) |
| Prefix search | O(m) |
| Delete | O(m) |
m = word length. Space O(n·m). Autocomplete, word search, spell check.
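A minimal trie sketch using nested dicts (function names and the `'$'` end-of-word marker are illustrative choices):

```python
def trie_insert(root, word):
    """O(m) per word, m = word length; each node is a dict of children."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True  # end-of-word marker

def trie_has_prefix(root, prefix):
    """O(m): follow one link per character."""
    node = root
    for ch in prefix:
        if ch not in node:
            return False
        node = node[ch]
    return True
```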
Union-Find (Disjoint Set)
| Op | Time |
|---|---|
| Find | O(α(n)) |
| Union | O(α(n)) |
α(n) ≈ O(1). Path compression + union by rank. Components, cycle detection, Kruskal's.
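A minimal union-find sketch with both optimizations (function names are illustrative):

```python
def make_dsu(n):
    """Each element starts as its own root with rank 0."""
    return list(range(n)), [0] * n  # parent, rank

def find(parent, x):
    """Path compression (halving): point nodes closer to the root."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, rank, a, b):
    """Union by rank: attach the shorter tree under the taller one.
    Returns False if a and b were already connected (cycle detected)."""
    ra, rb = find(parent, a), find(parent, b)
    if ra == rb:
        return False
    if rank[ra] < rank[rb]:
        ra, rb = rb, ra
    parent[rb] = ra
    if rank[ra] == rank[rb]:
        rank[ra] += 1
    return True
```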
Bit Manipulation
| Trick | Meaning |
|---|---|
| `n & (n - 1) == 0` | power of two (for n > 0) |
| `n & 1` | test the last bit (odd check) |
| `n ^ (1 << k)` | toggle bit k |
| `n & (-n)` | isolate lowest set bit |
| `bin(n).count('1')` | count set bits |
| `reduce(^, arr)` | XOR of array (find the single number) |
| `a ^= b; b ^= a; a ^= b` | swap without a temp variable |
| `for m in range(1 << n)` | enumerate all 2ⁿ subsets |

Single loop over n → O(n)
Nested loops → O(n²)
Halving each step → O(log n)
Sort + linear scan → O(n log n)
All subsets → O(2ⁿ)
All permutations → O(n!)
Drop constants: O(2n) → O(n)
Drop lower terms: O(n² + n) → O(n²)
Amortized: dynamic array append = O(1)*
Always state it: "O(n) time, O(1) space." Don't wait to be asked.
Brute force first: mention O(n²) then optimize. Shows trade-off thinking.
n² → n? Hash map. n² → n log n? Sorting.
Hidden costs: string concat in loop = O(n²), slice = O(n), sort ≠ free.
Space-time: hash map adds O(n) space to drop O(n²) → O(n) time.
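For example, the string-concat trap (helper names are hypothetical):

```python
def concat_words(words):
    """O(n^2) worst case: each += may copy the whole string built so far."""
    s = ''
    for w in words:
        s += w
    return s

def join_words(words):
    """O(total length): ''.join allocates the result once."""
    return ''.join(words)
```

Both return the same string; `join` is the idiomatic O(n) choice.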
This Big O cheat sheet is a single-screen reference covering time and space complexity for every data structure and algorithm you will encounter in coding interviews. Whether you are preparing for Google, Meta, Amazon, or any top tech company, understanding Big O notation is the foundation of writing efficient code.
Our Big O notation cheat sheet covers arrays, linked lists, hash maps, stacks, queues, binary search trees, heaps, tries, graphs, and union-find. For algorithms, you will find all major sorting algorithms (merge sort, quick sort, heap sort, counting sort, radix sort), graph algorithms (BFS, DFS, Dijkstra, Bellman-Ford, Floyd-Warshall, topological sort, Kruskal, Prim), and the 12 most common coding interview patterns with their time and space complexity.
Unlike other Big O cheat sheets that require scrolling through long pages, this reference is designed to fit on a single screen, with every data structure, sorting algorithm, graph algorithm, and common pattern visible at a glance.
A Big O cheat sheet is a quick-reference guide that lists the time and space complexity of common data structures and algorithms using Big O notation. It helps developers quickly compare the efficiency of different approaches during coding interviews and software design. This Big O notation cheat sheet covers arrays, linked lists, trees, graphs, sorting algorithms, and common coding patterns.
Big O notation describes the upper bound of an algorithm's growth rate as input size increases. It tells you how an algorithm's time or space requirements scale. O(n) means linear growth, O(1) means constant time, and O(n²) means quadratic growth. It drops constants and lower-order terms: O(2n + 5) simplifies to O(n). This Big O cheat sheet uses standard Big O notation throughout.
From fastest to slowest: O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n) linearithmic, O(n²) quadratic, O(2ⁿ) exponential, and O(n!) factorial. Most coding interview solutions should be between O(log n) and O(n²). If your solution is O(2ⁿ) or worse, there is almost always a dynamic programming or greedy approach that does better.
Interviewers use Big O to evaluate whether your solution scales for production inputs. Knowing Big O helps you pick the right data structure (hash map O(1) lookup vs array O(n) lookup) and articulate trade-offs. After writing code, always state your time and space complexity — it is often the difference between passing and failing.
Best case is the minimum operations (e.g., finding your target at index 0). Average case is the expected operations over all inputs. Worst case is the maximum operations (e.g., target is last element). Big O typically refers to worst case. This Big O cheat sheet shows average and worst case for data structures, and best/average/worst for sorting algorithms.
Focus on patterns rather than memorization: hash-based structures are O(1) average for insert/search/delete, tree-based structures are O(log n), and linked structures are O(n) for search. Sorting is O(n log n) for comparison-based, O(n+k) for counting-based. Use this Big O notation cheat sheet as a reference until the patterns become intuitive.
Practice with an AI interviewer that tests your ability to choose optimal data structures and analyze complexity in real time.
Try Crackr free