12 essential data structures for C++ developers, each explained with Big O complexity and real code samples you can copy.
## std::array<T,N>

Fixed-size, contiguous block of memory. Elements are stored sequentially and accessed by index in constant time. The foundation of most other data structures.
#include <array>
#include <algorithm> // std::find, std::sort, std::binary_search
#include <iostream>
// std::array - fixed-size, stack-allocated, contiguous
std::array<int, 5> nums = {2, 3, 5, 7, 11};
nums[0] = 13; // O(1) access
int third = nums.at(2); // O(1) bounds-checked -> 5
// Size known at compile time
constexpr auto sz = nums.size(); // 5
// Iterate
for (int n : nums)
std::cout << n << " ";
// Search
auto it = std::find(nums.begin(), nums.end(), 7); // O(n)
// Sort
std::sort(nums.begin(), nums.end()); // O(n log n)
bool found = std::binary_search(nums.begin(), nums.end(), 7); // O(log n)

## std::vector<T>

Resizable array that automatically grows when capacity is exceeded. The most commonly used container in C++. It grows its internal storage geometrically (often by doubling) when full, giving amortized O(1) appends.
#include <vector>
// std::vector - dynamic array (most used container)
std::vector<int> nums = {10, 20, 30};
nums.push_back(40); // O(1) amortized
nums.emplace_back(50); // O(1) amortized, in-place
nums[1] = 25; // O(1) index access
// Insert at position 2 - O(n)
nums.insert(nums.begin() + 2, 28);
// Erase at position 1 - O(n)
nums.erase(nums.begin() + 1);
// Search
auto it = std::find(nums.begin(), nums.end(), 40); // O(n)
bool has = it != nums.end();
std::sort(nums.begin(), nums.end()); // O(n log n)
nums.reserve(100); // pre-allocate to avoid reallocations

## std::stack<T>

Last-In-First-Out (LIFO) collection. Only the top element is accessible. Used for tracking state that must be unwound in reverse order.
#include <stack>
#include <string>
#include <unordered_map> // for the parentheses example
// std::stack - LIFO adapter (default: deque backend)
std::stack<int> st;
st.push(10); // O(1)
st.push(20);
st.push(30);
int top = st.top(); // O(1) peek -> 30
st.pop(); // O(1) remove top
std::cout << st.size(); // 2
// Interview pattern: valid parentheses
bool isValid(const std::string& s) {
std::stack<char> st;
std::unordered_map<char,char> p = {{')', '('}, {']', '['}, {'}', '{'}};
for (char c : s) {
if (!p.count(c)) st.push(c);
else if (st.empty() || st.top() != p[c]) return false;
else st.pop();
}
return st.empty();
}

## std::queue<T>

First-In-First-Out (FIFO) collection. Elements are added at the back and removed from the front. Fundamental for breadth-first processing.
#include <queue>
#include <iostream>
// std::queue - FIFO adapter (default: deque backend)
std::queue<std::string> q;
q.push("A"); // O(1) enqueue
q.push("B");
q.push("C");
std::string front = q.front(); // O(1) peek -> "A"
q.pop(); // O(1) dequeue
// Interview pattern: BFS (assumes a TreeNode struct with val, left, right)
void bfs(TreeNode* root) {
if (!root) return;
std::queue<TreeNode*> q;
q.push(root);
while (!q.empty()) {
auto* node = q.front();
q.pop(); // O(1)
std::cout << node->val << " ";
if (node->left) q.push(node->left);
if (node->right) q.push(node->right);
}
}

## std::unordered_map<K,V>

Maps keys to values using a hash function for near-constant-time lookups. The single most important data structure for coding interviews. Every mainstream language has a built-in equivalent.
#include <unordered_map>
#include <string>
#include <vector> // for the Two Sum example
#include <iostream>
// std::unordered_map - hash table
std::unordered_map<std::string, int> map = {
{"apple", 3}, {"banana", 5}
};
map["cherry"] = 2; // O(1) add
map["apple"] = 10; // O(1) update
map.erase("cherry"); // O(1)
// Safe lookup
if (auto it = map.find("apple"); it != map.end())
std::cout << it->second; // O(1) -> 10
bool has = map.contains("banana"); // O(1) - C++20
// Interview pattern: Two Sum
std::vector<int> twoSum(std::vector<int>& nums, int target) {
std::unordered_map<int,int> seen;
for (int i = 0; i < (int)nums.size(); ++i) {
int need = target - nums[i];
if (seen.contains(need)) return {seen[need], i};
seen[nums[i]] = i;
}
return {};
}

## std::unordered_set<T>

Unordered collection of unique elements. Uses hashing internally for O(1) membership testing. Set operations such as union, intersection, and difference are easy to build on top of it (there are no built-ins for the unordered containers).
#include <unordered_set>
#include <vector> // for the hasDup example
// std::unordered_set - hash-based unique elements
std::unordered_set<int> s = {1, 2, 3, 4, 5};
s.insert(6); // O(1)
s.insert(3); // O(1) - no duplicate
s.erase(1); // O(1)
bool has = s.contains(4); // O(1) -> true (C++20)
// Set operations (manual - no built-in)
std::unordered_set<int> a = {1,2,3}, b = {2,3,4};
std::unordered_set<int> inter;
for (int x : a)
if (b.contains(x)) inter.insert(x); // {2, 3}
// Interview pattern: contains duplicate
bool hasDup(std::vector<int>& nums) {
std::unordered_set<int> seen;
for (int n : nums)
if (!seen.insert(n).second) return true; // O(1)
return false;
}

## std::list<T>

Sequence of nodes where each node points to the next (singly linked) or to both next and previous (doubly linked). Efficient insertion and deletion at any known position, but no index-based access.
#include <list>
#include <algorithm> // std::find
#include <iostream>
// std::list - doubly linked list
std::list<int> ll = {10, 20};
ll.push_back(30); // O(1)
ll.push_front(5); // O(1)
// Find and insert before - O(n) search, O(1) insert
auto it = std::find(ll.begin(), ll.end(), 20);
ll.insert(it, 15); // O(1) given iterator
// Remove - O(1) given iterator
ll.erase(it); // removes 20
// Iterate: 5 -> 10 -> 15 -> 30
for (int val : ll) std::cout << val << " ";
// Splice - O(1) move nodes between lists
std::list<int> other = {99};
ll.splice(ll.begin(), other); // moves 99 to front
// Useful for LRU cache: splice to front on access

## std::set<T>

Collection of unique elements maintained in sorted order, typically backed by a balanced binary search tree (red-black tree). Supports range queries and O(log n) min/max.
#include <set>
#include <iostream>
// std::set - red-black tree, sorted unique elements
std::set<int> s = {5, 3, 8, 1, 9};
// Internal order: 1, 3, 5, 8, 9
s.insert(4); // O(log n)
s.erase(3); // O(log n)
bool has = s.contains(8); // O(log n) - C++20
auto minIt = s.begin(); // smallest -> 1
auto maxIt = std::prev(s.end()); // largest -> 9
// Range query: [4, 8]
auto lo = s.lower_bound(4);
auto hi = s.upper_bound(8);
for (auto it = lo; it != hi; ++it)
std::cout << *it << " "; // 4 5 8
// Interview: sliding window with sorted order
// Use std::multiset if duplicates are needed

## std::map<K,V>

Key-value pairs maintained in sorted key order, typically backed by a balanced BST. Enables ordered iteration and range lookups that hash maps cannot provide.
#include <map>
#include <string>
#include <iostream>
// std::map - red-black tree, sorted by key
std::map<std::string, int> m = {
{"banana", 2}, {"apple", 5}, {"cherry", 1}
};
m["date"] = 3; // O(log n) insert
m.erase("cherry"); // O(log n)
// Lookup
if (auto it = m.find("apple"); it != m.end())
std::cout << it->second; // O(log n) -> 5
// Iterates in sorted key order automatically
for (auto& [key, val] : m)
std::cout << key << ": " << val << "\n";
// apple: 5, banana: 2, date: 3
// lower_bound/upper_bound for range queries - O(log n)
auto it = m.lower_bound("b"); // -> "banana"

## std::priority_queue<T>

Collection where elements are dequeued by priority rather than insertion order. Typically implemented as a binary heap. Essential for shortest-path algorithms and top-K problems.
#include <queue>
#include <vector>
#include <functional> // std::greater
// std::priority_queue - max-heap by default
std::priority_queue<int> maxPQ;
maxPQ.push(3); // O(log n)
maxPQ.push(1);
maxPQ.push(2);
int top = maxPQ.top(); // O(1) -> 3 (max)
maxPQ.pop(); // O(log n)
// Min-heap using greater<>
std::priority_queue<int, std::vector<int>, std::greater<>> minPQ;
minPQ.push(3); minPQ.push(1); minPQ.push(2);
int minTop = minPQ.top(); // O(1) -> 1 (min)
// Interview pattern: K closest points
auto cmp = [](auto& a, auto& b) {
return a[0]*a[0]+a[1]*a[1] < b[0]*b[0]+b[1]*b[1];
};
std::priority_queue<std::vector<int>, std::vector<std::vector<int>>,
decltype(cmp)> pq(cmp); // max-heap by distance: pop the farthest

## mutex + unordered_map

The C++ standard library has no built-in concurrent hash map. A common approach pairs std::unordered_map with std::shared_mutex, so multiple readers proceed in parallel while writers take exclusive access; higher-throughput designs shard the map or use finer-grained (per-bucket) locking or lock-free techniques.
#include <shared_mutex>
#include <unordered_map>
#include <optional>
#include <string>
// No built-in concurrent map - use shared_mutex + map
class ConcurrentMap {
std::unordered_map<std::string, int> map_;
mutable std::shared_mutex mtx_;
public:
void set(const std::string& k, int v) {
std::unique_lock lock(mtx_); // exclusive write
map_[k] = v; // O(1)
}
std::optional<int> get(const std::string& k) const {
std::shared_lock lock(mtx_); // shared read
auto it = map_.find(k); // O(1)
return it != map_.end() ? std::optional(it->second) : std::nullopt;
}
void erase(const std::string& k) {
std::unique_lock lock(mtx_);
map_.erase(k); // O(1)
}
// Multiple readers can hold shared_lock simultaneously
// Writers get exclusive access via unique_lock
};

## std::span<T>

Zero-copy view over a contiguous region of memory (C++20). Lets you reference a portion of an array or buffer without allocating new memory. Critical for performance-sensitive parsing and processing.
#include <span> // C++20
#include <vector>
#include <iostream>
// std::span - non-owning view over contiguous memory
int data[] = {1, 2, 3, 4, 5};
std::span<int> view(data); // O(1) - no copy
view[0] = 42; // mutates original array
// Subspan - zero-copy slice
auto slice = view.subspan(1, 3); // O(1) -> [2, 3, 4]
// Works with vector, array, C arrays
std::vector<int> vec = {10, 20, 30};
std::span<int> vecView(vec);
// Fixed-extent (size known at compile time)
std::span<int, 5> fixed(data); // compile-time bounds
// Common pattern: function taking any contiguous container
void process(std::span<const int> s) {
for (int v : s) std::cout << v;
}
process(data); // C array
process(vec); // vector - both work seamlessly

Average-case time complexity (* = amortized):
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Dynamic Array | O(1) | O(n) | O(1)* | O(n) |
| Stack | O(n) | O(n) | O(1)* | O(1) |
| Queue | O(n) | O(n) | O(1)* | O(1) |
| Hash Map | O(1) | O(1) | O(1)* | O(1) |
| Hash Set | N/A | O(1) | O(1)* | O(1) |
| Linked List | O(n) | O(n) | O(1) | O(1) |
| Sorted Set | O(n) | O(log n) | O(log n) | O(log n) |
| Sorted Map | O(log n) | O(log n) | O(log n) | O(log n) |
| Priority Queue | O(n) | O(n) | O(log n) | O(log n) |
| Concurrent Map | O(1) | O(1) | O(1)* | O(1) |
| Memory View | O(1) | O(n) | N/A | N/A |
| I need to... | Use |
|---|---|
| Store items by index, resize dynamically | List / Dynamic Array |
| Map keys to values with O(1) lookup | HashMap / Dictionary |
| Track unique items, check existence in O(1) | HashSet / Set |
| Last-in-first-out (undo, DFS, brackets) | Stack |
| First-in-first-out (BFS, task queues) | Queue |
| Keep elements sorted at all times | SortedSet / TreeSet |
| Process items by priority (Dijkstra, top-K) | PriorityQueue / Heap |
| Insert/delete at a known position in O(1) | LinkedList |
| Sorted key-value pairs | SortedDictionary / TreeMap |
| Thread-safe shared cache | ConcurrentDictionary |
| Slice arrays/strings without copying | Span / Slice / memoryview |
The most commonly used are dynamic arrays (List/ArrayList/vector), hash maps (Dictionary/HashMap/dict), and hash sets. For interviews, also know stacks, queues, trees, and priority queues. These cover 90%+ of coding interview problems.
Start with the dynamic array and hash map. Together they solve the majority of interview problems. Then learn stacks (for DFS, bracket matching) and queues (for BFS). After that, tackle trees, heaps, and graphs.
Big O does not change across languages: it measures algorithmic complexity, not language-specific performance. A hash map lookup is O(1) whether you use Python dict, Java HashMap, or C# Dictionary. Constant factors differ (C++ is faster than Python in wall-clock time), but the Big O is the same.
std::priority_queue is a max-heap by default. For a min-heap, use std::greater<T> as the comparator.