Time Complexity of Data Structures

This is a quick reference for common data structure tradeoffs. Unless stated otherwise, n is the number of stored elements, m is the number of inserted or concatenated elements, k is the key length, and hash table operations are average-case under a good hash function and normal load factor.

Random Access ADT

Random access means retrieving or updating the item at an index.

| Data Structure | Access | Search | Insert begin | Insert middle | Insert end | Delete begin | Delete middle | Delete end | Concat | Space |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Static array | O(1) | O(n) | O(n) | O(n) | O(1) if space exists | O(n) | O(n) | O(1) | O(n + m) | O(n) |
| Dynamic array | O(1) | O(n) | O(n) | O(n) | O(1) amortized | O(n) | O(n) | O(1) | O(n + m) | O(n) |
| Singly linked list | O(n) | O(n) | O(1) | O(n) | O(1) with tail, O(n) without | O(1) | O(n) | O(n) | O(1) with tail | O(n) |
| Doubly linked list | O(n) | O(n) | O(1) | O(n) | O(1) with tail | O(1) | O(n) | O(1) with tail | O(1) with tail | O(n) |
| Hashed array / hash map by index | O(1) average | O(n) by value | O(n) average | O(1) average if order does not matter | O(1) average | O(n) average | O(1) average if order does not matter | O(1) average | O(m) average | O(n) |
| RRB tree / persistent vector | O(log n) | O(n) | O(log n) | O(log n) | O(1) amortized to O(log n) | O(log n) | O(log n) | O(1) amortized to O(log n) | O(log n) | O(n) |

Use arrays or dynamic arrays when index lookup and cache locality matter. Use linked lists when stable node identity and cheap splicing matter more than indexing. Use persistent vectors or RRB trees when old versions must remain usable after updates.
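To make the dynamic array row concrete, here is a minimal sketch of a doubling dynamic array. The class name and growth factor are illustrative choices, not taken from the text above; the point is that the occasional O(n) copy is rare enough that append stays O(1) amortized.

```python
class DynamicArray:
    """Minimal dynamic array: doubling growth gives amortized O(1) append."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, item):
        if self._size == self._capacity:   # full: one O(n) copy, but rare
            self._capacity *= 2
            new_data = [None] * self._capacity
            new_data[: self._size] = self._data
            self._data = new_data
        self._data[self._size] = item      # common case: O(1)
        self._size += 1

    def __getitem__(self, i):              # O(1) random access
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._data[i]

    def __len__(self):
        return self._size
```

Because capacity doubles, n appends trigger copies totaling at most about 2n element moves, which is the amortized-O(1) argument in miniature.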

Stack ADT

A stack is last-in, first-out: push and pop happen at the same end.

| Data Structure | Peek | Push | Pop | Search | Space |
| --- | --- | --- | --- | --- | --- |
| Dynamic array | O(1) | O(1) amortized | O(1) | O(n) | O(n) |
| Singly linked list | O(1) | O(1) | O(1) | O(n) | O(n) |

Dynamic arrays are usually the default stack implementation because they are compact and cache-friendly. Linked-list stacks are useful when node allocation or stable node references are part of the design.
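A linked-list stack can be sketched in a few lines; the `Node` and `LinkedStack` names are illustrative. Note that each pushed node keeps a stable identity that outside code can hold a reference to, which is the property the paragraph above calls out.

```python
class Node:
    __slots__ = ("value", "next")

    def __init__(self, value, next=None):
        self.value = value
        self.next = next


class LinkedStack:
    """Singly linked stack: O(1) push/pop, nodes keep stable identity."""

    def __init__(self):
        self._head = None
        self._size = 0

    def push(self, value):
        self._head = Node(value, self._head)  # new node becomes the top
        self._size += 1

    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")
        node = self._head
        self._head = node.next
        self._size -= 1
        return node.value

    def peek(self):
        if self._head is None:
            raise IndexError("peek on empty stack")
        return self._head.value

    def __len__(self):
        return self._size
```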

Queue ADT

A queue is first-in, first-out: enqueue at the back and dequeue at the front.

| Data Structure | Peek front | Enqueue | Dequeue | Search | Space |
| --- | --- | --- | --- | --- | --- |
| Circular buffer / ring buffer | O(1) | O(1) amortized | O(1) | O(n) | O(n) |
| Singly linked list with head and tail | O(1) | O(1) | O(1) | O(n) | O(n) |
| Two-stack queue | O(1) amortized | O(1) | O(1) amortized | O(n) | O(n) |

Use a ring buffer for a bounded or mostly bounded queue. Use a linked queue when the queue can grow unpredictably and pointer allocation cost is acceptable. Use two stacks when implementing a queue on top of stack primitives.
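The two-stack queue can be sketched as follows, using Python lists as the underlying stacks (the class name is illustrative). Each element is pushed and popped at most twice in total, which is why dequeue is amortized O(1) even though a single refill can cost O(n).

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks; dequeue is amortized O(1)."""

    def __init__(self):
        self._in = []    # receives new items
        self._out = []   # holds oldest items in reversed order

    def enqueue(self, item):
        self._in.append(item)            # plain stack push

    def dequeue(self):
        if not self._out:                # refill only when _out is empty:
            while self._in:              # each item moves at most once
                self._out.append(self._in.pop())
        if not self._out:
            raise IndexError("dequeue from empty queue")
        return self._out.pop()           # oldest item is now on top
```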

Deque ADT

A deque supports pushing and popping from both ends.

| Data Structure | Access | Push front | Push back | Pop front | Pop back | Search | Space |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Circular buffer | O(1) | O(1) amortized | O(1) amortized | O(1) | O(1) | O(n) | O(n) |
| Doubly linked list | O(n) | O(1) | O(1) | O(1) | O(1) | O(n) | O(n) |
| Linked block deque | O(1) to O(n) depending on indexing | O(1) amortized | O(1) amortized | O(1) | O(1) | O(n) | O(n) |

Circular buffers are fast and compact. Doubly linked lists make node splicing cheap. Linked block deques, like many standard-library deques, reduce the reallocation cost of large contiguous arrays.
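Python's `collections.deque` is a concrete linked block deque: internally it is a doubly linked list of fixed-size blocks, so both ends are O(1) but indexing into the middle is O(n).

```python
from collections import deque

d = deque()
d.append(1)        # push back, O(1)
d.append(2)
d.appendleft(0)    # push front, O(1)
assert list(d) == [0, 1, 2]

assert d.pop() == 2       # pop back, O(1)
assert d.popleft() == 0   # pop front, O(1)
# d[i] works, but costs O(n) in CPython, unlike an array-backed deque.
```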

Sorted Access ADT

Sorted access means maintaining an order so that minimum, maximum, range scans, predecessor, and successor queries are efficient.

| Data Structure | Search | Insert | Delete | Min / Max | Predecessor / Successor | Range query of r results | Concat / Merge | Space |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sorted array | O(log n) | O(n) | O(n) | O(1) | O(log n) | O(log n + r) | O(n + m) | O(n) |
| Sorted linked list | O(n) | O(n) | O(n) | O(1) | O(n) | O(n + r) | O(n + m) | O(n) |
| Balanced BST | O(log n) | O(log n) | O(log n) | O(log n), or O(1) cached | O(log n) | O(log n + r) | O(n + m) usually | O(n) |
| Skip list | O(log n) expected | O(log n) expected | O(log n) expected | O(1) | O(log n) expected | O(log n + r) expected | O(n + m) usually | O(n) |
| B-tree / B+ tree | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(log n + r) | O(n + m) usually | O(n) |

Use sorted arrays when reads dominate writes. Use balanced trees or skip lists when inserts and deletes are common. Use B-trees and B+ trees when disk, pages, or cache-line-sized blocks matter.
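The sorted-array row maps directly onto Python's `bisect` module: binary search is O(log n), but insertion still shifts elements in O(n).

```python
import bisect

sorted_arr = [10, 20, 30, 40, 50]

# Search: O(log n)
i = bisect.bisect_left(sorted_arr, 30)
assert i == 2 and sorted_arr[i] == 30

# Insert keeps order but shifts elements: O(n)
bisect.insort(sorted_arr, 35)
assert sorted_arr == [10, 20, 30, 35, 40, 50]

# Predecessor of 35: largest element below it, found via O(log n) search
j = bisect.bisect_left(sorted_arr, 35)
assert sorted_arr[j - 1] == 30

# Range query for [20, 40): O(log n) to locate the bounds, then O(r) to copy
lo = bisect.bisect_left(sorted_arr, 20)
hi = bisect.bisect_left(sorted_arr, 40)
assert sorted_arr[lo:hi] == [20, 30, 35]
```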

Hash Table ADT

Hash tables optimize key lookup and update, not ordered traversal.

| Operation | Average | Worst case |
| --- | --- | --- |
| Insert | O(1) | O(n) |
| Lookup | O(1) | O(n) |
| Delete | O(1) | O(n) |
| Iterate all items | O(n) | O(n) |
| Resize / rehash | O(n) | O(n) |
| Ordered range query | O(n log n) | O(n log n) |
| Space | O(n) | O(n) |

The worst case appears when many keys collide or when an implementation is forced to resize. Some hash maps use balanced trees inside buckets, which can improve collision-heavy lookup from O(n) to O(log n).
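A separate-chaining hash map makes the average/worst split visible: operations touch one bucket, so they are O(1) on average but O(n) if every key lands in the same bucket. This sketch (class name and load-factor threshold are illustrative) also shows the O(n) rehash.

```python
class ChainedHashMap:
    """Separate-chaining hash map: O(1) average, O(bucket length) worst."""

    def __init__(self, capacity=8):
        self._buckets = [[] for _ in range(capacity)]
        self._size = 0

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self._size += 1
        if self._size > 2 * len(self._buckets):  # keep load factor bounded
            self._rehash()

    def get(self, key, default=None):
        for k, v in self._bucket(key):   # scan one bucket only
            if k == key:
                return v
        return default

    def _rehash(self):
        """Double the bucket count and reinsert everything: O(n)."""
        old = self._buckets
        self._buckets = [[] for _ in range(2 * len(old))]
        self._size = 0
        for bucket in old:
            for k, v in bucket:
                self.put(k, v)
```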

Heap ADT

Heaps optimize repeated access to the highest or lowest priority item.

| Data Structure | Find min / max | Insert | Extract min / max | Delete arbitrary item | Decrease / increase key | Merge | Space |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Binary heap | O(1) | O(log n) | O(log n) | O(n) unless indexed, O(log n) if indexed | O(n) unless indexed, O(log n) if indexed | O(n + m) | O(n) |
| d-ary heap | O(1) | O(log_d n) | O(d log_d n) | O(n) unless indexed, O(d log_d n) if indexed | O(n) unless indexed, O(log_d n) if indexed | O(n + m) | O(n) |
| Binomial heap | O(log n) | O(1) amortized | O(log n) | O(log n) | O(log n) | O(log n) | O(n) |
| Fibonacci heap | O(1) | O(1) amortized | O(log n) amortized | O(log n) amortized | O(1) amortized | O(1) | O(n) |

Binary heaps are the practical default priority queue. Fibonacci heaps matter mostly in algorithm analysis, especially when many decrease-key operations dominate extract-min operations.
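Python's `heapq` module is a binary min-heap over a plain list, matching the binary heap row: O(n) build, O(1) find-min, O(log n) insert and extract.

```python
import heapq

heap = [5, 1, 4, 2, 3]
heapq.heapify(heap)                # build the heap in place: O(n)

assert heap[0] == 1                # find-min: O(1), peek the root
heapq.heappush(heap, 0)            # insert: O(log n)
assert heapq.heappop(heap) == 0    # extract-min: O(log n)
assert heapq.heappop(heap) == 1
```

There is no O(log n) delete-arbitrary or decrease-key here, because `heapq` keeps no index from item to position, which is exactly the "unless indexed" caveat in the table.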

Trie ADT

Tries optimize operations on strings or sequences by key length instead of by the number of keys.

| Operation | Complexity | Notes |
| --- | --- | --- |
| Insert key | O(k) | k is the key length. |
| Lookup exact key | O(k) | Does not depend directly on n. |
| Delete key | O(k) | May also prune unused nodes. |
| Prefix lookup | O(k) | Find the node for the prefix. |
| Enumerate prefix results | O(k + r) | r is the number of returned characters or keys. |
| Space | O(total key characters) | Can be high without compression. |

Use a trie when prefix operations matter: autocomplete, lexicographic enumeration, routing tables, and dictionaries. Use a radix tree or compressed trie when long one-child chains waste too much memory.
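A minimal uncompressed trie can be sketched as a tree of per-character dictionaries (the class names are illustrative). Insert and exact lookup walk k nodes; prefix enumeration walks k nodes to reach the prefix, then only the subtree holding the r results.

```python
class TrieNode:
    __slots__ = ("children", "is_key")

    def __init__(self):
        self.children = {}     # character -> child node
        self.is_key = False    # True if a stored key ends here


class Trie:
    """Operations cost O(k) in key length k, not in the number of keys."""

    def __init__(self):
        self._root = TrieNode()

    def insert(self, key):
        node = self._root
        for ch in key:                                    # O(k)
            node = node.children.setdefault(ch, TrieNode())
        node.is_key = True

    def contains(self, key):
        node = self._find(key)
        return node is not None and node.is_key

    def with_prefix(self, prefix):
        """Yield stored keys under a prefix: O(k + r)."""
        node = self._find(prefix)
        if node is None:
            return
        stack = [(node, prefix)]
        while stack:                   # walks only the matching subtree
            node, word = stack.pop()
            if node.is_key:
                yield word
            for ch, child in node.children.items():
                stack.append((child, word + ch))

    def _find(self, key):
        node = self._root
        for ch in key:
            node = node.children.get(ch)
            if node is None:
                return None
        return node
```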

By Use

Geographic Data

Use R-trees for rectangles and spatial indexes, quadtrees for recursively partitioned 2D space, k-d trees for nearest-neighbor queries over points, and geohashes when a spatial key needs to work with ordinary string or B-tree indexing.

Sorted Data

Use sorted arrays for mostly static data, balanced BSTs or skip lists for mutable in-memory sorted sets, and B-trees or B+ trees for database and filesystem indexes.

Least Frequently Used

An LFU cache usually combines a hash map from key to node with frequency buckets. With a hash map plus doubly linked lists per frequency, get, put, and eviction can be O(1) average. With a min-heap, eviction is O(log n) and stale heap entries need cleanup.
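A sketch of the O(1) design follows, with an `OrderedDict` per frequency standing in for the doubly linked lists (class name and tie-breaking rule are illustrative; ties within a frequency evict the least recently used entry here).

```python
from collections import defaultdict, OrderedDict


class LFUCache:
    """LFU cache: hash maps plus per-frequency buckets, O(1) average ops."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._values = {}                         # key -> value
        self._freq = {}                           # key -> use count
        self._buckets = defaultdict(OrderedDict)  # count -> keys, LRU order
        self._min_freq = 0

    def _touch(self, key):
        """Move a key from its frequency bucket to the next one: O(1)."""
        f = self._freq[key]
        del self._buckets[f][key]
        if not self._buckets[f]:
            del self._buckets[f]
            if self._min_freq == f:
                self._min_freq = f + 1
        self._freq[key] = f + 1
        self._buckets[f + 1][key] = None

    def get(self, key, default=None):
        if key not in self._values:
            return default
        self._touch(key)
        return self._values[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self._values:
            self._values[key] = value
            self._touch(key)
            return
        if len(self._values) >= self.capacity:
            # Evict the oldest key in the least-frequent bucket: O(1).
            victim, _ = self._buckets[self._min_freq].popitem(last=False)
            if not self._buckets[self._min_freq]:
                del self._buckets[self._min_freq]
            del self._values[victim]
            del self._freq[victim]
        self._values[key] = value
        self._freq[key] = 1
        self._buckets[1][key] = None
        self._min_freq = 1
```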

Caching

An LRU cache usually combines a hash map with a doubly linked list: the hash map gives O(1) average lookup, and the list gives O(1) movement to the front and eviction from the back.
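In Python, `OrderedDict` already fuses the hash map with a doubly linked list of entries, so the whole design collapses into a few lines (the class name is illustrative).

```python
from collections import OrderedDict


class LRUCache:
    """LRU cache: OrderedDict is a hash map plus a linked recency list,
    so get, put, and eviction are all O(1) average."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used
```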

Indexing

Use hash indexes for exact-match lookup, B-tree indexes for ordered lookup and range scans, LSM-tree indexes for write-heavy storage engines, inverted indexes for text search, and bitmap indexes for low-cardinality analytical filters.

Text Editing

Use a gap buffer for single-cursor editing, a piece table for preserving original buffers and undo history, and a rope for very large text where concatenation, splitting, and substring operations need to stay efficient.
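A gap buffer can be sketched with two stacks of characters around the cursor (the class name is illustrative): edits at the cursor are O(1), while moving the cursor a distance of d characters costs O(d).

```python
class GapBuffer:
    """Gap buffer sketch: O(1) edits at the cursor, O(d) cursor moves."""

    def __init__(self, text=""):
        self._before = list(text)   # characters left of the cursor
        self._after = []            # characters right of the cursor, reversed

    def move_to(self, pos):
        """Shift characters across the gap until the cursor sits at pos."""
        while len(self._before) > pos:
            self._after.append(self._before.pop())
        while len(self._before) < pos and self._after:
            self._before.append(self._after.pop())

    def insert(self, ch):
        self._before.append(ch)     # O(1) at the cursor

    def delete(self):
        if self._before:
            self._before.pop()      # backspace: O(1) at the cursor

    def text(self):
        return "".join(self._before) + "".join(reversed(self._after))
```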