Worksheet 1: Analysis and Design of Algorithms
Big-O notation · Time complexity · Space complexity
Time Complexity & Big-O Notation
When analysing an algorithm, we consider two properties: time complexity (how long it takes to run) and space complexity (how much memory it uses). Big-O notation expresses the upper bound of time taken relative to the input size n — it lets us predict how an algorithm scales.
The key rule: drop constants and lower-order terms. An algorithm with 5n² + 10n + 2 steps is O(n²) because as n grows large, only the dominant term matters.
| Big-O | Name | Description | Example |
|---|---|---|---|
| O(1) | Constant | Independent of input size | Hashing, array access |
| O(log n) | Logarithmic | Halves the problem each step | Binary search |
| O(n) | Linear | Directly proportional to n | Linear search, single loop |
| O(n log n) | Linearithmic | Divide-and-conquer with merge | Merge sort |
| O(n²) | Quadratic (polynomial) | Nested loops over n | Bubble sort, insertion sort |
| O(2ⁿ) | Exponential | Doubles with each extra element | Recursive Fibonacci |
| O(n!) | Factorial | All permutations | Travelling Salesman brute force |
Efficiency order (best → worst): O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!)
Tractable problems have polynomial-time solutions or better (≤ O(nᵏ)). Intractable problems are theoretically solvable but not in reasonable time — typically O(2ⁿ) or O(n!).
Space complexity uses the same Big-O notation but measures memory usage. To reduce it, avoid making copies of data — operate in-place. To reduce time complexity, minimise nested loops.
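To make the in-place idea concrete, here is a minimal Python sketch (the function names are illustrative, not from the worksheet) contrasting O(n) extra space with O(1) extra space for the same task:

```python
def reverse_copy(data):
    # O(n) extra space: builds a brand-new reversed list
    return data[::-1]

def reverse_in_place(data):
    # O(1) extra space: swaps elements within the existing list
    left, right = 0, len(data) - 1
    while left < right:
        data[left], data[right] = data[right], data[left]
        left += 1
        right -= 1
    return data

print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
```

Both run in O(n) time; only the memory footprint differs.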
Figure 1 — Growth rates of common time complexities (n = 1 to 20)
Complete the table for f(n) = 5n² + 10n + 2 to show that as n grows, only the n² term dominates.

| n | n² | 5n² | 10n | 2 | f(n) |
|---|---|---|---|---|---|
| 10 | | | | 2 | |
| 100 | | | | 2 | |
| 1,000 | | | | 2 | |
| 10,000 | | | | 2 | |
| n | n² | 5n² | 10n | 2 | f(n) |
|---|---|---|---|---|---|
| 10 | 100 | 500 | 100 | 2 | 602 |
| 100 | 10,000 | 50,000 | 1,000 | 2 | 51,002 |
| 1,000 | 1,000,000 | 5,000,000 | 10,000 | 2 | 5,010,002 |
| 10,000 | 100,000,000 | 500,000,000 | 100,000 | 2 | 500,100,002 |
Big-O = O(n) — single loop, linear time.
Big-O = O(n²) — nested loops each of size n give polynomial time.
Big-O = O(n)
Comparisons follow n(n−1)/2: 3(2)/2 = 3 · 4(3)/2 = 6 · 5(4)/2 = 10 · 6(5)/2 = 15 ✓
O(n²)
| n | Recursive (ms) | Iterative (ms) |
|---|---|---|
| 10 | 0.31 | 0.18 |
| 20 | 16.24 | 0.38 |
| 30 | 1,777 | 0.66 |
| 35 | 20,109 | 0.92 |
| 100 (est.) | Several million years | ~2 ms |
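The timings above come from the two standard Fibonacci implementations; a minimal sketch of each (standard definitions, not the worksheet's own code):

```python
def fib_recursive(n):
    # O(2^n): each call spawns two further calls
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # O(n): a single loop carrying two running values
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iterative(30))  # 832040
```

Both return the same values; only the growth rate of the work differs, which is why the recursive column explodes in the table.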
(a) ✅ TRUE — Big-O measures time or space complexity growth.
(b) ✅ TRUE — approximately; may not hold for very small n.
(c) ❌ FALSE — there may be many statements, but the count stays constant regardless of n.
(d) ❌ FALSE — constant coefficients are ignored; O(100n) = O(n).
(e) ❌ FALSE — polynomial-time algorithms are tractable and widely used.
(f) ✅ TRUE — the hash function executes in constant time regardless of data size.
(g) ❌ FALSE — time complexity is a fixed property of the algorithm; it does not change.
(h) ✅ TRUE — halving the problem each step gives logarithmic growth.
Worksheet 2: Searching Algorithms
Linear search · Binary search · Binary search trees
Searching Algorithms — Theory
Searching algorithms find a specified element within a data structure. The choice of algorithm depends on whether data is sorted and how large the dataset is.
| Algorithm | Requires sorted data? | Big-O | How it works |
|---|---|---|---|
| Linear Search | No | O(n) | Check each element one by one until found or end reached |
| Binary Search | Yes | O(log n) | Find midpoint; discard half; repeat — halves the search space each step |
Linear Search pseudocode:
function linearSearch(list, item)
    index = -1
    i = 0
    found = False
    while i < length(list) and found = False
        if list[i] = item then
            index = i
            found = True
        endif
        i = i + 1
    endwhile
    return index
endfunction          ← O(n): single while loop
Binary Search pseudocode:
function binarySearch(list, item)
    found = False
    index = -1
    first = 0
    last = length(list) - 1
    while first <= last and found = False
        midpoint = int((first + last) / 2)
        if list[midpoint] = item then
            found = True
            index = midpoint
        else
            if list[midpoint] < item then
                first = midpoint + 1
            else
                last = midpoint - 1
            endif
        endif
    endwhile
    return index          ← O(log n): list halved each iteration
endfunction
Why O(log n)? Each iteration halves the remaining data. For 2ⁿ items, only n+1 comparisons are needed. Doubling the list size adds just one extra comparison.
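A Python sketch of the pseudocode above, extended with a comparison counter (an addition of ours, not part of the worksheet) so the log₂(n) + 1 bound can be observed directly:

```python
def binary_search(items, target):
    # Returns (index, comparisons); index is -1 if target is absent
    first, last = 0, len(items) - 1
    comparisons = 0
    while first <= last:
        midpoint = (first + last) // 2
        comparisons += 1
        if items[midpoint] == target:
            return midpoint, comparisons
        elif items[midpoint] < target:
            first = midpoint + 1      # discard the lower half
        else:
            last = midpoint - 1       # discard the upper half
    return -1, comparisons

data = list(range(64))                # 64 sorted items
index, count = binary_search(data, 63)
print(count)                          # 7 = log2(64) + 1
```

Doubling the list to 128 items would raise the worst case by exactly one comparison.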
| List size | 8 | 16 | 32 | 37 | 64 | 2ⁿ |
|---|---|---|---|---|---|---|
| Max comparisons | 4 | 5 | 6 | 6 | 7 | n + 1 |
Pattern: log₂(list size) + 1
(1) False · (2) True · (3) index = midpoint · (4) midpoint + 1 · (5) last = midpoint − 1
Binary Search Trees (BST)
A BST stores data so that for any node: all values in the left subtree are smaller, all values in the right subtree are larger. Searching traverses from root downwards, making O(log n) comparisons on a balanced tree.
When implemented as an array, each node stores: Left pointer | Data | Right pointer. A pointer of −1 means no child (null).
| Index | Left | Data | Right |
|---|---|---|---|
| 0 | 1 | K | 3 |
| 1 | 7 | E | 2 |
| 2 | -1 | H | -1 |
| 3 | 5 | R | 4 |
| 4 | 6 | U | 9 |
| 5 | -1 | N | -1 |
| 6 | -1 | T | -1 |
| 7 | -1 | B | 8 |
| 8 | -1 | C | -1 |
| 9 | -1 | Z | -1 |
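The array form in the table can be searched by following the pointers; a minimal Python sketch, storing each node as a (left, data, right) tuple exactly as in the table:

```python
# Array representation of the BST above: (left, data, right); -1 = no child
tree = [(1, "K", 3), (7, "E", 2), (-1, "H", -1), (5, "R", 4),
        (6, "U", 9), (-1, "N", -1), (-1, "T", -1), (-1, "B", 8),
        (-1, "C", -1), (-1, "Z", -1)]

def bst_search(tree, item):
    # Start at the root (index 0) and follow left/right pointers
    index = 0
    while index != -1:
        left, data, right = tree[index]
        if item == data:
            return index              # found: return the node's index
        index = left if item < data else right
    return -1                         # fell off the tree: not present

print(bst_search(tree, "N"))  # 5
```

Each step descends one level, so a balanced tree of n nodes needs about log₂(n) comparisons.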
Worksheet 3: Bubble Sort and Insertion Sort
Comparison-based sorting algorithms · O(n²) · Practical tracing
Sorting Algorithms — Theory
Sorting algorithms arrange elements into a logical order, usually ascending; descending order is obtained by reversing the inequality signs in the comparisons.
Bubble Sort — repeatedly compares adjacent pairs and swaps if out of order. The largest unsorted element "bubbles" to the end of the unsorted portion each pass. After each pass, one more element is in its correct final position.
Optimised bubble sort uses a noSwap flag — if a complete pass makes no swaps, the list is already sorted and the algorithm terminates early.
Insertion Sort — builds a sorted sequence from left to right. The iᵗʰ iteration inserts the iᵗʰ element into its correct position in the already-sorted left portion. Unlike bubble sort, the sorted elements on the left are not necessarily the smallest elements overall.
Both have worst-case O(n²). For nearly-sorted data, insertion sort approaches O(n), as does bubble sort with the noSwap optimisation.
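A Python sketch of the optimised bubble sort described above (the noSwap flag triggers early termination):

```python
def bubble_sort(items):
    n = len(items)
    for p in range(n - 1):
        no_swap = True
        # The unsorted portion shrinks by one element each pass
        for j in range(n - 1 - p):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                no_swap = False
        if no_swap:
            break                     # a full pass with no swaps: sorted
    return items

print(bubble_sort([15, 73, 29, 66, 35, 11, 43, 21]))
```

On an already-sorted list the first pass makes no swaps, so the loop exits after one pass: O(n) best case.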
if names[j] > names[j + 1]
temp = names[j]
names[j] = names[j + 1]
names[j+1] = temp

| Pass | Array | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Start | 15 | 73 | 29 | 66 | 35 | 11 | 43 | 21 |
| 1 | | | | | | | | |
| 2 | | | | | | | | |
| 3 | | | | | | | | |
| 4 | | | | | | | | |
| 5 | | | | | | | | |
| 6 | | | | | | | | |
| 7 | | | | | | | | |
| Pass | Array |
|---|---|
| Pass 1 | 15 73 29 66 35 11 43 21 |
| Pass 2 | 15 29 73 66 35 11 43 21 |
| Pass 3 | 15 29 66 73 35 11 43 21 |
| Pass 4 | 15 29 35 66 73 11 43 21 |
| Pass 5 | 11 15 29 35 66 73 43 21 |
| Pass 6 | 11 15 29 35 43 66 73 21 |
| Pass 7 | 11 15 21 29 35 43 66 73 |
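The trace above follows insertion sort: each pass inserts one element into the sorted left portion. A minimal sketch:

```python
def insertion_sort(items):
    # The i-th pass inserts items[i] into the already-sorted left portion
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]   # shift larger elements one place right
            j -= 1
        items[j + 1] = current        # drop current into the gap
    return items

print(insertion_sort([15, 73, 29, 66, 35, 11, 43, 21]))
# [11, 15, 21, 29, 35, 43, 66, 73]
```

For nearly-sorted input the inner while loop rarely runs, which is why insertion sort approaches O(n) in that case.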
Worksheet 4: Merge Sort and Quick Sort
Divide-and-conquer · O(n log n) · Recursion
Merge Sort & Quick Sort — Theory
Merge Sort is a recursive divide-and-conquer algorithm with guaranteed O(n log n) performance. It splits the list in half repeatedly until each sub-list has one element (trivially sorted), then merges sub-lists back in order. The merge phase compares the front elements of two sorted sub-lists, always taking the smaller.
Quick Sort selects a pivot element and partitions the list into elements less than the pivot (left) and greater than the pivot (right). The pivot ends up in its correct final position. The algorithm recurses on both sides. Average case is O(n log n), worst case (already sorted data with bad pivot choice) is O(n²).
| Algorithm | Best | Average | Worst | Space |
|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
For large datasets, merge sort and quick sort dramatically outperform bubble/insertion sort. For small n (<10), the simpler sorts may be faster due to lower overhead.
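A minimal Python sketch of merge sort as described above (split to single elements, then merge by always taking the smaller front element):

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is trivially sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge phase: compare the front elements of the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # one side may still have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([1, 3, 9, 4, 6, 8]))  # [1, 3, 4, 6, 8, 9]
```

The slicing here costs O(n) extra space per level, matching the O(n) space entry in the table.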
After merging pairs into 4-element lists:
Left: Ava, Jessica, Olivia, Sophie
Right: Amelia, Isla, Lily, Poppy
Final merge: Amelia, Ava, Isla, Jessica, Lily, Olivia, Poppy, Sophie
| i | j | k | L[i] | R[j] | alist[k] |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 4 | [1] |
| i | j | k | L[i] | R[j] | alist[k] |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 4 | [1] |
| 1 | 0 | 1 | 3 | 4 | [1,3] |
| 2 | 0 | 2 | 9 | 4 | [1,3,4] |
| 2 | 1 | 3 | 9 | 6 | [1,3,4,6] |
| 2 | 2 | 4 | 9 | 8 | [1,3,4,6,8] |
| — | — | 5 | — | — | [1,3,4,6,8,9] |
34 56 23 81 28 66 35 17 88 37 18 50 — P = 34; L stops at 56, R at 18. Swap 56↔18:
34 18 23 81 28 66 35 17 88 37 56 50 — L stops at 81, R at 17. Swap 81↔17:
34 18 23 17 28 66 35 81 88 37 56 50 — L stops at 66, R at 28; pointers cross (R < L). Swap pivot 34 with R (28):
28 18 23 17 34 35 66 81 88 37 56 50 — 34 is now in its correct final position (index 4). Recurse on left [28, 18, 23, 17] and right [35, 66, 81, 88, 37, 56, 50].
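A sketch of the partition scheme traced above (first element as pivot, L and R scan pointers, pivot swapped into place when the pointers cross); this is our reconstruction of the worksheet's scheme, not its own code:

```python
def quick_sort(items, low=0, high=None):
    if high is None:
        high = len(items) - 1
    if low >= high:
        return items                  # 0 or 1 elements: nothing to do
    pivot = items[low]                # first element is the pivot
    left, right = low + 1, high
    while left <= right:
        while left <= right and items[left] <= pivot:
            left += 1                 # L advances past values <= pivot
        while left <= right and items[right] > pivot:
            right -= 1                # R retreats past values > pivot
        if left < right:
            items[left], items[right] = items[right], items[left]
    # Pointers have crossed: R marks the pivot's final position
    items[low], items[right] = items[right], items[low]
    quick_sort(items, low, right - 1)   # recurse on the left partition
    quick_sort(items, right + 1, high)  # recurse on the right partition
    return items

print(quick_sort([34, 56, 23, 81, 28, 66, 35, 17, 88, 37, 18, 50]))
```

Choosing the first element as pivot is what produces the O(n²) worst case on already-sorted input: every partition is maximally unbalanced.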
Worksheet 5: Graph Traversal Algorithms
Depth-first search · Breadth-first search · Tree traversals
Graph Traversal — Theory
Graph traversal visits every vertex systematically. The two main strategies differ in order of exploration:
| Algorithm | Data structure | Iterative or recursive? | Explores |
|---|---|---|---|
| DFS (Depth-First Search) | Stack (or call stack if recursive) | Both | As deep as possible before backtracking |
| BFS (Breadth-First Search) | Queue | Iterative | All neighbours before going deeper |
BFS node colours: White = unvisited · Grey = queued, not yet fully explored · Black = dequeued and fully explored.
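A minimal BFS sketch using a queue and the colour scheme above (the example graph is hypothetical):

```python
from collections import deque

def bfs(graph, start):
    # White = unvisited, grey = queued, black = dequeued and fully explored
    colour = {node: "white" for node in graph}
    order = []
    queue = deque([start])
    colour[start] = "grey"
    while queue:
        vertex = queue.popleft()
        colour[vertex] = "black"      # fully explored once dequeued
        order.append(vertex)
        for neighbour in graph[vertex]:
            if colour[neighbour] == "white":
                colour[neighbour] = "grey"   # queued, not yet explored
                queue.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

All of a vertex's neighbours are queued before any of them is explored, which is exactly the "all neighbours before going deeper" behaviour in the table.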
Tree traversals (for binary trees, leftmost child first):
- Pre-order: Root → Left subtree → Right subtree
- In-order: Left subtree → Root → Right subtree (gives sorted output for BST)
- Post-order: Left subtree → Right subtree → Root
- Breadth-first: Level by level, left to right
In a recursive DFS, the system call stack automatically stores return addresses, parameters and local variables — no explicit stack management is needed in the program.
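A recursive DFS sketch illustrating this: no explicit stack appears in the code, because each pending recursive call sits on the system call stack (the example graph is hypothetical):

```python
def dfs(graph, vertex, visited=None):
    # The call stack remembers where to resume after backtracking
    if visited is None:
        visited = []
    visited.append(vertex)
    for neighbour in graph[vertex]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)   # go as deep as possible first
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```

Note the order: D is reached before C, because DFS exhausts the branch through B before backtracking.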
vertex = "B", visited = ["A"]

Worksheet 6: Optimisation Algorithms
Dijkstra's algorithm · Tractable and intractable problems · TSP
Dijkstra's Shortest Path Algorithm
Dijkstra's algorithm finds the shortest path from a start node to every other node in a weighted graph (non-negative weights only).
Algorithm steps:
- Assign distance 0 to the start node and ∞ to all others. Add all nodes to a priority queue.
- While the queue is not empty: dequeue the node with the lowest cost (the current node).
- For each neighbour, compute new distance = current distance + edge weight.
- If new distance < the neighbour's current distance, update it (relax the edge).
- Repeat until all nodes are processed.
The priority queue ensures we always process the nearest unvisited node first. Time complexity: O(n²) with a simple array, O((V+E) log V) with a heap.
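The steps above can be sketched with Python's `heapq` as the priority queue (the example graph is hypothetical; real-world code would also track predecessors to recover the paths):

```python
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbour, weight), ...]} with non-negative weights
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    queue = [(0, start)]              # priority queue of (cost, node)
    while queue:
        cost, node = heapq.heappop(queue)   # nearest unvisited node first
        if cost > dist[node]:
            continue                  # stale entry: already relaxed further
        for neighbour, weight in graph[node]:
            new_dist = cost + weight
            if new_dist < dist[neighbour]:  # relax the edge
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return dist

graph = {"A": [("B", 10), ("C", 3)], "B": [("D", 2)],
         "C": [("B", 4), ("D", 8)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 7, 'C': 3, 'D': 9}
```

Here B is reached more cheaply via C (3 + 4 = 7) than directly (10), showing edge relaxation in action.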
Tractable vs Intractable: A problem is tractable if it can be solved in polynomial time O(nᵏ). Intractable problems (e.g. TSP brute force) have exponential/factorial complexity — solvable in theory but not in practice for large n. Dijkstra's is tractable; TSP brute force is intractable.
D=7 | B=10 | E=10 | F=∞
Queue: B=10 | E=10 | F=27
| After visiting | Priority queue |
|---|---|
| A | C=3 · B=15 · D=∞ · E=∞ · F=∞ · G=∞ · H=∞ |
| C | D=5 · B=15 · E=∞ · F=∞ · G=∞ · H=∞ |
| D | G=6 · F=10 · B=15 · E=∞ · H=∞ |
| G | F=10 · B=15 · E=∞ · H=∞ |
| F | E=15 · B=15 · H=∞ |
| E & B | H=27 |
| H | H=27 — done |
(a) ❌ FALSE — TSP is computable but intractable for large n.
(b) ✅ TRUE — brute force works for very small n.
(c) ❌ FALSE — 4 cities: 3!=6 routes; 8 cities: 7!=5040. Ratio is 840×, not 2×.
(d) ❌ FALSE — 6! = 720 > 700.
Intractable: theoretically solvable but not within reasonable time — O(2ⁿ) or O(n!). Not the same as insoluble (which means no algorithm can ever solve it).
| f(n) | n=8 | n=16 | n=128 |
|---|---|---|---|
| log₂n | | | |
| n log₂n | | | |
| n² | | | |
| n³ | | | |
| 2ⁿ | | | 340,282…456 (39 digits) |
| n! | | 20,922,789,888,000 | Astronomical |
| f(n) | n=8 | n=16 | n=128 |
|---|---|---|---|
| log₂n | 3 | 4 | 7 |
| n log₂n | 24 | 64 | 896 |
| n² | 64 | 256 | 16,384 |
| n³ | 512 | 4,096 | 2,097,152 |
| 2ⁿ | 256 | 65,536 | 39-digit number |
| n! | 40,320 | ~21 trillion | Astronomical |