Ch8.8: Time & Space Complexity
Overview
Time and space complexity describe how the running time and memory usage of an algorithm grow as the input size grows. In this chapter we focus on time complexity, but we also explain how space complexity usually gives a lower bound on time.
You will learn:
- a high-level view of the RAM model of computation
- why space complexity is usually a lower bound on time complexity
- the meaning of O, Ω, and Θ
- basic Θ computations
- how to analyze simple loops
- how to analyze ranges algorithms
- why projection cost matters
- why recursive Fibonacci has exponential time complexity
- why games like Go are very hard for computers
- how to compute recursive time complexity using the Master Theorem (appendix)
0. The model of computation
When we talk about time complexity, we are always talking about the running time of an algorithm on a particular abstract machine model. The most common model is the RAM model (Random Access Machine).
The RAM model assumes:
- each basic operation (addition, comparison, assignment) takes constant time
- memory access to any element takes constant time
- the machine has unlimited memory
These assumptions make it possible to describe algorithms using simple notations like Θ(n) and O(n ln n).
1. Space complexity as a lower bound
Space complexity measures how much memory an algorithm uses as a function of the input size n. Time complexity measures how long it runs.
Space complexity is usually a lower bound on time complexity.
If an algorithm uses S(n) space, it must spend at least Ω(S(n)) time to allocate, initialize, read, or manipulate that space.
Examples:
- If an algorithm stores n elements → at least Ω(n) time.
- If an algorithm uses Θ(1) space → its time may be larger, but never smaller than Ω(1).
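A minimal sketch of the first bullet: any routine that materializes n elements already pays Ω(n) time just writing them once, so its Θ(n) space forces Ω(n) time. (The function name and contents are illustrative, not from the text.)

```cpp
#include <cstddef>
#include <vector>

// Sketch: an algorithm that stores n elements must touch each of them
// at least once, so Θ(n) space implies Ω(n) time.
std::vector<std::size_t> make_squares(std::size_t n) {
    std::vector<std::size_t> v;
    v.reserve(n);                       // Θ(n) space
    for (std::size_t i = 0; i != n; ++i)
        v.push_back(i * i);             // each element written once: Ω(n) time
    return v;
}
```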
2. The three basic notations
Time complexity uses three related notations:
- O(f(n)): an upper bound
- Ω(f(n)): a lower bound
- Θ(f(n)): a tight bound
We use them precisely:
- Θ only when the bound is tight
- O when only an upper bound is known (e.g.,
std::ranges::sort) - Ω when only a lower bound is known
3. Basic Θ computations
Θ(3n^3) = Θ(n^3)
Θ(n^3 + 10000000000n^2) = Θ(n^3)
Only the highest-order term matters.
4. Complexity of simple loops
for (::std::size_t i{}; i != n; ++i) {
// constant-time work
}
Θ(n)
for (::std::size_t i{}; i != n; ++i)
for (::std::size_t j{}; j != n; ++j)
;
Θ(n^2)
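A dependent inner bound does not change the answer: if the inner loop runs up to i instead of n, the body executes 0 + 1 + … + (n−1) = n(n−1)/2 times, and the constant 1/2 disappears inside Θ, leaving Θ(n^2). A counting sketch (function name is illustrative):

```cpp
#include <cstddef>

// Counts how many times the innermost statement runs in a triangular
// double loop: n * (n - 1) / 2 iterations, i.e. Θ(n^2).
std::size_t triangular_iterations(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i != n; ++i)
        for (std::size_t j = 0; j != i; ++j)
            ++count;                    // runs 0 + 1 + ... + (n - 1) times
    return count;
}
```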
5. Linear-time algorithms
Examples:
find, find_if, count, count_if, transform, remove, remove_if, accumulate, reduce
::std::size_t ones = ::std::ranges::count(v, 1);
Θ(n)
6. Binary search: O(ln n)
auto it = ::std::ranges::lower_bound(v, value);
Each iteration halves the search space → O(ln n).
7. Sorting: O(n ln n)
::std::ranges::sort(v, {}, &Item::key);
The C++ standard guarantees only an upper bound → O(n ln n).
8. Why projection cost matters
::std::ranges::sort(v, {}, &Item::key);
::std::ranges::sort(v, {}, [](Item const& x) {
return expensive_computation(x);
});
Expensive projections dominate runtime.
9. Numeric algorithms
::std::size_t total = ::std::transform_reduce(
    v.begin(), v.end(),
    ::std::size_t{},
    ::std::plus<>(),
    // transform_reduce calls the unary op directly (not via std::invoke),
    // so a pointer to member like &Item::value will not compile here;
    // wrap it in a lambda (or std::mem_fn) instead:
    [](Item const& x) { return x.value; }
);
Θ(n)
10. Recursive Fibonacci: exponential time
::std::size_t fib(::std::size_t n) {
if (n < 2) return n;
return fib(n - 1) + fib(n - 2);
}
Θ(φ^n), where φ ≈ 1.618 is the golden ratio. This is exponential; the looser bound O(2^n) is also commonly quoted.
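The exponential blow-up comes from recomputing the same subproblems over and over. Caching each fib(k) once brings the time down to Θ(n), at the cost of Θ(n) space (recall section 1: that space is itself an Ω(n) time lower bound). A sketch:

```cpp
#include <cstddef>
#include <vector>

// Bottom-up Fibonacci with a table: each fib(i) is computed exactly once,
// so the running time is Θ(n) instead of exponential.
std::size_t fib_memo(std::size_t n) {
    std::vector<std::size_t> f(n + 1);  // Θ(n) space
    for (std::size_t i = 0; i <= n; ++i)
        f[i] = i < 2 ? i : f[i - 1] + f[i - 2];
    return f[n];
}
```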
11. NP problems and hard search spaces
Many real-world problems have exponential search spaces and are believed to require exponential time to solve exactly.
12. Why Go is extremely hard for computers
A typical Go game lasts on the order of 200 moves, with roughly 200 legal moves available at each position, so the game tree contains on the order of 200^200 positions.
Go has an exponential game tree → brute force is impossible.
13. Why AlphaGo succeeded
AlphaGo did not solve Go exactly. It used:
- deep neural networks
- policy networks
- Monte Carlo Tree Search
It approximated good moves instead of exploring the full exponential tree.
14. Appendix: Ch8.8.1 — Master Theorem
Many recursive algorithms have time complexity described by a recurrence of the form:
T(n) = a · T(n / b) + f(n)
The appendix teaches you how to solve such recurrences using the Master Theorem, with examples including mergesort and quicksort.
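As a preview of the appendix, mergesort's recurrence falls into the "balanced" case of the theorem:

```latex
% Mergesort: a = 2 subproblems, b = 2 (each half the size), f(n) = \Theta(n) merge cost.
T(n) = 2\,T(n/2) + \Theta(n)
% Here f(n) = \Theta(n^{\log_b a}) = \Theta(n^{\log_2 2}) = \Theta(n),
% so the Master Theorem gives the balanced-case result:
T(n) = \Theta(n \log n)
```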
You can read the full appendix here:
Key takeaways
- Θ(n): linear scans
- Θ(n^2): nested loops
- O(ln n): binary search
- O(n ln n): sorting
- Θ(φ^n) (exponential, loosely O(2^n)): naive Fibonacci
- Go and many NP-like problems have exponential search spaces
- AlphaGo succeeds by approximating, not solving exactly
- The Master Theorem (appendix) helps analyze recursive algorithms