Chapter 8: Sorting in Linear Time: Counting, Radix, and Bucket Sort




The lower bound is established through decision-tree analysis: any comparison sort can be modeled as a binary tree with at least n! leaves, one per possible permutation of the input. A binary tree with n! leaves has height at least log₂(n!) = Ω(n log n), so merge sort and heapsort are asymptotically optimal within the comparison model.

Counting sort escapes this bound by assuming integer inputs in the range 0 to k. It uses an auxiliary array to count occurrences, accumulates those counts into positions, and places each element directly at its correct output index, achieving Θ(n + k) time. The result is a stable sort (it preserves the relative order of equal elements), which makes it ideal when k = O(n).

Radix sort extends this idea to multi-digit numbers by applying a stable sort digit by digit, from least significant to most significant. It runs in Θ(d(n + k)) time for d digits of k + 1 possible values each, reducing to linear time when d is constant and k = O(n); the choice of digit size r offers a tunable performance tradeoff.

Bucket sort takes a different approach: assuming the input is drawn from a uniform distribution over [0, 1), it distributes the n elements into n equal-width buckets, sorts each bucket with insertion sort, and concatenates the results. Average-case time is O(n) because a uniform distribution keeps expected bucket sizes small, though the worst case degrades to Θ(n²) if all elements land in a single bucket.

Together, these three algorithms illustrate a unifying principle: by making informed assumptions about the input domain, it is possible to sidestep the comparison-based lower bound entirely and sort in linear time.
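The lower-bound claim can be checked numerically: a decision tree with n! leaves must have height at least log₂(n!), which by Stirling's approximation grows like n log₂ n. A small sketch (the function name is mine, chosen for illustration):

```python
import math

def comparison_lower_bound(n):
    """Minimum height of a binary decision tree with n! leaves: log2(n!)."""
    return math.log2(math.factorial(n))

# log2(n!) tracks n*log2(n) to within a constant factor (Stirling),
# so any comparison sort needs Omega(n log n) comparisons worst-case.
for n in (4, 16, 64):
    print(n, comparison_lower_bound(n), n * math.log2(n))
```

For every n the ratio of the two columns stays bounded, which is exactly what the Ω(n log n) statement asserts.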
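The counting sort procedure described above (count, accumulate, place) can be sketched as follows; the right-to-left placement pass is what makes the sort stable. This is a minimal illustration, not the textbook's exact pseudocode:

```python
def counting_sort(a, k):
    """Stable sort for a list of integers in the range 0..k, in O(n + k) time."""
    count = [0] * (k + 1)
    for x in a:                      # count occurrences of each key
        count[x] += 1
    for i in range(1, k + 1):        # prefix sums: count[i] = # of elements <= i
        count[i] += count[i - 1]
    out = [None] * len(a)
    for x in reversed(a):            # scan right-to-left so equal keys keep order
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```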
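A least-significant-digit radix sort might look like the sketch below, using r-bit binary digits (so k = 2^r − 1 per pass) and per-digit buckets as the stable inner sort. The parameter r and the bucket-based inner pass are illustrative choices, not mandated by the text:

```python
def radix_sort(a, r=8):
    """LSD radix sort for non-negative integers, r bits per digit pass."""
    if not a:
        return a
    mask = (1 << r) - 1
    max_val = max(a)
    shift = 0
    while max_val >> shift:          # one pass per r-bit digit, least significant first
        buckets = [[] for _ in range(1 << r)]
        for x in a:                  # appending in order keeps the pass stable
            buckets[(x >> shift) & mask].append(x)
        a = [x for b in buckets for x in b]
        shift += r
    return a
```

Raising r trades fewer passes (smaller d) for more buckets per pass (larger k), which is the tunable tradeoff mentioned above.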
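Bucket sort's distribute/sort/concatenate structure can be sketched as below, with an inline insertion sort per bucket as in the description above. A minimal sketch assuming keys uniform over [0, 1):

```python
def bucket_sort(a):
    """Bucket sort for keys assumed uniformly distributed over [0, 1)."""
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)     # bucket i covers [i/n, (i+1)/n)
    for b in buckets:                     # insertion-sort each (expectedly tiny) bucket
        for i in range(1, len(b)):
            key, j = b[i], i - 1
            while j >= 0 and b[j] > key:
                b[j + 1] = b[j]
                j -= 1
            b[j + 1] = key
    return [x for b in buckets for x in b]
```

If the uniformity assumption fails and all keys fall into one bucket, the inner insertion sort does all the work, which is the Θ(n²) worst case noted above.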