Chapter 23: All-Pairs Shortest Paths: Floyd-Warshall and Johnson's Algorithm


The naïve approach of running a single-source algorithm from every vertex works but is slow: Dijkstra repeated n times costs O(V² log V + VE) with Fibonacci heaps, and Bellman-Ford repeated n times costs O(V²E), motivating more specialized methods.

A matrix-based dynamic programming approach defines lᵢⱼ^(r) as the minimum weight of any path from i to j using at most r edges. Each layer is computed from the previous via a recurrence analogous to matrix multiplication over the tropical semiring — replacing addition with min and multiplication with addition:

lᵢⱼ^(r) = min₁≤ₖ≤ₙ ( lᵢₖ^(r–1) + wₖⱼ )

Computing all n – 1 layers one at a time gives SLOW-APSP a runtime of Θ(n⁴); because this min-plus "product" is associative, repeated squaring reduces the number of products to ⌈log₂(n – 1)⌉ and yields FASTER-APSP at Θ(n³ log n).

The Floyd-Warshall algorithm improves on this by iterating over intermediate vertices rather than path lengths. With dᵢⱼ^(k) denoting the weight of a shortest path from i to j whose intermediate vertices all lie in {1, …, k}, the recurrence

dᵢⱼ^(k) = min(dᵢⱼ^(k–1), dᵢₖ^(k–1) + dₖⱼ^(k–1))

builds up shortest paths through progressively larger vertex sets, achieving Θ(n³) time and Θ(n²) space while maintaining a predecessor matrix for path reconstruction. The same triple loop detects a negative-weight cycle whenever a diagonal entry becomes negative, and computes the transitive closure when min and addition are replaced by Boolean OR and AND.

For sparse graphs with possible negative edge weights but no negative cycles, Johnson's algorithm achieves O(V² log V + VE) by introducing a reweighting technique: a new source vertex s is added with zero-weight edges to all other vertices, Bellman-Ford from s computes potentials h(v) = δ(s, v), and each edge is reweighted as ŵ(u, v) = w(u, v) + h(u) – h(v), which the triangle inequality guarantees is nonnegative and which preserves shortest paths. Dijkstra is then run from every vertex on the reweighted graph, and original distances are recovered by reversing the transformation: δ(u, v) = δ̂(u, v) – h(u) + h(v).
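To make the tropical-semiring idea concrete, here is a minimal Python sketch of the min-plus product and the repeated-squaring scheme behind FASTER-APSP; the function names are illustrative, not from the text, and missing edges are represented by math.inf.

```python
import math

def min_plus_mult(A, B):
    """'Multiply' two n x n matrices over the tropical semiring:
    min plays the role of addition, + plays the role of multiplication."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            if aik == math.inf:
                continue  # no i->k path of this length; nothing to extend
            for j in range(n):
                if aik + B[k][j] < C[i][j]:
                    C[i][j] = aik + B[k][j]
    return C

def faster_apsp(W):
    """Repeated squaring: L^(1) = W and L^(2r) = L^(r) 'squared',
    stopping once paths of up to n-1 edges are covered (Θ(n³ log n))."""
    n = len(W)
    L = [row[:] for row in W]
    r = 1
    while r < n - 1:
        L = min_plus_mult(L, L)
        r *= 2
    return L
```

Computing the layers one at a time with min_plus_mult(L, W) instead would reproduce SLOW-APSP's Θ(n⁴) behavior.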
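The Floyd-Warshall recurrence can be sketched directly as a triple loop; this hypothetical helper also keeps a predecessor matrix and checks the diagonal for negative-weight cycles, as described above.

```python
import math

def floyd_warshall(dist):
    """dist is an n x n matrix: dist[i][i] = 0, dist[i][j] = w(i, j) or math.inf.
    Returns (d, pred, has_neg_cycle)."""
    n = len(dist)
    d = [row[:] for row in dist]
    # pred[i][j]: predecessor of j on a shortest i->j path (None if unreachable).
    pred = [[i if i != j and d[i][j] < math.inf else None
             for j in range(n)] for i in range(n)]
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    pred[i][j] = pred[k][j]
    # A negative diagonal entry means some cycle through i has negative weight.
    has_neg_cycle = any(d[i][i] < 0 for i in range(n))
    return d, pred, has_neg_cycle
```

Substituting Boolean OR/AND for min/+ (and True/False for finite/∞ entries) turns the same loop into a transitive-closure computation.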