Shortest Path

The shortest path problem is the problem of finding a path between two vertices (aka nodes) in a graph such that the sum of the weights of the edges in the path is minimized.

We distinguish several variations of the shortest path problem:

Single-pair shortest path problem, in which we have to find the shortest path between a single pair of vertices.

Single-source shortest path problem, in which we have to find the shortest paths from a single source vertex to all other vertices.

Single-destination shortest path problem, in which we have to find the shortest paths from all vertices to a single destination vertex.

All-pairs shortest path problem, in which we have to find the shortest paths between all pairs of vertices in the graph.


Here are the most important algorithms for solving the shortest path problem:

Breadth-first search finds the shortest paths for unweighted graphs.

Dijkstra’s algorithm solves the single-source shortest path problem for a graph with nonnegative edge weights.

Bellman–Ford algorithm solves the single-source shortest path problem for a graph where edge weights may be negative.

A* algorithm solves the single-pair shortest path problem using heuristics to speed up the search.

Floyd–Warshall algorithm solves the all-pairs shortest path problem for a graph where edge weights may be negative.

Breadth-First Search

If a graph is unweighted (edges do not have weights), we can assume that all edges have weight 1. If all edges have the same weight, we can do a breadth-first search starting at a source node $s$ to find the shortest paths to all other nodes.
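As a concrete illustration (not from the course materials), here is a breadth-first search sketch in Python. It assumes the graph is stored as an adjacency list — a dict mapping each vertex to a list of its neighbors — and the function and variable names are our own:

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Shortest path lengths (in number of edges) from source to every
    reachable vertex of an unweighted graph.

    adj: dict mapping each vertex to a list of neighbors (assumed
    representation; names here are illustrative).
    """
    distance = {source: 0}
    predecessor = {source: None}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in distance:   # first time we reach w = shortest path to w
                distance[w] = distance[v] + 1
                predecessor[w] = v
                queue.append(w)
    return distance, predecessor
```

Because a queue processes vertices in order of discovery, every vertex is first reached along a path with the fewest possible edges.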
Dijkstra’s Algorithm

Dijkstra’s algorithm finds the shortest path tree (SPT) that stores the shortest paths from a single source vertex $s$ to all other vertices.

Dijkstra’s algorithm uses relaxation. For each vertex $v$, we maintain its distance from the source v.distance as an upper bound (shortest-path estimate, aka current best-known distance) and its predecessor v.predecessor. At the beginning of the algorithm, we set all the distances to $\infty$ (infinity) and all predecessors to nil. We then set the distance of the source vertex $s$ to $0$:

initSingleSource(Graph, source):
    for each vertex v ∈ Graph.vertices:
        v.distance = ∞
        v.predecessor = nil
    source.distance = 0

Distance and predecessor can also be stored in separate arrays (instead of being stored as attributes of vertices) where indices correspond to the indices of each vertex.

The process of relaxing an edge $e = (v, w)$ consists of testing whether we can improve the shortest path from source $s$ to $w$ found so far by going through the vertex $v$ and, if so, updating w.distance and w.predecessor:

relax(e):
    v = e.source
    w = e.target
    currentBestKnownWeight = w.distance
    possiblyBetterWeight = v.distance + e.weight
    if possiblyBetterWeight < currentBestKnownWeight:
        w.distance = possiblyBetterWeight
        w.predecessor = v

Pseudocode for Dijkstra’s algorithm:
Dijkstra(Graph, source):
    initSingleSource(Graph, source)
    while not every vertex is part of tree:
        v = vertex with smallest distance and not in tree
        v.partOfTree = true
        for each edge e of v:
            relax(e)

Runtime is $O(V \log V + E \log V)$, and since $E > V$ for any graph on which we would run Dijkstra’s algorithm, we can more simply write the complexity as $O(E \log V)$.
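The pseudocode above can be sketched in Python using the standard library’s heapq module as the priority queue. This is a minimal illustration, not the course’s reference implementation; it assumes an adjacency-list representation (a dict from each vertex to a list of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for nonnegative edge weights.

    adj: dict vertex -> list of (neighbor, weight) pairs (assumed
    representation). Returns (distance, predecessor) dicts.
    """
    # initSingleSource: all distances ∞, all predecessors nil (None).
    distance = {v: float('inf') for v in adj}
    predecessor = {v: None for v in adj}
    distance[source] = 0
    heap = [(0, source)]
    in_tree = set()                 # vertices already added to the SPT
    while heap:
        d, v = heapq.heappop(heap)
        if v in in_tree:
            continue                # stale heap entry; skip it
        in_tree.add(v)
        for w, weight in adj[v]:    # relax each outgoing edge of v
            if d + weight < distance[w]:
                distance[w] = d + weight
                predecessor[w] = v
                heapq.heappush(heap, (distance[w], w))
    return distance, predecessor
```

Instead of a decrease-key operation, this sketch pushes a new heap entry on every successful relaxation and skips entries that are already out of date, which keeps the same $O(E \log V)$ bound.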

Bellman–Ford Algorithm

Like Dijkstra’s algorithm, the Bellman–Ford algorithm solves the single-source shortest path problem, but it also works with graphs where some edge weights are negative.

Note that we did not cover the Bellman–Ford algorithm in EECS 281 this semester.
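For reference, the standard approach can be sketched as follows (a minimal illustration under our own assumptions, since the course did not cover it; the graph is given as a vertex list and an edge list of (source, target, weight) triples):

```python
def bellman_ford(vertices, edges, source):
    """Single-source shortest paths allowing negative edge weights.

    vertices: iterable of vertex names; edges: list of (v, w, weight)
    triples (assumed representation). Raises ValueError if a
    negative-weight cycle is reachable from the source.
    """
    distance = {v: float('inf') for v in vertices}
    distance[source] = 0
    # Relax every edge |V| - 1 times; after pass i, all shortest paths
    # using at most i edges are correct.
    for _ in range(len(vertices) - 1):
        for v, w, weight in edges:
            if distance[v] + weight < distance[w]:
                distance[w] = distance[v] + weight
    # One more pass: any further improvement implies a negative cycle.
    for v, w, weight in edges:
        if distance[v] + weight < distance[w]:
            raise ValueError("graph contains a negative-weight cycle")
    return distance
```

Relaxing all $E$ edges $V - 1$ times gives the $O(VE)$ runtime that distinguishes Bellman–Ford from Dijkstra’s algorithm.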
A* Algorithm

If we only need to find a path from a single source to a single target, then Dijkstra’s algorithm is inefficient, as it explores many edges that we do not care about.

For example, if you would like to find the shortest path from Ann Arbor to Chicago, it would not make sense to explore paths longer than ~300 miles in all directions before reaching Chicago.

To fix this, we make a very minor change to Dijkstra’s algorithm: instead of visiting vertices in order of distance from the source, we visit them in order of distance from the source + $h(v)$, where $h(v)$ is some heuristic.
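That minor change can be sketched in Python (an illustration under our own assumptions, since the course did not cover A*): the heap is ordered by distance + $h(v)$, where the caller supplies a heuristic function h that lower-bounds the remaining distance to the target (e.g. straight-line distance). With h(v) = 0 everywhere, this reduces exactly to Dijkstra’s algorithm.

```python
import heapq

def a_star(adj, source, target, h):
    """Single-pair shortest path with a heuristic.

    adj: dict vertex -> list of (neighbor, weight) pairs; h(v) is an
    assumed caller-supplied heuristic lower bound on the distance from
    v to target. Returns (shortest distance, predecessor dict).
    """
    distance = {source: 0}
    predecessor = {source: None}
    heap = [(h(source), source)]      # ordered by distance + h(v)
    done = set()
    while heap:
        _, v = heapq.heappop(heap)
        if v == target:
            return distance[v], predecessor
        if v in done:
            continue                  # stale heap entry
        done.add(v)
        for w, weight in adj[v]:      # relax each outgoing edge
            d = distance[v] + weight
            if d < distance.get(w, float('inf')):
                distance[w] = d
                predecessor[w] = v
                heapq.heappush(heap, (d + h(w), w))
    return float('inf'), predecessor  # target unreachable
```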

Note that we did not cover the A* algorithm in EECS 281 this semester.
Floyd–Warshall Algorithm

Floyd–Warshall algorithm solves the allpairs shortest path problem. Negativeweight edges may be present, but we assume that there are no negativeweight cycles.

First, let’s start with a simple example of a graph.
Although Floyd–Warshall algorithm finds shortest paths between all pairs of vertices, let’s focus on the shortest path between $A$ and $B$. First, notice that there is an edge from $A$ to $B$ with weight 8, so a path of weight 8 exists between $A$ and $B$.
Next, let’s try adding other nodes into the path to see if the weight of the path from $A$ to $B$ decreases. If we try to add $C$ into the path, the length of the path becomes 5, which is better than 8. If we then tried to add $D$ into the path, the length would become 8, which is worse than 5. Finally, when we add $E$ into the path, the length becomes 4, which is better than 5.

The Floyd–Warshall algorithm breaks the all-pairs shortest path problem into subproblems that can be easily solved using dynamic programming. We start by letting $T^0(i,j)$ be the length of the shortest direct path (using at most one edge) from $i$ to $j$. It is equal to the length of the edge $(i, j)$ if it exists and $\infty$ otherwise. We set $T^0(i,i) = 0$ for all $i$.

We define the problem $T^k(i, j)$ as the shortest path from $i$ to $j$ that is only allowed to use the first $k$ vertices as intermediate vertices (numbered from $0$ to $k - 1$). For example, $T^1(i,j)$ is the shortest path from $i$ to $j$ that is either $i \rightarrow j$ or $i \rightarrow 0 \rightarrow j$. $T^2(i,j)$ is the shortest path from $i$ to $j$ that is either $i \rightarrow j$, $i \rightarrow 0 \rightarrow j$, $i \rightarrow 1 \rightarrow j$, $i \rightarrow 0 \rightarrow 1 \rightarrow j$ or $i \rightarrow 1 \rightarrow 0 \rightarrow j$.

We now need to define the problem $T^k(i, j)$ in terms of $T^{k - 1}$ in order to use dynamic programming. Suppose we want to find the shortest path from $i$ to $j$ that uses the first $k$ vertices (up to $k - 1$) as intermediate vertices. There are two cases:

The path does not contain vertex $k - 1$. Then $T^k(i, j) = T^{k - 1}(i, j)$.

The path contains vertex $k - 1$. Then the path will pass through vertex $k - 1$ only once (since it is the shortest path). The path therefore must be $i \rightarrow ... \rightarrow k - 1 \rightarrow ... \rightarrow j$, and it goes from $i$ to $k - 1$ via vertices $0$ through $k - 2$, and then from $k - 1$ to $j$ via vertices $0$ through $k - 2$. Or more simply, $T^k(i, j) = T^{k - 1}(i, k - 1) + T^{k - 1}(k - 1, j)$.


We can write the recursive definition for the algorithm as $T^k(i, j) = \min( T^{k - 1}(i, j), T^{k - 1}(i, k - 1) + T^{k - 1}(k - 1, j))$.

Pseudocode:
FloydWarshall(Graph):
    for each pair of vertices (v, w):
        dist[v][w] = ∞
    for each vertex v in Graph.V:
        dist[v][v] = 0
    for each edge e in Graph.E:
        v = e.source
        w = e.target
        dist[v][w] = e.weight
    for k = 0 ... Graph.V - 1:
        for i = 0 ... Graph.V - 1:
            for j = 0 ... Graph.V - 1:
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

When we implement the Floyd–Warshall algorithm, we want to keep track of distances and predecessors, similarly to Dijkstra’s algorithm. However, here we use two 2D arrays for that purpose, instead of two 1D arrays.
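The pseudocode above translates directly into Python. This sketch keeps only the 2D distance matrix (the predecessor matrix is omitted for brevity) and assumes vertices are numbered $0$ to $n - 1$ with edges given as (i, j, weight) triples:

```python
def floyd_warshall(n, edges):
    """All-pairs shortest path lengths.

    n: number of vertices, numbered 0..n-1; edges: list of
    (i, j, weight) triples (assumed representation). Returns an
    n x n matrix where dist[i][j] is the shortest path length.
    """
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]   # T^0: ∞ where no edge exists
    for i in range(n):
        dist[i][i] = 0                     # T^0(i, i) = 0
    for i, j, weight in edges:
        dist[i][j] = weight
    # After iteration k, dist[i][j] holds T^(k+1)(i, j): the shortest
    # path using only vertices 0..k as intermediates.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The three nested loops make the $O(V^3)$ runtime easy to read off, and only one matrix is needed because entries for round $k$ can safely overwrite those of round $k - 1$ in place.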
