Shortest Path

  • The shortest path problem is the problem of finding a path between two vertices (aka nodes) in a graph such that the sum of the weights of the edges in the path is minimized.

  • We distinguish several variations of the shortest path problem:

    • Single-pair shortest path problem, in which we have to find the shortest path between a pair of vertices.

    • Single-source shortest path problem, in which we have to find the shortest paths from a single source vertex to all other vertices in the graph.

    • Single-destination shortest path problem, in which we have to find the shortest paths from all vertices in the graph to a single destination vertex.

    • All-pairs shortest path problem, in which we have to find shortest paths between all pairs of vertices in the graph.

  • Here are the most important algorithms for solving the shortest path problem:

    • Breadth-first search finds the shortest paths for unweighted graphs.

    • Dijkstra’s algorithm solves the single-source shortest path problem for a graph with non-negative edge weights.

    • Bellman–Ford algorithm solves the single-source shortest path problem for a graph where edge weights may be negative.

    • A* algorithm solves the single-pair shortest path problem using heuristics to speed up the search.

    • Floyd–Warshall algorithm solves the all-pairs shortest path problem for a graph where edge weights may be negative.

  • If a graph is unweighted (edges do not have weights), we can assume that all edges have weight 1. If all edges have the same weight, we can do a breadth-first search starting at a source node s to find the shortest paths to all other nodes.
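
  • As a concrete illustration, here is a minimal C++ sketch of this BFS approach (not from the course materials; the adjacency-list representation and the name bfsShortestPaths are assumptions made for illustration):

    #include <iostream>
    #include <queue>
    #include <vector>

    // Minimal sketch: BFS shortest-path distances in an unweighted graph.
    // The graph is an adjacency list; dist[v] ends up as the number of edges
    // on a shortest path from source to v (-1 if v is unreachable).
    std::vector<int> bfsShortestPaths(const std::vector<std::vector<int>> &adj, int source) {
        std::vector<int> dist(adj.size(), -1);
        std::queue<int> q;
        dist[source] = 0;
        q.push(source);
        while (!q.empty()) {
            int v = q.front();
            q.pop();
            for (int w : adj[v]) {
                if (dist[w] == -1) {        // first time reaching w => shortest
                    dist[w] = dist[v] + 1;
                    q.push(w);
                }
            }
        }
        return dist;
    }

    int main() {
        // Small undirected graph: 0-1, 1-2, 0-3
        std::vector<std::vector<int>> adj = {{1, 3}, {0, 2}, {1}, {0}};
        for (int d : bfsShortestPaths(adj, 0))
            std::cout << d << ' ';          // prints: 0 1 2 1
        std::cout << '\n';
    }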

Dijkstra’s Algorithm

  • Dijkstra’s algorithm finds the shortest path tree (SPT) that stores the shortest paths from a single source vertex s to all other vertices.

  • Dijkstra’s algorithm uses relaxation. For each vertex v, we maintain its distance v.distance from the source as an upper bound (shortest-path estimate aka current best-known distance) and its predecessor v.predecessor.

    At the beginning of the algorithm, we set all the distances to ∞ (infinity) and all predecessors to nil. We then set the distance of the source vertex s to 0:

    initSingleSource(Graph, source):
        for each vertex v ∈ Graph.vertices:
            v.distance = ∞
            v.predecessor = nil
        source.distance = 0

    Distance and predecessor can also be stored in separate arrays (instead of being stored as attributes of vertices), where the array index corresponds to the index of the vertex.

  • The process of relaxing an edge e = (v, w) consists of testing whether we can improve the shortest path from the source s to w found so far by going through the vertex v and, if so, updating w.distance and w.predecessor:

    relax(e):
        v = e.source
        w = e.target
        currentBestKnownWeight = w.distance
        possiblyBetterWeight = v.distance + e.weight
        if possiblyBetterWeight < currentBestKnownWeight:
            w.distance = possiblyBetterWeight
            w.predecessor = v

  • Pseudocode for Dijkstra’s algorithm:

    Dijkstra(Graph, source):
        initSingleSource(Graph, source)
        while not every vertex is part of tree:
            v = vertex with smallest distance and not in tree
            v.partOfTree = true
            for each edge e of v:
                relax(e)

  • Runtime is O(|V| log |V| + |V| log |V| + |E| log |V|), and since |E| > |V| for any graph on which we would run Dijkstra’s algorithm, we can more simply write the complexity as O(|E| log |V|).
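
  • Here is a minimal C++ sketch of the common priority-queue implementation of Dijkstra’s algorithm; the adjacency-list representation (pairs of neighbor and weight) and the function name are assumptions for illustration, not the course’s official code:

    #include <iostream>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Sketch of Dijkstra's algorithm with a binary heap (std::priority_queue).
    // adj[v] holds (neighbor, weight) pairs; weights must be non-negative.
    // Returns shortest-path distances from source; INF marks unreachable vertices.
    std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int, int>>> &adj,
                                    int source) {
        const long long INF = std::numeric_limits<long long>::max();
        std::vector<long long> dist(adj.size(), INF);
        using Entry = std::pair<long long, int>;   // (distance, vertex)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
        dist[source] = 0;
        pq.push({0, source});
        while (!pq.empty()) {
            auto [d, v] = pq.top();
            pq.pop();
            if (d > dist[v]) continue;             // stale entry; v already finalized
            for (auto [w, weight] : adj[v]) {      // relax every edge (v, w)
                if (dist[v] + weight < dist[w]) {
                    dist[w] = dist[v] + weight;
                    pq.push({dist[w], w});
                }
            }
        }
        return dist;
    }

    int main() {
        // Directed edges: 0->1 (1), 0->2 (4), 1->2 (2), 2->3 (1)
        std::vector<std::vector<std::pair<int, int>>> adj(4);
        adj[0] = {{1, 1}, {2, 4}};
        adj[1] = {{2, 2}};
        adj[2] = {{3, 1}};
        for (long long d : dijkstra(adj, 0))
            std::cout << d << ' ';                 // prints: 0 1 3 4
        std::cout << '\n';
    }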

  • Visualization:

Bellman-Ford Algorithm

  • Like Dijkstra’s algorithm, the Bellman–Ford algorithm solves the single-source shortest path problem, but it also works with graphs where some edge weights are negative.

  • Note that we did not cover Bellman–Ford algorithm in EECS 281 this semester.
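
  • For reference only, here is a minimal sketch of the usual textbook formulation of Bellman–Ford, which relaxes every edge |V| - 1 times; the edge-list representation and names are illustrative assumptions:

    #include <iostream>
    #include <limits>
    #include <vector>

    // Sketch of the textbook Bellman-Ford algorithm: relax every edge |V| - 1
    // times, which suffices to find single-source shortest paths as long as no
    // negative-weight cycle is reachable from the source.
    struct Edge { int from, to, weight; };

    std::vector<long long> bellmanFord(int numVertices, const std::vector<Edge> &edges,
                                       int source) {
        const long long INF = std::numeric_limits<long long>::max();
        std::vector<long long> dist(numVertices, INF);
        dist[source] = 0;
        for (int pass = 0; pass + 1 < numVertices; ++pass) {
            for (const Edge &e : edges) {
                if (dist[e.from] != INF && dist[e.from] + e.weight < dist[e.to])
                    dist[e.to] = dist[e.from] + e.weight;   // relax edge e
            }
        }
        return dist;
    }

    int main() {
        // Edge 1->2 has a negative weight, which Dijkstra's algorithm cannot handle.
        std::vector<Edge> edges = {{0, 1, 4}, {0, 2, 5}, {1, 2, -3}};
        for (long long d : bellmanFord(3, edges, 0))
            std::cout << d << ' ';   // prints: 0 4 1
        std::cout << '\n';
    }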

A* Algorithm

  • If we only need to find a path from a single source to a single target, then Dijkstra’s algorithm is inefficient, as it explores many edges that we do not care about.

  • For example, if you would like to find the shortest path from Ann Arbor to Chicago, it would not make sense to explore paths longer than ~300 miles in all directions before reaching Chicago.

  • To fix this, we make a very minor change to Dijkstra’s algorithm: instead of visiting vertices in order of distance from the source, we visit them in order of distance from the source + h(v), where h(v) is a heuristic estimate of the remaining distance from v to the target (see the sketch below).

  • Note that we did not cover the A* algorithm in EECS 281 this semester.
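
  • For reference only, here is a minimal sketch that makes the change concrete: it is the Dijkstra sketch above with the priority-queue key changed from dist[v] to dist[v] + h(v). The heuristic values in main are an illustrative assumption:

    #include <iostream>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Sketch of A*: identical to Dijkstra's algorithm except that the priority
    // queue is ordered by dist[v] + h(v), where h(v) is a heuristic estimate of
    // the remaining distance from v to the target (it should never overestimate).
    long long aStar(const std::vector<std::vector<std::pair<int, int>>> &adj,
                    const std::vector<long long> &h, int source, int target) {
        const long long INF = std::numeric_limits<long long>::max();
        std::vector<long long> dist(adj.size(), INF);
        using Entry = std::pair<long long, int>;       // (dist[v] + h(v), v)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
        dist[source] = 0;
        pq.push({h[source], source});
        while (!pq.empty()) {
            auto [key, v] = pq.top();
            pq.pop();
            if (v == target) return dist[v];           // target reached: done
            if (key > dist[v] + h[v]) continue;        // stale entry
            for (auto [w, weight] : adj[v]) {
                if (dist[v] + weight < dist[w]) {      // same relaxation as Dijkstra
                    dist[w] = dist[v] + weight;
                    pq.push({dist[w] + h[w], w});      // only the key changes
                }
            }
        }
        return INF;                                    // target unreachable
    }

    int main() {
        // Same small graph as the Dijkstra example; h estimates distance to vertex 3.
        std::vector<std::vector<std::pair<int, int>>> adj(4);
        adj[0] = {{1, 1}, {2, 4}};
        adj[1] = {{2, 2}};
        adj[2] = {{3, 1}};
        std::vector<long long> h = {3, 3, 1, 0};
        std::cout << aStar(adj, h, 0, 3) << '\n';      // prints: 4
    }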

Floyd–Warshall Algorithm

  • Floyd–Warshall algorithm solves the all-pairs shortest path problem. Negative-weight edges may be present, but we assume that there are no negative-weight cycles.

  • First, let’s start with a simple example of a graph.

    [Figure: floyd warshall 0.svg — example weighted graph on vertices A, B, C, D, E]

    Although the Floyd–Warshall algorithm finds shortest paths between all pairs of vertices, let’s focus on the shortest path between A and B. First, notice that there is an edge from A to B with weight 8, so a path of weight 8 exists between A and B.

    Next, let’s try adding other nodes into the path to see if the weight of the path from A to B decreases. If we try to add C into the path, the length of the path becomes 5, which is better than 8. If we then tried to add D into the path, the length would become 8, which is worse than 5. Finally, when we add E into the path, the length becomes 4, which is better than 5.

  • Floyd–Warshall algorithm breaks the all-pairs shortest path problem into subproblems that can be easily solved using dynamic programming. We start by letting T^0(i, j) be the length of the shortest direct path (of length ≤ 1) from i to j. It is equal to the length of the edge (i, j) if that edge exists and ∞ otherwise. We set T^0(i, i) = 0 for all i.

  • We define the problem T^k(i, j) as the shortest path from i to j that is only allowed to use the first k vertices (numbered from 0 to k - 1) as intermediate vertices. For example, T^1(i, j) is the shortest path from i to j that is either i → j or i → 0 → j. T^2(i, j) is the shortest path from i to j that is either i → j, i → 0 → j, i → 1 → j, i → 0 → 1 → j, or i → 1 → 0 → j.

  • We now need to define the problem T^k(i, j) in terms of T^{k-1} in order to use dynamic programming. Suppose we want to find the shortest path from i to j that uses the first k vertices (up to k - 1) as intermediate vertices. There are two cases:

    • The path does not contain vertex k - 1. Then T^k(i, j) = T^{k-1}(i, j).

    • The path contains vertex k - 1. Then the path will pass through vertex k - 1 only once (since it is the shortest path). The path therefore must be i → ... → k - 1 → ... → j, and it goes from i to k - 1 via vertices 0 through k - 2, and then from k - 1 to j via vertices 0 through k - 2. Or more simply, T^k(i, j) = T^{k-1}(i, k - 1) + T^{k-1}(k - 1, j).

  • We can write the recursive definition for the algorithm as T^k(i, j) = min(T^{k-1}(i, j), T^{k-1}(i, k - 1) + T^{k-1}(k - 1, j)).

  • Pseudocode:

    FloydWarshall(Graph):
        for each pair of vertices (i, j):
            dist[i][j] = ∞
        for each vertex v in Graph.V:
            dist[v][v] = 0
        for each edge e in Graph.E:
            v = e.source
            w = e.target
            dist[v][w] = e.weight
        for k = 0 ... |Graph.V| - 1:
            for i = 0 ... |Graph.V| - 1:
                for j = 0 ... |Graph.V| - 1:
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

  • When we implement the Floyd–Warshall algorithm, we want to keep track of distances and predecessors, similarly to Dijkstra’s algorithm. However, here we use two 2D arrays for that purpose, instead of two 1D arrays.
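
  • Below is a minimal C++ sketch of the pseudocode above with a predecessor matrix added; the edge weights are chosen only to mirror the 8 → 5 → 4 improvements described earlier (the figure’s exact edges are not reproduced here):

    #include <iostream>
    #include <limits>
    #include <vector>

    // Sketch of Floyd-Warshall with distance and predecessor matrices.
    // dist[i][j] is the current best-known path weight from i to j;
    // pred[i][j] is the vertex immediately before j on that path (-1 if none).
    int main() {
        const long long INF = std::numeric_limits<long long>::max();
        int n = 5;  // vertices A=0, B=1, C=2, D=3, E=4
        std::vector<std::vector<long long>> dist(n, std::vector<long long>(n, INF));
        std::vector<std::vector<int>> pred(n, std::vector<int>(n, -1));

        // Initialization: dist[i][i] = 0; dist[i][j] = weight of edge (i, j) if it exists.
        for (int i = 0; i < n; ++i) dist[i][i] = 0;
        auto addEdge = [&](int u, int v, long long w) { dist[u][v] = w; pred[u][v] = u; };
        addEdge(0, 1, 8);   // A -> B
        addEdge(0, 2, 2);   // A -> C
        addEdge(2, 1, 3);   // C -> B
        addEdge(2, 4, 1);   // C -> E
        addEdge(4, 1, 1);   // E -> B

        // Main triple loop: allow vertex k as an intermediate vertex.
        for (int k = 0; k < n; ++k)
            for (int i = 0; i < n; ++i)
                for (int j = 0; j < n; ++j)
                    if (dist[i][k] != INF && dist[k][j] != INF &&
                        dist[i][k] + dist[k][j] < dist[i][j]) {
                        dist[i][j] = dist[i][k] + dist[k][j];
                        pred[i][j] = pred[k][j];   // j is now reached via k's path
                    }

        std::cout << dist[0][1] << '\n';           // prints: 4  (A -> C -> E -> B)
        // Walk pred backwards to reconstruct the A -> B path (printed in reverse).
        for (int v = 1; v != -1; v = pred[0][v])
            std::cout << v << ' ';                 // prints: 1 4 2 0
        std::cout << '\n';
    }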

  • Here is a visualization of the Floyd–Warshall algorithm.
