Dijkstra algorithm

9 min read

By Mario Pisa

Start learning all about the Dijkstra algorithm for finding the shortest path. We briefly review the Kruskal, Prim, Johnson and Bellman-Ford algorithms as well.


What is the Dijkstra algorithm?

Edsger W. Dijkstra (1930-2002) was an eminent Dutch computer scientist who made great advances in distributed and concurrent computing, among other fields of mathematics and computer science.

The algorithm that bears his name finds the optimal solution to the shortest path problem in a graph or network, although, as we will see below, there are refinements of the algorithm that improve its efficiency.

The Dijkstra algorithm belongs to the family of so-called greedy algorithms: it makes each decision based only on the information available at that moment, without considering how the decision may affect future steps.

Greedy algorithms are typically used to solve optimization problems, such as selecting the shortest path or the best order to execute tasks on a computer.

In this context, the greedy algorithm selects the most promising segment or task at a given instant, and never goes back to reconsider whether that was the best decision. It is therefore a simple algorithm to implement, since there is no need to keep track of alternatives or to backtrack and undo previous decisions.

Greedy algorithms such as the Dijkstra, Kruskal or Prim algorithms are characterized by the following general properties:

  • They solve an optimization problem in which the solution is built from a set or list of candidates, such as the edges of a graph or the tasks to schedule.
  • As the algorithm progresses, two sets are accumulated: one containing the candidates that have been evaluated and selected, and the other containing the candidates that have been evaluated but rejected.
  • There is a function that checks whether a set of candidates forms a solution to the problem, ignoring for the moment whether it is the optimal solution.
  • There is a second function that tests whether a given set of candidates is feasible, i.e. whether it can still be extended with further candidates to reach a solution. Again, whether that solution is optimal is ignored for the moment.
  • There is a third function that selects, among the candidates that have been neither selected nor rejected, the most promising one at a given point in time.
  • Finally, there is a fourth function, the objective function, that returns the value of the solution we have found, although strictly speaking it is not part of the greedy algorithm.

To solve the problem, we look for a set of candidates that, first, constitutes a solution and, second, optimizes the value of the objective function. Greedy algorithms proceed step by step.

Initially the set of selected candidates is empty, and at each step the selection function proposes the best remaining candidate to be added to this set.

  • If the new extended set of selected candidates is not feasible, i.e. it can no longer lead to a solution, optimal or not, the candidate is rejected.
  • If the extended set of candidates can still lead to a solution, optimal or not, the new candidate is added to the set of selected candidates.

In this way, we advance step by step until a complete solution is found; for problems where the greedy strategy is correct, such as those solved by the Dijkstra, Kruskal and Prim algorithms, that solution is necessarily optimal.
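A minimal Python sketch of this general scheme, where the functions solution, feasible and select are hypothetical placeholders for the three helper functions described above:

def greedy(candidates, solution, feasible, select):
    # Generic greedy scheme: builds a set of selected candidates step by step.
    candidates = set(candidates)
    selected = set()
    while candidates and not solution(selected):
        best = select(candidates)            # most promising candidate right now
        candidates.remove(best)
        if feasible(selected | {best}):      # can it still lead to a solution?
            selected = selected | {best}
        # otherwise the candidate is rejected and never reconsidered
    return selected if solution(selected) else None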

As we have said, these characteristics are common to greedy algorithms although, as we will see later, the Dijkstra, Kruskal and Prim algorithms each have their own distinctive features.


How does the Dijkstra algorithm work?

The Dijkstra algorithm solves the minimum path problem for a given graph.

Consider a directed graph G = {N, E}, where N is the set of nodes of G and E is the set of directed edges. Each edge has a non-negative length (we can also talk about weight or cost), and one of the nodes is taken as the origin node.

The problem is to determine the length of the minimum path from the origin to each of the other nodes.

As we have seen in the general characteristics of greedy algorithms, the Dijkstra algorithm uses two sets of nodes, S and C. The set S holds the nodes that have already been selected, whose distance to the origin node is known at a given time. The set C contains all the candidate nodes that have not yet been selected and whose distance is therefore not yet final.

From this we derive the invariant property N = S ∪ C.

That is, the set of nodes is equal to the union of the sets of selected nodes and unselected nodes.

In the first step of the algorithm the set S contains only the origin node, and when the algorithm finishes it contains all the nodes of the graph, by which time the minimum distance from the origin to each node is known.

We say that a path from the origin to a node is special if all of its intermediate nodes belong to the set of selected nodes S. The Dijkstra algorithm maintains an array D that is updated at each step with the length (or weight) of the shortest special path to each node.

When a new node v is added to S, the shortest special path to v is also the shortest path from the origin to v (see the proof in the reference book). When the algorithm is finished, all the nodes are in S and the array D contains the lengths of the shortest paths from the origin to every other node in the graph, and thus the solution to our minimum path problem.

Let's see how the Dijkstra algorithm works in pseudo-code before looking at the Python implementation.


Pseudo code of Dijkstra algorithm

The pseudo code of the Dijkstra algorithm works on the sets S and C and the array D described above: starting with S containing only the origin, it repeatedly moves the candidate node with the smallest value in D from C to S and updates D for the remaining candidates.
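A minimal Python rendering of this pseudocode; it assumes the graph is given as a length matrix L, where L[u][v] is the length of the edge from u to v (float("inf") if there is no edge) and node 0 is the origin:

def dijkstra(L):
    n = len(L)
    C = set(range(1, n))                  # candidate nodes; S = {0} implicitly
    D = [L[0][v] for v in range(n)]       # shortest known special path to each node
    D[0] = 0
    for _ in range(n - 2):                # n-2 iterations suffice (see below)
        v = min(C, key=lambda u: D[u])    # candidate with the shortest special path
        C.remove(v)                       # v joins the set S of selected nodes
        for w in C:                       # does going through v improve D[w]?
            D[w] = min(D[w], D[v] + L[v][w])
    return D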


Dijkstra algorithm table

Assume a weighted directed graph with node 1 as the origin and candidate nodes 2 to 5. The table below traces the algorithm on this graph: at each step, v is the node selected from C, C is the remaining candidate set and D holds the current shortest special path lengths from the origin to nodes 2 to 5.

Step            v    C               D
Initialization  -    {2, 3, 4, 5}    [50, 30, 100, 10]
1               5    {2, 3, 4}       [50, 30, 20, 10]
2               4    {2, 3}          [40, 30, 20, 10]
3               3    {2}             [35, 30, 20, 10]

Obviously the array D would not change if we did one more iteration to remove the last element of C, {2}, which is why the main loop repeats only n-2 times.
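Running the sketch above on a length matrix consistent with this table reproduces the trace (node 1 of the table is index 0 below; edge lengths not pinned down by the table are set to infinity):

inf = float("inf")
L = [
    [0,   50,  30,  100, 10 ],   # edges leaving the origin (node 1)
    [inf, 0,   inf, inf, inf],   # node 2 has no outgoing edges here
    [inf, 5,   0,   inf, inf],   # node 3 -> node 2 costs 5
    [inf, 20,  inf, 0,   inf],   # node 4 -> node 2 costs 20
    [inf, inf, inf, 10,  0  ],   # node 5 -> node 4 costs 10
]
print(dijkstra(L))   # [0, 35, 30, 20, 10]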


Dijkstra algorithm time complexity

The graph is given as a length matrix L[1..n, 1..n]; the initialisation of the array D from L takes a time that is in the order of n, i.e. \(O(n)\).

The repeat loop runs n-2 times, and selecting the most promising candidate requires looping through all the elements of C, so the total time is in \(O(n^2)\).

The inner for each loop also runs through all the elements of C at each iteration, so its total time is likewise in \(O(n^2)\).

Therefore, the simple implementation of the Dijkstra algorithm requires a runtime that is in \(O(n^2)\).

Depending on the implementation, and as long as the number of edges \(a\) is much smaller than \(n^2\), we can improve the complexity to \(O((a + n) \log n)\) by keeping the candidates in a heap, which becomes \(O(a \log n)\) if the graph is connected. If the graph is dense, however, the heap only pays off while \(a\) stays in the order of \(n^2 / \log n\) or below; beyond that the simple \(O(n^2)\) implementation is just as good.

To improve on the simple Dijkstra algorithm, one can implement the priority queue with k-ary heaps, as proposed by Johnson. Fredman and Tarjan further improved the complexity to \(O(a + n \log n)\) by using a Fibonacci heap.
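As an illustration of the heap-based approach, here is a minimal sketch using Python's built-in binary heap (heapq) and an adjacency-list representation of the graph; outdated heap entries are simply skipped when they are popped:

import heapq

def dijkstra_heap(adjacent, origin):
    # adjacent: dict mapping each node to a list of (neighbour, length) pairs
    D = {origin: 0}
    heap = [(0, origin)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > D[v]:
            continue                      # stale entry: v was already settled closer
        for w, length in adjacent[v]:
            if d + length < D.get(w, float("inf")):
                D[w] = d + length
                heapq.heappush(heap, (d + length, w))
    return D                              # shortest distance from the origin to each reachable node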

See references for more information.


When to use the Dijkstra algorithm?

As we have seen, the Dijkstra algorithm is used to solve the problem of minimum paths in a directed graph. The implementation we have been analysing always gives us the optimal lengths of the minimum paths from the origin to every other node.

Typical applications of the Dijkstra algorithm:

  • Robot navigation.
  • Typesetting in TeX.
  • Urban traffic planning.
  • Tramp steamer problem.
  • Optimal pipelining of VLSI chips.
  • Telemarketer operator scheduling.
  • Subroutine in higher level algorithms.
  • Routing of telecommunications messages.
  • Approximating piecewise linear functions.
  • Exploiting arbitrage opportunities in currency exchange.
  • Open Shortest Path First (OSPF) routing protocol for IP.
  • Optimal truck routing through a given traffic congestion pattern.

Graphs

Graph            Vertices                       Edges
Financial        Stocks, currency               Transactions
Neural networks  Neurons                        Synapses
Scheduling       Tasks                          Precedence constraints
Communication    Telephones, computers          Fibre optic cables
Circuits         Gates, registers, processors   Wires
Mechanical       Joints                         Rods, beams, springs
Hydraulic        Reservoirs, pumping stations   Pipelines
Games            Board positions                Legal moves
Internet         Web pages                      Hyperlinks


Dijkstra algorithm vs Kruskal algorithm

Both the Dijkstra algorithm and the Kruskal algorithm belong to the family of greedy algorithms. While the Dijkstra algorithm is used to solve the shortest path problem, the Kruskal algorithm is used to find the minimum spanning tree of a graph.


Dijkstra algorithm vs Prim algorithm

Both the Dijkstra algorithm and the Prim algorithm belong to the family of greedy algorithms. While the Dijkstra algorithm is used to solve the shortest path problem, the Prim algorithm, like the Kruskal algorithm, is used to find the minimum spanning tree of a graph.


Prim algorithm vs Kruskal algorithm

What is the difference between the Prim algorithm and the Kruskal algorithm? The difference lies in how the greedy strategy is applied. The Kruskal algorithm always chooses the remaining edges in increasing order of length and merges disjoint subsets of nodes until they form the optimal tree, while the Prim algorithm grows a single tree that is optimal for the set of nodes reached so far, as the sketches below illustrate.
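Minimal Python sketches of both approaches illustrate the difference; the edge-list format (length, u, v) and the use of node 0 as Prim's starting point are assumptions of this example:

import heapq

def kruskal(n, edges):
    # edges: list of (length, u, v) tuples with nodes numbered 0..n-1
    parent = list(range(n))                  # disjoint subsets of nodes (union-find)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for length, u, v in sorted(edges):       # edges in increasing order of length
        ru, rv = find(u), find(v)
        if ru != rv:                         # only join two different subsets
            parent[ru] = rv
            tree.append((u, v, length))
    return tree

def prim(n, edges):
    # grows a single tree outwards from node 0, always taking the shortest edge
    adjacent = {u: [] for u in range(n)}
    for length, u, v in edges:
        adjacent[u].append((length, u, v))
        adjacent[v].append((length, v, u))
    in_tree, tree = {0}, []
    heap = list(adjacent[0])
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        length, u, v = heapq.heappop(heap)   # shortest edge leaving the current tree
        if v not in in_tree:
            in_tree.add(v)
            tree.append((u, v, length))
            for e in adjacent[v]:
                heapq.heappush(heap, e)
    return tree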


How to find the shortest path using the Dijkstra algorithm?

In the following example, we will use the Dijkstra algorithm implemented in the dijkstra library. However, rather than installing the library in our development environment, we will copy the code of the two necessary classes directly, so that we can analyse the algorithm.

Note that the Dijkstra algorithm is also implemented in other Python modules, such as the dijkstra function in scipy.sparse.csgraph.

The class ‘DijkstraSPF’ inherits from ‘AbstractDijkstraSPF’ and implements the two methods get_adjacent_nodes and get_edge_weight.

The __init__ method of the ‘AbstractDijkstraSPF’ class contains the Dijkstra algorithm as we have seen in the previous pseudocode.

The Graph class simply provides methods to build and query a graph.

Let's try the shortest path from A to E. As we can see, the direct path has a weight of 15; however, if we go through the nodes A-B (6), B-C (1), C-D (1) and D-E (4), that adds up to a total weight of 12, which means that this path, despite having more nodes, has a lower cost.

Example - The shortest path using the Dijkstra algorithm

The classes below are extracted from the dijkstra library on GitHub.

# If we install the dijkstra library, we can import the classes as usual.
# from dijkstra import DijkstraSPF, Graph

Let's create a simple graph to test the Dijkstra shortest path algorithm.
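A sketch of that test, assuming the Graph.add_edge, DijkstraSPF.get_distance and DijkstraSPF.get_path methods of the library described above; the edge weights are the ones from the figure, and only the forward direction of each segment is added:

A, B, C, D, E = nodes = ["A", "B", "C", "D", "E"]

graph = Graph()
graph.add_edge(A, B, 6)     # A-B
graph.add_edge(B, C, 1)     # B-C
graph.add_edge(C, D, 1)     # C-D
graph.add_edge(D, E, 4)     # D-E
graph.add_edge(A, E, 15)    # direct segment A-E

spf = DijkstraSPF(graph, A)                 # run the algorithm from origin A

print("label distance")
for node in nodes:
    print("%-5s %9.6f" % (node, spf.get_distance(node)))

print("Shortest path:")
print(" -> ".join(spf.get_path(E)))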


label distance
A     0.000000
B     6.000000
C     7.000000
D     8.000000
E     12.000000

Shortest path:
A -> B -> C -> D -> E

If we change the weight of the segment [A, E] from 15 to 10, the algorithm now finds the direct path to node E to be the shortest.
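One way to apply the change, again assuming the same library interface as above, is to rebuild the graph with a weight of 10 for the edge (A, E) and run the algorithm again:

graph = Graph()
for u, v, w in [(A, B, 6), (B, C, 1), (C, D, 1), (D, E, 4), (A, E, 10)]:
    graph.add_edge(u, v, w)

spf = DijkstraSPF(graph, A)
print("Shortest path:")
print(" -> ".join(spf.get_path(E)))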

Shortest path:
A -> E

As we can see, Dijkstra's greedy algorithm is an optimal solution for finding shortest paths in a directed graph. However, it is not a valid algorithm for arbitrage since, once we normalise the prices to a logarithmic scale, we can obtain negative values for the weights. With negative weights the Dijkstra algorithm no longer guarantees the shortest path, and a negative cycle could be traversed indefinitely to keep lowering the cost, which is why the Bellman-Ford algorithm is used instead.


Why does the Dijkstra algorithm fail for negative weights?

The Dijkstra algorithm doesn't work for graphs with negative distances: once a node is moved to the set of selected nodes its distance is never revised, and a negative edge discovered later could provide a shorter path. Moreover, negative cycles make the shortest path undefined, since the cycle could be repeated indefinitely. Such graphs must be handled by specialized algorithms such as the Bellman-Ford algorithm or Johnson's algorithm.

For those of us working in quantitative trading this is not a minor problem, but quite the opposite: it is a serious limitation if we want to use this technique for trading.

One of the applications of this kind of shortest path graph is currency arbitrage, where each node is a currency and each directed edge carries the exchange rate between the two currencies it connects. We can therefore construct a graph for this scenario. You can also refer to the article Real Effective Exchange Rate (REER).
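A minimal sketch of this idea: take the negative logarithm of each exchange rate as the edge weight, so that a cycle whose rates multiply to more than 1 becomes a negative cycle, and let the Bellman-Ford relaxation detect it. The exchange rates below are purely hypothetical:

import math

def has_negative_cycle(nodes, edges):
    # edges: list of (u, v, weight); returns True if some negative-weight cycle exists
    distance = {u: 0.0 for u in nodes}        # starting at 0 everywhere catches any cycle
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
    # an extra round that still improves a distance reveals a negative cycle
    return any(distance[u] + w < distance[v] for u, v, w in edges)

# hypothetical rates: one unit of the first currency buys `rate` units of the second
rates = {("USD", "EUR"): 0.90, ("EUR", "GBP"): 0.85, ("GBP", "USD"): 1.35}
currencies = {c for pair in rates for c in pair}
edges = [(u, v, -math.log(rate)) for (u, v), rate in rates.items()]
print(has_negative_cycle(currencies, edges))  # True: the cycle multiplies to about 1.03 > 1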


Conclusion

As we have seen, the Dijkstra algorithm finds the optimal solution to obtain the shortest path in a graph from an origin to the rest of its nodes.

This algorithm has application in countless problems that can be represented as a graph or tree.

It differs from the Kruskal and Prim algorithms in that they focus on finding the minimum spanning tree, i.e. how to cover the entire graph efficiently, while the Dijkstra algorithm focuses on the shortest paths, where it is not necessary to use every edge of the graph, only to reach every node.

Finally, we have seen that the Dijkstra algorithm cannot handle negative edge weights, a case for which the Bellman-Ford or Johnson algorithms are used instead.

Dig into the world of algorithms and trading, start your quest to upgrade your knowledge of Algorithmic Trading with the Executive Programme in Algorithmic Trading (EPAT) - a comprehensive course covering topics ranging from Statistics & Econometrics to Financial Computing & Technology including Machine Learning and more. Check it out here.


References


Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.
