Have you been curious to learn how you can use Machine Learning in Python for algorithmic trading?
In recent years, machine learning, and more specifically machine learning in Python, has become a buzzword for many quant firms. In their quest for the elusive alpha, a number of funds and trading firms have adopted machine learning algorithms for algorithmic trading.
While the algorithms deployed by quant hedge funds are never made public, we know that top funds employ machine learning algorithms for trading to a large extent.
Take, for example, Man Group's AHL Dimension programme ⁽¹⁾, a $5.1 billion hedge fund that is partially managed by AI. There is also Taaffeite Capital ⁽²⁾, which states that it trades in a fully systematic and automated fashion using proprietary machine learning systems.
In this Python machine learning tutorial, we will first look at how machine learning has transformed the world of trading and then create a simple Python machine learning algorithm to predict the next day’s closing price for a stock.
Thus, in this Python machine learning tutorial, we will cover the following topics:
- How Machine Learning in Python gained popularity
- Why use Machine Learning for trading in Python?
- Prerequisites for creating Machine Learning algorithms for trading using Python
- Getting the data and making it usable for the machine learning algorithm
- Creating Hyperparameters
- Splitting the data into test and train sets
- Getting the best fit parameters to create a new function
- Making the predictions and checking the performance
- Bonus content
How Machine Learning in Python gained popularity
Machine learning packages/libraries are developed in-house by firms for their proprietary use or by third parties who make it freely available to the user community.
In recent years, the number of machine learning packages has increased substantially, which has helped the developer community access various machine learning techniques and apply them to their trading needs.
There are hundreds of ML algorithms, which can be classified into different types depending on how they work.
For example, machine learning regression algorithms are used to model the relationship between variables; decision tree algorithms construct a model of decisions and are used in classification or regression problems. Of these, some algorithms have become popular among quants.
Some of these include:
- Linear Regression
- Logistic Regression
- Random Forests (RF)
- Support Vector Machine (SVM)
- k-Nearest Neighbor (kNN)
- Classification and Regression Tree (CART)
- Deep Learning
These Machine Learning algorithms for trading are used by trading firms for various purposes including:
- Analyzing historical market behaviour using large data sets
- Determining optimal inputs (predictors) to a strategy
- Determining the optimal set of strategy parameters
- Making trade predictions etc.
Why use Machine Learning for trading in Python?
Over the years, Python has become a popular language for programmers, with a generally active and enthusiastic community that is always there to support each other.
According to Stack Overflow's 2020 Developer Survey ⁽³⁾, Python is the language developers most want to learn, taking the top spot for the fourth year in a row.

Python trading has gained traction in the quant finance community as it makes it easy to build intricate statistical models, thanks to the availability of mature scientific libraries like:
- Pandas
- NumPy
- PyAlgoTrade
- Pybacktest and more.
Frequent updates to Python trading libraries are a regular occurrence in the developer community. In fact, scikit-learn is a Python package developed specifically for machine learning which features various classification, regression and clustering algorithms ⁽⁴⁾. Thus, it only makes sense for a beginner (or even an established trader) to start out in the world of machine learning in Python.
The rise of technology and electronic trading has only accelerated the rate of automated trading in recent years ⁽⁵⁾. For a trader or a fund manager, the pertinent question is “How can I apply this new tool to generate more alpha?”. I will explore one such model that answers this question now.
Prerequisites for creating Machine Learning algorithms for trading using Python
You can install all the packages with a single line, “pip install numpy pandas …”, or run the commands below one at a time in the Anaconda Prompt. To know more about NumPy, click here for the Python NumPy Tutorial.
- pip install pandas
- pip install pandas-datareader
- pip install numpy
- pip install scikit-learn
- pip install matplotlib
Before we go any further, let me state that this code is written in Python 2.7. So let’s dive in.
Problem Statement
Let’s start by understanding what we are aiming to do. By the end of this Python machine learning tutorial, I will show you how to create an algorithm that can predict the closing price of a day from the previous day’s OHLC (Open, High, Low, Close) data.
I also want to monitor how the prediction error changes with the size of the input data.
Let us import all the libraries and packages needed for us to build this machine learning algorithm.
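Here is a minimal sketch of the imports used in the rest of this tutorial; the exact list in your own code may differ slightly, and SimpleImputer assumes a recent version of scikit-learn.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# pandas-datareader is used to pull the market data
from pandas_datareader import data as pdr

# scikit-learn pieces used for the Lasso regression pipeline and cross-validation
from sklearn.linear_model import Lasso
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV
```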
Getting the data and making it usable for the machine learning algorithm
To create any algorithm we need data to train the algorithm and then to make predictions on new unseen data. In this Python machine learning tutorial, we will fetch the data from Yahoo.
To accomplish this, we will use the DataReader function from the pandas-datareader library. This function is extensively used, and it enables you to get data from many online data sources.
We are fetching the data of the SPDR ETF linked to the S&P 500. This stock can be used as a proxy for the performance of the S&P 500 index. We specify the year starting from which we will be pulling the data.
Once the data is in, we will discard any data other than the OHLC, such as volume and adjusted close, to create our data frame ‘df’.
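A sketch of this step, assuming the Yahoo source of pandas-datareader, the SPY ticker as the S&P 500 proxy, and an illustrative start date:

```python
# Fetch daily data for the SPDR S&P 500 ETF (ticker assumed to be SPY) from Yahoo
df = pdr.DataReader('SPY', data_source='yahoo', start='2010-01-01')

# Keep only the OHLC columns, discarding volume and adjusted close
df = df[['Open', 'High', 'Low', 'Close']]
```

If the Yahoo endpoint gives you trouble, the yfinance package is a commonly used alternative source for the same data.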
Now, since we need to make our predictions from past data, these lagged features will be the inputs to the machine learning model. So, let's create new columns in the data frame that contain the data with a one-day lag.
Note that the new columns use lower-case names, in contrast to the capitalised names of the original OHLC columns.
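A sketch of the lagged columns (note the lower-case names):

```python
# One-day-lagged OHLC features, stored in lower-case columns
df['open'] = df['Open'].shift(1)
df['high'] = df['High'].shift(1)
df['low'] = df['Low'].shift(1)
df['close'] = df['Close'].shift(1)

# The first row has no previous day to look back on, so drop the resulting NaNs
df = df.dropna()
```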
Creating Hyperparameters
Although the concept of hyperparameters is worthy of a blog in itself, for now I will just say a few words about them. These are parameters that the machine learning algorithm cannot learn from the data itself but instead needs to be iterated over. We use them to see which predefined functions or parameter values yield the best-fit function.
In this example, I have used Lasso regression which uses L1 type of regularization. This is a type of machine learning model based on regression analysis which is used to predict continuous data.
This type of regularization is very useful for feature selection, as it is capable of shrinking coefficient values all the way to zero. The SimpleImputer function replaces any NaN values that could affect our predictions with mean values, as specified in the code.
The ‘steps’ variable is a list of functions that are incorporated as part of the Pipeline function. The pipeline is a very efficient tool for carrying out multiple operations on the data set. Here we have also passed the Lasso function parameters along with a list of values that can be iterated over.
Although I am not going into the details of what exactly these parameters do, they are worth digging deeper into. Finally, I called the randomized search function to perform the cross-validation.
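A sketch of the pipeline and the randomized search, under the assumption that the imputation step uses the column mean and that the alpha and max_iter ranges shown are only illustrative:

```python
# Impute NaNs with the column mean, then fit a Lasso (L1-regularised) regression
steps = [('imputation', SimpleImputer(missing_values=np.nan, strategy='mean')),
         ('lasso', Lasso())]
pipeline = Pipeline(steps)

# Candidate hyperparameter values to be iterated over (illustrative ranges)
parameters = {'lasso__alpha': np.arange(0.0001, 10, 0.0005),
              'lasso__max_iter': [1000, 5000, 10000, 50000]}

# Randomized search with 5-fold cross-validation
reg = RandomizedSearchCV(pipeline, parameters, cv=5)
```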
In this example, we used 5-fold cross-validation. In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k-1 subsamples are used as training data.
The cross-validation process is then repeated k times (the folds), with each of the k subsamples used exactly once as the validation data. Cross-validation combines (averages) measures of fit (prediction error) to derive a more accurate estimate of model prediction performance.
Based on the fit parameters we decide on the best features. In the next section of the Python machine learning tutorial, we will look into the test and train sets.
Splitting the data into test and train sets
First, let us split the data into the input values and the prediction values. Here we pass on the OHLC data with one day lag as the data frame X and the Close values of the current day as y. Note the column names below in lower-case.
In this example, to keep the Python machine learning tutorial short and relevant, I have chosen not to create any polynomial features but to use only the raw data. If you are interested in various combinations of the input parameters and higher-degree polynomial features, you are free to transform the data using the PolynomialFeatures() function from the preprocessing package of scikit-learn.
Now, let us also create a dictionary that holds the size of the train data set and its corresponding average prediction error.
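A sketch of the feature/target split and the error-tracking dictionary (the name avg_err is an illustrative assumption):

```python
# Inputs: previous day's OHLC (lower-case, lagged columns); target: today's close
X = df[['open', 'high', 'low', 'close']]
y = df['Close']

# Dictionary mapping the train-set size to its average absolute prediction error
avg_err = {}
```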
Getting the best fit parameters to create a new function
I want to measure the performance of the regression function relative to the size of the input dataset. In other words, I want to see whether increasing the input data reduces the error. For this, I used a for loop to iterate over the same data set but with different lengths.
At this point, I would like to add that for those of you who are interested, explore the ‘reset’ function and how it will help us in making a more reliable prediction.
(Hint: it is a part of the IPython magic commands)
Let me explain what I did in a few steps.
First, I created a set of periodic numbers ‘t’ starting from 50 to 97, in steps of 3. The purpose of these numbers is to choose the percentage size of the dataset that will be used as the train data set.
Second, for a given value of ‘t’, I split the data set at the nearest integer index corresponding to this percentage. Then I divided the total data into train data, which includes the data from the beginning till the split, and test data, which includes the data from the split till the end. The reason for adopting this approach and not using a random split is to maintain the continuity of the time series.
After this, we pull the best parameters that generated the lowest cross-validation error and then use these parameters to create a new regression object, reg1, which is a simple Lasso regression fitted with the best parameters.
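A sketch of this loop, using the X, y, reg and avg_err objects defined above; best_alpha and best_iter are illustrative names, and the prediction and error-tracking lines are added inside the same loop in the next section:

```python
# Iterate over train-set sizes from 50% to 97% of the data, in steps of 3%
for t in np.arange(50, 97, 3):
    # Split at the nearest integer index corresponding to t percent of the data
    split = int(t * len(X) / 100)

    # Cross-validated randomized search, trained only on the data before the split
    reg.fit(X[:split], y[:split])
    best_alpha = reg.best_params_['lasso__alpha']
    best_iter = reg.best_params_['lasso__max_iter']

    # reg1: a plain Lasso regression refitted with the best parameters found above
    reg1 = Lasso(alpha=best_alpha, max_iter=int(best_iter))
    reg1.fit(X[:split], y[:split])
    # ...prediction and error tracking continue inside this loop in the next section
```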
Making the predictions and checking the performance
Now let us predict the future close values. To do this we pass on test X, containing data from split to end, to the regression function using the predict() function. We also want to see how well the function has performed, so let us save these values in a new column.
As you might have noticed, I created a new error column to save the absolute error values. Then I took the mean of the absolute error values, which I saved in the dictionary that we had created earlier.
Now it's time to plot and see what we got.
I created a new Range value to hold the average daily trading range of the data. It is the metric that I would like to compare my prediction error against. The logic behind this comparison is that if my prediction error is larger than the day’s average range, then it is likely that the prediction will not be useful.
I might as well use the previous day’s High or Low as the prediction, which will turn out to be more accurate. Please note I have used the split value outside the loop. This implies that the average range of the day that you see here is relevant to the last iteration.
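Putting those steps into code, the indented lines below sit inside the same for loop as the previous snippet, while the range calculation and the scatter plot come after the loop (df_test is an illustrative name):

```python
    # Still inside the for loop: predict the close for the test period
    df_test = pd.DataFrame(index=y[split:].index)
    df_test['Close'] = y[split:]
    df_test['Predicted'] = reg1.predict(X[split:])

    # Absolute prediction error and its mean for this train-set size
    df_test['error'] = (df_test['Predicted'] - df_test['Close']).abs()
    avg_err[t] = df_test['error'].mean()

# Outside the loop: average daily range of the test period (last iteration's split)
Range = (df['high'][split:] - df['low'][split:]).mean()
print('Average daily range of the test period:', Range)

# Scatter plot of train-set size (%) versus average absolute prediction error
plt.scatter(list(avg_err.keys()), list(avg_err.values()))
plt.xlabel('Train data size (%)')
plt.ylabel('Average absolute prediction error')
plt.show()
```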
Let’s execute the code and see what we get.

Some food for thought.
What does this scatter plot tell you? Let me ask you a few questions.
- Is the equation over-fitting?
- The performance improved remarkably as the train data set size increased. Does this mean that if we give more data, the error will reduce further?
- Is there an inherent trend in the market, allowing us to make better predictions as the data set size increases?
- Last, but the best question: how will we use these predictions to create a trading strategy?
Bonus Content
FAQ related to Machine Learning algorithms for Trading using Python
At the end of the last section of the tutorial Machine Learning algorithms for Trading, I asked a few questions. Now, I will answer them all at the same time. I will also discuss a way to detect the regime/trend in the market without training the algorithm for trends.
You can read more about 5 Things to know before starting Algorithmic Trading
But before we go ahead, please note that you will need a fix to fetch the data from Google in order to run the code below.
If you face challenges while downloading the market data from Yahoo and Google Finance platforms and are looking for an alternative source for market data, you can use Quandl for the same.
Let’s start with the questions now, shall we?
Q: Is the equation over-fitting?
A: This was the first question I had asked. To know whether your model is overfitting or not, the best test is to check the prediction error that the algorithm makes on the train and test data.
To do this, we will have to add a small piece of code to the already written code.
First, let me begin my explanation by apologizing for breaking the norms: going beyond the 80 column mark.
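A sketch of that small addition, placed inside the same for loop as before (the print format is an assumption, and yes, the lines run past 80 columns):

```python
    # Inside the for loop: compare in-sample (train) and out-of-sample (test) errors
    train_error = (pd.Series(reg1.predict(X[:split]), index=y[:split].index) - y[:split]).abs().mean()
    test_error = (pd.Series(reg1.predict(X[split:]), index=y[split:].index) - y[split:]).abs().mean()
    print('Train size: %d%%, train error: %.2f, test error: %.2f' % (t, train_error, test_error))
```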
Second, if we run this piece of code, then the output would look something like this.

Our algorithm is doing better on the test data than on the train data. This observation in itself is a red flag. There are a few reasons why our test data error could be better than the train data error:
- If the train data had greater volatility (Daily range) compared to the test set, then the prediction would also exhibit greater volatility.
- If there was an inherent trend in the market that helped the algo make better predictions.
Now, let us check which of these cases is true. If the range of the test data was less than the train data, then the error should have decreased after passing more than 80% of the data as a train set, but it increases.
Next, to check if there was a trend, let us pass more data from a different time period.
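In practice, this just means re-running the download with an earlier start date, for example (the ticker and dates are only illustrative):

```python
# Pull a longer history from a different time period by moving the start date back
df = pdr.DataReader('SPY', data_source='yahoo', start='2000-01-01')
```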
If we run the code the result would look like this:

So, giving more data did not make the algorithm work better; it made it worse. In time-series data, the inherent trend plays a very important role in the performance of the algorithm on the test data.
As we saw above, it can sometimes yield better-than-expected results. The main reason our algo was doing so well was that the test data stuck to the main pattern observed in the train data.
So, if our algorithm can detect the underlying trend and use a strategy suited to that trend, then it should give better results. I will explain this in more detail below.
Q: Can the machine learning algorithm detect the inherent trend or market phase (bull/bear/sideways/breakout/panic)?
Q: Can the database be trimmed in a way to train different algos for different situations?
A: The answer to both the questions is YES!
We can divide the market into different regimes and then use these signals to trim the data and train different algorithms on these datasets. To achieve this, I chose to use an unsupervised machine learning algorithm.
From here on, this Python machine learning tutorial will be dedicated to creating an algorithm that can detect the inherent trend in the market without explicitly training for it.
First, let us import the necessary libraries.
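A sketch of the imports needed for this part of the tutorial (seaborn is assumed here for the regime plot):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandas_datareader import data as pdr

# Gaussian mixture model for unsupervised regime detection
from sklearn.mixture import GaussianMixture
```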
Then we fetch the OHLC data from Google and shift it by one day to train the algorithm only on the past data.
Then drop all the NaN values.
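A sketch of the data preparation; since Google Finance is no longer served by pandas-datareader, this version reuses the Yahoo source and the SPY ticker from earlier (the start date is illustrative):

```python
# Fetch OHLC data (Yahoo used here instead of the original Google source)
df = pdr.DataReader('SPY', data_source='yahoo', start='2010-01-01')
df = df[['Open', 'High', 'Low', 'Close']]

# Shift by one day so the algorithm only ever sees past data, then drop the NaNs
df['open'] = df['Open'].shift(1)
df['high'] = df['High'].shift(1)
df['low'] = df['Low'].shift(1)
df['close'] = df['Close'].shift(1)
df = df.dropna()
```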
Next, we will instantiate an unsupervised machine learning algorithm using the ‘Gaussian mixture’ model from sklearn.
In the above code, I created an unsupervised algo that will divide the market into 4 regimes, based on criteria of its own choosing. We have not provided any train dataset with labels as in the previous section of the Python machine learning tutorial.
Next, we will fit the data and predict the regimes. Then we will be storing these regime predictions in a new variable called regime.
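A sketch of the instantiation and the fit/predict step; the variable name unsup, the covariance_type and the random_state are assumptions:

```python
# Unsupervised model with four components, one per market regime
unsup = GaussianMixture(n_components=4, covariance_type='spherical', random_state=42)

# Fit on the lagged OHLC features and predict a regime label for every day
features = df[['open', 'high', 'low', 'close']]
unsup.fit(features)
regime = unsup.predict(features)
```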
Now let us calculate the returns of the day.
Then, create a dataframe called Regimes which will have the OHLC and Return values along with the corresponding regime classification.
After this, let us create a list called ‘order’ that has the values corresponding to the regime classification, and then plot these values to see how well the algo has classified.
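A sketch of these three steps, with log returns used for the daily return calculation (an assumption) and a seaborn FacetGrid used for the plot:

```python
# Daily log return of the close; the first value has no predecessor, so fill it with 0
df['Return'] = np.log(df['Close'] / df['Close'].shift(1)).fillna(0)

# Regimes: OHLC and Return values along with the predicted regime classification
Regimes = df[['Open', 'High', 'Low', 'Close', 'Return']].copy()
Regimes['Regime'] = regime
Regimes['Date'] = Regimes.index

# Plot the returns over time, coloured by regime, to see how the algo has classified
order = [0, 1, 2, 3]
fig = sns.FacetGrid(data=Regimes, hue='Regime', hue_order=order, aspect=2, height=4)
fig.map(plt.scatter, 'Date', 'Return', s=4).add_legend()
plt.show()
```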
The final regime differentiation would look like this:

This graph looks pretty good to me. Without actually looking at the factors based on which the classification was done, we can conclude a few things just by looking at the chart.
- The red zone is the low volatility or sideways zone.
- The purple zone is the high volatility or panic zone.
- The green zone is a breakout zone.
- The blue zone: not entirely sure, but let us find out.
Use the code below to print the relevant data for each regime.
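A sketch of that print loop, reading the fitted means and covariances straight off the Gaussian mixture model (index 3 picks the lagged close column):

```python
# Print the mean (of the lagged close) and covariance learned for each regime
for i in order:
    print('Regime %d' % i)
    print('  Mean:       %.4f' % unsup.means_[i][3])
    print('  Covariance: %.4f' % unsup.covariances_[i])
```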
The output would look like this:

The data can be inferred as follows:
- Regime 0: Low mean and High covariance.
- Regime 1: High mean and High covariance.
- Regime 2: High mean and Low covariance.
- Regime 3: Low mean and Low covariance.
So far, we have seen how we can split the market into various regimes.
But the question of implementing a successful strategy is still unanswered. If you want to learn how to code a machine learning trading strategy then your choice is simple:
To rephrase Morpheus from the Matrix movie trilogy,
This is your last chance. After this, there is no turning back.
You take the blue pill—the story ends, you wake up in your bed and believe that you can trade manually.
You take the red pill—you stay in the Algoland, and I show you how deep the rabbit hole goes.
Remember: All I'm offering is the truth. Nothing more.
A step further into the world of Machine Learning algorithms for Trading
Keeping oneself updated is of prime importance in today’s world. Having a learner’s mindset always helps you enhance your career and pick up skills and additional tools for developing trading strategies for yourself or your firm.
Here are a few books which might be interesting:
- An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
- The Hundred-Page Machine Learning Book by Andriy Burkov
- The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman
Machine Learning Competitions
There are a number of sites which host ML competitions. Although these competitions are not specifically targeted towards the application of Python machine learning in trading, they can give quants and traders good exposure to different ML problems via participation in competitions and forums, and help expand their ML knowledge. Some of the popular ML competition hosting sites include:
You will find it very useful and informative to read through this curated compilation of some of our top blogs on:
Machine Learning
Python for Trading
Sentiment Trading
Algorithmic Trading
Options Trading
Technical Analysis
Conclusion
Overall, we have gone through the entire journey of how you can learn, create and use your very own machine learning models in Python for algorithmic trading, using various examples. The entire process is explained with the help of Python code that will be helpful in your practice as well.
If you have any comments or suggestions about this article, do feel free to share them with us in the comments below.
If you too wish to create trading strategies and understand the limitations of your models, check out this course on ‘Decision Trees in Trading‘ authored by Dr. Ernest P. Chan which demystifies the black box within classification trees.
Disclaimer: All data and information provided in this article are for informational purposes only. QuantInsti® makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.