# Data Preprocessing: Python, Machine Learning, Examples and more

Data preprocessing is a basic requirement of any good machine learning model. Preprocessing the data means transforming it into a form the machine learning model can read easily. In this article, we will discuss the basics of data preprocessing and how to make the data suitable for machine learning models.

## What is data preprocessing?

Data preprocessing is the process of preparing raw data and making it suitable for machine learning models. It includes data cleaning, which makes the data ready to be fed to a machine learning model.

Our comprehensive blog on data cleaning helps you learn all about data cleaning as part of preprocessing the data, covering everything from the basics to performance considerations.

After data cleaning, the data must be transformed into a format that the machine learning model can understand.

## Why is data preprocessing required?

Data preprocessing is mainly required for the following:

• Accurate data: To be readable by a machine learning model, the data must be accurate, with no missing, redundant or duplicate values.
• Trusted data: The updated data should be as accurate and trustworthy as possible.
• Understandable data: The updated data must be interpretable in the correct way.

All in all, data preprocessing is important because a machine learning model can only arrive at the right predictions/outcomes when it learns from correct data.

## Examples of data preprocessing for different data set types with Python

Since data comes in various formats, let us discuss how different data types can be converted into a format that the ML model can read accurately. Let us see how to extract correct features from data sets with:

• Missing values
• Outliers
• Overfitting
• Data with no numerical values
• Different date formats

### Missing values

Missing values are a common problem when dealing with data! Values can be missing for various reasons, such as human errors, mechanical errors, etc.

Data cleansing is an important step before you even begin the algorithmic trading process, which starts with historical data analysis to make the prediction model as accurate as possible.

Based on this prediction model, you create the trading strategy. Leaving missing values in the data set can therefore wreak havoc: faulty predictive results lead to erroneous strategy creation and, to state the obvious, the results will not be great.

There are three techniques for solving the missing-values problem and finding the most accurate features:

• Dropping
• Numerical imputation
• Categorical imputation

#### Dropping

Dropping is the most common method of taking care of missing values. The rows, or entire columns, of the data set that contain missing values are dropped in order to avoid errors in the data analysis.

Some implementations are programmed to automatically drop the rows or columns that include missing values, resulting in a reduced training size. Hence, dropping can lead to a decrease in model performance.

A simple solution to the reduced training size caused by dropping is imputation; we will discuss the imputation methods further below. When dropping, you can define a threshold for the machine.

For instance, the threshold can be 50%, 60% or 70% of the data. Let us take 60% in our example: features with up to 60% missing values are kept in the training data set, while features with more than 60% missing values are dropped.

For dropping the values, the following Python code can be used:
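The original snippet is not shown here, so below is a minimal sketch with pandas. The data set, column names and values are hypothetical; the 60% threshold matches the example above.

```python
import numpy as np
import pandas as pd

# Hypothetical data set with missing values
df = pd.DataFrame({
    "stocks": [120, np.nan, 98, 104, np.nan],
    "commodities": [55, 61, np.nan, 58, 60],
    "derivatives": [np.nan, np.nan, np.nan, 33, np.nan],
})

threshold = 0.6  # features with more than 60% missing values are dropped

# Drop columns whose share of missing values exceeds the threshold
df = df.loc[:, df.isnull().mean() <= threshold]

# Drop the remaining rows that still contain a missing value
df = df.dropna()
print(df)
```

Here the "derivatives" column (80% missing) is dropped entirely, and then the rows that still contain a missing value are removed.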

By using the above Python code, the missing values are dropped and the machine learning model learns on the rest of the data.

#### Numerical imputation

The word imputation implies replacing missing values with a value that makes sense, and numerical imputation does this for numerical data.

For instance, in a tabular data set whose columns hold the numbers of stocks, commodities and derivatives traded in a month, it is better to replace a missing value with a "0" than to leave it as it is.

With numerical imputation, the data size is preserved, and predictive models such as linear regression can therefore predict in a more accurate manner.

A linear regression model cannot work with missing values in the data set, since it treats whatever values are present as "good estimates" and becomes biased. The missing values can also be replaced with the median of the column, since the median, unlike the mean, is not sensitive to outliers.

Let us see the Python code for numerical imputation:
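As the original snippet is missing, here is a sketch with pandas showing both options discussed above, replacing with "0" and replacing with the column median. The trade-count data is hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly trade counts with gaps
df = pd.DataFrame({
    "stocks_traded": [10, np.nan, 7, 12],
    "commodities_traded": [3, 5, np.nan, 4],
})

# Replace missing counts with 0 (no trades recorded that month)
df["stocks_traded"] = df["stocks_traded"].fillna(0)

# Alternatively, replace with the column median, which is robust to outliers
df["commodities_traded"] = df["commodities_traded"].fillna(
    df["commodities_traded"].median()
)
print(df)
```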

#### Categorical imputation

This technique of imputation replaces a missing value with the value that occurs the maximum number of times in the column. But in case no value occurs frequently enough to dominate the others, it is best to fill the gap with a placeholder such as "NaN".

The following Python code can be used here:
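The original snippet is not shown, so here is a minimal sketch with pandas: the missing entries are filled with the most frequent category (the mode). The "sector" column and its values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical categorical column with missing entries
df = pd.DataFrame({"sector": ["tech", "energy", np.nan, "tech", np.nan, "tech"]})

# Replace missing values with the most frequent category (the mode)
df["sector"] = df["sector"].fillna(df["sector"].mode()[0])
print(df["sector"].tolist())
```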

### Outliers

An outlier differs significantly from the other values and lies too far from their mean. Such outliers are usually due to systematic errors or flaws.

Let us see the following Python code for identifying and removing outliers with the standard deviation:
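Since the original snippet is missing, below is a sketch with pandas. The price series is hypothetical, and the cut-off factor of 2 standard deviations is an illustrative choice (3 is also common); values outside the [lower, upper] band are treated as outliers and removed.

```python
import pandas as pd

# Hypothetical series of stock prices with one extreme value
df = pd.DataFrame({"price": [101, 99, 102, 98, 100, 103, 250]})

factor = 2  # flag values beyond 2 standard deviations from the mean
lower = df["price"].mean() - factor * df["price"].std()
upper = df["price"].mean() + factor * df["price"].std()

# Keep only the rows that fall within [lower, upper]
df = df[(df["price"] >= lower) & (df["price"] <= upper)]
print(df)
```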

In the code above, "lower" and "upper" signify the lower and upper limits of the accepted range in the data set.

### Overfitting

In both machine learning and statistics, overfitting occurs when the model fits the training data too well or, simply put, when the model is too complex.

An overfitting model learns the detail and noise in the training data to such an extent that it negatively impacts the model's performance on new/test data.

The overfitting problem can be solved by decreasing the number of features/inputs or by increasing the number of training examples, which makes the machine learning algorithm more generalised.

The most common solution to overfitting is regularisation. Binning is a technique that helps regularise the data, although every time you bin you also lose some information.

For instance, in the case of numerical binning, the data can be as follows:

| Stock value | Bin  |
|-------------|------|
| 100-250     | Low  |
| 251-400     | Mid  |
| 401-500     | High |

Here is the Python code for binning:
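The original snippet is missing, so here is a sketch with pandas' `cut`: the bin edges follow the table above, and the sample values are chosen to match the output shown below.

```python
import pandas as pd

# Hypothetical stock values to be grouped into bins
df = pd.DataFrame({"Value": [102, 300, 107, 470]})

# Assign each value to a labelled bin based on the ranges in the table
df["Bin"] = pd.cut(df["Value"], bins=[100, 250, 400, 500],
                   labels=["Low", "Mid", "High"])
print(df)
```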

Your output should look something like this:

     Value    Bin
0     102     Low
1     300     Mid
2     107     Low
3     470     High


### Data with no numerical values

If a data set contains no numerical values, it becomes impossible for the machine learning model to learn from the information.

The machine learning model can only handle numerical values, so it is best to spread each category into its own column holding the binary values "0" or "1". This technique is known as one-hot encoding.

In this technique, the grouped columns already exist. For instance, below is a grouped column:

| Infected | Covid variants |
|----------|----------------|
| 2        | Delta          |
| 4        | Lambda         |
| 5        | Omicron        |
| 6        | Lambda         |
| 4        | Delta          |
| 3        | Omicron        |
| 5        | Omicron        |
| 4        | Lambda         |
| 2        | Delta          |

Now, the grouped data above can be encoded with the binary values "0" and "1" using the one-hot encoding technique, which converts the categorical data into a numerical format in the following manner:

| Infected | Delta | Lambda | Omicron |
|----------|-------|--------|---------|
| 2        | 1     | 0      | 0       |
| 4        | 0     | 1      | 0       |
| 5        | 0     | 0      | 1       |
| 6        | 0     | 1      | 0       |
| 4        | 1     | 0      | 0       |
| 3        | 0     | 0      | 1       |
| 5        | 0     | 0      | 1       |
| 4        | 0     | 1      | 0       |
| 2        | 1     | 0      | 0       |

Hence, converting the grouped data into encoded (numerical) data results in better handling, and the machine learning model can grasp the information quickly.
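As a sketch, one-hot encoding of the grouped data above can be done with pandas' `get_dummies` (the column name "Variant" is an assumption for illustration):

```python
import pandas as pd

# The grouped data from the table above
df = pd.DataFrame({
    "Infected": [2, 4, 5, 6, 4, 3, 5, 4, 2],
    "Variant": ["Delta", "Lambda", "Omicron", "Lambda", "Delta",
                "Omicron", "Omicron", "Lambda", "Delta"],
})

# One-hot encode the categorical column into binary indicator columns
encoded = pd.get_dummies(df["Variant"], dtype=int)
df = pd.concat([df["Infected"], encoded], axis=1)
print(df)
```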

#### Problem with the approach

Going further, if there are many categories in the data set fed to the machine learning model, the one-hot encoding technique will create just as many columns. Say there are 2,000 categories: the technique will create 2,000 columns, which is a lot of information to feed to the model.

#### Solution

To solve this problem, we can apply the target encoding technique instead, which means calculating the "mean" of the target for each predictor category and using that mean for all rows with the same category in the predictor column. This converts the categorical column into a numeric column, which is our main aim.

Let us understand this with the same example as above, but this time using the "mean" of the target values within each category. Let us see how.

In Python, we can use the following code:
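The original snippet is missing, so here is a sketch with pandas' `groupby` and `transform`; the six rows are chosen to match the output shown below.

```python
import pandas as pd

# The grouped data, with the Covid variant as the predictor column
df = pd.DataFrame({
    "Infected": [2, 4, 5, 6, 4, 3],
    "Predictor": ["Delta", "Lambda", "Omicron", "Lambda", "Delta", "Omicron"],
})

# Replace each category with the mean of "Infected" for that category
df["Predictor_encoded"] = df.groupby("Predictor")["Infected"].transform("mean")
print(df)
```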

Output:

| Infected | Predictor | Predictor_encoded |
|----------|-----------|-------------------|
| 2        | Delta     | 3                 |
| 4        | Lambda    | 5                 |
| 5        | Omicron   | 4                 |
| 6        | Lambda    | 5                 |
| 4        | Delta     | 3                 |
| 3        | Omicron   | 4                 |

In the output above, the Predictor column holds the Covid variants and the Predictor_encoded column holds the "mean" of the Infected values for each variant: (2 + 4)/2 = 3 for Delta, (4 + 6)/2 = 5 for Lambda, and so on.

Hence, the machine learning model can be fed a numerical feature for each predictor category in the future.

### Different date formats

With different date formats such as "25-12-2021", "25th December 2021", etc., the machine learning model would need to be equipped to handle each of them; otherwise it is difficult for the model to understand all the formats.

With such a data set, you can preprocess, or decompose, the dates into three different columns for the parts of the date: Year, Month and Day.

In Python, preprocessing the data into separate columns for the date parts looks like this:
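The original snippet is missing, so here is a sketch with pandas' `to_datetime` and the `.dt` accessor; the sample dates are chosen to match the output shown below.

```python
import pandas as pd

# Hypothetical dates stored as strings in a single column
df = pd.DataFrame({"date": ["2019-01-05", "2019-03-08", "2019-03-03",
                            "2019-01-27", "2019-02-08"]})

# Parse the strings into datetimes, then decompose into separate parts
dates = pd.to_datetime(df["date"])
df = pd.DataFrame({"Year": dates.dt.year,
                   "Month": dates.dt.month,
                   "Day": dates.dt.day})
print(df)
```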

Output:

| Year | Month | Day |
|------|-------|-----|
| 2019 | 1     | 5   |
| 2019 | 3     | 8   |
| 2019 | 3     | 3   |
| 2019 | 1     | 27  |
| 2019 | 2     | 8   |

In the output above, the date information is in a numerical format, and because the date is decomposed into its parts (Year, Month and Day), the machine learning model can learn the date format.

## Courses

The Data & Feature Engineering for Trading course by Quantra can help you learn the machine learning models and algorithms used for trading with financial market data. Learning about machine learning in detail will help you understand how essential data preprocessing is.

With this course, you will equip yourself with the essential knowledge required for the two most important steps for any machine learning model, which are:

1. Data cleaning - Making the raw data error free by taking care of issues such as missing values, redundant values, duplicate values, etc.
2. Feature engineering - Extracting the important features so the machine learning model can learn the patterns of the data set and handle similar inputs in the future.

### Conclusion

Data preprocessing is the prerequisite for a machine learning model to be able to read a data set and learn from it. A machine learning model can learn well only when the data is free of redundancy and noise (outliers) and contains only numerical values.

Hence, we discussed how to give the machine learning model data that it understands best, learns from, and performs well with every time.

Find out the importance of data preprocessing in feature engineering while working with machine learning models with this comprehensive course on Data & Feature Engineering for Trading by Quantra. Enroll now!

Disclaimer: All data and information provided in this article are for informational purposes only. QuantInsti® makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.