By Devang Singh

**Introduction**

In the previous two articles in this series of blogs on Neural Networks, we covered how neural networks work and how they are trained. If you are new to Neural Networks and would like to understand how they work, I would recommend going through the following blogs before building a neural network.

Working of neural networks for stock price prediction

Training neural networks for stock price prediction

In this article, you will learn how to code a trading strategy using the predictions from a neural network that we will build from scratch. You will learn how to code an Artificial Neural Network in Python, making use of powerful libraries to build a robust trading model.

**Coding the Strategy**

**Importing Libraries**

We will start by importing a few libraries, the others will be imported as and when they are used in the program at different stages. For now, we will import the libraries which will help us in importing and preparing the dataset for training and testing the model.

```python
import numpy as np
import pandas as pd
import talib
```

NumPy is a fundamental package for scientific computing; we will use it for computations on our dataset. The library is imported under the alias np.

Pandas will help us in using the powerful dataframe object, which will be used throughout the code for building the artificial neural network in Python.

TA-Lib is a technical analysis library, which will be used to compute the RSI and Williams %R. These will be used as features for training our artificial neural network. We could add more features using this library.

**Setting the random seed to a fixed number**

```python
import random
random.seed(42)
```

Random will be used to initialize the seed to a fixed number so that every time we run the code we start with the same seed.
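Note that random.seed only seeds Python's built-in generator. NumPy keeps its own random state, which scikit-learn and the Keras weight initializers rely on, so it should be seeded as well (and for fully reproducible Keras training, the backend's seed too, e.g. tf.random.set_seed in TensorFlow 2.x). A minimal sketch:

```python
import random
import numpy as np

# Seed both Python's generator and NumPy's generator.
random.seed(42)
np.random.seed(42)

# With the seed fixed, repeated runs produce identical draws.
a = np.random.rand(3)
np.random.seed(42)
b = np.random.rand(3)
print(np.allclose(a, b))  # True
```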

**Importing the dataset**

```python
dataset = pd.read_csv('RELIANCE.NS.csv')
dataset = dataset.dropna()
dataset = dataset[['Open', 'High', 'Low', 'Close']]
```

We then import our dataset, which is stored in the csv file named 'RELIANCE.NS.csv'. This is done using the pandas library, and the data is stored in a dataframe named dataset. We then drop the missing values in the dataset using the dropna() function. The csv file contains daily OHLC data for the stock of Reliance trading on the NSE for the period from 1st January 1996 to 15th January 2018. The file also contains Date, Adjusted Close and Volume columns, but we keep only the OHLC values, since we will build our input features from these alone.

**Preparing the dataset**

```python
dataset['H-L'] = dataset['High'] - dataset['Low']
dataset['O-C'] = dataset['Close'] - dataset['Open']
dataset['3day MA'] = dataset['Close'].shift(1).rolling(window = 3).mean()
dataset['10day MA'] = dataset['Close'].shift(1).rolling(window = 10).mean()
dataset['30day MA'] = dataset['Close'].shift(1).rolling(window = 30).mean()
dataset['Std_dev'] = dataset['Close'].rolling(5).std()
dataset['RSI'] = talib.RSI(dataset['Close'].values, timeperiod = 9)
dataset['Williams %R'] = talib.WILLR(dataset['High'].values, dataset['Low'].values, dataset['Close'].values, 7)
```

We then prepare the various input features which will be used by the artificial neural network to make its predictions. We define the following input features:

- High minus Low price
- Close minus Open price
- Three day moving average
- Ten day moving average
- 30 day moving average
- Standard deviation for a period of 5 days
- Relative Strength Index
- Williams %R
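The shift(1) in the moving-average features above matters: it ensures the average in each row is built only from the closes of the previous days, so the feature never peeks at the current day's close. A small illustration on a toy price series:

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0])

# shift(1) moves yesterday's close into today's row, so the 3-day moving
# average at row i uses only the closes from rows i-1, i-2 and i-3 --
# information that was actually available at the time.
ma3 = close.shift(1).rolling(window = 3).mean()
print(ma3.tolist())  # [nan, nan, nan, 11.0, 12.0]
```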

```python
dataset['Price_Rise'] = np.where(dataset['Close'].shift(-1) > dataset['Close'], 1, 0)
```

We then define the output value as price rise, a binary variable that stores 1 when tomorrow's closing price is greater than today's closing price, and 0 otherwise.

```python
dataset = dataset.dropna()
```

Next, we drop all the rows storing NaN values by using the dropna() function.

```python
X = dataset.iloc[:, 4:-1]
y = dataset.iloc[:, -1]
```

We then create two data frames storing the input and the output variables. The dataframe 'X' stores the input features: the columns starting from the fifth column (index 4) of the dataset up to, but not including, the last column. The last column, the price rise we want to predict, is stored in 'y'.

**Splitting the dataset**

```python
split = int(len(dataset) * 0.8)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]
```

In this part of the code, we will split our input and output variables to create the test and train datasets. This is done by creating a variable called split, which is defined to be the integer value of 0.8 times the length of the dataset.

We then slice the X and y variables into four separate dataframes: X_train, X_test, y_train and y_test. This split is an essential part of any machine learning workflow: the training data is used by the model to arrive at its weights, while the test dataset is used to see how the model performs on new data. Because the test dataset also holds the actual output values, it lets us measure how efficient the model is. We will look at the confusion matrix later in the code, which essentially is a measure of how accurate the model's predictions are.

**Feature Scaling**

```python
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```

Another important step in data preprocessing is to standardize the dataset. This process makes the mean of each input feature zero and its variance one, which ensures that no bias is introduced during training by the different scales of the input features. Without this step, the network may effectively assign higher importance to features that merely have larger average values than the others.

We implement this step by importing the StandardScaler class from the sklearn.preprocessing library and instantiating it as the variable sc. We then call fit_transform on X_train, which learns the scaling parameters from the training data and applies them, and transform on X_test, which applies those same training-set parameters to the test data. The y_train and y_test sets contain binary values, hence they need not be standardized. Now that the datasets are ready, we may proceed with building the Artificial Neural Network using the Keras library.
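To make the scaling concrete, here is a small sketch on a toy one-column dataset confirming that StandardScaler applies z = (x − train mean) / train std, reusing the training statistics on the test set:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy one-feature dataset standing in for our feature matrix.
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[2.5], [5.0]])

sc = StandardScaler()
X_train_scaled = sc.fit_transform(X_train)  # learns mean and std from the training set
X_test_scaled = sc.transform(X_test)        # reuses those training statistics

# transform is equivalent to (x - train_mean) / train_std
manual = (X_test - X_train.mean()) / X_train.std()
print(np.allclose(X_test_scaled, manual))  # True
```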

**Building the Artificial Neural Network**

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
```

Now we will import the functions which will be used to build the artificial neural network. We import the Sequential model from keras.models; this will be used to build the layers of the network sequentially. We also import the Dense layer from keras.layers, which will be used to build each fully connected layer of our artificial neural network, and Dropout, a layer that can be added for regularization, although we will not use it in this model.

```python
classifier = Sequential()
```

We instantiate the Sequential() model into the variable classifier. This variable will then be used to build the layers of the artificial neural network.

```python
classifier.add(Dense(units = 128, kernel_initializer = 'uniform', activation = 'relu', input_dim = X.shape[1]))
```

To add layers into our Classifier, we make use of the add() function. The argument of the add function is the Dense() function, which in turn has the following arguments:

- Units: This defines the number of nodes or neurons in that particular layer. We have set this value to 128, meaning there will be 128 neurons in our hidden layer.
- Kernel_initializer: This defines the starting values for the weights of the different neurons in the hidden layer. We have defined this to be ‘uniform’, which means that the weights will be initialized with values from a uniform distribution.
- Activation: This is the activation function for the neurons in the particular hidden layer. Here we use the Rectified Linear Unit function, or 'relu'.
- Input_dim: This defines the number of inputs to the hidden layer; we have set this value equal to the number of columns of our input feature dataframe. This argument is not required in the subsequent layers, as the model can infer how many outputs the previous layer produced.
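As a quick sanity check on these arguments, the number of trainable parameters in a Dense layer is inputs × units + units (one weight per input-neuron pair plus one bias per neuron). With the 8 input features used here, that gives 8 × 128 + 128 = 1152 for the first hidden layer:

```python
# Trainable parameters in a fully connected (Dense) layer:
# one weight per (input, neuron) pair plus one bias per neuron.
def dense_params(n_inputs, n_units):
    return n_inputs * n_units + n_units

# Our feature dataframe X has 8 columns, so input_dim = 8.
print(dense_params(8, 128))    # 1152  (first hidden layer)
print(dense_params(128, 128))  # 16512 (second hidden layer)
print(dense_params(128, 1))    # 129   (output layer)
```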

```python
classifier.add(Dense(units = 128, kernel_initializer = 'uniform', activation = 'relu'))
```

We then add a second layer, with 128 neurons, with a uniform kernel initializer and ‘relu’ as its activation function. We are only building two hidden layers in this neural network.

```python
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
```

The next layer we build is the output layer, from which we require a single output. Therefore, units is set to 1, and the activation function is chosen to be the sigmoid function, because we want the prediction to be the probability of the market moving upwards.
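The sigmoid squashes any real-valued input into the range (0, 1), which is what lets us read the output as a probability. A minimal sketch:

```python
import math

# sigmoid(x) = 1 / (1 + e^(-x)), mapping any real number into (0, 1).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))             # 0.5   -- no lean either way
print(round(sigmoid(4), 3))   # 0.982 -- strongly "up"
print(round(sigmoid(-4), 3))  # 0.018 -- strongly "down"
```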

```python
classifier.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics = ['accuracy'])
```

Finally, we compile the classifier by passing the following arguments:

- Optimizer: The optimizer is chosen to be 'adam', an extension of stochastic gradient descent.
- Loss: This defines the loss to be optimized during the training period. We define this loss to be the mean squared error.
- Metrics: This defines the list of metrics to be evaluated by the model during the testing and training phase. We have chosen accuracy as our evaluation metric.

```python
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
```

Now we need to fit the neural network to our training data. This is done by passing X_train, y_train, the batch size and the number of epochs to the fit() function. The batch size is the number of data points the model processes before backpropagating the error and updating the weights. The number of epochs is the number of passes the training makes over the whole training dataset.
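The relationship between batch size, epochs and weight updates can be sketched with some back-of-the-envelope arithmetic (n_train below is a hypothetical round number; the actual training-set size depends on the rows left after dropna() and the 80% split):

```python
import math

# Hypothetical figures for illustration only.
n_train = 4000
batch_size = 10
epochs = 100

# One weight update per batch; the last batch may be smaller, hence ceil.
updates_per_epoch = math.ceil(n_train / batch_size)
total_updates = updates_per_epoch * epochs
print(updates_per_epoch)  # 400 weight updates per pass over the data
print(total_updates)      # 40000 updates over the whole training run
```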

With this, our artificial neural network in Python has been compiled and is ready to make predictions.

**Predicting the movement of the stock**

```python
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
```

Now that the neural network has been compiled, we can use the predict() method to make predictions. We pass X_test as its argument and store the result in a variable named y_pred. We then convert y_pred to binary values by applying the condition y_pred > 0.5, so y_pred stores True or False depending on whether the predicted value was greater or less than 0.5.
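This is also the point where the confusion matrix mentioned earlier could be computed; the article does not show that snippet, but a minimal sketch using scikit-learn on toy labels (standing in for y_test and y_pred) would look like:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Toy actual labels and predictions for illustration.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_hat  = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Rows are actual classes, columns predicted: [[TN, FP], [FN, TP]]
cm = confusion_matrix(y_true, y_hat)
print(cm)                             # [[3 1]
                                      #  [1 3]]
print(accuracy_score(y_true, y_hat))  # 0.75
```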

```python
dataset['y_pred'] = np.NaN
dataset.iloc[(len(dataset) - len(y_pred)):, -1:] = y_pred
trade_dataset = dataset.dropna()
```

Next, we create a new column in the dataframe dataset with the header 'y_pred' and fill it with NaN values. We then store the values of y_pred in this new column, starting from the rows of the test dataset, by slicing the dataframe with the iloc method as shown in the code above. We then drop all the rows containing NaN values and store the result in a new dataframe named trade_dataset.

**Computing Strategy Returns**

```python
trade_dataset['Tomorrows Returns'] = 0.
trade_dataset['Tomorrows Returns'] = np.log(trade_dataset['Close'] / trade_dataset['Close'].shift(1))
trade_dataset['Tomorrows Returns'] = trade_dataset['Tomorrows Returns'].shift(-1)
```

Now that we have the predicted values of the stock movement, we can compute the returns of the strategy. We will take a long position when the predicted value of y is True and a short position when the predicted signal is False.

We first compute the returns that the strategy will earn if a long position is taken at the end of today, and squared off at the end of the next day. We start by creating a new column named ‘Tomorrows Returns’ in the trade_dataset and store in it a value of 0. We use the decimal notation to indicate that floating point values will be stored in this new column. Next, we store in it the log returns of today, i.e. logarithm of the closing price of today divided by the closing price of yesterday. Next, we shift these values upwards by one element so that tomorrow’s returns are stored against the prices of today.
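The log-return computation and the shift(-1) alignment described above can be sketched on a toy price series:

```python
import numpy as np
import pandas as pd

close = pd.Series([100.0, 102.0, 101.0])

# Log return of today = ln(today's close / yesterday's close)
log_ret = np.log(close / close.shift(1))

# shift(-1) moves tomorrow's return up against today's row, so the row on
# which the signal is generated also holds the return that signal earns.
tomorrows_ret = log_ret.shift(-1)
print(round(tomorrows_ret.iloc[0], 6))  # 0.019803 = ln(102 / 100)
```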

```python
trade_dataset['Strategy Returns'] = 0.
trade_dataset['Strategy Returns'] = np.where(trade_dataset['y_pred'] == True, trade_dataset['Tomorrows Returns'], -trade_dataset['Tomorrows Returns'])
```

Next, we compute the Strategy Returns. We create a new column under the header 'Strategy Returns' and initialize it with 0. to indicate that it stores floating point values. Using the np.where() function, we store the value from the 'Tomorrows Returns' column in 'Strategy Returns' when the 'y_pred' column holds True (a long position); otherwise we store the negative of the 'Tomorrows Returns' value (a short position).

```python
trade_dataset['Cumulative Market Returns'] = np.cumsum(trade_dataset['Tomorrows Returns'])
trade_dataset['Cumulative Strategy Returns'] = np.cumsum(trade_dataset['Strategy Returns'])
```

We now compute the cumulative returns for both the market and the strategy. These values are computed using the cumsum() function. We will use the cumulative sum to plot the graph of market and strategy returns in the last step.

**Plotting the graph of returns**

```python
import matplotlib.pyplot as plt

plt.figure(figsize = (10, 5))
plt.plot(trade_dataset['Cumulative Market Returns'], color = 'r', label = 'Market Returns')
plt.plot(trade_dataset['Cumulative Strategy Returns'], color = 'g', label = 'Strategy Returns')
plt.legend()
plt.show()
```

We will now plot the market returns and our strategy returns to visualize how our strategy is performing against the market. For this, we will import matplotlib.pyplot. We then use the plot function to plot the graphs of Market Returns and Strategy Returns using the cumulative values stored in the dataframe trade_dataset. We then create the legend and show the plot using the legend() and show() functions respectively. The plot shown below is the output of the code. The green line represents the returns generated using the strategy and the red line represents the market returns.

**Conclusion**

Now you can build your own Artificial Neural Network in Python and start trading using the power and intelligence of your machines. Apart from Neural Networks, there are many other machine learning models that can be used for trading. The Artificial Neural Network, or any other deep learning model, is most effective when you have more than 100,000 data points for training. This model was developed on daily prices to help you understand how to build it; for live use, it is advisable to train on minute or tick data, which will give you enough data points for effective training.

You can enroll for the neural network tutorial on Quantra, where you can use advanced neural network techniques and the latest research models, such as LSTMs and RNNs, to predict markets and find trading opportunities. Keras, the relevant Python library, is used.

**Next Step**

We all know that Machine Learning is the new buzzword. Our next article discusses the use of Decision Trees in Machine Learning and how they can help you predict stock price movements. Click here if you are interested in learning how to visualise a sample dataset and decision tree structure, the components of a decision tree, and how its algorithm works.

*Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.*

**Download Data File:**

- Artificial Neural Network Using Keras Python Code
- RELIANCE.NS
