Machine Learning Stock Prediction with LSTM and Keras – Python Source Code

Below is the full Python source code for machine learning stock prediction with an LSTM neural network.

# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd
from sklearn import preprocessing
import matplotlib.pyplot as plt


fname = "C:\\Users\\stock_data.csv"
data_csv = pd.read_csv(fname)

# how many rows of data to use
# (must not exceed the dataset length)
data_to_use = 100

# number of training rows
# (must be less than data_to_use)
train_end = 70


total_data = len(data_csv)

# the most recent data is at the end,
# so we need an offset
start = total_data - data_to_use


# currently predicting only 1 step ahead
steps_to_predict = 1

 
yt = data_csv.iloc[start:total_data, 4]    # Close price
yt1 = data_csv.iloc[start:total_data, 1]   # Open
yt2 = data_csv.iloc[start:total_data, 2]   # High
yt3 = data_csv.iloc[start:total_data, 3]   # Low
vt = data_csv.iloc[start:total_data, 6]    # Volume


print("yt head:")
print(yt.head())

yt_ = yt.shift(-1)

data = pd.concat([yt, yt_, vt, yt1, yt2, yt3], axis=1)
data.columns = ['yt', 'yt_', 'vt', 'yt1', 'yt2', 'yt3']

data = data.dropna()

print(data)

# target variable - closing price
# after shifting
y = data['yt_']

       
#       close, volume,  open,  high,   low
cols = ['yt',  'vt',   'yt1', 'yt2', 'yt3']
x = data[cols]

  
   
scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
x = np.array(x).reshape((len(x), len(cols)))
x = scaler_x.fit_transform(x)

scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
y = np.array(y).reshape((len(y), 1))
y = scaler_y.fit_transform(y)

x_train = x[0:train_end, ]
x_test = x[train_end + 1:len(x), ]
y_train = y[0:train_end]
y_test = y[train_end + 1:len(y)]

# LSTM expects 3D input (samples, timesteps, features)
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))

    
    

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout


seed = 2016
np.random.seed(seed)
fit1 = Sequential()
fit1.add(LSTM(1000, activation='tanh', recurrent_activation='hard_sigmoid',
              input_shape=(len(cols), 1)))
fit1.add(Dropout(0.2))
fit1.add(Dense(1, activation='linear'))

fit1.compile(loss="mean_squared_error", optimizer="adam")
fit1.fit(x_train, y_train, batch_size=16, epochs=25, shuffle=False)

print (fit1.summary())

score_train = fit1.evaluate(x_train, y_train, batch_size=1)
score_test = fit1.evaluate(x_test, y_test, batch_size=1)
print("in train MSE =", round(score_train, 4))
print("in test MSE =", score_test)

   
pred1 = fit1.predict(x_test)
pred1 = scaler_y.inverse_transform(np.array(pred1).reshape((len(pred1), 1)))

# most recent prediction (one step ahead)
prediction_data = pred1[-1]
   

fit1.summary()
print("Inputs: {}".format(fit1.input_shape))
print("Outputs: {}".format(fit1.output_shape))
print("Actual input: {}".format(x_test.shape))
print("Actual output: {}".format(y_test.shape))
  

print("prediction data:")
print(prediction_data)

print("actual data")
x_test = scaler_x.inverse_transform(np.array(x_test).reshape((len(x_test), len(cols))))
print(x_test)


plt.plot(pred1, label="predictions")

# first column of the inverse-transformed x_test is the closing price
plt.plot([row[0] for row in x_test], label="actual")

plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
           fancybox=True, shadow=True, ncol=2)

import matplotlib.ticker as mtick
fmt = '$%.0f'
tick = mtick.FormatStrFormatter(fmt)

ax = plt.gca()
ax.yaxis.set_major_formatter(tick)


plt.show()


Machine Learning Stock Prediction with LSTM and Keras

In this post I will share experiments on machine learning stock prediction with LSTM and Keras for one step ahead. I first tried to predict multiple steps ahead with a few techniques described in papers on the web, but I discovered that I needed to fully understand and test the simplest model first: prediction one step ahead. So here I put an example of predicting the next step of a stock price time series.

Preparing Data for Neural Network Prediction

Our first task is to feed the data into the LSTM. Our data is a stock price time series downloaded from the web.
Our interest is the closing price for the next day, so the target variable is the closing price shifted back (left) by one step.

The figure below shows how the X and Y data are prepared.
Note that X and Y are laid out horizontally for convenience, but we still refer to X and Y as columns.
Also, the figure shows only one variable (the closing price) for X, while the code actually uses several variables: closing price, open price, volume, high price and low price.

LSTM data input
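The shift described above can be sketched with pandas on a tiny made-up closing-price series (the values and column names here are illustrative, not from the actual dataset):

```python
import pandas as pd

# tiny illustrative closing-price series
close = pd.Series([10.0, 11.0, 12.0, 13.0], name="yt")

# target: next day's close, i.e. the series shifted back by one step
target = close.shift(-1).rename("yt_")

# pair each day's close (X) with the next day's close (Y);
# the last row has no "next day" and is dropped
pairs = pd.concat([close, target], axis=1).dropna()
print(pairs)
```

Each remaining row is one training example: today's close as input, tomorrow's close as target.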

Now we have data that we can feed into the LSTM neural network for prediction. Before doing this we also need to do the following:

1. Decide how much data to use. For example, if we have data for 10 years but only want to use the last 200 rows, we need to specify a start offset, because the most recent data is at the end.

# how many rows of data to use
# (must not exceed the dataset length)
data_to_use = 100

# number of training rows
# (must be less than data_to_use)
train_end = 70


total_data = len(data_csv)

# the most recent data is at the end,
# so we need an offset
start = total_data - data_to_use

2. Rescale (normalize) the data as below. Here feature_range is a tuple (min, max) giving the desired range of the transformed data; the default is (0, 1). [1]

scaler_x = preprocessing.MinMaxScaler(feature_range=(-1, 1))
x = np.array(x).reshape((len(x), len(cols)))
x = scaler_x.fit_transform(x)

scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
y = np.array(y).reshape((len(y), 1))
y = scaler_y.fit_transform(y)
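The fitted scalers are kept so that predictions can later be mapped back to price units with inverse_transform. A minimal round-trip on made-up prices:

```python
import numpy as np
from sklearn import preprocessing

y = np.array([[10.0], [12.0], [20.0]])   # made-up closing prices

scaler_y = preprocessing.MinMaxScaler(feature_range=(-1, 1))
y_scaled = scaler_y.fit_transform(y)           # values now lie in [-1, 1]
y_back = scaler_y.inverse_transform(y_scaled)  # back to price units

print(y_scaled.ravel())  # approximately [-1.  -0.6  1. ]
print(y_back.ravel())    # [10. 12. 20.]
```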

3. Divide the data into training and testing sets.

x_train = x[0:train_end, ]
x_test = x[train_end + 1:len(x), ]
y_train = y[0:train_end]
y_test = y[train_end + 1:len(y)]
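One step is still missing before training: Keras LSTM layers expect 3D input of shape (samples, timesteps, features). In this code each input column is treated as one timestep, so a trailing feature axis of size 1 is appended, as in the full source above. A sketch with dummy data:

```python
import numpy as np

n_samples, n_cols = 70, 5            # e.g. train_end rows, len(cols) columns
x_train = np.zeros((n_samples, n_cols))

# (samples, n_cols) -> (samples, n_cols, 1)
x_train = x_train.reshape(x_train.shape + (1,))
print(x_train.shape)  # (70, 5, 1)
```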

Building and Running LSTM

Now we need to define the LSTM neural network layers and parameters, and run the network to see how it performs.

fit1 = Sequential()
fit1.add(LSTM(1000, activation='tanh', recurrent_activation='hard_sigmoid',
              input_shape=(len(cols), 1)))
fit1.add(Dropout(0.2))
fit1.add(Dense(1, activation='linear'))

fit1.compile(loss="mean_squared_error", optimizer="adam")
fit1.fit(x_train, y_train, batch_size=16, epochs=25, shuffle=False)

The first run was not good: the prediction was well below the actual data.

However, by changing the number of neurons in the LSTM layer it was possible to improve the MSE and get predictions much closer to the actual data. Below are plots of predicted and actual data for LSTMs with 5, 60 and 1000 neurons. Note that the plots show only the testing data; the training data is not shown. The MSE for each neuron count is also listed below. As the number of neurons in the LSTM layer changes, there are two local minima (one around 100 neurons and another around 1000 neurons).

Results of prediction on LSTM with 5 neurons
Results of prediction on LSTM with 60 neurons
Results of prediction on LSTM with 1000 neurons

LSTM 5
in train MSE = 0.0475
in test MSE = 0.2714831660102521

LSTM 60
in train MSE = 0.0127
in test MSE = 0.02227417068206705

LSTM 100
in train MSE = 0.0126
in test MSE = 0.018672733913263073

LSTM 200
in train MSE = 0.0133
in test MSE = 0.020082660237026824

LSTM 900
in train MSE = 0.0103
in test MSE = 0.015546535929778267

LSTM 1000
in train MSE = 0.0104
in test MSE = 0.015037054958075455

LSTM 1100
in train MSE = 0.0113
in test MSE = 0.016363980411369994

Conclusion

The final LSTM ran with a testing MSE of 0.015 and an accuracy of 98.1%. This was obtained by changing the number of neurons in the LSTM layer. This example will serve as a baseline for further possible improvements on machine learning stock prediction with LSTM. If you have any tips or anything else to add, please leave a comment in the comment box. The full source code for LSTM neural network prediction can be found here

References
1. sklearn.preprocessing.MinMaxScaler
2. Keras documentation – RNN Layers
3. How to Reshape Input Data for Long Short-Term Memory Networks in Keras
4. Keras FAQ: Frequently Asked Keras Questions
5. Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras
6. Deep Time Series Forecasting with Python: An Intuitive Introduction to Deep Learning for Applied Time Series Modeling
7. Machine Learning Stock Prediction with LSTM and Keras – Python Source Code

Time Series Prediction with LSTM and Keras for Multiple Steps Ahead

In this post I will share an experiment on time series prediction with LSTM and Keras. An LSTM neural network is used in this experiment to predict multiple steps ahead for stock price data. The experiment is based on the paper [1]. The authors of this paper examine the independent value prediction approach, in which a separate model is built for each prediction step. This approach helps to avoid the error accumulation problem that arises with multi-stage step prediction.

LSTM Implementation

Following this approach I decided to use a Long Short-Term Memory (LSTM) network, a type of recurrent neural network used in deep learning. LSTMs have been used to advance the state of the art for many difficult problems. [2]

In this experiment I selected the number of steps to predict ahead = 5 and built 5 LSTM models with Keras in Python. This does not mean the code grew five times longer: I used the same code for all models and only recalculated the changing input-data parameters inside a for loop.
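With the independent value approach, only the target changes between the 5 models. A sketch (with a made-up price series; the variable names are illustrative, not from the original code) of how each model's target could be derived by shifting the close k steps back:

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0], name="yt")
steps_to_predict = 5

targets = {}
for k in range(1, steps_to_predict + 1):
    # model k predicts the close k days ahead
    targets[k] = close.shift(-k).dropna()

# the further ahead we predict, the fewer training pairs remain
print(len(targets[1]), len(targets[5]))  # 6 2
```

In the real experiment, the same model-building code runs inside this loop, once per horizon.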

The data were stock prices obtained from the Internet. The LSTM network was constructed as follows:

from keras.regularizers import l1, l2

fit1 = Sequential()
fit1.add(LSTM(len(cols), activation='tanh', recurrent_activation='hard_sigmoid',
              input_shape=(len(cols), 1), kernel_regularizer=l2(0.01)))
fit1.add(Dropout(0.2))
fit1.add(Dense(1, activation='linear', kernel_regularizer=l1(0.01)))
fit1.compile(loss="mean_squared_error", optimizer="adam")
fit1.fit(x_train, y_train, batch_size=16, epochs=25, shuffle=False)

The full Python source code for time series prediction with LSTM can be found here

Experiment Results

The LSTM neural network ran with:
Training MSE 0.026
Testing MSE 0.066
Accuracy of prediction 87.2%
The results show that there is still room for improvement. This will, however, serve as a baseline for further improved models.

References
1. Multistep-ahead Time Series Prediction
2. LSTM: A Search Space Odyssey