Integrating a Sentiment Analysis API (Python/Django) into a Web Application

In this post we will learn how to use the sentiment analysis API from paralleldots.com with Python. We will look at running this API locally from a Python environment on a laptop and also in a web application environment with Python Django on the PythonAnywhere hosting site.

In one of the previous posts we set up a Python Django project for a chatbot. Here we will add files to that environment. Setting up the chatbot files from the previous project is not necessary; we just need the folder structure.

Thus in this post we will reuse and extend some of the Python Django knowledge from the previous post. We will learn how to pass parameters from a user form to the server and back, how to serve images, and how to put a logic block in a web template.

ParallelDots [1] provides several machine learning APIs such as named entity recognition (NER), intent identification, text classification and sentiment analysis. In this post we will explore the sentiment analysis API and how it can be deployed on a web server with Django and Python.

Running Text Analysis API Locally

First we need to install the library:
pip install paralleldots

We also need to obtain an API key. It is free and no credit card is required.

Now we run the code below:

import paralleldots
paralleldots.set_api_key("XXXXXXXXXXX")

# for single sentence
text="the day is very nice"
lang_code="en"
response=paralleldots.sentiment(text,lang_code)

print(response)
print (response['sentiment'])
print (response['code'])
print (response['probabilities']['positive'])
print (response['probabilities']['negative'])
print (response['probabilities']['neutral'])


# for multiple sentences passed as an array
text=["the day is very nice,the day is very good,this is the best day"]
response=paralleldots.batch_sentiment(text)
print(response)

Output:
{'probabilities': {'negative': 0.001, 'neutral': 0.002, 'positive': 0.997}, 'sentiment': 'positive', 'code': 200}
positive
200
0.997
0.001
0.002
{'sentiment': [{'negative': 0.0, 'neutral': 0.001, 'positive': 0.999}], 'code': 200}

This is very simple. Now we will deploy it on a web hosting site with Python Django.

Deploying API on Web Hosting Site

Here we will build a web form. Using this form the user can enter some text, which will be passed to the sentiment analysis API. The result of the analysis is passed back to the user, and an image is shown based on the detected sentiment.
First we need to install the paralleldots library. To install the paralleldots module for Python 3.6, we run this in a Bash console (not in a Python one): [2]
pip3.6 install --user paralleldots

Note that there are two dashes before user.
Now create or update the following files:

views.py

In this file we get the user input from the web form and send it to the API. Based on the sentiment returned by the API we select an image filename.

from django.shortcuts import render

import paralleldots

def do_sentiment_analysis(request):
    user_sent=""
    user_input=""
    fname="na"
    if request.POST:
       user_input=request.POST.get('user_input', '')
       lang_code="en"
       paralleldots.set_api_key("XXXXXXXXX")
       user_response=paralleldots.sentiment(user_input,lang_code)
       user_sent=user_response['sentiment']

       if (user_sent == 'neutral'):
             fname=  "emoticon-1634586_640.png"
       elif (user_sent == 'negative'):
             fname = "emoticon-1634515_640.png"
       elif (user_sent == 'positive'):
             fname = "smiley-163510_640.jpg"
       else:
             fname="na"

    return render(request, 'my_template_img.html', {'resp': user_sent, 'fname':fname, 'user_input':user_input})

my_template_img.html
Create a new file my_template_img.html. This file contains the web input form where the user enters text. We also have an if statement here because we do not want to display an image when the form has just been opened and nothing has been submitted yet.

<html>
<form method="post">
    {% csrf_token %}

    <textarea rows=10 cols=50 name="user_input">{{user_input}}</textarea>
    <br>
    <label>{{resp}}</label>
    <br>
    <button type="submit">Submit</button>


  {% if "_640" in fname %}
     <img src="/media/{{fname}}" width="140px" height="100px">
  {% endif %}
</form>
</html>

Media folder
In the media folder, download images to represent negative, neutral and positive sentiment. Suitable images can be found on the pixabay site.

So the folder can look like this. Note that if we use different file names we will need to adjust the code.

 emoticon-1634515_640.png
 emoticon-1634586_640.png
 smiley-163510_640.jpg
 		
 

urls.py
This file is located under /home/username/projectname/projectname. Add an import line to this file and also include a URL pattern for do_sentiment_analysis (the press_my_buttons pattern below comes from the chatbot project in the previous post; keep it only if that view exists):

from views import do_sentiment_analysis

urlpatterns = [
 
url(r'^press_my_buttons/$', press_my_buttons),
url(r'^do_sentiment_analysis/$', do_sentiment_analysis),

]

settings.py

This file is also located under /home/username/projectname/projectname.
Make sure it has the following settings:

MEDIA_ROOT = u'/home/username/projectname/media'
MEDIA_URL = '/media/'

STATIC_ROOT = u'/home/username/projectname/static'
STATIC_URL = '/static/'
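
On PythonAnywhere the /media/ URL is typically mapped to the media folder through the Static files section of the Web tab. If instead the project is run with Django's development server, a minimal sketch (a development-only convenience, not part of the original setup) is to serve the media folder by appending this to urls.py:

from django.conf import settings
from django.conf.urls.static import static

# serve files from MEDIA_ROOT at MEDIA_URL during development only
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)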

Now, when everything is set, just access the link. In case we use PythonAnywhere it will be: http://username.pythonanywhere.com/do_sentiment_analysis/

Enter some text into the text box and click Submit. We will see the sentiment analysis result from the API and an image based on this sentiment. Below are some screenshots.

Conclusion
We integrated a machine learning sentiment analysis API from ParallelDots into our Python Django web environment. We built a web input form that sends user data to this API and shows the API output back to the user. While building this we learned a few Django techniques:
how to pass parameters from a user form to the server and back,
how to serve images,
how to have a logic block in a web template.
We can now build different web applications that use the API service from ParallelDots, and we are able to integrate emotion analysis of text into our website.

References

1. ParallelDots
2. Installing New Modules
3. Handling Media Files in Django
4. Django Book
5. How to Create a Chatbot with ChatBot Open Source and Deploy It on the Web – here we set up the project folder that we use in this post



Applied Machine Learning Classification for Decision Making

Making a good decision is a challenge that we often face. So, in this post, we will look at how applied machine learning classification can be used in the process of decision making.

The simple and quick approach to making a decision is to follow our past experience of similar situations. Usually we compile a list of pros and cons, ask someone for help or search on the web. According to [1] we have two systems in our brain: a logical and an intuitive system:

“With every decision you take, every judgement you make, there is a battle in your mind – a battle between intuition and logic.”

“Most of the beliefs or opinions you have come from an automatic response. But then your logical mind invents a reason why you think or believe something.”

Most of the time our intuitive system works efficiently, taking charge of the thousands of decisions we make each day. But our intuitive system can be biased in many ways. For example, it can be biased toward the latest unsuccessful outcome.

Besides this, our memory cannot hold a lot of information, so we cannot efficiently use all of our past information and experience. That is why people created tools like the decision matrix that help to improve decision making. In the next section we will look at techniques that facilitate rational decision making.

Decision Making

Decision Matrix

A more advanced approach to making a decision is to score each possible option. In this approach we score each option against some criteria or features. For example, for a candidate product that we need to buy we look at price, quality, service and safety. This approach results in a decision matrix for analysis of the possible options.

Below is an example of a decision matrix [3] for choosing a strategy for building a software project. Here we score 4 options based on time to build and cost. The scores are on a scale from 0 (worst) to 100 (best). After we score each cell of the time and cost rows, we can get the sum of scores, rank the options and then make our choice (see the last 3 rows).

Decision Matrix Example
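
To make the scoring arithmetic concrete, below is a minimal Python sketch (with made-up scores, not the ones from the example above) that sums the criterion scores for each option and ranks the options:

# hypothetical scores on a 0 (worst) - 100 (best) scale for two criteria
options = {
    "Option A": {"time": 80, "cost": 40},
    "Option B": {"time": 60, "cost": 70},
    "Option C": {"time": 30, "cost": 90},
    "Option D": {"time": 50, "cost": 50},
}

# sum of scores per option, then rank from best (highest total) to worst
totals = {name: sum(scores.values()) for name, scores in options.items()}
ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)

for rank, (name, total) in enumerate(ranking, start=1):
    print (rank, name, total)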

There are also other tools similar to the decision matrix: the belief decision matrix [3] and the Pugh matrix [4].
These tools allow a comparison analysis of the available options versus features, which forces our mind to logically evaluate and rank as many pros and cons as possible based on numerical metrics.

However, there are also some limitations. Incorrect selection criteria will obviously lead to a wrong conclusion. Poorly defined criteria can have multiple interpretations; for example, "too low" can mean different things. With many rows or columns it becomes labor intensive to fill out the matrix.

Machine Learning Approach for Making Decision

Machine learning techniques can also help us improve decision making and even overcome some of the above limitations. For example, with feature engineering we can evaluate which criteria are important for a decision.

In machine learning, making a decision can be viewed as assigning or predicting the correct label (for example, buy or not buy) based on data about the item's features. In the field of machine learning or AI this is known as a classification problem.

Classification algorithms learn correct decisions from data. Below is an example of the training data that we input to a machine learning classification algorithm (the Xij represent some numerical values).

Classification problem

Our options (decisions) are now represented by the class label (the rightmost column) and the criteria are represented by features, so columns and rows are switched compared to the decision matrix. Using training data like this we train a classifier and then use it to choose the class (make the decision) for new data.

There are different classification algorithms such as decision trees, SVM, Naive Bayes and neural network classifiers. In this post we will look at classification with a neural network.
We will use a Keras neural network with 2 dense layers.

Per Keras documentation[5] Dense layer implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function passed as the activation argument,
kernel is a weights matrix created by the layer,
and bias is a bias vector created by the layer (only applicable if use_bias is True).

So we can see that the dense layer performs math similar to what we were doing in the decision matrix.
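
As a quick illustration of that formula, here is a minimal numpy sketch (with arbitrary small numbers) of what a dense layer with a ReLU activation computes for one input vector:

import numpy as np

x = np.array([1.0, 2.0])                # input vector with 2 features
kernel = np.array([[0.5, -0.2, 0.1],    # weights matrix of shape (2, 3)
                   [0.3,  0.8, -0.4]])
bias = np.array([0.1, 0.0, -0.1])       # bias vector, one value per output unit

z = np.dot(x, kernel) + bias            # dot(input, kernel) + bias
output = np.maximum(z, 0)               # element-wise ReLU activation
print (output)                          # [1.2 1.4 0. ]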

Python Source Code for Neural Network Classification Algorithms

Finally, below is the Python source code for the classification problem. To test this code we will use the iris dataset, which has 3 classes and 4 features. Our task here is to make the classifier assign the correct class label.

# -*- coding: utf-8 -*-

from keras.utils import to_categorical
from sklearn import datasets

iris = datasets.load_iris()

# Create feature matrix
X = iris.data
print (X)

# Create target vector
y = iris.target
y = to_categorical(y, num_classes=3)
print (y)

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42)

print (x_train)
print (y_train)

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, input_shape=(4,), activation='relu',  name='L1'))
model.add(Dense(3, activation='softmax', name='L2'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

print('Model Summary:')
print(model.summary())

model.fit(x_train, y_train, verbose=2, batch_size=10, epochs=100)
output = model.evaluate(x_test, y_test)

print('Final test loss: {:4f}'.format(output[0]))
print('Final test accuracy: {:4f}'.format(output[1]))


Below are the results of the neural network run.

Results of neural network run

As we can see our classifier is able to make correct decisions with 98% accuracy.
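
Once trained, the classifier can be used to make a decision for new data. Here is a minimal sketch, assuming the model and iris objects from the code above are still in memory and using a hypothetical new flower measurement:

import numpy as np

# hypothetical new sample: sepal length, sepal width, petal length, petal width (cm)
new_sample = np.array([[5.1, 3.5, 1.4, 0.2]])

probs = model.predict(new_sample)       # class probabilities from the softmax layer
decision = np.argmax(probs, axis=1)     # index of the most probable class
print (iris.target_names[decision])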

Thus we investigated different approaches to making decisions and saw how machine learning can be applied to this task. Specifically, we looked at a neural network classification algorithm for selecting the correct label.
I would love to hear what types of decision making tools you use. Also feel free to provide feedback or suggestions.

References
1. How do we really make decisions?
2. What Is a Decision Matrix? Definition and Examples
3. Decision matrix From Wikipedia, the free encyclopedia
4. The Systems Engineering Tool Box
5. Keras Documentation



Inferring Causes and Effects from Daily Data

When doing different activities we are often interested in how they impact each other. For example, if we visit different links on the Internet, we might want to know how this action impacts our motivation for doing some specific things. In other words, we are interested in inferring the importance of causes for effects from our daily activity data.

In this post we will look at a few ways to detect relationships between actions and results using machine learning algorithms and Python.

Our example data will be an artificial dataset consisting of 2 columns: URL and Y.
URL is our action, and we want to know how it impacts Y. URL can be link0, link1 or link2, meaning which link was visited, and Y can be 0 or 1: 0 means we did not get motivated, and 1 means we got motivated.

The first thing we do is one-hot encode link0, link1 and link2 into 0/1 values, which gives us 3 columns as below.

Sample of data after one-hot encoding

So we now have 3 features, one for each URL. Here is the code that does the one-hot encoding to prepare our data for cause and effect analysis.

import pandas

filename = "C:\\Users\\drm\\data.csv"
dataframe = pandas.read_csv(filename)

# one-hot encode the URL column and move Y to the last position
dataframe=pandas.get_dummies(dataframe)
cols = dataframe.columns.tolist()
cols.insert(len(dataframe.columns)-1, cols.pop(cols.index('Y')))
dataframe = dataframe.reindex(columns= cols)

print (len(dataframe.columns))

#output
#4

Now we can apply a feature selection algorithm. SelectKBest allows us to select features according to the k highest scores.

# feature extraction
from sklearn.feature_selection import SelectKBest, chi2
import numpy

# X holds the one-hot encoded features and Y the target column (see the full source below)
test = SelectKBest(score_func=chi2, k="all")
fit = test.fit(X, Y)
# summarize scores
numpy.set_printoptions(precision=3)
print ("scores:")
print(fit.scores_)

for i in range (len(fit.scores_)):
    print ( str(dataframe.columns.values[i]) + "    " + str(fit.scores_[i]))
features = fit.transform(X)

print (list(dataframe))

numpy.set_printoptions(threshold=numpy.inf)


scores:
[11.475  0.142 15.527]
URL_link0    11.475409836065575
URL_link1    0.14227166276346598
URL_link2    15.526957539965377
['URL_link0', 'URL_link1', 'URL_link2', 'Y']


Another algorithm that we can use is ExtraTreesClassifier from the Python machine learning library sklearn.


from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

clf = ExtraTreesClassifier()
clf = clf.fit(X, Y)
print (clf.feature_importances_)  
model = SelectFromModel(clf, prefit=True)
X_new = model.transform(X)
print (X_new.shape)      

#output
#[0.424 0.041 0.536]
#(150, 2)


The above two machine learning algorithms helped us estimate the importance of our features (or actions) for our Y variable. In both cases URL_link2 got the highest score.

There are other methods as well. I would love to hear what methods you use and for what datasets and/or problems. Also feel free to provide feedback, comments or any questions.


Below is the full Python source code:

# -*- coding: utf-8 -*-
import pandas
import numpy
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2


filename = "C:\\Users\\drm\\data.csv"
dataframe = pandas.read_csv(filename)

dataframe=pandas.get_dummies(dataframe)
cols = dataframe.columns.tolist()
cols.insert(len(dataframe.columns)-1, cols.pop(cols.index('Y')))
dataframe = dataframe.reindex(columns= cols)

print (dataframe)
print (len(dataframe.columns))


array = dataframe.values
X = array[:,0:len(dataframe.columns)-1]  
Y = array[:,len(dataframe.columns)-1]   
print ("--X----")
print (X)
print ("--Y----")
print (Y)



# feature extraction
test = SelectKBest(score_func=chi2, k="all")
fit = test.fit(X, Y)
# summarize scores
numpy.set_printoptions(precision=3)
print ("scores:")
print(fit.scores_)

for i in range (len(fit.scores_)):
    print ( str(dataframe.columns.values[i]) + "    " + str(fit.scores_[i]))
features = fit.transform(X)

print (list(dataframe))

numpy.set_printoptions(threshold=numpy.inf)
print ("features")
print(features)


from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

clf = ExtraTreesClassifier()
clf = clf.fit(X, Y)
print ("feature_importances")
print (clf.feature_importances_)  
model = SelectFromModel(clf, prefit=True)
X_new = model.transform(X)
print (X_new.shape) 



Machine Learning for Correlation Data Analysis Between Food and Mood

Can sweet food affect our mood? A friend of mine was interested in whether some of his minor mood changes were caused by sugar intake from sweets like cookies. He collected and provided daily records, and in this post we will use correlation data analysis with Python pandas dataframes to check the connection between food and mood. We will create a Python script for this task.


Dataset from Correlation Data Analysis Between Food and Mood
Source Code for Machine Learning Correlation Data Analysis Between Food and Mood

Connection Between Eating and Mental Health

From Internet resources we can confirm that a relationship between how we feel and what we eat exists [1]. Sweet food is not recommended because fluctuations in blood sugar cause mood swings and lack of energy [2]. The information about chocolate is, however, contradictory: chocolate affects us both negatively and positively [3]. But chocolate also contains sugar.
What if we eat only a small amount of sweets and not every day – is there still any connection and how strong is it? Machine learning data analysis can help us investigate this.

The Problem

So in this post we will estimate the correlation between sweet food and mood based on the provided daily data.
Correlation means association – more precisely, it is a measure of the extent to which two variables are related. [4]

Data

The dataset has two columns, X and Y, where:
X is how much sweet food was eaten on a given day, on a scale from 0 to 1, where 0 is nothing and 1 is the maximum value.
Y is the variation of mood from the optimal state, on a scale from 0 to 1, where 0 means no variation and 1 is the maximum value.

Approach

If we calculate the correlation between the 2 columns of daily data we get something around 0. However, this does not show the whole picture, because the effect of the food might only appear after a few days, and a good or bad feeling can also persist for a few days after the event that caused it.
So we need to average the data over several days, for both X (looking back) and Y (looking forward). Here is the diagram that explains how the data will be aggregated:

Changing the data – averaging

And here is how we can do this in the program:
1. for each day, take the average X data for the last N days and the average Y data for the next M days.
2. create a pandas dataframe which now has these moving averages for X and Y.
3. calculate the correlation between the new X and Y data.

What should N and M be? We will try different values – from 1 to 14 – and check which gives the highest correlation.

Here is the Python code that uses a pandas dataframe to calculate the averages:

import numpy as np
import pandas as pd

def get_data (df_pandas,k,z):

    x = np.zeros(df_pandas.shape[0])
    y = np.zeros(df_pandas.shape[0])

    for index, row in df_pandas.iterrows():

        # average X over the last k days (looking back)
        x[index]=df_pandas.loc[index-k:index,'X'].mean()

        # average Y over the next z days (looking forward)
        y[index]=df_pandas.loc[index:index+z,'Y'].mean()

    # put the two moving averages side by side in a new dataframe
    new_df=pd.concat([pd.DataFrame(x),pd.DataFrame(y)], axis=1)
    new_df.columns = ['X', 'Y']

    return new_df

Correlation Data Analysis

For calculating the correlation we also use a pandas dataframe. Here is the code snippet for this:

# df is the daily dataframe with columns X and Y; n and m bound the window
# sizes we try (up to 14 days, as described above) - assumed values here
n = m = 15
corr_df = pd.DataFrame(index=range(1, n), columns=range(1, m), dtype=float)

for i in range (1,n):
    for j in range (1,m):

       data=get_data(df, i, j)
       corr_df.loc[i, j] = data['X'].corr(data['Y'])

print ("corr_df")
print (corr_df)

pandas.DataFrame.corr by default calculates the Pearson correlation coefficient – a measure of the strength of the linear relationship between two variables. In our code we use this default option. [8]
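
For reference, here is a minimal numpy sketch of what that coefficient computes – the covariance of the two series divided by the product of their standard deviations:

import numpy as np

def pearson(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    # covariance term divided by the product of the standard deviation terms
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# pearson(data['X'], data['Y']) should match data['X'].corr(data['Y'])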

Results

After calculating the correlation coefficients we output the data in table format and plot the results as a heatmap using the seaborn module. Below is the data output and the plot. The maximum correlation value in each column is highlighted in yellow in the data table. Input data and full source code are available at [5], [6].

Correlation data between sweet food (averaged over the previous N days) and mood (averaged over the following M days)
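
The plotting code is not shown in the snippet above; here is a minimal sketch of how the corr_df table could be drawn as a heatmap with seaborn, assuming corr_df holds the coefficients computed earlier:

import seaborn as sns
import matplotlib.pyplot as plt

# cast to float in case the coefficients were stored as objects
sns.heatmap(corr_df.astype(float), annot=True, cmap="YlGnBu")
plt.xlabel("M (days of mood averaged forward)")
plt.ylabel("N (days of sweet food averaged back)")
plt.show()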

Conclusion

We performed a correlation analysis between eating sweet food and mental health. We confirmed that in our data example there is a moderate correlation (0.4). This correlation shows up when we use a moving average over 5 or 6 days, which corresponds with the observation that mood swings may appear several days after eating sweet food, not on the same or the next day.

We also learned how to estimate the correlation between two time series variables X and Y.

References
1. Our Moods, Our Foods The messy relationship between how we feel and what we eat
2. Can food affect your mood? By Cynthia Ramnarace, upwave.com
3. The Effects Of Chocolate On The Emotions
4. Correlation
5. Dataset from Correlation Data Analysis Between Food and Mood
6. Source Code for Machine Learning Correlation Data Analysis Between Food and Mood
7. Calculating Correlations of Forex Currency Pairs in Python
8. pandas.DataFrame.corr



LSTM Neural Network Training – Few Useful Techniques for Tuning Hyperparameters and Saving Time

Neural networks are among the most widely used machine learning techniques [1]. But neural network training and tuning of multiple hyperparameters takes time. I was recently building an LSTM neural network for the post Machine Learning Stock Market Prediction with LSTM Keras, and I learned some tricks that can save time. In this post you will find some techniques that helped me train neural nets more efficiently.

1. Adjusting Graph To See All Details

Sometimes the validation loss reaches such a high value that it prevents us from seeing the other data on the chart. I added a few lines of code to cap high values so that all the details are visible on the chart.

import matplotlib.pyplot as plt
import matplotlib.ticker as mtick

T=25
history_val_loss=[]

for x in history.history['val_loss']:
      if x >= T:
             history_val_loss.append (T)
      else:
             history_val_loss.append( x )

plt.figure(6)
plt.plot(history.history['loss'])
plt.plot(history_val_loss)
plt.title('model loss adjusted')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
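
The same clipping can also be done in one line with numpy, if preferred (equivalent to the loop above):

import numpy as np

history_val_loss = np.clip(history.history['val_loss'], None, T).tolist()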

Below is an example of such charts. The left graph shows no details except the high-value point because of the scale. Note that the graphs are obtained from different tests.

LSTM NN Training Value Loss Charts with High Number and Adjusted

2. Early Stopping

Early stopping saves time by stopping training once a monitored quantity has stopped improving. Here is how it can be coded:

from keras.callbacks import EarlyStopping

earlystop = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=80,  verbose=1, mode='min')
callbacks_list = [earlystop]

history=model.fit (x_train, y_train, batch_size =1, nb_epoch =1000, shuffle = False, validation_split=0.15, callbacks=callbacks_list)

Here is what the arguments mean, per the Keras documentation [2]:

min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta, will count as no improvement.
patience: number of epochs with no improvement after which training will be stopped.
verbose: verbosity mode.
mode: one of {auto, min, max}. In min mode, training will stop when the quantity monitored has stopped decreasing; in max mode it will stop when the quantity monitored has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity.

3. Weight Regularization

A weight regularizer can be used to regularize the neural net weights. Here is an example:

from keras.regularizers import L1L2
model.add (LSTM ( 400,  activation = 'relu', inner_activation = 'hard_sigmoid' , bias_regularizer=L1L2(l1=0.01, l2=0.01),  input_shape =(len(cols), 1), return_sequences = False ))

Below are charts showing the impact of the weight regularizer on the loss value:

LSTM NN Training Value Loss charts (with and without weight regularization)

Without weight regularization the validation loss climbs more during neural net training.

4. Optimizer

Keras allows us to use different optimizers. I was using the Adam optimizer, which is widely used. Here is how it can be configured:

from keras import optimizers

adam=optimizers.Adam(lr=0.01, beta_1=0.91, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
model.compile (loss ="mean_squared_error" , optimizer = adam)

I found that beta_1=0.89 performed better than 0.91 or the other values I tested.

5. Rolling Window Size

The rolling window size (in case we use a rolling window) can also impact performance. A window that is too small or too big will drive the validation loss higher. Below are charts for different window sizes (N=4, 8, 16, 18, from left to right). In this case the optimal value was 16, which resulted in 81% accuracy.

LSTM Neural Net Loss Charts with Different N
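
For reference, the rolling window itself is built in the full source below by shifting the close-price series N times; here is a condensed sketch of just that step (yt is the close-price series, as in the full code):

import pandas as pd

N = 16                                                 # rolling window size
data = pd.DataFrame({'yt': yt, 'yt_': yt.shift(-1)})   # target is the next day's price
cols = ['yt']
for i in range (N):
    data['yt' + str(i)] = yt.shift(i + 1)              # lagged values form the window features
    cols.append ('yt' + str(i))
data = data.dropna()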

I hope you enjoyed this post on different techniques for tuning hyperparameters. If you have any tips or anything else to add, please leave a comment below in the comment box.

Below is the full source code:

import numpy as np
import pandas as pd
from sklearn import preprocessing

import matplotlib.pyplot as plt
import matplotlib.ticker as mtick

from keras.regularizers import L1L2

fname="C:\\Users\\stock data\\GM.csv"
data_csv = pd.read_csv (fname)

#how many data we will use 
# (should not be more than dataset length )
data_to_use= 150

# number of training data
# should be less than data_to_use
train_end =120


total_data=len(data_csv)

#most recent data is in the end 
#so need offset
start=total_data - data_to_use


yt = data_csv.iloc [start:total_data ,4]    #Close price
yt_ = yt.shift (-1)   

print (yt_)

data = pd.concat ([yt, yt_], axis =1)
data. columns = ['yt', 'yt_']


N=16    
cols =['yt']
for i in range (N):
  
    data['yt'+str(i)] = list(yt.shift(i+1))
    cols.append ('yt'+str(i))
    
data = data.dropna()
data_original = data
data=data.diff()
data = data.dropna()
    
    
# target variable - closed price
# after shifting
y = data ['yt_']
x = data [cols]

   
scaler_x = preprocessing.MinMaxScaler ( feature_range =( -1, 1))
x = np. array (x).reshape ((len( x) ,len(cols)))
x = scaler_x.fit_transform (x)

scaler_y = preprocessing. MinMaxScaler ( feature_range =( -1, 1))
y = np.array (y).reshape ((len( y), 1))
y = scaler_y.fit_transform (y)

    
x_train = x [0: train_end,]
x_test = x[ train_end +1:len(x),]    
y_train = y [0: train_end] 
y_test = y[ train_end +1:len(y)]  

x_train = x_train.reshape (x_train. shape + (1,)) 
x_test = x_test.reshape (x_test. shape + (1,))

from keras.models import Sequential
from keras.layers.core import Dense
from keras.layers.recurrent import LSTM
from keras.layers import  Dropout
from keras import optimizers

from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)

from keras import regularizers

from keras.callbacks import EarlyStopping


earlystop = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=80,  verbose=1, mode='min')
callbacks_list = [earlystop]

model = Sequential ()
model.add (LSTM ( 400,  activation = 'relu', inner_activation = 'hard_sigmoid' , bias_regularizer=L1L2(l1=0.01, l2=0.01),  input_shape =(len(cols), 1), return_sequences = False ))
model.add(Dropout(0.3))
model.add (Dense (output_dim =1, activation = 'linear', activity_regularizer=regularizers.l1(0.01)))
adam=optimizers.Adam(lr=0.01, beta_1=0.89, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
model.compile (loss ="mean_squared_error" , optimizer = adam)
history=model.fit (x_train, y_train, batch_size =1, nb_epoch =1000, shuffle = False, validation_split=0.15, callbacks=callbacks_list)


y_train_back=scaler_y.inverse_transform (np. array (y_train). reshape ((len( y_train), 1)))
plt.figure(1)
plt.plot (y_train_back)


fmt = '%.1f'
tick = mtick.FormatStrFormatter(fmt)
ax = plt.axes()
ax.yaxis.set_major_formatter(tick)
print (model.summary())

print(history.history.keys())

T=25
history_val_loss=[]

for x in history.history['val_loss']:
      if x >= T:
             history_val_loss.append (T)
      else:
             history_val_loss.append( x )


plt.figure(2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
fmt = '%.1f'
tick = mtick.FormatStrFormatter(fmt)
ax = plt.axes()
ax.yaxis.set_major_formatter(tick)



plt.figure(6)
plt.plot(history.history['loss'])
plt.plot(history_val_loss)
plt.title('model loss adjusted')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')


score_train = model.evaluate (x_train, y_train, batch_size =1)
score_test = model.evaluate (x_test, y_test, batch_size =1)
print (" in train MSE = ", round( score_train ,4)) 
print (" in test MSE = ", score_test )

pred1 = model.predict (x_test) 
pred1 = scaler_y.inverse_transform (np. array (pred1). reshape ((len( pred1), 1)))
 
prediction_data = pred1[-1]     
model.summary()
print ("Inputs: {}".format(model.input_shape))
print ("Outputs: {}".format(model.output_shape))
print ("Actual input: {}".format(x_test.shape))
print ("Actual output: {}".format(y_test.shape))

print ("prediction data:")
print (prediction_data)

y_test = scaler_y.inverse_transform (np. array (y_test). reshape ((len( y_test), 1)))
print ("y_test:")
print (y_test)

act_data = np.array([row[0] for row in y_test])

fmt = '%.1f'
tick = mtick.FormatStrFormatter(fmt)
ax = plt.axes()
ax.yaxis.set_major_formatter(tick)

plt.figure(3)
plt.plot( y_test, label="actual")
plt.plot(pred1, label="predictions")

print ("act_data:")
print (act_data)

print ("pred1:")
print (pred1)

plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
          fancybox=True, shadow=True, ncol=2)


fmt = '$%.1f'
tick = mtick.FormatStrFormatter(fmt)
ax = plt.axes()
ax.yaxis.set_major_formatter(tick)

def moving_test_window_preds(n_future_preds):

    ''' n_future_preds - Represents the number of future predictions we want to make
                         This coincides with the number of windows that we will move forward
                         on the test data
    '''
    preds_moving = []                                    # Store the prediction made on each test window
    moving_test_window = [x_test[0,:].tolist()]          # First test window
    moving_test_window = np.array(moving_test_window)    
   
    for i in range(n_future_preds):
      
      
        preds_one_step = model.predict(moving_test_window) 
        preds_moving.append(preds_one_step[0,0]) 
                       
        preds_one_step = preds_one_step.reshape(1,1,1) 
        moving_test_window = np.concatenate((moving_test_window[:,1:,:], preds_one_step), axis=1) # new moving test window, where the first element from the window has been removed and the prediction  has been appended to the end
        

    print ("pred moving before scaling:")
    print (preds_moving)
                                         
    preds_moving = scaler_y.inverse_transform((np.array(preds_moving)).reshape(-1, 1))
    
    print ("pred moving after scaling:")
    print (preds_moving)
    return preds_moving
    
print ("do moving test predictions for next 22 days:")    
preds_moving = moving_test_window_preds(22)


count_correct=0
error =0
for i in range (len(y_test)):
    error=error + ((y_test[i]-preds_moving[i])**2) / y_test[i]

 
    if y_test[i] >=0 and preds_moving[i] >=0 :
        count_correct=count_correct+1
    if y_test[i] < 0 and preds_moving[i] < 0 :
        count_correct=count_correct+1

accuracy_in_change =  count_correct / (len(y_test) )

plt.figure(4)
plt.title("Forecast vs Actual, (data is differenced)")          
plt.plot(preds_moving, label="predictions")
plt.plot(y_test, label="actual")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
          fancybox=True, shadow=True, ncol=2)


print ("accuracy_in_change:")
print (accuracy_in_change)

ind=data_original.index.values[0] + data_original.shape[0] -len(y_test)-1
prev_starting_price = data_original.loc[ind,"yt_"]
preds_moving_before_diff =  [0 for x in range(len(preds_moving))]

for i in range (len(preds_moving)):
    if (i==0):
        preds_moving_before_diff[i]=prev_starting_price + preds_moving[i]
    else:
        preds_moving_before_diff[i]=preds_moving_before_diff[i-1]+preds_moving[i]


y_test_before_diff = [0 for x in range(len(y_test))]

for i in range (len(y_test)):
    if (i==0):
        y_test_before_diff[i]=prev_starting_price + y_test[i]
    else:
        y_test_before_diff[i]=y_test_before_diff[i-1]+y_test[i]


plt.figure(5)
plt.title("Forecast vs Actual (non differenced data)")
plt.plot(preds_moving_before_diff, label="predictions")
plt.plot(y_test_before_diff, label="actual")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
          fancybox=True, shadow=True, ncol=2)
plt.show()

References
1. Enhancing Neural Network Models for Knowledge Base Completion
2. Usage of callbacks
3. Rolling Window Regression: a Simple Approach for Time Series Next value Predictions