Integrating Sentiment Analysis API Python Django into Web Application

In this post we will learn how to use the ParallelDots sentiment analysis API from Python. We will look at running this API from a local Python environment on a laptop, and also in a web application environment with Python Django on the pythonanywhere hosting site.

In one of the previous posts we set up a Python Django project for a chatbot. Here we will add files to this environment. Setting up the chatbot files from the previous project is not necessary; we just need the folder structure.

Thus in this post we will reuse and extend some of the Python Django knowledge that we gained in the previous post. We will learn how to pass parameters from a user form to the server and back to the user form, how to serve images, and how to have a logic block in a web template.

ParallelDots [1] provides several machine learning APIs such as named entity recognition (NER), intent identification, text classification and sentiment analysis. In this post we will explore the sentiment analysis API and how it can be deployed on a web server with Django.

Running Text Analysis API Locally

First we need to install the library:
pip install paralleldots

We also need to obtain an API key. It is free and no credit card is required.

Now we run the code as below:

import paralleldots

paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXX")   # your ParallelDots API key

# for a single sentence
text = "the day is very nice"
response = paralleldots.sentiment(text)

print (response['sentiment'])
print (response['code'])
print (response['probabilities']['positive'])
print (response['probabilities']['negative'])
print (response['probabilities']['neutral'])

# for multiple sentences as a list
text = ["the day is very nice", "the day is very good", "this is the best day"]
response = paralleldots.batch_sentiment(text)
print (response)

The output looks like this:

{'probabilities': {'negative': 0.001, 'neutral': 0.002, 'positive': 0.997}, 'sentiment': 'positive', 'code': 200}
{'sentiment': [{'negative': 0.0, 'neutral': 0.001, 'positive': 0.999}], 'code': 200}

This is very simple. Now we will deploy it on a web hosting site with Python Django.

Deploying API on Web Hosting Site

Here we will build a web form. Using this form the user can enter some text, which will be passed to the sentiment analysis API. The result of the analysis will be passed back to the user, and an image will be shown based on the result of the sentiment analysis.
First we need to install the paralleldots library. To install the paralleldots module for Python 3.6, we'd run this in a Bash console (not in a Python one): [2]
pip3.6 install --user paralleldots

Note that there are two dashes before user.
Now create or update the following files:

In views.py we get the user input from the web form and send it to the API. Based on the sentiment output from the API we select an image filename.

from django.shortcuts import render

import paralleldots

paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXX")   # your ParallelDots API key

def do_sentiment_analysis(request):
    user_input = ''
    user_sent = ''
    fname = ''
    if request.POST:
        user_input = request.POST.get('user_input', '')
        # send the text to the sentiment analysis API
        response = paralleldots.sentiment(user_input)
        user_sent = response['sentiment']

        if user_sent == 'neutral':
            fname = "emoticon-1634586_640.png"
        elif user_sent == 'negative':
            fname = "emoticon-1634515_640.png"
        elif user_sent == 'positive':
            fname = "smiley-163510_640.jpg"

    return render(request, 'my_template_img.html',
                  {'resp': user_sent, 'fname': fname, 'user_input': user_input})

Create a new file my_template_img.html. This file will have the web input form for the user to enter some text. We also have an if statement here because we do not want to display an image when the form has just been opened and no submission has been done yet.

<form method="post">
    {% csrf_token %}

    <textarea rows=10 cols=50 name="user_input">{{user_input}}</textarea>
    <button type="submit">Submit</button>

  {% if "_640" in fname %}
     <img src="/media/{{fname}}" width="140px" height="100px">
  {% endif %}
</form>

Media folder
Into the media folder, download images representing negative, neutral and positive sentiment. We can find free images on the pixabay site.

So the folder can look like this. Note that if we use different file names we will need to adjust the code.
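The file-name selection done in the view can be sketched as a plain Python function, which makes it easy to test outside Django (the file names below are the ones used in this post; adjust them if yours differ):

```python
# Sketch of the sentiment-to-image-filename mapping used in the Django view,
# written as a standalone function so it can be tested without a web server.
def image_for_sentiment(sentiment):
    mapping = {
        "neutral": "emoticon-1634586_640.png",
        "negative": "emoticon-1634515_640.png",
        "positive": "smiley-163510_640.jpg",
    }
    # empty string means "no image", e.g. before the first form submission
    return mapping.get(sentiment, "")

print(image_for_sentiment("positive"))  # smiley-163510_640.jpg
```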

Next we update urls.py, located under /home/username/projectname/projectname. Add the import line to this file and also include a URL pattern for do_sentiment_analysis:

from views import do_sentiment_analysis

urlpatterns = [
    url(r'^press_my_buttons/$', press_my_buttons),
    url(r'^do_sentiment_analysis/$', do_sentiment_analysis),
]


The settings.py file is also located under /home/username/projectname/projectname.
Make sure it has the following:

MEDIA_ROOT = u'/home/username/projectname/media'
MEDIA_URL = '/media/'

STATIC_ROOT = u'/home/username/projectname/static'
STATIC_URL = '/static/'

Now, when everything is set up, we just access the link. In case we use pythonanywhere it will be:

Enter some text into the text box and click Submit. We will see the output of the sentiment analysis API and an image based on this sentiment. Below are some screenshots.

We integrated the machine learning sentiment analysis API from ParallelDots into our Python Django web environment. We built a web user input form that can send data to this API and receive the API output to show to the user. While building this we learned some Django techniques:
how to pass parameters from a user form to the server and back to the user form,
how to serve images,
how to have a logic block in a web template.
We can now build different web applications that use the API services from ParallelDots. And we are now able to integrate emotion analysis of text into our website.


1. ParallelDots
2. Installing New Modules
3. Handling Media Files in Django
4. Django Book
5. How to Create a Chatbot with ChatBot Open Source and Deploy It on the Web – Here we set up the project folder that we use in this post

Image Processing Using Pixabay API and Python

Recently I visited the great website Pixabay [1], which offers a wide range of images from people all around the world. These images are free to use, even commercially. And there is an API [2] for accessing images on Pixabay. This opens up a lot of ideas for interesting web applications using machine learning techniques.

For example, what if we want to find and download 10 images that very closely match a current image or theme? Or maybe there is a need to automatically scan new images that match some theme. As a first step in this direction, in this post we will look at how to download images from Pixabay, save them, and do some analysis of the images, such as calculating similarity between them.

As usual with most APIs, the first step is to sign up and get an API key. This is absolutely free on Pixabay.

We will use the python library python_pixabay to get links to images from the Pixabay site.
To download the images to a local folder, the python library urllib.request is used in the script.

Once the images are saved in a local folder, we can calculate the similarity between any two chosen images.
The python code for the similarity functions is taken from [4]. In this post, image similarity via histograms (using PIL, the python image library) and image similarity via vectors (using numpy) are calculated.

An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.[5]
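A quick way to see what PIL actually returns as a histogram is to build a tiny image in memory. This small sketch is illustrative only and is not part of the script below:

```python
from PIL import Image

# Build a tiny solid-red RGB image; for RGB mode, histogram() returns
# three concatenated 256-bin lists (R, G, B), i.e. 768 values in total.
img = Image.new("RGB", (4, 4), (255, 0, 0))
h = img.histogram()

print(len(h))    # 768
print(h[255])    # 16: all 16 pixels have red value 255
print(h[256])    # 16: all 16 pixels have green value 0
```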

At the end of the script, the image similarity via vectors is calculated for each pair of images. The values are all in the range between 0 and 1. The script downloads only 8 images from Pixabay and uses the default image search function.
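The vector-based similarity used below is essentially a cosine similarity between normalized pixel vectors. The idea can be sketched with plain numpy arrays (toy vectors here, not real image data):

```python
from numpy import array, dot, linalg

# Cosine similarity: 1.0 for vectors pointing in the same direction,
# values closer to 0 for dissimilar vectors (pixel values are non-negative).
a = array([1.0, 2.0, 3.0])
b = array([2.0, 4.0, 6.0])   # same direction as a, just scaled

sim = dot(a / linalg.norm(a, 2), b / linalg.norm(b, 2))
print(round(float(sim), 6))  # 1.0
```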

Thus we learned how to find and download images from the Pixabay website. Also, a few techniques for calculating image similarities were tested.

Here is the source code of the script.

# -*- coding: utf-8 -*-

import python_pixabay
import urllib

from PIL import Image

apikey = "xxxxxxxxxxxxxxx"   # your Pixabay API key
pix = python_pixabay.Pixabay(apikey)

# default image search
img_search = pix.image_search()

# view the content of the searches
hits = img_search["hits"]

images = []
for hit in hits:
    userImageURL = hit["userImageURL"]
    print (userImageURL)
    images.append (userImageURL)

print (images)

from functools import reduce

import urllib.request
image_directory = 'C:\\Users\\Owner\\Desktop\\A\\Python_2016_A\\images'

local_filenames = []
for i in range(8):
    local_filename, headers = urllib.request.urlretrieve(images[i])
    print (local_filename)
    local_filenames.append (local_filename)

def image_similarity_histogram_via_pil(filepath1, filepath2):
    from PIL import Image
    import math
    import operator
    image1 = Image.open(filepath1)
    image2 = Image.open(filepath2)
    image1 = get_thumbnail(image1)
    image2 = get_thumbnail(image2)
    h1 = image1.histogram()
    h2 = image2.histogram()
    # root-mean-square difference between the two histograms
    rms = math.sqrt(reduce(operator.add, list(map(lambda a, b: (a-b)**2, h1, h2)))/len(h1))
    print (rms)
    return rms
def image_similarity_vectors_via_numpy(filepath1, filepath2):
    # source: [4]
    # may throw: ValueError: matrices are not aligned .
    from numpy import average, linalg, dot, array
    image1 = Image.open(filepath1)
    image2 = Image.open(filepath2)
    image1 = get_thumbnail(image1, stretch_to_fit=True)
    image2 = get_thumbnail(image2, stretch_to_fit=True)
    images = [image1, image2]
    vectors = []
    norms = []
    for image in images:
        vector = []
        # average each pixel tuple into a single value
        for pixel_tuple in image.getdata():
            vector.append(average(pixel_tuple))
        vectors.append(vector)
        norms.append(linalg.norm(vector, 2))
    a, b = vectors
    a_norm, b_norm = norms
    # cosine similarity of the two normalized pixel vectors
    res = dot(array(a) / a_norm, array(b) / b_norm)
    print (res)
    return res

def get_thumbnail(image, size=(128,128), stretch_to_fit=False, greyscale=False):
    "get a smaller version of the image - makes comparison much faster/easier"
    if not stretch_to_fit:
        image.thumbnail(size, Image.ANTIALIAS)
    else:
        image = image.resize(size)  # for faster computation
    if greyscale:
        image = image.convert("L")  # convert it to grayscale
    return image

image_similarity_histogram_via_pil(local_filenames[0], local_filenames[1])
image_similarity_vectors_via_numpy(local_filenames[0], local_filenames[1])

for i in range(7):
  print (local_filenames[i])  
  for j in range(i+1,8):
      print (local_filenames[j])
      image_similarity_vectors_via_numpy(local_filenames[i], local_filenames[j])

1. Pixabay
2. Pixabay API
3. Python 2 & 3 Pixabay API interface
4. Python – Image Similarity Comparison Using Several Techniques
5. Image histogram Wikipedia

Useful APIs for Your Web Site

Here’s a useful list of resources on how to use APIs, compiled from posts that were published recently on this blog. The included APIs can provide fantastic ways to enhance websites.

1. The WordPress (WP) API exposes a simple yet powerful interface to WP Query, the posts API, post meta API, users API, revisions API and many more. Chances are, if you can do it with WordPress, WP API will let you do it. [1] With this API you can get all posts containing a specific search term and display them on your website, or get the text from all posts and do text analytics.
Here is the link to the post showing how to Retrieve Post Data Using the WordPress API with Python Script.
There you will find a python script that is able to get data from a WordPress blog using the WP API. This script saves the downloaded data into a csv file for further analysis or other purposes.

2. Everyone likes quotes. They can motivate, inspire or entertain. It is good to put quotes on a website, and here is the link to a post showing how to use 3 quotes APIs:
Quotes API for Web Designers and Developers
There you will find source code in perl that will help to integrate the quotes APIs from Random Famous Quotes, forismatic.com and favqs.com into your website.

3. Fresh content is critical for many websites, as it keeps users coming back. One possible way to have fresh content on a website is adding news content. Here is a set of posts showing how to use several free APIs, such as the Faroo API and the Guardian API, to get news feeds:
Getting the Data from the Web using PHP or Python for API
Getting Data from the Web with Perl and The Guardian API
Getting Data from the Web with Perl and Faroo API

In these posts, the most popular programming languages for web development (Perl, Python, PHP) are used with the Faroo and Guardian APIs to get fresh content.

4. The Twitter API can also be used to put fresh content on a web site, as Twitter is increasingly being used for business and personal purposes. Additionally, the Twitter API is used as a source of data for data mining to find interesting information. Below is a post showing how to get data through the Twitter API and how to process it.
Using Python for Mining Data From Twitter

5. The MediaWiki API is a web service that provides convenient access to Wikipedia. With the python module Wikipedia, which wraps the MediaWiki API, you can focus on using Wikipedia data, not getting it. [2] That makes it easy to access and parse data from Wikipedia. Using this library you can search Wikipedia, get article summaries, get data like links and images from a page, and more.

This is a great way to complement a web site with Wikipedia information about the site's product, service or discussed topic. Other examples of usage include showing web users a random page from Wikipedia, extracting topics or web links from Wikipedia content, tracking new pages or updates, or using downloaded text in text mining projects. Here is the link to a post with an example of how to use this API:
Getting Data From Wikipedia Using Python


1. WP API
2. Wikipedia API for Python

Retrieving Post Data Using the WordPress API with Python Script

In this post we will create a python script that is able to get data from a WordPress (WP) blog using the WP API. The script will save the downloaded data into a csv file for further analysis or other purposes.
The WP API returns data in json format and is accessible through a link on the blog. The WP API is packaged as a plugin, so it should be added to the WP blog from the plugin page. [6]

Once it is added and activated in WordPress blog, here’s how you’d typically interact with WP-API resources:
GET /wp-json/wp/v2/posts to get a collection of Posts.

Other operations, such as getting a random post, getting posts for a specific category, adding a post to the blog (with the POST method, shown later in this post) and retrieving a specific post, are possible too.

During testing it was found that the default number of posts per page is 10; however, you can specify a different number, up to 100. If you need to fetch more than 100 posts, then you need to iterate through pages [3]. The minimum per page is 1.
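The iteration over pages can be sketched like this, where fetch_page is a hypothetical helper standing in for the actual GET request, and the loop stops when a page comes back empty:

```python
# Page through posts 100 at a time; fetch_page(page) is a hypothetical
# callable that returns the list of posts for that page (empty when done).
def fetch_all(fetch_page):
    posts, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        posts.extend(batch)
        page += 1
    return posts

# tiny fake fetcher with two pages, just to exercise the loop
pages = {1: ["post 1", "post 2"], 2: ["post 3"]}
print(fetch_all(lambda p: pages.get(p, [])))  # ['post 1', 'post 2', 'post 3']
```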

Here is an example query string to get 100 posts per page from the category sport:
?filter[category_name]=sport&per_page=100

To return posts from all categories, simply omit the category filter.

To return posts from a specific page, for example page 2 (if there are several pages of posts), add the page parameter: ?page=2
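Building such request URLs can be sketched with the standard library; here example.com is a placeholder for your own blog domain:

```python
from urllib.parse import urlencode

# "https://example.com" is a placeholder for your actual blog URL.
base = "https://example.com/wp-json/wp/v2/posts"
params = {"per_page": 100, "page": 2}

url = base + "?" + urlencode(params)
print(url)  # https://example.com/wp-json/wp/v2/posts?per_page=100&page=2
```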

Here is how you can make a POST request to add a new post to the blog. You would need to use the post method of requests instead of get. The code below, however, will not work and will return response code 401, which means "Unauthorized". To successfully add a post you also need to provide actual credentials, but this will not be shown in this post as it is not required for getting posts data.

import requests

# api_url is a placeholder for your blog's posts endpoint,
# i.e. the /wp-json/wp/v2/posts link on your own blog
data = {'title': 'new title', "content": "new content"}
r = requests.post(api_url, data=data)
print (r.status_code)
print (r.headers['content-type'])
print (r.json())

The script provided in this post will be doing the following steps:
1. Get data from the blog using the WP API and urlopen from urllib, as below:

from urllib.request import urlopen
with urlopen(url_link) as url:
    data = url.read().decode('utf-8')
print (data)

2. Save the json data as a text file. This will also be helpful when we need to see which fields are available, how to navigate to needed fields, or if we need to extract more information later.

3. Open the saved text file, read the json data, extract the needed fields (such as title and content) and save the extracted information into a csv file.

In future posts we will look at how to do text mining on the extracted data.

Here is the full source code for retrieving post data script.

# -*- coding: utf-8 -*-

import os
import csv
import json

from urllib.request import urlopen

# url_link holds the WP API posts link on your blog
with urlopen(url_link) as url:
    data = url.read().decode('utf-8')

print (data)

# Write data to file
filename = "posts json1.txt"
with open(filename, 'w', encoding="utf8") as file_:
    file_.write(data)

def save_to_file (fn, row, fieldnames):
    # append if the file already exists, otherwise create it with a header row
    m = 'a' if os.path.isfile(fn) else 'w'
    with open(fn, m, encoding="utf8", newline='' ) as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        if (m=="w"):
            writer.writeheader()
        writer.writerow(row)

r = {}
with open(filename, encoding="utf8") as json_file:
    json_data = json.load(json_file)
for n in json_data:
  r["Title"] = n['title']['rendered']
  r["Content"] = n['content']['rendered']
  save_to_file ("posts.csv", r, ['Title', 'Content'])

Below are links that are related to the topic or were used in this post. Some of them describe the same thing but in a javascript/jQuery environment.


2. Glossary
3. Get all posts for a specific category
4. How to Retrieve Data Using the WordPress API (javascript, jQuery)
5. What is an API and Why is it Useful?
6. Using the WP API to Fetch Posts
7. urllib Tutorial Python 3
8. Using Requests in Python
9. Basic urllib get and post with and without data
10. Submit WordPress Posts from Front-End with the WP API (javascript)

Quotes API for Web Designers and Developers

No one can deny the power of a good quote. They motivate and inspire us to be our best. [1]

Here are 3 quotes APIs that can be integrated into your website, with source code examples in perl.

1. Random Famous Quotes provides a random quote from famous movies in JSON format. This API is hosted on Mashape.
Mashape has an example of how to use the API with the unirest module; however, as perl is not supported by unirest, here is an example of consuming the API in a different way. You need to obtain a Mashape key to run your own code.

print "Content-type: text/html\n\n";
use LWP::UserAgent;
use HTTP::Request::Common qw{ POST };
use JSON qw( decode_json );

my $ua = LWP::UserAgent->new;

# $server_endpoint holds the Random Famous Quotes endpoint URL on Mashape
my $req = HTTP::Request->new(POST => $server_endpoint);

$req->header('content-type' => 'application/x-www-form-urlencoded');
$req->header('X-Mashape-Key' => 'xxxxxxxxxxxxx');
$req->header('Accept' => 'application/json');

$resp = $ua->request($req);
if ($resp->is_success) {
    my $message = decode_json($resp->content);

    print $message->{"quote"};
    print $message->{"author"};
}
else {
    print "HTTP GET error code: ", $resp->code, "\n";
    print "HTTP GET error message: ", $resp->message, "\n";
}

2. forismatic.com provides an API to a collection of the most inspiring expressions of mankind. The output supports different formats such as xml, json, jsonp, html and text. The quotes can be in English or Russian. Below is a source code example of how to consume this API. Note that in this example we use the GET method, while in the previous example we used POST. The code example also shows how to get output in json and html formats.

print "Content-type: text/html\n\n";
use LWP::UserAgent;
use HTTP::Request::Common qw{ POST };
use JSON qw( decode_json );

my $ua5 = LWP::UserAgent->new;

# the GET URL is the forismatic API endpoint requesting json output
my $req5 = HTTP::Request->new(GET => "");

$req5->header('content-type' => 'application/json');
$resp5 = $ua5->request($req5);

if ($resp5->is_success) {
    $message = decode_json($resp5->content);

    print $message->{"quoteText"};
    print $message->{"quoteAuthor"};
}
else {
    print "HTTP GET error code: ", $resp5->code, "\n";
    print "HTTP GET error message: ", $resp5->message, "\n";
}

# below is the code to get output in html format
$req5 = HTTP::Request->new(GET => "");

$req5->header('content-type' => 'text/html');

$resp5 = $ua5->request($req5);
if ($resp5->is_success) {
    $message5 = $resp5->content;
    print $message5;
}
else {
    print "HTTP GET error code: ", $resp5->code, "\n";
    print "HTTP GET error message: ", $resp5->message, "\n";
}

3. FavQs also provides a quotes API. You can do many different things with quotes on this site or with the API. FavQs allows you to collect, discover, and share your favorite quotes. You can get the quote of the day, or search by authors, tags or users. Here is a code example:

print "Content-type: text/html\n\n";
use LWP::UserAgent;
use HTTP::Request::Common qw{ POST };
use JSON qw( decode_json );

my $ua5 = LWP::UserAgent->new;

# the GET URL is the FavQs quote-of-the-day endpoint
my $req5 = HTTP::Request->new(GET => "");

$req5->header('content-type' => 'application/json');
$resp5 = $ua5->request($req5);
if ($resp5->is_success) {
    $message5 = $resp5->content;
    print $message5;

    $message = decode_json($resp5->content);
    print $message->{"quote"}->{"body"};
    print $message->{"quote"}->{"author"};
}
else {
    print "HTTP GET error code: ", $resp5->code, "\n";
    print "HTTP GET error message: ", $resp5->message, "\n";
}

1. 38 of the Most Inspirational Leadership Quotes
2. LWP
3. How to send http get or post request in perl