Converting Categorical Text Variable into Binary Variables

Sometimes we might need to convert a categorical feature into multiple binary features. Such a situation came up while I was implementing a decision tree with an independent categorical variable using python sklearn.tree for the post Building Decision Trees in Python – Handling Categorical Data, where it turned out that a text independent variable is not supported.

One solution is binary encoding, also called one-hot encoding, where we code ['red', 'green', 'blue'] with 3 columns, one for each category, containing 1 when the category matches and 0 otherwise. [1]

Here we implement python code that performs such binary encoding. The script looks at a text data column and adds numerical columns with values 0 or 1 to the original data. If the category word occurs in the text column, the value in the column for that category will be 1, otherwise 0.

The list of categories is initialized at the beginning of the script, together with the data source file and the index of the column holding the text data. The script appends the new binary columns on the right side of the data, one per category.

The next step in the script is to loop through each row, perform the binary conversion, and update the data.

Below is the full source code; running it prints the data with the added binary columns.


# -*- coding: utf-8 -*-

import pandas as pd

# categories to encode as binary columns
words = ["adwords", "adsense", "mortgage", "money", "loan"]
data = pd.read_csv('adwords_data5.csv', sep=',', header=0)

total_rows = len(data.index)

# index of the column that contains the text data
y_text_column_index = 7

for w in words:
    # create a new column for this category on the right side, initialized to 0
    data[w] = 0
    col_index = data.columns.get_loc(w)

    # set the flag to 1 for every row whose text contains the category word
    for x in range(total_rows):
        if w in data.iloc[x, y_text_column_index]:
            data.iloc[x, col_index] = 1

print(data)
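The nested loop can also be replaced with pandas' vectorized string operations; a minimal sketch of the same substring test, assuming the text column contains no missing values:

# vectorized alternative to the row loop above: str.contains returns a
# boolean Series, which is cast to 0/1 (regex=False forces a plain
# substring match rather than a regular expression)
for w in words:
    data[w] = data.iloc[:, y_text_column_index].str.contains(w, regex=False).astype(int)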

References
1. strings as features in decision tree/random forest
2. Building Decision Trees in Python
3. Building Decision Trees in Python – Handling Categorical Data



Building Decision Trees in Python – Handling Categorical Data

In the post Building Decision Trees in Python we looked at a decision tree with a numerical continuous dependent variable. This type of decision tree is also called a regression tree.

But what if we need to use a categorical dependent variable? It is still possible to create a decision tree, and in this post we will look at how to do it when the dependent variable is categorical data. In this case the decision tree is called a classification tree. Classification trees, as the name implies, are used to separate the dataset into classes belonging to the response variable. [4] Classification is a typical problem found in such fields as machine learning, predictive analytics, and data mining.

Getting Data
For simplicity we will use the same dataset as before, but we will convert the numerical target variable into a categorical variable, since we want to build python code for a decision tree with a categorical dependent variable.
To convert the dependent variable into categorical form we use the following simple rule:
if CPC < 22 then CPC = "Low"
else CPC = "High"
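In pandas this rule can be applied in a single pass; a minimal sketch, assuming the numeric cost column is named 'CPC' (an assumed name, not confirmed by the original post; adjust to the actual dataset):

import pandas as pd

data = pd.read_csv('adwords_data.csv', sep=',', header=1)
# 'CPC' is an assumed column name; adjust it to the actual dataset
data['CPC'] = data['CPC'].apply(lambda v: "Low" if v < 22 else "High")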

For the independent variables we will use the "keyword" and "number of words" fields.
A keyword (usually several words) is a categorical variable. In general this is not a problem, since in either case (regression or classification tree) the predictors, or independent variables, may be categorical or numeric. It is the target variable that determines the type of decision tree needed. [4]

However, sklearn.tree (at least the version used here) does not support categorical independent variables. See the discussion and suggestions on Stack Overflow [5]. To use an independent categorical variable, we code the categorical feature into multiple binary features. For example, we might code ['red', 'green', 'blue'] with 3 columns, one for each category, containing 1 when the category matches and 0 otherwise. This is called one-hot encoding, binary encoding, one-of-k encoding or whatever. [5]
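For the simple color example, the one-hot columns can also be produced with pandas' built-in helper; a short sketch:

import pandas as pd

colors = pd.Series(['red', 'green', 'blue', 'red'])
# one 0/1 column per category
print(pd.get_dummies(colors).astype(int))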

A few rows from the data table are shown below. We added 2 columns for the categories "Insurance" and "Adsense". We actually have more categories, and therefore added more columns, but these are not shown in the table.

For a small dataset such a conversion can be done manually. But we also created a python script for this specific task in the post Converting Categorical Text Variable into Binary Variables. [10]

Keyword                  Number of words   Insurance   Adsense   CTR     Cost   Cost (categorical)
car insurance premium    3                 1           0         0.012   20     Low
AdSense data             2                 0           1         0.025   1061   High

Building the Code
Now we need to build the code. The call to the decision tree classifier looks like this:

clf_gini = DecisionTreeClassifier(criterion = "gini", random_state = 100,
max_depth=8, min_samples_leaf=4)

Here we use the Gini index criterion for splitting the data.
In the call to export_graphviz we specify the class names:

export_graphviz(tree, out_file=f, feature_names=feature_names, filled=True, rounded=True, class_names=["Low", "High"])

The rest of the code is the same as in the previous post for the regression tree.

Here is the python computer code:


# -*- coding: utf-8 -*-

import subprocess

import pandas as pd
# note: in older scikit-learn versions train_test_split lived in sklearn.cross_validation
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz


def visualize_tree(tree, feature_names):
    # write the tree in Graphviz .dot format, then render it to a .png
    with open("dt.dot", 'w') as f:
        export_graphviz(tree, out_file=f, feature_names=feature_names,
                        filled=True, rounded=True, class_names=["Low", "High"])

    command = ["C:\\Program Files (x86)\\Graphviz2.38\\bin\\dot.exe", "-Tpng",
               "C:\\Users\\Owner\\Desktop\\A\\Python_2016_A\\dt.dot", "-o", "dt.png"]

    try:
        subprocess.check_call(command)
    except (OSError, subprocess.CalledProcessError):
        exit("Could not run dot, ie graphviz, to produce visualization")


data = pd.read_csv('adwords_data.csv', sep=',', header=1)

# independent variables: number of words plus the binary category columns
X = data.values[:, [3, 17, 18, 19, 20, 21, 22]]
# dependent variable: the categorical cost column
Y = data.values[:, 8]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3,
                                                    random_state=100)

clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100,
                                  max_depth=8, min_samples_leaf=4)
clf_gini.fit(X_train, y_train)

visualize_tree(clf_gini, ["Words in Key Phrase", "AdSense", "Mortgage",
                          "Money", "loan", "lawyer", "attorney"])
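The script splits off 30% of the rows as a test set but never evaluates on it; a natural follow-up (a sketch, not part of the original script) is to check the classification accuracy on the held-out rows:

from sklearn.metrics import accuracy_score

# predict on the held-out rows and compare to the true labels
y_pred = clf_gini.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))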


Decision Tree (partial view)

References
1. Decision tree Wikipedia
2. MLlib – Decision Trees
3. Visual analysis of AdWords data: a primer
4. 2 main differences between classification and regression trees
5. strings as features in decision tree/random forest
6. Decision Trees with scikit-learn
7. Classification: Basic Concepts, Decision Trees, and Model Evaluation
8. Understanding decision tree output from export_graphviz
9. Building Decision Trees in Python
10. Converting Categorical Text Variable into Binary Variables



Building Decision Trees in Python

A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.
Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning. [1]

Decision trees are widely used since they are easy to interpret, handle categorical features, extend to the multiclass classification setting, do not require feature scaling, and are able to capture non-linearities and feature interactions. [2] Decision trees are also one of the most widely used predictive analytics techniques.

Recently I decided to build python code for a decision tree on AdWords data. This was motivated by the post [3] about visual analysis of an AdWords dataset. Below are the main components I used for implementing the decision tree.

Dataset
AdWords dataset – the dataset was obtained on the Internet. Below is a table with a few rows to show the data. Only the columns that were used for the decision tree are shown.

Keyword                  Number of words   CPC   Clicks   CTR     Cost   Impressions
car insurance premium    3                 176   7        0.012   1399   484
AdSense data             2                 119   13       0.025   1061   466

The following independent variables were selected:
Number of words in the keyword phrase – this column was added based on the keyword phrase column (see the sketch after this list).
CTR – click-through rate
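The number-of-words column can be derived with pandas string methods; a minimal sketch, assuming the keyword column is named 'Keyword' (an assumed name, not confirmed by the original post):

import pandas as pd

data = pd.read_csv('adwords_data.csv', sep=',', header=1)
# 'Keyword' is an assumed column name; adjust it to the actual dataset
data['Number of words'] = data['Keyword'].str.split().str.len()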

For the dependent variable, CPC – Average Cost per Click – was selected.

Python Module
As the dependent variable is numeric and continuous, the regression decision tree from the python module sklearn.tree was used in the script:
from sklearn.tree import DecisionTreeRegressor

In sklearn.tree Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. [5]

Visualization
For visualization of the decision tree, Graphviz was installed. "Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains." [6]

Python Script
The created code consists of the following steps:
reading data from the csv data file
selecting the needed columns
splitting the dataset into testing and training sets
initializing DecisionTreeRegressor
visualizing the decision tree via a function; note that the path to Graphviz is specified inside the script.

The decision tree and Python code are shown below. Online resources used for this post are provided in the reference section.

Decision Tree

Python computer code:


# -*- coding: utf-8 -*-

import subprocess

import pandas as pd
# note: in older scikit-learn versions train_test_split lived in sklearn.cross_validation
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_graphviz


def visualize_tree(tree, feature_names):
    # write the tree in Graphviz .dot format, then render it to a .png
    with open("dt.dot", 'w') as f:
        export_graphviz(tree, out_file=f, feature_names=feature_names,
                        filled=True, rounded=True)

    command = ["C:\\Program Files (x86)\\Graphviz2.38\\bin\\dot.exe", "-Tpng",
               "C:\\Users\\Owner\\Desktop\\A\\Python_2016_A\\dt.dot", "-o", "dt.png"]

    try:
        subprocess.check_call(command)
    except (OSError, subprocess.CalledProcessError):
        exit("Could not run dot, ie graphviz, to produce visualization")


data = pd.read_csv('adwords_data.csv', sep=',', header=1)

# independent variables: number of words and CTR
X = data.values[:, [3, 13]]
# dependent variable: CPC
Y = data.values[:, 11]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3,
                                                    random_state=100)

clf = DecisionTreeRegressor(random_state=100, max_depth=3, min_samples_leaf=4)
clf.fit(X_train, y_train)

visualize_tree(clf, ["Words in Key Phrase", "CTR"])
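Here too the test split is never used after training; one way to quantify the fit (a sketch, not part of the original script) is the mean squared error on the held-out rows:

from sklearn.metrics import mean_squared_error

# predict CPC for the held-out rows and compare to the true values
y_pred = clf.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, y_pred))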

References
1. Decision tree Wikipedia
2. MLlib – Decision Trees
3. Visual analysis of AdWords data: a primer
4. Decision trees in python with scikit_learn and pandas
5. Decision Tree
6. Graphviz – Graph Visualization Software



How to Run Online Machine Learning Algorithms Tool

In this post we will look at how to run Online Machine Learning Algorithms from this website. This service is a free tool that allows you to run some algorithms without coding or installing software modules.

We will use the example of classifying the iris data set with a neural network. This example is described in Iris Plant Classification Using Neural Network – Online Experiments with Normalization and Other Parameters.

Below are the steps for using the online Machine Learning Algorithms tool:

1. Access the link Online Machine Learning Algorithms with the feed-forward neural network selected: Online Machine Learning Algorithms tool for classification of the iris data set with a feed-forward neural network. If you want to change the algorithm, or no algorithm is shown in the pull-down menu, select the needed algorithm. In our example we use Feedforward Neural Network (GD).
Click Load parameters for this model.

2. Input the data that you want to run. In our case it is the iris data sets for training and testing, the learning rate, and the number of neurons in the hidden layer. In the screenshot below you can see the normalized data set loaded.

3. Click Run now.

4. Click results link.

5. Click the Refresh Page button on this new page; you may need to click a few times until the data output shows up. Usually it takes less than 1 minute, but it will depend on how much data you need to process.
Scroll to the bottom of the page to see the calculations.



Iris Data Set – Normalized Data

On this page you can find the normalized iris data set that was used in Iris Plant Classification Using Neural Network – Online Experiments with Normalization and Other Parameters. The data set is divided into a training data set (141 records) and a testing data set (9 records, 3 for each class). The class labels are shown separately.

To calculate the normalized data, each value was scaled to the [0, 1] range as (value − min) / (max − min), using the per-column minima and maxima in the table below.

         sepal length   sepal width   petal length   petal width
min      4.3            2.0           1.0            0.1
max      7.9            4.4           6.9            2.5
max-min  3.6            2.4           5.9            2.4

Here min, max and max-min are taken over the columns of the iris data set, which are:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
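A minimal sketch of this min-max normalization in Python:

def normalize(value, col_min, col_max):
    # scale a value to the [0, 1] range using its column's min and max
    return (value - col_min) / (col_max - col_min)

# example: a sepal length of 4.6 -> (4.6 - 4.3) / 3.6 ≈ 0.0833,
# which matches the first value of the first training row below
print(normalize(4.6, 4.3, 7.9))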

Training data set:

0.083333333 0.458333333 0.084745763 0.041666667
0.194444444 0.666666667 0.06779661 0.041666667
0.305555556 0.791666667 0.118644068 0.125
0.083333333 0.583333333 0.06779661 0.083333333
0.194444444 0.583333333 0.084745763 0.041666667
0.027777778 0.375 0.06779661 0.041666667
0.166666667 0.458333333 0.084745763 0
0.305555556 0.708333333 0.084745763 0.041666667
0.138888889 0.583333333 0.101694915 0.041666667
0.138888889 0.416666667 0.06779661 0
0 0.416666667 0.016949153 0
0.416666667 0.833333333 0.033898305 0.041666667
0.388888889 1 0.084745763 0.125
0.305555556 0.791666667 0.050847458 0.125
0.222222222 0.625 0.06779661 0.083333333
0.388888889 0.75 0.118644068 0.083333333
0.222222222 0.75 0.084745763 0.083333333
0.305555556 0.583333333 0.118644068 0.041666667
0.222222222 0.708333333 0.084745763 0.125
0.083333333 0.666666667 0 0.041666667
0.222222222 0.541666667 0.118644068 0.166666667
0.138888889 0.583333333 0.152542373 0.041666667
0.194444444 0.416666667 0.101694915 0.041666667
0.194444444 0.583333333 0.101694915 0.125
0.25 0.625 0.084745763 0.041666667
0.25 0.583333333 0.06779661 0.041666667
0.111111111 0.5 0.101694915 0.041666667
0.138888889 0.458333333 0.101694915 0.041666667
0.305555556 0.583333333 0.084745763 0.125
0.25 0.875 0.084745763 0
0.333333333 0.916666667 0.06779661 0.041666667
0.166666667 0.458333333 0.084745763 0
0.194444444 0.5 0.033898305 0.041666667
0.333333333 0.625 0.050847458 0.041666667
0.166666667 0.458333333 0.084745763 0
0.027777778 0.416666667 0.050847458 0.041666667
0.222222222 0.583333333 0.084745763 0.041666667
0.194444444 0.625 0.050847458 0.083333333
0.055555556 0.125 0.050847458 0.083333333
0.027777778 0.5 0.050847458 0.041666667
0.194444444 0.625 0.101694915 0.208333333
0.222222222 0.75 0.152542373 0.125
0.138888889 0.416666667 0.06779661 0.083333333
0.222222222 0.75 0.101694915 0.041666667
0.083333333 0.5 0.06779661 0.041666667
0.277777778 0.708333333 0.084745763 0.041666667
0.194444444 0.541666667 0.06779661 0.041666667
0.333333333 0.125 0.508474576 0.5
0.611111111 0.333333333 0.610169492 0.583333333
0.388888889 0.333333333 0.593220339 0.5
0.555555556 0.541666667 0.627118644 0.625
0.166666667 0.166666667 0.389830508 0.375
0.638888889 0.375 0.610169492 0.5
0.25 0.291666667 0.491525424 0.541666667
0.194444444 0 0.423728814 0.375
0.444444444 0.416666667 0.542372881 0.583333333
0.472222222 0.083333333 0.508474576 0.375
0.5 0.375 0.627118644 0.541666667
0.361111111 0.375 0.440677966 0.5
0.666666667 0.458333333 0.576271186 0.541666667
0.361111111 0.416666667 0.593220339 0.583333333
0.416666667 0.291666667 0.525423729 0.375
0.527777778 0.083333333 0.593220339 0.583333333
0.361111111 0.208333333 0.491525424 0.416666667
0.444444444 0.5 0.644067797 0.708333333
0.5 0.333333333 0.508474576 0.5
0.555555556 0.208333333 0.661016949 0.583333333
0.5 0.333333333 0.627118644 0.458333333
0.583333333 0.375 0.559322034 0.5
0.638888889 0.416666667 0.576271186 0.541666667
0.694444444 0.333333333 0.644067797 0.541666667
0.666666667 0.416666667 0.677966102 0.666666667
0.472222222 0.375 0.593220339 0.583333333
0.388888889 0.25 0.423728814 0.375
0.333333333 0.166666667 0.474576271 0.416666667
0.333333333 0.166666667 0.457627119 0.375
0.416666667 0.291666667 0.491525424 0.458333333
0.472222222 0.291666667 0.694915254 0.625
0.305555556 0.416666667 0.593220339 0.583333333
0.472222222 0.583333333 0.593220339 0.625
0.666666667 0.458333333 0.627118644 0.583333333
0.555555556 0.125 0.576271186 0.5
0.361111111 0.416666667 0.525423729 0.5
0.333333333 0.208333333 0.508474576 0.5
0.333333333 0.25 0.576271186 0.458333333
0.5 0.416666667 0.610169492 0.541666667
0.416666667 0.25 0.508474576 0.458333333
0.194444444 0.125 0.389830508 0.375
0.361111111 0.291666667 0.542372881 0.5
0.388888889 0.416666667 0.542372881 0.458333333
0.388888889 0.375 0.542372881 0.5
0.527777778 0.375 0.559322034 0.5
0.222222222 0.208333333 0.338983051 0.416666667
0.388888889 0.333333333 0.525423729 0.5
0.555555556 0.541666667 0.847457627 1
0.416666667 0.291666667 0.694915254 0.75
0.777777778 0.416666667 0.830508475 0.833333333
0.555555556 0.375 0.779661017 0.708333333
0.611111111 0.416666667 0.813559322 0.875
0.916666667 0.416666667 0.949152542 0.833333333
0.166666667 0.208333333 0.593220339 0.666666667
0.833333333 0.375 0.898305085 0.708333333
0.666666667 0.208333333 0.813559322 0.708333333
0.805555556 0.666666667 0.86440678 1
0.611111111 0.5 0.694915254 0.791666667
0.583333333 0.291666667 0.728813559 0.75
0.694444444 0.416666667 0.762711864 0.833333333
0.388888889 0.208333333 0.677966102 0.791666667
0.416666667 0.333333333 0.694915254 0.958333333
0.583333333 0.5 0.728813559 0.916666667
0.611111111 0.416666667 0.762711864 0.708333333
0.944444444 0.75 0.966101695 0.875
0.944444444 0.25 1 0.916666667
0.472222222 0.083333333 0.677966102 0.583333333
0.722222222 0.5 0.796610169 0.916666667
0.361111111 0.333333333 0.661016949 0.791666667
0.944444444 0.333333333 0.966101695 0.791666667
0.555555556 0.291666667 0.661016949 0.708333333
0.666666667 0.541666667 0.796610169 0.833333333
0.805555556 0.5 0.847457627 0.708333333
0.527777778 0.333333333 0.644067797 0.708333333
0.5 0.416666667 0.661016949 0.708333333
0.583333333 0.333333333 0.779661017 0.833333333
0.805555556 0.416666667 0.813559322 0.625
0.861111111 0.333333333 0.86440678 0.75
1 0.75 0.915254237 0.791666667
0.583333333 0.333333333 0.779661017 0.875
0.555555556 0.333333333 0.694915254 0.583333333
0.5 0.25 0.779661017 0.541666667
0.944444444 0.416666667 0.86440678 0.916666667
0.555555556 0.583333333 0.779661017 0.958333333
0.583333333 0.458333333 0.762711864 0.708333333
0.472222222 0.416666667 0.644067797 0.708333333
0.722222222 0.458333333 0.745762712 0.833333333
0.666666667 0.458333333 0.779661017 0.958333333
0.722222222 0.458333333 0.694915254 0.916666667
0.416666667 0.291666667 0.694915254 0.75
0.694444444 0.5 0.830508475 0.916666667
0.666666667 0.541666667 0.796610169 1
0.666666667 0.416666667 0.711864407 0.916666667
0.555555556 0.208333333 0.677966102 0.75

Testing data set:
0.222222222 0.625 0.06779661 0.041666667
0.166666667 0.416666667 0.06779661 0.041666667
0.111111111 0.5 0.050847458 0.041666667
0.75 0.5 0.627118644 0.541666667
0.583333333 0.5 0.593220339 0.583333333
0.722222222 0.458333333 0.661016949 0.583333333
0.444444444 0.416666667 0.694915254 0.708333333
0.611111111 0.416666667 0.711864407 0.791666667
0.527777778 0.583333333 0.745762712 0.916666667

Training data set – class label values

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1

Testing data set – class label values
0
0
0
0.5
0.5
0.5
1
1
1