Classification metrics and Naive Bayes

We have seen how classification via logistic regression works. Here we will look into a special classifier called Naive Bayes and into the metrics used in classification problems, all through a text classification example.

We build an analytics model using text as our data, specifically trying to understand the sentiment of tweets about the company Apple. This is a special classification problem, often called Sentiment Analysis.

The challenge is to see if we can correctly classify tweets as being negative, positive, or neutral about Apple.

The code is available as a Python notebook on GitHub.

Get the data

Twitter data is publicly available and you can collect it through their API; in this case only the content of the tweets is used, not the sender nor the date. The tweets are in English.

The data set is available in the Git repo, together with the Python code.
Here is more Twitter data if you want to experiment further: http://help.sentiment140.com/for-students/

As usual we start by reading the data into a pandas data frame:

import pandas as pd  # Start by importing the tweets data
X = pd.read_csv('tweets.csv')
X.shape
(1181, 2)
X.columns
Index(['Tweet', 'Avg'], dtype='object')
X.info()
RangeIndex: 1181 entries, 0 to 1180
Data columns (total 2 columns):
Tweet 1181 non-null object
Avg 1181 non-null float64
dtypes: float64(1), object(1)
memory usage: 18.5+ KB
X.head(5)
Tweet Avg
0 I have to say, Apple has by far the best custo… 2.0
1 iOS 7 is so fricking smooth & beautiful!! #Tha… 2.0
2 LOVE U @APPLE 1.8
3 Thank you @apple, loving my new iPhone 5S!!!!!… 1.8
4 .@apple has the best customer service. In and … 1.8

Data contains 1181 tweets (as text) and one sentiment label.
The sentiment label has been applied manually, as strongly negative, negative, neutral, positive or strongly positive (a discrete number on a scale from -2 to 2); each tweet's Avg value is the average of five separate ratings, which is why it ends up being a real number (e.g. ratings of 2, 2, 2, 1 and 2 average to 1.8).

min(X.Avg)
-2.0
max(X.Avg)
2.0

2 means very positive, 0 is neutral and -2 is very negative

X.Avg.hist()

The graph shows the distribution of the number of tweets classified into each of the categories.
We can see that the majority of tweets were classified as neutral (score = zero), with a small number classified as strongly negative or strongly positive.

Now we have a set of tweets that are labeled with their sentiment.
But how do we build independent features just from the text to be used to predict the sentiment?

(Figure: number of tweets by their sentiment average score)

Process the data

Fully understanding text is difficult, but the Bag of Words (BoW) algorithm provides a very simple approach: it just counts the number of times each word appears in the text and uses these counts as the independent features.
For example, in the sentence “This phone model is great. I would recommend it to my friends anytime”, the word recommend is seen once, as is the word great, and so on.
In Bag of Words, there’s one feature for each “meaningful” word.
This is a very simple approach, but it is often very effective and is commonly used as the baseline in text analytics projects.
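As a quick illustration of the counting idea, here is a minimal sketch using Python’s collections.Counter on the example sentence (the real features are built with scikit-learn later on):

from collections import Counter

sentence = "this phone model is great i would recommend it to my friends anytime"
counts = Counter(sentence.split())  # count how many times each word appears
counts["great"], counts["recommend"]
(1, 1)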

I wrote that the words used as features should be meaningful, and preprocessing the text can dramatically improve the performance of the Bag of Words method.
As we have seen before, routine preprocessing tasks clean up irregularities: normalising the mixture of uppercase and lowercase letters, removing punctuation and unhelpful terms (you can refer to the links above for an explanation of all these Natural Language Processing terms).

First of all, we clean the tweets by lower-casing all the letters, removing punctuation and stop words, and finally by tokenising and stemming them.

corpusTweets = X.Tweet.tolist() # get a list of all tweets; it is then easier to apply the preprocessing to each item

# Convert to lower-case
corpusLowered = [s.lower() for s in corpusTweets]

corpusLowered[0:5]  # check
['i have to say, apple has by far the best customer care service i have ever received! @apple @appstore',
 'ios 7 is so fricking smooth & beautiful!! #thanxapple @apple',
 'love u @apple',
 'thank you @apple, loving my new iphone 5s!!!!!  #apple #iphone5s pic.twitter.com/xmhjcu4pcb',
 '.@apple has the best customer service. in and out with a new phone in under 10min!']

As you can see from the first five tweets (we will keep using them as a check), the capital letters are now gone; compare them with the ones in the table above.

Note the variable name corpusTweets.
One of the NLP concepts is that of a corpus. A corpus is a collection of documents.
For our example, the corpus is our tweets.

Now we remove the punctuation, using a Regular Expression (RE):

# Remove punctuation

import re
corpusNoPunct = [re.sub(r'([^\s\w_]|_)+', ' ', s.strip()) for s in corpusLowered]
corpusNoPunct[0:5]
['i have to say apple has by far the best customer care service i have ever received apple appstore',
'ios 7 is so fricking smooth beautiful thanxapple apple',
'love u apple',
'thank you apple loving my new iphone 5s apple iphone5s pic twitter com xmhjcu4pcb',
' apple has the best customer service in and out with a new phone in under 10min ']

And then the stop words, which are common words such as articles or conjunctions that do not add much value.
First we define which common words are to be removed:

import os
def readStopwords():
    '''
    Returns the stop words as a list of strings.

    !! Assumes that a file called "stopwords.txt"
    exists in the current folder
    '''
    filename = "stopwords.txt"
    path = os.path.join("", filename)
    with open(path, 'r') as file:
        return file.read().splitlines()  # splitlines removes the newline characters

stopWords = set(readStopwords())
"the" in stopWords  # quick test
True

Let’s remove other common words in this context (e.g. Apple or iPhone) that do not tell us anything about the sentiment:

stopWords.add("apple")
stopWords.add("appl")
stopWords.add("iphone")
stopWords.add("ipad")
stopWords.add("ipod")
stopWords.add("itunes")
stopWords.add("http")

print ("apple" in stopWords)
print ("google" in stopWords)
True
False

To remove a word from the corpus if that word is contained in our stop words set, we need first to tokenise the corpus (i.e., split it into words or tokens):

# tokenise
corpusTokens = [s.split() for s in corpusNoPunct]
corpusTokens[0:3] # just the first three tweets as example
[['i','have','to','say','apple','has','by','far','the','best',
'customer','care','service','i','have','ever','received','apple',
'appstore'],
['ios','7','is','so','fricking','smooth','beautiful','thanxapple',
'apple'],
['love', 'u', 'apple']]

Lastly, an important preprocessing step is called stemming.
This step is motivated by the desire to represent words with different endings
as the same word.
We probably do not need to draw a distinction between argue, argued, argues, and arguing.
They could all be represented by a common stem: argu.
The algorithmic process of performing this reduction is called stemming.
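We can check this quickly with the Porter stemmer from the NLTK library (the same stemmer we apply to the whole corpus below):

from nltk import PorterStemmer
porter = PorterStemmer()
[porter.stem(w) for w in ["argue", "argued", "argues", "arguing"]]
['argu', 'argu', 'argu', 'argu']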

We now apply the Porter stemmer from the NLTK library to our corpus, removing the stop words at the same time:

# clean stop words and stem the corpus
from nltk import PorterStemmer
porter = PorterStemmer()

corpus = []
for tweet in corpusTokens:
    cleanTokens = [token for token in tweet if token not in stopWords] # a list of tokens
    stemmedTokens = [porter.stem(token) for token in cleanTokens]
    cleanTweet = ' '.join(stemmedTokens)

    corpus.append(cleanTweet)

corpus[0:5]
['say far best custom care servic ever receiv appstor',
'7 frick smooth beauti thanxappl',
'love u',
'thank love new 5s iphone5 pic twitter com xmhjcu4pcb',
'best custom servic new phone 10min']

Now we can see that we have significantly fewer words, and we’re ready to extract the word frequencies to be used in our prediction problem.

Create a Document-Term matrix

In text mining, an important step is to create the document-term matrix (DTM) of the corpus we are interested in. A DTM is basically a matrix with the documents (in our case the tweets) on the rows and the single words on the columns, where each matrix element is the count of that word in that tweet. I.e., if the word great appears 3 times in tweet n, then the matrix will contain 3 in row n, in the column identifying the word great.

The scikit-learn package provides a function that generates this document-term matrix, where the rows correspond to documents and the columns correspond to words.
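To make this concrete, here is a toy example with two made-up documents (a minimal sketch; the real matrix for our tweets is built below):

from sklearn.feature_extraction.text import CountVectorizer

toyDocs = ["great phone great battery", "bad battery"]
toyCv = CountVectorizer()
print(toyCv.fit_transform(toyDocs).toarray())
print(toyCv.get_feature_names())  # get_feature_names_out() in newer scikit-learn versions
[[0 1 2 1]
 [1 1 0 0]]
['bad', 'battery', 'great', 'phone']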

Let’s go ahead and generate this matrix.
We use the function CountVectorizer. It has several parameters; we will use the default values except for lowercase (we don’t need it, we have already lowered our text) and we will limit the features to the 500 words with the highest counts.
Limiting the features is a normal step to avoid an overly complex model, and usually only the most frequent words help with the prediction.
The number of terms is an issue for two main reasons.
One is computational: more terms means more independent variables, which means it takes longer to build the model.
The other is that – in building models – the ratio of independent variables to observations affects how well the model will generalise.
As an alternative to limiting the number of features, one could also set a minimum or maximum document frequency for the words to keep (see the sklearn documentation).
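For example, a hypothetical alternative configuration (just a sketch, not the one we use here) could keep only the words that appear in at least 5 tweets and in at most half of them:

from sklearn.feature_extraction.text import CountVectorizer

# min_df: keep a word only if it appears in at least 5 documents
# max_df: drop a word if it appears in more than 50% of the documents
cvAlternative = CountVectorizer(lowercase=False, min_df=5, max_df=0.5)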

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(lowercase=False, max_features=500)
cv.fit(corpus)
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
        dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
        lowercase=False, max_df=1.0, max_features=500, min_df=1,
        ngram_range=(1, 1), preprocessor=None, stop_words=None,
        strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
        tokenizer=None, vocabulary=None)
'apple' in cv.vocabulary_  # a quick test
False
print(cv.get_feature_names()[0:20])
['09', '10', '18xc8dk', '1za', '20', '2013', '20th', '244tsuyoponzu',
 '3g', '4sq', '5c', '5s', '64', '7evenstarz', 'act', 'actual', 'ad', 
 'adambain', 'add', 'again']

Above are the first twenty features (words) in alphabetical order counted in the corpus.

Now we use the vectoriser to transform the corpus into a sparse matrix (called bagOfWords) where each element is the number of times a given feature (word) appears in a given tweet, and zero if it does not appear.

bagOfWords = cv.transform(corpus)

bagOfWords
<1181x500 sparse matrix of type '<class 'numpy.int64'>' with 5893 stored elements in Compressed Sparse Row format>

The matrix has 1181 rows (the number of tweets) and 500 columns (because we limited the features), but only 5893 stored non-zero elements. This is what we call sparse data: most of the entries in the matrix are zeros.
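We can quickly verify how sparse it is (a minimal check in the same session):

sparsity = 1 - bagOfWords.nnz / (bagOfWords.shape[0] * bagOfWords.shape[1])
print("Sparsity: {:.2%}".format(sparsity))
Sparsity: 99.00%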

We can look at what the most popular terms are:

sum_words = bagOfWords.toarray().sum(axis=0)

words_freq = [(word, sum_words[idx]) for word, idx in cv.vocabulary_.items()]

words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)

words_freq[:10]  # top 10
[('new', 113),
('ly', 112),
('com', 109),
('twitter', 108),
('ipodplayerpromo', 102),
('5s', 97),
('phone', 91),
('pic', 84),
('get', 75),
('5c', 63)]

Now we need one last step before training the model with a classifier: convert the sparse matrix into a data frame that we’ll be able to use for our predictive models.

df = pd.DataFrame(bagOfWords.toarray())

df.shape
(1181, 500)
df.info()
RangeIndex: 1181 entries, 0 to 1180
Columns: 500 entries, 0 to 499
dtypes: int64(500)
memory usage: 4.5 MB
df.head(1)

   0  1  2  3  4  5  ...  496  497  498  499
0  0  0  0  0  0  0  ...    0    0    0    0

1 rows × 500 columns

Pretty boring data frame but we just need it to train the classifier so we don’t bother to rename columns, etc.

We start by splitting the tweets into training and test sets, as usual.
Note that the target variable (which was a real number, the average of the tweet ratings) is rounded to the nearest integer, so it is now reduced to exactly five classes: -2, -1, 0, 1, 2.

from sklearn.model_selection import train_test_split

X.Avg = [int(round(a)) for a in X.Avg] # round the target into 5 discrete classes

# note: random.seed() would not affect train_test_split;
# we pass random_state instead, just for reproducibility
X_train, X_test, y_train, y_test = train_test_split(df, X.Avg, test_size=0.25, random_state=100)

X_test.shape
(296, 500)

We have 296 tweets in the test dataset.
Our data is now ready, and we can build our predictive model.

In the next section I explain a bit how the Naive Bayes classifier works, but feel free to skip it if you just want to use it.

Naive Bayes classifier

Naive Bayes classifiers are built on rules derived from the Bayes theorem, which is an equation describing the relationship of conditional probabilities of statistical quantities.

Let’s start by taking a look at Bayes’ equation when we are interested in finding the probability of a label L given a word W in a set of documents, which we can write as:

P(L | W) = \frac{P( W | L) \cdot P(L) }{P(W)}

  • The term P(L) is our original belief, i.e. the probability of a document having label L (a positive or negative sentiment) before looking at its words. It is known as the Prior.
  • The term P(L|W) is the probability of label L given a word W in the documents. This term is also known as the Posterior (Latin for “after”).
  • P(W|L) is the probability of a document containing the word W given a label L. It is known as the Likelihood.
  • P(W) is the probability that a given document has the word W.

You can think of the term Posterior as your updated rule or updated belief obtained by multiplying Prior and Likelihood.

But how do we find out P(W|L) and P(L)? This is exactly where bag of words will come in handy.

P(L) is basically concerned with this question : “How often does this label occur?”

For example, you want to know if a tweet is positive.
You have 100 documents or tweets in the training set and you have only two words in the corpus: “innovative” and “awesome”.
Out of these 100 documents, 60 documents are labeled as positive and the remaining 40 are labeled as negative.
So P(+) = 0.6 and P(−) = 0.4

P(W|L):
Further, out of the 60 positively labelled documents, 48 contain both the words “innovative” and “awesome”, while the remaining twelve tweets do not.
So, the probability P(W|+) = P(innovative AND awesome | +) = 48 / 60

Finally, only 4 tweets out of the 40 negative-labelled tweets have both the words awesome and innovative and the remaining thirty-six negative tweets do NOT have both words.
Therefore, the probability P(W|-) = 4 / 40

P(W):
It only remains to find out P(W), the probability that a tweet has both words.
Well, there are 52 tweets that have both words: 48 that are positive-labelled plus 4 that aren’t.

Our Bayes equation can be therefore written as:

P(+|W) = \frac{P(W|+) \cdot P(+) }{P(W)}

= \frac{48/60 \cdot 60/100 }{ 52/100 }  = 48/52 = 0.92

that tells us a (new) tweet with the words innovative and awesome has a 92% probability of having a positive sentiment.
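Here is the same arithmetic as a minimal sketch in Python (just the toy numbers from this example, not our tweet data):

prior = 60 / 100        # P(+): share of positive tweets
likelihood = 48 / 60    # P(W|+): positive tweets containing both words
evidence = 52 / 100     # P(W): all tweets containing both words
posterior = likelihood * prior / evidence
round(posterior, 2)
0.92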

This example considered only two words. Remember that you have to compute the likelihood probabilities for all the words. So, in a scenario where you have 1000 total words and each document contains 100 words on average, the total number of combinations will be 1000^100, which is an insanely large number!

Here is where the Naive Bayes classifier comes to the rescue. Its conditional independence assumption states that the value of a particular feature is independent of the value of any other feature, given the class variable, i.e. the conditional feature probabilities P(xi|cj) – not P(x1), P(x2) but P(x1|cj), P(x2|cj) – are treated as independent of each other.

For example, a fruit may be considered to be an apple if it is red, round and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible correlations between the color, roundness and diameter features.

Concretely, this assumption means that we can factorise the joint likelihood as follows:

P(x_{1}, ..., x_{n} | c) = P(x_{1} | c)\cdot P(x_{2} | c) \cdot ... \cdot P(x_{n} | c)

So, naturally, the X^n combinations get reduced to just X·n terms, which is exponentially fewer.
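In practice, implementations work with logarithms so that the product of many small probabilities becomes a sum. A minimal sketch with made-up numbers (not scikit-learn's internal code):

import math

# hypothetical per-word likelihoods P(x_i | c) for one class, and its prior P(c)
wordLikelihoods = [0.05, 0.002, 0.01]
prior = 0.6
# naive Bayes score for this class: log P(c) + sum of log P(x_i | c);
# the class with the highest score is the predicted label
score = math.log(prior) + sum(math.log(p) for p in wordLikelihoods)
round(score, 2)
-14.33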

Naive Bayes has two advantages:

  • Reduced number of parameters.
  • Linear time complexity as opposed to exponential time complexity.

Moreover the classifier needs to specify the hypothetical random process that generates the data. For example, in the Gaussian naive Bayes classifier, the assumption is that data from each label is drawn from a simple Gaussian distribution.

Another useful example is the multinomial naive Bayes, the one we will use, where the features are assumed to be generated from a simple multinomial distribution. The multinomial distribution describes the probability of observing counts among a number of categories; multinomial naive Bayes is therefore most appropriate for features that represent counts or count rates, and it is often used in text classification, where the features are word counts or frequencies within the documents to be classified.

Train and test the classifier

Now we train the sklearn MultinomialNB classifier (a Naive Bayes classifier for multinomially distributed features, such as word counts):

from sklearn.naive_bayes import MultinomialNB

classifier = MultinomialNB()

classifier.fit(X_train, y_train)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
predictions = classifier.predict(X_test)
predictions[0:100]
array([ 0,  0,  1,  0, -1,  0,  0, -1,  0,  0,  0,  1,  0,  0, -1, -1,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0, -1,  0, -1,  0, -1, -1, -1,  0,
        0, -1, -1,  0,  0,  0,  0, -1,  1,  1,  0,  0,  0, -1, -1,  1,  0,
        0,  1, -1,  0,  0, -1, -1,  0,  0, -1, -1,  0,  0,  0, -1, -1,  0,
       -2,  1,  0,  0, -1,  0,  0,  0,  0,  0, -1,  0,  0,  0,  0, -1,  1,
        0,  1,  0,  1,  0,  0,  0, -1,  1,  0,  0, -1, -1,  0, -1])

Metrics: accuracy and confusion matrix

We can check the quality of the predictions by using the scikit-learn metrics module, specifically the accuracy (we will see in a moment what it means):

from sklearn import metrics

# Model Accuracy, how often is the classifier correct?
print("Accuracy: {:.2}".format(metrics.accuracy_score(y_test, predictions)))
Accuracy: 0.64

The classifier was correct 64% of the time (and it had to predict the exact class: not only whether a tweet was negative, but also whether it was strongly or moderately negative).
A very useful tool is the confusion matrix, which displays the predictions and the actual values in a table:

mat = metrics.confusion_matrix(y_test, predictions)
mat
array([[  4,   5,   3,   0,   0],
       [  2,  42,  28,   5,   1],
       [  1,  26, 128,   9,   2],
       [  0,   8,  16,  14,   0],
       [  0,   1,   1,   0,   0]])

It’s clearer if we visualise it as a heat map:

import matplotlib.pyplot as plt

labels = ['strongly neg.', 'negative', 'neutral', 'positive', 'strongly pos.']
fig = plt.figure()
ax = fig.add_subplot(111)
cm = ax.matshow(mat)
# plot the title, use y to leave some space before the labels
plt.title("Confusion matrix - Tweets arranged by sentiment", y=1.2)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
             rotation_mode="anchor")

plt.xlabel("Predicted")
plt.ylabel("Actual")
# Loop over data dimensions and create text annotations.
for i in range(len(mat)):
    for j in range(len(mat)):
        text = ax.text(j, i, mat[i, j],
                       ha="center", va="center", color="w")
    # Create colorbar
fig.colorbar(cm);
(Figure: confusion matrix heat map; the correct predictions lie on the diagonal)

The numbers in the diagonal are all the times when the predicted sentiment for a tweet was the same as the actual sentiment.
Now we can define accuracy as the sum of all the values in the diagonal (these are the observations we predicted correctly) divided by the total number of observations in the table.
The best accuracy would be 1.0 when all values are on the diagonal (no errors!), whereas the worst is 0.0 (nothing correct!):

correctPredictions = sum(mat[i][i] for i in range(len(mat)))
correctPredictions
188
print("Accuracy: {:.2}".format(correctPredictions / len(y_test)))
Accuracy: 0.64

Which is the same value as above.

A simple baseline

Now, how good is this accuracy?
Let’s compare this to a simple baseline model that always predicts neutral (the most common class in the test dataset).

neutralTweets = sum(1 for sentiment in y_test if sentiment == 0)  # neutral tweets in Test dataset
neutralTweets
166
len(y_test) - neutralTweets
130

This tells us that in our test dataset we have 166 observations with neutral sentiment and 130 with positive or negative sentiment.
So the accuracy of a baseline model that always predicts neutral would be:

print("Accuracy baseline: {:.2}".format(neutralTweets / len(y_test)))
Accuracy baseline: 0.56

So our Naive Bayesian model does better than the simple baseline.

By using a bag-of-words approach and a Naive Bayes model, we can reasonably predict sentiment with a relatively small data set of tweets.

Predict the sentiment of a new tweet

The classifier can be applied to new tweets, of course, to predict their sentiment:

# for simplicity, it re-uses the vectoriser and the classifier without passing them
# as arguments. Industrialising it would mean to create a pipeline with
# vectoriser > classifier > label string
def predictSentiment(t):
    bow = cv.transform([t])
    prediction = classifier.predict(bow)
    if prediction == 0:
        return "Neutral"
    elif prediction > 0:
        return "Positive"
    else:
        return "Negative"
predictSentiment("I don't know what to think about apple!")
'Neutral'

Ok. We try with two new tweets and see what we get, one positive and one negative:

predictSentiment("I love apple, its products are always the best, really!")
'Positive'
predictSentiment("Apple lost its mojo, I will never buy again an iphone better an Android")
'Negative'

So far so good. One thing we can note from the confusion matrix – specifically the diagonal – is that the strongly positive or strongly negative sentiments are not easy to predict and in general you have better results with fewer classes.

Binary Classification

What we have seen so far is the more general case, with 5 classes as the target.
The case you see more often is the binary one – with only two classes – which has some special characteristics and metrics.
Let’s convert our target into a binary one: a tweet can be either negative or not negative (i.e., positive or neutral).

First of all, we need to transform our original dataset to reduce the sentiment classes to only two classes:

X.loc[X.Avg < 0, 'Avg'] = -1 # negative sentiment
X.loc[X.Avg >= 0, 'Avg'] = 1 # NON-negative sentiment

Then we need to re-split the data and re-fit the classifier:

X_train, X_test, y_train, y_test = train_test_split(df, X.Avg, test_size=0.25)

classifier.fit(X_train, y_train)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
predictionsTwo = classifier.predict(X_test)

predictionsTwo[0:100]
array([ 1,  1,  1, -1, -1,  1,  1,  1, -1,  1, -1,  1,  1,  1,  1, -1,  1,
        1,  1,  1,  1,  1,  1, -1,  1,  1, -1,  1,  1, -1,  1,  1, -1,  1,
       -1,  1,  1,  1, -1,  1,  1, -1,  1,  1, -1,  1, -1,  1,  1,  1, -1,
        1,  1,  1,  1,  1,  1,  1,  1,  1,  1,  1, -1,  1, -1,  1,  1,  1,
       -1,  1,  1,  1,  1,  1, -1,  1,  1,  1, -1,  1,  1,  1,  1, -1,  1,
        1,  1,  1,  1,  1,  1,  1,  1,  1,  1,  1, -1,  1,  1,  1])

As you can see, there are no classes 2, 0 or -2 any more; only -1 (negative) and 1 (non-negative).

# Model Accuracy, how often is the classifier correct?
print("Accuracy: {:.2}".format(metrics.accuracy_score(y_test, predictionsTwo)))
Accuracy: 0.79

Of course it is now better: we have fewer classes to predict, so fewer errors to make.
Let’s see what the confusion matrix for a binary classification looks like:

matBinary = metrics.confusion_matrix(y_test, predictionsTwo)
matBinary
array([[ 45,  40],
       [ 22, 189]])
labels = ['negative', 'NOT negative']
fig = plt.figure()
ax = fig.add_subplot(111)
cm = ax.matshow(matBinary)
# plot the title, use y to leave some space before the labels
plt.title("Confusion matrix - Tweets arranged by sentiment", y=1.2)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
             rotation_mode="anchor")

plt.xlabel("Predicted")
plt.ylabel("Actual")
# Loop over data dimensions and create text annotations.
for i in range(len(matBinary)):
    for j in range(len(matBinary)):
        text = ax.text(j, i, matBinary[i, j],
                       ha="center", va="center", color="w")
    # Create colorbar
fig.colorbar(cm);
(Figure: confusion matrix heat map, now with two classes)

In a two-class problem, we are often looking to discriminate observations with a specific outcome from normal observations: for example disease versus no disease, or spam versus no-spam.
One is the positive event and the other is the no-event, or negative event.

In our case, let’s say the negative event is the negative tweet and the positive event is the NON-negative tweet.

These are basic terms used in binary classification:

  • “true positive” for correctly predicted event values (in our scenario the non-negative tweets: positive or neutral).
  • “true negative” for correctly predicted no-event values (in our scenario the negative tweets).
  • “false positive” for incorrectly predicted event values. In Hypothesis Testing it is also known as a Type 1 error, the incorrect rejection of the Null Hypothesis.
  • “false negative” for incorrectly predicted no-event values. It is also known as a Type 2 error, the failure to reject the Null Hypothesis.
tn, fp, fn, tp = matBinary.ravel()

print("True Negatives: ",tn)
print("False Positives: ",fp)
print("False Negatives: ",fn)
print("True Positives: ",tp)
True Negatives: 45
False Positives: 40
False Negatives: 22
True Positives: 189

Accuracy can be re-formulated as the ratio between the correct predictions (true positives plus true negatives) and the total number of observations:

Accuracy = (tn+tp)/(tp+tn+fp+fn)
print("Accuracy: {:.2f}".format(Accuracy))
Accuracy: 0.79

Accuracy alone is not a reliable metric of the real performance of a classifier, because it yields misleading results if the data set is unbalanced (that is, when the numbers of observations in the different classes vary greatly). For example, a model that always predicts the majority class on a data set with a 95/5 class split reaches 95% accuracy while being useless on the minority class.

Then you may consider additional metrics like Precision, Recall, F score (combined metric):

Sensitivity or Recall

Recall is the ‘completeness’: the ability of the model to identify all relevant instances, also known as the True Positive Rate or Sensitivity.
Imagine a scenario where your focus is to have the fewest possible False Negatives: for example, if you are classifying emails and the positive class is the authentic (non-spam) one, you don’t want authentic messages to be wrongly classified as spam. Then Sensitivity is the metric to watch:

Sensitivity = tp/(tp+fn)
print("Sensitivity {:0.2f}".format(Sensitivity))
Sensitivity 0.90

Sensitivity is a real number between 0 and 1. A sensitivity of 1 means that ALL the positive cases (here, the non-negative tweets) have been correctly classified.

Specificity

Specificity (or True Negative Rate) is the equivalent measure for the negative class: the fraction of actual negative cases that are correctly identified.

# Specificity
Specificity = tn/(tn+fp)
print("Specificity {:0.2f}".format(Specificity))
Specificity 0.53

ROC (Receiver Operating Characteristic) curve

Until now, we have seen classification problems where we predict the target class directly.

Sometimes it can be more insightful or flexible to predict the probabilities for each class instead. On one side you get an idea of how confident the classifier is about each class; on the other side you can use the probabilities to calibrate the threshold that turns them into predicted classes.

For example, in a binary classifier the default is to use a threshold of 0.5, meaning that a probability below 0.5 is a negative outcome and a probability equal to or above 0.5 is a positive outcome.
But this threshold can be adjusted to tune the behaviour of the model for the specific problem, e.g. to reduce one or the other type of error, as we have seen above.
Think of a classifier that predicts whether an event is a nuclear attack or not: clearly you want as few false alarms as possible!
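For instance, this is how one could apply a stricter cut-off by hand (a minimal sketch on our binary model; the 0.7 threshold is an arbitrary example, not a recommendation):

import numpy as np

probNonNegative = classifier.predict_proba(X_test)[:, 1]  # P(non-negative) for each test tweet
# predict non-negative (1) only when the model is at least 70% confident, otherwise negative (-1)
customPredictions = np.where(probNonNegative >= 0.7, 1, -1)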

A diagnostic tool that helps in choosing the right threshold is the ROC curve.

This is the plot of the ‘True Positive Rate’ (Sensitivity) on the y-axis against the ‘False Positive Rate’ (1 minus Specificity) on the x-axis, at different classification thresholds between 0 and 1.

It captures all the thresholds simultaneously, and the area under the ROC curve measures how well the classifier can distinguish between the two groups.
A threshold of 1 corresponds to the axis origin (0,0), where nothing is classified as positive, while a threshold of 0 is at the top-right end of the curve (1,1), where everything is classified as positive.

  • high threshold: means high specificity and low sensitivity
  • Low threshold: means low specificity and high sensitivity

Put another way, it plots the false alarm rate versus the hit rate.

Let’s see an example using our binary classification above.
First, we need probabilities to create the ROC curve.

probs = classifier.predict_proba(X_test) # get the probabilities

preds = probs[:,1]  ## keep probabilities for the positive outcome only
fpr, tpr, threshold = metrics.roc_curve(y_test, preds)  # calculate roc
roc_auc = metrics.auc(fpr, tpr)  # calculate AUC

plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.plot([0, 1], [0, 1],'r--')  # plot random guessing

plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()

(Figure: ROC curve in blue; the red dashed line is the random-guessing baseline)

The ROC curve is a useful tool for a few reasons:

  • The curves of different models can be compared directly in general or for different thresholds.
  • The area under the curve (AUC) can be used as a summary of the model skill.

A random guessing classifier (the red line above) has an Area Under the Curve (often referred to as AUC) of 0.5, while the AUC of a perfect classifier is equal to 1.
In general an AUC above 0.8 is considered “good”.

Looking at the ROC curve you can choose a threshold that gives a desirable balance between the:

  • cost of failing to detect positive
  • cost of raising false alarms

Precision and F1 score metrics

Also called Positive Predictive Power, Precision measures how “exact” the model is, i.e. its ability to return only relevant instances. If your use case or problem statement involves minimising the False Positives, then Precision is the metric you need:

# Precision
Precision = tp/(tp+fp)
print("Precision or Positive Predictive Power: {:0.2f}".format(Precision))
Precision or Positive Predictive Power: 0.83

Similarly, you can calculate the Negative Predictive Power

# Negative Predictive Value
print("Negative predictive Power: {:0.2f}".format(tn / (tn+fn)))
Negative predictive Power: 0.67

The F1 score is the harmonic mean of the Precision & Sensitivity, and is used to indicate a balance between them. It ranges from 0 to 1; F1 Score reaches its best value at 1 (perfect precision & sensitivity) and worst at 0.

# F1 Score
f1 = (2 * Precision * Sensitivity) / (Precision + Sensitivity)
print("F1 Score {:0.2f}".format(f1))
F1 Score 0.86

What can we do with these insights? With a Naive Bayes classifier, one way to shift the balance between sensitivity and specificity is to adjust its class priors; here we try to increase the specificity:

classifierTuned = MultinomialNB(class_prior=[.4,.6]) # try to max specificity

classifierTuned.fit(X_train, y_train)
predictionsTuned = classifierTuned.predict(X_test)
matTuned = metrics.confusion_matrix(y_test, predictionsTuned)
matTuned
array([[ 53,  32],
       [ 36, 175]])
tn, fp, fn, tp = matTuned.ravel()
Accuracy = (tn+tp)/(tp+tn+fp+fn)
print("Accuracy: {:.2f}".format(Accuracy)) # it was 0.79

Sensitivity = tp/(tp+fn)
print("Sensitivity {:0.2f}".format(Sensitivity)) #it was 0.9

Specificity = tn/(tn+fp)
print("Specificity {:0.2f}".format(Specificity)) # it was 0.53
Accuracy: 0.77
Sensitivity 0.83
Specificity 0.62

We have improved the specificity considerably, at the cost of a smaller decrease in sensitivity and accuracy.

And the metrics for multiple classes?

In a 2×2, once you have picked one category as positive, the other is automatically negative. With 5 categories, you basically have 5 different sensitivities, depending on which of the five categories you pick as “positive”. You could still calculate their metrics by collapsing to a 2×2, i.e. Class1 versus not-Class1, then Class2 versus not-Class2, and so on, as we did above.

You can actually have sensitivity and specificity regardless of the number of classes. The only difference is that you will get one specificity, sensitivity, accuracy and F1-score for each of the classes. If you want a single number to report, you can report the average of these values.

We have to do these calculations for each class separately and then average these measures, to get the average precision and the average recall. I leave this as an exercise for you.
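As a hint, scikit-learn can compute the per-class and averaged values directly. A sketch, assuming y_test and predictions still hold the 5-class values from the first model (re-run that split and fit if they have been overwritten by the binary example):

from sklearn.metrics import classification_report

# one row per class, plus macro and weighted averages of precision, recall and F1
print(classification_report(y_test, predictions))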

Note: this post is part of a series about Machine Learning with Python.
