Welcome to week two of this specialization. You will learn about Naive Bayes. Concretely, you will be using Naive Bayes for sentiment analysis on tweets: given a tweet, you will decide if it has a positive sentiment or a negative one. Specifically, you will train a Naive Bayes model on a sentiment analysis task, test your model, compute ratios of positive to negative word counts, do some error analysis, and predict the sentiment of your own tweet.
You may already be familiar with Naive Bayes and its justification in terms of conditional probabilities and independence.
Before submitting your assignment to the AutoGrader, please make sure you have not added any extra print statement(s) to the assignment. If you have, you will get something like a Grader not found (or similarly unexpected) error upon submitting your assignment. Before asking for help or debugging the errors in your assignment, check for this first. If this is the case and you don't remember the changes you have made, you can get a fresh copy of the assignment by following these instructions.
Let's get started!
Run the cell below to import some packages. You may want to browse the documentation of unfamiliar libraries and functions.
from utils import process_tweet, lookup
import pdb
from nltk.corpus import stopwords, twitter_samples
import numpy as np
import pandas as pd
import nltk
import string
from nltk.tokenize import TweetTokenizer
from os import getcwd
import w2_unittest
nltk.download('twitter_samples')
nltk.download('stopwords')
[nltk_data] Downloading package twitter_samples to
[nltk_data] /home/jovyan/nltk_data...
[nltk_data] Package twitter_samples is already up-to-date!
[nltk_data] Downloading package stopwords to /home/jovyan/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
True
If you are running this notebook on your local computer, don't forget to download the twitter samples and stopwords from nltk:
nltk.download('stopwords')
nltk.download('twitter_samples')
filePath = f"{getcwd()}/../tmp2/"
nltk.data.path.append(filePath)
# get the sets of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
# split the data into two pieces, one for training and one for testing (validation set)
test_pos = all_positive_tweets[4000:]
train_pos = all_positive_tweets[:4000]
test_neg = all_negative_tweets[4000:]
train_neg = all_negative_tweets[:4000]
train_x = train_pos + train_neg
test_x = test_pos + test_neg
# avoid assumptions about the length of all_positive_tweets
train_y = np.append(np.ones(len(train_pos)), np.zeros(len(train_neg)))
test_y = np.append(np.ones(len(test_pos)), np.zeros(len(test_neg)))
For any machine learning project, once you've gathered the data, the first step is to process it to make useful inputs to your model. We have given you the function process_tweet that does this for you.
custom_tweet = "RT @Twitter @chapagain Hello There! Have a great day. :) #good #morning http://chapagain.com.np"
# print cleaned tweet
print(process_tweet(custom_tweet))
['hello', 'great', 'day', ':)', 'good', 'morn']
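For reference, here is a minimal sketch of what a tweet-cleaning function like process_tweet typically does. The provided version in utils may differ in its details; the regular expressions, TweetTokenizer settings, and PorterStemmer used below are assumptions for illustration, not the graded implementation.
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

def process_tweet_sketch(tweet):
    """Clean, tokenize, remove stopwords and punctuation, and stem a tweet."""
    # strip retweet marks, URLs, and the '#' sign (the hashtag word itself is kept)
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://[^\s]+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    # tokenize: lowercase, strip @handles, shorten repeated characters
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)
    stemmer = PorterStemmer()
    stop_words = set(stopwords.words('english'))
    # drop English stopwords and punctuation tokens, then stem whatever is left
    return [stemmer.stem(tok) for tok in tokenizer.tokenize(tweet)
            if tok not in stop_words and tok not in string.punctuation]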
To help you train your Naive Bayes model, you will need to compute a dictionary where the keys are (word, label) tuples and the values are the corresponding frequencies. Note that the labels we'll use here are 1 for positive and 0 for negative.
You will also implement a lookup helper function that takes in the freqs dictionary, a word, and a label (1 or 0) and returns the number of times that (word, label) tuple appears in the collection of tweets.
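A plausible minimal sketch of such a helper (an assumption for illustration, not necessarily the implementation provided in utils) simply reads the (word, label) pair out of the dictionary, defaulting to 0:
def lookup_sketch(freqs, word, label):
    """Return how many times the (word, label) pair appears in freqs, or 0 if absent."""
    return freqs.get((word, label), 0)

print(lookup_sketch({('happi', 1): 2, ('sad', 0): 1}, 'happi', 1))  # 2
print(lookup_sketch({('happi', 1): 2, ('sad', 0): 1}, 'happi', 0))  # 0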
For example, given the list of tweets ["i am rather excited", "you are rather happy"] and the label 1, the count_tweets function you will create below returns a dictionary that contains the following key-value pairs:
{ ('rather', 1): 2, ('happi', 1): 1, ('excit', 1): 1 }
Create a function count_tweets that takes a list of tweets as input, cleans all of them, and returns a dictionary.
# UNQ_C1 GRADED FUNCTION: count_tweets
def count_tweets(result, tweets, ys):
'''
Input:
result: a dictionary that will be used to map each pair to its frequency
tweets: a list of tweets
ys: a list corresponding to the sentiment of each tweet (either 0 or 1)
Output:
result: a dictionary mapping each pair to its frequency
'''
### START CODE HERE ###
for y, tweet in zip(ys, tweets):
for word in process_tweet(tweet):
# define the key, which is the word and label tuple
pair = word, y
# if the key exists in the dictionary, increment the count
if pair in result:
result[pair] += 1
# else, if the key is new, add it to the dictionary and set the count to 1
else:
result[pair] = 1
### END CODE HERE ###
return result
# Testing your function
result = {}
tweets = ['i am happy', 'i am tricked', 'i am sad', 'i am tired', 'i am tired']
ys = [1, 0, 0, 0, 0]
count_tweets(result, tweets, ys)
{('happi', 1): 1, ('trick', 0): 1, ('sad', 0): 1, ('tire', 0): 2}
Expected Output: {(‘happi’, 1): 1, (‘trick’, 0): 1, (‘sad’, 0): 1, (‘tire’, 0): 2}
# Test your function
w2_unittest.test_count_tweets(count_tweets)
All tests passed
Naive Bayes is an algorithm that can be used for sentiment analysis. It is quick to train and also quick to make predictions. The first step in training the classifier is to compute the prior probability of each class, i.e. the probability that a document (tweet) is positive, $P(D_{pos})$, or negative, $P(D_{neg})$:
$$P(D_{pos}) = \frac{D_{pos}}{D}\tag{1}$$
$$P(D_{neg}) = \frac{D_{neg}}{D}\tag{2}$$
Where $D$ is the total number of documents, or tweets in this case, $D_{pos}$ is the total number of positive tweets and $D_{neg}$ is the total number of negative tweets.
The prior probability represents the underlying probability in the target population that a tweet is positive versus negative. In other words, if we had no specific information and blindly picked a tweet out of the population set, what is the probability that it will be positive versus that it will be negative? That is the “prior”.
The prior is the ratio of the probabilities $\frac{P(D_{pos})}{P(D_{neg})}$. We can take the log of the prior to rescale it, and we’ll call this the logprior
$$\text{logprior} = \log \left( \frac{P(D_{pos})}{P(D_{neg})} \right) = \log \left( \frac{D_{pos}}{D_{neg}} \right)$$
Note that $\log(\frac{A}{B})$ is the same as $\log(A) - \log(B)$. So the logprior can also be calculated as the difference between two logs:
$$\text{logprior} = \log (P(D_{pos})) - \log (P(D_{neg})) = \log (D_{pos}) - \log (D_{neg})\tag{3}$$
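As a quick numeric check (using the split sizes from this assignment, plus one hypothetical imbalanced split for contrast):
import numpy as np

# balanced split used in this assignment: the logprior is exactly zero
D_pos, D_neg = 4000, 4000
print(np.log(D_pos) - np.log(D_neg))   # 0.0

# a hypothetical imbalanced split would give a non-zero logprior
D_pos, D_neg = 6000, 2000
print(np.log(D_pos) - np.log(D_neg))   # log(3), roughly 1.0986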
To compute the positive probability and the negative probability for a specific word in the vocabulary, we'll use the following inputs:
- $freq_{pos}$ and $freq_{neg}$: the number of times that word appears in positive and in negative tweets, respectively.
- $N_{pos}$ and $N_{neg}$: the total number of positive and negative words across all tweets (counted with repetition).
- $V$: the number of unique words in the entire vocabulary, across both classes.
We'll use these to compute the positive and negative probability for a specific word using this formula:
$$ P(W_{pos}) = \frac{freq_{pos} + 1}{N_{pos} + V}\tag{4} $$ $$ P(W_{neg}) = \frac{freq_{neg} + 1}{N_{neg} + V}\tag{5} $$
Notice that we add the "+1" in the numerator for additive smoothing, which prevents a zero probability for words that never appear in one of the classes. This wiki article explains more about additive smoothing.
To compute the loglikelihood of that very same word, we can implement the following equations:
$$\text{loglikelihood} = \log \left(\frac{P(W_{pos})}{P(W_{neg})} \right)\tag{6}$$
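To make equations 4-6 concrete, here is a toy calculation with made-up counts (the numbers below are illustrative assumptions, not values from the actual freqs dictionary):
import numpy as np

# hypothetical counts for a single word
freq_pos, freq_neg = 40, 2     # occurrences of the word in positive / negative tweets
N_pos, N_neg = 3000, 3200      # total word counts (with repetition) in each class
V = 9000                       # number of unique words in the vocabulary

# equations (4) and (5): Laplace-smoothed conditional probabilities
p_w_pos = (freq_pos + 1) / (N_pos + V)
p_w_neg = (freq_neg + 1) / (N_neg + V)

# equation (6): the word's log likelihood; it comes out positive, so this word leans positive
print(np.log(p_w_pos) - np.log(p_w_neg))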
Given your count_tweets function, you can compute a dictionary called freqs that contains all the frequencies. In this freqs dictionary, the key is the tuple (word, label) and the value is the number of times that pair appears. We will use this dictionary in several parts of this assignment.
# Build the freqs dictionary for later uses
freqs = count_tweets({}, train_x, train_y)
Given a freqs dictionary, train_x (a list of tweets), and train_y (a list of labels for each tweet), implement a Naive Bayes classifier:
- Use the freqs dictionary to get your vocabulary size $V$, the number of unique words (you can use the set function).
- Using the freqs dictionary, compute the positive and negative frequency of each word, $freq_{pos}$ and $freq_{neg}$.
- Using the freqs dictionary, also compute the total number of positive words and the total number of negative words, $N_{pos}$ and $N_{neg}$.
- Using the train_y input list of labels, calculate the number of documents (tweets) $D$, as well as the number of positive documents (tweets) $D_{pos}$ and the number of negative documents (tweets) $D_{neg}$, and from these the logprior (equation 3).
- Finally, for each word in the vocabulary, use the lookup function to get the positive frequency, $freq_{pos}$, and the negative frequency, $freq_{neg}$, for that specific word, and compute its smoothed probabilities and log likelihood: $$ P(W_{pos}) = \frac{freq_{pos} + 1}{N_{pos} + V}\tag{4} $$ $$ P(W_{neg}) = \frac{freq_{neg} + 1}{N_{neg} + V}\tag{5} $$
Note: We’ll use a dictionary to store the log likelihoods for each word. The key is the word, the value is the log likelihood of that word.
# UNQ_C2 GRADED FUNCTION: train_naive_bayes
def train_naive_bayes(freqs, train_x, train_y):
'''
Input:
freqs: dictionary from (word, label) to how often the word appears
train_x: a list of tweets
train_y: a list of labels corresponding to the tweets (0,1)
Output:
logprior: the log prior. (equation 3 above)
loglikelihood: the log likelihood of your Naive Bayes equation. (equation 6 above)
'''
loglikelihood = {}
logprior = 0
### START CODE HERE ###
# calculate V, the number of unique words in the vocabulary
vocab = list(set([pair[0] for pair in freqs.keys()]))
V = len(vocab)
# calculate N_pos and N_neg, the total counts of words in positive and negative tweets
N_pos = N_neg = 0
for pair in freqs.keys():
# if the label is positive (greater than zero)
if pair[1] > 0:
# Increment the number of positive words by the count for this (word, label) pair
N_pos += freqs[pair]
# else, the label is negative
else:
# increment the number of negative words by the count for this (word,label) pair
N_neg += freqs[pair]
# Calculate D, the number of documents
D = len(train_y)
# Calculate D_pos, the number of positive documents
D_pos = len(train_y[train_y == 1.0])
# Calculate D_neg, the number of negative documents
D_neg = len(train_y[train_y == 0.0])
# Calculate logprior
logprior = np.log(D_pos) - np.log(D_neg)
# For each word in the vocabulary...
for word in vocab:
# get the positive and negative frequency of the word
freq_pos = lookup(freqs, word, 1)
freq_neg = lookup(freqs, word, 0)
# calculate the probability that each word is positive, and negative
p_w_pos = ((freq_pos + 1) / (N_pos + V))
p_w_neg = ((freq_neg + 1) / (N_neg + V))
# calculate the log likelihood of the word
loglikelihood[word] = np.log(p_w_pos) - np.log(p_w_neg)
### END CODE HERE ###
return logprior, loglikelihood
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
logprior, loglikelihood = train_naive_bayes(freqs, train_x, train_y)
print(logprior)
print(len(loglikelihood))
0.0
9165
Expected Output:
0.0
9165
# Test your function
w2_unittest.test_train_naive_bayes(train_naive_bayes, freqs, train_x, train_y)
All tests passed
Now that we have the logprior and loglikelihood, we can test the Naive Bayes function by making predictions on some tweets!
Instructions: Implement the naive_bayes_predict function to make predictions on tweets. The function takes in the tweet, the logprior, and the loglikelihood dictionary. It processes the tweet, sums up the log likelihood of each word in the tweet (when the word is found in the dictionary), and adds the logprior to that sum:
$$ p = logprior + \sum_i^N (loglikelihood_i)$$
Note that we calculate the prior from the training data, and that the training data is evenly split between positive and negative labels (4000 positive and 4000 negative tweets). This means that the ratio of positive to negative tweets is 1, and the logprior is 0.
The value of 0.0 means that when we add the logprior to the log likelihood, we’re just adding zero to the log likelihood. However, please remember to include the logprior, because whenever the data is not perfectly balanced, the logprior will be a non-zero value.
# UNQ_C4 GRADED FUNCTION: naive_bayes_predict
def naive_bayes_predict(tweet, logprior, loglikelihood):
'''
Input:
tweet: a string
logprior: a number
loglikelihood: a dictionary of words mapping to numbers
Output:
p: the sum of all the loglikelihoods of each word in the tweet (if found in the dictionary) + logprior (a number)
'''
### START CODE HERE ###
# process the tweet to get a list of words
word_l = process_tweet(tweet)
# initialize probability to zero
p = 0
# add the logprior
p += logprior
for word in word_l:
# check if the word exists in the loglikelihood dictionary
if word in loglikelihood:
# add the log likelihood of that word to the probability
p += loglikelihood[word]
### END CODE HERE ###
return p
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# Experiment with your own tweet.
my_tweet = 'She smiled.'
p = naive_bayes_predict(my_tweet, logprior, loglikelihood)
print('The expected output is', p)
The expected output is 1.5577981920239683
Expected Output:
The expected output is 1.5577981920239683
# Test your function
w2_unittest.test_naive_bayes_predict(naive_bayes_predict)
All tests passed
Instructions: Implement test_naive_bayes to check the accuracy of your predictions. The function takes in your test_x, test_y, logprior, and loglikelihood, and returns the accuracy of your model. Use the naive_bayes_predict function to make a prediction for each tweet in test_x.
# UNQ_C6 GRADED FUNCTION: test_naive_bayes
def test_naive_bayes(test_x, test_y, logprior, loglikelihood, naive_bayes_predict=naive_bayes_predict):
"""
Input:
test_x: A list of tweets
test_y: the corresponding labels for the list of tweets
logprior: the logprior
loglikelihood: a dictionary with the loglikelihoods for each word
Output:
accuracy: (# of tweets classified correctly)/(total # of tweets)
"""
accuracy = 0 # return this properly
### START CODE HERE ###
y_hats = []
for tweet in test_x:
# if the prediction is > 0
if naive_bayes_predict(tweet, logprior, loglikelihood) > 0:
# the predicted class is 1
y_hat_i = 1
else:
# otherwise the predicted class is 0
y_hat_i = 0
# append the predicted class to the list y_hats
y_hats.append(y_hat_i)
# error is the average of the absolute values of the differences between y_hats and test_y
error = np.average(np.abs(y_hats - test_y))
# Accuracy is 1 minus the error
accuracy = 1 - error
### END CODE HERE ###
return accuracy
print("Naive Bayes accuracy = %0.4f" %
(test_naive_bayes(test_x, test_y, logprior, loglikelihood)))
Naive Bayes accuracy = 0.9955
Expected Accuracy:
Naive Bayes accuracy = 0.9955
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# Run this cell to test your function
for tweet in ['I am happy', 'I am bad', 'this movie should have been great.', 'great', 'great great', 'great great great', 'great great great great']:
p = naive_bayes_predict(tweet, logprior, loglikelihood)
print(f'{tweet} -> {p:.2f}')
I am happy -> 2.14
I am bad -> -1.31
this movie should have been great. -> 2.12
great -> 2.13
great great -> 4.26
great great great -> 6.39
great great great great -> 8.52
Expected Output:
I am happy -> 2.14
I am bad -> -1.31
this movie should have been great. -> 2.12
great -> 2.13
great great -> 4.26
great great great -> 6.39
great great great great -> 8.52
# Feel free to check the sentiment of your own tweet below
my_tweet = 'you are bad :('
naive_bayes_predict(my_tweet, logprior, loglikelihood)
-8.83735173882565
# Test your function
w2_unittest.unittest_test_naive_bayes(test_naive_bayes, test_x, test_y)
All tests passed
Given the freqs dictionary and a particular word, use lookup(freqs, word, 1) to get the positive count of the word. Similarly, use the lookup function to get the negative count of that word. Then calculate the ratio of positive to negative counts for the word:
$$ ratio = \frac{\text{pos\_words} + 1}{\text{neg\_words} + 1} $$
Where pos_words and neg_words correspond to the frequencies of the word in their respective classes. For example, here are some counts from the training data (a quick check of the formula follows the table):
| Words | Positive word count | Negative word count |
| ----- | ------------------- | ------------------- |
| glad | 41 | 2 |
| arriv | 57 | 4 |
| :( | 1 | 3663 |
| :-( | 0 | 378 |
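Using the counts for 'glad' from the table above, the formula works out as follows (assuming those counts; the exact values in your own freqs dictionary may differ slightly):
pos_words, neg_words = 41, 2              # counts for 'glad' from the table
ratio = (pos_words + 1) / (neg_words + 1)
print(ratio)                              # 14.0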
# UNQ_C8 GRADED FUNCTION: get_ratio
def get_ratio(freqs, word):
'''
Input:
freqs: dictionary containing the words
word: string to look up
Output: a dictionary with keys 'positive', 'negative', and 'ratio'.
Example: {'positive': 10, 'negative': 20, 'ratio': 0.5}
'''
pos_neg_ratio = {'positive': 0, 'negative': 0, 'ratio': 0.0}
### START CODE HERE ###
# use lookup() to find positive counts for the word (denoted by the integer 1)
pos_neg_ratio['positive'] = lookup(freqs, word, 1)
# use lookup() to find negative counts for the word (denoted by integer 0)
pos_neg_ratio['negative'] = lookup(freqs, word, 0)
# calculate the ratio of positive to negative counts for the word
pos_neg_ratio['ratio'] = (pos_neg_ratio['positive'] + 1) / (pos_neg_ratio['negative'] + 1)
### END CODE HERE ###
return pos_neg_ratio
get_ratio(freqs, 'happi')
{'positive': 162, 'negative': 18, 'ratio': 8.578947368421053}
# Test your function
w2_unittest.test_get_ratio(get_ratio, freqs)
All tests passed
To filter words by their ratio of positive to negative counts, implement get_words_by_threshold(freqs, label, threshold):
- Use the get_ratio function to get a dictionary containing the positive count, negative count, and the ratio of positive to negative counts.
- If the label is 1, keep the words whose ratio is greater than or equal to the threshold; if the label is 0, keep the words whose ratio is less than or equal to the threshold.
- Append the get_ratio dictionary inside another dictionary, where the key is the word, and the value is the dictionary pos_neg_ratio that is returned by the get_ratio function.
An example key-value pair would have this structure:
{'happi':
    {'positive': 10, 'negative': 20, 'ratio': 0.524}
}
# UNQ_C9 GRADED FUNCTION: get_words_by_threshold
def get_words_by_threshold(freqs, label, threshold, get_ratio=get_ratio):
'''
Input:
freqs: dictionary of words
label: 1 for positive, 0 for negative
threshold: ratio that will be used as the cutoff for including a word in the returned dictionary
Output:
word_list: dictionary containing the word and information on its positive count, negative count, and ratio of positive to negative counts.
example of a key value pair:
{'happi':
{'positive': 10, 'negative': 20, 'ratio': 0.5}
}
'''
word_list = {}
### START CODE HERE ###
for key in freqs.keys():
word, _ = key
# get the positive/negative ratio for a word
pos_neg_ratio = get_ratio(freqs, word)
# if the label is 1 and the ratio is greater than or equal to the threshold...
if label == 1 and pos_neg_ratio['ratio'] >= threshold:
# Add the pos_neg_ratio to the dictionary
word_list[word] = pos_neg_ratio
# If the label is 0 and the pos_neg_ratio is less than or equal to the threshold...
elif label == 0 and pos_neg_ratio['ratio'] <= threshold:
# Add the pos_neg_ratio to the dictionary
word_list[word] = pos_neg_ratio
# otherwise, do not include this word in the list (do nothing)
### END CODE HERE ###
return word_list
# Test your function: find negative words at or below a threshold
get_words_by_threshold(freqs, label=0, threshold=0.05)
{':(': {'positive': 1, 'negative': 3675, 'ratio': 0.000544069640914037},
':-(': {'positive': 0, 'negative': 386, 'ratio': 0.002583979328165375},
'zayniscomingbackonjuli': {'positive': 0, 'negative': 19, 'ratio': 0.05},
'26': {'positive': 0, 'negative': 20, 'ratio': 0.047619047619047616},
'>:(': {'positive': 0, 'negative': 43, 'ratio': 0.022727272727272728},
'lost': {'positive': 0, 'negative': 19, 'ratio': 0.05},
'♛': {'positive': 0, 'negative': 210, 'ratio': 0.004739336492890996},
'》': {'positive': 0, 'negative': 210, 'ratio': 0.004739336492890996},
'beli̇ev': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},
'wi̇ll': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},
'justi̇n': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},
'see': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},
'me': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776}}
# Test your function; find positive words at or above a threshold
get_words_by_threshold(freqs, label=1, threshold=10)
{'followfriday': {'positive': 23, 'negative': 0, 'ratio': 24.0},
'commun': {'positive': 27, 'negative': 1, 'ratio': 14.0},
':)': {'positive': 2960, 'negative': 2, 'ratio': 987.0},
'flipkartfashionfriday': {'positive': 16, 'negative': 0, 'ratio': 17.0},
':D': {'positive': 523, 'negative': 0, 'ratio': 524.0},
':p': {'positive': 104, 'negative': 0, 'ratio': 105.0},
'influenc': {'positive': 16, 'negative': 0, 'ratio': 17.0},
':-)': {'positive': 552, 'negative': 0, 'ratio': 553.0},
"here'": {'positive': 20, 'negative': 0, 'ratio': 21.0},
'youth': {'positive': 14, 'negative': 0, 'ratio': 15.0},
'bam': {'positive': 44, 'negative': 0, 'ratio': 45.0},
'warsaw': {'positive': 44, 'negative': 0, 'ratio': 45.0},
'shout': {'positive': 11, 'negative': 0, 'ratio': 12.0},
';)': {'positive': 22, 'negative': 0, 'ratio': 23.0},
'stat': {'positive': 51, 'negative': 0, 'ratio': 52.0},
'arriv': {'positive': 57, 'negative': 4, 'ratio': 11.6},
'glad': {'positive': 41, 'negative': 2, 'ratio': 14.0},
'blog': {'positive': 27, 'negative': 0, 'ratio': 28.0},
'fav': {'positive': 11, 'negative': 0, 'ratio': 12.0},
'fantast': {'positive': 9, 'negative': 0, 'ratio': 10.0},
'fback': {'positive': 26, 'negative': 0, 'ratio': 27.0},
'pleasur': {'positive': 10, 'negative': 0, 'ratio': 11.0},
'←': {'positive': 9, 'negative': 0, 'ratio': 10.0},
'aqui': {'positive': 9, 'negative': 0, 'ratio': 10.0}}
Notice the difference between the positive and negative ratios. Emojis like :( and words like 'me' tend to have a negative connotation. Other words like glad, community, and arrives tend to be found in the positive tweets.
# Test your function
w2_unittest.test_get_words_by_threshold(get_words_by_threshold, freqs)
All tests passed
In this part you will see some tweets that your model misclassified. Why do you think the misclassifications happened? Were there any assumptions made by your Naive Bayes model?
# Some error analysis done for you
print('Truth Predicted Tweet')
for x, y in zip(test_x, test_y):
y_hat = naive_bayes_predict(x, logprior, loglikelihood)
if y != (np.sign(y_hat) > 0):
print('%d\t%0.2f\t%s' % (y, np.sign(y_hat) > 0, ' '.join(
process_tweet(x)).encode('ascii', 'ignore')))
Truth Predicted Tweet
1 0.00 b'truli later move know queen bee upward bound movingonup'
1 0.00 b'new report talk burn calori cold work harder warm feel better weather :p'
1 0.00 b'harri niall 94 harri born ik stupid wanna chang :D'
1 0.00 b'park get sunlight'
1 0.00 b'uff itna miss karhi thi ap :p'
0 1.00 b'hello info possibl interest jonatha close join beti :( great'
0 1.00 b'u prob fun david'
0 1.00 b'pat jay'
0 1.00 b'sr financi analyst expedia inc bellevu wa financ expediajob job job hire'
In this part you can predict the sentiment of your own tweet.
# Test with your own tweet - feel free to modify `my_tweet`
my_tweet = 'I am happy because I am learning :)'
p = naive_bayes_predict(my_tweet, logprior, loglikelihood)
print(p)
9.571143871339594
Congratulations on completing this assignment. See you next week!