Naive Bayes Classifier From Scratch
Naive Bayes is a simple classifier known for doing well even when only a small number of observations is available. In this tutorial we will build a Gaussian naive Bayes classifier from scratch and use it to predict the class of a previously unseen data point. This tutorial is based on the example on Wikipedia's naive Bayes classifier page; I have implemented it in Python and tweaked some notation to improve the explanation.
Preliminaries
import pandas as pd
import numpy as np
Create Data
Our dataset contains data on eight individuals. We will use it to construct a classifier that takes in the height, weight, and foot size of an individual and outputs a prediction for their gender.
# Create an empty dataframe
data = pd.DataFrame()
# Create our target variable
data['Gender'] = ['male','male','male','male','female','female','female','female']
# Create our feature variables
data['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75]
data['Weight'] = [180,190,170,165,100,150,130,150]
data['Foot_Size'] = [12,11,12,10,6,8,7,9]
# View the data
data
|   | Gender | Height | Weight | Foot_Size |
|---|--------|--------|--------|-----------|
| 0 | male   | 6.00   | 180    | 12        |
| 1 | male   | 5.92   | 190    | 11        |
| 2 | male   | 5.58   | 170    | 12        |
| 3 | male   | 5.92   | 165    | 10        |
| 4 | female | 5.00   | 100    | 6         |
| 5 | female | 5.50   | 150    | 8         |
| 6 | female | 5.42   | 130    | 7         |
| 7 | female | 5.75   | 150    | 9         |
The dataset above is used to construct our classifier. Below we will create a new person for whom we know their feature values but not their gender. Our goal is to predict their gender.
# Create an empty dataframe
person = pd.DataFrame()
# Create some feature values for this single row
person['Height'] = [6]
person['Weight'] = [130]
person['Foot_Size'] = [8]
# View the data
person
|   | Height | Weight | Foot_Size |
|---|--------|--------|-----------|
| 0 | 6      | 130    | 8         |
Calculate Priors
Priors can be either constants or probability distributions. In our example the prior for each class is simply the probability of being that gender, i.e. the share of males and the share of females in the training data. Calculating this is simple:
# Number of males
n_male = data['Gender'][data['Gender'] == 'male'].count()
# Number of females
n_female = data['Gender'][data['Gender'] == 'female'].count()
# Total rows
total_ppl = data['Gender'].count()
# Number of males divided by the total rows
P_male = n_male/total_ppl
# Number of females divided by the total rows
P_female = n_female/total_ppl
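With four males and four females among the eight people, both priors work out to the same value: P_male = P_female = 4/8 = 0.5.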
Calculate Likelihood
In Gaussian naive Bayes we assume that, within each class, each feature follows a normal distribution. This means that for each class (e.g. female) and feature (e.g. height) combination we need to estimate the mean and variance from the data.
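Concretely, the likelihood of observing a feature value x in class y is the normal density evaluated at x, using that class's mean and variance (this is the formula the p_x_given_y function further below implements):

$$ p(x \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^{2}}}\,\exp\!\left(-\frac{(x-\mu_y)^{2}}{2\sigma_y^{2}}\right) $$

where \(\mu_y\) and \(\sigma_y^{2}\) are the mean and variance of that feature among training observations of class y. Pandas makes computing these per-class means and variances easy: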
# Group the data by gender and calculate the means of each feature
data_means = data.groupby('Gender').mean()
# View the values
data_means
| Gender | Height | Weight | Foot_Size |
|--------|--------|--------|-----------|
| female | 5.4175 | 132.50 | 7.50      |
| male   | 5.8550 | 176.25 | 11.25     |
# Group the data by gender and calculate the variance of each feature
data_variance = data.groupby('Gender').var()
# View the values
data_variance
| Gender | Height   | Weight     | Foot_Size |
|--------|----------|------------|-----------|
| female | 0.097225 | 558.333333 | 1.666667  |
| male   | 0.035033 | 122.916667 | 0.916667  |
Now we can create all the variables we need. The code below might look complex but all we are doing is creating a variable out of each cell in both of the tables above.
# Means for male
male_height_mean = data_means['Height'][data_means.index == 'male'].values[0]
male_weight_mean = data_means['Weight'][data_means.index == 'male'].values[0]
male_footsize_mean = data_means['Foot_Size'][data_means.index == 'male'].values[0]
# Variance for male
male_height_variance = data_variance['Height'][data_variance.index == 'male'].values[0]
male_weight_variance = data_variance['Weight'][data_variance.index == 'male'].values[0]
male_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'male'].values[0]
# Means for female
female_height_mean = data_means['Height'][data_means.index == 'female'].values[0]
female_weight_mean = data_means['Weight'][data_means.index == 'female'].values[0]
female_footsize_mean = data_means['Foot_Size'][data_means.index == 'female'].values[0]
# Variance for female
female_height_variance = data_variance['Height'][data_variance.index == 'female'].values[0]
female_weight_variance = data_variance['Weight'][data_variance.index == 'female'].values[0]
female_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'female'].values[0]
# Create a function that calculates p(x | y), the likelihood of feature value x given class y
def p_x_given_y(x, mean_y, variance_y):
    # Input the arguments into the normal probability density function
    p = 1/(np.sqrt(2*np.pi*variance_y)) * np.exp((-(x-mean_y)**2)/(2*variance_y))
    # Return the density
    return p
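As a quick sanity check (this snippet is not in the original recipe, but uses only the values computed above), the density of the unknown person's height of 6 feet under the male height distribution is roughly 1.58. Densities are not probabilities, so values above 1 are fine:

# Likelihood of a height of 6 under the male height distribution
p_x_given_y(6, male_height_mean, male_height_variance)
# roughly 1.5789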
Apply Bayes Classifier To New Data Point
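For each class we compute the numerator of the posterior: the class prior multiplied by the likelihood of each of the person's feature values under that class (naive Bayes treats the features as independent given the class). Written out for the male class:

$$ P(\text{male} \mid \mathbf{x}) \;\propto\; P(\text{male})\; p(\text{height} \mid \text{male})\; p(\text{weight} \mid \text{male})\; p(\text{foot size} \mid \text{male}) $$

The female numerator is analogous. Because both posteriors share the same denominator (the evidence), we can compare the numerators directly and skip the denominator.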
To do this, we just need to plug in the values of the unclassified person (e.g. height = 6), the per-class statistics from the dataset (e.g. the mean of female height), and the p_x_given_y function we made above:
# Numerator of the posterior if the unclassified observation is a male
(P_male *
 p_x_given_y(person['Height'][0], male_height_mean, male_height_variance) *
 p_x_given_y(person['Weight'][0], male_weight_mean, male_weight_variance) *
 p_x_given_y(person['Foot_Size'][0], male_footsize_mean, male_footsize_variance))
6.197071843878078e-09
# Numerator of the posterior if the unclassified observation is a female
(P_female *
 p_x_given_y(person['Height'][0], female_height_mean, female_height_variance) *
 p_x_given_y(person['Weight'][0], female_weight_mean, female_weight_variance) *
 p_x_given_y(person['Foot_Size'][0], female_footsize_mean, female_footsize_variance))
0.0005377909183630018
Because the numerator of the posterior for female is greater than that for male, we predict that the person is female.
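As an optional cross-check (not part of the original recipe, and assuming scikit-learn is installed), scikit-learn's GaussianNB fits the same Gaussian naive Bayes model and should also predict female for this person. Note that scikit-learn estimates the per-class variance by dividing by n rather than n - 1, so its numbers will differ slightly from the hand-computed numerators above even though the predicted class is the same.

# Optional cross-check with scikit-learn's Gaussian naive Bayes
from sklearn.naive_bayes import GaussianNB

X = data[['Height', 'Weight', 'Foot_Size']]
y = data['Gender']

clf = GaussianNB()
clf.fit(X, y)

# Predicted class for the new person (expected: 'female')
clf.predict(person)
# Posterior probabilities, columns ordered as clf.classes_ (['female', 'male'])
clf.predict_proba(person)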
Python Example for Beginners
Two Machine Learning Fields
There are two sides to machine learning:
- Practical Machine Learning: This is about querying databases, cleaning data, writing scripts to transform data, gluing algorithms and libraries together, and writing custom code to squeeze reliable answers from data to satisfy difficult and ill-defined questions. It's the mess of reality.
- Theoretical Machine Learning: This is about math, abstraction, idealized scenarios, limits, beauty, and informing what is possible. It is a whole lot neater and cleaner, and removed from the mess of reality.