Machine Learning for Beginners in Python: How to Calibrate Predicted Probabilities

Calibrate Predicted Probabilities

Class probabilities are a common and useful output of machine learning models. In scikit-learn, most learning algorithms let us see the predicted probabilities of class membership using predict_proba. This can be extremely useful if, for instance, we only want to predict a class when the model estimates its probability to be over 90%. However, some models, including naive Bayes classifiers, output probabilities that do not reflect the real world. That is, predict_proba might report that an observation has a 0.70 chance of belonging to a certain class when the true probability is closer to 0.10 or 0.99. In naive Bayes specifically, while the ranking of the predicted probabilities across the target classes is valid, the raw predicted probabilities tend to take extreme values close to 0 and 1.
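To see this behavior concretely, here is a minimal sketch (assuming the standard Iris dataset, the same one used below) that fits an uncalibrated GaussianNB and prints its raw predicted probabilities, which pile up near 0 and 1:

```python
# Sketch: raw (uncalibrated) naive Bayes probabilities tend to be
# extreme, even when that level of certainty is not warranted.
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()
X, y = iris.data, iris.target

clf = GaussianNB()
clf.fit(X, y)

# Probabilities for the first training observation (a setosa):
# note how close the largest value is to 1.0.
print(clf.predict_proba(X[:1]))
```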

To obtain meaningful predicted probabilities we need to conduct what is called calibration. In scikit-learn we can use the CalibratedClassifierCV class to create well-calibrated predicted probabilities using k-fold cross-validation. In CalibratedClassifierCV the training folds are used to train the model and the held-out fold is used to calibrate the predicted probabilities. The returned predicted probabilities are the average over the k folds.


# Load libraries
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

Load Iris Flower Dataset

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

Create Naive Bayes Classifier

# Create Gaussian naive Bayes object
clf = GaussianNB()

Create Calibrator

# Create calibrated cross-validation with sigmoid calibration
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method='sigmoid')
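Two calibration methods are available: 'sigmoid' (Platt scaling, a parametric logistic fit) and 'isotonic' (a nonparametric fit that generally needs more data to avoid overfitting). As a hedged sketch, the isotonic alternative looks like this:

```python
# Sketch: isotonic calibration as an alternative to sigmoid.
# Isotonic regression is nonparametric and can overfit on small
# datasets, so sigmoid is usually the safer default here.
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

iris = datasets.load_iris()
clf_isotonic = CalibratedClassifierCV(GaussianNB(), cv=2, method='isotonic')
clf_isotonic.fit(iris.data, iris.target)

# Calibrated probabilities still sum to 1 across the three classes.
print(clf_isotonic.predict_proba([[2.6, 2.6, 2.6, 0.4]]))
```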

Create Classifier With Calibrated Probabilities

# Calibrate probabilities
clf_sigmoid.fit(X, y)

Create Previously Unseen Observation

# Create new observation
new_observation = [[ 2.6,  2.6,  2.6,  0.4]]

View Calibrated Probabilities

# View calibrated probabilities
clf_sigmoid.predict_proba(new_observation)
array([[ 0.31859969,  0.63663466,  0.04476565]])
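Tying back to the 90% use case mentioned at the start, the sketch below only commits to a class when the calibrated probability clears a threshold and abstains (returns -1) otherwise. The helper name predict_with_threshold is illustrative, not a scikit-learn API:

```python
# Sketch: predict a class only when the calibrated probability
# exceeds a threshold; otherwise abstain with -1.
import numpy as np
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

def predict_with_threshold(model, X, threshold=0.9):
    """Return the predicted class, or -1 when no class clears the threshold."""
    proba = model.predict_proba(X)
    best = proba.argmax(axis=1)
    return np.where(proba.max(axis=1) >= threshold, best, -1)

iris = datasets.load_iris()
clf_sigmoid = CalibratedClassifierCV(GaussianNB(), cv=2, method='sigmoid')
clf_sigmoid.fit(iris.data, iris.target)

# Per the calibrated output above, the top class probability (~0.64)
# is below 0.9, so the model abstains for this observation.
print(predict_with_threshold(clf_sigmoid, [[2.6, 2.6, 2.6, 0.4]]))
```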


Python Example for Beginners

Two Machine Learning Fields

There are two sides to machine learning:

  • Practical Machine Learning: This is about querying databases, cleaning data, writing scripts to transform data, gluing algorithms and libraries together, and writing custom code to squeeze reliable answers from data to satisfy difficult and ill-defined questions. It's the mess of reality.
  • Theoretical Machine Learning: This is about math and abstraction, idealized scenarios, limits and beauty, and informing what is possible. It is a whole lot neater and cleaner, and removed from the mess of reality.

Data Science Resources: Data Science Recipes and Applied Machine Learning Recipes

Introduction to Applied Machine Learning & Data Science for Beginners, Business Analysts, Students, Researchers and Freelancers with Python & R Codes @ Western Australian Center for Applied Machine Learning & Data Science (WACAMLDS) !!!

Latest end-to-end Learn by Coding Recipes in Project-Based Learning:

Applied Statistics with R for Beginners and Business Professionals

Data Science and Machine Learning Projects in Python: Tabular Data Analytics

Data Science and Machine Learning Projects in R: Tabular Data Analytics

Python Machine Learning & Data Science Recipes: Learn by Coding

R Machine Learning & Data Science Recipes: Learn by Coding

Comparing Different Machine Learning Algorithms in Python for Classification (FREE)

Disclaimer: The information and code presented within this recipe/tutorial is only for educational and coaching purposes for beginners and developers. Anyone can practice and apply the recipe/tutorial presented here, but the reader is taking full responsibility for his/her actions. The author (content curator) of this recipe (code / program) has made every effort to ensure that the information was correct at the time of publication. The author (content curator) does not assume and hereby disclaims any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from accident, negligence, or any other cause. The information presented here could also be found in public knowledge domains.


