Feature Scaling – Part 1
Feature Scaling is a technique to standardize the independent features of a dataset to a fixed range. It is performed during data preprocessing.
Working:
Given a dataset with the features Age, Salary and BHK Apartment, containing data for 5,000 people, each described by these independent features.
Each data point is labeled as:
 Class1 – YES (with the given Age, Salary and BHK Apartment values, the person can buy the property)
 Class2 – NO (with the given Age, Salary and BHK Apartment values, the person cannot buy the property)
Using this dataset to train the model, the aim is to build a model that can predict whether a person can buy a property or not from the given feature values.
Once the model is trained, an N-dimensional graph (where N is the number of features in the dataset) can be created from the data points in the dataset. The figure below is an idealized representation of the model.
As shown in the figure, the star data points belong to Class1 – Yes and the circles represent Class2 – No, and the model is trained using these data points. Now a new data point (the diamond in the figure) is given, with its own values for the 3 features (Age, Salary, BHK Apartment) mentioned above. The model has to predict whether this data point belongs to Yes or No.
Prediction of the class of new data point:
The model calculates the distance of this data point from the centroid of each class group. The data point is then assigned to the class whose centroid is at the minimum distance from it.
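This prediction rule can be sketched as a minimal nearest-centroid classifier. The centroid values below are purely illustrative (not taken from any real dataset), and Euclidean distance, described in the next section, is used as the metric.

```python
import numpy as np

# Hypothetical class centroids for [Age, Salary in lacs, BHK] (illustrative values only)
centroid_yes = np.array([45.0, 30.0, 3.0])   # Class1 - YES
centroid_no  = np.array([25.0,  5.0, 1.0])   # Class2 - NO

def predict(point):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    d_yes = np.linalg.norm(point - centroid_yes)
    d_no  = np.linalg.norm(point - centroid_no)
    return "YES" if d_yes < d_no else "NO"

# A new data point: Age 40, Salary 28 lacs, 2 BHK
print(predict(np.array([40.0, 28.0, 2.0])))  # closer to the YES centroid
```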
The distance between a centroid and the data point can be calculated using these methods:

 Euclidean Distance: the square root of the sum of squared differences between the coordinates (feature values – Age, Salary, BHK Apartment) of the data point and the centroid of each class. This formula follows from the Pythagorean theorem:
Distance = sqrt( sum over i = 1..k of (x_i – y_i)^2 )
where x is the data point, y is the centroid and k is the number of features; the given dataset has k = 3.
 Manhattan Distance: the sum of absolute differences between the coordinates (feature values) of the data point and the centroid of each class:
Distance = sum over i = 1..k of |x_i – y_i|
 Minkowski Distance: a generalization of the two methods above. As shown in the figure, different values can be used for r (r = 1 gives Manhattan distance, r = 2 gives Euclidean distance):
Distance = ( sum over i = 1..k of |x_i – y_i|^r )^(1/r)
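The three distance measures above can be computed directly with NumPy. The data point and centroid below reuse the example values from the next section (Salary expressed in lacs for readability); r = 3 is an arbitrary choice to illustrate the general Minkowski form.

```python
import numpy as np

x = np.array([57.0, 33.0, 2.0])  # data point  [Age, Salary in lacs, BHK]
y = np.array([40.0, 22.0, 3.0])  # class centroid

euclidean = np.sqrt(np.sum((x - y) ** 2))          # Minkowski with r = 2
manhattan = np.sum(np.abs(x - y))                  # Minkowski with r = 1
minkowski = np.sum(np.abs(x - y) ** 3) ** (1 / 3)  # general form, here r = 3

print(euclidean)  # sqrt(17^2 + 11^2 + 1^2) = sqrt(411)
print(manhattan)  # 17 + 11 + 1 = 29
print(minkowski)
```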
Need of Feature Scaling:
The given dataset contains 3 features – Age, Salary and BHK Apartment. Consider Age in the range 10–60, Salary in the range 1 Lac–40 Lacs, and BHK of the flat in the range 1–5. All these features are independent of each other.
Suppose the centroid of class 1 is [40, 22 Lacs, 3] and the data point to be predicted is [57, 33 Lacs, 2].
Using the Manhattan method,
Distance = |40 – 57| + |2200000 – 3300000| + |3 – 2| = 17 + 1100000 + 1 = 1100018
It can be seen that the Salary feature will dominate all the other features when predicting the class of the data point, even though the features are independent of each other – a person's salary has no relation to their age or to the size of flat they need. As a result, the model's predictions are effectively decided by Salary alone, so they will very often be wrong.
So, the simple solution to this problem is Feature Scaling. Feature scaling algorithms will scale Age, Salary and BHK into a fixed range, say [-1, 1] or [0, 1], so that no single feature can dominate the others.
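A minimal sketch of min-max scaling to [0, 1], hand-rolled here (rather than using a library scaler) and assuming the feature ranges stated above (Age 10–60, Salary 1 Lac–40 Lacs, BHK 1–5):

```python
# Min-max scaling: (value - min) / (max - min) maps each feature to [0, 1]
mins = [10, 100_000, 1]      # assumed minimums for Age, Salary, BHK
maxs = [60, 4_000_000, 5]    # assumed maximums for Age, Salary, BHK

def scale(point):
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(point, mins, maxs)]

centroid = scale([40, 2_200_000, 3])
point    = scale([57, 3_300_000, 2])

# Per-feature Manhattan terms after scaling
terms = [abs(a - b) for a, b in zip(point, centroid)]
print(terms)       # every feature now contributes on a comparable [0, 1] scale
print(sum(terms))
```

After scaling, the Age, Salary and BHK terms are all of similar magnitude, so no single feature decides the distance on its own.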
Python Example for Beginners
Two Machine Learning Fields
There are two sides to machine learning:
 Practical Machine Learning: This is about querying databases, cleaning data, writing scripts to transform data, gluing algorithms and libraries together, and writing custom code to squeeze reliable answers from data to satisfy difficult and ill-defined questions. It's the mess of reality.
 Theoretical Machine Learning: This is about math and abstraction and idealized scenarios and limits and beauty and informing what is possible. It is a whole lot neater and cleaner and removed from the mess of reality.
Data Science Resources: Data Science Recipes and Applied Machine Learning Recipes
Introduction to Applied Machine Learning & Data Science for Beginners, Business Analysts, Students, Researchers and Freelancers with Python & R Codes @ Western Australian Center for Applied Machine Learning & Data Science (WACAMLDS) !!!
Latest end-to-end Learn by Coding Recipes in Project-Based Learning:
Applied Statistics with R for Beginners and Business Professionals
Data Science and Machine Learning Projects in Python: Tabular Data Analytics
Data Science and Machine Learning Projects in R: Tabular Data Analytics
Python Machine Learning & Data Science Recipes: Learn by Coding
R Machine Learning & Data Science Recipes: Learn by Coding
Comparing Different Machine Learning Algorithms in Python for Classification (FREE)
Disclaimer: The information and code presented within this recipe/tutorial is only for educational and coaching purposes for beginners and developers. Anyone can practice and apply the recipe/tutorial presented here, but the reader is taking full responsibility for his/her actions. The author (content curator) of this recipe (code / program) has made every effort to ensure that the information was correct at the time of publication. The author (content curator) does not assume and hereby disclaims any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from accident, negligence, or any other cause. The information presented here could also be found in public knowledge domains.