# K-Nearest Neighbors Classification

## Preliminaries

```
import pandas as pd
from sklearn import neighbors
import numpy as np
%matplotlib inline
import seaborn
```

## Create Dataset

Here we create a dataset with three variables: `test_1` and `test_2` are our independent variables, and `outcome` is our dependent variable. We will use this data to train our learner.

```
training_data = pd.DataFrame()
training_data['test_1'] = [0.3051,0.4949,0.6974,0.3769,0.2231,0.341,0.4436,0.5897,0.6308,0.5]
training_data['test_2'] = [0.5846,0.2654,0.2615,0.4538,0.4615,0.8308,0.4962,0.3269,0.5346,0.6731]
training_data['outcome'] = ['win','win','win','win','win','loss','loss','loss','loss','loss']
training_data.head()
```

| | test_1 | test_2 | outcome |
|---|---|---|---|
| 0 | 0.3051 | 0.5846 | win |
| 1 | 0.4949 | 0.2654 | win |
| 2 | 0.6974 | 0.2615 | win |
| 3 | 0.3769 | 0.4538 | win |
| 4 | 0.2231 | 0.4615 | win |

## Plot the data

This is not necessary, but because we have only two independent variables, we can plot the entire training dataset. The X and Y axes are the independent variables, while the colors of the points are their classes.

`seaborn.lmplot(x='test_1', y='test_2', data=training_data, fit_reg=False, hue='outcome', scatter_kws={"marker": "D", "s": 100})`

```
<seaborn.axisgrid.FacetGrid at 0x11008aeb8>
```

## Convert Data Into np.arrays

The `scikit-learn` library requires the data to be formatted as `numpy` arrays. Here we do that reformatting.

```
X = training_data[['test_1', 'test_2']].to_numpy()
y = np.array(training_data['outcome'])
```
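As a quick sanity check, the converted arrays should have one row per observation and one label per observation. A minimal self-contained sketch (rebuilding the same `training_data` from above):

```python
import numpy as np
import pandas as pd

training_data = pd.DataFrame({
    'test_1': [0.3051, 0.4949, 0.6974, 0.3769, 0.2231,
               0.341, 0.4436, 0.5897, 0.6308, 0.5],
    'test_2': [0.5846, 0.2654, 0.2615, 0.4538, 0.4615,
               0.8308, 0.4962, 0.3269, 0.5346, 0.6731],
    'outcome': ['win'] * 5 + ['loss'] * 5,
})

# Feature matrix: one row per observation, one column per independent variable
X = training_data[['test_1', 'test_2']].to_numpy()
# Target vector: one class label per observation
y = training_data['outcome'].to_numpy()

print(X.shape)  # (10, 2)
print(y.shape)  # (10,)
```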

## Train The Learner

This is our big moment. We train a KNN learner, specifying that an observation's neighborhood is its three closest neighbors. `weights = 'uniform'` can be thought of as the voting system used: `uniform` means that all neighbors get an equally weighted "vote" about an observation's class, while `weights = 'distance'` would tell the learner to weight each observation's "vote" by its distance from the observation we are classifying.

```
clf = neighbors.KNeighborsClassifier(n_neighbors=3, weights='uniform')
trained_model = clf.fit(X, y)
```
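For comparison, both voting systems can be fit to the same data. This is a sketch (the variable names are illustrative, the data is the training set defined above); note that distance weighting effectively memorizes the training set, since each training point is its own zero-distance neighbor:

```python
import numpy as np
from sklearn import neighbors

X = np.array([[0.3051, 0.5846], [0.4949, 0.2654], [0.6974, 0.2615],
              [0.3769, 0.4538], [0.2231, 0.4615], [0.341, 0.8308],
              [0.4436, 0.4962], [0.5897, 0.3269], [0.6308, 0.5346],
              [0.5, 0.6731]])
y = np.array(['win'] * 5 + ['loss'] * 5)

# Uniform weighting: each of the 3 neighbors gets one full vote
uniform_clf = neighbors.KNeighborsClassifier(n_neighbors=3, weights='uniform').fit(X, y)
# Distance weighting: closer neighbors get proportionally larger votes
distance_clf = neighbors.KNeighborsClassifier(n_neighbors=3, weights='distance').fit(X, y)

print(uniform_clf.score(X, y))   # 0.8
print(distance_clf.score(X, y))  # 1.0 (a zero-distance neighbor dominates the vote)
```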

## View The Model’s Score

How good is our trained model compared to our training data?

```
trained_model.score(X, y)
```

```
0.80000000000000004
```

Our model is 80% accurate!

*Note: in any real-world example we'd want to compare the trained model against some holdout test data. But since this is a toy example, we used the training data*.
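That holdout comparison can be sketched with scikit-learn's `train_test_split`. The 30% split size and random seed below are arbitrary choices, and with only ten observations the resulting estimate is very noisy:

```python
import numpy as np
from sklearn import neighbors
from sklearn.model_selection import train_test_split

X = np.array([[0.3051, 0.5846], [0.4949, 0.2654], [0.6974, 0.2615],
              [0.3769, 0.4538], [0.2231, 0.4615], [0.341, 0.8308],
              [0.4436, 0.4962], [0.5897, 0.3269], [0.6308, 0.5346],
              [0.5, 0.6731]])
y = np.array(['win'] * 5 + ['loss'] * 5)

# Hold out 30% of the data for evaluation; stratify keeps the class balance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = neighbors.KNeighborsClassifier(n_neighbors=3, weights='uniform')
clf.fit(X_train, y_train)

# Accuracy on data the learner never saw during training
acc = clf.score(X_test, y_test)
print(acc)
```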

## Apply The Learner To A New Data Point

Now that we have trained our model, we can predict the class of a new observation, `x_test`. Let us do that now!

```
# Create a new observation with the value of the first independent
# variable, 'test_1', as .4 and the second, 'test_2', as .6
x_test = np.array([[.4,.6]])
```

```
# Apply the learner to the new, unclassified observation.
trained_model.predict(x_test)
```

```
array(['loss'], dtype=object)
```

Huzzah! We can see that the learner has predicted that the new observation's class is `loss`.

We can even look at the probabilities the learner assigned to each class:

`trained_model.predict_proba(x_test)`

```
array([[ 0.66666667, 0.33333333]])
```

According to this result, the model predicted that the observation was `loss` with a ~67% probability and `win` with a ~33% probability. Because the observation had a greater probability of being `loss`, it predicted that class for the observation.
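The column order of `predict_proba` follows the learner's `classes_` attribute, which scikit-learn sorts alphabetically. A small sketch (rebuilding the same model) makes the mapping explicit:

```python
import numpy as np
from sklearn import neighbors

X = np.array([[0.3051, 0.5846], [0.4949, 0.2654], [0.6974, 0.2615],
              [0.3769, 0.4538], [0.2231, 0.4615], [0.341, 0.8308],
              [0.4436, 0.4962], [0.5897, 0.3269], [0.6308, 0.5346],
              [0.5, 0.6731]])
y = np.array(['win'] * 5 + ['loss'] * 5)

clf = neighbors.KNeighborsClassifier(n_neighbors=3, weights='uniform').fit(X, y)

print(clf.classes_)  # ['loss' 'win'] -- alphabetical order
probs = clf.predict_proba(np.array([[0.4, 0.6]]))[0]

# Pair each class label with its probability column
for label, p in zip(clf.classes_, probs):
    print(label, round(p, 4))
```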

## Notes

- The choice of K has a major effect on the classifier created.
- The greater the K, the more linear (high bias, low variance) the decision boundary.
- There are a variety of ways to measure distance; two popular choices are simple Euclidean distance and cosine similarity.
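The first two notes can be illustrated by refitting the learner across several values of K and scoring on the training data. This is a rough sketch (odd values of K avoid voting ties between the two classes; the default metric is Euclidean distance):

```python
import numpy as np
from sklearn import neighbors

X = np.array([[0.3051, 0.5846], [0.4949, 0.2654], [0.6974, 0.2615],
              [0.3769, 0.4538], [0.2231, 0.4615], [0.341, 0.8308],
              [0.4436, 0.4962], [0.5897, 0.3269], [0.6308, 0.5346],
              [0.5, 0.6731]])
y = np.array(['win'] * 5 + ['loss'] * 5)

# Larger K smooths the decision boundary (higher bias, lower variance);
# K = 1 always fits the training data perfectly when points are unique.
scores = {}
for k in (1, 3, 5, 7, 9):
    clf = neighbors.KNeighborsClassifier(n_neighbors=k).fit(X, y)
    scores[k] = clf.score(X, y)
    print(k, scores[k])
```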
