# Logistic Regression With L1 Regularization

L1 regularization (the penalty used by the lasso) is a powerful tool in data science. There are many tutorials out there explaining L1 regularization in depth, and I will not try to do that here. Instead, this tutorial shows the effect of the regularization parameter `C` on the coefficients and model accuracy.

## Preliminaries

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
```

## Create The Data

The dataset used in this tutorial is the famous iris dataset. The iris data contains 150 observations from three species of iris (50 per species): the target vector `y` holds the species labels and `X` holds the four feature variables.

The dataset contains three categories (three species of iris); however, for the sake of simplicity it is easier if the target data is binary. Therefore we will remove the observations belonging to the third species of iris (class 2).

```python
# Load the iris dataset
iris = datasets.load_iris()

# Create X from the features
X = iris.data

# Create y from the target
y = iris.target

# Remake the variables, keeping all data where the category is not 2
X = X[y != 2]
y = y[y != 2]
```
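As a quick sanity check (assuming scikit-learn's bundled iris data), the binary subset should contain 100 observations and only classes 0 and 1:

```python
import numpy as np
from sklearn import datasets

# Load iris and drop the third species (class 2)
iris = datasets.load_iris()
X = iris.data[iris.target != 2]
y = iris.target[iris.target != 2]

print(X.shape)       # (100, 4)
print(np.unique(y))  # [0 1]
```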

## View The Data

```python
# View the first five observations of the features
X[0:5]
```
```
array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2]])
```
```python
# View the target data
y
```
```
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
```

## Split The Data Into Training And Test Sets

```python
# Split the data into training and test sets, with 30% of observations in the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```

## Standardize Features

Because the regularization penalty is the sum of the absolute values of the coefficients, we need to scale the features so that the coefficients are all on the same scale.

```python
# Create a scaler object
sc = StandardScaler()

# Fit the scaler to the training data and transform it
X_train_std = sc.fit_transform(X_train)

# Apply the same scaler to the test data
X_test_std = sc.transform(X_test)
```
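To see what `StandardScaler` does, here is a minimal sketch on a synthetic two-feature matrix (the array `X_demo` is made up for illustration): after `fit_transform`, each column has mean 0 and unit standard deviation, so no feature dominates the penalty simply because of its units.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales
X_demo = np.array([[1.0, 100.0],
                   [2.0, 200.0],
                   [3.0, 300.0]])

X_demo_std = StandardScaler().fit_transform(X_demo)

print(X_demo_std.mean(axis=0))  # each column centered at ~0
print(X_demo_std.std(axis=0))   # each column scaled to unit variance
```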

## Run Logistic Regression With An L1 Penalty At Various Regularization Strengths

The usefulness of L1 regularization is that it can push feature coefficients to exactly 0, providing a method for feature selection. In scikit-learn, `C` is the inverse of the regularization strength, so smaller values of `C` mean stronger regularization. In the code below we run a logistic regression with an L1 penalty four times, each time decreasing the value of `C`. We should expect that as `C` decreases, more coefficients become 0.

```python
C = [10, 1, .1, .001]

for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear')
    clf.fit(X_train_std, y_train)
    print('C:', c)
    print('Coefficient of each feature:', clf.coef_)
    print('Training accuracy:', clf.score(X_train_std, y_train))
    print('Test accuracy:', clf.score(X_test_std, y_test))
    print('')
```
```
C: 10
Coefficient of each feature: [[-0.00902649 -3.83902983  4.34904293  0.        ]]
Training accuracy: 0.9857142857142858
Test accuracy: 1.0

C: 1
Coefficient of each feature: [[ 0.         -2.27441684  2.56760315  0.        ]]
Training accuracy: 0.9857142857142858
Test accuracy: 1.0

C: 0.1
Coefficient of each feature: [[ 0.         -0.82143435  0.97187285  0.        ]]
Training accuracy: 0.9857142857142858
Test accuracy: 1.0

C: 0.001
Coefficient of each feature: [[0. 0. 0. 0.]]
Training accuracy: 0.5
Test accuracy: 0.5
```

Notice that as `C` decreases, the model coefficients become smaller (for example, the third coefficient shrinks from `4.34904293` when `C=10` to `0.97187285` when `C=0.1`), until at `C=0.001` all the coefficients are zero. This is the effect of the regularization penalty becoming more prominent.
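One way to turn this zeroing behavior into an explicit feature-selection step is scikit-learn's `SelectFromModel`, which keeps only the features whose coefficients survive the L1 penalty. The sketch below uses a synthetic dataset (`make_classification` with made-up parameters) rather than the iris data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic binary problem: 2 informative features plus 3 noise features
X, y = make_classification(n_samples=200, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
X_std = StandardScaler().fit_transform(X)

# Fit an L1-penalized logistic regression and keep nonzero-coefficient features
clf = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')
selector = SelectFromModel(clf).fit(X_std, y)

print(selector.get_support())  # boolean mask of surviving features
X_selected = selector.transform(X_std)
print(X_selected.shape)        # (200, number of surviving features)
```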
