Momentum-based Gradient Optimizer Introduction

Gradient Descent is an optimization technique used in Machine Learning frameworks to train models. Training revolves around an objective function (or error function), which measures the error a model makes on a given dataset.
The model's parameters are initialized to random values, and on each iteration the algorithm updates them so that they move closer and closer to the values that minimize the objective function.
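
As a minimal sketch of this update rule, the snippet below runs plain gradient descent on a simple quadratic (the function, starting point, and learning rate are illustrative choices, not part of any particular framework):

# Plain gradient descent on the illustrative function f(x) = (x - 3)^2
def grad(x):
    return 2 * (x - 3)  # derivative of (x - 3)^2

x = 0.0       # initial value (stands in for a random initialization)
alpha = 0.1   # learning rate
for _ in range(100):
    x = x - alpha * grad(x)  # step against the gradient
print(x)      # approaches 3, the minimizer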

However, Adaptive Optimization Algorithms are gaining popularity because they converge swiftly. In contrast to conventional Gradient Descent, these algorithms use statistics from previous iterations to make the convergence process more robust.

Momentum-based Optimization:

Momentum-based optimization is an Adaptive Optimization Algorithm that uses exponentially weighted averages of the gradients from previous iterations to stabilize convergence, resulting in quicker optimization. For example, in most real-world applications of Deep Neural Networks, training is carried out on noisy data that are fed in batches during optimization, so it is necessary to reduce the effect of the noise. This problem can be tackled using Exponentially Weighted Averages (or Exponentially Weighted Moving Averages).

Implementing Exponentially Weighted Averages:
To approximate the trend in a noisy dataset of size N,
theta_0, theta_1, theta_2, ..., theta_N, we maintain a corresponding set of smoothed values v_0, v_1, v_2, ..., v_N. As we iterate through the values in the dataset, we compute each smoothed value as follows:

On iteration t:
    Get the next theta_t
    v_t = beta * v_{t-1} + (1 - beta) * theta_t

This recurrence averages v_t over roughly the previous 1 / (1 - beta) iterations. The averaging ensures that the underlying trend is retained while the noise is averaged out. Momentum-based gradient descent uses this same strategy to make training robust against noise in the data samples, resulting in faster training.
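
To make this concrete, the snippet below is a minimal sketch of exponentially weighted averaging applied to a synthetic noisy sequence (the data, the beta value, and the variable names are illustrative assumptions, not from the original recipe):

import random

beta = 0.9  # smoothing factor: averages over roughly 1 / (1 - beta) = 10 values

# Synthetic noisy observations of a slowly rising trend (illustrative data)
data = [0.1 * t + random.gauss(0, 1) for t in range(100)]

v = 0.0
smoothed = []
for theta_t in data:
    v = beta * v + (1 - beta) * theta_t  # exponentially weighted average
    smoothed.append(v)

# 'smoothed' now follows the 0.1 * t trend with most of the noise damped out
print(smoothed[-5:])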

As an example, if you were to optimize a function f(x) with respect to the parameter x, the following pseudocode illustrates the algorithm:

On iteration t:
    On the current batch, compute the gradient df(x)/dx
    v := beta * v + (1 - beta) * df(x)/dx
    x := x - alpha * v

The hyperparameters of this optimization algorithm are alpha, called the learning rate, and beta, the momentum coefficient, which plays a role similar to momentum in mechanics.
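
As a rough guide to choosing beta, the average spans about 1 / (1 - beta) past gradients, so larger values give smoother but slower-reacting updates. The short sketch below (values chosen purely for illustration) prints the effective window for a few common settings:

# Effective averaging window for a few illustrative momentum coefficients
for beta in (0.5, 0.9, 0.99):
    window = 1 / (1 - beta)
    print("beta =", beta, "averages over roughly the last", round(window), "gradients")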

Following is an implementation of Momentum-based Gradient Descent on the function f(x) = x^2 - 4x + 4:

# Hyperparameters of the optimization algorithm
alpha = 0.01  # learning rate
beta = 0.9    # momentum coefficient

# Objective function: f(x) = x^2 - 4x + 4
def obj_func(x):
    return x * x - 4 * x + 4

# Gradient of the objective function: f'(x) = 2x - 4
def grad(x):
    return 2 * x - 4

# Parameter of the objective function
x = 0
# Number of iterations
iterations = 0
# Exponentially weighted average of the gradients
v = 0

while True:
    iterations += 1
    # Update the moving average of the gradients, then take a step
    v = beta * v + (1 - beta) * grad(x)
    x_prev = x
    x = x - alpha * v
    print("Iteration", iterations, ": x =", x, ", f(x) =", obj_func(x))
    # Stop once the update is too small to change x
    if x_prev == x:
        print("Done optimizing the objective function.")
        break
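
Since f(x) = x^2 - 4x + 4 = (x - 2)^2, the minimum lies at x = 2, and the loop converges to that value, exiting once the momentum-damped updates become too small to change x in floating-point arithmetic.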

