Optimisation
Minimise loss
Optimisation algorithms are crucial in training neural networks as they determine how the model’s parameters (weights and biases) are updated based on the computed gradients. These updates aim to minimise the loss function, thus improving the model’s performance.
Gradient Descent
Gradient descent is a foundational optimisation algorithm in machine learning, enabling models to learn from data by iteratively updating parameters to minimise error. At each step it moves the parameters a small distance in the direction of steepest descent, given by the negative gradient of the loss function with respect to the parameters. Repeating these steps across iterations gradually reduces the loss until it converges towards a (local) minimum.
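To make this concrete, here is a minimal NumPy sketch of gradient descent fitting a single-parameter least-squares model; the toy data, learning rate, and step count are illustrative choices, not values from this page.

```python
import numpy as np

# Toy data: y = 3x plus a little noise
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)

w = 0.0               # parameter to learn
learning_rate = 0.1   # step size (illustrative choice)

for step in range(100):
    # Mean squared error loss: L(w) = mean((w*x - y)^2)
    # Its gradient with respect to w:
    grad = np.mean(2 * (w * x - y) * x)
    # Move w a small step in the direction of steepest descent
    w -= learning_rate * grad

print(w)  # converges towards roughly 3.0
```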
Here are a few of the most common optimisation algorithms used in deep learning:
Stochastic Gradient Descent (SGD)
SGD is an optimisation algorithm that updates the model’s parameters using a single training example or a small mini-batch at each iteration. This makes it more efficient and faster than traditional (batch) gradient descent, which uses the entire dataset to compute the gradients.
Key Points:
Efficiency: Suitable for large datasets since it updates parameters more frequently.
Noise: Introduces noise into the updates, which can help escape local minima and explore the parameter space more effectively.
Learning Rate: The size of each update step. Choosing an appropriate learning rate is crucial for convergence.
Use Case: Large datasets, online learning.
Formula:
θ ← θ − η ∇θ L(θ; x_i, y_i)
Where:
θ denotes the model parameters, η is the learning rate, and ∇θ L(θ; x_i, y_i) is the gradient of the loss computed on a single example (or mini-batch) (x_i, y_i).
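As a rough illustration of these per-mini-batch updates, the following NumPy sketch runs mini-batch SGD on a toy linear-regression problem; the batch size, learning rate, and generated data are hypothetical values chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
y = 3.0 * x[:, 0] + rng.normal(scale=0.1, size=1000)

w = np.zeros(1)
learning_rate = 0.05
batch_size = 32

for epoch in range(20):
    # Shuffle so each mini-batch is a random sample of the data
    order = rng.permutation(len(x))
    for start in range(0, len(x), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = x[idx], y[idx]
        # Gradient of the mean squared error on this mini-batch only
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
        # theta <- theta - eta * gradient
        w -= learning_rate * grad

print(w)  # approaches [3.0]
```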
SGD with Momentum
SGD with Momentum helps accelerate the gradient updates along consistent directions, leading to faster convergence. It accumulates a velocity vector in the direction of the gradients to smooth out the updates.
Key Points:
Momentum: A fraction of the previous update is added to the current update, which helps dampen oscillations and smooth out the updates.
Velocity: The velocity term accumulates the gradient of the loss function over time.
Use Case: Cases with significant oscillations in SGD.
Formula:
v_t = γ v_{t−1} + η ∇θ L(θ)
θ ← θ − v_t
Where:
v_t is the velocity at step t, γ is the momentum coefficient (typically around 0.9), η is the learning rate, and ∇θ L(θ) is the gradient of the loss with respect to the parameters θ.
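A minimal sketch of the velocity update, assuming the formulation above; the quadratic toy loss, momentum coefficient, and learning rate are illustrative assumptions.

```python
import numpy as np

def sgd_momentum_step(theta, velocity, grad, learning_rate=0.01, momentum=0.9):
    """One SGD-with-momentum update: the velocity accumulates past gradients."""
    velocity = momentum * velocity + learning_rate * grad
    theta = theta - velocity
    return theta, velocity

# Illustrative use on the quadratic loss L(theta) = 0.5 * theta^2
theta = np.array([5.0])
velocity = np.zeros_like(theta)
for _ in range(100):
    grad = theta  # dL/dtheta for the quadratic above
    theta, velocity = sgd_momentum_step(theta, velocity, grad)
print(theta)  # moves towards 0
```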
Root Mean Square Propagation (RMSProp)
RMSProp adapts the learning rate for each parameter by dividing it by the square root of an exponentially decaying average of squared gradients. This helps handle the problem of vanishing or exploding gradients.
Key Points:
Adaptive Learning Rate: Adjusts the learning rate based on the recent magnitudes of the gradients.
Stabilisation: Prevents oscillations and stabilises training by normalising the gradients.
Use Case: Recurrent neural networks, deep networks.
Formula:
E[g²]_t = β E[g²]_{t−1} + (1 − β) g_t²
θ ← θ − η g_t / √(E[g²]_t + ε)
Where:
g_t is the gradient at step t, E[g²]_t is the exponentially decaying average of squared gradients, β is the decay rate (typically 0.9), η is the learning rate, and ε is a small constant added for numerical stability.
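A minimal sketch of a single RMSProp update, mirroring the decaying-average formula above; the hyperparameter values and the toy quadratic usage are illustrative assumptions.

```python
import numpy as np

def rmsprop_step(theta, avg_sq_grad, grad,
                 learning_rate=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update using an exponentially decaying average of squared gradients."""
    avg_sq_grad = decay * avg_sq_grad + (1 - decay) * grad ** 2
    theta = theta - learning_rate * grad / np.sqrt(avg_sq_grad + eps)
    return theta, avg_sq_grad

# Illustrative use on the quadratic loss L(theta) = 0.5 * theta^2
theta = np.array([5.0])
avg_sq_grad = np.zeros_like(theta)
for _ in range(300):
    grad = theta  # dL/dtheta
    theta, avg_sq_grad = rmsprop_step(theta, avg_sq_grad, grad, learning_rate=0.1)
print(theta)  # ends up close to 0 (small oscillations around the minimum are expected)
```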
Adaptive Moment Estimation (Adam)
Adam combines the advantages of two other extensions of SGD, namely AdaGrad and RMSProp. It maintains an exponentially decaying average of past gradients (momentum) and an exponentially decaying average of past squared gradients (RMSProp).
Key Points:
Adaptive Learning Rate: Maintains separate learning rates for each parameter.
Bias Correction: Includes bias correction terms to account for the moment estimates being initialised at zero.
Use Case: General-purpose optimiser, works well in most cases.
Formula:
m_t = β₁ m_{t−1} + (1 − β₁) g_t
v_t = β₂ v_{t−1} + (1 − β₂) g_t²
m̂_t = m_t / (1 − β₁^t),  v̂_t = v_t / (1 − β₂^t)
θ ← θ − η m̂_t / (√v̂_t + ε)
Where:
g_t is the gradient at step t, m_t and v_t are the exponentially decaying averages of the gradients and squared gradients, β₁ and β₂ are their decay rates (typically 0.9 and 0.999), m̂_t and v̂_t are the bias-corrected estimates, η is the learning rate, and ε is a small constant for numerical stability.
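The sketch below implements one Adam update with bias correction, mirroring the formula above; the hyperparameters and the toy quadratic example are illustrative assumptions.

```python
import numpy as np

def adam_step(theta, m, v, grad, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (m), squared-gradient average (v), bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum term)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (RMSProp-style term)
    m_hat = m / (1 - beta1 ** t)                # correct the bias from zero initialisation
    v_hat = v / (1 - beta2 ** t)
    theta = theta - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative use on the quadratic loss L(theta) = 0.5 * theta^2
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 301):
    grad = theta  # dL/dtheta
    theta, m, v = adam_step(theta, m, v, grad, t, learning_rate=0.1)
print(theta)  # ends up close to 0 (small oscillations around the minimum are expected)
```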
Summary
SGD: Good for large datasets and online learning but can be noisy.
SGD with Momentum: Adds a velocity term to smooth updates and accelerate convergence.
RMSProp: Uses adaptive learning rates to stabilise training and handle vanishing/exploding gradients.
Adam: Combines momentum and RMSProp for adaptive learning rates and faster convergence with bias correction.