Neural Network Components

Understanding the components of a neural network's design. The code sketch after the list below shows how these components fit together.

  • Tensor - The key data structure which flows through the network

  • Nodes - Individual units, connected using weights and grouped within layers

  • Weights - Learnable parameters on the connections between nodes

  • Bias term - An offset parameter which is added to the weighted sum of the inputs

  • Gradients - The partial derivatives of the loss function with respect to the weights and biases

  • Loss Function - Measures how far the neural network’s predictions are from the target values

  • Activation Function - Applied to the output of each node to introduce non-linearity into the network

  • Regularisation - Used to prevent overfitting by adding a penalty to the loss function

  • Optimisation - Used to update the model parameters based on the computed gradients
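To make these terms concrete, here is a minimal sketch of a single-layer network trained with gradient descent, written with NumPy. The toy data, layer size, learning rate, choice of tanh activation, mean-squared-error loss and L2 penalty strength are illustrative assumptions, not details taken from the list above.

```python
import numpy as np

# Hypothetical toy data: 4 samples with 3 input features and 1 target each.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # input tensor (a batch of samples)
y = rng.normal(size=(4, 1))          # target values

# Weights and bias term for a single layer of nodes.
W = rng.normal(size=(3, 1)) * 0.1    # weighted connections between nodes
b = np.zeros((1,))                   # bias offset added to the weighted sum

lr = 0.1        # learning rate for the optimisation step (illustrative value)
lam = 0.01      # L2 regularisation strength (illustrative value)

for step in range(100):
    # Forward pass: weighted sum of the inputs plus the bias, then a
    # non-linear activation function.
    z = X @ W + b
    a = np.tanh(z)

    # Loss function: mean squared error plus an L2 regularisation penalty.
    err = a - y
    loss = np.mean(err ** 2) + lam * np.sum(W ** 2)

    # Gradients: partial derivatives of the loss with respect to the weights
    # and bias, obtained here by applying the chain rule by hand.
    dz = 2 * err * (1 - a ** 2) / len(X)   # dLoss/dz through the tanh
    dW = X.T @ dz + 2 * lam * W            # dLoss/dW, including the penalty
    db = dz.sum(axis=0)                    # dLoss/db

    # Optimisation: update the model parameters using the computed gradients.
    W -= lr * dW
    b -= lr * db
```

In practice, frameworks such as PyTorch or TensorFlow compute the gradients automatically via backpropagation, but the update rule is the same idea: move each parameter a small step against its gradient of the loss.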
