Forward, Loss, Backprop: The Loop
The Forward, Loss, Backprop loop is the core training process of a neural network: a forward pass makes predictions, a loss function measures how wrong they are, and backpropagation computes the gradients used to update the model's weights. Repeated over many iterations, this loop steadily reduces the error and improves future predictions.
- Forward: compute predictions from inputs via layers and activations.
- Loss: compare predictions to targets.
- Backward: compute gradients of the loss w.r.t. each parameter (backpropagation).
- Update: adjust parameters with gradient descent (or a fancier optimizer); the sketch after this list walks through all four steps.
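
To make the loop concrete, here is a minimal sketch using PyTorch; the framework choice, network shape, learning rate, and toy data are all illustrative assumptions rather than anything prescribed above.

```python
import torch
import torch.nn as nn

# Toy regression data: 64 samples, 10 features each (illustrative shapes).
inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

# A small model, a loss function, and a plain gradient-descent optimizer.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    predictions = model(inputs)           # Forward: inputs -> predictions
    loss = loss_fn(predictions, targets)  # Loss: compare predictions to targets
    optimizer.zero_grad()                 # Clear gradients from the previous step
    loss.backward()                       # Backward: backpropagation fills .grad
    optimizer.step()                      # Update: gradient descent on the weights
    if step % 20 == 0:
        print(f"step {step}: loss = {loss.item():.4f}")
```

Swapping `torch.optim.SGD` for `torch.optim.Adam` is the usual "fancier optimizer" change; the four steps of the loop stay exactly the same.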
