We’ve noticed an unusual training pattern when fine-tuning LLMs. At first we thought it was a bug, but now we think it shows that LLMs can learn effectively from a single example.
How neural networks learn
We train neural network classifiers by showing them examples of inputs and outputs, and they learn to predict outputs from inputs. For example, we show a network pictures of dogs and cats, along with the breed of each, and it learns to guess the breed from the image. To be more precise: for a list of possible breeds, the network outputs its guess as to the probability of each breed. If it’s unsure, it will predict a roughly equal probability for each possible breed; if it’s highly confident, it will assign a probability of nearly 1.0 to its predicted breed.
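To make that concrete, here’s a minimal sketch in PyTorch. The breed list and the raw scores (logits) are made up purely for illustration; softmax is the standard step that turns raw scores into probabilities:

```python
import torch
import torch.nn.functional as F

breeds = ["beagle", "poodle", "siamese"]  # hypothetical label list

# Made-up raw scores (logits) a classifier might output for one image.
unsure    = torch.tensor([0.1, 0.0, -0.1])   # all scores similar
confident = torch.tensor([8.0, 0.5, -1.0])   # one score dominates

# Softmax converts raw scores into probabilities that sum to 1.
print(dict(zip(breeds, F.softmax(unsure, dim=0).tolist())))
# -> roughly equal: beagle ~0.37, poodle ~0.33, siamese ~0.30
print(dict(zip(breeds, F.softmax(confident, dim=0).tolist())))
# -> nearly certain: beagle ~0.999
```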
The training process consists of showing the network every image in the training set, along with its correct label. A pass through all the input data is called an “epoch”. Models generally need many passes through the training data – many epochs – to learn effectively.
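In code, that process is a nested loop: epochs on the outside, batches of labelled examples on the inside. Here’s a minimal sketch; the tiny linear model and the random stand-in data are illustrative placeholders, not a real setup:

```python
import torch

model = torch.nn.Linear(64, 3)   # stand-in for a real classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()  # the "loss" discussed below

# Stand-in dataset: 10 batches of 8 (input, label) pairs each.
train_data = [(torch.randn(8, 64), torch.randint(0, 3, (8,)))
              for _ in range(10)]

for epoch in range(11):                # each pass over all the data is one epoch
    for inputs, labels in train_data:  # every example is shown with its label
        opt.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()                # how wrong were we, and in what direction?
        opt.step()                     # nudge the weights to be less wrong
```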
During training, the neural network attempts to reduce the loss, which is (roughly speaking) a measure of how often the model is wrong, with highly confident wrong predictions penalised the most, and vice versa. We calculate the loss after each batch on the training set, and from time to time (often at the end of each epoch) we also calculate the loss on a set of inputs the model does not get to learn from – this is the “validation set”. Here’s what that looks like in practice when we train for 11 epochs.
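As an illustrative stand-in (the same toy model and random data as above, so the numbers themselves are meaningless), here’s that loop extended to report both losses across 11 epochs, with the validation set evaluated under torch.no_grad() so the model never learns from it:

```python
import torch

model = torch.nn.Linear(64, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

train_data = [(torch.randn(8, 64), torch.randint(0, 3, (8,))) for _ in range(10)]
valid_data = [(torch.randn(8, 64), torch.randint(0, 3, (8,))) for _ in range(3)]

for epoch in range(11):
    for inputs, labels in train_data:
        opt.zero_grad()
        loss = loss_fn(model(inputs), labels)  # training loss, per batch
        loss.backward()
        opt.step()
    with torch.no_grad():  # no gradients: the model can't learn from these
        val_loss = sum(loss_fn(x_b @ model.weight.T + model.bias if False else model(x_b), y_b)
                       for x_b, y_b in valid_data) / len(valid_data)
    print(f"epoch {epoch}: train loss {loss:.3f}, valid loss {val_loss:.3f}")
```

The “confident wrong predictions penalised the most” behaviour comes from the loss function itself: cross-entropy is minus the log of the probability assigned to the correct answer, so putting only 0.01 on the right breed costs −log(0.01) ≈ 4.6, a hedged 1/3 costs ≈ 1.1, and a confident correct 0.98 costs just ≈ 0.02.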