Scaling Law in Machine Learning

Sep. 13, 2024

Scaling law in machine learning [1]:

In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, and training cost.

In general, a neural model can be characterized by four quantities: the size of the model, the size of the training dataset, the cost of training, and the error rate after training. Each of these can be expressed as a real number, usually written as $N$ (number of parameters), $D$ (dataset size), $C$ (computing cost), and $L$ (loss).

A neural scaling law is a theoretical or empirical statistical law relating these quantities. Other quantities also follow scaling laws of their own.
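
To make this concrete, the Kaplan et al. paper cited below [2] reports fits of the power-law form

$$
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C},
$$

where $N_c$, $D_c$, $C_c$ are fitted constants and the exponents $\alpha_N$, $\alpha_D$, $\alpha_C$ are small positive numbers (on the order of 0.05 to 0.1 in their experiments), each fit holding when the other factors are not the bottleneck.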

The abstract of the paper "Scaling Laws for Neural Language Models" [2]:

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
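
As an illustration, a power law like $L(N) = (N_c/N)^{\alpha_N}$ becomes a straight line in log-log space, so the exponent can be estimated with an ordinary linear fit. The sketch below does this in Python on made-up $(N, L)$ pairs; the data points and the single-variable form are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

# Hypothetical (parameter count, cross-entropy loss) pairs from a sweep
# over model sizes; real studies measure these empirically.
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = np.array([5.0, 4.2, 3.5, 2.9, 2.4])

# L(N) = (N_c / N)^alpha  =>  log L = -alpha * log N + alpha * log N_c,
# i.e. a straight line in log-log coordinates.
slope, intercept = np.polyfit(np.log(N), np.log(loss), deg=1)
alpha = -slope                   # scaling exponent
N_c = np.exp(intercept / alpha)  # scale constant

print(f"alpha ~= {alpha:.3f}, N_c ~= {N_c:.3e}")
# Extrapolate to a larger model under the fitted law.
print(f"predicted loss at N = 1e11: {(N_c / 1e11) ** alpha:.2f}")
```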

References

  1. Neural scaling law, Wikipedia, available at: https://en.wikipedia.org/wiki/Neural_scaling_law

  2. Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020), available at: https://arxiv.org/abs/2001.08361