
Obtaining Top Neural Network Performance Without Any Training
How do neural networks generalize when they are so overparametrized?
As we try to answer this question with recent research, we will find that we know much less about neural networks than we thought, and see why a randomly initialized network can perform just as well as a trained one.
In more traditional machine learning practice, it’s conventional to limit the number of parameters in a model to prevent overfitting and to encourage true learning instead of memorization. Machine learning engineers, on the other hand, keep making neural networks larger and larger, and somehow it works. This seems to violate common sense.
One should not increase, beyond what is necessary, the number of entities required to explain anything.
- Occam’s Razor
It’s not uncommon for modern neural networks to achieve 99.9 percent or even 100 percent accuracy on the training set, which would usually be a warning sign of overfitting. Surprisingly, however, neural networks can achieve similarly high scores on the test set.
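This behavior is easy to reproduce on a small scale. The sketch below (my own illustration, not from any paper discussed here) trains a deliberately overparametrized scikit-learn MLP on the digits dataset; the hidden layer alone has tens of thousands of weights against only ~1,300 training examples, yet the model both fits the training set almost perfectly and generalizes well:

```python
# Minimal sketch: an overparametrized network can fit the training set
# nearly perfectly and still score highly on held-out data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Roughly 64*512 + 512*10 ≈ 38k weights vs. ~1,350 training samples.
clf = MLPClassifier(hidden_layer_sizes=(512,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("train accuracy:", clf.score(X_train, y_train))
print("test  accuracy:", clf.score(X_test, y_test))
```

On a typical run, training accuracy approaches 100 percent while test accuracy remains high rather than collapsing, which is exactly the puzzle this article explores.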