
Obtaining Top Neural Network Performance Without Any Training

We know less about NNs than we thought

Andre Ye
8 min read · Sep 21, 2020

How do neural networks generalize when they are so overparametrized?

As we try to answer this question with recent research, we will find that we know much less about neural networks than we thought, and understand why a randomly initialized network can perform just as well as a trained one.

In traditional machine learning practice, it's conventional to minimize the number of parameters in a model to prevent overfitting and to ensure genuine learning rather than memorization. Deep learning engineers, on the other hand, simply keep making neural networks larger and larger — and somehow it works. This violates what should be common sense.
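To make "overparametrized" concrete, here is a quick back-of-the-envelope count for a fully connected network with layer sizes 784 → 1024 → 1024 → 10 (an illustrative shape chosen for this sketch, not one taken from the article), compared against an MNIST-sized training set:

```python
# Parameter count of a fully connected network, layer sizes 784 -> 1024 -> 1024 -> 10.
layers = [784, 1024, 1024, 10]

# Each layer contributes a weight matrix (fan_in * fan_out) plus one bias per output unit.
total_params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(total_params)  # -> 1863690

# MNIST-sized training set: 60,000 examples.
params_per_example = total_params / 60_000
print(round(params_per_example, 1))  # -> 31.1
```

Even this modest network has roughly 31 free parameters per training example — far more capacity than classical statistics would suggest is safe.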

One should not increase, beyond what is necessary, the number of entities required to explain anything.
- Occam’s Razor

It’s not uncommon for modern neural networks to achieve 99.9 percent or even 100 percent accuracy on the training set — which would ordinarily be a warning sign of overfitting. Surprisingly, however, neural networks can achieve similarly high scores on the test set.
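This phenomenon is easy to reproduce on a toy problem. The sketch below (all details — the two-Gaussian dataset, the hidden width, the training schedule — are assumptions made for illustration, not the article's setup) trains a one-hidden-layer MLP with about 4,000 parameters on only 160 training points, then checks that near-perfect training accuracy coexists with high test accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification: two well-separated Gaussian blobs in 2-D.
n_per_class = 100
X0 = rng.normal(loc=-2.0, scale=1.0, size=(n_per_class, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(n_per_class, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])[:, None]

# Shuffle, then split 80/20 into train and test sets.
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
X_tr, X_te, y_tr, y_te = X[:160], X[160:], y[:160], y[160:]

# Heavily overparameterized one-hidden-layer MLP:
# 2*1000 + 1000 + 1000 + 1 = 4001 parameters for only 160 training points.
hidden = 1000
W1 = rng.normal(0.0, 1.0, size=(2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, np.sqrt(2.0 / hidden), size=(hidden, 1))
b2 = np.zeros(1)

def forward(X):
    z = X @ W1 + b1          # pre-activations
    h = np.maximum(z, 0.0)   # ReLU
    logits = h @ W2 + b2
    return z, h, logits

# Full-batch gradient descent on binary cross-entropy.
lr = 0.1
for step in range(500):
    z, h, logits = forward(X_tr)
    p = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    g_logits = (p - y_tr) / len(X_tr)     # d(BCE)/d(logits)
    gW2 = h.T @ g_logits
    gb2 = g_logits.sum(0)
    gz = (g_logits @ W2.T) * (z > 0)      # backprop through ReLU
    gW1 = X_tr.T @ gz
    gb1 = gz.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def accuracy(X, y):
    _, _, logits = forward(X)
    return float(((logits > 0) == (y > 0.5)).mean())

train_acc = accuracy(X_tr, y_tr)
test_acc = accuracy(X_te, y_te)
print(f"train acc: {train_acc:.3f}, test acc: {test_acc:.3f}")
```

Despite having roughly 25x more parameters than training examples, the network generalizes to the held-out points rather than merely memorizing — the puzzle the article sets out to explore.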

