Hanna Sophia Wutte
How implicit regularization of Neural Networks affects the learned function
What | |
---|---|
When | 06.12.2019, from 12:00 to 13:15 |
Where | Ernst-Zermelo-Straße 1, Room 404, 4th floor |
Today, various forms of neural networks are trained to perform
approximation tasks in many fields. However, the solutions obtained are
not fully understood. Empirical results suggest that training favors
regularized solutions. These observations motivate us to analyze
properties of the solutions found by the gradient descent algorithm
frequently employed to perform the training task. As a starting point,
we consider one-dimensional (shallow) neural networks in which the
weights are chosen randomly and only the terminal layer is trained. We
show that the resulting solution converges to the smooth spline
interpolation of the training data as the number of hidden nodes tends
to infinity. This might give valuable insight into the properties of
the solutions obtained using gradient descent methods in general
settings.
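If "smooth spline interpolation" is understood in the usual variational sense (an assumption on our part; the abstract does not spell it out), the limiting object solves

```latex
\min_{f}\ \int \bigl(f''(x)\bigr)^2 \,\mathrm{d}x
\quad \text{subject to} \quad f(x_i) = y_i,\ i = 1,\dots,N,
```

whose solution is the natural cubic spline through the training data.

Below is a minimal numerical sketch of the setting the abstract describes: a shallow network on one-dimensional inputs whose hidden weights are drawn randomly and frozen, with only the terminal layer trained by gradient descent. The ReLU activation, the sampling distributions, the squared loss, and the zero initialization are illustrative assumptions; none of these details are fixed by the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# A one-dimensional training set.
x_train = np.linspace(-1.0, 1.0, 10)
y_train = np.sin(np.pi * x_train)

n_hidden = 2000  # stand-in for the infinite-width limit studied in the talk

# Hidden layer: weights and biases drawn once at random, then frozen.
w = rng.standard_normal(n_hidden)
b = rng.uniform(-1.0, 1.0, n_hidden)

def features(x):
    """ReLU features of the frozen hidden layer; shape (len(x), n_hidden)."""
    return np.maximum(np.outer(x, w) + b, 0.0)

Phi = features(x_train)

# Train only the terminal-layer weights c by full-batch gradient descent
# on the mean squared error; the hidden layer never changes.
c = np.zeros(n_hidden)
lr = 5e-4
for _ in range(100_000):
    residual = Phi @ c - y_train
    c -= lr * (Phi.T @ residual) / len(x_train)

print("max training error:", np.max(np.abs(Phi @ c - y_train)))

# The learned function is x -> features(x) @ c; per the result above, as
# n_hidden grows it should approach a smooth spline interpolation of the
# data (e.g. plot y_grid against a natural cubic spline to compare).
x_grid = np.linspace(-1.0, 1.0, 201)
y_grid = features(x_grid) @ c
```

Training only the terminal layer makes the problem linear in c, which is what makes the infinite-width limit tractable; with ten data points and two thousand random features the system is heavily overparametrized, and gradient descent from zero initialization selects, by a standard least-squares argument, the minimum-norm interpolant among the many available.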