
Encouraging an appropriate representation simplifies training of neural networks

By: Krisztian Buza  
Open Access | Jul 2020

Abstract

A common assumption about neural networks is that they can learn an appropriate internal representation on their own, as exemplified by end-to-end learning. In this work we challenge this assumption. We consider two simple tasks and show that the state-of-the-art training algorithm fails, even though the model itself is able to represent an appropriate solution. We demonstrate that encouraging an appropriate internal representation allows the same model to solve these tasks. While we do not claim that it is impossible to solve these tasks by other means (such as neural networks with more layers), our results illustrate that integrating domain knowledge in the form of a desired internal representation may improve the generalization ability of neural networks.
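To make the idea concrete, below is a minimal sketch (not the paper's implementation) of how "encouraging an appropriate internal representation" can be realized: in addition to the usual task loss, an auxiliary term penalizes the distance between the hidden-layer activations and a desired representation derived from domain knowledge. The network architecture, the weighting factor `alpha`, and the randomly generated data are all assumptions for illustration only.

```python
# Illustrative sketch, assuming PyTorch and a simple two-layer network.
# The auxiliary loss pulls the hidden activations toward a desired
# internal representation (h_desired), alongside the task loss.
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))  # internal representation
        return self.out(h), h

# Hypothetical data: inputs x, task targets y_true, and a desired hidden
# representation h_desired encoding domain knowledge.
n_in, n_hidden, n_out, n_samples = 10, 4, 1, 64
x = torch.randn(n_samples, n_in)
y_true = torch.randn(n_samples, n_out)
h_desired = torch.rand(n_samples, n_hidden)

model = TwoLayerNet(n_in, n_hidden, n_out)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
task_loss_fn = nn.MSELoss()
repr_loss_fn = nn.MSELoss()
alpha = 0.5  # assumed weight of the representation term

for epoch in range(200):
    optimizer.zero_grad()
    y_pred, h = model(x)
    # Total loss = task loss + alpha * distance to the desired representation.
    loss = task_loss_fn(y_pred, y_true) + alpha * repr_loss_fn(h, h_desired)
    loss.backward()
    optimizer.step()
```

Setting `alpha = 0` recovers ordinary end-to-end training; a positive `alpha` injects the prior knowledge about what the hidden layer should compute.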

Language: English
Page range: 102-111
Submitted on: Mar 26, 2020 | Accepted on: Jun 7, 2020 | Published on: Jul 16, 2020
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2020 Krisztian Buza, published by Sapientia Hungarian University of Transylvania
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.