# A simple way to prevent neural networks from overfitting

The paper that introduced the concept of [Dropout regularization](https://wiki.g15e.com/pages/Dropout%20regularization.txt).

## Abstract

> [Deep neural nets](https://wiki.g15e.com/pages/Deep%20neural%20network.txt) with a large number of [parameters](https://wiki.g15e.com/pages/Parameter%20(machine%20learning).txt) are very powerful [machine learning](https://wiki.g15e.com/pages/Machine%20learning.txt) systems. However, [overfitting](https://wiki.g15e.com/pages/Overfitting.txt) is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. [Dropout](https://wiki.g15e.com/pages/Dropout%20regularization.txt) is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during [training](https://wiki.g15e.com/pages/Training%20(machine%20learning).txt). This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other [regularization methods](https://wiki.g15e.com/pages/Regularization%20(machine%20learning).txt). We show that dropout improves the performance of neural networks on [supervised learning](https://wiki.g15e.com/pages/Supervised%20learning.txt) tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
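
The drop-and-rescale scheme described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: `dropout_train`, `dropout_test_weights`, and `p_retain` are hypothetical names, and the test-time step simply scales the weights by the retention probability, as the abstract's "single unthinned network that has smaller weights" suggests.

```python
import numpy as np

def dropout_train(activations, p_retain=0.5, rng=None):
    """Training-time dropout: keep each unit with probability p_retain and
    zero the rest, so each forward pass trains a randomly 'thinned' sub-network."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(activations.shape) < p_retain).astype(activations.dtype)
    return activations * mask

def dropout_test_weights(weights, p_retain=0.5):
    """Test-time approximation from the abstract: use the single unthinned
    network, but scale the outgoing weights by p_retain so the expected input
    to the next layer matches what it saw during training."""
    return weights * p_retain

# Hypothetical usage on one hidden layer's activations.
h = np.array([[0.2, 1.5, 0.0, 0.7]])
print(dropout_train(h, p_retain=0.5))              # random units zeroed each pass
print(dropout_test_weights(np.ones((4, 3)), 0.5))  # weights halved at test time
```

Averaging the predictions of all exponentially many thinned networks is intractable; scaling the weights by the retention probability is the cheap approximation the abstract refers to.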