Deep neural networks (DNNs) can have tens of thousands of parameters, and in some cases even millions. This huge number of parameters gives the network a great deal of freedom and the flexibility to fit highly complex data.
This flexibility is beneficial only up to a point. Beyond it, the network starts to overfit: it memorizes the training data, including its noise, instead of learning patterns that generalize to new data.
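The effect described above can be made concrete with a small, hedged sketch (not from the original article): a model with as many parameters as training points drives training error to essentially zero, yet generalizes worse than a simpler model. Polynomial fitting with NumPy stands in here for a neural network, since the mechanism, excess capacity fitting noise, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function (a straight line).
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test  # noise-free ground truth for evaluation

def fit_and_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

simple_train, simple_test = fit_and_errors(1)      # 2 parameters
flexible_train, flexible_test = fit_and_errors(9)  # 10 parameters: one per point

# The flexible model interpolates the noisy training set (train MSE ~ 0),
# but its wiggles between the points hurt it on unseen inputs: overfitting.
print(f"simple:   train={simple_train:.4f}  test={simple_test:.4f}")
print(f"flexible: train={flexible_train:.4f}  test={flexible_test:.4f}")
```

The remedies the article goes on to discuss (regularization, dropout, early stopping, and so on) all work by restraining this excess capacity rather than removing parameters outright.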