Complex models such as deep neural networks are prone to overfitting because they have enough capacity to memorize the idiosyncratic patterns of the training set instead of learning regularities that generalize to unseen data.
Any modification we make to a learning algorithm that’s intended to reduce its generalization error but not its training error is called regularization. By keeping the model suitably simple, regularization techniques help the network generalize well to data points it hasn’t seen before.
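One common instance is L2 regularization (weight decay), which adds a penalty proportional to the squared magnitude of the weights to the training loss, discouraging overly large weights. Below is a minimal sketch in PyTorch; the toy model, random data, and the value of `l2_lambda` are placeholders chosen purely for illustration, not a prescription from the text.

```python
import torch
import torch.nn as nn

# Toy setup: a small fully connected network and random data (illustrative only).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
inputs, targets = torch.randn(128, 20), torch.randn(128, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
l2_lambda = 1e-4  # hypothetical penalty strength; tuned per problem in practice

for epoch in range(10):
    optimizer.zero_grad()
    predictions = model(inputs)

    # Data-fitting term plus an explicit L2 penalty on all parameters.
    # The penalty lowers training fit slightly but aims to reduce generalization error.
    l2_penalty = sum((p ** 2).sum() for p in model.parameters())
    loss = criterion(predictions, targets) + l2_lambda * l2_penalty

    loss.backward()
    optimizer.step()
```

With plain SGD, the same effect can be achieved by passing `weight_decay=1e-4` to the optimizer, which folds the penalty into the update step instead of the loss.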