Bias
In very simple terms, bias is the amount by which a model's predictions differ from the target values on the training data, i.e., the training error. Every algorithm starts with some level of bias, because bias results from the simplifying assumptions a model makes so that the target function is easier to learn. High bias can lead to underfitting, which occurs when the algorithm is unable to capture the relevant relations between the features and the target outputs. A high-bias model makes more assumptions about the target function or end result; a low-bias model makes fewer.
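As a quick sketch of high bias, consider fitting a straight line to data generated from a quadratic function (the data, functions, and degrees here are illustrative choices, not part of the original text): the linear model's assumption is too strong, so even its training error stays high.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(0, 0.3, size=x.shape)  # quadratic target with a little noise

# A straight line assumes a linear relationship -- a strong assumption that
# the true quadratic function violates, so the linear model underfits.
line = np.poly1d(np.polyfit(x, y, deg=1))
quad = np.poly1d(np.polyfit(x, y, deg=2))

mse_line = np.mean((line(x) - y) ** 2)  # high training error: high bias
mse_quad = np.mean((quad(x) - y) ** 2)  # low training error: low bias
print(mse_line, mse_quad)
```

The linear fit's training error stays large no matter how much data we add, which is the signature of bias rather than noise.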
Variance
Variance indicates how much the model's performance would change if it were trained on different training data. A model with high variance fits the training data too closely: small changes in the dataset produce significant changes in the learned model, so it will not give generalized performance. This leads to overfitting.
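One way to see variance directly (a minimal sketch; the sine target, noise level, and polynomial degrees are assumptions for illustration) is to train the same model on two training sets that differ only in their noise, and measure how far apart the two fits end up:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 15)
f = lambda t: np.sin(2 * np.pi * t)

# Two training sets drawn from the same process, differing only in noise.
y1 = f(x) + rng.normal(0, 0.2, size=x.shape)
y2 = f(x) + rng.normal(0, 0.2, size=x.shape)

x_test = np.linspace(0.05, 0.95, 50)

def fit_predict(deg, y):
    # Fit a degree-`deg` polynomial and predict on held-out points.
    return np.poly1d(np.polyfit(x, y, deg))(x_test)

# The spread between the two fits measures sensitivity to the training sample.
spread_low = np.mean((fit_predict(3, y1) - fit_predict(3, y2)) ** 2)
spread_high = np.mean((fit_predict(12, y1) - fit_predict(12, y2)) ** 2)
print(spread_low, spread_high)
```

The degree-12 polynomial chases the noise in each sample, so its two fits disagree far more than the degree-3 fits do: that disagreement is variance.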
If we increase the bias of an overfit model, we make the model simpler and better able to generalize over the validation set. Its performance on the validation set will improve and, consequently, the variance will decrease. On the other hand, if we decrease the bias of an overfit model, we make the model even more complex, and it will not perform well on the validation set; the validation error will increase. This phenomenon is called the bias-variance trade-off.
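The trade-off described above can be sketched by sweeping model complexity and comparing training and validation error (again an illustrative setup with an assumed sine target and assumed polynomial degrees, not the author's experiment): training error keeps falling as complexity grows, while validation error eventually rises.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda t: np.sin(2 * np.pi * t)

x_train = rng.uniform(0, 1, 30)
y_train = f(x_train) + rng.normal(0, 0.2, size=30)
x_val = rng.uniform(0, 1, 200)
y_val = f(x_val) + rng.normal(0, 0.2, size=200)

def mse(deg, x_eval, y_eval):
    # Fit on the training set, evaluate mean squared error on x_eval/y_eval.
    model = np.poly1d(np.polyfit(x_train, y_train, deg))
    return np.mean((model(x_eval) - y_eval) ** 2)

degrees = (1, 4, 15)
train_err = {d: mse(d, x_train, y_train) for d in degrees}
val_err = {d: mse(d, x_val, y_val) for d in degrees}

for d in degrees:
    print(d, train_err[d], val_err[d])
```

Degree 1 is the high-bias end (both errors high), degree 15 the high-variance end (training error low, validation error climbing again), and an intermediate degree balances the two.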
The image below shows this trade-off between bias and variance: a model with high bias and low variance will underfit the data, while a model with low bias and high variance will overfit the data.