Could someone clarify the differences between the following terms? They all seem to describe the same thing in machine learning:
Overfitting vs high variance vs large estimation error vs high true risk vs poor generalisation.
- and -
Underfitting vs high bias vs large approximation error vs large empirical risk vs large test error.
Can the terms within each group effectively be used interchangeably?
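To make the question concrete, here is a small numpy sketch (my own setup, not from any particular textbook) fitting polynomials of increasing degree to noisy data. The low-degree fit has high training and test error, while the high-degree fit has near-zero training error but higher test error. Is the first situation what all the terms in the second group describe, and the second situation what all the terms in the first group describe?

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, x_test.size)

def fit_and_errors(degree):
    # Least-squares polynomial fit of the given degree,
    # returning (training MSE, test MSE).
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for d in (1, 3, 9):
    tr, te = fit_and_errors(d)
    print(f"degree {d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

My current understanding is that degree 1 would be called "underfitting" and degree 9 "overfitting", but I cannot tell whether e.g. "large empirical risk" refers to exactly the same quantity as the training MSE above.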