K-Fold Cross-Validation for Reducing Overfitting in Classifiers
In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsamples used exactly once as the validation data.
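The procedure above can be sketched in plain Python. This is a minimal illustration, not the post's own code: the helper names `k_fold_indices` and `cross_validate` are hypothetical, and the `evaluate` callback stands in for whatever train-and-score routine a real classifier would use.

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Randomly partition sample indices into k (nearly) equal-sized folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        # The first `remainder` folds absorb one extra index each.
        end = start + fold_size + (1 if i < remainder else 0)
        folds.append(indices[start:end])
        start = end
    return folds

def cross_validate(n_samples, k, evaluate, seed=0):
    """Repeat k times: fold i is the validation set, the other k-1 folds train."""
    folds = k_fold_indices(n_samples, k, seed)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores.append(evaluate(train_idx, val_idx))
    return scores
```

Each of the k scores comes from a model that never saw its validation fold during training, so averaging them gives a less optimistic estimate of generalization than a single train/test split. In practice, libraries such as scikit-learn provide this via `KFold` and `cross_val_score`.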