Cross Validation
Cross-validation allows us to compare different machine learning methods, get a sense of how they work in practice, and assess a model's performance. In particular, it estimates how well the model will handle new data it has not seen during training.
How Cross-Validation Works
We need data both to train and to test a machine learning model, so we split the available data set into separate training and testing portions.
Bad approaches:
- Using all the data to estimate the parameters (i.e., train the algorithm): the entire data set goes into modeling, leaving nothing to test the model on.
- Reusing the same data set for both training and testing: we need to check how the model behaves on data it was not trained on, and a model can look deceptively good on the very data it was fit to.
A better approach is to use 75% of the data for training and the remaining 25% for testing the model.
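A 75/25 split can be sketched in a few lines of plain Python. The toy data set and the split ratio here are illustrative assumptions, not part of any particular library's API:

```python
import random

# Hypothetical toy dataset: 20 (feature, label) pairs.
data = [(x, 2 * x) for x in range(20)]

random.seed(0)        # fixed seed for reproducibility
random.shuffle(data)  # shuffle first so both portions are representative

split = int(len(data) * 0.75)  # 75% train / 25% test
train_set, test_set = data[:split], data[split:]

print(len(train_set), len(test_set))  # → 15 5
```

Shuffling before splitting matters: if the data is ordered (e.g., by class or by date), a naive slice would give the model a biased view of the problem.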
A common refinement is to divide the data into four equal parts (folds), train on three folds, and test on the remaining one. We then repeat the process so that each fold serves as the test set exactly once, and compare or average the results to judge how well the method performs. This is called four-fold cross-validation.
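The four-fold procedure above can be sketched as follows. The data set and the "model" (a simple mean predictor scored with mean squared error) are stand-in assumptions to keep the example self-contained; in practice you would fit and score a real learning algorithm at those two steps:

```python
import random

# Hypothetical toy regression data: y depends roughly linearly on x.
random.seed(0)
data = [(x, 2 * x + random.random()) for x in range(20)]
random.shuffle(data)

k = 4  # four-fold cross-validation
fold_size = len(data) // k
folds = [data[i * fold_size:(i + 1) * fold_size] for i in range(k)]

errors = []
for i in range(k):
    test_fold = folds[i]  # one fold held out for testing
    train = [s for j, f in enumerate(folds) if j != i for s in f]
    # "Train" the simplest possible model: predict the mean y of the training folds.
    mean_y = sum(y for _, y in train) / len(train)
    # "Test" on the held-out fold using mean squared error.
    mse = sum((y - mean_y) ** 2 for _, y in test_fold) / len(test_fold)
    errors.append(mse)

# Each iteration trains on 3 folds (15 samples) and tests on 1 fold (5 samples);
# averaging over the 4 rounds gives a more stable estimate than a single split.
avg_error = sum(errors) / k
print(round(avg_error, 2))
```

Because every sample is used for testing exactly once, the averaged score is less sensitive to which particular rows happened to land in the test set.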