I was thinking about something that could increase test performance a little and was wondering if any of you do this:
- Train models on training set
- Compare models on validation set
- Pick best model
- Retrain the chosen model on training + validation combined for extra performance
Does anyone do this? Why or why not? I don't, because the validation scores come back confusingly high after submission — which makes sense in hindsight, since once the model has been retrained on the validation data, its score on that data is inflated and no longer an honest estimate. But the extra training data could still genuinely improve test performance. Let me know your thoughts and opinions.
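For concreteness, here is a minimal sketch of the workflow above using scikit-learn with synthetic data (the model choices and split sizes are just illustrative assumptions, not a recommendation):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split into train / validation / test (60 / 20 / 20 here, arbitrarily)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# 1) Train candidate models on the training set
candidates = {
    "logreg": LogisticRegression(),
    "rf": RandomForestClassifier(random_state=0),
}
for model in candidates.values():
    model.fit(X_train, y_train)

# 2) Compare on the validation set and 3) pick the best
val_scores = {name: m.score(X_val, y_val) for name, m in candidates.items()}
best_name = max(val_scores, key=val_scores.get)

# 4) Retrain a fresh copy (same hyperparameters) on train + validation.
# After this refit, the model's score on X_val is no longer an honest
# estimate, because the validation data is now part of its training data.
best = clone(candidates[best_name])
best.fit(np.vstack([X_train, X_val]), np.hstack([y_train, y_val]))

# Evaluate once on the untouched test set
print(f"best={best_name}  test accuracy={best.score(X_test, y_test):.3f}")
```

The key caveat is the one in the comment: after the final refit you can only trust the held-out test score, since the validation split has been consumed as training data.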