Does the 4-week model evaluation affect the final leaderboard standings?

Hi everyone,

Two questions, please.

  1. As per the title, are the final tournament standings fixed when the tournament closes, or do they change throughout the 4-week evaluation depending on your submission's performance?

  2. If standings change throughout the 4 weeks, how much do they typically move, in people's experience? E.g. if at tournament close you are sitting at position #50, how far could you move from there? Could you drop out of the top 100?

Thank you kindly.

The “leaderboard” on the website isn’t really a leaderboard. It can be sorted any way you like: by default it’s sorted by logloss, but you can click on any column and sort by that instead. All the leaderboard shows is that you’ve passed the originality and concordance checks and that you are > 75% consistent across eras. The logloss on the “leaderboard” is computed on known validation data, so you could get any value you like by training your model on the validation data. I suspect that anyone with a logloss < 0.690 is training on the validation data and won’t do well on the live data over the 4 weeks. Most of the people currently in the top 100 (by logloss) of the “leaderboard” will probably end up with nothing at the end of the 4 weeks. It happens almost every week.

Check the previous tournament results. The best models on the live data have validation logloss somewhere around ~0.690. Having the best validation logloss doesn’t mean you’ll get good live logloss performance.
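A toy sketch of why a validation logloss well below 0.690 is suspicious (the numbers and targets here are made up for illustration, not Numerai data): an uninformed model that predicts a constant 0.5 already scores ln 2 ≈ 0.693 on binary logloss, so beating ~0.690 by a wide margin on *known* validation data is easy if you simply memorize its labels, and says nothing about live performance.

```python
import math

def logloss(y_true, y_pred, eps=1e-15):
    """Average binary cross-entropy; predictions clipped away from 0 and 1."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Hypothetical validation targets (binary, as in the classic tournament setup).
val_targets = [0, 1, 1, 0, 1, 0, 0, 1]

# An uninformed model: predict 0.5 everywhere -> logloss = ln 2 ~= 0.6931.
honest = [0.5] * len(val_targets)
print(logloss(val_targets, honest))   # ~0.6931

# "Training on validation": memorize the known labels -> logloss ~0.01,
# which looks great on the leaderboard but carries zero live signal.
overfit = [0.99 if y == 1 else 0.01 for y in val_targets]
print(logloss(val_targets, overfit))  # ~0.0101
```

This is why the sort-by-logloss view of the leaderboard rewards memorization of the validation set rather than genuine predictive skill on unseen live eras.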


Thanks so much. Just the answer I was after.
Much appreciated.