As I continue to wrap my head around what TC is and how to train for it, I would be very curious to see the historical TC scores for the actual resolved final target values in each round. In other words, can you please share the TC score that a 100% perfect prediction set would have achieved for already-completed rounds?
In the world of corr, a perfect corr of 1.0 is theoretically possible if you predict the exact sort order for the samples. But what does that translate to in the TC world? Does a perfect set of predictions lead to a high TC? Or even a positive one? I would hope so, since the target values are all that we have to train on. But given the mysterious impact of the optimizer, who knows for sure?
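To make the corr side of this concrete, here's a toy sketch (all values made up) of why any predictions in the exact sort order of the targets score a rank correlation of 1.0, regardless of their scale:

```python
import numpy as np

def rank_corr(a, b):
    # Spearman-style rank correlation: Pearson correlation of the ranks
    ranks_a = np.argsort(np.argsort(a))
    ranks_b = np.argsort(np.argsort(b))
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

# Hypothetical resolved targets for a handful of stocks
targets = np.array([0.00, 0.25, 0.50, 0.75, 1.00])

# "Perfect" predictions: any values in the same sort order as the targets
perfect_preds = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

print(rank_corr(perfect_preds, targets))  # 1.0, exact sort order is all that matters
```

So corr has a known, attainable ceiling. The question is what the analogous ceiling looks like for TC.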
I’d also be interested in how many user-submitted predictions each round (if any) beat the perfect predictions in terms of TC. In short: just how good a TC does a perfect prediction set get?
Given the various pieces of optimizer state that we can neither know nor estimate (how much of a given security is already held, the balance across sectors, the balance across geographic regions, and so on), it may be that imperfect predictions actually score a higher TC than perfect ones.
The TC metric measures the improvement in performance achieved by combining a user’s predictions with a meta-model, compared to the baseline performance of the meta-model alone. The final TC score is influenced by the state of the optimizer at the time of prediction, and a less-than-perfect set of predictions can achieve a higher score if they complement the imperfect state of the meta-model. So the TC score rewards the discovery of information that enhances the performance of the meta-model. In that sense, you could say that the ability to uncover hard-to-find information is a key factor in achieving a high TC score.
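A toy sketch of that "improvement over the meta-model alone" idea, with everything hypothetical: the returns, the signals, and especially `portfolio_score`, which stands in for the real optimizer (whose actual state and constraints we can't see). This is not the real TC computation, just the shape of the comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical realized returns and two noisy views of them (all made up)
returns = rng.normal(size=500)
meta_model = returns + rng.normal(scale=2.0, size=500)  # meta-model's view
user_preds = returns + rng.normal(scale=2.0, size=500)  # a user's independent view

def portfolio_score(signal, realized):
    # Crude stand-in for the optimizer: long the top half, short the bottom
    # half, then score those positions against realized returns
    positions = np.where(signal > np.median(signal), 1.0, -1.0)
    return float(np.mean(positions * realized))

baseline = portfolio_score(meta_model, returns)
blended = portfolio_score(0.9 * meta_model + 0.1 * user_preds, returns)

# A TC-like quantity: how much blending in the user's predictions helped
tc_like = blended - baseline
print(tc_like)
```

Even in this toy version, `tc_like` depends on what the meta-model already knows, which is the point: a submission is rewarded for complementing the meta-model, not for being right in isolation.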