I have a question from before; it might have been missed, or I didn’t understand the answer, so I’ll try again.
You write the goal of true contribution is
“to estimate how much a user’s signal improves or detracts from the returns of Numerai’s portfolio”
and true contribution is
“gradient of optimized portfolio returns with respect to the NMR staked”
“if a data scientist staked slightly more on their model (thereby increasing their weight in the Stake-Weighted Meta Model), what would the change be to post-optimization portfolio returns?”
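To make sure I’m reading that definition right, here’s a toy finite-difference sketch of "gradient of portfolio returns with respect to stake". Everything here is my own stand-in, not Numerai’s actual optimizer: `portfolio_return` just scores the stake-weighted meta model by correlation with targets.

```python
import numpy as np

def portfolio_return(stakes, preds, targets):
    """Toy stand-in for post-optimization portfolio returns:
    correlation of the stake-weighted meta model with the targets."""
    weights = stakes / stakes.sum()
    meta = weights @ preds  # stake-weighted meta model
    return np.corrcoef(meta, targets)[0, 1]

def tc_estimate(stakes, preds, targets, i, eps=1e-4):
    """Finite-difference gradient of returns w.r.t. user i's stake."""
    bumped = stakes.copy()
    bumped[i] += eps
    return (portfolio_return(bumped, preds, targets)
            - portfolio_return(stakes, preds, targets)) / eps

rng = np.random.default_rng(0)
targets = rng.normal(size=500)
# User 0 has the least noisy predictions; users 1 and 2 are noisier.
preds = np.vstack([targets + rng.normal(scale=s, size=500)
                   for s in (0.5, 2.0, 2.0)])
stakes = np.array([1.0, 1.0, 1.0])
tcs = [tc_estimate(stakes, preds, targets, i) for i in range(3)]
```

In this toy setup, the user with the best predictions gets a positive gradient at equal stakes, and the noisier users get lower (here negative) ones, which is what motivates my question about what happens once everyone's stake is already sized right.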
So is the gradient the appropriate metric for the true contribution goal?
For instance, imagine a user with the best predictions and a perfectly sized stake. Sure, other users’ predictions can be used to improve the portfolio, but this particular user shouldn’t increase or decrease their stake, because it is already just right.
Do they get a TC of zero? Then, as a result, would they be incentivised not to stake on this model?
If so, the interests might be misaligned: the HF wants TC = 0 (i.e. the optimal stake on a model), but the staker wants to maximise TC × stake and so wants TC >> 0.
Or is the user’s own stake always zeroed out when calculating their TC?
TL;DR: where am I going wrong if I conclude that TC encourages increased staking on models that will help the HF, but discourages continued staking on models already contributing close to optimally?
I don’t think it works that way; if someone’s predictions are the best, then increasing their stake will increase the performance of the portfolio, so their TC should be positive.
One would, OTOH, get a zero TC if neither increasing nor decreasing one’s stake makes any difference to the portfolio, which would imply that one’s predictions are pretty much the same as the average prediction.
Note: this is predicated on the assumption I understand TC. That’s still a very weak assumption…
Thanks @mic for the great question. This is a concern I had as well, but it turns out to be far more of a theoretical concern than an empirical or practical one.

A nice way to assess this is to evaluate the distribution of gradients for each user across the 100 rounds of dropout. Because in each round of dropout ~50% of staked users have their stakes zeroed out, each user ends up with ~50 gradients taken with their stake set to 0 and ~50 taken at their full stake. If we compare these two distributions of gradients using a t-test and find their difference to be statistically insignificant, then the effect of a user’s own stake on their TC estimate isn’t much of a concern. I ran this analysis on the largest staker, user stocks_ai_g, and found that the difference between the two distributions was indeed statistically insignificant. It looks like there could be a significant difference with extremely large stakes, i.e. 5%+ of the total staked, but no one is even close to that, so it really doesn’t matter.

Furthermore, the optimal distribution of stakes is a moving target as the market evolves (what is optimal one week may not be the next), which makes it even less of a concern. And to encourage originality, it has to work such that increased stakes on similar signals yield less and less payout; otherwise it would have the same problems as CORR. But it is something we’ll keep an eye on, just in case!
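For anyone who wants to replicate the shape of that check, here is a minimal sketch with made-up numbers. The real per-round gradients come from the dropout runs; the means, spreads, and sample sizes below are purely illustrative, and I use a hand-rolled Welch t-statistic so nothing beyond NumPy is needed.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(var_a + var_b)

rng = np.random.default_rng(42)
# Stand-in for one user's per-round TC gradients across 100 dropout
# rounds: ~50 rounds with their stake zeroed, ~50 at full stake.
grads_zeroed = rng.normal(loc=0.020, scale=0.05, size=50)
grads_full = rng.normal(loc=0.021, scale=0.05, size=50)

t = welch_t(grads_zeroed, grads_full)
# |t| under ~1.98 (two-sided 5% critical value, df ~ 98) means the
# user's own stake has no statistically detectable effect on their TC.
stake_effect_insignificant = abs(t) < 1.98
```

If the two distributions differ significantly, that would be a sign the user's own stake is large enough to bias their TC estimate, which is the 5%+-of-total-stake regime mentioned above.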
The results of your gradient analysis are interesting.
Yes, and then throw in the effect of TC feedback on staking, which is obviously not represented in the backfilled history. That feedback will probably have a longer response time, but both effects could affect the stability of TC numbers.
Yes, with care to maintain the core existing signals, which have value in themselves and upon which the originality of the new signals depends.
Have you tried a simulation on a round where stakes are modified in response to TC feedback over a number of iterations, to see if it is stable and where HF returns end up relative to existing staking and to optimal staking?
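Even a toy version of that loop might be informative. Here is a sketch of what I mean, with everything assumed on my side: the return proxy, the multiplicative stake-update rule, and the step size are all illustrative stand-ins, not how Numerai's optimizer or payouts actually work.

```python
import numpy as np

def portfolio_return(weights, preds, targets):
    """Toy stand-in for post-optimization portfolio returns:
    correlation of the weighted meta model with the targets."""
    meta = weights @ preds
    return np.corrcoef(meta, targets)[0, 1]

def tc(stakes, preds, targets, eps=1e-4):
    """Finite-difference TC: gradient of returns w.r.t. each stake."""
    base = portfolio_return(stakes / stakes.sum(), preds, targets)
    grads = np.empty(len(stakes))
    for i in range(len(stakes)):
        bumped = stakes.copy()
        bumped[i] += eps
        grads[i] = (portfolio_return(bumped / bumped.sum(),
                                     preds, targets) - base) / eps
    return grads

rng = np.random.default_rng(1)
targets = rng.normal(size=400)
# Four users with increasingly noisy predictions of the same targets.
preds = np.vstack([targets + rng.normal(scale=s, size=400)
                   for s in (0.5, 1.0, 2.0, 3.0)])

stakes = np.ones(4)
returns_path = []
for _ in range(50):
    grads = tc(stakes, preds, targets)
    # Assumed staker behaviour: scale stakes up or down with TC payouts,
    # never dropping below a small floor.
    stakes = np.clip(stakes * (1 + 5.0 * grads), 0.01, None)
    returns_path.append(portfolio_return(stakes / stakes.sum(),
                                         preds, targets))
```

In this toy run, stake concentrates on the least noisy user and the return proxy improves over the iterations; whether the real system behaves that stably under noisy weekly targets is exactly the question.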
There are many interactions and levers, it’s definitely going to be interesting to see how it works! Good job so far!