# True Contribution for dummies

Here is my attempt at summarizing how TC works. I would really appreciate it if you could correct me where I am wrong. My goal is to give a quick overview of TC mechanics without going through the code details in the original post. Although I personally prefer code over words, not everybody has the time to go through the code, and that prevents useful discussions. So here is my summary:

1 - At the beginning of a round, Numerai computes the Stake Weighted Meta Model (SWMM): each model's predictions count in the Meta Model in proportion to the model's stake.
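As a toy sketch of this step (illustrative names and data, not Numerai's actual code), the SWMM is just a stake-weighted average of the per-model predictions:

```python
import numpy as np

# Toy sketch of step 1: the SWMM as a stake-weighted average of
# per-model predictions. Data and names are made up for illustration.
def stake_weighted_metamodel(preds: np.ndarray, stakes: np.ndarray) -> np.ndarray:
    weights = stakes / stakes.sum()   # each model counts as much as its stake
    return preds @ weights            # one blended prediction per stock

preds = np.array([[0.2, 0.8],
                  [0.6, 0.4],
                  [0.9, 0.1]])        # rows = stocks, columns = models
stakes = np.array([3.0, 1.0])         # model 1 has 3x the stake of model 2
swmm = stake_weighted_metamodel(preds, stakes)
```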

2 - The SWMM predictions are fed into the portfolio optimizer, which decides Numerai's portfolio positions while respecting the constraints the fund has to obey (e.g. maximum exposure to a sector, country, stock, factor, etc.). Because of these constraints, the optimizer is limited in how much the predictions are taken into account; that's why the simple correlation of each model's predictions with actual market performance cannot express the true contribution of a model.

3 - At the end of a round, the real market data is used to compute the returns of Numerai's portfolio and the gradient of those returns with respect to the stakes. This gradient is the TC.

4 - TC is "the direction and relative magnitude to modify stakes" to obtain a portfolio with higher returns. That is, if we built a new SWMM with the modified stakes and fed it into the same optimizer of step 2, the resulting portfolio would have produced higher returns in that round. And if we applied this process (gradient computation and stake update) multiple times, we could find the optimal stake values for that round, the ones that produce the portfolio with the highest returns. That would be overfitting, though, so the stakes are updated only once per round.
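Steps 3 and 4 can be sketched with a toy pipeline. Here the constrained optimizer is replaced by a trivial market-neutral tilt, the gradient is estimated by finite differences, and all names, data, and the learning rate are illustrative assumptions, not Numerai's implementation:

```python
import numpy as np

# Toy sketch of steps 3-4: TC as the gradient of realized portfolio
# returns with respect to the stakes. The "optimizer" below is a
# stand-in (a simple market-neutral tilt), not Numerai's real one.
def portfolio_return(stakes, preds, realized):
    weights = stakes / stakes.sum()
    swmm = preds @ weights               # step 1: the stake-weighted metamodel
    positions = swmm - swmm.mean()       # stand-in for the constrained optimizer
    return positions @ realized          # realized return of the proxy portfolio

def true_contribution(stakes, preds, realized, eps=1e-6):
    # Finite-difference estimate of d(return)/d(stake_i)
    base = portfolio_return(stakes, preds, realized)
    grad = np.zeros_like(stakes)
    for i in range(len(stakes)):
        bumped = stakes.copy()
        bumped[i] += eps
        grad[i] = (portfolio_return(bumped, preds, realized) - base) / eps
    return grad

preds = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.1]])
realized = np.array([0.01, -0.02, 0.03])  # made-up market returns
stakes = np.array([3.0, 1.0])
tc = true_contribution(stakes, preds, realized)
stakes_next = stakes + 0.1 * tc           # a single update step, as in step 4
```

A single small step in the gradient direction yields a portfolio that would have earned more in that round, which is exactly the property step 4 describes.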

5 - Round by round, the model stakes, which are updated by TC, will tend to gradually approach the values that generate the optimal Stake Weighted Meta Model, i.e. the SWMM that, fed into the optimizer, results in the portfolio with the highest returns.

6 - However, because of market and model volatility, models being added and removed, and stakes being increased and decreased by users, there will never be a stable optimal stake value, so we should always expect TC fluctuations.

7 - We can finally say that TC is the direction and relative magnitude to modify a model's stake to make it optimal with respect to Numerai's portfolio. The stake value is the actual model contribution, and TC is the round-by-round adjustment.

8 - The payout is based on TC. This works great when a model's stake is below its optimal value: TC is positive if the model is useful to Numerai's portfolio and negative otherwise. A model with negative TC will have its stake depleted, and a model with positive TC will have its stake increased; however, only to some extent.

9 - I see the following problems with a TC-based payout when a model is indeed contributing to Numerai's fund:

• when the model's stake reaches its optimal value, TC will fluctuate around 0 (continuously adjusting due to tournament noise)
• when a user increases the model's stake above its optimal value, TC will be negative

I have read several times users saying that a model is no longer useful because its TC is zero or even negative; however, that is not correct. In the two scenarios above, if the model were useless or detrimental to the fund, its stake would already have reached 0. So the model is still useful to the fund, and if it were removed from the tournament, Numerai's portfolio would be affected negatively.

So we have a problem here: the model is useful, but it is not rewarded for that. Even worse, the model's stake is stuck in the blockchain and may even be burned.

The fact that users don't know the optimal stake value for their models makes the issue impossible to deal with. And it's not a small issue.

10 - There is an additional step in the TC computation that I haven't discussed yet: the gradient is averaged over 100 "dropout rounds", in each of which a random subset of the stakes is dropped before the SWMM is built.

How much do these 100 dropout rounds affect the conclusions I drew on TC? Nothing that I can think of.

7 Likes

One thing to note is that TC is not computed with respect to the actual Numerai portfolio. The first version of TC did that, but it was deemed (properly, I think) unfair that good predictions could be essentially rejected because they didn't match well with the actual current Numerai holdings, even if they otherwise fit the constraints of the optimizer. So TC is calculated on a proxy portfolio that is created by running the SWMM through the optimizer, but it isn't the actual portfolio, which has a few additional constraints concerning which trades are actually possible given current holdings. (Because the SWMM is built from scratch every week, but you can't replace the entire real portfolio every week.) You can find discussion of this change from the initial version of TC to the final version around here somewhere.

2 Likes

How will they simulate TC on validation? Wouldn't they need to simulate the SWMM on validation as well, then? Or would they use the previous live eras from validation? If the latter, would the simulated TC really be indicative of future TC?

1 Like

In the fireside chat, they seemed to indicate they'd use the actual SWMM from the period (and those SWMM preds would be available to us as well). And yes, that might not hold up forever, but it is better than nothing. If they keep releasing the SWMM predictions, then we can track where the metamodel is going, to some extent.

3 Likes

I don't understand the scoring at all, or how the current process is deemed "fair" when my models outperform almost all of the top 10 models. I have been on this platform over a year and have never even been in the top 100. It makes no sense, because I can just hold NMR and not risk anything, without setting up compute and using my resources for such a small return.
What's the point if a non-correlating model is number 1?

> I have read several times users saying that a model is no longer useful because its TC is zero or even negative; however, that is not correct. In the two scenarios above, if the model were useless or detrimental to the fund, its stake would already have reached 0. So the model is still useful to the fund, and if it were removed from the tournament, Numerai's portfolio would be affected negatively.
>
> So we have a problem here: the model is useful, but it is not rewarded for that.

I'm not sure if Numerai does this, but it would seem to me that this issue could be handled the same way that MMC is calculated. For MMC they remove the model's contribution from the metamodel before scoring: effectively, they're comparing your predictions to a metamodel without your model's predictions. Presumably a similar thing could be done with TC, which should address the issue you raise. I'm not sure they do that, though (I haven't seen anything about it in the documentation).
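The leave-one-out idea above could be sketched like this (illustrative code, not Numerai's implementation): rebuild the stake-weighted metamodel with model i removed, then score model i against that reduced metamodel.

```python
import numpy as np

# Sketch of an MMC-style leave-one-out metamodel: drop model i's
# predictions and stake, then re-blend the rest. Names are made up.
def metamodel(preds: np.ndarray, stakes: np.ndarray) -> np.ndarray:
    weights = stakes / stakes.sum()
    return preds @ weights

def loo_metamodel(preds: np.ndarray, stakes: np.ndarray, i: int) -> np.ndarray:
    keep = np.ones(len(stakes), dtype=bool)
    keep[i] = False                       # remove model i entirely
    return metamodel(preds[:, keep], stakes[keep])

preds = np.array([[0.2, 0.8],
                  [0.6, 0.4],
                  [0.9, 0.1]])            # rows = stocks, columns = models
stakes = np.array([3.0, 1.0])
mm_without_model0 = loo_metamodel(preds, stakes, 0)
```

With only two models, dropping model 0 leaves the metamodel equal to model 1's raw predictions, which makes the mechanics easy to verify.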

That said, I'd be curious to know how much of a problem this turns out to be in practice. I would think that a model's stake would have to be quite high before a valuable and unique model got penalized in this way.

1 Like

> I don't understand the scoring at all, or how the current process is deemed "fair" when my models outperform almost all of the top 10 models.

What metric are you looking at when you say that your models outperform the top 10? Correlation?

Maybe I'm wrong and it was supposed to say top 100… and yes, correlation. Maybe I'm just too simple and don't understand the complex way in which the leaderboard is determined. But if models that have been burning NMR for 2 months are considered good, then I have it way backwards as to what the goal is.

Burns depend on what you stake on and how much, round to round – lots of choices there. Leaderboard position is score-based only – it used to be Corr, now it is TC (a 20-round moving average, with the current live rounds gaining weight each day while the 4 least recent of the 20 lose weight each day – at least that's how it was done with Corr). Still, I only see people with a lot of green (earns) at the top of the leaderboard, so I don't know what you're talking about there, really.
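For what it's worth, that moving average might look roughly like this; the exact phase-out weights for the 4 oldest rounds are my guess, not Numerai's published formula:

```python
import numpy as np

# Rough sketch of the rolling leaderboard score described above: an
# average over the most recent 20 round scores in which the 4 oldest
# rounds are linearly de-weighted. The ramp values are assumptions.
def leaderboard_score(round_scores):
    s = np.asarray(round_scores[-20:], dtype=float)
    w = np.ones(len(s))
    k = min(4, len(s) - 1)                  # how many old rounds to phase out
    w[:k] = np.linspace(0.2, 0.8, 4)[:k]    # oldest rounds count less
    return float(np.average(s, weights=w))
```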

Whether or not you see a lot of earns really doesn't matter, as I am obviously not speaking of those models, just those that have a lot of burns. It seems they are being rewarded for the stake and not the work put into building the model. I see just as many with a lot of burns, which my models don't, so I was just wondering. No offense to you if one of the models is yours; I mean no harm, I just want to understand.
Thanks for the info on how TC is calculated.

Can you give an example of a model that's been burning NMR the last two months that is also ranked high?

This has already been acknowledged here: Numerai knows it works like that, but they dismissed the problem as a theoretical one that doesn't happen in practice. I would be happy if they ran some proper simulations and proved to users that they don't have to worry, but they simply brushed the matter aside with a single questionable explanation. This is a fundamental property of TC that requires more attention.

At the same time, users keep observing that TC doesn't correlate well with model metrics, so there would be good reason to investigate further.

Numerai has properly tested the effects of the TC mechanics on their fund (it has been reported multiple times how performance has improved with TC, how many simulations they ran, etc.), so we know it has been a great change for them. But why don't they do a thorough analysis of the payout scheme too (the user's perspective of TC)? I mean, even if there were a problem in the payout, it could be improved without giving up the benefits that TC brings to the fund. I don't see why there is no discussion of this topic. Maybe I am just wrong.

I would just like to see evidence that it is not a real concern, and I would be happy.

No, I won't give any examples. I've done that before and ended up offending someone.

I think anyone can go through the models and compare their standing to others. How does a model with a consistently negative Corr have 100th-percentile TC? That's all I want to know, and until I can answer that question I will only hold NMR.

I put so much work into learning this that at one point I had to see a doctor for a neck strain.

I knew nothing about data science a year ago; now I can build models in both Python and R. I can also build compute nodes, and that alone made Numerai worth it.

I'll be back once I have an understanding of the scoring.

Good luck to all

You didn't offend me, if I'm who you are referring to. I was just wondering what you were talking about, same as @iceshark. Because the top of the leaderboard (which I am not on, I assure you) doesn't contain a bunch of models that are doing a bunch of burning. It just seemed a weird thing to say.

2 Likes

What's even weirder is that you're so focused on whether or not there are a bunch of burns in the models. Listen, this post is about TC, and my question is about TC; yes, I'm a dummy. I'm sure you're like a genius or something, but if you can't answer the question then move on.

Let me reiterate: why are there models with no correlation ranking at the top of TC? Said models would burn NMR under the previous scoring system. I'm sure my question doesn't apply to all models, and I'm sure you haven't checked all models.

Sorry I offended you or your model.

I am as unhappy about the current scoring system as you are, and I can understand the inconsistency of the leaderboard you are referring to. I believe there are good reasons to consider the current scoring system unfair, although I think TC is a good mechanism for the fund and should be kept while fixing the payout. I wish Numerai would provide more data, tests, and explanations of why they believe the current tournament is fair. Maybe it is just a matter of seeing things from the right perspective.

Hello. Though my English is very limited, I have tried to make an explanation for your question.

This is just a schematic of my personal understanding, and there may be incorrect things in it.

I would be happy if it helps. If not, please offend me too.

3 Likes

@annon I like your explanation, very intuitive. That explains why a model with negative correlation might be required by Numerai's fund and for that reason have positive TC. All good, but what about the payment being based on TC alone? Could you tell us your thoughts?

I believe you cannot pay models on TC alone. The meta model itself is built from all the models that are highly correlated with it. They contribute the majority of the predictions, and they need to be paid for the computation and for their stake at risk, even though the gradient will give them TC ~ 0. A fair payment logic would include not only TC but also the part of the predictions correlated with the Meta Model.
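A payout along those lines could be sketched as follows; the multipliers and the per-round cap are illustrative assumptions, not Numerai's actual payout rules:

```python
# Hedged sketch of a payout that mixes Corr and TC, as argued above.
# All multipliers and the cap are made-up values for illustration.
def payout(stake: float, corr: float, tc: float,
           corr_mult: float = 0.5, tc_mult: float = 1.5,
           max_frac: float = 0.25) -> float:
    """Stake change for one round: positive is an earn, negative a burn."""
    frac = corr_mult * corr + tc_mult * tc
    frac = max(-max_frac, min(max_frac, frac))  # cap the per-round gain/burn
    return stake * frac
```

Under such a scheme, a highly correlated model with TC ~ 0 would still earn through the Corr term rather than sitting at zero payout.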

2 Likes

I agree with you; I think a payout system that includes Corr is better than TC only.

The reason I think so is that the source of TC is finite.

I'll try to write some more intuitive things.

Below is an intuitive diagram.

As the metamodel improves, the sources of TC will decrease, so TC could become more difficult to find.

Also, if the sources of TC decrease, the signal-to-noise ratio will get worse, and the volatility of TC will increase.

(Regarding why TC is noisy: besides the guess that extreme predictions get high TC scores, I think one reason is that the signal of TC is relatively small.)

I am currently staking on 3x Corr and trying to make adjustments for TC, but if the TC difficulty increases in the future, I may go back to Corr.

4 Likes

When we use CORR to evaluate models, users are rewarded if the model performs well, and punished if the model performs poorly.

When it comes to TC, that's not the case. TC is not a metric that evaluates the quality of the model itself; it reflects whether a model improves the overall return when working with other models.

A model may get punished for not working well with other models, even if the model is a good model itself. (For example, models with positive Corr, MMC, and FNC can get a negative TC.) This situation is unacceptable for model developers. Therefore, TC is a big risk to me.

Doesn't it make more sense if TC is used only for rewards, with no punishments?

2 Likes