MMC2 Announcement

Hi @richai. I would like to explain my concerns one more time, more clearly. I get your points and I completely agree with them. But my concern is about another type of situation: when a user's model contributes to increasing the metamodel's Sharpe. The most extreme example is a user submitting a model that is strongly correlated with the metamodel and has the same expected correlation return, but a much better Sharpe at the same time.

Here is what we get as a result:
Under the old payout system, that user would be fine. He would get the same payout as the person who submits the metamodel ( :frog: ), but his payout would be less volatile thanks to the higher Sharpe, and everyone is happy. Under the new MMC system, the higher the Sharpe of the model a user submits, the more volatile the payment becomes. That is what I really don't like here. If someone works on MMC-profitable models, he has to keep his Sharpe as close to the metamodel's as he can. Even in less extreme cases, when someone develops a model with low correlation to the metamodel, the MMC payment will probably be more volatile for the model with the higher Sharpe, even if the rest of the performance metrics are exactly the same.
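To make this concrete, here is a small toy simulation. It is just a sketch, not the official payout or MMC computation: the synthetic round-score distributions and the "my round score minus the metamodel's round score" MMC proxy are simplifying assumptions that only roughly hold when a model is highly correlated with the metamodel (real MMC neutralizes predictions, not round scores).

```python
# Toy illustration (NOT the official MMC computation): per-round payout
# volatility for a model that matches the metamodel's mean score but with a
# much higher Sharpe. For a model highly correlated with the metamodel,
# MMC is roughly "your round score minus the metamodel's round score".
import numpy as np

rng = np.random.default_rng(0)
n_rounds = 1000

# Metamodel round scores: decent mean, but volatile.
meta_score = rng.normal(loc=0.02, scale=0.03, size=n_rounds)

# User round scores: same expected value, far less volatile (much higher Sharpe).
user_score = rng.normal(loc=0.02, scale=0.005, size=n_rounds)

corr_payout = user_score              # old correlation-based payout
mmc_payout = user_score - meta_score  # crude MMC proxy for this scenario

for name, p in [("corr", corr_payout), ("mmc", mmc_payout)]:
    print(f"{name:>4}: mean={p.mean():+.4f}  std={p.std():.4f}  sharpe={p.mean() / p.std():+.2f}")
```

In this toy the correlation payout inherits the user model's low volatility, while the MMC-style payout inherits the metamodel's volatility, which is exactly the effect I am worried about.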

And I'm not arguing here about whether more stable models should get a higher payout than less stable models (although that is also an important question). My main point is that more stable models should get a more stable payout, as they did under the correlation-based payout system. The current MMC payment system does exactly the opposite.

Regards,
Mark

4 Likes

Another way to think of the same problem is to see the rounds as a mixture of "easy linear-signal rounds" and "difficult rounds". The model that does decently in both types of rounds gets penalised by the MMC2 scoring algorithm each time we are in an "easy linear round" (of which there are quite a few), because the models that do well in both regimes get outcompeted by those that are right only in the easy rounds.
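A quick toy of that two-regime mixture, reusing the same crude "model score minus metamodel score" MMC proxy from the post above (a big simplification of the real prediction-level computation) and made-up regime numbers:

```python
# Toy two-regime sketch (my own assumptions, not Numerai's data): rounds are a
# mix of "easy linear" rounds, where linear models (and therefore the
# metamodel) score well, and "difficult" rounds where they score poorly.
import numpy as np

rng = np.random.default_rng(1)
n_rounds = 1000
easy = rng.random(n_rounds) < 0.7        # assume 70% of rounds are "easy"

# Metamodel round score: great in easy rounds, weak in difficult ones.
meta_score = np.where(easy,
                      rng.normal(0.04, 0.01, n_rounds),
                      rng.normal(0.00, 0.01, n_rounds))

# "Does decently in both regimes" model: steady score either way.
robust_score = rng.normal(0.02, 0.01, n_rounds)

# Same crude proxy as before: MMC ~ model score minus metamodel score.
mmc = robust_score - meta_score

print("mean MMC in easy rounds:     ", mmc[easy].mean())     # negative
print("mean MMC in difficult rounds:", mmc[~easy].mean())    # positive
print("overall MMC sharpe:          ", mmc.mean() / mmc.std())
```

The robust model gets hit in every easy round even though it never actually performs badly.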

My "solution + unthought-of flaws :-)" is for Numerai to split their test set into two, sampling 20 random eras each round from the part of the test set that they are willing to leak a tiny bit of data from (keeping the other half pure), and scoring MMC in some way (Sharpe? smart Sharpe?) on those 20 random eras plus the live eras.

Numerai might need to ensure that people aren't using multiple models to generate the test results they submit, based on some knowledge of which test era happened when and how linear models performed then. For the purpose of submitting, all test eras could just become one giant era; we don't need that data split up into eras.
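A rough sketch of what that scoring could look like, with everything invented for illustration (the era names, the 50/50 split, and the per-era MMC scores are all made up):

```python
# Sketch of the proposed scheme (my reading of the idea above, with invented
# names): keep half of the test eras "pure", and each round sample 20 random
# eras from the other half, then take a sharpe over those eras plus live eras.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

def sharpe(scores: pd.Series) -> float:
    return scores.mean() / scores.std()

# Hypothetical per-era MMC scores for one model, indexed by era id.
test_eras = [f"test_{i:03d}" for i in range(120)]
live_eras = [f"live_{i:03d}" for i in range(4)]
per_era_mmc = pd.Series(rng.normal(0.005, 0.02, len(test_eras) + len(live_eras)),
                        index=test_eras + live_eras)

# Split the test eras once: one half may be scored on (and so slowly leaks),
# the other half stays pure and is never revealed through scores.
shuffled = rng.permutation(test_eras)
leaky_half, pure_half = shuffled[:60], shuffled[60:]

# Each round: 20 random eras from the leaky half, plus all live eras.
sampled = list(rng.choice(leaky_half, size=20, replace=False)) + live_eras
round_score = sharpe(per_era_mmc.loc[sampled])
print(f"scored on {len(sampled)} eras, sharpe = {round_score:.3f}")
```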

TLDR: The current MMC2 scoring appears to promote strategies that are similar to, but slightly better than, the metamodel.

2 Likes

Regarding "a model that is strongly correlated with the metamodel and has the same expected correlation return, but a much better Sharpe at the same time": the model as graphed is basically a no-risk, near-flat line. If you had such a model (basically impossible, especially if the metamodel really was going up and down like that), then why would you even be thinking about MMC? The MMC payout will be optional (and according to MikeP must remain optional for game-theoretic reasons), so if you had a model like that you'd just leave it on the normal payout system and you'd do fine. With compounding, you could retire on such a model.

MMC doesn't have to be all things to all people – I don't think it is a problem that some models just aren't going to do well under the MMC scheme, since you aren't forced into it (especially if they do well under the normal scheme). So the question isn't really whether we can think of scenarios where the MMC payment is going to be worse than the regular payment – yes, of course we can. If we have a model like that (and can tell it is a model like that), we simply wouldn't opt for MMC payouts.

But you are getting to something here that I think is interesting, in that what a "stable" model is (in terms of getting high scores and stable payouts) is going to be different depending on which payout system you opt for. We know what a main-score-stable model looks like – it is right in an absolute way by about the same amount most of the time (sometimes at a high level compared to the metamodel, sometimes lower – like in your upper graph). Whereas an MMC-stable model (in terms of getting about the same MMC score most rounds) will probably have main scores that are all over the place, and the idea is to always be adding value to the metamodel. And that may turn out to be an easier proposition.

This does NOT mean it has to be "better" than the metamodel (on average); I think it is misleading when people say that. Probably nobody will be better than the metamodel over time – my guess is that it will be impossible to achieve that (as long as we have a sufficient user base submitting many models). The idea is not to have a model be better than the metamodel (which of course includes that same model), but for the metamodel to become better with your model in it than without it.
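To illustrate that "with your model in it vs without it" idea, here is a tiny leave-one-out sketch with an equal-weight toy metamodel and random data. It captures only the intuition, not the actual MMC2 formula (which neutralizes your predictions against the metamodel and scores the residual), and it ignores stake weighting.

```python
# Minimal sketch of "does the metamodel get better with your model in it":
# compare an equal-weight metamodel's score with and without one model's
# predictions. All names and data here are illustrative toys.
import numpy as np

rng = np.random.default_rng(3)
n_rows, n_models = 5000, 50

preds = rng.random((n_models, n_rows))   # each model's predictions for one era
targets = rng.random(n_rows)             # era targets (toy data)

def score(p, y):
    # Stand-in for the real scoring: plain Pearson correlation with targets.
    return np.corrcoef(p, y)[0, 1]

your_idx = 0
meta_with = preds.mean(axis=0)                                  # metamodel incl. your model
meta_without = np.delete(preds, your_idx, axis=0).mean(axis=0)  # metamodel without it

contribution = score(meta_with, targets) - score(meta_without, targets)
print(f"metamodel score with you:    {score(meta_with, targets):+.4f}")
print(f"metamodel score without you: {score(meta_without, targets):+.4f}")
print(f"your (toy) contribution:     {contribution:+.4f}")
```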

5 Likes