MMC staking starts Jan 2, 2024

We are reviving Meta Model Contribution (MMC) to replace True Contribution (TC). For rounds starting on or after January 2nd, 2024, staking and payouts will transition to the fixed multipliers 0.5xCORR + 2xMMC. Furthermore, the 2024 Grandmasters season will be determined by CORR and MMC.

We are doing this for a few reasons:

  • MMC is more stable than TC
  • MMC is locally calculable while TC is not
  • We saw our most stable performance when we paid out MMC

What is MMC (and BMC)?

From our docs:

MMC is the covariance of a model with the target, after its predictions have been neutralized to the Meta Model. Similarly, Benchmark Model Contribution (BMC) is the covariance of a model with the target, after its predictions have been neutralized to the stake-weighted Benchmark Models.
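Concretely, the definition above can be sketched in a few lines of numpy. This is a minimal illustration, assuming p, m, and y are already tie-kept-ranked and gaussianized; the function names are my own, not the official API:

```python
import numpy as np

def neutralize(p: np.ndarray, m: np.ndarray) -> np.ndarray:
    # remove the component of p that lies along m (its projection onto m)
    return p - m * (m @ p) / (m @ m)

def mmc(p: np.ndarray, m: np.ndarray, y: np.ndarray) -> float:
    # covariance of the neutralized predictions with the centered target
    y = y - y.mean()
    return float(y @ neutralize(p, m)) / len(y)

# a model identical to the Meta Model contributes nothing
m = np.array([1.0, -1.0, 0.0, 0.0])
y = np.array([0.0, 0.0, 1.0, -1.0])
assert mmc(m, m, y) == 0.0
```

Neutralization strips out the part of your predictions that the Meta Model already contains, so MMC only rewards what is left over.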

The idea to revive MMC started with a simple question from our founder, Richard Craib:

“Given a model, how much does the Meta Model’s correlation with the target change if we increase the model’s stake by some small amount?”

He asked this because we know that the Meta Model’s correlation with our target is a directly monetizable metric for which we could optimize. This produced a simple formula for calculating MMC that I call “Richard’s MMC”:

corr(y, m + 0.001 * p) - corr(y, m)

where y is the target, m is the Meta Model, and p is a model's predictions.

Taking the derivative with respect to the 0.001 as it goes to 0, we arrive at the following formula that I call “Murky’s MMC” (big thanks to our user Murky for this derivation):

MMC = yᵀ(p − m(mᵀp)/(mᵀm)) / n   (with y centered and n the number of predictions)

Assuming that p and m are both centered, normalized column vectors (using the tie_kept_rank and gaussian functions from our open-source scoring tools package) both formulas reach results that are 100% correlated with each other.
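That equivalence is easy to check numerically. Here is a rough sketch with synthetic data, where eps plays the role of the 0.001 stake bump; all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, eps = 5000, 50, 0.001

def standardize(x):
    # center to mean 0 and scale to std 1
    x = x - x.mean()
    return x / x.std()

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

y = standardize(rng.normal(size=n))
m = standardize(rng.normal(size=n))

richards, murkys = [], []
for _ in range(k):
    p = standardize(rng.normal(size=n))
    # Richard's MMC: change in the Meta Model's corr after a small stake bump
    richards.append(corr(y, m + eps * p) - corr(y, m))
    # Murky's MMC: covariance of the m-orthogonalized predictions with y
    neutral = p - m * (m @ p) / (m @ m)
    murkys.append(float(y @ neutral) / n)

print(np.corrcoef(richards, murkys)[0, 1])  # ≈ 1.0
```

The finite difference converges to the derivative as eps shrinks, which is why the two lists line up almost perfectly.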

Finally, to sanity check these methods of calculating MMC, I revived the old method for calculating MMC, removed the bagging and uniform transformation, and arrived at a third formula that I dub “Mike’s MMC”:

[formula image: “Mike’s MMC”]

It took some time to convince myself of the mathematical equivalence of Murky’s and Mike’s.
Here are some thoughts:

Murky’s version is the fastest and simplest to compute, so we are using it to calculate MMC:

def contribution(
    predictions: pd.DataFrame,
    meta_model: pd.Series,
    live_targets: pd.Series,
) -> pd.Series:
    """Calculate the contribution of the given predictions
    to the given meta model.

    Contribution is calculated by:
    1. tie-kept ranking each prediction and the meta model
    2. gaussianizing each prediction and the meta model
    3. orthogonalizing each prediction wrt the meta model
    4. multiplying the orthogonalized predictions and the targets

    predictions: pd.DataFrame - the predictions to evaluate
    meta_model: pd.Series - the meta model to evaluate against
    live_targets: pd.Series - the live targets to evaluate against

    Returns:
    pd.Series - the resulting contributive correlation
    scores for each column in predictions
    """
    # filter and sort preds, mm, and targets wrt each other
    meta_model, predictions = filter_sort_index(meta_model, predictions)
    live_targets, predictions = filter_sort_index(live_targets, predictions)
    live_targets, meta_model = filter_sort_index(live_targets, meta_model)

    # rank and normalize meta model and predictions so mean=0 and std=1
    p = gaussian(tie_kept_rank(predictions)).values
    m = gaussian(tie_kept_rank(meta_model.to_frame()))[meta_model.name].values

    # orthogonalize predictions wrt meta model
    neutral_preds = orthogonalize(p, m)

    # center the target
    live_targets -= live_targets.mean()

    # multiply target and neutralized predictions
    # this is equivalent to covariance b/c mean = 0
    mmc = (live_targets @ neutral_preds) / len(live_targets)
    return pd.Series(mmc, index=predictions.columns)

We divide by the length of the target to bring the final values inside the range of something like CORR20v2.

Your BMC is basically MMC, but computed against just the stake-weighted Benchmark Models instead of the Meta Model. This tells us how well your model ensembles with our internal Benchmark Models. A high score on both would indicate a truly unique and contributive signal.
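As a sketch, BMC just swaps the reference vector. The stake weights, benchmark predictions, and function names below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def neutralized_cov(p, ref, y):
    # covariance of p with y after neutralizing p to the reference vector
    neutral = p - ref * (ref @ p) / (ref @ ref)
    y = y - y.mean()
    return float(y @ neutral) / len(y)

p = rng.normal(size=n)                 # a model's normalized predictions
y = rng.normal(size=n)                 # live targets
benchmarks = rng.normal(size=(3, n))   # internal Benchmark Model predictions
stakes = np.array([2.0, 1.0, 1.0])     # hypothetical stakes on each benchmark

# BMC is the same computation as MMC, with the stake-weighted
# benchmark ensemble standing in for the Meta Model
swb = stakes @ benchmarks / stakes.sum()
bmc = neutralized_cov(p, swb, y)
```

Everything downstream of the reference vector is identical, which is why the two scores are directly comparable.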

Why MMC?

The fact that we can calculate MMC 3 different ways, all 100% correlated, means it is an easily explainable metric regardless of how you intuit the linear algebra, and it can be calculated locally (unlike TC).

Furthermore, MMC is much more stable than TC. Take a look at the following charts showing the distribution of each score over time:

Clearly MMC is much closer to the distribution of CORR than TC ever was or will be. This stability in the score is significant when we consider how users need to optimize their models for MMC and CORR.


Out of curiosity, why is the grandmaster ranking calendar-year based and not based on a rolling one-year window?

Also, will you ever consider removing the test daily rounds from the computation of leaderboard and grandmaster scores? No stake was allowed on those test daily rounds, so they should not be part of the ranking computation.

  1. There must be some cutoff date to award titles and we want to highlight long-term performance. Yearly seasons are the most reasonable to achieve this.

  2. The test daily rounds are now starting to phase out and will finish phasing out by 2024-06-02. This is partially by design, because those who switched over right away clearly worked harder to stay up to date with tournament functions and thus deserve a higher rank.



You initially convinced me that it is ok to reward fast-adapting/hard-working users more by including the test daily rounds in the ranking computation, but this choice makes it very hard to compare models. Forget about prestige, I would just like to compare my models with the benchmark models, for example. It’s a pity that I cannot properly use this functionality due to the inclusion of test daily rounds in the statistics and ranking.

Wouldn’t it be more useful to keep the test daily rounds out of the leaderboard and use them only for the grandmaster ranking, which is prestige focused?

I’ve seen some posts about changing the multipliers. On Jan 2, 2024 will the multipliers be 0.5 x CORR + 2 x MMC?

Or something else?

The latest announcement is 0.5x CORR + 3x MMC, although I’m not sure you should bank on that either. (Also, MMC was corrected on the website earlier in the week from an erroneous version that was previously displayed. So if you looked at your models a week ago thinking about MMC, check again.)

Thanks @wigglemuse, yes I’m aware of the new MMC calculation. It’s hard to follow all the threads.

That really puts a lot of emphasis on MMC. I hope it correlates with fund performance, I’m not sure we can survive another drawdown like we recently experienced.

Where are the bug and the correct way to compute MMC described?
Is Murky’s version correct, as said before?
Is there an R implementation of the function?

The code released is correct, but the version displayed on the website (before sometime last week) was still using the old Nomi target (a leftover from the original MMC), not the Cyrus target as it should have been. (It seems like, if that was the case, the code was probably wrong too, but whatever; supposedly it is all correct now. I leave it to others to verify, as it seems like many are recreating it locally. Does it all match up now, people?)


The most up-to-date function is always located here:

You can check what has been fixed in the commits history: History for numerai_tools/ - numerai/numerai-tools · GitHub

As you can see, the latest fixes were pushed 6 hours ago. :slightly_smiling_face:
So it might be a good idea to subscribe to repo changes.


Thanks for sharing this. I have some (probably basic) questions about how the MMC and the payout is calculated.

  1. Why don’t we raise the gaussian to the 1.5 power in the correlation_contribution function, as in numerai_corr?
  2. Am I oversimplifying if I see the payout (excluding clipping and payout factor) as 0.5xCORR(P, T) + 3xCORR(P⊥M, T)?
    CORR → Correlation (as calculated in numerai_corr)
    P → User predictions
    M → Meta Model predictions
    T → Cyrus target
    P⊥M → The user predictions component independent of (orthogonal to) the meta model predictions
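For reference, the simplification I mean can be written down directly. This is a toy sketch of my own formula with synthetic data, not the production scoring code:

```python
import numpy as np

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def orthogonal_component(p, m):
    # P⊥M: the part of P independent of (orthogonal to) M
    return p - m * (m @ p) / (m @ m)

rng = np.random.default_rng(3)
n = 1000
P = rng.normal(size=n)   # user predictions
M = rng.normal(size=n)   # meta model predictions
T = rng.normal(size=n)   # target

# the simplified payout score from the question above
payout_score = 0.5 * corr(P, T) + 3 * corr(orthogonal_component(P, M), T)
```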

Thank you in advance

There’s also the Gaussianization step for both the meta model and the user predictions, which happens before the 1.5 power and corr (for CORR) and before the neutralization (for MMC).

Thanks @andralienware, yes this is what I meant by “as calculated in numerai_corr”.
It just surprised me that the steps to calculate CORR (the score) seem to differ from those needed to calculate MMC, not just in the neutralization but also in the 1.5 power.
I was trying to find a simple way to tie CORR and MMC together, like in 0.5xCORR(P, T) + 3xCORR(P⊥M, T), but it doesn’t match the code I see in the repository, so I guess there is no easy way to represent it.

CORR and MMC are fundamentally different metrics and thus cannot be calculated the same way. They differ both mathematically and in intent. We wanted CORR to capture performance in the tails of your prediction distribution (hence the pow 1.5), whereas MMC cares about how your predictions improve the entire Meta Model (hence no accentuation of the tails with a pow 1.5). As others have stated, we do similar pre-processing in both CORR and MMC, so they aren’t completely dissimilar.
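For intuition, the tail-accentuation step can be sketched as a sign-preserving power. The function name is illustrative; in the real scoring this kind of transform is applied after gaussianization for CORR and skipped for MMC:

```python
import numpy as np

def accentuate_tails(x: np.ndarray) -> np.ndarray:
    # sign-preserving pow 1.5: stretches the tails of a centered
    # distribution while values near zero are shrunk toward it
    return np.sign(x) * np.abs(x) ** 1.5

x = np.array([-4.0, -0.25, 0.0, 0.25, 4.0])
# tail values (-4, 4) map to (-8, 8); mid values (±0.25) shrink to ±0.125
print(accentuate_tails(x))
```

The effect is that extreme ranks dominate the subsequent correlation, which is exactly the tail emphasis described above.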


Isn’t the reason pow 1.5 is employed to capture the leptokurtosis of real returns? It seems that if there are issues in the tails of the meta model (which is probably what drives most trading losses and gains), and a model’s predictions correct them, there should be a corresponding payout increase. I understand the idea of removing each prediction’s projection onto the meta model, but that does not preclude projecting predictions that have been Gaussian-ized and then pow-1.5’ed. I agree that Spearman correlation does not make sense, but I side with sirbradflies’s idea that the 1.5 pow makes sense. We all know market returns are not actually Gaussian, and if the pow 1.5 pre-processing step makes sense for CORR because of its effect on the tails, then it should also make sense as something to do before neutralization. Run it by @murkyautomata, and I’m sure you’ll get agreement that pow 1.5 (or whichever transformation you think matches the tails) would be a good idea in MMC.



Shouldn’t the Meta Model be deprived of a model’s contribution before we use it to compute that model’s MMC?

First using a model to build the Meta Model, and then asking how much the model can still improve the Meta Model (MMC), means that MMC disregards the contribution the model made to building the Meta Model in the first place.


I’m very interested in this topic too. Computing MMC as taori says has a big computational cost (a Meta Model would have to be computed with a leave-one-out (LOO) strategy).
Is there another way to adjust for the ‘size’ effect of a model in the MMC?
Perhaps compute the MM with LOO only for predictions with large stakes, i.e. those for which the MMC computed as now and the MMC computed with LOO differ significantly. I would like the Numerai team to run some tests with the top 100 staked models and check the impact on MMC with and without the LOO MM build.


Thanks to @PTR for clarifying my question on discord. You are totally right.

I forgot that this is also how TC worked. Both MMC and TC try to estimate how much more (or less) of a model is needed in the Meta Model to improve its performance.

So MMC serves to optimize the weights of the models within the SWMM.

From the hedge fund’s perspective that seems reasonable, but from the user’s perspective it is totally unfair. Users would like to be paid for how much their models contributed to the performance of the SWMM. That part is the 0.5xCORR, but I believe that is too low for the risk associated with NMR.

Using the stake, and hence the payout, as a mechanism to optimize the MM conflicts with how users see the stake and the payout, and I hope this part of the tournament changes in the future.


A model’s CORR is not how much it contributed to the MM; it is just a measure of how predictive of the target it is. MMC is the model’s contribution to the MM’s CORR. Here are some facts about MMC that you may be missing:

  • If you have positive CORR but 0 MMC then you are providing no uniquely additive signal - you just have a model that we (and other data scientists) already know about.

  • We orthogonalize predictions wrt the MM and most predictions have very small stake so there is virtually no difference between raw MMC and bagged (LOO) MMC.

  • Calculating MMC in a bagged/LOO formulation makes the calculation more opaque to data scientists because you all don’t have access to other models’ raw predictions, thus you can’t optimize for it locally. Local optimization is a key characteristic.
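The second point can be illustrated with synthetic data. Everything below is made up; it is a rough sketch comparing raw MMC against leave-one-out MMC for one small-stake model among many:

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n = 200, 8000

preds = rng.normal(size=(n_models, n))         # each model's predictions
stakes = rng.uniform(0.1, 1.0, size=n_models)  # small, comparable stakes
y = rng.normal(size=n)

def standardize(x):
    x = x - x.mean()
    return x / x.std()

def mmc(p, m, y):
    neutral = p - m * (m @ p) / (m @ m)
    return float((y - y.mean()) @ neutral) / len(y)

i = 0  # the model under evaluation
p = standardize(preds[i])

full_mm = standardize(stakes @ preds)  # stake-weighted Meta Model
loo_mm = standardize(np.delete(stakes, i) @ np.delete(preds, i, axis=0))

raw = mmc(p, full_mm, y)
loo = mmc(p, loo_mm, y)
# with ~200 stakers, dropping one model barely moves the ensemble,
# so raw and leave-one-out MMC land very close together
print(abs(raw - loo))
```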


If the MM were so good that it could perfectly predict the target, then all models would have an MMC of 0 and would not be rewarded.

However, the MM doesn’t come for free, so you should also reward the models that created the MM in the first place, and the risk users take in staking those models. Currently this part is rewarded with 0.5xCORR, which is too low.

If you have positive CORR but 0 MMC then you are providing no uniquely additive signal - you just have a model that we (and other data scientists) already know about.

This is where I disagree with you. You believe you don’t have to pay for the MM, but it has a cost for the users, and I am saying you need to reward that cost.