Target Cyrus - New Primary Target


Four new target variations are being released on Numerai, each with 20D and 60D versions, for a total of 8 new targets. They will be released in the v4.1 dataset starting with the round opening on April 18.

One of them, target Cyrus, will become the official target used for payouts in one month, beginning with the round opening on May 13.

Along with this change, we are also changing the way correlation is calculated. The new metric, called Numerai Corr, weights your lowest and highest predictions more heavily.

Models trained on Nomi still perform fairly well on this new score, but we do expect models trained on the newer targets to be a bit better.

Signals has no new targets, but it will begin using the Numerai Corr variation of correlation for all scores.

New Correlation

When Numerai builds portfolios out of the Meta Model, a user’s highest and lowest predictions impact the Meta Model significantly more, and ultimately are more likely to make it into the portfolio. For this reason, in addition to looking at your model’s performance across all of its predictions, it’s important to also pay attention to the performance of the most extreme predictions.

We’ve previously suggested using something like “the correlation of your top and bottom 200 predictions” in order to make sure your predictions are also good in the extremes.
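One possible sketch of that earlier suggestion (the name tail_corr and the cutoff of 200 are illustrative here, not official Numerai code): keep only the lowest and highest predictions and correlate just those rows with the target.

```python
import numpy as np
import pandas as pd

def tail_corr(preds: pd.Series, target: pd.Series, n: int = 200) -> float:
    # sort predictions and take the row labels of the n lowest and n highest
    order = preds.sort_values()
    tails = order.index[:n].union(order.index[-n:])
    # plain Pearson correlation restricted to the tail rows
    return np.corrcoef(preds.loc[tails], target.loc[tails])[0, 1]
```

With 5,000 rows and n=200, only the 400 most extreme predictions contribute to this score, so a model can look fine overall while doing poorly here.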

Improving on this idea, we’ve made a new correlation function which does the following:

  • Rank your predictions
  • Gaussianize the ranked predictions
  • Raise those to the 1.5 power (preserving sign)
  • Transform the target to be between -2 and 2 instead of 0 and 1
  • Raise the target to the 1.5 power (preserving sign)
  • Take the Pearson correlation between the resulting predictions and target
In code:

import numpy as np
from scipy import stats

def numerai_corr(preds, target):
  # rank (keeping ties), then Gaussianize to standardize the prediction distribution
  ranked_preds = (preds.rank(method="average").values - 0.5) / preds.count()
  gauss_ranked_preds = stats.norm.ppf(ranked_preds)
  # center the targets around 0
  centered_target = target - target.mean()
  # raise both preds and target to the power of 1.5 to accentuate the tails
  preds_p15 = np.sign(gauss_ranked_preds) * np.abs(gauss_ranked_preds) ** 1.5
  target_p15 = np.sign(centered_target) * np.abs(centered_target) ** 1.5
  # finally return the Pearson correlation
  return np.corrcoef(preds_p15, target_p15)[0, 1]

The result is that, as with Spearman correlation, you still don’t need to worry about the distribution of your submissions, only their rank ordering. However, the tails are now emphasized more than in a Spearman correlation.
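To see the rank-invariance concretely, here is a quick check on synthetic data (not official scoring code): applying a monotonic transform to the predictions preserves their ranks, so the score is unchanged.

```python
import numpy as np
import pandas as pd
from scipy import stats

def numerai_corr(preds, target):
    # rank (keeping ties), then Gaussianize the prediction distribution
    ranked_preds = (preds.rank(method="average").values - 0.5) / preds.count()
    gauss_ranked_preds = stats.norm.ppf(ranked_preds)
    centered_target = target - target.mean()
    # raise both to the 1.5 power (preserving sign) to accentuate the tails
    preds_p15 = np.sign(gauss_ranked_preds) * np.abs(gauss_ranked_preds) ** 1.5
    target_p15 = np.sign(centered_target) * np.abs(centered_target) ** 1.5
    return np.corrcoef(preds_p15, target_p15)[0, 1]

rng = np.random.default_rng(0)
preds = pd.Series(rng.random(1000))
target = pd.Series(rng.integers(0, 5, 1000) / 4)  # 5-bin targets in [0, 1]
a = numerai_corr(preds, target)
b = numerai_corr(preds ** 3, target)  # monotonic transform: same ranks, same score
```

Here `a` and `b` agree because cubing positive predictions changes their values but not their ordering, which is all the rank step sees.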

When applied to each score, this correlation is more similar to TC than the Spearman version of the same score, indicating that the tails really were under-emphasized before now.

Here’s the before and after for Numerai scores:

And the before and after for Signals scores:

Both Numerai and Numerai Signals will move all non-payout scores to Numerai Corr immediately, while the payout scores will switch later: Numerai payouts will switch to CORR20V2 on May 13, and Signals payouts will switch to FNCV4 on June 3.

New Targets

Cyrus, Caroline, Sam, and Xerxes are four new targets which are all similar, with small variations. They will be included in the v4 and v4.1 datasets beginning on April 18.

Cyrus is our best target, and will become the CORR20 payout target in one month. The other three will not be used for payouts, but you might find that they are useful for training your models to be good at predicting Cyrus.

The “target” column in the datasets will also contain values for target_cyrus_20 starting on May 13.

Below is a comparison between a model trained on Nomi scored with the current CORR20 (Spearman correlation with Nomi) and a model trained on Cyrus scored with the new Numerai Corr with Cyrus.

The mean score is about the same, but the consistency of the new model with the new scoring method is vastly improved.

Here’s a model built on target Nomi vs a model built on target Cyrus, both scored with Spearman on Nomi.

So even before the change to the definition of CORR20, switching to target Cyrus for training your models is beneficial, especially in terms of consistency.

Here we have an assortment of models on the new scoring.

All of the targets have something to offer, and we hope that you use many of them in your ensembles. On their own, however, the newer targets tend to outclass Nomi.

Website changes

CORR20V2 is the temporary name for the new Numerai Corr Cyrus score. It has been added to the compare scores page so you can see how your existing models would be affected by the change:

On May 13 we will remove the existing CORR20; CORR20V2 will then be known simply as CORR20, and payouts will switch over to this new measure.

Existing stakes will automatically switch to this new CORR20 for the round opening on May 13, so there’s no action required.

For Signals, CORR20V2 uses the same Signals target as the current CORR20, except it uses the new Numerai Corr instead of Spearman. Models will have their stakes transitioned to FNCV4 on Numerai Corr starting with the round opening on June 3.

Happy modeling


Should we assume the target will be binned in the range [0, 1] as it used to be? If so, the expression could be simplified, I guess (no need to use sign and abs)…


I’ve made a prototype for Numerai Corr in Numerblox, but first I’d like clarification on what distribution you expect for the targets in this function. Do we expect the targets to be in the range [-1, 1] here?

If targets are in [0, 1], it doesn’t seem to make sense to calculate Pearson against Gaussianized rank predictions.

Numerblox pull request for Numerai Corr:


Thanks for pointing this out - we actually use targets in the range of -2 to 2 internally, so I had missed this. I’ve updated the numerai_corr code to move targets from the [0, 1] range to the [-2, 2] range before raising to the 1.5 power.


You’re right - we use -2 to 2 targets, not 0 to 1 targets. I updated the code in this post accordingly. Thanks for catching this.


Will the change to corrv2 payout scoring start at the same time as the daily payouts (as in a big-bang release event)?

Another question that pops up: would it be technically possible to also keep corrv1 as a payout score next to corrv2? That is, make it optional for users to select which scoring they want to use (similar to the multiplier options). I can imagine this would make things quite complex and maybe isn’t necessary.

centered_target = target - 0.5
This line just transforms the target to be -0.5 to 0.5, not -2 to 2. Am I missing something?

Correlation is invariant to scale, so multiplying the [-0.5, 0.5] centered target by 4 to make it [-2, 2] doesn’t change the score.
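A small sketch of why this holds (synthetic data, illustrative only): rescaling the centered target by 4 before the signed 1.5 power just multiplies the transformed target by a positive constant (4^1.5 = 8), and Pearson correlation is invariant to positive scaling.

```python
import numpy as np

def p15(v):
    # signed 1.5 power, as in numerai_corr
    return np.sign(v) * np.abs(v) ** 1.5

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                  # stand-in for transformed predictions
t = rng.integers(0, 5, 1000) / 4           # targets binned in [0, 1]
centered = t - 0.5                         # range [-0.5, 0.5]
scaled = centered * 4                      # range [-2, 2]
c1 = np.corrcoef(x, p15(centered))[0, 1]
c2 = np.corrcoef(x, p15(scaled))[0, 1]     # identical score
```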

Is this new target going to be in the same column?
Also, my corr20v2 is like one quarter of corr20.
Is there a way of opting out of this?

Hi there,
I have downloaded the new dataset after I saw this topic, and my model’s RMSE is much worse using the same target_nomi_v4_20 and the same model. Is anyone else seeing this? The previous dataset I was using was downloaded on February 23.


Hi everyone, I’m having problems replicating the corrv2 scores from the leaderboard using the code above; my correlations are not exactly the same. Has anyone successfully implemented and tested the new metric?

Yes, me too. Most of my corr20’s which I was betting on have gone from 90% to 0%.

We are getting very strange results with this new dataset. It looks like all the targets changed after you added the Cyrus target to it (mainly target_nomi_v4_20). Can you have a look, please?

Basically, I fit the same model on target_nomi_v4_20, using the same rows I had in the dataset I downloaded on February 23, and get completely different results. Unfortunately, I overwrote the February dataset and can’t compare it against the new one.


I downloaded the train set just now to double-check (and the validation set yesterday to grab the latest eras with targets). Ignoring the new targets, I do not detect any changes from the data of weeks past; nothing that existed in the downloads two weeks ago has changed. Somebody on the Discord was grumbling yesterday about file corruption and not being able to extract the data, and then Mike said he was fixing it. You might re-download just to make sure if it really looks like the targets have changed, but it’s much more likely you are doing something off (check the column headers) or your model results are more stochastic than you realized on the same data.


Although corr20 has a longer right tail, corr20V2’s left tail was pulled in quite a bit (likely a large contributing factor to the increased Sharpes as well). The average corr20V2 score is lower than the average old corr20 score.

The correlation between corr20 and corr20v2 is quite high.

@master_key Is there still any work going into open sourcing some of the scoring pipelines?

(This post was edited to be the most up to date; the previous post was removed to avoid confusion.)


You beat me to it - we are just triple checking things before we put it back on the website and add some more details to the post.

The scoring for the website actually had the same issue that was found in the correlation function in this original post, where the targets weren’t being centered properly.

So we’ve recalculated all of the historical scores, and you should see scores that are much more similar to the previous corr20 scores.

I see a 98% correlation between corr20 and corr20V2 rep for example.

It does look like the typical and best correlation reputations are expected to decrease, while the Sharpe of correlation tends to increase.

You might want to filter out reps from this analysis which have many missing rounds, as it makes it look like reputations are much closer to 0 than they are in practice.


After a lot of effort, I finally had a model that was performing really well on Corr20, and now you have messed it all up. Corr20V2 is much worse. I feel like I cannot win here with these moving goalposts and constant changes for the worse. I am draining my stake.

You already have TC. I do not understand your motivation for redefining correlation to be “more like it”.

What was wrong with correlations > 0.015 that you have to actively prevent them?

Can the graphs in the thread be also updated?

@master_key corr20v2 is much lower than corr20 for every model, which means you are automatically decreasing everyone’s payouts. Why is that? Also, given that, isn’t it now time to allow users to set the corr weight higher than the current max of 1?