Do you use FNC to measure your model's performance?

A few weeks ago, Numerai added feature neutral correlation (FNC) to the leaderboard. I’m curious, do you use FNC as a measure of performance in your model development?

Do you use FNC to measure your model’s performance?
  • Yes, it’s my primary measure of performance.
  • Yes, it’s an important indicator.
  • Yes, but I don’t think it’s an important indicator.
  • No.


I’d be happy to hear your thoughts about FNC in the comments as well! 🙂

Disclosure: in my own backtesting, I have found FNC to be a worse predictor of future performance (measured by correlation) than Sharpe, and certainly worse than correlation itself. For me, it also added nothing beyond correlation as a predictor of performance. And since we are not paid for FNC, I don’t use it in my process at all. In my humble opinion, FNC has a weak theoretical basis and should not appear right next to CORR and MMC, the two measures that actually determine our payouts.
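For anyone who wants to run this kind of backtest themselves, here is a minimal sketch of how FNC is commonly computed, assuming it follows the publicly described recipe (rank-normalize the predictions, linearly neutralize them against all features, then correlate the residual with the target). The function name and details below are mine, not Numerai's exact implementation:

```python
import numpy as np

def feature_neutral_corr(predictions, features, target):
    """Sketch of feature neutral correlation (FNC).

    Rank-normalizes predictions, removes the component linearly
    explained by the features, and correlates the residual with
    the target. Illustrative only, not Numerai's exact code.
    """
    # Rank-normalize predictions to [0, 1]
    p = predictions.argsort().argsort().astype(float)
    p = p / (len(p) - 1)
    # Remove the component of p linearly explained by the features
    beta, *_ = np.linalg.lstsq(features, p, rcond=None)
    neutral = p - features @ beta
    # Standardize the residual and correlate it with the target
    neutral = (neutral - neutral.mean()) / neutral.std()
    return np.corrcoef(neutral, target)[0, 1]
```

In practice you would compute this per era and average, just like CORR.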


I responded “no” despite having a model that has ranked in the teens since the FNC leaderboard launched. All of the models I tested with feature neutralization (FN) were substantially worse in almost every single era.


To me, it’s just one more piece of meta information, and there is already plenty to go on.