[ACTIVE] Bounty for using and feedback on NumerBlox

Hi all,

Our first bounty is here. The makers of NumerBlox, @perfect_fit and @jrai, are interested in feedback from users.

NumerBlox is a library that simplifies the software engineering around Numerai inference pipelines. Its older sibling is already used by the CrowdCent models, and @perfect_fit and @jrai would like feedback on this new version.

What is NumerBlox 1.0?

NumerBlox 1.0 focuses on:

  1. End-to-end pipelines and full scikit-learn compatibility (see the sketch after this list).
  2. Simplification of the package structure, with fewer mandatory dependencies.
  3. Fully leveraging the new v4.2 data.
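
A minimal sketch of what point 1 means in practice: NumerBlox components follow the scikit-learn transformer/estimator API, so they can be chained into a standard scikit-learn Pipeline. The example below uses only plain scikit-learn objects with illustrative data; NumerBlox's own transformers would slot into the same place.

import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative training frame: feature columns plus a target column.
train_df = pd.DataFrame({
    "feature_1": [0.0, 0.25, 0.5, 0.75, 1.0],
    "feature_2": [1.0, 0.75, 0.5, 0.25, 0.0],
    "target":    [0.0, 0.25, 0.5, 0.75, 1.0],
})
X = train_df[["feature_1", "feature_2"]]
y = train_df["target"]

# Any scikit-learn-compatible transformer (including NumerBlox ones)
# can be chained in front of the final estimator.
pipeline = make_pipeline(StandardScaler(), GradientBoostingRegressor())
pipeline.fit(X, y)
predictions = pipeline.predict(X)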

For more details on the library’s features and improvements, see the preview post.

Installation Instructions:

The library is compatible with Python 3.9+. You can install numerblox v1 with:
pip install -U numerblox
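
To sanity-check the install afterwards (assuming the package exposes __version__, as most PyPI packages do):

python -c "import numerblox; print(numerblox.__version__)"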

How to Participate in the Bounty:

  1. Use the library
  2. Provide constructive feedback on your experience, including any bugs or improvement suggestions, in this bounty thread for a 1 NMR bounty. (Bounties are small; their purpose is to give starting numerati outside the crypto sphere an opportunity to add some NMR to their accounts and get into stake-in-the-game mode.)
  3. Bonus NMR for pull requests and bug fixes on GitHub.
  4. If the system is not being gamed and you want to get paid, add your Discord link or an address capable of receiving NMR, and @bor1 will periodically go through the posts and organize the payouts.

Thanks for participating in community-made tools!
@bor1.


Thanks for organizing this, @bor1! Really appreciate it! Excited to improve the library with this feedback.

For more context, check this forum post:

UPDATE: NumerBlox v1 has been merged and uploaded to PyPI.

From now on, you can install the new NumerBlox version with:

pip install -U numerblox

One fun thing we recently included in NumerBlox v1 is the ability to add benchmark models to your evaluation. If you include benchmark model columns in the evaluator, it will return the Corrv2 and Sharpe outperformance of your predictions versus the benchmark models, as well as the correlation of your predictions with the benchmarks.

Here is a simple usage example:

from numerblox.evaluation import NumeraiClassicEvaluator

# fast_mode skips the FNC calculation, which can take a while.
evaluator = NumeraiClassicEvaluator(fast_mode=True)

# Your validation data with prediction, era, target,
# example_preds, meta_model_prediction and rain_ensemblev2 columns.
val_df = ...

# meta_model_col and benchmark_cols are optional.
metrics = evaluator.full_evaluation(val_df,
                                    example_col="example_preds",
                                    pred_cols=["prediction"],
                                    meta_model_col="meta_model_prediction",
                                    target_col="target",
                                    benchmark_cols=["rain_ensemblev2"])
# Pandas DataFrame with metrics
metrics

Quick overview of core metrics.


Calculating local MMC & EPC was mentioned. Is that something that will be in the NumerBlox pipeline?


From the developers: yes, that will come.

Good one! NumeraiClassicEvaluator will now give legacy EPC and MMC metrics with .full_evaluation(). Legacy MMC will only be calculated if you define a meta_model_col and have meta model predictions in your validation DataFrame.

NOTE: Numerai hasn’t released the full details on these metrics yet so we use the “MMC2” calculation and deliberately call it “legacy”.
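
For reference, here is a rough sketch of the legacy "MMC2"-style calculation for a single era (rank-normalize, neutralize against the meta model, then scale the covariance with the target). This illustrates the idea from the old forum definition of MMC2 and is not NumerBlox's exact implementation:

import numpy as np
import pandas as pd
from scipy import stats

def legacy_mmc_era(preds: pd.Series, meta: pd.Series, target: pd.Series) -> float:
    # Rank-transform predictions and meta model to uniform [0, 1] within the era.
    p = (stats.rankdata(preds, method="ordinal") - 0.5) / len(preds)
    m = (stats.rankdata(meta, method="ordinal") - 0.5) / len(meta)
    # Neutralize predictions against the meta model (least-squares residual).
    exposures = np.column_stack([m, np.ones_like(m)])
    neutralized = p - exposures @ np.linalg.lstsq(exposures, p, rcond=None)[0]
    # Scale the covariance with the target, as in the old MMC2 definition.
    return float(np.cov(target, neutralized)[0, 1] / (0.29 ** 2))

Era-level scores would then be averaged across eras, like the other per-era metrics.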

This quickstart notebook explains evaluation in step 3. Note that you also need to add meta model predictions and specify meta_model_col.
