Hi everyone! I’m going to leave the Numerai tournament and would like to share my workflow, which is focused on feature neutralization.
- Find optimal hyperparameters using 5-fold CV shuffled by eras and random search (~100 trials). The metric to maximize was mean CORR after full feature neutralization of the predicted values (“full” meaning all features and a neutralization proportion of 1.0). LightGBM was used for boosting.
- Find all features that increase mean CORR when they are “not used” for prediction. This was done by shuffling features one by one within each era and feeding the shuffled data to the normally trained model.
- Generate a short list of features and train a new model on it. These are the jackerparker3 account predictions.
- Neutralize this model’s predictions using the short list minus one feature, repeating for each feature in the list one by one. Calculate the difference in sharpe and mean CORR for every left-out feature.
- Take the short list and remove all features that decrease sharpe when used in neutralization. Neutralizing the basic predictions on this reduced list gives jackerparker2.
- Take the short list and remove all features that decrease mean CORR when used in neutralization. Neutralizing the basic predictions on this reduced list gives jackerparker6.
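For readers unfamiliar with feature neutralization, here is a minimal sketch of the per-era linear neutralization used throughout the steps above. Column names (`era`, `pred`) and the function name are my own placeholders, not from my actual (messy) code, and I use least squares via NumPy rather than whatever helper you may have seen elsewhere:

```python
import numpy as np
import pandas as pd

def neutralize(df, pred_col, feature_cols, proportion=1.0, era_col="era"):
    """Subtract `proportion` of the predictions' linear exposure to
    `feature_cols`, separately within each era. proportion=1.0 is the
    "full" neutralization used as the CV metric in step 1."""
    out = []
    for _, era in df.groupby(era_col):
        P = era[pred_col].values.reshape(-1, 1).astype(np.float64)
        F = era[feature_cols].values.astype(np.float64)
        # Least-squares fit of predictions on features; F @ beta is the exposure.
        beta, *_ = np.linalg.lstsq(F, P, rcond=None)
        neutral = P - proportion * (F @ beta)
        out.append(pd.Series(neutral.ravel(), index=era.index))
    res = pd.concat(out).reindex(df.index)
    # Rescale so CORR values stay comparable across settings.
    return res / res.std()
```

After full neutralization the residual is (per era) orthogonal to the feature columns, so the remaining CORR measures only the signal that is not linearly explained by the features.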
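Step 2 (era-wise shuffling) can be sketched as a permutation-importance loop. This is an illustration under my assumptions, not my original code: the `model` is any fitted object with a `predict` method (LightGBM in my case), and I use plain Pearson correlation here where the tournament score is a rank correlation:

```python
import numpy as np
import pandas as pd

def era_shuffle_importance(model, df, feature_cols,
                           target_col="target", era_col="era"):
    """For each feature, permute its values within each era and measure the
    change in mean per-era CORR of the trained model's predictions.
    A positive delta means the model scores BETTER without that feature's
    signal, i.e. the feature is a candidate for removal from the list."""
    def mean_corr(frame):
        preds = pd.Series(model.predict(frame[feature_cols]), index=frame.index)
        per_era = frame.groupby(era_col).apply(
            lambda e: np.corrcoef(preds.loc[e.index], e[target_col])[0, 1])
        return per_era.mean()

    base = mean_corr(df)
    rng = np.random.default_rng(0)
    deltas = {}
    for col in feature_cols:
        shuffled = df.copy()
        # Shuffle within each era so the era-wise distribution is preserved.
        shuffled[col] = shuffled.groupby(era_col)[col].transform(
            lambda s: rng.permutation(s.values))
        deltas[col] = mean_corr(shuffled) - base
    return pd.Series(deltas).sort_values(ascending=False)
```

Features at the top of the returned series (largest positive delta) are the ones that hurt the model when used.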
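Steps 4–6 (the leave-one-out neutralization comparison) can be sketched as below. Again this is a reconstruction under assumed column names, not the buggy original: for each feature in the short list, compare neutralizing on the whole list against neutralizing on the list with that feature left out, and record the deltas in sharpe and mean CORR:

```python
import numpy as np
import pandas as pd

def _neutralize_on(df, pred_col, cols, era_col="era"):
    # Full (proportion 1.0) per-era linear neutralization on `cols`.
    out = []
    for _, era in df.groupby(era_col):
        P = era[pred_col].values.reshape(-1, 1).astype(np.float64)
        F = era[cols].values.astype(np.float64)
        beta, *_ = np.linalg.lstsq(F, P, rcond=None)
        out.append(pd.Series((P - F @ beta).ravel(), index=era.index))
    return pd.concat(out).reindex(df.index)

def loo_neutralization_scores(df, pred_col, short_list,
                              target_col="target", era_col="era"):
    """Needs at least two features in `short_list`. A NEGATIVE delta means
    the feature decreases that score when used in neutralization: drop it
    by d_sharpe for jackerparker2, by d_mean_corr for jackerparker6."""
    def stats(cols):
        neutral = _neutralize_on(df, pred_col, cols, era_col)
        corrs = df.groupby(era_col).apply(
            lambda e: np.corrcoef(neutral.loc[e.index], e[target_col])[0, 1])
        return corrs.mean(), corrs.mean() / corrs.std()

    full_mean, full_sharpe = stats(short_list)
    rows = {}
    for col in short_list:
        loo_mean, loo_sharpe = stats([c for c in short_list if c != col])
        rows[col] = {"d_mean_corr": full_mean - loo_mean,
                     "d_sharpe": full_sharpe - loo_sharpe}
    return pd.DataFrame(rows).T
```

The two filtered lists then come from keeping only the rows with non-negative `d_sharpe` (jackerparker2) or non-negative `d_mean_corr` (jackerparker6).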
I don’t really want to share the code for this workflow because it is messy and contains a lot of bugs. For example, the list of features used for neutralization in jackerparker6 contains features that were not even used for prediction. Despite the bugs, the results on both validation and live performance are quite interesting, and the general idea seems worth investigating. Here is a link to GitHub with all the feature lists, the pickled model, and an iPython notebook ready to generate predictions for the jackerparker2 (#12 position right now) and jackerparker6 (#33) accounts.
Hope someone will find it useful,