Ever since 22 October, the fund’s volatility has skyrocketed, and since March it has been in a massive drawdown. I’m guessing the increased volatility comes from the addition of daily predictions (and trades?).
Could someone explain this situation?
Richard said in one interview that they increased the meta-model’s volatility exposure right before the big drawdown. It would have fallen either way, but the higher volatility certainly didn’t help.
Thanks, that’s quite reassuring.
Do you know if there is any historical data available on the corr scores of the meta-model? I’m trying to see whether there is a “performance degradation” in the submitted models or if it’s just a “below EV” period.
Do you mean the meta-model’s corr with the target? You can calculate it on your own if you download the historical meta_model predictions from https://numer.ai/data/v4.2 with `napi.download_dataset("v4.2/meta_model.parquet", "meta_model.parquet")`.
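Once you have the meta-model predictions merged with resolved targets, the per-era corr is straightforward to compute. A minimal sketch below, with caveats: the `numerai_meta_model` column name is what the v4.2 parquet file uses as far as I know, the synthetic frame stands in for a real merge of predictions and targets, and this is plain Pearson correlation on within-era ranks, not Numerai’s exact scoring variant:

```python
import numpy as np
import pandas as pd

def era_corr(df, pred_col, target_col):
    """Per-era correlation of rank-normalized predictions vs. target.
    Numerai-style corr is computed within each era, then averaged."""
    def one_era(g):
        ranked = g[pred_col].rank(pct=True)  # rank-normalize within the era
        return np.corrcoef(ranked, g[target_col])[0, 1]
    return df.groupby("era")[[pred_col, target_col]].apply(one_era)

# Synthetic stand-in for merged meta-model predictions + resolved targets
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "era": np.repeat(["0901", "0902"], 100),
    "numerai_meta_model": rng.random(200),  # column name assumed from the v4.2 file
    "target": rng.integers(0, 5, 200) / 4,  # targets take values 0, .25, .5, .75, 1
})
scores = era_corr(df, "numerai_meta_model", "target")
print(scores.mean())
```

Plotting the per-era series (rather than just the mean) is what would show whether there is a regime change around October or March versus ordinary variance.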
Are there any updates on how the fund is doing now? I see the performance stats are no longer on the website.
My hunch is that they got hit badly during the yen carry-trade unwind. The nature of the predictions (the 20-day lag, the 0/.25/.5/.75/1.0 grain, and the fact that many people aren’t ranking or neutralizing their predictions) was probably telling the fund to generate income during a period of range-bound price action, meaning they were overexposed to short volatility. In any normal market, selling puts below the price range would have been low-risk income. But the way we pass predictions, there is probably a bias towards mischaracterizing that risk. And from what I can see, the new dataset is overfitting the neutralization.
In a model that generally performs well, neutralization has no effect until after the carry-trade unwind, which is also when they would have been training the meta-model and neutralization for v5. The release post said the data was neutralized differently to reflect the goals of the fund; they told us they want to reward something other than what we predominantly predict. We can also see this in the recent divergence of CORR and MMC. I’ve suspected this is from losses still being realized.
The question I’m trying to figure out is what to model going forward. I wonder whether adding granularity to the predictions might help, along with shorter-window targets. If I were managing the use of numerous predictions to trade, I’d still want my 1-, 5-, and 15-minute data. I’d imagine some features are lags of others, and some targets are just lags and leads of others, but I can’t help wondering how we’d treat the data if we could be more intentional.
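For anyone unfamiliar with the “neutralizing your predictions” step mentioned above: it usually means removing the linear component of your predictions that is explained by the features, so the signal left over is orthogonal to known exposures. A minimal least-squares sketch (the feature matrix here is synthetic and the `neutralize` helper is illustrative, not an official tool):

```python
import numpy as np

def neutralize(preds, features, proportion=1.0):
    """Subtract the part of preds linearly explained by features.
    proportion=1.0 removes the full linear exposure; smaller values
    remove only a fraction of it."""
    beta, *_ = np.linalg.lstsq(features, preds, rcond=None)
    return preds - proportion * (features @ beta)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))                 # hypothetical feature matrix
p = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(500)
p_neutral = neutralize(p, X)

# After full neutralization, predictions are ~orthogonal to every feature
print(np.abs(X.T @ p_neutral).max())
```

In practice this is done era by era, and the neutralized vector is typically re-ranked before submission.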