Your fund is performing poorly compared to the metrics? I would be surprised if it were otherwise. This is the representation problem: whatever metrics you design and model, the unpredictable markets will always end up worse for you. That goes against your core belief (more of an unfounded ‘chartist’ hype, really) that you can make money with trading robots, other than by reacting instantly to data that the suckers out there haven’t even seen yet. So you struggle on, fiddling endlessly with the metrics: “Let’s just adjust the numerous ‘targets’ again and everything will be fine.”
Moving on to a concrete question. I have TC = 0.0104 and CORRV2 = 0.0058, but a negative CWMM = -0.0175. Will I be penalised by these changes, or will I benefit?
Best regards to everyone!
Does anyone have a quick code snippet to test the MMC of example predictions? I’ve tried using the new contributive_correlation function but I’m not sure I’m using it right.
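Not the official implementation, but here's a self-contained sketch of an MMC-style score following the published definition as I understand it (tie-kept rank, gaussianize, neutralize against the meta-model, then a covariance-style score against the target). The helper names `gaussianize`, `neutralize`, and `mmc_sketch` are my own; swap in your real prediction, meta-model, and target columns:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    # tie-kept rank mapped into (0, 1), then inverse normal CDF
    r = (rankdata(x, method="average") - 0.5) / len(x)
    return norm.ppf(r)

def neutralize(preds, reference):
    # remove the component of preds linearly explained by the reference
    ref = reference - reference.mean()
    beta = (preds @ ref) / (ref @ ref)
    return preds - beta * ref

def mmc_sketch(preds, meta_model, target):
    # gaussianize both series, neutralize preds against the meta-model,
    # then take a covariance-style score against the centered target
    p = gaussianize(preds)
    m = gaussianize(meta_model)
    neutral = neutralize(p, m)
    t = target - target.mean()
    return (neutral @ t) / len(t)

# toy example: predictions with genuine signal beyond the meta-model
rng = np.random.default_rng(0)
n = 5000
meta = rng.normal(size=n)
target = 0.1 * meta + rng.normal(size=n)
unique_preds = 0.05 * meta + 0.3 * target + rng.normal(size=n)
print(round(mmc_sketch(unique_preds, meta, target), 4))
```

A useful sanity check: submitting the meta-model itself should score ~0, since neutralizing it against itself leaves nothing. If the official contributive_correlation behaves very differently from this, the discrepancy would tell you which step (ranking, gaussianizing, or neutralization) you're applying incorrectly.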
As long as we’re able to accurately test our models with MMC internally, this should be a great change.
I don’t think it’s a coincidence that things started to go downhill for Numerai and the fund when they introduced new datasets and tinkered with scoring earlier in the year. They appear to have been scrambling and floundering ever since, to no avail and to the detriment of the competition as a whole. I can’t help but wonder how the fund and competition would be doing if they had simply left well alone this year and kept things stable. If a burgeoning fear of standing still and stagnating gave rise to an irresistible urge to experiment, then by all means introduce a new higher-risk fund with more targets and datasets to see how that works out, but keep what was already working well.

Changing multiple aspects of the competition at the same time seems highly problematic. If there is a well-founded hypothesis that scoring on MMC will improve fund performance, make that change, but don’t change anything else, and let it run for six months to a year. If there’s another idea, test it in a separate experiment run in parallel. But if scoring, targets, and probably features are all changed at the same time, how can you know what really works and makes a difference?
I don’t think dropping Corr entirely is good. What about scoring on only Corr and MMC? Without Corr, the goal becomes purely to maximize MMC, and is maximizing MMC, even on the new target, really in the best interest of the fund? You could also floor the multipliers at a minimum of 1x Corr and 1x MMC, so that nobody can ignore one metric and focus solely on the other.
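The floor-at-1x suggestion could be sketched like this. To be clear, this is a hypothetical payout function to illustrate the idea, not Numerai's actual payout formula, and the 1x floors are the proposal, not current rules:

```python
def payout(stake, corr, mmc, corr_mult=1.0, mmc_mult=1.0):
    """Hypothetical payout with both multipliers floored at 1x,
    so neither CORR nor MMC can be fully opted out of."""
    corr_mult = max(corr_mult, 1.0)  # floor: cannot ignore CORR
    mmc_mult = max(mmc_mult, 1.0)    # floor: cannot ignore MMC
    return stake * (corr_mult * corr + mmc_mult * mmc)

# using the CORRV2 and CWMM values from the earlier post, on a 1000 stake:
print(payout(1000, 0.0058, -0.0175))
```

Under equal 1x weights, a positive Corr with a larger negative MMC still nets out to a loss for that round, which is exactly why the relative weighting matters so much.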
When we get the new target at the end of the month will we also get new diagnostics?
Sounds like we are sticking with Cyrus for scoring. (re ark in discord)
This is possible, but personally I am more suspicious that the sudden drop in performance also correlates with the sudden rise in interest rates. Our training data comes from a period of time where interest rates were very low and it’s not clear that the strategies we have developed that worked well in that era will continue to work in this new environment.
But I agree with the broader point — if you change a lot of things too quickly it’s very hard to work out what actually had an impact. Maybe it’s interest rates, or maybe it was changing the dataset, or maybe it was changing the metrics. Since everything got changed in quick succession it’s hard to know.