Ever wondered how you perform versus your fellow Numerati?
Benchmark models too easy to beat / not embarrassing enough?
I present to you: yet another Numerai dashboard! Yep, that’s the name.
Just an average of whatever you see in the charts. And the charts, in turn, come from user profiles, so it’s the average over cumulative stake-weighted-average CORRv2/TC of all models.
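To make the aggregation concrete, here is a minimal sketch with made-up numbers (this is my reading of the description above, not Numerai's actual code): each user's models are averaged weighted by stake, and the benchmark line is the plain average of those per-user values.

```python
# Hypothetical illustration of "average over stake-weighted-average CORRv2".
# Scores and stakes below are invented for the example.

def stake_weighted_avg(scores, stakes):
    """Weighted average of per-model scores, weighted by NMR staked."""
    total = sum(stakes)
    return sum(s * w for s, w in zip(scores, stakes)) / total

# Two hypothetical users, each with a few models.
user_a = stake_weighted_avg(scores=[0.02, 0.01], stakes=[100, 300])  # 0.0125
user_b = stake_weighted_avg(scores=[0.03], stakes=[50])              # 0.03

benchmark = (user_a + user_b) / 2
print(round(benchmark, 5))  # 0.02125
```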
How are account titles like grand master, master, etc. created? Is this part of the API?
Yes. I think it’s part of the not-yet-released grandmasters proposal: Grandmaster Proposals - Google Docs
No real purpose here, just think of it as an easter egg, expert danzell
A small update: I managed to get the Numerai endpoints added to PythonAnywhere's Allowlisted sites for free users: PythonAnywhere, so community projects on PythonAnywhere should now be much easier to build. Heck, you can even try to do submissions from there.
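Since the endpoint is allowlisted, you can talk to Numerai's GraphQL API with just the standard library. A rough sketch below: the endpoint URL is the one numerapi uses, but the query field names and tournament id are from memory and may be out of date, so treat this as a starting point only.

```python
# Hedged sketch: build (and optionally send) a GraphQL request to Numerai
# from a PythonAnywhere free account. Query details are assumptions.
import json
import urllib.request  # only used if you uncomment the request below

ENDPOINT = "https://api-tournament.numer.ai/graphql"
payload = json.dumps(
    {"query": "{ rounds(tournament: 8, number: 0) { number } }"}
)

# Uncomment to actually send the request:
# req = urllib.request.Request(
#     ENDPOINT,
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["data"])
print(payload)
```

Submissions go through the same endpoint (numerapi wraps it), so once this works, uploading predictions from PythonAnywhere is mostly a matter of authentication.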
Really great! Thanks a lot!
For YAND 3.0, this is my 'wish list':
Save groups of models and assign a label, like ‘top performers MMC’, ‘top stackers’, ‘My lgb models’,…
and then be able to select a group to watch, instead of one by one.
YAND supports both user and model comparison for all 3 tournaments now: Main, Signals, and Crypto. This includes displaying the corresponding payout metrics for each of the tournaments. Check it out:
Signals:
I haven’t had much time for this project recently, so there are no UI changes, but for those who need it, I just pushed an updated version of YAND with support for sets of models and users across all 3 tournaments.
To use it, simply add ?set=... in the URL, where ... is a comma-separated list of models/users for the appropriate dashboard.
For example,
Note: the URL param overrides whatever is set in the dropdown, so the dropdown is ignored whenever the set param is present. You can still freely adjust the time range.
Using your browser bookmarks, you can achieve the groups @eleven_sigma requested, just with your browser instead of a fancy UI.
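For anyone scripting this, building such a bookmarkable group URL is a one-liner. The base URL and model names below are placeholders; only the `?set=` comma-separated-list convention comes from the post above.

```python
# Sketch: construct a YAND "group" URL from a list of models.
# Base URL and model names are hypothetical.

def group_url(base: str, models: list[str]) -> str:
    """Build a dashboard URL whose set param lists the given models."""
    return f"{base}?set={','.join(models)}"

top_mmc = ["model_a", "model_b", "model_c"]  # a hypothetical saved group
print(group_url("https://example.com/yand", top_mmc))
# -> https://example.com/yand?set=model_a,model_b,model_c
```

Bookmark one such URL per group, label it ('top performers MMC', 'My lgb models', …), and you effectively get saved groups.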