Help us improve Numerai Compute!

Hi all,

Research has shown that trading earlier (at market open instead of market close) and more often (daily instead of weekly) can lead to better execution and better performance for the Numerai Hedge Fund.

To support this future, we are exploring the idea of daily rounds with much shorter submission windows. This change will effectively make model automation mandatory.

Currently, only 4.4% of staked submission models are automated using the Numerai Compute system. Our goal is to bring this number up to 100% by making the developer experience of model automation as smooth and pain-free as possible.

Some questions for you:

  • Are your submissions automated now?
  • Do you submit from a home machine, compute/cloud or website?
  • Do you want to train your model locally or in the cloud?
  • How often do you retrain your models?
  • What cloud platform are you most comfortable with?
  • Do you use version control for your model code?
  • What are the biggest pain points with the current Compute setup?
  • How do you typically deploy a model to production?

Some proposals we have:

  • Automated command-line tool to deploy SageMaker Studio notebooks and a webhook. The CLI tool would deploy: a training notebook, a submission notebook, and a webhook that Numerai calls. This setup would remove the need for Docker and Terraform and would hopefully give users a more familiar UX by running things in a notebook. (A rough sketch of the webhook piece follows this list.)
  • Using Linode Stackscript to deploy a cloud computing machine with all the dependencies installed. It's much easier to get up and running in Linode vs. AWS, so this path would be helpful to users who aren't comfortable working with AWS.
  • Automated command-line tool to deploy a scheduled Google Colab notebook (h/t to bor for his Google cloud post here)
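To make the webhook idea concrete, here is a minimal sketch of what a self-hosted receiver could look like, assuming Flask; the route, port, and submit.py script are placeholders, not the actual proposed tooling:

```python
from subprocess import Popen

from flask import Flask  # pip install flask

app = Flask(__name__)

# Numerai would POST to this URL when a new round opens.
@app.route("/webhook", methods=["POST"])
def on_new_round():
    # Start the submission script in the background and return right away,
    # so the webhook call does not time out while predictions are computed.
    Popen(["python", "submit.py"])  # placeholder submission script
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```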

Which of these is most appealing to you? What changes would you make?

4 Likes

Daily submissions give us how long a submission window? 24 hours? And I know you want to talk about automation, but none of it is appealing to me without knowing the context of how staking and rounds would work. (My computing needs are onerous.) Same target horizon of 20 days? Would a round be ending every day, or would they be folded together into some kind of weekly average? (If the former, how would that overlap work? You'd have to cap daily payouts at 1/20th.) Etc., etc., i.e. is there additional (or faster-paced) reward for the additional burden? (Even if automated, and I don't care what anybody says, it will still be a much bigger burden and possibly with a significant actual dollar cost.) And you know… what if I just don't wanna do it and will only submit once or twice a week? I would imagine you'll get a lot of such gaps (even automated pipelines break, and there won't be much time to recover), so the daily metamodel may become quite "choppy" with models dropping in and out all the time.

As far as automation, my models take hours to run a set of predictions on live (on a very well-provisioned local machine). I'm capable of automating the pipeline locally (or remotely) without any help from Numerai, but if local, that's hours of development time lost every day while the computer is spitting out new predictions (I typically have it churning away on some new experiment practically 24/7), and if I want to avoid that and use the cloud, that would probably cost me $3-$8 daily (rough estimate), because I'd need a lot of cores to run it in reasonable time. (For the cloud, I'd probably get a Linode or similar with 32-56 cores and would need to run it for 1-3 hours/day at roughly $1-$3/hr depending on the setup; I'd have to experiment with different levels and see how fast they really are. It could also cost much more; I'm just guessing at the moment based on the rates I'm browsing.)

3 Likes

Please have account-level staking out before moving to mandatory automation; model slot switching would be hard with automation. Account-level staking and instant stake adjustment can eliminate the need for slot switching.

5 Likes
  • Are your submissions automated now?
    No.

  • Do you submit from a home machine, compute/cloud or website?
    Home machine.

  • Do you want to train your model locally or in the cloud?
    Locally

  • How often do you retrain your models?
    Very rarely, I hope. I just started my new architecture around round 308, and it's been unchanged since round 311.

  • What cloud platform are you most comfortable with?
    I've never used one. I'm such a neanderthal.

  • Do you use version control for your model code?
    No, but I could. It's not a big deal.

  • What are the biggest pain points with the current Compute setup?
    I am Python, Compute, and internet programming illiterate. And I currently work in MATLAB. :older_man: My bad. On the plus side, based on past experience I usually estimate 3 weeks to become functional in a new language and 6 weeks to become moderately fluent (I've worked in lots).

  • How do you typically deploy a model to production?
    I just code it up, spend some time debugging it, and then let it run. I am so bad at doing this properly :laughing: I used to be much more formal in my approach (working for years in defense research makes one that way), but I've happily put that behind me.

As for actually working on the problem as stated, I think I could adapt. The actual production of the 50 submissions takes just a minute or two; the whole process (download the live data, process it, upload the submissions) takes about 20 minutes. So I could probably still do it all by hand…

Despite my whining, if you feel this move is of benefit to Numerai, then go for it. Just give us a few weeks notice before switching over, and maybe some links to practice on.

As for the proposals, they're all Greek to me; I'm not familiar with any of it. :thinking: OTOH, if you simply do what you do for the Tournament, but on a daily basis with a twelve or twenty-four hour window, that would work well for me. Even a window of just a few hours is fine. Especially if you don't mind some entries being skipped (we're heading into beach weather).

Before I forget, I'm currently downloading Tournament results periodically to build up statistical estimates of my models' performance. I need those to estimate the best way of distributing my NMR before staking any significant amounts. If payouts are to be made on daily model submissions, then I'll need those results on a daily basis, though delayed a few days is fine.

1 Like

I use the current compute pipeline (docker+terraform) and I find it really easy to use because of the nice scripts that numerai provided. (For other projects I tried to set up servers using terraform on my own, and that was a lot more painful.) I like the setup. I do not really understand why so few people are using it.

Great point. This is probably one of the big reasons for the lack of automation (it is for me, along with the computing needs): we have to switch out models/slots all the time to control our staking.

1 Like
  • Are your submissions automated now?
    yes, fully automated. most of them using compute, some others also dockerized and scheduled with prefect
  • Do you submit from a home machine, compute/cloud or website?
    home machine for the non-compute ones. mainly because I retrain those models each week (or because I was too lazy to port to compute)
  • Do you want to train your model locally or in the cloud?
    locally for now. However, my machine is quite dated and I am looking into cloud training
  • How often do you retrain your models?
    Most of my models are not retrained. some need weekly training, which is fully automated
  • What cloud platform are you most comfortable with?
    GCP
  • Do you use version control for your model code?
    of course
  • What are the biggest pain points with the current Compute setup?
    I really enjoy the setup. Uploading the docker images is slow, which is a bit annoying. And I'd enjoy some better monitoring (trigger failures per model, response time per model, etc.)
  • How do you typically deploy a model to production?
    compute or prefect
2 Likes
  • Are your submissions automated now?

Yes, local task scheduler on windows that runs a batch file.

  • Do you submit from a home machine, compute/cloud or website?

home machine.

  • Do you want to train your model locally or in the cloud?

locally

  • How often do you retrain your models?

once every 6 months

  • What cloud platform are you most comfortable with?

Google, but willing to use anything that can do a few hours of compute and has local storage.

  • Do you use version control for your model code?

yes

  • What are the biggest pain points with the current Compute setup?

all I want from the compute solution from numer.ai is something that triggers the running of some batch file / startup script. I can do the downloading / prediction / submission myself.

  • How do you typically deploy a model to production?

Duplicate the previous production directory, add a fresh git archive to the directory, exclude the files that hold the model api keys from github, update a few files within it based on how my model has evolved in the last 6 months, and add the batch files that do the downloading/computing/submission to the task scheduler.

A notebook kind of template that has a webhook in it sounds interesting; not sure if it runs Clojure (there seems to be a Clojure/Jupyter bridge, so maybe that works). Otherwise, something that is most similar to having a home machine in the cloud that responds to a trigger from numer.ai by running a prespecified script would be great.
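Short of a webhook, that trigger could also be a small poller on the always-on box. A rough sketch in Python, assuming numerapi's check_new_round and a hypothetical predict_and_submit.sh that does the actual work:

```python
import subprocess
import time

from numerapi import NumerAPI  # pip install numerapi

napi = NumerAPI()

# Wait until a new round has opened, then hand off to a prespecified script.
while not napi.check_new_round():  # True once a round opened recently
    time.sleep(600)  # poll every 10 minutes

subprocess.run(["bash", "predict_and_submit.sh"], check=True)  # hypothetical script
```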

Some questions for you:

> * Are your submissions automated now?
> - semi-automated (that is, I press the button to start the pipeline, but it could be automated if I wanted to)
> * Do you submit from a home machine, compute/cloud or website?
> - home machine and gcp cloud
> * Do you want to train your model locally or in the cloud?
> - local and cloud
> * How often do you retrain your models?
> - maybe 1 model per 2/3 weeks
> * What cloud platform are you most comfortable with?
> - azure or gcp
> * Do you use version control for your model code?
> - lol sort of, github
> * What are the biggest pain points with the current Compute setup?
> - with terraform it's meant to be platform-agnostic, however looking at the current code it still seems like a lot of work to get that compute ready for gcp/azure/oracle/…
> * How do you typically deploy a model to production?
> - either overwrite python files directly (conda env) or use docker

I am sure that besides the current compute there are a lot of users in the community who could provide high-quality how-tos on setting up an automated pipeline in the cloud based on python and a compute engine. Maybe a bonus on those tutorials could also help get more people using a compute solution?

Some other questions you could ask:

  • How many models do you have and how long does it take for your current setup to actually predict and submit?
  • How much does your current prediction setup cost on a weekly/monthly basis?
1 Like
  • Are your submissions automated now?

Yes, all of them, but they are called from a single webhook (so your 4.4% number probably isn't correct).

  • Do you submit from a home machine, compute/cloud or website?

Cloud, a mix of AWS and Google Colab.

  • Do you want to train your model locally or in the cloud?

Cloud.

  • How often do you retrain your models?

Some of my models are retrained every week due to a custom dimensionality reduction step. Some of these take many hours to retrain (5h or more), and from time to time they fail due to cloud shenanigans. So I'm not happy about reducing the submission window.

  • What cloud platform are you most comfortable with?

AWS.

  • Do you use version control for your model code?

Yes.

  • What are the biggest pain points with the current Compute setup?

Certainly the overly complex setup required by AWS. And the fact that ECS doesn't support machines with GPUs.

  • How do you typically deploy a model to production?

I develop a notebook on Google Colab, and have a Puppeteer script run it from an ECS job.

If you go the Sagemaker way, please let us use "spot" instances. An alternative is AWS Batch, which deals with failures/retries. I'd love to have an easy, programmable, version-controlled setup for it.
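For what it's worth, submitting a containerized prediction job to AWS Batch programmatically is only a few lines with boto3; the queue and job definition names below are placeholders you would create beforehand, and spot vs. on-demand is chosen at the compute-environment level:

```python
import boto3  # pip install boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit a containerized prediction job; AWS Batch handles retries if the
# job definition is configured with a retry strategy.
response = batch.submit_job(
    jobName="numerai-submission",
    jobQueue="numerai-queue",           # placeholder: your job queue
    jobDefinition="numerai-predict:1",  # placeholder: your job definition
)
print("submitted job:", response["jobId"])
```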

1 Like

TL;DR: do not make things overly complex; maximum information and flexibility is not always best for participants, as they will feel suboptimal when they cannot work as much as they think they should.

In general, numerai tournament development is quite fast-paced to me. For many of us it's a side hobby that has flexibility (you can stop running experiments or do nothing for a month) and some returns (well, losses, given the current crypto market and my first stake time).
This is good, but it's less so with the frequent changes.

Luckily the community is active and data version changes are backward compatible. But it is easy for the development team to evaluate complexity increases relative to the current setup rather than in absolute terms, and it is the absolute terms that define the barrier to entry. As an illustration, one could imagine (roughly and not accurately) some experience levels:

  • manually download a .csv, look at it, create one manually and upload it via GUI
  • manually download a .csv and analyze it locally helped by whatever tool you use (excel, matlab, python, R…), manually generate a new csv and upload it
  • do the same with a .parquet file
  • understand and use an API to download data and upload predictions
  • automate the analysis and prediction generation in a programming language
  • update the data to a new version, be rich and buy plenty of ram (or code in C++, or run small-batched models or be really smart)
  • automate the training and execution of prediction-generating models
  • move the execution of models to the cloud
  • move the training of models to the cloud
  • deploy an NLP bot to answer this blogpost :exploding_head:

The development of the tournament is making participants climb this "sophistication" ladder, and this makes barriers to entry tougher. The psychology of the participant has to be taken into account: for instance, if in the new setup evaluation is done per week, no penalization occurs if a day's prediction is missed, and no intermediate information is provided, then the upgrade is simply automating a model and making it run every day. This is relatively easy given enough guidance. Otherwise, the participant will worry a lot about making the best possible thing, which involves 1) daily adjusting the stake, 2) making the automation overly robust, and 3) retraining every day. This is the spirit of @wigglemuse's post, which I subscribe to: we don't want to feel suboptimal, and this might make being optimal harder.

Now the questions

  • Are your submissions automated now?
    No, but I connect to the server once a week to run commands.sh which does everything I want
  • Do you submit from a home machine, compute/cloud or website?
    university server, oops
  • Do you want to train your model locally or in the cloud?
    I train it on the clusters of my institution, which is presumably more powerful and cheaper than what the cloud will provide
  • How often do you retrain your models?
    I'm just deploying v4, and I plan to finetune every week (of course, just because validation data is being updated)
  • What cloud platform are you most comfortable with?
    I have never used one, only remote servers via ssh
  • Do you use version control for your model code?
    Yes
  • What are the biggest pain points with the current Compute setup?
    I have to read how to use it
  • How do you typically deploy a model to production?
    For numerai, I have my predict scripts (frozen models), I run them, generate the csvs, and upload the csvs. Otherwise I have used cron

The biggest pain points are

  • I use python, C, and ssh in linux. Apparently I have to learn cloud and code in notebooks? This would be terrible! Colab does not have enough RAM to work comfortably. People that do data analysis don't need to know how to code, and people that know how to code don't need to know how to cloud-deploy.
  • if cloud is not free, then trying a model (without staking) will cost money, which is a huge barrier to entry because it takes starting in numerai from zero-cost and zero-risk to costly and risky. I can't see myself staking on a model without evaluating live performance, and if I can't evaluate live performance for free (as I do now) then I would not do numerai.
  • I think there should be an example project for numerai research and deployment of models; the example scripts are neither good enough nor simple enough to build upon / deploy

best

3 Likes

Each week's submission requires several un-automated hours to prepare.

Most of the time (about 2/3) is spent incorporating the newest 'live' rows into the web of connections between all the rows. Some call this 'unsupervised learning', but in fact I supervise it rather closely! The remaining time is spent using that web to interpolate target numbers into those live rows. Also un-automated.

If you require five or seven submissions a week, I'll just have to give up. Please keep weekly submission, at least as an option.

But if you do go daily: why not hourly, or minutely, or…? HFT has been a thing for some while now.

  • I use python, C, and ssh in linux. Apparently I have to learn cloud and code in notebooks? This would be terrible! Colab does not have enough RAM to work comfortably. People that do data analysis don't need to know how to code, and people that know how to code don't need to know how to cloud-deploy.
  • if cloud is not free, then trying a model (without staking) will cost money, which is a huge barrier to entry because it takes starting in numerai from zero-cost and zero-risk to costly and risky. I can't see myself staking on a model without evaluating live performance, and if I can't evaluate live performance for free (as I do now) then I would not do numerai.

It seems to me that the two options numer.ai should offer are:

  1. two notebooks - one that trains the example predictions model, and one that comes with a webhook that can submit live predictions every round, using the trained example predictions model. This is your free, low-barrier entry point. People can tweak the example predictions, store the models they like, and set up copies of the submitter notebook that each use a trained model. (A minimal sketch of such a submitter notebook follows this list.)

  2. some kind of solution that can wake up a box and trigger a script to run on some cloud server or trigger a script on some always-on box
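For the first option, the core of such a submitter notebook could be roughly the sketch below, assuming numerapi and a model file saved by the training notebook; the file names, credentials, and model ID are placeholders:

```python
import joblib  # assuming the training notebook saved a scikit-learn-style model
import pandas as pd

from numerapi import NumerAPI  # pip install numerapi

napi = NumerAPI(public_id="...", secret_key="...")  # your API credentials

# Download the current round's live data and generate predictions.
napi.download_dataset("v4/live.parquet", "live.parquet")
live = pd.read_parquet("live.parquet")
features = [c for c in live.columns if c.startswith("feature")]

model = joblib.load("example_model.pkl")  # placeholder: saved during training
live["prediction"] = model.predict(live[features])

# Write and upload the predictions for one of your model slots.
live["prediction"].to_frame().to_csv("predictions.csv")
napi.upload_predictions("predictions.csv", model_id="your-model-uuid")  # placeholder
```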

2 Likes

To be completely honest, I'd love to continue doing everything locally through a scheduled R script that uses the rnumerai package. I've looked into using compute, and as easy as it might be for some, I simply do not have the time to learn to do it.

Although my best model (Numerai) is fairly small and generates predictions quickly, many models I've experimented with take quite a bit of time, and submitting all 50 models takes my machine about 6 hours, I believe. I could limit experiments to smaller models, but of course that means reducing the diversity of my models. Alternatively, if I do keep experimenting with larger models, it means my computer is bogged down for six hours per day constantly. I suppose I could live with that as long as I have high-performing models to stake on, but it raises the "hassle factor" another notch closer to the point where I need to stop participating in the tournament entirely. I'd probably keep participating as long as I can keep doing things locally, but it would be a pain in the ass. My office is already hard to keep cool in the summer haha!

As for forced automation, it would raise the hassle factor substantially, and I would probably stop participating entirely for the foreseeable future. I have too many other projects (and children) to put in the time needed to make the switch. I know some will say it's only a few hours, but "a few hours" is essentially all of my free time as someone with 3 toddlers and an infant. I'm not doing that. At least, not until the little ones are not so little anymore.

In sum, my ability to keep participating is marginal at the moment and raising the hassle factor too far will tip me over. My guess is that at least some other participants are in the same boat. The question is whether the benefits of doing daily submissions with forced automation offset the downsides of losing marginal participants like myself.

(PS - Yes, I stake quite a bit on the model I linked, so although I am "marginal" my influence on the meta-model is non-zero. The one I linked is just the one I used to test it initially)

5 Likes

Your ideas are good!

  • I would also add a python script option; notebooks are nice for interacting with data, but it should work with a more fundamental script too.
  • the notebook/script should have a clear output (e.g. a csv file at some specified dir)

Are your submissions automated now?
No.

Do you submit from a home machine, compute/cloud or website?
Home machine.

Do you want to train your model locally or in the cloud?
Eventually want to train and submit models from the cloud.

How often do you retrain your models?
Once a model is trained I won't retrain unless there's new training data. However, when I have an idea to tweak an existing model, I train it as a separate model in order to track performance over time.

What cloud platform are you most comfortable with?
GCP.

Do you use version control for your model code?
Yes.

What are the biggest pain points with the current Compute setup?
Haven't tried it, but I saw that you have to have AWS in order to use it. Is that true?

How do you typically deploy a model to production?
Would love to have something like Databricks where you can just schedule-run a notebook.

  • Are your submissions automated now?

Yes

  • Do you submit from a home machine, compute/cloud or website?

Home machine

  • Do you want to train your model locally or in the cloud?

Locally

  • How often do you retrain your models?

Every week

  • What cloud platform are you most comfortable with?

Indifferent

  • Do you use version control for your model code?

Yes

  • What are the biggest pain points with the current Compute setup?

It is an unnecessary additional step that I don't need, which makes it a burden.
The simplicity and flexibility of numerai were the main reason I started playing with it years ago and one of the reasons numerai is so much better than other finance tournaments out there.

  • How do you typically deploy a model to production?

Long period of development and testing. Then simply update the production code folder to the latest version of my code repository, and that's it. My cronjob will take care of running and submitting predictions every week.
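For readers unfamiliar with cron, that kind of weekly schedule is a single crontab line; the path and script name below are illustrative, not the poster's actual setup:

```
# min hour day-of-month month day-of-week  command
0 14 * * 6  cd /home/me/numerai && python run_and_submit.py >> submit.log 2>&1
```

This runs a hypothetical run_and_submit.py every Saturday at 14:00 and appends its output to a log file.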

  • Are your submissions automated now?
    Yes

  • Do you submit from a home machine, compute/cloud or website?
    compute/cloud.

  • Do you want to train your model locally or in the cloud?
    Both but mostly locally currently.

  • How often do you retrain your models?
    Not very often

  • What cloud platform are you most comfortable with?
    AWS and GCP mostly

  • Do you use version control for your model code?
    Yes. I have a monorepo for all my models under git

  • What are the biggest pain points with the current Compute setup?
    AWS account memory limit was a hassle to get increased. Building docker images.

  • How do you typically deploy a model to production?
    Manually, currently. I have the production model, which is staked, then a number of others testing various updates to the code. Once I'm happy with the new code, I update the production model's code, rebuild the docker images, and deploy.

I do have some GitHub actions to lint the code.

I'm happy enough with AWS now that I have things running. It costs less than $1 a month to run, so it must be costing AWS more to process my payment.

Are your submissions automated now?

Yes

Do you submit from a home machine, compute/cloud or website?

cloud (AWS)

Do you want to train your model locally or in the cloud?

Locally for now, probably will use vast.ai later…

How often do you retrain your models?

Basically never.

What cloud platform are you most comfortable with?

AWS, Vultr

Do you use version control for your model code?

I use Git.

What are the biggest pain points with the current Compute setup?

I want to have 100% control of the deployment process and of what is being done in the background. Also, the idea that each model is triggered by a webhook is nonsense. I am using docker + amazon ECS + amazon Fargate + scheduled tasks.

How do you typically deploy a model to production?

docker build + docker push

Are your submissions automated now?

Yes

Do you submit from a home machine, compute/cloud or website?

Cloud (Azure Container Instances)

Do you want to train your model locally or in the cloud?

I don't have a preference. If the model is too big and does not fit on my computer, I'm ok using the cloud.

How often do you retrain your models?

Never. I just recently started doing ML stuff; I am not at the point where I would consider retraining on any basis.

What cloud platform are you most comfortable with?

Azure. Just because it is the one I have used the most.

Do you use version control for your model code?

Yes, git(hub).

What are the biggest pain points with the current Compute setup?

When I first arrived at numerai, it felt overwhelming. Probably because everything was new to me: ml, python, numerai rounds, cryptos, staking, corr, mc… It was very hard for me to make heads or tails of it. I ended up doing my own thing on Azure because I have some free monthly quota there; I learnt along the way, and I was uncertain Compute was what I wanted/needed.

How do you typically deploy a model to production?

Manually. It involves copying some files to a safe place on the Azure Cloud and modifying and pushing a docker image. I'm ok with it since it is a straightforward procedure and something I only do from time to time.

1 Like