I am just curious about it.
GPU: GTX 1080
CPU: Ryzen 3700, 8 cores
RAM: 128 GB
I’m probably going to get an RTX 4090 when those come out.
Thank you very much.
Similar specs to mine (5900HX, RTX 2080, 64 GB RAM).
I am also considering upgrading it.
11th Generation Intel Core i9-11900K @3.50GHz, 16 Logical processors
64 GB memory
NVidia GeForce RTX 3070
I don’t use the RTX 3070 much for Numerai, as most of my processing is non-linear.
Thank you very much.
Also I mainly use CPU now.
CPU: Intel i7-9800X, 3.80 GHz, 8 cores (16 threads)
GPU: RTX 3090
RAM: 128 GB
CPU: Intel i9-7900X (10 core, @3.3 GHz)
GPU: NVIDIA Geforce RTX 3090
RAM: 64 GB
built in early 2020; GPU upgrade in 2022; RAM upgrade in 2021 (32->64).
Training and predicting neural networks on the GPU; XGBoost training on the GPU but prediction on the CPU (the memory usage is too high even for the 3090).
I was hesitant about the 3090 at first because of the price, but now I’m very happy with it, mainly due to the fast iteration on modeling ideas. I came from a GTX 1060 6 GB.
Besides Numerai, I use the workstation to try out other compute-heavy things like simulations, or deep-learning-generated images from models like Stable Diffusion.
i7-6800k, GTX 1070, 128 GB of RAM, upgraded from 64 earlier this year!
Currently 64 GB is enough for me. Did you increase the RAM because 64 GB was not enough for you?
Yeah, I probably could’ve been smarter about some things, but I ran out of RAM when I was modeling with XGBoost, and I’m too lazy to get it working with less memory.
I currently use Google Colab Pro+, which has some sort of Intel Xeon, 50 GB RAM, and an Nvidia Tesla V100. Occasionally I get lucky and score an A100 instance.
Yes, XGBoost is very greedy with memory, especially on Windows. It can easily run out of memory even with 64 GB on the full v4 dataset.
For some reason, XGBoost behaves better under Linux, with fewer “out of memory” scenarios. I am not sure why.
In my experience, LightGBM can handle twice as much data as XGBoost using the same amount of RAM.
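Some back-of-envelope arithmetic makes the memory gap plausible: the raw feature matrix alone scales linearly with bytes per cell, and LightGBM’s histogram binning stores each feature value as a small bin index (1 byte when `max_bin` <= 255, the default) instead of a float. The row/column counts below are made-up placeholders, not the actual Numerai dataset dimensions:

```python
def matrix_gib(rows: int, cols: int, bytes_per_cell: int) -> float:
    """Raw in-memory size of a dense rows x cols matrix, in GiB."""
    return rows * cols * bytes_per_cell / 2**30

rows, cols = 2_000_000, 1_000  # placeholder shape, not the real dataset

print(f"float64:      {matrix_gib(rows, cols, 8):.1f} GiB")  # e.g. pandas default dtype
print(f"float32:      {matrix_gib(rows, cols, 4):.1f} GiB")
print(f"uint8-binned: {matrix_gib(rows, cols, 1):.1f} GiB")  # histogram bin indices
```

Training libraries also hold gradients, histograms, and working copies on top of the raw matrix, so peak usage is a multiple of these numbers — but the per-cell footprint is one concrete reason the histogram-binned representation fits roughly twice the data in the same RAM.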
EDIT: Google is now enforcing a credit system with Colab, so I can’t use their high-end GPUs nonstop anymore. I’ll probably shift model training to my desktop, which is an Intel 8700k, 64 GB RAM, Nvidia 3090.
Yeah, but you can’t say XGBoooOOOOOOSTTT with LightGBM
The dataset will probably get bigger in the future, and even 128 GB won’t be enough. Hopefully the price of NMR 10xs before that so I can buy a machine with even more memory.
By the way, why do you use XGBoost? Do you not use LightGBM?
CPU: AMD TR 1920x
GPU: RTX 3090 x2
3060 Ti (8 GB) + Intel i9-10980XE + 64 GB RAM. Heavily leaning on Intel MKL for a lot of the calculations, and only sending the NN training off to the GPU. A future GPU with larger memory might allow me to stay on the GPU more, but I wasn’t going to pay more than what they were asking for a 3060 Ti back when I bought one :-).
No reason in particular; it wins a lot of Kaggle competitions, and it’s sort of a running gag for me now to act silly and push data through XGBoost to solve all my problems.
That said, I learned recently that you can generally think of boosting algorithms as a non-parametric approach to estimation, which is a good reason why they do well in general. Combining a non-parametric approach with true structural assumptions about how the data behaves is more optimal, though, but it requires more work and thinking about the actual underlying process than just getting predictions out of XGBoost.
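The non-parametric point can be made concrete with a toy example: least-squares gradient boosting over depth-1 trees (stumps) recovers a nonlinear target without any parametric form being assumed. This is a self-contained sketch, not anyone’s production setup; all names and parameter values are made up:

```python
import numpy as np

def fit_stump(x, residual):
    """Pick the single threshold split on x that best fits the residual."""
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lval, rval = best
    return lambda z, t=t, lval=lval, rval=rval: np.where(z <= t, lval, rval)

def boost(x, y, n_rounds=200, lr=0.1):
    """Plain gradient boosting for squared loss: fit stumps to residuals."""
    pred = np.zeros_like(y)
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)  # residuals = negative gradient of squared loss
        pred += lr * stump(x)
        stumps.append(stump)
    return lambda z: lr * sum(s(z) for s in stumps)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 1000)
y = np.sin(x) + 0.1 * rng.standard_normal(1000)  # nonlinear target plus noise

model = boost(x, y)
mse = np.mean((model(x) - np.sin(x)) ** 2)  # error against the true function
```

No sine, polynomial, or any other functional form went into the model — the sum of stumps approximates the curve purely from the data, which is the sense in which boosting is non-parametric.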
CPU: 12-core Intel i9-10920X @ 3.5 GHz
GPU: 2x 3090 with 24 GB of memory on each card
Older secondary machine:
CPU: Quad-core Intel i7-7700k @ 4.2 GHz
GPU: 2x 2080 Ti with 11 GB of memory on each card
Both machines are headless.
I have 2x 3090 as well, blower-style 2-slot, so there’s room for 2 more if my PSU can handle it (at least 1 would require a riser cable).
Would linking them with NVLink help at all on the Numerai data, you reckon? Or not really? I’m mostly looking for an easy way to pool the RAM together for a total of 48 GB.