MLP hyperparameter tuning starter

Hi all,

I'm sharing my Kaggle notebook that builds a simple neural network (multi-layer perceptron) to fit the Numerai tournament data.

[numerai] MLP with KerasTuner Starter

Since NN hyperparameters are hard for beginners to tune, the same notebook also demonstrates how to use KerasTuner to fine-tune them automatically.

Hopefully this notebook helps anyone who has grown bored with an integration_test-like model and is willing to try out a NN.


Thank you for your contribution. I upvoted the notebook. I hope it helps with my first submission to Numerai.

By the way, how did you do in the “Jane Street Market Prediction” competition? Could you compare and contrast the two competitions so I can more easily understand what Numerai is like?

Thank you very much

I will share my approach to the Jane Street competition on Kaggle, not here, and only if I end up victorious. Otherwise there is no point in sharing it with anyone.

Numerai is simply easier to work on than Jane Street, as there is no time-series API complication. There are also quite a few resources for new starters, so you might want to have a look:

https://docs.numer.ai/tournament/new-users

Thank you for your advice.

Hi, thank you, this is actually pretty useful.

I am training an MLP and get similar learning curves and a similar histogram of predictions, yet when I upload my predictions I get a lower validation corr (between 0.0005 and 0.002), considerably smaller than the metrics you show at the end of the notebook.

Do you know why that may be? Is there something I am missing between predicting with the MLP and submitting the predictions?
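One thing worth checking before uploading: whether your local metric matches what Numerai scores. A common sketch of the per-era correlation check (correlation of rank-transformed predictions with the target, averaged over eras) looks like this; the column names follow the classic Numerai dataset and may differ in yours:

```python
# Sketch of a local per-era correlation check, assuming the classic
# Numerai column names ("era", "prediction", "target").
import numpy as np
import pandas as pd


def era_corr(df, pred_col="prediction", target_col="target", era_col="era"):
    def corr(group):
        # Rank-transform predictions within the era, then correlate with target.
        ranked = group[pred_col].rank(pct=True)
        return np.corrcoef(ranked, group[target_col])[0, 1]
    return df.groupby(era_col).apply(corr)


# Toy example with two eras:
df = pd.DataFrame({
    "era": ["era1"] * 4 + ["era2"] * 4,
    "prediction": [0.1, 0.2, 0.3, 0.4, 0.4, 0.3, 0.2, 0.1],
    "target": [0.0, 0.25, 0.5, 1.0, 1.0, 0.5, 0.25, 0.0],
})
per_era = era_corr(df)
print(per_era.mean())  # the mean of per-era corrs approximates validation corr
```

If this local number already disagrees with the notebook's reported metric, the gap is in the evaluation, not the upload; if it agrees, the gap appeared between prediction and submission (e.g. row ordering or ID alignment).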

Thank you

I don’t know, but a NN’s performance is sensitive to its hyperparameters, so I am not surprised. Maybe try increasing the batch size.
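A quick way to sanity-check batch-size sensitivity is to train the same model a few times with only that knob changed. This is a rough sketch with placeholder random data and an arbitrary small model, not the notebook's setup:

```python
# Sketch: compare training loss across batch sizes on placeholder data.
# The data, model, and sizes here are illustrative assumptions.
import numpy as np
import tensorflow as tf

X = np.random.rand(1024, 310).astype("float32")  # fake features
y = np.random.rand(1024).astype("float32")       # fake target

for batch_size in (256, 512, 1024):
    tf.keras.utils.set_random_seed(0)  # same init for a fair comparison
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(310,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, batch_size=batch_size, epochs=3, verbose=0)
    print(batch_size, hist.history["loss"][-1])
```

Fixing the seed before each run isolates the batch-size effect from random initialization, so the loss differences you see come from the knob you changed.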