Hi @bigbertha ,
So I’ve been trying to get the correlation/Sharpe loss working today, and here’s what I noticed:
XGBoost (like in that post) has a base_margin parameter that sets the initial predictions. This is really what we’d need instead of the randomized predictions. The annoying thing is that the equivalent init_score parameter only exists for lgb.fit(), not lgb.train(), and lgb.fit() doesn’t support custom loss functions.
Now I’ve been trying to work around it like this: fit a base model on RMSE, predict on the training data (bootstrap_preds), and then inside the custom loss function substitute those predictions whenever LightGBM passes in the initial all-zero predictions:
if np.all(preds == 0):
    preds = bootstrap_preds
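For concreteness, here’s a minimal sketch of that workaround, assuming the (preds, train_data) objective signature that lgb.train expects. The squared-error gradients below are just a placeholder for the actual correlation/Sharpe loss, and make_objective / bootstrap_preds are names I made up for the sketch:

```python
import numpy as np

def make_objective(bootstrap_preds):
    """Custom objective for lgb.train: on the first boosting round
    LightGBM passes all-zero predictions, so swap in the predictions
    of a pre-trained RMSE base model instead."""
    def objective(preds, train_data):
        y = train_data.get_label()
        if np.all(preds == 0):       # first iteration: no real preds yet
            preds = bootstrap_preds  # use the base model's predictions
        grad = preds - y             # gradient of 0.5 * (pred - y)^2
        hess = np.ones_like(preds)   # its (constant) second derivative
        return grad, hess
    return objective
```

You’d then pass make_objective(bootstrap_preds) as the objective to lgb.train. No idea yet if this fixes the indexing problem, though.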
However, now I’m running into indexing problems again: the index ypred_th[ee] goes out of bounds.
In short, a lot of hassle…
May I ask what mix of metrics you’ve been using for your parameters?