Hey Joakim, a prediction problem in machine learning is usually reduced to an optimization problem, so we need to minimize or maximize a function; for example, in the case of regression problems, RMSE, MAE, and R^2 are very popular. We can define an objective function as a function that has first (gradient) and second (Hessian) derivatives, whereas a metric does not need to be differentiable. We need the objective to be differentiable so that algorithms like gradient boosting (hence the name!), neural nets (via backpropagation), or even simple linear regression can be trained. In your list of metrics, only RMSE and MSE are differentiable (there are proxy functions for some metrics, like MAE, that can be used as an objective); the rest can only be used as metrics.
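To make the distinction concrete, here is a minimal sketch (plain NumPy/SciPy, the function names are mine): squared error as an objective comes with a gradient and Hessian, while Spearman rank correlation as a metric is computed on ranks and gives us nothing useful to differentiate.

```python
import numpy as np
from scipy.stats import spearmanr

# Objective: per-sample squared error 0.5 * (y_pred - y_true)^2.
# Boosting needs its first and second derivatives w.r.t. the predictions.
def squared_error_grad_hess(y_true, y_pred):
    grad = y_pred - y_true          # first derivative (gradient)
    hess = np.ones_like(y_pred)     # second derivative (Hessian), constant here
    return grad, hess

# Metric: Spearman rank correlation only scores predictions against targets;
# it is built on ranks, so there is no gradient to hand to the model.
def spearman_metric(y_true, y_pred):
    return spearmanr(y_true, y_pred)[0]
```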
Here in this tournament, the choice of metric is predefined (Spearman rank correlation), so we are solving a ranking problem. Which objective function our algorithms should minimize or maximize is an open question that we need to address. To pick a proper objective function, we first need to choose a validation scheme we trust, such as k-fold cross-validation, a time-based split, or adversarial validation. This is a simple but essential step; without proper validation, all our efforts are useless! After that, we can try a list of candidate objective functions and see which one, if any, improves our validation Spearman rank correlation score compared to the rest.
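A rough sketch of that objective search could look like this (the objective names are standard LightGBM aliases; the k-fold split and the toy data are just placeholders for whatever validation scheme and features you end up trusting):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold
import lightgbm as lgb

# Toy data, only so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X[:, 0] * 0.3 + rng.normal(size=2000)

def cv_spearman(X, y, objective, n_splits=5):
    """Mean validation Spearman score for one candidate objective."""
    scores = []
    for train_idx, valid_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = lgb.LGBMRegressor(objective=objective, n_estimators=500, learning_rate=0.05)
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[valid_idx])
        scores.append(spearmanr(y[valid_idx], preds)[0])
    return np.mean(scores)

# Try a few candidate objectives and keep whichever validates best.
for obj in ["regression", "regression_l1", "huber"]:  # MSE, MAE, Huber
    print(obj, cv_spearman(X, y, obj))
```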
As for trying multiclass classification: once the validation scheme is in place, we can use it to test those ideas as well (for example, ordinal multiclass classification with logistic regression).
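A minimal version of that idea: bin the target into ordered classes, fit a plain logistic regression, and turn the class probabilities back into a ranking score. The binning and scoring choices below are mine (this is not a full ordinal model, just a starting point to validate):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X[:, 0] * 0.3 + rng.normal(size=2000)        # toy continuous target

# Bin the continuous target into 5 ordered classes (quintiles).
y_class = pd.qcut(y, q=5, labels=False)

# Plain multinomial logistic regression on the binned target.
clf = LogisticRegression(max_iter=1000).fit(X[:1500], y_class[:1500])

# Turn class probabilities into an ordered score: the expected class index.
proba = clf.predict_proba(X[1500:])
score = proba @ np.arange(proba.shape[1])

print("validation Spearman:", spearmanr(y[1500:], score)[0])
```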
Note: we can easily define a custom objective function in XGBoost or LightGBM (we compute our proxy gradient and Hessian and feed them to the algorithm), for example:
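Here is a minimal sketch with XGBoost's native API. The objective is plain squared error, just to show where the gradient and Hessian are plugged in; in practice you would swap in your own proxy for the metric you care about:

```python
import numpy as np
import xgboost as xgb

def custom_squared_error(preds, dtrain):
    """Custom objective: return per-sample gradient and Hessian."""
    y = dtrain.get_label()
    grad = preds - y               # first derivative of 0.5 * (preds - y)^2
    hess = np.ones_like(preds)     # second derivative
    return grad, hess

# Toy data, only so the sketch runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X[:, 0] * 0.3 + rng.normal(size=2000)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 4, "eta": 0.05}, dtrain,
                    num_boost_round=200, obj=custom_squared_error)
```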