In my opinion, the confidence metric feels meaningless. It's an arbitrary value: whether my model barely beats random or predicts every result perfectly, I still get my payout at whatever confidence I set.
I'd like to propose a change to the system, so that "confidence" refers to how much of an improvement your model claims over random. If your model performs that well or better, you get a payout; if it does worse than your bet (even if still better than random), the stake is destroyed. This would let Numerai use the stake as an actual measure of a data scientist's confidence in their model's performance, rather than confidence that it will be some unspecified amount better than random.
Payouts would then work in reverse, with the stakes for the best performance being evaluated first, then moving down the list. Does this sound like a good idea, or have I missed some detail of the system that makes the current approach effective?
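To make the proposal concrete, here's a minimal sketch of the rule I have in mind. All the names, numbers, and the accuracy metric are illustrative assumptions on my part, not anything from Numerai's actual system: "confidence" is treated as a claimed improvement over a random baseline, stakes are settled best-performer-first, and a stake that falls short of its own claim is destroyed.

```python
# Hypothetical sketch of the proposed staking rule. Names and numbers are
# illustrative only -- this is not Numerai's actual payout logic.

RANDOM_BASELINE = 0.5  # assumed accuracy of random guessing on a binary target

def settle_stakes(stakes):
    """stakes: list of dicts with 'model', 'stake', 'confidence', 'accuracy'.

    Under the proposed rule, 'confidence' is the claimed improvement over
    random. Returns (paid_out, destroyed) lists of model names.
    """
    paid_out, destroyed = [], []
    # Evaluate the best-performing stakes first, then move down the list.
    for s in sorted(stakes, key=lambda s: s["accuracy"], reverse=True):
        claimed = RANDOM_BASELINE + s["confidence"]
        if s["accuracy"] >= claimed:
            paid_out.append(s["model"])  # met or beat its own claim
        else:
            # Worse than the bet -- stake destroyed, even if better than random.
            destroyed.append(s["model"])
    return paid_out, destroyed

example = [
    {"model": "a", "stake": 10, "confidence": 0.05, "accuracy": 0.56},
    {"model": "b", "stake": 10, "confidence": 0.10, "accuracy": 0.53},
]
print(settle_stakes(example))  # -> (['a'], ['b'])
```

Model "b" here beats random (0.53 > 0.5) but misses its own claim of 0.60, so its stake is destroyed, which is exactly the incentive I'm arguing for: overstating your confidence costs you.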