I've built a neural network, but its predictions always score worse than simply outputting 0.5 for every probability. Giving it more training cycles or training over more eras only worsens the problem, as it converges to predicting 0, 0.5, or 1 for everything. After reading up on log-loss evaluation I changed the targets from 0 and 1 to 0.4 and 0.6, which helped, but now it just converges to 0.4, 0.5, and 0.6 instead.
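To show the effect I mean, here's a quick sketch (the `log_loss` helper is my own, not from any library) of why confident-but-wrong predictions score so much worse than a flat 0.5, and why squashing predictions toward 0.5 softens the penalty:

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    # Binary cross-entropy, averaged over samples.
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([0, 1, 0, 1])

# A constant 0.5 guess scores ln(2) ~ 0.6931 regardless of the labels.
baseline = log_loss(y, np.full(4, 0.5))

# Confident predictions that are wrong half the time are punished hard.
confident = log_loss(y, np.array([0.99, 0.01, 0.01, 0.99]))

# The same predictions squashed toward 0.5 cap the per-sample penalty.
squashed = log_loss(y, np.array([0.6, 0.4, 0.4, 0.6]))

print(baseline, confident, squashed)
```

In this toy case the squashed predictions come out only slightly above the 0.5 baseline, while the confident ones are several times worse, which matches what I'm seeing.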
Does anyone have an explanation of this that I can read about? Or advice for how to solve it?
Also, I can't seem to get my consistency high enough. How is consistency actually evaluated?
Thanks for the help