A few days ago I uploaded an ensemble to ObjectScience_2 that passed concordance and originality. This morning I made some changes to the ensemble mix, tested it, didn’t like the results, and immediately uploaded the previous ensemble. Now it fails originality. The suck on that is pretty high. How can we be “original” if we don’t dare test anything?
So, in thinking about this overnight (while I was asleep, apparently), I need to adjust my workflow a little. I have to hold one of my accounts back strictly for testing during the week. FE and such that shows promise during that phase can get pushed up into the modeling process the following Wednesday. Once those are locked in, I’m not touching them… Lesson learned.
We are planning on making concordance and originality open source in the future, so this should help give more transparency to why things pass or fail.
That would be awesome.
In general, a loading indicator or the like would be more useful as well, rather than originality defaulting to fail and then switching to a pass after some time. Makes for unnecessary waits.
I do think there could be something in between so we know the check for originality is still processing. Originality can be hard to get and a little too easy to lose. Some additional precision wouldn’t hurt.
That’s something we’ve got coming soon.
+1. I set a timer for one minute after uploading. As a UI/UX guy myself, I think a blinking dot could work.
“You’ve just achieved originality…”
It’s probably worth following up on my original post…
The risk of losing originality in the current setup has actually forced me to be more creative. In the last two weeks I’ve managed to dig “originality” out of the competition, relatively late in the round. Prior to this, I would have just focused on improving the score of the first model (not very original). Instead I’ve been forced to find “new” attacks, which has ultimately made me better at the project. That has given me fallback options when something quits working or isn’t performing as well as I’d like.
So, I guess, to answer my own question, “Could Originality Actually Hurt Creativity?”, the answer is no. It forces real creativity if you’re willing to keep grinding. So I would encourage you to do just that. Keep experimenting and keep pushing; it’s out there if you’re willing to put in the time.
(This model was generated almost 20 hours after round 65 started)
I’ll have to respectfully disagree with you there, as I’ve just experienced quite the opposite. I was training a model, let’s call it model A, and uploaded its results. After a few more trainings with different seeds, I got better results on validation; this all happened in a five-minute timespan. I uploaded model v2.0 and now originality fails. I attempted to revert to model v1.0, and now it claims originality is failing for it too, so that model can go in the garbage. This seriously needs to be fixed.
@ddd It might be a case of just waiting a little longer. I have uploaded several models that pass originality, then tweaked them so logloss improves but originality doesn’t pass. When I revert, the system accepts it, so I am unsure whether they get instantly burned. (Perhaps the higher-ups can enlighten us.)
What did throw me off is that when submitting a new model, the values remain unchanged from the previous model, so you get the impression that the model fails or passes concordance and originality when it might not. You just have to wait a few minutes to make sure.
The only thing I’m not sure about is the notice about maximum uploads; I need to test more next tournament. Also, the originality loading spinner seemed to hang until I clicked the close button. But overall it’s an improvement in my view.
yaaaaaaaaaaaaaaaaaaaaayyyyyyyyyyyyyyyyyyyyyyyyyyyy! That brings the awesome. So much information in that little thing, it’s going to make a lot of people very happy, myself included.
Massive thumbs up for this change! Well done!
Is making concordance and originality open source still the plan? It would be interesting to see if there are ways to improve how originality is determined to see if there is a better way to prevent multiple uploads of the same model + noise.
Hmmm, one of my accounts was passing both concordance and originality (I waited several minutes for the loading indicator to turn green, and it was green for that day). I logged in today, and to my surprise it’s suddenly not passing originality. What would warrant this?