Thanks to the FORCE organizers and Xeek for hosting this competition.
On behalf of all competitors, I would like to request that the number of daily submissions be increased. My first reason is that my validation scores and my leaderboard scores have been in stark contrast to one another (even an inverse relation). I don’t know if there’s a reason for the one-per-day restriction, but I believe that with the opportunity to make more than one submission, at least two, contestants would have the chance to learn what works better and hence build better models. I hope other contestants see this too and support the suggestion.
Many thanks to Xeek and the FORCE organizers for kindly looking into this. Thank you!
Hi, and thanks for the feedback. What do you mean by an inverse relation between validation scores and leaderboard scores?
The main reason for having a limit on leaderboard submissions is to keep participants from “brute forcing” the test data set. That being said, we will look into increasing the daily limit to two submissions.
We will allow two submissions daily from now on.
Thank you for your response.
And by “inverse relation” I mean: the better my validation score (using the default metric) on my validation set, which I created by splitting the train dataset, the worse the model scores on the open test dataset when it gets submitted. I wonder if this is due to the manner in which I created/split the validation set from the train dataset. But since the daily limit will be increased to two, hopefully I’ll get to see what I am doing wrong.
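In case it helps to make the splitting question concrete: a random row-wise split lets samples from the same well land on both sides of the split, which can inflate validation scores, whereas a grouped split holds out whole wells. Here is a minimal sketch with scikit-learn; the file name and the `WELL`/`LITHOLOGY` column names are placeholders, not the actual competition columns:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

train = pd.read_csv("train.csv")  # hypothetical path

X = train.drop(columns=["WELL", "LITHOLOGY"])  # placeholder names
y = train["LITHOLOGY"]
groups = train["WELL"]  # one group id per well

# Hold out ~20% of the *wells* (not rows), so no well contributes
# samples to both the training and the validation side.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups))

X_train, X_val = X.iloc[train_idx], X.iloc[val_idx]
y_train, y_val = y.iloc[train_idx], y.iloc[val_idx]
```

Since the closed test set comes from unseen wells, holding out whole wells should track the leaderboard more closely than a row-wise split would.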
Thank you so much for this!
If you are doing model selection and hyperparameter tuning based on some small chosen validation set, then it is not unexpected that you will reach a point where an improvement in validation score gives a decreased test score on the leaderboard. I suggest using a decent-sized validation set, or perhaps cross-validation.
Remember, if you try to use the leaderboard itself for model selection by submitting twice a day, you may end up high on the leaderboard but with the exact same problem during final scoring, which will be done on a hidden test dataset.
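To make the cross-validation suggestion concrete, here is a minimal sketch using scikit-learn’s GroupKFold, grouped by well. The file name, column names, model choice, and plain accuracy scoring are all placeholder assumptions, not the competition’s actual setup:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

train = pd.read_csv("train.csv")               # hypothetical path
X = train.drop(columns=["WELL", "LITHOLOGY"])  # placeholder names
y = train["LITHOLOGY"]
groups = train["WELL"]

model = RandomForestClassifier(n_estimators=200, random_state=0)

# GroupKFold keeps all rows from a given well inside a single fold,
# so every validation fold consists of entirely unseen wells,
# which is closer to how the hidden test set is drawn.
cv = GroupKFold(n_splits=5)
scores = cross_val_score(model, X, y, groups=groups, cv=cv,
                         scoring="accuracy")
print(f"mean={scores.mean():.3f}  std={scores.std():.3f}")
```

Grouping by well matters because adjacent depth samples within a well are highly correlated; an ordinary row-wise K-fold would put near-duplicates of each validation row into the training folds and make the scores look better than they really are.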
Thanks for the suggestions; they are insightful. I’m quite aware that it’s entirely possible to overfit the training data by using it for validation, and I don’t intend to do so, since the closed test dataset comes from different wells. Given how large the train data is, it made me wonder whether a model could be overfitting to patterns in the train data that are absent from the test data, which is also why I asked for the submission limit to be relaxed: to understand the train data better, not the open test data.