This is the second challenge in our series on extracting data from old crossplots. It differs from the previous axes challenge in two ways: the scoring metric is Levenshtein distance, which suits the text theme, and the duration has been extended to six weeks.
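If you haven’t worked with the metric before: Levenshtein distance is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another, so an exact match scores 0 and lower is better. The challenge’s exact scoring code isn’t reproduced here, but a minimal pure-Python sketch (with a hypothetical label pair) behaves like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the previous prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance between a[:i] and the empty prefix of b
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Hypothetical example: one OCR misread ("t" read as "l") costs one edit
print(levenshtein("Porosity (%)", "Porosily (%)"))  # -> 1
```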
We are looking forward to seeing your ideas about how to solve this deceptively simple challenge.
The data for “Detect Text on Crossplots” follows the same theme as the recently closed “Identify and Extract Axes Values.” However, the data is unique to each challenge.
Please try logging out and then back into Xeek; the data only appears for users who are logged in. Note that “Identify and Extract Axes Values” is no longer available, as that challenge has closed.
No. What I want is to download the data for this competition. As I said two messages above, I understand that I must use the same data as the “Identify and Extract Axes Values” competition. To download that data I need access, but that competition is closed. Or is there another way to access the data for “Detect Text on Crossplots”?
There might be an issue with a previous login. Please try logging into Xeek using an Incognito (Chrome) or Private (Edge) window. Are you prompted to enter an MFA code? If so, you should be able to navigate to “Detect Text on Crossplots” and check whether you have joined the challenge (purple button on the right side of the Overview tab). If you are signed in and have joined the challenge, you should be able to see the data. If those options don’t work, please email support@xeek.ai, and our Support Team will get you the data and manually connect you to the challenge.
tien2020.le2020, you may not be able to see the data because you haven’t officially joined the challenge. Could you please check your status with the challenge and let us know whether you can see the data?
You may use whatever model you wish. However, to be considered for a prize, you must submit your model, including any supplemental materials. Make sure that whatever model you submit has an open or permissive license.
Hi all!
The competition will end soon, and I have a few questions:

1. In the rules of the competition, I found two conflicting dates for the end of the competition: November 2, 2022 (in the “COMPETITION TIMELINE” section) and November 3, 2022 at 23:00 UTC (in the introduction). Which one is correct?
The rules state: “The judges will score the top 10 submissions on accuracy and interpretability. The accuracy of the submission counts for 90% of the final score. Accuracy will be determined using the same scoring algorithm described above for the Predictive Leaderboard. The top 20% of scores will receive maximum points (90).”
2. Will the maximum points (90) go to the top 20% of the 10 selected submissions (i.e., the top 2), or to the top 20% of all participants?
3. Do the top 10 have time to prepare their notebooks? Is their deadline also November 2 or 3?
The rules also state: “The interpretability metric counts for 10% of the final score. This qualitative metric focuses on the degree of documentation, clearly stating variables for models and using doc strings and markdown. Submissions with exceptional interpretability will receive maximum points (10).”
4. If I understand correctly, several maximally documented notebooks could receive the maximum interpretability score. If those same participants are also in the top 20% from question 2, will there be several winners?
5. Can you tell us more about how the notebooks will be assessed? Given the same degree of documentation, will a simpler solution be preferred?
6. If I trained a neural network, should I provide the training code? Will it be evaluated too, and how well should it be documented?
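To make my reading of the rules concrete, here is a minimal sketch of the scoring arithmetic as I understand it (the point values passed in below are purely illustrative, not real scores):

```python
def final_score(accuracy_points: float, interpretability_points: float) -> float:
    """My reading of the quoted rules: accuracy contributes up to 90 points
    and interpretability up to 10, summed into a 100-point final score."""
    assert 0 <= accuracy_points <= 90          # accuracy is 90% of the final score
    assert 0 <= interpretability_points <= 10  # interpretability is 10%
    return accuracy_points + interpretability_points

# Illustrative only: a top-20% accuracy submission (maximum 90 points)
# with strong documentation scoring 8 of 10 on interpretability
print(final_score(90, 8))  # -> 98
```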
Hi team, I just want to make sure my final submission was received. I sent it over email but haven’t gotten a response yet; I hope it wasn’t caught in your spam filter.