Something wrong with the Eval.AI evaluation?

#4 · opened by yuexiang96

Hi there,

Thanks for the great work! I tried several model submissions (including Gemini 1.5 Pro and LLaVA 1.5), but it seems that all of the returned accuracies are around 0.25, which is roughly chance level for 4-option multiple choice. There might be something wrong with the evaluation setup on your backend: I inspected some of the predictions marked as wrong, and many of them are actually correct. I also calculated the overlap between different models' predictions, and the agreement rate is well above what random guessing would produce. So I suspect there is a technical issue on the Eval.AI backend.
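For reference, here is a minimal sketch of the overlap check I ran. The file names (`gemini_preds.json`, `llava_preds.json`) and the format (a JSON mapping from question ID to predicted answer letter) are assumptions for illustration, not the benchmark's actual submission format:

```python
import json

def load_predictions(path):
    """Load a {question_id: predicted_choice} mapping (assumed format)."""
    with open(path) as f:
        return json.load(f)

def overlap_rate(preds_a, preds_b):
    """Fraction of shared question IDs where two models give the same answer."""
    shared = preds_a.keys() & preds_b.keys()
    if not shared:
        return 0.0
    agree = sum(preds_a[q] == preds_b[q] for q in shared)
    return agree / len(shared)

# Hypothetical submission files for the two models mentioned above.
gemini = load_predictions("gemini_preds.json")
llava = load_predictions("llava_preds.json")

print(f"Agreement between models: {overlap_rate(gemini, llava):.2%}")
# If both models were effectively guessing at random on 4-way multiple
# choice, accuracy near 0.25 would be expected, but their mutual agreement
# should also sit near chance level. Agreement well above that suggests
# the predictions themselves are sensible and the scorer is at fault.
```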

Thank you!

yuexiang96 changed discussion status to closed
