WildDash Benchmark

We are excited to be part of the Robust Vision Challenge 2018. Check out the challenge website and our submission instructions for further details on how to participate. We are looking forward to seeing you at CVPR!

Submissions will open in mid-February 2018.

To be listed on the public leaderboard, please follow these steps:

  1. Get the data. Create an account to get access to the download links. The download packages contain additional technical submission details.
  2. Compute your results. Identical parameter settings must be used for all frames.
  3. Upload and submit. Log in, upload your results, add a brief description of your method, and submit for evaluation. To support double-blind review processes, author details may initially be kept anonymous and updated later upon request.
  4. Check your results. Your submission will be evaluated automatically. You will be notified via email as soon as we manually approve the evaluation results. Please note that we do not support multiple submissions of the same method or of only marginally different variants of it.

The WildDash Benchmark is part of the semantic and instance segmentation challenges of the Robust Vision Challenge 2018. If you want to participate, follow the instructions above and append _ROB as a suffix to your method name (e.g., MyMethod_ROB). Please note that you must use the same model and parameter setup to compute your results for all benchmarks of the respective challenge.