ADE-OoD: A Benchmark for Out-of-Distribution Detection Beyond Road Scenes

ADE-OoD is a benchmark for dense out-of-distribution (OoD) detection on general natural images. Its goal is to extend both the in-distribution and the out-of-distribution domains beyond the common road-scenes paradigm. The main characteristics of the benchmark are:

  • 111 high-quality annotated samples
  • Large in-distribution ontology, consisting of the 150 semantic categories of the ADE20K dataset
  • Diverse indoor and outdoor scenes
  • Diverse out-of-distribution objects with high variety in appearance, size, and placement
  • Compatibility with a widely used semantic segmentation dataset (ADE20K) and with models trained on it (see the evaluation sketch after this list)
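
Since the benchmark is compatible with models trained on ADE20K, evaluation amounts to producing a per-pixel OoD score map for each image and comparing it against the binary OoD annotations. Below is a minimal sketch of such an evaluation loop; the directory layout (images/, masks/), the mask convention (nonzero = OoD), and the score_pixels model hook are illustrative assumptions, not the benchmark's documented interface.

      import glob
      import os

      import numpy as np
      from PIL import Image
      from sklearn.metrics import average_precision_score, roc_auc_score

      # Hypothetical layout of the extracted benchmark archive; check the
      # actual zip for the real directory names and mask convention.
      BENCHMARK_DIR = "ade_ood"

      def score_pixels(image):
          """Model hook (placeholder): return one OoD score per pixel,
          higher = more anomalous, e.g. the negative max-softmax of a
          segmentation model trained on the 150 ADE20K classes."""
          raise NotImplementedError

      def evaluate():
          all_scores, all_labels = [], []
          for img_path in sorted(glob.glob(os.path.join(BENCHMARK_DIR, "images", "*"))):
              mask_path = os.path.join(BENCHMARK_DIR, "masks", os.path.basename(img_path))
              image = np.asarray(Image.open(img_path).convert("RGB"))
              # Assumed convention: nonzero mask pixels mark OoD regions.
              labels = (np.asarray(Image.open(mask_path)) > 0).astype(np.uint8)
              scores = score_pixels(image)
              all_scores.append(scores.ravel())
              all_labels.append(labels.ravel())
          scores = np.concatenate(all_scores)
          labels = np.concatenate(all_labels)
          # Dense OoD metrics: pixel-level average precision and AUROC.
          print("AP:   ", average_precision_score(labels, scores))
          print("AUROC:", roc_auc_score(labels, scores))

Pixel scores from all images are pooled before computing the metrics, which is one common protocol for dense OoD benchmarks.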

Download

The download link points to a zip file containing the benchmark data. By downloading the data, you agree to the terms of use of the ADE20K dataset.

  • Data
  • Code
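
Once downloaded, the archive can be unpacked with standard tooling, for example (the file name ade_ood.zip is an assumption; use the actual name of the downloaded zip):

      import zipfile

      # Extract the benchmark archive into a local directory.
      zipfile.ZipFile("ade_ood.zip").extractall("ade_ood")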

Citations

If you use this benchmark in your research, please cite the following papers:


      @inproceedings{GalessoECCV2024,
        author    = {Silvio Galesso and Philipp Schr\"oppel and Hssan Driss and Thomas Brox},
        title     = {Diffusion for Out-of-Distribution Detection on Road Scenes and Beyond},
        booktitle = {ECCV},
        year      = {2024}
      }

      @inproceedings{Zhou_2017_CVPR,
        author    = {Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
        title     = {Scene Parsing Through ADE20K Dataset},
        booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {July},
        year      = {2017}
      }

      @article{OpenImages,
        author  = {Alina Kuznetsova and Hassan Rom and Neil Alldrin and Jasper Uijlings and Ivan Krasin and Jordi Pont-Tuset and Shahab Kamali and Stefan Popov and Matteo Malloci and Alexander Kolesnikov and Tom Duerig and Vittorio Ferrari},
        title   = {The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale},
        journal = {IJCV},
        year    = {2020}
      }

Acknowledgements

The research leading to these results was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project numbers 401269959 and 417962828, and by the German Federal Ministry for Economic Affairs and Climate Action within the project “NXT GEN AI METHODS - Generative Methoden für Perzeption, Prädiktion und Planung”. The authors would like to thank the consortium for the successful cooperation.