Abstract: In this paper, we present a large-scale Hurricane Michael dataset for visual perception in disaster scenarios and analyze state-of-the-art deep neural network models for semantic segmentation. The dataset consists of around 2,000 high-resolution aerial images with annotated ground-truth data for semantic segmentation. We discuss the challenges posed by the dataset and train state-of-the-art methods on it to evaluate how well these methods recognize disaster situations. Finally, we discuss open challenges for future research.