Abstract: In post-event reconnaissance missions, engineers and researchers collect perishable information about damaged buildings in the affected geographical region to learn from the consequences of the event. A typical post-event reconnaissance mission begins with a preliminary survey, followed by a detailed survey. The preliminary survey is typically conducted by driving slowly along a pre-determined route, observing the damage, and noting where further detailed data should be collected. This involves several manual, time-consuming steps that can be accelerated by exploiting recent advances in computer vision and artificial intelligence. The objective of this work is to develop and validate an automated technique to support post-event reconnaissance teams in the rapid collection of reliable and sufficiently comprehensive data for planning the detailed survey. The technique incorporates several methods designed to automate the process of categorizing buildings based on their key physical attributes and rapidly assessing their post-event structural condition. It is divided into pre-event and post-event streams, each intended to first extract all available information about the target buildings from the corresponding pre-event and post-event images. Algorithms based on convolutional neural networks (CNNs) are implemented for scene (image) classification. A probabilistic approach is developed to fuse the results obtained from analyzing several images to yield a robust decision regarding the attributes and condition of a target building. We validate the technique using post-event images captured during reconnaissance missions that took place after Hurricanes Harvey and Irma. The validation data were collected by a structural wind and coastal engineering reconnaissance team, the National Science Foundation (NSF)-funded Structural Extreme Events Reconnaissance (StEER) Network.
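The abstract describes fusing per-image CNN results into one robust per-building decision. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common choice: naive-Bayes fusion, which sums the log-likelihoods of each image's class probabilities and normalizes. The function name and the two-class damage example are illustrative assumptions.

```python
import numpy as np

def fuse_predictions(per_image_probs, prior=None):
    """Illustrative naive-Bayes fusion of per-image class probabilities
    for a single target building (not the paper's exact formulation).

    per_image_probs: array-like of shape (n_images, n_classes), where each
    row is one image's softmax output from the CNN classifier.
    """
    probs = np.asarray(per_image_probs, dtype=float)
    n_classes = probs.shape[1]
    if prior is None:
        prior = np.full(n_classes, 1.0 / n_classes)  # uniform class prior
    # Sum log-likelihoods across images and add the log-prior.
    log_post = np.log(probs + 1e-12).sum(axis=0) + np.log(prior)
    log_post -= log_post.max()  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()    # normalized posterior over classes

# Hypothetical example: three images of the same building,
# classes = [undamaged, damaged].
fused = fuse_predictions([[0.6, 0.4], [0.7, 0.3], [0.2, 0.8]])
```

Combining evidence in the log domain keeps a single ambiguous image from dominating the decision, which is the practical point of fusing several views of the same building.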
Abstract: After a disaster, teams of structural engineers collect vast numbers of images of damaged buildings to draw lessons and gain knowledge from the event. Images of damaged buildings and components provide valuable evidence for understanding the event's consequences for our structures. However, such images are often captured without sufficient spatial context, and severely damaged buildings may be difficult to recognize. Incorporating past images showing the pre-disaster condition of such buildings helps to accurately evaluate the circumstances that may have contributed to a building's failure. One of the best resources for observing the pre-disaster condition of buildings is Google Street View. A sequence of 360° panorama images captured along streets provides all-around views at each location on the street. Once the GPS coordinates near a building are known, all external views of the building can be made available. In this study, we develop an automated technique to extract past building images from 360° panorama images served by Google Street View. Users only need to provide a geo-tagged image collected near the target building, and the rest of the process is fully automated. High-quality, undistorted building images are extracted from past panoramas. Since the panoramas are collected from various locations near the building along the street, the user can identify its pre-disaster condition from the full set of external views.
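The abstract states that undistorted building images are extracted from equirectangular street-view panoramas. The paper's pipeline is not reproduced here; the sketch below shows only the standard geometric step such a system needs, a gnomonic (rectilinear) reprojection of the panorama toward a chosen viewing direction. All function and parameter names are illustrative assumptions, and nearest-neighbor sampling is used for brevity.

```python
import numpy as np

def pano_to_perspective(pano, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    """Extract an undistorted rectilinear view from an equirectangular
    panorama (H x W x C array) looking toward (yaw, pitch).

    Illustrative sketch: nearest-neighbor sampling, no interpolation.
    """
    H, W = pano.shape[:2]
    yaw, pitch, fov = np.radians([yaw_deg, pitch_deg, fov_deg])
    f = (out_w / 2) / np.tan(fov / 2)  # pinhole focal length in pixels
    # Build a ray through each output pixel; the camera looks along +z.
    x = np.arange(out_w) - out_w / 2 + 0.5
    y = np.arange(out_h) - out_h / 2 + 0.5
    xx, yy = np.meshgrid(x, y)
    dirs = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    # Convert ray directions to spherical coordinates on the panorama.
    lon = np.arctan2(d[..., 0], d[..., 2])       # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))   # latitude in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]
```

Sweeping the yaw angle over a panorama captured near the target building yields the set of undistorted external views from which the pre-disaster condition can be inspected.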