Urban Resilience.AI Lab, Zachry Department of Civil and Environmental Engineering, Texas A&M University, College Station, United States
Abstract: Disruptions to medical infrastructure during disasters pose significant risks to critically ill patients with advanced chronic kidney disease or end-stage renal disease. To enhance patient access to dialysis treatment under such conditions, it is crucial to assess the vulnerabilities of critical care facilities to hazardous events. This study proposes optimization models for patient reallocation and the strategic placement of temporary medical facilities to bolster the resilience of the critical care system, with a focus on equitable outcomes. Utilizing human mobility data from Texas, we evaluate patient access to critical care and dialysis centers under simulated hazard scenarios. The proposed bio-inspired optimization model, based on ant colony optimization, efficiently reallocates patients to mitigate disrupted access to dialysis facilities. The model outputs offer valuable insights into patient and hospital preparedness for disasters. Overall, the study presents a data-driven, analytics-based decision support tool designed to proactively mitigate potential disruptions in access to critical care facilities during disasters, tailored to the needs of health officials, emergency managers, and hospital system administrators in both the private and public sectors.
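A minimal sketch of how an ant colony optimizer can reassign patients from disrupted dialysis facilities to sites with spare capacity while minimizing total travel time. The travel-time matrix, capacities, and hyperparameters below are invented placeholders, and the equity terms in the paper's objective are omitted for brevity; this is an illustration of the technique, not the paper's model.

```python
# Ant colony optimization sketch for patient-to-facility reallocation.
# All data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_facilities = 30, 5
travel = rng.uniform(5, 90, size=(n_patients, n_facilities))  # minutes (synthetic)
capacity = np.array([8, 8, 6, 5, 5])                          # open dialysis chairs

n_ants, n_iters, alpha, beta, rho = 20, 50, 1.0, 2.0, 0.1
pheromone = np.ones((n_patients, n_facilities))
heuristic = 1.0 / travel                                      # prefer nearby sites

best_cost, best_assign = np.inf, None
for _ in range(n_iters):
    for _ in range(n_ants):
        remaining = capacity.copy()
        assign = np.empty(n_patients, dtype=int)
        for p in rng.permutation(n_patients):
            open_sites = remaining > 0                        # respect capacity
            prob = (pheromone[p] ** alpha) * (heuristic[p] ** beta) * open_sites
            prob /= prob.sum()
            f = rng.choice(n_facilities, p=prob)
            assign[p] = f
            remaining[f] -= 1
        cost = travel[np.arange(n_patients), assign].sum()
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    pheromone *= (1 - rho)                                    # evaporation
    pheromone[np.arange(n_patients), best_assign] += 1.0 / best_cost  # reinforce best

print(f"best total travel time: {best_cost:.1f} minutes")
```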
Abstract: High-resolution flood probability maps are essential for addressing the limitations of existing flood risk assessment approaches but are often limited by the availability of historical event data. Producing the simulated data needed to create probabilistic flood maps with physics-based models also requires substantial computation and time, limiting its feasibility. To address this gap, this study introduces Flood-Precip GAN (Flood-Precipitation Generative Adversarial Network), a novel methodology that leverages generative machine learning to simulate large-scale synthetic inundation data for producing probabilistic flood maps. With a focus on Harris County, Texas, Flood-Precip GAN begins by training a cell-wise depth estimator on a limited number of physics-based model-generated precipitation-flood events. This model, which emphasizes precipitation-based features, outperforms universal models. Subsequently, a Generative Adversarial Network (GAN) with constraints is employed to conditionally generate synthetic precipitation records. Strategic thresholds are established to filter these records, ensuring close alignment with true precipitation patterns. For each cell, synthetic events are smoothed using a K-nearest neighbors algorithm and processed through the depth estimator to derive synthetic depth distributions. By iterating this procedure to generate 10,000 synthetic precipitation-flood events, we construct flood probability maps in various formats, considering different inundation depths. Validation through similarity and correlation metrics confirms the fidelity of the synthetic depth distributions relative to true data. Flood-Precip GAN provides a scalable solution for generating the synthetic flood depth data needed to create high-resolution flood probability maps, significantly enhancing flood preparedness and mitigation efforts.
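As a minimal illustration of the final map-building step, the sketch below turns a stack of synthetic per-cell depth samples into exceedance-probability maps at several inundation thresholds. All arrays are random stand-ins for the generator and depth-estimator outputs, and the grid size and thresholds are invented.

```python
# Build flood probability maps from synthetic depth samples (stand-in data).
import numpy as np

rng = np.random.default_rng(1)
n_events, n_rows, n_cols = 10_000, 50, 50
# stand-in for 10,000 synthetic inundation events on a 50x50 cell grid (meters)
synthetic_depth = rng.gamma(shape=0.4, scale=0.5, size=(n_events, n_rows, n_cols))

# exceedance probability per cell for several inundation-depth thresholds
for threshold_m in (0.15, 0.5, 1.0):
    prob_map = (synthetic_depth > threshold_m).mean(axis=0)   # fraction of events
    print(f"P(depth > {threshold_m} m): grid max = {prob_map.max():.3f}")
```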
Abstract: In the field of crisis/disaster informatics, social media is increasingly being used to improve situational awareness and inform response and relief efforts. Efficient and accurate text classification tools have been a focal area of investigation in crisis informatics. However, current methods mostly rely on single-label text classification models, which fail to capture the different insights embedded in dynamic and multifaceted disaster-related social media data. This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM) through instruction fine-tuning targeted at multi-label classification of disaster-related tweets. Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM, thereby embedding it with disaster-specific knowledge. The fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid, significantly improving the utility of social media data for situational awareness in disasters. The results demonstrate that this approach enhances the categorization of critical information from social media posts, thereby supporting more effective situational awareness during emergencies. This research paves the way for more advanced, adaptable, and robust disaster management tools, leveraging the capabilities of LLMs to improve real-time situational awareness and response strategies in disaster scenarios.
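A sketch of the instruction-dataset construction step, assuming a simple instruction/input/output record layout. The aspect names, label schema, and example tweet are illustrative, not the paper's exact format.

```python
# Convert a labeled tweet into an instruction-tuning record for
# multi-label disaster classification (hypothetical schema).
import json

def to_instruction_record(tweet: str, labels: dict) -> dict:
    instruction = (
        "Classify the disaster-related tweet along three aspects: "
        "event type, informativeness, and human-aid involvement. "
        "Answer as JSON."
    )
    return {"instruction": instruction, "input": tweet, "output": json.dumps(labels)}

record = to_instruction_record(
    "Water rising fast on Elm St, family needs rescue from rooftop",
    {"event_type": "flood", "informative": True, "human_aid": "rescue_needed"},
)
print(json.dumps(record, indent=2))
```

Records in this form can then be fed to any standard instruction fine-tuning pipeline for an open-source LLM.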
Abstract: Near-real-time estimation of damage to buildings and infrastructure, referred to as damage nowcasting in this study, is crucial for empowering emergency responders to make informed decisions regarding evacuation orders and infrastructure repair priorities during disaster response and recovery. Here, we introduce FloodDamageCast, a machine learning framework tailored for property flood damage nowcasting. The framework leverages heterogeneous data to predict residential flood damage at a resolution of 500 meters by 500 meters within Harris County, Texas, during 2017 Hurricane Harvey. To deal with data imbalance, FloodDamageCast couples generative adversarial network-based data augmentation with an efficient machine learning model. The results demonstrate the model's ability to identify high-damage spatial areas that would be overlooked by baseline models. Insights gleaned from flood damage nowcasting can help emergency responders identify repair needs more efficiently, allocate resources, and streamline on-the-ground inspections, thereby saving both time and effort.
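To illustrate the imbalance-handling idea, the sketch below augments a scarce high-damage class with synthetic samples before fitting an efficient learner. A Gaussian perturbation of minority samples stands in for the paper's GAN generator, and the features, labels, and model choice (scikit-learn's HistGradientBoostingClassifier) are placeholders.

```python
# Synthetic-minority augmentation followed by an efficient classifier.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2_000, 12))                                     # grid-cell features
y = (X[:, 0] + rng.normal(scale=2.0, size=2_000) > 2.5).astype(int)  # rare damage class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# GAN stand-in: jitter resampled minority examples to create synthetic ones
minority = X_tr[y_tr == 1]
n_extra = (y_tr == 0).sum() - (y_tr == 1).sum()
idx = rng.integers(0, len(minority), size=n_extra)
synthetic = minority[idx] + rng.normal(scale=0.1, size=(n_extra, X.shape[1]))

X_aug = np.vstack([X_tr, synthetic])
y_aug = np.concatenate([y_tr, np.ones(n_extra, dtype=int)])

model = HistGradientBoostingClassifier().fit(X_aug, y_aug)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```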
Abstract: Street view imagery, aided by advancements in image quality and accessibility, has emerged as a valuable resource for urban analytics research. Recent studies have explored its potential for estimating lowest floor elevation (LFE), offering a scalable alternative to traditional on-site measurements that is crucial for assessing properties' flood risk and damage extent. While existing methods rely on object detection, the introduction of image segmentation has broadened the utility of street view images for LFE estimation, although challenges remain in segmentation quality and in distinguishing front doors from other doors. To address these challenges, this study integrates the Segment Anything Model, a segmentation foundation model, with vision language models to conduct text-prompt image segmentation on street view images for LFE estimation. By evaluating various vision language models, integration methods, and text prompts, we identify the most suitable configuration for street view image analytics and LFE estimation, thereby improving the coverage of segmentation-based LFE estimation from 33% to 56% of properties. Remarkably, the proposed method extends LFE estimation to nearly all properties whose front door is visible in the street view image. The findings also present the first baseline comparison of vision models for street view image-based LFE estimation. The model and findings not only advance street view image segmentation for urban analytics but also offer a novel approach to image segmentation for other civil engineering and infrastructure analytics tasks.
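A schematic sketch of the text-prompted pipeline: a vision-language grounding step localizes the front door, and a box-prompted segmenter returns its mask, whose lowest row serves as the LFE anchor point. Because the actual model interfaces vary by release, both calls are stubbed with dummy implementations so the script runs end to end; everything here is a hypothetical stand-in, not the paper's implementation.

```python
# Text-prompted front-door segmentation pipeline with stubbed model calls.
import numpy as np

def ground_text_to_box(image: np.ndarray, prompt: str) -> tuple[int, int, int, int]:
    """Stand-in for a vision-language grounding model: text -> bounding box."""
    h, w = image.shape[:2]
    return (w // 3, h // 2, 2 * w // 3, h - 10)   # (x0, y0, x1, y1) dummy box

def segment_with_box(image: np.ndarray, box: tuple) -> np.ndarray:
    """Stand-in for Segment Anything prompted with the box: box -> binary mask."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder street view image
box = ground_text_to_box(image, "the front door of the house")
mask = segment_with_box(image, box)

# LFE anchor: lowest image row touched by the front-door mask, later mapped
# to a ground elevation using camera pose and depth information.
door_bottom_row = np.max(np.where(mask.any(axis=1))[0])
print(f"front-door bottom at image row {door_bottom_row}")
```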
Abstract: Inspired by ideas from health risk assessment, this paper presents a new perspective for flood risk assessment. The proposed perspective focuses on three pillars for examining flood risk: (1) inherent susceptibility, (2) mitigation strategies, and (3) external stressors. These pillars collectively encompass the physical and environmental characteristics of urban areas, the effectiveness of human-intervention measures, and the influence of uncontrollable external factors, offering a fresh point of view for decoding flood risk. For each pillar, we delineate its individual contribution to flood risk and illustrate the pillars' interactive and overall impact. The three-pillar model embodies a shift in focus from the quest to precisely model and quantify flood risk toward evaluating pathways to high flood risk. This shift in perspective is intended to temper the pursuit of quantifying and predicting flood risk at fine resolutions as a panacea for enhanced flood risk management. Decomposing flood risk pathways into the three intertwined pillars (i.e., inherent factors, mitigation factors, and external factors) enables evaluation of how changes in factors within each pillar attenuate or exacerbate flood risk, creating a platform from which to inform plans, decisions, and actions. Building on this foundation, we argue that a flood risk pathway analysis approach, which examines the individual and collective impacts of inherent factors, mitigation strategies, and external stressors, is essential for a nuanced evaluation of flood risk. Accordingly, the proposed perspective can complement existing frameworks and approaches for flood risk assessment.
Abstract: There has been significant progress in improving the performance of graph neural networks (GNNs) through enhancements in graph data, model architecture design, and training strategies. For fairness in graphs, recent studies achieve fair representations and predictions through either graph data pre-processing (e.g., node feature masking and topology rewiring) or fair training strategies (e.g., regularization, adversarial debiasing, and fair contrastive learning). How to achieve fairness in graphs from the model architecture perspective is less explored. More importantly, GNNs exhibit worse fairness performance than multilayer perceptrons because their model architecture (i.e., neighbor aggregation) amplifies biases. To this end, we aim to achieve fairness via a new GNN architecture. We propose \textsf{F}air \textsf{M}essage \textsf{P}assing (FMP), designed within a unified optimization framework for GNNs. Notably, FMP \textit{explicitly} renders sensitive attribute usage in \textit{forward propagation} for the node classification task using cross-entropy loss, without data pre-processing. In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic-group node representation centers together. In this way, the FMP scheme can aggregate useful information from neighbors and mitigate bias to achieve a better fairness-prediction tradeoff. Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets. The code is available at \url{https://github.com/zhimengj0326/FMP}.
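A toy numpy sketch of the two FMP steps on a six-node graph: (1) symmetric-normalized neighbor aggregation, then (2) a debiasing step that pulls the two demographic groups' representation centers toward each other. The graph, features, and step size are invented for illustration; see the paper and repository for the exact propagation rule.

```python
# Aggregation followed by an explicit group-center debiasing step (toy example).
import numpy as np

rng = np.random.default_rng(3)
n, d = 6, 4
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(n)                            # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt              # symmetric normalization

X = rng.normal(size=(n, d))                      # node features
s = np.array([0, 0, 0, 1, 1, 1])                 # sensitive attribute

H = P @ X                                        # step 1: neighbor aggregation
gap = H[s == 0].mean(axis=0) - H[s == 1].mean(axis=0)
eta = 0.5                                        # debiasing step size (illustrative)
H[s == 0] -= eta * gap / 2                       # step 2: push group centers together
H[s == 1] += eta * gap / 2

new_gap = H[s == 0].mean(axis=0) - H[s == 1].mean(axis=0)
print(f"center gap norm: before={np.linalg.norm(gap):.3f}, "
      f"after={np.linalg.norm(new_gap):.3f}")
```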
Abstract: Community resilience is a complex, multi-faceted phenomenon that emerges from nonlinear interactions among different socio-technical systems and their resilience properties. However, existing studies on community resilience focus primarily on vulnerability assessment and utilize index-based approaches, with limited ability to capture heterogeneous features within community socio-technical systems and their nonlinear interactions in shaping the robustness, redundancy, and resourcefulness components of resilience. To address this gap, this paper presents an integrated three-layer deep learning model for community resilience rating (called Resili-Net). Twelve measurable resilience features are specified and computed within community socio-technical systems (i.e., facilities, infrastructure, and society) related to the three resilience components of robustness, redundancy, and resourcefulness. Using publicly accessible data from multiple metropolitan statistical areas in the United States, Resili-Net characterizes the resilience of spatial areas at five distinct levels. The interpretability of the model outcomes enables feature analysis for specifying the determinants of resilience in areas within each resilience level, allowing for the identification of specific resilience enhancement strategies. Changes in community resilience profiles under urban development patterns are further examined by perturbing the values of related socio-technical system features. Accordingly, the outcomes provide novel perspectives for community resilience assessment by harnessing machine intelligence and heterogeneous urban big data.
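A hedged stand-in for a Resili-Net-style rating model: twelve resilience features mapped to five resilience levels with a small three-hidden-layer network. The layer widths, training data, and label rule below are placeholders, not the paper's architecture or data.

```python
# Three-hidden-layer classifier: 12 resilience features -> 5 resilience levels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1_500, 12))                  # 12 socio-technical features (synthetic)
# synthetic label rule producing five ordered resilience levels (0..4)
y = np.clip((X[:, :3].sum(axis=1) + 7.5) // 3, 0, 4).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```

Feature analysis of such a model (e.g., via permutation importance) is what enables level-specific interpretation of resilience determinants.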
Abstract: The standard model of infrastructure resilience, the resilience triangle, has been the primary way of characterizing and quantifying infrastructure resilience. However, this theoretical model merely provides a one-size-fits-all framework for all infrastructure systems. Most existing studies examine the characteristics of infrastructure resilience curves using analytical models constructed upon simulated system performance, and the limited empirical work has hindered our ability to fully understand and predict resilience characteristics in infrastructure systems. To address this gap, this study examined more than 200 resilience curves related to power outages in three major extreme weather events. Using unsupervised machine learning, we identified distinct curve archetypes as well as the fundamental properties of each archetype. The results show two primary archetypes for power system resilience curves: triangular and trapezoidal. Triangular curves characterize resilience behavior based on (1) critical functionality threshold, (2) critical functionality recovery rate, and (3) recovery pivot point. Trapezoidal archetypes explain resilience curves based on (1) duration of sustained function loss and (2) constant recovery rate: the longer the duration of sustained function loss, the slower the constant rate of recovery. The findings of this study provide novel perspectives enabling better understanding and prediction of the resilience performance of power system infrastructure.
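A toy sketch of the archetype-discovery step: synthesize noisy triangular and trapezoidal performance curves and recover the two groups with k-means. The curve generators, noise levels, and cluster count are invented for illustration; the study's clustering method and outage data may differ.

```python
# Cluster synthetic resilience curves into triangular vs. trapezoidal archetypes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)                        # normalized event timeline

def triangular():
    drop = rng.uniform(0.3, 0.8)                 # critical functionality threshold
    pivot = rng.uniform(0.3, 0.7)                # recovery pivot point
    curve = np.where(t < pivot, 1 - drop * t / pivot,
                     1 - drop * (1 - (t - pivot) / (1 - pivot)))
    return curve + rng.normal(scale=0.02, size=t.size)

def trapezoidal():
    drop = rng.uniform(0.3, 0.8)
    hold = rng.uniform(0.2, 0.5)                 # duration of sustained function loss
    curve = np.clip(1 - drop * np.minimum(t / 0.1, 1), 0, 1)
    recovery = 1 - drop * np.clip(1 - (t - 0.1 - hold) / (0.9 - hold), 0, 1)
    curve = np.where(t > 0.1 + hold, recovery, curve)
    return curve + rng.normal(scale=0.02, size=t.size)

curves = np.array([triangular() for _ in range(100)] +
                  [trapezoidal() for _ in range(100)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print("cluster sizes:", np.bincount(labels))
```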
Abstract: Understanding the key factors shaping environmental hazard exposures and the associated environmental injustice issues is vital for formulating equitable policy measures. Traditional perspectives on environmental injustice have primarily focused on socioeconomic dimensions, often overlooking the influence of heterogeneous urban characteristics. This limited view may obstruct a comprehensive understanding of the complex nature of environmental justice and its relationship with urban design features. To address this gap, this study creates an interpretable machine learning model to examine the effects of various urban features and their non-linear interactions on the exposure disparities of three primary hazards: air pollution, urban heat, and flooding. The analysis trains and tests models with data from six metropolitan counties in the United States using Random Forest and XGBoost. Model performance is used to measure the extent to which variations in urban features shape disparities in environmental hazard levels. In addition, feature importance analysis reveals sociodemographic characteristics as the most prominent urban features shaping hazard extent, while features related to infrastructure distribution and land cover are relatively important for urban heat and air pollution exposure, respectively. Moreover, we evaluate the models' transferability across different regions and hazards. The results highlight limited transferability, underscoring the intricate differences among hazards and regions and the ways in which urban features shape hazard exposures. The insights gleaned from this study offer fresh perspectives on urban features and their interplay with environmental hazard exposure disparities, informing the development of more integrated urban design policies to enhance social equity and address environmental injustice.
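A hedged sketch of the feature-importance and transferability analysis using a random forest: fit on synthetic stand-ins for one county's tract-level features, rank feature importances, then score on a second county generated with a different feature-target relationship to mimic limited cross-region transferability. The feature names and data-generating rules are invented.

```python
# Random forest feature importance plus a cross-region transfer check.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
features = ["median_income", "pct_minority", "impervious_cover",
            "tree_canopy", "road_density", "building_age"]

X_a = rng.normal(size=(1_000, len(features)))           # "county A" tracts
y_a = 0.6 * X_a[:, 1] + 0.3 * X_a[:, 2] + rng.normal(scale=0.3, size=1_000)
X_b = rng.normal(size=(1_000, len(features)))           # "county B" tracts
y_b = 0.1 * X_b[:, 1] + 0.7 * X_b[:, 3] + rng.normal(scale=0.3, size=1_000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_a, y_a)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:>16s}: {imp:.3f}")
# low (often negative) R^2 on county B illustrates limited transferability
print(f"R^2 transferred to county B: {model.score(X_b, y_b):.2f}")
```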