Abstract: Floods are among the most prevalent and destructive natural disasters, often causing severe social and economic impacts in urban areas due to the high concentration of assets and population density. In Iran, and particularly in Tehran, recurring flood events underscore the urgent need for robust urban resilience strategies. This paper explores flood resilience models to identify the most effective approach for District 6 of Tehran. Through an extensive literature review, various resilience models were analyzed, with the Climate Disaster Resilience Index (CDRI) emerging as the most suitable model for this district owing to its comprehensive resilience dimensions: Physical, Social, Economic, Organizational, and Natural resilience. Although the CDRI model provides a structured approach to resilience measurement, it remains a static model focused on spatial characteristics and lacks temporal adaptability. This study enhances the CDRI model by integrating data from 2013 to 2022 at three-year intervals and applying machine learning techniques to predict resilience dimensions for 2025. This integration yields a dynamic resilience model that accommodates temporal change, providing a more adaptable and data-driven foundation for urban flood resilience planning. By employing artificial intelligence to reflect evolving urban conditions, the model offers valuable insights for policymakers and urban planners seeking to enhance flood resilience in Tehran's critical District 6.
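To make the prediction step concrete, the following is a minimal sketch, not the paper's implementation, of extrapolating CDRI dimension scores to 2025 with scikit-learn. The dimension names follow the abstract, while the observation years' score values are illustrative placeholders, not real data.

```python
# Minimal sketch: fit one regression per CDRI dimension over the
# three-year-interval observations and extrapolate to 2025.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2013], [2016], [2019], [2022]])  # observation years
dimensions = ["Physical", "Social", "Economic", "Organizational", "Natural"]
# Hypothetical CDRI scores (rows: years, columns: dimensions).
scores = np.array([
    [3.1, 2.8, 2.5, 2.9, 2.2],
    [3.2, 2.9, 2.7, 3.0, 2.1],
    [3.4, 3.0, 2.6, 3.1, 2.0],
    [3.5, 3.2, 2.8, 3.3, 1.9],
])

for i, dim in enumerate(dimensions):
    model = LinearRegression().fit(years, scores[:, i])
    pred_2025 = model.predict([[2025]])[0]
    print(f"{dim}: predicted 2025 score = {pred_2025:.2f}")
```

With only four observations per dimension, a simple linear trend is about as much as the data can support; richer models would need longer time series.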
Abstract: The Forward-Forward Learning (FFL) algorithm is a recently proposed approach for training neural networks without memory-intensive backpropagation. During training, labels accompany the input data, marking each example as a positive or negative input, and each layer learns its response to these inputs independently. In this study, we enhance FFL with the following contributions: 1) We optimize label processing by segregating label and feature forwarding between layers, improving learning performance. 2) By revising label integration, we streamline the inference process, reduce computational complexity, and improve performance. 3) We introduce feedback loops akin to cortical loops in the brain, where information cycles through and returns to earlier neurons, enabling layers to combine higher-level features fed back from deeper layers with their own lower-level features, thereby improving learning efficiency.
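For readers unfamiliar with the baseline, the following is a minimal PyTorch sketch of a single Forward-Forward layer trained locally on a "goodness" objective (the sum of squared activations), in the spirit of Hinton's original formulation. It does not include this paper's label-segregation or feedback-loop contributions, and the threshold and learning rate are assumed values.

```python
# Minimal sketch of one locally trained Forward-Forward layer:
# goodness of positive inputs is pushed above a threshold,
# goodness of negative inputs below it; no gradients cross layers.
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize so only the direction of the activity
        # vector is passed on to the next layer.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness of positives
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness of negatives
        # Logistic loss: g_pos should exceed the threshold, g_neg fall below.
        loss = torch.log1p(torch.exp(torch.cat(
            [self.threshold - g_pos, g_neg - self.threshold]))).mean()
        self.opt.zero_grad()
        loss.backward()  # local gradients only
        self.opt.step()
        # Detach outputs so the next layer trains independently.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```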
Abstract: Deep Neural Networks are powerful tools for understanding complex patterns and making decisions. However, their black-box nature impedes a complete understanding of their inner workings. Saliency-Guided Training (SGT) methods attempt to alleviate this problem by highlighting the features most prominent in the model's training with respect to its output. These methods use back-propagation and modified gradients to guide the model toward the most relevant features while keeping the impact on prediction accuracy negligible. SGT makes the model's final result more interpretable by partially masking the input; given the model's output, we can then infer how each segment of the input affects that output. When the input is an image, masking is applied to the input pixels. However, the masking strategy and the number of pixels to mask are hyperparameters, and their setting directly affects the model's training. In this paper, we focus on this issue. We propose a novel method that determines the optimal number of masked pixels during training based on the input, accuracy, and model loss. The strategy prevents information loss, which leads to better accuracy. Moreover, by integrating the model's performance into the strategy's formula, we show that our model represents the salient features more meaningfully. Our experimental results demonstrate a substantial improvement in both model accuracy and the prominence of saliency, affirming the effectiveness of the proposed solution.
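As an illustration of the general mechanism, the following is a minimal PyTorch sketch of one saliency-guided training step in which the k least-salient pixels are masked and k is adapted from the loss. The adaptation rule shown is a hypothetical stand-in, not the paper's formula.

```python
# Minimal sketch of one saliency-guided training step:
# mask the k lowest-saliency pixels, then train on the masked input.
import torch
import torch.nn.functional as F

def saliency_guided_step(model, optimizer, x, y, k):
    # 1) Gradient of the loss w.r.t. the input gives a per-pixel saliency map.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0].abs()

    # 2) Zero out the k least-salient pixels of each image.
    flat = grad.flatten(1)
    idx = flat.argsort(dim=1)[:, :k]            # indices of lowest saliency
    x_masked = x.detach().flatten(1).clone()
    x_masked.scatter_(1, idx, 0.0)
    x_masked = x_masked.view_as(x)

    # 3) Update the model on the masked input, steering it toward
    #    the salient features it kept.
    optimizer.zero_grad()
    masked_loss = F.cross_entropy(model(x_masked), y)
    masked_loss.backward()
    optimizer.step()
    return masked_loss.item()

def adapt_k(k, loss, prev_loss, k_min=50, k_max=500):
    # Hypothetical rule: mask more pixels while the loss still improves,
    # back off when masking starts to hurt.
    k = k + 10 if loss < prev_loss else k - 10
    return max(k_min, min(k_max, k))
```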
Abstract: Deep Neural Networks are powerful tools for understanding complex patterns and making decisions. However, their black-box nature impedes a complete understanding of their inner workings. While online saliency-guided training methods try to highlight the prominent features in the model's output to alleviate this problem, it remains unclear whether these visually explainable features align with the model's robustness against adversarial examples. In this paper, we investigate the vulnerability of saliency-trained models to adversarial example methods. Models are trained using an online saliency-guided training method and evaluated against popular adversarial example algorithms. We quantify the robustness and conclude that, despite the well-explained visualizations in the model's output, saliency-trained models suffer from lower performance under adversarial attacks.
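To show what such an evaluation typically looks like, the following is a minimal sketch, under assumptions rather than the paper's exact protocol, of measuring clean versus FGSM-perturbed accuracy for a trained PyTorch model; the epsilon value is an assumed setting.

```python
# Minimal sketch: compare clean accuracy with accuracy under a
# single-step FGSM attack of strength eps.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Perturb the input one step along the sign of the input gradient.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def clean_accuracy(model, loader, device):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def robust_accuracy(model, loader, device, eps=8 / 255):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, eps)  # gradients needed here
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

The gap between the two accuracy figures is the robustness measure such a study would report; stronger iterative attacks (e.g., PGD) follow the same evaluation pattern.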