Abstract: Rapid identification of and response to breaking events, particularly those that pose a threat to human life such as natural disasters or conflicts, are of paramount importance. The prevalence of mobile devices and the ubiquity of network connectivity have generated a massive amount of temporally and spatially stamped data. Numerous studies have used mobile data to derive individual human mobility patterns for various applications. Similarly, the increasing number of orbital satellites has made it easier to gather high-resolution images capturing a snapshot of a geographical area at sub-daily temporal frequency. We propose a novel data fusion methodology integrating satellite imagery with privacy-enhanced mobile data to augment the event inference task, whether performed in real time or retrospectively on historical data. In the absence of boots on the ground, mobile data can approximate human mobility, proximity between individuals, and the built environment. On the other hand, satellite imagery can provide visual information on physical changes to the built and natural environment. The expected use cases for our methodology include small-scale disaster detection (e.g., tornadoes, wildfires, and floods) in rural regions, search and rescue operation augmentation for lost hikers in remote wilderness areas, and identification of active conflict areas and population displacement in war-torn states. Our implementation is open-source on GitHub: https://github.com/ekinugurel/SatMobFusion.
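As a rough illustration of the fusion step this abstract describes, the sketch below joins spatially and temporally stamped mobile pings to satellite scenes that cover the same location within a chosen time window. The function name, column schema, and 12-hour lag are hypothetical placeholders chosen for the example, not the SatMobFusion API.

```python
# Illustrative sketch (not the SatMobFusion API): pair mobile pings with
# satellite scenes that contain the ping location and were captured close
# in time, as a first step toward fusing the two data sources.
import pandas as pd

def fuse_pings_with_scenes(pings: pd.DataFrame, scenes: pd.DataFrame,
                           max_lag: pd.Timedelta = pd.Timedelta("12h")) -> pd.DataFrame:
    """pings: columns [user_id, lat, lon, ts]; scenes: columns
    [scene_id, min_lat, max_lat, min_lon, max_lon, captured_at].
    Returns ping-scene pairs where the ping falls inside the scene footprint
    and within max_lag of the capture time (hypothetical schema)."""
    pairs = pings.merge(scenes, how="cross")
    in_footprint = (
        pairs["lat"].between(pairs["min_lat"], pairs["max_lat"])
        & pairs["lon"].between(pairs["min_lon"], pairs["max_lon"])
    )
    in_window = (pairs["ts"] - pairs["captured_at"]).abs() <= max_lag
    return pairs[in_footprint & in_window]
```

The cross join keeps the sketch simple and is adequate for a small scene catalog; at scale one would likely substitute a spatial index or tile-based lookup before the temporal filter.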
Abstract: Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers, which allows these models to learn rich, contextual relationships between elements of a sequence. The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention. Unlike previous attention visualization techniques, our approach enables the analysis of global patterns across multiple input sequences. We create an interactive visualization tool, AttentionViz, based on these joint query-key embeddings, and use it to study attention mechanisms in both language and vision transformers. We demonstrate the utility of our approach in improving model understanding and offering new insights about query-key interactions through several application scenarios and expert feedback.
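To make the joint query-key embedding idea concrete, the following sketch stacks the query and key vectors of a single toy, randomly initialized attention head and fits one shared 2D projection, so queries and keys land in the same visual space. This is only an illustration of the underlying idea under those assumptions, not the AttentionViz implementation, which builds an interactive tool on top of such embeddings.

```python
# Minimal sketch of a joint query-key embedding for one attention head:
# project queries and keys into a shared 2D space so that query-key
# proximity can be inspected visually across the whole sequence.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 32, 64, 16

# Toy hidden states and randomly initialized per-head projection weights.
x = rng.normal(size=(seq_len, d_model))
w_q = rng.normal(size=(d_model, d_head))
w_k = rng.normal(size=(d_model, d_head))

queries = x @ w_q   # (seq_len, d_head)
keys = x @ w_k      # (seq_len, d_head)

# Embed queries and keys jointly: stack them and fit a single 2D projection
# so both sets of vectors share one coordinate system.
joint = np.vstack([queries, keys])              # (2 * seq_len, d_head)
coords = PCA(n_components=2).fit_transform(joint)
query_xy, key_xy = coords[:seq_len], coords[seq_len:]
print(query_xy.shape, key_xy.shape)             # (32, 2) (32, 2)
```

In practice the query and key vectors would come from a trained model rather than random weights, and a nonlinear projection (e.g., t-SNE or UMAP) could replace PCA; the essential point is that both vector sets are embedded together rather than separately.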