Abstract: Audio-visual event localization (AVEL) plays a critical role in multimodal scene understanding. While existing datasets for AVEL predominantly comprise landscape-oriented long videos with clean and simple audio context, short videos have become the primary format of online video content due to the proliferation of smartphones. Short videos are characterized by portrait-oriented framing and layered audio compositions (e.g., overlapping sound effects, voiceovers, and music), which bring unique challenges unaddressed by conventional methods. To this end, we introduce AVE-PM, the first AVEL dataset specifically designed for portrait-mode short videos, comprising 25,335 clips that span 86 fine-grained categories with frame-level annotations. Beyond dataset creation, our empirical analysis shows that state-of-the-art AVEL methods suffer an average 18.66% performance drop under cross-mode evaluation. Further analysis reveals two key challenges of different video formats: 1) spatial bias from portrait-oriented framing introduces distinct domain priors, and 2) noisy audio compositions compromise the reliability of the audio modality. To address these issues, we investigate optimal preprocessing recipes and the impact of background music for AVEL on portrait-mode videos. Experiments show that existing methods can still benefit from tailored preprocessing and specialized model design, thus achieving improved performance. This work provides both a foundational benchmark and actionable insights for advancing AVEL research in the era of mobile-centric video content. Dataset and code will be released.
Abstract: The operation and planning of large-scale power systems are becoming more challenging with the increasing penetration of stochastic renewable generation. In order to minimize decision risks in power systems with a large amount of renewable resources, there is a growing need to model short-term generation uncertainty. By producing a group of possible future realizations for a given set of renewable generation plants, the scenario approach has become a popular way to model renewables uncertainty. However, due to the complex spatial and temporal correlations underlying renewable generation, traditional model-based approaches for forecasting future scenarios often require extensive domain knowledge, and the fitted models are often hard to scale. To address these modeling burdens, we propose a learning-based, data-driven scenario forecasting method based on generative adversarial networks (GANs), a class of deep generative algorithms for modeling unknown distributions. We first use an improved GAN with convergence guarantees to learn the intrinsic patterns and model the unknown distributions of (multiple-site) renewable generation time series. Then, by solving an optimization problem, we are able to generate forecasted scenarios without restrictions on the number of scenarios or the forecasting horizon. Our method is entirely model-free and can forecast scenarios under different levels of forecast uncertainty. Extensive numerical simulations using real-world data from the NREL wind and solar integration datasets validate the performance of the proposed method in forecasting both wind and solar power scenarios.
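The second abstract's core idea, generating forecast scenarios by searching a trained generator's latent space for series consistent with recent observations, can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: the "generator" is a fixed random linear map standing in for a trained GAN generator, the latent search is plain random restarts rather than the paper's optimization procedure, and all names (`generate`, `forecast_scenario`, `PAST`, `LATENT_DIM`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained GAN generator: a fixed random linear map
# from a latent vector z to a length-T generation time series.
T, LATENT_DIM, PAST = 24, 8, 12
W = rng.normal(size=(T, LATENT_DIM))

def generate(z):
    """Map a latent vector to a candidate generation time series."""
    return W @ z

# Pretend the first PAST steps of one realization have been observed.
z_true = rng.normal(size=LATENT_DIM)
observed = generate(z_true)[:PAST]

def forecast_scenario(observed, n_trials=2000):
    """Crude latent-space search: keep the latent vector whose generated
    prefix best matches the observations, and return the continuation of
    that series as one forecast scenario."""
    best_z, best_err = None, np.inf
    for _ in range(n_trials):
        z = rng.normal(size=LATENT_DIM)
        err = np.mean((generate(z)[:PAST] - observed) ** 2)
        if err < best_err:
            best_z, best_err = z, err
    return generate(best_z)[PAST:]

# Independent random restarts yield multiple distinct scenarios,
# mirroring the "group of possible future realizations" in the abstract.
scenarios = [forecast_scenario(observed) for _ in range(3)]
```

In the actual method the generator is learned from historical multi-site data and the latent search is a proper optimization problem, which is what removes the restrictions on scenario count and horizon: each solve from a new starting point produces another plausible future of any modeled length.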