Abstract:Time-frequency concentration and resolution of the Cohen's class time-frequency distribution (CCTFD) have attracted much attention in time-frequency analysis. A variety of uncertainty principles of the CCTFD have therefore been derived, including the weak Heisenberg type, the Hardy type, the Nazarov type, and the local type; the standard Heisenberg type, however, still remains unresolved. In this study, we address the question of how the standard Heisenberg uncertainty principle of the CCTFD is affected by its fundamental properties. The distribution properties investigated are Parseval's relation and the concise frequency-domain definition (i.e., only frequency variables appear explicitly in the tensor product), based on which we confine our attention to the CCTFD with specific kernels, namely the CCTFD with a unit-modulus, v-independent, time-translation-, reversal-, and scaling-invariant kernel (UMITRSK-CCTFD). We then extend the standard Heisenberg uncertainty principles of the Wigner distribution to the UMITRSK-CCTFD, yielding various types of attainable lower bounds on the uncertainty product in the UMITRSK-CCTFD domain. The derived results strengthen the existing weak Heisenberg type and fill the gap in the standard Heisenberg type.
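For orientation only, the classical duration-bandwidth bound that the standard Heisenberg type extends is sketched below in LaTeX; the notation is illustrative and is not the UMITRSK-CCTFD bound derived in the paper.

```latex
% Classical baseline (not the UMITRSK-CCTFD bound itself): for a unit-energy
% signal x(t) with Fourier transform \hat{x}(\omega) (unitary convention),
% the time spread \sigma_t and frequency spread \sigma_\omega are
\[
\sigma_t^{2}=\int_{-\infty}^{\infty}(t-\bar{t})^{2}\,|x(t)|^{2}\,\mathrm{d}t,
\qquad
\sigma_\omega^{2}=\int_{-\infty}^{\infty}(\omega-\bar{\omega})^{2}\,|\hat{x}(\omega)|^{2}\,\mathrm{d}\omega,
\]
% and the standard Heisenberg uncertainty principle states
\[
\sigma_t\,\sigma_\omega \;\ge\; \tfrac{1}{2},
\]
% with equality attained only by Gaussian signals. The paper derives the
% analogous attainable lower bounds in the UMITRSK-CCTFD domain.
```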
Abstract:This paper combines the chirp basis function transformation and the symplectic coordinates transformation to yield a novel Wigner distribution (WD) associated with the linear canonical transform (LCT), named the symplectic WD in the LCT domain (SWDL). It incorporates the merits of the symplectic WD (SWD) and the WD in the LCT domain (WDL), achieving a stronger capability in extracting the frequency-rate feature of linear frequency-modulated (LFM) signals while maintaining the same level of computational complexity. Some essential properties of the SWDL are derived, including the marginal distributions, energy conservation, unique reconstruction, Moyal formula, complex conjugate symmetry, time reversal symmetry, scaling property, time translation property, frequency modulation property, and combined time translation and frequency modulation property. Heisenberg's uncertainty principles of the SWDL are formulated, giving rise to three kinds of lower bounds attainable by the Gaussian-enveloped complex exponential signal, the Gaussian signal, and the Gaussian-enveloped chirp signal, respectively. The optimal symplectic matrices corresponding to the highest time-frequency resolution are obtained by solving the lower bound optimization (minimization) problem. The time-frequency resolution of the SWDL is compared with those of the SWD and the WDL to demonstrate its superiority in the time-frequency energy concentration of LFM signals. A synthesis example is also carried out to verify the feasibility and reliability of the theoretical analysis.
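As a point of reference, below is a minimal LaTeX sketch of the classical Wigner distribution that the SWD, WDL, and SWDL all generalize; the exact SWDL kernel involving the symplectic matrix and LCT parameters is defined in the paper and is not reproduced here.

```latex
% Classical Wigner distribution, the baseline that the SWD, WDL and SWDL
% generalize (the symplectic/LCT parameterization itself is the paper's):
\[
W_x(t,\omega)=\int_{-\infty}^{\infty}
x\!\left(t+\tfrac{\tau}{2}\right)x^{*}\!\left(t-\tfrac{\tau}{2}\right)
\mathrm{e}^{-\mathrm{j}\omega\tau}\,\mathrm{d}\tau .
\]
% For an LFM signal x(t)=\mathrm{e}^{\mathrm{j}(a t^{2}/2+b t)} this
% distribution concentrates along the line \omega = a t + b, which is why
% WD-type distributions suit frequency-rate feature extraction.
```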
Abstract:Graph signal processing (GSP) has emerged as a powerful framework for analyzing data on irregular domains. In recent years, many classical techniques in signal processing (SP) have been successfully extended to GSP. Among them, chirp signals play a crucial role in various SP applications; however, graph chirp signals have not been formally defined despite their importance. Here, we define graph chirp signals and establish a comprehensive theoretical framework for their analysis. We propose the graph fractional vertex-frequency energy distribution (GFED), which provides a powerful tool for processing and analyzing graph chirp signals. We introduce the general fractional graph distribution (GFGD), a generalized vertex-frequency distribution, and the reduced-interference GFED, which suppresses cross-term interference and enhances signal clarity. Furthermore, we propose a novel method for detecting graph signals through GFED-domain filtering, facilitating robust detection and analysis of graph chirp signals in noisy environments. Moreover, this method can be applied to real-world data for denoising more effectively than some state-of-the-art methods, further demonstrating its practical significance.
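For readers unfamiliar with graph-spectral filtering, the following Python sketch shows the generic Laplacian-eigenbasis filtering mechanism that vertex-frequency methods of this kind build on; it is not the paper's GFED/GFGD, and the toy graph, signal, and spectral response are hypothetical.

```python
# Illustrative sketch only: generic graph-spectral filtering via the graph
# Fourier transform (Laplacian eigenbasis). It is NOT the paper's GFED/GFGD;
# the graph, signal, and filter response below are hypothetical placeholders.
import numpy as np

def graph_fourier_basis(W):
    """Eigendecomposition of the combinatorial Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)          # graph frequencies and GFT basis
    return lam, U

def spectral_filter(x, W, h):
    """Filter a graph signal x with a spectral response h(lambda)."""
    lam, U = graph_fourier_basis(W)
    x_hat = U.T @ x                     # forward graph Fourier transform
    return U @ (h(lam) * x_hat)         # apply response, transform back

# Toy example: a 5-node path graph and a low-pass spectral response.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = np.random.randn(5)
y = spectral_filter(x, W, lambda lam: np.exp(-2.0 * lam))
print(y)
```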
Abstract:Vertex-frequency analysis, particularly the windowed graph Fourier transform (WGFT), remains a significant challenge in graph signal processing. Tight frame theory is known for its low computational complexity in signal reconstruction, while fractional-order methods excel at unveiling more detailed structural characteristics of graph signals. In the graph fractional Fourier domain, we introduce multi-windowed graph fractional Fourier frames (MWGFRFF) to facilitate the construction of tight frames. This leads to the development of the multi-windowed graph fractional Fourier transform (MWGFRFT), enabling novel vertex-frequency analysis methods. A reconstruction formula is derived, along with results concerning dual and tight frames. To enhance computational efficiency, a fast MWGFRFT (FMWGFRFT) algorithm is proposed. Furthermore, we define shift multi-windowed graph fractional Fourier frames (SMWGFRFF) and their associated transform (SMWGFRFT), exploring their dual and tight frames. Experimental results indicate that FMWGFRFT and SMWGFRFT excel at extracting vertex-frequency features in the graph fractional Fourier domain, and that their combined use optimizes analytical performance. Applications to signal anomaly detection demonstrate the advantages of FMWGFRFT.
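As background, here is a minimal Python sketch of generic frame analysis and canonical-dual reconstruction, the mechanism that frame-based transforms of this kind rely on; the random atoms below are placeholders rather than the paper's windowed graph fractional Fourier atoms, and a tight frame (frame operator proportional to the identity) is precisely the case in which the inverse below reduces to a rescaled adjoint, hence the low reconstruction cost.

```python
# Generic sketch (not the paper's MWGFRFF construction): frame analysis and
# canonical-dual reconstruction with random placeholder atoms.
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 20                          # signal length, number of frame atoms
Phi = rng.standard_normal((M, N))     # rows are analysis atoms (a frame a.s.)

x = rng.standard_normal(N)
c = Phi @ x                           # analysis (frame) coefficients
S = Phi.T @ Phi                       # frame operator
x_rec = np.linalg.solve(S, Phi.T @ c) # canonical-dual reconstruction

print(np.allclose(x, x_rec))          # True: perfect reconstruction
```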
Abstract:AI-driven video generation techniques have made significant progress in recent years. However, AI-generated videos (AGVs) involving human activities often exhibit substantial visual and semantic distortions, hindering the practical application of video generation technologies in real-world scenarios. To address this challenge, we conduct a pioneering study on human activity AGV quality assessment, focusing on visual quality evaluation and the identification of semantic distortions. First, we construct the AI-Generated Human activity Video Quality Assessment (Human-AGVQA) dataset, consisting of 3,200 AGVs derived from 8 popular text-to-video (T2V) models using 400 text prompts that describe diverse human activities. We conduct a subjective study to evaluate the human appearance quality, action continuity quality, and overall video quality of AGVs, and to identify semantic issues of human body parts. Based on Human-AGVQA, we benchmark the performance of T2V models and analyze their strengths and weaknesses in generating different categories of human activities. Second, we develop an objective evaluation metric, named the AI-Generated Human activity Video Quality metric (GHVQ), to automatically analyze the quality of human activity AGVs. GHVQ systematically extracts human-focused quality features, AI-generated content-aware quality features, and temporal continuity features, making it a comprehensive and explainable quality metric for human activity AGVs. Extensive experimental results show that GHVQ outperforms existing quality metrics on the Human-AGVQA dataset by a large margin, demonstrating its efficacy in assessing the quality of human activity AGVs. The Human-AGVQA dataset and the GHVQ metric will be publicly released at https://github.com/zczhang-sjtu/GHVQ.git
Abstract:The conventional Cohen's distribution cannot meet the requirement of high-performance denoising of signals corrupted by additive noise under low signal-to-noise ratio conditions, so it is necessary to integrate the metaplectic transform for fractional-domain time-frequency analysis of non-stationary signals. In this paper, we blend time-frequency operators and coordinate operator fractionizations to formulate the definition of the metaplectic Wigner distribution, based on which we integrate the generalized metaplectic convolution to address the unified representation issue of the convolution type of metaplectic Cohen's distribution (CMCD), whose special cases and essential properties are also derived. We blend the Wiener filter principle and the fractional-domain filtering mechanism of the metaplectic transform to design a least-squares adaptive filter method in the metaplectic Wigner distribution domain, yielding the least-squares adaptive filter-based CMCD, whose kernel function can be adjusted automatically with the input signal to achieve minimum mean-square error (MSE) denoising in the Wigner distribution domain. We discuss the optimal symplectic matrix selection strategy of the proposed adaptive CMCD through MSE minimization modeling and solving. Examples are also carried out to demonstrate that the proposed filtering method outperforms some state-of-the-art methods, including the Wiener filter and Cohen's distributions with fixed or adaptive kernel functions, in noise suppression.
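For context, a generic least-squares (Wiener-type) criterion of the kind underlying such adaptive kernel design is sketched below in LaTeX; the notation is illustrative and not the paper's metaplectic formulation.

```latex
% Generic least-squares (Wiener-type) criterion behind adaptive kernel
% design; the notation is illustrative, not the paper's metaplectic one.
% With a noisy time-frequency representation D_x(t,\omega) and a desired
% clean one D_s(t,\omega), the mask H minimizing the mean-square error
\[
J(H)=\mathbb{E}\!\left[\bigl|D_s(t,\omega)-H(t,\omega)\,D_x(t,\omega)\bigr|^{2}\right]
\]
% is the pointwise Wiener solution
\[
H_{\mathrm{opt}}(t,\omega)=
\frac{\mathbb{E}\!\left[D_s(t,\omega)\,D_x^{*}(t,\omega)\right]}
     {\mathbb{E}\!\left[\bigl|D_x(t,\omega)\bigr|^{2}\right]}.
\]
```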
Abstract:The fixed-kernel-function-based Cohen's class time-frequency distributions (CCTFDs) allow flexibility in denoising for some specific polluted signals. Due to the limitation of fixed kernel functions, however, from the viewpoint of filtering they fail to adjust their response automatically according to changes in the signal and thus cannot adapt to different signal characteristics. In this letter, we integrate the Wiener filter principle and the time-frequency filtering mechanism of the CCTFD to design a least-squares adaptive filter method in the Wigner-Ville distribution (WVD) domain, yielding the least-squares adaptive filter-based CCTFD, whose kernel function can be adjusted automatically with the input signal to achieve minimum mean-square error denoising in the WVD domain. Examples are also carried out to demonstrate that the proposed adaptive CCTFD outperforms some state-of-the-art methods in noise suppression.
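The following Python sketch illustrates the Wiener-gain idea numerically in the ordinary Fourier domain rather than in the WVD/CCTFD domain used in the letter; the toy signal, the noise model, and the assumed-known spectra are illustrative assumptions only.

```python
# Minimal numerical sketch of the Wiener-gain idea behind adaptive filtering
# (shown in the ordinary Fourier domain, not the WVD/CCTFD domain).
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(1024)
clean = np.cos(2 * np.pi * 0.05 * n)            # toy narrowband signal
noisy = clean + 0.5 * rng.standard_normal(n.size)

X = np.fft.rfft(noisy)
# Wiener gain H = S_s / (S_s + S_n); here both power spectra are assumed
# known, which the adaptive CCTFD replaces by a data-driven estimate.
S_s = np.abs(np.fft.rfft(clean)) ** 2
S_n = np.full_like(S_s, 0.5 ** 2 * n.size)      # flat noise power estimate
H = S_s / (S_s + S_n)
denoised = np.fft.irfft(H * X, n=n.size)

# The masked reconstruction has lower MSE than the noisy observation.
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```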
Abstract:In recent years, artificial intelligence (AI)-driven video generation has garnered significant attention due to advancements in stable diffusion and large language model techniques. Thus, there is a great demand for accurate video quality assessment (VQA) models to measure the perceptual quality of AI-generated content (AIGC) videos as well as to optimize video generation techniques. However, assessing the quality of AIGC videos is quite challenging due to the highly complex distortions they exhibit (e.g., unnatural actions, irrational objects, etc.). Therefore, in this paper, we systematically investigate the AIGC-VQA problem from both subjective and objective quality assessment perspectives. For the subjective perspective, we construct a Large-scale Generated Video Quality assessment (LGVQ) dataset, consisting of 2,808 AIGC videos generated by 6 video generation models using 468 carefully selected text prompts. Unlike previous subjective VQA experiments, we evaluate the perceptual quality of AIGC videos from three dimensions: spatial quality, temporal quality, and text-to-video alignment, which hold utmost importance for current video generation techniques. For the objective perspective, we establish a benchmark for evaluating existing quality assessment metrics on the LGVQ dataset, which reveals that current metrics perform poorly on it. Thus, we propose a Unify Generated Video Quality assessment (UGVQ) model to comprehensively and accurately evaluate the quality of AIGC videos across the three aspects within a unified model; it uses visual, textual, and motion features of the video and its corresponding prompt, and integrates key features to enhance feature expression. We hope that our benchmark can promote the development of quality evaluation metrics for AIGC videos. The LGVQ dataset and the UGVQ metric will be publicly released.
Abstract:In this paper, we present a simple but effective method to enhance blind video quality assessment (BVQA) models for social media videos. Motivated by previous research that leverages pre-trained features extracted from various computer vision models as the feature representation for BVQA, we further explore rich quality-aware features from pre-trained blind image quality assessment (BIQA) and BVQA models as auxiliary features to help the BVQA model handle the complex distortions and diverse content of social media videos. Specifically, we use SimpleVQA, a BVQA model that consists of a trainable Swin Transformer-B and a fixed SlowFast, as our base model; the Swin Transformer-B and SlowFast components are responsible for extracting spatial and motion features, respectively. We then extract three kinds of features from Q-Align, LIQE, and FAST-VQA to capture frame-level quality-aware features, frame-level quality-aware plus scene-specific features, and spatiotemporal quality-aware features, respectively. After concatenating these features, we employ a multi-layer perceptron (MLP) network to regress them into quality scores. Experimental results demonstrate that the proposed model achieves the best performance on three public social media VQA datasets. Moreover, the proposed model won first place in the CVPR NTIRE 2024 Short-form UGC Video Quality Assessment Challenge. The code is available at \url{https://github.com/sunwei925/RQ-VQA.git}.
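A schematic PyTorch sketch of the fusion-and-regression stage described above follows; the feature dimensions are hypothetical placeholders, and the frozen extractors (Swin-B, SlowFast, Q-Align, LIQE, FAST-VQA) are not reproduced here.

```python
# Schematic of the concatenate-then-regress stage; dimensions are placeholders.
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    def __init__(self, dims=(1024, 2304, 128, 495, 768), hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sum(dims), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),          # scalar quality score
        )

    def forward(self, feats):
        # feats: list of per-video feature tensors from the frozen extractors
        return self.mlp(torch.cat(feats, dim=-1))

model = QualityRegressor()
feats = [torch.randn(4, d) for d in (1024, 2304, 128, 495, 768)]
print(model(feats).shape)                  # torch.Size([4, 1])
```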
Abstract:Traditional deep neural network (DNN)-based image quality assessment (IQA) models leverage convolutional neural networks (CNNs) or Transformers to learn quality-aware feature representations, achieving commendable performance on natural scene images. However, when applied to AI-generated images (AGIs), these DNN-based IQA models exhibit subpar performance. This is largely due to the semantic inaccuracies inherent in certain AGIs caused by the uncontrollable nature of the generation process. Thus, the capability to discern semantic content becomes crucial for assessing the quality of AGIs. Traditional DNN-based IQA models, constrained by limited parameter complexity and training data, struggle to capture complex fine-grained semantic features, making it challenging to grasp the existence and coherence of the semantic content of an entire image. To address this shortfall in semantic content perception of current IQA models, we introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model, which utilizes semantically informed guidance to sense semantic information and extracts semantic vectors through carefully designed text prompts. Moreover, it employs a mixture-of-experts (MoE) structure to dynamically integrate the semantic information with the quality-aware features extracted by traditional DNN-based IQA models. Comprehensive experiments conducted on two AI-generated content datasets, AIGCQA-20k and AGIQA-3k, show that MA-AGIQA achieves state-of-the-art performance and demonstrate its superior generalization capabilities in assessing the quality of AGIs. Code is available at https://github.com/wangpuyi/MA-AGIQA.
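Below is a hedged PyTorch sketch of a mixture-of-experts style fusion of semantic vectors with quality-aware features; the dimensions, expert count, and gating design are assumptions for illustration and not MA-AGIQA's actual architecture.

```python
# Illustrative MoE-style fusion of semantic and quality-aware features;
# all sizes and the gating scheme are hypothetical placeholders.
import torch
import torch.nn as nn

class MoEFusion(nn.Module):
    def __init__(self, sem_dim=768, qual_dim=512, n_experts=4, out_dim=128):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(sem_dim + qual_dim, out_dim) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(sem_dim + qual_dim, n_experts)
        self.head = nn.Linear(out_dim, 1)       # scalar quality score

    def forward(self, sem, qual):
        z = torch.cat([sem, qual], dim=-1)
        w = torch.softmax(self.gate(z), dim=-1)             # expert weights
        mix = sum(w[:, i:i + 1] * e(z) for i, e in enumerate(self.experts))
        return self.head(mix)

fusion = MoEFusion()
print(fusion(torch.randn(2, 768), torch.randn(2, 512)).shape)  # [2, 1]
```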