Abstract:This paper presents an analytical framework for downlink pinching antenna systems (PASS) employing waveguide division multiple access (WDMA) and non-orthogonal multiple access (NOMA). A unified channel model is developed to capture antenna deployment, user spatial distribution, and path loss. Closed-form and single-integral expressions for the outage probability and average achievable rate are derived and validated via Monte Carlo simulations. The results show that NOMA achieves higher spectral efficiency at high transmit signal-to-noise ratio (SNR) due to successive interference cancellation (SIC), whereas WDMA offers more reliable performance at low to moderate SNR but suffers from an outage floor and rate saturation at high SNR. Moreover, WDMA performance is more sensitive to the user spatial distribution due to the spatially dependent inter-waveguide interference. These findings provide design insights for access-scheme selection and antenna placement in PASS.
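The closed-form-versus-Monte-Carlo validation described in this abstract can be sketched in a few lines. The paper's PASS/WDMA/NOMA channel model is not reproduced here; a plain Rayleigh-fading link stands in for it purely as an illustrative assumption, for which the outage probability is P_out = 1 - exp(-snr_th/snr_avg).

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_closed_form(snr_avg, snr_th):
    # Rayleigh fading: the instantaneous SNR is exponentially distributed,
    # so P_out = P(SNR < snr_th) = 1 - exp(-snr_th / snr_avg).
    return 1.0 - np.exp(-snr_th / snr_avg)

def outage_monte_carlo(snr_avg, snr_th, n=1_000_000):
    # |h|^2 ~ Exp(1); count channel draws whose SNR falls below threshold.
    gain = rng.exponential(1.0, size=n)
    return float(np.mean(snr_avg * gain < snr_th))

snr_avg, snr_th = 10.0, 2.0          # linear scale, illustrative values
p_cf = outage_closed_form(snr_avg, snr_th)
p_mc = outage_monte_carlo(snr_avg, snr_th)
```

With a million channel draws the empirical outage agrees with the closed form to roughly three decimal places, which is the kind of agreement the paper's validation reports for its own expressions.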
Abstract:Research on sixth-generation (6G) integrated sensing and communication (ISAC) increasingly depends on multimodal datasets. These datasets need to jointly characterize wireless propagation, onboard sensing, and platform mobility. Existing tools cover only part of these aspects: robotics simulators model physics and perception but not site-specific channels, while ray-tracing and link-level tools lack vehicle dynamics and onboard sensors. Combining them manually leads to workflows that are fragile and hard to reproduce. Rather than introducing another standalone simulator, this article presents SimART, which integrates mature robotics, ray-tracing, and wireless evaluation engines into a single reproducible pipeline. The key idea is a Robot Operating System (ROS) backbone that both synchronizes and organizes all multimodal streams. A shared clock, a common coordinate frame, and timestamped messages keep the streams aligned in time and space, and a single rosbag recording captures the full session in one reproducible file. This design decouples the sensing front end from the wireless back end, so that any ROS-compatible simulator can be plugged in while the same back end is reused across aerial, ground, indoor, and maritime ISAC settings. On top of this backbone, SimART contributes a scene-construction pipeline that converts both OpenStreetMap extracts and user-defined layouts into spatially aligned visual and electromagnetic assets, and a channel knowledge map (CKM) generator that aggregates ray-tracing and system-level outputs into spatial priors for ISAC algorithms. A case study on vision- and position-aided beam prediction demonstrates the utility of the platform. The code is publicly available at https://github.com/guchuanv-alt/SimART.
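SimART's synchronization rests on ROS primitives (shared clock, timestamped messages, rosbag recordings). The dependency-free sketch below only illustrates the underlying idea of nearest-timestamp alignment across streams on one clock; the function and the toy camera/channel streams are invented for this illustration, not SimART's API.

```python
import bisect

def align_streams(reference, others, tol=0.05):
    # Each stream is a time-sorted list of (timestamp, payload) tuples on a
    # shared clock. For every reference message, pick the nearest-in-time
    # message from each other stream; drop the row if any match exceeds tol.
    aligned = []
    for t_ref, payload in reference:
        row = {"ref": (t_ref, payload)}
        complete = True
        for name, stream in others.items():
            times = [t for t, _ in stream]
            i = bisect.bisect_left(times, t_ref)
            cands = [j for j in (i - 1, i) if 0 <= j < len(stream)]
            j = min(cands, key=lambda k: abs(times[k] - t_ref))
            if abs(times[j] - t_ref) > tol:
                complete = False
                break
            row[name] = stream[j]
        if complete:
            aligned.append(row)
    return aligned

# toy session: a 10 Hz camera and 25 Hz channel snapshots on one clock
camera = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
channel = [(round(0.04 * k, 2), f"csi{k}") for k in range(6)]
rows = align_streams(camera, {"channel": channel})
```

In a real ROS pipeline this role is played by synchronization utilities operating on message headers rather than hand-rolled matching.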
Abstract:Scaling context length is reshaping large-model development, yet full-attention Transformers suffer from prohibitive computation and inference bottlenecks at long sequences. A key challenge is to design foundation models that maintain performance and long-context efficiency with minimal training overhead. We introduce SpikingBrain2.0 (SpB2.0), a 5B model that advances both the architecture and the training efficiency of its predecessor. Our contributions are two-fold. (1) Architectural Innovation: We propose Dual-Space Sparse Attention (DSSA), an inter-layer hybrid of Sparse Softmax Attention (MoBA) and Sparse Linear Attention (SSE), achieving an improved performance-efficiency trade-off for long-context modeling. SpB2.0 further supports dual quantization paths: INT8-Spiking coding enables sparse event-driven computation, while FP8 coding accelerates inference on modern GPUs. (2) Enhanced Training Strategy: We develop an optimized Transformer-to-Hybrid (T2H) pipeline with dual conversion paths for LLMs and VLMs using curated open-source data. Empirically, SpB2.0-5B and SpB2.0-VL-5B recover most of the base Transformer's (Qwen3-4B) capability with under 7k A100 GPU hours. SpB2.0 achieves a 10.13x time-to-first-token (TTFT) speedup at a 4M-token context and supports over 10M tokens on 8 A100 GPUs under vLLM, where full-attention models exceed memory limits. It also demonstrates strong cross-platform compatibility, enabling FP8 GPU inference (2.52x speedup at a 250k-token context) and efficient neuromorphic execution (64.31% sparsity, with 70.6% area and 46.5% power reductions at 500 MHz). Overall, SpikingBrain2.0 provides a practical pathway toward lightweight, multimodal, spiking foundation models, highlighting the potential of combining brain-inspired mechanisms with efficient architectures for resource-constrained and edge scenarios.
Abstract:Orthogonal frequency-division multiplexing (OFDM) is a dominant waveform in modern wireless systems, yet its high peak-to-average power ratio (PAPR) and limited adaptability hinder efficient support for integrated communication and sensing. This paper proposes deep block-unitary precoded OFDM (DBU-OFDM), a structure-preserving learning framework that enables trainable waveform adaptation while preserving the DFT-based signal structure, pilot/null resource protection, and compatibility with low-complexity frequency-domain equalization. The proposed design restricts learning to a block-unitary transformation over data subcarriers and preserves pilot and null resources for structural compatibility. The transform is parameterized by recursive Householder reflections, ensuring strict unitarity as well as a differentiable, numerically stable, and complexity-controllable implementation. Results show that DBU-OFDM achieves PAPR tails close to block-pilot DFT-s-OFDM while retaining comb-type pilots, improves communication reliability in frequency-selective fading via frequency-domain diversity, and enhances range and velocity estimation in direct sensing, especially in dimension-limited settings. Over-the-air USRP experiments and FPGA prototyping further verify its practical feasibility, demonstrating low error vector magnitude (EVM), clear PAPR reduction in real transmission, and hardware throughput up to 200 MS/s with microsecond-level latency. DBU-OFDM therefore offers a practical intermediate solution between conventional model-based OFDM waveforms and unconstrained neural transceivers for next-generation integrated communication and sensing systems.
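The Householder parameterization admits a direct illustration: a product of complex Householder reflections is unitary by construction, for any choice of reflector vectors, which is what makes the transform safe to train without constraints. The sketch below is a generic NumPy rendition, not the paper's recursive implementation; the block size and reflector count are arbitrary toy values.

```python
import numpy as np

def unitary_from_householders(V):
    # U = H_1 H_2 ... H_m, with H_k = I - 2 v_k v_k^H / (v_k^H v_k).
    # Each reflection is unitary, hence so is the product, for ANY real or
    # complex reflector vectors -- unitarity needs no constraint on V.
    n = V.shape[1]
    U = np.eye(n, dtype=complex)
    for v in V:
        v = v.reshape(-1, 1)
        U = U @ (np.eye(n, dtype=complex)
                 - 2.0 * (v @ v.conj().T) / (v.conj().T @ v))
    return U

rng = np.random.default_rng(1)
n, m = 8, 4                            # toy block of 8 data subcarriers
V = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
U = unitary_from_householders(V)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # data symbols
precoded = U @ x                       # unitary precoding preserves power
```

Because each factor is exactly unitary, gradient descent over the reflector vectors searches only within the unitary group, and the number of reflections m controls both expressiveness and complexity.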
Abstract:We present LoD-Loc v3, a novel method for generalized aerial visual localization in dense urban environments. While the prior work LoD-Loc v2 achieves localization through semantic building-silhouette alignment with low-detail city models, it suffers from two key limitations: poor cross-scene generalization and frequent failure in dense building scenes. Our method addresses these challenges through two key innovations. First, we develop a new synthetic data generation pipeline that produces InsLoD-Loc, the largest instance segmentation dataset for aerial imagery to date, comprising 100k images with precise building-instance annotations. This enables trained models to exhibit remarkable zero-shot generalization capability. Second, we reformulate the localization paradigm by shifting from semantic to instance silhouette alignment, which significantly reduces pose estimation ambiguity in dense scenes. Extensive experiments demonstrate that LoD-Loc v3 outperforms existing state-of-the-art (SOTA) baselines by a large margin, achieving superior performance in both cross-scene and dense urban scenarios. The project is available at https://nudt-sawlab.github.io/LoD-Locv3/.
Abstract:While safety alignment for Multimodal Large Language Models (MLLMs) has gained significant attention, current paradigms primarily target malicious intent or situational violations. We propose shifting the safety frontier toward consequence-driven safety, a paradigm essential for the robust deployment of autonomous and embodied agents. To formalize this shift, we introduce OOD-MMSafe, a benchmark comprising 455 curated query-image pairs designed to evaluate a model's ability to identify latent hazards within context-dependent causal chains. Our analysis reveals a pervasive causal blindness among frontier models, with failure rates reaching as high as 67.5% in high-capacity closed-source models, and identifies a preference ceiling: as model capacity grows, static alignment yields format-centric failures rather than improved safety reasoning. To address these bottlenecks, we develop the Consequence-Aware Safety Policy Optimization (CASPO) framework, which integrates the model's intrinsic reasoning as a dynamic reference for token-level self-distillation rewards. Experimental results demonstrate that CASPO significantly enhances consequence projection, reducing the risk-identification failure rate to 7.3% for Qwen2.5-VL-7B and 5.7% for Qwen3-VL-4B while maintaining overall effectiveness.
Abstract:Learning motion priors for physics-based humanoid control is an active research topic. Existing approaches mainly include variational autoencoders (VAEs) and adversarial motion priors (AMP). VAEs introduce information loss, and random latent sampling may sometimes produce invalid behaviors; AMP suffers from mode collapse and struggles to capture diverse motion skills. We present the Spherical Latent Motion Prior (SLMP), a two-stage method for learning motion priors. In the first stage, we train a high-quality motion tracking controller. In the second stage, we distill the tracking controller into a spherical latent space. A combination of distillation, a discriminator, and a discriminator-guided local semantic consistency constraint shapes a structured latent action space, allowing stable random sampling without information loss. To evaluate SLMP, we collect a two-hour human combat motion capture dataset and show that SLMP preserves fine motion detail, and that random sampling yields semantically valid and stable behaviors. When applied to a two-agent physics-based combat task, SLMP produces human-like and physically plausible combat behaviors using only simple rule-based rewards. Furthermore, SLMP generalizes across different humanoid robot morphologies, demonstrating its transferability beyond a single simulated avatar.
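A spherical latent space has one property worth making concrete: the prior is trivially samplable, because normalized i.i.d. Gaussians are uniform on the unit sphere, so every random draw lies in exactly the same set as projected encoder outputs. The sketch below shows only this geometric point and assumes nothing about SLMP's networks or training; all names are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_sphere(z):
    # Radially projecting any latent onto the unit sphere keeps policy
    # outputs and random prior draws in exactly the same set.
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def sample_spherical(n, dim):
    # Normalized i.i.d. Gaussians are uniform on S^{dim-1}: there are no
    # low-density "holes" from which sampling could yield invalid latents.
    return project_to_sphere(rng.standard_normal((n, dim)))

latents = sample_spherical(1000, 64)   # 1000 valid latent draws
```

This is one reason a spherical latent space supports stable random sampling, in contrast to a Gaussian VAE latent where samples can land in regions the decoder never saw.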
Abstract:Sequential recommendation increasingly employs latent multi-step reasoning to enhance test-time computation. Despite empirical gains, existing approaches largely drive intermediate reasoning states via target-dominant objectives without imposing explicit feasibility constraints. This results in latent drift, where reasoning trajectories deviate into implausible regions. We argue that effective recommendation reasoning should instead be viewed as navigation on a collaborative manifold rather than free-form latent refinement. To this end, we propose ManCAR (Manifold-Constrained Adaptive Reasoning), a principled framework that grounds reasoning within the topology of a global interaction graph. ManCAR constructs a local intent prior from the collaborative neighborhood of a user's recent actions, represented as a distribution over the item simplex. During training, the model progressively aligns its latent predictive distribution with this prior, forcing the reasoning trajectory to remain within the valid manifold. At test time, reasoning proceeds adaptively until the predictive distribution stabilizes, avoiding over-refinement. We provide a variational interpretation of ManCAR to theoretically validate its drift-prevention and adaptive test-time stopping mechanisms. Experiments on seven benchmarks demonstrate that ManCAR consistently outperforms state-of-the-art baselines, achieving up to a 46.88% relative improvement w.r.t. NDCG@10. Our code is available at https://github.com/FuCongResearchSquad/ManCAR.
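The align-toward-a-prior-then-stop-when-stable mechanism can be caricatured on a bare simplex: each step pulls the predictive distribution toward a neighborhood prior, and reasoning halts once successive distributions stabilize. The toy sketch below uses geometric interpolation as an invented stand-in for ManCAR's learned update; only the stopping criterion and the stay-on-the-simplex property carry over.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_reasoning(logits, prior, alpha=0.5, max_steps=50, tol=1e-4):
    # Each "reasoning step" pulls the predictive distribution toward the
    # collaborative prior by geometric interpolation (renormalized, so it
    # stays on the item simplex by construction); stop once the
    # total-variation change between successive steps falls below tol.
    p = softmax(logits)
    for step in range(1, max_steps + 1):
        p_next = p ** (1.0 - alpha) * prior ** alpha
        p_next /= p_next.sum()
        if 0.5 * np.abs(p_next - p).sum() < tol:
            return p_next, step
        p = p_next
    return p, max_steps

rng = np.random.default_rng(0)
prior = softmax(rng.standard_normal(10))     # stand-in local intent prior
p_final, steps = adaptive_reasoning(rng.standard_normal(10), prior)
```

The per-step change shrinks geometrically here, so the loop terminates well before the step budget, which is the behavior the adaptive test-time stopping rule is designed to exploit.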
Abstract:Omni-modal Large Language Models (OLLMs) greatly expand LLMs' multimodal capabilities but also introduce cross-modal safety risks. However, a systematic understanding of vulnerabilities in omni-modal interactions remains lacking. To bridge this gap, we establish a modality-semantics decoupling principle and construct the AdvBench-Omni dataset, which reveals a significant vulnerability in OLLMs. Mechanistic analysis uncovers a Mid-layer Dissolution phenomenon driven by refusal vector magnitude shrinkage, alongside the existence of a modal-invariant pure refusal direction. Inspired by these insights, we extract a golden refusal vector using Singular Value Decomposition and propose OmniSteer, which utilizes lightweight adapters to modulate intervention intensity adaptively. Extensive experiments show that our method not only increases the Refusal Success Rate against harmful inputs from 69.9% to 91.2%, but also effectively preserves the general capabilities across all modalities. Our code is available at: https://github.com/zhrli324/omni-safety-research.
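The SVD extraction step can be illustrated on synthetic activations: if harmful and benign prompts differ by a shared direction plus noise, the top singular vector of the paired difference matrix recovers that direction. The data, dimensions, and scales below are fabricated for the demonstration; extracting from real OLLM hidden states and the adapter-based steering of OmniSteer are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def refusal_direction(h_harmful, h_benign):
    # Rows pair hidden states for matched harmful/benign prompts; the top
    # right-singular vector of the difference matrix is the dominant
    # direction separating the two sets.
    D = h_harmful - h_benign
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[0]

# synthetic check: plant a known direction plus noise, then recover it
d, m = 32, 200
v_true = np.zeros(d)
v_true[0] = 1.0
base = rng.standard_normal((m, d))
h_benign = base
h_harmful = base + 3.0 * v_true + 0.1 * rng.standard_normal((m, d))
v_hat = refusal_direction(h_harmful, h_benign)
cosine = abs(v_hat @ v_true)           # near 1 when recovery succeeds
```

Taking the top singular vector rather than a simple mean of differences makes the estimate robust to per-prompt magnitude variation, which matters when the refusal vector's magnitude shrinks in middle layers as the abstract describes.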
Abstract:Multimodal large language models (MLLMs) enable interaction over both text and images, but their safety behavior can be driven by unimodal shortcuts instead of true joint intent understanding. We introduce CSR-Bench, a benchmark for evaluating cross-modal reliability through four stress-testing interaction patterns spanning Safety, Over-rejection, Bias, and Hallucination, covering 61 fine-grained types. Each instance is constructed to require integrated image-text interpretation, and we additionally provide paired text-only controls to diagnose modality-induced behavior shifts. We evaluate 16 state-of-the-art MLLMs and observe systematic cross-modal alignment gaps. Models show weak safety awareness, strong language dominance under interference, and consistent performance degradation from text-only controls to multimodal inputs. We also observe a clear trade-off between reducing over-rejection and maintaining safe, non-discriminatory behavior, suggesting that some apparent safety gains may come from refusal-oriented heuristics rather than robust intent understanding. WARNING: This paper contains unsafe content.