Abstract: In integrated sensing and communication (ISAC) systems, random signaling is used to convey useful information as well as to sense the environment. Such randomness poses challenges for various components of sensing signal processing. In this paper, we investigate quantizer design for sensing in ISAC systems. Unlike quantizers for channel estimation in massive multiple-input multiple-output (MIMO) communication systems, sensing in ISAC systems must deal with random nonorthogonal transmitted signals rather than a fixed orthogonal pilot. Considering both sensing performance and hardware implementation, we focus on task-based hardware-limited quantization with spatial analog combining. We propose two quantizer optimization strategies, namely data-dependent (DD) and data-independent (DI). The former achieves optimized sensing performance at the cost of high implementation overhead. To reduce hardware complexity, the latter optimizes the quantizer with respect to the random signal from a stochastic perspective. We derive the optimal quantizers for both strategies and formulate an algorithm based on sample average approximation (SAA) to solve the optimization in the DI strategy. Numerical results show that the optimized quantizers outperform digital-only quantizers in terms of sensing performance. Additionally, the DI strategy, despite its lower computational complexity compared to the DD strategy, achieves near-optimal sensing performance.
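The core idea of sample average approximation (SAA), as invoked above for the DI strategy, can be illustrated with a minimal sketch. This is a hypothetical toy problem, not the paper's quantizer model: we minimize an expected cost over a random scalar "signal" by replacing the expectation with an empirical mean over N draws.

```python
# Sample average approximation (SAA) sketch: replace the expectation
# E[f(x, xi)] in a stochastic program with the empirical mean
# (1/N) * sum_i f(x, xi_i) over N random draws, then optimize x.
# Toy objective (hypothetical): minimize E[(x - xi)^2] over x.
import random

def saa_solve(sample, n=10_000, seed=0):
    """Minimize the SAA of E[(x - xi)^2] given a sampler for xi."""
    rng = random.Random(seed)
    draws = [sample(rng) for _ in range(n)]
    # For this quadratic objective the SAA minimizer is the sample mean;
    # in general one would run a deterministic solver on the empirical
    # objective built from the drawn samples.
    return sum(draws) / len(draws)

# With xi ~ N(2, 1), the true minimizer of E[(x - xi)^2] is x = 2,
# and the SAA solution converges to it as n grows.
x_star = saa_solve(lambda rng: rng.gauss(2.0, 1.0))
```

As n increases, the SAA solution converges to the true stochastic optimum, which is why it is a natural fit for optimizing against random (rather than fixed) transmitted signals.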
Abstract: Numerous web applications rely on solving combinatorial optimization problems, such as energy cost-aware scheduling, budget allocation in web advertising, and graph matching on social networks. However, many optimization problems involve unknown coefficients, and inaccurate predictions of these coefficients may lead to inferior decisions, causing energy wastage, inefficient resource allocation, inappropriate matching in social networks, etc. This research topic is referred to as "Predict-Then-Optimize (PTO)", which considers the performance of prediction and decision-making in a unified system. A noteworthy recent development is end-to-end methods, which directly optimize the ultimate decision quality and are claimed to yield better results than the traditional two-stage approach. However, the evaluation benchmarks in this field are fragmented, and the effectiveness of various models in different scenarios remains unclear, hindering the comprehensive assessment and fast deployment of these methods. To address these issues, we provide a comprehensive categorization of current approaches and integrate existing experimental scenarios to establish a unified benchmark, elucidating the circumstances under which end-to-end training yields improvements, as well as the contexts in which it performs poorly. We also open-source a new industrial dataset for the combinatorial advertising problem in inclusive finance. We hope this rethinking and benchmarking of PTO will facilitate more convenient evaluation and deployment, and inspire further improvements in both academia and industry within this field.
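The two-stage approach contrasted above can be made concrete with a minimal sketch. This is a hypothetical toy problem, not one of the benchmark's tasks: we predict item values from a single feature via least squares, then make the decision "pick the highest-value item" using the predictions. End-to-end methods would instead train the predictor against the quality of this induced decision.

```python
# Two-stage Predict-Then-Optimize sketch (hypothetical toy problem).
# Stage 1 (predict): fit value ~= w * feature by least squares (no intercept).
# Stage 2 (optimize): choose the item with the highest predicted value.

def two_stage_decision(train, new_feats):
    """train: list of (feature, true_value) pairs; new_feats: features of
    candidate items. Returns the index of the chosen item."""
    # Stage 1: closed-form least-squares slope for a one-parameter model.
    num = sum(f * v for f, v in train)
    den = sum(f * f for f, _ in train)
    w = num / den
    # Stage 2: the combinatorial problem here is a trivial argmax;
    # in real PTO tasks this is a scheduling, allocation, or matching solver.
    preds = [w * f for f in new_feats]
    return max(range(len(preds)), key=lambda i: preds[i])

train = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (feature, true value) pairs
choice = two_stage_decision(train, [0.5, 3.0, 1.5])  # picks index 1
```

The key critique the abstract alludes to is that Stage 1's MSE loss is blind to Stage 2: a small prediction error on the decisive item can flip the decision, which is exactly the gap that end-to-end training targets.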