Abstract: A 2D range-only tracking scenario is non-trivial for two main reasons. First, when the states to be estimated are in Cartesian coordinates, the uncertainty region is multi-modal. Second, the probability density function of azimuth conditioned on range takes the form of a generalized von Mises distribution, which is hard to handle analytically. Even to implement a uni-modal Kalman filter, one needs expectations of trigonometric functions under the conditional bearing density, which are not available in the current literature. We prove that the trigonometric moments (circular moments) of the azimuth density conditioned on range can be computed as an infinite series that is well approximated by relatively few terms of the summation. The solution generalizes to moments of any order. This result provides an accurate depiction of the conditional azimuth density in 2D range-only tracking geometries. We also present a simple optimization problem that yields deterministic samples of the conditional azimuth density from knowledge of its circular moments, leading to an accurate filtering solution. The results are shown in a two-dimensional simulation in which the range-only sensor platform maneuvers to make the system observable, and they demonstrate that the method is feasible in such applications.
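The abstract summarizes the method without its derivation. As a rough illustration only, the sketch below computes the first few trigonometric (circular) moments of an assumed generalized von Mises azimuth density by numerical quadrature, then solves a small least-squares problem to obtain deterministic samples whose empirical moments match them. The GvM parameters, the quadrature baseline (standing in for the paper's infinite-series result), and the moment-matching objective are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: circular moments of an assumed generalized von Mises (GvM) azimuth
# density, plus deterministic samples matched to those moments.
# Quadrature is used here in place of the paper's series expansion.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

# Placeholder GvM(2) parameters (illustrative, not from the paper).
mu1, kappa1 = 0.3, 2.0
mu2, kappa2 = 0.1, 0.5

def gvm_unnorm(theta):
    """Unnormalized generalized von Mises density of order 2."""
    return np.exp(kappa1 * np.cos(theta - mu1) + kappa2 * np.cos(2.0 * (theta - mu2)))

# Normalizing constant over one period.
Z, _ = quad(gvm_unnorm, -np.pi, np.pi)

def circular_moment(m):
    """m-th trigonometric moment E[exp(i*m*theta)] by numerical quadrature."""
    re, _ = quad(lambda t: np.cos(m * t) * gvm_unnorm(t), -np.pi, np.pi)
    im, _ = quad(lambda t: np.sin(m * t) * gvm_unnorm(t), -np.pi, np.pi)
    return (re + 1j * im) / Z

# Target: the first M circular moments of the conditional azimuth density.
M = 3
target = np.array([circular_moment(m) for m in range(1, M + 1)])

def residuals(samples):
    """Mismatch between empirical and target moments for N sample angles."""
    emp = np.array([np.mean(np.exp(1j * m * samples)) for m in range(1, M + 1)])
    diff = emp - target
    return np.concatenate([diff.real, diff.imag])

# Deterministic samples: start from an even grid, adjust to match the moments.
N = 7
theta0 = np.linspace(-np.pi, np.pi, N, endpoint=False)
sol = least_squares(residuals, theta0)
samples = np.angle(np.exp(1j * sol.x))  # wrap back to [-pi, pi)
print("moment-matched samples:", np.sort(samples))
```

Such moment-matched samples could then be propagated through a measurement update in the same spirit as deterministic-sampling filters, though the paper's own filtering recursion is not reproduced here.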
Abstract: With Retrieval-Augmented Generation (RAG), Large Language Models (LLMs) are playing a pivotal role in information search and are being adopted globally. Although the multilingual capability of LLMs offers new opportunities to bridge the language barrier, do these capabilities translate into real-life scenarios where linguistic divides and knowledge conflicts between multilingual sources are known occurrences? In this paper, we studied LLMs' linguistic preferences in a RAG-based information search setting. We found that LLMs displayed a systemic bias towards information in the same language as the query in both information retrieval and answer generation. Furthermore, in scenarios where there is little information in the query's language, LLMs prefer documents in high-resource languages, reinforcing dominant views. This bias exists for both factual and opinion-based queries. Our results highlight the linguistic divide within multilingual LLMs in information search systems. The seemingly beneficial multilingual capability of LLMs may backfire on information parity by reinforcing language-specific information cocoons or filter bubbles, further marginalizing low-resource views.
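As a rough, hypothetical illustration of how the same-language retrieval preference described above could be quantified in a RAG pipeline, the sketch below ranks a toy bilingual corpus against a query using a multilingual embedding model and reports the share of same-language documents among the top-k hits. The model choice (sentence-transformers' paraphrase-multilingual-MiniLM-L12-v2), the toy corpus, and the metric are assumptions for illustration, not the paper's experimental protocol.

```python
# Sketch: measuring same-language retrieval preference in a toy RAG setup.
# Corpus, model, and metric are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Toy parallel corpus: (language tag, text) pairs on the same topic.
docs = [
    ("en", "Solar power adoption accelerated worldwide in 2023."),
    ("en", "Critics argue solar subsidies distort energy markets."),
    ("de", "Der Ausbau der Solarenergie beschleunigte sich 2023 weltweit."),
    ("de", "Kritiker meinen, Solarsubventionen verzerren die Energiemärkte."),
]
query = ("en", "How fast is solar power growing?")

doc_emb = model.encode([text for _, text in docs], normalize_embeddings=True)
q_emb = model.encode([query[1]], normalize_embeddings=True)[0]

# Rank documents by cosine similarity (dot product of normalized vectors).
order = np.argsort(doc_emb @ q_emb)[::-1]
k = 2
top_langs = [docs[i][0] for i in order[:k]]

# Same-language share among the top-k: a value of 1.0 would mean retrieval
# never surfaces cross-lingual evidence for this query.
same_lang_share = np.mean([lang == query[0] for lang in top_langs])
print(f"top-{k} languages: {top_langs}, same-language share: {same_lang_share:.2f}")
```

Averaging this share over many queries per language, and comparing it against a language-balanced baseline, would be one simple way to operationalize the retrieval-side bias the abstract reports.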
Abstract: Conversational search systems powered by large language models (LLMs) have already been used by hundreds of millions of people and are believed to bring many benefits over conventional search. However, while decades of research and public discourse have interrogated the risk that search systems increase selective exposure and create echo chambers -- limiting exposure to diverse opinions and leading to opinion polarization -- little is known about this risk for LLM-powered conversational search. We conduct two experiments to investigate: 1) whether and how LLM-powered conversational search increases selective exposure compared to conventional search; and 2) whether and how LLMs with opinion biases that either reinforce or challenge the user's views change this effect. Overall, we found that participants engaged in more biased information querying with LLM-powered conversational search, and that an opinionated LLM reinforcing their views exacerbated this bias. These results have critical implications for the development of LLMs and conversational search systems, and for the policies governing these technologies.