Abstract:Robot audition systems with multiple microphone arrays have many applications in practice. However, accurate calibration of multiple microphone arrays remains challenging because there are many unknown parameters to be identified, including the relative transforms (i.e., orientation, translation) and asynchronous factors (i.e., initial time offset and sampling clock difference) between microphone arrays. To tackle these challenges, in this paper, we adopt batch simultaneous localization and mapping (SLAM) for joint calibration of multiple asynchronous microphone arrays and sound source localization. Using the Fisher information matrix (FIM) approach, we first conduct an observability analysis (i.e., parameter identifiability) of the above-mentioned calibration problem and establish necessary/sufficient conditions under which the FIM and the Jacobian matrix have full column rank, which implies identifiability of the unknown parameters. We also discover several scenarios where the unknown parameters are not uniquely identifiable. Subsequently, we propose an effective framework to initialize the unknown parameters, which is used as the initial guess in batch SLAM for calibration of multiple microphone arrays, aiming to further enhance optimization accuracy and convergence. Extensive numerical simulations and real experiments have been conducted to verify the performance of the proposed method. The experimental results show that the proposed pipeline achieves higher accuracy and faster convergence than methods that use the noise-corrupted ground truth of the unknown parameters as the initial guess in the optimization, as well as other existing frameworks.
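As a rough illustration of how the unknown parameters enter such a calibration problem, the following minimal Python sketch (with hypothetical variable names, not the paper's actual measurement model) predicts a TDOA between a microphone on an asynchronous array and a reference microphone on a synchronized reference array, with the relative transform and the asynchronous factors appearing explicitly:

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air (m/s)

def predicted_tdoa(src, R, p, mic_local, ref_mic_world, tau, delta, t):
    """Predicted TDOA between a microphone of an asynchronous array and a
    reference microphone of a synchronized reference array.

    R, p          -- orientation and translation of the array (relative transform)
    mic_local     -- microphone position in the array's local frame
    ref_mic_world -- reference microphone position in the world frame
    tau, delta    -- initial time offset and sampling clock (drift) difference
    t             -- nominal time of the sound event, scaling the drift term
    """
    mic_world = R @ mic_local + p                       # map microphone to world frame
    dt_geom = (np.linalg.norm(src - mic_world)
               - np.linalg.norm(src - ref_mic_world)) / C_SOUND
    return dt_geom + tau + delta * t                    # asynchronous factors are additive

# Example call with arbitrary, purely illustrative numbers.
R = np.eye(3); p = np.array([2.0, 0.0, 0.0])
print(predicted_tdoa(np.array([0.0, 1.0, 0.5]), R, p,
                     np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
                     tau=0.05, delta=1e-4, t=1.0))
```

Residuals built from predictions of this kind, stacked over many sound-source positions, are the sort of quantity whose Jacobian rank determines the identifiability studied in the abstract.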
Abstract:Asynchronous microphone array calibration is a prerequisite for most robot audition applications. In practice, the calibration requires estimating microphone positions, time offsets, clock drift rates, and sound event locations simultaneously. An existing method formulates this problem as graph-based simultaneous localization and mapping (Graph-SLAM) using the conventional TDOA, the time difference of arrival between two microphones (TDOA-M), together with odometry measurements; however, it depends heavily on the initial values. In this paper, we propose a novel TDOA measurement, the time difference of arrival between adjacent sound events (TDOA-S), combine it with TDOA-M to form hybrid TDOA, and add odometry measurements to construct a Graph-SLAM formulation, which we solve with the Gauss-Newton (GN) method. TDOA-S is simple and efficient because it eliminates the time offset without introducing new variables. Simulation and real-world experiment results consistently show that our method is independent of the number of microphones, insensitive to initial values, and achieves better calibration accuracy and stability under various TDOA noise levels. In addition, the simulation results demonstrate that our method has a lower Cram\'er-Rao lower bound (CRLB) for the microphone parameters, which explains the advantages of our method.
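A small numerical sketch of the offset-elimination property claimed for TDOA-S, assuming a simple affine clock model (measured time = (1 + drift) x true time + offset); the clock model and the numbers are assumptions of this illustration, not taken from the paper:

```python
import numpy as np

c = 343.0                                   # speed of sound (m/s)
mic = np.array([1.0, 2.0, 0.5])             # hypothetical microphone position
events = np.array([[0.0, 0.0, 1.0],         # two adjacent sound-event positions
                   [0.5, 0.1, 1.0]])
emit_times = np.array([0.0, 1.0])           # emission times in the reference clock

offset = 0.37                               # unknown time offset of the mic's clock
drift = 1e-4                                # unknown clock drift rate

# Arrival times as read on the (asynchronous) microphone clock.
true_arrivals = emit_times + np.linalg.norm(events - mic, axis=1) / c
measured = (1.0 + drift) * true_arrivals + offset

# TDOA-S: difference of arrival times of adjacent events at the SAME microphone.
tdoa_s = measured[1] - measured[0]
# The offset cancels exactly; only geometry, emission interval, and drift remain.
print(tdoa_s, (1.0 + drift) * (true_arrivals[1] - true_arrivals[0]))
```

Under this assumed clock model, the two printed values coincide, showing that the per-microphone offset drops out of the adjacent-event difference.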
Abstract:Existing Large Language Models (LLMs) can invoke a variety of tools and APIs to complete complex tasks. The computer, as the most powerful and universal tool, could potentially be controlled directly by a trained LLM agent. Powered by the computer, we can hopefully build a more generalized agent to assist humans in various daily digital work. In this paper, we construct an environment for a Vision Language Model (VLM) agent to interact with a real computer screen. Within this environment, the agent can observe screenshots and manipulate the Graphical User Interface (GUI) by outputting mouse and keyboard actions. We also design an automated control pipeline that includes planning, acting, and reflecting phases, guiding the agent to continuously interact with the environment and complete multi-step tasks. Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences from the completion of a variety of daily computer tasks. Finally, we train a model, ScreenAgent, which achieves computer control capabilities comparable to GPT-4V and demonstrates more precise UI positioning capabilities. Our attempts could inspire further research on building a generalist LLM agent. The code is available at \url{https://github.com/niuzaisheng/ScreenAgent}.
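A minimal sketch of what a planning-acting-reflecting control loop could look like; every helper below (capture_screenshot, query_vlm, execute_action) is a hypothetical placeholder, not the actual ScreenAgent API:

```python
# Minimal plan-act-reflect loop for a screenshot-driven GUI agent (illustrative only).

def capture_screenshot():
    return "<screenshot bytes>"                 # placeholder for a real screen grab

def query_vlm(prompt, image=None, phase="plan"):
    return {"plan": ["open browser"],
            "act": {"type": "click", "x": 10, "y": 20},
            "reflect": "task_complete"}[phase]  # placeholder VLM call

def execute_action(action):
    print("executing", action)                  # placeholder for mouse/keyboard control

def run_task(task, max_steps=20):
    subtasks = query_vlm(f"Plan sub-tasks for: {task}", phase="plan")    # planning
    for subtask in subtasks:
        for _ in range(max_steps):
            shot = capture_screenshot()                                   # observe GUI
            action = query_vlm(subtask, image=shot, phase="act")          # acting
            execute_action(action)
            verdict = query_vlm(subtask, image=capture_screenshot(),
                                phase="reflect")                          # reflecting
            if verdict == "task_complete":
                break

run_task("Search for the weather in Sydney")
```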
Abstract:Predicting pedestrian motion trajectories is crucial for path planning and motion control of autonomous vehicles. Accurately forecasting crowd trajectories is challenging due to the uncertain nature of human motions in different environments. For training, recent deep learning-based prediction approaches mainly utilize information such as trajectory history and interactions between pedestrians. This can limit the prediction performance across various scenarios since the discrepancies between training datasets have not been properly incorporated. To overcome this limitation, this paper proposes a graph transformer structure to improve prediction performance, capturing the differences between the various sites and scenarios contained in the datasets. In particular, a self-attention mechanism and a domain adaptation module have been designed to improve the generalization ability of the model. Moreover, an additional metric considering cross-dataset sequences is introduced for training and performance evaluation purposes. The proposed framework is validated and compared against existing methods using popular public datasets, i.e., ETH and UCY. Experimental results demonstrate the improved performance of our proposed scheme.
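For concreteness, a minimal NumPy sketch of scaled dot-product self-attention over per-pedestrian embeddings, the basic mechanism such a graph transformer builds on; the shapes and weights here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of pedestrian embeddings.
    X: (N, d) array, one embedding per pedestrian in the scene."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over pedestrians
    return weights @ V                                     # interaction-aware features

rng = np.random.default_rng(0)
N, d = 5, 16                                               # 5 pedestrians, 16-d embeddings
X = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)                 # (5, 16)
```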
Abstract:Multiple microphone arrays have many applications in robot audition, including sound source localization, audio scene perception and analysis, etc. However, accurate calibration of multiple microphone arrays remains a challenge because there are many unknown parameters to be identified, including the Euler angles, geometry, and asynchronous factors between the microphone arrays. This paper is concerned with joint calibration of multiple microphone arrays and sound source localization using graph simultaneous localization and mapping (SLAM). Using a Fisher information matrix (FIM) approach, we focus on the observability analysis of the graph SLAM framework for the above-mentioned calibration problem. We thoroughly investigate the identifiability of the unknown parameters, including the Euler angles, geometry, asynchronous effects between the microphone arrays, and the sound source locations. We establish necessary/sufficient conditions under which the FIM and the Jacobian matrix have full column rank, which implies the identifiability of the unknown parameters. These conditions are closely related to the variation in the motion of the sound source and the configuration of microphone arrays, and have intuitive and physical interpretations. We also discover several scenarios where the unknown parameters are not uniquely identifiable. All theoretical findings are demonstrated using simulation data.
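The rank conditions can be probed numerically for any concrete measurement model; the sketch below is a generic local-identifiability check via a finite-difference Jacobian and the resulting FIM, not the paper's specific derivation:

```python
import numpy as np

def numerical_jacobian(h, theta, eps=1e-6):
    """Finite-difference Jacobian of a measurement function h at parameters theta."""
    y0 = h(theta)
    J = np.zeros((y0.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta); d[j] = eps
        J[:, j] = (h(theta + d) - y0) / eps
    return J

def identifiable(h, theta, sigma=1e-2):
    """Parameters are locally identifiable when the FIM (J^T J / sigma^2) has full rank."""
    J = numerical_jacobian(h, theta)
    fim = J.T @ J / sigma**2
    return np.linalg.matrix_rank(fim) == theta.size

# Toy usage with an assumed range-only model: three anchors observing a 2-D source.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
h = lambda s: np.linalg.norm(anchors - s, axis=1)
print(identifiable(h, np.array([1.0, 1.0])))        # True for a generic source position
```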
Abstract:Pose estimation is important for robotic perception, path planning, etc. Robot poses can be modeled on matrix Lie groups and are usually estimated via filter-based methods. In this paper, we establish the closed-form formula for the error propagation of the invariant extended Kalman filter (IEKF) in the presence of random noise and apply it to vision-aided inertial navigation. We evaluate our algorithm via numerical simulations and experiments on the OPENVINS platform. Both the simulations and the experiments performed on the public EuRoC MAV datasets demonstrate that our algorithm outperforms some state-of-the-art filter-based methods such as the quaternion-based EKF, the first-estimates Jacobian EKF, etc.
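As a minimal illustration of the invariant-error idea underlying the IEKF (restricted to SO(3) for brevity; this is not the paper's closed-form propagation formula), the right-invariant error recovers a left perturbation exactly, independently of the true attitude:

```python
import numpy as np

def skew(w):
    """Map a 3-vector to its 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + skew(w)
    A = skew(w / th)
    return np.eye(3) + np.sin(th) * A + (1.0 - np.cos(th)) * (A @ A)

def right_invariant_error(R_est, R_true):
    """Right-invariant attitude error used in IEKF-style analyses: eta = R_est R_true^T."""
    return R_est @ R_true.T

# A small left perturbation exp(xi) of the estimate is recovered exactly by the
# right-invariant error, regardless of the underlying true attitude.
xi = np.array([0.01, -0.02, 0.005])
R_true = so3_exp(np.array([0.3, -1.1, 0.7]))
R_est = so3_exp(xi) @ R_true
print(np.allclose(right_invariant_error(R_est, R_true), so3_exp(xi)))  # True
```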
Abstract:Trajectory optimization of sensing robots to actively gather information about targets has received much attention in the past. It is well known that, under the assumption of linear Gaussian target dynamics and sensor models, the stochastic Active Information Acquisition problem is equivalent to a deterministic optimal control problem. However, the above-mentioned assumptions regarding the target dynamic model are limiting. In real-world scenarios, the target may be subject to disturbances whose models or statistical properties are hard or impossible to obtain. Typical scenarios include abrupt maneuvers, jumping disturbances due to interactions with the environment, anomalous misbehaviors due to system faults/attacks, etc. Motivated by the above considerations, in this paper we consider targets whose dynamic models are subject to arbitrary unknown inputs whose models or statistical properties are not assumed to be available. In particular, with the aid of an unknown input decoupled filter, we formulate the sensor trajectory planning problem to track the evolution of the target state and analyze the resulting performance for tracking both the state and the unknown input evolution. Inspired by concepts of Reduced Value Iteration, a suboptimal solution that expands a search tree via Forward Value Iteration with informativeness-based pruning is proposed. Concrete suboptimality performance guarantees for tracking both the state and the unknown input are established. Numerical simulations of a target tracking example are presented to compare the proposed solution with a greedy policy.
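A generic sketch of expanding a search tree by forward value iteration with a simple pruning rule; the step model and the cost-gap pruning criterion are simplifying assumptions of this illustration and only stand in for the informativeness-based pruning developed in the paper:

```python
def plan_fvi(x0, cost0, controls, step, horizon, eps=1e-3):
    """Expand a search tree of control sequences by forward value iteration.

    step(x, u) -> (x_next, stage_cost) is a user-supplied model of how the sensor
    state evolves and how (un)informative the resulting measurement is (lower is
    better). The pruning rule below drops nodes whose accumulated cost is within
    eps of an already kept node at the same depth -- a crude stand-in for
    informativeness-based pruning.
    """
    frontier = [([], x0, cost0)]                    # (control sequence, state, cost)
    for _ in range(horizon):
        expanded = []
        for seq, x, c in frontier:
            for u in controls:
                x_next, stage = step(x, u)
                expanded.append((seq + [u], x_next, c + stage))
        expanded.sort(key=lambda node: node[2])     # cheapest (most informative) first
        frontier, kept_costs = [], []
        for node in expanded:
            if all(abs(node[2] - kc) > eps for kc in kept_costs):
                frontier.append(node)
                kept_costs.append(node[2])
    return min(frontier, key=lambda node: node[2])  # best control sequence found
```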
Abstract:For mobile robots to be effectively applied to real-world unstructured environments -- such as large-scale farming -- they require the ability to generate adaptive plans that account for both limited onboard resources and the presence of dynamic changes, including nearby moving individuals. This work provides a real-world empirical evaluation of our proposed hierarchical framework for long-term autonomy of field robots, conducted on the University of Sydney's Swagbot agricultural robot platform. We demonstrate the ability of the framework to navigate an unstructured and dynamic environment in an effective manner, validating its use for long-term deployment in large-scale farming, for tasks such as autonomous weeding in the presence of moving individuals.
Abstract:Achieving long-term autonomy for mobile robots operating in real-world unstructured environments such as farms remains a significant challenge. This is made increasingly complex in the presence of moving humans or livestock. These environments require a robot to be adaptive in its immediate plans, accounting for the state of nearby individuals and the response that they might have to the robot's actions. Additionally, in order to achieve longer-term goals, consideration of the limited on-board resources available to the robot is required, especially for extended missions such as weeding an agricultural field. To achieve efficient long-term autonomy, it is thus crucial to understand the impact that online dynamic updates to an energy efficient offline plan might have on resource usage whilst navigating through crowds or herds. To address these challenges, a hierarchical planning framework is proposed, integrating an online local dynamic path planner with an offline longer-term objective-based planner. This framework acts to achieve long-term autonomy through awareness of both dynamic responses of individuals to a robot's motion and the limited resources available. This paper details the hierarchical approach and its integration on a robotic platform, including a comprehensive description of the planning framework and associated perception modules. The approach is evaluated in real-world trials on farms, requiring both consideration of limited battery capacity and the presence of nearby moving individuals. These trials additionally demonstrate the ability of the framework to adapt resource use through variation of the local dynamic planner, allowing adaptive behaviour in changing environments. A summary video is available at https://youtu.be/DGVTrYwJ304.
Abstract:Achieving long-term autonomy of robots operating in dynamic environments such as farms remains a significant challenge. Arguably, the most demanding factors to achieve this are the on-board resource constraints such as energy, planning in the presence of moving individuals such as livestock and people, and handling unknown and undulating terrain. These considerations require a robot to be adaptive in its immediate actions in order to successfully achieve long-term, resource-efficient and robust autonomy. To achieve this, we propose a hierarchical framework that integrates a local dynamic path planner with a longer-term objective-based planner and advanced motion control methods, whilst taking into consideration the dynamic responses of moving individuals within the environment. The framework is motivated by and synthesizes our recent work on energy-aware mission planning, path planning in dynamic environments, and receding horizon motion control. In this paper we detail the proposed framework and outline its integration on a robotic platform. We evaluate the strategy in extensive simulated trials, traversing between objective waypoints to complete tasks such as soil sampling, weeding and recharging across a dynamic environment, demonstrating its capability to robustly adapt long-term mission plans in the presence of moving individuals and obstacles for real-world applications such as large-scale farming.
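A minimal sketch of how such a hierarchy might be wired together, with an offline objective sequence refined online by a local dynamic planner and tracked in a receding-horizon fashion; all module names here are hypothetical placeholders rather than the framework's actual interfaces:

```python
def mission_loop(waypoints, battery, local_planner, controller, perceive,
                 reserve=0.15):
    """Sketch of a hierarchical autonomy loop.

    waypoints     -- offline, objective-based plan (e.g., sampling/weeding sites)
    battery       -- placeholder resource monitor with a charger location
    local_planner -- online dynamic planner avoiding nearby moving individuals
    controller    -- receding-horizon motion controller tracking the local path
    perceive      -- placeholder perception module returning dynamic obstacles
    """
    for goal in waypoints:
        if battery.remaining_fraction() < reserve:
            goal = battery.charger_location()       # swap in a recharge objective
        while not controller.at(goal):
            obstacles = perceive()                  # nearby moving individuals
            path = local_planner(controller.state(), goal, obstacles)
            controller.track(path)                  # one receding-horizon step
```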