Abstract: We tackle the problem of tracking the human lower body as an initial step toward an automatic motion assessment system for clinical mobility evaluation, using a multimodal system that combines Inertial Measurement Unit (IMU) data, RGB images, and point-cloud depth measurements. The system casts 3-D skeleton joint estimation as an optimization problem over a factor graph. In this paper, we focus on improving the temporal consistency of the estimated human trajectories in order to greatly extend the operating range of the depth sensor. More specifically, we introduce a new factor, based on Koopman theory, that embeds the nonlinear dynamics of several lower-limb movement activities. This factor performs a two-step process: first, a custom activity recognition module based on spatial-temporal graph convolutional networks recognizes the walking activity; then, a Koopman prediction of the subsequent skeleton pose is used as an a priori estimate that drives the optimization toward more consistent results. We evaluated this module on datasets composed of multiple clinical lower-limb mobility tests, and we show that our approach reduces outliers in the estimated skeleton by almost 1 m while preserving natural walking trajectories at depths beyond 10 m.
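To make the Koopman-based prediction step concrete, the sketch below shows one common way such a one-step prediction can be formed (in the spirit of extended dynamic mode decomposition): lift the current skeleton pose into an observable space, advance it with a learned linear operator, and read back the pose block. This is a minimal illustration under assumptions, not the authors' implementation; the lifting function `lift`, the operator `K`, and the joint count `J` are hypothetical placeholders.

```python
import numpy as np

def lift(pose):
    """Lift a flattened skeleton pose (3*J,) into an observable space.
    Here: the pose itself, simple quadratic features, and a constant (illustrative only)."""
    return np.concatenate([pose, pose**2, [1.0]])

def koopman_predict(pose, K):
    """Predict the next pose as the pose block of K applied to the lifted pose."""
    z_next = K @ lift(pose)
    return z_next[:pose.shape[0]]  # first block of observables = pose coordinates

# Usage: the prediction would act as an a priori (prior) term on the next skeleton
# in the factor-graph optimization, e.g. a penalty || x_{t+1} - koopman_predict(x_t, K) ||^2.
J = 12                                                    # assumed number of lower-body joints
rng = np.random.default_rng(0)
K = rng.standard_normal((2*3*J + 1, 2*3*J + 1)) * 0.01    # stand-in for a learned Koopman operator
x_t = rng.standard_normal(3*J)
x_prior = koopman_predict(x_t, K)
```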
Abstract: We consider the problem of sampling-based feedback motion planning from bearing (direction-only) measurements. We build on our previous work, which defines a cell decomposition of the environment using RRT* and finds an output-feedback controller that navigates through each cell toward a goal location using duality, Control Lyapunov and Barrier Functions (CLF, CBF), and Linear Programming. In this paper, we propose a novel strategy that uses relative bearing measurements with respect to a set of landmarks in the environment, as opposed to full relative displacements. The main advantage is that such measurements can be obtained with a simple monocular camera. We test the proposed algorithm first in simulation and then in an experimental environment, evaluating the performance of our approach under practical issues such as mismatches in the dynamical model of the robot and measurements acquired with a camera with a limited field of view.
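The following is a hedged sketch of the flavor of controller described above, not the paper's synthesis: it only illustrates how bearing (direction-only) measurements to known landmarks can drive a linear output-feedback law of the form u = Σᵢ Kᵢ bᵢ. The landmark positions, gains, and the crude field-of-view test are illustrative assumptions; in the paper, the gains are obtained per cell via the CLF/CBF conditions and Linear Programming.

```python
import numpy as np

def bearing_feedback(x, landmarks, gains, heading=np.array([1.0, 0.0]), fov_cos=0.0):
    """Output feedback u = sum_i K_i b_i using only unit bearings b_i to visible landmarks (no ranges)."""
    u = np.zeros(2)
    for p, K in zip(landmarks, gains):
        d = p - x
        b = d / np.linalg.norm(d)      # bearing: unit direction toward the landmark
        if b @ heading >= fov_cos:     # crude limited field-of-view visibility check
            u += K @ b
    return u

# Usage with stand-in landmarks and gains.
landmarks = [np.array([5.0, 1.0]), np.array([4.0, -2.0]), np.array([6.0, 3.0])]
gains = [0.5 * np.eye(2) for _ in landmarks]   # placeholder gains (paper: synthesized via LP)
u = bearing_feedback(np.array([0.0, 0.0]), landmarks, gains)
```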
Abstract: We propose a novel approach to sampling-based, control-based motion planning that combines a representation of the environment, obtained via a modified version of the optimal Rapidly-exploring Random Tree (RRT*) algorithm, with landmark-based output-feedback controllers obtained via Control Lyapunov Functions, Control Barrier Functions, and robust Linear Programming. Our solution inherits many benefits of RRT*-like algorithms, such as the ability to implicitly handle arbitrarily complex obstacles, and asymptotic optimality. Additionally, it extends planning beyond discrete nominal paths, since the feedback controllers can correct deviations from those paths and are robust to discrepancies between the map used for planning and the real environment. We validate our approach first in simulations and then in experiments, evaluating its robustness to practical conditions such as deformations of the environment, mismatches in the dynamical model of the robot, and measurements acquired with a camera with a limited field of view.
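As a minimal illustration of how Control Lyapunov Functions, Control Barrier Functions, and Linear Programming can be combined, the sketch below solves a tiny CLF/CBF-style LP for a single-integrator robot: it picks the bounded input that most decreases a Lyapunov-like distance to the goal while respecting a barrier constraint around one circular obstacle. This is a simple assumed formulation for exposition, not the robust, landmark-based, per-cell synthesis of the paper; the dynamics, goal, obstacle, and gain `alpha` are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def clf_cbf_lp(x, goal, obs_center, obs_radius, u_max=1.0, alpha=1.0):
    grad_V = 2.0 * (x - goal)                           # CLF V(x) = ||x - goal||^2
    h = np.sum((x - obs_center)**2) - obs_radius**2     # CBF h(x) >= 0 means safe
    grad_h = 2.0 * (x - obs_center)
    # LP: minimize grad_V . u   (steepest descent of V)
    # s.t. -grad_h . u <= alpha * h   (CBF condition grad_h . u >= -alpha * h)
    #      |u_i| <= u_max             (input bounds)
    res = linprog(c=grad_V,
                  A_ub=-grad_h.reshape(1, -1),
                  b_ub=np.array([alpha * h]),
                  bounds=[(-u_max, u_max)] * 2)
    return res.x

# Usage: compute a safe, goal-directed velocity command at the current position.
u = clf_cbf_lp(x=np.array([0.0, 0.0]), goal=np.array([5.0, 0.0]),
               obs_center=np.array([2.5, 0.2]), obs_radius=1.0)
```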