LAAS-ROC
Abstract: Indoor tracking and pose estimation, i.e., determining the position and orientation of a moving target, are increasingly important due to their numerous applications. While Inertial Navigation Systems (INS) provide high update rates, their positioning errors accumulate rapidly over time. To mitigate this, it is common to integrate INS with complementary systems that correct drift and improve accuracy. This paper presents a novel approach that combines INS with an acoustic Riemannian-based localization system to enhance indoor positioning and orientation tracking. The proposed method employs both the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) to fuse data from the two systems. The Riemannian-based localization system delivers high-accuracy estimates of the target's position and orientation, which are used to correct the INS data. A new projection algorithm is introduced to map the EKF or UKF output onto the Riemannian manifold, further improving estimation accuracy. The effectiveness of the proposed methods was evaluated through extensive numerical simulations and tests on our in-house experimental setup; the results show that they significantly outperform benchmark algorithms in both position and orientation estimation, confirming the superior performance of our approach in practical scenarios.
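A central building block in such pipelines is re-projecting the filter's orientation estimate, which drifts off the rotation manifold after additive updates, back onto it. The paper's projection algorithm is its own contribution; as a minimal illustrative sketch (not the proposed method), the following Python snippet projects a 3x3 matrix onto SO(3) via SVD, yielding the nearest rotation in the Frobenius norm:

import numpy as np

def project_to_SO3(M):
    # Return the rotation matrix closest to M in Frobenius norm.
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det = +1
    return U @ D @ Vt

# A noisy EKF/UKF orientation output is generally not a valid rotation:
R_noisy = np.eye(3) + 0.05 * np.random.randn(3, 3)
R_proj = project_to_SO3(R_noisy)
assert np.allclose(R_proj @ R_proj.T, np.eye(3), atol=1e-8)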
Abstract: Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction. Thus, it is crucial to ensure that the models learnt can be audited or understood by human users, do not create or reproduce discrimination or bias, and do not leak sensitive information regarding their training data. Indeed, interpretability, fairness and privacy are key requirements for the development of responsible machine learning, and all three have been studied extensively during the last decade. However, they have mainly been considered in isolation, while in practice they interact with each other, either positively or negatively. In this Systematization of Knowledge (SoK) paper, we survey the literature on the interactions between these three desiderata. More precisely, for each pairwise interaction, we summarize the identified synergies and tensions. These findings highlight several fundamental theoretical and empirical conflicts, while also demonstrating that jointly considering these different requirements is challenging when one aims to preserve a high level of utility. To address this issue, we also discuss possible conciliation mechanisms, showing that a careful design can enable these different concerns to be handled successfully in practice.
Abstract: Interpretability is often pointed out as a key requirement for trustworthy machine learning. However, learning and releasing models that are inherently interpretable leaks information regarding the underlying training data. As such disclosure may directly conflict with privacy, precisely quantifying the privacy impact of such a breach is a fundamental problem. For instance, previous work has shown that the structure of a decision tree can be leveraged to build a probabilistic reconstruction of its training dataset, with the uncertainty of the reconstruction serving as a relevant metric for the information leak. In this paper, we propose a novel framework generalizing these probabilistic reconstructions in the sense that it can handle other forms of interpretable models and more generic types of knowledge. In addition, we demonstrate that under realistic assumptions regarding the structure of the interpretable models, the uncertainty of the reconstruction can be computed efficiently. Finally, we illustrate the applicability of our approach on both decision trees and rule lists, by comparing the theoretical information leak associated with either exact or heuristic learning algorithms. Our results suggest that, for a given accuracy level, optimal interpretable models are often more compact and leak less information regarding their training data than greedily-built ones.
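To make the notion of reconstruction uncertainty concrete, consider binary features: each training example that reaches a leaf has the features tested along its root-to-leaf path fixed, while the remaining features stay unconstrained and contribute one bit of uncertainty each. The following toy Python sketch (an illustration of the general idea, not the paper's framework) measures this:

def reconstruction_uncertainty_bits(leaves, n_features):
    # leaves: list of (path_length, n_examples) pairs, one per leaf.
    return sum(n_ex * (n_features - path_len) for path_len, n_ex in leaves)

# A depth-2 tree over 10 binary features with 3 leaves:
leaves = [(1, 40), (2, 25), (2, 35)]
print(reconstruction_uncertainty_bits(leaves, 10))  # 840 bits
# Deeper paths pin down more features, so they leak more per example.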
Abstract: In recent years, a growing body of work has emerged on how to learn machine learning models under fairness constraints, often expressed with respect to some sensitive attributes. In this work, we consider the setting in which an adversary has black-box access to a target model and show that information about this model's fairness can be exploited by the adversary to enhance their reconstruction of the sensitive attributes of the training data. More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes to the adversary's guess. The proposed method is agnostic to the type of target model, the fairness-aware learning method, and the auxiliary knowledge of the adversary. To assess the applicability of our approach, we have conducted a thorough experimental evaluation on two state-of-the-art fair learning methods, using four different fairness metrics with a wide range of tolerances, and on three datasets of diverse sizes and sensitive attributes. The experimental results demonstrate the effectiveness of the proposed approach in improving the reconstruction of the sensitive attributes of the training set.
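As a concrete illustration of the correction step (a simplified Python sketch, not the paper's exact method), suppose the adversary knows from a published fairness statistic how many training records belong to group 1; the initial guess can then be minimally corrected by flipping its least confident entries until this aggregate constraint holds:

import numpy as np

def correct_guess(probs, target_count):
    # probs[i] = adversary's estimated P(s_i = 1); returns a 0/1 guess
    # with exactly target_count ones, closest to thresholding probs.
    guess = (probs >= 0.5).astype(int)
    diff = target_count - guess.sum()
    if diff > 0:    # need more ones: flip the most confident zeros first
        idx = np.argsort(-probs[guess == 0])
        guess[np.flatnonzero(guess == 0)[idx[:diff]]] = 1
    elif diff < 0:  # too many ones: flip the least confident ones first
        idx = np.argsort(probs[guess == 1])
        guess[np.flatnonzero(guess == 1)[idx[:-diff]]] = 0
    return guess

probs = np.array([0.9, 0.2, 0.6, 0.55, 0.1])
print(correct_guess(probs, 2))  # [1 0 1 0 0]: the two most confident ones kept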
Abstract: The growing interest in explainable artificial intelligence (XAI) for critical decision making motivates the need for interpretable machine learning (ML) models. Indeed, due to their structure (especially at small sizes), these models are inherently understandable by humans. Recently, several exact methods for computing such models have been proposed to overcome the weaknesses of traditional heuristic methods, by providing more compact models or better prediction quality. Despite offering a compact representation of Boolean functions, binary decision diagrams (BDDs) have not gained as much interest as other interpretable ML models. In this paper, we first propose SAT-based models for learning optimal BDDs (in terms of the number of features) that classify all input examples. Then, we lift the encoding to a MaxSAT model to learn optimal BDDs of limited depth that maximize the number of correctly classified examples. Finally, we tackle the fragmentation problem by introducing a method to merge compatible subtrees of the BDDs found via the MaxSAT model. Our empirical study shows clear benefits of the proposed approach in terms of prediction quality and interpretability (i.e., smaller size) compared to state-of-the-art approaches.
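The merging step addresses fragmentation: structurally identical subtrees that the encoding produces separately can be shared. The Python sketch below shows the classic reduction that merges isomorphic subtrees in a decision diagram via hash-consing (the paper's compatible-subtree merging is a related, MaxSAT-specific variant):

def reduce_bdd(node, cache=None):
    # node is ('leaf', value) or (var, lo_subtree, hi_subtree).
    if cache is None:
        cache = {}
    if node[0] == 'leaf':
        return cache.setdefault(node, node)
    var, lo, hi = node
    lo, hi = reduce_bdd(lo, cache), reduce_bdd(hi, cache)
    if lo is hi:  # both branches identical: the test is redundant
        return lo
    return cache.setdefault((var, lo, hi), (var, lo, hi))

t = ('x1', ('x2', ('leaf', 0), ('leaf', 1)),
           ('x3', ('x2', ('leaf', 0), ('leaf', 1)), ('leaf', 1)))
r = reduce_bdd(t)
print(r[1] is r[2][1])  # True: the two identical x2 subtrees are now shared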
Abstract: High-accuracy range estimation is an essential tool required in many modern applications and technologies. However, continuous range estimation of a moving target is a challenging task, especially under Doppler effects. This paper presents a novel signal design, which we name differential Zadoff-Chu (DZC). Under Doppler effects, DZC sequences improve the performance of maximum likelihood (ML)-based range estimation compared to its performance with regular ZC sequences. Moreover, a reduced-complexity ranging algorithm utilizing DZC sequences is proposed and shown to outperform the regular ZC ML-based range estimation. The proposed system is evaluated in a typical indoor environment, using low-cost ultrasound hardware. Under a low signal-to-noise ratio (-10 dB SNR), more than 90% of the range estimates have an error of less than 1.6 mm, over a movement range from 0.2 m to 2.2 m with a maximum velocity of 0.5 m/s. For the same movement range, the system provides range estimates with a root mean square error (RMSE) of less than 0.76 mm in a high-SNR scenario (10 dB) and an RMSE of less than 0.85 mm in a low-SNR scenario (-10 dB). For a larger movement range, from 1.8 m to 4.2 m with a maximum velocity of 1.91 m/s, the proposed system provides range estimates with an RMSE of less than 7.70 mm at 10 dB SNR.
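For background, the Python snippet below generates a standard ZC sequence and recovers a delay with the matched filter, which is the ML estimator under white Gaussian noise; the differential DZC construction itself is the paper's contribution and is not reproduced here (the length N, root u and delay are arbitrary illustration values):

import numpy as np

def zadoff_chu(u, N):
    # Root-u ZC sequence of odd length N, with gcd(u, N) = 1.
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u, delay = 353, 7, 120
zc = zadoff_chu(u, N)
rx = np.roll(zc, delay) + 0.1 * (np.random.randn(N) + 1j * np.random.randn(N))

# Circular cross-correlation (matched filter) peaks at the true delay:
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc)))
print(np.argmax(np.abs(corr)))  # 120; range = delay * c / fs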
Abstract: The widespread use of machine learning models, especially within the context of decision-making systems impacting individuals, raises many ethical issues with respect to the fairness and interpretability of these models. While research in these domains is booming, very few works have addressed the two issues simultaneously. To address this shortcoming, we propose FairCORELS, a supervised learning algorithm whose objective is to learn fair and interpretable models at the same time. FairCORELS is a multi-objective variant of CORELS, a branch-and-bound algorithm designed to compute accurate and interpretable rule lists. By jointly addressing fairness and interpretability, FairCORELS can achieve better fairness/accuracy tradeoffs than existing methods, as demonstrated by the empirical evaluation performed on real datasets. Our paper also contains additional contributions regarding the search strategies for optimizing the multi-objective function integrating fairness, accuracy and interpretability.
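To make the multi-objective trade-off concrete, the Python sketch below scores a candidate rule list on the two quantities FairCORELS balances, accuracy and a statistical parity gap, on toy data (an illustration of the objectives, not the FairCORELS implementation; here a rule list is an ordered list of (predicate, prediction) pairs plus a default prediction):

import numpy as np

def predict(rule_list, default, X):
    out = np.full(len(X), default)
    decided = np.zeros(len(X), dtype=bool)
    for pred, label in rule_list:   # first matching rule decides
        hit = pred(X) & ~decided
        out[hit], decided[hit] = label, True
    return out

def objectives(rule_list, default, X, y, sensitive):
    yhat = predict(rule_list, default, X)
    acc = (yhat == y).mean()
    gap = abs(yhat[sensitive == 1].mean() - yhat[sensitive == 0].mean())
    return acc, gap   # to be maximized / minimized, respectively

X = np.random.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)
s = (np.random.rand(200) > 0.5).astype(int)
rules = [(lambda X: X[:, 0] > 0.6, 1), (lambda X: X[:, 1] > 0.8, 0)]
print(objectives(rules, 0, X, y, s))  # (accuracy, statistical parity gap)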
Abstract: We study the notion of robustness in stable matching problems. We first define robustness by introducing $(a,b)$-supermatches. An $(a,b)$-supermatch is a stable matching in which, if $a$ pairs break up, it is possible to find another stable matching by changing the partners of those $a$ pairs and of at most $b$ other pairs. In this context, we define the most robust stable matching as a $(1,b)$-supermatch where $b$ is minimum. We show that checking whether a given stable matching is a $(1,b)$-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.
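For background, the stable matchings over which these robustness notions are defined can be computed with the classic Gale-Shapley algorithm, sketched below in Python (the $(1,b)$-supermatch check itself is the paper's contribution):

def gale_shapley(men_prefs, women_prefs):
    # prefs: dict person -> list of potential partners, most preferred first.
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)           # men without a partner yet
    nxt = {m: 0 for m in men_prefs}  # next woman each man will propose to
    fiance = {}                      # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # w prefers m: swap partners
            free.append(fiance[w])
            fiance[w] = m
        else:
            free.append(m)
    return {m: w for w, m in fiance.items()}

men = {'a': ['x', 'y'], 'b': ['y', 'x']}
women = {'x': ['b', 'a'], 'y': ['a', 'b']}
print(gale_shapley(men, women))  # {'b': 'y', 'a': 'x'}: every man gets his first choice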
Abstract: Robust Stable Marriage (RSM) is a variant of the classical Stable Marriage problem, in which the robustness of a given stable matching is measured by the number of modifications required to repair it in case an unforeseen event occurs. We focus on the complexity of finding an $(a,b)$-supermatch. An $(a,b)$-supermatch is defined as a stable matching in which, if any $a$ (non-fixed) men/women break up, it is possible to find another stable matching by changing the partners of those $a$ men/women and the partners of at most $b$ other couples. To show that deciding whether there exists an $(a,b)$-supermatch is NP-complete, we first introduce a SAT formulation shown to be NP-complete using Schaefer's Dichotomy Theorem. Then, we show the equivalence between this SAT formulation and finding a $(1,1)$-supermatch on a specific family of instances.
Abstract: The FOCUS constraint expresses the notion that solutions are concentrated. In practice, this constraint suffers from the rigidity of its semantics. To tackle this issue, we propose three generalizations of the FOCUS constraint. For each one, we provide a complete filtering algorithm and discuss decompositions.
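For reference, the original FOCUS([x_1..x_n], yc, len, k) constraint requires that the variables taking values strictly greater than k be coverable by at most yc disjoint sequences of consecutive variables, each of length at most len. The Python sketch below checks this semantics on a fixed assignment (the paper's contribution is filtering algorithms for the generalized constraints, not this check):

import math

def focus_holds(xs, yc, length, k):
    runs, run = [], 0
    for v in xs:               # collect maximal runs of values > k
        if v > k:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    # a maximal run of size r needs ceil(r / length) covering sequences
    return sum(math.ceil(r / length) for r in runs) <= yc

print(focus_holds([0, 3, 4, 0, 0, 5, 0], yc=2, length=2, k=0))  # True
print(focus_holds([3, 3, 3, 0, 3, 0, 3], yc=2, length=2, k=0))  # False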