Abstract:Egocentric vision aims to capture and analyse the world from the first-person perspective. We explore how egocentric wearable devices can enhance industrial use cases with respect to data collection, annotation, labelling, and downstream applications. This approach would make data collection easier and allow users to provide additional context. We envision that it could serve as a supplement to the traditional industrial Machine Vision workflow. Code, dataset, and related resources will be available at: https://github.com/Vivek9Chavan/EgoVis24
Abstract:Force control enables hands-on teaching and physical collaboration, with the potential to improve the ergonomics and flexibility of automation. Established methods for the design of compliance, impedance control, and collision response can achieve free-space stability and acceptable peak contact force on lightweight, lower-payload robots. Scaling collaboration to higher payloads can enable new applications, but introduces challenges due to the more significant payload dynamics and the use of higher-payload industrial robots. To achieve high-payload manual guidance with contact, this paper proposes and validates new mechatronic design methods: standard admittance control is extended with damping feedback, compliant structures are integrated into the environment, and a contact response method that allows continuous admittance control is proposed. These methods are compared with respect to free-space stability, contact stability, and peak contact force. The resulting methods are then applied to realize two contact-rich tasks on a 16 kg payload (peg-in-hole and slot assembly) and free-space co-manipulation of a 50 kg payload.
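A minimal sketch of the control structure described in this abstract, with illustrative symbols that are not taken from the paper: a standard admittance law renders desired dynamics from the measured external force, and the damping-feedback extension adds a term on the measured velocity.

\[
\underbrace{M\,\ddot{x}_d + D\,\dot{x}_d = f_{\mathrm{ext}}}_{\text{standard admittance}}
\qquad\longrightarrow\qquad
\underbrace{M\,\ddot{x}_d + D\,\dot{x}_d = f_{\mathrm{ext}} - K_d\,\dot{x}_m}_{\text{with damping feedback}}
\]

Here \(x_d\) is the commanded position tracked by the robot's inner position controller, \(\dot{x}_m\) is the measured velocity, \(M\) and \(D\) are the rendered inertia and damping, and \(K_d\) is the added damping-feedback gain. Under this reading, a contact response method that adapts these parameters on detected contact, rather than halting the controller, would allow admittance control to run continuously.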
Abstract:The objective of many contact-rich manipulation tasks can be expressed as desired contacts between environmental objects. Simulation and planning for rigid-body contact continue to advance, but the achievable performance is significantly impacted by hardware design, such as physical compliance and sensor placement. Much of mechatronic design for contact is done from a continuous controls perspective (e.g. peak collision force, contact stability), but hardware also affects the ability to infer discrete changes in contact. Robustly detecting the contact state can support the correction of errors, both online and in trial-and-error learning. Here, discrete contact states are considered as changes in environmental dynamics, and the ability to infer these changes with proprioception (motor position and force sensors) is investigated. A metric of information gain is proposed, measuring the reduction in contact belief uncertainty from force/position measurements, and developed for fully- and partially-observed systems. The information gain depends on the coupled robot/environment dynamics and on sensor placement, especially the location and degree of compliance. Hardware experiments over a range of physical compliance conditions validate that information gain predicts the speed and certainty with which contact is detected in (i) monitoring of contact-rich assembly and (ii) collision detection. Compliant environmental structures are then optimized to allow industrial robots to achieve safe, higher-speed contact.
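One common way to formalize such a metric, offered here as a sketch rather than the paper's exact definition: treat the contact state \(c\) as a discrete latent variable with belief \(b(c)\), and measure the expected reduction in its entropy after a force/position measurement \(y\),

\[
\mathrm{IG} \;=\; H\big(b(c)\big) \;-\; \mathbb{E}_{y}\Big[H\big(b(c \mid y)\big)\Big] \;=\; I(c;\,y),
\]

where the posterior \(b(c \mid y) \propto p(y \mid c)\,b(c)\) uses measurement likelihoods induced by the coupled robot/environment dynamics and the sensor placement. The more distinguishable the likelihoods \(p(y \mid c)\) are across contact hypotheses, the larger the information gain, and the faster and more certain the contact detection.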
Abstract:Uncertainty quantification is an important and challenging problem in deep learning. Previous methods rely on dropout layers, which are not present in many modern deep architectures, or on batch normalization, which is sensitive to batch size. In this work, we address the problem of uncertainty quantification in deep residual networks by using a regularization technique called stochastic depth. We show that training residual networks with stochastic depth can be interpreted as a variational approximation to the intractable posterior over the weights of a Bayesian neural network. We demonstrate that meaningful uncertainty estimates can be obtained by sampling from a distribution of residual networks with varying depth and shared weights. Moreover, compared to the original formulation of residual networks, our method produces well-calibrated softmax probabilities with only minor changes to the network's structure. We evaluate our approach on popular computer vision datasets and measure the quality of the uncertainty estimates. We also test robustness to domain shift and show that our method expresses higher predictive uncertainty on out-of-distribution samples. Finally, we demonstrate how the proposed approach could be used to obtain uncertainty estimates in facial verification applications.
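A minimal sketch, not the authors' implementation, of how stochastic depth can be kept active at test time to obtain Monte Carlo uncertainty estimates; the architecture, survival probability, and sample count below are illustrative assumptions.

```python
# Sketch: Monte Carlo uncertainty from stochastic depth (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticResidualBlock(nn.Module):
    """Residual block whose body is randomly skipped with probability 1 - p_survive."""
    def __init__(self, dim, p_survive=0.8):
        super().__init__()
        self.p_survive = p_survive
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        # Sample the block on/off at every forward pass (also at test time), so repeated
        # passes draw residual networks of varying depth with shared weights.
        keep = torch.bernoulli(torch.tensor(self.p_survive, device=x.device))
        return x + keep * self.body(x)

class StochasticDepthNet(nn.Module):
    def __init__(self, in_dim, num_classes, dim=128, num_blocks=4):
        super().__init__()
        self.stem = nn.Linear(in_dim, dim)
        self.blocks = nn.Sequential(*[StochasticResidualBlock(dim) for _ in range(num_blocks)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        return self.head(self.blocks(F.relu(self.stem(x))))

@torch.no_grad()
def predict_with_uncertainty(model, x, num_samples=20):
    """Average the softmax over stochastic forward passes; use entropy as the uncertainty score."""
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(num_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Example usage on random inputs:
model = StochasticDepthNet(in_dim=32, num_classes=10)
mean_probs, entropy = predict_with_uncertainty(model, torch.randn(8, 32))
```

Each forward pass samples a different effective depth with shared weights; the entropy of the averaged softmax then serves as a simple predictive-uncertainty score that should increase on out-of-distribution inputs.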