Abstract: As part of human core knowledge, the representation of objects is the building block of mental representation that supports high-level concepts and symbolic reasoning. While humans develop the ability to perceive objects situated in 3D environments without supervision, models that learn the same set of abilities under constraints similar to those faced by human infants are lacking. Towards this end, we developed a novel network architecture that simultaneously learns to 1) segment objects from discrete images, 2) infer their 3D locations, and 3) perceive depth, all while using only information directly available to the brain as training data, namely: sequences of images and self-motion. The core idea is to treat objects as latent causes of visual input, which the brain uses to make efficient predictions of future scenes. As a result, object representations are learned as an essential byproduct of learning to predict.
Abstract: A close partnership between people and partially autonomous machines has enabled decades of space exploration. But to further expand our horizons, our systems must become more capable. Increasing the nature and degree of autonomy - allowing our systems to make and act on their own decisions as directed by mission teams - enables new science capabilities and enhances science return. The 2011 Planetary Science Decadal Survey (PSDS) and ongoing pre-Decadal mission studies have identified increased autonomy as a core technology required for future missions. However, even as scientific discovery has necessitated the development of autonomous systems and past flight demonstrations have been successful, institutional barriers have limited the maturation of autonomy and its infusion on existing planetary missions. Consequently, the authors and endorsers of this paper recommend that new programmatic pathways be developed to infuse autonomy, that infrastructure to support autonomous systems be invested in, that new practices be adopted, and that the cost-saving value of autonomy for operations be studied.