Abstract: We study the task of locating a user in a mapped indoor environment using natural language queries and images from the environment. Building on recent pretrained vision-language models, we learn a similarity score between text descriptions and images of locations in the environment. This score lets us identify the locations that best match a language query, yielding an estimate of the user's location. Our approach generalizes to environments, text, and images that were not seen during training. One model, a finetuned CLIP, outperformed humans in our evaluation.
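To illustrate the scoring step described above, here is a minimal sketch of text-to-image localization with an off-the-shelf CLIP checkpoint from HuggingFace Transformers. The checkpoint name, image paths, and query string are placeholders for illustration, not the paper's actual setup or finetuned model.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP; the paper's finetuned model would be loaded the same way.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate images of locations in the mapped environment (hypothetical paths).
image_paths = ["loc_kitchen.jpg", "loc_hallway.jpg", "loc_office.jpg"]
images = [Image.open(p) for p in image_paths]

# Natural language query describing where the user is.
query = "I am standing next to a long table with several chairs around it."

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text holds the similarity of the query to each candidate image;
# the highest-scoring image is the estimated location.
scores = outputs.logits_per_text.squeeze(0)
best = scores.argmax().item()
print(f"Predicted location: {image_paths[best]} (score={scores[best]:.2f})")
```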
Abstract: To enable robots to instruct humans in collaborative tasks, we identify several aspects of language processing that are not commonly studied in this context. These include localization, planning, and generation. We suggest evaluations for each task, provide baselines using simple methods, and close by discussing challenges and opportunities in studying language for collaboration.