Abstract: Recent Vision-Language-Action (VLA) models have made impressive progress toward general-purpose robotic manipulation by post-training large Vision-Language Models (VLMs) for action prediction. Yet most VLAs entangle perception and control in a monolithic pipeline optimized purely for action, which can erode language-conditioned grounding. In our real-world tabletop tests, policies attempt grasps even when the target object is absent, are distracted by clutter, and overfit to background appearance. To address these issues, we propose OBEYED-VLA (OBject-centric and gEometrY groundED VLA), a framework that explicitly disentangles perceptual grounding from action reasoning. Instead of operating directly on raw RGB, OBEYED-VLA augments VLAs with a perception module that grounds multi-view inputs into task-conditioned, object-centric, and geometry-aware observations. This module combines a VLM-based object-centric grounding stage, which selects task-relevant object regions across camera views, with a complementary geometric grounding stage that emphasizes the 3D structure of these objects over their appearance. The resulting grounded views are then fed to a pretrained VLA policy, which we fine-tune exclusively on single-object demonstrations collected without environmental clutter or non-target objects. On a real-world UR10e tabletop setup, OBEYED-VLA substantially improves robustness over strong VLA baselines across four challenging regimes and multiple difficulty levels: distractor objects, absent-target rejection, background appearance changes, and cluttered manipulation of unseen objects. Ablation studies confirm that both semantic grounding and geometry-aware grounding are critical to these gains. Overall, the results indicate that making perception an explicit, object-centric component is an effective way to strengthen and generalize VLA-based robotic manipulation.
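To make the two-stage grounding idea concrete, the following minimal Python sketch shows one way such a perception module could be organized: a placeholder standing in for the VLM-based object-centric grounding stage produces a task-relevant mask, and a geometric grounding step back-projects depth into per-pixel 3D coordinates for the masked regions. The function names, the mask heuristic, and the camera intrinsics are illustrative assumptions, not the OBEYED-VLA implementation.

```python
# Minimal sketch of the two-stage grounding idea (not the authors' code).
# `vlm_ground_objects`, `GroundedView`, and the intrinsics are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class GroundedView:
    rgb: np.ndarray    # H x W x 3, non-target pixels suppressed
    geom: np.ndarray   # H x W x 3, per-pixel 3D coordinates (geometry channel)

def vlm_ground_objects(rgb: np.ndarray, instruction: str) -> np.ndarray:
    """Placeholder for the VLM-based object-centric grounding stage.

    Returns a boolean H x W mask of task-relevant object pixels. Here we
    simply mark the brightest quarter of pixels so the sketch runs end to end.
    """
    gray = rgb.mean(axis=-1)
    return gray >= np.quantile(gray, 0.75)

def geometric_grounding(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into per-pixel 3D points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def ground_view(rgb: np.ndarray, depth: np.ndarray, instruction: str) -> GroundedView:
    mask = vlm_ground_objects(rgb, instruction)
    rgb_masked = np.where(mask[..., None], rgb, 0)       # drop background appearance
    geom = geometric_grounding(depth, fx=600.0, fy=600.0,
                               cx=rgb.shape[1] / 2, cy=rgb.shape[0] / 2)
    geom_masked = np.where(mask[..., None], geom, 0.0)   # keep geometry of target regions only
    return GroundedView(rgb=rgb_masked, geom=geom_masked)

if __name__ == "__main__":
    rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    depth = np.random.uniform(0.3, 1.2, (480, 640))
    view = ground_view(rgb, depth, "pick up the red mug")
    print(view.rgb.shape, view.geom.shape)  # (480, 640, 3) (480, 640, 3)
```

In a full system, each camera view would be grounded this way before being passed to the downstream VLA policy, so the policy sees task-relevant objects and their geometry rather than raw, appearance-dominated RGB.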
Abstract: Inspired by how humans reason over discrete objects and their relationships, we explore whether compact object-centric and object-relation representations can form a foundation for multitask robotic manipulation. Most existing robotic multitask models rely on dense embeddings that entangle object and background cues, raising concerns about both efficiency and interpretability. In contrast, we study object-relation-centric representations as a pathway to more structured, efficient, and explainable visuomotor control. Our contributions are two-fold. First, we introduce LIBERO+, a fine-grained benchmark dataset designed to enable and evaluate object-relation reasoning in robotic manipulation. Unlike prior datasets, LIBERO+ provides object-centric annotations that enrich demonstrations with box- and mask-level labels as well as instance-level temporal tracking, supporting compact and interpretable visuomotor representations. Second, we propose SlotVLA, a slot-attention-based framework that captures both objects and their relations for action decoding. It uses a slot-based visual tokenizer to maintain temporally consistent object representations, a relation-centric decoder to produce task-relevant embeddings, and an LLM-driven module that translates these embeddings into executable actions. Experiments on LIBERO+ demonstrate that object-centric slot and object-relation slot representations drastically reduce the number of required visual tokens while providing competitive generalization. Together, LIBERO+ and SlotVLA provide a compact, interpretable, and effective foundation for advancing object-relation-centric robotic manipulation.
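As a rough illustration of the mechanism behind a slot-based visual tokenizer, the sketch below implements a simplified slot-attention step in NumPy: image features compete for a small set of slots via a softmax over the slot dimension, so a scene is summarized by a handful of object tokens instead of hundreds of patch tokens. The dimensions, the single-matrix attention, and the averaging update are our simplifications, not the SlotVLA code.

```python
# Simplified numpy sketch of the slot-attention step behind a slot-based
# visual tokenizer (not the SlotVLA implementation). The original method uses
# learned projections, layer norm, and a GRU update; all sizes are illustrative.
import numpy as np

def softmax(x: np.ndarray, axis: int) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs: np.ndarray, slots: np.ndarray, iters: int = 3) -> np.ndarray:
    """Iteratively bind input features to a small set of slots.

    inputs: (N, D) visual features from an image encoder.
    slots:  (K, D) slot initializations; the returned slots are the object
            tokens a relation-centric decoder / LLM policy would consume.
    """
    n, d = inputs.shape
    for _ in range(iters):
        # Attention logits between every slot (query) and input feature (key).
        logits = slots @ inputs.T / np.sqrt(d)            # (K, N)
        # Softmax over slots: features compete for slots (object-centric binding).
        attn = softmax(logits, axis=0)
        # Weighted mean of the features assigned to each slot.
        attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-8)
        updates = attn @ inputs                           # (K, D)
        # Simplified update (a learned GRU/MLP in the original formulation).
        slots = 0.5 * slots + 0.5 * updates
    return slots

if __name__ == "__main__":
    feats = np.random.randn(196, 64)     # e.g., 14 x 14 patch features
    init_slots = np.random.randn(8, 64)  # 8 slots
    tokens = slot_attention(feats, init_slots)
    print(tokens.shape)  # (8, 64): 8 object tokens instead of 196 patch tokens
```

The token reduction claimed in the abstract follows directly from this structure: the action decoder only attends over K slot tokens (and their pairwise relations) rather than every visual patch.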
Abstract: Growing global demand for food, coupled with continuing labor shortages, motivates the need for automated agricultural harvesting. While some specialty crops (e.g., apples, peaches, blueberries) can be harvested via existing harvesting modalities, fruits such as blackberries and raspberries require delicate handling to mitigate fruit damage that could significantly impact marketability. This motivates the development of soft robotic solutions that enable efficient, delicate harvesting. This paper presents the design, fabrication, and feasibility testing of a tendon-driven soft gripping system focused on blackberries, a fragile fruit susceptible to post-harvest damage. The gripper is low-cost and has a small form factor, allowing for the integration of a micro-servo for tendon retraction, a near-infrared (NIR) ripeness sensor that uses reflectance to identify fully ripe blackberries, and an endoscopic camera for visual servoing with a UR5. The gripper was used to harvest 139 berries with manual positioning in two separate field tests. Field testing found an average retention force of 2.06 N for ripe blackberries and 6.08 N for unripe blackberries. Sensor tests measured an average reflectance of 16.78 for ripe and 21.70 for unripe blackberries, indicating a clear distinction between the two ripeness levels. Finally, the soft robotic gripper was integrated onto a UR5 robot arm and successfully harvested fifteen artificial blackberries in a lab setting using visual servoing.
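For reference, the reported reflectance averages suggest a simple decision rule; the sketch below applies a midpoint threshold between the two class means to classify a reading as ripe or unripe. The threshold rule and its cutoff are our assumption for illustration, not the classifier used in the paper.

```python
# Illustrative ripeness rule derived from the reported NIR reflectance averages.
# Only the two class means come from the abstract; the midpoint cutoff is our
# assumption, not the authors' method.
RIPE_MEAN = 16.78     # average reflectance reported for ripe blackberries
UNRIPE_MEAN = 21.70   # average reflectance reported for unripe blackberries
THRESHOLD = (RIPE_MEAN + UNRIPE_MEAN) / 2  # midpoint cutoff ~ 19.24

def is_ripe(reflectance: float) -> bool:
    """Classify a berry as ripe if its NIR reflectance falls below the cutoff."""
    return reflectance < THRESHOLD

if __name__ == "__main__":
    for reading in (15.9, 18.2, 20.5, 22.1):
        print(reading, "ripe" if is_ripe(reading) else "unripe")
```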