Nishanth Kumar

Open-World Task and Motion Planning via Vision-Language Model Inferred Constraints

Nov 13, 2024

VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning

Oct 30, 2024

AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation

Oct 01, 2024

Learning to Bridge the Gap: Efficient Novelty Recovery with Planning and Reinforcement Learning

Sep 28, 2024

Adaptive Language-Guided Abstraction from Contrastive Explanations

Sep 12, 2024

Trust the PRoC3S: Solving Long-Horizon Robotics Problems with LLMs and Constraint Satisfaction

Jun 08, 2024

Practice Makes Perfect: Planning to Learn Skill Parameter Policies

Feb 22, 2024

Preference-Conditioned Language-Guided Abstraction

Feb 05, 2024

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Aug 16, 2022

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Mar 17, 2022