Tom Silver

VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Oct 30, 2024

Anticipatory Task and Motion Planning
Jul 18, 2024

Practice Makes Perfect: Planning to Learn Skill Parameter Policies
Feb 22, 2024

Generalized Planning in PDDL Domains with Pretrained Large Language Models
May 18, 2023

Embodied Active Learning of Relational State Abstractions for Bilevel Planning
Mar 08, 2023

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains
Aug 16, 2022

Learning Neuro-Symbolic Skills for Bilevel Planning
Jun 21, 2022

PG3: Policy-Guided Planning for Generalized Policy Generation
Apr 21, 2022

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning
Mar 17, 2022

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators
Sep 30, 2021