
Ayush Agrawal

Language Models' Factuality Depends on the Language of Inquiry

Feb 25, 2025

Physical Reasoning and Object Planning for Household Embodied Agents

Nov 22, 2023

CLIPGraphs: Multimodal Graph Networks to Infer Object-Room Affinities

Jun 02, 2023

Do Language Models Know When They're Hallucinating References?

May 29, 2023

Sequence-Agnostic Multi-Object Navigation

May 10, 2023

Towards a Mathematics Formalisation Assistant using Large Language Models

Nov 14, 2022

Lyapunov Design for Robust and Efficient Robotic Reinforcement Learning

Aug 13, 2022

Computation of Regions of Attraction for Hybrid Limit Cycles Using Reachability: An Application to Walking Robots

Feb 09, 2022

Vision-aided Dynamic Quadrupedal Locomotion on Discrete Terrain using Motion Libraries

Oct 02, 2021

Autonomous Navigation for Quadrupedal Robots with Optimized Jumping through Constrained Obstacles

Jul 01, 2021