Erdem Bıyık

IMPACT: Intelligent Motion Planning with Acceptable Contact Trajectories via Vision-Language Models

Mar 13, 2025

Multi-Agent Inverse Q-Learning from Demonstrations

Mar 06, 2025

RAILGUN: A Unified Convolutional Policy for Multi-Agent Path Finding Across Different Environments and Tasks

Mar 04, 2025

MILE: Model-based Intervention Learning

Feb 19, 2025

NaVILA: Legged Robot Vision-Language-Action Model for Navigation

Dec 05, 2024

Accurate and Data-Efficient Toxicity Prediction when Annotators Disagree

Oct 16, 2024

Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-functions

Oct 15, 2024

Trajectory Improvement and Reward Learning from Comparative Language Feedback

Oct 08, 2024

Coprocessor Actor Critic: A Model-Based Reinforcement Learning Approach For Adaptive Brain Stimulation

Jun 10, 2024

ViSaRL: Visual Reinforcement Learning Guided by Human Saliency

Mar 16, 2024