Mrinal Kalakrishnan

Sparsh: Self-supervised touch representations for vision-based tactile sensing
Oct 31, 2024

Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation
Dec 20, 2023

Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots
Oct 19, 2023

What do we learn from a large-scale study of pre-trained visual representations in sim and real environments?
Oct 03, 2023

Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators
May 05, 2023

USA-Net: Unified Semantic and Affordance Representations for Robot Memory
Apr 25, 2023

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned
Feb 04, 2021

Action Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data
May 13, 2020

Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping
Oct 01, 2019

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward
Jun 07, 2019