Prashanth Krishnamurthy

New York University

Safe Multi-Robotic Arm Interaction via 3D Convex Shapes
Mar 14, 2025

Compliant Control of Quadruped Robots for Assistive Load Carrying
Mar 13, 2025

Towards Unified Benchmark and Models for Multi-Modal Perceptual Metrics
Dec 13, 2024

Distributed Inverse Dynamics Control for Quadruped Robots using Geometric Optimization
Dec 13, 2024

Out-of-Distribution Detection with Overlap Index
Dec 09, 2024

RoboPEPP: Vision-Based Robot Pose and Joint Angle Estimation through Embedding Predictive Pre-Training
Nov 26, 2024

OrionNav: Online Planning for Robot Autonomy with Context-Aware LLM and Open-Vocabulary Semantic Scene Graphs
Oct 08, 2024

EMMA: Efficient Visual Alignment in Multi-Modal LLMs
Oct 02, 2024

MultiTalk: Introspective and Extrospective Dialogue for Human-Environment-LLM Alignment
Sep 24, 2024

EnIGMA: Enhanced Interactive Generative Model Agent for CTF Challenges
Sep 24, 2024