
Uttaran Bhattacharya

Speech2UnifiedExpressions: Synchronous Synthesis of Co-Speech Affective Face and Body Expressions from Affordable Inputs

Jun 26, 2024

HanDiffuser: Text-to-Image Generation With Realistic Hand Appearances

Mar 04, 2024

VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding

Dec 04, 2023

Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior

Sep 08, 2023

Show Me What I Like: Detecting User-Specific Video Highlights Using Content-Based Multi-Head Attention

Jul 19, 2022

HighlightMe: Detecting Highlights from Human-Centric Videos

Oct 05, 2021

Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning

Aug 03, 2021

Learning Unseen Emotions from Gestures via Semantically-Conditioned Zero-Shot Perception with Adversarial Autoencoders

Sep 18, 2020

Emotions Don't Lie: A Deepfake Detection Method using Audio-Visual Affective Cues

Mar 17, 2020

EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle

Mar 14, 2020