
Puneet Kumar

Inria Saclay - Île-de-France, CVN

VisioPhysioENet: Multimodal Engagement Detection using Visual and Physiological Signals

Sep 24, 2024

TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals

Apr 15, 2024

Measuring Non-Typical Emotions for Mental Health: A Survey of Computational Approaches

Mar 09, 2024

Synthesizing Sentiment-Controlled Feedback For Multimodal Text and Image Data

Feb 12, 2024

Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals

Jun 05, 2023

Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data

Aug 25, 2022

Hybrid Fusion Based Interpretable Multimodal Emotion Recognition with Insufficient Labelled Data

Aug 24, 2022

Affective Feedback Synthesis Towards Multimodal Text and Image Data

Mar 31, 2022

Region Extraction Based Approach for Cigarette Usage Classification Using Deep Learning

Mar 23, 2021

Domain Adaptation based Technique for Image Emotion Recognition using Pre-trained Facial Expression Recognition Models

Nov 17, 2020