Michael Neumann

Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures for the Assessment and Monitoring of Amyotrophic Lateral Sclerosis at Scale

Apr 15, 2021

Investigations on Audiovisual Emotion Recognition in Noisy Conditions

Mar 02, 2021

URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents

Dec 08, 2020

Imagination-enabled Robot Perception

Nov 27, 2020

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

May 04, 2020

Cross-lingual and Multilingual Speech Emotion Recognition on English and French

Mar 01, 2018

Attentive Convolutional Neural Network based Speech Emotion Recognition: A Study on the Impact of Input Features, Signal Length, and Acted Speech

Jun 02, 2017