
Mohan Li

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Jan 16, 2025

A Survey on Federated Learning in Human Sensing
Jan 07, 2025

WHISMA: A Speech-LLM to Perform Zero-shot Spoken Language Understanding
Aug 29, 2024

Prompting Whisper for QA-driven Zero-shot End-to-end Spoken Language Understanding
Jun 21, 2024

DiaLoc: An Iterative Approach to Embodied Dialog Localization
Mar 11, 2024

Self-regularised Minimum Latency Training for Streaming Transformer-based Speech Recognition
Apr 24, 2023

Non-autoregressive End-to-end Approaches for Joint Automatic Speech Recognition and Spoken Language Understanding
Apr 21, 2023

Multiple-hypothesis RNN-T Loss for Unsupervised Fine-tuning and Self-training of Neural Transducer
Jul 29, 2022

Transformer-based Streaming ASR with Cumulative Attention
Mar 11, 2022

Head-synchronous Decoding for Transformer-based Streaming ASR
Apr 26, 2021