
Jayadev Billa

Modality Collapse as Mismatched Decoding: Information-Theoretic Limits of Multimodal LLMs

Feb 26, 2026

The Cascade Equivalence Hypothesis: When Do Speech LLMs Behave Like ASR→LLM Pipelines?

Feb 19, 2026

Anatomy of Capability Emergence: Scale-Invariant Representation Collapse and Top-Down Reorganization in Neural Networks

Feb 17, 2026

When Audio-LLMs Don't Listen: A Cross-Linguistic Study of Modality Arbitration

Feb 12, 2026

Improving Low-Resource Speech Recognition with Pretrained Speech Models: Continued Pretraining vs. Semi-Supervised Training

Jul 01, 2022

Improving low-resource ASR performance with untranscribed out-of-domain data

Jun 02, 2021

Improving LSTM-CTC based ASR performance in domains with limited training data

May 23, 2018