Hosein Mohebbi

In-Context Learning in Speech Language Models: Analyzing the Role of Acoustic Features, Linguistic Structure, and Induction Heads

Apr 07, 2026

Tracking the emergence of linguistic structure in self-supervised models learning from speech

Apr 02, 2026

Gender Disambiguation in Machine Translation: Diagnostic Evaluation in Decoder-Only Architectures

Mar 18, 2026

On the reliability of feature attribution methods for speech classification

May 22, 2025

How Language Models Prioritize Contextual Grammatical Cues?

Oct 04, 2024

Disentangling Textual and Acoustic Features of Neural Speech Representations

Oct 03, 2024

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers

Oct 15, 2023

DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers

Oct 05, 2023

Quantifying Context Mixing in Transformers

Feb 08, 2023

AdapLeR: Speeding up Inference by Adaptive Length Reduction

Mar 16, 2022