
Sunit Bhattacharya

Understanding the role of FFNs in driving multilingual behaviour in LLMs

Apr 22, 2024

Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks

Oct 24, 2023

Multimodal Shannon Game with Images

Mar 20, 2023

Sentence Ambiguity, Grammaticality and Complexity Probes

Oct 15, 2022

Team ÚFAL at CMCL 2022 Shared Task: Figuring out the correct recipe for predicting Eye-Tracking features using Pretrained Language Models

Apr 11, 2022

EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios

Apr 06, 2022