Harish Tayyar Madabushi

The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of Instruction Tuning and In-Context Learning Capabilities
Jan 15, 2025

Adapting Whisper for Regional Dialects: Enhancing Public Services for Vulnerable Populations in the United Kingdom
Jan 15, 2025

Assessing Language Comprehension in Large Language Models Using Construction Grammar
Jan 08, 2025

SpeciaLex: A Benchmark for In-Context Specialized Lexicon Learning
Jul 18, 2024

Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
Jul 03, 2024

FS-RAG: A Frame Semantics Based Approach for Improved Factual Accuracy in Large Language Models
Jun 23, 2024

Pre-Trained Language Models Represent Some Geographic Populations Better Than Others
Mar 16, 2024

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text
Mar 07, 2024

Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation
Feb 19, 2024

Word Boundary Information Isn't Useful for Encoder Language Models
Jan 15, 2024