Harish Tayyar Madabushi

SpeciaLex: A Benchmark for In-Context Specialized Lexicon Learning

Jul 18, 2024

Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models

Jul 03, 2024

FS-RAG: A Frame Semantics Based Approach for Improved Factual Accuracy in Large Language Models

Jun 23, 2024

Pre-Trained Language Models Represent Some Geographic Populations Better Than Others

Mar 16, 2024

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text

Mar 07, 2024

Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation

Feb 19, 2024

Word Boundary Information Isn't Useful for Encoder Language Models

Jan 15, 2024

Flesch or Fumble? Evaluating Readability Standard Alignment of Instruction-Tuned Language Models

Sep 11, 2023

Are Emergent Abilities in Large Language Models just In-Context Learning?

Sep 04, 2023

Construction Grammar and Language Models

Sep 04, 2023