Aakanksha Naik

ArxivDIGESTables: Synthesizing Scientific Literature into Tables using Language Models

Oct 25, 2024

CHIME: LLM-Assisted Hierarchical Organization of Scientific Studies for Literature Review Support

Jul 23, 2024

SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

Jun 10, 2024

On-the-fly Definition Augmentation of LLMs for Biomedical NER

Mar 29, 2024

OLMo: Accelerating the Science of Language Models

Feb 07, 2024

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Jan 31, 2024

Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health

Dec 19, 2023

LongBoX: Evaluating Transformers on Long-Sequence Clinical Tasks

Nov 16, 2023

CARE: Extracting Experimental Findings From Clinical Literature

Nov 16, 2023

S2abEL: A Dataset for Entity Linking from Scientific Tables

Apr 30, 2023