
David Atkinson (for the ALFA study)

Intentionally Unintentional: GenAI Exceptionalism and the First Amendment

Jun 05, 2025

Dual Deep Learning Approach for Non-invasive Renal Tumour Subtyping with VERDICT-MRI

Apr 09, 2025

Unfair Learning: GenAI Exceptionalism and Copyright Law

Apr 01, 2025

AGGA: A Dataset of Academic Guidelines for Generative AI and Large Language Models

Jan 07, 2025

2 OLMo 2 Furious

Dec 31, 2024

Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs

Jun 28, 2024

Locating and Editing Factual Associations in Mamba

Apr 04, 2024

Algorithmic progress in language models

Mar 09, 2024

OLMo: Accelerating the Science of Language Models

Feb 07, 2024

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Jan 31, 2024