Mirac Suzgun

Belief in the Machine: Investigating Epistemological Blind Spots of Language Models

Oct 28, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

May 30, 2024

Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding

Jan 23, 2024

Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models

Jan 02, 2024

A Benchmark for Learning to Translate a New Language from One Grammar Book

Sep 28, 2023

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions

Sep 25, 2023

string2string: A Modern Python Library for String-to-String Algorithms

Apr 27, 2023

Holistic Evaluation of Language Models

Nov 16, 2022

Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding

Nov 14, 2022

Scaling Instruction-Finetuned Language Models

Oct 20, 2022