
Joel Niklaus

INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge

Nov 29, 2024

Breaking the Manual Annotation Bottleneck: Creating a Comprehensive Legal Case Criticality Dataset through Semi-Automated Labeling

Oct 17, 2024

Unlocking Legal Knowledge: A Multilingual Dataset for Judicial Summarization in Switzerland

Oct 17, 2024

FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning

Apr 02, 2024

Towards Explainability and Fairness in Swiss Judgement Prediction: Benchmarking on a Multilingual Dataset

Feb 26, 2024

LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text

Feb 06, 2024

Automatic Anonymization of Swiss Federal Supreme Court Rulings

Oct 07, 2023

Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents

Sep 15, 2023

Anonymity at Risk? Assessing Re-Identification Capabilities of Large Language Models

Aug 22, 2023

LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models

Aug 20, 2023