
Mihai Surdeanu

Change Is the Only Constant: Dynamic LLM Slicing based on Layer Redundancy

Nov 05, 2024

When and Where Did it Happen? An Encoder-Decoder Model to Identify Scenario Context

Oct 10, 2024

Memorization In In-Context Learning

Aug 21, 2024

Data Contamination Report from the 2024 CONDA Shared Task

Jul 31, 2024

Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels

Jun 26, 2024

From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples

Apr 11, 2024

Towards Realistic Few-Shot Relation Extraction: A New Meta Dataset and Evaluation

Apr 05, 2024

ELLEN: Extremely Lightly Supervised Learning For Efficient Named Entity Recognition

Mar 26, 2024

Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification

Mar 05, 2024

Enhancing Transformer RNNs with Multiple Temporal Perspectives

Feb 04, 2024