George Chrysostomou

Self-calibration for Language Model Quantization and Pruning

Oct 22, 2024

Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization

Nov 15, 2023

On the Impact of Temporal Concept Drift on Model Explanations

Oct 17, 2022

An Empirical Study on Explanations in Out-of-Domain Settings

Feb 28, 2022

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling

Sep 04, 2021

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience

Aug 31, 2021

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification

May 07, 2021

Variable Instance-Level Explainability for Text Classification

Apr 16, 2021