Suzanna Sia

Where does In-context Translation Happen in Large Language Models

Mar 07, 2024

Anti-LM Decoding for Zero-shot In-context Machine Translation

Nov 14, 2023

In-context Learning as Maintaining Coherency: A Study of On-the-fly Machine Translation Using Large Language Models

May 05, 2023

Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI

May 25, 2022

Clustering with UMAP: Why and How Connectivity Matters

Aug 12, 2021

Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for Fast and Good Topics too!

Apr 30, 2020