Suyu Ge

A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts

Oct 02, 2024

Efficient LLM Training and Serving with Heterogeneous Context Sharding among Attention Heads

Jul 25, 2024

GenSERP: Large Language Models for Whole Page Presentation

Feb 22, 2024

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

Nov 13, 2023

Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs

Oct 07, 2023

Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories

Feb 07, 2023

Toward Understanding Bias Correlations for Mitigation in NLP

May 24, 2022

Unsupervised Summarization with Customized Granularities

Jan 29, 2022

Fine-Grained Opinion Summarization with Minimal Supervision

Oct 17, 2021

Improving Cyberbully Detection with User Interaction

Nov 01, 2020