Zecheng Tang

LOGO -- Long cOntext aliGnment via efficient preference Optimization

Oct 24, 2024

Revealing and Mitigating the Local Pattern Shortcuts of Mamba

Oct 21, 2024

L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?

Oct 03, 2024

FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models

Oct 03, 2024

MemLong: Memory-Augmented Retrieval for Long Text Modeling

Aug 30, 2024

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning

May 09, 2024

Rethinking Negative Instances for Generative Named Entity Recognition

Feb 26, 2024

StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis

Jan 30, 2024

Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting

Oct 23, 2023

OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch

Oct 01, 2023