June Yong Yang

A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective
Nov 01, 2024

Augmentation-Driven Metric for Balancing Preservation and Modification in Text-Guided Image Editing
Oct 15, 2024

LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding
Oct 04, 2024

LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices
Jul 16, 2024

AdapTable: Test-Time Adaptation for Tabular Data via Shift-Aware Uncertainty Calibrator and Label Distribution Handler
Jul 15, 2024

Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models
Jul 12, 2024

Unleashing the Potential of Text-attributed Graphs: Automatic Relation Decomposition via Large Language Models
May 28, 2024

No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization
Feb 28, 2024

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing
Dec 16, 2021

Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation
Dec 02, 2021