Lei Jiang

Emotion Transfer with Enhanced Prototype for Unseen Emotion Recognition in Conversation

Aug 27, 2025

Sig-DEG for Distillation: Making Diffusion Models Faster and Lighter

Aug 23, 2025

Dialogues Aspect-based Sentiment Quadruple Extraction via Structural Entropy Minimization Partitioning

Aug 07, 2025

Eyepiece-free pupil-optimized holographic near-eye displays

Jul 30, 2025

RECALLED: An Unbounded Resource Consumption Attack on Large Vision-Language Models

Jul 24, 2025

T-T: Table Transformer for Tagging-based Aspect Sentiment Triplet Extraction

May 08, 2025

Addressing Noise and Stochasticity in Fraud Detection for Service Networks

May 02, 2025

TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection

Apr 05, 2025

Revealing the Pragmatic Dilemma for Moral Reasoning Acquisition in Language Models

Feb 25, 2025

CipherPrune: Efficient and Scalable Private Transformer Inference

Feb 24, 2025