
Zijun Sun

The Impact of Longitudinal Mammogram Alignment on Breast Cancer Risk Assessment

Nov 11, 2025

Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction

Jun 24, 2025

MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention

Jun 16, 2025

Emotional RAG: Enhancing Role-Playing Agents through Emotional Retrieval

Oct 30, 2024

ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information

Jun 30, 2021

Self-Explaining Structures Improve NLP Models

Dec 09, 2020

Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining

Nov 19, 2020

Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability

Oct 31, 2020

Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries

Oct 14, 2020

Large-scale Pretraining for Neural Machine Translation with Tens of Billions of Sentence Pairs

Oct 06, 2019