
Chang Wang

SciGPT: A Large Language Model for Scientific Literature Understanding and Knowledge Discovery

Sep 09, 2025

Beyond Quality: Unlocking Diversity in Ad Headline Generation with Large Language Models

Aug 26, 2025

Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach

Aug 10, 2025

A Metric for MLLM Alignment in Large-scale Recommendation

Aug 07, 2025

Accurate Multi-Category Student Performance Forecasting at Early Stages of Online Education Using Neural Networks

Dec 08, 2024

WebCiteS: Attributed Query-Focused Summarization on Chinese Web Search Results with Citations

Mar 04, 2024

Neural-Optic Co-Designed Polarization-Multiplexed Metalens for Compact Computational Spectral Imaging

Nov 26, 2023

Efficient Post-training Quantization with FP8 Formats

Sep 26, 2023

On the Re-Solving Heuristic for Contextual Bandits with Knapsacks

Nov 25, 2022

QuaLA-MiniLM: a Quantized Length Adaptive MiniLM

Oct 31, 2022