Li Ding

NVIDIA Nemotron 3: Efficient and Open Intelligence

Dec 24, 2025

Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning

Dec 23, 2025

Model-Agnostic Sentiment Distribution Stability Analysis for Robust LLM-Generated Texts Detection

Aug 09, 2025

Mutual-Supervised Learning for Sequential-to-Parallel Code Translation

Jun 11, 2025

Fast-Powerformer: A Memory-Efficient Transformer for Accurate Mid-Term Wind Power Forecasting

Apr 15, 2025

Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective

Oct 06, 2024

MARCA: Mamba Accelerator with ReConfigurable Architecture

Sep 16, 2024

Pareto-Optimal Learning from Preferences with Hidden Context

Jun 21, 2024

DALex: Lexicase-like Selection via Diverse Aggregation

Jan 23, 2024

Optimizing Neural Networks with Gradient Lexicase Selection

Dec 19, 2023