Yaliang Li

BOTS: A Unified Framework for Bayesian Online Task Selection in LLM Reinforcement Finetuning

Oct 30, 2025

Security Tensors as a Cross-Modal Bridge: Extending Text-Aligned Safety to Vision in LVLM

Jul 28, 2025

Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation

Jun 06, 2025

Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement Fine-Tuning of Large Language Models

May 23, 2025

DetailMaster: Can Your Text-to-Image Model Handle Long Prompts?

May 22, 2025

Comprehensive Evaluation and Analysis for NSFW Concept Erasure in Text-to-Image Diffusion Models

Add code
May 21, 2025
Viaarxiv icon

Responsible Diffusion Models via Constraining Text Embeddings within Safe Regions

May 21, 2025

Enhancing Latent Computation in Transformers with Latent Tokens

May 19, 2025

Tree-based Models for Vertical Federated Learning: A Survey

Apr 03, 2025

MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions

Mar 12, 2025