
Jing Zhang

The University of Sydney, Australia

LoRS: Efficient Low-Rank Adaptation for Sparse Large Language Model

Jan 15, 2025

Duplex: Dual Prototype Learning for Compositional Zero-Shot Learning

Jan 13, 2025

Large Language Models for Bioinformatics

Jan 10, 2025

Cosmos World Foundation Model Platform for Physical AI

Jan 07, 2025

The Scaling Law for LoRA Base on Mutual Information Upper Bound

Jan 06, 2025

CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis

Jan 03, 2025

Dynamic Scaling of Unit Tests for Code Reward Modeling

Jan 02, 2025

MOL-Mamba: Enhancing Molecular Representation with Structural & Electronic Insights

Dec 21, 2024

Empowering LLMs to Understand and Generate Complex Vector Graphics

Dec 15, 2024

SEE: Sememe Entanglement Encoding for Transformer-based Models Compression

Dec 15, 2024