Zhong-Yi Lu

AI-driven inverse design of materials: Past, present and future

Nov 14, 2024

Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation

Nov 10, 2024

AI-accelerated discovery of high critical temperature superconductors

Sep 12, 2024

AI-accelerated Discovery of Altermagnetic Materials

Nov 13, 2023

Variational optimization of the amplitude of neural-network quantum many-body ground states

Aug 18, 2023

A simple framework for contrastive learning phases of matter

May 11, 2022

Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

Mar 02, 2022

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

Jun 04, 2021

A Model Compression Method with Matrix Product Operators for Speech Enhancement

Oct 10, 2020

Compressing deep neural networks by matrix product operators

Apr 11, 2019