Daliang Li

ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent

Dec 15, 2023

Large Language Models with Controllable Working Memory

Nov 09, 2022

Preserving In-Context Learning ability in Large Language Model Fine-tuning

Nov 01, 2022

Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022

Understanding Robustness of Transformers for Image Classification

Mar 26, 2021

Modifying Memories in Transformer Models

Dec 01, 2020

FedMD: Heterogenous Federated Learning via Model Distillation

Oct 08, 2019