
Fan Mo

Data-Efficient Massive Tool Retrieval: A Reinforcement Learning Approach for Query-Tool Alignment with Language Models

Oct 04, 2024

Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval

May 10, 2024

Centaur: Federated Learning for Constrained Edge Devices

Nov 12, 2022

SoK: Machine Learning with Confidential Computing

Aug 22, 2022

Towards Battery-Free Machine Learning and Inference in Underwater Environments

Feb 16, 2022

Quantifying Information Leakage from Gradients

May 28, 2021

PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments

Apr 29, 2021

Layer-wise Characterization of Latent Information Leakage in Federated Learning

Oct 17, 2020

DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

Apr 12, 2020

Towards Characterizing and Limiting Information Exposure in DNN Layers

Jul 13, 2019