Guangxiang Zhao

When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning

Jan 25, 2023

From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models

Oct 11, 2022

Rethinking the Openness of CLIP

Jun 04, 2022

Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models

Dec 14, 2021

Well-classified Examples are Underestimated in Classification with Deep Neural Networks

Oct 15, 2021

Topology-Imbalance Learning for Semi-Supervised Node Classification

Oct 08, 2021

Learning Relation Alignment for Calibrated Cross-modal Retrieval

Jun 01, 2021

Layer-Wise Cross-View Decoding for Sequence-to-Sequence Learning

Jun 03, 2020

Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection

Dec 25, 2019

MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning

Nov 17, 2019