Xingen Wang

LG-CAV: Train Any Concept Activation Vector with Language Guidance
Oct 14, 2024

Transition Propagation Graph Neural Networks for Temporal Networks
Apr 15, 2023

Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation
Apr 15, 2023

Learning Dynamic Preference Structure Embedding From Temporal Networks
Nov 23, 2021

Automatic Fairness Testing of Neural Classifiers through Adversarial Sampling
Jul 29, 2021

Contrastive Model Inversion for Data-Free Knowledge Distillation
May 18, 2021

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
May 12, 2021