Yixiang Wang

Automatic Reward Design via Learning Motivation-Consistent Intrinsic Rewards

Jul 29, 2022

DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks

Oct 14, 2021

Foster Strengths and Circumvent Weaknesses: a Speech Enhancement Framework with Two-branch Collaborative Learning

Oct 12, 2021

IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks

Feb 03, 2021

Generalizing Adversarial Examples by AdaBelief Optimizer

Jan 25, 2021

Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping

Nov 05, 2020

Multi-Agent Deep Reinforcement Learning with Adaptive Policies

Nov 28, 2019