Pei Yang

IDProtector: An Adversarial Noise Encoder to Protect Against ID-Preserving Image Generation

Dec 16, 2024

Anti-Reference: Universal and Immediate Defense Against Reference-Based Generation

Dec 08, 2024

Steganalysis on Digital Watermarking: Is Your Defense Truly Impervious?

Jun 13, 2024

WMAdapter: Adding WaterMark Control to Latent Diffusion Models

Jun 12, 2024

RingID: Rethinking Tree-Ring Watermarking for Enhanced Multi-Key Identification

Apr 23, 2024

PMT-IQA: Progressive Multi-task Learning for Blind Image Quality Assessment

Jan 03, 2023

Unsupervised Domain Adaptation via Deep Hierarchical Optimal Transport

Nov 21, 2022

Semantic Graph-enhanced Visual Network for Zero-shot Learning

Jun 08, 2020

Parallelized Training of Restricted Boltzmann Machines using Markov-Chain Monte Carlo Methods

Oct 14, 2019

Densifying Assumed-sparse Tensors: Improving Memory Efficiency and MPI Collective Performance during Tensor Accumulation for Parallelized Training of Neural Machine Translation Models

May 10, 2019