Feng He

Implicit Priors Editing in Stable Diffusion via Targeted Token Adjustment

Dec 04, 2024

Generalizing Graph Transformers Across Diverse Graphs and Tasks via Pre-Training on Industrial-Scale Data

Jul 04, 2024

Improving Video Retrieval by Adaptive Margin

Mar 09, 2023

Friend Recall in Online Games via Pre-training Edge Transformers

Feb 20, 2023

CLOP: Video-and-Language Pre-Training with Knowledge Regularizations

Nov 07, 2022

A CLIP-Enhanced Method for Video-Language Understanding

Oct 14, 2021

Effective and Efficient Network Embedding Initialization via Graph Partitioning

Aug 28, 2019

Two-Stream CNN with Loose Pair Training for Multi-modal AMD Categorization

Jul 28, 2019

μ-Forcing: Training Variational Recurrent Autoencoders for Text Generation

May 24, 2019