Xianbin Ye

Technical Report of HelixFold3 for Biomolecular Structure Prediction
Aug 30, 2024

Pre-Training on Large-Scale Generated Docking Conformations with HelixDock to Unlock the Potential of Protein-ligand Structure Prediction Models
Oct 21, 2023

PASH at TREC 2021 Deep Learning Track: Generative Enhanced Model for Multi-stage Ranking
May 24, 2022

CandidateDrug4Cancer: An Open Molecular Graph Learning Benchmark on Drug Discovery for Cancer
Mar 02, 2022

Docking-based Virtual Screening with Multi-Task Learning
Nov 18, 2021

Winner Team Mia at TextVQA Challenge 2021: Vision-and-Language Representation Learning with Pre-trained Sequence-to-Sequence Model
Jun 24, 2021