Wenhao Jiang

Mitigating Catastrophic Forgetting in Multi-domain Chinese Spelling Correction by Multi-stage Knowledge Transfer Framework

Feb 18, 2024

Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction

Feb 18, 2024

Few-Shot Class-Incremental Learning with Prior Knowledge

Feb 02, 2024

RigLSTM: Recurrent Independent Grid LSTM for Generalizable Sequence Learning

Nov 03, 2023

Prefix-Tuning Based Unsupervised Text Style Transfer

Oct 23, 2023

LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

Oct 14, 2023

Can Decentralized Stochastic Minimax Optimization Algorithms Converge Linearly for Finite-Sum Nonconvex-Nonconcave Problems?

Apr 24, 2023

Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos

Mar 11, 2023

SWEM: Towards Real-Time Video Object Segmentation with Sequential Weighted Expectation-Maximization

Aug 22, 2022

VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix

Jun 17, 2022