Wushao Wen

MoExtend: Tuning New Experts for Modality and Task Extension

Aug 07, 2024

Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima

Feb 17, 2024

Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation

Dec 06, 2023

SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models

May 12, 2023

LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias Problem

May 09, 2023

ASR: Attention-alike Structural Re-parameterization

Apr 13, 2023

Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy

Oct 27, 2022

Switchable Self-attention Module

Sep 13, 2022

Mix-Pooling Strategy for Attention Mechanism

Aug 22, 2022

Difficulty-aware Image Super Resolution via Deep Adaptive Dual-Network

May 01, 2019