Junfeng Tian

Untie the Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models
Sep 07, 2024

P-Tailor: Customizing Personality Traits for Language Models via Mixture of Specialized LoRA Experts
Jun 18, 2024

Modeling Comparative Logical Relation with Contrastive Learning for Text Generation
Jun 13, 2024

Nyonic Technical Report
Apr 24, 2024

RethinkingTMSC: An Empirical Study for Target-Oriented Multimodal Sentiment Classification
Oct 14, 2023

UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model
Oct 08, 2023

mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding
Jul 04, 2023

ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human
Apr 28, 2023

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality
Apr 27, 2023

mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections
May 25, 2022