Lin Chen

Data Science and Analytics (DSA), Hong Kong University of Science and Technology (Guangzhou)

InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions

Dec 12, 2024

From Intention To Implementation: Automating Biomedical Research via LLMs

Dec 12, 2024

Joint Coverage and Electromagnetic Field Exposure Analysis in Downlink and Uplink for RIS-assisted Networks

Nov 30, 2024

Open-Sora Plan: Open-Source Large Video Generation Model

Nov 28, 2024

Mobility-Aware Federated Learning: Multi-Armed Bandit Based Selection in Vehicular Network

Oct 15, 2024

MM-Ego: Towards Building Egocentric Multimodal LLMs

Oct 09, 2024

Efficient Privacy-Preserving KAN Inference Using Homomorphic Encryption

Sep 12, 2024

QD-VMR: Query Debiasing with Contextual Understanding Enhancement for Video Moment Retrieval

Aug 23, 2024

VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models

Jul 16, 2024

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Jul 03, 2024