
Hongzhan Lin

Unlocking Multimodal Integration in EHRs: A Prompt Learning Framework for Language and Time Series Fusion

Feb 19, 2025

LLM-Enhanced Multiple Instance Learning for Joint Rumor and Stance Detection with Social Context Information

Feb 13, 2025

Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis

Feb 06, 2025

ClarityEthic: Explainable Moral Judgment Utilizing Contrastive Ethical Insights from Large Language Models

Dec 17, 2024

ScratchEval: Are GPT-4o Smarter than My Child? Evaluating Large Multimodal Models with Visual Programming Challenges

Nov 28, 2024

From General to Specific: Utilizing General Hallucination to Automatically Measure the Role Relationship Fidelity for Specific Role-Play Agents

Nov 12, 2024

Towards Low-Resource Harmful Meme Detection with LMM Agents

Nov 08, 2024

AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation

Oct 01, 2024

PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead

Sep 29, 2024

Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model

Aug 30, 2024