Chi-Min Chan

HiPrompt: Tuning-free Higher-Resolution Generation with Hierarchical MLLM Prompts

Sep 04, 2024

AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems

Aug 27, 2024

RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation

Mar 31, 2024

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents

Aug 21, 2023

ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

Aug 14, 2023

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models

Jun 04, 2023

Plug-and-Play Document Modules for Pre-trained Models

May 28, 2023

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models

Mar 15, 2022

On Transferability of Prompt Tuning for Natural Language Understanding

Nov 12, 2021