Kai Mei

CoRE: LLM as Interpreter for Natural Language Programming, Pseudo-Code Programming, and Flow Programming of AI Agents
May 11, 2024

Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
Apr 10, 2024

AIOS: LLM Agent Operating System
Mar 26, 2024

What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents
Feb 21, 2024

War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars
Nov 28, 2023

LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation
Oct 30, 2023

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
May 28, 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework
Apr 05, 2023

Rethinking the Reverse-engineering of Trojan Triggers
Oct 27, 2022

Theoretical Analysis of Deep Neural Networks in Physical Layer Communication
Feb 21, 2022