Chulin Xie

RedCode: Risky Code Execution and Generation Benchmark for Code Agents
Nov 12, 2024

On Memorization of Large Language Models in Logical Reasoning
Oct 30, 2024

Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Optimization
Oct 29, 2024

LLM-PBE: Assessing Data Privacy in Large Language Models
Aug 23, 2024

Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models
Jun 23, 2024

GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning
Jun 13, 2024

Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs
Apr 10, 2024

FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning
Apr 03, 2024

TablePuppet: A Generic Framework for Relational Federated Learning
Mar 23, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression
Mar 18, 2024