
Mengdi Li

Can Large Language Models Identify Implicit Suicidal Ideation? An Empirical Evaluation

Feb 25, 2025

Towards User-level Private Reinforcement Learning with Human Feedback

Feb 22, 2025

Fraud-R1: A Multi-Round Benchmark for Assessing the Robustness of LLM Against Augmented Fraud and Phishing Inducements

Feb 18, 2025

What makes your model a low-empathy or warmth person: Exploring the Origins of Personality in LLMs

Oct 07, 2024

Understanding Reasoning in Chain-of-Thought from the Hopfieldian View

Oct 04, 2024

A Hopfieldian View-based Interpretation for Chain-of-Thought Reasoning

Jun 18, 2024

Large Language Models for Orchestrating Bimanual Robots

Apr 02, 2024

Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs

Mar 30, 2024

Causal State Distillation for Explainable Reinforcement Learning

Dec 30, 2023

Accelerating Reinforcement Learning of Robotic Manipulations via Feedback from Large Language Models

Nov 04, 2023