
Artem Shelmanov

Mohamed bin Zayed University of Artificial Intelligence

Is Human-Like Text Liked by Humans? Multilingual Human Detection and Preference Against AI

Feb 17, 2025

CoCoA: A Generalized Approach to Uncertainty Quantification by Integrating Confidence and Consistency of LLM Outputs

Feb 07, 2025

GenAI Content Detection Task 1: English and Multilingual Machine-Generated Text Detection: AI vs. Human

Jan 19, 2025

Libra-Leaderboard: Towards Responsible AI through a Balanced Leaderboard of Safety and Capability

Dec 24, 2024

Mental Disorders Detection in the Era of Large Language Models

Oct 09, 2024

Unconditional Truthfulness: Learning Conditional Dependency for Uncertainty Quantification of Large Language Models

Aug 20, 2024

LLM-DetectAIve: a Tool for Fine-Grained Machine-Generated Text Detection

Aug 08, 2024

Inference-Time Selective Debiasing

Jul 27, 2024

Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph

Jun 21, 2024

Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian

May 22, 2024