Manas Gaur

University of Maryland, Baltimore County

Towards Robust Evaluation of Unlearning in LLMs via Data Transformations

Nov 23, 2024

A Domain-Agnostic Neurosymbolic Approach for Big Social Data Analysis: Evaluating Mental Health Sentiment on Social Media during COVID-19

Nov 11, 2024

Unboxing Occupational Bias: Grounded Debiasing LLMs with U.S. Labor Data

Aug 20, 2024

Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context

Jul 25, 2024

IoT-Based Preventive Mental Health Using Knowledge Graphs and Standards for Better Well-Being

Jun 19, 2024

WellDunn: On the Robustness and Explainability of Language Models and Large Language Models in Identifying Wellness Dimensions

Jun 17, 2024

REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs

May 03, 2024

COBIAS: Contextual Reliability in Bias Assessment

Feb 22, 2024

SaGE: Evaluating Moral Consistency in Large Language Models

Feb 21, 2024

Measuring Moral Inconsistencies in Large Language Models

Jan 26, 2024