
Shaina Raza, PhD

FairSense-AI: Responsible AI Meets Sustainability

Mar 05, 2025

Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models

Feb 21, 2025

VLDBench: Vision Language Models Disinformation Detection Benchmark

Feb 17, 2025

Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs

Feb 11, 2025

FairUDT: Fairness-aware Uplift Decision Trees

Feb 03, 2025

Image, Text, and Speech Data Augmentation using Multimodal LLMs for Deep Learning: A Survey

Jan 29, 2025

EQUATOR: A Deterministic Framework for Evaluating LLM Reasoning with Open-Ended Questions. # v1.0.0-beta

Dec 31, 2024

ViLBias: A Framework for Bias Detection using Linguistic and Visual Cues

Dec 22, 2024

Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths?

Nov 08, 2024

Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs

Oct 31, 2024