Phillip Howard

Training-Free Mitigation of Language Reasoning Degradation After Multimodal Instruction Tuning

Dec 04, 2024

Debias your Large Multi-Modal Model at Test-Time with Non-Contrastive Visual Attribute Steering

Nov 15, 2024

Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency

Oct 22, 2024

Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations

Oct 17, 2024

Is Your Paper Being Reviewed by an LLM? Investigating AI Text Detectability in Peer Review

Oct 03, 2024

Quantifying and Enabling the Interpretability of CLIP-like Models

Sep 10, 2024

SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs

Jun 28, 2024

Uncovering Bias in Large Vision-Language Models with Counterfactuals

Mar 29, 2024

Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples

Nov 30, 2023

NeuroPrompts: An Adaptive Framework to Optimize Prompts for Text-to-Image Generation

Nov 20, 2023