Phillip Howard

Quantifying Interpretability in CLIP Models with Concept Consistency (Mar 14, 2025)

LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression (Mar 06, 2025)

Is Your Paper Being Reviewed by an LLM? A New Benchmark Dataset and Approach for Detecting AI Text in Peer Review (Feb 26, 2025)

Training-Free Mitigation of Language Reasoning Degradation After Multimodal Instruction Tuning (Dec 04, 2024)

Debias Your Large Multi-Modal Model at Test-Time with Non-Contrastive Visual Attribute Steering (Nov 15, 2024)

Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency (Oct 22, 2024)

Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations (Oct 17, 2024)

Is Your Paper Being Reviewed by an LLM? Investigating AI Text Detectability in Peer Review (Oct 03, 2024)

Quantifying and Enabling the Interpretability of CLIP-like Models (Sep 10, 2024)

SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs (Jun 28, 2024)