
Zhenhailong Wang

Infogent: An Agent-Based Framework for Web Information Aggregation

Oct 24, 2024

Text-Based Reasoning About Vector Graphics

Apr 10, 2024

Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning

Dec 15, 2023

Democratizing LLMs: An Exploration of Cost-Performance Trade-offs in Self-Refined Open-Source Models

Oct 22, 2023

Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration

Jul 14, 2023

Paxion: Patching Action Knowledge in Video-Language Foundation Models

May 26, 2023

RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought

May 19, 2023

Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks

Oct 01, 2022

Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners

May 29, 2022

Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning

Mar 09, 2022