Zihan Wang

Michael Pokorny

Humanity's Last Exam

Jan 24, 2025

KEIR @ ECIR 2025: The Second Workshop on Knowledge-Enhanced Information Retrieval

Jan 20, 2025

CSHNet: A Novel Information Asymmetric Image Translation Method

Jan 17, 2025

Unlocking adaptive digital pathology through dynamic feature learning

Dec 29, 2024

Model-diff: A Tool for Comparative Study of Language Models in the Input Space

Dec 13, 2024

Reducing Tool Hallucination via Reliability Alignment

Dec 05, 2024

g3D-LF: Generalizable 3D-Language Feature Fields for Embodied Tasks

Nov 26, 2024

I Can Tell What I am Doing: Toward Real-World Natural Language Grounding of Robot Experiences

Nov 20, 2024

What You See Is What Matters: A Novel Visual and Physics-Based Metric for Evaluating Video Generation Quality

Nov 20, 2024

R^3AG: First Workshop on Refined and Reliable Retrieval Augmented Generation

Oct 27, 2024