Dit-Yan Yeung

SG-LRA: Self-Generating Automatic Scoliosis Cobb Angle Measurement with Low-Rank Approximation

Nov 19, 2024

Fourier Amplitude and Correlation Loss: Beyond Using L2 Loss for Skillful Precipitation Nowcasting

Oct 30, 2024

Unified Triplet-Level Hallucination Evaluation for Large Vision-Language Models

Oct 30, 2024

Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability

Oct 15, 2024

AnyAttack: Towards Large-scale Self-supervised Generation of Targeted Adversarial Examples for Vision-Language Models

Oct 07, 2024

EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions

Sep 26, 2024

Learning High-resolution Vector Representation from Multi-Camera Images for 3D Object Detection

Jul 22, 2024

JointDreamer: Ensuring Geometry Consistency and Text Congruence in Text-to-3D Generation via Joint Score Distillation

Jul 17, 2024

Rethinking Targeted Adversarial Attacks For Neural Machine Translation

Jul 07, 2024

RoboDreamer: Learning Compositional World Models for Robot Imagination

Apr 18, 2024