Hilde Kuehne

TimeLogic: A Temporal Logic Benchmark for Video QA

Jan 13, 2025

State-Space Large Audio Language Models

Nov 24, 2024

Teaching VLMs to Localize Specific Objects from In-context Examples

Nov 20, 2024

Convolutional Differentiable Logic Gate Networks

Nov 07, 2024

Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms

Oct 24, 2024

MaskInversion: Localized Embeddings via Optimization of Explainability Maps

Jul 29, 2024

DASS: Distilled Audio State Space Models Are Stronger and More Duration-Scalable Learners

Jul 04, 2024

Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation

Jun 14, 2024

LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity

Apr 04, 2024

Uncertainty Quantification via Stable Distribution Propagation

Feb 13, 2024