
Karsten Roth

How to Merge Your Multimodal Models Over Time?

Dec 09, 2024

Context-Aware Multimodal Pretraining

Nov 22, 2024

A Practitioner's Guide to Continual Multimodal Pretraining

Aug 26, 2024

Disentangled Representation Learning through Geometry Preservation with the Gromov-Monge Gap

Jul 10, 2024

Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models

Jun 13, 2024

ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization

Jun 06, 2024

ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections

May 30, 2024

Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models

May 02, 2024

kNN-CLIP: Retrieval Enables Training-Free Segmentation on Continually Expanding Large Vocabularies

Apr 15, 2024

Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model

Oct 26, 2023