Andrew D. Bagdanov

Modular Memory is the Key to Continual Learning Agents

Mar 02, 2026

SpectralGCD: Spectral Concept Selection and Cross-modal Representation Learning for Generalized Category Discovery

Feb 19, 2026

No MoCap Needed: Post-Training Motion Diffusion Models with Reinforcement Learning using Only Textual Prompts

Oct 08, 2025

NTRL: Encounter Generation via Reinforcement Learning for Dynamic Difficulty Adjustment in Dungeons and Dragons

Jun 24, 2025

EFC++: Elastic Feature Consolidation with Prototype Re-balancing for Cold Start Exemplar-free Incremental Learning

Mar 13, 2025

No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces

Feb 07, 2025

Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion

Feb 06, 2025

SPEQ: Stabilization Phases for Efficient Q-Learning in High Update-To-Data Ratio Reinforcement Learning

Jan 15, 2025

Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-Trained Models

Dec 18, 2024

RE-tune: Incremental Fine Tuning of Biomedical Vision-Language Models for Multi-label Chest X-ray Classification

Oct 23, 2024