Mateja Jamnik

LLM Embeddings for Deep Learning on Tabular Data

Feb 17, 2025

Measuring Cross-Modal Interactions in Multimodal Models

Dec 20, 2024

PATHS: A Hierarchical Transformer for Efficient Whole Slide Image Analysis

Nov 27, 2024

End-to-End Ontology Learning with Large Language Models

Oct 31, 2024

Efficient Bias Mitigation Without Privileged Information

Sep 26, 2024

TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models

Sep 24, 2024

Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe

Jun 06, 2024

TabMDA: Tabular Manifold Data Augmentation for Any Classifier using Transformers with In-context Subsetting

Jun 03, 2024

MM-Lego: Modular Biomedical Multimodal Models with Minimal Fine-Tuning

May 30, 2024

Understanding Inter-Concept Relationships in Concept-Based Models

May 28, 2024