
Simon Kornblith

Training objective drives the consistency of representational similarity across datasets

Nov 08, 2024

When Does Perceptual Alignment Benefit Vision Representations?

Oct 14, 2024

Aligning Machine and Human Visual Representations across Abstraction Levels

Sep 10, 2024

Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability

Aug 14, 2024

Neither hype nor gloom do DNNs justice

Dec 08, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"

Nov 15, 2023

Probing clustering in neural network representations

Nov 14, 2023

Getting aligned on representational alignment

Nov 02, 2023

Small-scale proxies for large-scale Transformer training instabilities

Sep 25, 2023

Replacing softmax with ReLU in Vision Transformers

Sep 15, 2023