Simon Kornblith

Training objective drives the consistency of representational similarity across datasets
Nov 08, 2024

When Does Perceptual Alignment Benefit Vision Representations?
Oct 14, 2024

Aligning Machine and Human Visual Representations across Abstraction Levels
Sep 10, 2024

Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability
Aug 14, 2024

Neither hype nor gloom do DNNs justice
Dec 08, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"
Nov 15, 2023

Probing clustering in neural network representations
Nov 14, 2023

Getting aligned on representational alignment
Nov 02, 2023

Small-scale proxies for large-scale Transformer training instabilities
Sep 25, 2023

Replacing softmax with ReLU in Vision Transformers
Sep 15, 2023