
Alexander G. Huth

The University of Texas at Austin

Crafting Interpretable Embeddings by Asking LLMs Questions

May 26, 2024

Humans and language models diverge when predicting repeating text

Oct 23, 2023

Scaling laws for language encoding models in fMRI

May 22, 2023

Brain encoding models based on multimodal transformers can transfer across language and vision

May 20, 2023

Explaining black box text modules in natural language with language models

May 17, 2023

Self-supervised models of audio effectively explain human cortical responses to speech

May 27, 2022

Physically Plausible Pose Refinement using Fully Differentiable Forces

May 17, 2021

Multi-timescale representation learning in LSTM Language Models

Sep 27, 2020

A single-layer RNN can approximate stacked and bidirectional RNNs, and topologies in between

Aug 30, 2019