
Shanda Li

An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models

Aug 01, 2024

Functional Interpolation for Relative Positions Improves Long Context Transformers

Oct 06, 2023

Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers

Feb 03, 2023

Is $L^2$ Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Jun 04, 2022

Your Transformer May Not be as Powerful as You Expect

May 26, 2022

Learning Physics-Informed Neural Networks without Stacked Back-propagation

Feb 18, 2022

Can Vision Transformers Perform Convolution?

Nov 03, 2021

Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding

Jun 23, 2021