
Yi Liao

SATA: Spatial Autocorrelation Token Analysis for Enhancing the Robustness of Vision Transformers

Sep 30, 2024

SoftDedup: an Efficient Data Reweighting Method for Speeding Up Language Model Pre-training

Jul 09, 2024

Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification

Jul 11, 2023

A resource-efficient deep learning framework for low-dose brain PET image reconstruction and analysis

Feb 14, 2022

A Compositional Feature Embedding and Similarity Metric for Ultra-Fine-Grained Visual Categorization

Oct 06, 2021

PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation

Apr 26, 2021

Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order

Apr 24, 2020

Interpreting Predictive Process Monitoring Benchmarks

Dec 22, 2019

Zero-Shot Paraphrase Generation with Multilingual Language Models

Nov 09, 2019

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

Sep 05, 2019