
Weichen Liu

Efficient Deep Learning Infrastructures for Embedded Computing Systems: A Comprehensive Survey and Future Envision

Nov 03, 2024

Enabling Energy-Efficient Deployment of Large Language Models on Memristor Crossbar: A Synergy of Large and Small

Oct 21, 2024

You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms

Aug 30, 2022

FAT: An In-Memory Accelerator with Fast Addition for Ternary Weight Neural Networks

Jan 19, 2022

HSCoNAS: Hardware-Software Co-Design of Efficient DNNs via Neural Architecture Search

Mar 11, 2021

Bringing AI To Edge: From Deep Learning's Perspective

Nov 25, 2020

Cross-filter compression for CNN inference acceleration

May 18, 2020

CrowdExpress: A Probabilistic Framework for On-Time Crowdsourced Package Deliveries

Sep 08, 2018