
Chaojian Li

Omni-Recon: Towards General-Purpose Neural Radiance Fields for Versatile 3D Applications

Mar 17, 2024

Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI

Jan 02, 2024

MixRT: Mixed Neural Representations For Real-Time NeRF Rendering

Dec 20, 2023

GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models

Sep 19, 2023

Instant-NeRF: Instant On-Device Neural Radiance Field Training via Algorithm-Accelerator Co-Designed Near-Memory Processing

May 09, 2023

ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement

Mar 24, 2023

INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy Geometry Priors

Dec 05, 2022

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention

Nov 09, 2022

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Oct 18, 2022

MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation

Dec 21, 2021