
Yingyan Celine

MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation

Jul 02, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization

Jun 11, 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

Jun 11, 2024

MixRT: Mixed Neural Representations For Real-Time NeRF Rendering

Dec 20, 2023

NetDistiller: Empowering Tiny Deep Learning via In-Situ Distillation

Oct 24, 2023

A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware

Jun 24, 2023

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer

Jun 10, 2023