Shengwen Liang

InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference

Sep 08, 2024

Natural language is not enough: Benchmarking multi-modal generative AI for Verilog generation

Jul 11, 2024

Data is all you need: Finetuning LLMs for Chip Design via an Automated design-data augmentation framework

Mar 17, 2024

ChipGPT: How far are we from natural language hardware design?

May 23, 2023