Zirui Liu

LiNo: Advancing Recursive Residual Decomposition of Linear and Nonlinear Patterns for Robust Time Series Forecasting

Oct 22, 2024

Weighted Diversified Sampling for Efficient Data-Driven Single-Cell Gene-Gene Interaction Discovery

Oct 21, 2024

Gradient Rewiring for Editable Graph Neural Network Training

Oct 21, 2024

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

Oct 06, 2024

Robust Network Learning via Inverse Scale Variational Sparsification

Sep 27, 2024

INT-FlashAttention: Enabling Flash Attention for INT8 Quantization

Sep 26, 2024

Assessing and Enhancing Large Language Models in Rare Disease Question Answering

Aug 15, 2024

Research on a Tibetan Tourism Viewpoint Information Generation System Based on LLMs

Jul 18, 2024

KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches

Jul 01, 2024

Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity

Jun 05, 2024