
Haokun Li

From Informal to Formal -- Incorporating and Evaluating LLMs on Natural Language Requirements to Verifiable Formal Proofs
Jan 27, 2025

Core Context Aware Attention for Long Context Language Modeling
Dec 17, 2024

M2DA: Multi-Modal Fusion Transformer Incorporating Driver Attention for Autonomous Driving
Mar 19, 2024

Boost Test-Time Performance with Closed-Loop Inference
Mar 26, 2022

Generative Low-bitwidth Data Free Quantization
Mar 07, 2020

Solving Satisfiability of Polynomial Formulas By Sample-Cell Projection
Mar 04, 2020