
Peihao Wang

Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding

Jan 01, 2025

Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing

Dec 31, 2024

Oscillation Inversion: Understanding the Structure of Large Flow Models through the Lens of Inversion Methods

Nov 17, 2024

Large Spatial Model: End-to-end Unposed Images to Semantic 3D

Oct 24, 2024

Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design

Oct 24, 2024

Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild

Oct 07, 2024

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

Mar 27, 2024

Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study

Mar 26, 2024

SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity

Dec 31, 2023

Taming Mode Collapse in Score Distillation for Text-to-3D Generation

Dec 31, 2023