Pu Lu

Training Agents with Weakly Supervised Feedback from Large Language Models
Nov 29, 2024

DiTFastAttn: Attention Compression for Diffusion Transformer Models
Jun 12, 2024

Ada3D: Exploiting the Spatial Redundancy with Adaptive Inference for Efficient 3D Object Detection
Jul 17, 2023

Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition
Jul 01, 2022

All You Need Is Boundary: Toward Arbitrary-Shaped Text Spotting
Nov 21, 2019