Xiaofeng Wang

Task-Parameter Nexus: Task-Specific Parameter Learning for Model-Based Control

Dec 17, 2024

OccScene: Semantic Occupancy-based Cross-task Mutual Learning for 3D Scene Generation

Dec 15, 2024

Hierarchical Context Alignment with Disentangled Geometric and Temporal Modeling for Semantic Occupancy Prediction

Dec 11, 2024

ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration

Nov 29, 2024

Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model

Nov 28, 2024

EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation

Nov 13, 2024

Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model

Oct 24, 2024

DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation

Oct 17, 2024

Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models

Oct 15, 2024

PersonaMark: Personalized LLM watermarking for model protection and user attribution

Sep 15, 2024