Wanli Ouyang

School of Electrical and Information Engineering, The University of Sydney, Australia

Human-Centric Foundation Models: Perception, Generation and Agentic Modeling

Feb 12, 2025

Towards Efficient and Intelligent Laser Weeding: Method and Dataset for Weed Stem Detection

Feb 10, 2025

Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling

Feb 10, 2025

TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models

Feb 10, 2025

Satellite Observations Guided Diffusion Model for Accurate Meteorological States at Arbitrary Resolution

Feb 09, 2025

Acquisition through My Eyes and Steps: A Joint Predictive Agent Model in Egocentric Worlds

Feb 09, 2025

MindAligner: Explicit Brain Functional Alignment for Cross-Subject Visual Decoding from Limited fMRI Data

Feb 07, 2025

Improving Video Generation with Human Feedback

Jan 23, 2025

DispFormer: Pretrained Transformer for Flexible Dispersion Curve Inversion from Global Synthesis to Regional Applications

Jan 08, 2025

Dolphin: Closed-loop Open-ended Auto-research through Thinking, Practice, and Feedback

Jan 07, 2025