Zhaodong Wang

LayerDAG: A Layerwise Autoregressive Diffusion Model for Directed Acyclic Graph Generation

Nov 04, 2024

Enhanced E-Commerce Attribute Extraction: Innovating with Decorative Relation Correction and LLAMA 2.0-Based Annotation

Dec 09, 2023

Chakra: Advancing Performance Benchmarking and Co-design using Standardized Execution Traces

May 26, 2023

A Deep Value-network Based Approach for Multi-Driver Order Dispatching

Jun 08, 2021

Efficient Deep Reinforcement Learning through Policy Transfer

Feb 19, 2020

Interactive Reinforcement Learning with Dynamic Reuse of Prior Knowledge from Human/Agent's Demonstration

May 11, 2018