Fangzhen Lin

Adjustable Robust Reinforcement Learning for Online 3D Bin Packing
Oct 06, 2023

On Computing Universal Plans for Partially Observable Multi-Agent Path Finding
May 25, 2023

Using Language Models For Knowledge Acquisition in Natural Language Reasoning Problems
Apr 04, 2023

Backward Imitation and Forward Reinforcement Learning via Bi-directional Model Rollouts
Aug 04, 2022

PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++
Jan 08, 2022

Computing Class Hierarchies from Classifiers
Dec 02, 2021

XRJL-HKUST at SemEval-2021 Task 4: WordNet-Enhanced Dual Multi-head Co-Attention for Reading Comprehension of Abstract Meaning
Mar 30, 2021

Faster and Safer Training by Embedding High-Level Knowledge into Deep Reinforcement Learning
Oct 22, 2019

Recycling Computed Answers in Rewrite Systems for Abduction
Feb 16, 2004