Han-Lim Choi

Aircraft Trajectory Segmentation-based Contrastive Coding: A Framework for Self-supervised Trajectory Representation

Jul 29, 2024

LiCS: Navigation using Learned-imitation on Cluttered Space

Jun 21, 2024

Distilling Privileged Information for Dubins Traveling Salesman Problems with Neighborhoods

Apr 25, 2024

Path Planning in 3D with Motion Primitives for Wind Energy-Harvesting Fixed-Wing Aircraft

Nov 17, 2023

Computing Forward Reachable Sets for Nonlinear Adaptive Multirotor Controllers

Sep 16, 2022

DS-K3DOM: 3-D Dynamic Occupancy Mapping with Kernel Inference and Dempster-Shafer Evidential Theory

Add code
Sep 16, 2022
Figure 1 for DS-K3DOM: 3-D Dynamic Occupancy Mapping with Kernel Inference and Dempster-Shafer Evidential Theory
Figure 2 for DS-K3DOM: 3-D Dynamic Occupancy Mapping with Kernel Inference and Dempster-Shafer Evidential Theory
Figure 3 for DS-K3DOM: 3-D Dynamic Occupancy Mapping with Kernel Inference and Dempster-Shafer Evidential Theory
Viaarxiv icon

Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning

Nov 16, 2020

Online Gaussian Process State-Space Models: Learning and Planning for Partially Observable Dynamical Systems

Mar 14, 2019

A Distributed ADMM Approach to Informative Trajectory Planning for Multi-Target Tracking

Jan 09, 2019

Adaptive Path-Integral Autoencoder: Representation Learning and Planning for Dynamical Systems

Jan 03, 2019