Abstract: Training self-driving cars is challenging because it requires vast amounts of labeled data from multiple real-world contexts, which is computationally and memory intensive. Researchers often resort to driving simulators to train the agent and then transfer the knowledge to a real-world setting. Since simulators lack realistic behavior, these methods are quite inefficient. To address this issue, we introduce a perception, planning, and control framework that maps real-world driving environments into game-like environments by setting up a reliable Markov Decision Process (MDP). We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute discrete control in real-world environments. Experiments show that the multi-agent setting outperforms the single-agent setting in all scenarios. We also propose reliable initialization, data augmentation, and training techniques that enable the agents to learn and generalize navigation in a real-world environment with minimal input video data and minimal training. Additionally, to show the efficacy of our proposed algorithm, we deploy our method in the virtual driving environment TORCS.
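As a rough illustration of what casting recorded driving footage as an MDP can look like, here is a minimal, hypothetical sketch in Python. The class name, the discrete action set, and the placeholder reward are our own illustrative assumptions; the abstract does not specify the paper's actual state, action, or reward design.

```python
import numpy as np

class VideoDrivingMDP:
    """Hypothetical MDP over a recorded driving video: states are
    normalized camera frames, actions form a small discrete control set,
    and the reward is a placeholder. Illustrative sketch only."""

    ACTIONS = ("straight", "left", "right")      # assumed discrete controls

    def __init__(self, frames):
        self.frames = frames                     # sequence of camera frames
        self.t = 0

    def reset(self):
        self.t = 0
        return self._state()

    def step(self, action):
        assert 0 <= action < len(self.ACTIONS)
        self.t += 1
        done = self.t >= len(self.frames) - 1    # episode ends with the video
        reward = 0.0                             # placeholder reward signal
        return self._state(), reward, done

    def _state(self):
        # Scale raw pixel values into [0, 1] as the observation.
        return self.frames[self.t].astype(np.float32) / 255.0
```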
Abstract: The convergence and numerical analysis of a low-memory implementation of the Orthogonal Matching Pursuit greedy strategy, termed Self Projected Matching Pursuit, are presented. This approach provides an iterative way of solving the least-squares problem with a much smaller storage requirement than direct linear algebra techniques, making it appropriate for solving large linear systems. Furthermore, the low memory requirement makes the method well suited to massive parallelization on Graphics Processing Units (GPUs), to tackle systems that can be broken into a large number of subsystems of much smaller dimension.
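To convey the low-memory idea, here is a minimal NumPy sketch of solving a least-squares problem by iterative matching-pursuit updates: the solution is refined one column at a time against the residual, storing only the matrix, the right-hand side, the iterate, and the residual. This is not the authors' Self Projected Matching Pursuit implementation; the function name, selection rule, and stopping criterion are our own illustrative choices.

```python
import numpy as np

def mp_least_squares(A, b, n_iter=1000, tol=1e-9):
    """Approximate the least-squares solution of A x ~= b by repeatedly
    selecting the column most correlated with the current residual and
    refining its coefficient. No QR/Cholesky factors are stored."""
    col_norms2 = np.sum(A * A, axis=0)           # squared column norms
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()                   # residual r = b - A @ x
    for _ in range(n_iter):
        c = A.T @ r                              # correlations with residual
        k = int(np.argmax(np.abs(c) / np.sqrt(col_norms2)))
        alpha = c[k] / col_norms2[k]             # 1-D least-squares step
        x[k] += alpha
        r -= alpha * A[:, k]
        if np.linalg.norm(r) < tol:
            break
    return x

# Usage: the normal-equations residual A^T (b - A x) shrinks toward zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x = mp_least_squares(A, b)
print(np.linalg.norm(A.T @ (b - A @ x)))
```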