Raihan Seraj

Generalizing Multi-Step Inverse Models for Representation Learning to Finite-Memory POMDPs

Apr 22, 2024

PcLast: Discovering Plannable Continuous Latent States

Nov 06, 2023

AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval

Oct 03, 2023

Tsetlin Machine for Solving Contextual Bandit Problems

Feb 04, 2022

Approximate information state for approximate planning and reinforcement learning in partially observed systems

Oct 17, 2020

Doubly Robust Off-Policy Actor-Critic Algorithms for Reinforcement Learning

Dec 11, 2019

Entropy Regularization with Discounted Future State Distribution in Policy Gradient Methods

Dec 11, 2019