Arnab Maiti

Near-Optimal Pure Exploration in Matrix Games: A Generalization of Stochastic Bandits & Dueling Bandits

Oct 25, 2023

Logarithmic Regret for Matrix Games against an Adversary with Noisy Bandit Feedback

Jun 22, 2023

Instance-dependent Sample Complexity Bounds for Zero-sum Matrix Games

Mar 19, 2023

Fairness and Welfare Quantification for Regret in Multi-Armed Bandits

May 27, 2022

Streaming Algorithms for Stochastic Multi-armed Bandits

Dec 09, 2020