Abstract: The stochastic multi-armed bandit problem studies decision-making under uncertainty. In this problem, the learner interacts with an environment by choosing an action at each round, where a round is an instance of an interaction. In response, the environment reveals to the learner a reward sampled from a stochastic process. The goal of the learner is to maximize cumulative reward. A specific variation of the stochastic multi-armed bandit problem is the restless bandit, where the reward for each action is sampled from a Markov chain. The restless bandit with a discrete state space is a well-studied problem, but to the best of our knowledge, few results exist for the continuous state-space version, which has many applications such as hyperparameter optimization. In this work, we tackle the restless bandit with a continuous state space by assuming the rewards are the inner product of an action vector and a state vector generated by a linear Gaussian dynamical system. To predict each action's next reward, we propose a method that takes a linear combination of previously observed rewards. We show that, regardless of the sequence of previous actions chosen, the reward sampled for any previously chosen action can be used to predict another action's future reward, e.g., the reward sampled for action $1$ at round $t-1$ can be used to predict the reward for action $2$ at round $t$. This is accomplished by designing a modified Kalman filter with a matrix representation that can be learned for reward prediction. Numerical evaluations are carried out on a set of linear Gaussian dynamical systems.
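As a rough illustration of the prediction step described above, the sketch below forms the next-reward estimate as a learned linear combination of the most recent observed rewards, with the weights fit by ordinary least squares. The window length `p` and the `fit_weights`/`predict_next` helpers are illustrative assumptions; they stand in for, and are not, the modified Kalman filter with a learned matrix representation developed in the paper.

```python
# Minimal sketch, NOT the paper's modified Kalman filter: predict the next
# reward as a learned linear combination of the p most recent observed
# rewards, regardless of which actions produced them.
import numpy as np

def fit_weights(rewards, p):
    """Least-squares fit of w so that r_t ~ w . [r_{t-1}, ..., r_{t-p}]."""
    X = np.array([rewards[t - p:t][::-1] for t in range(p, len(rewards))])
    y = np.array(rewards[p:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(rewards, w):
    """Combine the p most recent rewards with the learned weights."""
    p = len(w)
    return float(w @ np.array(rewards[-p:][::-1]))

# Rewards observed so far (possibly generated by different actions).
history = [0.30, 0.50, 0.40, 0.60, 0.55, 0.70, 0.65, 0.80]
w = fit_weights(history, p=3)
print(predict_next(history, w))
```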
Abstract: Future virtualized radio access network (vRAN) infrastructure providers (and today's experimental wireless testbed providers) may simultaneously be uncertain about what signals their base stations are transmitting and legally responsible for any violations those transmissions cause. These providers must monitor the spectrum of transmissions and external signals without access to the radio itself. In this paper, we propose FDMonitor, a full-duplex monitoring system attached between a transmitter and its antenna to achieve this goal. Measuring the signal at this point on the RF path is necessary but not sufficient, since the antenna is a bidirectional device. FDMonitor therefore uses a bidirectional coupler, a two-channel receiver, and a new source separation algorithm to simultaneously estimate the transmitted signal and the signal incident on the antenna. Rather than requiring an offline calibration, FDMonitor adaptively estimates the linear model of the system on the fly. FDMonitor has been running on a real-world open wireless testbed, monitoring 19 SDR platforms controlled (with bare-metal access) by outside experimenters over a seven-month period, sending alerts whenever a violation is observed. Our experimental results show that FDMonitor accurately separates signals across a range of signal parameters. Over more than seven months of observation, it achieves a positive predictive value of 97%, with a total of 20 false alerts.
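The separation step can be pictured with the following toy model. The two-port coupling matrix `H`, its values, and the way it is obtained here are purely hypothetical assumptions; the adaptive on-the-fly estimation is the contribution of the paper and is not reproduced here. The sketch only shows the linear mixing model and the inversion that recovers the forward (transmitted) and reverse (incident) signals once such a model is available.

```python
# Minimal sketch under assumptions, not FDMonitor's algorithm: the two
# receiver channels y are modeled as a linear mixture y = H x of the forward
# (transmitted) and reverse (incident) signals x; given an estimate of H,
# the two signals are separated by inverting the 2x2 linear model.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical complex-baseband forward and reverse signals.
x = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))

# Hypothetical coupling from the signals to the two coupler ports.
H_true = np.array([[1.00 + 0.10j, 0.05 - 0.02j],
                   [0.03 + 0.01j, 0.90 - 0.20j]])
noise = 0.01 * (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n)))
y = H_true @ x + noise

# Pretend the linear model was estimated online (small error added).
H_est = H_true + 0.001 * rng.standard_normal((2, 2))
x_hat = np.linalg.solve(H_est, y)   # separate the two signals

print("mean squared separation error:", np.mean(np.abs(x_hat - x) ** 2))
```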
Abstract: The stochastic multi-armed bandit has provided a framework for studying decision-making in unknown environments. We propose a variant of the stochastic multi-armed bandit where the rewards are sampled from a stochastic linear dynamical system. The proposed strategy for this variant is to learn a model of the dynamical system while choosing the action that is optimal under the learned model. Motivated by mathematical finance areas such as the Intertemporal Capital Asset Pricing Model proposed by Merton and Stochastic Portfolio Theory proposed by Fernholz, both of which model asset returns with stochastic differential equations, we apply this strategy to quantitative finance as a high-frequency trading strategy, where the goal is to maximize returns within a given time period.
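A bare-bones version of this learn-and-act loop might look like the sketch below. The per-action AR(1) predictor `ar1_predict` is a hypothetical stand-in for the learned dynamical-system model, and the greedy action choice is only one way to use such predictions; none of the names or numbers here come from the paper.

```python
# Minimal sketch under assumptions, not the paper's algorithm: at each round,
# predict every action's next reward from a model learned on past rewards and
# play the action with the largest prediction (e.g., the asset to hold next).
import numpy as np

def ar1_predict(rewards):
    """Hypothetical stand-in model: per-action AR(1) fit to past rewards."""
    r = np.asarray(rewards, dtype=float)
    if len(r) < 2:
        return 0.0
    phi = (r[:-1] @ r[1:]) / max(r[:-1] @ r[:-1], 1e-12)
    return float(phi * r[-1])

def choose_action(history):
    """history: dict mapping each action to its observed rewards so far."""
    predictions = {a: ar1_predict(rs) for a, rs in history.items()}
    return max(predictions, key=predictions.get)

# Example: two assets with short return histories.
history = {"asset_a": [0.010, 0.020, 0.015], "asset_b": [0.030, -0.010, 0.020]}
print(choose_action(history))
```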