Abstract: At the risk of understating the case, connectionist approaches to machine learning, i.e. neural networks, are enjoying a small vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category-theoretic analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we conduct a brief literature survey demonstrating the limited intersection of these two topics, provide a list of desiderata for VSAs, and propose that VSAs may be understood as a (division) rig in a category enriched over a monoid in Met (the category of Lawvere metric spaces). This final contribution suggests that VSAs may be generalised beyond current implementations. It is our hope that grounding VSAs in category theory will lead to more rigorous connections with other research, both within and beyond learning and cognition.
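For readers unfamiliar with VSAs, the sketch below illustrates the algebraic operations the abstract alludes to when it calls VSAs "a family of algebras on high-dimensional vector representations": binding and bundling over random high-dimensional vectors. It is a minimal bipolar (MAP-style) example written for this summary, not code from the paper; the dimensionality, the encoding, and all names are illustrative assumptions.

    # Illustrative sketch only (not from the paper): a minimal bipolar VSA.
    # Binding is elementwise multiplication (its own inverse), bundling is
    # thresholded addition, and similarity is the cosine between vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 10_000  # high dimensionality makes random vectors quasi-orthogonal

    def random_hv():
        return rng.choice([-1.0, 1.0], size=DIM)

    def bind(a, b):          # compositional "role-filler" binding
        return a * b

    def bundle(*vs):         # superposition of several hypervectors
        return np.sign(np.sum(vs, axis=0))

    def similarity(a, b):    # cosine similarity in [-1, 1]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Encode the record {colour: red, shape: square}, then query the colour slot.
    colour, red, shape, square = (random_hv() for _ in range(4))
    record = bundle(bind(colour, red), bind(shape, square))
    query = bind(record, colour)      # unbinding: binding is self-inverse here
    print(similarity(query, red))     # high (around 0.7): red is recovered
    print(similarity(query, square))  # near 0: unrelated symbol

Because random high-dimensional bipolar vectors are nearly orthogonal, a bound-and-bundled record can be queried for one slot with little interference from the others, which is the compositional behaviour the abstract contrasts with plain neural representations.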
Abstract: Probability distributions are central to Bayesian accounts of cognition, but behavioral assessments do not directly measure them. Posterior distributions are typically computed from collections of individual participant actions, yet are used to draw conclusions about the internal structure of participant beliefs. Also not explicitly measured are the prior distributions that distinguish Bayesian models from others by representing initial states of belief. Instead, priors are usually derived from experimenters' intuitions or model assumptions and applied equally to all participants. Here we present three experiments using "Plinko", a behavioral task in which participants estimate distributions of ball drops over all available outcomes and where distributions are explicitly measured before any observations. In Experiment 1, we show that participant priors cluster around prototypical probability distributions (Gaussian, bimodal, etc.), and that prior cluster membership may indicate learning ability. In Experiment 2, we highlight participants' ability to update in response to unannounced changes in the presented distributions and how this ability is affected by environmental manipulation. Finally, in Experiment 3, we verify that individual participant priors are reliable representations and that learning is not impeded when faced with a physically implausible ball drop distribution that is dynamically defined according to individual participant input. This task will prove useful in more closely examining mechanisms of statistical learning and mental model updating without requiring many of the assumptions made by more traditional computational modeling methodologies.
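As a concrete illustration of the prior/posterior language above, the sketch below treats an explicitly elicited prior over Plinko outcome slots as Dirichlet pseudo-counts and updates it with observed ball drops. This is a hypothetical example written for this summary, not the authors' analysis pipeline; the slot count, confidence weight, and observations are all made-up values.

    # Illustrative sketch only (not the authors' analysis): an elicited prior
    # over Plinko outcome slots interpreted as Dirichlet pseudo-counts and
    # updated with observed ball drops. All quantities below are assumptions.
    import numpy as np

    n_slots = 9
    slots = np.arange(n_slots)

    # Elicited prior: a participant expecting a roughly Gaussian spread.
    elicited = np.exp(-0.5 * ((slots - 4) / 1.5) ** 2)
    prior = elicited / elicited.sum()

    # Treat the prior as Dirichlet pseudo-counts with a confidence weight.
    confidence = 20.0
    alpha = prior * confidence

    # Observed ball drops (slot indices), here bimodal to mimic an
    # unannounced change in the presented distribution.
    observed = np.array([1, 1, 2, 7, 7, 8, 1, 7, 2, 8])
    counts = np.bincount(observed, minlength=n_slots)

    # Posterior predictive over slots after the observations.
    posterior = (alpha + counts) / (alpha + counts).sum()
    print(np.round(prior, 3))
    print(np.round(posterior, 3))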