In-context learning (ICL), a property demonstrated by transformer-based sequence models, refers to the automatic inference of an input-output mapping from examples of the mapping provided as context. ICL requires no explicit learning, i.e., no explicit updates of model weights, mapping the context and a new input directly to the new output. Prior work has demonstrated the usefulness of ICL for detection in MIMO channels. In this setting, the context is given by pilot symbols, and ICL automatically adapts a detector, or equalizer, to newly received signals. However, the implementations tested in the prior art were based on conventional artificial neural networks (ANNs), which may prove too energy-demanding to run on mobile devices. This paper evaluates a neuromorphic implementation of the transformer for ICL-based MIMO detection. The approach replaces ANNs with spiking neural networks (SNNs) and implements the attention mechanism via stochastic computing, requiring no multiplications, only logical AND operations and counting. On conventional digital CMOS hardware, the proposed implementation is shown to preserve accuracy while reducing power consumption by a factor of $5.4\times$ to $26.8\times$, depending on model size, compared to ANN-based implementations.
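To make the multiplication-free principle concrete, the following is a minimal sketch of unipolar stochastic computing, in which a value $x \in [0,1]$ is encoded as a Bernoulli bitstream with $P(\text{bit}=1)=x$, so that the product of two values reduces to a bitwise AND of independent streams followed by a count of ones. This is an illustrative toy, not the paper's implementation; the function names and stream length are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(x, n_bits):
    """Encode x in [0, 1] as a Bernoulli bitstream with P(1) = x."""
    return rng.random(n_bits) < x

def sc_multiply(x, y, n_bits=4096):
    """Approximate x * y with no multiplier: AND two independent
    stochastic streams, then count the ones."""
    sx = to_stream(x, n_bits)
    sy = to_stream(y, n_bits)
    return np.count_nonzero(sx & sy) / n_bits

# Toy attention-style dot product of two (suitably normalized) vectors,
# computed entirely with AND operations and counting.
q = np.array([0.2, 0.7, 0.5])
k = np.array([0.9, 0.4, 0.6])
approx = sum(sc_multiply(qi, ki) for qi, ki in zip(q, k))
print(approx, "vs exact", float(q @ k))
```

The estimate converges to the exact product as the stream length grows, which is the trade-off stochastic computing exploits: longer bitstreams buy accuracy at the cost of latency, while each bit operation remains a single AND gate.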