Abstract: Physical reservoir computing is a computational paradigm that enables temporal pattern recognition to be performed directly in physical matter. By exciting non-linear dynamical systems and linearly classifying their changes in state, we can create highly energy-efficient devices capable of solving machine learning tasks without the need to build a modular system consisting of millions of neurons interconnected by synapses. To act as an effective reservoir, the chosen dynamical system must have three desirable properties: non-linearity, complexity, and fading memory. We present task-agnostic quantitative measures for each of these three requirements and exemplify them for two reservoirs: an echo state network and a simulated magnetic skyrmion-based reservoir. We show that, in general, systems with lower damping reach higher values in all three performance metrics, whilst for input signal strength there is a natural trade-off between the memory capacity and the non-linearity of the reservoir's behaviour. In contrast to typical task-dependent reservoir computing benchmarks, these metrics can be evaluated in parallel from a single input signal, drastically speeding up the parameter search required to design efficient and high-performance reservoirs.
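To make the reservoir computing paradigm summarised above concrete, the following is a minimal, illustrative echo state network sketch in Python: a fixed random recurrent reservoir is driven by an input signal and only a linear readout is trained. All hyper-parameter values (reservoir size, spectral radius, input scaling, leak rate) and the toy target task are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal echo state network sketch (illustrative only; not the paper's exact setup).
# A fixed random recurrent "reservoir" is driven by an input signal and a linear
# readout is trained on the collected reservoir states.
import numpy as np

rng = np.random.default_rng(0)

# Assumed hyper-parameters for illustration.
n_res = 200            # number of reservoir nodes
spectral_radius = 0.9  # scales recurrent weights; < 1 encourages fading memory
input_scale = 0.5      # input signal strength (trades memory for non-linearity)
leak = 0.3             # leak rate, loosely analogous to damping in a physical reservoir

# Fixed random weights: these are never trained.
W_in = input_scale * rng.uniform(-1, 1, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with input sequence u and return the state history."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        pre = W_in @ np.atleast_1d(u_t) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)  # non-linear, leaky state update
        states[t] = x
    return states

# Toy task (assumed): predict a delayed, squared version of a random input,
# which requires both memory (the delay) and non-linearity (the square).
T = 2000
u = rng.uniform(-1, 1, T)
y = np.roll(u, 5) ** 2

X = run_reservoir(u)
washout = 100  # discard the initial transient
X_tr, y_tr = X[washout:], y[washout:]

# Linear readout trained by ridge regression: the only trained component.
ridge = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)

y_pred = X_tr @ W_out
print("readout correlation:", np.corrcoef(y_pred, y_tr)[0, 1])
```

In this picture, the recurrent reservoir stands in for the physical dynamical system (for example, the skyrmion texture), and the quality of the trained linear readout reflects how much non-linearity, complexity, and fading memory the reservoir provides.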