Abstract: Depressive disorder is a serious health condition that has affected the lives of millions of people around the world. Diagnosis of depression is a challenging practice that relies heavily on subjective assessments and, in many cases, suffers from delayed detection. Electroencephalography (EEG) biomarkers have been suggested and investigated in recent years as a potentially transformative objective alternative. In this article, for the first time, a detailed systematic review is conducted of EEG-based depression diagnosis approaches that employ advanced machine learning techniques and statistical analyses. For this, 938 potentially relevant articles (published since 1985) were initially identified and filtered down to 139 relevant articles following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) scheme. This article compares and discusses the selected articles and categorizes them according to the type of machine learning techniques and statistical analyses used. Algorithms, preprocessing techniques, extracted features, and data acquisition systems are discussed and summarized. This review explains the challenges faced by current algorithms and sheds light on the future direction of the field, outlining the open issues in machine intelligence for EEG-based depression diagnosis that can be addressed in future studies and, possibly, in future wearable technologies.
Abstract: The modelling of memristive devices is an essential part of the development of novel in-memory computing systems. Models are needed to enable accurate and efficient simulation of memristor device characteristics, whether to test the performance of the devices or to assess the feasibility of their use in future neuromorphic and in-memory computing architectures. Accounting for memristor non-idealities is an essential part of any modelling approach. The way memristive devices drift from their initial state, particularly at ambient temperature and in the absence of a stimulating voltage, is of key interest because it dictates their reliability as information storage media, a property that matters for both traditional storage and neuromorphic applications. In this paper, we investigate a generative modelling approach for simulating the resistive drift distribution of memristive devices, conditioned on the delay time and the initial resistance. We introduce a data normalisation scheme and a novel training technique that enable the generative model to be conditioned on these continuous inputs. Owing to its simulation efficiency and differentiability, the proposed generative modelling approach is suited to end-to-end training and device modelling scenarios, including learned data storage applications.
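As a rough illustration of the kind of model this abstract describes, the sketch below shows a generator network conditioned on normalised continuous inputs (initial resistance and delay time) that emits samples of drifted resistance. It is a minimal PyTorch sketch under assumed names, ranges, and a placeholder log-scale normalisation; it is not the paper's architecture or normalisation scheme.

```python
# Minimal sketch (not the paper's model): a conditional generator mapping noise
# plus normalised continuous conditions (initial resistance, delay) to a sample
# of drifted resistance. All constants and names are illustrative assumptions.
import torch
import torch.nn as nn

R_MIN, R_MAX = 1e3, 1e6    # assumed resistance range (ohms)
T_MIN, T_MAX = 1e-3, 1e3   # assumed delay range (seconds)

def normalise(x, lo, hi):
    # log-scale then map to [0, 1]; placeholder for the paper's scheme
    x = torch.log10(x)
    lo, hi = torch.log10(torch.tensor(lo)), torch.log10(torch.tensor(hi))
    return (x - lo) / (hi - lo)

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=8, cond_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # normalised drifted resistance
        )

    def forward(self, noise, cond):
        return self.net(torch.cat([noise, cond], dim=-1))

# usage: draw samples of drifted resistance conditioned on (R0, delay)
gen = ConditionalGenerator()
r0 = torch.full((16, 1), 1.2e4)     # initial resistance, ohms (example value)
delay = torch.full((16, 1), 10.0)   # delay, seconds (example value)
cond = torch.cat([normalise(r0, R_MIN, R_MAX),
                  normalise(delay, T_MIN, T_MAX)], dim=-1)
samples = gen(torch.randn(16, 8), cond)   # normalised drifted-resistance samples
```

Because the generator is an ordinary differentiable network, it can be dropped into an end-to-end training loop, which is the property the abstract highlights.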
Abstract: In the quest for low-power, bio-inspired computation, both memristive- and memcapacitive-based Artificial Neural Networks (ANNs) have been the subject of increasing focus for the hardware implementation of neuromorphic computing. One step further, regenerative capacitive neural networks, which call for the use of adiabatic computing, offer a tantalising route towards even lower energy consumption, especially when combined with `memimpedance' elements. Here, we present an artificial neuron featuring adiabatic synapse capacitors to produce the membrane potentials for the somas of neurons; the latter are implemented via dynamic latched comparators augmented with Resistive Random-Access Memory (RRAM) devices. Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy saving. At 4 synapses/soma we already observe an overall 35% energy reduction. Furthermore, the impact of process and temperature on the 4-bit adiabatic synapse shows a maximum energy variation of 30% at 100 degrees Celsius across the corners, without any loss of functionality. Finally, the efficacy of our adiabatic approach to ANNs is tested for 512 and 1024 synapses/neuron under worst- and best-case synapse loading conditions and variable equalising capacitances, quantifying the expected trade-off between equalisation capacitance and the range of optimal power-clock frequencies versus loading (i.e., the percentage of active synapses).
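To make the adiabatic energy-saving argument concrete, the sketch below compares the textbook dissipation of charging a capacitor abruptly ($\frac{1}{2}CV^2$) against ramped, power-clocked charging ($\frac{RC}{T}CV^2$ for ramp time $T \gg RC$). The component values are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch (not from the paper): dissipation when charging a
# synapse capacitance C to voltage V abruptly versus with a ramped power-clock
# of period T (adiabatic charging). All values below are assumptions.
C = 10e-15   # synapse capacitance, farads
V = 0.8      # supply voltage, volts
R = 10e3     # series resistance of the charging path, ohms
T = 10e-9    # power-clock ramp time, seconds (T >> R*C)

E_abrupt = 0.5 * C * V**2            # energy dissipated per conventional charge event
E_adiabatic = (R * C / T) * C * V**2 # ramped charging, valid when T >> R*C
print(f"saving ~ {100 * (1 - E_adiabatic / E_abrupt):.0f}%")
```

The trade-off quantified in the abstract follows directly from this relation: slower power-clock ramps (larger $T$) reduce dissipation, but the usable frequency range depends on the loading and equalisation capacitance.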
Abstract: Electronic systems are becoming ever more ubiquitous as our world digitises. Simultaneously, even basic components are experiencing a wave of improvements, with new transistors, memristors, voltage/current references, data converters, etc., being designed every year by hundreds of R&D groups worldwide. To date, the workhorse for testing all these designs has been a suite of lab instruments including oscilloscopes and signal generators, to name the most popular. However, as components become more complex and pin counts soar, the need for more parallel and versatile testing tools becomes ever more pressing. In this work, we describe and benchmark an FPGA-based system developed to address this need. This general-purpose testing system features a 64-channel source-meter unit (SMU) and two banks of 32 digital pins for digital I/O. We demonstrate that this bench-top system achieves a $170~pA$ current noise floor, $40~ns$ pulse delivery at $\pm 13.5~V$, and a $12~mA$ maximum current drive per channel. We then showcase the instrument's use in three characteristic measurement tasks: a) current-voltage (IV) characterisation of a diode and a transistor, b) fully parallel read-out of a memristor crossbar array, and c) an integral non-linearity (INL) test on a DAC. This work introduces a down-scaled electronics laboratory packaged in a single instrument, providing a shift towards more affordable, reliable, compact and multi-functional instrumentation for emerging electronic technologies.
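For readers unfamiliar with the last of these tasks, the sketch below shows how an INL figure is typically computed from a measured DAC transfer curve using an endpoint-fit ideal line. It is an illustrative NumPy sketch, not the instrument's firmware or measurement procedure; the synthetic data stand in for voltages that would be read back through the SMU channels.

```python
# Illustrative sketch (not the instrument's firmware): DAC integral
# non-linearity (INL) in LSBs from a measured transfer curve, using an
# endpoint-fit ideal straight line.
import numpy as np

def inl_lsb(measured_v):
    """measured_v[k] = DAC output voltage at input code k."""
    codes = np.arange(len(measured_v))
    # ideal straight line between the first and last measured points
    lsb = (measured_v[-1] - measured_v[0]) / (len(measured_v) - 1)
    ideal = measured_v[0] + codes * lsb
    return (measured_v - ideal) / lsb   # deviation from ideal, in LSBs

# example with a synthetic 8-bit ramp plus a small bow-shaped error
codes = np.arange(256)
v = codes * 10.0 / 255 + 0.002 * np.sin(np.pi * codes / 255)
print("max |INL| =", np.abs(inl_lsb(v)).max(), "LSB")
```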