Abstract: The coaxial cables commonly used to connect RF coil arrays to the control console of an MRI scanner are susceptible to electromagnetic coupling. As the number of RF channels increases, such coupling can result in severe heating and pose a safety concern. Non-conductive transmission solutions based on fiber-optic cables are a promising alternative, but are limited by the high dynamic range ($>80$~dB) of typical MRI signals. A new digital fiber-optic transmission system based on delta-sigma modulation (DSM) is developed to address this problem. A DSM-based optical link is prototyped using off-the-shelf components and bench-tested at different signal oversampling rates (OSR). An end-to-end dynamic range (DR) of 81~dB, sufficient for typical MRI signals, is obtained over a bandwidth of 200~kHz, corresponding to $OSR=50$. A fully integrated custom fourth-order continuous-time DSM (CT-DSM) is designed in 180~nm CMOS technology to enable transmission of full-bandwidth MRI signals (up to 1~MHz) with adequate DR. Initial electrical test results from this custom chip are also presented.
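The noise-shaping principle behind the link above can be sketched in a few lines. This is a minimal illustration using a first-order discrete-time modulator and a crude moving-average filter as the receiver; the paper's actual design is a fourth-order continuous-time DSM driving an optical transmitter, and the sample counts below are arbitrary illustrative choices.

```python
import numpy as np

def dsm_encode(x):
    """First-order 1-bit delta-sigma modulation of samples x in (-1, 1).

    A feedback loop integrates the error between the input and the 1-bit
    output, pushing quantization noise out of the signal band so that a
    high-dynamic-range signal survives transmission as a 1-bit stream.
    """
    u, q = 0.0, 0.0  # integrator state, previous quantizer output
    bits = np.empty_like(x, dtype=float)
    for i, s in enumerate(x):
        u += s - q                     # integrate input minus feedback
        q = 1.0 if u >= 0.0 else -1.0  # 1-bit quantizer
        bits[i] = q
    return bits

# Oversampled test tone (abstract sample indices; at OSR = 50 and 200 kHz
# bandwidth the real sample rate would be 2 * 200 kHz * 50 = 20 MHz).
x = 0.5 * np.sin(2 * np.pi * np.arange(4096) / 512.0)
bits = dsm_encode(x)

# Receiver side: a moving-average (decimation) filter recovers the signal
# from the 1-bit stream.
recovered = np.convolve(bits, np.ones(64) / 64.0, mode="same")
```

In the actual system the 1-bit stream would modulate the optical carrier, and the console side would use a proper multi-stage decimation filter rather than a single moving average.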
Abstract: When samples have internal structure, we often see a mismatch between the objective optimized during training and the model's goal during inference. For example, in sequence-to-sequence modeling we are interested in high-quality translated sentences, but training typically uses maximum likelihood at the word level. Learning to recognize individual faces from group photos, each captioned with the correct but unordered list of the people in it, is another example of this mismatch between training and inference objectives. In both cases, the natural training-time loss would involve a combinatorial problem -- dynamic-programming-based global sequence alignment and weighted bipartite graph matching, respectively -- but solutions to combinatorial problems are not differentiable with respect to their input parameters, so surrogate differentiable losses are used instead. Here, we show how to perform gradient descent over combinatorial optimization algorithms that involve continuous parameters, for example edge weights, and can be efficiently expressed as integer, linear, or mixed-integer linear programs. We demonstrate the usefulness of gradient descent over combinatorial optimization in sequence-to-sequence modeling, using a differentiable encoder-decoder architecture with softmax or Gumbel-softmax, and in weakly supervised learning, using a convolutional residual feed-forward network for image classification.
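One ingredient named above, the Gumbel-softmax relaxation, can be sketched concisely: adding Gumbel noise to logits and applying a temperature-scaled softmax yields a differentiable approximation of sampling a one-hot categorical vector. This is a generic sketch of the relaxation only, not the paper's encoder-decoder pipeline; the logits and temperature below are arbitrary.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw one Gumbel-softmax (concrete) sample from unnormalized logits.

    As tau -> 0 the sample approaches a one-hot vector; as tau grows it
    approaches the uniform distribution, trading bias for gradient variance.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()  # subtract max for numerical stability
    y = np.exp(z)
    return y / y.sum()
```

Because the output is a smooth function of the logits, gradients can flow through the "sampling" step during training, which is what makes the discrete decoding choices trainable end to end.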
Abstract: Neural Ordinary Differential Equations have recently been proposed as an infinite-depth generalization of residual networks. Neural ODEs provide out-of-the-box invertibility of the mapping realized by the neural network, and can lead to networks that are more efficient in terms of computational time and parameter space. Here, we show that a Neural ODE operating on a space whose dimensionality is one greater than that of the input is a universal approximator for the space of continuous functions, at the cost of losing invertibility. We then turn our focus to invertible mappings, and we prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE operating on a $(2p+1)$-dimensional Euclidean space.
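The augmentation idea above can be illustrated mechanically: a Neural ODE evolves a state under a learned vector field, and lifting a $p$-dimensional input into $p+1$ dimensions (extra coordinate initialized to zero) gives the flow the room needed to realize non-invertible continuous functions. The fixed-step Euler integrator and the tanh vector field below are illustrative stand-ins, not the paper's construction; the weight matrix is arbitrary.

```python
import numpy as np

def odeint_euler(f, z0, t0=0.0, t1=1.0, steps=100):
    """Integrate dz/dt = f(t, z) from t0 to t1 with forward Euler."""
    z = np.asarray(z0, dtype=float)
    h = (t1 - t0) / steps
    for k in range(steps):
        z = z + h * f(t0 + k * h, z)
    return z

def augment(x):
    """Lift a p-dimensional input into R^(p+1) by appending a zero coordinate."""
    return np.concatenate([np.asarray(x, dtype=float), [0.0]])

# Toy vector field standing in for a small neural network (weights arbitrary).
W = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.5],
              [0.2, -0.3, 0.0]])
f = lambda t, z: np.tanh(W @ z)

# The Neural ODE's output is the flow at t = 1, applied to the augmented input.
z1 = odeint_euler(f, augment([1.0, -0.5]))
```

A full implementation would use an adaptive solver and backpropagate through (or adjoint-solve) the integration, but the lifting step itself is exactly this one-line augmentation.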