Abstract: We unify k-local Ising optimization and optical KAN function learning on a single photonic platform, establishing a critical convergence point in optical computing that enables interleaved discrete-continuous workflows. We introduce a single spatial light modulator (SLM)-centric primitive that realizes, in one stroke, all-optical k-local Ising interactions and fully optical Kolmogorov-Arnold network (KAN) layers. The central idea is to convert the structural nonlinearity of a nominally linear photonic scatterer into a per-window computational resource by adding one relay pass through the same spatial light modulator. A folded 4f relay reimages the first Fourier plane onto the SLM so that each chosen spin clique or ridge channel occupies a disjoint window with its own second-pass phase patch. Propagation remains linear in the optical field, yet the measured intensity in each window becomes a freely programmable polynomial of the clique sum or projection amplitude. This yields native, per-clique k-local couplings without nonlinear media and, in parallel, the many independent univariate nonlinearities required by KAN layers, all trainable in situ with two-frame (forward and adjoint) physical gradients. We outline implementations on spatial photonic Ising machines, injection-locked VCSEL arrays, and the Microsoft analog optical computer. In all cases the hardware change is one extra lens and a fold (or an on-chip 4f loop), enabling a minimal-overhead, massively parallel route to high-order optical Ising optimization and trainable, all-optical KAN processing.
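A minimal numerical sketch of the per-window idea, assuming each window's detected intensity has already been reduced to a programmable polynomial of its clique's spin sum (the optics, the 4f relay, and the phase patches themselves are not modeled; all sizes, variable names, and coefficients are illustrative assumptions, not the authors' implementation):

# Toy model: each window reads out a programmable polynomial f_w(S_w) of its
# clique's spin sum S_w; summing over windows gives a k-local Ising energy.
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_windows, k = 16, 8, 3

spins = rng.choice([-1, 1], size=n_spins)                 # Ising spins sigma_i
cliques = [rng.choice(n_spins, size=k, replace=False)     # one k-spin clique per window
           for _ in range(n_windows)]
# Per-window polynomial coefficients c_0..c_k, standing in for the freedom
# provided by that window's second-pass phase patch.
coeffs = rng.normal(size=(n_windows, k + 1))

def window_energy(sigma, clique, c):
    s = sigma[clique].sum()                               # clique sum S_w
    return sum(c[p] * s**p for p in range(len(c)))        # programmable polynomial f_w(S_w)

energy = sum(window_energy(spins, cl, c) for cl, c in zip(cliques, coeffs))
print("k-local Ising energy of this spin configuration:", energy)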
Abstract: We explore the potential of spatial-photonic Ising machines (SPIMs) to address computationally intensive Ising problems that employ low-rank and circulant coupling matrices. Our results indicate that the performance of SPIMs is critically affected by the rank and precision of the coupling matrices. By developing and assessing advanced decomposition techniques, we expand the range of problems SPIMs can solve, overcoming the limitations of traditional Mattis-type matrices. Our approach accommodates a diverse array of coupling matrices, including those with inherently low ranks, applicable to complex NP-complete problems. We explore the practical benefits of low-rank approximation in optimization tasks, particularly in financial optimization, to demonstrate the real-world applications of SPIMs. Finally, we evaluate the computational limitations imposed by SPIM hardware precision and suggest strategies to optimize the performance of these systems within these constraints.
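A minimal sketch of the kind of decomposition involved, assuming the standard eigendecomposition route from a dense symmetric coupling matrix to a few Mattis-type (rank-one) terms of the form an SPIM evaluates natively; the matrix size, rank cutoff, and names are illustrative assumptions, not the paper's code:

# Approximate a symmetric coupling matrix J by its r largest-magnitude eigenmodes,
# so the Ising energy becomes a sum of r Mattis-type (rank-one) contributions.
import numpy as np

rng = np.random.default_rng(1)
n, r = 32, 4

A = rng.normal(size=(n, n))
J = (A + A.T) / 2                                    # dense symmetric coupling matrix
np.fill_diagonal(J, 0.0)

vals, vecs = np.linalg.eigh(J)                       # J = V diag(lambda) V^T
idx = np.argsort(np.abs(vals))[::-1][:r]             # keep the r dominant modes
lam, xi = vals[idx], vecs[:, idx]                    # xi[:, m] plays the role of a Mattis pattern

sigma = rng.choice([-1, 1], size=n)
exact = -0.5 * sigma @ J @ sigma
lowrank = -0.5 * sum(lam[m] * (xi[:, m] @ sigma) ** 2 for m in range(r))
print(f"exact energy {exact:.3f}, rank-{r} approximation {lowrank:.3f}")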
Abstract: The classical XY model is a lattice model of statistical mechanics notable for its universality across a rich hierarchy of optical, laser, and condensed-matter systems. We show how to build complex machine-learning structures from nonlinear blocks based on the XY model. The ultimate target is to reproduce deep learning architectures capable of performing, with high quality, the complex tasks usually attributed to them: speech recognition, visual processing, and other demanding classification problems. We develop a robust and transparent approach for constructing such models that is universally applicable (i.e., not tied to any particular physical system) and admits many extensions while preserving the simplicity of the methodology.
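A minimal sketch of one such nonlinear block, assuming a simple gradient relaxation of the XY energy E(theta) = -sum_{i<j} J_ij cos(theta_i - theta_j), with the equilibrium phases serving as the block's nonlinear output; the relaxation scheme, step size, and sizes are illustrative assumptions rather than the authors' construction:

# An XY "block": relax spin angles on given couplings J and return the
# equilibrium phases as the block's nonlinear response to its input.
import numpy as np

def xy_block(J, steps=500, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=J.shape[0])
    for _ in range(steps):
        # dE/dtheta_i = sum_j J_ij sin(theta_i - theta_j)
        grad = (J * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
        theta -= lr * grad
    return theta % (2 * np.pi)

# Small symmetric coupling matrix as the block's input.
J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.8],
              [-0.5, 0.8, 0.0]])
print(xy_block(J))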