Abstract: Observation of the underlying actors that generate event sequences reveals that these actors often evolve continuously. Most modern methods, however, tend to model such processes through at most piecewise-continuous trajectories. To address this, we view events not as standalone phenomena but as observations of a Gaussian Process, which in turn governs the actor's dynamics. We propose integrating the resulting dynamics, yielding a continuous-trajectory modification of the widely successful Neural ODE model. Gaussian Process theory allows us to quantify the uncertainty in an actor's representation that arises from not observing the actor between events. This estimate motivates a novel, theoretically grounded negative feedback mechanism. Empirical studies indicate that our model with Gaussian Process interpolation and negative feedback achieves state-of-the-art performance, with improvements of up to 20% in AUROC over similar architectures.
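The abstract does not specify implementation details, so the following is only a minimal illustrative sketch (not the authors' code) of the general idea it describes: a Gaussian Process interpolates an actor's latent state between observed events, the posterior variance quantifies the uncertainty accumulated since the last event, and that variance scales a negative-feedback term added to a Neural-ODE-style drift. All function and parameter names (rbf_kernel, gp_posterior, drift, feedback_strength) and the toy drift are hypothetical assumptions.

```python
# Illustrative sketch only: GP interpolation between events plus an
# uncertainty-weighted negative feedback term inside an Euler integrator.
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between two sets of time points."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(t_query, t_events, y_events, noise=1e-4):
    """GP posterior mean and variance of the latent state at t_query,
    conditioned on the states observed at event times."""
    K = rbf_kernel(t_events, t_events) + noise * np.eye(len(t_events))
    k_star = rbf_kernel(t_query, t_events)
    mean = k_star @ np.linalg.solve(K, y_events)
    var = rbf_kernel(t_query, t_query).diagonal() - np.einsum(
        "ij,ij->i", k_star, np.linalg.solve(K, k_star.T).T)
    return mean, np.maximum(var, 0.0)

def drift(h, t):
    """Toy stand-in for a learned Neural ODE drift f_theta(h, t)."""
    return -0.5 * h + np.sin(t)

def integrate(t_events, y_events, t0, t1, h0, dt=0.01, feedback_strength=1.0):
    """Euler integration of dh/dt = f(h, t) - lambda * var(t) * (h - gp_mean(t)).
    The second term is the negative feedback: the larger the GP posterior
    variance (i.e., the longer since the last event), the more strongly the
    state is pulled back toward the interpolated mean."""
    h, t = h0, t0
    while t < t1:
        mean, var = gp_posterior(np.array([t]), t_events, y_events)
        h = h + dt * (drift(h, t) - feedback_strength * var[0] * (h - mean[0]))
        t += dt
    return h

# Toy usage: two observed events, integrate the latent state between them.
t_events = np.array([0.0, 2.0])
y_events = np.array([0.3, 1.1])
print(integrate(t_events, y_events, t0=0.0, t1=2.0, h0=0.3))
```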
Abstract: Given the escalating risks of malicious attacks in the finance sector and the consequential severe damage, a thorough understanding of adversarial strategies and robust defense mechanisms for machine learning models is critical. The threat becomes even more severe as banks increasingly adopt more accurate, but potentially fragile, neural networks. We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that take sequential financial data as input. To achieve this goal, we have designed a competition that allows a realistic and detailed investigation of problems in modern financial transaction data. The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions. Our main contributions are an analysis of the competition dynamics that answers how important it is to conceal a model from malicious users, how long it takes to break it, and what techniques make it more robust, as well as the introduction of additional ways to attack models and to increase their robustness. Our analysis continues with a meta-study of the approaches used and their strength, supported by numerical experiments and accompanying ablation studies. We show that the developed attacks and defenses outperform existing alternatives from the literature while being practical in terms of execution, demonstrating the validity of the competition as a tool for uncovering vulnerabilities of machine learning models and mitigating them in various domains.
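The abstract does not describe any specific attack, so the sketch below is only an illustrative example (not a competition submission) of the kind of black-box perturbation an attacker might apply to a classifier over transaction sequences: greedily substituting a small number of transaction categories to lower the model's fraud score. The toy logistic model, the feature map, and all names (fraud_score, greedy_attack, budget) are hypothetical assumptions chosen to keep the example self-contained.

```python
# Illustrative sketch only: black-box greedy substitution attack on a toy
# sequence classifier over transaction-category ids.
import numpy as np

N_CATEGORIES = 10
rng = np.random.default_rng(0)
weights = rng.normal(size=N_CATEGORIES)  # stand-in for a trained model

def fraud_score(sequence):
    """Toy classifier: logistic score over transaction-category counts."""
    counts = np.bincount(sequence, minlength=N_CATEGORIES)
    return 1.0 / (1.0 + np.exp(-counts @ weights))

def greedy_attack(sequence, budget=3):
    """Greedily replace up to `budget` transactions with the category that
    most reduces the fraud score, querying the model as a black box."""
    adv = list(sequence)
    for _ in range(budget):
        best = (fraud_score(np.array(adv)), None, None)
        for pos in range(len(adv)):
            original = adv[pos]
            for cat in range(N_CATEGORIES):
                adv[pos] = cat
                score = fraud_score(np.array(adv))
                if score < best[0]:
                    best = (score, pos, cat)
            adv[pos] = original
        if best[1] is None:  # no single substitution lowers the score further
            break
        adv[best[1]] = best[2]
    return np.array(adv)

# Toy usage: perturb a sequence of 15 transaction-category ids.
seq = rng.integers(0, N_CATEGORIES, size=15)
print("before:", fraud_score(seq), "after:", fraud_score(greedy_attack(seq)))
```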