Abstract: Generative AI models are capable of performing a wide range of tasks that traditionally require creativity and human understanding. During training they learn patterns from existing data and can subsequently generate new content, such as texts, images, and music, that follows these patterns. Due to their versatility and the generally high quality of their results, they represent, on the one hand, an opportunity for digitalization. On the other hand, their use introduces novel IT security risks that need to be considered in a comprehensive analysis of the threat landscape. In response to this risk potential, companies and authorities should conduct an individual risk analysis before integrating generative AI into their workflows. The same applies to developers and operators, since many risks in the context of generative AI must be addressed at development time or can only be influenced by the operating company. On this basis, existing security measures can be adjusted and additional measures taken.
Abstract: Based on a new coupling approach, we prove that the transition step of the Hamiltonian Monte Carlo (HMC) algorithm is contractive with respect to a carefully designed Kantorovich (L1 Wasserstein) distance. The lower bound for the contraction rate is explicit. Global convexity of the potential is not required, so multimodal target distributions are included. Explicit quantitative bounds on the number of steps required to approximate the stationary distribution up to a given error follow directly from contractivity. These bounds show that HMC can overcome diffusive behaviour if the duration of the Hamiltonian dynamics is adjusted appropriately.
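For context, the following is a minimal sketch of the HMC transition step analysed in the abstract, written as the textbook Metropolis-adjusted leapfrog variant in Python. It is not code from the paper: the names (`hmc_step`, `U`, `grad_U`) and all parameter values are illustrative assumptions. The product `step_size * n_leapfrog` plays the role of the duration of the Hamiltonian dynamics referred to in the last sentence.

```python
import numpy as np

def hmc_step(x, U, grad_U, step_size=0.1, n_leapfrog=10, rng=None):
    """One HMC transition for a target density proportional to exp(-U(x)).

    x          : current position, a 1-D float numpy array
    U, grad_U  : potential energy and its gradient
    step_size  : leapfrog discretisation step
    n_leapfrog : number of leapfrog steps; step_size * n_leapfrog is the
                 duration of the simulated Hamiltonian dynamics
    """
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)          # resample momentum ~ N(0, I)
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * step_size * grad_U(x_new)  # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new -= step_size * grad_U(x_new)
    x_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(x_new)  # final half step for momentum

    # Metropolis correction for the discretisation error:
    # accept with probability min(1, exp(-dH)), where H(x, p) = U(x) + |p|^2 / 2
    dH = (U(x_new) - U(x)) + 0.5 * (p_new @ p_new - p @ p)
    if np.log(rng.uniform()) < -dH:
        return x_new                          # accepted proposal
    return x                                  # rejected: stay at current state

# Example: sample from a 2-D standard normal, where U(x) = |x|^2 / 2
x = np.zeros(2)
samples = []
for _ in range(1000):
    x = hmc_step(x, U=lambda x: 0.5 * x @ x, grad_U=lambda x: x)
    samples.append(x)
```

In this sketch, lengthening the dynamics duration (larger `n_leapfrog` at fixed `step_size`) lets a single transition move the state a macroscopic distance rather than a diffusive increment, which is the mechanism behind the abstract's claim that HMC can overcome diffusive behaviour when the duration is tuned appropriately.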