Quantum machine learning (QML) has surged as a prominent area of research, with the objective of going beyond the capabilities of classical machine learning models. A critical aspect of any learning task is data embedding, which directly affects model performance; poorly designed data-embedding strategies can severely limit the success of a learning task. Despite its importance, rigorous analyses of data-embedding effects remain limited, leaving many cases without effective assessment methods. In this work, we introduce a metric for binary classification tasks, the class margin, which merges the concepts of average randomness and classification margin. This metric analytically connects data-induced randomness with classification accuracy for a given data-embedding map. We benchmark a range of data-embedding strategies through the class margin, demonstrating that data-induced randomness imposes a limit on classification performance. We expect this work to provide a new approach to evaluating QML models by their data-embedding processes, addressing gaps left by existing analytical tools.
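
To make the underlying notion concrete, the following is a minimal illustrative sketch, not the paper's class-margin construction: it assumes a simple single-qubit angle embedding and uses the standard classification margin $y \cdot f(x)$ for a binary classifier with an observable-based decision function. The names `angle_embed`, `classifier_output`, and `empirical_margin` are hypothetical and chosen only for this example.

```python
# Illustrative sketch (assumed setup, not the paper's definition of class margin):
# a single-qubit angle embedding and the standard margin y * f(x).
import numpy as np

def angle_embed(x):
    """Embed a scalar feature x as |psi(x)> = RY(x)|0> (hypothetical embedding choice)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def classifier_output(x):
    """f(x) = <psi(x)| Z |psi(x)>, a simple observable-based decision function."""
    psi = angle_embed(x)
    Z = np.diag([1.0, -1.0])
    return float(psi.conj() @ Z @ psi)

def empirical_margin(xs, ys):
    """Smallest signed margin y * f(x) over the dataset; a value <= 0 indicates a misclassification."""
    return min(y * classifier_output(x) for x, y in zip(xs, ys))

if __name__ == "__main__":
    xs = np.array([0.2, 0.4, 2.6, 2.9])   # toy features
    ys = np.array([+1, +1, -1, -1])       # binary labels
    print("empirical margin:", empirical_margin(xs, ys))
```

Intuitively, the more an embedding scrambles the data (higher data-induced randomness), the more the decision function concentrates around zero, shrinking margins of this kind; the paper's class margin formalizes this connection analytically.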