Privacy-protecting data analysis investigates statistical methods under privacy constraints. This is a rising challenge in modern statistics, since confidentiality guarantees are typically achieved through suitable perturbations of the data, which may entail a loss in the statistical utility of the data. In this paper, we consider privacy-protecting tests for goodness-of-fit in frequency tables, arguably the most common form of data release. Under the popular framework of $(\varepsilon,\delta)$-differential privacy for perturbed data, we introduce a private likelihood-ratio (LR) test for goodness-of-fit and study its large-sample properties, showing the importance of taking the perturbation into account to avoid a loss in the statistical significance of the test. Our main contribution provides a quantitative characterization of the trade-off between confidentiality, measured via the differential privacy parameters $\varepsilon$ and $\delta$, and utility, measured via the power of the test. In particular, we establish a precise Bahadur-Rao type large deviation expansion for the power of the private LR test, which allows us to: i) identify a critical quantity, depending on the sample size and on $(\varepsilon,\delta)$, which determines a loss in the power of the private LR test; ii) quantify the sample cost of $(\varepsilon,\delta)$-differential privacy in the private LR test, namely the additional sample size required to recover the power of the LR test in the absence of perturbation. This result relies on a novel multidimensional large deviation principle for sums of i.i.d. random vectors, which is of independent interest. Our work presents the first rigorous treatment of privacy-protecting LR tests for goodness-of-fit in frequency tables, making use of the power of the test to quantify the trade-off between confidentiality and utility.
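To fix ideas, the following is a minimal sketch of a private LR statistic for multinomial goodness-of-fit. It assumes the perturbation is realized by the Gaussian mechanism applied to the cell counts, one standard way to achieve $(\varepsilon,\delta)$-differential privacy; the paper's exact perturbation scheme may differ, and the function name `private_lr_statistic`, the clipping step, and all parameter choices below are illustrative, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_lr_statistic(counts, p0, eps, delta):
    """Sketch: classical multinomial LR goodness-of-fit statistic,
    evaluated on Gaussian-mechanism-perturbed cell counts.
    Assumption: add/remove-one neighbouring datasets, so the L2
    sensitivity of the count vector is 1."""
    counts = np.asarray(counts, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    # Gaussian mechanism: noise scale calibrated to (eps, delta).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = counts + rng.normal(scale=sigma, size=counts.shape)
    # Clip so the perturbed table yields valid (positive) frequencies.
    noisy = np.clip(noisy, 1e-10, None)
    n = noisy.sum()
    # LR statistic: 2 * sum_k N_k * log(N_k / (n * p0_k)).
    return 2.0 * np.sum(noisy * np.log(noisy / (n * p0)))

# Example: test H0: uniform over 4 cells, sample size 1000.
counts = rng.multinomial(1000, [0.25, 0.25, 0.25, 0.25])
stat = private_lr_statistic(counts, [0.25] * 4, eps=1.0, delta=1e-5)
print(stat)
```

As the abstract emphasizes, comparing this statistic to the usual chi-square critical value would ignore the injected noise; a valid private test must calibrate the rejection threshold to the perturbed statistic, which is where the loss of power quantified by the Bahadur-Rao expansion arises.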