Differential privacy is among the strongest formal privacy guarantees, allowing the release of useful aggregate information about a sensitive dataset. However, it provides the same level of protection for all elements in the data universe. In this paper, we consider $d_{\mathcal{X}}$-privacy, an instantiation of the privacy notion introduced in \cite{chatzikokolakis2013broadening}, which allows a separate privacy budget to be specified for each pair of elements in the data universe. We describe a systematic procedure to tailor any existing differentially private mechanism into a $d_{\mathcal{X}}$-private variant for the case of linear queries. For the resulting $d_{\mathcal{X}}$-private mechanisms, we provide theoretical guarantees on the trade-off between utility and privacy, and show that they always outperform their \emph{vanilla} counterparts. We demonstrate the effectiveness of our procedure by evaluating the proposed $d_{\mathcal{X}}$-private Laplace mechanism on both synthetic and real datasets, using a set of randomly generated linear queries.
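For concreteness, we recall the standard formulation of the notion from \cite{chatzikokolakis2013broadening}; the symbols $\mathcal{M}$ and $S$ below are our notation, chosen for illustration. A randomized mechanism $\mathcal{M}$ satisfies $d_{\mathcal{X}}$-privacy if, for all $x, x' \in \mathcal{X}$ and every measurable set of outputs $S$,
\[
\Pr[\mathcal{M}(x) \in S] \;\le\; e^{d_{\mathcal{X}}(x, x')} \, \Pr[\mathcal{M}(x') \in S],
\]
where $d_{\mathcal{X}}$ is a metric on $\mathcal{X}$. Standard $\epsilon$-differential privacy is recovered as the special case $d_{\mathcal{X}}(x, x') = \epsilon \cdot d_H(x, x')$, with $d_H$ the Hamming distance, so that a general metric $d_{\mathcal{X}}$ effectively assigns a separate privacy budget to each pair of elements.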