Counterfactual Explanations are becoming a de facto standard in post-hoc interpretable machine learning. For a given classifier and an instance assigned to an undesired class, a counterfactual explanation is a small perturbation of that instance that changes the classification outcome. This work leverages Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model. This information is then used to build a supervised discretization of the features in the dataset with a tunable granularity. On the discretized dataset, a smaller, and therefore more interpretable, Decision Tree can be trained, which additionally improves the stability and robustness of the baseline Decision Tree. Numerical results on real-world datasets show the effectiveness of the approach in terms of accuracy and sparsity compared to the baseline Decision Tree.
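The following is a minimal, illustrative sketch of the pipeline summarized above, not the paper's exact algorithm: counterfactuals are approximated by a naive per-feature line search against the black-box model, candidate thresholds are taken as midpoints between each instance and its counterfactual, and the granularity parameter `k` (thresholds kept per feature) is a hypothetical choice introduced only for this demo.

```python
# Sketch under assumptions stated above: naive single-feature counterfactual
# search, midpoint thresholds, quantile-based pruning of thresholds, and a
# Decision Tree trained on the resulting discretized features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pre-trained black-box model whose decision boundaries we want to probe.
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def naive_counterfactual(x, model, feature, grid):
    """Smallest single-feature change (over a value grid) that flips the prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    for v in grid:
        x_cf = x.copy()
        x_cf[feature] = v
        if model.predict(x_cf.reshape(1, -1))[0] != base:
            return x_cf
    return None

# Collect per-feature threshold candidates from instance/counterfactual pairs
# (midpoint between the original value and the counterfactual value).
thresholds = {j: [] for j in range(X_tr.shape[1])}
rng = np.random.default_rng(0)
for i in rng.choice(len(X_tr), size=30, replace=False):
    for j in range(X_tr.shape[1]):
        grid = np.linspace(X_tr[:, j].min(), X_tr[:, j].max(), 25)
        x_cf = naive_counterfactual(X_tr[i], black_box, j, grid)
        if x_cf is not None:
            thresholds[j].append((X_tr[i, j] + x_cf[j]) / 2.0)

# Tunable granularity: keep at most k cut points per feature (here via quantiles).
k = 3
cut_points = {
    j: np.quantile(v, np.linspace(0, 1, k + 2)[1:-1]) if v else np.array([])
    for j, v in thresholds.items()
}

def discretize(X, cut_points):
    """Map each feature to the index of its bin induced by the cut points."""
    cols = [np.digitize(X[:, j], cut_points[j]) for j in range(X.shape[1])]
    return np.column_stack(cols)

X_tr_d, X_te_d = discretize(X_tr, cut_points), discretize(X_te, cut_points)

# Compare the baseline tree on raw features with a tree on discretized features.
baseline = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
cf_tree = DecisionTreeClassifier(random_state=0).fit(X_tr_d, y_tr)
print("baseline tree:    acc=%.3f, leaves=%d"
      % (baseline.score(X_te, y_te), baseline.get_n_leaves()))
print("discretized tree: acc=%.3f, leaves=%d"
      % (cf_tree.score(X_te_d, y_te), cf_tree.get_n_leaves()))
```

Because the discretized features can only be split at the counterfactual-derived cut points, the resulting tree tends to have fewer distinct split thresholds, which is the sparsity effect the abstract refers to.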