Explanation in machine learning and related fields such as artificial intelligence aims to make machine learning models and their decisions understandable to humans. Existing work suggests that personalizing explanations might improve understandability. In this work, we derive a conceptualization of personalized explanation by defining and structuring the problem based on prior work on machine learning explanation, on personalization (in machine learning), and on concepts and techniques from other domains such as privacy and knowledge elicitation. We categorize the explainee information used in the personalization process and describe means of collecting this information. We also identify three key explanation properties that are amenable to personalization: complexity, decision information, and presentation. Finally, we extend existing work on explanation by introducing additional desiderata and measures to quantify the quality of personalized explanations.