Machine learning has proven successful for complex learning problems in which the data and parameters are high-dimensional and too complex for first-principles analysis. Some applications of machine learning require human interpretability, not just to understand a particular result (classification, detection, etc.) but also so that humans can take action based on that result. The interpretation of black-box machine learning models has been studied, but recent work has focused on model validation and performance improvement. In this work, an actionable interpretation of black-box machine learning models is presented. The proposed technique focuses on extracting actionable measures that help users make a decision or take an action. Actionable interpretation can be implemented in most traditional black-box machine learning models: it uses the already-trained model, its training data, and data processing techniques to extract actionable items from the model outcome and its time-series inputs. An implementation of actionable interpretation is demonstrated with a use case: predicting dementia-related agitation from the ambient environment. It is shown that actionable items can be extracted, such as a decreasing in-home light level that triggers an agitation episode. In this use case, actionable interpretation can help dementia caregivers intervene and prevent agitation.
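The abstract does not spell out the extraction mechanism, so the following is a minimal sketch of one plausible reading: a counterfactual perturbation of a trained classifier's inputs, where each ambient feature is restored to its typical training-data value and the resulting drop in predicted agitation risk ranks the features as actionable items. The feature names, the `actionable_items` helper, and the synthetic data are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical ambient-environment features; names and perturbation
# scheme are assumptions for illustration only.
FEATURES = ["light_level", "sound_level", "temperature"]

def actionable_items(model, current, training_data, top_k=2):
    """Rank features by how much restoring each one to its typical
    (training-mean) value lowers the predicted agitation probability."""
    base_risk = model.predict_proba(current.reshape(1, -1))[0, 1]
    typical = training_data.mean(axis=0)
    effects = []
    for i, name in enumerate(FEATURES):
        counterfactual = current.copy()
        counterfactual[i] = typical[i]  # undo the deviation in one feature
        risk = model.predict_proba(counterfactual.reshape(1, -1))[0, 1]
        effects.append((name, base_risk - risk))  # positive = risk reduction
    return sorted(effects, key=lambda e: e[1], reverse=True)[:top_k]

# Synthetic stand-in for the use case: low light raises agitation risk.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] < -0.5).astype(int)        # agitation when light_level is low
model = LogisticRegression().fit(X, y)

current = np.array([-2.0, 0.1, 0.0])    # in-home light level has dropped
print(actionable_items(model, current, X))
# e.g. [('light_level', 0.5...), ...] -> raising the lights is the actionable item
```

Under these assumptions, the top-ranked feature plays the role of the abstract's actionable item: a caregiver seeing `light_level` dominate the ranking would raise the in-home lighting to pre-empt the predicted agitation episode.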