Explainability is a key challenge and a major research theme in AI, central to developing intelligent systems that can work with humans more effectively. A natural approach to building explainable intelligent systems is to employ knowledge representation formalisms that are inherently tailored to expressing human knowledge, e.g., interrogative agendas. In this work, we focus on formal concept analysis (FCA), a standard knowledge representation formalism, to express interrogative agendas, and in particular to categorize objects w.r.t. a given set of features. Several FCA-based algorithms are already in use for standard machine learning tasks such as classification and outlier detection. These algorithms use a single concept lattice for such a task, meaning that the set of features used for the categorization is fixed. However, different sets of features may have different importance in a categorization; we call such a set of features an agenda. In many applications, a correct or good agenda for categorization is not known beforehand. In this paper, we propose a meta-learning algorithm to construct a good interrogative agenda explaining the data. This algorithm calls existing FCA-based classification and outlier detection algorithms iteratively, increasing their accuracy and reducing their sample complexity. Moreover, the proposed method assigns a measure of importance to the different sets of features used in the categorization, hence making the results more explainable.