We motivate and formally define a new task for fine-tuning rule-like generalization in large language models. It has been conjectured that many of the shortcomings of current LLMs stem from a limited ability to generalize. It has also been argued that humans generalize better because they have a tendency to extract rules from complex data. We aim to recreate this tendency toward rule-based generalization. When exposed to tests of analytic intelligence, for example the visual RAVEN IQ test, human problem-solvers identify the relevant objects in the picture and their relevant attributes, and reason by applying rules to these objects and attributes. From the induced rules, they derive a solution to the test. We propose a task that translates this IQ task into language. In this paper, we provide a formal specification of the task and of the generative process of its datasets.
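To make the object-attribute-rule structure concrete, the following is a minimal illustrative sketch of what a language-based RAVEN-style item could look like. The attribute names, rule (a simple size progression), and sentence templates are assumptions for illustration only, not the formal specification or generative process defined in this paper.

```python
import random

# Hypothetical illustration of a language-based RAVEN-style item:
# objects are described by attributes, and a hidden rule (here, a
# progression over the size attribute) links the premises to the answer.
# All names and rules below are illustrative assumptions, not the task
# specification given in the paper.

SIZES = ["small", "medium", "large"]

def make_item(rng: random.Random) -> dict:
    shape = rng.choice(["circle", "square", "triangle"])
    color = rng.choice(["red", "blue", "green"])
    # Rule: the size attribute progresses small -> medium -> large,
    # while shape and color stay constant across panels.
    premises = [
        f"The first panel shows a {SIZES[0]} {color} {shape}.",
        f"The second panel shows a {SIZES[1]} {color} {shape}.",
    ]
    question = "What does the third panel show?"
    answer = f"A {SIZES[2]} {color} {shape}."
    return {"premises": premises, "question": question, "answer": answer}

if __name__ == "__main__":
    item = make_item(random.Random(0))
    print("\n".join(item["premises"]))
    print(item["question"], item["answer"])
```

Solving such an item requires inducing the underlying rule (which attribute changes, and how) rather than pattern-matching surface forms, which is the kind of rule-based generalization the task is meant to probe.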