We study the Dictionary Learning (a.k.a.\ Sparse Coding) problem of obtaining a sparse representation of data points by learning \emph{dictionary vectors} in terms of which the data points can be written as sparse linear combinations. We view this problem from a geometric perspective, as that of finding a spanning set of a subspace arrangement, and focus on the case in which the underlying hypergraph of the subspace arrangement is specified. For this Fitted Dictionary Learning problem, we completely characterize the combinatorics of the associated subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, we prove a combinatorial rigidity-type theorem for a class of geometric incidence systems. The theorem characterizes the hypergraphs of subspace arrangements that generically yield (a) at least one dictionary, and (b) a locally unique dictionary (i.e.\ at most a finite number of isolated dictionaries) of the specified size. We are unaware of any prior application of combinatorial rigidity techniques in the setting of Dictionary Learning, or even in machine learning. We also provide a systematic classification of problems related to Dictionary Learning, together with various algorithms, their assumptions, and their performance.
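For concreteness, the following is a minimal sketch of the standard Dictionary Learning formulation referred to above; the symbols $X$, $D$, $\Theta$, and the sparsity level $s$ are illustrative notation rather than the paper's.
\[
  \text{given } X = [x_1, \ldots, x_n] \in \mathbb{R}^{d \times n}, \quad
  \text{find } D = [v_1, \ldots, v_m] \in \mathbb{R}^{d \times m} \ \text{and}\ \Theta = [\theta_1, \ldots, \theta_n] \in \mathbb{R}^{m \times n}
\]
\[
  \text{such that } x_i = D\,\theta_i \quad \text{and} \quad \|\theta_i\|_0 \le s \quad \text{for each } i.
\]
In the Fitted variant considered here, the support of each $\theta_i$ (i.e.\ which dictionary vectors each data point may use, as recorded by the hypergraph of the subspace arrangement) is additionally specified in advance.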