Abstract: Recent work has shown great progress in integrating spatial conditioning to control large, pre-trained text-to-image diffusion models. Despite these advances, existing methods describe the spatial image content using hand-crafted conditioning inputs, which are either semantically ambiguous (e.g., edges) or require expensive manual annotations (e.g., semantic segmentation). To address these limitations, we propose a new label-free way of conditioning diffusion models to enable fine-grained spatial control. We introduce the concept of neural semantic image synthesis, which uses neural layouts extracted from pre-trained foundation models as conditioning. Neural layouts are advantageous as they provide rich descriptions of the desired image, containing both semantics and detailed geometry of the scene. We experimentally show that images synthesized via neural semantic image synthesis achieve similar or superior pixel-level alignment of semantic classes compared to those created using expensive semantic label maps. At the same time, they capture better semantics, instance separation, and object orientation than other label-free conditioning options, such as edges or depth. Moreover, we show that images generated by neural layout conditioning can effectively augment real data for training various perception tasks.
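As a rough illustration of the conditioning idea above, the sketch below extracts dense features from a stand-in "foundation model" (a random convolution substitutes for a real self-supervised backbone so the snippet runs offline) and projects them into a spatial conditioning map that a diffusion denoiser could consume. All class and variable names here are hypothetical; this is not the paper's implementation.

```python
# Illustrative sketch only: a stand-in "foundation model" produces dense
# features ("neural layouts") that are projected into a spatial conditioning
# map for a diffusion denoiser. Names are hypothetical, not the paper's code.
import torch
import torch.nn as nn

class NeuralLayoutEncoder(nn.Module):
    """Projects dense foundation-model features into a conditioning map."""
    def __init__(self, feat_dim: int, cond_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(feat_dim, cond_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

# Stand-in for a frozen foundation model (e.g. a self-supervised ViT backbone);
# a random patchify-convolution is used so the sketch runs without downloads.
foundation = nn.Conv2d(3, 384, kernel_size=16, stride=16).eval()

image = torch.randn(1, 3, 256, 256)        # reference image to describe
with torch.no_grad():
    neural_layout = foundation(image)      # (1, 384, 16, 16) dense features

encoder = NeuralLayoutEncoder(feat_dim=384, cond_dim=320)
cond = encoder(neural_layout)              # spatial conditioning tensor that a
print(cond.shape)                          # denoiser could ingest, e.g. additively
```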
Abstract: Designing code to be simple yet flexible is a tightrope walk. Additional modules such as optimizers and data sets make a framework useful to a broader audience, but the added complexity quickly becomes a problem. Framework parameters may apply only to some modules but not others, be mutually exclusive, or depend on each other, often in unclear ways. Even so, many frameworks are limited to a few specific use cases. This paper presents the underlying concept of UniNAS, a framework designed to incorporate a variety of Neural Architecture Search approaches. Since these approaches differ in the number of optimizers and networks, hyper-parameter optimization, network designs, candidate operations, and more, a traditional approach cannot solve the task. Instead, every module defines its own hyper-parameters and a local tree structure of module requirements. A configuration file specifies which modules are used, their parameters, and which other modules they use in turn. This concept of argument trees enables combining and reusing modules in complex configurations while avoiding many of the problems mentioned above. Argument trees can also be configured from a graphical user interface, so that designing and changing experiments becomes possible without writing a single line of code. UniNAS is publicly available at https://github.com/cogsys-tuebingen/uninas
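To make the argument-tree idea more concrete, here is a minimal sketch: each module class declares its own hyper-parameters, a nested configuration names the concrete classes and their arguments, and a recursive builder instantiates the tree. The registry, classes, and config keys below are invented for illustration and are not the actual UniNAS API.

```python
# Minimal sketch of an argument tree: modules declare their own parameters,
# a nested config names concrete classes, and a builder assembles the tree.
# Hypothetical names only; not the UniNAS implementation.
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@register
class SGDOptimizer:
    def __init__(self, lr: float = 0.1):
        self.lr = lr

@register
class Cifar10Data:
    def __init__(self, batch_size: int = 96):
        self.batch_size = batch_size

@register
class Trainer:
    def __init__(self, epochs: int, optimizer, data):
        self.epochs, self.optimizer, self.data = epochs, optimizer, data

def build(node: dict):
    """Recursively instantiate a module tree from a nested config node."""
    cls = REGISTRY[node["cls"]]
    kwargs = dict(node.get("args", {}))
    for name, child in node.get("children", {}).items():
        kwargs[name] = build(child)
    return cls(**kwargs)

config = {  # would normally be loaded from a configuration file
    "cls": "Trainer",
    "args": {"epochs": 100},
    "children": {
        "optimizer": {"cls": "SGDOptimizer", "args": {"lr": 0.025}},
        "data": {"cls": "Cifar10Data", "args": {"batch_size": 128}},
    },
}
trainer = build(config)
print(type(trainer.optimizer).__name__, trainer.data.batch_size)
```

Because the builder recurses, any subtree (say, the optimizer) can be swapped or reconfigured by editing only its own node, which is the reuse-in-complex-configurations property the abstract describes.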
Abstract: The automatic design of neural network architectures, Neural Architecture Search (NAS), has gained a lot of attention in recent years, as the resulting networks have repeatedly broken state-of-the-art results in several disciplines. The network search spaces are often finite and designed by hand, such that a fixed and small number of decisions constitutes a specific architecture. Given these circumstances, inter-choice dependencies are likely to exist and to affect the network search, but they are unaccounted for in the popular one-shot methods. We extend Single-Path One-Shot search networks with additional weights that depend on combinations of choices and analyze their effect. Experiments in NAS-Bench-201 and SubImageNet-based search spaces show improved super-network performance in convolution-only settings, and that the overhead is nearly negligible for sequential network designs.
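A toy rendition of "weights that depend on combinations of choices", under the assumption that dependencies between consecutive layers are modeled by an extra learnable scale indexed by the (previous choice, current choice) pair; the classes below are illustrative and not the paper's code.

```python
# Toy sketch: a one-shot super-network where each layer has per-choice ops plus
# extra parameters indexed by the *pair* of (previous choice, current choice),
# so inter-choice dependencies receive their own weights. Illustrative only.
import random
import torch
import torch.nn as nn

class PairwiseChoiceLayer(nn.Module):
    def __init__(self, channels: int, num_choices: int):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in [1, 3, 5][:num_choices]
        )
        # one learnable per-channel scale for every (previous, current) choice pair
        self.pair_scale = nn.Parameter(
            torch.ones(num_choices, num_choices, channels, 1, 1)
        )

    def forward(self, x, prev_choice: int, choice: int):
        out = self.ops[choice](x)
        return out * self.pair_scale[prev_choice, choice]

layers = nn.ModuleList(PairwiseChoiceLayer(16, 3) for _ in range(4))
x, prev = torch.randn(2, 16, 32, 32), 0
for layer in layers:                 # uniform path sampling, as in one-shot NAS
    choice = random.randrange(3)
    x = layer(x, prev, choice)
    prev = choice
print(x.shape)
```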
Abstract: While recent NAS algorithms are thousands of times faster than the pioneering works, it is often overlooked that they use fewer candidate operations, resulting in a significantly smaller search space. We present PR-DARTS, a NAS algorithm that discovers strong network configurations in a much larger search space within a single day. Starting from a small pool of candidate operations, poorly performing candidates are progressively pruned and replaced with better-performing ones. Experiments on CIFAR-10 and CIFAR-100 achieve 2.51% and 15.53% test error, respectively, despite searching in a space where each cell has 150 times as many possible configurations as the DARTS baseline. Code is available at https://github.com/cogsys-tuebingen/prdarts
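The prune-and-replace cycle can be sketched as follows, with placeholder scoring and morphing functions standing in for the actual search; the names are hypothetical and do not mirror the released PR-DARTS code.

```python
# Schematic prune-and-replace loop: keep a small pool of candidate operations,
# drop the worst after each search cycle, and refill the pool with variations
# of the best survivors. Scoring and morphing are placeholders.
import random

def search_and_score(ops):
    """Stand-in for one architecture-search cycle; returns a score per op."""
    return {op: random.random() for op in ops}

def morph(op):
    """Stand-in for deriving a new, related candidate from a good one."""
    return op + "+"

pool = ["skip", "conv_3x3", "pool_3x3"]          # small initial candidate pool
for cycle in range(4):
    scores = search_and_score(pool)
    ranked = sorted(pool, key=scores.get, reverse=True)
    survivors = ranked[:2]                       # prune the worst candidates
    pool = survivors + [morph(op) for op in survivors]   # refill with new ones
    print(f"cycle {cycle}: {pool}")
```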
Abstract: Neural network architectures found by sophisticated search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their designs are often very complex, which impairs execution speed. Additionally, finding models outside of the search space is impossible by design. While our space is still limited, we incorporate expert knowledge that the search could not discover on its own into the economical search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining a baseline-like 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low-parameter regime, achieving less than 5% test error with little regularization and only 236K parameters.
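For reference, the ShuffleNet V2 design principles mentioned above (channel split, a cheap pointwise/depthwise branch, concatenation, channel shuffle) look roughly like the generic PyTorch block below; this is a textbook-style rendition, not the ShuffleNASNets source.

```python
# Generic ShuffleNet V2-style block: split channels, transform one half with a
# cheap pointwise/depthwise/pointwise branch, concatenate, then shuffle channels.
import torch
import torch.nn as nn

def channel_shuffle(x, groups: int = 2):
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class ShuffleV2Block(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)          # channel split: one half is an identity path
        out = torch.cat([a, self.branch(b)], dim=1)
        return channel_shuffle(out)       # mix information between the two halves

block = ShuffleV2Block(32)
print(block(torch.randn(1, 32, 56, 56)).shape)
```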