Abstract: Cold temperatures can cause significant frost damage to fruit crops depending on their resilience, or cold hardiness, which changes throughout the dormancy season. This has led to the development of predictive cold-hardiness models, which help farmers decide when to deploy expensive frost-mitigation measures. Unfortunately, cold-hardiness data for model training is only available for some fruit cultivars due to the need for specialized equipment and expertise. However, farmers often do have years of phenological data (e.g., date of budbreak) that they regularly collect for their crops. In this work, we introduce a new transfer-learning framework, Transfer via Auxiliary Labels (TAL), that allows farmers to leverage this phenological data to produce more accurate cold-hardiness predictions, even when no cold-hardiness data is available for their specific crop. The framework assumes a set of source tasks (cultivars), each with associated primary labels (cold hardiness) and auxiliary labels (phenology), while the target task (new cultivar) is assumed to have only the auxiliary labels. The goal of TAL is to predict primary labels for the target task via transfer from the source tasks. Surprisingly, despite the vast literature on transfer learning, the TAL formulation has not, to our knowledge, been previously addressed. We therefore propose several new TAL approaches based on model selection and averaging that can leverage recent deep multi-task models for cold-hardiness prediction. Our results on real-world cold-hardiness and phenological data for multiple grape cultivars demonstrate that TAL can leverage the phenological data to improve cold-hardiness predictions in the absence of cold-hardiness data.
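To make the TAL setting concrete, the sketch below shows one plausible reading of the model-selection and model-averaging idea: score each source-cultivar model by how well its auxiliary (phenology) head explains the target cultivar's observed phenology, then average the primary (cold-hardiness) predictions of the best-scoring models. The function names, the inverse-error weighting, and the assumed model interface are illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch only; assumes each source model exposes
# model.predict(weather) -> (hardiness_pred, phenology_pred).
import numpy as np

def auxiliary_error(model, target_weather, target_phenology):
    """Score a source-cultivar model by how well its auxiliary (phenology)
    head matches the target cultivar's observed phenology labels."""
    _, phenology_pred = model.predict(target_weather)
    return float(np.mean((phenology_pred - target_phenology) ** 2))

def tal_predict(source_models, target_weather, target_phenology, top_k=3):
    """Predict primary labels (cold hardiness) for a target cultivar that has
    only auxiliary labels, by averaging the source models that best explain
    the target's auxiliary data."""
    errors = np.array([auxiliary_error(m, target_weather, target_phenology)
                       for m in source_models])
    best = np.argsort(errors)[:top_k]           # model selection
    weights = 1.0 / (errors[best] + 1e-8)       # inverse-error weighting (assumed)
    weights /= weights.sum()
    hardiness = np.stack([source_models[i].predict(target_weather)[0]
                          for i in best])
    return np.average(hardiness, axis=0, weights=weights)  # model averaging
```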
Abstract: The search for image compression optimization techniques is a topic of constant interest both in and out of academic circles. One method that shows promise toward future improvements in this field is image colorization, since colorization algorithms can reduce the amount of color data that needs to be stored for an image. Our work focuses on optimizing a color-grid-based approach to fully automated retention of image color information, with particular attention to the convolutional colorization network architecture, for the purpose of image compression. More generally, we use a convolutional neural network for image re-colorization and aim to minimize the amount of color information that is stored while still being able to faithfully re-color images. Our results yield a promising image compression ratio while still allowing for successful image re-colorization, reaching high CSIM values.
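As a rough illustration of how a color-grid approach to compression could work, the sketch below keeps the full-resolution luminance channel plus a coarse grid of chrominance samples at encode time, leaving a colorization CNN to reconstruct full color at decode time. The 16x16 grid size, the YCbCr color space, and the function names are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only, under the assumptions stated above.
import numpy as np

def encode(image_ycbcr: np.ndarray, grid: int = 16):
    """Keep full-resolution luma (Y) and a coarse grid of chroma (Cb, Cr) samples."""
    h, w, _ = image_ycbcr.shape
    luma = image_ycbcr[..., 0]
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    chroma_grid = image_ycbcr[np.ix_(ys, xs)][..., 1:]  # shape (grid, grid, 2)
    return luma, chroma_grid

def decode(luma, chroma_grid, colorization_net):
    """A trained colorization CNN (placeholder here) predicts the full chroma
    channels from the luma image, guided by the stored color grid."""
    return colorization_net(luma, chroma_grid)

# Rough storage comparison for a 512x512 image (ignoring entropy coding):
# full chroma stores 512*512*2 values; a 16x16 grid stores 16*16*2, about 1/1024 as much.
```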