Abstract: Going beyond mimicking limited human experiences, recent studies show initial evidence that, like humans, large language models (LLMs) can, in certain circumstances, improve their abilities purely through self-correction, i.e., by correcting previous responses through self-examination. Nevertheless, little is known about how such capabilities arise. In this work, based on a simplified setup akin to an alignment task, we theoretically analyze self-correction from an in-context learning perspective, showing that when LLMs provide relatively accurate self-examinations as rewards, they are capable of refining their responses in-context. Notably, going beyond previous theories on over-simplified linear transformers, our theoretical construction underscores the roles of several key designs of realistic transformers for self-correction: softmax attention, multi-head attention, and the MLP block. We validate these findings extensively on synthetic datasets. Inspired by these findings, we also illustrate novel applications of self-correction, such as defending against LLM jailbreaks, where a simple self-correction step does make a large difference. We believe these findings will inspire further research on understanding, exploiting, and enhancing self-correction for building better foundation models.
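To make the mechanism concrete, below is a minimal Python sketch of the self-correction loop described above: the model first answers, then scores its own answer (the self-examination acting as a reward), and refines the answer in-context when the score is low. The `llm` callable, the prompt templates, and the 0-to-10 scoring scheme are illustrative assumptions for this sketch, not the paper's construction.

```python
import re
from typing import Callable

def self_correct(
    llm: Callable[[str], str],  # any prompt -> completion function (assumed)
    question: str,
    max_rounds: int = 3,
    threshold: int = 8,
) -> str:
    """Iteratively refine an answer, using the model's own critique as reward."""
    response = llm(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        # Self-examination: the model rates its previous response; this score
        # plays the role of the (relatively accurate) self-generated reward.
        critique = llm(
            f"Question: {question}\nAnswer: {response}\n"
            "Rate this answer from 0 to 10 and point out any mistakes."
        )
        match = re.search(r"\d+", critique)
        score = int(match.group()) if match else 0
        if score >= threshold:
            break
        # Refinement: the previous response and critique remain in the context
        # window, so the improvement is purely in-context (no weight updates).
        response = llm(
            f"Question: {question}\nPrevious answer: {response}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return response
```

Any prompt-to-completion function can be plugged in as `llm`, e.g., a thin wrapper around an API client; the key property, matching the in-context-learning view, is that refinement happens entirely through the context rather than through parameter changes.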
Abstract: As a fundamental data format for representing spatial information, the depth map is widely used in signal processing and computer vision. Massive amounts of high-precision depth maps are being produced with the rapid development of equipment such as laser scanners and LiDAR. There is therefore an urgent need for a new compression method with a better compression ratio for high-precision depth maps. Leveraging the widely available deep learning ecosystem, we propose an end-to-end learning-based lossless compression method for high-precision depth maps. The whole pipeline comprises two stages: pre-processing of the depth maps and deep lossless compression of the processed depth maps. The deep lossless compression network consists of two sub-networks: a lossy compression network and a lossless compression network. We leverage the concept of a pseudo-residual to guide the estimation of the residual distribution while avoiding context models. Our end-to-end lossless compression network achieves performance competitive with engineered codecs at low computational cost.
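As a rough illustration of this two-stage design, here is a hedged PyTorch sketch: a lossy network produces a coarse reconstruction, and a second network predicts a per-pixel distribution for the integer residual from that reconstruction, which an entropy coder can then use to store the exact residual losslessly. The module names (`LossyNet`, `ResidualDensityNet`), the discretized-logistic likelihood, and the way the density network is conditioned are assumptions for illustration; the abstract does not specify how the pseudo-residual enters the pipeline.

```python
import torch
import torch.nn as nn

class LossyNet(nn.Module):
    """Stand-in for the lossy compression network: a coarse reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ResidualDensityNet(nn.Module):
    """Predicts per-pixel mean/scale of the residual distribution from the
    lossy reconstruction, so no autoregressive context model is needed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2 channels: mean, log-scale
        )

    def forward(self, x_hat):
        mean, log_scale = self.net(x_hat).chunk(2, dim=1)
        return mean, log_scale

def residual_bits(x, x_hat, mean, log_scale):
    """Expected code length (bits) of the rounded residual under a discretized
    logistic model -- a typical training objective for the density network."""
    r = torch.round(x - x_hat)
    scale = log_scale.exp()
    cdf = lambda v: torch.sigmoid((v - mean) / scale)
    prob = (cdf(r + 0.5) - cdf(r - 0.5)).clamp(min=1e-9)
    return -torch.log2(prob).sum()
```

In this sketch, decoding works by running the same density network on the shared lossy reconstruction and entropy-decoding the residual under the predicted distribution, recovering the depth map exactly; how the actual method conditions on the pseudo-residual may differ.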