Abstract:This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguistics, and social sciences. Through rigorous testing, o1-preview demonstrated remarkable capabilities, often achieving human-level or superior performance in areas ranging from coding challenges to scientific reasoning and from language processing to creative problem-solving. Key findings include:
- An 83.3% success rate in solving complex competitive programming problems, surpassing many human experts.
- Superior ability in generating coherent and accurate radiology reports, outperforming other evaluated models.
- 100% accuracy in high-school-level mathematical reasoning tasks, providing detailed step-by-step solutions.
- Advanced natural language inference capabilities across general and specialized domains such as medicine.
- Impressive performance in chip design tasks, outperforming specialized models in areas such as EDA script generation and bug analysis.
- Remarkable proficiency in anthropology and geology, demonstrating deep understanding and reasoning in these specialized fields.
- Strong capabilities in quantitative investing, with comprehensive financial knowledge and statistical modeling skills.
- Effective performance in social media analysis, including sentiment analysis and emotion recognition.
The model excelled particularly in tasks requiring intricate reasoning and knowledge integration across various fields. While some limitations were observed, including occasional errors on simpler problems and challenges with certain highly specialized concepts, the overall results indicate significant progress towards artificial general intelligence.
Abstract:The optimizer plays an important role in achieving high efficiency and performance in neural network training, and updating weights based on their gradients is its central operation. Normalization and standardization operations on weights and gradients, such as Weight Standardization (WS), Weight Normalization (WN), and Gradient Normalization (GN), have been shown to accelerate training and improve performance; Gradient Centralization (GC) is another such technique. In this work, we introduce a new optimization technique based on the magnitudes of the entries of the gradient vector, named Adaptive Gradient Regularization (AGR), which normalizes the gradient vector across all dimensions into a coefficient vector and subtracts the product of the gradient and this coefficient vector from the vanilla gradient. AGR can be viewed as an adaptive gradient clipping method. We show that AGR improves the Lipschitzness of the loss function, yielding a more stable training process and better generalization performance. AGR can be embedded into vanilla optimizers such as Adan and AdamW with only three lines of code. Our experiments on image generation, image classification, and language representation show that AGR improves training results.
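As a rough illustration of the update described above, here is a minimal PyTorch sketch of AGR applied to a single gradient tensor. The exact normalization (here, elementwise magnitude divided by the total magnitude) and the helper name `agr_update` are assumptions for illustration, not the paper's definitive implementation:

```python
import torch

def agr_update(grad: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Adaptive gradient regularization on one gradient tensor (illustrative)."""
    # Coefficient vector: each entry's magnitude normalized by the total
    # magnitude across all dimensions (the exact norm is an assumption).
    coeff = grad.abs() / (grad.abs().sum() + eps)
    # Subtract the product of the gradient and its coefficient vector from
    # the vanilla gradient, i.e. grad <- (1 - coeff) * grad.
    return grad - coeff * grad
```

In an optimizer such as AdamW, a transform like this would be applied to each parameter's gradient before the usual moment updates, which is consistent with the claimed three-line integration.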
Abstract:Diffusion models have achieved promising results for Structure-Based Drug Design (SBDD). Nevertheless, high-quality protein subpocket and ligand data are relatively scarce, which hinders the models' generation capabilities. Recently, Direct Preference Optimization (DPO) has emerged as a pivotal tool for the alignment of generative models such as large language models and diffusion models, providing greater flexibility and accuracy by directly aligning model outputs with human preferences. Building on this advancement, we introduce DPO to SBDD in this paper. We tailor diffusion models to pharmaceutical needs by aligning them with elaborately designed chemical score functions. We propose a new structure-based molecular optimization method called DecompDPO, which decomposes the molecule into arms and scaffolds and performs preference optimization at both local substructure and global molecule levels, allowing for more precise control with fine-grained preferences. Notably, DecompDPO can be effectively used for two main purposes: (1) fine-tuning pretrained diffusion models for molecule generation across various protein families, and (2) molecular optimization given a specific protein subpocket after generation. Extensive experiments on the CrossDocked2020 benchmark show that DecompDPO significantly improves model performance in both molecule generation and optimization, with up to 100% Median High Affinity and a 54.9% Success Rate.
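For readers unfamiliar with DPO, the following PyTorch sketch shows the generic preference-optimization objective that methods like DecompDPO build on. It operates on log-probabilities of a preferred ("winning") and dispreferred ("losing") sample under the trained and frozen reference models; the decomposition into arms and scaffolds and the chemical score functions described in the abstract are not modeled here:

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Generic DPO objective on one preference pair (illustrative)."""
    # Implicit reward margin relative to the frozen reference model.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # Maximize the probability that the preferred sample is ranked higher.
    return -F.logsigmoid(beta * margin).mean()
```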
Abstract:Recently, Large Language Models (LLMs) have demonstrated outstanding performance across a wide range of downstream language tasks. Temperature sampling is a commonly used decoding strategy for LLMs' generation process. However, a fixed temperature parameter is used in most cases, which may not always be an optimal choice for balancing generation quality and diversity. In this paper, we propose an effective Entropy-based Dynamic Temperature (EDT) Sampling method to achieve a more balanced performance in terms of both generation quality and diversity by dynamically selecting the temperature parameter. We also report model performance and comprehensive analyses on 4 different generation benchmarks. Our experiments show that EDT significantly outperforms the existing strategies across different tasks.
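The following PyTorch sketch illustrates the general idea of entropy-dependent temperature selection. The mapping used here (linear interpolation between `t_min` and `t_max` by normalized entropy) is an assumption for illustration; the paper's actual schedule and hyperparameters differ:

```python
import torch
import torch.nn.functional as F

def edt_sample(logits, t_min=0.5, t_max=1.5):
    """Sample one token with an entropy-dependent temperature (illustrative)."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    # Normalize entropy by its maximum possible value, log(vocab size), and
    # interpolate a temperature in [t_min, t_max]; EDT's actual schedule differs.
    max_entropy = torch.log(torch.tensor(float(logits.size(-1))))
    temperature = t_min + (t_max - t_min) * (entropy / max_entropy)
    return torch.multinomial(F.softmax(logits / temperature.unsqueeze(-1), dim=-1), 1)
```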
Abstract:The recent surge of generative AI has been fueled by the generative power of diffusion probabilistic models and the scalable capabilities of large language models. Despite their potential, it remains elusive whether diffusion language models can solve general language tasks comparably to their autoregressive counterparts. This paper demonstrates that scaling diffusion models w.r.t. data, model sizes, and tasks can effectively make them strong language learners. We build competent diffusion language models at scale by first acquiring knowledge from massive data via masked language modeling pretraining, exploiting the intrinsic connection between the two objectives. We then reprogram pretrained masked language models into diffusion language models via diffusive adaptation, wherein task-specific finetuning and instruction finetuning are explored to unlock their versatility in solving general language tasks. Experiments show that scaling diffusion language models consistently improves performance across downstream language tasks. We further discover that instruction finetuning can elicit zero-shot and few-shot in-context learning abilities that help tackle many unseen tasks by following natural language instructions, and show promise in advanced and challenging abilities such as reasoning.
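A minimal sketch of what a diffusive-adaptation training step could look like, under the common discrete-diffusion formulation where the corruption is masking at a randomly sampled ratio. The `model` interface and `mask_id` are assumptions, and the paper's exact noise schedule and loss weighting are not reproduced:

```python
import torch
import torch.nn.functional as F

def diffusive_adaptation_step(model, tokens, mask_id):
    """One training step reprogramming a masked LM into a diffusion LM
    (illustrative): corrupt tokens at a random ratio, learn to denoise."""
    t = torch.rand(tokens.size(0), 1, device=tokens.device)       # diffusion "time"
    corrupt = torch.rand(tokens.shape, device=tokens.device) < t  # mask ratio ~ t
    noisy = tokens.masked_fill(corrupt, mask_id)
    logits = model(noisy)                                         # (batch, len, vocab)
    # Reconstruction loss only on the corrupted positions.
    return F.cross_entropy(logits[corrupt], tokens[corrupt])
```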
Abstract:Benefiting from sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks. However, existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students, which may limit further improvements of NAT models and are rarely discussed in existing research. In this paper, we propose selective knowledge distillation, which uses an NAT evaluator to select NAT-friendly targets that are high-quality and easy to learn. In addition, we introduce a simple yet effective progressive distillation method to boost NAT performance. Experimental results on multiple WMT language directions and several representative NAT models show that our approach can realize a flexible trade-off between the quality and complexity of training data for NAT models, achieving strong performance. Further analysis shows that distilling only 5% of the raw translations can help an NAT outperform its counterpart trained on raw data by about 2.4 BLEU.
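To make the selection step concrete, here is a hedged Python sketch of choosing distillation targets with an evaluator, keeping only a small top-scoring fraction (the 5% figure from the abstract is used as the default). The `nat_evaluator` scoring function is an assumed interface, and the paper's progressive distillation schedule is omitted:

```python
def select_distillation_targets(pairs, nat_evaluator, keep_ratio=0.05):
    """Keep only the distilled targets an NAT evaluator scores as
    high-quality and easy to learn (illustrative)."""
    scored = sorted(pairs, key=lambda p: nat_evaluator(*p), reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]  # the top-scoring fraction of (source, target) pairs
```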
Abstract:While diffusion models have achieved great success in generating continuous signals such as images and audio, it remains elusive for diffusion models to learn discrete sequence data like natural languages. Although recent advances circumvent this challenge of discreteness by embedding discrete tokens as continuous surrogates, they still fall short of satisfactory generation quality. To understand this, we first dive deep into the denoising training protocol of diffusion-based sequence generative models and identify three severe problems, i.e., 1) failing to learn, 2) lack of scalability, and 3) neglecting source conditions. We argue that these problems can be boiled down to the pitfall of discreteness that is not completely eliminated in the embedding space, where the scale of the noise is decisive. In this paper, we introduce DINOISER to facilitate diffusion models for sequence generation by manipulating noises. We propose to adaptively determine the range of sampled noise scales for counter-discreteness training, and to encourage the proposed diffused sequence learner to leverage source conditions with amplified noise scales during inference. Experiments show that DINOISER enables consistent improvement over the baselines of previous diffusion-based sequence generative models on several conditional sequence modeling benchmarks, thanks to both effective training and inference strategies. Analyses further verify that DINOISER can make better use of source conditions to govern its generative process.
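A loose PyTorch sketch of the two noise-scale manipulations named above: a floor on sampled noise scales during training, and amplified scales at inference. Both constants are assumptions for illustration; DINOISER determines the training range adaptively rather than with a fixed floor:

```python
import torch

def sample_training_noise_scale(batch_size, t_floor=0.3, t_max=1.0):
    """Counter-discreteness training (illustrative): sample noise scales only
    above a floor, so the model never trains on near-discrete embeddings."""
    return t_floor + (t_max - t_floor) * torch.rand(batch_size)

def inference_noise_scale(base_scale, amplify=1.5):
    """Amplify the noise scale at inference so the sequence learner relies
    more heavily on source conditions (the factor is an assumption)."""
    return base_scale * amplify
```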
Abstract:Deep linear and nonlinear learning methods have become vital machine learning tools for investigating hierarchical features, such as functional connectivity, in the human brain via functional Magnetic Resonance signals; however, they have four major shortcomings: 1). For deep linear learning methods, although the identified hierarchy of functional connectivity is easily explainable, it is challenging to reveal deeper hierarchical functional connectivity; 2). For deep nonlinear learning methods, although a non-fully connected architecture reduces the complexity of the network structure, making it easier to optimize and less vulnerable to overfitting, the resulting functional connectivity hierarchy is difficult to explain; 3). Importantly, it is challenging for deep linear/nonlinear methods to detect meta- and sub-functional connectivity, even in the shallow layers; 4). Like most conventional deep nonlinear methods, such as Deep Neural Networks, the hyperparameters must be tuned manually, which is time-consuming. Thus, in this work, we propose a novel deep hybrid learning method named SEmi-Nonlinear Deep Efficient Reconstruction (SENDER) to overcome these shortcomings: 1). SENDER utilizes a multiple-layer stacked structure for its linear learning components to detect canonical functional connectivity; 2). SENDER implements a non-fully connected architecture for its nonlinear learning components to reveal meta-functional connectivity through shallow and deeper layers; 3). SENDER incorporates the proposed background components to extract sub-functional connectivity; 4). SENDER adopts a novel rank reduction operator to tune the hyperparameters automatically. To further validate the effectiveness, we compared SENDER with four peer methodologies using real functional Magnetic Resonance Imaging data from the human brain.
Abstract:Deep Neural Networks (DNNs) have become a crucial computational approach to revealing spatial patterns in the human brain; however, there are three major shortcomings in utilizing DNNs to detect spatial patterns in functional Magnetic Resonance signals: 1). The fully connected architecture increases the complexity of the network structure, making it difficult to optimize and vulnerable to overfitting; 2). The requirement for large training samples erases individual/minor patterns during feature extraction; 3). The hyperparameters must be tuned manually, which is time-consuming. Therefore, in this work we propose a novel deep nonlinear matrix factorization named Deep Matrix Approximately Nonlinear Decomposition (DEMAND) to combine the advantages of shallow linear models, e.g., Sparse Dictionary Learning (SDL), and DNNs. At first, the proposed DEMAND employs a non-fully connected and multilayer-stacked architecture that is easier to optimize than canonical DNNs; furthermore, due to this efficient architecture, training DEMAND avoids overfitting and enables the recognition of individual/minor features from a small dataset, such as data from a single individual; finally, a novel rank estimator technique is introduced to tune all hyperparameters of DEMAND automatically. Moreover, DEMAND is validated against four peer methodologies using real functional Magnetic Resonance Imaging data from the human brain. In short, the validation results demonstrate that DEMAND can reveal the reproducible meta-, canonical, and sub-spatial features of the human brain more efficiently than the peer methodologies.
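Since DEMAND and the related methods in this collection rely on automatic rank/hyperparameter estimation, here is an illustrative PyTorch sketch of one common way such an estimator can work: keep the smallest rank that captures a fixed fraction of the singular-value energy. The threshold rule is an assumption, not the paper's actual rank estimator:

```python
import torch

def estimate_rank(X, energy=0.95):
    """Pick the smallest rank capturing `energy` of the singular-value mass
    (an assumed threshold rule, not the paper's actual estimator)."""
    s = torch.linalg.svdvals(X)
    cumulative = torch.cumsum(s, dim=0) / s.sum()
    rank = int(torch.searchsorted(cumulative, torch.tensor(energy)).item()) + 1
    return min(len(s), rank)
```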
Abstract:Matrix decomposition techniques have been a vital computational approach to analyzing the hierarchy of functional connectivity in the human brain. However, these methodologies still have four shortcomings: 1). they require large training samples; 2). their hyperparameters must be tuned manually; 3). they are time-consuming and require extensive computational resources; 4). they cannot guarantee convergence to a unique fixed point. Therefore, we propose a novel deep matrix factorization technique called Deep Linear Matrix Approximate Reconstruction (DELMAR) to bridge these gaps. The advantages of the proposed method are: at first, DELMAR can estimate the important hyperparameters automatically; furthermore, DELMAR employs matrix backpropagation to reduce potential accumulative errors; finally, an orthogonal projection is introduced to update all variables of DELMAR rather than directly calculating inverse matrices. Validation experiments comparing DELMAR with three peer methods on real functional MRI signals of the human brain demonstrate that our proposed method can identify spatial features in fMRI signals faster and more accurately than the peer methods. Moreover, theoretical analyses indicate that DELMAR converges to a unique fixed point and can even approximate the original input as accurately as DNNs.
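To illustrate the kind of inverse-free update alluded to above, here is a minimal PyTorch sketch of a stacked linear factorization in which each layer's factors come from an orthonormal (QR) basis, so projections replace explicit matrix inverses. This is an assumption-laden illustration, not DELMAR's actual matrix backpropagation algorithm, and the fixed `ranks` list stands in for its automatic estimator:

```python
import torch

def factorize_layer(X, rank):
    """One layer of a deep linear factorization: X ~ W @ H (illustrative)."""
    Q, _ = torch.linalg.qr(X)  # orthonormal basis for the column space of X
    W = Q[:, :rank]            # truncated orthogonal factor: W.T @ W = I
    H = W.T @ X                # projection; no matrix inverse is needed
    return W, H

def deep_factorize(X, ranks):
    """Stack layers so that X ~ W1 @ W2 @ ... @ H, factorizing each code."""
    Ws, H = [], X
    for r in ranks:            # ranks should be non-increasing
        W, H = factorize_layer(H, r)
        Ws.append(W)
    return Ws, H
```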