Abstract: In recent years, large language models (LLMs) have made significant progress in natural language processing (NLP), with models such as ChatGPT and GPT-4 achieving impressive capabilities across a wide range of linguistic tasks. However, training models at such a scale is challenging, and finding datasets that match the model's scale is often difficult. Fine-tuning, and training smaller models with novel methods, have emerged as promising ways to overcome these challenges. One such model is MiniGPT-4, which achieves vision-language understanding comparable to GPT-4 by leveraging novel pre-trained models and innovative training strategies. However, the model still struggles with some aspects of image understanding, particularly for artistic images. To address these limitations, this paper proposes a novel multimodal model called ArtGPT-4. ArtGPT-4 was trained on image-text pairs on a Tesla A100 GPU in just 2 hours, using only about 200 GB of data. The model can describe images with an artistic flair and generate visual code, including aesthetically pleasing HTML/CSS web pages. Furthermore, the article proposes novel benchmarks for evaluating the performance of vision-language models. Under the proposed evaluation, ArtGPT-4 scored more than 1 point higher than the current \textbf{state-of-the-art} model and was only 0.25 points below human artists on a 6-point scale. Our code and pre-trained model are available at \url{https://huggingface.co/Tyrannosaurus/ArtGPT-4}.
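To make the training setup above concrete, the following is a minimal, hypothetical PyTorch sketch of MiniGPT-4/ArtGPT-4-style fine-tuning on image-text pairs: a frozen vision encoder, a small trainable projection into the language model's embedding space, and dummy tensors standing in for the data. The module names, dimensions, and placeholder loss are illustrative assumptions, not the released implementation at the URL above.
\begin{verbatim}
import torch
import torch.nn as nn

class VisionLanguageBridge(nn.Module):
    def __init__(self, vision_dim=1408, llm_dim=4096):
        super().__init__()
        self.vision_encoder = nn.Identity()               # stands in for a frozen ViT
        self.projection = nn.Linear(vision_dim, llm_dim)  # trainable bridge layer
        self.llm_embed = nn.Identity()                    # stands in for the frozen LLM

    def forward(self, visual_tokens):
        with torch.no_grad():
            feats = self.vision_encoder(visual_tokens)    # frozen visual features
        return self.llm_embed(self.projection(feats))     # projected into LLM space

model = VisionLanguageBridge()
optimizer = torch.optim.AdamW(model.projection.parameters(), lr=1e-4)

# One illustrative step on a fake image-text batch; the real objective is
# autoregressive language modeling over the paired caption, not MSE.
visual_tokens = torch.randn(2, 32, 1408)    # dummy image features
caption_embeds = torch.randn(2, 32, 4096)   # dummy targets in LLM embedding space
loss = nn.functional.mse_loss(model(visual_tokens), caption_embeds)
loss.backward()
optimizer.step()
\end{verbatim}
Only the projection (and, in ArtGPT-4, a few added layers) would be updated in such a scheme, which is what makes short, data-light fine-tuning plausible.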
Abstract: Large deep learning models have shown great potential for delivering exceptional results in various applications. However, the training process can be extremely challenging due to the models' vast parameter counts, often reaching hundreds of billions of parameters. Common distributed training methods, such as data parallelism, tensor parallelism, and pipeline parallelism, demand significant data communication throughout training, leading to prolonged wait times for some machines in physically distant distributed systems. To address this issue, we propose a novel solution called Hulk, which utilizes a modified graph neural network to optimize distributed computing systems. Hulk not only optimizes data communication efficiency between machines located in different countries, or in different regions of the same city, but also computes an optimal parallel deployment of the model: for example, it can place certain layers on a machine in a specific region or route specific parameters of the model to a machine in a particular location. Using Hulk, we improved the time efficiency of training large deep learning models on distributed systems by more than 20\%. Our open-source collection of unlabeled data is available at \url{https://github.com/DLYuanGod/Hulk}.
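The following is a minimal, hypothetical sketch of the core idea: represent the cluster as a graph (nodes are machines, edges carry link costs), run a small graph neural network over it, and read out a score for placing each model layer on each machine. The architecture, feature choices, and dimensions here are illustrative assumptions, not the paper's exact network.
\begin{verbatim}
import torch
import torch.nn as nn

class PlacementGNN(nn.Module):
    def __init__(self, node_feat_dim=8, hidden_dim=32, num_layers_to_place=4):
        super().__init__()
        self.encode = nn.Linear(node_feat_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, num_layers_to_place)

    def forward(self, node_feats, adjacency):
        h = torch.relu(self.encode(node_feats))        # per-machine embedding
        # One round of message passing weighted by link quality between machines.
        h = torch.relu(self.message(adjacency @ h)) + h
        return self.score(h)                           # (machines, model layers)

# Toy cluster: 3 machines; features could encode memory, FLOPs, region, etc.
node_feats = torch.randn(3, 8)
adjacency = torch.tensor([[0.0, 0.8, 0.1],
                          [0.8, 0.0, 0.2],
                          [0.1, 0.2, 0.0]])            # e.g. inverse link latency
scores = PlacementGNN()(node_feats, adjacency)
placement = scores.argmax(dim=0)                       # machine chosen per layer
\end{verbatim}
In practice the scores would be trained against measured communication and compute times, so that the resulting placement minimizes end-to-end step latency rather than being read from an untrained network as in this toy example.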
Abstract: Pre-trained models have been used in many fields in recent years, ranging from natural language understanding to computer vision and natural language generation. Nowadays, the performance of these natural language generation models is heavily dependent on the model's scale and the dataset's size. While larger language models excel in some respects, they cannot learn up-to-date knowledge and are relatively difficult to retrain. In this paper, we propose a new adversarial learning method called Auto-Learning, which can improve the performance of any natural language generation model without the help of additional datasets. Auto-Learning involves two models: $G$ is a text generation model, and $D$ tests whether the data generated by $G$ is legitimate. First, the fine-tuned $D$ model serves as the brain's prior knowledge base for the process. Then the text generated by the $G$ model is fed to $D$ to determine whether the text is legitimate. Finally, $G$ is fine-tuned according to the output of $D$. This adversarial process resembles the brain improving itself through prior knowledge. When the system needs to learn something new, one simply fine-tunes the $D$ model. Our approach applies to autoregressive language modeling for all Transformer-based models. Auto-Learning enables 8 models to achieve stable improvements on 10 natural language processing tasks without any structural changes.
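The following is a minimal, hypothetical sketch of the loop just described: a generator $G$ proposes text, a frozen discriminator $D$ scores whether that text is legitimate, and $G$ is fine-tuned from $D$'s feedback (here via a simple reward-weighted log-likelihood update). The tiny toy models and the exact update rule are illustrative assumptions, not the paper's recipe.
\begin{verbatim}
import torch
import torch.nn as nn

vocab, hidden, seq_len = 100, 32, 8
G = nn.Sequential(nn.Embedding(vocab, hidden),
                  nn.Linear(hidden, vocab))                    # toy generator / LM head
D = nn.Sequential(nn.Embedding(vocab, hidden), nn.Flatten(),
                  nn.Linear(hidden * seq_len, 1))              # toy legitimacy judge
for p in D.parameters():
    p.requires_grad_(False)                                    # D is fixed while G updates
opt = torch.optim.AdamW(G.parameters(), lr=1e-3)

prompt = torch.randint(0, vocab, (1, seq_len))                 # dummy token ids
logits = G(prompt)                                             # (1, seq_len, vocab)
dist = torch.distributions.Categorical(logits=logits)
sample = dist.sample()                                         # text proposed by G
reward = torch.sigmoid(D(sample)).detach()                     # D's legitimacy score
log_prob = dist.log_prob(sample).mean()
loss = -(reward * log_prob).mean()                             # push G toward text D accepts
opt.zero_grad()
loss.backward()
opt.step()
\end{verbatim}
Learning something new would then amount to fine-tuning $D$ on the new knowledge and rerunning this loop, leaving $G$'s architecture untouched, which matches the "no structural change" claim above.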