Abstract: This technical report presents Yi-Lightning, our latest flagship large language model (LLM). It achieves exceptional performance, ranking 6th overall on Chatbot Arena, with particularly strong results (2nd to 4th place) in specialized categories including Chinese, Math, Coding, and Hard Prompts. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, featuring advanced expert segmentation and routing mechanisms coupled with optimized KV-caching techniques. Our development process encompasses comprehensive pre-training, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF), where we devise deliberate strategies for multi-stage training, synthetic data construction, and reward modeling. Furthermore, we implement RAISE (Responsible AI Safety Engine), a four-component framework to address safety issues across pre-training, post-training, and serving phases. Empowered by our scalable super-computing infrastructure, all these innovations substantially reduce training, deployment, and inference costs while maintaining high-performance standards. With further evaluations on public academic benchmarks, Yi-Lightning demonstrates competitive performance against top-tier LLMs, while we observe a notable disparity between traditional, static benchmark results and real-world, dynamic human preferences. This observation prompts a critical reassessment of conventional benchmarks' utility in guiding the development of more intelligent and powerful AI systems for practical applications. Yi-Lightning is now available through our developer platform at https://platform.lingyiwanwu.com.
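For readers unfamiliar with MoE routing, the following is a minimal sketch of a standard top-k routed expert layer in PyTorch; it is only illustrative of the general mechanism, not Yi-Lightning's actual expert segmentation, routing, or KV-caching code, and all layer sizes are made-up placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Minimal top-k routed Mixture-of-Experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)            # produces routing logits per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                       # x: (tokens, d_model)
        gate_probs = F.softmax(self.router(x), dim=-1)          # (tokens, n_experts)
        weights, idx = gate_probs.topk(self.top_k, dim=-1)      # keep the k best experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize the kept weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                           # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(TopKMoELayer()(tokens).shape)                             # torch.Size([16, 512])
```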
Abstract: Few-Shot Relation Extraction (FSRE), a subtask of Relation Extraction (RE) that uses only limited training instances, has attracted increasing attention from researchers in Natural Language Processing (NLP) because it can extract textual information in extremely low-resource scenarios. The primary methodologies for FSRE have been fine-tuning or prompt-tuning techniques based on Pre-trained Language Models (PLMs). Recently, the emergence of Large Language Models (LLMs) has prompted numerous researchers to explore FSRE through In-Context Learning (ICL). However, methods based on either traditional RE models or LLMs have substantial limitations: traditional RE models are hampered by a lack of necessary prior knowledge, while LLMs fall short in their task-specific capabilities for RE. To address these shortcomings, we propose a Dual-System Augmented Relation Extractor (DSARE), which synergistically combines traditional RE models with LLMs. Specifically, DSARE innovatively injects the prior knowledge of LLMs into traditional RE models, and conversely enhances LLMs' task-specific aptitude for RE through relation extraction augmentation. Moreover, an Integrated Prediction module jointly considers these two respective predictions and derives the final results. Extensive experiments demonstrate the efficacy of our proposed method.
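To make the LLM side of such a pipeline concrete, below is a minimal sketch of building an in-context-learning prompt for relation extraction from a few labeled demonstrations. The relation label set, prompt wording, and the `query_llm` call mentioned in the comment are assumptions for illustration only, not DSARE's actual prompts or interfaces.

```python
RELATIONS = ["founded_by", "employee_of", "located_in", "no_relation"]  # hypothetical label set

def format_example(sentence, head, tail, relation=None):
    """Render one (sentence, entity pair) instance; the relation is omitted for the query."""
    text = f"Sentence: {sentence}\nHead entity: {head}\nTail entity: {tail}\nRelation:"
    return text + (f" {relation}" if relation else "")

def build_icl_prompt(demonstrations, query):
    """Concatenate a task instruction, few-shot demonstrations, and the unlabeled query."""
    instruction = (
        "Classify the relation between the head and tail entities. "
        f"Choose one of: {', '.join(RELATIONS)}.\n\n"
    )
    demo_block = "\n\n".join(format_example(*d) for d in demonstrations)
    return instruction + demo_block + "\n\n" + format_example(*query)

demos = [
    ("Steve Jobs co-founded Apple in 1976.", "Apple", "Steve Jobs", "founded_by"),
    ("Marie works as an engineer at Siemens.", "Marie", "Siemens", "employee_of"),
]
prompt = build_icl_prompt(demos, ("The Louvre is in Paris.", "The Louvre", "Paris"))
print(prompt)  # the prompt would then be sent to an LLM, e.g. answer = query_llm(prompt)
```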
Abstract: Humans watch more than a billion hours of video per day. Most of this video was edited manually, which is a tedious process. However, AI-enabled video generation and video editing are on the rise. Building on text-to-image models like Stable Diffusion and Imagen, generative AI has improved dramatically on video tasks. But it's hard to evaluate progress in these video tasks because there is no standard benchmark. So, we propose a new dataset for text-guided video editing (TGVE), and we run a competition at CVPR to evaluate models on our TGVE dataset. In this paper we present a retrospective on the competition and describe the winning method. The competition dataset is available at https://sites.google.com/view/loveucvpr23/track4.
Abstract: The Generic Event Boundary Detection (GEBD) task aims to build a model that segments videos into segments by detecting general event boundaries applicable to various classes. In this paper, building on last year's MAE-GEBD method, we improve our model's performance on the GEBD task by adjusting the data-processing strategy and the loss function. We extend the use of pseudo-labels to a larger dataset and conduct extensive experiments. In addition, we apply focal loss to concentrate more on difficult samples, which further improves model performance. Finally, we improve last year's segmentation alignment strategy by dynamically adjusting the alignment method according to the boundary density and duration of the video, so that our model is more flexible and fully applicable in different situations. With our method, we achieve an F1 score of 86.03% on the Kinetics-GEBD test set, a 0.09% improvement over our 2022 Kinetics-GEBD method.
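For reference, focal loss down-weights well-classified (easy) samples so training concentrates on hard ones. Below is a minimal binary focal-loss sketch in PyTorch; the gamma and alpha values are common defaults used for illustration, not necessarily the settings used in this work.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: scales BCE by (1 - p_t)^gamma so easy samples contribute less."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)               # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)   # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()                   # 1 = boundary frame, 0 = non-boundary
print(binary_focal_loss(logits, targets))
```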
Abstract: Generic Event Boundary Detection (GEBD) tasks aim at detecting generic, taxonomy-free event boundaries that segment a whole video into chunks. In this paper, we apply Masked Autoencoders to improve performance on GEBD tasks. Our approach mainly ensembles Masked Autoencoders fine-tuned on the GEBD task, used as self-supervised learners, with other base models. Moreover, we use a semi-supervised pseudo-label method to take full advantage of the abundant unlabeled Kinetics-400 data during training. In addition, we propose a soft-label method to partially balance the positive and negative samples and alleviate the problem of ambiguous labeling in this task. Lastly, a carefully designed segmentation alignment policy refines the boundaries predicted by our models to more accurate locations. With our approach, we achieve an F1 score of 85.94% on the Kinetics-GEBD test set, improving the F1 score by 2.31% over the winner of the 2021 Kinetics-GEBD Challenge. Our code is available at https://github.com/ContentAndMaterialPortrait/MAE-GEBD.
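As an illustration of soft labeling around annotated boundaries, the sketch below spreads each hard boundary annotation into a Gaussian bump over neighboring frames; the Gaussian form and the width parameter are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def soft_boundary_labels(num_frames, boundary_frames, sigma=2.0):
    """Spread each annotated boundary into a Gaussian bump instead of a single hard 1."""
    t = np.arange(num_frames, dtype=np.float32)
    labels = np.zeros(num_frames, dtype=np.float32)
    for b in boundary_frames:
        labels = np.maximum(labels, np.exp(-0.5 * ((t - b) / sigma) ** 2))
    return labels  # values in [0, 1], equal to 1.0 exactly at each annotated boundary

print(np.round(soft_boundary_labels(12, [3, 9]), 2))
```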
Abstract: Wearing a seatbelt properly while driving can reduce serious crash-related injuries and deaths by about half. However, current seatbelt reminder systems have multiple shortcomings: they can be easily fooled by a "Seatbelt Warning Stopper", and they cannot recognize incorrect usage such as sitting in front of an already-buckled seatbelt or wearing the belt under the arm. General seatbelt usage recognition faces many challenges, including the lack of color information in Infrared (IR) cameras, strong distortion caused by wide Field of View (FoV) fisheye lenses, low contrast between the belt and its background, occlusions caused by hands or hair, and image blur. In this paper, we introduce a novel general seatbelt detection and usage recognition framework to address these challenges. Our method consists of three components: a local predictor, a global assembler, and a shape-modeling process. Our approach can be applied to the driver in a Driver Monitoring System (DMS) or to general passengers in an Occupant Monitoring System (OMS) across various camera modalities. Experimental results on both DMS and OMS demonstrate the accuracy and robustness of the proposed approach.
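As a rough illustration of the shape-modeling idea (fitting a smooth curve through locally detected belt points so that noisy or occluded local predictions can be assembled into a globally consistent belt), here is a minimal sketch using a quadratic fit; the fit order and the example detections are assumptions, not the paper's actual shape model.

```python
import numpy as np

def fit_belt_shape(points, degree=2):
    """Fit a low-order polynomial x = f(y) through locally detected belt points."""
    points = np.asarray(points, dtype=np.float32)   # (N, 2) array of (x, y) belt detections
    ys, xs = points[:, 1], points[:, 0]
    coeffs = np.polyfit(ys, xs, degree)             # least-squares fit, tolerant of small gaps
    return np.poly1d(coeffs)

# Local predictor outputs (x, y) belt-pixel candidates; the global curve interpolates across occlusions.
detections = [(100, 40), (112, 80), (128, 120), (150, 200), (163, 240)]
belt_curve = fit_belt_shape(detections)
print(belt_curve(160))                              # estimated belt x-position at an occluded row y = 160
```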
Abstract: Camera calibration plays a critical role in various computer vision tasks such as autonomous driving and augmented reality. Widely used camera calibration tools rely on planar patterns such as a chessboard or an AprilTag board; without clear instructions, a user's calibration expertise significantly affects calibration accuracy and consistency. Furthermore, calibration is a recurring task that must be performed each time the camera is changed or moved. Calibrating large numbers of cameras, such as Driver Monitoring System (DMS) cameras on a production line with millions of vehicles, is also a great burden. To resolve the above issues, we propose a calibration system called Calibration with Pose Guidance that improves calibration accuracy and reduces calibration variance across different users or repeated trials by the same person. Experimental results show that our proposed method achieves more accurate and consistent calibration than traditional calibration tools.
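For context, the conventional planar-pattern procedure that such systems build on looks roughly like the following OpenCV chessboard sketch; the board size and image directory are placeholders, and this sketch omits the pose-guidance step that the paper proposes.

```python
import glob

import cv2
import numpy as np

BOARD = (9, 6)                                                   # inner-corner count of the chessboard (placeholder)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)  # pattern points on the z = 0 plane

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):                     # placeholder directory of captured views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Estimate the intrinsic matrix K and distortion coefficients from all detected views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
```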
Abstract: Indoor localization has many applications, such as commercial Location Based Services (LBS), robotic navigation, and assistive navigation for the blind. This paper formulates indoor localization as a multimedia retrieval problem by modeling visual landmarks with panoramic image features and calculating a user's location via a GPU-accelerated parallel retrieval algorithm. To address the scene-similarity problem, we apply a multi-image retrieval strategy and a 2D aggregation method to estimate the final retrieved location. Experiments on real data from a campus building demonstrate real-time response (14 fps) and robust localization.
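As a simple illustration of the retrieval step, the sketch below scores a multi-image query against a database of landmark descriptors with batched cosine similarity (which runs in parallel on a GPU if the tensors are moved to CUDA) and averages the top matches' 2D positions; the feature dimensions, map coordinates, and aggregation rule are placeholders, not the paper's actual pipeline.

```python
import torch
import torch.nn.functional as F

def localize(query_feats, db_feats, db_locations, top_k=5):
    """Retrieve the top-k most similar landmark images and average their 2D map positions."""
    q = F.normalize(query_feats, dim=-1)             # (M, D) descriptors from M query frames
    db = F.normalize(db_feats, dim=-1)               # (N, D) panoramic landmark descriptors
    sims = q @ db.T                                  # (M, N) cosine similarities, computed in one batched matmul
    best = sims.mean(dim=0).topk(top_k).indices      # aggregate scores over the M query images
    return db_locations[best].mean(dim=0)            # simple 2D aggregation of the matched locations

db_feats = torch.randn(1000, 256)                    # placeholder landmark descriptors
db_locations = torch.rand(1000, 2) * 100             # placeholder (x, y) map coordinates in meters
query = torch.randn(3, 256)                          # multi-image query from the user
print(localize(query, db_feats, db_locations))
```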
Abstract: Both assisted driving and self-driving have attracted a great amount of attention in the last few years. However, the majority of research efforts focus on safe driving; little research has been conducted on in-vehicle climate control or assisted driving based on travelers' personal habits and preferences. In this paper, we propose a novel approach to climate control, driver behavior recognition, and driving recommendation that better fits drivers' preferences in their daily driving. The algorithm consists of three components: (1) an in-vehicle sensing and context-feature enrichment component, built on an Internet of Things (IoT) platform, that collects the environment, vehicle-running, and traffic parameters affecting drivers' behaviors; (2) a non-intrusive intelligent driver behavior and vehicle status detection component, which automatically labels the vehicle's status (windows open, air conditioning on, etc.) by applying feature extraction and machine learning algorithms; and (3) a personalized driver habit learning and preference recommendation component for healthier and more comfortable driving experiences. A prototype using a client-server architecture, with an iOS app and an air-quality monitoring sensor, has been developed to collect heterogeneous data and test our algorithms. Real-world experiments on 11,370 km (320 hours) of driving data from different drivers in multiple cities worldwide demonstrate the effectiveness and accuracy of our approach.
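To illustrate the second component, here is a minimal sketch of a standard classifier that labels vehicle status from sensed features; the feature names, status labels, and random training data are placeholders for illustration, not the paper's real dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder features per time window: [in-cabin PM2.5, temperature, humidity, speed, noise level]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# Placeholder status labels: 0 = windows closed / AC off, 1 = windows open, 2 = AC on
y = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)                                 # learn to label vehicle status from sensor readings
print(clf.predict(X[:5]))                     # inferred status labels for new sensing windows
```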