Abstract: Matching the human ability to rapidly learn new knowledge while retaining old memories poses a significant challenge for current deep learning models. To address this challenge, we draw inspiration from human memory and learning mechanisms and propose the Self-Reflective Complementary Incremental System (SR-CIS). SR-CIS comprises a deconstructed Complementary Inference Module (CIM) and a Complementary Memory Module (CMM). Within the CIM, a small model performs fast inference and a large model performs slow deliberation, and the Confidence-Aware Online Anomaly Detection (CA-OAD) mechanism enables them to collaborate efficiently. The CMM consists of a task-specific Short-Term Memory (STM) region and a universal Long-Term Memory (LTM) region. By setting task-specific Low-Rank Adaptation (LoRA) modules with corresponding prototype weights and biases, it instantiates external storage for parameter and representation memory, thereby deconstructing the memory module from the inference module. By storing textual descriptions of images during training, replaying them through the Scenario Replay Module (SRM) after training, and periodically restructuring short-term memory into long-term memory, SR-CIS achieves stable incremental memory with limited storage requirements. Balancing model plasticity and memory stability under constraints of limited storage and low data resources, SR-CIS surpasses existing competitive baselines on multiple standard and few-shot incremental learning benchmarks.
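As a rough illustration of how task-specific parameter memory could be instantiated, the sketch below attaches a LoRA-style low-rank adapter to a frozen linear layer. The class name, rank, and scaling here are illustrative assumptions, not the SR-CIS implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # the shared backbone stays frozen
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base output + task-specific low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# One adapter per task can be kept as external "parameter memory",
# e.g. adapters = {task_id: LoRALinear(shared_layer) for task_id in tasks}.
layer = nn.Linear(128, 64)
adapter = LoRALinear(layer)
y = adapter(torch.randn(4, 128))
```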
Abstract: A technological revolution is occurring in the field of robotics, driven by data-driven deep learning technology. However, building a dataset for each local robot is laborious. Meanwhile, data islands between local robots prevent data from being utilized collaboratively. To address this issue, this work presents Peer-Assisted Robotic Learning (PARL) for robotics, which is inspired by peer-assisted learning in cognitive psychology and pedagogy. PARL implements data collaboration within the framework of cloud robotic systems. Robots share both data and models with the cloud after local semantic computing and training. The cloud converges the data and performs augmentation, integration, and transfer. Finally, models trained on this larger shared dataset in the cloud are fine-tuned for the local robots. Furthermore, we propose the DAT Network (Data Augmentation and Transferring Network) to implement the data processing in PARL. The DAT Network can augment data from multiple local robots. We conduct experiments on a simplified self-driving task for robots (cars). The DAT Network achieves a significant improvement in augmentation for self-driving scenarios. The self-driving experimental results also demonstrate that PARL improves learning performance through data collaboration among local robots.
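The following self-contained sketch only mirrors the PARL loop described above (local training, sharing with the cloud, augmentation and integration, transfer back). All class and method names are hypothetical placeholders, and the "augmentation" step is a trivial stand-in for the DAT Network.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LocalRobot:
    name: str
    data: List[str] = field(default_factory=list)        # locally collected samples
    model: str = "untrained"

    def local_train(self) -> None:
        # placeholder for training on local data only
        self.model = f"local_model({self.name}, n={len(self.data)})"

@dataclass
class Cloud:
    pool: List[str] = field(default_factory=list)

    def integrate(self, batches: List[List[str]]) -> None:
        for batch in batches:                             # converge data from all robots
            self.pool.extend(batch)

    def augment(self) -> None:
        # trivial stand-in for DAT-Network-style augmentation
        self.pool += [sample + "_aug" for sample in self.pool]

    def fine_tune_for(self, robot: LocalRobot) -> str:
        return f"shared_model({robot.name}, n={len(self.pool)})"

def parl_round(robots: List[LocalRobot], cloud: Cloud) -> None:
    for r in robots:
        r.local_train()                                   # 1. train locally
    cloud.integrate([r.data for r in robots])             # 2. share with the cloud
    cloud.augment()                                       # 3. augment and integrate
    for r in robots:
        r.model = cloud.fine_tune_for(r)                  # 4. transfer back to local robots

robots = [LocalRobot("car_a", ["img_1", "img_2"]), LocalRobot("car_b", ["img_3"])]
parl_round(robots, Cloud())
```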
Abstract: This paper presents an effective synchronization clustering method based on a linearized version of the Vicsek model. The method is instantiated as an Effective Synchronization Clustering algorithm (ESynC), an Improved version of the ESynC algorithm (IESynC), a Shrinking Synchronization Clustering algorithm based on another linear Vicsek model (SSynC), and an effective Multi-level Synchronization Clustering algorithm (MSynC). After some analysis and comparisons, we find that the ESynC algorithm, based on the linearized version of the Vicsek model, achieves a better synchronization effect than the SynC algorithm, which is based on an extensive Kuramoto model, and than a similar synchronization clustering algorithm based on the original Vicsek model. In simulation experiments on several artificial data sets, we observe that the ESynC, IESynC, and SSynC algorithms achieve a better synchronization effect while requiring fewer iterations and less time than the SynC algorithm. In some simulations, we also observe that the IESynC and SSynC algorithms achieve some improvement in time cost over the ESynC algorithm. Finally, we discuss some research directions for extending this method.
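A minimal sketch of the core idea, assuming a linearized Vicsek-style update in which every point repeatedly moves to the mean of its eps-neighbors (itself included) until the layout stabilizes; points that synchronize to the same location form a cluster. The parameter names, convergence test, and final grouping step are our assumptions rather than the paper's specification.

```python
import numpy as np

def esync_sketch(X, eps=1.0, max_iter=50, tol=1e-6):
    P = X.astype(float).copy()
    for _ in range(max_iter):
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)   # pairwise distances
        W = (D <= eps).astype(float)                                # eps-neighborhoods, self included
        new_P = W @ P / W.sum(axis=1, keepdims=True)                # linearized Vicsek step: move to neighborhood mean
        moved = np.max(np.linalg.norm(new_P - P, axis=1))
        P = new_P
        if moved < tol:                                             # stop once points no longer move
            break
    # points that synchronized to (nearly) the same location share a cluster label
    _, labels = np.unique(np.round(P, 4), axis=0, return_inverse=True)
    return labels, P

# toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(6.0, 0.5, (30, 2))])
labels, _ = esync_sketch(X, eps=2.0)
```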
Abstract: This paper presents a Fast Synchronization Clustering algorithm (FSynC), which is an improved version of the SynC algorithm. To decrease the time complexity of the original SynC algorithm, we combine a grid cell partitioning method with a Red-Black tree to construct the near-neighbor point set of every point. In simulation experiments on some artificial data sets and several real data sets, we observe that the FSynC algorithm often takes less time than the SynC algorithm for many kinds of data sets. Finally, we discuss some research directions for extending this algorithm.
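The sketch below illustrates only the neighbor-set construction idea: with grid cells of side eps, a point's eps-neighbors can lie only in its own cell or adjacent cells. Where the paper uses a Red-Black tree to index the non-empty cells, a Python dict is substituted here for brevity, which changes constants but not the idea.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def grid_neighbors(X, eps):
    keys = np.floor(X / eps).astype(int)                  # grid cell index of each point
    cells = defaultdict(list)                             # non-empty cells -> point indices
    for i, k in enumerate(map(tuple, keys)):
        cells[k].append(i)
    offsets = list(product((-1, 0, 1), repeat=X.shape[1]))
    neighbors = []
    for i, k in enumerate(map(tuple, keys)):
        # candidate neighbors can only come from the point's own cell or adjacent cells
        cand = [j for off in offsets
                  for j in cells.get(tuple(np.add(k, off)), [])]
        d = np.linalg.norm(X[cand] - X[i], axis=1)
        neighbors.append([c for c, dist in zip(cand, d) if dist <= eps and c != i])
    return neighbors

# toy usage
X = np.random.rand(200, 2)
nbrs = grid_neighbors(X, eps=0.1)
```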