Abstract:Since its launch, ChatGPT has achieved remarkable success as a versatile conversational AI platform, drawing millions of users worldwide and garnering widespread recognition across academic, industrial, and general communities. This paper aims to paint a portrait of early ChatGPT users and understand how they evolved. Specific questions include what topics they are interested in, what their likely careers are, and how these change over time. We conduct a detailed analysis of real-world ChatGPT datasets containing multi-turn conversations between users and ChatGPT. Through a multi-pronged approach, we quantify conversation dynamics by examining the number of turns, apply sentiment analysis to understand variations in user sentiment, and employ Latent Dirichlet Allocation (LDA) to discern the overarching topics within conversations. By understanding shifts in user demographics and interests, we aim to shed light on the changing nature of human-AI interaction and anticipate future trends in user engagement with language models.
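As a concrete illustration of the topic-modeling step, the following is a minimal sketch of fitting LDA over conversation texts with scikit-learn; the function name, corpus handling, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: fit LDA over user-ChatGPT conversations given as
# plain-text strings and return the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def extract_topics(conversations, n_topics=10, n_top_words=8):
    """Fit LDA on a list of conversation strings (hypothetical helper)."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    doc_term = vectorizer.fit_transform(conversations)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    topics = []
    for weights in lda.components_:
        top = weights.argsort()[::-1][:n_top_words]
        topics.append([vocab[i] for i in top])
    return topics
```

Fitting the model separately on conversations from different time windows would expose how the dominant topics shift over time, which is the kind of temporal comparison the abstract describes.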
Abstract:Photovoltaic (PV) modules have recently been employed in photovoltaic visible light communication (PVLC) for simultaneous energy harvesting and visible light communication. A PV-based receiver features a large signal output, easy optical alignment, and self-powered operation. However, PV modules usually suffer from a severe bandwidth limitation when used as passive photodetectors. In this paper, we systematically investigate the internal impedance dynamics of PV modules and how they affect the frequency response characteristics under different illuminances. We propose a simplified yet accurate dynamic AC detection model for PV modules to capture the frequency response characteristics of a PVLC receiver. The model is validated with impedance spectroscopy characterization. Experimental results show that a PV module's internal resistance and capacitance depend on the incident illuminance, which in turn affects the PV module's frequency response. The bandwidth limitation is exacerbated in indoor environments with low illuminance levels because of the increase in the internal resistance of PV modules. For PVLC receivers operating near the open-circuit voltage condition, the RC constant can be reduced by adding moderate local lighting to decrease the internal resistance. For practical implementation, PVLC receivers employ a load for data recovery. We show that adjusting the forward-bias condition can simultaneously reduce the resistance and capacitance values. With the optimization of the equivalent transimpedance, the data rate of a cadmium telluride (CdTe) PV module achieves a 3.8-fold enhancement under 200 lux. We also demonstrate that the BER of a 5-Mbit/s eight-level pulse amplitude modulation (PAM8) signal can be reduced from 9.8×10^-2 to 1.4×10^-3 by maximizing the transimpedance gain-bandwidth product.
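To make the bandwidth argument concrete, the sketch below evaluates the first-order RC relation f_3dB = 1/(2*pi*R*C) that governs an RC-limited receiver, together with a transimpedance gain-bandwidth product of the kind the abstract optimizes; the numerical values are illustrative placeholders, not measurements from the paper.

```python
# Hedged numerical sketch of the RC-limited bandwidth of a PV receiver.
# All component values below are illustrative, not measured data.
import math

def rc_bandwidth_hz(resistance_ohm, capacitance_farad):
    """First-order -3 dB bandwidth of an RC-limited PV receiver."""
    return 1.0 / (2.0 * math.pi * resistance_ohm * capacitance_farad)

def gain_bandwidth_product(transimpedance_ohm, resistance_ohm, capacitance_farad):
    """Transimpedance gain times bandwidth, a possible optimization target."""
    return transimpedance_ohm * rc_bandwidth_hz(resistance_ohm, capacitance_farad)

# Lowering the effective resistance (e.g., by local lighting or forward bias)
# widens the bandwidth for a fixed capacitance:
print(rc_bandwidth_hz(10e3, 100e-9))  # ~159 Hz
print(rc_bandwidth_hz(1e3, 100e-9))   # ~1.59 kHz
```

The two printed cases illustrate the abstract's point that reducing the internal resistance (here by a factor of ten) proportionally raises the RC-limited bandwidth.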
Abstract:Human beings can quickly adapt to environmental changes by leveraging learning experience. However, the poor ability to adapt to dynamic environments remains a major challenge for AI models. To better understand this issue, we study the problem of continual domain adaptation, in which the model is presented with a labeled source domain and a sequence of unlabeled target domains. There are two major obstacles in this problem: domain shifts and catastrophic forgetting. In this work, we propose Gradient Regularized Contrastive Learning to overcome both obstacles. At the core of our method, gradient regularization plays two key roles: (1) it enforces that the gradient of the contrastive loss does not increase the supervised training loss on the source domain, which maintains the discriminative power of the learned features; (2) it regularizes the gradient update on the new domain so that it does not increase the classification loss on the old target domains, which enables the model to adapt to an incoming target domain while preserving its performance on previously observed domains. Hence, our method can jointly learn semantically discriminative and domain-invariant features from the labeled source domain and the unlabeled target domains. Experiments on the Digits, DomainNet, and Office-Caltech benchmarks demonstrate the strong performance of our approach compared to the state-of-the-art.
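Constraints of this kind can be realized with a first-order gradient projection step, sketched below: the component of an update gradient that conflicts with a reference gradient is removed, so the step does not increase the reference loss to first order. This is a standard construction used here only for illustration; the paper's exact regularization may differ, and all names are illustrative.

```python
# Illustrative gradient-projection step for a "do not increase the reference
# loss" constraint; not the authors' released implementation.
import torch

def project_gradient(g, g_ref, eps=1e-12):
    """If g conflicts with g_ref (negative inner product), subtract the
    conflicting component so the update is harmless to the reference loss."""
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - dot / (g_ref.norm() ** 2 + eps) * g_ref
    return g

# Usage sketch: g_contrastive is the flattened gradient of the contrastive loss,
# g_source the flattened gradient of the supervised source-domain loss.
g_contrastive = torch.randn(1000)
g_source = torch.randn(1000)
g_safe = project_gradient(g_contrastive, g_source)
```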
Abstract:Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics. As a result, these models suffer from accuracy decay over long time spans and thus require frequent calibration. In this work, we address this issue by formulating BP estimation as a sequence prediction problem in which both the input and the target are temporal sequences. We propose a novel deep recurrent neural network (RNN) consisting of multilayered Long Short-Term Memory (LSTM) networks, which incorporate (1) a bidirectional structure to access larger-scale context information of the input sequence, and (2) residual connections that allow gradients in the deep RNN to propagate more effectively. The proposed deep RNN model was tested on a static BP dataset, where it achieved root mean square errors (RMSE) of 3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction, respectively, surpassing the accuracy of traditional BP prediction models. On a multi-day BP dataset, the deep RNN achieved RMSEs of 3.84, 5.25, 5.80, and 5.81 mmHg for SBP prediction on the 1st day, 2nd day, 4th day, and 6th month after the 1st day, respectively, and 1.80, 4.78, 5.00, and 5.21 mmHg for the corresponding DBP predictions, outperforming all previous models by a notable margin. The experimental results suggest that modeling the temporal dependencies in BP dynamics significantly improves long-term BP prediction accuracy.
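A minimal sketch of the described architecture, stacked bidirectional LSTM layers with residual connections mapping a physiological input sequence to per-step SBP/DBP outputs, is shown below; layer counts, widths, and I/O dimensions are illustrative assumptions rather than the paper's configuration.

```python
# Hedged PyTorch sketch: residual bidirectional LSTM stack for sequence-to-
# sequence BP prediction. All sizes are illustrative placeholders.
import torch
import torch.nn as nn

class ResidualBiLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim=64, num_layers=3, output_dim=2):
        super().__init__()
        # Project the input so residual additions match the BiLSTM width.
        self.input_proj = nn.Linear(input_dim, 2 * hidden_dim)
        self.layers = nn.ModuleList([
            nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
            for _ in range(num_layers)
        ])
        self.head = nn.Linear(2 * hidden_dim, output_dim)  # e.g., SBP and DBP

    def forward(self, x):                    # x: (batch, time, input_dim)
        h = self.input_proj(x)
        for lstm in self.layers:
            out, _ = lstm(h)
            h = h + out                      # residual connection across layers
        return self.head(h)                  # (batch, time, output_dim)

# Usage sketch: y = ResidualBiLSTM(input_dim=1)(torch.randn(8, 200, 1))
```

The residual additions keep each layer's output at the same width as its input, so gradients can flow through the identity path of the deep stack, which is the motivation the abstract gives for the residual connections.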