Abstract: Most current affect scales and sentiment analyses of written text focus on quantifying valence (sentiment), the primary dimension of emotion. However, emotions are broader and more complex than valence, and distinguishing negative emotions of similar valence can be important in contexts such as mental health. This project proposes a semi-supervised machine learning model (DASentimental) to extract depression, anxiety and stress from written text. First, we trained the model to learn how the sequences of emotion words recalled by $N=200$ individuals correlated with their responses to the Depression Anxiety Stress Scale (DASS-21). Within the framework of cognitive network science, we model each list of recalled emotions as a walk over a networked mental representation of semantic memory, with emotions connected according to free associations in people's memory. Among several machine learning approaches tested, we find that a multilayer perceptron neural network trained on word sequences and semantic network distances achieves state-of-the-art, cross-validated predictions for depression ($R = 0.7$), anxiety ($R = 0.44$) and stress ($R = 0.52$). Though limited by sample size, this first-of-its-kind approach enables quantitative exploration of the key semantic dimensions behind DAS levels. We find that semantic distances between recalled emotions and the dyad "sad-happy" are crucial features for estimating depression levels but are less important for anxiety and stress. We also find that the semantic distance of recalls from "fear" can boost the prediction of anxiety, but it becomes redundant once the "sad-happy" dyad is considered. Adopting DASentimental as a semi-supervised learning tool to estimate DAS in text, we apply it to a dataset of 142 suicide notes. We conclude by discussing key directions for future research enabled by artificial intelligence detecting stress, anxiety and depression.
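The dyad-distance features described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the free-association network, its edges and the recall list below are toy assumptions, and distance is taken as unweighted shortest-path length.

```python
from collections import deque

# Toy free-association network (hypothetical edges, for illustration only).
GRAPH = {
    "sad": {"unhappy", "cry", "lonely", "happy"},
    "happy": {"joy", "smile", "sad"},
    "unhappy": {"sad", "lonely"},
    "cry": {"sad", "tears"},
    "lonely": {"sad", "unhappy"},
    "joy": {"happy", "smile"},
    "smile": {"happy", "joy"},
    "tears": {"cry"},
    "fear": {"anxious"},
    "anxious": {"fear"},
}

def network_distance(source, target):
    """BFS shortest-path length between two words; None if disconnected."""
    if source == target:
        return 0
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nbr in GRAPH.get(node, ()):
            if nbr == target:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None

def dyad_features(recalls, dyad=("sad", "happy")):
    """Mean network distance from the recalled words to each dyad anchor."""
    feats = []
    for anchor in dyad:
        ds = [network_distance(w, anchor) for w in recalls]
        ds = [d for d in ds if d is not None]
        feats.append(sum(ds) / len(ds) if ds else float("inf"))
    return feats

print(dyad_features(["cry", "lonely"]))  # → [1.0, 2.0]
```

In the actual model such per-recall distance features would be concatenated with the word-sequence representation and fed to the multilayer perceptron.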
Abstract: Vehicle color recognition plays an important role in intelligent traffic management and criminal investigation assistance. However, existing vehicle color recognition research covers at most 13 colors and achieves low recognition accuracy, which makes it difficult to meet the needs of practical applications. To this end, this paper builds a benchmark dataset (Vehicle Color-24) covering 24 vehicle colors and comprising 10,091 vehicle images taken from 100 hours of urban road surveillance video. In addition, to address the long-tail distribution of the Vehicle Color-24 dataset and the low recognition rates of existing methods, this paper proposes a Smooth Modulated Neural Network with Multi-layer Feature Representation (SMNN-MFR) for 24-class vehicle color recognition. SMNN-MFR consists of four parts: feature extraction, multi-scale feature fusion, proposal generation and smooth modulation. The model is trained and validated on the Vehicle Color-24 benchmark dataset. Comprehensive experiments show that the average recognition accuracy of the algorithm on the 24-class color benchmark dataset is 94.96%, which is 33.47% higher than the Faster RCNN network. In addition, the model's average accuracy when recognizing 8 colors is 97.25%, improving on the detection accuracy of existing algorithms on similar datasets. Visualization and ablation experiments also demonstrate the rationality of our network design and the effectiveness of each module. The code and dataset are published at: https://github.com/mendy-2013.
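Two ideas from this abstract can be sketched in a few lines: reporting an average (macro, per-class mean) recognition accuracy so that rare colors count equally, and weighting the loss by inverse class frequency, one common mitigation for long-tail distributions. Note this is a generic illustration, not the paper's smooth modulation module; the class labels and counts below are hypothetical.

```python
from collections import Counter

def macro_accuracy(y_true, y_pred):
    """Mean of per-class accuracies; robust to long-tail class imbalance."""
    totals, correct = Counter(y_true), Counter()
    for true, pred in zip(y_true, y_pred):
        if true == pred:
            correct[true] += 1
    return sum(correct[c] / totals[c] for c in totals) / len(totals)

def inverse_frequency_weights(labels):
    """Per-class loss weights proportional to 1/frequency, mean-normalized to 1."""
    counts = Counter(labels)
    raw = {c: 1.0 / n for c, n in counts.items()}
    scale = len(raw) / sum(raw.values())
    return {c: w * scale for c, w in raw.items()}

# Hypothetical long-tail example: 8 white vehicles, 2 teal vehicles.
y_true = ["white"] * 8 + ["teal"] * 2
y_pred = ["white"] * 8 + ["white", "teal"]
print(macro_accuracy(y_true, y_pred))          # → 0.75 (micro accuracy would be 0.9)
print(inverse_frequency_weights(y_true))       # rare "teal" gets the larger weight
```

The gap between macro and micro accuracy on this toy example is exactly why long-tail datasets need per-class reporting: a model that over-predicts the head class can still look good under micro accuracy.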