Abstract: Knitted sensors frequently suffer from inconsistencies due to innate effects such as offset, relaxation, and drift. In combination, these properties make it challenging to reliably map from sensor data to physical actuation. In this paper, we demonstrate a method for counteracting this by applying a minimal artificial neural network (ANN) in combination with straightforward pre-processing. We apply a number of exponential smoothing filters to a re-sampled sensor signal to produce features that preserve different levels of historical sensor data and, in combination, adequately represent the previous sensor actuation state. By training a three-layer ANN with a total of 8 neurons, we significantly improve the mapping between sensor reading and actuation force. Our findings also show that our technique transfers to sensors of reasonably different composition in terms of material and structure, and that it can furthermore be applied to related physical features such as strain.
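The pre-processing step described above can be sketched as follows: several exponential smoothing filters with different smoothing factors are run over the same signal, yielding one feature column per factor. This is a minimal illustration with a synthetic drifting signal; the smoothing factors and the signal are assumptions for demonstration, not values from the paper, and the subsequent three-layer ANN is omitted.

```python
import numpy as np

def ema_features(signal, alphas=(0.5, 0.1, 0.02)):
    """Exponentially smoothed copies of a signal, one column per alpha.

    Smaller alphas retain more history, so the columns together
    approximate the sensor's recent actuation history."""
    feats = np.empty((len(signal), len(alphas)))
    for j, a in enumerate(alphas):
        s = signal[0]  # initialize the filter state with the first sample
        for i, x in enumerate(signal):
            s = a * x + (1 - a) * s
            feats[i, j] = s
    return feats

# Toy sensor signal: an oscillation plus slow drift (hypothetical data)
t = np.linspace(0, 10, 200)
raw = np.sin(t) + 0.05 * t
X = ema_features(raw)
print(X.shape)  # one row per sample, one column per smoothing factor
```

In the paper's setup, feature matrices of this kind would be fed to the small ANN, which learns to compensate offset, relaxation, and drift jointly.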
Abstract: This study presents a novel approach for touch sensing on semi-elastic textile surfaces that does not require placing additional sensors in the sensing area, relying instead on sensors located on the border of the textile. The proposed approach is demonstrated through experiments involving an elastic Jersey fabric and a variety of machine-learning models. The performance of one particular border-based sensor design is evaluated in depth. Using visual markers, the best-performing sensor arrangement predicts a single touch point with a mean squared error of 1.36 mm on an area of 125 mm by 125 mm. We built a textile-only prototype that is able to classify touch at three indent levels (0, 15, and 20 mm) with an accuracy of 82.85%. Our results suggest that this approach has potential applications in wearable technology and smart textiles, making it a promising avenue for further exploration in these fields.
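The border-sensing idea can be illustrated with a simplified regression sketch: border sensors respond to a touch as a function of distance, and a model maps the vector of border readings back to the touch coordinates. Everything here is an assumption for illustration (8 hypothetical border sensors, an invented distance-based response, and plain linear least squares instead of the machine-learning models evaluated in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 sensors on the border of a 125 mm x 125 mm area
sensor_pos = np.array([[x, y] for x in (0.0, 62.5, 125.0)
                       for y in (0.0, 62.5, 125.0) if (x, y) != (62.5, 62.5)])

def readings(points):
    """Assumed sensor model: response decays with distance to the touch."""
    d = np.linalg.norm(points[:, None, :] - sensor_pos[None, :, :], axis=-1)
    return 1.0 / (1.0 + d / 25.0)

# Fit a linear map from border readings to touch coordinates
train_pts = rng.uniform(0, 125, size=(500, 2))
A = np.hstack([readings(train_pts), np.ones((500, 1))])  # bias column
W, *_ = np.linalg.lstsq(A, train_pts, rcond=None)

# Predict unseen touch points and report the mean localization error
test_pts = rng.uniform(0, 125, size=(100, 2))
pred = np.hstack([readings(test_pts), np.ones((100, 1))]) @ W
print(np.mean(np.linalg.norm(pred - test_pts, axis=1)), "mm mean error")
```

The paper's models (and its visual-marker ground truth) are considerably more capable than this linear sketch; the point is only the data flow from border readings to a predicted touch point.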
Abstract: We generate synthetic images with the "Stable Diffusion" image generation model using the WordNet taxonomy and the definitions of the concepts it contains. This synthetic image database can be used as training data for data augmentation in machine learning applications, and it is used to investigate the capabilities of the Stable Diffusion model. Analyses show that Stable Diffusion can produce correct images for a large number of concepts, but that it also produces a large variety of different representations. The results show differences depending on the test concepts considered, as well as problems with very specific concepts. These evaluations were performed using a vision transformer model for image classification.
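The core of such a pipeline is turning a concept name and its gloss into a text prompt. The sketch below shows an assumed prompt template over a tiny hand-picked sample standing in for the WordNet taxonomy; the template wording, the sample concepts, and the commented-out `diffusers` call are illustrative assumptions, not the paper's exact setup.

```python
def build_prompt(lemma, definition):
    """Combine a concept name with its WordNet-style gloss into a prompt."""
    return f"a photo of a {lemma.replace('_', ' ')}, {definition}"

concepts = [  # tiny hypothetical sample in (synset lemma, gloss) form
    ("kit_fox", "small grey fox of southwestern United States"),
    ("grand_piano", "a piano with the strings on a horizontal frame"),
]
prompts = [build_prompt(name, gloss) for name, gloss in concepts]
print(prompts[0])

# With the `diffusers` library, the prompts could then be rendered, e.g.:
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   image = pipe(prompts[0]).images[0]
```

Generated images would then be scored by an image classifier (a vision transformer in the paper) to check whether the intended concept is recognizable.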
Abstract: We built models with Logistic Regression and linear Support Vector Machines on a large dataset consisting of regular news articles and news from satirical websites, and showed that such linear classifiers, trained on a corpus of about 60,000 articles, can achieve a precision of 98.7% and a recall of 95.2% on a random test set of the news. On the other hand, when testing the classifier on publication sources that are completely unknown during training, only an accuracy of 88.2% and an F1-score of 76.3% are achieved. As a further result, we showed that the same algorithm can distinguish between news written by the news agency itself and paid articles from customers, achieving an accuracy of 99%.
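A linear classifier of this kind can be sketched in a few lines: documents are turned into bag-of-words vectors and a logistic regression model is fit by gradient descent. The four-sentence toy corpus and all hyperparameters below are invented for illustration; the paper works with a 60,000-article corpus and also evaluates linear SVMs.

```python
import numpy as np

# Toy corpus (hypothetical): label 1 = satirical, 0 = regular news
docs = [
    ("government announces new budget for infrastructure", 0),
    ("parliament passes law on renewable energy targets", 0),
    ("local man heroically refuses to reply to all", 1),
    ("nation celebrates as area dad finally fixes wifi", 1),
] * 10  # repeat so gradient descent has a few batches' worth of data

vocab = sorted({w for text, _ in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1.0
    return v

X = np.array([vectorize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Plain logistic regression trained by full-batch gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    g = p - y                               # gradient of the log-loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```

The cross-source drop reported in the abstract (88.2% accuracy on unseen publications) corresponds to evaluating such a model on documents whose source-specific vocabulary never appeared in training.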
Abstract: For the web portal "Who is in the News!", which provides statistics about the appearance of persons in written news, we developed an extension that measures the relationship between public persons as a function of time, since such relationships may vary. On a training corpus of English and German news articles, we built a measure by extracting person occurrences from the text via pretrained named entity recognition and then constructing a time series of counts for each person. Pearson correlation over a sliding window is then used to measure the relationship between two persons.
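The final step described above can be sketched directly: given two per-person count series, compute the Pearson correlation within each sliding window. The window length and the daily mention counts below are hypothetical; the named-entity extraction that would produce such series is omitted.

```python
import numpy as np

def sliding_pearson(a, b, window):
    """Pearson correlation of two count series over a sliding window.

    Returns NaN for positions before the first full window and for
    windows where either series is constant (correlation undefined)."""
    out = np.full(len(a), np.nan)
    for i in range(window - 1, len(a)):
        x = a[i - window + 1:i + 1]
        y = b[i - window + 1:i + 1]
        if x.std() == 0 or y.std() == 0:
            continue
        out[i] = np.corrcoef(x, y)[0, 1]
    return out

# Hypothetical daily mention counts for two public persons
p1 = np.array([3, 5, 2, 8, 7, 1, 0, 4, 6, 9], dtype=float)
p2 = np.array([2, 6, 1, 9, 8, 0, 1, 3, 7, 8], dtype=float)
r = sliding_pearson(p1, p2, window=5)
print(np.round(r, 2))
```

A high correlation in a given window indicates that the two persons' news coverage rose and fell together during that period, which is exactly the time-dependent relationship the portal extension visualizes.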