Abstract: Despite their satisfactory speech recognition capabilities, current speech assistive devices still lack suitable automatic semantic analysis capabilities as well as a useful representation of pragmatic world knowledge. Instead, current technologies require users to learn the keywords necessary to operate a machine effectively. Such a machine-centered approach can be frustrating for users. However, recognizing a basic difference between the semiotics of humans and machines offers a way to overcome this shortcoming: For the machine, the meaning of a (human) utterance is defined by its own scope of actions. Machines thus need to understand neither the meanings of individual words nor the phrasal and sentential semantics that combine individual word meanings with additional implicit world knowledge. For speech assistive devices, learning machine-specific meanings of human utterances by trial and error should be sufficient. Using the trivial example of a cognitive heating device, we show that this process can be formalized, based on dynamic semantics, as the learning of utterance-meaning pairs (UMPs). This is followed by a detailed semiotic contextualization of the previously generated signs.
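To make the idea of trial-and-error UMP learning concrete, the following minimal Python sketch illustrates how a device whose meaning space is its own scope of actions could acquire utterance-meaning pairs for the heating-device example. The action names, the `UMPLearner` class, and the feedback mechanism are illustrative assumptions, not the paper's formalization.

```python
from typing import Callable, Dict, Optional

# Hypothetical action repertoire of the heating device: for the machine,
# the meaning of an utterance is one of these actions (its "scope of actions").
ACTIONS = ("increase_temperature", "decrease_temperature", "hold_temperature")


class UMPLearner:
    """Learns utterance-meaning pairs (UMPs) by trial and error: the meaning
    assigned to an utterance is whichever device action the user confirms."""

    def __init__(self) -> None:
        self.umps: Dict[str, str] = {}  # utterance -> confirmed action name

    def interpret(self, utterance: str) -> Optional[str]:
        """Return the previously learned action for an utterance, if any."""
        return self.umps.get(utterance)

    def learn(self, utterance: str, feedback: Callable[[str], bool]) -> str:
        """Try each available action until the user's feedback confirms one,
        then store the confirmed utterance-meaning pair."""
        for action in ACTIONS:
            if feedback(action):
                self.umps[utterance] = action
                return action
        raise ValueError("no action in the device's scope fits the utterance")


# Usage: the user says "I'm freezing"; the device tries its actions and the
# (here simulated) user confirms the one that matches their intent.
learner = UMPLearner()
learner.learn("I'm freezing", feedback=lambda a: a == "increase_temperature")
assert learner.interpret("I'm freezing") == "increase_temperature"
```

Note that the sketch deliberately contains no lexical or sentential semantics: the utterance is treated as an opaque sign whose meaning is fixed entirely by the confirmed action, mirroring the machine-centered semiotics described above.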