Abstract: This work examines the interconnections between logic, epistemology, and the sciences within the Naturalist tradition. It presents a scheme that connects logic, mathematics, physics, chemistry, biology, and cognition, emphasizing scale-invariant, self-organizing dynamics across organizational tiers of nature. An inherent logic of agency exists in natural processes at various levels, realized through information exchanges, and applies to humans, animals, and artifactual agents. The common human-centric, natural-language-based logic is an example of complex logic evolved by living organisms; it already appears in its simplest form at the level of basal cognition of unicellular organisms. Thus, cognitive logic stems from the evolution of physical, chemical, and biological logic. In a computing-nature framework with self-organizing agency, innovative computational frameworks grounded in morphological/physical/natural computation can be used to explain the genesis of human-centered logic through the steps of naturalized logical processes at lower levels of organization. The Extended Evolutionary Synthesis of living agents is essential for understanding the emergence of human-level logic and the relationship between logic and information processing/computational epistemology. We conclude that more research is needed to elucidate the details of the mechanisms linking natural phenomena with the logic of agency in nature.
Abstract: Modern computational natural philosophy conceptualizes the universe in terms of information and computation, establishing a framework for the study of cognition and intelligence. Despite some critiques, this computational perspective has significantly influenced our understanding of the natural world, leading to the development of AI systems like ChatGPT based on deep neural networks. Advancements in this domain have been facilitated by interdisciplinary research, integrating knowledge from multiple fields to simulate complex systems. Large Language Models (LLMs), such as ChatGPT, demonstrate this approach's capabilities, utilizing reinforcement learning from human feedback (RLHF). Current research initiatives aim to integrate neural networks with symbolic computing, introducing a new generation of hybrid computational models.
Abstract: A recent comprehensive overview of 40 years of research in cognitive architectures (Kotseruba and Tsotsos 2020) evaluates the modelling of core cognitive abilities in humans, but only marginally addresses biologically plausible approaches based on natural computation. This mini-review presents a set of perspectives and approaches that have shaped the development of biologically inspired computational models in the recent past and that can lead to biologically more realistic cognitive architectures. To describe the continuum of natural cognitive architectures, from basal cellular to human-level cognition, we use an evolutionary info-computational framework, in which natural/physical/morphological computation leads to the evolution of increasingly complex cognitive systems. Forty years ago, when the first cognitive architectures were proposed, the understanding of cognition, embodiment and evolution was different. So was the state of the art in information physics, bioinformatics, information chemistry, computational neuroscience, complexity theory, self-organization, the theory of evolution, and theories of information and computation. Novel developments support a constructive interdisciplinary framework for cognitive architectures in the context of computing nature, where interactions between constituents at different levels of organization lead to the complexification of agency and increased cognitive capacities. We identify several important research questions for further investigation that can increase understanding of cognition in nature and inspire new developments in cognitive technologies. Recently, basal cell cognition has attracted considerable interest for its possible applications in medicine, new computing technologies, and micro- and nanorobotics.
Abstract: The development of intelligent autonomous robot technology presupposes its anticipated beneficial effects on individuals and societies. In the case of such a disruptive emergent technology, not only the question of how to build it, but also why to build it and with what consequences, is important. The ethics of intelligent autonomous robotic cars is a good example of research with actionable practical value, where a variety of stakeholders, including the legal system and other societal and governmental actors, as well as companies and businesses, collaborate to bring about a shared view of the ethical and societal aspects of the technology. It can serve as a starting platform for approaches to the development of intelligent autonomous robots in general, considering human-machine interfaces in the different phases of the technology's life cycle: development, implementation, testing, use and disposal. Drawing on our work on the ethics of autonomous intelligent robocars and the existing literature on the ethics of robotics, our contribution consists of a set of values and ethical principles with identified challenges and proposed approaches for meeting them. This may help stakeholders in the field of intelligent autonomous robotics to connect ethical principles with their applications. Our recommended ethical requirements for autonomous cars can be applied to other types of intelligent autonomous robots, with the caveat that social robots require more research regarding interactions with users. We emphasize that existing ethical frameworks need to be applied in a context-sensitive way, through assessments by interdisciplinary, multi-competent teams using multi-criteria analysis. Furthermore, we argue for the need for continuous development of ethical principles, guidelines, and regulations, informed by the progress of technologies and involving relevant stakeholders.
Abstract: At present, artificial intelligence in the form of machine learning is making impressive progress, especially in the field of deep learning (DL) [1]. Deep learning algorithms have from the beginning been inspired by nature, specifically by the human brain, in spite of our incomplete knowledge of brain function. Learning from nature is a two-way process, as discussed in [2][3][4]: computing learns from neuroscience, while neuroscience is quickly adopting information-processing models. The question is what inspiration from computational nature can contribute to deep learning at this stage of development, and to what extent models and experiments in machine learning can motivate, justify and lead research in neuroscience and cognitive science, as well as practical applications of artificial intelligence.
Abstract: This paper addresses the open question 'Which levels of abstraction are appropriate in the synthetic modelling of life and cognition?' within the framework of info-computational constructivism, which treats natural phenomena as computational processes on informational structures. At present we lack a common understanding of the processes of life and cognition in living organisms, including the details of the co-construction of informational structures and computational processes in embodied, embedded cognizing agents, both living and artifactual. Starting from the definition of an agent as an entity capable of acting on its own behalf, as an actor in Hewitt's Actor model of computation, even systems as simple as molecules can be modelled as actors exchanging messages (information). We adopt Kauffman's view of a living agent as something that can reproduce and that undergoes at least one thermodynamic work cycle. This definition of living agents leads to Maturana and Varela's identification of life with cognition. Within the info-computational constructive approach to living beings as cognizing agents, from the simplest to the most complex living systems, mechanisms of cognition can be studied in order to construct synthetic model classes of artifactual cognizing agents at different levels of organization.
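As an illustration of the actor view mentioned above, the following minimal Python sketch (our illustration, not from the paper; all names are hypothetical) models "molecules" as Hewitt-style actors with mailboxes: on receiving a message, an actor updates its local behaviour and may send messages to its acquaintances.

    from collections import deque

    class Actor:
        """A minimal Hewitt-style actor: a name, a mailbox, acquaintances."""
        def __init__(self, name):
            self.name = name
            self.mailbox = deque()
            self.acquaintances = []   # actors this one can send messages to

        def send(self, target, message):
            target.mailbox.append((self, message))

        def receive(self, sender, message):
            # Toy behaviour: report the encounter and propagate a signal,
            # loosely analogous to a molecular interaction cascade.
            print(f"{self.name} got {message!r} from {sender.name}")
            for other in self.acquaintances:
                self.send(other, f"signal-from-{self.name}")

    def run(actors, steps=3):
        # A simple scheduler delivering one pending message per actor per step.
        for _ in range(steps):
            for actor in actors:
                if actor.mailbox:
                    sender, message = actor.mailbox.popleft()
                    actor.receive(sender, message)

    a, b = Actor("molecule-A"), Actor("molecule-B")
    a.acquaintances = [b]
    a.send(a, "photon")   # an external perturbation, modelled as a message
    run([a, b])

The point of the sketch is only that message exchange between entities with local state suffices to describe such interactions, with no built-in notion of mind required at the molecular level.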
Abstract: Nature can be seen as an informational structure with computational dynamics (info-computationalism), where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system, which combines Bateson's and Hewitt's definitions, the argument is advanced for natural computation as a computational model of the dynamics of the physical world, in which information processing is constantly going on at a variety of levels of organization. This setting helps elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, which has special relevance for biology and robotics.
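To make the combined Bateson-Hewitt definition concrete, here is a hedged toy sketch in Python (our illustrative construction, not the paper's formalism): a difference in the environment counts as information only relative to an agent whose state it actually changes.

    class Agent:
        """A minimal info-computational agent with a discrimination threshold."""
        def __init__(self, threshold):
            self.threshold = threshold
            self.state = 0.0

        def register(self, difference):
            # An environmental difference actualizes as information only if
            # this agent can discriminate it, i.e. it makes a difference here.
            if abs(difference) > self.threshold:
                self.state += difference
                return True
            return False

    coarse, fine = Agent(threshold=1.0), Agent(threshold=0.01)
    signal = 0.5                    # a difference in one physical system
    print(coarse.register(signal))  # False: no difference made, no information
    print(fine.register(signal))    # True: the same difference informs this agent

The same physical difference is information for one agent and mere noise for another, which is the agent-dependence of information that the abstract points to.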
Abstract: Agents and agent systems are becoming increasingly important in the development of a variety of fields such as ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems and intelligent robotics. Improving our basic knowledge of agents is therefore essential. We take a systematic approach and present an extended classification of artificial agents, which can be useful for understanding what artificial agents are and what they can become in the future. The aim of this classification is to give insight into what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution.
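As a hedged illustration of how such an extended classification might be encoded (the dimensions and values below are our hypothetical examples, not the classification actually proposed in the paper), agent types can be treated as points in a space of orthogonal dimensions and matched against problem requirements:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Embodiment(Enum):
        SOFTWARE = auto()     # e.g. a web crawler
        ROBOTIC = auto()      # e.g. an autonomous car

    class Autonomy(Enum):
        REACTIVE = auto()         # responds directly to stimuli
        DELIBERATIVE = auto()     # plans over internal models
        LEARNING = auto()         # adapts its own behaviour over time

    class Sociality(Enum):
        SOLITARY = auto()
        MULTI_AGENT = auto()      # communicates and coordinates with others

    @dataclass(frozen=True)
    class AgentClass:
        embodiment: Embodiment
        autonomy: Autonomy
        sociality: Sociality

        def suits(self, problem_needs: dict) -> bool:
            # Match a problem's requirements against this agent class.
            return all(getattr(self, dim) == value
                       for dim, value in problem_needs.items())

    robocar = AgentClass(Embodiment.ROBOTIC, Autonomy.LEARNING,
                         Sociality.MULTI_AGENT)
    print(robocar.suits({"embodiment": Embodiment.ROBOTIC}))  # True

Encoding a classification this way makes the intended use explicit: given a problem's requirements along each dimension, one can check mechanically which kinds of agents are candidates for its solution.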