Abstract: The philosophy of language, which has historically been developed through an anthropocentric lens, is now being forced to move towards post-anthropocentrism due to the advent of large language models (LLMs) such as ChatGPT (OpenAI) and Claude (Anthropic), which are considered to possess linguistic abilities comparable to those of humans. Traditionally, LLMs have been explained through distributional semantics as their foundational semantics. However, recent research is exploring alternative foundational semantics beyond distributional semantics. This paper proposes Robert Brandom's inferentialist semantics as a suitable foundational semantics for LLMs, focusing specifically on the issue of linguistic representationalism within this post-anthropocentric trend. Here, we show that the anti-representationalism and logical expressivism of inferentialist semantics, as well as its quasi-compositionality, are useful in interpreting the characteristics and behaviors of LLMs. Further, we propose a \emph{consensus theory of truths} for LLMs. This paper argues that the characteristics of LLMs challenge mainstream assumptions in the philosophy of language, such as semantic externalism and compositionality. We believe the argument in this paper leads to a re-evaluation of anti-representationalist views of language, potentially opening new developments in the philosophy of language.
Abstract: A graph is a common and powerful data structure for modeling communication and social networks. Models that generate graphs with arbitrary features are important basic technologies for repeated network simulations and the prediction of topology changes. Although existing generative models are useful for producing graphs similar to real-world ones, graph generation models with tunable features have been less explored in the field. We previously proposed GraphTune, a generative model that continuously tunes a specific feature of generated graphs while maintaining most other features of a given graph dataset. However, the tuning accuracy of GraphTune has not been sufficient for practical applications. In this paper, we propose a method that improves the accuracy of GraphTune by adding a new mechanism that feeds back the feature errors of generated graphs, and by training the generator and this feedback mechanism alternately and independently. Experiments on a real-world graph dataset show that our method tunes the features of generated graphs more accurately than conventional models.
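To make the feedback idea above concrete, here is a minimal toy sketch in PyTorch of alternating training between a generator and a feature estimator. Everything here is an assumption for illustration: the tiny MLP stand-ins, the `true_feature` placeholder, and the two-phase loop are hypothetical and are not the paper's actual GraphTune code.

```python
import torch
import torch.nn as nn

# Toy stand-ins (hypothetical): the real components are graph sequence models.
generator = nn.Sequential(nn.Linear(17, 64), nn.ReLU(), nn.Linear(64, 16))
estimator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
e_opt = torch.optim.Adam(estimator.parameters(), lr=1e-3)
mse = nn.MSELoss()

def true_feature(g):
    # Placeholder for computing a structural feature (e.g., average degree)
    # of a decoded graph; here just a deterministic toy function.
    return g.pow(2).mean(dim=1, keepdim=True)

for step in range(1000):
    z = torch.randn(32, 16)                    # latent samples
    target = torch.rand(32, 1) * 4.0           # requested feature values
    # Phase 1: train the estimator alone to predict the feature
    # of graphs produced by the (frozen) generator.
    with torch.no_grad():
        g = generator(torch.cat([z, target], dim=1))
    e_opt.zero_grad()
    mse(estimator(g), true_feature(g)).backward()
    e_opt.step()
    # Phase 2: freeze the estimator and feed the feature error back,
    # pushing the generator's outputs toward the requested value.
    g_opt.zero_grad()
    g = generator(torch.cat([z, target], dim=1))
    mse(estimator(g), target).backward()
    g_opt.step()
```

One plausible reading of this alternation is that the differentiable estimator lets the feature error reach the generator even though the structural feature itself is computed by a non-differentiable procedure.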
Abstract: Generative models for graphs have been actively studied for decades, and they have a wide range of applications. Recently, learning-based graph generation that reproduces real-world graphs has gradually attracted the attention of many researchers. Several generative models that utilize modern machine learning technologies have been proposed, though conditional generation of general graphs remains less explored in the field. In this paper, we propose a generative model that allows us to tune the value of a global-level structural feature as a condition. Our model, called GraphTune, can tune the value of any structural feature of generated graphs using a Long Short-Term Memory (LSTM) and a Conditional Variational AutoEncoder (CVAE). We performed comparative evaluations of GraphTune and conventional models on a real graph dataset. The evaluations show that GraphTune tunes the value of a global-level structural feature more clearly than the conventional models.
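As a rough illustration of the architecture named above, the sketch below wires a scalar condition into an LSTM-based CVAE in PyTorch. The class name `GraphCVAE`, the serialization of a graph as a token sequence, and all dimensions are assumptions made for this example, not GraphTune's published implementation.

```python
import torch
import torch.nn as nn

class GraphCVAE(nn.Module):
    """Hypothetical sketch: a CVAE over graphs serialized as token
    sequences (e.g., edge lists), conditioned on one structural feature."""
    def __init__(self, vocab_size=64, embed_dim=32, hidden_dim=128,
                 latent_dim=16, cond_dim=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder LSTM reads the serialized graph plus the condition.
        self.encoder = nn.LSTM(embed_dim + cond_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder LSTM reconstructs the sequence from z and the condition.
        self.decoder = nn.LSTM(embed_dim + latent_dim + cond_dim, hidden_dim,
                               batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, seq, cond):
        # seq: (B, T) token ids; cond: (B, 1) target feature value,
        # e.g. a normalized average degree.
        B, T = seq.shape
        x = self.embed(seq)
        c = cond.unsqueeze(1).expand(B, T, -1)
        _, (h, _) = self.encoder(torch.cat([x, c], dim=-1))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        zc = torch.cat([z, cond], dim=-1).unsqueeze(1).expand(B, T, -1)
        dec_out, _ = self.decoder(torch.cat([x, zc], dim=-1))
        return self.out(dec_out), mu, logvar
```

Training such a model would minimize the usual CVAE objective, reconstruction cross-entropy plus the KL divergence of `(mu, logvar)` against a standard normal; at generation time, one would sample `z` and decode autoregressively with the desired feature value supplied as `cond`.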
Abstract: With the development of graph applications, generative models for graphs have become increasingly important. Classically, stochastic models that generate graphs with pre-defined probabilities of edges and nodes have been studied. More recently, models that reproduce the structural features of graphs by learning from actual graph data with machine learning have been studied. However, while these conventional learning-based models can learn the structural features of graphs from data, they cannot tune those features to generate graphs with specific properties. In this paper, we propose a generative model that can tune specific features while learning the structural features of a graph from data. Using a dataset of graphs with various features generated by a stochastic model, we confirm that our model can generate graphs with specified features.