IGM-LabInfo
Abstract: Very large data sets are the common rule in automated mapping, GIS, remote sensing, and what we may broadly call geo-information. Indeed, in 1983 Landsat was already delivering gigabytes of data, other sensors were in orbit or ready for launch, and a comparable amount of cartographic data was being digitized. This retrospective paper revisits several issues that the geo-information sciences have had to face from their early stages on, including: structure (bringing some structure to the data registered from a sampled signal; metadata); processing (huge amounts of data demanding big computers and fast algorithms); uncertainty (the kinds of errors and their quantification); consistency (when merging different data sources is logically allowed and meaningful); and ontologies (clear, agreed, shared definitions, if any kind of decision is to be based upon them). All these issues form the background of today's Internet queries, and the underlying technology was shaped during the years when geo-information engineering emerged.
Abstract: The objective is to present one important aspect of the European IST-FET project "REV!GIS": the methodology that has been developed for translating (interpreting) the quality of the data into "fitness for use" information, which can be confronted with the user's needs in their application. This methodology is based upon the notion of "ontologies" as a conceptual framework able to capture the explicit and implicit knowledge involved in the application. We do not address the general problem of formalizing such ontologies; instead, we illustrate the approach with three applications that are particular cases of the more general "data fusion" problem. In each application, we show how to deploy our methodology by comparing several possible solutions, and we try to highlight where the quality issues lie and what kind of solution to favor, even at the expense of a highly complex computational approach. The expectation of the REV!GIS project is that computationally tractable solutions will be available among the next generation of AI tools.
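As a loose illustration of what confronting data quality with user needs can mean in practice (a hypothetical sketch only, not the REV!GIS methodology: the quality indicators, thresholds, and values below are invented), one can compare the quality metadata attached to a data set with the requirements stated for a particular application:

    # Hypothetical sketch: quality metadata of a data set vs. the user's requirements.
    dataset_quality = {"positional_accuracy_m": 5.0, "currency_years": 3, "completeness": 0.92}
    user_needs = {"positional_accuracy_m": 10.0, "currency_years": 5, "completeness": 0.90}

    def fitness_for_use(quality, needs):
        """The data set 'fits' if every indicator meets the user's threshold
        (lower is better for accuracy and currency, higher for completeness)."""
        return (quality["positional_accuracy_m"] <= needs["positional_accuracy_m"]
                and quality["currency_years"] <= needs["currency_years"]
                and quality["completeness"] >= needs["completeness"])

    print(fitness_for_use(dataset_quality, user_needs))  # True: quality meets the stated needs

The point of the methodology is precisely that such thresholds are rarely explicit; the ontologies are meant to capture the implicit knowledge from which they can be derived.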
Abstract:Using qualitative reasoning with geographic information, contrarily, for instance, with robotics, looks not only fastidious (i.e.: encoding knowledge Propositional Logics PL), but appears to be computational complex, and not tractable at all, most of the time. However, knowledge fusion or revision, is a common operation performed when users merge several different data sets in a unique decision making process, without much support. Introducing logics would be a great improvement, and we propose in this paper, means for deciding -a priori- if one application can benefit from a complete revision, under only the assumption of a conjecture that we name the "containment conjecture", which limits the size of the minimal conflicts to revise. We demonstrate that this conjecture brings us the interesting computational property of performing a not-provable but global, revision, made of many local revisions, at a tractable size. We illustrate this approach on an application.
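The computational idea can be sketched as follows (a minimal, hypothetical illustration, not the paper's algorithm: the fact names, reliability scores, and constraint are invented). If, as the containment conjecture assumes, every minimal conflict involves only a few facts, then the global revision decomposes into many small local revisions, each handled independently:

    # Facts merged from several sources: name -> (value, reliability of the source).
    facts = {
        "parcel_42_landuse": ("forest", 0.9),
        "parcel_42_elevation": (1250, 0.6),
        "road_7_crosses_parcel_42": (True, 0.8),
    }

    # Integrity constraints relating small groups of facts
    # (here: a hypothetical rule that forest parcels are not crossed by roads).
    constraints = [
        (("parcel_42_landuse", "road_7_crosses_parcel_42"),
         lambda landuse, crossed: not (landuse == "forest" and crossed)),
    ]

    def local_revisions(facts, constraints):
        """For each violated constraint (a minimal conflict, small by the conjecture),
        retract the least reliable fact involved in it."""
        to_retract = set()
        for names, pred in constraints:
            if all(n in facts for n in names) and not pred(*(facts[n][0] for n in names)):
                to_retract.add(min(names, key=lambda n: facts[n][1]))
        return to_retract

    revised = {n: v for n, v in facts.items() if n not in local_revisions(facts, constraints)}
    print(revised)  # the least reliable fact of the conflict has been retracted

Because each conflict is bounded in size, each local revision stays tractable; the price, as noted above, is that the resulting global revision is not provably equivalent to a full revision of the whole knowledge base.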