Very large data sets are the rule rather than the exception in automated mapping, GIS, remote sensing, and what we may broadly call geo-information. Indeed, by 1983 Landsat was already delivering gigabytes of data, other sensors were in orbit or ready for launch, and a comparable volume of cartographic data was being digitized. This retrospective paper revisits several issues that the geo-information sciences have had to face from their early stages onward, including: structure (bringing structure to data registered from a sampled signal; metadata); processing (huge volumes of data demanding large computers and fast algorithms); uncertainty (the kinds of errors and their quantification); consistency (when merging data from different sources is logically permissible and meaningful); and ontologies (clear, shared, agreed-upon definitions, required if any kind of decision is to be based upon them). All these issues form the background of today's Internet queries, and the underlying technology was shaped during the years when geo-information engineering emerged.