The World of Geographically Referenced Information is Facing a Paradigm Shift

Erik Kjems

 

One of the biggest issues under discussion within the geographic information domain at the moment is the ever-changing demand for handling information in a better and more efficient way. The domain is expanding in all kinds of directions. What remains constant is geo-referenced information handled with a computer. We are seeing applications and demonstrators showing off in 3D and the wonderful things one can experience there. We are seeing an ever-growing number of applications that handle online information, for example traffic or flight control.

Or we simply follow our book purchase from an online shop situated anywhere in the world, all the way to the front door, online on a smartphone. The number of GPS units and sensors is growing fast, and if geo-referencing was a specialist’s work a few years ago, it is a mainstream “one click matter” today. Software in smartphones and similar devices makes it incredibly easy to create geo-referenced data. Location-based services are accordingly a fast-growing business, and all kinds of geo-related social networking “here I am” applications invade our daily life.

“So what”, one might add. Well, I think the GI community should wake up and face the overflow of information from a professional angle. It seems as if the community is in a state of torpidity. GIS development, which has its roots back in the 1960s, needs to face a completely new paradigm for geo-related information today. The two-dimensional overlay analyses are of course still a great tool, and our relational databases are filled with important information. The demand for trustworthy and accurate information is, however, moving from solid storage to on-demand information at your fingertips. And that is not all.

 

The information is supposed to be optimized to exactly your personal needs, which means it has to take into account where you are, where you are going, what purpose your trip has, what device you are using, etc. The GIS community has the knowledge and the tools to mix all kinds of information such as maps, plans and tables into one meaningful amalgam, but we need to develop new ways to cope with these new demands.

In my opinion the biggest challenge is not to acquire geo-related data, but to use them together in a meaningful way. If, for instance, you look into the area of traffic control, you will see all kinds of information coming from sensors in or along the road. In a few years the amount of traffic information will explode, because every vehicle involved will provide the system with information about at least its position and speed, giving a picture of the actual traffic situation at one particular spot in the network.

What system is able to combine all the different kinds of online and static information, coming from a considerable number of different vendors using different communication protocols, and end up with a meaningful compromise that optimizes the traffic flow in a geographically widespread road network? And that is just one domain which uses geo-referenced data extensively.

But these problems are not really pertinent at the moment compared to the problem that we don't have mainstream systems that can handle time-based or historical data, or store real three-dimensional geometry which can be referenced with an arbitrary x,y,z coordinate, preferably on a global scale. Data, or rather information, changes all the time, and information systems, especially geographic ones, should be able to handle dynamic changes of the real world - both in their representation and in their presentation.

Existing systems are tweaked again and again to produce the desired results. Software vendors are struggling to fulfil the demands. New versions arrive on the shelves every year, but the improvement within core functionality is sparse and primarily found in the marketing material praising the everlastingly innovative software packages.

Luckily we do see developments that face the demands of upcoming new devices. But when one takes a closer look, the applications are downsized from the main platforms, do not at all comply with the users’ expectations, and are not adjusted to the manifold possibilities of these compelling small but expensive devices. Perhaps field GIS workers are satisfied with these solutions and find them very useful, but the mainstream GIS demands will come from everybody, and the situation changes constantly due to a new position and a demand for topical information.

When I am standing in front of a shop, please show me the offer of the hour; when I pass the bar, show me the name of the band which is performing; and when I wait for the next boat, please show me the boat on the river relative to my position. These are obvious examples of geographically based information, but no common solutions are in sight. Apps for iPhone and Android offer an overwhelming number of nice geo-related applications, but they are not connected into one coherent system. Perhaps one day Google will throw an Android app on the market for free which combines different domains of geographically related content for handheld devices.

Most people with an interest in geography in general have spent plenty of time with Google Earth (GE). Being able to explore the world without leaving the armchair is intriguing. GE has done a lot for the 3D virtual globe community. While the need for this kind of software with a globally oriented user interface used to be difficult to explain, GE's convincing way of presenting obvious examples of usage makes it an easy task today. GE opens up for content with KML and actually gives the opportunity for some great interactive usage on a large scale. When it comes to a more local scale, however, this platform not only lacks precision but also opportunities for online information. It probably was never designed for that purpose.

And this is perhaps the biggest challenge at the moment. No system has a data model suited to this new world of on-demand, live spatial information referenced to a model rather than a map. This statement is of course a biased one and a pointer to the development that has been going on at the Centre for 3D GeoInformation (3DGI) since 2001, where we started to develop a platform that could handle the world as we experience it.

This means that we try to use as little abstraction as possible when modelling the world. It is still a model, where each feature is geo-referenced using global geocentric coordinates and represented as a boundary representation. The quality of the model depends on the data provided. Each feature can have a “life” of its own, meaning that the object representation of the feature can be given interactivity or online streaming information as behaviour. We call the technology GRIFIN (Geographic Reference Interface for Internet Networks), and it is available as open source code.

To explain the biggest difference from existing, or perhaps traditional, developments, one can say that GRIFIN contains features represented by objects that can take care of themselves. This means first of all that an object is run by a virtual machine, in our case the Java VM, and that it can take care of, for instance, communication with other databases, sensors, applications or whatever, without any major application running to tell it what to do - it knows by itself. The objects are described with managed code, which is why they are called managed objects (MOs). MOs can even change themselves.
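
To make the idea concrete, here is a minimal sketch in Java of what a managed object contract could look like. All names and signatures are my own illustration and are not taken from the GRIFIN source code; the point is only that the object carries its own geometry, attributes and update logic, rather than being driven by a central application.

    // Hypothetical managed object (MO) contract - an illustration only,
    // not the actual GRIFIN API.
    import java.util.Map;

    public interface ManagedObject {
        // Globally unique identifier of the feature.
        String id();

        // Geocentric (earth-centred, earth-fixed) position in metres.
        double[] position();

        // Boundary representation of the feature, here simply a flat triangle list.
        float[] boundaryRepresentation();

        // Semantic attributes imported with the data and stored as-is.
        Map<String, String> attributes();

        // Called by the hosting VM environment. The object decides for itself
        // whether to poll a sensor, query a database or change its appearance;
        // no central application tells it what to do.
        void update(long timeMillis);
    }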

Figure 1. Design schema for conventional (left) and MO-based (right) approaches, respectively. Courtesy of Wan Wen.

For example, the shape or colour of an object can change due to information fetched from an external domain. In a traditional system everything has to be predetermined in the main software and is limited to the functions provided there, so every bit of communication and change regarding a feature has to be controlled by the main system.

Figure 1 shows a schematic representation of the difference between the two approaches. On the left-hand side is the conventional software design, where objects are encapsulated by the main application, while in the MO approach on the right-hand side the objects run within a virtual machine environment and can freely act on their own.

MOs are stored as byte code. Whether their data or behaviour is stored within the object or, partly or entirely, in external databases is a matter of system design. From figure 1 it should be clear that GRIFIN is a platform for applications rather than an application you can start up and put data into. Each use of GRIFIN will result in a certain kind of application - they will be similar but still different. MOs and their connected behaviour can, however, be reused from application to application. MOs defined properly will run within a VM environment.

The development should therefore focus on these objects. Instead of modelling static geometry for a city model, one should think in interactive units that can communicate, inform, renew or present themselves interactively. MOs are indexed with regard to their coordinates. This has the big advantage that the client viewer can easily find the MOs within the area of interest. MOs can be stored in distributed databases geographically spread around the world. Depending on the application and the desired view, MOs will show up when they are “in range” for viewing.
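
As an illustration of this coordinate-based indexing, the following sketch shows one simple way a client viewer could ask for the MOs “in range” of a position. The grid structure and all names are assumptions of mine, reusing the hypothetical ManagedObject interface sketched earlier; GRIFIN's actual index may be organised quite differently.

    // A coarse geocentric grid index - an illustration, not GRIFIN's real index.
    import java.util.*;

    public class GeoGridIndex {
        private final double cellSize; // cell size in metres
        private final Map<Long, List<ManagedObject>> cells = new HashMap<>();

        public GeoGridIndex(double cellSize) { this.cellSize = cellSize; }

        private long cellOf(double v) { return (long) Math.floor(v / cellSize); }

        // Pack three 21-bit cell indices into one long key.
        private long key(long ix, long iy, long iz) {
            return ((ix & 0x1FFFFFL) << 42) | ((iy & 0x1FFFFFL) << 21) | (iz & 0x1FFFFFL);
        }

        public void insert(ManagedObject mo) {
            double[] p = mo.position();
            long k = key(cellOf(p[0]), cellOf(p[1]), cellOf(p[2]));
            cells.computeIfAbsent(k, k2 -> new ArrayList<>()).add(mo);
        }

        // Return all MOs within 'radius' metres of the viewer position.
        public List<ManagedObject> inRange(double[] viewer, double radius) {
            List<ManagedObject> result = new ArrayList<>();
            long r = (long) Math.ceil(radius / cellSize);
            long cx = cellOf(viewer[0]), cy = cellOf(viewer[1]), cz = cellOf(viewer[2]);
            for (long ix = cx - r; ix <= cx + r; ix++)
                for (long iy = cy - r; iy <= cy + r; iy++)
                    for (long iz = cz - r; iz <= cz + r; iz++)
                        for (ManagedObject mo : cells.getOrDefault(key(ix, iy, iz),
                                Collections.emptyList())) {
                            double dx = mo.position()[0] - viewer[0];
                            double dy = mo.position()[1] - viewer[1];
                            double dz = mo.position()[2] - viewer[2];
                            if (dx * dx + dy * dy + dz * dz <= radius * radius)
                                result.add(mo);
                        }
            return result;
        }
    }

In a distributed setting each database could hold the cells for its own region, and the viewer would only need to query the stores whose regions overlap the area of interest.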

GRIFIN uses geocentric coordinates because they are more convenient in the context of a virtual global representation of real-world features. It does not make sense to use map projections when the representation of the objects is digital and they are used and presented as such on monitors. Another big advantage of the managed code-based platform is the handling of the semantic information of the provided data. GRIFIN does not really bother to convert or interpret this information during import.
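
For readers unfamiliar with geocentric coordinates, the conversion from geodetic latitude, longitude and ellipsoidal height to earth-centred, earth-fixed X, Y, Z is a standard WGS84 computation; the short Java example below (class and method names are mine) shows how every feature ends up with one global x,y,z instead of projected map coordinates.

    // Geodetic (WGS84 lat/lon/height) to geocentric (ECEF) coordinates in metres.
    public final class Geocentric {
        private static final double A = 6378137.0;            // WGS84 semi-major axis (m)
        private static final double F = 1.0 / 298.257223563;  // WGS84 flattening
        private static final double E2 = F * (2.0 - F);       // first eccentricity squared

        // lat/lon in degrees, height in metres above the ellipsoid.
        public static double[] toEcef(double latDeg, double lonDeg, double h) {
            double lat = Math.toRadians(latDeg);
            double lon = Math.toRadians(lonDeg);
            double sinLat = Math.sin(lat);
            double n = A / Math.sqrt(1.0 - E2 * sinLat * sinLat); // prime vertical radius
            double x = (n + h) * Math.cos(lat) * Math.cos(lon);
            double y = (n + h) * Math.cos(lat) * Math.sin(lon);
            double z = (n * (1.0 - E2) + h) * sinLat;
            return new double[] { x, y, z };
        }

        public static void main(String[] args) {
            // Roughly Aalborg: 57.05 N, 9.92 E, 50 m above the ellipsoid.
            double[] p = toEcef(57.05, 9.92, 50.0);
            System.out.printf("X=%.1f Y=%.1f Z=%.1f%n", p[0], p[1], p[2]);
        }
    }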

The information is simply stored within the object. So if the semantic information is important due to specific demands in the application, one will develop an agent-based object, probably with no geometric representation, to handle it. To the system this agent will just be another object run by the virtual machine. The same goes for data in different formats. Since the system contains managed objects, it is preferable to exchange MOs rather than data generated in a proprietary format.
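
As a sketch of such an agent-based object, the class below (again using the hypothetical ManagedObject interface from earlier, and entirely my own illustration) has no geometry of its own and simply interprets the semantic attributes carried by other objects when the application needs them.

    // An agent-style MO without geometry - illustration only.
    import java.util.List;
    import java.util.Map;

    public class SemanticAgent implements ManagedObject {
        private final List<ManagedObject> watched;   // objects whose semantics we interpret
        private final String attributeKey;
        private final String expectedValue;

        public SemanticAgent(List<ManagedObject> watched, String attributeKey, String expectedValue) {
            this.watched = watched;
            this.attributeKey = attributeKey;
            this.expectedValue = expectedValue;
        }

        @Override public String id() { return "agent:" + attributeKey; }
        @Override public double[] position() { return new double[3]; }             // no meaningful location
        @Override public float[] boundaryRepresentation() { return new float[0]; } // no geometry
        @Override public Map<String, String> attributes() { return Map.of(); }

        @Override public void update(long timeMillis) {
            // Interpret the stored semantics only when they are actually needed.
            for (ManagedObject mo : watched) {
                if (expectedValue.equals(mo.attributes().get(attributeKey))) {
                    System.out.println(mo.id() + " matches " + attributeKey + "=" + expectedValue);
                }
            }
        }
    }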

The biggest disadvantage at the moment is that MOs cannot yet be created as easily as data in conventional systems, but this will hopefully change soon. The 3DGI centre is participating in the InfraWorld research project, funded by the Norwegian Research Council, together with the companies Vianova and Norkart in Norway and Finland and Iver in Spain. Together we are developing a future 3D-GIS platform, based on the managed object concept, not only for infrastructure data but for city and landscape data in general.

It is time for a paradigm shift towards a platform that can handle the demands of the future - demands which lie beyond what can be expected of the existing systems, because their data models don’t fit. GRIFIN is a new approach which shows great potential and seems to be a promising attempt.

----------------------------------------------------------------------
Author: Erik Kjems, PhD, M.Sc., Associate Professor, Director of the Aalborg University Centre for 3D GeoInformation, Fibigerstraede 11, 9220 Aalborg, Denmark
Tel +45 99408079 - Email kjems@3dgi.dk
More information: http://www.3dgi.dk