Credit: Aaron Koblin: http://www.aaronkoblin.com/
This article originally appeared as a thought piece for Agit8, a blog that invites “people at the leading edge of their discipline or field to offer insight into where they see the future of their work going.” It’s not written for geo-geeks, but as an introduction to digital cartography and the challenges ahead.
Digital Cartography, of course. While paper maps aren’t going to die, paper is a limited medium and all the fun stuff is going on in the digital sphere. Remember this – all paper maps are produced from huge amounts of data, 1s and 0s describing our planet down to centimetre accuracy. The A to Z in the bottom of your bag, the grimy tube map stuck to the station wall and the dog-eared road atlas in the glove box are all renderings of digital data. Let’s make it really simple:
A Map = Data x Visualisation
So the paper map and the online equivalent are produced in similar ways, but the output medium is different. True, I can draw on my paper map, but apart from that there isn’t much else you can do with it. The online map opens up a whole new world of possibilities, and while we’re at it let’s not limit it to an online map. In many circumstances a map is perfect; in other contexts I might not want to see a map at all, just directions. There is a raft of location-enabled applications out there, from your mobile phone to your car navigation system to the browser you’re using now. It’s a really exciting time to be a neogeographer, especially when you read articles like Trendwatching’s 2009 Briefing, which declares “maps are the new interface”.
Web2.0 isn’t possible without APIs and web services. Indeed, back at the Where2.0 Conference in 2005, Tim O’Reilly pronounced “Google Maps with CraigsList is the first Web 2.0 application.” The recent proliferation of map mashups and location-enabled devices points to a big future for location. As any geo-geek will tell you, “it’s all about the data”, and the rise of Web2.0 and semantics (well, it’s getting better) has fuelled these fires. Looking at ProgrammableWeb’s API directory, three of the top ten APIs are mapping services:
Top Ten APIs from www.programmableweb.com
There is also the larger trend of companies opening up their services with APIs, and the possibilities now are so much greater with all that data being piped around the world. Visiting Flickr tells me that 2.7 million photos have been geotagged this month alone. The mapping services are out there, the data is too, and people are flinging mashups together all over the place.
The history of this discipline was grounded in the traditional RDBMS: chunks of data sitting in a machine on the same network as you, under your control. Now I treat the cloud as my database – I can juice data from any API, mash it with my own data, stir in some geotagged photos from Flickr and serve it on a Google Maps platter to any device. Suddenly Everything Has Changed.
Where do we go from here?
The fundamentals remain the same: data and visualisation. This is going to come in two sections, first a sober look at visualisation and then a ranting finale about data.
We need to make better location-enabled apps; getting some points on a slippy map was a triumph in 2005 but it’s getting pretty stale in 2009. I love the variety we get with paper maps, the styling, the history and sense of culture that comes through a vintage map (remember seeing your first map that wasn’t centred on your home country?). Nearly everything on the web is so, so homogeneous. Now I’m not criticising Google Maps – they provide a wonderful service that I use extensively myself – but right now we have a one-size-fits-all mapping paradigm. The little variety that’s out there comes from someone using Yahoo! Maps instead of Google.
A paper map was limited in its interactivity, and slippy maps are limited too: they are pre-rendered image tiles that fit neatly together in the browser. You can drag them around with the mouse, but you can’t do much more. I can layer as much data as I like over the top, but I can’t access the underlying data, because it’s just a static pre-rendered map tile.
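For the curious, the tile scheme itself is simple arithmetic. Here is a minimal Python sketch of the standard spherical-Mercator tile addressing used by most slippy maps (the function name and example coordinates are mine):

```python
import math

def slippy_tile(lat, lon, zoom):
    """Return the (x, y) tile containing a lat/lon point at a given zoom.

    In the standard slippy-map scheme the world is a 2^zoom by 2^zoom
    grid of square tiles, with tile (0, 0) at the north-west corner.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Central London at zoom 10 falls in tile (511, 340), so a tile server
# would deliver it from a URL ending something like .../10/511/340.png
print(slippy_tile(51.5074, -0.1278, 10))
```

Every tile you drag around is fetched by this kind of address – which is exactly why the tile itself carries no queryable information beyond its pixels.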
Michal Migurski of Stamen Design has written an excellent piece about a talk he gave called ‘Tiles Then, Flows Now’ over at his blog. It’s a fascinating read about moving beyond map tiles to flows of data, utilising real-time web technologies and even a little game development theory. However, as someone so wisely told me, I’m getting ahead of myself here.
I want to talk about vectors, the underlying data used to create those map tiles that everybody is so busy throwing data on top of. Why is this important? Consider a regular search engine: it understands text, but it can’t read text written into an image. The text is locked in the image and no amount of searching is going to find it; take that text out of the image and record it as HTML and suddenly you have a very searchable site. Exactly the same principle applies to mapping applications – expose the underlying vector data and it becomes searchable, queryable and customisable.
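To make that concrete, here is a toy Python sketch. The data is invented, loosely in the GeoJSON style, but it shows what exposing vector features buys you over a flat image:

```python
# A hypothetical scrap of vector map data: each feature carries its
# geometry *and* its attributes, so the map itself can be queried.
features = [
    {"geometry": {"type": "Point", "coordinates": [-0.128, 51.507]},
     "properties": {"amenity": "toilets"}},
    {"geometry": {"type": "LineString",
                  "coordinates": [[-0.13, 51.50], [-0.12, 51.51]]},
     "properties": {"highway": "cycleway", "name": "Example Route"}},
]

# With the raw vectors exposed, "show me every public toilet" is a
# one-line query -- impossible against a pre-rendered tile.
toilets = [f for f in features
           if f["properties"].get("amenity") == "toilets"]
print(len(toilets))
```

It is the same searchable-HTML-versus-text-in-an-image distinction: the attributes are right there in the data, rather than baked into pixels.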
Right now, none of this is possible with any of the current crop of mapping services, presumably because they pay huge sums of money to get that vector data, process it and host it, and want people to use their own services. Some enlightened countries have openly shared all public mapping datasets, but this is by no means widespread. Enter OpenStreetMap, the crowdsourced movement to create a free map of the world. Without going into too much detail, it has gained massive traction and has better data in some areas than the traditional mapping agencies, worse in others. It is by no means complete but it is improving all the time, and crucially the vector data is available under a Creative Commons licence.
So, if you’re a hardcore developer, roll up those sleeves, download some data, delve into the Mapnik documentation and start making your own beautiful customised maps. One of the best examples of this is OpenCycleMap, which is completely customised to show the most relevant information to cyclists. Note the contour styling, the highlighted cycle routes and the display of public toilets that has been queried out of the OSM data:
OpenCycleMap Customised Styling: http://www.opencyclemap.org
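If you do go down the Mapnik route, a stylesheet is essentially XML rules over the vector attributes. The fragment below is purely illustrative – the layer and datasource details are placeholders, not a working configuration – but it shows the idea of picking features (here, public toilets, OpenCycleMap-style) straight out of the data and giving them their own symbology:

```xml
<!-- Illustrative Mapnik stylesheet sketch; datasource values are
     placeholders, not a working setup. -->
<Map bgcolor="#f2efe9">
  <Style name="cycle-pois">
    <Rule>
      <!-- Filters query the vector attributes directly -->
      <Filter>[amenity] = 'toilets'</Filter>
      <PointSymbolizer file="icons/toilets.png"/>
    </Rule>
  </Style>
  <Layer name="pois">
    <StyleName>cycle-pois</StyleName>
    <Datasource>
      <Parameter name="type">postgis</Parameter>
      <Parameter name="table">planet_osm_point</Parameter>
    </Datasource>
  </Layer>
</Map>
```

The point is that the cartography is driven by queries over the data, not by redrawing pixels.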
Now this isn’t for everyone; it’s time-consuming and requires a lot of specialist knowledge. Indeed, many people do want an interface and style that is familiar – fine, choose what suits the project best. If you’re creating a map to show the location of one shop, then a simple pushpin on a GMap is all you need. Interestingly, CloudMade, the startup building mapping services with OpenStreetMap data, is aiming for the middle ground between these two extremes. They expose enough control over the vector data to allow custom map styling without the headaches involved in a full Mapnik implementation. I would expect the other major players to follow suit and start to offer MapCSS as part of their APIs.
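For a flavour of that middle ground, here is an illustrative MapCSS-style fragment (the colours and icon path are my own inventions): selectors match OSM tags, and the declarations set the cartography, much like CSS for web pages.

```
way[highway=cycleway] { color: #0060ff; width: 3; }
node[amenity=toilets] { icon-image: icons/toilets.png; }
```

Two lines like these give you a cycling-oriented map without touching a renderer yourself.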
So the challenge is to think of what’s relevant to the user and site style – do they need to see every minor street, should I emphasise certain features, can I match the map’s palette with the site’s? Start making beautiful, customised maps.
We all know what a map is: a diagram that tries to represent the real world on a flat piece of paper. As a geographer, there is something deeply comforting about a map; it’s something I understand, that tells me about the world, where I need to go, where things are and what they look like. There is something almost Victorian about it, an obsession with putting the world in order – controlling space. Let’s ponder that thought: if you understand something then you can control it, and if you can control it then you own it. Property. I’m no Marxist, but issues of ownership and the control of information have been the main preoccupation of governments since they were established.
The Ordnance Survey, the UK’s national mapping agency, has its origins in the quashing of the Jacobite uprising in the Scottish Highlands. The military needed maps to protect, plan and control the UK, and it was only in 1983 that the OS became a civilian government organisation. Until recently this model worked well: map making was an expensive and labour-intensive process, and the format for delivery was always the same, a paper map. Things have changed a bit since then.
How do you get your information? As an Agit8 reader I assume you’re pretty tech savvy. Our methods of interacting with information have been blown away by computers and the internet, and more recently the mobile phone. When I’m looking for a book I don’t go to a library and search through index cards, I Google it. When I’m driving somewhere I don’t carry around a massive road atlas and five 1:25,000 maps, I punch in the destination on my car navigation system and it takes me there.
The methods of interacting with data have changed, but those producing it haven’t. Anyone who has dealt with the Ordnance Survey can tell you just how restrictive their licensing terms are. Indeed, the Show Us a Better Way competition was held last year to encourage the civilian masses to come up with innovative ideas to mash up government data. It was a very Government2.0 initiative from the Cabinet Office, and while innovative approaches like this must be applauded, the reality of licensing geo data is somewhat different. You can only imagine the look on their faces when the OS promptly informed public bodies that any data derived from an OS map cannot be used on an online mapping platform (apart from their own). So that was pretty much every idea out the window then.
We are faced with the rather ridiculous situation that every public body pays another public body, the OS, for mapping data. Then, if they create any new information with reference to the OS maps, like recording the location of every public toilet, that data is classified as “derived data”. This derived data effectively cannot be shared with anyone who isn’t also licensing data from the OS. Confused? Essentially, it means that nearly every government dataset with any kind of location information is under lock and key. It’s not just the OS – they receive a regular bashing from the Guardian – but all aspects of government data policy are out of whack.
Things are in a real tangle here: various government departments are trying to free up data to encourage innovation and the information economy, but how can this be accomplished? There are real, positive steps that can be taken to start loosening the bonds that tie publicly owned datasets. Start by reading the excellent “Power of Information Review”, which was commissioned by the Cabinet Office and was the instigation for the Show Us a Better Way competition. It is a sign of the government waking up; the juggernaut is slowly turning but needs prodding and poking to get it on the right track. Following its publication, the Power of Information Task Force was established by Tom Watson MP. If you’re a Twitter fan, you can follow him here.
The Office of Public Sector Information has an ‘Unlocking Service’: make a request for access to information and they are supposed to unlock it for you. I am not sure of the success rate of this service, but the idea is sound – try it here.
Perhaps more effective is the MySociety-run ‘WhatDoTheyKnow’, where you can easily make a Freedom of Information request and monitor its progress. What’s more, any data that is released is automatically posted online so that anyone can access and use it. So USE IT! Put in requests, work the system and start unlocking some of the value in public datasets; mash it and get it out there.
But what about the maps, you say? Well, without wishing to repeat myself, I have to refer back to OpenStreetMap. If you want truly free data that you can edit, share and do whatever you want with, then pick up a GPS unit and start mapping. Then make some beautiful maps.