Google has just announced the launch of Knowledge Graph, which it sees as a smarter way to search. Knowledge Graph ties together pieces of metadata about things, such as their relationships to other things in the world. What this means is that when you search for something that is in the Knowledge Graph, Google will display additional information about the thing you searched for, so that you can access facts directly without having to browse through a bunch of web pages.
So, for instance, if you search for “William Shakespeare” you will be able to see important information about the man, such as when and where he was born, when and where he died, where he was educated, the names of his children and what his career was. That might not sound altogether exciting in itself; after all, Wikipedia has filled this niche for a long time. But there is a whole lot more to this. For instance, when you search for ‘maestro’ you could be interested in a type of music software, a type of debit card, a TV series, a hip-hop producer, an Italian word, or a variety of other possibilities. What Knowledge Graph can do is help you quickly narrow your search to something specific: you simply click on the entity that best matches what you’re looking for. So Knowledge Graph helps you overcome the problem of ambiguity in your search results.
While Knowledge Graph has yet to find its way beyond the US, I’d like to point you to its roots. That way you can start exploring what this kind of search functionality really has to offer and understand how it will ultimately revolutionize how we access information on the Internet. Back in 2001, Tim Berners-Lee, the inventor of the World Wide Web, began talking about the limitations of straight HTML for representing information on the Internet.
He proposed that we needed a way for machines to discover the relationships between things, so that ultimately we could build a ‘Semantic Web’. This idea was taken up by the World Wide Web Consortium (W3C), and the Resource Description Framework (RDF) was born. RDF is a model for representing the information we exchange over the Internet so that machines are better able to make sense of the data and can search, parse and extract relevant information according to our needs.
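At its core, RDF describes the world as simple subject-predicate-object statements, or ‘triples’. The sketch below is plain Python rather than real RDF tooling (which adds URIs, namespaces and serialization formats), and the facts in it are just illustrative, but it shows how a handful of triples already lets a machine answer questions about a thing:

```python
# RDF models facts as subject-predicate-object "triples".
# A minimal plain-Python sketch of that idea; real RDF tooling
# (e.g. the rdflib library) adds URIs, namespaces and serialization.

triples = {
    ("William Shakespeare", "born_in", "Stratford-upon-Avon"),
    ("William Shakespeare", "profession", "Playwright"),
    ("Hamlet", "written_by", "William Shakespeare"),
}

def facts_about(subject):
    """Return every (predicate, object) pair recorded for a subject."""
    return sorted((p, o) for s, p, o in triples if s == subject)

print(facts_about("William Shakespeare"))
# [('born_in', 'Stratford-upon-Avon'), ('profession', 'Playwright')]
```

Because every fact has the same three-part shape, a machine can merge triples from many sources and follow the connections between them, which is exactly what makes the data ‘semantic’.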
RDF has been quietly simmering away in the background for many years now. If you look for RDF resources, you’ll mostly discover that the technology has been widely adopted only in the sciences and, for the most part, by academic institutions. That’s not necessarily a bad thing. These sectors have helped to prove, at least in concept, that a semantic web is possible. We also need to keep in mind that these kinds of institutions were the driving force behind the adoption of the Internet in the first place. But the web is big, and the number of things in the world and all of their relationships is even bigger. Capturing all of this information is a mammoth task.
Fortunately, Wikipedia proved how powerful voluntary collaborative work could be. So a similar project was started by an American company called Metaweb, which released Freebase in 2007. Freebase is a collaborative database that stores structured data about things. Unlike a traditional database, which stores information in tables and uses keys to create relationships, the Freebase database stores information in a very large graph, with relationships defined by the connections between nodes. The easiest way to think of it is to imagine an incredibly large and complicated mind map.
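To make the graph-versus-tables distinction concrete, here is a toy graph store in that spirit: entities are nodes, and relationships are labelled edges you can walk, with no fixed table layout required. The entities and relation names below are invented for illustration; Freebase’s real schema and storage were far richer.

```python
# A toy graph store: nodes connected by labelled edges, not rows
# joined by foreign keys. (Illustrative only; Freebase's actual
# schema and storage were far richer than this sketch.)

from collections import defaultdict

edges = defaultdict(list)

def connect(node, relation, other):
    """Add a labelled edge from one node to another."""
    edges[node].append((relation, other))

connect("Maestro", "is_a", "Debit card")
connect("Maestro", "is_a", "TV series")
connect("Maestro", "issued_by", "Mastercard")

# Walking the edges answers "what kinds of thing is Maestro?"
print([obj for rel, obj in edges["Maestro"] if rel == "is_a"])
# ['Debit card', 'TV series']
```

Notice that adding a brand-new kind of relationship is just another labelled edge; in a relational database it might mean altering tables and keys. That flexibility is what makes a graph a natural fit for open-ended knowledge.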
Volunteers were given the ability to add information to the Freebase database according to a rigorously defined schema. In return, Freebase gave the community an API that allowed you to easily query the database and use the information within your own websites or applications. Freebase set out to collect information structured in a way that would ultimately contribute heavily toward the Semantic Web. If you try out its search facility, you will find that it provides functionality very similar to that of Knowledge Graph.
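Freebase’s API was built around MQL (the Metaweb Query Language), a query-by-example style where you submit a JSON template with the fields you know filled in and blanks for what you want back. The sketch below imitates that idea against a tiny in-memory dataset; the field names and records are invented for illustration, and the real API matched such queries against Freebase’s graph over HTTP.

```python
# Query-by-example in the spirit of MQL: supply the fields you know,
# leave None where you want the answer filled in.
# (Field names and data are invented; the real Freebase API matched
# JSON queries like this against its graph over HTTP.)

records = [
    {"name": "William Shakespeare", "type": "person", "born": 1564},
    {"name": "Hamlet", "type": "play", "born": None},
]

def fill(query):
    """Return the first record matching every non-None field of the query."""
    for record in records:
        if all(record.get(k) == v for k, v in query.items() if v is not None):
            return {k: record.get(k) for k in query}
    return None

print(fill({"name": "William Shakespeare", "born": None}))
# {'name': 'William Shakespeare', 'born': 1564}
```

The appeal of this style is that the query and the answer share one shape, so a developer could describe the data they wanted without learning a separate query syntax.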
All of this shouldn’t be that surprising, since Google acquired Metaweb in 2010. Still, Freebase continues to run in the background, slowly collecting data and building the semantic relationships that will ultimately power what is now being termed Web 3.0. This technology seems certain to revolutionize the way things work on the web. As voice-activated search, such as Siri, becomes more mainstream, allowing machines to make more sense of what we mean when we look for information will make it easier for us to find and interact with that information.
While the Semantic Web and tools like Knowledge Graph and Freebase are exciting, there are those who believe that the joy these technologies bring us will be short-lived. Sci-fi writer, journalist and blogger Cory Doctorow has written a scathing attack on the idea that we can ever achieve a truly semantic web in his short piece ‘Metacrap’. Perhaps his point that ‘people lie’ is the one that stands out most strongly to me. If the SEO hounds leap onto the metadata gravy train, we may well see the semantic web die a slow and miserable death. In the meantime, let’s enjoy using a technology that has been in the making for over ten years now.