Well, it looks like there’ll be no escaping the “social graph” term. World Wide Web inventor Tim Berners-Lee, in a blog post last evening, not only bestowed his blessing on the social graph but elevated it to the capitalized Social Graph, a sign that we have a New Paradigm on our hands. Sir Tim suggests that the Semantic Web (recently dubbed “Web 3.0”) was really the Social Graph all along, and that the graph represents the third great conceptual leap for the network – from net to web to graph:
The Net links computers, the Web links documents. Now, people are making another mental move. There is realization now, “It’s not the documents, it is the things they are about which are important”. Obvious, really.
Biologists are interested in proteins, drugs, genes. Businesspeople are interested in customers, products, sales. We are all interested in friends, family, colleagues, and acquaintances. There is a lot of blogging about the strain, and total frustration that, while you have a set of friends, the Web is providing you with separate documents about your friends. One in facebook, one on linkedin, one in livejournal, one on advogato, and so on. The frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again. The separate Web sites, separate documents, are in fact about the same thing – but the system doesn’t know it …
It’s not the Social Network Sites that are interesting – it is the Social Network itself. The Social Graph. The way I am connected, not the way my Web pages are connected. We can use the word Graph, now, to distinguish from Web. I called this graph the Semantic Web, but maybe it should have been Giant Global Graph!
Goodbye WWW. Hello GGG.
But while it’s true that technologists and theoreticians desire to abstract the graph from the sites – and see only the benefits of doing so – it’s not yet clear that that’s what ordinary users want or even care about. That’ll be the real test of whether the graph makes the leap from mathematician to mainstream – and it will also tell us whether a social network like Facebook has a chance to become a true platform or is fated to remain a mere site.
I would think that in the end it will benefit both. Right now it’s sort of a war over which site “gets” you. You might sign up for more than one site, but we all know the setup involved gets annoying, so people won’t do it.
Now imagine I can just go to a new site and all my profile info, friends, etc. are already there. If that setup is more automatic, people will also use more services.
So in the end users will have more choice of services and platforms will have more users. I think the key here really is to make this as transparent as possible (while thinking about privacy, of course; that is another issue here).
I’ve kind of had the same thoughts, and I’m probably not alone :)
But adding this social layer to a website really does seem like a possible paradigm shift in functionality, like the one that gave us “Web 2.0”.
Of course, I don’t really know what I am talking about, and I am also mostly thinking about Google OpenSocial. But still. I think Web 3.0 could be an apt analogy and description for what the inclusion and integration of one’s social graph could bring to a website.
[Um… I self-consciously note that this is two very long posts in a short time. I’ll back off on long posts for a bit. Don’t mean to spam — you just hit two subjects of particular interest to me.]
I think I understand Tim a little bit. Maybe I’m projecting, but…
To think like Tim, and I’m sure Nick will get this but not everyone will, imagine yourself in a Philosophy of Language seminar — we’ll borrow (and lightly twist around) some terminology from that kind of context.
He thinks a lot about names and systems of naming things. The net gives us names for the nodes on the net. The web (per Tim) gives us names for documents and locations within documents. The semantic web was the invention of new standards for markup — new “names” — this time naming arbitrary predicates (like IsMortal(Socrates), WasBornBefore(Socrates, Plato), or ThisPageIsAbout(Socrates), etc.).
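To make that concrete, here is a rough sketch (mine, not Tim’s or the W3C’s) of how such predicate “names” boil down to subject/predicate/object triples that software can filter on; every URI below is invented purely for illustration.

```python
# Each predicate "name" in semantic-web-style markup reduces to a
# (subject, predicate, object) triple.  All URIs are made up.
triples = [
    ("http://example.org/Socrates", "http://example.org/vocab#isMortal", True),
    ("http://example.org/Socrates", "http://example.org/vocab#wasBornBefore",
     "http://example.org/Plato"),
    ("http://example.org/page/1", "http://example.org/vocab#isAbout",
     "http://example.org/Socrates"),
]

# Because the predicates themselves have names, a machine can filter on them
# without understanding anything else about the documents they came from.
about_socrates = [s for (s, p, o) in triples
                  if p == "http://example.org/vocab#isAbout"
                  and o == "http://example.org/Socrates"]
print(about_socrates)  # ['http://example.org/page/1']
```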
The names, up to that point, represent (kind of circularly) a taxonomy of all the automatically processable data “on the web”.
What Tim sees now, and Social Graph applications are just one example, is the introduction of “ontological hypotheses” as web content. That is, the mark-up in a web document may literally be naming just other web content, but presumably it is “about” something external to the web (like Socrates or “my trip to the Acme Conference”). People, in natural languages, have names for those ontological hypotheses (names like “Socrates”). Tim’s observation is that people are beginning to reify those names into the web as mark-up and URIs (e.g., “Ok, according to me, the URI http://… is the official name for Socrates”) and, from there, to encode models of the ontological concept and use that as a basis for automated reasoning about web content.
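A toy model of that reification step (all URIs and “facts” invented by me): two sites each mint a URI intended to name the same flesh-and-blood person, a separate assertion declares the two URIs co-referent, and software pools what both sites say about the one underlying thing.

```python
# Two invented URIs meant to name the same external person.
same_as = [
    ("http://site-a.example/people/socrates", "http://site-b.example/id/4711"),
]

# What each site asserts about its own URI (invented data).
facts = {
    "http://site-a.example/people/socrates": {"occupation": "philosopher"},
    "http://site-b.example/id/4711": {"birthplace": "Athens"},
}

# Once the co-reference is asserted, merge everything said about the concept.
merged = {}
for a, b in same_as:
    merged.update(facts.get(a, {}))
    merged.update(facts.get(b, {}))
print(merged)  # {'occupation': 'philosopher', 'birthplace': 'Athens'}
```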
When the names are only about addressing other stuff on the net, they are used mainly for retrieval. When the names encode a predicate calculus of assertions made by a content provider, now they can be processed in other ways besides just retrieval.
So, that’s the big shift he’s on about: the fact that nowadays there are people sitting around in meetings trying to decide the best encodings for their favorite list of ontological hypotheses and then imagining automated reasoning on that.
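As a tiny example of “processing besides retrieval” (again, just my sketch): take a handful of asserted wasBornBefore facts, treat the relation as transitive, and derive new facts until nothing new appears.

```python
# Asserted facts (invented), as pairs meaning "a was born before b".
born_before = {("Socrates", "Plato"), ("Plato", "Aristotle")}

# One inference rule: transitivity.  Apply it until the set stops growing.
changed = True
while changed:
    changed = False
    for a, b in list(born_before):
        for c, d in list(born_before):
            if b == c and (a, d) not in born_before:
                born_before.add((a, d))
                changed = True

print(sorted(born_before))
# [('Plato', 'Aristotle'), ('Socrates', 'Aristotle'), ('Socrates', 'Plato')]
```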
Essentially, W3C has taken a decade or so to get everyone up to speed on AI 101.
You remember the “Cyc” project, right? It’s the same thing except in a form that creates competitive markets both for perfecting the extraction of reliable models from mark-up on the web and for innovating in reasoning rules.
Tim’s a little full of hot air on one point: the Web doesn’t actually name “documents” in any familiar sense of the word. Rather, URLs are just an expansion of the IP address space to include an arbitrarily large number of address bits, with the constraint that the usual 48 bits of IPv4 address and port (or 144 with IPv6) determine routing up to a point, with the rest of the bits handled by whoever picks up the packets so delivered. That is, the web doesn’t name “documents”; it names “virtual nodes” or “routes” — it names territories that are found on the ICANN maps — it names, ultimately, entries in ICANN’s database, with extra bits tagged on so that each of those names can be subdivided into any greater number of names. The web names property deeds to database entries, in other words.
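For what it’s worth, here’s a small sketch of that split, using Python’s standard URL parser on a made-up URL: the host and port are the part DNS (and ultimately ICANN’s registry) resolves for routing, and everything after them is just extra name bits interpreted by whoever answers at that address.

```python
from urllib.parse import urlsplit

parts = urlsplit("http://example.org:8080/people/socrates?view=full")

print(parts.hostname, parts.port)  # example.org 8080  (the routing part)
print(parts.path, parts.query)     # /people/socrates view=full  (owner-defined bits)
```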
“Documents,” in any interesting sense of the word, are “ontological hypotheses” of the sort I described above — so, we’re only just now really, barely kinda getting to the point where we’ll begin to see systems for actually naming documents.
That will be hugely important. For one thing, as Humpty Dumpty would note, when we name deeds rather than “documents” then the owner of the deed gets to decide what the names we use mean. (What does “Google for X” mean?) For another thing, if we link to “documents” (which includes “virtual documents” that actually represent some service that computes a result), then when we ask the net to give us a document, routing and provisioning can be blended: data can be cached and services instantiated wherever convenient, in response to demand. The user is asking for a specific action or specific piece of data: not requesting a service of a property owner. Content providers don’t need a specific host, necessarily: they just need to make their content addressable.
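One way to picture content addressability (a sketch of my own, not a specific proposal from anyone quoted here): name the document by a hash of its bytes, so any host that happens to hold the bytes can answer for the name, and the answer can be checked against the name itself.

```python
import hashlib

# An invented document, named by its content rather than by a deed to a host.
document = b"My trip to the Acme Conference, draft 3."
name = "sha256:" + hashlib.sha256(document).hexdigest()
print(name)

# A copy fetched from anywhere is verifiable without trusting the host.
assert hashlib.sha256(document).hexdigest() == name.split(":", 1)[1]
```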
Users will benefit from this a lot, but that’s a separate discussion.
Facebook et al. aren’t examples of all of this: they are the losers here. Right now, on Facebook etc., users create “documents” of their personal data, but those “documents” are co-extensive with database entries privately owned by Facebook. The new “Open Social Graph” push is aimed at convincing users to at least represent the data they are putting in using a mark-up format that makes it easy to “scrape” — potentially trashing the “property value” of Facebook’s DNS entries and in-house databases. (If the open standards take off, the next big social site will probably be one that scrapes tons of data and then tells a few million users, all at once, “by the way, you have a free account over here.”)
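For a rough idea of the kind of “scraping” those open formats invite, here’s a hypothetical sketch that pulls friend links out of XFN-style rel attributes; the page snippet is invented.

```python
from html.parser import HTMLParser

# An invented profile page that marks up friend links XFN-style.
page = '''
<a href="http://alice.example/" rel="friend met">Alice</a>
<a href="http://bob.example/" rel="contact">Bob</a>
'''

class FriendLinks(HTMLParser):
    def __init__(self):
        super().__init__()
        self.friends = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "friend" in (a.get("rel") or "").split():
            self.friends.append(a.get("href"))

parser = FriendLinks()
parser.feed(page)
print(parser.friends)  # ['http://alice.example/']
```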
In other words, the new stuff affecting Facebook et al. is a set of standards in which users store their personal data on the net as directly addressable documents, rather than as “documents” that live at a fixed address. (The “ontological hypothesis” about these documents is that a person stands behind each (or most) and that the data tells us something true about that person.)
A separate question is whether or why users ought to create these particular, personal-profile documents (or “documents”) at all — no matter how they are addressed. Of course, for the most part, people would be wise not to. Still, the same technical changes work out more interestingly in other areas:
Consider an alternative to Wikipedia: no central site where everyone edits one version of an article. Forget that. Instead, just a simple, host-independent article format that includes a reliable way to footnote other articles and other article authors, and a way to sign articles. And put that on something like a Usenet system for distributing articles. And then let editors compete by picking and choosing and working with authors to see who can assemble the best encyclopedia. The same technology that’s horking naive users of social network sites also enables that much better form of Wikipedia.
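A back-of-the-envelope sketch of what such a host-independent article format might look like (my own invention, not a real spec): the article’s name is a hash of its content, footnotes cite other articles by their hashes, and the author field stands in for a real cryptographic signature.

```python
import hashlib
import json

article = {
    "title": "Socrates",
    "body": "Classical Greek philosopher ...",
    "footnotes": ["sha256:ab12..."],     # names of other articles (placeholder)
    "author": "author-key-fingerprint",  # stand-in for a real signature
}

# The article's host-independent name: a hash of its canonical serialization.
blob = json.dumps(article, sort_keys=True).encode()
article_id = "sha256:" + hashlib.sha256(blob).hexdigest()
print(article_id)  # a name any editor can cite, no matter which host serves it
```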
Interesting times?
-t
Social graph is web2.1
Web 3.0 is the “off-line web” services
Silverlight, Google Gears, Flex and JNext
It’s great that someone with TBL’s influence is focusing on this. I hope Google really addresses this as part of OpenSocial. The benefit to Google of Social Network Portability is huge. They could create a whole new Google search – Google Profile search. You could search the web for people with particular attributes. We as individuals could choose what information we share. And we could all be friends without having to join the same site.
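Just to make that idea concrete, here’s a hypothetical sketch of what such a profile search could look like once portable profiles have been gathered from around the web; all the data and field names are invented.

```python
# Invented, already-gathered portable profiles.
profiles = [
    {"name": "Alice", "city": "Boston", "interests": ["jazz", "rdf"]},
    {"name": "Bob", "city": "Chicago", "interests": ["chess"]},
]

def profile_search(profiles, **attrs):
    """Return profiles matching every requested attribute value."""
    hits = []
    for p in profiles:
        ok = True
        for key, wanted in attrs.items():
            value = p.get(key)
            if isinstance(value, list):
                ok = ok and wanted in value
            else:
                ok = ok and wanted == value
        if ok:
            hits.append(p)
    return hits

print(profile_search(profiles, city="Boston", interests="rdf"))
# [{'name': 'Alice', 'city': 'Boston', 'interests': ['jazz', 'rdf']}]
```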
Here is an actual social graph of an actual on-line community.
Enjoy!
Nick,
I interpret Tim’s GGG in a different way. The most important thing about GGG, I believe, is that the web will be transformed from a publisher-oriented information organization structure to a viewer-oriented one. This transformation may be the key to the next-generation Web.
— Yihong
I think that this level of abstraction is a very good idea. Right now we have to make such links and relations using HTML links, which were made for a different purpose, and we often have no access to do that. And that separation is the right thing: different goals – different levels.