Friend of a Friend - The Ignored Web Standard for Social Networking
created Jan 29, 2020
Yesterday, I read this interesting post.
"Friend of a Friend: The Facebook That Could Have Been"
https://twobithistory.org/2020/01/05/foaf.html
Excerpts from that long post with my emphasis added.
The FOAF standard, or Friend of a Friend standard, is a now largely defunct/ignored/superseded web standard dating from the early 2000s that hints at what social networking might have looked like had Facebook not conquered the world.
The FOAF project, begun in 2000, set out to create a universal standard for describing people and the relationships between them. That might strike you as a wildly ambitious goal today, but aspirations like that were par for the course in the late 1990s and early 2000s. The web (as people still called it then) had just trounced closed systems like America Online and Prodigy. It could only have been natural to assume that further innovation in computing would involve the open, standards-based approach embodied by the web.
Many people believed that the next big thing was for the web to evolve into something called the Semantic Web. I have written about what exactly the Semantic Web was supposed to be and how it was supposed to work before, so I won’t go into detail here. But I will sketch the basic vision motivating the people who worked on Semantic Web technologies, because the FOAF standard was an application of that vision to social networking.
So why didn’t FOAF succeed? Why do we all use Facebook now instead? Let’s ignore that FOAF is a simple standard with nowhere near as many features as Facebook—that’s true today, clearly, but if FOAF had enjoyed more momentum it’s possible that applications could have been built on top of it to deliver a Facebook-like experience. The interesting question is: Why didn’t this nascent form of distributed social networking catch fire when Facebook was not yet around to compete with it?
There probably is no single answer to that question, but if I had to pick one, I think the biggest issue is that FOAF only makes sense on a web where everyone has a personal website. In the late 1990s and early 2000s, it might have been easy to assume the web would eventually look like this, especially since so many of the web’s early adopters were, as far as I can tell, prolific bloggers or politically engaged technologists excited to have a platform. But the reality is that regular people don’t want to have to learn how to host a website. FOAF allows you to control your own social information and broadcast it to social networks instead of filling out endless web forms, which sounds pretty great if you already have somewhere to host that information. But most people in practice found it easier to just fill out the web form and sign up for Facebook than to figure out how to buy a domain and host some XML.
Those excerpts do an injustice to the post. It's a good read.
Here are some links mentioned in that post.
http://lists.foaf-project.org/pipermail/foaf-dev/2003-July/005463.html
http://www.ldodds.com/foaf/foaf-a-matic
https://www.theguardian.com/technology/2004/feb/19/newmedia.media
Foaf files can be read by computers, so it should eventually be possible to answer queries such as: How old is Marc Canter; Show me pictures of bloggers who live in London; and Find recent articles written by people who went to ETech - always assuming people have put the relevant information in their foaf.rdf file.
"Chapter 18. What is DeanSpace?"
http://www.extremedemocracy.com/chapters/Chapter18-Hynes.pdf
https://programmingisterrible.com/post/39438834308/distributed-social-network
I used that foaf-a-matic website to create a small FOAF file that I placed here:
I added a couple of "friends" to the file for testing. These are people who I have met in person in Toledo and who posted regularly for many years at my message board toledotalk.com. The only friend info that I included in the file consists of their nicknames and the URLs for their personal websites.
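Stripped down, the file looks something like the following. This is a sketch, not my actual file; the friend's nickname and URL are placeholders.

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:nick>jr</foaf:nick>
    <foaf:homepage rdf:resource="https://sawv.org/"/>
    <!-- one foaf:knows entry per friend; placeholder values below -->
    <foaf:knows>
      <foaf:Person>
        <foaf:nick>examplefriend</foaf:nick>
        <foaf:homepage rdf:resource="https://example.com/"/>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>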
I view the friends section like a bookmarks list or blogroll that contains personal websites that I visit at least occasionally. I doubt that's what the FOAF standard means by "friends," but most people I know do not maintain personal websites; they use the silos.
My "friends" could be people like https://expeditionaryart.com who I have never met in person and who I have never communicated with, but I have bought several art-related items from her in recent years, and I enjoy her blog posts and watercolor sketches. She maintains a small business and a personal website that I visit occasionally. I'm also on her email list. In the olden days, Maria's website would have been added to my blogroll.
If personal website owners viewed their FOAF files as blogrolls, then it could help with the discovery of other websites. Unless blogrolls or bookmark sections on personal websites were marked up with Microformats, the XML RDF file would be easier for software to process.
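For comparison, a blogroll marked up with Microformats might look something like this. It's a sketch; the names and URLs are examples, and the h-card, u-url, and p-name classes come from the Microformats2 vocabulary.

<ul class="blogroll">
  <!-- each entry is an h-card: a person or site with a name and a URL -->
  <li class="h-card">
    <a class="u-url p-name" href="https://example.com/">Example Friend</a>
  </li>
  <li class="h-card">
    <a class="u-url p-name" href="https://expeditionaryart.com/">Expeditionary Art</a>
  </li>
</ul>

XFN goes a step further and puts the relationship itself into a rel attribute, such as rel="met friend" on the link.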
Semantic Web
In the above FOAF post, the author linked to his/her 2018 post.
"Whatever Happened to the Semantic Web?"
https://twobithistory.org/2018/05/27/semantic-web.html
And here's the corresponding Hacker News discussion.
https://news.ycombinator.com/item?id=18023408
Back in the aught years, I failed to understand the point of the Semantic Web concepts. It seemed like a solution to problems that could not be defined.
Sometimes, it feels like geeks aspire to complexity. When simple things work well, maybe too well, and too many people use the simple tech, then geeks invent reasons to add complexity. Maybe the pro-complexity geeks want to be the only ones who can understand the concepts.
I always assumed that the Semantic Web was plumbing or markup that was created and consumed by software. People would not manually write Semantic Web markup. It felt too theoretical.
Excerpts from the twobithistory.org post:
In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine.
They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other.
The Semantic Web was meant to be readable by humans??? I assume that meant for debugging purposes by programmers and designers. Surely it was not meant to be read by non-tech people.
This Two-Bit History post is another long and interesting article. More from the post.
To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream.
The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans.
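As a rough illustration of those "little bits," and not an example from the post: a page could link to a machine-readable RDF file from its head, and later incarnations such as RDFa let authors hang the metadata directly on existing tags as attributes.

<!-- FOAF autodiscovery: point software at an RDF description of this site -->
<link rel="meta" type="application/rdf+xml" title="FOAF" href="/foaf.rdf">

<!-- RDFa: attributes add machine-readable meaning, with no visual change -->
<p vocab="http://xmlns.com/foaf/0.1/" typeof="Person">
  <span property="name">jr</span>,
  <a property="homepage" href="https://sawv.org/">sawv.org</a>
</p>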
Over the past 10 years, the IndieWeb.org advocates have achieved the above, not with XML but by using Microformats, which are classes attached to HTML tags. I use some Microformats on my pages.
I don't know if the IndieWeb's usage of Microformats would fulfill the dreams of Semantic Web proponents, but Microformats are useful for identifying a myriad of web post types.
HTML5 introduced some Semantic Web-like tags that I have used in my posts over the past several years. These tags help define the article page, I guess. I don't know. Other than applying CSS rules to the tags, I have not found any good use for these tags. Some of these tags include: article, section, header, footer, aside, etc.
Excerpts from http://motherfuckingwebsite.com, which might be my favorite article about web design for documents that are meant to be read:
Look at this shit. You can read it ... that is, if you can read, motherfucker. It makes sense. It has motherfucking hierarchy. It's using HTML5 tags so you and your bitch-ass browser know what the fuck's in this fucking site. That's semantics, motherfucker.
I think that MF post only contains two HTML5 tags: header and aside.
Maybe these HTML5 tags would be useful for complex documents. I have used them because others were using them and suggesting their usage, but I don't think that they are needed.
Currently at sawv.org, I'm using the following in an article page:
<article role="main" class="h-entry">
<header>
<h1 class="entry-title p-name">Friend of a Friend - The Ignored Web Standard for Social Networking</h1>
</header>
<section class="entry-content e-content">
<p>article body</p>
</section>
<footer style="display:none;">
<p class="p-author h-card">
<a class="u-uid" href="/info.html">jr</a> -
<a class="u-url" href="">#</a>
</p>
</footer>
</article>
All of those class attributes are Microformats.
A page or a website can be tested at https://indiewebify.me to see if it's marked up with Microformats.
Back to the Two-Bit History post about the Semantic Web:
Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities."
Man, Cory made those observations in 2001, and history proved that he was correct.
Voluntarily supplied metadata might work in a controlled environment, such as within a corporation where employees create documents for internal usage, and the company maintains a standards document that describes how to format web pages. Everyone would use the same content management system.
More from the Two-Bit post:
Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks.
Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible.
The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize.
And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.”
The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as has been discussed on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.
That 2018 post does not mention Microformats, but it does mention OpenGraph and JSON-LD.
From the Two-Bit post:
What’s fascinating about JSON-LD and OpenGraph is that you can use them without knowing anything about subject-predicate-object triplets, RDF, RDF Schema, ontologies, OWL, or really any other Semantic Web technologies—you don’t even have to know XML. Manu Sporny has even said that the JSON-LD working group at W3C made a special effort to avoid references to RDF in the JSON-LD specification. This is almost certainly why these technologies have succeeded and continue to be popular. Nobody wants to use a tool that can only be fully understood by reading a whole family of specifications.
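For reference, here is roughly what the two look like in a page's head. The titles, names, and URLs are placeholder values.

<!-- OpenGraph: plain meta tags that Facebook and others read -->
<meta property="og:title" content="Example Post Title">
<meta property="og:type" content="article">
<meta property="og:url" content="https://example.com/example-post.html">

<!-- JSON-LD: a small script block; no RDF or XML knowledge required -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Post Title",
  "author": { "@type": "Person", "name": "jr" }
}
</script>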
Again, the admin tax is probably the reason why some of these ideas, including the IndieWeb, don't get accepted by a wide audience. From the Two-Bit post:
Sean B. Palmer, an Internet Person that has scrubbed all biographical information about himself from the internet but who claims to have worked in the Semantic Web world for a while in the 2000s, posits that the real problem was the lack of a truly decentralized infrastructure to host the Semantic Web on.
To host your own website, you need to buy a domain name from ICANN, configure it correctly using DNS, and then pay someone to host your content if you don’t already have a server of your own. We shouldn’t be surprised if the average person finds it easier to enter their information into a giant, corporate data repository. And in a web of giant, corporate data repositories, there are no compelling use cases for Semantic Web technologies.
The Semantic Web or similar ideas may still get used within corporations, on the good old intranet.
More from Two-Bit:
Imagine a web where, rather than filling out the same tedious form every time you register for a service, you were somehow able to authorize services to get that information from your own website.
I can do something like that now, via IndieAuth, which is another IndieWeb idea.
If a service permits logging in via IndieAuth, I can enter the URL for my website. The IndieAuth process determines that I want a login code sent to my email account. I enter that code, IndieAuth completes the process, and I'm logged into the service. If it's my first time, that also registers me at the service or website. Instead of email, the IndieAuth process can rely on users being logged into either their GitHub or Twitter accounts. I prefer email. I work to secure my email account.
The point is that my identity is my sawv.org domain name. Instead of using Twitter solely for logging into a service, IndieAuth permits IndieWeb users to use their websites as their identities.
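Under the hood, this mostly amounts to a few links on my homepage. A minimal sketch, assuming the indieauth.com service; the GitHub account and email address are placeholders:

<!-- tell IndieAuth clients which authorization endpoint to use -->
<link rel="authorization_endpoint" href="https://indieauth.com/auth">

<!-- rel="me" links tie outside accounts to this domain -->
<a rel="me" href="https://github.com/example">GitHub</a>
<a rel="me" href="mailto:jr@example.com">Email</a>

As I understand it, the verification works in both directions: the GitHub profile, for example, has to link back to the domain before the login is accepted.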
That Two-Bit post linked to another post, which, once again, is an interesting read.
https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
2018 email conversation with Cory Doctorow.
https://twobithistory.org/doctorow.txt
Wow, Cory's 2001 essay about metadata was posted to The Well.
https://people.well.com/user/doctorow/metacrap.htm
Sidenote: Back in the early aught years when I thought about message board design and built my toledotalk.com message board, I read this 1997 story by Wired about The Well.
https://www.wired.com/1997/05/ff-well
The above HN thread about the Semantic Web contained 201 comments. Here are excerpts from the top comment:
I have some insight here because I did a postdoc working on anatomy ontologies in the UK. A big part of the problem with the semantic web is that lots of people in European academia use it as a collection of buzzwords for making grant proposals sexier, without understanding or caring what it actually means.
I would prepare conference presentations where I was just filling slides up with BS to fill time.
Devs from other universities (gotta check that international research box!) understood the technology even less than our team did. We provided them a tool for storing RDF triples for their webpage so they could store triples about anatomical relationships. They wanted to use said RDF store as their backend database for storing things like usernames and passwords. facepalm
So you have all these academics publishing all this extremely important sounding literature about the semantic web, but as soon as you pry one nanometer deep, it's nothing but a giant ball of crap.
Reply comment:
Wow, had such a similar story, except with a Master's degree. Most of my graduate program was consumed by a highly-cited and quite arrogant professor who focused on the Semantic Web. I took 2(!) separate Semantic Web courses and couldn't understand what the fuss was all about.
By the end, I figured out the professor was full of crap. That was one of the big experiences that helped me figure out that academia wasn't for me. And I would be perfectly fine never hearing about triples, RDF, or that other nonsense again!
-30-