# Info

[About](/about.html) - sora.soupmode.com is a test site for my web-based static site generator called Sora, which is written in Lua.

Feed formats:

* [h-feed](/hfeed.html)
* [JSON feed](/feed.json)
* [RSS3 feed](/rss3.txt)

![rss3 text logo](http://www.aaronsw.com/2002/rss30logo)

??? - [what is RSS 3.0?](/2018/08/21/rss-30-jokey-but-useful-spec.html)

Titles only. I'm uninterested in including the full post within a feed. I like visiting websites from a feed reader. A title and/or the opening paragraph should provide enough detail to tell the reader whether the post is worth reading.

Most of the time, I don't use a feed reader. I prefer visiting websites directly, either by recalling the URLs from memory or by clicking links kept on my "favorites" HTML page. I prefer to "follow" personal publishers who post at most a few times per week.

I like Aaron Swartz's simple RSS 3.0 spec, which he created in 2002 as a humorous backlash to arguments about the future of RSS. Those RSS disputes ultimately led to the asinine [feed wars](https://indieweb.org/RSS_Atom_wars) that ensnared many geeks in the blogosphere.

RSS 3.0 looks like something created in the Gopher world. RSS3 requires no escaping, no double quoting, and no single quoting. Extended ASCII characters, however, would need to be encoded as usual for web content. Whitespace is used to separate items. A regex or a split command can parse the content on each line. It's easy for humans to read too. It was brilliant as a joke for that time period, and it's still brilliant today for my tech prefs.

I miss Aaron. I think that he would have been dismayed by today's massively bloated websites, especially text-focused sites that are meant for browsing-only users. He would have been displeased with today's social media toxicity and lax privacy practices. But then again, social media and privacy seem like an oxymoron.
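That line-oriented, whitespace-separated format really is trivial to parse. A minimal sketch in Python (Sora itself is Lua, and the field names and URLs below are illustrative assumptions, not taken from an actual feed):

```python
def parse_rss3(text):
    """Split an RSS 3.0-style plain-text feed into items.

    Items are separated by blank lines; each line inside an item is a
    "Field: value" pair. No escaping or quoting is involved, so a
    partition on the first colon is all the parsing needed.
    """
    items = []
    for block in text.strip().split("\n\n"):
        item = {}
        for line in block.splitlines():
            name, _, value = line.partition(":")
            item[name.strip().lower()] = value.strip()
        items.append(item)
    return items

# Hypothetical two-item feed for illustration.
feed = """title: First post
link: http://example.com/first.html

title: Second post
link: http://example.com/second.html"""

print(parse_rss3(feed))
# → [{'title': 'First post', 'link': 'http://example.com/first.html'},
#    {'title': 'Second post', 'link': 'http://example.com/second.html'}]
```

Splitting on the first colon keeps the colons inside the URLs intact, which is the whole trick.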
Aaron would have continued his activism for making content freely available to all. And I think that Aaron would have been a fan of at least some aspects of the IndieWeb.

Sora docs:

* [README](/sora-readme.html)
* [User Guide](/sora-user-guide.html)
* [API](/sora-api.html)
* [Wren Features not Included with Sora](/wren-features-not-included-with-sora.html)

---

Every post contains the following output file types: HTML, markup text, and JSON.

* * *

The default Sora JSON output format, however, can be overridden with custom JSON that exists within the markup for the post. This JSON would be surrounded by Sora custom commands ``.

Example post that creates a JSON feed file in the format of .

* * *

- the main point of this web post

Validate the JSON file with .

---

For a normal Sora post, the JSON format reflects the markup and the HTML output.

The [IndieWeb](https://indieweb.org) promotes including [Microformats](http://microformats.org) within the HTML output. Instead of programs consuming XML and JSON files, the programs would consume the HTML files, marked up with Microformats. The IndieWeb also promotes a standard way to convert a Microformatted HTML file to JSON.

* - parser

Submitting the [info.html](/info.html) page to the above parser [produces the following JSON format](http://pin13.net/mf2/?url=http%3A%2F%2Fsora.soupmode.com%2Finfo.html), based upon the Microformats contained within info.html.

Submitting Sora's [h-feed](http://sora.soupmode.com/hfeed.html) file to the parser [produces this JSON output](http://pin13.net/mf2/?url=http%3A%2F%2Fsora.soupmode.com%2Fhfeed.html).

The h-feed file is an HTML page that contains the most recently created posts, sorted from newest to oldest. It's similar to an RSS or jsonfeed.org feed file. The h-feed file could be the home page for a blog.

A few feed readers can parse h-feed files. This saves a publisher the trouble of creating feed files in RSS, JSON, Atom, and whatever else gets proposed.
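The idea that programs consume the Microformatted HTML directly can be shown with a toy extractor. This is not a real microformats2 parser (the sample markup is made up, and the class handling is heavily simplified), but it sketches how h-entry titles come straight out of h-feed HTML, here in Python with only the standard library:

```python
from html.parser import HTMLParser

class HFeedTitles(HTMLParser):
    """Collect the text of elements carrying the mf2 class "p-name".

    Simplified: ignores nested p-name edge cases and void tags.
    """
    def __init__(self):
        super().__init__()
        self.titles = []
        self._depth = 0  # > 0 while inside a p-name element

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self._depth:
            self._depth += 1          # nested tag inside a title
        elif "p-name" in classes:
            self._depth = 1
            self.titles.append("")    # start a new title

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.titles[-1] += data

# Hypothetical two-entry h-feed fragment for illustration.
sample = """
<div class="h-feed">
  <article class="h-entry"><a class="p-name u-url" href="/a.html">First post</a></article>
  <article class="h-entry"><a class="p-name u-url" href="/b.html">Second post</a></article>
</div>
"""

parser = HFeedTitles()
parser.feed(sample)
print(parser.titles)  # → ['First post', 'Second post']
```

A real consumer would use a proper microformats2 parser, which also collects `u-url`, `dt-published`, and the rest of the vocabulary into the JSON structure shown by the pin13.net parser.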
Since a website already creates HTML for humans to read, adding Microformats means the same HTML page can also be processed by computer programs. That's the theory espoused by the IndieWeb. But most feed readers do not support the h-feed format, so h-feed has been slow to catch on.

---

Granary is a cool service, created by an IndieWeb user. I only need my Sora code to produce one feed format: h-feed. I can remove the code that creates the JSON feed because Granary can read h-feed files (HTML Microformats) and create Atom or JSON feeds.

* JSON Feed
* Atom (XML)

Of course, this relies on a third-party service that could disappear some day. Maybe by then, more CMS apps and feed readers will produce and consume h-feed. It's HTML, after all.

I access my h-feed files often because they're auto-generated by my Wren and Sora apps, and the content on the pages is displayed in reverse chronological order by creation date, newest to oldest. The h-feed file has utility for me. The RSS, Atom, and JSON feeds have no utility for me, unless for some odd reason I want to subscribe to my own website within a feed reader. Outside of testing the feeds, I don't know why I would subscribe to my own website. Of course, the RSS and Atom feeds are meant for others, just like the HTML pages.

Long ago, I lost interest in semantic web concepts. Would Microformats count as enabling the semantic web? HTML pages that use Microformats can be read by browsing-only users, and those same pages can act as plumbing to be processed by computer programs.

---

On Oct 26, 2018, I added RSS3 support to my simple feed-reading website, found at . At the moment, Finch processes RSS XML, Atom XML, and RSS3 text feeds. I'll add JSON Feed support later. I'd like to add h-feed support too, but I have not created a Microformats parser. Microformats parsers exist in some languages.

When I tested my Finch code, I saw feed processing errors for the following XML-formatted feeds.

* Error: couldn't parse xml. lxp says: duplicate attribute for URL
* Error: couldn't parse xml. lxp says: not well-formed (invalid token) for URL https://www.jeremycherfas.net/blog.rss
* Error: couldn't parse xml. lxp says: undefined entity for URL https://colinwalker.blog/feed/

Feed readers have had to account for broken or malformed XML feeds. I'm not doing that. If the XML parser cannot process the RSS and Atom files, then my script moves on to the next feed.

Man, these XML files are complicated. JSON Feed files look simpler, and they seem easier to process. RSS3 plain text feed files would be even easier, in my opinion. I wonder how muddied h-feed pages could get. ¯\\\_(ツ)\_/¯
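That move-on-to-the-next-feed behavior is simple to sketch. Finch apparently uses lxp, a Lua Expat binding, so this Python version with the standard library XML parser is only an illustration; the URLs, sample documents, and function name here are made up:

```python
import xml.etree.ElementTree as ET

def parse_feeds(feed_documents):
    """Parse each fetched feed document; skip any that is malformed XML.

    feed_documents maps a feed URL to raw XML text already fetched.
    Malformed feeds are reported and skipped rather than repaired.
    """
    parsed = {}
    for url, xml_text in feed_documents.items():
        try:
            parsed[url] = ET.fromstring(xml_text)
        except ET.ParseError as err:
            print(f"Error: couldn't parse xml. {err} for URL {url}")
    return parsed

# Hypothetical fetched feeds: one well-formed, one with a mismatched tag.
feeds = {
    "http://example.com/good.xml": "<rss><channel><title>ok</title></channel></rss>",
    "http://example.com/bad.xml": "<rss><channel><title>broken</channel></rss>",
}

results = parse_feeds(feeds)
print(sorted(results))  # only the well-formed feed survives
```

Strict parsing keeps the reader code small; the cost is silently dropping feeds like the three listed above until their publishers fix them.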