While reading this:
Google built a feed platform that is freely available for any user with a Google account.
... and re-reading this:
The data technologies powering Google Reader can easily be used and extended by third-party feed aggregators for use in their own applications.
... it struck me:
- centralized aggregation, decentralized delivery and UI
- the hardest part of a reader (IMHO) is aggregation: it is full of pitfalls, and there is little reward for making it "just work"
- more interesting: the visible stuff
- solves bandwidth issues for everybody
- solves stability issues for everybody
- saves costs for the developer/hoster (these kinds of applications are way more resource intensive than the typical blog software)
Note that this is completely in line with the software industry's current trend:
- there is no money to be made in basic infrastructure: someone already does it better, and for free
- now every web developer can create their own feed reader within an afternoon.
- which of course means (repeat after me:) software is a commodity.
To each his own
Custom software is where it's at: small projects developed with minimum effort, for a small set of uses or users, possibly active only for a limited time, until you don't need them any more or until something better comes along.
Which also means it's important to make migration between software solutions painless. Say, as painless as exporting and then importing an OPML file.
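For reference, an OPML subscription list really is that simple: a small XML file listing one `outline` element per feed. A minimal, hand-written example (the feed title and URL are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My subscriptions</title>
  </head>
  <body>
    <!-- one outline element per subscribed feed -->
    <outline type="rss" text="Example Blog" xmlUrl="http://example.com/feed.xml"/>
  </body>
</opml>
```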
Or, say, as painless as simply logging in.
Because when the underlying infrastructure for all those little feed readers is provided by Google, there is no need to migrate any data at all.
(And, of course, this all also ties us closer to the data monster.)
Now I'm seriously considering scrapping the little work I did on a Ruby feed aggregation infrastructure. There's cooler stuff to do than thinking up flow charts of HTTP error states.
Or is there?
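For the curious: the flow-chart problem is real, but its core can be boiled down to a status-code dispatch table. A minimal sketch in Ruby (the method name, action symbols, and redirect limit are all made up for illustration) of the decisions a polite fetcher has to make:

```ruby
# Hypothetical sketch of mapping HTTP status codes to aggregator actions.
MAX_REDIRECTS = 5

def next_action(status, redirects = 0)
  case status
  when 200      then :parse        # fresh content, hand it to the feed parser
  when 304      then :skip         # Not Modified: our conditional GET paid off
  when 301      then :update_url   # permanent move: remember the new location
  when 302, 307                    # temporary redirect: follow, but not forever
    redirects < MAX_REDIRECTS ? :follow : :give_up
  when 410      then :unsubscribe  # Gone: stop polling this feed entirely
  when 500..599 then :retry_later  # server trouble: back off, try next cycle
  else               :give_up
  end
end
```

The fiddly part isn't this table; it's the bookkeeping around it (ETags, Last-Modified headers, backoff timers, redirect loops), which is exactly the work a centralized aggregator would absorb.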