
Google Reader
URL: http://www.google.com/reader/
Status: Online!
Archiving status: In progress...
Archiving type: Unknown
Project source: https://github.com/ArchiveTeam/greader-grab
Project tracker: N/A
IRC channel: #donereading (on hackint)

Shutdown notification

On March 13, 2013, Google announced on the Official Google Reader Blog that Reader would be retired as part of a round of "spring cleaning":

we will soon retire Google Reader (the actual date is July 1, 2013)

Backing up your own data

Backing up the historical feed data

Google Reader acts as a cache for RSS/Atom feed content, keeping deleted posts and deleted blogs accessible (if you can recreate the RSS/Atom feed URL). After the Reader shutdown, this data might still be available via the Feeds API, but we'd like to grab most of this data before July 1 through the much more straightforward /reader/ API.
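
If you want to poke at this API by hand, here is a minimal Python sketch (not the greader-grab code itself) of walking one feed's cached history; the continuation parameter c and the items/continuation fields in the JSON response are assumed to behave as in the public Reader API, and the example feed URL is just a placeholder.

import json
import urllib.parse
import urllib.request

API = "https://www.google.com/reader/api/0/stream/contents/feed/"

def fetch_history(feed_url):
    # Walk the feed's cached items, 100 at a time, following the
    # continuation token until Reader stops returning one.
    continuation = None
    while True:
        query = {"r": "n", "n": "100"}
        if continuation:
            query["c"] = continuation
        url = (API + urllib.parse.quote(feed_url, safe="")
               + "?" + urllib.parse.urlencode(query))
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read().decode("utf-8"))
        for item in data.get("items", []):
            yield item
        continuation = data.get("continuation")
        if not continuation:
            break

# Placeholder feed URL:
for item in fetch_history("http://example.com/rss.xml"):
    print(item.get("title"))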

Your help is needed

Give us your feed URLs

We need to discover as many feed URLs as possible. Not all of them can be discovered through crawling, so we need your OPML files. (Though if you have any private or passworded feeds, please strip them out.)

Upload OPML files and lists of URLs to:

http://allyourfeed.ludios.org:8080/
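
If you are unsure whether your OPML is clean, the rough sketch below (assuming the usual OPML layout of <outline xmlUrl="..."> elements) drops feeds whose URLs embed credentials or match a keyword you consider private before you upload; the keyword list is only a placeholder heuristic.

import xml.etree.ElementTree as ET
from urllib.parse import urlsplit

# Placeholder heuristics for what counts as "private".
PRIVATE_KEYWORDS = ("private", "token", "secret")

def strip_private_feeds(in_path, out_path):
    tree = ET.parse(in_path)
    for parent in list(tree.getroot().iter()):
        for outline in list(parent.findall("outline")):
            url = outline.get("xmlUrl") or ""
            if not url:
                continue
            has_credentials = "@" in urlsplit(url).netloc
            if has_credentials or any(k in url.lower() for k in PRIVATE_KEYWORDS):
                parent.remove(outline)
    tree.write(out_path, xml_declaration=True, encoding="utf-8")

strip_private_feeds("subscriptions.xml", "subscriptions-public.xml")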

Run the grab on your Linux machine

This project is not in the Warrior yet, so follow the install steps at https://github.com/ArchiveTeam/greader-grab

(Up to ~5GB of your disk space will be used; items are immediately uploaded elsewhere.)

Crawl websites to discover blogs and usernames

We need to discover millions of blog/username URLs on popular blogging platforms (which we'll turn into feed URLs).
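
As a rough illustration of the URL-to-feed step, here is a small Python sketch; the per-platform feed paths are assumptions based on each platform's conventional feed locations, not output of any project code.

from urllib.parse import urlsplit

def to_feed_url(blog_url):
    # Map a discovered blog URL to the platform's conventional feed URL.
    host = urlsplit(blog_url).netloc.lower()
    if host.endswith(".blogspot.com"):
        return "http://%s/feeds/posts/default" % host   # Blogger Atom feed
    if host.endswith(".wordpress.com"):
        return "http://%s/feed/" % host                 # WordPress.com RSS feed
    if host.endswith(".tumblr.com"):
        return "http://%s/rss" % host                   # Tumblr RSS feed
    return None  # unknown platform; needs its own rule

print(to_feed_url("http://example.blogspot.com/2013/06/some-post.html"))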

Join #donereading and #archiveteam on efnet if you'd like to help with this.

Tools for URL discovery

git clone https://github.com/trivio/common_crawl_index
cd common_crawl_index
pip install --user boto
PYTHONPATH=. python bin/index_lookup_remote 'com.blogspot'

You can copy and edit bin/index_lookup_remote to print just the necessary information:

# Print the entire URL (url is the reversed-domain key yielded by the index lookup):
rest, schema = url.rsplit(":", 1)
domain, path = rest.split('/', 1)
print schema + '://' + '.'.join(domain.split('.')[::-1]) + '/' + path

# Print just the subdomain:
print '.'.join(url.split('/', 1)[0].split('.')[::-1])

# Print just the first two URL /path segments:
rest, schema = url.rsplit(":", 1)
domain, path = rest.split('/', 1)
print schema + '://' + '.'.join(domain.split('.')[::-1]) + '/' + '/'.join(path.split('/', 2)[0:2])

# Print just the first URL /path segment:
rest, schema = url.rsplit(":", 1)
domain, path = rest.split('/', 1)
print schema + '://' + '.'.join(domain.split('.')[::-1]) + '/' + '/'.join(path.split('/', 1)[0:1])

Pipe the output through | uniq | bzip2 > sitename-list.bz2, check the result with bzless, and upload it to the OPML collector above.

Add to the above list of blog platforms

See:

Crawl Google Reader itself for feeds

https://www.google.com/reader/directory/search?q=keyword-here

https://www.google.com/reader/directory/search?q=keyword-here&start=10
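
A tiny sketch for generating these paginated search URLs from a keyword list; the number of pages per keyword is an arbitrary assumption.

from urllib.parse import urlencode

def directory_search_urls(keywords, pages=10, step=10):
    # Yield the directory search URL for each keyword and start offset.
    for keyword in keywords:
        for page in range(pages):
            params = {"q": keyword}
            if page:
                params["start"] = page * step
            yield ("https://www.google.com/reader/directory/search?"
                   + urlencode(params))

for url in directory_search_urls(["knitting", "linux"]):
    print(url)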

Make greader-grab not save the embedded styles and images on 404 pages

We get a ton of 404s from Reader's feed API, e.g. https://www.google.com/reader/api/0/stream/contents/feed/https%3A%2F%2Faws.amazon.com%2Frss%2404-this-please?r=n&n=100, and these 404 pages are bloating our WARCs. If greader-grab used hanzo's warc-tools to rewrite the .warc.gz (replacing the 404 responses) before uploading, we would save a ton of space.
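
As a rough sketch of that rewrite step (using the warcio library as a stand-in for hanzo's warc-tools, and simply dropping the 404 response records rather than replacing them with stubs; the file names are placeholders):

from warcio.archiveiterator import ArchiveIterator
from warcio.warcwriter import WARCWriter

def strip_404s(in_path, out_path):
    # Copy every record except HTTP 404 responses into a new WARC.
    with open(in_path, "rb") as inp, open(out_path, "wb") as out:
        writer = WARCWriter(out, gzip=True)
        for record in ArchiveIterator(inp):
            if (record.rec_type == "response"
                    and record.http_headers is not None
                    and record.http_headers.get_statuscode() == "404"):
                continue  # skip the bloated 404 pages
            writer.write_record(record)

strip_404s("greader-00000.warc.gz", "greader-00000-clean.warc.gz")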

Add gzip support to wget-lua

It would be quite helpful to have a wget-lua that supports gzip content encoding (vanilla wget doesn't support it either). This would speed up downloads and save a lot of bandwidth.

There have already been some attempts at making wget support gzip:

https://github.com/kravietz/wget-gzip (Windows-only; needs to work on Linux)

https://github.com/ptolts/wget-with-gzip-compression (based on a wget from 2003?)
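
For reference, this is roughly what gzip content negotiation looks like from the client side (plain Python rather than wget-lua; the feed URL is a placeholder):

import gzip
import urllib.request

req = urllib.request.Request(
    "http://example.com/rss.xml",
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":
        # The server honoured the request; inflate client-side.
        body = gzip.decompress(body)
print(len(body), "bytes after decompression")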

External links

WARCs are landing at http://archive.org/details/archiveteam_greader