Blogger

Blogger
URL http://www.blogger.com/
Status Online!
Archiving status In progress...
Archiving type Unknown
Project source blogger-discovery
Project tracker bloggerdisco
IRC channel #frogger (on hackint)

Blogger is a blog hosting service. On February 23, 2015, they announced that "sexually explicit" blogs would be restricted from public access a month later, but they soon withdrew the plan and said they wouldn't change their existing policies.[1] However, before that reversal, we had already decided to download everything.

Strategy

Find as many http://foobar.blogspot.com domains as possible and download them. Blogs often link to other blogs, so each individual blog saved helps discover others. A small-scale crawl of Blogger profiles (e.g. http://www.blogger.com/profile/{random number up to 35217655}) will also provide links to the blogs authored by each user (e.g. https://www.blogger.com/profile/5618947 links to http://hintergedanke.blogspot.com/), although note that this does not cover all bloggers or all blogs and is merely a starting point for further discovery.
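
As a rough illustration only (this is not the project's discovery code, and the profile ID below is just an arbitrary example), a single profile page can be fetched and scanned for blog links like this:

#!/bin/bash
# Fetch one Blogger profile page and list the blogspot.com blogs it links to.
# 5618947 is an arbitrary example ID; random IDs up to ~35217655 can be tried.
id=5618947
wget -qO - "http://www.blogger.com/profile/$id" | grep -oE 'https?://[a-z0-9.-]+\.blogspot\.com' | sort -u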

Country Redirect

Accessing http://whatever.blogspot.com will usually redirect you to a country-specific domain depending on your IP address (e.g. whatever.blogspot.co.uk, whatever.blogspot.in, etc.), which in some cases may be censored or edited to meet local laws and standards. This can be bypassed by requesting http://whatever.blogspot.com/ncr as the root URL.[2][3]
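
A quick way to see where a given blog sends you, with and without /ncr (whatever.blogspot.com is a placeholder name, and curl is assumed to be available):

# Show the redirect target for the plain URL and for the /ncr form
curl -sI "http://whatever.blogspot.com/" | grep -i '^location'
curl -sI "http://whatever.blogspot.com/ncr" | grep -i '^location'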

Downloading a single blog with Wget

These Wget parameters can download a BlogSpot blog, including comments and any on-site dependencies. They should also reject redundant pages such as the /search/ directory and multiple occurrences of the same page under different query strings. This has only been tested on blogs using a Blogger subdomain (e.g. http://foobar.blogspot.com), not custom domains (e.g. http://foobar.com). Both instances of [URL] should be replaced with the same URL. A simple Perl wrapper is available here.

wget --recursive --level=2 --no-clobber --no-parent --page-requisites --continue --convert-links --user-agent="" -e robots=off --reject "*\\?*,*@*" --exclude-directories="/search,/feeds" --referer="[URL]" --wait 1 [URL]

UPDATE:

Use this improved bash script instead, in order to bypass the adult content confirmation. BLOGURL should be in http://someblog.blogspot.com format.

#!/bin/bash
# Bypass the adult-content interstitial: fetch the blogin.g page, extract the
# guestAuth URL from it, visit that URL, and save the resulting session cookies.
blogspoturl="BLOGURL"
wget -O - "blogger.com/blogin.g?blogspotURL=$blogspoturl" | grep guestAuth | cut -d'"' -f 4 | wget -i - --save-cookies cookies.txt --keep-session-cookies
# Then crawl the blog itself, reusing those cookies.
wget --load-cookies cookies.txt --recursive --level=2 --no-clobber --no-parent --page-requisites --continue --convert-links --user-agent="" -e robots=off --reject "*\\?*,*@*" --exclude-directories="/search,/feeds" --referer="$blogspoturl" --wait 1 "$blogspoturl"

Export XML trick

Add this to a blog URL and it will download the most recent 499 posts (that is the limit): /atom.xml?redirect=false&max-results=
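
For example, assuming max-results takes the number of posts to return (capped at 499 as noted above), the feed of a hypothetical blog can be saved with:

wget -O foobar-posts.xml "http://foobar.blogspot.com/atom.xml?redirect=false&max-results=499"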

How can I help?

Running the Warrior

Start up the Warrior and select the Blogger Discovery project. Do not increase the default concurrency of 2, because Google limits requests aggressively (and you get blocked for ~45 minutes, maybe less). Moreover, if you see "503 Service Unavailable" messages, decrease concurrency to 1.

Running the script manually

See details here: http://github.com/ArchiveTeam/blogger-discovery

Do not increase the concurrency above 2, because Google limits requests aggressively (and you get blocked for ~45 minutes, maybe less). Moreover, if you see "503 Service Unavailable" messages, decrease concurrency to 1.

Speeding things up

Disclaimer

The following method is not ArchiveTeam's official recommendation. You are solely responsible for any consequences of using this abusive method.

After solving Google's captcha and using the resulting cookie, the request limit does not apply for three hours; that is, you can hammer Blogger as intensely as you like.

You can find a modified discover.py script here, which uses a cookies file, decreases the sleep time when a captcha is encountered (so that you don't have to wait 45 minutes if you have the cookie), and drops the sleep between requests entirely. Replace the original discover.py script with this one.

The other thing you have to do, every three hours, is:

  • When you see the script bump into a captcha, go to http://blogger.com/ and solve it
  • Export your cookies with some tool, e.g. for Firefox there is this extension. Save the file as cookies.txt into the folder where discover.py resides.

In case you want to renew the cookie before the three hours expire, find and delete the cookie named GOOGLE_ABUSE_EXEMPTION in your browser, and then repeat the steps above. Note: changing the cookie's expiry date has no effect.

DO NOT leave this script unattended without solving the captcha, at the latest right after the cookie expires; otherwise items will continuously be wasted. If you have to leave the script unattended, schedule its stop by issuing sleep 10400; touch STOP in its folder right when renewing the cookie. (This will stop the script's operation after about 3 hours; when you want to restart the script, issue rm STOP beforehand.)
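
A minimal sketch of that renew-and-stop routine, run from the folder containing discover.py and assuming the usual convention that a STOP file makes the script halt after the current item:

# After solving the captcha and saving a fresh cookies.txt (see the steps above):
rm -f STOP                     # clear any previous stop marker before (re)starting the script
( sleep 10400; touch STOP ) &  # roughly three hours later, tell the script to stop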

External links

References