This page contains some tips and tricks for exploring soon-to-be-dead websites, to find URLs to feed into the Archive Team crawlers.
Open Directory Project data
The Open Directory Project offers machine-readable downloads of its data. You want the "content.rdf.u8.gz" from there.
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Quick-and-dirty shell parsing for the not-too-fussy:
grep '<link r:resource=.*dyingsite\.com' content.rdf.u8 | sed 's/.*<link r:resource="\([^"]*\)".*/\1/' | sort | uniq > odp-sitelist.txt
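For anyone not comfortable with the sed incantation, the same extraction can be sketched in Python (the regex and the dyingsite.com domain are illustrative assumptions; adjust to the site in question):

```python
import re

# Sketch: pull <link r:resource="..."> URLs for one domain out of the
# decompressed ODP dump (content.rdf.u8), deduplicating as we go.
LINK_RE = re.compile(r'<link r:resource="([^"]+)"')

def extract_links(lines, domain):
    """Yield unique URLs mentioning the given domain, in file order."""
    seen = set()
    for line in lines:
        for url in LINK_RE.findall(line):
            if domain in url and url not in seen:
                seen.add(url)
                yield url

# Usage:
#   with open('content.rdf.u8') as f:
#       urls = list(extract_links(f, 'dyingsite.com'))
```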
MediaWiki wikis, especially the very large ones operated by the Wikimedia Foundation, often link to a large number of important sites hosted with a service.
./mwlinkscrape.py "*.dyingsite.com" > mw-sitelist.txt
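mwlinkscrape.py's internals aren't shown here, but MediaWiki's own API exposes this data through its list=exturlusage module; a hedged sketch of building such a query (the endpoint and pattern are examples):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # example endpoint; any MediaWiki API works

def exturlusage_query(pattern, limit=500, offset=None):
    """Build a MediaWiki API query URL for pages containing external links.

    list=exturlusage matches external links; euquery takes a search
    string such as '*.dyingsite.com'.
    """
    params = {
        "action": "query",
        "list": "exturlusage",
        "euquery": pattern,
        "eulimit": limit,
        "format": "json",
    }
    if offset is not None:
        params["euoffset"] = offset  # page through large result sets
    return API + "?" + urlencode(params)
```

Fetching each URL and walking euoffset in eulimit-sized steps yields the full list, subject to the wiki's API limits.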
Search engines
There exist tools, such as GoogleScraper, which scrape various search engines using a web browser instead of an API. Some specific tips for individual engines follow.
Google
Google doesn't let its search results be scraped by automated tools, so you have to do it manually; still, some tools and tips let you do good discovery quite quickly.
To find results under a domain, use the search term site:dyingsite.com
If you want more than 10 results per page, add the num parameter to the URL like this:
https://www.google.com/search?q=site:dyingsite.com&num=100
To go to the next page of results, don't use the "Next" link at the bottom; that would give you only the next ten results. Instead, use the start parameter in the URL:
https://www.google.com/search?q=site:dyingsite.com&num=100&start=100
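Combining the num and start parameters, the full set of result-page URLs can be generated in one go; a small Python sketch (the query term is an example):

```python
from urllib.parse import quote_plus

def google_page_urls(term, per_page=100, max_results=1000):
    """Yield Google result-page URLs, walking start= in per_page steps."""
    base = "https://www.google.com/search?q=" + quote_plus(term)
    for start in range(0, max_results, per_page):
        yield f"{base}&num={per_page}&start={start}"

# Usage: paste these into the browser one by one (automated fetching
# trips Google's bot detection).
# for url in google_page_urls("site:dyingsite.com"):
#     print(url)
```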
You can go up to start=900, which gives the 901st–1000th results; Google doesn't let you browse beyond the first 1000 results. However, there is some good news:
- The estimated number of results shown is often ten or a hundred times higher than the number you'll actually be presented with. So don't panic.
- Should the number of results indeed exceed 1000, the easiest workaround is to click "Search tools", then "Any time", and select "Custom range". By setting a specific range you can reduce the number of results per search; going, say, year by year, you'll (hopefully) be presented with all the results.
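The custom date range is also expressible directly in the URL via the tbs parameter; this sketch generates year-by-year values for it (the cdr:1,cd_min,cd_max format is an observed convention of Google's result pages, not a documented API, so verify it still works before relying on it):

```python
def yearly_ranges(first_year, last_year):
    """Yield (tbs_value, year) pairs, one custom range per year."""
    for year in range(first_year, last_year + 1):
        yield f"cdr:1,cd_min:1/1/{year},cd_max:12/31/{year}", year

# Append as &tbs=... to a normal search URL, e.g.
# https://www.google.com/search?q=site:dyingsite.com&tbs=cdr:1,cd_min:1/1/2010,cd_max:12/31/2010
```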
Several tools exist for exporting the results efficiently. One of them is SEOquake, a Firefox extension. (In fact, exporting search results is just one of its features.) After installing the extension and restarting the browser, Google search results will have buttons to export (save or append) the results in CSV format. It is recommended to disable, in the SEOquake options, all the analyses appearing in the search results and on the toolbar; they just slow things down and occupy a lot of space in the CSV. After some repetitive but easy work, you'll have the URL list in your CSV(s). With the SEOquake analysis features turned off, it will be just the URLs enclosed in quotation marks. Replace the quotation marks with nothing in a text editor, or, for Linux terminal geeks,
cut -d'"' -f 2 is the way.
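The same cleanup can be done with Python's csv module, which handles the quoting for you (the single-column layout is assumed from SEOquake's export with the analysis features turned off):

```python
import csv
import io

def urls_from_seoquake_csv(text):
    """Return the first column of a SEOquake-style CSV export, unquoted."""
    return [row[0] for row in csv.reader(io.StringIO(text)) if row]

# Usage:
#   with open('seoquake-export.csv') as f:
#       urls = urls_from_seoquake_csv(f.read())
```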
Should Google stand in your way with a captcha, solve it, then you can proceed. (Cookies must be enabled for this to work.)
Bing
Microsoft, bless their Redmondish hearts, have an API for fetching Bing search engine results, with a free tier of 5000 queries per month (this will cover you for about 250 sets of 1000 results). However, it only returns the first 1000 results for any query, so you can't just search "site:dyingsite.com" and get everything on a site. You'll need to get a bit creative with the search terms.
Grab this Python script (look for "BING_API_KEY" and replace it with your "Primary Account Key"), and then:
python bingscrape.py "site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "about me site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "gallery site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "in memoriam site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "diary site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "bob site:dyingsite.com" >> bing-sitelist.txt
And so on.
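Generating the query variants programmatically keeps the "and so on" manageable; a small sketch (the extra terms are examples, so pick ones likely to appear on the dying site):

```python
def bing_queries(domain, terms):
    """Yield 'site:' query variants to tease extra results past the 1000 cap."""
    yield f"site:{domain}"
    for term in terms:
        yield f"{term} site:{domain}"

# Usage: feed each query to bingscrape.py, e.g.
# for q in bing_queries("dyingsite.com", ["about me", "gallery", "diary"]):
#     print(q)
```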
Common Crawl Index
The Common Crawl index is a very big (21 gigabytes compressed) list of URLs in the Common Crawl corpus. Grepping this list may well reveal plenty of URLs to archive. The list is in an odd format, along the lines of
com.deadsite.www/subdirectory/subsubdirectory:http
so you'll need to do some filtering of the results, which can sometimes be ambiguous.
grep '^com\.dyingsite[/\.]' zfqwbPRW.txt > commoncrawl-sitelist.txt
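Turning an index entry back into a normal URL is mechanical; a Python sketch of the conversion (based on the entry format shown above: a reversed dotted hostname, an optional path, and an optional trailing :scheme):

```python
def uncrawlify(entry):
    """Convert a Common Crawl index entry such as
    'com.deadsite.www/subdirectory:http' back into a normal URL.
    """
    scheme = "http"  # assume http when no scheme suffix is present
    if ":" in entry:
        entry, scheme = entry.rsplit(":", 1)
    host, slash, path = entry.partition("/")
    host = ".".join(reversed(host.split(".")))  # un-reverse the hostname
    return f"{scheme}://{host}{slash}{path}"
```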
You can also use the Common Crawl URL search and get the results as a JSON file. Quick-and-dirty grep/sed parsing:
grep -F '"url":' locations.json | sed 's/.*url": "\([^"]*\).*/\1/' | sort | uniq > commoncrawl-sitelist.txt
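If grep/sed feels fragile, the JSON can be parsed properly; a sketch assuming one JSON object per line in locations.json (adjust if your export is a single array):

```python
import json

def urls_from_locations(text):
    """Yield unique 'url' values from line-delimited JSON records."""
    seen = set()
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        url = record.get("url")
        if url and url not in seen:
            seen.add(url)
            yield url

# Usage:
#   with open('locations.json') as f:
#       urls = list(urls_from_locations(f.read()))
```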
Twitter
- Twitter's search API doesn't offer historical results. However, their web search now has a complete index, including searching of expanded URLs.
- A tool like Litterapi will scrape their web search and build a fake API.
- t by sferik is a command-line interface for Twitter that uses the API via an application you create on your account. Not only does it allow easy CSV/JSON export of your own data, it also lets you scrape others' tweets. API limits apply, but this tool is very powerful.
- Topsy offers a competing search service with an API of all Tweets. However, it is not free (but perhaps you can borrow their API key) and does not search expanded URLs.
Subdomains
Locating subdomains is important for getting complete scrapes of websites, and is particularly critical for sites that host user content.
- cc99.nl offers a subdomain finder service yielding great, if incomplete, results.
- This blog post offers more methods to enumerate subdomains