Site exploration

From Archiveteam
Revision as of 21:17, 3 January 2015 by Chfoo (talk | contribs) (add a search engines section and move to subsection)

This page contains some tips and tricks for exploring soon-to-be-dead websites, to find URLs to feed into the Archive Team crawlers.

Open Directory Project data

The Open Directory Project offers machine-readable downloads of its data; the file you want is content.rdf.u8.gz. Decompress it first:

gunzip content.rdf.u8.gz

Quick-and-dirty shell parsing for the not-too-fussy:

grep '<link r:resource=.*dyingsite\.com' content.rdf.u8 | sed 's/.*<link r:resource="\([^"]*\)".*/\1/' | sort -u > odp-sitelist.txt
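If the grep/sed one-liner feels fragile, a small Python sketch can stream the gzipped dump directly and pull out the link targets. This is an illustration, not an official tool; dyingsite.com stands in for whatever site you are exploring:

```python
import gzip
import re

# Matches the target of <link r:resource="..."> elements in the ODP RDF dump.
LINK_RE = re.compile(r'<link r:resource="([^"]*)"')

def extract_links(path, domain):
    """Yield unique link targets that mention the given domain."""
    seen = set()
    with gzip.open(path, mode="rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            for url in LINK_RE.findall(line):
                if domain in url and url not in seen:
                    seen.add(url)
                    yield url
```

This reads the compressed file directly, so the gunzip step becomes optional.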

MediaWiki wikis

MediaWiki wikis, especially the very large ones operated by the Wikimedia Foundation, often link to a large number of important pages hosted on a given service. There is a tool by an Archive Team patriot which extracts a machine-readable list of such external links from a number of wikis (it actually uses the text of this page to get a list of wikis to scrape).

./ "*" > mw-sitelist.txt
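For a single wiki, the standard MediaWiki API exposes the same information through the list=exturlusage query module. A minimal sketch of building such a request (the endpoint shown is just an example; any MediaWiki api.php works the same way):

```python
import urllib.parse

def exturlusage_request(api_endpoint, domain, limit=500):
    """Build a MediaWiki API URL listing pages that link to a domain."""
    params = {
        "action": "query",
        "list": "exturlusage",
        "euquery": domain,   # search string matched against external URLs
        "eulimit": str(limit),
        "format": "json",
    }
    return api_endpoint + "?" + urllib.parse.urlencode(params)

url = exturlusage_request("https://en.wikipedia.org/w/api.php", "dyingsite.com")
```

Fetch the URL with any HTTP client and page through the results with the continuation token the API returns.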

Search Engines

There exist tools, such as GoogleScraper, which scrape various search engines using a web browser instead of an API.

Some specific tips are listed below.


Google

Google doesn't let its search results be scraped by automated tools. You have to do it manually, but some tools and tips still let you do good discovery fairly quickly.

To find results under a domain, use the site: operator as your search term:

site:dyingsite.com

If you want more than 10 results per page, add the num parameter to the URL like this:

https://www.google.com/search?q=site:dyingsite.com&num=100

To go to the next page of the results, don't use the "Next" link at the bottom; that would only give you the next ten results. Instead, use the start parameter in the URL:

https://www.google.com/search?q=site:dyingsite.com&num=100&start=100

You can go up to start=900, which gives the 901st–1000th results. Google doesn't let you browse past the first 1000 results. However, there is some good news:

  • The estimated result count shown is often ten or a hundred times higher than the number of results you'll actually be presented with. So don't panic.
  • Should there really be more than 1000 results, the easiest workaround is to click "Search tools", then "Any time", and select "Custom range". By setting a specific date range you can reduce the number of results per search, and by going, say, year by year, you'll be presented with all the results. (Hopefully.)
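The num/start pagination described above is easy to script. A sketch that prints the result-page URLs covering the first 1000 hits of a site: query (the URLs are for pasting into a browser; as noted, Google blocks automated fetching):

```python
import urllib.parse

def result_page_urls(query, per_page=100, max_results=1000):
    """Yield Google result-page URLs covering the first max_results hits."""
    for start in range(0, max_results, per_page):
        params = urllib.parse.urlencode(
            {"q": query, "num": per_page, "start": start})
        yield "https://www.google.com/search?" + params

pages = list(result_page_urls("site:dyingsite.com"))
```

With per_page=100 this yields ten URLs, the last one carrying start=900.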

Several tools exist for exporting the results efficiently. One of them is SEOquake, a Firefox extension. (In fact, exporting search results is just one of its features.) After installing the extension and restarting the browser, Google search results will have buttons to export (save or append) the results in CSV format. (It is recommended to disable, in the SEOquake options, all the analysis widgets appearing in the search results and on the toolbar; they just slow things down and occupy a lot of space in the CSV.) After some repetitive but easy work, you'll have the URL list in your CSV(s). With the SEOquake analysis features turned off, each line will be just a URL wrapped in quotation marks. Replace the quotation marks with nothing in a text editor, or, for Linux terminal geeks, cut -d'"' -f 2 is the way.
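The quote-stripping step can also be done with Python's csv module, which handles the quoting for you. A sketch, assuming a one-column CSV of quoted URLs as described above:

```python
import csv

def urls_from_csv(path):
    """Read a CSV of quoted URLs and return the bare URL list."""
    with open(path, newline="", encoding="utf-8") as f:
        # Keep only cells that look like URLs, skipping headers and blanks.
        return [row[0] for row in csv.reader(f)
                if row and row[0].startswith("http")]
```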

Should Google stop you with a captcha, solve it and you can proceed. (Cookies must be enabled for this to work.)

Bing API

Microsoft, bless their Redmondish hearts, have an API for fetching Bing search engine results, which has a free tier of 5000 queries per month (this will cover you for about 250 sets of 1000 results). However, it only returns the first 1000 results for any query, so you can't just search "" and get all the things on a site. You'll need to get a bit creative with the search terms.

Grab this Python script (look for "BING_API_KEY" and replace it with your "Primary Account Key"), and then:

python "" >> bing-sitelist.txt
python "about me" >> bing-sitelist.txt
python "gallery" >> bing-sitelist.txt
python "in memoriam" >> bing-sitelist.txt
python "diary" >> bing-sitelist.txt
python "bob" >> bing-sitelist.txt

And so on.
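Since the per-term queries overlap heavily, deduplicate the combined output before feeding it to the crawlers. In the shell, `sort -u bing-sitelist.txt` does it; an order-preserving Python sketch:

```python
def dedupe_lines(lines):
    """Drop duplicate URLs while keeping first-seen order."""
    seen = set()
    out = []
    for line in lines:
        url = line.strip()
        if url and url not in seen:
            seen.add(url)
            out.append(url)
    return out
```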

Common Crawl Index

The Common Crawl index is a very big (21 gigabytes compressed) list of URLs in the Common Crawl corpus. Grepping this list may well reveal plenty of URLs to archive. The list is in an odd format, along the lines of com.deadsite.www/subdirectory/subsubdirectory:http (reversed hostname, then path, then scheme), so you'll need to do some filtering of the results. The results can sometimes be ambiguous.

grep '^com\.dyingsite[/\.]' zfqwbPRW.txt > commoncrawl-sitelist.txt

Our Ivan wrote a Python script which takes a list of these index entries on standard input and prints a list of normally-formed URLs on standard output.
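That script isn't reproduced here, but the conversion itself is straightforward. A minimal sketch of the idea, assuming the com.deadsite.www/path:scheme form shown above:

```python
import sys

def index_entry_to_url(entry):
    """Turn 'com.deadsite.www/sub/dir:http' into 'http://www.deadsite.com/sub/dir'."""
    key, sep, scheme = entry.strip().rpartition(":")
    if not sep:                      # no scheme recorded; assume http
        key, scheme = entry.strip(), "http"
    host, _, path = key.partition("/")
    host = ".".join(reversed(host.split(".")))  # un-reverse the hostname
    return f"{scheme}://{host}/{path}"

if __name__ == "__main__":
    for line in sys.stdin:
        if line.strip():
            print(index_entry_to_url(line))
```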

You can also use the Common Crawl URL search and get the results as a JSON file. Quick-and-dirty grep/sed parsing:

grep -F '"url":' locations.json | sed 's/.*url": "\([^"]*\).*/\1/' | sort | uniq > commoncrawl-sitelist.txt
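If the download is valid JSON records (one per line), parsing it properly is safer than grep/sed. A sketch assuming each record carries a "url" field, as the one-liner above does:

```python
import json

def urls_from_json_lines(lines):
    """Extract the "url" field from a stream of JSON records, one per line."""
    urls = set()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if "url" in record:
            urls.add(record["url"])
    return sorted(urls)   # sorted and deduplicated, like sort | uniq
```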
