Site exploration

This page contains some tips and tricks for exploring soon-to-be-dead websites, to find URLs to feed into the Archive Team crawlers.

Wayback Machine

The Wayback Machine's CDX server can be used to find pages already in the WBM (with caveats, see User:OrIdow6/Info). An easy way to get all pages is (for instance with Egloos):

curl 'https://web.archive.org/cdx/search?url=*.egloos.com&showNumPages=true'

(this gives 749 pages, numbered 0–748)

seq 0 748 | awk '{ printf("https://web.archive.org/cdx/search?url=*.egloos.com&page=%s&fl=urlkey\n", $1) }' | wget --input-file - --retry-on-http-error=429
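
Each page fetched with fl=urlkey is a plain list of url keys, one per line; once the downloads finish they can be merged into a single deduplicated list. A minimal sketch, assuming wget saved the pages under its default names (which start with "search") in the current directory:

# Assumes wget's default output filenames; adjust the glob if you saved them elsewhere.
cat search* | sort -u > egloos-urlkeys.txt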

To enumerate the subdomains of a domain that is in the WBM:

domain=egloos.com
pages=$(curl -s "https://web.archive.org/cdx/search/cdx?url=*.$domain&collapse=original&fl=original&showNumPages=1")
seq 0 $((pages - 1)) | xargs -d '\n' printf "https://web.archive.org/cdx/search/cdx?url=*.$domain&collapse=original&fl=original&page=%s\n" | wget --input-file - --output-document - --retry-on-http-error=429 | grep -oP "^https?://[^/]*$domain(:[0-9]+)?/" | sed 's/:[0-9]\+//' | sort -u

Open Directory Project data

The Open Directory Project offers machine-readable downloads of its data. You want the "content.rdf.u8.gz" from there.

wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz

Quick-and-dirty shell parsing for the not-too-fussy:

grep '<link r:resource=.*dyingsite\.com' content.rdf.u8 | sed 's/.*<link r:resource="\([^"]*\)".*/\1/' | sort | uniq > odp-sitelist.txt

MediaWiki wikis

MediaWiki wikis, especially the very large ones operated by the Wikimedia Foundation, often link to a large number of important pages hosted with a given service.

mwlinkscrape.py is a tool by an Archive Team patriot which extracts a machine-readable list of matching links from a number of wikis (it actually uses the text of this page to get its list of wikis to scrape).

./mwlinkscrape.py "*.dyingsite.com" > mw-sitelist.txt
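
The same kind of list can also be pulled from a single wiki by hand with the MediaWiki API's external-link search (list=exturlusage). A minimal hedged sketch, with no continuation handling (so only the first batch of matches); the wildcard syntax should follow Special:LinkSearch:

# en.wikipedia.org is just an example endpoint; point this at any MediaWiki API.
curl -s 'https://en.wikipedia.org/w/api.php?action=query&list=exturlusage&euquery=*.dyingsite.com&euprop=url&eulimit=500&format=json' | grep -oP '"url":"\K[^"]+' >> mw-sitelist.txt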

Search Engines

There exist tools such as GoogleScraper which scrape various search engines using a web browser instead of an API.

Other tools, such as googler, ddgr, or bing-scrape, download and parse the search-result HTML pages.
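
For instance, ddgr can emit results as JSON, which is easy to harvest. A hedged sketch (flags and JSON key names may differ between ddgr versions):

# --json prints machine-readable results and exits; -n 25 asks for 25 results per query.
ddgr --json -n 25 'site:dyingsite.com' | grep -oP '"url":\s*"\K[^"]+' >> ddg-sitelist.txt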

Some engine-specific tips are listed below.

Google

Google doesn't let its search results be scraped by automated tools, so you must do it manually, but there are some tools and tips that still let you do a good discovery fairly quickly.

To find results under a domain, let your search term be site:dyingsite.com.

To find results under subdomains of a domain, let your search term be site:*.dyingsite.com.

If you want more than 10 results per page, add the num parameter to the URL like this:

https://www.google.com/search?q=site:dyingsite.com&num=100

To go to the next page of the results, don't use the "Next" link on the bottom; that would give you the next ten results. Instead, use the start parameter in the URL:

https://www.google.com/search?q=site:dyingsite.com&num=100&start=100

You can go up to start=300, which gives results 301–400; Google doesn't let you browse beyond the first 400 results. However, there is some good news:

  • The estimated number of results shown is usually ten or a hundred times higher than the number of results you'll actually be presented with, so don't panic.
  • Should the number of results indeed exceed 400, the easiest workaround is to click on "Search tools", then on "Any time", and select "Custom range". By setting a specific date range you can reduce the number of results in one search, and by going, say, year by year, you'll (hopefully) be presented with all the results.
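
To save a little typing while paging through by hand, the page URLs can be generated in the shell and then opened one by one in the browser (Google blocks automated fetching, so this only builds the links):

# Prints the four result-page URLs (num=100, start=0..300) described above.
for start in 0 100 200 300; do echo "https://www.google.com/search?q=site:dyingsite.com&num=100&start=$start"; done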

For exporting the results efficiently, there are several tools around. One of them is SEOquake, a Firefox extension (exporting search results is just one of its features). After installing the extension and restarting the browser, Google search result pages will have buttons to export (save or append) the results in CSV format. It is recommended to disable, in the SEOquake options, all the analyses that appear in the search results and on the toolbar; they only slow things down and occupy a lot of space in the CSV. After some repetitive but easy work, you'll have the URL list in your CSV(s). If the SEOquake analysis features are turned off, the list will be just the URLs wrapped in quotation marks: strip the quotes in a text editor, or, for Linux terminal geeks, cut -d'"' -f 2 is the way.

Should Google stand in your way with a captcha, fill it in and then you can proceed. (Cookies must be enabled for this to work.)

Bing API

Microsoft, bless their Redmondish hearts, have an API for fetching Bing search engine results, which has a free tier of 5000 queries per month (this will cover you for about 250 sets of 1000 results). However, it only returns the first 1000 results for any query, so you can't just search "site:dyingsite.com" and get all the things on a site. You'll need to get a bit creative with the search terms.

Grab this Python script (look for "BING_API_KEY" and replace it with your "Primary Account Key"), and then:

python bingscrape.py "site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "about me site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "gallery site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "in memoriam site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "diary site:dyingsite.com" >> bing-sitelist.txt
python bingscrape.py "bob site:dyingsite.com" >> bing-sitelist.txt

And so on.
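
Since the overlapping queries return plenty of duplicate URLs, it's worth deduplicating the combined list afterwards:

# sort -u both sorts and deduplicates; -o safely writes back to the same file.
sort -u bing-sitelist.txt -o bing-sitelist.txt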

To find subdomains use e.g. site:example.com+.

Common Crawl Index

The Common Crawl index is a very big (21 gigabytes compressed) list of URLs in the Common Crawl corpus. Grepping this list may well reveal plenty of URLs to archive. The list is in an odd format, along the lines of com.deadsite.www/subdirectory/subsubdirectory:http, so you'll need to do some filtering of the results, and the results can sometimes be ambiguous.

grep '^com\.dyingsite[/\.]' zfqwbPRW.txt > commoncrawl-sitelist.txt

Our Ivan wrote a Python script (Mirror) which will take your list of URLs on standard input and print out a list of normally-formed URLs on standard output.
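
If you only need a rough conversion and don't mind edge cases, a hedged awk sketch (not Ivan's script) can turn keys like com.deadsite.www/some/path:http back into ordinary URLs; keys carrying ports or query strings need more care, so use the script for those:

# Rough conversion only; ports and query strings in the keys are not handled.
awk -F'/' '{
  scheme = $NF; sub(/.*:/, "", scheme)        # ":http" suffix -> "http"
  sub(/:[^:\/]*$/, "", $NF)                   # drop the suffix from the key
  n = split($1, h, "."); host = h[n]          # un-reverse the host labels
  for (i = n - 1; i >= 1; i--) host = host "." h[i]
  path = ""
  for (i = 2; i <= NF; i++) path = path "/" $i
  print scheme "://" host path
}' commoncrawl-sitelist.txt > commoncrawl-urls.txt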

You can also use the Common Crawl URL search and get the results as a JSON file. Quick-and-dirty grep/sed parsing:

grep -F '"url":' locations.json | sed 's/.*url": "\([^"]*\).*/\1/' | sort | uniq > commoncrawl-sitelist.txt


During the Imgur project a group tried to scrape not just the Common Crawl index but the entire Common Crawl dataset, which led to complaints from Common Crawl directed at us.

Twitter

  • Twitter's search API doesn't offer historical results. However, their web search now has a complete index,[1] including searching of expanded URLs.
  • A tool like Litterapi will scrape their web search and build a fake API.
  • t by sferik is a command-line interface for Twitter that uses the API via an application you create on your account. Not only does it allow easy CSV/JSON export of your own data, it also lets you scrape others' tweets. API limits apply, but this tool is very powerful.
  • Topsy offers a competing search service with an API of all Tweets. However, it is not free (but perhaps you can borrow their API key) and does not search expanded URLs.
  • Nitter is an alternative Twitter front-end that, when it works (Twitter appears to actively resist its use), can offer significantly better chances of successful data archival with tools such as ArchiveBot. On a similar note, the vanilla Twitter web site is known to return 200s on pages where access has actually been denied.

Subdomain enumeration

Locating subdomains is important for getting complete scrapes of websites, and is particularly critical for the many sites that host user content.
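
As a starting point, the URL lists already collected in the sections above can be mined for subdomains. A minimal sketch, using the example filenames from those sections:

# Filenames are the example outputs from earlier sections; adjust to taste.
cat odp-sitelist.txt mw-sitelist.txt bing-sitelist.txt | grep -oiP '^https?://\K[^/:]+\.dyingsite\.com' | sort -u > subdomains.txt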

See Also

References