Restoring

So, you have a website that's gone for whatever reason, but there's a copy in the Wayback Machine, and now you want to get it all back, preferably without clicking on every single page.

If you're lucky, the grab was done by Archive Team and the WARC file will be available in the Archive Team collection, where you can just download the whole thing and then extract its contents with one of the WARC tools, such as warctozip.
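
If you would rather script the extraction than use warctozip, the warcio Python library reads WARC files directly. A minimal sketch, with the input filename and output directory treated as placeholders:

# Minimal sketch: write every response record in a WARC out to local files.
# Assumes "pip install warcio"; example.warc.gz and restored/ are placeholders.
import os
from urllib.parse import urlparse
from warcio.archiveiterator import ArchiveIterator

with open("example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue
        url = record.rec_headers.get_header("WARC-Target-URI")
        path = urlparse(url).path.lstrip("/")
        if not path or path.endswith("/"):
            path += "index.html"        # directory-style URLs get a default name
        dest = os.path.join("restored", path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as out:
            # Query strings are ignored in this sketch, so some pages may collide.
            out.write(record.content_stream().read())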

If it wasn't an Archive Team grab and the pages were just scooped up as part of normal Wayback Machine operation, things are a bit more difficult: archive.org does not allow you to download the WARC files for these crawls directly, and the data you want would be split across many grabs anyway, since the crawler tends to grab part of a site on one occasion, a different part later, and so on.

The Wayback Machine doesn't intentionally try to prevent you from downloading multiple pages, but the usual method of recursively downloading a directory with a tool like wget and its -np (no-parent) parameter does not work, because the Wayback Machine date-codes URLs by time of crawl, so pages which appear to be in the same directory actually are not. E.g.:

http://web.archive.org/web/20140208214426/http://archiveteam.org/index.php?title=Main_Page

links to

http://web.archive.org/web/20140215063724/http://archiveteam.org/index.php?title=Who_We_Are

which has a different date code, so it fails the "no parent" test.
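
If you end up scripting the retrieval yourself rather than fighting wget's directory logic, one workaround is to test scope against the original URL embedded in the Wayback URL and ignore the datecode entirely. A rough sketch (the regular expression and target host are illustrative, not taken from any existing tool):

import re

# Hypothetical target: anything ever captured under archiveteam.org.
TARGET_HOST = "archiveteam.org"

# Wayback URLs look like /web/<14-digit datecode>[id_]/<original URL>.
WAYBACK_RE = re.compile(r"^https?://web\.archive\.org/web/\d{14}(?:id_)?/(https?://[^/]+)(/.*)?$")

def in_scope(wayback_url):
    """True if the capture points at the target site, whatever its datecode."""
    m = WAYBACK_RE.match(wayback_url)
    return bool(m) and TARGET_HOST in m.group(1)

# Both example pages above pass, even though their datecodes differ.
print(in_scope("http://web.archive.org/web/20140208214426/http://archiveteam.org/index.php?title=Main_Page"))
print(in_scope("http://web.archive.org/web/20140215063724/http://archiveteam.org/index.php?title=Who_We_Are"))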

Tools to use

  • Warrick - Main site was at [1] but seems to be down. Downloads are available at [2]. Your mileage may vary - it's quite slow to run, and the feature of grabbing from Google/Yahoo/Bing caches doesn't seem to work. A Linux/Cygwin/other *nix environment is also required. But it's currently probably the best option.
  • Wayback downloader, a service that will download your site from the Wayback Machine and even add a plugin for WordPress; 1 site is $15; 2 to 4 sites, $12.50; and 5 or more, $7.50. A cheap way to get data back without setting up your own environment. Effectiveness unknown.

Tricks

Unmodified pages

This is undocumented, but if you retrieve a page with id_ after the datecode, you will get the unmodified original document without all the Wayback scripts, header stuff, and link rewriting. This is useful when restoring one page at a time or when writing a tool to retrieve a site:

http://web.archive.org/web/20051001001126id_/http://www.archive.org/
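
For example, a minimal sketch of fetching one such capture from a script (the output filename is just a placeholder):

# Minimal sketch: fetch a capture with "id_" after the datecode to get the raw bytes.
from urllib.request import urlopen

datecode = "20051001001126"             # timestamp from the Wayback URL above
original = "http://www.archive.org/"    # the URL as originally archived
raw_url = "http://web.archive.org/web/%sid_/%s" % (datecode, original)

with urlopen(raw_url) as response:
    data = response.read()              # original document, no Wayback rewriting

with open("archive.org-index.html", "wb") as out:   # placeholder filename
    out.write(data)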

Wildcard search

You can do a wildcard search for all URLs Wayback has retrieved for a given domain like so:

http://web.archive.org/web/*/http://archiveteam.org/*

Or for a subdirectory:

http://web.archive.org/web/*/http://archiveteam.org/_images/*

The "filter results" textbox in the upper right allows you to type e.g. ".jpg" to show only matching files.