Restoring

So: you have a website that's gone, for whatever reason; there's a copy in the Wayback Machine; and now you want to get it all back, preferably without clicking on every single page.

If you're lucky, the grab was done by Archive Team and the WARC file will be available in the Archive Team collection, where you can download the whole thing and then extract its contents with one of the WARC tools, such as warctozip.
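
If you'd rather script the extraction than use warctozip, here is a minimal sketch using the third-party warcio Python library (warcio is my choice here, not something the tools above require, and the filename at the bottom is just a placeholder). It writes every HTTP response in the WARC to a local directory tree:

 # pip install warcio
 from pathlib import Path
 from urllib.parse import urlsplit
 from warcio.archiveiterator import ArchiveIterator

 def extract_warc(warc_path, out_dir="restored"):
     """Write every HTTP response record in a WARC to files under out_dir."""
     with open(warc_path, "rb") as stream:
         for record in ArchiveIterator(stream):
             if record.rec_type != "response":
                 continue
             url = record.rec_headers.get_header("WARC-Target-URI")
             body = record.content_stream().read()  # payload, HTTP headers stripped
             parts = urlsplit(url)
             # Map the URL onto a filesystem path; directory-like URLs get index.html.
             rel = parts.netloc + (parts.path or "/")
             if rel.endswith("/"):
                 rel += "index.html"
             dest = Path(out_dir, rel)
             dest.parent.mkdir(parents=True, exist_ok=True)
             dest.write_bytes(body)

 # Placeholder filename -- substitute the WARC you downloaded from the collection.
 extract_warc("example-grab.warc.gz")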

If it wasn't an Archive Team grab and the pages were just scooped up as part of normal Wayback Machine operation, things are a bit more difficult: archive.org does not allow you to download the WARC files for these crawls directly, and the data you want would be split across many grabs anyway, since the crawlers tend to grab part of a site on one occasion and a different part later.

The Wayback Machine doesn't intentionally try to prevent you from downloading multiple pages, but the usual method of recursively downloading a directory with a tool like wget using the -np parameter does not work. The Wayback Machine date-codes URLs based on the time of crawl, so pages that appear to be in the same directory often are not. For example:

http://web.archive.org/web/20140208214426/http://archiveteam.org/index.php?title=Main_Page

links to

http://web.archive.org/web/20140215063724/http://archiveteam.org/index.php?title=Who_We_Are

which has a different date code, so as far as wget can tell they are in different directories and the crawl stops.
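
To make the problem concrete, here is a small illustrative Python sketch (not taken from any of the tools below) that splits a Wayback URL into its date code and the original URL; anything that wants to crawl "one site" inside the Wayback Machine effectively has to ignore the date-code part like this:

 import re

 # /web/<14-digit date code><optional flag such as id_>/<original URL>
 WAYBACK_RE = re.compile(r"^https?://web\.archive\.org/web/(\d{14})[a-z_]*/(.+)$")

 def split_wayback_url(url):
     m = WAYBACK_RE.match(url)
     return m.groups() if m else None

 print(split_wayback_url(
     "http://web.archive.org/web/20140208214426/http://archiveteam.org/index.php?title=Main_Page"))
 # ('20140208214426', 'http://archiveteam.org/index.php?title=Main_Page')
 print(split_wayback_url(
     "http://web.archive.org/web/20140215063724/http://archiveteam.org/index.php?title=Who_We_Are"))
 # ('20140215063724', 'http://archiveteam.org/index.php?title=Who_We_Are')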

Tools to use

  • Wayback Machine Downloader, a small tool in Ruby to download any website from the Wayback Machine. Free and open-source.
  • Warrick - Main site was at [1], now at ArcWarrick. Downloads are available at [2]. Your mileage may vary - it's quite slow to run, and the feature of grabbing from Google/Yahoo/Bing caches doesn't seem to work. A Linux/Cygwin/other *nix environment is also required. But currently probably the best option.
  • Wayback downloader, a service that will download your site from the Wayback Machine and even add a plugin for WordPress. Pricing: $15 for one site, $12.50 per site for 2 to 4, and $7.50 per site for 5 or more. A cheap way to get data back without setting up your own environment. Effectiveness unknown.
  • Wayback Machine Downloader Service, another service that recovers websites from archive.org. It has a free demo and offers unlimited downloads for $79. Not related to the aforementioned Ruby tool with the same name.
  • Waybackr, a service that downloads, packs, and emails you a copy of any website stored in the Wayback Machine. The service was free, but it seems to have stopped working.
  • Archivarix wayback machine online downloader, a service that emails you a zip archive of the restored website. The first 200 files are free; the first thousand files above that limit cost $0.005 per file, and every further thousand costs $0.0005 per file.

Tricks

Unmodified pages

This is undocumented, but if you retrieve a page with id_ after the datecode, you will get the unmodified original document without all the Wayback scripts, header stuff, and link rewriting. This is useful when restoring one page at a time or when writing a tool to retrieve a site:

http://web.archive.org/web/20051001001126id_/http://www.archive.org/
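
A minimal way to use this from a script, assuming the third-party requests library (the helper name fetch_original is just for illustration):

 import requests  # pip install requests

 def fetch_original(datecode, url):
     """Fetch the capture of url made at datecode (YYYYMMDDhhmmss) with the
     id_ flag, i.e. without the Wayback toolbar and link rewriting."""
     wayback_url = f"http://web.archive.org/web/{datecode}id_/{url}"
     resp = requests.get(wayback_url, timeout=60)
     resp.raise_for_status()
     return resp.content

 # The archive.org front page as captured on 2005-10-01 (the example above).
 html = fetch_original("20051001001126", "http://www.archive.org/")
 print(len(html), "bytes")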

Wildcard search

You can do a wildcard search for all URLs Wayback has retrieved for a given domain like so:

http://web.archive.org/web/*/http://archiveteam.org/*

Or for a subdirectory:

http://web.archive.org/web/*/http://archiveteam.org/images/*

The "filter results" textbox in the upper right allows you to type e.g. ".jpg" to show only matching files.

This data is also available in a machine-readable format:

http://web.archive.org/cdx/search/cdx?url=archiveteam.org/images/*
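
If you're scripting a restore, the CDX endpoint is the easier one to consume. Here is a minimal sketch in Python using only the standard library; the extra parameters (output=json, fl, collapse) are standard CDX server options, but check the CDX API documentation if your results look off:

 import json
 import urllib.request

 CDX = ("http://web.archive.org/cdx/search/cdx"
        "?url=archiveteam.org/images/*"
        "&output=json"            # JSON instead of whitespace-separated lines
        "&fl=timestamp,original"  # only the fields we need
        "&collapse=urlkey")       # one row per distinct URL

 with urllib.request.urlopen(CDX) as resp:
     rows = json.load(resp)

 # The first row lists the field names; the rest are captures.
 for timestamp, original in rows[1:]:
     # Combine with the id_ trick above to fetch each capture unmodified.
     print(f"http://web.archive.org/web/{timestamp}id_/{original}")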