Internet Archive

Internet Archive main page as of 2016-08-10
URL http://www.archive.org
Status Online!
Archiving status In progress...
Archiving type Unknown
Project source IA.BAK
Project tracker ia.bak
IRC channel #internetarchive.bak (on hackint)

The Internet Archive is a non-profit digital library whose stated mission and motto is "universal access to all knowledge". It stores over 400 billion webpage captures from different dates and times for historical purposes, available through the Wayback Machine, arguably an archivist's wet dream. The archive.org website also archives books, music, videos, and software.

Mirrors

There are currently two copies of the Internet Archive collection: the official one available at archive.org, and a mirror at the Bibliotheca Alexandrina. Both seem to be up and stable.

Some hand-picked collections are also mirrored as part of the INTERNETARCHIVE.BAK project. See that page and the section #Backing up the Internet Archive below.

Raw Numbers

December 2010:

  • 4 data centers, 1,300 nodes, 11,000 spinning disks
  • Wayback Machine: 2.4 petabytes
  • Books/Music/Video Collections: 1.7 petabytes
  • Total used storage: 5.8 petabytes

August 2014:

  • 4 data centers, 550 nodes, 20,000 spinning disks
  • Wayback Machine: 9.6 petabytes
  • Books/Music/Video Collections: 9.8 petabytes
  • Unique data: 18.5 petabytes
  • Total used storage: 50 petabytes

Items added per year

Search made 21:56, 17 January 2016 (EST). The counts are derived from the (mutable) "addeddate" metadata field, so they might change, although they shouldn't. A sketch for reproducing them follows the table.

Year Items added
2001 63
2002 4,212
2003 18,259
2004 61,629
2005 61,004
2006 185,173
2007 334,015
2008 429,681
2009 807,371
2010 813,764
2011 1,113,083
2012 1,651,036
2013 3,164,482
2014 2,424,610
2015 3,113,601
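
The counts above can in principle be reproduced with the internetarchive Python library by searching on the addeddate field; the date-range query below is a sketch of how such a search might look, not necessarily the exact query that was used.

  import internetarchive as ia

  # Count items whose (mutable) addeddate falls within 2015.
  # len() on a Search object runs the query and returns the total number of hits.
  search = ia.search_items('addeddate:[2015-01-01 TO 2015-12-31]')
  print(len(search))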

Uploading to archive.org

Upload any content you manage to preserve! Registering takes a minute.

Tools

There are three main methods to upload items to the Internet Archive programmatically: the S3-like API, the internetarchive Python library and its ia command-line tool, and torrent upload (see below).

Don't use FTP upload, try to keep your items below 400 GiB, and add plenty of metadata.
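
A minimal sketch of an upload with the internetarchive Python library; the identifier, file name and metadata are placeholders, and credentials are assumed to have been set up beforehand with "ia configure".

  import internetarchive as ia

  # Placeholder identifier and metadata; pick a unique identifier and a
  # correct mediatype for your own upload. Credentials are read from the
  # config file written by "ia configure".
  ia.upload('example-site-grab-2016',
            files=['example-site.warc.gz'],
            metadata={'title': 'Example site grab',
                      'mediatype': 'web',
                      'description': 'WARC produced with wget --warc-file'})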

Wayback machine save page now

  • For quick one-shot webpage archiving, use the Wayback Machine's "Save Page Now" tool.
    • See October 2019 update for details including access requests.
    • To submit a list of URLs, use https://archive.org/services/wayback-gsheets/ (avoid sending many thousands of URLs; there's ArchiveBot for that)
    • There's also an email address to which lists of URLs can be sent in the message body, useful for submitting automatic email digests (its functioning could not be independently verified as of September 2019)

Many scripts have been written to use the live proxy.
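
A rough sketch of a one-shot save through that endpoint, assuming the unauthenticated https://web.archive.org/save/<URL> form still works as it did at the time of writing:

  import requests

  # Ask Save Page Now to capture a single URL. When the capture succeeds,
  # the Content-Location header (if present) points at the new snapshot.
  target = 'http://example.com/page-to-save'
  r = requests.get('https://web.archive.org/save/' + target)
  print(r.status_code)
  print(r.headers.get('Content-Location'))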

Torrent upload

Torrent upload is useful if you need to be able to resume (for huge files, or because your bandwidth is insufficient to upload in one go); a sketch of the full flow follows this list:

  • Just create the item, make a torrent with your files in it, name it like the item, and upload it to the item.
  • archive.org will connect to you and other peers via a Transmission daemon and keep downloading all the contents until done;
  • For a command-line tool you can use e.g. mktorrent or buildtorrent, for example: mktorrent -a udp://tracker.publicbt.com:80/announce -a udp://tracker.openbittorrent.com:80 -a udp://tracker.ccc.de:80 -a udp://tracker.istole.it:80 -a http://tracker.publicbt.com:80/announce -a http://tracker.openbittorrent.com/announce "DIRECTORYTOUPLOAD";
  • You can then seed the torrent with one of the many graphical clients (e.g. Transmission) or on the command line (Transmission and rtorrent are the most popular; btdownloadcurses reportedly doesn't work with udp trackers.)
  • archive.org will stop the download if the torrent stalls for some time and add a file to your item called "resume.tar.gz", which contains whatever data was downloaded. To resume, delete the empty file called IDENTIFIER_torrent.txt; then, resume the download by re-deriving the item (you can do that from the Item Manager.) Make sure that there are online peers with the data before re-deriving and don't delete the torrent file from the item.
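
Putting the steps together, a rough sketch using the internetarchive Python library; the identifier, title and file names are placeholders.

  import internetarchive as ia

  item_id = 'example-huge-dataset-2016'   # placeholder identifier

  # The torrent file must be named after the item. Once it is uploaded,
  # archive.org's Transmission daemon joins the swarm and downloads the
  # payload from you and any other seeders.
  ia.upload(item_id,
            files=[item_id + '.torrent'],
            metadata={'title': 'Example huge dataset',
                      'mediatype': 'data'})
  # Keep seeding locally until archive.org has fetched everything.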

Formats

Any format is accepted, but:

  • Sites should be uploaded in WARC format;
  • Audio, video, books and other prints are supported from a number of formats;
  • For .tar and .zip files archive.org offers an online browser to search and download the specific files one needs, so you probably want to use one of those two formats unless you have good reasons not to (e.g. if 7z or bzip2 reduces the size tenfold).

This unofficial documentation page explains several of the special files found in every item.

Upload speed

Quite often it's hard to use your full bandwidth to or from the Internet Archive, which can be frustrating. The bottleneck may be temporary (check the current network speed and S3 errors) but it can also be persistent, especially if your network is far from the Archive's data centers (e.g. transatlantic connections).

If your connection is slow or unreliable and you're trying to upload a lot of data, it's strongly recommended to use the bittorrent method (see above).

Some users with Gigabit or faster upstream links, on common GNU/Linux operating systems (such as Alpine), have had some success increasing their upload speed by devoting more memory to TCP buffering and congestion control and telling the kernel to live with higher latency and lower responsiveness, as in this example:

# sysctl net.core.rmem_default=8388608 net.core.rmem_max=8388608 net.ipv4.tcp_rmem="32768 131072 8388608" net.core.wmem_default=8388608 net.core.wmem_max=8388608 net.ipv4.tcp_wmem="32768 131072 8388608" net.core.default_qdisc=fq net.ipv4.tcp_congestion_control=bbr
# sysctl kernel.sched_min_granularity_ns=1000000000 kernel.sched_latency_ns=1000000000 kernel.sched_migration_cost_ns=2147483647 kernel.sched_rr_timeslice_ms=100 kernel.sched_wakeup_granularity_ns=1000000000

Downloading from archive.org

  • Wayback Machine APIs
    • Availability – data for one capture for a given URL
    • Memento – data for all captures of a given URL
    • CDX – data for all captures of a given URL
  • Other Wayback Machine APIs used in the website interface, not included in IA's list, include:
    • timemap – data for a given URL prefix; note the limit=100 parameter (which serves to prevent accidental downloads of gigabytes of JSON)
    • simhash – hashes (compress=0), or the degree of change in content between consecutive captures (compress=1), for captures of a given URL for a given year
    • calendarcaptures – data for a given URL for a given year or day
    • sparkline – summary of data for a given URL
    • host – any hosts/domains detected for a given URL
    • metadata – metadata for a given host/domain
    • anchor – host/domain keyword search
  • internetarchive Python tool
    • When searching, you can specify the sort order by providing a list of field names, switching to descending order by suffixing the string with " desc" (see the sketch after this list).
  • Manually, from an individual item: click "HTTPS"; or replace details with download in the URL and reload. This will take you to a page with a link to download a ZIP containing the original files and metadata.
  • In bulk: see http://blog.archive.org/2012/04/26/downloading-in-bulk-using-wget/
  • There's also an unofficial shell function that checks how many URLs the Wayback Machine lists for a domain name.
  • Individual files within .zip and .tar archives can be listed, and downloaded, by appending a slash after the /download/ URL. This will bring up a listing of the content, from a URL with zipviewer.php in it. For example: https://archive.org/download/CreativeComputing_v03n06_NovDec1977/Creative_Computing_v03n06_Nov_Dec_1977_jp2.zip/
  • To download a raw, unmodified page from the Wayback Machine, add "id_" to the end of the timestamp, e.g.
https://web.archive.org/web/20130806040521id_/http://faq.web.archive.org/page-without-wayback-code/
  • id_ Identity - perform no alterations of the original resource, return it as it was archived.
  • js_ Javascript - return document marked up as javascript.
  • cs_ CSS - return document marked up as CSS.
  • im_ Image - return document as an image.
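
A rough sketch of two of these routes, listing captures via the CDX API and using the internetarchive Python library for a sorted search and a per-item download; the URL, collection and glob pattern are illustrative placeholders.

  import requests
  import internetarchive as ia

  # CDX API: list the first ten captures of a URL as JSON (row 0 is the header).
  cdx = requests.get('https://web.archive.org/cdx/search/cdx',
                     params={'url': 'example.com/', 'output': 'json', 'limit': 10})
  for row in cdx.json()[1:]:
      print(row)

  # internetarchive library: search sorted by addeddate, newest first.
  for result in ia.search_items('collection:archiveteam',
                                sorts=['addeddate desc']):
      print(result['identifier'])
      break

  # Download selected files from a single item.
  ia.download('CreativeComputing_v03n06_NovDec1977', glob_pattern='*.pdf')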


robots.txt and the Wayback Machine

The Internet Archive used to respect a site's robots.txt file. If that file blocked the ia_archiver user-agent (either directly or with a wildcard rule), the Internet Archive would not crawl the disallowed paths, and it would block access through the Wayback Machine to all previously crawled content matching the disallowed paths until the robots.txt entry was removed. If a site returned a server error when its robots.txt was requested, the IA also interpreted that as a 'Disallow: /' rule. From e-mail correspondence with info@archive.org on Jun 10, 2016, regarding a site returning a 503 HTTP status code for its robots.txt:

The internet Archive respects the privacy of site owners, and therefore, when an error message is returned when trying to retrieve a website’s robots.txt, we consider that as "Disallow: /". -Benjamin

As of April 2017, the Internet Archive is no longer fully respecting robots.txt[1], although this change may not be visible on all archived sites yet. Alexa's crawler still respects robots.txt[2], and Archive-It respects robots.txt by default[3]. Users can still request that their domain be excluded from the Wayback Machine.

Note that if the content is available as a web archive (WARC) file through the IA's normal collections, the WARC file may still be downloaded even if the content is not accessible through the Wayback Machine.

Browsing

There are six top-level collections in the Archive, under which pretty much everything else sits. These are:

  • web -- Web Crawls
  • texts -- eBooks and Texts
  • movies -- Moving Image Archive
  • audio -- Audio Archive
  • software -- The Internet Archive Software Collection
  • image -- Images

This is an incomplete list of significant sub-collections within the top-level ones:

  • movies -- Moving Image Archive
    • opensource_movies -- Community Video
    • television -- Television
      • adviews -- AdViews
      • tv -- TV News Search & Borrow
    • tvarchive -- Television Archive (where the content in "TV News Search & Borrow" is located; not directly accessible)

Internet Archive/Collections is a list of all the collections that contain other collections.
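
A rough sketch of listing sub-collections programmatically with the internetarchive Python library; the query relies on collection membership and mediatype being indexed as shown, which is an assumption.

  import internetarchive as ia

  # Sub-collections of the top-level 'movies' collection are themselves items
  # with mediatype 'collection' whose collection field includes 'movies'.
  for result in ia.search_items('collection:movies AND mediatype:collection',
                                fields=['identifier', 'title']):
      print(result['identifier'], '--', result.get('title'))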

Backing up the Internet Archive

The contents of the Wayback Machine as of 2002 (and again in 2006) have been duplicated in Alexandria, Egypt, available via http://www.bibalex.org/isis/frontend/archive/archive_web.aspx .

In April 2015, ArchiveTeam founder Jason Scott came up with the idea of a distributed backup of the Internet Archive. In the following months the necessary tools were developed and volunteers with spare disk space appeared, and tens of terabytes of rare and precious digital content from the Archive have already been cloned in several copies around the world. The project is open to everyone who can spare at least a few hundred gigabytes of disk space over the medium or long term. For details, see the INTERNETARCHIVE.BAK page.

Let us clarify once again: ArchiveTeam is not the Internet Archive. This "backing up the Internet Archive" project, just like all the other website-rescuing ArchiveTeam projects, is not ordered, asked for, organized or supported by the Internet Archive, nor are ArchiveTeam members employees of the Internet Archive (with a few exceptions). Beyond accepting – and, in this case, providing – the content, the Internet Archive doesn't collaborate with ArchiveTeam.

Most of the directly downloadable items at IA are also available as torrents -- at any given time some fraction of these have external seeders, although as of 01:46, 17 February 2016 (EST) there is a problem with IA's trackers where they refuse to track many of the torrents.

Technical notes

The history of tasks run on each item can be viewed (when logged in) by going to a URL of the form https://archive.org/history/IDENTIFIER (where IDENTIFIER is the id of the item, i.e. the part after "/details/" in a typical IA URL).

Some of the task commands include:

archive.php
Initial uploading, adding of reviews, and other purposes (example)
bup.php
Backing UP items from their primary to their secondary storage location after they are modified (always appears last in any group of tasks) (example)
derive.php
Handles generating the derived data formats (e.g. converting audio files into mp3s, OCRing scanned texts, generating CDX indexes for WARCs) (example)
book_op.php
? Includes virus scan, which usually takes a while. (example)
fixer.php
? (example)
create.php
? (example)
checkin.php
? (example)
delete.php
Used early on (i.e. ~2007) to delete a few items -- not used (except on some test files) since, apparently. (example)
make_dark.php
Removes an item from public view; used for spam, malware, copyright issues, etc. (example)
modify_xml.php
? (example)
make_undark.php
Reverses the effect of make_dark.php (example)

See also

External links

  • Unofficial mobile apps