Internet Archive
Internet Archive main page as of 2016-08-10 |
URL | https://archive.org[IA•Wcite•.today•MemWeb] |
Status | Endangered[1] |
Archiving status | On hiatus |
Archiving type | Unknown |
Project source | IA.BAK |
Project tracker | ia.bak |
IRC channel | #internetarchive.bak (on hackint) |
The Internet Archive is a non-profit digital library with the stated mission and motto of "universal access to all knowledge". It stores over 800 billion web pages captured at different dates and times for historical purposes, available through the Wayback Machine (arguably an archivist's wet dream). The archive.org website also archives books, music, videos, and software.
Mirrors
There are currently two mirrors of the Internet Archive collection: the official one available at archive.org, and a second mirror at the Bibliotheca Alexandrina. The former is up and stable, while the latter's homepage still works but the rest of the site does not; it went down around April–May 2023. Bing still has cached versions of a few pages from March 2023, which can be found by searching for "site:web.archive.bibalex.org" on Bing and clicking the cache button.
Some hand-picked collections are also mirrored as part of the INTERNETARCHIVE.BAK project. See that page and the section #Backing up the Internet Archive below.
Raw Numbers
December 2010:
- 4 data centers, 1,300 nodes, 11,000 spinning disks
- Wayback Machine: 2.4 PetaBytes
- Books/Music/Video Collections: 1.7 PetaBytes
- Total used storage: 5.8 PetaBytes
August 2014:
- 4 data centers, 550 nodes, 20,000 spinning disks
- Wayback Machine: 9.6 PetaBytes
- Books/Music/Video Collections: 9.8 PetaBytes
- Unique data: 18.5 PetaBytes
- Total used storage: 50 PetaBytes
December 2021:
- 4 data centers, 745 nodes, 28,000 spinning disks
- Wayback Machine: 57 PetaBytes
- Books/Music/Video Collections: 42 PetaBytes
- Unique data: 99 PetaBytes
- Total used storage: 212 PetaBytes
Items added per year
Search made 21:56, 17 January 2016 (EST). (These numbers come only from the (mutable) "addeddate" metadata, so they could change, although they shouldn't.) A sketch for reproducing the counts against current data follows the table.
Year | Items added |
---|---|
2001 | 63 |
2002 | 4,212 |
2003 | 18,259 |
2004 | 61,629 |
2005 | 61,004 |
2006 | 185,173 |
2007 | 334,015 |
2008 | 429,681 |
2009 | 807,371 |
2010 | 813,764 |
2011 | 1,113,083 |
2012 | 1,651,036 |
2013 | 3,164,482 |
2014 | 2,424,610 |
2015 | 3,113,601 |
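As a rough illustration, counts like those in the table can be recomputed against current data by querying the "addeddate" field with the internetarchive Python library. The bracketed date-range query syntax and the num_found attribute are assumptions to check against the advanced-search documentation, and the numbers returned today will not match the 2016 snapshot above.

```python
# Rough sketch: count items per year from the (mutable) "addeddate" field.
from internetarchive import search_items

for year in range(2001, 2016):
    query = "addeddate:[{0}-01-01 TO {0}-12-31]".format(year)
    results = search_items(query)
    print(year, results.num_found)  # total number of items matching the query
```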
Uploading to archive.org
Upload any content you manage to preserve! Registering takes a minute.
Tools
There are three main methods to upload items to the Internet Archive programmatically:
- The internetarchive Python library is now the main tool; see the extensive documentation at https://archive.org/services/docs/api/
- Handy script for mass upload (ias3upload.pl) with automatic error checking and retry
- S3 interface (for direct usage with curl, or indirect with the tool of your choice)
Don't use FTP upload; try to keep your items below 400 GiB in size, and add plenty of metadata. A minimal upload sketch using the Python library follows below.
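For the internetarchive library route, a minimal upload might look like the following sketch. The identifier, file name and metadata values are placeholders, and credentials are assumed to have been configured beforehand (e.g. with `ia configure`):

```python
# Minimal upload sketch using the internetarchive Python library.
# "my-example-item" and the metadata below are placeholders, not a real item.
from internetarchive import upload

metadata = {
    "title": "My example item",
    "mediatype": "texts",        # texts, movies, audio, software, image, data, web
    "collection": "opensource",  # Community Texts
    "description": "Test upload via the internetarchive library.",
}

responses = upload(
    "my-example-item",
    files=["example.pdf"],
    metadata=metadata,
    retries=5,       # retry on temporary S3 errors
    verbose=True,
)
print([r.status_code for r in responses])
```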
Wayback Machine Save Page Now
- For quick one-shot webpage archiving, use the Wayback Machine's "Save Page Now" tool (a minimal request sketch follows this list).
- See October 2019 update for details including access requests.
- To submit a list of URLs, use https://archive.org/services/wayback-gsheets/ (avoid sending many thousands of URLs; there's ArchiveBot for that)
- There's also an email address to which lists of URLs can be sent in the message body, useful for submitting automatic email digests (its functioning could not be independently verified as of September 2019)
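For a single page, a capture can also be requested over plain HTTP against the /save/ endpoint, as in this rough sketch; unauthenticated requests are rate-limited, so anything beyond a handful of URLs should go through ArchiveBot or the authenticated SPN2 API mentioned in the October 2019 update:

```python
# Rough sketch: request one "Save Page Now" capture over HTTP.
import requests

url_to_save = "http://example.com/"  # placeholder URL
resp = requests.get("https://web.archive.org/save/" + url_to_save, timeout=120)

# On success the response (or its Content-Location header) points at the new capture.
print(resp.status_code, resp.headers.get("Content-Location", resp.url))
```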
Many scripts have been written to use the live proxy:
- A JavaScript bookmarklet and a Chrome extension made by @bitsgalore provide a fast way to submit pages to the Internet Archive. You can get them here: https://www.bitsgalore.org/2014/08/02/How-to-save-a-web-page-to-the-Internet-Archive
- UserScript: “AutoSave to Internet Archive - Wayback Machine” by user “Flare0n”. Mirrors: Mirror 1[IA•Wcite•.today•MemWeb] Mirror 2[IA•Wcite•.today•MemWeb]. (No longer developed since 2014, but still functional.)
Torrent upload
Torrent upload is useful if you need to be able to resume (for huge files, or when your bandwidth is insufficient to upload in one go):
- Just create the item, make a torrent with your files in it, name it like the item, and upload it to the item.
- archive.org will connect to you and other peers via a Transmission daemon and keep downloading all the contents till done;
- For a command line tool you can use e.g. mktorrent or buildtorrent, example:
mktorrent -a udp://tracker.publicbt.com:80/announce -a udp://tracker.openbittorrent.com:80 -a udp://tracker.ccc.de:80 -a udp://tracker.istole.it:80 -a http://tracker.publicbt.com:80/announce -a http://tracker.openbittorrent.com/announce "DIRECTORYTOUPLOAD"
- You can then seed the torrent with one of the many graphical clients (e.g. Transmission) or on the command line (Transmission and rtorrent are the most popular; btdownloadcurses reportedly doesn't work with UDP trackers).
- archive.org will stop the download if the torrent stalls for some time and add a file called "resume.tar.gz" to your item, containing whatever data was downloaded so far. To resume, delete the empty file called IDENTIFIER_torrent.txt, then re-derive the item (you can do that from the Item Manager). Make sure that there are online peers with the data before re-deriving, and don't delete the torrent file from the item.
Formats
Formats: anything, but:
- Sites should be uploaded in WARC format;
- Audio, video, books and other prints are supported from a number of formats;
- For .tar and .zip files, archive.org offers an online browser to find and download the specific files one needs, so you probably want to use one of those two formats unless you have good reasons not to (e.g. if 7z or bzip2 reduces the size tenfold).
This unofficial documentation page explains several of the special files found in every item.
Upload speed
Quite often it's hard to use your full bandwidth to/from the Internet Archive, which can be frustrating. The bottleneck may be temporary (check the current network speed and S3 errors) or persistent, especially if you are far from the Archive's network (e.g. transatlantic connections).
If your connection is slow or unreliable and you're trying to upload a lot of data, it's strongly recommended to use the bittorrent method (see above).
Some users with Gigabit or faster upstream links, on common GNU/Linux operating systems (such as Alpine), have had some success in increasing their upload speed by giving TCP more buffer memory and telling the kernel to live with higher latency and lower responsiveness, as in this example:
# sysctl net.core.rmem_default=8388608 net.core.rmem_max=8388608 \
    net.ipv4.tcp_rmem="32768 131072 8388608" \
    net.core.wmem_default=8388608 net.core.wmem_max=8388608 \
    net.ipv4.tcp_wmem="32768 131072 8388608" \
    net.core.default_qdisc=fq net.ipv4.tcp_congestion_control=bbr
# sysctl kernel.sched_min_granularity_ns=1000000000 \
    kernel.sched_latency_ns=1000000000 \
    kernel.sched_migration_cost_ns=2147483647 \
    kernel.sched_rr_timeslice_ms=100 \
    kernel.sched_wakeup_granularity_ns=1000000000
Downloading from archive.org
- Wayback Machine APIs
- Availability – data for one capture for a given URL
- Memento – data for all captures of a given URL
- CDX – data for all captures of a given URL (a query sketch follows this list)
- Other Wayback Machine APIs used in the website interface, not included in IA's list, include:
- timemap – data for a given URL prefix; note the limit=100 parameter (which serves to prevent accidental downloads of gigabytes of JSON)
- simhash – hashes (compress=0), or the degree of change in content between consecutive captures (compress=1), for captures of a given URL for a given year
- calendarcaptures – data for a given URL for a given year or day
- sparkline – summary of data for a given URL
- host – any hosts/domains detected for a given URL
- metadata – metadata for a given host/domain
- anchor – host/domain keyword search
- internetarchive Python tool
- When searching, you can specify the sort order by providing a list of field names, switching to descending order by suffixing the string with " desc".
- Manually, from an individual item: click "HTTPS", or replace "details" with "download" in the URL and reload. This will take you to a page with a link to download a ZIP containing the original files and metadata.
- In bulk: see https://blog.archive.org/2012/04/26/downloading-in-bulk-using-wget/
- There's also an unofficial shell function that checks how many URLs the Wayback Machine lists for a domain name.
- Individual files within .zip and .tar archives can be listed, and downloaded, by appending a slash after the /download/ URL. This will bring up a listing of the content, from a URL with zipviewer.php in it. For example: https://archive.org/download/CreativeComputing_v03n06_NovDec1977/Creative_Computing_v03n06_Nov_Dec_1977_jp2.zip/
- To download a raw, unmodified page from the Wayback Machine, add "id_" to the end of the timestamp, e.g.
https://web.archive.org/web/20130806040521id_/http://faq.web.archive.org/page-without-wayback-code/
- There are also some other codes that can be added to the end of the timestamp, as described here: http://archive-access.sourceforge.net/projects/wayback/administrator_manual.html#Archival_URL_Replay_Mode[IA•Wcite•.today•MemWeb]
- id_ Identity - perform no alterations of the original resource, return it as it was archived.
- js_ Javascript - return document marked up as javascript.
- cs_ CSS - return document marked up as CSS.
- im_ Image - return document as an image.
- if_ Iframe - Used by default for frames and videos. Usually works for images too.
- oe_ - Hides the Wayback toolbar upon loading.
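As an illustration of the CDX API and the id_ modifier together, the following sketch lists a few captures of a placeholder URL and fetches each one's raw body; the field names (timestamp, original, statuscode) come from the header row of the CDX server's JSON output:

```python
# Sketch: list captures via the CDX API, then fetch the raw ("id_") body of each.
import requests

cdx = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={"url": "example.com/", "output": "json", "limit": 5},
    timeout=60,
).json()

if cdx:
    header, rows = cdx[0], cdx[1:]  # the first row holds the field names
    for row in rows:
        capture = dict(zip(header, row))
        raw_url = "https://web.archive.org/web/{timestamp}id_/{original}".format(**capture)
        body = requests.get(raw_url, timeout=60)
        print(capture["timestamp"], capture["statuscode"], len(body.content))
```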
robots.txt and the Wayback Machine
The Internet Archive used to respect a site's robots.txt file. If that file blocked the ia_archiver user-agent (either directly or with a wildcard rule), the Internet Archive would not crawl the disallowed paths, and it would block access through the Wayback Machine to all previously-crawled content matching the disallowed paths until the robots.txt entry was removed. If a site returned a server error when its robots.txt was requested, the IA also interpreted that as a 'Disallow: /' rule. From e-mail correspondence with info@archive.org on Jun 10, 2016 regarding a site returning a 503 HTTP status code for its robots.txt:
The internet Archive respects the privacy of site owners, and therefore, when an error message is returned when trying to retrieve a website’s robots.txt, we consider that as "Disallow: /". -Benjamin
As of April 2017, the Internet Archive is no longer fully respecting robots.txt[2], although this change may not be visible on all archived sites yet. Alexa's crawler still respects robots.txt[3], and Archive-It respects robots.txt by default[4]. Users can still request that their domain be excluded from the Wayback Machine.
Note that if the content is available in the form of a web archive (WARC) file through the IA's normal collections, the WARC file may still be downloaded even if the content is not accessible through the Wayback Machine.
Browsing
There are 6 top-level collections in the Archive, under which pretty much everything else sits. These are:
- web -- Web Crawls
- texts -- eBooks and Texts
- movies -- Moving Image Archive
- audio -- Audio Archive
- software -- The Internet Archive Software Collection
- image -- Images
This is an incomplete list of significant sub-collections within the top-level ones:
- texts -- eBooks and Texts
- opensource -- Community Texts
- movies -- Moving Image Archive
- opensource_movies -- Community Video
- television -- Television
- tvarchive -- Television Archive (where the content of the "TV News Search & Borrow" service is located; not directly accessible)
- audio -- Audio Archive
- opensource_audio -- Community Audio
- etree -- Live Music Archive
- librivoxaudio -- The LibriVox Free Audiobook Collection
- software -- The Internet Archive Software Collection
- 301works -- 301Works.org
- consolelivingroom -- Console Living Room
- coverdiscs -- CD and DVD Coverdisc Collection
- softwarelibrary -- Software Library
- open_source_software -- Community Software
- image -- Images
- flickrcommons -- Flickr Commons Archive
- maps_usgs -- USGS Maps
- nasa -- NASA Images
- coverartarchive -- Cover Art Archive
Internet Archive/Collections is a list of all the collections that contain other collections.
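The collection identifiers above can be used directly in search queries. Here is a minimal sketch with the internetarchive library; librivoxaudio is just an example taken from the list, and the sorts parameter follows the sort-order convention noted in the downloading section (check it against the library's documentation):

```python
# Sketch: list the ten most recently added items in one sub-collection.
from internetarchive import search_items

results = search_items("collection:librivoxaudio", sorts=["addeddate desc"])
for i, item in enumerate(results):
    print(item["identifier"])
    if i >= 9:  # stop after the first ten results
        break
```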
Backing up the Internet Archive
The contents of the Wayback Machine as of 2002 (and again in 2006) have been duplicated in Alexandria, Egypt, available via https://www.bibalex.org/isis/frontend/archive/archive_web.aspx .
In April 2015, ArchiveTeam founder Jason Scott came up with the idea of a distributed backup of the Internet Archive. In the following months the necessary tools were developed and volunteers with spare disk space appeared, and by now tens of terabytes of rare and precious digital content from the Archive have been cloned in several copies around the world. The project is open to anyone who has at least a few hundred gigabytes of disk space that they can sacrifice in the medium or long term. For details, see the INTERNETARCHIVE.BAK page.
Let us clarify once again: ArchiveTeam is not the Internet Archive. This "backing up the Internet Archive" project, like all the other website-rescuing ArchiveTeam projects, is not ordered, requested, organized or supported by the Internet Archive, nor are ArchiveTeam members employees of the Internet Archive (except for a few). Beyond accepting – and, in this case, providing – the content, the Internet Archive doesn't collaborate with ArchiveTeam.
Most of the directly downloadable items at IA are also available as torrents -- at any given time some fraction of these have external seeders, although as of 01:46, 17 February 2016 (EST) there is a problem with IA's trackers where they refuse to track many of the torrents.
Copyright lawsuit
The Internet Archive has faced a lawsuit from publishers over making digital copies of copyrighted works available. In September 2024 they lost at the Second Circuit Court of Appeals. If the level of damages awarded threatens their existence, we may need to step in at very short notice to rescue their content.
Technical notes
The history of tasks run on each item can be viewed (when logged in) by going to a URL of the form https://archive.org/history/IDENTIFIER (where IDENTIFIER is the ID of the item, i.e. the part after "/details/" in a typical IA URL).
Some of the task commands include:
- archive.php
- Initial uploading, adding of reviews, and other purposes (example)
- bup.php
- Backing UP items from their primary to their secondary storage location after they are modified (always appears last in any group of tasks) (example)
- derive.php
- Handles generating the derived data formats (e.g. converting audio files into mp3s, OCRing scanned texts, generating CDX indexes for WARCs) (example)
- book_op.php
- ? Includes virus scan, which usually takes a while. (example)
- fixer.php
- ? (example)
- create.php
- ? (example)
- checkin.php
- ? (example)
- delete.php
- Used early on (i.e. ~2007) to delete a few items -- not used (except on some test files) since, apparently. (example)
- make_dark.php
- Removes an item from public view; used for spam, malware, copyright issues, etc. (example)
- modify_xml.php
- Modify the metadata of an item (?) (example)
- make_undark.php
- Reverses the effect of make_dark.php (example)
See also
- Working with ARCHIVE.ORG
- Internet Archive/Advanced Search -- a copy of the documentation without the browser-breaking thousand-item dropdowns on the actual page
- Internet Archive Census
- https://internetarchive.archiveteam.org/ -- Hitchhiker's Guide to the Internet Archive, a non-official wiki about the Internet Archive
External links
- https://archive.org[IA•Wcite•.today•MemWeb]
- https://help.archive.org/
- Bibliotheca Alexandrina mirror[IA•Wcite•.today•MemWeb]
- Petabox details[IA•Wcite•.today•MemWeb]
- A python interface to archive.org[IA•Wcite•.today•MemWeb]
- JSON API for archive.org services and metadata[IA•Wcite•.today•MemWeb]
- Developer portal (beta)[IA•Wcite•.today•MemWeb]
- Old developer portal[IA•Wcite•.today•MemWeb]
- English Wikipedia page on Help:Using the Wayback Machine[IA•Wcite•.today•MemWeb]
- English Wikipedia page: Lists of Internet Archive's collections[IA•Wcite•.today•MemWeb]
- Gordon Mohr Takes Us Inside the Internet Archives[IA•Wcite•.today•MemWeb] -- Interview from June 18, 2008, mentions Alexandria copy
- https://monitor.archive.org/weathermap/weathermap.html[IA•Wcite•.today•MemWeb]
- https://github.com/internetarchive/wayback/tree/master/wayback-cdx-server[IA•Wcite•.today•MemWeb] - More useful documentation of the Wayback CDX endpoint
- https://archive.org/web/researcher/cdx_legend.php[IA•Wcite•.today•MemWeb] - Mostly unhelpful documentation of the cdx format.
- https://github.com/internetarchive/CDX-Writer[IA•Wcite•.today•MemWeb] - "Python script to create CDX index files of WARC data"
- ArchiveAcademy -- a number of internal-focused presentations by IA staff on implementation-details
- archive-webextension[IA•Wcite•.today•MemWeb] - Firefox add-on for saving pages into the Internet Archive
- https://github.com/hartator/wayback-machine-downloader[IA•Wcite•.today•MemWeb]
Unofficial mobile apps
References
- ↑ https://www.vice.com/en_us/article/5dzg8n/archiving-the-internet-archive-sued-by-publishers
- ↑ https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/[IA•Wcite•.today•MemWeb]
- ↑ https://support.alexa.com/hc/en-us/articles/200450194-Alexa-s-Web-and-Site-Audit-Crawlers[IA•Wcite•.today•MemWeb]
- ↑ https://support.archive-it.org/hc/en-us/articles/208001096-Avoid-robots-txt-exclusions[IA•Wcite•.today•MemWeb]