Internet Archive Census

The Internet Archive Censuses are unofficial attempts to count and account for the files available on the Internet Archive, covering both directly downloadable public files and private files that are accessible through interfaces such as the Wayback Machine or the TV News Archive. The purpose of the project is multi-fold: collecting the reported hashes of all the files, determining the sizes of various collections, and determining priorities for backing up portions of the Internet Archive's data stores.

The first Census was conducted in March 2015. Its results are on the Archive at https://archive.org/details/ia-bak-census_20150304.

A re-check of the items in the first census was done in January 2016. The results are on IA under https://archive.org/details/IACensusData.
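
For working with the census data locally, both items should be fetchable with the Internet Archive's official internetarchive command-line client (a general-purpose sketch, not part of the census tooling itself):

pip install internetarchive
ia download ia-bak-census_20150304
ia download IACensusData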

Purpose of the Census

The original census was called for as a stepping stone in the INTERNETARCHIVE.BAK project, an experiment to have Archive Team back up the Internet Archive. While the Internet Archive officially has 21 petabytes of information in its data stores (as of March 2015), some of that data is system overhead, and some is stream-only or otherwise not available for download. A full run-through of the entire collection of items at the Archive lets the next phases of the INTERNETARCHIVE.BAK experiment (testing methodologies) move forward.

The data is also useful for talking about what the Internet Archive does and what kinds of items are in the stacks: collections with very large or quite manageable amounts of data can be identified, and audiences and researchers outside the backup experiment can do their own data access and acquisition. Search engines and data visualization can also be experimented with.

Contents of the Census

The Census is a very large collection of JSON-formatted records, generated with the ia-mine utility by Jake Johnson of the Internet Archive. Like all such projects, the data should not be considered perfect, although a large percentage should accurately reflect the site. As there is only one full census so far, there is no comparable data in terms of growth or file change. (There are reports of total files and other activity, but not at the level of detail of the JSON material the Census provides.)
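
For reference, a similar crawl can in principle be reproduced with ia-mine (installable with pip install iamine) by feeding it the full itemlist described below. This is only a sketch, and assumes ia-mine accepts a file of item identifiers and writes one JSON metadata record per item to standard output:

zcat metamgr-norm-ids-20150304205357.txt.gz > itemlist.txt
ia-mine itemlist.txt | gzip > census-metadata.json.gz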

The full itemlist metamgr-norm-ids-20150304205357.txt.gz (135.7M compressed; 372M uncompressed) contains 14,926,080 item identifiers (including exactly one duplicate, https://archive.org/details/e-dv212_boston_14_harvardsquare_09-05_001.ogg for some bizarre reason).
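
The duplicate can be confirmed straight from the compressed itemlist; uniq -d prints any identifier that occurs more than once:

zcat metamgr-norm-ids-20150304205357.txt.gz | sort | uniq -d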

The main data file public-file-size-md_20150304205357.json.gz is 6073671780 bytes (5.7G) compressed, and 22522862598 bytes (21G) uncompressed. It contains one item without any identifier at all, which, judging from the file names, appears to be https://archive.org/details/lecture_10195 (which had its _meta.xml file re-created soon after the census was run). Oddly, it contains only 13,075,195 normal string identifiers, with 113 duplicates.
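
Both oddities can be re-checked with jq, reusing the (.id[0]? // .id) pattern from the extraction commands further down. As a sketch, the first command tallies how many records have a string, array, or missing (null) identifier, and the second lists identifiers that occur more than once:

zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '.id | type' | sort | uniq -c
zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '(.id[0]? // .id)' | sort | uniq -d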

The retrieved itemlist all-ids-got-sorted.txt.gz (91215211 bytes (87M) compressed; 389853688 bytes (372M) uncompressed) contains 14,921,581 item identifiers, with no duplicates.

The un-retrieved itemlist unretrievable-items.txt (141247 bytes) contains 4,508 items, with no duplicates.
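
The lists can be cross-checked with comm. This sketch re-sorts both identifier lists in plain byte order so they compare cleanly, and prints the identifiers from the full itemlist that are missing from the retrieved list (which should largely correspond to the unretrievable items):

comm -23 <(zcat metamgr-norm-ids-20150304205357.txt.gz | LC_ALL=C sort -u) <(zcat all-ids-got-sorted.txt.gz | LC_ALL=C sort -u)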

Some Relevant Information from the Census

Based on the output of the Census:

  • The size of the listed data is 14.23 petabytes.
  • The census only contains "original" data, not derivations created by the system. (For example, if an .AVI file is uploaded, the census only counts the .AVI, and not an .MP4 or .GIF derived from the original file.)
  • The vast majority of the data is compressed in some way. By far the largest kind of file is gzip, with 9PB uploaded! Most files that are not in an archive format are compressed videos, music, pictures, etc.
  • The largest single file (that is not just a tar of other files) is TELSEY_004.MOV (449GB), in item TELSEY_004 in the xfrstn collection.
  • There are 22,596,286 files which are copies of other files. The duplicate files take up 1.06PB of space. (Assuming all files with the same MD5 are duplicates; a rough way to recompute this is sketched after this list.)
  • The largest duplicated file is all-20150219205226/part-0235.cdx.gz (195GB) in item wbsrv-0235-1. The entire wbsrv-0235-1 item is a duplicate of wbsrv-0235-0; that's 600GB of duplicated data. This is intentional: these items are part of the waybackcdx collection, used to re-check already-archived URLs in the Wayback Machine, and the whole index is duplicated to decrease the risk of loss.
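
A rough way to recompute the duplication figures from the md5/size/collection/url listing produced in the Extracting data section below (this is only a sketch; the 296-million-line permutation listing will over-count, so the single-item variant is the better input). It sorts by MD5 and then sums the sizes of every occurrence beyond the first in each MD5 group:

sort -t$'\t' -k1,1 md5_collection_url.txt | awk -F'\t' '$1 == prev { copies++; bytes += $2 }; { prev = $1 }; END { printf "%d duplicate copies, %.0f bytes\n", copies, bytes }'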

Extracting data

As hinted by the IA guys, the jq tool is well-suited to working with the census.

Here is a command line that will generate a file containing "md5 size collection url" format lines for everything in the census:

zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '(.collection[]? // .collection) as $coll | (.id[]? // .id) as $id | .files[] | "\(.md5)\t\(.size)\t\($coll)\thttps://archive.org/download/\($id)/\(.name)"' > md5_collection_url.txt

Some files are in multiple collections, and even in multiple items. The above command line generates all the permutations in those cases, and so outputs 296 million lines. Here is a variant that picks a single item and collection when a file is in several; it outputs 177 million lines.

zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '(.collection[0]? // .collection) as $coll | (.id[0]? // .id) as $id | .files[] | "\(.md5)\t\(.size)\t\($coll)\thttps://archive.org/download/\($id)/\(.name)"' > md5_collection_url.txt
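
Once md5_collection_url.txt exists, simple awk one-liners can answer questions such as which collections hold the most data. For example, this sketch sums the size column per collection and shows the twenty largest:

awk -F'\t' '{ bytes[$3] += $2 } END { for (c in bytes) printf "%.0f\t%s\n", bytes[c], c }' md5_collection_url.txt | sort -rn | head -n 20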