Internet Archive Census
The Internet Archive Censuses are unofficial attempts to count and account for the files available on the Internet Archive, both directly downloadable public files and private files available through interfaces like the Wayback Machine or the TV News Archive. The purpose of this project is several-fold: collecting the reported hashes of all the files, determining the sizes of various collections, and setting priorities for backing up portions of the Internet Archive's data stores.
The first Census was conducted in March of 2015. Its results are on the Archive at ia-bak-census_20150304.
A re-check of the items in the first census was done in January 2016. The results are on IA under IACensusData.
A third census was done in April 2016, ia_census_201604, based on an updated list of identifiers, and including the sha1 hashes as well as the md5s.
Purpose of the Census
The original census was called for as a stepping stone in the INTERNETARCHIVE.BAK project, an experiment to have Archive Team back up the Internet Archive. While the Internet Archive officially has 21 petabytes of information in its data stores (as of March 2015), some of that data is system overhead, or is stream-only/not otherwise available. With a full run-through of the entire collection of items at the Archive in hand, the next phases of the INTERNETARCHIVE.BAK experiment (testing methodologies) can move forward.
The data is also useful for talking about what the Internet Archive does, and what kinds of items are in the stacks - collections can be found with very large or manageable amounts of data, and audiences/researchers outside the backup experiment can do their own sets of data access and acquisition. Search engines can be experimented with, as well as data visualization.
Contents of the Census
Each Census is a very large collection of JSON-formatted records, consisting of a subset of the metadata of each item in the archive. The metadata is downloaded with Jake Johnson's ia-mine utility, then processed with the jq tool. Like all such projects, the data should not be considered perfect, although a large percentage should accurately reflect the site. Since there have been three censuses, some limited comparisons of growth or file change can be made. (There are also earlier reports of total files and other activity, but none at the level of the JSON-format material the Censuses provide.)
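To illustrate what processing such a record involves, here is a minimal Python sketch. The record below is invented, and the field names (id, collection, files, md5, size, name) are assumptions drawn from the jq commands shown later on this page; the real census records carry more metadata fields.

```python
import json

# A hypothetical census-style record (field names are assumptions).
record_json = '''{"id": "example_item",
  "collection": "example_collection",
  "files": [{"name": "example.avi", "size": "1024",
             "md5": "d41d8cd98f00b204e9800998ecf8427e"}]}'''

record = json.loads(record_json)
for f in record["files"]:
    # Emit one tab-separated line per file: md5, size, collection, download URL.
    print("\t".join([f["md5"], f["size"], record["collection"],
                     "https://archive.org/download/%s/%s" % (record["id"], f["name"])]))
```

In practice the same transformation is done with jq directly on the gzipped census dumps, as shown below.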
The first two censuses used the same itemlist (135.7M compressed; 372M uncompressed). It contains 14,926,080 item identifiers (including exactly one duplicate, https://archive.org/details/e-dv212_boston_14_harvardsquare_09-05_001.ogg for some bizarre reason). The 2nd census made a sorted (and uncompressed) version of it available. The 3rd census used an updated itemlist (486.9M uncompressed), which is sorted and contains 19,134,984 item identifiers.
The main data file for the first census (6073671780 bytes (5.7G) compressed; 22522862598 bytes (21G) uncompressed) contains one item without any identifier at all, which, from the file names, appears to be lecture_10195 (which had its _meta.xml file re-created soon after the census was run). Oddly, it contains only 13,075,195 normal string identifiers, with 113 duplicates.
The main data file for the second census (8796507585 bytes (9G) compressed; 35966005026 bytes (36G) uncompressed) is accompanied by an additional data file of items re-grabbed after failing the first time (10486657 bytes (11M)).
The main data file for the third census (11219518595 bytes (10.4 G) compressed) contains only metadata about items all of whose files can be downloaded without restriction. There is also a data file (5993343635 bytes (5.6 G) compressed) containing metadata for the other items from the itemlist for which metadata was available.
The second and third censuses also include tab-separated-value files listing only the identifiers, file paths, and md5 hashes from the main data files (two from the 3rd census, two from the 2nd, and one from the first census's main data file, included in the second census's item for historical reasons):
- In the 2nd census item:
- (7218470882 bytes (7G) compressed)
- (7066587 bytes (7M))
- (4791601837 bytes (5G) compressed)
- In the 3rd census item:
- (5212673170 bytes (4.9 G) compressed)
- (2690158285 bytes (2.5 G) compressed)
The third census also includes similar files for the sha1 hashes (6120162493 bytes (5.7 G) compressed and 3202141568 bytes (3 G) compressed).
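The TSV files above are straightforward to work with in any language. A minimal Python sketch follows; the two sample rows are invented, and the column order (identifier, file path, hash) is an assumption, not taken from the census files themselves:

```python
import csv
import io

# Hypothetical sample rows in an assumed identifier/path/hash column order.
sample_tsv = (
    "item_a\tfiles/a.txt\td41d8cd98f00b204e9800998ecf8427e\n"
    "item_a\tfiles/b.txt\t9e107d9d372bb6826bd81d3542a419d6\n"
)

# Group each item's files: identifier -> list of (path, hash) pairs.
files_by_item = {}
for identifier, path, digest in csv.reader(io.StringIO(sample_tsv), delimiter="\t"):
    files_by_item.setdefault(identifier, []).append((path, digest))

print(files_by_item["item_a"])
```

On the real multi-gigabyte files one would stream the gzipped data (e.g. via gzip.open) rather than load it into memory at once.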
The retrieved itemlist from the original census (91215211 bytes (87M) compressed; 389853688 bytes (372M) uncompressed) contains 14,921,581 item identifiers, with no duplicates.
The un-retrieved itemlist from the original census (141247 bytes) contains 4,508 items, with no duplicates.
The second census also includes three auxiliary lists of identifiers: one (71052 bytes (70K)) with the 2,886 identifiers in the leftovers re-grab; one (7875713 bytes (8M)) with 256,352 previously-used (i.e. "dark") identifiers; and one (1566 bytes (2K)) with 68 identifiers only identifiable as having been previously used by looking at the /history/ page.
The third census also includes 6 scripts written during the process of generating it, which may be useful in future censuses.
Some Relevant Information from the Census
Based on the output of the Census:
- The size of the listed data is 14.23 petabytes.
- The census only contains "original" data, not derivations created by the system. (For example, if a .AVI file is uploaded, the census only counts the .AVI, and not a .MP4 or .GIF derived from the original file).
- The vast majority of the data is compressed in some way. By far the largest kind of file is gzip, with 9PB uploaded! Most files that are not in an archive format are compressed videos, music, pictures, etc.
- The largest single file (that is not just a tar of other files) is TELSEY_004.MOV (449GB), in item TELSEY_004 in the xfrstn collection.
- There are 22,596,286 files which are copies of other files. The duplicate files take up 1.06PB of space. (Assuming all files with the same MD5 are duplicates.)
- The largest duplicated file is all-20150219205226/part-0235.cdx.gz (195GB) in item wbsrv-0235-1. The entire wbsrv-0235-1 item is a duplicate of wbsrv-0235-0, amounting to 600GB. This is intentional: these items are part of the waybackcdx collection, used to re-check already-archived URLs in the Wayback Machine, and the whole index is duplicated to decrease the risk of loss.
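The duplicate accounting above can be sketched in a few lines of Python. The figures below are invented, but the rule matches the stated assumption: every copy past the first of a given MD5 counts as duplicated space.

```python
from collections import defaultdict

# Hypothetical (md5, size_in_bytes) pairs, as might be read from a census TSV.
files = [
    ("aaaa", 100), ("aaaa", 100), ("aaaa", 100),  # three copies of one file
    ("bbbb", 250), ("bbbb", 250),                 # two copies of another
    ("cccc", 999),                                # a unique file
]

# Group sizes by hash; same MD5 is assumed to mean same content.
groups = defaultdict(list)
for md5, size in files:
    groups[md5].append(size)

# Every copy beyond the first in a group is counted as duplicated space.
duplicate_bytes = sum(sizes[0] * (len(sizes) - 1)
                      for sizes in groups.values() if len(sizes) > 1)
print(duplicate_bytes)  # 2*100 + 1*250 = 450
```

Run over the real census data, this is the computation behind the 1.06PB duplicate-space figure.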
As hinted by Internet Archive staff, the jq tool is well-suited to working with the census data.
Here is a command line that will generate a file containing "md5 size collection url" format lines for everything in the census:
zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '(.collection | if type == "array" then .[] else . end) as $coll | .id as $id | .files[] | "\(.md5)\t\(.size)\t\($coll)\thttps://archive.org/download/\($id)/\(.name)"' > md5_collection_url.txt
Some files are in multiple collections, and even in multiple items. The above command line generates all the permutations in those cases, and so outputs 296 million lines. Here is a variant that picks a single item and collection when a file is in multiple ones; it outputs 177 million lines.
zcat public-file-size-md_20150304205357.json.gz | ./jq --raw-output '(.collection | if type == "array" then .[0] else . end) as $coll | .id as $id | .files[] | "\(.md5)\t\(.size)\t\($coll)\thttps://archive.org/download/\($id)/\(.name)"' > md5_collection_url.txt
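The collapsing step can also be done after the fact on the generated lines. A Python sketch, using invented lines in the md5/size/collection/url shape described above and keeping the first line seen for each download URL (one possible single-choice rule, not necessarily the one used for the 177-million-line figure):

```python
# Hypothetical output lines: the first two describe the same file
# listed under two collections.
lines = [
    "aaaa\t100\tcoll1\thttps://archive.org/download/item1/f.txt",
    "aaaa\t100\tcoll2\thttps://archive.org/download/item1/f.txt",
    "bbbb\t200\tcoll1\thttps://archive.org/download/item1/g.txt",
]

# Keep only the first line seen for each download URL.
seen = set()
unique = []
for line in lines:
    url = line.rsplit("\t", 1)[-1]
    if url not in seen:
        seen.add(url)
        unique.append(line)

print(len(unique))  # 2: one line per file
```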
Older Identifier Lists
- engbooksall - identifiers of all English-language texts with djvutext files as of 4/11/11 (32 MB uncompressed text file)
- text_identifiers - multiple files of text identifiers per year, generated 12/11/2009
- IA_book_ids - 7 MB list of identifiers of books, generated 2008