INTERNETARCHIVE.BAK/git-annex implementation

This page addresses a [https://git-annex.branchable.com git-annex] implementation of [[INTERNETARCHIVE.BAK]].


= Quickstart =

Do this on the drive you want to use:


<pre>
$ git clone https://github.com/ArchiveTeam/IA.BAK
$ cd IA.BAK
$ ./iabak
</pre>
 
It will walk you through setup and starting to download files, and it will install a cron job (or systemd .timer unit) to perform periodic maintenance.
 
It should prompt you for how much disk space to leave unused. To adjust this value later, run <code>git config annex.diskreserve 200GB</code> in each of the <code>IA.BAK/shard*</code> directories.
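
For example, a small loop like the following applies the setting to every shard checkout (the <code>~/IA.BAK</code> path is an assumption; use wherever you cloned the repository):

<pre>
# Leave 200GB unused on this drive, in each shard checkout.
for shard in ~/IA.BAK/shard*; do
    git -C "$shard" config annex.diskreserve 200GB
done
</pre>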
 
Configuration and maintenance information can be found in the README.md file. (Also available at https://github.com/ArchiveTeam/IA.BAK/#readme)


=== Dependencies ===
* sane UNIX environment (shell, df, perl, grep)
* git
* crontab OR systemd (NOTE: you may need to run <code>loginctl enable-linger <user></code> to make sure the job is not killed; see the example after this list)
* <code>shuf</code> (optional - will randomize the order you download files in)
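
For example, on a systemd machine you could enable lingering for the account that runs the backup and then confirm the maintenance timer is active (the exact unit names depend on how <code>./iabak</code> set things up):

<pre>
# Keep user services running after logout, then list user timers.
loginctl enable-linger "$USER"
systemctl --user list-timers
</pre>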


= Some quick info on Internet Archive =

== Data model ==

IA's data is organized into ''collections'' and ''items''. One collection contains many items. An item contains files of the same type, such as multiple MP3 files in an album or a single ISO image file. (A PDF manual and its software should go in separate items.)

Here's an example collection and an item in that collection: https://archive.org/details/archiveteam-fire, https://archive.org/details/proust-panic-download-warc.


== Browsing the Internet Archive ==


In addition to the web interface, you can use the [https://pypi.python.org/pypi/internetarchive Internet Archive command-line tool]. The tool currently requires a Python 2.x installation. If you use pip, run


<pre>
pip install internetarchive
</pre>


See https://pypi.python.org/pypi/internetarchive#command-line-usage for usage information.  If you want to start digging, you might find it useful to issue <code>ia search 'collection:*'</code>; this'll return summary information for all of IA's items.
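
For example, to list the items in the example collection from the "Data model" section above, a query like this should work (the output format depends on the tool version):

<pre>
# Summary information for every item in the archiveteam-fire collection.
ia search 'collection:archiveteam-fire'
</pre>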

= Status =

[http://iabak.archiveteam.org/ Graphs of status]

[http://iabak.archiveteam.org/stats/ Raw data]

= Implementation plan =

For more information, see http://git-annex.branchable.com/design/iabackup/.


== First tasks ==


Some first steps to work on:


* Pick a set of around 10 thousand items whose sizes sum to around 8 TB.
* Get a list of files, checksums, and URLs. (done)
* Build a map from item to shard. Needs to scale well to 24+ million items. SQL?
* Write a script to generate a git-annex repository with 100k files from the list. (done)
* Write an ingestion script that takes an item and generates a tarball of its non-derived files. It needs to reproduce the same checksum each time it is run on an (unmodified) item. (I know how to make tar and gz reproducible, BTW.)
* Set up a server to serve up the git repos. Any Linux system with a few hundred GB of disk and ssh and git-annex installed will do. It needs to accept incoming ssh connections from registered clients, only letting them run git-annex-shell. (done)
* Put one shard repo on the server to start. (done)
* Manually register a few clients to start, have them manually download some files, and <code>git annex sync</code> their state back to the server, to see how it all hangs together (done); a sketch of that workflow follows this list.
* Get that first shard backed up enough to be able to say, "we have successfully backed up 1/1770th of the IA!" (done!)
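
A rough sketch of that manual client workflow, for illustration only; <code>./iabak</code> automates all of this, and the server address and shard name here are placeholders:

<pre>
# Clone one shard from the server (placeholder address), fetch some content,
# and sync the location log back so the server knows what this client holds.
git clone ssh://iabak@example.org/~/shard1
cd shard1
git annex get --auto     # fetch files that still need more copies
git annex sync           # push updated location tracking to the server
</pre>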
 
== Middle tasks ==
 
* Get fscking and dead client expiry working (done); a sketch of the client-side fsck follows this list.
* Test a restore from a shard. Tell git-annex the content is no longer in the IA. Get the clients to upload it to our server.
* Write client registration interface, which generates the client's ssh private key, git-annex UUID, and sends them to the client (done)
* Help the user get the iabak-cronjob set up.
* Email expire warnings (done)
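
The fsck half of the first item is stock git-annex behaviour; on a client, the periodic maintenance boils down to something like this (shown for illustration only):

<pre>
# Re-verify checksums of locally held files, then sync so the server
# sees fresh activity from this client and does not expire it.
git annex fsck --incremental
git annex sync
</pre>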
 
== Later tasks ==
 
* Create all 1770 shards, and see how that scales.
* Write a pre-receive git hook to reject pushes of branches other than the git-annex branch (already done; a minimal sketch follows this list), and to prevent bad/malicious pushes of the git-annex branch.
* Client runtime environment (docker image maybe?) with warrior-like interface (all that needs to do is configure things and get git-annex running)
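
A minimal sketch of such a pre-receive hook, assuming clients may only update the git-annex branch and its <code>synced/</code> variant; vetting the content of git-annex branch pushes is the harder, still-open part:

<pre>
#!/bin/sh
# Reject any ref update other than the git-annex branch (assumed policy).
while read oldrev newrev refname; do
    case "$refname" in
        refs/heads/git-annex|refs/heads/synced/git-annex) ;;
        *) echo "pre-receive: pushes to $refname are not allowed" >&2
           exit 1 ;;
    esac
done
exit 0
</pre>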
 
== SHARD1 ==
 
This is the first part of the IA that we want to get backed up. If we succeed, we will have backed up 1/1770th of the Internet Archive.
This git-annex repository contains 100k files: the entire "internetarchivebooks" and "usenethistorical" collections.
 
Some stats about the files this repository is tracking:
* Number of files: 103,343
* Total file size: 2.91 terabytes
* Size of the git repository itself: 51 megabytes to start
* After filling up SHARD1, the git repository had grown to 196 megabytes
* We aimed for 4 copies of every file downloaded, but a few files got 5-8 copies made, due to, e.g., races and manual downloads. We want to keep an eye on this with future shards. (See the note on numcopies after this list.)
* We got SHARD1 fully downloaded between April 1 and 6. It took a while to ramp up as people came in, so later shards may download faster. Also, 2/3 of SHARD2 was downloaded during this same time period.
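
The four-copy target is git-annex's standard numcopies mechanism; assuming the shard repositories set it directly, it corresponds to something like:

<pre>
# Ask git-annex to aim for 4 copies of every file (assumed setting).
git annex numcopies 4
</pre>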
 
== Admin details ==
 
See [[INTERNETARCHIVE.BAK/admin]].
