This page addresses a [https://git-annex.branchable.com git-annex] implementation of [[INTERNETARCHIVE.BAK]].
= Quickstart =
Do this on the drive you want to use:
<pre>
$ git clone https://github.com/ArchiveTeam/IA.BAK
$ cd IA.BAK
$ ./iabak
</pre>
It will walk you through setup and starting to download files, and will install a cron job (or systemd .timer unit) to perform periodic maintenance.
It should prompt you for how much disk space to leave unused. To adjust this value later, run <code>git config annex.diskreserve 200GB</code> in each of the <code>IA.BAK/shard*</code> directories.
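For example, a short loop like this (a sketch, assuming the clone lives at <code>~/IA.BAK</code> as in the Quickstart above) applies the same reserve to every shard checkout:
<pre>
# Set a 200GB disk reserve in every shard checkout.
# Assumes the repository was cloned to ~/IA.BAK; adjust the path to your setup.
for shard in ~/IA.BAK/shard*; do
    git -C "$shard" config annex.diskreserve 200GB
done
</pre>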
Configuration and maintenance information can be found in the README.md file. (Also available at https://github.com/ArchiveTeam/IA.BAK/#readme)
=== Dependencies ===
* sane UNIX environment (shell, df, perl, grep)
* git
* crontab OR systemd (NOTE: you may need to run <code>loginctl enable-linger <user></code> so the systemd user job is not killed when you log out; see the snippet after this list)
* <code>shuf</code> (optional - will randomize the order you download files in)
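If iabak set up a systemd user timer for the maintenance job, something like the following keeps it running and lets you confirm it is scheduled (a sketch; the exact timer name depends on your setup):
<pre>
# Allow user services and timers to keep running after you log out.
loginctl enable-linger "$USER"

# List the user timers systemd knows about; the iabak maintenance timer
# should show up here (its exact name may vary).
systemctl --user list-timers
</pre>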
= Status =
* [http://iabak.archiveteam.org/ Graphs of status]
* [http://iabak.archiveteam.org/stats/ raw data]
= Implementation plan =


For more information, see http://git-annex.branchable.com/design/iabackup/


== First tasks ==


Some first steps to work on:
* Get a list of files, checksums, and urls. (done)
* Write a script to generate a git-annex repository with 100k files from the list. (done)
* Set up a server to serve up the git repos. Any linux system with a few hundred gb of disk and ssh and git-annex installed will do. It needs to accept incoming ssh connections from registered clients, only letting them run git-annex-shell. (done)
* Put one shard repo on the server to start. (done)
* Manually register a few clients to start, have them manually download some files, and `git annex sync` their state back to the server. See how it all hangs together. (done)
* Get that first shard backed up enough to be able to say, "we have successfully backed up 1/1770th of the IA!" (done!)


== Middle tasks ==


* get fscking and dead client expiry working (done)
* Test a restore from a shard. Tell git-annex the content is no longer in the IA. Get the clients to upload it to our server.
* Write client registration interface, which generates the client's ssh private key, git-annex UUID, and sends them to the client (done)
* Help the user get the iabak-cronjob set up.
* Email expire warnings (done)


== Later tasks ==


* Create all 1770 shards, and see how that scales.
* Write pre-receive git hook, to reject pushes of branches other than the git-annex branch (already done; see the sketch after this list), and prevent bad/malicious pushes of the git-annex branch
* Client runtime environment (docker image maybe?) with warrior-like interface (all it needs to do is configure things and get git-annex running)
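For illustration, a minimal pre-receive hook that enforces the first half of that item could look like this (a sketch only, not the hook actually deployed on the server):
<pre>
#!/bin/sh
# Reject any push that tries to update a ref other than the git-annex branch.
# Note: exiting non-zero from pre-receive rejects the entire push.
while read oldrev newrev refname; do
    if [ "$refname" != "refs/heads/git-annex" ]; then
        echo "rejected: only the git-annex branch may be pushed (got $refname)" >&2
        exit 1
    fi
done
exit 0
</pre>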


== SHARD1 ==


This is our first part of the IA that we want to get backed up. If we succeed, we will have backed up 1/1770th of the Internet Archive.
This git-annex repository contains 100k files, the entire collections "internetarchivebooks" and "usenethistorical".


Some stats about the files this repository is tracking (as shown by `git annex status`):

* number of files: 103343
* total file size: 2.91 terabytes
* size of the git repository itself was 51 megabytes to start
* after filling up shard1, the git repo had grown to 196 mb
* We aimed for 4 copies of every file downloaded, but a few files got 5-8 copies made, due to e.g. races and manual downloads. We want to keep an eye on this with future shards.
* We got SHARD1 fully downloaded between April 1st and 6th. It took a while to ramp up as people came in, so later shards may download faster. Also, 2/3 of SHARD2 was downloaded during the same time period.

To access this shard manually:

'''git clone SHARD1@124.6.40.227:shard1'''

You need to get your .ssh/id_rsa.pub added to be able to access this. Ask closure on IRC (EFNET #internetarchive.bak) for now.

To play with this, just git clone it, then start git-annex downloading some of the files to back up: '''git annex get --not --copies 2'''

(That will back up any files that don't already have 2 known copies, counting the IA as one copy. If it doesn't find enough files, change to --copies 3, and so on.)

After you have downloaded some files, let the central repo know you're backing them up by running: '''git annex sync'''

''Note that because the IA census uses md5sums, you need git-annex version 5.20150205 or newer to run git-annex fsck in this repository.'' Older versions of git-annex will work for everything else, but not fsck. One way to install that version is https://git-annex.branchable.com/install/Linux_standalone/
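Putting those commands together, a typical manual session might look like this (a sketch; it assumes your ssh key has already been authorized as described above):
<pre>
# Clone the shard and enter it.
git clone SHARD1@124.6.40.227:shard1
cd shard1

# Optional: preview which files still lack 2 known copies.
git annex find --not --copies 2 | head

# Download files that don't yet have 2 known copies (the IA counts as one copy).
git annex get --not --copies 2

# Report what you are now storing back to the central repo.
git annex sync
</pre>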
 
== tuning your repo ==
 
So you want to back up part of the IA, but don't want it to take over your whole disk or internet pipe? Here are some tuning options you can use. Run these commands in the git repo you checked out.

'''git config annex.diskreserve 200GB'''

This will prevent git-annex from using the last 200 GB of your disk. Adjust to suit.

'''git config annex.web-options "--limit-rate=200k"'''

This will limit wget/curl to downloading at 200 KB/s. Adjust to suit.
 
= scalability testing =
 
== git-annex repo growth test ==
 
I made a test repo with 10000 files, added via git annex. After git gc --aggressive, .git/objects/ was 4.3M.
 
I wanted to see how having a fair number of clients each storing part of that and communicating back what they were doing would scale. So, I made 100 clones of the initial repo, each representing a client.
 
Then in each client, I picked 300 files at random to download. This means that on average, each file would end up replicated to 3 clients. I ran the downloads one client at a time, so as to not overload my laptop.
 
Then I had each client sync its git-annex state back up with the origin repo. (Again sequentially.)
After this sync, the size of the git objects grew to 24M; git gc --aggressive reduced it to 18M.
 
Next, I wanted to simulate the maintenance stage, where clients fsck every month and report back about the files they still have.
I dummied up the data that would be generated by such a fsck and ran it in each client (just setting the location log for each present file to 1).
After syncing back to the origin repo and running git gc --aggressive, the size of the git objects grew to 19M, so about 1 MB of growth per month.


Summary: Not much to worry about here. If, after several years, the git-annex info in the repo got too big, git-annex forget can be used to forget old history and drop it back down to starting levels. This leaves plenty of room to grow, either to 100k files or to 1000 clients. And this is just simulating one shard, of thousands.

The script used for this test: http://tmp.kitenet.net/git-annex-growth-test.sh
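That script isn't reproduced here, but a scaled-down sketch of the test described above (written from the prose description, not taken from the script; the parameters and repo names are illustrative) might look like:
<pre>
#!/bin/sh
# Scaled-down re-creation of the growth test described above.
set -e

NUMFILES=1000        # original test: 10000 files
NUMCLIENTS=10        # original test: 100 clients
FILESPERCLIENT=30    # original test: 300 files per client (~3 copies of each file on average)

# Build an origin repo full of small annexed files.
git init origin
cd origin
git annex init origin
for i in $(seq 1 $NUMFILES); do echo "file $i" > "file$i"; done
git annex add .
git commit -m "add test files"
git gc --aggressive
du -sh .git/objects
cd ..

# Each simulated client clones the repo, downloads a random subset of files,
# and syncs its git-annex state back to the origin.
for c in $(seq 1 $NUMCLIENTS); do
    git clone origin "client$c"
    (
        cd "client$c"
        git annex init "client$c"
        git annex get $(ls file* | shuf -n $FILESPERCLIENT)
        git annex sync
    )
done

# Measure how much the location-tracking data grew the origin's git objects.
cd origin
git gc --aggressive
du -sh .git/objects
</pre>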
== Admin details ==

See [[INTERNETARCHIVE.BAK/admin]].
