Talk:INTERNETARCHIVE.BAK

From Archiveteam
Revision as of 04:19, 5 March 2015

[Image: Internet archive animated text.gif]

A note on the end-user drives

I feel it is really critical that the drives or directories sitting in the end-user's location be absolutely readable, as a file directory, containing the files. Even if that directory is inside a .tar or .zip or .gz file. Making it into an encrypted item should not happen, unless we make a VERY SPECIFIC, and redundant channel for such a thing. --Jscott 00:01, 2 March 2015 (EST)

  • A possibility is that it's encrypted but easy to decrypt, so that it's harder to fake hashes for it but it can still be unpacked into useful items even without the main support network there.
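The "encrypted but easy to decrypt" idea above can be sketched as a deterministic transform keyed by a publicly published passphrase: the stored bytes (and therefore their hashes) differ from the plaintext, but anyone holding the public passphrase can unpack the data with no support network. This is a toy illustration using a SHA-256 keystream, not a vetted cipher; the passphrase and function names are assumptions, not anything the project has decided on.

```python
import hashlib

# Assumption: the passphrase would be published openly alongside the project.
PUBLIC_PASSPHRASE = b"internetarchive.bak"

def _keystream(key: bytes, length: int) -> bytes:
    """Generate `length` bytes by hashing key + counter (toy construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def transform(data: bytes, passphrase: bytes = PUBLIC_PASSPHRASE) -> bytes:
    """XOR with a passphrase-derived keystream; applying it twice restores the input."""
    ks = _keystream(passphrase, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Since XOR is self-inverse, `transform(transform(x))` gives back `x`: the stored blob hashes to something unrelated to the plaintext's hash, yet unpacking requires nothing beyond the public passphrase.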

Potential solutions to the storage problem

Tahoe-LAFS

  • Tahoe-LAFS - decentralized (mostly), client-side encrypted file storage grid
    • Requires central introducer and possibly gateway nodes
    • Any storage node could perform a Sybil attack until a feature for client-side storage node choice is added to Tahoe.

git-annex

  • git-annex - allows tracking copies of files in git without them being stored in a repository
    • Also provides a way to know what sources exist for a given item. git-annex is not (AFAIK) locked to any specific storage medium. -- yipdw

Right now, git-annex seems to be in the lead. Besides being flexible about the sources of the material in question, the developer is a member of Archive Team AND has been addressing all the big-picture problems for over a year.

A fully worked-out proposed design for using git-annex for this: https://git-annex.branchable.com/design/iabackup/ -- joeyh
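As a concrete sketch of what a volunteer's repository might look like (the repo layout, description string, and file/item names here are hypothetical examples; the commands themselves are standard git-annex):

```shell
# Hypothetical volunteer setup; paths and identifiers are examples only.
git init ia-backup && cd ia-backup
git annex init "volunteer-drive-01"

# Register a file by its archive.org URL; git-annex records the URL and
# tracks the content by checksum once it has been downloaded.
git annex addurl --file=softwarelibrary/example.zip \
    https://archive.org/download/example-item/example.zip

# Ask for at least 3 copies of each file across all known repositories.
git annex numcopies 3

# Show which repositories claim to have a given file.
git annex whereis softwarelibrary/example.zip
```

This is the property that makes git-annex attractive here: `whereis` answers "what sources exist for a given item," and `numcopies` expresses the redundancy target, without tying the content to any one storage medium.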

Other

  • Permacoin (https://www.cs.umd.edu/~elaine/docs/permacoin.pdf) - Repurposing Bitcoin Work for Data Preservation
  • Compact Proofs of Retrievability (https://cseweb.ucsd.edu/~hovav/dist/verstore.pdf)

See Also

Don't forget about the options enumerated as part of the Valhalla project (particularly the software options); this is much the same thing.

Other anticipated problems

  • Users tampering with data - how do we know data a user stored has not been modified since it was pulled from IA?
    • Proposed solution: have multiple people make their own collection of checksums of IA files. --Mhazinsk 00:10, 2 March 2015 (EST)
      • All IA items already include checksums in the _files.xml. So there could be an effort to back up these xml files in more locations than the data itself (should be feasible since they are individually quite small).
  • "Dark" items (e.g. the "Internet Records" collection)
    • There are classifications of items within the Archive that should be considered for later waves, and not this initial effort. That includes dark items, television, and others.
      • It seems like this would include a lot of what we would want to back up the most, though; e.g. a substantial percentage of the scanned books are post-1923 and not in the public domain.
  • Data which may be illegal in certain countries/jurisdictions and expose volunteers to legal risk (terrorist propaganda, pornography, etc.)
    • Interesting! Several solutions come to mind. --Jscott 02:35, 2 March 2015 (EST)
  • User bandwidth (particularly upstream)
  • Latency in swapping disks - assume we may be using cold storage
    • Tiered storage? e.g. one for cloud, one for online trusted users' storage, and one for cold storage
  • User laziness
    • In-browser solution like JSMESS. Store fragments in IndexedDB with the key as the hash of data. Emscripten tahoe-lafs! --Chfoo 22:21, 2 March 2015 (EST)
  • User motivation
    • Gamify it. Add a leaderboard with scores, etc. --Chfoo 23:18, 2 March 2015 (EST)
  • User trust
    • Build a user community --Chfoo 23:18, 2 March 2015 (EST)
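The _files.xml idea above turns naturally into a verification step on the volunteer side: parse the item's _files.xml, recompute each local file's MD5, and report anything missing or mismatched. A minimal sketch (the XML layout of `<file name=...>` elements with an `<md5>` child matches what IA items ship; the function name and return format are my own):

```python
import hashlib
import os
import xml.etree.ElementTree as ET

def verify_item(files_xml_path: str, item_dir: str) -> list:
    """Return a list of (filename, reason) problems; an empty list means all files check out."""
    problems = []
    for f in ET.parse(files_xml_path).getroot().iter("file"):
        name = f.get("name")
        md5_elem = f.find("md5")
        if md5_elem is None:  # some entries lack an md5 (e.g. the xml file itself)
            continue
        path = os.path.join(item_dir, name)
        if not os.path.exists(path):
            problems.append((name, "missing"))
            continue
        h = hashlib.md5()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != md5_elem.text.strip():
            problems.append((name, "md5 mismatch"))
    return problems
```

Because the xml files are small, many independent parties can hold copies of them and run this check against any volunteer's disk, which also addresses the tampering concern above.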

Project Lab and Corner

  • Projects are much easier with the Internet Archive tool, available here.
  • There is a _files.xml in each item indicating which files are original and which are derivatives.
  • Please step forward and write a script that, given a collection, finds all the items in that collection and adds up all the sizes of the original files.
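A sketch of the requested script, using IA's public advancedsearch and /metadata/ endpoints via the standard library (those two URLs are real IA APIs; the function names, the single-page search, and the example collection name are illustrative assumptions). The summing logic is a pure function so it can be checked without network access:

```python
import json
import urllib.parse
import urllib.request

def original_bytes(files: list) -> int:
    """Sum the sizes of files marked source="original" in an item's file list."""
    return sum(int(f["size"]) for f in files
               if f.get("source") == "original" and "size" in f)

def collection_identifiers(collection: str, rows: int = 10000) -> list:
    """Query IA's advancedsearch API for item identifiers in a collection (first page only)."""
    q = urllib.parse.urlencode({"q": "collection:%s" % collection,
                                "fl[]": "identifier", "rows": rows,
                                "page": 1, "output": "json"})
    with urllib.request.urlopen("https://archive.org/advancedsearch.php?" + q) as r:
        docs = json.load(r)["response"]["docs"]
    return [d["identifier"] for d in docs]

def collection_original_size(collection: str) -> int:
    """Add up the original-file sizes of every item in the collection."""
    total = 0
    for ident in collection_identifiers(collection):
        with urllib.request.urlopen("https://archive.org/metadata/%s" % ident) as r:
            total += original_bytes(json.load(r).get("files", []))
    return total

if __name__ == "__main__":
    print(collection_original_size("computermagazines"))  # example collection name
```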

Some results so far:

Collection          Link  Total Size                Original Files Size     % of Total
Ephemeral Films     [1]   10971882551213 (10.9 TB)  9453160185702 (9.4 TB)  86%
Computer Magazines  [2]   3392870124693 (3.3 TB)    1897118607284 (1.8 TB)  55%
Software Library    [3]   63140205942 (63.5 GB)     61142015946 (61.5 GB)   96%

Rough Count

According to one of the internal counters, there are 24,598,934 "items" at the Archive. This number should be considered rough and suspect but can give some insight into the scope of the project.

Case Studies

If you implement it, will users use it?

BOINC

  1. Why do people participate in BOINC projects?
  2. Why do projects use BOINC?
  3. How does BOINC keep track of work units?
  4. How does BOINC deal with bad actors?
  5. Why do BOINC projects share project users and points among other projects?
  6. What makes people download the client software and install it?

Stack Overflow

  1. Q & A sites existed before Stack Overflow. What makes Stack Overflow so successful?
  2. How does Stack Overflow eliminate bad questions and answers?
  3. What makes Stack Exchange grow so large?
  4. How does it deal with spam?