[[File:Librarianmotoko.jpg|200px|right|thumb|Imagine Motoko Kusanagi as an archivist.]]


'''ArchiveBot''' is an [[IRC]] bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, [[Wget_with_WARC_output|records it in a WARC]] file, and then uploads that WARC to ArchiveTeam servers for eventual injection into the [https://archive.org/search.php?query=collection%3Aarchivebot&sort=-publicdate Internet Archive]'s Wayback Machine (or other archive sites).
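
For a sense of what that means in practice, the effect is broadly similar to a recursive wget crawl with WARC output, e.g. something like <code>wget --mirror --page-requisites --no-parent --warc-file=example https://example.com/</code> (illustrative only; ArchiveBot drives its own crawler with per-job settings, not this exact command).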


== Details ==


To use ArchiveBot, drop by the IRC channel [http://chat.efnet.org:9090/?nick=&channels=%23archivebot&Login=Login '''#archivebot'''] on EFNet. To interact with ArchiveBot, you [http://archivebot.readthedocs.org/en/latest/commands.html issue '''commands'''] by typing them into the channel. Note that you will need channel operator (<code>@</code>) or voice (<code>+</code>) permissions in order to issue archiving jobs; please ask for assistance or leave a message describing the website you want to archive.
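
As a rough illustration only (the command reference linked above is authoritative, and job options vary), starting a job typically looks something like:

<pre>
!archive https://example.com/blog/       (recursively archive everything under this URL)
!archiveonly https://example.com/page    (save just that one page, without recursing)
</pre>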


The [http://dashboard.at.ninjawedding.org/3 '''dashboard'''] publicly shows the sites currently being downloaded. The [http://archivebot.at.ninjawedding.org:4567/pipelines pipeline monitor station] shows the status of deployed crawler instances. The [http://archive.fart.website/archivebot/viewer/ viewer] assists in browsing and searching archives.


You can also follow [https://twitter.com/archivebot @ArchiveBot] on [[Twitter]],<ref>Formerly known as [https://twitter.com/atarchivebot @ATArchiveBot]</ref> although its tweets may lag slightly behind the bot's current status.


== Components ==


IRC interface
:The bot listens for commands in the IRC channel and reports status back to the same channel. You can ask it to archive a whole website or a single webpage, check whether a URL has been saved, change the delay between requests, or add ignore rules to avoid crawling certain web cruft. This IRC interface is collaborative, meaning anyone with permission can adjust the parameters of a job. Note that the bot isn't a chat bot, so it will ignore you if it doesn't understand a command.


Dashboard
:The [http://dashboard.at.ninjawedding.org/3 '''ArchiveBot dashboard'''] is a web-based front-end displaying the URLs being downloaded by the various web crawls. Each URL line in the dashboard is categorized by its HTTP status code as a success, warning, or error, with warnings and errors highlighted in yellow and red. The dashboard also provides RSS feeds.
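
:A minimal sketch of that kind of classification (illustrative only; this is not the dashboard's actual code, and the exact status-code buckets are an assumption):

<pre>
# Illustrative sketch: bucket a dashboard line by its HTTP status code.
# The real dashboard's rules may differ; the thresholds here are assumptions.
def categorize(status_code):
    if status_code < 400:
        return "success"              # 2xx/3xx lines
    if status_code in (403, 404):
        return "warning"              # highlighted in yellow
    return "error"                    # other 4xx/5xx, highlighted in red
</pre>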


Backend
:The backend contains the database of all jobs and runs several maintenance tasks, such as trimming logs and posting tweets on Twitter. The backend is the centralized portion of ArchiveBot.


Crawler
:The crawler spiders the website and records everything it downloads into WARC files. The crawler is the distributed portion of ArchiveBot: volunteers run nodes connected to the backend, each managed by a supervisor script called a pipeline. The backend tells the pipelines what jobs to run. Once a crawl job has finished, the pipeline reports back to the backend and uploads the WARC files to the staging server.
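
:A minimal sketch of the fetch-and-record idea (this is not ArchiveBot's actual crawler; the example just uses the Python <code>requests</code> and <code>warcio</code> libraries to show what "download into a WARC" means):

<pre>
# Minimal sketch only: NOT ArchiveBot's crawler. Shows the basic idea of
# downloading URLs and recording the HTTP traffic into a WARC file,
# using the Python requests and warcio libraries.
from warcio.capture_http import capture_http
import requests  # note: must be imported after capture_http for capture to work

seed_urls = ["https://example.com/"]  # hypothetical seed list

with capture_http("example.warc.gz"):       # all HTTP traffic is recorded here
    for url in seed_urls:
        response = requests.get(url, timeout=30)
        print(url, response.status_code)

# A real pipeline would then upload example.warc.gz to the staging server.
</pre>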


Staging server
:The staging server, known as [[FOS|FOS (Fortress of Solitude)]], is the place where all the WARC files are temporarily uploaded. Once the current batch has been approved, the files will be uploaded to the Internet Archive for consumption by the Wayback Machine.
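
:The final move into the Internet Archive is handled by ArchiveTeam admins on the staging server; purely as an illustration of that step (the item identifier and metadata below are made up), an upload with the <code>internetarchive</code> Python library looks roughly like this:

<pre>
# Hypothetical illustration of pushing a finished WARC into an archive.org item
# using the internetarchive Python library. The identifier and metadata here
# are made up; the real process is run by ArchiveTeam admins.
from internetarchive import upload

upload(
    "example-archivebot-item",                 # hypothetical item identifier
    files=["example.warc.gz"],
    metadata={"mediatype": "web", "title": "Example ArchiveBot WARC"},
)
</pre>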


ArchiveBot's source code can be found at https://github.com/ArchiveTeam/ArchiveBot. [[Dev|Contributions welcomed]]! Any issues or feature requests may be filed at [https://github.com/ArchiveTeam/ArchiveBot/issues the issue tracker].


== People ==


The main server that controls the IRC bot, pipeline manager backend, and web dashboard is operated by [[User:yipdw|yipdw]], although a few other ArchiveTeam members were given SSH access in late 2017. The staging server [[FOS|FOS (Fortress of Solitude)]], where the data sits for final checks before being moved over to the Internet Archive servers, is operated by [[User:jscott|SketchCow]]. The pipelines are operated by various volunteers around the world. Each pipeline typically runs two or three web crawl jobs at any given time.


== Volunteer to run a Pipeline ==
As of November 2017, ArchiveBot has again started accepting applications from volunteers who want to set up new pipelines. You'll need to have a machine with:


* lots of disk space (40 GB minimum / 200 GB recommended / 500 GB atypical)
* 512 MB RAM (2 GB recommended, 2 GB swap recommended)
* 10 Mbps upload/download speeds (100 Mbps recommended)
* long-term availability (2 months minimum)
* always-on unrestricted internet access (absolutely no firewall/proxies/censorship/ISP-injected-ads/DNS-redirection/free-cafe-wifi)


Suggestion: the $40/month Digital Ocean droplets (4 GB memory / 2 CPU / 60 GB hard drive) running Ubuntu work pretty well.

If you have a suitable server available and would like to volunteer, please review the [https://github.com/ArchiveTeam/ArchiveBot/blob/master/INSTALL.pipeline Pipeline Install] instructions. Then contact ArchiveTeam members [[User:Asparagirl|Asparagirl]], [[User:astrid|astrid]], [[User:JAA|JAA]], [[User:yipdw|yipdw]], or other ArchiveTeam members hanging out in #archivebot, and we can hook you up, adding your machine to the list of approved pipelines so that it starts processing incoming ArchiveBot jobs.

== Installation ==


Installing ArchiveBot can be difficult. The [https://github.com/ArchiveTeam/ArchiveBot/blob/master/INSTALL.pipeline Pipeline Install] instructions are online, but they are tricky to follow.


But there is a <code>.travis.yml</code> automated install script for Travis CI that is designed to test ArchiveBot.

Since it's good enough for testing... it's good enough for installation, right? There must be a way to convert it into an installer script.
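
A rough sketch of that idea, assuming the repository's <code>.travis.yml</code> keeps its setup commands under the usual <code>before_install</code>/<code>install</code> keys (it may well need more massaging than this):

<pre>
# Rough sketch: replay the setup commands from a Travis CI config as an
# installer. Assumes the usual before_install/install/before_script keys;
# the real .travis.yml may need more hand-holding than this.
import subprocess
import yaml  # PyYAML

with open(".travis.yml") as f:
    config = yaml.safe_load(f)

for step in ("before_install", "install", "before_script"):
    commands = config.get(step) or []
    if isinstance(commands, str):
        commands = [commands]
    for command in commands:
        print("+ " + command)
        subprocess.run(command, shell=True, check=True)
</pre>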

== Disclaimers ==

# Everything is provided on a best-effort basis; nothing is guaranteed to work. (We're volunteers, not a support team.)
# We can decide to stop a job or ban a user if a job is deemed unnecessary. (We don't want to run up operator bandwidth bills and waste Internet Archive donations on costs.)
# We're not the Internet Archive. (We do what we want.)
# We're not the Wayback Machine. Specifically, we are not ia_archiver or archive.org_bot. (We don't run crawlers on behalf of other crawlers.)

Occasionally, we have had to ban blocks of IP addresses from the channel. If you think a ban does not apply to you but you cannot join the #archivebot channel, please join the main #archiveteam channel instead.

== Bad Behavior ==

If you are a website operator and you notice ArchiveBot misbehaving, please contact us on #archivebot or #archiveteam on EFnet (see top of page for links).

ArchiveBot understands robots.txt (please read the article) but does not follow any of its directives; it only uses it for discovering more links, such as sitemaps.
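
As a small illustration of that discovery step (not ArchiveBot's actual code; just the general idea of mining robots.txt for <code>Sitemap:</code> entries):

<pre>
# Illustration only: pull Sitemap: URLs out of a site's robots.txt for link
# discovery, without treating any of its directives as rules to obey.
import requests

def find_sitemaps(site):
    robots_url = site.rstrip("/") + "/robots.txt"
    response = requests.get(robots_url, timeout=30)
    sitemaps = []
    for line in response.text.splitlines():
        if line.strip().lower().startswith("sitemap:"):
            sitemaps.append(line.split(":", 1)[1].strip())
    return sitemaps

print(find_sitemaps("https://example.com"))
</pre>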

Also, please remember that we are not the Internet Archive.

== More ==

Like ArchiveBot? Check out our homepage and other projects!

== Notes ==
<references/>

