WikiTeam, we preserve wikis
|Project status||Special case|
|Archiving status||In progress... (manual)|
|Project source||WikiTeam GitHub|
|IRC channel||#wikiteam (on hackint)|
WikiTeam software is a set of tools for archiving wikis. The tools work on MediaWiki wikis, but we want to expand them to other wiki engines. As of 2019, WikiTeam has preserved more than 250,000 wikis.
There are two completely separate projects under the umbrella of WikiTeam:
- The archival of the wikis in the form of XML dumps. This is what most of this page is about.
- The archival of external links found in wikis to WARCs. See the Links warrior project section.
The archival of the wikis themselves to WARCs is also desirable but has not been attempted yet.
The total number of MediaWiki wikis is unknown, but some estimates exist.
According to WikiApiary, which is the most up-to-date database, there are 21,139 independent wikis (1,718 of them semantic) and 4,819 wikis in wikifarms as of 2018-08-02. However, it doesn't include the 400,000+ Wikia wikis, and the coverage of independent wikis can certainly be improved.
According to Pavlo's list, generated in December 2008, there were around 20,000 wikis. This list was imported into WikiApiary.
The largest known wikis:
- By number of pages: Wikimedia Commons (77 million), Wikidata (72 million), English Wikipedia (49 million), DailyWeeKee (35 million), WikiBusiness (22 million).
- By number of files: Wikimedia Commons (57 million), English Wikipedia (800,000).
As of 2019, our collection at Internet Archive holds dumps for 250,000 wikis (including independent, wikifarm wikis, some packages of wikis and Wiki[pm]edia).
There are also wikifarms with hundreds of wikis. Here we only create pages for the farms we have some special information about that we don't want to lose (like archiving history and tips). For a full list, see the WikiApiary wikifarms main page.
Before backing up a wikifarm, try to update the list of wikis for it. There are Python scripts to generate those lists for many wikifarms.
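The exact script depends on the farm, but the usual approach is to scrape whatever index of wikis the farm exposes and normalize it into one URL per line. A minimal sketch in shell, assuming a hypothetical farm whose index page lists its subdomains (the URL and pattern are made up for illustration):
curl -s https://wikifarm.example.com/wikis | grep -oE 'https?://[a-z0-9-]+\.wikifarm\.example\.com' | sort -u > wikifarm.example.com-list.txt # extract and deduplicate wiki URLs from the index page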
|Wikifarm||Wikis||Status||Dumps||Comments|
|Battlestar Wiki (site)||8||Online||1|||
|EditThis (site)||1,350||Unstable||1,297||Most dumps were done in 2014. This wikifarm is not well covered in WikiApiary.|
|elwiki.com (site)||Unknown||Offline||None||Last seen online in 2008. There are no dumps; the content is presumably lost. Perhaps some pages are in the Wayback Machine.|
|Miraheze (site)||2,319||Online||~2,200||Non-profit. Dumps were made in September 2016; more were uploaded in 2019.|
|Neoseeker.com (site)||229||Online||159||Check why there are dozens of wikis without dumps.|
|Orain (site)||425||Offline||380||Last seen online in September 2015. Dumps were made in August 2013, January 2014 and August 2015.|
|Referata (site)||156||Unstable||~80||Check why there are dozens of wikis without dumps.|
|ScribbleWiki (site)||119||Offline||None||Last seen online in 2008. There are no dumps; the content is presumably lost. Perhaps some pages are in the Wayback Machine.|
|ShoutWiki (site)||1,879||Online||~1,300||Check why there are dozens of wikis without dump.|
|TropicalWikis (site)||187||Offline||152||Killed off in November 2013. Allegedly pending a move to Orain (which later went offline too). Data from February 2013 and earlier saved.|
|Wiki-Site (site)||5,839||Online||367||Dumps not uploaded to the Internet Archive yet.|
|Wikia (site)||400,000||Online||300,000+||Help:Database download, Their dumping code|
Wikis to archive
Please add a wiki to WikiApiary if you want someone to archive it sooner or later; or tell us on the #wikiteam IRC channel (on hackint) if it's particularly urgent. Remember that there are thousands of wikis we don't even know about yet.
You can help by downloading wikis yourself. If you don't know where to start, pick a wiki that has not been archived yet from the lists on WikiApiary. You can also edit those pages to link existing dumps; you'll help others focus their work.
Examples of huge wikis:
- Wikipedia - arguably the largest and one of the oldest wikis on the planet. It offers public backups (also for sister projects): https://dumps.wikimedia.org
- They have some mirrors but not many.
- The transfer of the dumps to the Internet Archive is automated and is currently managed by Hydriz.
- Wikimedia Commons - a wiki of media files available for free usage. It offers public backups: https://dumps.wikimedia.org
- But there is no image dump available, only the image descriptions.
- So we made one ourselves: http://archive.org/details/wikimediacommons
- Wikia - a website that allows the creation and hosting of wikis. It doesn't make regular backups.
We're trying to decide which other wiki engines to work on: suggestions needed!
Tools and source code
Official WikiTeam tools
- WikiTeam on GitHub
- dumpgenerator.py to download MediaWiki wikis: python dumpgenerator.py --api=https://www.archiveteam.org/api.php --xml --images (a resume example follows this list)
- wikipediadownloader.py to download Wikipedia dumps from download.wikimedia.org: python wikipediadownloader.py
- Scripts of a guy who saved Wikitravel
- OddMuseWiki backup
- UseModWiki: use wget/curl and raw mode (might have a different URL scheme, like this)
- Some wikis: UseMod:SiteList
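If a dumpgenerator.py run is interrupted, it can resume from its existing dump directory instead of starting over. A minimal sketch, assuming the directory produced by the first run (the path here is just an example of dumpgenerator.py's naming pattern):
python dumpgenerator.py --api=https://www.archiveteam.org/api.php --xml --images --resume --path=archiveteamorg-20190101-wikidump # continue an interrupted dump where it left off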
Most of our dumps are in the wikiteam collection at the Internet Archive. If you want an item to land there, just upload it to the "opensource" collection and remember the "WikiTeam" keyword; it will be moved at some point. When you've uploaded enough wikis, you'll probably be made a collection admin to save others the effort of moving your items.
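If you upload by hand rather than with uploader.py, a minimal sketch using the internetarchive command-line tool (assumes you've run ia configure; the identifier and filename are examples):
ia upload examplewiki-20190101-wikidump examplewiki-20190101-wikidump.7z --metadata="collection:opensource" --metadata="subject:wikiteam" # the wikiteam keyword lets admins find and move the item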
For a manually curated list, visit the download section on GitHub.
Some tips follow. Don't issue commands you don't understand, especially batch commands which use loops or find and xargs, unless you're prepared to lose all the data you've downloaded.
When downloading Wikipedia/Wikimedia Commons dumps, pages-meta-history.xml.7z and pages-meta-history.xml.bz2 contain the same data, but the 7z is usually smaller (better compression ratio), so use the 7z.
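If you already downloaded the bz2 and want the smaller file, you can recompress it yourself. A sketch, assuming 7z is installed (-si reads the stream from stdin and stores it under the given name):
bzcat pages-meta-history.xml.bz2 | 7z a -sipages-meta-history.xml pages-meta-history.xml.7z # decompress the bz2 and recompress as 7z in one pipe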
To download a mass of wikis with N parallel threads, just split your full $list into N chunks, then start N instances of launcher.py (tutorial), one for each list, as sketched below.
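A minimal sketch for 4 instances, assuming GNU split and a list file named list.txt (all names are examples):
split -a 1 -d -n l/4 list.txt chunk_ # cut list.txt into 4 chunks without breaking lines
for f in chunk_*; do python launcher.py "$f" & done # start one launcher.py instance per chunk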
- If you want to upload dumps as they're ready and clean up your storage: at the same time, in a separate window or screen, run a loop of the kind while true; do ./uploader.py $list --prune-directories --prune-wikidump; sleep 12h; done; (the sleep ensures each run has something to do).
- If you're using --xmlrevisions, dumpgenerator.py will use much less memory because it won't get giant blobs of XML from Special:Export when a big page has a thousand revisions or more. You can then afford to run 100 instances of launcher.py/dumpgenerator.py with just 2 cores and 8 GiB of RAM. Watch your ulimit for the number of files, and individual and total memory: 7z may consume up to 5 GiB of RAM for the biggest dumps (over 10 GiB). CPU usage tends to be lower at the beginning (launcher.py is not yet launching any 7z tasks because few dumps have completed) and the disk is usually hit harder at the beginning of a resume (launcher.py needs to scan the directories multiple times and dumpgenerator.py needs to read the lists of titles, XML and image directories). Before increasing concurrency, make sure you have enough resources for those stressful times, not just for the easy ride at the beginning of the dump.
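A sketch of such an invocation, reusing the example API URL from above:
python dumpgenerator.py --api=https://www.archiveteam.org/api.php --xml --xmlrevisions --images # fetch revisions through the API instead of Special:Export
ulimit -n # check your open-files limit before raising concurrency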
- If you want to go advanced and run really many instances, use tmux! Use tmux new-window to launch several instances in the same session. Every now and then, attach to the tmux session and look (ctrl-b f) for windows stuck on "is wrong", "is slow" or "......" loops, or which are inactive. Even with a couple of cores you can run a hundred instances; just make sure to have enough disk space for the occasional huge ones (tens of GB).
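A minimal sketch, reusing the chunk files from the split example above:
tmux new-session -d -s wikiteam # start a detached session for the dumps
for f in chunk_*; do tmux new-window -t wikiteam "python launcher.py $f"; done # one window per chunk
tmux attach -t wikiteam # attach later to inspect the windows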
- If you get close to 1,000 instances of launcher.py, it may be too much for tmux to handle. You're probably not going to look at the output of hundreds of windows anyway, so just run everything in the background with xargs, monitor the crashes and then check the directories manually.
split -a 4 -d -l 10 wikistoarchive.txt wt_ ; ls -1 wt_* | xargs -n1 -I§ -P300 sh -c "python launcher.py § 2>&1 > /dev/null ; "
If you have many wikidump directories, some of the following commands may be useful. Sometimes a dump is complete but the 7z is missing or broken (e.g. for lack of memory), or you're running low on disk space and can't wait for uploader.py to verify the uploads one by one. A hint that a dump is complete is the presence of siteinfo.json: it means dumpgenerator.py thought the XML was done, though an image download may still be running.
- Check errors in 7z files. It's better to avoid running uploader.py on many archives if you're not sure they're fine (for instance if you've not monitored crashes of dumpgenerator.py/launcher.py). It's much harder for other people to download the 7z files from archive.org and check them after they've been uploaded, and the presence of an archive may discourage someone else from making a new one even if the archive is not actually usable.
find -maxdepth 1 -type f -name "*7z" | xargs -n1 -P4 -I§ sh -c "7z l § 2>&1 | grep ^ERROR "
- Delete directories corresponding to a 7z file.
find -maxdepth 1 -name "*wikidump.7z" | cut -d/ -f2 | sed 's,.7z,,g' | xargs -P8 rm -rf
- If launcher.py has failed to create 7z files due to running low on resources, you can make them manually with a loop and a lower compression level.
find -maxdepth 1 -name siteinfo.json | cut -d/ -f2 | sed 's,wikidump,,g' | xargs -n1 -P6 -I§ sh -c "cd §wikidump/ ; 7za a -ms=off -mx=3 ../§history.xml.7z §history.xml §titles.txt errors.log index.html config.txt siteinfo.json Special:Version.html ; cp ../§history.xml.7z ../§wikidump.7z ; 7za a -mx=1 ../§wikidump.7z images/ §images.txt ; "
- Find the biggest ongoing wikidump directories: when you don't have something as nice as ncdu, something simple may suffice, like du -shc * | grep G or find -maxdepth 2 -type f -name "*xml" -size +1G.
You can download and seed the torrents from the archive.org collection. Every item has a "Torrent" link.
We also have dumps for our coordination wikis:
Anyone can restore a wiki using its XML dump and images.
Wikis.cc is restoring some sites.
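A minimal sketch of a restore into a fresh MediaWiki install, using MediaWiki's standard maintenance scripts (the filenames are examples):
php maintenance/importDump.php examplewiki-20190101-history.xml # import pages and revisions from the XML dump
php maintenance/importImages.php ./images # register the dumped image files
php maintenance/rebuildrecentchanges.php # rebuild recent changes after the import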
Links warrior project
We preserve external links used in wikis
|Project status||Special case|
|Archiving status||In progress... (dormant since 2017)|
|IRC channel||#wikiteam (on hackint)|
There is a (currently dormant) warrior project to archive external links used in wikis. The target format for this archival is WARC. The data from this project is uploaded to this collection on the Internet Archive.
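The warrior does the crawling, but the target format is plain WARC, which stock wget can also produce if you want to grab a link yourself (the URL and filename are examples, not the project's actual pipeline):
wget --warc-file=examplewiki-links --page-requisites "https://example.com/some-external-link" # fetch the link and its page requisites into examplewiki-links.warc.gz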
- Websites - WikiApiary
- Pavlo's list of wikis (mediawiki.csv) (backup)
- WikiIndex Statistics
- Comparison of wiki hosting services
- List of largest wikis
- List of largest wikis in the world
- Wikimedia Downloads Historical Archives
- Dump of Nostalgia, an ancient version of Wikipedia from 2001
- WikiTeam collection at Internet Archive
- battlestarwikiorg - dumps
- bluwiki - dumps
- communpedia - dumps
- editthis.info - list of wikis
- editthis.info - dumps
- elwiki.com - list of wikis
- elwiki.com - dumps
- We're sorry about the downtime we've been having lately
- miraheze - dumps
- neoseeker.com - list of wikis
- neoseeker.com - dumps
- orain.com - list of wikis
- orain - dumps
- Orain wikifarm dump (August 2013)
- referata.com - list of wikis
- referata.com - dumps
- Referata wikifarm dump 20111204
- Referata wikifarm dump (August 2013)
- scribblewiki.com - list of wikis
- scribblewiki.com - dumps
- What is ScribbleWiki?
- shoutwiki.com - list of wikis
- shoutwiki.com - dumps
- ShoutWiki wikifarm dump
- sourceforge - dumps
- tropicalwikis.com - list of wikis
- tropicalwikis.com - dumps
- wiki-site.com - list of wikis
- wikia.com - list of wikis
- wikihub - dumps
- wiki.wiki - list of wikis
- wikkii.com - dumps
- WikiTeam on GitHub
- WikiIndex - an index of wikis
- S23 wikistats - stats for over 40,000 wikis
- Comparison of wikifarms
- Wikipedia Archive
Knowledge and Wikis: Battlestar Wiki · BluWiki · Communpedia · EditThis · elwiki.com · Miraheze · Neoseeker.com · Orain · Referata · ScribbleWiki · ShoutWiki · Sourceforge · TropicalWikis · Wik.is · Wiki.Wiki · Wiki-Site · Wikia · Wikidot · WikiHub · Wikispaces · Wikkii · YourWiki.net