WikiTeam XML
WikiTeam, we preserve wikis
Status: Special case
Archiving status: In progress... (manual)
Archiving type: other
Project source: WikiTeam GitHub
IRC channel: #wikiteam (on hackint)

WikiBot
IRC bot run by DigitalDragon using WikiTeam3 tools
Status: Special case
Archiving status: In progress... (manual)
Archiving type: other
Project source: WikiBot GitHub
IRC channel: #wikibot (on hackint)

WikiTeam software is a set of tools for archiving wikis. They work on MediaWiki wikis, but we want to expand to other wiki engines. As of 2019, WikiTeam has preserved more than 250,000 wikis.

You can check our collection at Internet Archive, the source code on GitHub and some lists of wikis by status on WikiApiary. There's also a list of not yet archived wikis on WikiApiary.

There are two completely separate projects under the umbrella of WikiTeam:

  • The archival of the wikis in the form of XML dumps. This is what most of this page is about.
  • The archival of external links found in wikis to WARCs. See the Links warrior project section.

The archival of the wikis themselves to WARCs is also desirable but has not been attempted yet.

Current status

The total number of MediaWiki wikis is unknown, but some estimates exist.

According to WikiApiary, which is the most up-to-date database, there are 21,139 independent wikis (1,718 of them semantic) and 4,819 in wikifarms as of 2018-08-02.[1] However, it doesn't include the 400,000+ Wikia wikis, and its coverage of independent wikis is certainly incomplete.

According to Pavlo's list generated in December 2008, there are 20,000 wikis.[2] This list was imported into WikiApiary.

According to WikiIndex, there are 20,698 wikis.[3] The URLs in this project were added to WikiApiary in the past too.

A number of wikifarms have vanished and about 180 are still online.[4][5][6]

Most wikis are small, containing about 100 pages or less, but there are some very large wikis:[7][8]

  • By number of pages: Wikimedia Commons (77 million), Wikidata (72 million), English Wikipedia (49 million), DailyWeeKee (35 million), WikiBusiness (22 million).
  • By number of files: Wikimedia Commons (57 million), English Wikipedia (800,000).

The oldest dumps are probably some 2001 dumps of Wikipedia when it used UseModWiki.[9][10]

As of 2019, our collection at Internet Archive holds dumps for 250,000 wikis (including independent, wikifarm wikis, some packages of wikis and Wiki[pm]edia).[11]


There are also wikifarms with hundreds of wikis. Here we only create pages for the farms about which we have special information worth keeping (such as archiving history and tips). For a full list, see the WikiApiary wikifarms main page.

Before backing up a wikifarm, try to update the list of wikis for it. There are Python scripts to generate those lists for many wikifarms.
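
The scripts in the repository are farm-specific, but the general approach can be sketched in a few lines of shell. This is only an illustrative sketch, not one of the repository's scripts; candidates.txt and alive.txt are hypothetical file names:

# Keep the candidate URLs that answer like a live MediaWiki API.
while read -r url; do
  curl -s --max-time 10 "$url/api.php?action=query&meta=siteinfo&format=json" \
    | grep -q '"sitename"' && echo "$url" >> alive.txt
done < candidates.txt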

Wikis to archive

Please add a wiki to WikiApiary if you want someone to archive it sooner or later; or tell us on IRC (#wikiteam (on hackint)) if it's particularly urgent. Remember that there are thousands of wikis we don't even know about yet.

You can help by downloading wikis yourself. If you don't know where to start, pick a wiki that has not been archived yet from the lists on WikiApiary. You can also edit those pages to link existing dumps; that helps others focus their work.

Examples of huge wikis:

  • Wikipedia - arguably the largest and one of the oldest wikis on the planet. It offers public backups (also for sister projects):
    • They have some mirrors but not many.
    • The transfer of the dumps to the Internet Archive is automated and is currently managed by Hydriz.
  • Wikia - a website that allows the creation and hosting of wikis. Doesn't make regular backups.

We're trying to decide which other wiki engines to work on: suggestions needed!

Tools and source code

Official WikiTeam tools


MediaWiki Dump Generator

MediaWiki Client Tools' MediaWiki Dump Generator provides the dumpgenerator script, a Python 3 port of the original WikiTeam Python 2.7 script. It is run from the command line in a terminal.

The XML dump can include the full page history or only the most recent revisions. The images dump contains all file types with their associated descriptions. The siteinfo.json and SpecialVersion.html files record wiki features such as the installed extensions and skins. User account information won't be preserved.

Full instructions are at the MediaWiki Client Tools' MediaWiki Dump Generator GitHub repository.
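
For orientation, typical invocations look like this (the URL is a placeholder; run dumpgenerator --help for the authoritative flag list):

# Full-history XML dump plus all images:
dumpgenerator --api=https://wiki.example.org/w/api.php --xml --images
# Only the most recent revision of each page:
dumpgenerator --api=https://wiki.example.org/w/api.php --xml --curonly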


WikiTeam3

WikiTeam3 is Save the Web Project's updated fork of MediaWiki Dump Generator. This Python 3 script is run from the command line in a terminal and dumps the XML and files (including images).

As with MediaWiki Dump Generator, the XML dump can include the full page history or only the most recent revisions, the images dump contains all file types with their associated descriptions, and the siteinfo.json and SpecialVersion.html files record installed extensions and skins. Note that user account information won't be preserved.

Installation and usage instructions are at the WikiTeam3 GitHub repository.
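
For orientation, installation and a basic dump look roughly like this (package and command names as published by the WikiTeam3 project; the URL is a placeholder):

pip install wikiteam3 --upgrade
# Full-history XML dump plus files:
wikiteam3dumpgenerator https://wiki.example.org --xml --images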

Wiki dumps

Most of our dumps are in the wikiteam collection at the Internet Archive. If you want an item to land there, just upload it to the "opensource" collection and remember the "WikiTeam" keyword; it will be moved at some point. When you've uploaded enough wikis, you'll probably be made a collection admin to save others the effort of moving your items.
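
As an illustration, an upload with the ia command-line tool might look like this (identifier and file name are placeholders; the subject field carries the "WikiTeam" keyword, and originalurl follows the convention used by existing wikiteam items):

ia upload wikiexampleorg-20240101-wikidump wikiexampleorg-20240101-wikidump.7z \
  --metadata="collection:opensource" \
  --metadata="subject:wikiteam" \
  --metadata="originalurl:https://wiki.example.org/"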

For a manually curated list, visit the download section on GitHub.

There is another set of MediaWiki dumps hosted on Scott's website.


Some tips follow.

Before archiving or asking for archiving, check that the wiki is suitable to be archived:

  • Check the size of the wiki: very large wikis might overflow your disk or be hard to archive. Visit the Special:SpecialPages page and click through to the Special:MediaStatistics (or Special:ListFiles for older wikis) and Special:Statistics pages. You can also enter those pages directly in the search bar or edit the URL; for non-English wikis, they will redirect to the correct localized title. (A sketch querying the API directly follows this list.)
  • Check the date of the last upload of the wiki you are after, either in the browser or on the command line using the ia tool:
ia search 'originalurl:*examplewikiname*' | jq -r .identifier | xargs ia metadata | jq -r '.metadata.addeddate, .metadata.originalurl'
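
To check a wiki's size without clicking through special pages, you can also query the MediaWiki API directly; a minimal sketch (the URL is a placeholder):

# Returns page, article, edit, file and user counts as JSON:
curl -s "https://wiki.example.org/w/api.php?action=query&meta=siteinfo&siprop=statistics&format=json"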

Don't issue commands you don't understand, especially batch commands which use loops or find and xargs, unless you're ready to lose all the data you got.

When downloading Wikipedia/Wikimedia Commons dumps, pages-meta-history.xml.7z and pages-meta-history.xml.bz2 contain the same data, but the 7z is usually smaller (better compression ratio), so use the 7z.
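
If you want to verify that equivalence for a particular dump, both archives should decompress to byte-identical XML (file names abbreviated here; real dumps carry a wiki and date prefix):

7z x -so pages-meta-history.xml.7z | md5sum
bzcat pages-meta-history.xml.bz2 | md5sum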

To download a mass of wikis with N parallel threads, just split your full $list into N chunks, then start N instances of launcher.py (tutorial), one for each list.

  • If you want to upload dumps as they're ready and clean up your storage: at the same time, in a separate window or screen, run a loop of the kind while true; do ./uploader.py $list --prune-directories --prune-wikidump; sleep 12h; done; (the sleep ensures each run has something to do).
  • If you're using --xmlrevisions, dumpgenerator.py will use much less memory because it won't get giant blobs of XML from Special:Export when a big page has a thousand revisions or more. You can then afford to run 100 instances of launcher.py with just 2 cores and 8 GiB of RAM. Watch your ulimit for the number of open files and for individual and total memory: 7z may consume up to 5 GiB of RAM for the biggest dumps (over 10 GiB). CPU usage tends to be lower at the beginning (launcher.py is not yet launching any 7z task because few dumps have completed), and the disk is usually hit harder at the beginning of a resume (the directories need to be scanned multiple times and the lists of titles, XML and image directories need to be read). Before increasing concurrency, make sure you have enough resources for those stressful times, not just for the easy ride at the beginning of the dump.
  • If you want to go advanced and run really many instances, use tmux[1]! Use tmux new-window to launch several instances in the same session (see the sketch after this list). Every now and then, attach to the tmux session and look (ctrl-b f) for windows stuck on "is wrong", "is slow" or "......" loops, or which are inactive[2]. Even with a couple of cores you can run a hundred instances; just make sure you have enough disk space for the occasional huge ones (tens of GB).
  • If you get close to 1000 instances of launcher.py, it may be too much for tmux to handle. You're probably not going to look at the output of hundreds of windows anyway, so just run everything in the background with xargs, monitor the crashes and then check the directories manually.
    split -a 4 -d -l 10 wikistoarchive.txt wt_ ; ls -1 wt_* | xargs -n1 -I§ -P300 sh -c "python launcher.py § > /dev/null 2>&1"
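
For the tmux route mentioned above, a minimal sketch (session and window names are illustrative; the wt_* chunks come from a split like the one above):

tmux new-session -d -s wikiteam
for f in wt_*; do
  tmux new-window -t wikiteam -n "$f" "python launcher.py $f"
done
tmux attach -t wikiteam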

If you have many wikidump directories, some of the following commands may be useful. Sometimes a dump is complete but the 7z is missing or broken (e.g. for lack of memory), or you're running low on disk and can't wait for uploader.py to verify the uploads one by one. A hint of a complete dump is the presence of siteinfo.json: it means dumpgenerator thought the XML was done, though an image download may still be running.

  • Check for errors in the 7z files. It's better to avoid running uploader.py on many archives if you're not sure they're fine (for instance if you've not monitored crashes of launcher.py). It's much harder for other people to download the 7z files and check them after they've been uploaded, and the presence of an archive may discourage someone else from making a new one even if the existing archive is not actually usable.
    find . -maxdepth 1 -type f -name "*7z" | xargs -n1 -P4 -I§ sh -c "7z l § 2>&1 | grep ^ERROR"
  • Delete directories corresponding to a 7z file.
    find . -maxdepth 1 -name "*wikidump.7z" | cut -d/ -f2 | sed 's,.7z,,g' | xargs -P8 rm -rf
  • If launcher.py has failed to create 7z files due to running low on resources, you can create them manually with a loop and a lower compression level.
    find . -maxdepth 2 -name siteinfo.json | cut -d/ -f2 | sed 's,wikidump,,g' | xargs -n1 -P6 -I§ sh -c "cd §wikidump/ ; 7za a -ms=off -mx=3 ../§history.xml.7z §history.xml §titles.txt errors.log index.html config.txt siteinfo.json Special:Version.html ; cp ../§history.xml.7z ../§wikidump.7z ; 7za a -mx=1 ../§wikidump.7z images/ §images.txt"
  • Find the biggest ongoing wikidump directories: when you don't have something as nice as ncdu, something simple may suffice, like du -shc * | grep G or find . -maxdepth 2 -type f -name "*xml" -size +1G.
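
Building on the siteinfo.json hint above, a small sketch to list dump directories that look complete but have no 7z yet (it assumes the usual *-wikidump directory naming):

for d in *-wikidump; do
  base="${d%-wikidump}"
  # Complete XML (siteinfo.json written) but no archive produced yet:
  [ -f "$d/siteinfo.json" ] && [ ! -f "${base}-history.xml.7z" ] && echo "$d"
done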

BitTorrent downloads

You can download and seed the torrents from the collection. Every item has a "Torrent" link.
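
For example, the ia tool can fetch the .torrent of every item in the collection (Internet Archive torrent files are named <identifier>_archive.torrent):

ia search 'collection:wikiteam' --itemlist | while read -r id; do
  ia download "$id" --glob='*_archive.torrent'
done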

Old mirrors

  1. Sourceforge (also mirrored to another 26 mirrors)
  2. Internet Archive (direct link to directory)


We also have dumps for our coordination wikis.

Restoring wikis

Anyone can restore a wiki using its XML dump and images; some sites are being restored this way.
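
A minimal sketch using MediaWiki's standard maintenance scripts, run from the target wiki's installation directory (file and directory names are placeholders):

# Import page history from the XML dump:
php maintenance/importDump.php examplewiki-history.xml
# Import the dumped files:
php maintenance/importImages.php /path/to/examplewiki-wikidump/images
# Refresh statistics and recent changes after a large import:
php maintenance/initSiteStats.php
php maintenance/rebuildrecentchanges.php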

Links warrior project

WikiTeam links
We preserve external links used in wikis
Status: Special case
Archiving status: In progress... (dormant since 2017)
Archiving type: Unknown
Project source: wikis-grab
Project tracker: wikis
IRC channel: #wikiteam (on hackint)

There is a (currently dormant) warrior project to archive external links used in wikis. The target format for this archival is WARC. The data from this project is uploaded to this collection on the Internet Archive.


References

  1. Websites - WikiApiary
  2. Pavlo's list of wikis (mediawiki.csv) (backup)
  3. WikiIndex Statistics
  4. Wikifarms
  5. Comparison of wiki hosting services
  6. Category:WikiFarm
  7. List of largest wikis
  8. List of largest wikis in the world
  9. Wikimedia Downloads Historical Archives
  10. Dump of Nostalgia, an ancient version of Wikipedia from 2001
  11. WikiTeam collection at Internet Archive
  12. - list of wikis
  13. battlestarwikiorg - dumps
  14. bluwiki - dumps
  15. communpedia - dumps
  16. - list of wikis
  17. - dumps
  18. warcarchives
  19. Farm:EditThis
  20. - Special:Version
  21. - list of wikis
  22. - dumps
  23. We're sorry about the downtime we've been having lately
  24. - list of wikis
  25. - list of wikis
  26. miraheze - dumps
  27. - list of wikis
  28. - dumps
  29. - list of wikis
  30. orain - dumps
  31. Orain wikifarm dump (August 2013)
  32. - list of wikis
  33. - dumps
  34. Referata wikifarm dump 20111204
  35. Referata wikifarm dump (August 2013)
  36. - list of wikis
  37. - dumps
  38. What is ScribbleWiki?
  39. - list of wikis
  40. - dumps
  41. ShoutWiki wikifarm dump
  42. sourceforge - dumps
  43. - list of wikis
  44. - dumps
  45. - list of wikis
  46. wikihub - dumps
  47. - list of wikis
  48. list of wikis
  49. - dumps

External links
