|Status|Offline (see Wallhaven)|
|IRC channel|#archiveteam-bs (on hackint)|
wallbase.cc was a store of wallpapers and other high-resolution media typically scraped from chans' /hr, /wg, and /w boards.
WALL YOUR BASE ARE BELONG TO US. I'm... I'm sorry. I'll see myself out.
Wallbase's forums have gone down; they are indeed unavailable at http://wallbase.cc/forum. This has prompted concern that the site itself may soon follow. So far there has been no announcement of a shutdown, but the site's owner ("Yotoon") is MIA according to the #wallbase Twitter account (which is apparently run by staff), and the upload function has been disabled. See the following:
The staff are working on a project called Wallhaven that might serve as a replacement. I don't know what the staff have access to, though presumably they have all of the metadata themselves. Should extra metadata be grabbed in our scrape?
Work thus far
User pluesch has all images (except the first 10k), all categories (category_id;category_name), all tags (tag_id;tag_name), and all image-to-tag relations (image_id;tag_id1;[...]). He will make them available somewhere as soon as possible.
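The metadata dumps described above are plain semicolon-separated lines. A minimal sketch of loading and joining them, assuming the formats given (the sample values and any filenames are hypothetical; the real dumps have not been published yet):

```python
import csv
from io import StringIO

# Hypothetical sample data in the dump formats described above:
# tags:      tag_id;tag_name
# relations: image_id;tag_id1;[...]
TAGS = "1;nature\n2;space\n3;abstract\n"
RELATIONS = "1001;1;2\n1002;3\n"

def load_tags(text):
    """Parse tag_id;tag_name lines into {tag_id: tag_name}."""
    return {row[0]: row[1] for row in csv.reader(StringIO(text), delimiter=";")}

def load_relations(text):
    """Parse image_id;tag_id1;[...] lines into {image_id: [tag_id, ...]}."""
    return {row[0]: row[1:] for row in csv.reader(StringIO(text), delimiter=";")}

tags = load_tags(TAGS)
relations = load_relations(RELATIONS)

# Resolve tag names for each image.
for image_id, tag_ids in relations.items():
    names = [tags.get(t, "?") for t in tag_ids]
    print(image_id, names)
```

The same pattern would apply to the categories dump (category_id;category_name), which has the same two-column shape as the tags file.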
THIS WILL BE A PROJECT; SCRIPTS ARE BEING WORKED ON. THIS WIKI PAGE WILL BE UPDATED VERY SOON.
So far there is no repository on GitHub; someone should change that. A small Ruby script is available on the discussion page, however.
User arkiver is currently scraping as much as he can, and user godane has archived a small portion, available on archive.org here. Does anyone plan to set up a Seesaw instance and tracker? Grabbing gigabytes of images adds up quickly; going by the work done thus far, the total backup looks to be about 300 GB.
The site implements rate limiting. From my own experience, I recall it being picky about the Referer header at one point too, but a brief check suggests that may no longer be the case.
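For anyone writing a grab script, the two constraints above (rate limiting and a possibly-required Referer header) can be handled client-side. A minimal sketch, assuming a 2-second delay is polite enough and that the site front page works as a Referer value (neither is documented; both are guesses):

```python
import time
import urllib.request

def polite_request(url, referer="https://wallbase.cc/", delay=2.0):
    """Build a request that sends a Referer header, and pause before
    issuing it to stay under the site's rate limit.

    The delay value and the Referer URL are assumptions, not
    documented limits; tune them against real responses.
    """
    time.sleep(delay)  # crude client-side throttle
    return urllib.request.Request(url, headers={
        "User-Agent": "ArchiveTeam wallbase grab (contact: #archiveteam-bs)",
        "Referer": referer,
    })

# Build a request without waiting (delay=0), then fetch it with
# urllib.request.urlopen(req) in a real grab loop.
req = polite_request("https://wallbase.cc/wallpaper/12345", delay=0)
print(req.get_header("Referer"))
```

A real Seesaw pipeline would handle retries and tracker check-ins on top of this; the sketch only covers the per-request politeness.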
Search "wallbase.cc" on archive.org