Archiving status: In progress...
IRC channel: (on EFnet)
wallbase.cc is a store of wallpapers and other high-resolution media typically scraped from chans' /hr, /wg, and /w boards.
WALL YOUR BASE ARE BELONG TO US. I'm... I'm sorry. I'll see myself out.
Wallbase's forums have gone down; they are indeed unavailable at http://wallbase.cc/forum. This has prompted concern that the site itself may follow. So far, there has been no announcement of a shut-down. However, the site's owner ("Yotoon") is MIA according to the #wallbase Twitter account (apparently run by staff), and the upload function has been disabled.
The staff are working on a project called WallHaven that might serve as a replacement. I don't know what the staff have access to, though presumably they have all of the metadata themselves. Should extra metadata be grabbed in our scrape?
Work thus far
THIS WILL BE A PROJECT, SCRIPTS ARE BEING WORKED ON. THIS WIKI WILL BE CHANGED VERY SOON.
So far there is no repository on GitHub; someone should change that. A small Ruby script is available on the discussion page, however.
User arkiver is currently scraping as much as he can, and user godane has done a small portion that is available on archive.org here. Does anyone plan to implement a seesaw instance and tracker? Grabbing gigs and gigs of images adds up pretty quickly; going by the work done thus far, the total backup looks to be about 300 GB.
User H2u has the first 100k (excluding those already removed from wallbase, which is most of them), including URLs and NSFW images, available here temporarily.
User pluesch is currently scraping as much as he can with H2u's script, and is also saving all available raw HTML for later analysis (source and tags). pluesch has the first 1.5m.
The domain implements rate limiting. From my own experience, I seem to recall it being picky about the Referer header at one point too, but a brief examination suggests that may no longer be the case.
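A minimal Python sketch of how a scraper could respect both constraints: throttle requests to one per fixed interval, and attach a Referer header in case the site still checks it. The delay value, user-agent string, and default referer below are assumptions for illustration, not measured or documented values for wallbase.cc.

```python
import time
import urllib.request

# Assumed politeness delay; wallbase.cc's actual rate limit is unknown.
DELAY_SECONDS = 2.0
_last_request = 0.0


def build_request(url, referer="http://wallbase.cc/"):
    """Build a GET request carrying a Referer header, in case the site
    still inspects it (uncertain; see above). The user-agent string is
    a placeholder, not an agreed-upon project identifier."""
    return urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (ArchiveTeam wallbase scrape)",
        "Referer": referer,
    })


def throttled_fetch(url):
    """Fetch a URL, sleeping first so that at most one request is made
    per DELAY_SECONDS across calls to this function."""
    global _last_request
    wait = DELAY_SECONDS - (time.time() - _last_request)
    if wait > 0:
        time.sleep(wait)
    _last_request = time.time()
    with urllib.request.urlopen(build_request(url)) as response:
        return response.read()
```

A real seesaw-based grab would handle retries and 429/403 responses as well; this only shows the throttle-plus-header shape.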