Talk:Parodius Networking

From Archiveteam

The below content was originally on the main page of the article. --Aggroskater 21:30, 27 April 2012 (EDT)

Introduction

The owner of Parodius has made the decision to stop providing free hosting services for nonprofit web sites about classic video gaming.

However, the owner said in this forum post:

Each and every hosted person here has a right to define whether or not they want their content archived. Some may have robots.txt in place, others may not but might have other conditionals (some technical, some via footers/agreements on their page). Their data is their data; I am not the owner of their data. I cannot decide for them if they are comfortable with that.
If you feel comfortable with someone archiving your data like this, who is a third-party, I would recommend you contact them and offer/work out something. I'm fine with them archiving the home page, our FAQ, etc. -- sure, that's all public anyway. But our home page/etc. != users' content. Most of the site owners will be moving their stuff to other URLs, which means all the content/etc. will be available on the Internet just at a new URL.
Bandwidth = not free. The last thing I need is to find my 95th percentile at 20mbit because someone thinks bandwidth grows on trees.

So if AT wants to help, these ground rules are in place:

  1. AT should get in touch with each site's owner.
  2. The archiving shouldn't proceed too fast.
  3. Organize efforts so as not to duplicate.
  • Tepples is archiving nesdev.parodius.com, which consists of a static HTML site, a wwwThreads forum, and a phpBB 2 forum. See the forum topic about this archiving effort.
  • Tepples is also archiving wiki.nesdev.com, a MediaWiki site associated with nesdev.parodius.com.
  • The news posts and information pages directly under www.parodius.com are open for AT.
  • The owners of foxhack.net[1] and nesworld.com[2] do not want AT's help.

Thursday April 26 2012 Update

Thanks for the update! I read through the forum page listed. I'm Aggroskater, and while I'm not the original point of contact --nor the Head Honcho-- I'm eager to clarify what we do. I apologize if we gave a bad impression. I don't know precisely what was said and when, but if I were the operator of a long-standing community and I got a verbatim request along the lines of "we want a full archive of everything on the Parodius network," I too would be concerned. It's pretty vague. Is "full archive" just scrapes of public web content? Is it dumps of databases that could have all sorts of private information in them? What about FTP archives? It's definitely easier to just "mysqldump --opt -u $uname -p $dbname > backup-$(date '+%F-%H%M').sql" and reload the database elsewhere, but scrubbing the database of any and all private information could quickly become a very big chore (and knowing how most codebases are, you'd probably end up breaking the functionality of the site in doing so). I know I wouldn't want a site operator just blindly passing out database dumps with my passwords --hashed and salted or not-- to anyone who asked. I wouldn't want a disgruntled moderator or sysadmin to dump all of my PMs or my emails either. That being said, it usually comes down to developing a mirror suitable for use in the Wayback Machine.
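As a sketch of what a more privacy-conscious dump might look like (the database name "forumdb" and user "backupuser" are placeholders; the table names are the stock phpBB 2 ones), mysqldump's --ignore-table option can at least keep the obviously sensitive tables out of a backup:

```shell
#!/bin/sh
# Hypothetical sketch: build a mysqldump command for a phpBB 2 database
# that skips the tables holding credentials, private messages, and
# sessions. "forumdb" and "backupuser" are placeholder names.
DB=forumdb
CMD="mysqldump --opt -u backupuser -p $DB"
for t in phpbb_users phpbb_privmsgs phpbb_privmsgs_text phpbb_sessions; do
    CMD="$CMD --ignore-table=$DB.$t"
done
# Print the command rather than running it, since this is only a sketch
# and no database server is assumed to be present.
echo "$CMD > backup-$(date '+%F-%H%M').sql"
```

Even this only catches the tables you know about; custom mods and logging tables can still leak private data, which is part of why scrubbing a dump is such a chore.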


By and large, the majority of our work --or rather I should say, the projects I've been involved with-- amounts to finely targeted scrapes of public web content, with the intent to avoid duplication and the need for postprocessing as much as possible. We've come a long way from our ragtag beginnings, where archiving was quite literally a "sign up for a slot on this table in the wiki page and run this one-liner" experience. These days efforts are centralized with trackers, S3 and EC2 instances, and customized scripts for all of the really big projects. For Parodius Networking, an operation several orders of magnitude smaller than something like GeoCities or MobileMe, the focus has been on identifying specific hosts to archive and proceeding to crawl them at a reasonable pace. Depending on what precisely is being crawled, different command sets can be run to avoid duplicate and meaningless content. It's never fun to have to sift through hundreds of thousands of inodes because wget assumed that "&action=print", "&action=edit", "&printable=yes", "&redirect=no", "&section=$somesection", and every other combination or permutation of URL parameters would each yield unique content for the base URL.
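The parameter-explosion problem above can be sketched with wget's --reject-regex option (available in wget 1.14 and later). The hostname, the exact parameter list, and the crawl settings here are illustrative, not the actual project configuration; the same regex can be sanity-checked against a few sample URLs with grep, since both use POSIX extended regular expressions:

```shell
#!/bin/sh
# Illustrative reject pattern for MediaWiki/phpBB-style duplicate views:
# print/edit/printable/redirect/section variants of the same base page.
REJECT='[?&](action=(print|edit)|printable=yes|redirect=no|section=[0-9]+)'
# In a real crawl this would be handed to wget, e.g. (not run here):
#   wget --recursive --wait=2 --reject-regex="$REJECT" http://example.parodius.com/
# Sanity check: filter a few sample URLs the way wget would.
printf '%s\n' \
    'http://example.com/wiki/Main_Page' \
    'http://example.com/wiki/index.php?title=Main_Page&action=edit' \
    'http://example.com/wiki/index.php?title=Main_Page&printable=yes' \
    'http://example.com/forum/viewtopic.php?t=42' \
  | grep -Ev "$REJECT"
```

Only the base page and the forum topic survive the filter; the edit and printable variants are dropped before they would ever be fetched.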


But beyond the technical details and at the heart of the matter are deep-set philosophies surrounding such topics as content, copyrights, user rights, privacy, and all of the other concepts that get flung into this maelstrom brought upon us by the World Wide Web. These philosophies are all lobbied for and against by various members of the community who perceive they have something to gain or lose. I'd cheerfully set aside these issues and just soldier on if they weren't so firmly at the forefront of the rhetoric. I could spend tens if not hundreds of paragraphs talking about it, but I'll try to finish with only a few. I view ArchiveTeam as a library; indeed, we've worked with the Internet Archive on various occasions. Fundamentally, I view what we're doing as functionally equivalent to the archiving efforts of libraries and museums. We're not here to plagiarize content and counterfeit it as our own. We're not here to purloin user data and sell it to the highest bidder. We're here to archive. Given the perilous nature of online existence, our efforts thus far have been largely reactive, struggling to save history before it is rm -rf /'d into oblivion. I wouldn't be surprised if each of us had a story to tell of when a significant part of our residual self-image (I love that term; thank you, Morpheus) was lost into the void of inter-sector disk space, never to return again.


How our actions interface with copyright and intellectual property has become a legal minefield that makes "IANAL" an understatement. The cozy confines of physical property --I wrote this book, I printed X copies, libraries may keep X copies that get leased out one at a time-- are nonexistent in a digital age of near-instant and near-zero-cost replication. Every physical-to-digital analogy out there falls flat at some point and only highlights the need for newer, saner legislation. Personally, I agree with the general notion implied by "every [person] has a right to define whether or not they want their content archived." The devil is merely in the details: I hold that we all have that right and exercise it with every post and edit and comment we write to every website on the planet. The very action of writing my thoughts --my "intellectual property" if you will-- onto this publicly available article implies that I am OK with its ability to be read and transmitted openly, for that is the very nature of the medium in which I am speaking. It is nonsensical to suggest that I both want to post a public comment to a website but don't want someone else to archive it, to copy it, to read it; the nature of the medium is such that to read it is to archive it, to copy it, for that is the only way it will ever reach a user's browser.


I understand these ideas aren't perfect. I understand that a site may very well have a "Terms and Conditions" or "Terms of Service" agreement that at once lets everyone and no one see the content, or else have a technical set-up wherein one must log in or perform some other action to view certain content. My argument is twofold. First, on a practical level, I argue that motive is important. The motivations of archivists and their ilk are far different from those of pirates and their crew. One wishes to preserve content for all people and even for generations to come, while the other is interested in selfish, immediate consumption. Second, and on a more philosophical level, I argue that it is an exercise in futility to try to codify absolutist rules upon an entity when those rules run contrary to its very nature. Legislation may dictate that alcohol is a prohibited substance, but people will still go out of their way to consume it. Society may demand that its youth never fornicate, but the kids will still be kids. Conglomerates of rights-holders may petition to restrict the dissemination of their work, but man is a social and cultural animal that will assimilate it into the collective psyche. That the internet has made such assimilation easier is not a fault of the technology to be rectified, but a gift to be safeguarded and endowed to our descendants.


I do apologize if our original contact appeared to be a demand for confidential data or any other unethical action. It certainly wasn't our intent. And I hope that you have a better understanding of just what it is we do, why we do it, and what we stand for.


Post-mortem video

Koitsu wanted you to see this: http://www.youtube.com/watch?v=W_C1gkPeJDc --Tepples 19:03, 10 December 2012 (EST)