From Archiveteam
Revision as of 04:49, 12 September 2011 by Kr (talk | contribs) (→‎Distributed Scraping: new section)

Regarding archiving

Just randomly requesting TinyURLs like you propose will get you banned, since you would be making many requests for non-existent TinyURLs. We do allow bots to crawl TinyURLs, but only if they are crawling TinyURLs that exist, which they pulled from whatever source they are crawling.
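Gilby's point suggests a workflow: harvest TinyURL links that already appear in pages you are archiving, rather than guessing IDs. A minimal sketch of that first step (the function name and the ID character class are illustrative assumptions, not TinyURL's published spec):

```javascript
// Collect TinyURL IDs that actually appear in a block of archived
// HTML or text, so a crawler only ever requests URLs known to exist.
function extractTinyUrlIds(text) {
  const ids = new Set();
  // Illustrative pattern; assumes IDs are alphanumeric with hyphens.
  const pattern = /https?:\/\/(?:www\.)?tinyurl\.com\/([A-Za-z0-9-]+)/g;
  let match;
  while ((match = pattern.exec(text)) !== null) {
    ids.add(match[1]);
  }
  return [...ids];
}

// Example: scanning a fragment of an archived page.
const page = '<a href="http://tinyurl.com/2tx">x</a> see also https://tinyurl.com/abc123';
console.log(extractTinyUrlIds(page)); // ["2tx", "abc123"]
```

Resolving each harvested ID to its long URL would then be a separate, rate-limited step against tinyurl.com itself.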

Kevin "Gilby" Gilbertson

TinyURL, Founder

A Problem Easily Solved

Just provide us an Excel spreadsheet in the form of:

tinyurl ID | full URL

And scraping won't be necessary. Up for it?

--Jscott 20:25, 4 December 2010 (UTC)
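If TinyURL ever supplied a dump in the "tinyurl ID | full URL" form proposed above, ingesting it would be straightforward. A hypothetical sketch, assuming one pipe-separated pair per line (the sample IDs and URLs are made up):

```javascript
// Parse a "tinyurl ID | full URL" dump (one pair per line) into a Map.
function parseDump(dump) {
  const mapping = new Map();
  for (const line of dump.split("\n")) {
    const sep = line.indexOf("|");
    if (sep === -1) continue; // skip blank or malformed lines
    const id = line.slice(0, sep).trim();
    const url = line.slice(sep + 1).trim();
    if (id && url) mapping.set(id, url);
  }
  return mapping;
}

// Example with made-up entries in the proposed format.
const sample = "2tx | http://www.example.com/\nabc123 | http://example.org/page";
console.log(parseDump(sample).get("2tx")); // "http://www.example.com/"
```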

I e-mailed the TinyURL owner and he replied with that.
Zachera 00:06, 11 December 2010 (UTC)

Another URL shortener

I ran into another URL shortener. Here's their API: Jodi.a.schneider 17:05, 3 September 2011 (UTC)

Could you archive (, please? Thank you!

To clarify: is the address of the web page; is the prefix of the generated shortened URLs. -- Pne 12:08, 9 September 2011 (UTC)

Distributed Scraping

You could make a browser extension that records the long URL for each short URL that a person's browser visits, for certain known shorteners. This might be particularly helpful for uncooperative shorteners, since they wouldn't know the difference.

Then it would be a matter of encouraging many people to install the browser extension.
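One way such an extension might record redirects is via Chrome's webRequest API, which fires an event whenever a request is redirected. A sketch under stated assumptions: the shortener list is illustrative, the recorded pairs are only kept in memory here (a real extension would upload them somewhere), and the manifest would also need the "webRequest" permission plus host permissions for the listed domains.

```javascript
// Domains whose redirects we want to record (illustrative list).
const KNOWN_SHORTENERS = ["tinyurl.com", "bit.ly", "goo.gl"];

// Pure helper: is this URL served by a known shortener?
function isKnownShortener(url) {
  try {
    const host = new URL(url).hostname.replace(/^www\./, "");
    return KNOWN_SHORTENERS.includes(host);
  } catch {
    return false; // not a parseable URL
  }
}

// Recorded short URL -> long URL pairs, to be shipped to an archive later.
const recorded = new Map();

function recordRedirect(shortUrl, longUrl) {
  if (isKnownShortener(shortUrl)) recorded.set(shortUrl, longUrl);
}

// Extension wiring: only runs inside a browser that exposes chrome.webRequest.
if (typeof chrome !== "undefined" && chrome.webRequest) {
  chrome.webRequest.onBeforeRedirect.addListener(
    (details) => recordRedirect(details.url, details.redirectUrl),
    { urls: KNOWN_SHORTENERS.map((d) => `*://${d}/*`) }
  );
}
```

Because the mapping is captured passively from redirects the user triggers anyway, the shortener sees only ordinary traffic, which is exactly the point made above about uncooperative services.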