URLs

  URL: https://url.spec.whatwg.org/
  Status: Special case
  Archiving status: In progress...
  Archiving type: DPoS
  Project source: urls-grab, urls-sources
  Project tracker: urls
  IRC channel: #// (on hackint)
  Project lead: arkiver
  Data: archiveteam_urls

The URLs project is a continuous, generic, best-effort project to archive random URLs from a variety of sources, including external links discovered in other projects (such as Reddit and Telegram), news sites and feeds of interest crawled regularly from urls-sources, and lists queued manually in the IRC channel.

Raw WARC access has been unavailable to the public since 2023-04-10. Metadata about the crawl (CDX, MegaWARC, etc.) remains publicly available.

Important note: If you run this project, your IP will likely be banned from Facebook, Instagram, YouTube, etc., and using those sites may become difficult (e.g. constant captchas, forced login). If you run at significant speed, also expect abuse notices, IP blacklisting, and so on.

How to help if you have lists of URLs

For other ArchiveTeam projects that can use this kind of help, see Projects requiring URL lists.

This project requires lists of URLs for the content to be archived. If you have a source of URLs, please follow these steps (a minimal shell sketch follows the list):

  1. If the list exceeds a few megabytes, compress it, preferably using zstd -10.
  2. Give the file a descriptive name and upload it to https://transfer.archivete.am/.
  3. Share the resulting URL in the project IRC channel.
    • If you wish your list to remain private, please get in touch with a channel op (e.g. arkiver or JustAnotherArchivist). Items generated from your list will still be processed publicly, but they will be mixed in with all other items and channel logs will not associate them with you.
    • Please briefly describe the content of your list and why it should be archived! A sentence or two is fine.
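As a rough illustration of steps 1 through 3, the sketch below compresses and uploads a list. The file names are placeholders, and it assumes transfer.archivete.am accepts transfer.sh-style uploads via HTTP PUT (as with curl -T); this is a sketch, not an official workflow.

  # Step 1: compress the list with zstd at level 10. zstd keeps the original
  # file and writes reddit-outlinks.txt.zst alongside it. (Placeholder name.)
  zstd -10 reddit-outlinks.txt

  # Step 2: upload via HTTP PUT, assuming transfer.archivete.am behaves like
  # transfer.sh. On success, curl prints the resulting download URL.
  curl -T reddit-outlinks.txt.zst \
    https://transfer.archivete.am/reddit-outlinks.txt.zst

  # Step 3: paste the printed URL into #// on hackint, with a short description.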

Caveats

  • Lists containing large numbers of URLs on one host are not appropriate here. This project runs at very high speed and can easily DDoS a server. If you would like a crawl of a single website, please request it in #archivebot instead. (A quick way to check per-host counts is sketched below this list.)
  • Lists containing extremely important or endangered URLs are not appropriate here either. This project is best-effort only and does not track whether archival succeeded. If you would like to monitor the status of a list as it is archived, please request it in #archivebot instead.
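To sanity-check a list before submitting, standard tools can show how skewed it is toward a single host. Here list.txt is a placeholder, and the sketch assumes one full scheme-prefixed URL per line:

  # Print the ten hostnames with the most URLs in the list. Splitting on "/"
  # makes field 3 the hostname of a URL like https://example.com/page.
  awk -F/ '{print $3}' list.txt | sort | uniq -c | sort -rn | head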