Distributed recursive crawls

From Archiveteam
Revision as of 20:07, 22 March 2022 by JustAnotherArchivist
Distributed recursive crawls
Status Special case
Archiving status In progress...
Archiving type DPoS
Project source grab-grab
Project tracker grab
IRC channel #Y (on hackint)

This is a project to recursively crawl large websites that lack a clear structure and so cannot easily be split into work items the way we usually do on DPoS projects. It is somewhat comparable to ArchiveBot in that crawls are started manually for specific sites of interest.
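The reason such sites cannot be pre-split into work items is that a recursive crawl only discovers its work as it goes: each fetched page yields new URLs, which become new work items. The loop below is a minimal, hypothetical sketch of that feedback, using an in-memory link graph instead of real HTTP fetches; it is not the actual grab-grab code.

```python
from collections import deque

# Hypothetical in-memory link graph standing in for a real site;
# the real project fetches pages over HTTP and archives them.
LINK_GRAPH = {
    "https://example.org/": ["https://example.org/a", "https://example.org/b"],
    "https://example.org/a": ["https://example.org/b", "https://example.org/c"],
    "https://example.org/b": [],
    "https://example.org/c": ["https://example.org/"],
}

def recursive_crawl(seed):
    """Breadth-first crawl: every newly discovered URL is queued as a
    new work item, so the crawl frontier grows out of the crawl itself."""
    seen = {seed}
    queue = deque([seed])          # the "work item" queue
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)          # a real crawler would fetch + write WARC here
        for link in LINK_GRAPH.get(url, []):
            if link not in seen:   # deduplicate so the crawl terminates
                seen.add(link)
                queue.append(link)
    return order

print(recursive_crawl("https://example.org/"))
```

In a distributed setting the `queue` and `seen` sets would live on a central tracker rather than in one process, which is exactly the coordination problem this project addresses.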