By default the bot grabs only a single URL. However, it also supports recursion, which is rather slow, since every page must be loaded and rendered by a browser. A dashboard is available for watching the progress of such jobs.
* a <url> -j <concurrency> -r <policy>: Archive <url> with <concurrency> processes according to recursion <policy>.
* Get job status for <uuid>.
* Revoke or abort running job with <uuid>.
Please note that the commands are case-sensitive.
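The recursion policy can be pictured as a depth-limited crawl: each fetched page yields links, which are followed only while the current depth stays below the limit. The sketch below is illustrative only (it walks an in-memory link graph rather than the real web, and is not chromebot's actual code):

```python
from collections import deque

def crawl(start, links, depth_limit):
    """Depth-limited breadth-first traversal, modelling a '-r <depth>'
    recursion policy. 'links' maps each URL to the URLs it links to.
    Hypothetical helper for illustration, not part of chromebot."""
    seen = {start}
    queue = deque([(start, 0)])
    order = []
    while queue:
        url, depth = queue.popleft()
        order.append(url)  # in a real bot: load, render, and archive here
        if depth < depth_limit:
            for nxt in links.get(url, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return order
```

With depth limit 1, only the start page and the pages it links to directly are visited; pages two hops away are skipped.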
URL lists can be archived using recursion, for example:
chromebot: a https://transfer.notkiska.pw/inline/UpfR/HollyConrad-tweets -r 1 -j 4
chromebot treats every line starting with http(s):// as a valid link. Note that the server must return the list itself as an *inline* document, not as a download (attachment).
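The line filtering described above amounts to keeping only lines with an http:// or https:// prefix. A minimal sketch of that behaviour (an assumption about the implementation, not chromebot's actual code):

```python
def extract_links(text):
    """Return the lines of an inline URL list that look like links,
    i.e. lines starting with http:// or https://. Everything else
    (comments, blank lines) is ignored."""
    return [line for line in text.splitlines()
            if line.startswith(("http://", "https://"))]
```

For example, a list containing a comment line and two URLs would yield just the two URLs.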
chromebot has been blacklisted by Instagram. When asked to archive any Instagram.com URL, chromebot responds with the following error:
<Instagram.com URL> cannot be queued: Banned by Instagram
Cloudflare DDoS protection
chromebot should be able to circumvent Cloudflare's DDoS protection, but scrolling and other behaviour may be disabled after the reload (issue #13 on GitHub).