What exactly happens when ChromeBot tries to access Instagram's website?
How well does it handle Twitter Lite?
While Twitter's original desktop website still relies heavily on HTML delivered in the page source, its mobile site is a “web app” powered by AJAX. This causes serious compatibility problems with older browsers (although Twitter redirects those to “Mobile Web (M2)”, its legacy mobile website, anyway).
The advantage of an AJAX-powered web app is smoother browsing: there is no need to reload the entire page on every interaction. The trade-off is an obviously longer initial load, because more data has to be downloaded up front (unless it is already in the browser cache).
The downside of AJAX is obvious, especially for YouTube comments. Starting circa 2013, comments no longer loaded within the page itself (i.e. they were no longer included in the HTML source code). See YouTube#Comment loading for more information. AJAX has been a death sentence for the Wayback Machine on YouTube and on other websites as well.
Archive.is has been partially able to handle AJAX content, but it lost the ability to capture YouTube comments in late 2017 (except for directly linked comments).
But now, there is our mighty ChromeBot. Thankfully.
It is not very likely that Twitter will replace its legacy website (known as “Twitter Web Client” in tweet source tags) with the new “app”-style site (“Twitter Web App”, formerly “Twitter Lite”). But if that actually happens, or if the app becomes the default and only logged-in users are able to opt out, is ChromeBot prepared? And will it support infinite scroll there too?
It would be good if Twitter kept giving users a choice of platform. If Twitter forced its AJAX-powered website onto all users, ArchiveBot (which is more mature and better suited to mass archival of large sites, whereas ChromeBot targets modern, JS-heavy pages) could be incapacitated.
Some websites with multiple pages (e.g. Google's desktop search results) work via URLs that can be put into a list and then fed into ArchiveBot.
Other websites (e.g. YouTube comments and video lists in 2012, before bottomless infinite scrolling) had pages that could not be accessed via URL (although YouTube had /all_comments?v= back then, which supported pagination).
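The URL-list approach can be sketched quickly. This is a hypothetical example, not part of any ArchiveBot tooling: it builds paginated Google-style search URLs (using the classic `start=` offset parameter, 10 results per page) that could be written to a file and fed into ArchiveBot. The query and page count are made-up placeholders.

```python
# Sketch: generate a list of paginated result URLs for list-based archiving.
# The "start" offset parameter mimics Google's classic desktop results paging;
# query and page count are invented examples.
from urllib.parse import urlencode

def paginated_urls(query, pages, per_page=10):
    base = "https://www.google.com/search"
    return [
        f"{base}?{urlencode({'q': query, 'start': page * per_page})}"
        for page in range(pages)
    ]

for url in paginated_urls("archiveteam", 3):
    print(url)  # one line per page; pipe into a file for ArchiveBot
```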
We need to find a way to archive, in an automated way, content that can only be reached by clicking (e.g. comment pages fetched via AJAX rather than via URL); manual capture via WARC recording is already possible. --ATrescue (talk) 19:39, 30 April 2019 (UTC)
“bajop-” job IDs? A new naming system?
- Yesterday (20190506), all job IDs started with “
- Today (20190507), all job IDs start with “
Earlier, job IDs were just random strings.