Revision as of 21:05, 27 December 2015 by JesseW (talk | contribs) (fix various non-native language typos)
NewsGrabber - Archiving all the world's news!
Status Online!
Archiving status In progress...
Archiving type Unknown
Project source NewsGrabber
Project tracker [1]
IRC channel #newsgrabber (on hackint)

NewsGrabber is a project to save as many news articles as possible from as many websites as possible.


A lot of news articles are saved in the Wayback Machine by the Internet Archive. They're mostly saved via the Top News collection (part of the Focused Crawls), the crawls of GDELT URLs, and through Wide Crawls.

We think these crawls are far from a complete record of the world's news articles.

  • Crawls of the websites from Top News in the Focused Crawls go from the seed URL up to 5 layers deep, once a day. Not many websites are covered, and those that are covered are mostly in English.
  • GDELT does a very good job of covering news from around the world, but sometimes misses more local websites and many non-English websites.
  • Wide Crawls are focused on the whole World Wide Web, not on news articles.


NewsGrabber is written to solve the problem of missing news articles. It contains a database of URLs to be checked for articles, and anyone can add new websites to the database. Multiple seed URLs can be added for each website entry, all of which are crawled periodically to look for new article URLs. youtube-dl can be used to download article URLs, making it possible to preserve news in video form just as well as news in text form.
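
As a rough sketch of that design, this shows a website entry with seed URLs and an article pattern, and the discovery step over a fetched seed page. All names, URLs, and the schema here are hypothetical, not the project's actual code:

```python
import re

# Hypothetical in-memory "database" of websites to check (illustrative only).
websites = {
    "example-news": {
        "seed_urls": ["http://news.example.com/", "http://news.example.com/feed.rss"],
        "article_regex": r"https?://news\.example\.com/article/\d+",
        "video_regex": r"https?://news\.example\.com/video/\d+",
    }
}

def discover_articles(page_html, site):
    """Return the set of article URLs found in a fetched seed page."""
    return set(re.findall(site["article_regex"], page_html))

# Simulated seed-page content standing in for a live fetch.
page = '<a href="http://news.example.com/article/42">story</a>'
found = discover_articles(page, websites["example-news"])
```

In the real project the discovered URLs would then be handed to the download workers; here the sketch stops at discovery.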

Options
NewsGrabber handles several options for discovering and processing URLs. More details about using these options to add new websites can be found in the project's README [2].

Refreshtime
The refreshtime is the interval NewsGrabber waits between crawls of a website's seed URLs. The refreshtime can be as low as 5 seconds.
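
A minimal illustration of the setting, assuming a hypothetical website entry (the real schema differs):

```python
# Hypothetical website entry; field names are illustrative, not the project's schema.
service = {
    "name": "example-news",
    "refresh": 5,  # seconds to wait between crawls of the seed URLs (the minimum the text allows)
    "seed_urls": ["http://news.example.com/"],
}

def next_crawl_time(last_crawl, service):
    """Earliest moment (in seconds) the seed URLs may be crawled again."""
    return last_crawl + service["refresh"]
```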

SeedURLs
SeedURLs are used by NewsGrabber to discover new articles. Often not all news articles are shown on the front page of a website; they may be spread over several sections or RSS feeds. A list of these URLs can be given to NewsGrabber so that articles from all of them are found, not only those on the front page.
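
The idea can be sketched as a union over several seed pages, with simulated page contents standing in for live fetches (all names and URLs are hypothetical):

```python
import re

# Hypothetical article-URL pattern for the site.
ARTICLE_RE = re.compile(r"https?://news\.example\.com/article/\d+")

# Simulated contents of three seed URLs: front page, a section page, an RSS feed.
seed_pages = {
    "http://news.example.com/": '<a href="http://news.example.com/article/1">a</a>',
    "http://news.example.com/sports/": '<a href="http://news.example.com/article/2">b</a>',
    "http://news.example.com/feed.rss": '<link>http://news.example.com/article/3</link>',
}

def collect_articles(pages):
    """Union of article URLs across all seed pages, not just the front page."""
    urls = set()
    for html in pages.values():
        urls.update(ARTICLE_RE.findall(html))
    return urls
```

Article 2 and article 3 would be missed by a front-page-only crawl; the extra seeds catch them.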

VideoURLs
NewsGrabber supports videos. URLs that match the regex given for videoURLs are downloaded with youtube-dl.
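
A sketch of that dispatch decision, with a made-up video pattern; the downloader names just label the two paths the text describes:

```python
import re

# Hypothetical videoURLs regex for the site.
VIDEO_RE = re.compile(r"https?://news\.example\.com/video/")

def choose_downloader(url):
    """Pick youtube-dl for URLs matching the video regex, a plain grab otherwise."""
    return "youtube-dl" if VIDEO_RE.match(url) else "wget"
```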

LiveURLs
For big events a live page is often created with the latest news on the event. NewsGrabber grabs URLs matching the regex given for liveURLs over and over, crawling every new bit of information on them.
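
One way to sketch the repeated grabbing is to hash each snapshot and keep it only when the page changed; the helper name and approach here are hypothetical, not the project's actual logic:

```python
import hashlib

def grab_if_changed(url, content, seen_hashes):
    """Re-grab a live page, keeping the snapshot only when its content changed."""
    digest = hashlib.sha1(content.encode()).hexdigest()
    if seen_hashes.get(url) == digest:
        return False               # nothing new on the live page this round
    seen_hashes[url] = digest
    return True                    # new information: archive this snapshot
```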

Immediate grab

When emergencies happen, news sites try to cover as much as they can about what is happening. False information is often published on these websites and later removed. To catch articles that are later removed, NewsGrabber can be set to grab newly found articles immediately instead of adding them to the list, which is grabbed once every hour.
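
A sketch of the two paths, with hypothetical names (the real queueing logic differs):

```python
from collections import deque

# Articles waiting for the hourly grab run.
hourly_queue = deque()

def handle_new_article(url, immediate_grab):
    """Grab right away when the site is flagged for it, else queue for the hourly run."""
    if immediate_grab:
        # Catches articles that may be edited or removed before the hourly run.
        return f"grabbed {url} now"
    hourly_queue.append(url)
    return f"queued {url}"
```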