NewsGrabber - Archiving all the world's news!
Archiving status: In progress...
IRC channel: (on EFnet)
NewsGrabber is a project to save as many news articles as possible from as many websites as possible.
A lot of news articles are already saved in the Wayback Machine by the Internet Archive. They're mostly saved through Top News in Focused Crawls, through crawls on GDELT URLs, and through Wide Crawls.
We think these crawls are far from complete coverage of the world's news articles:
- Crawls on websites from Top News in Focused Crawls crawl a website from the seed URL up to 5 layers deep, usually once a day. Only a small number of websites are covered, and those are mostly English-language.
- GDELT does a very good job of covering news from around the world, but sometimes misses more local websites and many non-English ones.
- Wide Crawls are focused on the whole World Wide Web, not specifically on news articles.
NewsGrabber was written to solve the problem of missing news articles. It allows anyone to add new websites to its database to be checked for articles. Several seed URLs can be added to a website entry; these are crawled periodically for new URLs. Newly found URLs are then downloaded, with or without youtube-dl, making it possible to preserve news in video form just as well as news in text form.
NewsGrabber handles several options for discovering and processing URLs. More details about using these options to add new websites can be found in the project's README.
The refreshtime is the time NewsGrabber waits between crawls of a website's seed URLs. The refreshtime can be as low as 5 seconds.
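To make the options concrete, a website entry might look roughly like the sketch below. The field names and the example.com URLs are illustrative assumptions based on the README conventions described here, not an authoritative copy of the project's format.

```python
# Sketch of a NewsGrabber website entry (field names are assumptions).
refresh = 5                                  # seconds between seed-URL crawls
urls = [                                     # seed URLs checked for new articles
    'http://www.example.com/',
    'http://www.example.com/rss.xml',
]
regex = [r'^https?://[^/]*example\.com/news/']        # URLs treated as articles
videoregex = [r'^https?://[^/]*example\.com/video/']  # fetched with youtube-dl
liveregex = [r'^https?://[^/]*example\.com/live/']    # re-grabbed repeatedly
```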
SeedURLs are used by NewsGrabber to discover new articles. Often not all news articles are displayed on the front page of a website; they may be spread over several sections or RSS feeds. A list of these URLs can be given to NewsGrabber so that articles from all of them are found, not only those on the front page.
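The discovery step can be sketched as follows: refetch each seed URL, extract links, keep those matching the article regex, and drop anything already seen. This is a minimal illustration, not NewsGrabber's actual code; the function name and HTML sample are made up.

```python
import re

def discover_articles(html, article_regex, seen):
    """Return article URLs found in `html` that are not yet in `seen`."""
    links = re.findall(r'href="([^"]+)"', html)       # naive link extraction
    fresh = {u for u in links if re.match(article_regex, u) and u not in seen}
    seen.update(fresh)                                # remember queued URLs
    return fresh

seen = set()
html = ('<a href="https://example.com/news/1">new</a>'
        '<a href="https://example.com/about">about</a>')
new = discover_articles(html, r'^https://example\.com/news/', seen)
# `new` holds only the article link; a second pass over the same page
# discovers nothing, because the URL is already in `seen`.
```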
NewsGrabber supports videos: URLs that match the regex given for videoURLs are downloaded with youtube-dl.
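The decision itself is just a regex match against the video patterns. A minimal sketch, with a hypothetical helper name and illustrative patterns:

```python
import re

def needs_youtube_dl(url, videoregex):
    """Return True if `url` matches any of the video URL patterns."""
    return any(re.match(pattern, url) for pattern in videoregex)

videoregex = [r'^https?://video\.example\.com/']
needs_youtube_dl("https://video.example.com/clip/42", videoregex)   # True
needs_youtube_dl("https://www.example.com/article/7", videoregex)   # False

# A matching URL would then be handed to youtube-dl, roughly like:
#   import youtube_dl
#   with youtube_dl.YoutubeDL({"outtmpl": "%(id)s.%(ext)s"}) as ydl:
#       ydl.download([url])
```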
In case of big events, a live page is often created with the latest news on the event. NewsGrabber grabs URLs matching the regex given for liveURLs repeatedly, crawling every new bit of information on them.
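Conceptually, this means polling the live page and saving a snapshot whenever its content changes. The sketch below simulates that with a stand-in `fetch` function; none of the names are NewsGrabber's own.

```python
import hashlib
import time

def poll_live(fetch, url, interval, rounds):
    """Re-grab `url` `rounds` times, keeping a snapshot on every change."""
    snapshots, last_hash = [], None
    for _ in range(rounds):
        body = fetch(url)
        digest = hashlib.sha256(body).hexdigest()
        if digest != last_hash:            # only save when the page changed
            snapshots.append(body)
            last_hash = digest
        time.sleep(interval)
    return snapshots

# Simulated live page: it updates once during three polls.
pages = iter([b"breaking: A", b"breaking: A", b"breaking: A, B"])
snaps = poll_live(lambda url: next(pages), "https://example.com/live", 0, 3)
# snaps holds the two distinct versions of the live page.
```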
When emergencies happen, news sites try to cover as much as they can about what is happening. False information is often published and later removed. To catch articles that are later removed, NewsGrabber can be made to grab newly found articles immediately, instead of adding them to the regular list, which is grabbed once every hour.