An NZB file can be used to gather the contents of a news feed from a Usenet posting. Rather than pulling down every header in a newsgroup, an NZB file references only the posts that match a user's search criteria. NZB files are saved in the XML file format.
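Since NZB is just an XML dialect, it is straightforward to read with a standard XML parser. Below is a minimal sketch in Python: the sample document is invented for illustration, but it follows the usual NZB shape (a `<file>` element per posted file, with `<segment>` children holding the Usenet message-IDs, under the `http://www.newzbin.com/DTD/2003/nzb` namespace).

```python
import xml.etree.ElementTree as ET

# A tiny, hand-written example NZB document (illustrative values only).
SAMPLE_NZB = """<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="poster@example.com" date="1200000000" subject="example.iso (1/2)">
    <groups><group>alt.binaries.example</group></groups>
    <segments>
      <segment bytes="500000" number="1">msgid-1@example.com</segment>
      <segment bytes="500000" number="2">msgid-2@example.com</segment>
    </segments>
  </file>
</nzb>"""

NS = {"nzb": "http://www.newzbin.com/DTD/2003/nzb"}

def parse_nzb(xml_text):
    """Return a list of (subject, [message-ids]) pairs from an NZB document."""
    root = ET.fromstring(xml_text)
    files = []
    for f in root.findall("nzb:file", NS):
        ids = [seg.text for seg in f.findall("nzb:segments/nzb:segment", NS)]
        files.append((f.get("subject"), ids))
    return files

for subject, ids in parse_nzb(SAMPLE_NZB):
    print(subject, "->", len(ids), "segments")
```

A downloader would then fetch each message-ID from an NNTP server and reassemble the segments into the original file.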
I've been searching on Google, and it might be that I'm searching for the wrong things.
How can I index a newsgroup server? And if I already have a list of NZB files, how hard is it to write a little command-line tool that downloads them?
Indexing a newsgroup server means you will have to write a small library that downloads all the posts from the Usenet server. Just the headers (subjects) take many gigabytes to store; I believe you would need several terabytes to hold all of them.
A little less storage-hungry approach is to index only the last X days, but even that can get big.
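A quick back-of-envelope calculation shows why the numbers get so large. The per-header size and post counts here are rough illustrative assumptions, not measurements:

```python
def header_storage_bytes(posts, avg_header_bytes=500):
    """Rough estimate of raw storage for overview headers alone.

    Assumption: one overview line (subject, poster, date, message-id,
    byte count, ...) runs a few hundred bytes; 500 is a guess.
    """
    return posts * avg_header_bytes

# Assumption: a busy binary group can accumulate on the order of a
# billion posts over its retention window.
one_group = header_storage_bytes(1_000_000_000)
print(one_group / 10**9, "GB for one large group")

# Indexing a few dozen large groups pushes you into terabyte territory.
many_groups = one_group * 20
print(many_groups / 10**12, "TB for twenty such groups")
```

The point is only the order of magnitude: headers alone, before any article bodies, already demand serious storage.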
Once you have all the post headers, you will need some logic to group together the posts that make up a single release of any kind (movie/program/ISO). For each group you create, you combine the posts into an NZB XML file, which you can then feed to your favorite Usenet downloader.
But if your question is that you already have several URLs for NZB files you wish to download, there is a nice tool called wget that you can use to download any file by URL.
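If you would rather have your own little command-line tool, the fetching part is only a few lines of Python. This is a sketch: the script name and the one-URL-per-line input format are my assumptions, and there is no retry or error handling:

```python
import os
import sys
import urllib.request
from urllib.parse import urlsplit

def filename_for(url):
    """Derive a local filename from the URL path; fall back to download.nzb."""
    name = os.path.basename(urlsplit(url).path)
    return name or "download.nzb"

def fetch_all(urls, dest="."):
    """Download every URL in the list into the dest directory."""
    for url in urls:
        target = os.path.join(dest, filename_for(url))
        print("fetching", url, "->", target)
        urllib.request.urlretrieve(url, target)  # the actual network call

# Hypothetical usage: python fetchnzb.py urls.txt   (one URL per line)
if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as fh:
        fetch_all([line.strip() for line in fh if line.strip()])
```

That covers downloading the NZB files themselves; actually fetching the articles the NZBs point to still requires an NNTP client or an existing downloader.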