In general, narf is almost entirely hands-off once it has been installed properly. However, there is one vital thing you must do periodically: roll the logfile. Apart from that, we suggest keeping an eye on narf's memory usage and on the state of any article backlog you may (or, ideally, may not) have.
Narf's log accumulates rapidly, especially with its shipped default of logging at least one line for every article it examines. Unless you have infinite disk space, you will want to roll (and gzip) these log files periodically. To roll narf's logfile, follow these steps:
You may want to produce reports at this time, as we do; our somewhat rough-and-ready software for this is covered here in the installation instructions. We have put our reports on the web here, but the software that publishes them is so hackish that we are not distributing it.
A running narf process can be poked in several ways. It reacts to two Unix signals:
Narf will automatically reload the filter_innd.pl filter file at the end of processing a run of batches if the file has changed. This differs from INN, where the filter must be explicitly reloaded. While narf makes a fair attempt not to die if something goes wrong with the filter reload, it is possible to take the daemon down with a particularly bad mistake or typo. If the filter has changed and does not reload cleanly, narf will pause until it can be successfully reloaded (and will log errors to standard error).
Narf will reload the $CONFFILE file if it exists and changes. Despite the name, this file is not really intended for configuration; it is more a hacker's interface for loading code or changing variables in the running daemon without having to restart it. Unlike with the filter file, narf will not stop processing batches if a load of this file fails. Unless you understand the implications of lexical scoping and perl's do function, you should not attempt to use this to redefine narf's own functions on the fly.
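As an illustration, here is a sketch of what $CONFFILE contents might look like; $SOME_KNOB is a purely hypothetical name standing in for whatever real narf variable you want to change. Because the file is loaded with perl's do, any my variables in it are confined to the file and disappear when the load finishes:

    # hypothetical $CONFFILE contents
    $SOME_KNOB = 500;                       # stands in for a real narf variable
    my $scratch = "confined to this file";  # lexicals do not leak into narf
    print STDERR "conffile reloaded\n";     # do-loaded files can run any code
    1;                                      # end on a true value by convention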
If the $STOPFILE file exists, narf will stop processing batches until the file is removed. While stopped, narf will continue to reload the filter and $CONFFILE, and will still save recognized EMP signatures if hit with a SIGHUP. On many systems, narf will immediately resume processing batches once this file is removed and narf is sent a signal that it catches.
Narf saves copies of various sorts of rejected articles in $DUMPDIR if this has been configured in. In particular, all local postings that narf rejects are saved here (either in the file local or in filenames that start with local-), whether or not anything else would normally be saved. What gets written is controlled by a subroutine &filter_logname that is normally defined in your filter; if this routine is not defined, nothing but rejected local posts will be saved.
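We have not shown narf's calling convention for &filter_logname, so the following is only a sketch under assumptions: that the routine is handed the rejection reason, and that it returns the file name to save under (with a false value meaning save nothing):

    # a hypothetical &filter_logname; the signature is our assumption
    sub filter_logname {
        my ($reason) = @_;          # assumed: the rejection reason string
        return "emp" if $reason =~ /^EMP/;         # saved as $DUMPDIR/emp
        return "binary" if $reason =~ /binary/i;   # saved as $DUMPDIR/binary
        return undef;               # save nothing else; rejected local posts
                                    # are saved regardless, as noted above
    }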
The copies don't normally accumulate fast enough to require automatic maintenance; we just look at them periodically to see what's turned up. As shipped and with our filter, narf will log:
Our filter (and most INN filters) will automatically recognize much (but not all) new spam and reject future instances of it. Unless you feel like hunting around for new spam and new spammers, you should have no need to update your filter. On the other hand, the author finds that hunting spam can be an amusing and satisfying way of spending some time.
At least some familiarity with perl will be required to update the filter, especially if you plan on doing anything complex.
Narf automatically reloads the filter if it's been changed (see the cautions in the Controlling Narf section).
Unless narf grows too large, you should not need to stop and restart it. Because narf and the filter keep various pieces of information in memory (such as pending cancels, used to reject the cancelled articles if they arrive soon afterwards, or the signatures of recent articles, used to recognize excessive posting), restarting narf should be avoided whenever possible.
Narf must be restarted for certain sorts of configuration changes to take effect. You should not normally need to do this if all you're doing is changing the filter.
As shipped, both narf and our filter have generous limits and narf is configured conservatively; both of these may result in large memory usage. Tuning our filter is discussed in its documentation; some of that discussion is also relevant to an unmodified cleanfeed-inn filter, or to our MD5-modified version of it.
The best way to reduce narf's memory usage is to remove its need to keep track of recently seen message-ids. To do this safely you must arrange for your NNTP daemons to do their best to accept no duplicate message-ids. We do this with Paul Vixie's message-id daemon, which keeps an in-memory collection of recently accepted message-ids; we see essentially no dups at all in the stream of articles that our narf processes. You should measure your dup rate before turning off narf's tracking of recent message-ids. To turn it off, change the configuration variable $DoMSGHist (and restart narf).
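One way to measure your dup rate is to extract the incoming message-ids from your news logs (the extraction step is site-specific) and count how many repeat. A minimal sketch, reading one message-id per line on standard input:

    #!/usr/bin/perl -w
    # hypothetical dup-rate counter: one message-id per line on stdin
    use strict;
    my %seen;
    my ($dups, $total) = (0, 0);
    while (<>) {
        chomp;
        next unless length;         # skip blank lines
        $total++;
        $dups++ if $seen{$_}++;
    }
    printf "%d duplicates in %d message-ids (%.2f%%)\n",
           $dups, $total, $total ? 100 * $dups / $total : 0;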
If you are allowing narf to do cancel rejection, you can change the size of the cache of prospective cancels that narf keeps. Although measuring your actual data is best, our statistics suggest that we could get good hit rates with even a very small cache (see the comments in the source code). The variable $CMSGHIST is what you want to tweak, for example via $CONFFILE as sketched below.
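Assuming $CMSGHIST is an ordinary global (check the source to be sure), the $CONFFILE tweak might be as small as:

    # hypothetical $CONFFILE fragment: shrink the prospective-cancel cache
    $CMSGHIST = 1000;   # entries; pick a size from your measured hit rates
    1;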
Extracting a copy of the article body from the article itself is one of the things that fragments perl's memory and thus grows narf's total virtual memory. If your filter only examines the first so many bytes of the article body, you should change it to set the global variable $::MaxArtSize. Narf will then copy no more than this much of the actual article body into the __BODY__ element of the %::hdr hash, which has resulted in a significant reduction in memory growth for us. It is vital that your filter behave the same for all article bodies at or over this size; otherwise you may get incorrect results.
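As a sketch, assuming the usual INN-style filter_art entry point (the 32 KB figure is just an example):

    # hypothetical filter fragment
    $::MaxArtSize = 32 * 1024;      # copy at most 32 KB of body into __BODY__

    sub filter_art {
        my $body = $::hdr{'__BODY__'};
        # The body may have been cut off at $::MaxArtSize, so every test
        # here must give the same answer for all articles at or over that
        # size; never pattern-match against a possibly-truncated tail.
        if (length($body) >= $::MaxArtSize) {
            # treat all large articles one uniform way
        }
        # ... the rest of the filter's checks ...
        return '';                  # accept (the INN filter convention)
    }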
If your chosen filter does not look at the article body at all (cleanfeed-inn and our filter both do), you can delete the setting of the __BODY__ element of the %::hdr hash in the &headercrack subroutine.
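We have not shown narf's actual &headercrack, but the assignment you would be removing presumably has roughly this shape (a guess, for illustration only):

    # hypothetical shape of the body handling in &headercrack
    sub headercrack {
        my ($art) = @_;                       # assumed: the raw article text
        my ($head, $body) = split(/\n\n/, $art, 2);
        # ... parse $head into %::hdr ...
        $::hdr{'__BODY__'} = $body;           # the line you would delete
    }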
Narf has a number of internal configuration variables whose effects may not be obvious. Although we believe that the shipped narf defaults are fine, you may wish to tune them in special circumstances. The narf source code is the final authority on them, but here is a guide to some of them and their effects.
You can change many of these parameters (and the &dumpxtraid routine) via code tucked into the $CONFFILE file, although you should know what you are doing. Narf must be restarted for some changes to take effect.