Vim, its defaults, and the problem this presents to sysadmins
One of Vim's many options is 'hidden', which may be off or on. The real thing
that it does, behind the thicket of technical description, is that 'hidden'
controls whether or not you can casually move away from a modified Vim buffer
to another one. In most editors this isn't even an option and you always can
(you'll get prompted if you try to exit with unsaved changes). In Vim, for
historical reasons, this is an option and for further historical reasons it
defaults to 'off'.
(The historical reasons are that it wasn't an option in the original BSD
vi, which behaved as if
hidden was always off. Vim cares a fair bit
about compatibility back to historical vi.)
The default of
hidden being off gets in the way of doing certain
sorts of things in Vim, like making changes to multiple files at
once, and it's also at odds with what
I want and how I like to work in my editors. So the obvious thing
for me to do would be to add '
set hidden' to my
.vimrc and move
on. However, there is a problem with that, or rather two problems,
because I use Vim partly as a sysadmin's editor.
By that I mean that I use vi(m) from several different accounts (including
the root account) and on many different machines, not all of which have a
shared home directory even for my own account (and root always has a local
home directory).
In order for '
set hidden' to be useful to me, it needs to be quite
pervasive; it needs to work pretty much everywhere I use vim. Otherwise
I will periodically trip over situations where it doesn't work, which
means that I'll always have to remember the workarounds (and ideally
practice them). As a non-default setting, this is at least difficult
(although not completely impossible, since we already have an install
framework that puts various things into place on all standard machines).
This is why what programs have as defaults matters a lot to sysadmins,
in a way that it doesn't to people who only use one or a few
environments on a regular basis. Defaults are all that we can count on
everywhere, and our lives are easier if we work within them (we have
less to remember, less to customize, on as many systems as possible, as
early as possible, and so on). My life would be a bit easier if Vim
had decided that its default was to have 'hidden' on.
PS: The other thing about defaults is that going with the defaults
is the course of least discussion in the case of setups used by
multiple people, which is an extremely common case for the root account.
Sidebar: The practical flies in my nice theoretical entry
My entry is the theory, but once I actually looked at things it
turns out to be not so neat in practice. First off, my own personal
.vimrc turns out to already turn on
hidden, due to me following
the setup guide from Aristotle Pagaltzis' vim-buftabline package. Second, we already install a
.vimrc in the root account in our standard Ubuntu
installs, and reading the comments in it makes it clear that I wrote
it. I could probably add '
set hidden' to this and re-deploy it
without any objections from my co-workers, and this would cover
almost all of the cases that matter to me in practice.
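As a concrete sketch, the entire change involved is just a line like this in a .vimrc (the comment is my own gloss, not anything from Vim's documentation):

" let modified buffers be hidden instead of blocking moves to other buffers
set hidden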
It's useful to record changes that you tried and failed to do
Today, for reasons beyond the scope of this entry, I decided to try out enabling HTTP/2 on our support site. We already have HTTP/2 enabled on another internal Apache server, and both servers run Ubuntu 18.04, so I expected no problems. While I could enable everything fine and restart Apache, to my surprise I didn't get HTTP/2 on the site. Inspecting the Apache error log showed the answer:
[http2:warn] [pid 10400] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
We're still using the prefork MPM on this server because when we tried to use the event MPM, we ran into a problem that is probably this Apache bug (we suspect that Ubuntu doesn't have the fix for in their 18.04 Apache version). After I found all of this out, I reverted my Apache configuration changes; we'll have to try this later, in 20.04.
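For context, the enabling side of this is normally simple on Ubuntu; roughly the following, although this is the generic recipe rather than a record of our exact configuration change:

# add 'Protocols h2 http/1.1' to the relevant Apache configuration, then:
a2enmod http2
systemctl restart apache2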
We have a 'worklog' system where we record the changes we make and the work we do in email (that gets archived and so on). Since I didn't succeed here and reverted everything involved, there was no change to record, so at first I was going to just move on to the next bit of work. Then I rethought that and wrote a worklog message anyway to record my failure and why. Sure, I didn't make a change, but our worklog is our knowledge base (and one way we communicate with each other, including people who are on vacation), and now it contains an explanation of why we don't and can't have HTTP/2 on those of our web servers that are using prefork. If or when we come back to deal with HTTP/2 again, we'll have some additional information and context for how things are with it and us.
This is similar to documenting why you didn't do attractive things, but I think of it as somewhat separate. For us, HTTP/2 isn't particularly that sort of an attractive thing; it's just there and it might be nice to turn it on.
(At one level this issue doesn't come up too often because we don't usually fail at changes this way. At another level, perhaps it should come up more often, because we do periodically investigate things, determine that they won't work for some reason, and then quietly move on. I suspect that I wouldn't have thought to write a worklog at all if I had read up on Apache HTTP/2 beforehand and discovered that it didn't work with the prefork MPM. I was biased toward writing a worklog here because I was making an actual change (that I expected to work), which implies a worklog about it.)
Using alerts as tests that guard against future errors
On Twitter, I said:
These days, I think of many of our alerts as tests, like code tests to verify that bugs don't come back. If we broke something in the past and didn't notice or couldn't easily spot what was wrong, we add an alert (and a metric or check for it to use, if necessary).
So we have an alert for 'can we log in with POP3' (guess what I broke once, and surprise, GMail uses POP3 to pull email from us), and one for 'did we forget to commit this RCS file and broke self-serve device registration', and so on.
(The RCS file alert is a real one; I mentioned it here.)
In modern programming, it's conventional that when you find a bug in your code, you usually write a test that checks for it (before you fix the bug). This test is partly to verify that you actually fixed the bug, but it's also there to guard against the bug ever coming back; after all, if you got it wrong once, you might accidentally get it wrong again in the future. You can find a lot of these tests all over modern codebases, especially in tricky areas, and if you read the commit logs you can usually find people saying exactly this about the newly added tests.
As sysadmins here, how we operate our systems isn't exactly programming, but I think that some of the same principles apply. Like programmers, we're capable of breaking things or setting up something that is partially but not completely working. When that happens, we can fix it (like programmers fixing a bug) and move on, or we can recognize that if we made a mistake once, we might make the same mistake later (or a similar one that has the same effects), just like issues in programs can reappear.
(If anything, I tend to think that traditional style sysadmins are more prone to re-breaking things than programmers are because we routinely rebuild our 'programs', ie our systems, due to things like operating systems and programs getting upgraded. Every new version of Ubuntu and its accompanying versions of Dovecot, Exim, Apache, and so on is a new chance to recreate old problems, and on top of that we tend to build things with complex interdependencies that we often don't fully understand or realize.)
In this environment, my version of tests has become alerts. As I said in the tweets, if we broke something in the past and didn't notice, I'll add an alert for it to make sure that if we do it again, we'll find out right away this time around. Just as with the tests that programmers add, I don't expect these alerts to ever fire, and certainly not very often; if they do fire frequently, then either they're bad (just as tests can be bad) or we have a process problem, where we need to change how we operate so we stop making this particular mistake so often.
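As a concrete illustration, such a guard alert is short in Prometheus. This is only a sketch with made-up names (a hypothetical 'pop3-login' Blackbox-style check job), although probe_success is the standard result metric for such checks:

groups:
  - name: regression-guards
    rules:
      # did we break POP3 logins again?
      - alert: POP3LoginBroken
        expr: probe_success{job="pop3-login"} == 0
        for: 5m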
This is somewhat of a divergence from the usual modern theory of alerts, which is that you should have only a few alerts and they should mostly be about things that cause people pain. However, I think it's in the broad scope of that philosophy, because as I understand it the purpose of the philosophy is to avoid alerts that aren't meaningful and useful and will just annoy people. If we broke something, telling us about it definitely isn't just an annoyance; it's something we need to fix.
(In an environment with sophisticated alert handling, you might want to not route these sort of alerts to people's phones and the like. We just send everything to email, and generally if we're reading email it's during working hours.)
A file permissions and general deployment annoyance with Certbot
The more we use Certbot, the more I become convinced that it isn't written by people who actually operate it in anything like the kind of environment that we do (and perhaps not at all, although I hope that the EFF uses it for their own web serving). I say this because while Certbot works, there are all sorts of little awkward bits around the edges in practical operation (eg). Today's particular issue is a two part issue concerning file permissions on TLS certificates and keys (and this can turn into a general deployment issue).
Certbot stores all of your TLS certificate information under
/etc/letsencrypt/live, which is normally owned by root and is
root-only (Unix mode 0700). Well, actually, that's false, because
normally the contents of that directory hierarchy are only symlinks
to /etc/letsencrypt/archive, which is also owned by root and
root-only. This works fine for daemons that read TLS certificate
material as root, but not all daemons do; in particular, Exim reads
them as the Exim user and group.
The first issue is that Certbot adds an extra level of permissions
to TLS private keys. As covered by Certbot's documentation, from
Certbot version 0.29.0, private keys for certificates are specifically
root-only. This means that you can't give Exim access to the TLS
keys it needs just by chgrp'ing /etc/letsencrypt/live and
/etc/letsencrypt/archive to the Exim group and then making them
mode 0750; you must also specifically chgrp and chmod the private
key files. This can be automated with a deploy hook script, which
will be run when certificates are renewed.
(Documentation for deploy hooks is hidden away in the discussion of renewing certificates.)
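As an illustration, a deploy hook for this can be quite small. The following is an untested sketch; the renewal-hooks/deploy directory and the RENEWED_LINEAGE environment variable are standard Certbot behavior, while the 'Debian-exim' group and the exact mode are just what would make sense for an Ubuntu Exim setup like ours:

#!/bin/sh
# hypothetical /etc/letsencrypt/renewal-hooks/deploy/exim-key-perms
# Certbot sets $RENEWED_LINEAGE to the certificate's live/ directory.
set -e
key=$(readlink -f "$RENEWED_LINEAGE/privkey.pem")
chgrp Debian-exim "$key"
chmod 0640 "$key"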
The second issue is that deploy hooks do exactly and only what they're documented to do, which means that deploy hooks do not run the first time you get a certificate. After all, the first time is not a renewal, and Certbot said specifically that deploy hooks run on renewal, not 'any time a certificate is issued'. This means that all of your deployment automation, including changing TLS private key permissions so that your daemons can access the keys, won't happen when you get your initial certificate. You get to do it all by hand.
(You can't easily do it by running your deployment script by hand, because your deployment script is probably counting on various environment variables that Certbot sets.)
We currently get out of this by doing the chgrp and chmod by hand when we get our initial TLS certificates; this adds an extra manual step to initial host setup and conversions to Certbot, which is annoying. If we had more intricate deployment, I think we would have to force an immediate renewal after the TLS certificate had been issued, and to avoid potentially running into rate limits we might want to make our first TLS certificate be a test certificate. Conveniently, there are already other reasons to do this.
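For what it's worth, forcing that immediate renewal itself would be simple; something like the following, untested by us and with 'oursite' standing in for the real certificate name:

certbot renew --force-renewal --cert-name oursite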
Finding metrics that are missing labels in Prometheus (for alert metrics)
One of the things you can abuse metrics for in Prometheus is to
configure different alert levels, alert destinations, and so on for
different labels within the same metric, as I wrote about back in
my entry on using group_* vector matching for database lookups. The example
in that entry used two metrics, our_zfs_avail_gb and our_zfs_minfree_gb,
the former showing the current available space and the latter
describing the alert levels and so on that we want. Once we're using
metrics this way, one of the interesting questions we could ask is
what filesystems don't have a space alert set. As it turns out, we
can answer this relatively easily.
The first step is to be precise about what we want. Here, we want
to know what 'fs' labels are missing from our_zfs_minfree_gb. An
fs label is missing if it's not present in our_zfs_minfree_gb
but is present in our_zfs_avail_gb. Since we're talking about
sets of labels, answering this requires some sort of set operation.
If our_zfs_minfree_gb only has unique values for the fs label
(ie, we only ever set one alert per filesystem), then this is simple:
our_zfs_avail_gb UNLESS ON(fs) our_zfs_minfree_gb
The our_zfs_avail_gb metric generates our initial set of known
fs labels. Then we use UNLESS to subtract the set of all fs
labels that are present in our_zfs_minfree_gb. We have to use
'ON(fs)' because the only label we want to match on between the
two metrics is the fs label itself.
However, this only works if our_zfs_minfree_gb has no duplicate
fs labels. If it does (eg if different people can set their own
alerts for the same filesystem), we'd get a 'duplicate series' error
from this expression. The usual fix is to use a one to many match,
but those can't be combined with set operators like 'unless'.
Instead we must get creative. Since all we care
about is the labels and not the values, we can use an aggregation
to give us a single series for each fs label on the right side of
the expression:
our_zfs_avail_gb UNLESS ON(fs) count(our_zfs_minfree_gb) by (fs)
As a side effect of what they do, all aggregation operators condense
multiple instances of a label value this way. It's very convenient
if you just want one instance of it; if you care about the resulting
value being one that exists in your underlying metrics, you can use
min() or max() instead of count().
You can obviously invert this operation to determine 'phantom' alerts,
alerts that have
fs labels that don't exist in your underlying metric.
That expression is:
count(our_zfs_minfree_gb) by (fs) UNLESS ON(fs) our_zfs_avail_gb
(Here I'm assuming our_zfs_minfree_gb can have duplicate fs labels;
if it doesn't, you get a simpler expression.)
Such phantom alerts might come about from typos, filesystems that haven't been created yet but you've pre-set alert levels for, or filesystems that have been removed since alert levels were set for them.
This general approach can be applied to any two metrics where some
label ought to be paired up across both. For instance, you could
cross-check that every node_uname_info metric is matched by one
or more custom per-host informational metrics that your own software
is supposed to generate and expose through the node exporter's
textfile collector.
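A hedged sketch of that cross-check, with our_host_info standing in for the hypothetical per-host metric, follows the same pattern as before:

node_uname_info UNLESS ON(instance) count(our_host_info) by (instance)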
(This entry was sparked by a prometheus-users mailing list thread that caused me to work out the specifics of how to do this.)
Bidirectional NAT and split horizon DNS in our networking setup
Like many other places, we have far too many machines to give them all public IPs (or at least public IPv4 IPs), especially since they're spread across multiple groups and each group should get its own isolated subnet. Our solution is the traditional one; we use RFC 1918 IPv4 address space behind firewalls, give groups subnets within it (these days generally /16s), and put each group in what we call a sandbox. Outgoing traffic from each sandbox subnet is NAT'd so that it comes out from a gateway IP for that sandbox, or sometimes a small range of them.
However, sometimes people quite reasonably want to have some of their sandbox machines reachable from the outside world for various reasons, and also sometimes they need their machines to have unique and stable public IPs for outgoing traffic. To handle both of these cases, we use OpenBSD's support for bidirectional NAT. We have a 'BINAT subnet' in our public IP address space and each BINAT'd machine gets assigned an IP on it; as external traffic goes through our perimeter firewall, it does the necessary translation between internal addresses and external ones. Although all public BINAT IPs are on a single subnet, the internal IPs are scattered all over all of our sandbox subnets. All of this is pretty standard.
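As a sketch of the mechanics, with entirely made-up addresses, the per-machine rules on an OpenBSD PF firewall are conceptually along these lines (our real ruleset is more involved):

# internal sandbox IP <-> public BINAT IP, translated in both directions
match on egress from 10.20.30.40 to any binat-to 192.0.2.40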
(The public BINAT subnet is mostly virtual, although not entirely so; for various peculiar reasons there are a few real machines on it.)
However, this leaves us with a DNS problem for internal machines (machines behind our perimeter firewall) and internal traffic to these BINAT'd machines. People and machines on our networks want to be able to talk to these machines using their public DNS names, but the way our networks are set up, they must use the internal IP addresses to do so; the public BINAT IP addresses don't work. Fortunately we already have a split-horizon DNS setup, because we long ago made the decision to have a private top level domain for all of our sandbox networks, so we use our existing DNS infrastructure to give BINAT'd machines different IP addresses in the internal and external views. The external view gives you the public IP, which works (only) if you come in through our perimeter firewall; the internal view gives you the internal RFC 1918 IP address, which works only inside our networks.
(In a world where new gTLDs are created like popcorn, having our own top level domain isn't necessarily a great idea, but we set this up many years before the profusion of gTLDs started. And I can hope that it will stop before someone decides to grab the one we use. Even if they do grab it, the available evidence suggests that we may not care if we can't resolve public names in it.)
Using split-horizon DNS this way does leave people (including us) with some additional problems. The first one is cached DNS answers, or in general not talking to the right DNS servers. If your machine moves between internal and external networks, it needs to somehow flush and re-resolve these names. Also, if you're on one of our internal networks and you do DNS queries to someone else's DNS server, you'll wind up with the public IPs and things won't work. This is a periodic source of problems for users, especially since one of the ways to move on or off our internal networks is to connect to our VPN or disconnect from it.
The other problem is that we need to have internal DNS for any public name that your BINAT'd machine has. This is no problem if you give your BINAT machine a name inside our subdomain, since we already run DNS for that, but if you go off to register your own domain for it (for instance, for a web site), things can get sticky, especially if you want your public DNS to be handled by someone else. We don't have any particularly great solutions for this, although there are decent ones that work in some situations.
(Also, you have to tell us what names your BINAT'd machine has. People don't always do this, probably partly because the need for it isn't necessarily obvious to them. We understand the implications of our BINAT system, but we can't expect that our users do.)
(There's both an obvious reason and a subtle reason why we can't apply BINAT translation to all internal traffic, but that's for another entry because the subtle reason is somewhat complicated.)
Using Wireshark's Statistics menu to get per-host traffic volume
As part of my casual Internet browsing, I recently read 6 Lessons we learned when debugging a scaling problem on GitLab.com. As sort of an aside (although listed as a lesson), the article mentioned Wireshark's Statistics menu and how it can show you per-conversation information (and thus let you find specific sorts of conversations, such as short ones). I didn't think about it much at the time, but this mention stuck in the back of my mind (as such things often do, at least for a while).
Today I had a situation where we had a saturated OpenBSD firewall
and I very much wanted to find out roughly what hosts were responsible
for the traffic. OpenBSD has per-interface statistics (which let
me see that the firewall's interface was saturated with incoming
traffic), but it doesn't have anything more granular by default and
we didn't have any traffic accounting stuff set up in our PF rules.
I tried a plain
tcpdump, but this firewall sits in front of enough
hosts that the output was overwhelming. As I was thinking unhappy
thoughts about trying to write some awk on the fly, a little light
went on; perhaps Wireshark could help. So I used tcpdump to capture
a minute or two of traffic to a file, copied the capture file over
to my Linux machine, and fired up Wireshark.
(Since I only cared about packet sizes, not packet contents, I was able to let tcpdump truncate packets to keep the file size down.)
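Concretely, the capture is something like the following, where the interface name is whatever matters on the machine in question and the '-s 96' keeps only the first 96 bytes of each packet; you stop it with Ctrl-C after a minute or two:

tcpdump -i em0 -s 96 -w /tmp/traffic.pcap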
The answer is yes, Wireshark absolutely had something that could help; the 'Endpoints' option on the Statistics menu gives you a breakdown of the traffic by various endpoint categories, including IPv4 hosts (it will also do it by host+port combination). This immediately pointed me to the high-volume hosts at work.
Using packet captures for this isn't necessarily as useful and precise as real traffic volume information that is measured directly and reliably by the host in some way, and it likely has more overhead. But it has the large virtue that we can use it in any situation where we can run tcpdump for a while, and almost everything has tcpdump. I can use it with our OpenBSD firewalls to find traffic sources, I can use it with our Linux fileservers to figure out which NFS clients are doing a high volume of read or write IO, and I'm sure I can use it in plenty of other situations too.
(One that just occurred to me is trying to find out who is doing an unusually large number of DNS queries to our DNS servers. We don't have query logging, but we can capture a couple of minutes of traffic to port 53.)
Although I wish we hadn't had this problem today, I'm glad that I now have another tool for troubleshooting problems. And I'm glad that I read that article and its mention of Wireshark stuck in my mind. I really do never know when this stuff will come in handy.
Another way to do easy configuration for lots of Prometheus Blackbox checks
Early on in our use of Prometheus, I wrote up a scheme for easy configuration of lots of Blackbox checks where I encoded the name of the Blackbox module to use in the names of the targets you configured, and then extracted them with relabeling. The result gave you target names that looked like:
- ssh_banner,somehost:22
- http_2xx,http://somewhere/url
This encodes the Blackbox module before the comma and the actual Blackbox target after it (you can use any suitable separator; I picked comma for how it looks).
This works, but I've learned that there is another approach that is more natural and perhaps clearer, namely adding explicit additional labels to your targets and then using those labels in relabeling to determine things like the Blackbox module or even the target to check.
Let's start with the basics (since I didn't know this for a while),
which is that a Prometheus '
targets' section of statically
configured targets can have additional labels specified. The
ostensible purpose of this (covered in the documentation)
is to attach additional labels to all metrics scraped from these targets:

- targets:
    - 22.214.171.124:53
    - 126.96.36.199:53
  labels:
    type: external
(My initial use of this was to explicitly label some of the hosts we check as off-network hosts, because check failure for them is different from failure for our local machines.)
However, as covered in this prometheus-users message from Ben,
these additional labels are available at the start of the scrape,
and so you can use relabeling to turn them into things like what
Blackbox module to use. For example, suppose you add a
'module: ssh_banner' label to a set of targets that you want checked with
that Blackbox module, and then have a relabeling configuration like this:
# Set the target from the address, as usual
- source_labels: [__address__]
  target_label: __param_target
# Set the Blackbox module from the 'module:' label
- source_labels: [module]
  target_label: __param_module
# And now point the address to a local Blackbox as usual.
- target_label: __address__
  replacement: 127.0.0.1:9115
(As a disclaimer, I haven't actually tested this snippet.)
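The target side of this would then look something like the following, with made-up hostnames and, again, untested:

- targets:
    - somehost:22
    - anotherhost:22
  labels:
    module: ssh_banner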
I see advantages and disadvantages to this approach. One advantage
is that it's likely to be clearer and more normal. People are (or
should be) used to attaching extra labels to static targets, and
it's clearly documented, so the only magic and mystery is how your
module label takes effect. While I like my original
syntax, it's clearly more magical and unusual; you're going to have
to read the relabeling configuration to understand what's going on
and how to write additional things.
One drawback is that it pretty much forces you to group checks by module instead of by target. With my scheme, you can list several checks for a host together:
- targets:
    [...]
    - ssh_banner,host:22
    - smtp_banner,host:25
    - http_2xx,http://host/url
With an explicit label-based approach to selecting the module, each
of these has to be in a separate static configuration section because
they each need a different
module label. On the other hand, this
pushes you toward listing all of your checks for a given Blackbox
module in one spot.
A place where this can be an active drawback is if you need to vary
additional labels for groups of targets, especially across modules.
For instance, if you want to attach a '
dc' label to all Blackbox
metrics from a group of hosts, you now need to split up those per
module sections (with a '
module' label) into multiple sections,
one for each combination of module and dc. This could easily
get pretty verbose (although it might not matter if you're
automatically generating this from external configuration information).
I probably won't be changing our configuration from my current trick to this more straightforward approach, but I'm going to bear it in mind for future use. Partly this is because our setup already exists and works, and partly it's because we use some additional labels now and I want to preserve our freedom to easily use more in the future.
A lesson of (alert) scale we learned from a power failure
Starting last November, we moved over to a new metrics, monitoring, and alerting system based around Prometheus. Prometheus's Alertmanager allows you to group alerts together in various ways, but what it supports is not ideal for us and once the dust settled we decided that the best we could do was to group our alerts by host. In practice, hosts are both what we maintain and usually what breaks. And usually their problems are independent of each other.
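In Alertmanager configuration terms, this grouping is just the routing setup; a minimal sketch, assuming your alerts carry a 'host' label:

route:
  group_by: ['host']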
Then we had a power failure and our DNS servers failed to come back into service. All of our Prometheus scraping and monitoring was done by host name, and 'I cannot resolve this host name' causes Prometheus to consider that the scrape or check has failed. Pretty much the moment the Prometheus server host rebooted, essentially all of our checks started failing and triggering alerts, and eventually as we started to get the DNS servers up the resulting email could actually be delivered.
When the dust settled, we had received an impressive amount of email from Alertmanager (and a bunch of other system email, too, reporting things like cron job failures); my mail logs say we got over 700 messages all told. Needless to say, this much email is not useful; in fact, it's harmful. Instead of alert email pointing out problems, it was drowning us in noise; we had to ignore it and mass-delete it just to control our mailboxes.
I'd always known that this was a potential problem in our setup, but I didn't expect it to be that much of a problem (or to come up that soon). In the aftermath of the power failure, it was clear that we needed to control alert volume during anything larger than a small scale outage. Even if we'd only received one email message per host we monitored, it could still rapidly escalate to too many. By the time we're getting ten or fifteen email messages all of a sudden, they're pretty much noise. We have a problem and we know it; the exhaustive details are no longer entirely useful, especially if delivered in bits and pieces.
I took two lessons from this experience. The first is the obvious one, which is that you should consider what happens to your monitoring and alerting system if a lot of things go wrong, and think about how to deal with that. It's not an easy problem, because what you want when there's only a few things wrong is different from what you want when there's a lot of them, and how your alerting system is going to behave when things go very wrong is not necessarily easy to predict.
(I'm not sure if our alerts flapped or some of them failed to group together the way I expected them to, or both. Either way we got a lot more email than I'd have predicted.)
The second lesson is that large scale failures are perhaps more likely and less conveniently timed than you'd like, so it's worth taking at least some precautions to deal with them before you think you really need to. One reason to act ahead of time here is that a screaming alert system can easily make a bad situation worse. You may also want to err on the side of silence. In some ways it's better to get no alerts during a large scale failure than too many, since you probably already know that you have a big problem.
(This sort of elaborates on a toot of mine.)
Sidebar: How we now deal with this
Nowadays we have a special 'there is a large scale problem' alert that shuts everything else up for the duration, and to go with it a 'large scale outages' Grafana dashboard that is mostly text tables to list down machines, active alerts, failing checks, other problems, and so on.
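The 'shuts everything else up' part is done with an Alertmanager inhibition rule. A minimal sketch, with a made-up alert name and the older source_match / target_match_re syntax:

inhibit_rules:
  - source_match:
      alertname: LargeScaleOutage
    target_match_re:
      alertname: '.+'

(Alertmanager won't let an alert that matches both sides of a rule be inhibited by another such alert, so the large scale outage alert itself still comes through.)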
(We built a dedicated dashboard for this because our normal overview dashboard isn't really designed to deal with a lot of things being down; it's more focused on the routine situation that nothing or almost nothing is down and you want an overview of how things are going. So, for example, it doesn't bother having very large space to list down hosts and active alerts, because most of the time that would be empty wasted space.)
Turning off DNSSEC in my Unbound instances
It has been '0' days since DNSSEC caused DNS resolution for perfectly good DNS names to fail on my machine. Time to turn DNSSEC validation off, which I should have done long ago.
I use Unbound on my machines, from the Fedora package, so this is not some questionable local resolver implementation getting things wrong; this is a genuine DNSSEC issue. In my case, it was for www.linuxjournal.com, which is in my sources of news because it's shutting down. When I tried to visit it from my home machine, I couldn't get an answer for its IP address. Turning on verbose Unbound logging gave me a great deal of noise, in which I could barely make out that Unbound was able to obtain A and AAAA records but then was going on to try DNSSEC and clearly something was going wrong. Turning off DNSSEC fixed it, once I did it in the right way.
NLNet Labs has a Howto on turning off DNSSEC in Unbound that
provides a variety of ways to do this, starting from setting
'val-permissive-mode: yes' all the way up to disabling the validator
module. My configuration has had permissive mode set to yes for years,
but that was apparently not good enough to deal with this situation,
so I have now removed the validator module from my Unbound module
configuration. In fact I have minimized it compared to the Fedora default.
The Fedora 29 default configuration for Unbound modules is:
module-config: "ipsecmod validator iterator"
I had never heard of 'ipsecmod' before, but it turns out to be
'opportunistic IPSec support', as described in the current
unbound.conf; I will
let you read the details there. Although configured as a module in
the Fedora version, it is not enabled ('ipsecmod-enabled' is set
off); however, I have a low enough opinion of unprompted IPSec to
random strangers that I removed the module entirely, just in case.
So my new module config is just:
module-config: "iterator"
(Possibly I could take that out too and get better performance.)
In the Fedora Unbound configuration, this can go in a new file in
/etc/unbound/local.d; I gave my new file a suitably descriptive name.
(There were a variety of frustrating aspects to this experience and I have some opinions on DNSSEC as a whole, but those are for another entry.)