How not to set up your DNS (part 19)
It's been quite a while since the last installment, but today's is an interesting although simple case. Presented in the traditional illustrated format:
; sdig ns xing121.cn
dns1.dns-dns.com.cn.
dns2.dns-dns.com.cn.
; sdig a dns1.dns-dns.com.cn.
127.0.0.1
; sdig a dns2.dns-dns.com.cn.
127.0.0.1
As they say, 'I don't think so'. If you run a caching resolving nameserver that does not have 127.0.0.1 in its access ACLs, this sort of thing is a great way to have mysterious messages show up in your logs:
client 127.0.0.1#21877: query (cache) 'www.xing121.cn/A/IN' denied
(Guess how I noticed this particular problem.)
Judging from our logs, there seem to be a number of Chinese domains that have this problem (with the same DNS servers), assuming that it is a problem and not something deliberate.
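If you want to check a nameserver for this particular failure yourself, a minimal sketch using only Python's standard library is enough (this assumes your system resolver can look up the name; `sdig` itself is just a local dig-like tool):

```python
import ipaddress
import socket

def ns_is_loopback(nameserver):
    # Resolve the nameserver's A record and check whether it points
    # into 127.0.0.0/8, i.e. back at whoever happens to be asking.
    addr = socket.gethostbyname(nameserver)
    return ipaddress.ip_address(addr).is_loopback

# ns_is_loopback("dns1.dns-dns.com.cn.") would report True if the
# situation above still holds.
```

(Any nameserver for which this returns True is effectively telling the world 'ask yourself'.)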
Less straightforward is this case:
; sdig ns edetsa.com.
ns1.hn.org.
tucuman.edetsa.com.
; sdig a ns1.hn.org.
127.0.0.1
; sdig a tucuman.edetsa.com.
184.108.40.206
One possible theory is that hn.org no longer wishes to be a DNS server for edetsa.com but can't get edetsa.com's cooperation, so they've just changed the A record for that name to something that makes people go away. (hn.org has real working DNS servers of its own.)
An advantage for hardware RAID over software RAID
I am generally fairly negative on hardware RAID; I feel that both in theory and especially in practice, it is almost never a benefit. However, today I realized that there is one way that a hardware RAID card could have an advantage: avoiding PCI bandwidth limits during RAID reconstruction.
In the general case, RAID reconstruction has to read all of the remaining intact disks and then write back to the new disk. With software RAID, this data must cross the PCI bus, since it is the server's main CPU and RAM that do all of the work. With hardware RAID, nothing crosses the PCI bus; it all happens on the card.
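A back-of-envelope way to see the difference: during a rebuild at some per-disk rate, software RAID moves the reads from all n-1 surviving disks plus the write to the replacement disk across the bus, while hardware RAID moves none of it. A tiny sketch (the disk count and rate here are illustrative assumptions, not measurements):

```python
def rebuild_bus_traffic(n_disks, per_disk_mb_s):
    # Software RAID rebuild: n-1 disks' worth of reads plus one disk's
    # worth of writes all cross the PCI bus, at the same per-disk rate.
    # Hardware RAID's equivalent figure is simply 0.
    return (n_disks - 1) * per_disk_mb_s + per_disk_mb_s

# A hypothetical 12-disk array rebuilding at 100 MB/s per disk pushes
# 1200 MB/s across the bus with software RAID.
```

In other words, software RAID's bus load during a rebuild scales with the total number of disks, not just the one being rebuilt.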
But is the PCI bus going to be the limiting factor? I think that it's at least possible. There is some evidence that our iSCSI targets are PCI bandwidth limited for sequential reads with 1 TB disks; they can read from each individual disk at around 105 MBytes/sec, but if we try reading all 12 at once, we get only around 59 MBytes/sec from each disk (for an aggregate 708 MBytes/sec, much less than the theoretical 1260 MBytes/sec we'd get if we could drive each disk at full speed).
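To make that arithmetic explicit, here is the same calculation spelled out (the rates are our measured numbers from above):

```python
disks = 12
alone = 105     # MB/s reading one disk by itself
together = 59   # MB/s per disk when reading all 12 at once

aggregate = disks * together    # what we actually observed in total
theoretical = disks * alone     # what full-speed disks would deliver
# aggregate is 708 MB/s versus a theoretical 1260 MB/s.
```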
(We were reading from the raw devices, so there was no filesystem overhead. Which isn't to say that we weren't running into some other kernel performance limit instead of an intrinsic hardware one. And for that matter, the hardware limits may be in our eSATA controller cards instead of in the PCI bus, although at that point it doesn't make a difference for planning systems; if you can't get an eSATA controller that works fast enough but you can get a hardware RAID controller that does, you don't care too much about exactly why the eSATA controller isn't fast enough.)
One might say that RAID reconstruction is an obscure corner case that's not worth optimizing for. Well, yes, but on the other hand, when it happens people tend to care a great deal about how fast you can return your RAID to full protection (and get upset if it is not pretty fast).