== Clearing SMART disk complaints, with safety provided by ZFS

Recently, my office machine's _smartd_ began complaining about problems on one of my drives ([[again ZFSOnLinuxScrubSave]]):

.pn prewrap on

> Device: /dev/sdc [SAT], 5 Currently unreadable (pending) sectors \\
> Device: /dev/sdc [SAT], 5 Offline uncorrectable sectors

As it happens, I was eventually able to make all of these complaints go away (I won't say I fixed the problem, because the disk is undoubtedly still slowly failing). This took a number of steps and some of them were significantly helped by [[ZFS on Linux http://zfsonlinux.org/]].

(For background, this disk is one half of a mirrored pair. [[Most of it is in a ZFS pool ZFSOnLinuxDiskSetup]]; the rest is in various software RAID mirrors.)

My steps:

# Scrub my ZFS pool, in the hopes that this would make the problem go away like [[the first iteration of _smartd_ complaints ZFSOnLinuxScrubSave]]. Unfortunately I wasn't so lucky this time around, but the scrub did verify that all of my data was intact.

# Use _dd_ to read all of the partitions of the disk (one after another) in order to try to find where the bad spots were. This wound up making four of the five problem sectors just quietly go away, and it did turn up a hard read error in one partition. Fortunately or unfortunately it was my ZFS partition. The resulting kernel complaints looked like:

> blk_update_request: I/O error, dev sdc, sector 1362171035
> Buffer I/O error on dev sdc, logical block 170271379, async page read

The reason that the ZFS scrub did not turn up a problem is that ZFS scrubs only check allocated space; presumably the read error is in unallocated space.

# Use the kernel error messages and carefully iterated experiments with _dd_'s _skip=_ argument to make sure I had the right block offset into _/dev/sdc_, i.e. the block offset that would make _dd_ immediately read that sector.

# Then I tried to write zeroes over just that sector with '_dd if=/dev/zero of=/dev/sdc seek=... count=1_'. Unfortunately this ran into a problem; for some reason the kernel felt that this was a 4K sector drive, or at least that it had to do 4K IO to _/dev/sdc_. This caused it to attempt a read-modify-write cycle, which immediately failed when it tried to read the 4K block that contained the bad sector. (The goal here was to force the disk to reallocate the bad sector into one of its spare sectors. If this reallocation failed, I'd have replaced the disk right away.)

# This meant that I needed to do 4K writes, not 512-byte writes, which meant that I needed the right offset for _dd_ in 4K units. This was handily the 'logical block' from the kernel error message, which I verified by running:

> dd if=/dev/sdc of=/dev/null bs=4k skip=170271379 count=1

This immediately errored out with a read error, which is what I expected.

# Now that I had the right 4K offset, I could write 4K of _/dev/zero_ to the right spot. To really verify that I was doing (only) 4K of IO and to the right spot, I ran _dd_ under _strace_:

> strace dd if=/dev/zero of=/dev/sdc bs=4k seek=170271379 count=1

# To verify that this _dd_ had taken care of the problem, I redid the _dd_ read. This time it succeeded. (The whole sequence from the 4K offset onward is condensed in the sketch just after this list.)

# Finally, to verify that writing zeroes over a bit of one side of my ZFS pool had only gone to unallocated space and hadn't damaged anything, I re-scrubbed the ZFS pool.
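In condensed form, the dance from finding the 4K offset through re-reading it afterwards looks something like the following. To be clear, this is just a compressed version of what I did above, not a general recipe; the device name and sector number are the ones from my particular case, and the division by 8 is simply because there are eight 512-byte sectors in every 4K block:

> SECTOR=1362171035                                      # 512-byte sector from the blk_update_request error
> BLK4K=$((SECTOR / 8))                                   # equivalent 4K block offset; here 170271379
> dd if=/dev/sdc of=/dev/null bs=4k skip=$BLK4K count=1   # should fail with a read error if the offset is right
> dd if=/dev/zero of=/dev/sdc bs=4k seek=$BLK4K count=1   # overwrite the 4K block so the drive reallocates the bad sector
> dd if=/dev/sdc of=/dev/null bs=4k skip=$BLK4K count=1   # re-read; this should now succeed

Afterwards, '_smartctl -A /dev/sdc_' should show the pending sector count dropping back down, which is ultimately what makes _smartd_ stop complaining.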
ZFS was important here because its checksums meant that writing zeroes over bits of one pool disk was 'safe' in a way it wouldn't be with software RAID: if I hit any in-use data, ZFS would know that the chunk of 0 bytes was incorrect and fix it up from the other side of the mirror. With software RAID I guess I'd have had to carefully copy the data from the other side of the mirror instead of just using _/dev/zero_. (Verifying that nothing was damaged is just a scrub plus a look at the pool's status, as sketched at the end of this entry.)

By the way, I don't necessarily recommend this long series of somewhat hackish steps. In an environment with plentiful spare drives, the right answer is probably 'replace the questionable disk entirely'. It happens that we don't have lots of spare drives at the moment, plus I don't have enough drive bays in my machine to make a disk swap at all convenient right now.

(Also, in theory I didn't need to clear the SMART warnings at all. In practice the Fedora 23 _smartd_ whines incessantly about this to syslog at a very high priority, which causes one of my windows to get notifications every half hour or so, and I just couldn't stand it any more. It was either shut up _smartd_ somehow or replace the disk. Believe it or not, all these steps seemed to be the easiest way to shut up _smartd_. It worked, too.)
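(For completeness, the final 'did I damage anything' check from my last step is nothing more exotic than a scrub and a look at the pool's status afterwards; 'tank' below is a stand-in for whatever the pool is actually called:)

> zpool scrub tank
> zpool status -v tank    # the CKSUM columns show anything the scrub found and had to repair

In my case the scrub came back clean, which is how I know the zeroes only landed in unallocated space.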