Chris's Wiki :: blog/tech Commentshttps://utcc.utoronto.ca/~cks/space/blog/tech/?atomcommentsDWiki2024-03-24T18:33:31ZRecent comments in Chris's Wiki :: blog/tech.By Walex on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:e5b9ff975cc88489071fb0361874258a0e3f086fWalex<div class="wikitext"><p><blockquote>“To translate this to typical system settings, I believe that you want to aggressively trigger disk writeback and perhaps deliberately restrict the total amount of buffered writes that the system can have. Rather than allowing multiple gigabytes of outstanding buffered writes and deferring writeback until a gigabyte or more has accumulated, you'd set things to trigger writebacks almost immediately”</blockquote></p>
<p>My rule is to allow outstanding writes to be no more than 1-2 seconds' worth of the applicable IO rate.</p>
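The "1-2 seconds of IO" rule can be turned into concrete Linux writeback sysctls. A minimal sketch, assuming you have measured your device's sustained throughput yourself (the 200 MB/s figure below is purely illustrative):

```python
# Hypothetical helper: convert the "1-2 seconds of IO" rule into the
# Linux vm.dirty_bytes / vm.dirty_background_bytes sysctl values.
# The device throughput is an assumption you must measure yourself.

def dirty_limits(device_mb_per_sec, seconds=2):
    """Cap buffered writes at `seconds` worth of device throughput."""
    hard_limit = device_mb_per_sec * seconds * 1024 * 1024
    # Start background writeback at half the hard limit so the kernel
    # begins flushing well before writers are forced to block.
    background = hard_limit // 2
    return {"vm.dirty_bytes": hard_limit,
            "vm.dirty_background_bytes": background}

limits = dirty_limits(200)  # e.g. a disk that sustains ~200 MB/s
for name, value in limits.items():
    print(f"sysctl -w {name}={value}")
```

The printed `sysctl -w` lines are only a starting point; kernels also expose ratio-based variants of the same knobs.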
<p><a href="https://www.sabi.co.uk/blog/05-4th.html?051105#051105">https://www.sabi.co.uk/blog/05-4th.html?051105#051105</a>
<a href="https://www.sabi.co.uk/blog/0707jul.html?070701#070701">https://www.sabi.co.uk/blog/0707jul.html?070701#070701</a>
<a href="https://www.sabi.co.uk/blog/14-two.html?141010#141010">https://www.sabi.co.uk/blog/14-two.html?141010#141010</a>
<a href="https://www.sabi.co.uk/blog/16-one.html?160114#160114">https://www.sabi.co.uk/blog/16-one.html?160114#160114</a></p>
</div>2024-03-24T18:33:31ZBy Walex on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:22e335030697367fbd1b1eaad3b5a2470dd1dff3Walexhttp://www.sabi.co.uk/blog/12-two.html?120222#120222<div class="wikitext"><p>Most filesystems use a metadata log, some use metadata+data, some <em>are</em> a metadata+data log, and BSD UFS uses "soft updates", which is a very careful ordering of updates that never leaves metadata inconsistent or not consistent with data. The problem is that it is very difficult to do right and very difficult to change, so most other filesystems either journal or are log-based.</p>
<p><a href="https://www.mckusick.com/softdep/">https://www.mckusick.com/softdep/</a>
<a href="http://www.sabi.co.uk/blog/12-two.html?120222#120222">http://www.sabi.co.uk/blog/12-two.html?120222#120222</a></p>
<p>There is an outline of the history of this here:</p>
<p><a href="https://lists.lugod.org/presentations/filesystems.pdf">https://lists.lugod.org/presentations/filesystems.pdf</a></p>
<p>There is lots more about this in various posts by several authors from the "O_PONIES" discussion.</p>
</div>2024-03-24T18:22:58ZBy George Spelvin on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:840328cdef24f0b96203044b26ee9cd4ac37d3bfGeorge Spelvin<div class="wikitext"><p>Thanks, Ted, for the technical details.</p>
<p>I should mention, however, that there's an easier hack that works well specifically in the newly-created-file case you mention: allocate the blocks, but don't increase the on-disk i_size. And have the file system know that the data after i_size must be zeroed before use.</p>
<p>This lets you write out the metadata without writing out the data, and without significant new data structures either.</p>
<p>You do need some new code to keep the in-memory st_size and on-disk i_size separate, with the latter being the offset of the first unwritten byte.</p>
<p>It's limited to appending to a file and not e.g. filling holes in a sparse file, but as you say, that's definitely the common case.</p>
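The split between the in-memory size and the durable on-disk size can be sketched as a toy model (names here are illustrative, not ext4's actual fields or code):

```python
# Toy model of the i_size trick described above: allocate blocks and
# grow the visible size eagerly, but only advance the durable on-disk
# i_size once the data itself has been written out.

class Inode:
    def __init__(self):
        self.st_size = 0      # in-memory size, what stat() reports
        self.i_size = 0       # on-disk size: offset of first unwritten byte
        self.buffered = b""   # data written by the app but not yet flushed

    def append(self, data):
        self.buffered += data
        self.st_size += len(data)   # visible immediately to readers

    def writeback(self, nbytes):
        # Data hits disk; only now may the durable i_size advance.
        nbytes = min(nbytes, len(self.buffered))
        self.buffered = self.buffered[nbytes:]
        self.i_size += nbytes

    def after_crash_size(self):
        # Blocks past i_size may exist on disk but must read as zeros,
        # so a crash can never expose stale data.
        return self.i_size

ino = Inode()
ino.append(b"hello world")
ino.writeback(5)            # only "hello" is durable so far
print(ino.st_size, ino.i_size, ino.after_crash_size())  # 11 5 5
```

Metadata naming the allocated blocks can then be committed at any time: on a crash, the recovered file simply ends at `i_size`.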
</div>2024-03-23T00:16:34ZBy Ted Ts'o on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:05161b18cee7a751f958228061d3252929ea3f9fTed Ts'ohttps://thunk.org/tytso<div class="wikitext"><p>As it turns out, fsync(2) in Linux does work the way George Spelvin has suggested. We first issue writes to the data blocks (of the file being fsync'ed), and only then do we commit the metadata. So while the first phase is going on, other writers who are also calling fsync(2) won't have their commits blocked.... so long as all of the writes involved are to blocks that have already been allocated, i.e. we are overwriting existing data blocks. This is often the case in the database world, for example.</p>
<p>Unfortunately things get a bit more complicated when block allocations are involved, since these necessarily require metadata changes --- and most of the time, we don't want the previous contents of the data blocks to be unmasked if the system crashes. So in ext4's data=ordered mode, suppose you have a large, freshly created file --- say, an image of a 4GB DVD rip --- which is being fsync'ed. While we are writing out the data blocks for that large file, we need to allocate the data blocks for it, and these allocations require making global changes to the file system metadata. Now suppose someone writes a small file, and calls fsync(2) on that small file. When we commit the metadata blocks, which include the block allocations for both the large file and the small file, we need to make sure all of the data blocks that have been assigned to the large file are written out first, so that on a crash we don't risk exposing stale data from a previous file to an unauthorized user. And this is where the entanglement comes from.</p>
<p>Now, if you don't care about stale data being potentially exposed after a crash, you can just mount the file system in data=writeback mode, which will avoid the entanglement; so long as your system never crashes, or you don't care about exposing stale data, you're golden.</p>
<p>If you do care about this, then it is still soluble, but it requires a lot more complexity in the file system. Because now what you need to do is to allocate blocks for the new file, but (a) not actually make the changes in the file system's global metadata, and (b) make sure that, despite the fact that you haven't made the metadata changes, the blocks that have been allocated for file A won't also be allocated for file B. That is, you need to have in-memory state to reserve blocks, and to provisionally assign various physical block ranges to an inode's logical block ranges --- but in a way that doesn't involve storing this information in the file system's metadata.</p>
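The in-memory reservation scheme described above can be sketched abstractly. This is an illustration of the idea, not ext4's actual delayed-allocation code; all names are invented:

```python
# Sketch of in-memory block reservation: blocks are provisionally
# assigned to an inode without touching the on-disk allocation
# metadata, yet cannot be handed to any other inode meanwhile.

class Allocator:
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.on_disk_used = set()   # blocks recorded in committed metadata
        self.reserved = {}          # block -> inode, in-memory only

    def reserve(self, inode, want):
        """Provisionally assign `want` free blocks to `inode`."""
        got = []
        for b in range(self.nblocks):
            if b not in self.on_disk_used and b not in self.reserved:
                self.reserved[b] = inode
                got.append(b)
                if len(got) == want:
                    break
        return got

    def commit(self, inode):
        """Journal commit: reservations become real metadata changes."""
        for b, owner in list(self.reserved.items()):
            if owner == inode:
                del self.reserved[b]
                self.on_disk_used.add(b)

alloc = Allocator(8)
a = alloc.reserve("file_A", 3)   # file A holds blocks 0-2 provisionally
b = alloc.reserve("file_B", 2)   # file B cannot be given those blocks
alloc.commit("file_B")           # B's commit needn't wait for A's data
```

The point is that `commit("file_B")` changes only B's metadata, while A's reservations stay out of the on-disk state until A's data is safely written.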
</div>2024-03-21T20:41:57ZBy Chris Siebenmann on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:091bcc2cc9dafc51c230846b96b4fe66df01e629Chris Siebenmann<div class="wikitext"><p>I don't think there's any fundamental obstacle to a filesystem making
it so that committing the journal isn't a choke point. But at the same
time I don't think very many do it, and I think it's probably easier to
implement it as basically a single-threaded process. If you implement
journal commit as a concurrent process you need to carefully keep various
things separate even if they'd normally be mingled together (for example,
allocating new space for new data blocks).</p>
</div>2024-03-18T20:39:25ZBy George Spelvin on /blog/tech/WriteBufferingAndSyncstag:CSpace:blog/tech/WriteBufferingAndSyncs:39201b9fe5db26e723550dca0054d7ab7d74ae80George Spelvin<div class="wikitext"><p>It seems to me that there's a simpler solution: When <code>fsync()</code>ing, empty the buffers <em>before</em> synchronizing with other journal writes.</p>
<p>Just to state explicitly what's implicit in what you wrote, there's a difference between the data being on disk, and the associated metadata being written. The second part is the "commit" which makes the write durable.</p>
<p>All file systems have this distinction, but journaling file systems make commits global, so you have more interference between writers.</p>
<p>Writing n blocks of data takes O(n) time, while the metadata commit is, if not quite O(log n), at least o(n). Large commits aren't themselves a prospect to be feared.</p>
<p>Keeping an overhang in RAM is useful <em>if</em> we have enough buffer space to absorb the write <em>and</em> we won't be synchronizing the write, so we can move on while the OS completes the writes asynchronously.</p>
<p>Given modern RAM sizes, the former threshold is quite generous, but we still need heuristics. It's annoying when one massive writer eats all the available RAM, stalling a lot of other smaller writers which could otherwise have proceeded asynchronously.</p>
<p>But I don't see why we need to make heuristic guesses at the second.</p>
<p>Rather, divide <code>fsync()</code> operations into two phases:</p>
<ol><li>Writing out the data</li>
<li>Committing the metadata</li>
</ol>
<p>The important part of this idea is that <em>phase 1 does not block journal commits.</em> Multiple other writers may force a journal commit while this lengthy preliminary is in progress. Only once it's on disk do we need to proceed to the associated global journal commit, which requires synchronization with other writers, but is never huge.</p>
<p>Rather than the awkward heuristic of saying "I suspect this process will want to sync its writes, so let's minimize RAM buffering", you wait until you have an <code>fsync()</code> call which tells you unambiguously. But then you flush the buffers <em>without blocking other syncs</em>, just like you would have done had your heuristic triggered on the initial <code>write()</code> call, until the final o(n) metadata update.</p>
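The two phases can be sketched with a lock that protects only the commit step. This is a toy illustration of the proposal, with invented names, not any real filesystem's journal code:

```python
# Two-phase fsync sketch: phase 1 (data writeback) holds no global
# lock and can run for a long time; only the short phase 2 (journal
# commit) is serialized against other writers.

import threading

journal_lock = threading.Lock()   # serializes only the commits
commits = []

def fsync_two_phase(name, data_blocks):
    # Phase 1: write the data out; lengthy, but blocks nobody else.
    written = [f"{name}:{i}" for i in range(data_blocks)]
    # Phase 2: the global journal commit, short and synchronized.
    with journal_lock:
        commits.append((name, len(written)))

big = threading.Thread(target=fsync_two_phase, args=("dvd_rip", 1000))
small = threading.Thread(target=fsync_two_phase, args=("tiny", 1))
big.start(); small.start(); big.join(); small.join()
print(commits)   # both commits complete; neither waited on the other's data
```

Whichever thread reaches phase 2 first commits first; the small writer is never stuck behind the big writer's gigabytes of data writeback.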
</div>2024-03-18T14:31:56ZBy Verisimilitude on /blog/tech/PerfectionTraptag:CSpace:blog/tech/PerfectionTrap:e73489e34246557b4807e49863140a7386459690Verisimilitudehttp://verisimilitudes.net<div class="wikitext"><p>Notice no one ever says that the perfect mathematical proof is the enemy of the good mathematical proof. This is because a flawed mathematical proof is worthless, and similarly for software.</p>
<blockquote><p>In reality it's not actually a choice between right and worse; it's really a choice between nothing, worse, and right.</p>
</blockquote>
<p>In the eighteen years since, with a world increasingly reliant on software with no fallback, it should be clear that nothing is preferable to deeply-flawed software.</p>
<blockquote><p>The easiest place to see this is computer security, where insistence on perfection (or some excellent approximation) is one of the holy tenets.</p>
</blockquote>
<p>It's better to know one is <em>insecure</em> than to falsely believe one to be <em>secure</em>, however these words are defined.</p>
</div>2024-03-15T19:09:25ZBy Twirrim on /blog/tech/ServerCPUDensityAndRAMLatencytag:CSpace:blog/tech/ServerCPUDensityAndRAMLatency:bd825bf2b887e25544bd603771b2bd5a14612a82Twirrim<div class="wikitext"><p>NUMA is your biggest concern when it comes to RAM latency, and with increasing core counts, it's only going to get worse. NUMA has a <em>lot</em> of quirks to it that can dramatically influence performance.</p>
<p>Without going too deep into the subject: the cores in your system are grouped together into NUMA nodes; each node is directly attached to a particular subset of memory, and indirectly attached to the rest via the other nodes, paying the penalty of that extra hop between it and the memory. That adds noticeable latency to every request.</p>
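The hop penalty can be made concrete with a toy latency model. The numbers below are illustrative assumptions, not measurements of any real platform:

```python
# Toy model of NUMA access cost: local accesses pay a base latency,
# remote accesses add a fixed penalty per hop through another node.
# Both constants are assumptions chosen for illustration only.

LOCAL_NS = 90          # access to the node's directly-attached memory
HOP_NS = 50            # extra cost per hop through another node

def access_latency(cpu_node, mem_node, hops_between):
    """Latency (ns) for cpu_node touching memory homed on mem_node."""
    if cpu_node == mem_node:
        return LOCAL_NS
    return LOCAL_NS + HOP_NS * hops_between

print(access_latency(0, 0, 0))   # local access
print(access_latency(0, 1, 1))   # memory one hop away
```

Even a single hop adds a meaningful fraction of the base latency, which is why remote placement shows up so readily in benchmarks.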
<p>It can have some really significant impact. For example, Oracle has been exploring having ktext replicated into each NUMA domain on arm64 (which in server-class chips tends to be even more "NUMA"ish), <a href="https://lwn.net/Articles/956900/">https://lwn.net/Articles/956900/</a>, "[the patches] show a gain of between 6% and 17% for database-centric like workloads. When combined with userspace awareness of NUMA, this can result in a gain of over 50%."
Having to reach across to the other NUMA node to get to the executable code in the kernel turns out to be an expensive and common operation.</p>
<p>It only gets worse from there; for example, the Linux page cache isn't fully NUMA aware. I know of someone who tripped over this while benchmarking NUMA nodes. They thought the system they were benchmarking had two NUMA nodes with very different performance. In reality, it turned out the mysql client library had been cached in one NUMA node's memory during the previous benchmark run, and so the calls to the functions exposed by the client library were having to go cross-NUMA!</p>
<p>CXL and the other technologies in the pipeline will also make these kinds of concerns increasingly important, as CXL is talked about in terms of adding the cost of a NUMA-node hop or two.</p>
</div>2024-03-04T02:26:14ZBy Fazal Majid on /blog/tech/ServerCPUDensityAndRAMLatencytag:CSpace:blog/tech/ServerCPUDensityAndRAMLatency:1ab99c59c7dcf2a246a477f7eb6d7d0c7b8c19d9Fazal Majidhttps://majid.info/<div class="wikitext"><p>NUMA has more of an impact. In my experience PostgreSQL performance is most correlated with the STREAM benchmark, and Amazon AWS’s biggest instances underperform some cheaper ones due to that NUMA penalty.</p>
</div>2024-03-03T12:37:19ZBy Fazal Majid on /blog/tech/ServersSpeedOfChangeDowntag:CSpace:blog/tech/ServersSpeedOfChangeDown:d31f5b4b475147f1057c72bcd7048d40c1c086a9Fazal Majidhttps://majid.info/<div class="wikitext"><p>AWS is raising efficiency with its Graviton series arm64 CPUs, and it’s easier to move functions-as-a-service than legacy platform-as-a-service workloads. Still, when my former employer rebuilt its Docker images on ARM, we got 30% cost savings.</p>
<p>Newer CPUs are also adding features like encrypted memory that protect VMs’ memory from snooping by the hypervisor, and thus allow sensitive industries like health care to move to the cloud.</p>
</div>2024-03-02T15:21:55ZBy sapphirepaw on /blog/tech/ServersSpeedOfChangeDowntag:CSpace:blog/tech/ServersSpeedOfChangeDown:e29df789abd441c23d9bdc7fc92e3c1d9ceb455esapphirepawhttps://www.sapphirepaw.org/<div class="wikitext"><p>AWS has reached a point where they are always touting "price efficiency" of new hardware generations. "Best price/performance ever!" when you boost prices 8% and performance 10%. It looks from the outside like they can't get higher performance without raising their own costs.</p>
<p>The floor of the "smallest configurable instance" never goes down, either. They filled in underneath <code>m*.large</code> with the <code>t*</code> families, which are CPU-throttled to emulate smaller slices of the underlying hardware.</p>
</div>2024-03-01T15:56:08ZBy Ivan on /blog/tech/OpenSourceCultureAndPublicWorktag:CSpace:blog/tech/OpenSourceCultureAndPublicWork:b82c48ca6bfba6129db8ca6168a6846f87ed8cafIvan<div class="wikitext"><p>Related might be <a href="https://rachelbythebay.com/w/2018/10/09/moat/">the choice to stay out of the community altogether</a>, although for not exactly these reasons.</p>
<p>There are forces in the community which try to address this problem, e.g. by <a href="https://allcontributors.org/docs/en/emoji-key">recognising the many ways to contribute which aren't limited to code</a> (although these technical choices are not for everybody). On the other end of the spectrum is the rare project like SQLite where <a href="https://sqlite.org/copyright.html">code is not that welcome and patches, if they are submitted, are most likely to be rewritten</a> for provenance and accountability reasons.</p>
</div>2024-02-26T07:01:35ZBy Phong on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:122b816c64f9bdbf53d1b539996ea1b1d1b46b5ePhong<div class="wikitext"><p>I have the Asus W680 motherboard mentioned on the Fediverse with an i7-13700K and it does appear to properly support ECC (at least as far as Windows 11 can tell). I have not had any problems so far running it as my main desktop.</p>
</div>2024-02-23T20:54:03ZBy Cristina on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:eff7ee86602c94bb3ad37c6948f9b20caba36d6cCristina<div class="wikitext"><p>A little over a decade ago, going with a Xeon and ECC instead of a desktop Intel chip added maybe two or three hundred dollars to the system cost, and I built such a system for under $1500. Now, it seems Xeon has nothing much cheaper than Threadripper. I see some Sapphire Rapids Xeon parts (one generation behind) in the $600-$1000 range.</p>
<p>For a while, Intel also supported ECC on some desktop i3 chips, such as i3-8100. The rumour was that they did this to target embedded markets, unlike the otherwise-higher-end i5 through i9 which lacked ECC. I don't know whether such support still exists.</p>
<p>The "workstation" motherboards do, in general, seem to be closer to what Chris is looking for. For example, I see several with three to seven M.2 slots, dual ethernet ports (or triple, but it looks like the third is usually reserved for BMC), 15+ USB ports, 6 or 8 SATA, even up to seven full-speed full-width PCIe slots. The Asus Pro WS Sage SE boards—W790E or WRX80E—are examples for those who don't mind spending $1400. Or the Gigabyte TRX50 Aero D is $400 cheaper but with only 3 PCIe slots.</p>
<p>The aforementioned 3 boards all seem to need RDIMMs. Personally, I'd probably go for unofficial DDR5 ECC UDIMM support according to web forum posts, maybe with Ryzen 8000G Pro APUs when available. Or catch the trailing edge of W680 boards and CPUs as they drop in price, and add an ethernet card if more than one port is needed. But I'd love to see that hypothetical "W780". As Linus Torvalds has ranted, ECC shouldn't be a "premium" feature. Researchers have found via experiments like "bit-squatting" that memory errors do occur in the wild, often enough to be of concern.</p>
<p>Anyway, thanks for this post, and please keep us updated if you manage to build a low-cost system with working ECC.</p>
</div>2024-02-17T18:58:53ZBy Cristina on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:309d137e492a4712b93ba5f3495f9fd964b4fc3aCristina<div class="wikitext"><p>Jonathan, the Threadripper (TR) is considered a workstation board rather than a desktop board per se. As Chris wrote, "The traditional option to getting ECC RAM support (along with a bunch of other things) was to buy a 'workstation' motherboard…"; that sentence was about Xeon but is just as true for TR.</p>
<p>Threadripper chips and boards support ECC officially, and TR might be the reason for desktop Ryzens supporting ECC at all: at least in the original version, Ryzen and TR used the same dies (TR having been a "spare time" project). But it looks hard to get a new current-generation model for less than 2 or 3 thousand dollars (I do see a previous-generation 5955WX—literally just one, in Ottawa—for $1000 at Memory Express). The motherboards are not cheap either. While a nice option for those who can get an employer to pay, this decision will probably double or triple the cost of a system.</p>
</div>2024-02-17T16:59:00ZBy Chris Siebenmann on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:3b27af5ffc920933f2697d3e5e1b4beae39496d8Chris Siebenmann<div class="wikitext"><p>Oops yes, Threadripper class AMDs do support ECC and you can get
motherboards for them and build your own desktop that way. For some
reason I always push this out of my mind as a crazy option, but it's
probably not more so than a Xeon-based build.</p>
</div>2024-02-17T16:54:58ZBy Thomas on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:ad8c58678265a9a5702199bb54c1f1f5690c3f6eThomas<div class="wikitext"><p>I’ve got an ASRock Rack AM5 1U system that I’m quite happy with - a ryzen 9 7900 + 128G of RAM makes a really nice small rack server, and uses <100W nearly all the time.</p>
</div>2024-02-17T12:54:04ZBy Jonathan on /blog/tech/DesktopECCOptions2024tag:CSpace:blog/tech/DesktopECCOptions2024:d374176f476976abad9a0096e6951a2692e83eb2Jonathanhttps://jmtd.net<div class="wikitext"><p>Pretty sure the Threadripper class AMDs support ECC. I have a Lenovo P620 workstation with TR and ECC.</p>
</div>2024-02-17T11:42:00ZBy Verisimilitude on /blog/tech/IPv6IsTheFuturetag:CSpace:blog/tech/IPv6IsTheFuture:ac6691bdb9fe31f8efd0b7b600f06dda8fd12f9cVerisimilitudehttp://verisimilitudes.net<div class="wikitext"><p>Nearly eight years later, the situation for IPv6 still looks bad. The only reason it's made the progress it has is due to so-called <em>smart</em>phones, which is another way of saying large corporations manage things, and the people have little to no say.</p>
<p>It's far more likely IPv6 is replaced in favour of something else. IPv4 will undoubtedly outlive IPv6, and I'd wager on that. Everything important is still available over IPv4, whereas nothing important is only available over IPv6. I'll continue to design interfaces that only allow IPv4.</p>
</div>2024-02-16T00:15:01ZBy Edward R on /blog/tech/CPUIGPCoolingAdvantagetag:CSpace:blog/tech/CPUIGPCoolingAdvantage:e3293cb34e16b2b193bad4c3023d15d61d89ac44Edward R<div class="wikitext"><blockquote><p>Another limit, now that I look, is the amount of power available to a PCIe card</p>
</blockquote>
<p>That's not an important limit. Any card that wants more power will just have you plug in a "PCIe power" connector directly from the power supply. Evidently, a 6-pin connector can supply an extra 75 W, an 8-pin adds 150 W, and some high-end cards have two 8-pin connectors (so, with the slot's own 75 W: 150, 225, or 375 W total).</p>
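That power arithmetic is easy to make checkable. The slot and connector figures below are the commonly cited PCIe limits; treat the exact numbers as assumptions:

```python
# Total board power available to a PCIe card: the slot's own budget
# plus each auxiliary power connector. Figures are the commonly cited
# PCIe limits, stated here as assumptions.

SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def card_power_budget(connectors):
    """Watts available: slot power plus all auxiliary connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(card_power_budget([]))                   # slot only
print(card_power_budget(["6-pin"]))            # slot + 6-pin
print(card_power_budget(["8-pin"]))            # slot + 8-pin
print(card_power_budget(["8-pin", "8-pin"]))   # slot + two 8-pin
```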
<p>For what it's worth, I've got a Gigabyte Eagle RX 6600, being the cheapest decent non-Nvidia card I could find at the time (I'm also annoyed at the lack of low-end options). It's a double-width thing that can draw up to 132 watts and has multiple fans, but it's turned out not to be a problem in practice. Most motherboards leave a blank PCIe slot near the main ×16 connector, maybe with an M.2 slot lying flat, and I don't think I've heard the fans—which don't even come on if I'm not gaming. With motherboards having integrated network, sound, and storage controllers, it's rare that anyone would need the 7 expansion cards that ATX cases allow.</p>
<p>If I could've bought a CPU with integrated graphics and ECC RAM support, I'd have done that. Though people with high-end monitors need to consider the constant display-scanout bandwidth: three "4K" monitors at high refresh rates could degrade RAM performance by perhaps 25%.</p>
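The scanout-bandwidth concern can be estimated with simple arithmetic. The pixel format, refresh rate, and memory-bandwidth baseline below are all assumptions for illustration:

```python
# Rough scanout-bandwidth estimate for a multi-monitor IGP setup.
# 4 bytes/pixel, 144 Hz, and the ~80 GB/s RAM baseline are assumed
# figures, not measurements.

def scanout_gbps(width, height, hz, bytes_per_pixel=4, monitors=1):
    """GB/s the display engine must continuously read from RAM."""
    return width * height * bytes_per_pixel * hz * monitors / 1e9

three_4k = scanout_gbps(3840, 2160, 144, monitors=3)
ddr5_dual_channel = 80.0   # assumed desktop RAM bandwidth, GB/s
print(f"{three_4k:.1f} GB/s scanout, "
      f"{100 * three_4k / ddr5_dual_channel:.0f}% of assumed RAM bandwidth")
```

The raw scanout stream is only part of the story; contention effects on real memory controllers can make the effective degradation larger than the simple percentage suggests.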
</div>2024-01-26T22:17:12ZBy Anonymous on /blog/tech/CPUIGPCoolingAdvantagetag:CSpace:blog/tech/CPUIGPCoolingAdvantage:b91afac2aaaf218e8f2ec71146d6b894e436a060Anonymous<div class="wikitext"><blockquote>
<p>Another advantage is those IGPs, when integrated in a manageability platform like Intel AMT, offer remote KVM functionality for the poor man’s IPMI (works better if you use MeshCommander).</p>
</blockquote>
<p>I thought Intel had discontinued support for MeshCommander ?</p>
</div>2024-01-26T12:01:20ZBy Barry on /blog/tech/CPUIGPCoolingAdvantagetag:CSpace:blog/tech/CPUIGPCoolingAdvantage:1783ef7de34c6ddf61edab43f41a0f09860a47c3Barry<div class="wikitext"><p>For their current desktop graphic cards Nvidia and AMD haven't released entry-level GPUs: everything has triple-digit watt consumption, so passive cooling isn't even an option. Pallit is rumoured to have a KalmX GEForce 3050 in the works, which will be midrange and have an enormous heatsink, but it's something. Any other fanless card will have an even older GPU.</p>
</div>2024-01-25T21:15:23ZBy Ian Z aka nobrowser on /blog/tech/CPUIGPCoolingAdvantagetag:CSpace:blog/tech/CPUIGPCoolingAdvantage:0c9c34f91c5b9b962b09f5ee08e0d18061271e38Ian Z aka nobrowser<div class="wikitext"><p>Also: in the case of active cooling, the extra noise.</p>
</div>2024-01-25T19:38:21ZBy kib on /blog/tech/MotherboardFeaturesPCIeCoststag:CSpace:blog/tech/MotherboardFeaturesPCIeCosts:f38ae3e6a14cfbda3cc78a332e1547b2d1506fcbkib<div class="wikitext"><p>There are PCIe switches, whose use seems to be common, and required, on top-level motherboards.</p>
</div>2024-01-25T11:04:45ZBy Fazal Majid on /blog/tech/CPUIGPCoolingAdvantagetag:CSpace:blog/tech/CPUIGPCoolingAdvantage:c2126b504df2d582135be7c8e9bf47e1b5a71104Fazal Majidhttps://majid.info/<div class="wikitext"><p>Another advantage is those IGPs, when integrated in a manageability platform like Intel AMT, offer remote KVM functionality for the poor man’s IPMI (works better if you use MeshCommander).</p>
</div>2024-01-25T07:04:09ZBy Ian Z on /blog/tech/MFAIsBothSimpleAndWorktag:CSpace:blog/tech/MFAIsBothSimpleAndWork:c6aa1efb30485be3b9fa133cc45356e86e3e41b5Ian Z<div class="wikitext"><p>I have some of the same gripes, but here's what I do:</p>
<p>- I never use the QR code, only the ASCII key and I type it manually.</p>
<p>- I let Github etc. "trust" my home desktop so MFA is only required once per month or something.</p>
<p>- KeePass and friends are for the birds. I have a micro SD card with an encrypted filesystem where I keep my valuable passwords, including the MFA recovery codes. It's almost read-only so I don't expect it to fail before my carbon based systems do :-P</p>
<p>- I have a Yubikey but the only thing I use it for now is unlocking the computer after suspend. I got it for MFA on the work AWS account but that is now in the rear-view mirror.</p>
</div>2024-01-11T22:34:23ZBy Polar on /blog/tech/MFAIsBothSimpleAndWorktag:CSpace:blog/tech/MFAIsBothSimpleAndWork:1ae4badceb2dc1dd4bc5a1764281da1e9107c336Polar<div class="wikitext"><p>In my experience as a System Administrator, the hassle of training folks on MFA, setting it up 30+ times (once with each user), and making it required is significantly outweighed by the possible loss of data and trust it prevents, giving the company and each user a higher threshold of security. Many password managers have MFA built right in, which makes the process even more streamlined for less experienced users.</p>
<p>For me, personally, I would never go back to not using MFA for anything I see any value in, from a pet project that could spread malware if hacked into, to a video game I paid for. This is the world we live in, and if security must be ignored to have fun, you are setting yourself up for disaster IMO.</p>
</div>2024-01-11T18:25:12ZBy Fazal Majid on /blog/tech/MFAIsBothSimpleAndWorktag:CSpace:blog/tech/MFAIsBothSimpleAndWork:ebbcd73961bfba8826e41bd9445b1112ed1682f9Fazal Majidhttps://majid.info/<div class="wikitext"><p>My TOTP app, Lockdown, syncs seamlessly between my iOS and macOS devices using iCloud, and I was able to write in an afternoon a backup script that exports to an HTML file you can print or save: <a href="https://blog.majid.info/lockdown-export/">https://blog.majid.info/lockdown-export/</a></p>
<p>I do prefer FIDO, but there are indeed websites that will limit you to two or even one key (looking at you, PayPal). Keeping a spare Yubikey in the office is a simple disaster recovery plan.</p>
</div>2024-01-11T07:01:44ZBy zdw on /blog/tech/MFAIsBothSimpleAndWorktag:CSpace:blog/tech/MFAIsBothSimpleAndWork:f742b03d196c64b0c6aa15732b268b0df7b004cazdwhttp://zackofalltrades.com<div class="wikitext"><p>Ah, my bad, I misread the statement about backup, and agree that it's a problem with physical devices, but not TOTP</p>
</div>2024-01-11T05:38:23ZBy zdw on /blog/tech/MFAIsBothSimpleAndWorktag:CSpace:blog/tech/MFAIsBothSimpleAndWork:13667528a87488ebe6923948654b91e188e6432czdwhttp://zackofalltrades.com<div class="wikitext"><p>TOTP can be trivially backed up - for example, the various KeePass-derived password managers support it, and you can sync across clients, keeping the same password wallet on both desktop and phone.</p>
</div>2024-01-11T05:35:36ZBy Chris Siebenmann on /blog/tech/FilesystemCacheAndNFSBandwidthtag:CSpace:blog/tech/FilesystemCacheAndNFSBandwidth:ae92e010e8f187434343bb1d1355615998a5c950Chris Siebenmann<div class="wikitext"><p>More or less the only thing changing is in hardware, which is moving
to a different set of SuperMicro hardware, <a href="https://www.supermicro.com/en/products/system/hyper/2u/sys-221h-tn24r">SuperServer SYS-221H-TN24R</a>,
using the <a href="https://www.supermicro.com/en/products/motherboard/X13DEM">X13DEM dual-socket motherboard</a> with Xeon
Silver 4410Ys and dual 10G-T (and the mentioned 512 GB of RAM). We were
only able to get these units with front panel disks, so two of the 24 bays
are system disks and they have (or will have) 22 data disks. This is more
data disks than <a href="https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSFileserverSetupIII">the current hardware</a>,
so we're not trying to do a one to one replacement of the existing
servers and migration is going to take some time.</p>
</div>2024-01-10T18:17:43ZBy Adam on /blog/tech/FilesystemCacheAndNFSBandwidthtag:CSpace:blog/tech/FilesystemCacheAndNFSBandwidth:f35df2a46b12453d0d8ac7e51c36b9cedf24c679Adam<div class="wikitext"><blockquote><p>Our current ZFS fileserver hardware is getting long in the tooth, so we're working on moving to new hardware (with the same software and operational setup, which we're happy with).</p>
</blockquote>
<p>Time for a ZFSFileserverSetupIV article? Materialistic, I know. The 512 GiB RAM + the III setup makes for interesting guessing.</p>
</div>2024-01-10T15:46:27ZBy sapphirepaw on /blog/tech/EmailAddressesBadPermanentIDstag:CSpace:blog/tech/EmailAddressesBadPermanentIDs:3d15bd051eacca1cb037389d776a64de729e02c1sapphirepawhttps://www.sapphirepaw.org/<div class="wikitext"><p>Another one that just hit my brain out of nowhere: in the app-centric world, <strong>the device phone number is also a bad "permanent" ID.</strong> Sure, the user already has a phone number… but it's not unique over time.</p>
<p>This combines especially poorly with less-privileged people changing their phone numbers more frequently, or not wanting to plug their One True Identity directly into your system.</p>
</div>2024-01-09T16:46:32ZBy Verisimilitude on /blog/tech/TLSCertificateExpiryHacktag:CSpace:blog/tech/TLSCertificateExpiryHack:caa1798ba6457caca7c17d16ce1d60321ce457c7Verisimilitudehttp://verisimilitudes.net<div class="wikitext"><blockquote><p>When people argue about this, let's be clear; TLS certificate expiry times, like most forms of key expiry, are fundamentally a hack that exists to deal with the imperfections of the world.</p>
</blockquote>
<p>Most such design decisions in TLS are mechanisms of control above all else.</p>
<blockquote><p>TLS certificates would never be issued to anyone other than the owner of something</p>
</blockquote>
<p>This flaw alone invalidates the entire model, which makes it surprising that it gets not even one sentence of mention.</p>
<p><a href="https://www.apple.com/appleca/AppleIncRootCertificate.cer">This</a> is the root certificate of Apple Computer, used at its lowest levels and burned into the computers.</p>
<p>I examined it with an X.509 tool, and found no expiration dates therein. I may be wrong, but this isn't surprising to me in the least. Of course Apple Computer and Google can be trusted with cryptographic secrets that don't expire; it's only the peons who don't need those. Apple Computer and Google certainly don't mind decreasing the already insufficiently-small lifetimes of what they allow others to use, however.</p>
</div>2024-01-08T23:55:44ZBy Jonathan on /blog/tech/EmailAddressesBadPermanentIDstag:CSpace:blog/tech/EmailAddressesBadPermanentIDs:fdef07d70ea3777ed5cbc1f10c18d371527f11a9Jonathanhttps://jmtd.net<div class="wikitext"><p>Threat models around this should consider domain expiry and re-registration by a bad actor. Those of us using vanity/personal domains should really think about an exit strategy before accepting mail.</p>
</div>2023-12-31T10:11:23ZBy Ian Z aka nobrowser on /blog/tech/StandardsAndBadContenttag:CSpace:blog/tech/StandardsAndBadContent:334697b4f7c366ef9f9f6cb9f1670e9e6b48ccd5Ian Z aka nobrowser<div class="wikitext"><p>However, as Seymour Metz pointed out in the <code>spammers.dontlike.us</code> mailing list, when it comes to the line endings in the DATA terminating sequence, i.e. the ones whose loose handling actually causes the vulnerability, the RFC does in fact say explicitly that the receiving MTA MUST NOT accept the sequence <code>LF . LF</code> as a terminator.</p>
<p>I have not checked if it says anything about other dodgy variants such as <code>CR . CR</code>.</p>
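The strict terminator handling the RFC requires can be sketched directly. This is an illustrative check, not any real MTA's parser; the bare-LF and bare-CR variants are the ones that enable SMTP smuggling:

```python
# Strict end-of-DATA detection: only the canonical CRLF "." CRLF
# sequence terminates DATA. Loose variants like LF.LF or CR.CR must
# not be accepted as terminators.

def find_data_end(stream: bytes):
    """Return the offset of the strict CRLF.CRLF terminator, or None."""
    idx = stream.find(b"\r\n.\r\n")
    return idx if idx != -1 else None

print(find_data_end(b"body text\r\n.\r\n"))   # strict terminator found
print(find_data_end(b"body text\n.\n"))       # LF.LF must be rejected
print(find_data_end(b"body text\r.\r"))       # CR.CR likewise
```

A real receiver also has to handle the terminator split across reads and dot-stuffed lines, but the acceptance rule itself is this simple.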
</div>2023-12-27T06:17:25ZBy Chris Siebenmann on /blog/tech/TLSInternalCANameConstraintsIItag:CSpace:blog/tech/TLSInternalCANameConstraintsII:96a2aad6e6784336357a7dfc177be526fca3df16Chris Siebenmann<div class="wikitext"><p>That's a great point, chipb. I usually think of separate TLS certificates
(intermediates or otherwise) as having separate keys, but that's not
absolutely required, and if you're already looking at using a ten or
twenty year 'intermediate' (with obviously the same key for the duration),
you can reuse the same key for multiple intermediate certificates. I'll
have to remember this if it ever comes up for us.</p>
</div>2023-12-11T04:23:23ZBy chipb on /blog/tech/TLSInternalCANameConstraintsIItag:CSpace:blog/tech/TLSInternalCANameConstraintsII:33f35dafcdcc5e46862c401c7024bba919f829d2chipb<div class="wikitext"><blockquote><p>(One option would be to pre-mint a series of intermediates with relatively short (and overlapping) lifetimes as a precaution. But then you have to protect all of these intermediate certificates for possibly several decades.)</p>
</blockquote>
<p>I usually try to avoid being a pedant, but I’m compelled to point out that the certificate isn’t the sensitive part to protect; all those pre-minted intermediate certs could share the same key. It may not be a great idea, but at least to me, it doesn’t seem worse in principle than using a single multi-decade intermediate.</p>
</div>2023-12-10T16:34:28ZBy David Magda on /blog/tech/TLSInternalCANameConstraintsIItag:CSpace:blog/tech/TLSInternalCANameConstraintsII:06c9ec6afb225ba0671f409a74426e62712aeb48David Magdahttp://www.magda.ca/<div class="wikitext"><blockquote><p><em>Then I read Michal Jirků's Running one's own root Certificate Authority in 2023[1] and had a realization about a general way out so that everything would accept your TLS name constraints.</em></p>
</blockquote>
<p>Some interesting stuff in the Hacker News comments:</p>
<ul><li><a href="https://news.ycombinator.com/item?id=37537689">https://news.ycombinator.com/item?id=37537689</a></li>
</ul>
<blockquote><p><em>At this point, you throw away the CA root certificate's private key, so no one can make any more intermediate certificates.</em></p>
</blockquote>
<p>Or perhaps get a hardware security module (or two) and store the key in there:</p>
<ul><li><a href="https://shop.nitrokey.com/shop/nkhs2-nitrokey-hsm-2-7">https://shop.nitrokey.com/shop/nkhs2-nitrokey-hsm-2-7</a></li>
</ul>
<p>If you end up losing the HSM(s), it'd be no worse than if you'd gotten rid of the private root key in the first place. The above model appears to have two-man / m-of-n rule support, and to allow units to be backed up (encrypted) to each other.</p>
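The m-of-n idea mentioned above can be illustrated with a minimal Shamir secret sharing sketch (illustrative only; real HSMs implement this internally, and any real deployment should use an audited library rather than code like this):

```python
import secrets

# All arithmetic is done in GF(P); the Curve25519 prime is a convenient choice.
P = 2**255 - 19

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which can reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Reconstruct the secret via Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(42, n=5, k=3)          # e.g. five custodians, any three suffice
assert combine(shares[:3]) == 42      # any k shares recover the secret
```

Fewer than k shares reveal essentially nothing about the secret, which is what makes this safer than handing each custodian a full copy of the key.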
<p>The HN post also has various suggestions for (open source) CA software so one doesn't have to roll (and maintain) one's own:</p>
<ul><li><a href="https://github.com/smallstep">https://github.com/smallstep</a> / <a href="https://smallstep.com/docs/step-ca/">https://smallstep.com/docs/step-ca/</a></li>
<li><a href="https://github.com/OpenVPN/easy-rsa">https://github.com/OpenVPN/easy-rsa</a></li>
<li><a href="https://hohnstaedt.de/xca/">https://hohnstaedt.de/xca/</a></li>
<li><a href="https://github.com/FiloSottile/mkcert">https://github.com/FiloSottile/mkcert</a> (good for on-one-host dev stuff)</li>
</ul>
</div>2023-12-09T13:43:09ZBy Fazal Majid on /blog/tech/MailClientsAndCreatingHTMLMailtag:CSpace:blog/tech/MailClientsAndCreatingHTMLMail:e9e7e9f0d2acb773611db78e4841fca22749ce5dFazal Majidhttps://majid.info/<div class="wikitext"><p>They need to do that for the previews anyway.</p>
</div>2023-12-01T11:52:14ZBy Arnaud Gomes on /blog/tech/FirewallsAndMACstag:CSpace:blog/tech/FirewallsAndMACs:de91b5bda3ab2bc058e1d0d348df5484695f107eArnaud Gomes<div class="wikitext"><p>Another advantage IP addresses have over MAC addresses: an interface may have any number of IP addresses and you can move them around easily (technically you can move MAC addresses around as well but there is much less tooling around, you'll probably have to roll your own). This is important in many (most?) HA implementations.</p>
<pre>
-- A
</pre>
</div>2023-11-03T07:21:05ZBy David Sowder on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:d789a3e6144a59ac50518b545e677cb7a6cc3d7cDavid Sowder<div class="wikitext"><p>Microsoft MFA uses a phone app, which can do push notifications. "Number matching" can be used to combat MFA fatigue by requiring the user to type a number (two digits in our case) from the authenticating website into the Microsoft MFA app, thus making it much more likely that the user knows which authentication they are approving.</p>
</div>2023-10-08T19:48:56ZBy Etienne Dechamps on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:c762d53c36098dc245be5b23c76480896c922e32Etienne Dechamps<div class="wikitext"><p>Some push MFA apps (e.g. GitHub, Microsoft) will ask the user to enter a code into the app, where the code is shown on the login page. This eliminates the "MFA spam" threat because the victim won't know the code (they're not the one looking at the login page, the attacker is), and so can't "accidentally" say yes to the MFA prompt.</p>
<p>Push MFA is still vulnerable to phishing though. WebAuthn/FIDO/U2F/CTAP is not vulnerable to phishing because the authenticator will not provide a signature if the domain of the requesting website doesn't match.</p>
</div>2023-10-04T19:48:16ZBy Miksa on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:81268b591339e3349fbab44d2a0ae8d1e960d990Miksa<div class="wikitext"><p>I don't use push MFA; when we deployed Microsoft MFA I chose TOTP because of familiarity with Google Authenticator. But I can think of a few scenarios in which I would have received valid surprise MFA pushes.</p>
<p>I have a virtual Windows running 24/7 at work for email and web browsing. It has Thunderbird running all the time and Edge showing my O365 calendar. Every 90 days they try to connect to their services, find that the MFA has expired, and get prompted. Just in the past few weeks both have greeted me with expired-MFA prompts when I logged in in the morning. They most likely would have generated MFA pushes outside my work hours.</p>
<p>A traditional source is of course your cell phone, which connects to email and calendar constantly. It used to be a nuisance at work that whenever you changed your password you would start getting frequent temporary account locks, because the phone and email clients running in the background would keep using your old password.</p>
</div>2023-10-02T13:57:34ZBy Chris Siebenmann on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:310255c4687b7785993b82811c78c29b8436408dChris Siebenmann<div class="wikitext"><p>In an ideal organization, 'what to do if you get an unsolicited MFA push'
would be part of the MFA training (and there would be MFA training,
rather than throwing an app and some vague instructions in people's
directions and hoping for the best). In real organizations, this is one
of the failure modes. And I agree about the security dialogs and what
they've trained people to do.</p>
</div>2023-10-01T19:22:03ZBy Ben Cotton on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:ea732a86778c725228ebbd8d17b6a31a577136b7Ben Cottonhttps://funnelfiasco.com<div class="wikitext"><p>You are correct, but</p>
<blockquote><p>In this environment, getting a surprise MFA push request (or worse, several) out of the blue means that someone else has your password, which should cause you to hit some sort of big red security problem button to trigger at least a password change.</p>
</blockquote>
<p>assumes a reasonably security-competent user. In most organizations and in most roles, I'm not sure that's something you can rely on. The average user (especially outside of IT roles) doesn't know how the MFA system works, so they won't necessarily know that this is a red flag. It doesn't help that decades of unexpected (from the casual user perspective) security dialogs have trained people to default approve.</p>
</div>2023-10-01T15:34:55ZBy Arnaud Gomes on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:c79e676ac4618d7014d15039b0d82e6ba412cfb9Arnaud Gomes<div class="wikitext"><p>To me this screams of hastily-deployed MFA that was needed to validate some kind of certification and / or pushed by a vendor, without the proper planning or procedures. </p>
<p>Reporting unexpected MFA requests requires at least somebody to report to; same for reacting to MFA rejections, which needs somebody to either look at the monitoring or set up the proper automation. These are organizational issues, not really tooling-related.</p>
<pre>
-- A
</pre>
</div>2023-10-01T11:25:26ZBy Fazal Majid on /blog/tech/MFAPushFatigueQuestionstag:CSpace:blog/tech/MFAPushFatigueQuestions:9a06d0db87fc45ab29a69e0a83a4cc4dd7701e20Fazal Majidhttps://majid.info/<div class="wikitext"><p>The only secure MFA is FIDO (and its Passkey variant). Push notification based pseudo-MFA does not authenticate the website, is trivially phishable and whenever I see it I know the organization is not serious about security.</p>
</div>2023-10-01T11:09:23ZBy Ben Hutchings on /blog/tech/MFABasicOptionsIn2023tag:CSpace:blog/tech/MFABasicOptionsIn2023:daa92918b8e15382c7b0e52c8441c7bb55cbef49Ben Hutchings<div class="wikitext"><p>All of those three are vulnerable to phishing:</p>
<ul><li>With SMS and TOTP, the one-time code can be phished along with their password</li>
<li>With push-based approval, it's not even possible for the user to tell how the request was generated</li>
</ul>
<p>I tend to think push-based approval is the <em>least</em> secure of the three. I've read several incident reports where the attacker got a user's password and then repeatedly tried to log in, spamming them with approval requests until they gave in and tapped Approve.</p>
<p>WebAuthn should be secure against phishing since the browser takes care of comparing web site identities. The need for a hardware token has been a major barrier to its use, but maybe passkeys will change that.</p>
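<p>The TOTP part of this is easy to see from the construction itself (the standard RFC 6238 recipe, shown here as a stdlib sketch): the code is purely a function of the shared secret and the clock, so a code phished from the user remains valid for the rest of its 30-second window, and a proxying phishing site just forwards it to the real login page.</p>

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

# RFC 6238 test secret: ASCII "12345678901234567890" in base32.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
# Nothing here identifies the site the user is typing the code into,
# which is exactly why TOTP can be phished and replayed in real time.
assert totp(secret, at=59) == "287082"  # matches the RFC 6238 test vector
```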
</div>2023-09-29T02:44:30ZFrom 193.219.181.219 on /blog/tech/LatencyImpactMyXExperiencetag:CSpace:blog/tech/LatencyImpactMyXExperience:d1702eb345201437ec7ca9dd487003a1ffd7b99fFrom 193.219.181.219<div class="wikitext"><p>18 ms to work over DSL still sounds pretty good. I was pretty surprised when we had to upgrade from (8 Mbps down, 0.4 Mbps up) ADSL to a fixed LTE/4G connection and the latency actually went down a little.¹</p>
<p>From what I found out, I believe it was caused by the DSL "interleaving" feature, which is supposed to make it more noise-resistant but by its nature needs to buffer a lot of data (so that it would have something to interleave). It might be that your ISP can agree to turn it off for you. I don't know if that applies to VDSL, however.</p>
<p>¹ (Though it slowly got worse over the last year, with mtr currently showing 17 ms "best" as before, but 30 ms "average". But eh, at least the speed is decent.)</p>
</div>2023-09-11T05:15:22ZBy Opk on /blog/tech/LatencyImpactMyXExperiencetag:CSpace:blog/tech/LatencyImpactMyXExperience:bdf35d9abe8cfa5973cda9ca1ce3e5a63d3bd537Opk<div class="wikitext"><p>The Internet providers always focus on the raw bandwidth figures in their marketing material. I've had to get "upgrades" reverted in the past due to increased latency making it actually worse. I'm stuck with home Internet using cable TV infrastructure as my only option which is typically worse than ADSL for latency. Despite downloads and netflix being fast, I find remote desktop software like VNC to be barely usable and I generally get complaints from others about my signal in video conferences. Adding a router with fq_codel enabled did improve things slightly. I'd recommend trying that, especially if you only see poor latency under load implying a possible bufferbloat problem.</p>
</div>2023-09-10T10:00:16ZBy Patrick on /blog/tech/LatencyImpactMyXExperiencetag:CSpace:blog/tech/LatencyImpactMyXExperience:9b5a2ffb50d3c4e7dfd8390567c822c6d13b1c10Patrick<div class="wikitext"><p>Hi Chris,</p>
<p>I can highly recommend fiber based Internet access.
My first hop latency dropped from around 20ms to around 2ms after the migration from VDSL to fiber Internet access.
I think this is mainly because digital-to-optical conversion and back is much faster than digital-to-analog transformation.
Everything feels faster/snappier with fiber based access.</p>
<p>Best Regards,
Patrick</p>
</div>2023-09-10T09:36:48ZBy Walex on /blog/tech/MFABasicOptionsIn2023tag:CSpace:blog/tech/MFABasicOptionsIn2023:db143357dd6cfcf48bb0f9895f6357f41ee9354bWalexhttp://www.sabi.co.uk/blog/20-two.html?201107#201107<div class="wikitext"><p>«abandon passwords and switch to SMS authentication only. Technically it requires me knowing my email address and having my phone, but it feels more single factor than multi factor.»</p>
<p>That I guess is "multi-step" rather than "multi-factor". Also, it doesn't really matter whether one uses the same factor (whether "known" or "had") or not; what matters is whether the security tokens are <em>independent</em>.</p>
<p><a href="http://www.sabi.co.uk/blog/20-two.html?201107#201107">http://www.sabi.co.uk/blog/20-two.html?201107#201107</a></p>
<p>Also using mobile phone SIMs for the first or second secrets is very inconvenient because losing the SIM locks you out of a lot of places.
Always use multiple phone numbers and/or multiple e-mail addresses and/or multiple FIDO etc.</p>
</div>2023-09-09T15:29:58ZBy Yildo on /blog/tech/MFABasicOptionsIn2023tag:CSpace:blog/tech/MFABasicOptionsIn2023:3868fdf89044d8877b7484bac01079d2b40059d6Yildohttps://eozygodon.com/@yildo<div class="wikitext"><p>I'm increasingly seeing non-technical sites abandon passwords and switch to SMS authentication only. Technically it requires me knowing my email address and having my phone, but it feels more single factor than multi factor.</p>
<p>I've run into this with either Hotels dot com or Booking dot com this August, and there were others</p>
</div>2023-09-07T20:16:06ZFrom 193.219.181.219 on /blog/tech/MFABasicOptionsIn2023tag:CSpace:blog/tech/MFABasicOptionsIn2023:9eaeacc65fc168d0e2d76f1a705f5136972d13d2From 193.219.181.219<div class="wikitext"><blockquote><p>(At some point this list may include WebAuthn, but right now you mostly need a hardware security key to use it on your desktop or laptop.)</p>
</blockquote>
<p>Windows has long had a software-based WebAuthn/FIDO2 token aka "platform Authenticator" as part of the OS, branded Windows Hello (no, not TPM-based, that's only used by Windows Hello for Business which is of course completely unrelated). I believe macOS now has one as well, as does Android – the main focus of the new "passkeys" trend is these OS-integrated authenticators (plus ways to get them from one device to another, such as
Bluetooth).</p>
<p>Linux is the only outlier, with a few projects that emulate FIDO tokens (e.g. via uinput or uhid) but none of them really catching on, though it seems there is an "xdg-credentials-portal" that aims to finally change that.</p>
</div>2023-09-07T04:42:32ZBy Simon on /blog/tech/TLSInternalCANameConstraintstag:CSpace:blog/tech/TLSInternalCANameConstraints:f45ef7c0e429837a4d597eac793be58da281d25cSimon<div class="wikitext"><blockquote><p>My reaction to this suggestion has traditionally been that it was extremely dangerous.</p>
</blockquote>
<p>I think this depends on where you would install such a CA.</p>
<p>If this is on machines you control then I think it's not a much increased risk. You will already have various things with similar if not higher risk: Your admin access to those system (ssh, puppet, whatever) and other things with similar impact like a repo with your configs or a shared file server with security relevant data (like home directories of privileged users). In those cases rotating a CA also shouldn't be very hard.</p>
<p>On the other hand if you expect the CA to be installed on systems where you don't have direct control, for example the laptop of a user of your services, then it's another story. There you introduce both new significant risks (impersonation of most TLS connections, even if they have nothing to do with your services) and it's hard to change the CA. Here (working) name constraints are very helpful.</p>
</div>2023-09-05T03:26:31ZBy Yildo on /blog/tech/YamlIsOkayEnoughtag:CSpace:blog/tech/YamlIsOkayEnough:f1d382f1b2019fd7a6b44bcc0fddc74000893aeeYildohttps://eozygodon.com/@yildo<div class="wikitext"><p>Yaml is pretty good. Json is a strict subset of Yaml, so if all you want is Json with comments, Yaml is a lot more widespread than Json5 for that</p>
<p>I enjoy the anchor features and the merge syntax, but most of the time you don't need it. I do wish there were a built-in solution for merging lists, because that becomes a pain point in anything built on top of Yaml like Helm</p>
<p>I definitely loathe XML. Toml is ok</p>
</div>2023-08-28T19:55:30ZBy Georg Sauthoff on /blog/tech/ContourMouseReviewtag:CSpace:blog/tech/ContourMouseReview:21c7781f4602cf9a0c24eab634ce7b8ff744ad96Georg Sauthoffhttps://gms.tf<div class="wikitext"><p>FWIW, the current Evoluent VerticalMouse D still has a dedicated third/middle button in addition to a scroll wheel.</p>
<p>The scroll wheel is placed between the first and middle button such that it can be conveniently used with the index finger.
It has distinctive click stops, i.e. it doesn't scroll by accident.
Notably, the scroll wheel can also be clicked; however, that click is registered as BTN_EXTRA, and it requires sufficient force that it isn't triggered too easily.</p>
<p>The mouse also has dedicated backward and forward navigation buttons on the left side which register as BTN_SIDE and BTN_FORWARD.</p>
<p>In contrast to most other currently available mice its main contact surface isn't soft-touch plastics and it's available in cable-only variants.</p>
</div>2023-08-26T12:57:44ZBy Miksa on /blog/tech/TLSShortCertDurationVsBlackBoxestag:CSpace:blog/tech/TLSShortCertDurationVsBlackBoxes:05639a4387f29a5f5599509423578a37e4f6cc63Miksa<div class="wikitext"><p>This summer we finally managed to implement ACME for the certificates we acquire through the Geant contract. Now that the renewals mostly don't require manual work, we also scheduled the renewals to happen 60 days before expiration. Just to avoid the problem where a server goes through its monthly maintenance window and the next day we receive the 30 day expiration notification. We still have some services where we don't know a way to swap the cert without an outage.</p>
<p>The most recent of these cases was with our Red Hat Satellite and we ended up swapping the cert one day before expiration on the next window. And of course the two coworkers who had done it the previous time and remembered the process best were on vacation.</p>
</div>2023-08-25T16:57:27ZBy Fazal Majid on /blog/tech/TLSShortCertDurationVsBlackBoxestag:CSpace:blog/tech/TLSShortCertDurationVsBlackBoxes:efa1a1676de4e1e0f45253874158ccf58e31b052Fazal Majidhttps://majid.info/<div class="wikitext"><p>Another class of such devices: printers. I wrote some Python scripts to update the LE certs on my Epson printers and it was a major pain in the backside due to their crackpot web UI design (not to mention they have completely different systems per printer line).</p>
<p><a href="https://blog.majid.info/epson-certificates/">https://blog.majid.info/epson-certificates/</a></p>
<p>There really needs to be some standardized protocol to provision certs and keys.</p>
</div>2023-08-25T07:51:04ZFrom 193.219.181.219 on /blog/tech/TLSShortCertDurationVsBlackBoxestag:CSpace:blog/tech/TLSShortCertDurationVsBlackBoxes:dc547772f646a1a123de85f5b011403446459d2cFrom 193.219.181.219<div class="wikitext"><p>BMCs are not something I'd bother getting a public webPKI certificate for either way – an internal CA is good enough, for which even 10-year certificates are still possible (all of the lifetime restrictions only apply to public CAs).</p>
<p>Especially as our iLO4 BMCs don't even support storing (and providing to the browser) an intermediate chain, so while e.g. Firefox has its own ways to cope with it, all CLI TLS clients such as ilorest will outright fail to validate the certificate. So we have to issue certificates directly from the root CA, which is only possible with internal roots.</p>
</div>2023-08-25T06:27:36ZBy Verisimilitude on /blog/tech/TLSShortCertDurationVsBlackBoxestag:CSpace:blog/tech/TLSShortCertDurationVsBlackBoxes:8f99e9a37e954f6f3236f54b47e060d7130f3d0fVerisimilitudehttp://verisimilitudes.net<div class="wikitext"><p>Rest assured, this inconvenience is intended. I'm still waiting for the authorities to deny someone a certificate for political reasons. The fact that Google is able to dictate policy like this is evidence alone of the rot.</p>
</div>2023-08-25T05:45:16ZBy Miksa on /blog/tech/CLAsImpedeContributionsIItag:CSpace:blog/tech/CLAsImpedeContributionsII:90f7a30528be1d6b59654d57099c1dd934a1af0bMiksa<div class="wikitext"><p>I suspect companies are fully aware of this issue and that is the way they like it. Outside contributors would be too big of a risk and trap for them.</p>
<p>Hashicorp is a good example. In the past weeks they've switched the license of their products like Terraform and Vault from Mozilla Public License 2.0 to Business Source License 1.1. This would probably have been impossible if the code were littered with outside contributions without an ironclad CLA. And it seems that Hashicorp didn't trust their CLA anyway: someone checked their GitHub and didn't really find contributions from anyone other than Hashicorp employees.</p>
<p>So the only option is to send a bug report that the company can reverse engineer. This makes me wonder: if you include the correct fix to the code in your report, does that leave you with a copyright claim even without a direct pull request? Does the company have to go through IBM PC-style clean-room reverse engineering? One employee reads the bug report, explains the required changes to an intermediary, and the intermediary specs the change to a coder who has never seen any bug reports.</p>
<p>A related case is all the community projects built on top of companies owned by random people. A good recent example is "ADS-B Exchange", a community alternative to FlightRadar24. Most of the volunteers contributing to ADS-B Exchange probably didn't even realize it could be sold to a private equity firm.</p>
<p>All community projects should be built on top of a foundation or association that can't be easily bought. My email address service is provided by the Finnish association IKI ry. I am one of its ~30,000 members and I have my single vote on important matters. If a company wants to buy my vote it's available; I don't know, let's say for 5 grand.</p>
</div>2023-08-23T14:49:47ZBy Chris Siebenmann on /blog/tech/CLAsImpedeContributionsIItag:CSpace:blog/tech/CLAsImpedeContributionsII:569c5b73d2e0c00de17771a4c96b0fdf4d2cc645Chris Siebenmann<div class="wikitext"><p>My view is that there are two significant differences between a CLA
and an open source 'license' (which are really copyright grants,
not contracts). First, how enforceable various open source licensing
terms are is an area of litigation, while no one doubts that signed
contracts are enforceable. Second and much more importantly, an open
source license binds the original developer as much as it binds people
distributing changes, which drastically reduces the scope for clauses
that are dangerous to you since they would also be equally dangerous to
the original developer (at least for any generally accepted open source
license). A CLA doesn't bind the original developer this way; it can be
completely asymmetric (and often is) and thus require dangerous terms
from you that don't apply to the original developer and significantly
favour them.</p>
</div>2023-08-20T12:11:20ZBy Anonymous on /blog/tech/CLAsImpedeContributionsIItag:CSpace:blog/tech/CLAsImpedeContributionsII:194e1a138170fb1afa7356574e0144d01feda6fcAnonymous<div class="wikitext"><blockquote>
<p>A Contributor License Agreement is a legal document and a legal agreement. </p>
</blockquote>
<p>An open source license is just as much a 'legal document and a legal agreement' as a CLA is. I guess the difference is not whether it's a legal document or not, but rather what it is that is 'typically' stated in an open source license versus a CLA.</p>
</div>2023-08-20T11:46:48ZBy Chris Siebenmann on /blog/tech/CLAsImpedeContributionsIItag:CSpace:blog/tech/CLAsImpedeContributionsII:f512871231c535e720d21edf744c446d808f6a05Chris Siebenmann<div class="wikitext"><p>If the project and the people involved are taking CLAs seriously, no third
party can sign a CLA for my code and the project can't accept it if they
try. The CLA is intended to get a legal agreement from the copyright
owner of the code, regardless of the license that was put on the code.</p>
</div>2023-08-20T03:25:49ZBy Nolan on /blog/tech/CLAsImpedeContributionsIItag:CSpace:blog/tech/CLAsImpedeContributionsII:ffff75ff3e83a523be9bff93f1f7182588d75a5dNolanhttps://sigbus.net<div class="wikitext"><p>CLAs are almost always a bad idea, but in your specific situation, why not post the diff to the project bug tracker tracking the specific bug, with a MIT or BSD or some other very permissive license, and let anyone else who is comfortable or able to sign their CLA submit it?</p>
</div>2023-08-20T02:01:50ZBy Michael on /blog/tech/ContourMouseReviewtag:CSpace:blog/tech/ContourMouseReview:969ccc728d70e88a8fa873103adafaaa46b69d02Michael<div class="wikitext"><p>Yes, at least one version with a slightly-clicky scrollwheel came in wired and wireless versions. Curiously, the wireless version uses USB (I think it's micro-USB on the mouse side, but haven't double-checked) for charging <em>and</em> can also use the same connection for talking to the computer instead of through the included USB radio transceiver dongle <em>and</em> they included a quite flexible USB cable. So the wireless version can effectively serve as <em>both</em> a wired and a wireless mouse, depending on preference. Just another little detail that you don't see often; usually it's either/or, but this one is truly both. The only thing still missing is a switch to turn off the mouse's radio.</p>
</div>2023-08-17T07:34:40ZBy Andrew on /blog/tech/FullLegalNamesProblemstag:CSpace:blog/tech/FullLegalNamesProblems:7b5e6d3fd5ba8cd53db5fc8dc3d822cb7fed84f5Andrew<div class="wikitext"><p>@Yildo: Yes, it's an ICAO 9303 requirement. "Latin-alphabet characters, i.e. A to Z and a to z, and Arabic numerals, i.e. 1234567890 shall be used to represent data in the VIZ. [...] When mandatory data elements are in a language that does not use the Latin alphabet, a transcription or transliteration shall also be provided."</p>
<p>Diacritics are allowed, as are letters like ß, Ð, and þ, so it's not an <em>ASCII</em> requirement, but it is a Latin one.</p>
</div>2023-08-15T19:54:46ZBy Yildo on /blog/tech/FullLegalNamesProblemstag:CSpace:blog/tech/FullLegalNamesProblems:7c9f8bb211ce9b018b73cda58b0737147a58a2d8Yildo<div class="wikitext"><p>@Andrew: Is that a scannable passport requirement?</p>
<p>Googling I saw ASCII Latin names on Japanese and Russian passport examples, but not on Vietnamese or Turkish. The latter are in a Latin alphabet to start, but with a set of diacritics that are not in the basic or extended ASCII table</p>
<p>Insisting on using those names is not a friendly choice, though. They are imposed, and they are likely to lend themselves to a different pronunciation from the source name</p>
</div>2023-08-14T07:33:29ZBy Andrew on /blog/tech/FullLegalNamesProblemstag:CSpace:blog/tech/FullLegalNamesProblems:c22e03725d2e48a8ef91a2de49108d10955ede8cAndrew<div class="wikitext"><blockquote><p>If you're asking for 'legal name, but in the Latin alphabet', well, that's certainly something.</p>
</blockquote>
<p>If they have a passport, they do in fact have one of those. Not that you should be asking for it, but it's an interesting note.</p>
</div>2023-08-14T00:36:40ZBy andyjpb on /blog/tech/RPCSystemsGoodVersusBasictag:CSpace:blog/tech/RPCSystemsGoodVersusBasic:39603a79c28bbb91846e09c6393c3d8828c392d7andyjpbhttp://www.ashurst.eu.org/<div class="wikitext"><p>tef seems to agree with you:</p>
<p><a href="https://cohost.org/tef/post/1877226-why-i-think-rpc-suck">https://cohost.org/tef/post/1877226-why-i-think-rpc-suck</a></p>
<p><a href="https://github.com/tef/trpc">https://github.com/tef/trpc</a></p>
<p>(and, FWIW, so do I ;-) )</p>
</div>2023-08-10T16:21:07ZBy sapphirepaw on /blog/tech/CheapOnlyWhileThereIsVolumetag:CSpace:blog/tech/CheapOnlyWhileThereIsVolume:25f38a3926d1e0c40c5922d132e716cf7f2cc375sapphirepawhttps://www.sapphirepaw.org/<div class="wikitext"><p>Parallel thought: the unreasonable cheapness of a thing can reveal where the volume is, and self-perpetuate that. I would like a higher-resolution display but 16:9 and "full HD" are so <em>excessively</em> popular that manufacturers are offering 27-inch 1920x1080 displays. Or even larger.</p>
<p>Meanwhile, there's almost no such thing as a standalone 5K display, while Apple could put them in 27" iMacs just fine.</p>
</div>2023-08-01T15:02:13ZBy Chris Done on /blog/tech/HTTPUniversalDefaultProtocoltag:CSpace:blog/tech/HTTPUniversalDefaultProtocol:ab64f12b7a0a9ef82343d8010ad9fc11a338d906Chris Donehttps://chrisdone.com<div class="wikitext"><p>It’s certainly my go-to. </p>
<p>Another aspect is that HTTPS takes care of encryption for you, it’s transparent to most devs.</p>
</div>2023-07-25T10:22:45ZBy Chris Siebenmann on /blog/tech/HTTPUniversalDefaultProtocoltag:CSpace:blog/tech/HTTPUniversalDefaultProtocol:4187c03610e2e7ae32178f053d5af77879d0e2d4Chris Siebenmann<div class="wikitext"><p>I don't think that firewalls are a large factor in the use of
HTTP as a transport mechanism. Many of the things that use HTTP
this way don't normally run on port 80 (or port 443 when they use
TLS), and they often are only used inside a network perimeter.
I'm thinking here of software like <a href="https://github.com/prometheus/node_exporter">the Prometheus host agent</a> or <a href="https://github.com/prometheus/blackbox_exporter">its Blackbox probe
agent</a>, both of which
use HTTP but only sort of incidentally, and which you definitely don't
want to expose to the world.</p>
<p>In a past life (and even for some software today, such as ClamAV), these
things would have used a custom protocol of some sort instead of HTTP.
The observable effects would be about the same.</p>
</div>2023-07-20T19:26:26ZBy orev on /blog/tech/HTTPUniversalDefaultProtocoltag:CSpace:blog/tech/HTTPUniversalDefaultProtocol:4f37e217441629070beca1adbbf2c08343be1749orev<div class="wikitext"><p>I believe that one of the main drivers of using HTTP for everything is that firewalls don’t block it. In the old days, where every app had its own network port, it became possible for the security team to control access to everything by blocking ports. Since HTTP was so useful for web sites, it had to be left open. Then everyone started using it because it allowed circumvention of all the port blocking. Now we need to implement HTTP firewalls that inspect URLs to try to block apps. It’s a slow moving arms race between businesses and security teams.</p>
</div>2023-07-20T18:08:48ZBy Ben Cotton on /blog/tech/SocialMediaPostsNotSimpletag:CSpace:blog/tech/SocialMediaPostsNotSimple:6c6e4660c4a5516ba4558d7eeb5fd53c5f831467Ben Cottonhttps://funnelfiasco.com<div class="wikitext"><p>When I was still maintaining a Perl CLI client for Twitter, I occasionally needed to look at the raw json that the Twitter API sends. It is very rich, especially when a post contains an image or video.</p>
</div>2023-07-17T13:13:15ZBy sapphirepaw on /blog/tech/Windows10MyViewstag:CSpace:blog/tech/Windows10MyViews:c713b57969cfab67d9bc1972f1d2e7f3136ebe6csapphirepawhttps://www.sapphirepaw.org/<div class="wikitext"><p>It's been 7 years and this post has aged pretty well.</p>
<p>Finish setting up your PC, switch to Edge with Bing, oh look the Office 365 placeholder "app" has mysteriously reappeared! How many times do I need to tell them "no," uninstall crapware, and so forth?</p>
<p>I built my PC in 2012 when Windows 7 was an obviously good choice, but Microsoft has made things so awful that "Mac mini" is a serious contender for my next desktop? I will have to choose among 3 bad options now.</p>
</div>2023-07-13T13:55:23ZBy stump on /blog/tech/TLSCSRsAreADefaulttag:CSpace:blog/tech/TLSCSRsAreADefault:22d5575a6d41b01ba37b8701f543ab633e7a2050stumphttps://stump.io/<div class="wikitext"><p>Meant to also include above:</p>
<p>Even for those who are already aware of that property of CSRs, CSRs are an existing, well-supported mechanism for making those attestations, so continuing to use CSRs removes the need to come up with a new way for doing that, with all of the new tooling-vulnerability/bug potential that this would imply.</p>
</div>2023-07-03T19:01:46ZBy stump on /blog/tech/TLSCSRsAreADefaulttag:CSpace:blog/tech/TLSCSRsAreADefault:7d9e546980ac7c0e5aedbc32d7e18c55e8405109stumphttps://stump.io/<div class="wikitext"><p>There's another reason.</p>
<p>CSRs include a signature by the key they are requesting a cert for, and it looks to me like the ACME protocol specifically requires the CSR to request exactly the same set of identifiers that the signed certificate will be valid for.</p>
<p>Being able to provide one demonstrates to the CA that whoever/whatever possesses the private key did want, at some point, to obtain a certificate for that particular set of identifiers for that key.</p>
<p>Similarly, when a CA would ask for some value to be included in the CSR but wouldn't give you that value until sometime in the middle of their certificate issuance workflow, your response shows the CA that you do have the ability to sign things with the private key at that time.</p>
</div>2023-07-03T18:50:37ZBy David Marceau on /blog/tech/RISCVServersNotSoontag:CSpace:blog/tech/RISCVServersNotSoon:192bbc0b83a81cf35f1461167ceedd7601b46ef2David Marceauhttps://www.linkedin.com/in/david-marceau-509876251/<div class="wikitext"><p>I beg to differ.</p>
<p>RISC-V Spotlight: Ventana Brings RISC-V to Data Center with Veyron V1 - Balaji Baktha, Ventana
<a href="https://www.youtube.com/watch?v=mXeW4fFxzC8">https://www.youtube.com/watch?v=mXeW4fFxzC8</a>
<a href="https://www.ventanamicro.com/ventana-introduces-veyron-worlds-first-data-center-class-risc-v-cpu-product-family/">https://www.ventanamicro.com/ventana-introduces-veyron-worlds-first-data-center-class-risc-v-cpu-product-family/</a></p>
<p>High-Performance RISC-V Processor for Computation Acceleration and Server - Wei-han Lien
<a href="https://youtu.be/Od-IDrBRD-k">https://youtu.be/Od-IDrBRD-k</a></p>
<p>Intel Horse Creek board
<a href="https://www.sifive.com/boards/hifive-pro-p550">https://www.sifive.com/boards/hifive-pro-p550</a></p>
<p><a href="https://www.cnx-software.com/2022/12/14/sipeed-lm4a-t-head-th1520-risc-v-module-to-power-raspberry-pi-4-competitor-and-cluster-board/">https://www.cnx-software.com/2022/12/14/sipeed-lm4a-t-head-th1520-risc-v-module-to-power-raspberry-pi-4-competitor-and-cluster-board/</a></p>
<p><a href="https://liliputing.com/mips-announces-its-first-risc-v-chip-designs-are-now-available-for-licensing/">https://liliputing.com/mips-announces-its-first-risc-v-chip-designs-are-now-available-for-licensing/</a></p>
<p>Linux on RISC-V — software ecosystem update | Wei Fu | Red Hat Software (Beijing)
Wei Fu mentioned another higher-end data-center offering coming out of Sophgo. I don't even know what the SoC is called yet, but it's T-Head (Alibaba) based.
<a href="https://www.youtube.com/watch?v=it12BLBn-9Q&t=1062s">https://www.youtube.com/watch?v=it12BLBn-9Q&t=1062s</a></p>
<p>VisionFive 2 - 3D GPU を統合した世界初の高性能 RISC-V SBC | 木村 優之 Masayuki Kimura / StarFive Technology: <a href="https://www.youtube.com/watch?v=jGp6F2X-sl0">https://www.youtube.com/watch?v=jGp6F2X-sl0</a></p>
<p>There are users out there with real SBCs. Sure, they're not entirely polished and finished for desktop usage, but there's enough to make a small backend server with one. It isn't entirely optimized yet, but it's functional enough, and the ecosystem really is there with all the known libraries and languages.
<a href="https://rvspace.org/en/project/VisionFive2_Debian_Wiki_202306_Release">https://rvspace.org/en/project/VisionFive2_Debian_Wiki_202306_Release</a>
<a href="https://forum.rvspace.org/c/visionfive-2/19">https://forum.rvspace.org/c/visionfive-2/19</a></p>
</div>2023-06-23T13:04:59ZBy Yildo on /blog/tech/GabonCountryDNSEvaporationtag:CSpace:blog/tech/GabonCountryDNSEvaporation:e2b34e4292397ca090f216cbcc0a9d0b6b7fb231Yildohttps://eozygodon.com/@yildo<div class="wikitext"><p>Trying to think of cool .ga domains:</p>
<ul><li>rutaba.ga is registered as a redirect for someone's French-language blog</li>
<li>babaya.ga does not resolve</li>
<li>ba.ga and chipsba.ga do not resolve</li>
<li>ga.ga does not resolve</li>
<li>ladyga.ga is a cybersquatter</li>
<li>atlanta.ga is 403 Forbidden</li>
</ul>
</div>2023-06-18T22:32:02ZBy Walex on /blog/tech/ProtocolsAndEncryptiontag:CSpace:blog/tech/ProtocolsAndEncryption:4840202c405388c8c7103adfa7f4c5c441bec9d4Walexhttp://www.sabi.co.uk/Notes/linuxFS.html#fsHintsNFS<div class="wikitext"><p>Overall I have quite a different view from our blogger on these issues, to summarize:</p>
<ul><li>There is a lot of difference between encryption for authentication (where usually encryption costs don't matter much) and for confidentiality.<p>
</li>
<li>There is no practical alternative to Kerberos for authentication.<p>
</li>
<li>Kerberos for NFS (and SMB/CIFS, AFS, and other filesystems) authentication works pretty well and has large benefits, such as the ability to enforce user/group access control <em>at the server</em>, plus the other advantages of Kerberos such as the ability to forward credentials.<p>
</li>
<li>Kerberos for NFS data confidentiality was a mistake, but for some people it was better-than-nothing.<p>
</li>
<li>Kerberos used to be annoying to set up for the Linux in-kernel driver, but fortunately it is quite easy to set up with the NFS Ganesha implementation, especially with NFS4.<p>
</li>
<li>IPSEC for data confidentiality works very well (and in part for host authentication), but it has come into widespread usability only with AES+GCM special instructions.<p>
</li>
<li>Before the AES and AES+GCM instructions, not only was encryption quite expensive and mostly usable only for authentication, it was also subject to extensive government suppression that made it risky. Hence the prevalence of special-case encryption in off-mainstream tools (e.g. Linux and SSH), and mostly for authentication anyhow. SSL was itself initially off-mainstream until it came to the attention of the authorities.<p>
</li>
<li>Regardless of speed and government suppression, the big problem with any encryption scheme, whether for authentication or confidentiality, is key distribution. The popularity of SSH seems to me largely based not only on its relative initial obscurity, compared to SSL too, but also on its not being associated with any kind of built-in PKI (which is also a significant risk in the hands of the sillies, but also makes it much easier to get into). That is why I think SSH (despite its terrible misdesigns) became far more popular than TELNET+SSL or TELNET+IPSEC, to the ridiculous point that IP-over-SSH is probably far more popular than IPSEC itself.<p>
</li>
<li>The best chance for IPSEC is accordingly the "strongSwan" implementation, which can use existing public and private SSH key pairs for authentication (AES+GCM with AES-NI for confidentiality) and thus is quite trivial to set up.</li>
</ul>
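<p>(As a rough sketch of how little configuration that can take: this is approximately what a transport-mode swanctl.conf for strongSwan might look like with raw public-key authentication. The addresses, key file names, and connection names are placeholders, and the exact option names should be checked against the strongSwan documentation.)</p>

```
# /etc/swanctl/swanctl.conf -- ESP in transport mode between two hosts,
# authenticating with raw public keys (much like SSH key pairs).
# Addresses and key file names below are placeholders.
connections {
    nfs-peer {
        local_addrs  = 192.0.2.10
        remote_addrs = 192.0.2.20
        local {
            auth = pubkey
            pubkeys = local-key.pub
        }
        remote {
            auth = pubkey
            pubkeys = peer-key.pub
        }
        children {
            nfs {
                mode = transport
                esp_proposals = aes128gcm16
                start_action = trap   # set up the SA on first matching traffic
            }
        }
    }
}
```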
</div>2023-05-25T17:16:58ZBy mappu on /blog/tech/StreamProtocolsAndEncryptiontag:CSpace:blog/tech/StreamProtocolsAndEncryption:c3ee00b31b4c36d3a5629f08c7b5660b54596794mappu<div class="wikitext"><p>DTLS had some solution for this, but ISTR it became really unpopular after Heartbleed.</p>
</div>2023-05-25T02:16:40ZBy Chris Siebenmann on /blog/tech/ProtocolsAndEncryptiontag:CSpace:blog/tech/ProtocolsAndEncryption:990840e4003a3c09dae7f11384e06d411df026e9Chris Siebenmann<div class="wikitext"><p>The unencrypted IMAP and SMTP protocols are both stream focused protocols.
Each starts with a stream setup phase (IMAP login, SMTP EHLO, etc)
and proceeds through a series of steps where the state of the stream
provides critical context. In both of them, it's clear what happens
if the stream is broken for some reason; you have to start over with
another stream setup and re-establish the context. NFS is not like this;
the TCP stream is an implementation detail of the transport mechanism,
not a core element of the protocol (and historically NFS was transported
over UDP first). I believe that NFS clients typically use one TCP stream
to transport all RPC for a particular server, but they aren't required to.
If a NFS TCP connection breaks, re-establishment is supposed to be more
or less transparent to the RPC layer, and the stream carries little or no
context for the RPC operations.</p>
<p>It's my view that this difference matters. For example, you can argue
that the correct thing to encrypt is NFS RPC operations and replies (and
then continue to transport them over unencrypted TCP), whereas encrypting
only the IMAP or SMTP commands and replies in an unencrypted TCP stream
is relatively obviously crazy, partly because the overall integrity of
the TCP stream is critical for SMTP and IMAP (since all IMAP and SMTP
commands are issued in a context established by the state of the stream;
if you can replay or splice in operations you can damage or destroy that
state and cause valid commands from the client to go wrong, so the state
must be protected).</p>
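<p>(A toy illustration of that last point, deliberately not any real protocol: in a stateful command stream, a spliced-in but individually valid command changes what a later, perfectly legitimate command does.)</p>

```python
# Toy stateful "protocol" sketch: each command's meaning depends on
# state set up by earlier commands, so an attacker who can splice
# commands into the stream can redirect a later valid command.

class ToySession:
    """A session where DELETE acts on whatever mailbox was last SELECTed."""

    def __init__(self):
        self.selected = None
        self.mailboxes = {"inbox": ["mail1"], "archive": ["mail2"]}

    def handle(self, command):
        verb, _, arg = command.partition(" ")
        if verb == "SELECT":
            self.selected = arg
            return f"OK selected {arg}"
        if verb == "DELETE":
            # Deletes the *currently selected* mailbox -- context-dependent.
            self.mailboxes.pop(self.selected, None)
            return f"OK deleted {self.selected}"
        return "BAD"

# The client's intended stream: select inbox, delete inbox.
honest = ToySession()
for cmd in ["SELECT inbox", "DELETE"]:
    honest.handle(cmd)

# The same valid DELETE, with one SELECT spliced in by an attacker,
# destroys a different mailbox -- which is why the integrity of the
# whole stream, not just of each command, has to be protected.
attacked = ToySession()
for cmd in ["SELECT inbox", "SELECT archive", "DELETE"]:
    attacked.handle(cmd)
```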
<p>(I'd argue that the overall integrity of the sequence of NFS RPCs and
replies is also of high importance, but I'm not certain I'd win that
argument and I'd expect people to try to put together clever RPC level
encryption schemes to achieve it instead of encrypting the entire stream.)</p>
</div>2023-05-24T17:37:20ZFrom 193.219.181.219 on /blog/tech/ProtocolsAndEncryptiontag:CSpace:blog/tech/ProtocolsAndEncryption:59a46443d1f00e78f6e425db25a2dcdb5d0a46a8From 193.219.181.219<div class="wikitext"><blockquote><p>while encrypted NFS with Kerberos is its own bespoke, unique cryptographic protocol and implementation.</p>
</blockquote>
<p>I suppose that's true; although Kerberos encryption <em>in general</em> is also reasonably hardened over the years (with MS Active Directory being a heavy user; e.g. all AD LDAP traffic is Kerberos-encrypted instead of using TLS), but unlike TLS it indeed needs to be custom-integrated into each protocol, e.g. with SunRPC there is certainly a downside that it only encrypts the RPC call payloads but not headers.</p>
<p>(Though in practice for NFSv4, the only visible RPC operation is "COMPOUND", with the real ops inside the secure payload so it doesn't reveal anything really, but I'm not sure whether that's "secure by accident" or "secure by design".)</p>
<p>Similarly with SMBv3, the custom encryption feature does not protect authentication (it uses the auth mechanism to derive keying), so while it is fine with Kerberos, it really does not help much when used with NTLM... (And SMB-over-QUIC is apparently a thing now, but of course only for those who pay for Azure.)</p>
<blockquote><p>One part of my answer is 'agility'. SMTP was able to add TLS because it could add a 'STARTTLS' command to the protocol and get people to use it; IMAP was able to add TLS partly because it could also add a 'STARTTLS' command and partly because it was able to add an entire new TCP port that was 'IMAP over TLS'. As a practical matter, NFS has never had this agility; the existing NFS protocols had no easy place to add the equivalent of EHLO and STARTTLS</p>
</blockquote>
<p>I'm not sure that's right, exactly; for years NFS has had the <a href="https://datatracker.ietf.org/doc/html/rfc2623#section-2.3.1">"NULL" RPC function</a> both to negotiate between existing security mechanisms (if you don't specify the correct one at mount time), and to <a href="https://www.rfc-editor.org/rfc/rfc2203.html#section-5.2.2">set up a GSS context for Kerberos</a> (as Kerberos itself is already not completely stateless if you want its integrity or encryption services).</p>
<p>So it appears that's exactly how they implemented RPC-TLS as well – essentially adding a <a href="https://datatracker.ietf.org/doc/html/draft-ietf-nfsv4-rpc-tls-09#section_0A03673B-14BA-4228-8A8A-F76AA318CA73">"STARTTLS" pseudo-mechanism</a> to the same NULL operation.</p>
<blockquote><p>and the IP ports to use were either fixed or found in arcane ways that made it hard to add a new port for an all-TLS version.</p>
</blockquote>
<p>For NFSv4 (which only has a single well-known port) I believe it certainly could've been done the same way as with other TLS-ified protocols, e.g. `mount -o tls` could have implied `-o port=XYZ`.</p>
<p>For NFSv2/3, where portmapper was a thing, I suppose it could've been achieved by defining NFS-over-TLS as a new RPC "program"?</p>
<blockquote><p>This means that a straightforward version of 'NFS over TLS' is encrypting the (TCP) transport stream, not 'NFS' as such. There are similar protocol challenges with DNS over TLS.</p>
</blockquote>
<p>This is really the same with all protocols, though. IMAPS doesn't change how IMAP works either, it only encrypts the underlying stream; on the other hand, DNS-over-TCP has always been a thing and putting it inside a TLS tunnel was possible without much change in the way it works.</p>
<blockquote><p>The deepest reason I see is that we never successfully created a generic 'random TCP streams over encryption' system that could be applied to encrypt something without the cooperation of the protocol. People sort of tried in the form of IPsec, but for various reasons IPsec has not caught on and isn't considered desirable today.</p>
</blockquote>
<p>From what I know, IPsec in the form of IKEv2 is quite alright security-wise (a bit less hassle than IKEv1 was, and many of the proprietary tweaks to IKEv1 such as Cisco-style authentication are now built-into IKEv2), though it's still annoying to set up compared to e.g. WireGuard.</p>
<p>Though, I believe much of its apparent complexity comes from the fact that</p>
<p>"as opposed to simply specifying 'this traffic should be encrypted' (a feature that IPsec did support)"</p>
<p>is not merely "possible" but baked <em>hard</em> into Linux IPsec implementations, to the point that it makes simple "VPN-like" usage annoyingly difficult. (Only recently did Linux finally add virtual "xfrmi" interfaces...) If only strongSwan could be asked to give a userspace tun interface like normal.</p>
</div>2023-05-24T06:53:09ZBy Chris Siebenmann on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:4139046442d2b604ebf8c154c264061c6e15a438Chris Siebenmann<div class="wikitext"><p>This is a good question (set of questions) and my answers are sufficiently
long that I put them in an entry, <a href="https://utcc.utoronto.ca/~cks/space/blog/tech/ProtocolsAndEncryption">ProtocolsAndEncryption</a>. The short
version is that NFS never adopted TLS (although that's changing now)
and I'm not enthused about bespoke cryptographic protocols because
their track record is generally pretty bad.</p>
<p>One reason that NFS wasn't and isn't used anywhere near as much in the
desktop/laptop space is that Unix fell out of favour there. The dominant
desktop and laptop systems talk SMB/CIFS and not NFS, and SMB was rather
earlier to distrusting the client machine than NFS was. NFS also spent
a long time with an assortment of reliability issues in anything that
wasn't an excellent network environment (issues it still has to some
degree today in NFS v3; you can still get stuck locks).</p>
</div>2023-05-24T02:28:31ZFrom 193.219.181.219 on /blog/tech/NFSEncryptionOptionstag:CSpace:blog/tech/NFSEncryptionOptions:f5a31fb7afaed6643ef73827508d54fb5b7ee0a3From 193.219.181.219<div class="wikitext"><blockquote><p>Generally all those options (both TLS and VPN) will likely have performance far better than Kerberos</p>
</blockquote>
<p>(On that note, Microsoft has probably learned the same lesson, as they traditionally have been using Kerberos encryption for LDAP and all the various DCE-RPC stuff that was implemented in 2000s, but when they finally added encryption to SMBv3 they deliberately chose to <em>not</em> do the same – Kerberos in SMBv3 only performs authentication and key-exchange, while bulk encryption is done at the SMB protocol layer in a way that's easier to offload to hardware.)</p>
</div>2023-05-23T05:05:26ZFrom 193.219.181.219 on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:e2f9ecf36d5223b23abf4b4da7a9dcae9e171049From 193.219.181.219<div class="wikitext"><blockquote><p>My short version is that IPSec or any VPN technology is probably what you want. In a Linux environment today, I would use Wireguard. IPSec will let you encrypt only NFS traffic with native features; for other things, you'd encrypt only NFS traffic by giving the NFS servers (and clients) private 'inside' IPs and using them for NFS mounts and permissions. An extremely discount version could probably be built using SSH connection tunneling.</p>
</blockquote>
<p>This makes me wonder, what is it about file-access protocols that they fundamentally "have to" go through a VPN or be tunneled through SSH, in your opinion, instead of using protocol-integrated security like everything else can? (This is in context of workstation-to-server, not server-to-server.)</p>
<p>That is, why we use SSH-over-Internet instead of Telnet-over-VPN, or for example IMAPS-over-Internet instead of IMAP-over-VPN, trusting their built-in encryption and authentication, but reject the same thing in NFS or SMB?</p>
<p>(This is broadly related to "[...] in the individual laptop and desktop space. But very few people are trying to use NFS in that environment (and for good reason); most commonly people use SMB/CIFS there"; you never mentioned what specific reason it was, though.)</p>
</div>2023-05-23T04:52:58ZFrom 193.219.181.219 on /blog/tech/NFSEncryptionOptionstag:CSpace:blog/tech/NFSEncryptionOptions:05f2504f65ba97257681049651290cdd212430c7From 193.219.181.219<div class="wikitext"><p>The third option is NFS-over-TLS or "RPC-TLS", initial support for which has <a href="https://git.kernel.org/linus/b3cbf98e2fdf3cb147a95161560cd25987284330">just</a> been merged into Linux a few weeks ago (I believe FreeBSD has already had it for a little longer); you could probably test it with mainline and <a href="https://github.com/oracle/ktls-utils">ktls-utils</a>, and the latest nfs-utils as well.</p>
<p>Either sec=sys over mutually authenticated TLS (with server and client certificates) or authentication-only sec=krb5 (without Kerberos-level integrity or encryption, leaving that to TLS) over normal TLS would probably be the most useful combinations, though you could use it whichever way.</p>
<p>Generally all those options (both TLS and VPN) will likely have performance <em>far better</em> than Kerberos; at least in my tests sec=krb5i-over-IPsec seems to offer 2x-3x the performance of sec=krb5p. I'm not entirely sure why; perhaps the way Kerberos uses AES-CTS doesn't easily lend itself to offloading (or perhaps just because AES-CTS is not AES-GCM, and it's the latter that CPUs really like).</p>
</div>2023-05-23T04:45:13ZBy Chris Siebenmann on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:3c947204f760c4976eeb2688667af5d061167ad3Chris Siebenmann<div class="wikitext"><p>My short version is that IPSec or any VPN technology is probably what you
want. In a Linux environment today, I would use Wireguard. IPSec will let
you encrypt only NFS traffic with native features; for other things, you'd
encrypt only NFS traffic by giving the NFS servers (and clients) private
'inside' IPs and using them for NFS mounts and permissions. An extremely
discount version could probably be built using SSH connection tunneling.</p>
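<p>(A sketch of what that discount SSH version might look like, with placeholder host names; NFSv4 makes this feasible because it needs only the single port 2049.)</p>

```shell
# Forward a local port to the NFS server's port 2049 through a
# gateway host ("gw" and "nfs-server" are placeholder names).
ssh -f -N -L 3049:nfs-server:2049 user@gw

# Mount through the tunnel; NFSv4 needs only this one port.
mount -t nfs4 -o port=3049 localhost:/export /mnt/export
```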
<p>(Wireguard reportedly can have very good performance and high bandwidth,
but I haven't tested this. Since encryption is encryption, you're
probably not worse off performance wise than you would be with fully
encrypted Kerberos NFS.)</p>
<p>My view on people who need extremely low latency and extremely high
bandwidth, beyond what these solutions can provide, is that they need
to build physically secure networks for their NFS traffic.</p>
</div>2023-05-22T22:22:58ZBy Anonymous on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:d488ab3c5e14a32a44318364752d32a51e6ca1a3Anonymous<div class="wikitext"><blockquote>
<p>In a modern context, there are a lot of ways of securing and encrypting NFS traffic itself </p>
</blockquote>
<p>< - snip - ></p>
<blockquote><p>If you need to secure NFS against an untrusted network, you have plenty of better options than Kerberos</p>
</blockquote>
<p>Apparently, I have not been keeping up with modern technology. I would love to read a post here about those better alternatives.</p>
</div>2023-05-22T17:46:27ZBy Chris Siebenmann on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:76764f395ed1dff0f69e0532c41706942aa38f33Chris Siebenmann<div class="wikitext"><p>In a modern context, there are a lot of ways of securing and encrypting
NFS traffic itself; as a side effect, many of these ways will also
authenticate the machines involved. If you need to secure NFS against
an untrusted network, you have plenty of better options than Kerberos
('better' in the sense that they have fewer drastic side effects and
dependencies).</p>
</div>2023-05-22T17:13:13ZBy Anonymous on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:c0a1dacdb7e7312e084336954ae4384adabd0a10Anonymous<div class="wikitext"><p>A real benefit of NFSv4 with Kerberos that you seem to ignore is that you can encrypt the traffic; without Kerberos (and with NFSv3) all traffic - including but not limited to passwords - is sent as readable plain text.</p>
</div>2023-05-22T11:38:09ZFrom 193.219.181.219 on /blog/tech/NFSVsNFSWithKerberostag:CSpace:blog/tech/NFSVsNFSWithKerberos:eba1f6981fcd05c0178c9fa8b5c5a811df9833bdFrom 193.219.181.219<div class="wikitext"><blockquote><p>But very few people are trying to use NFS in that environment (and for good reason); most commonly people use SMB/CIFS there. </p>
</blockquote>
<p>I think this needs more elaboration.</p>
<p>I wouldn't say NFS is great over WAN, but it doesn't seem any worse than SMB if both are set up to use the same Kerberos authentication, especially with NFSv4 running entirely over the single TCP port 2049 (no sideband protocols). Locking works differently in NFSv4, with "leases" (which I believe are inspired by SMBv2/3).</p>
<blockquote><p>that this form of NFS with Kerberos requires you to give up essentially all unattended operations that NFS clients might perform on behalf of users. Crontab entries, web CGIs or long running web server processes</p>
</blockquote>
<p>There's a middle ground to this – it requires you to give up operations on <em>arbitrary</em> users, but it's always possible to have <em>specific</em> users store their credentials on the system. For userspace tools with MIT Kerberos libraries there are at least five ways to set up a cronjob that authenticates on your behalf using your stored credentials (keytab), and half of them are also applicable to kerberized NFS. (It's really not a new thing; other sites and universities have been doing it.)</p>
<p>So although you can't have a webapp that fully bypasses access permissions, you can certainly have cronjobs that run indefinitely if they're set up by their owners.</p>
<p>(I've read about some sites having a system that automatically creates "batch" credentials instead of the user having to store their own; IIRC Rutgers had such a system.)</p>
</div>2023-05-22T05:27:06ZBy Ray on /blog/tech/PeopleDontPatchtag:CSpace:blog/tech/PeopleDontPatch:a39035956e5de643f1c205ba7a3957827ebb1b17Ray<div class="wikitext"><p>The problem is, patches usually break things nowadays. And for some reason, companies slip in new features, removal of old features, and UI changes that break people's workflow into a 'patch'. They can't be bothered to make a full new version as a separate thing anymore.</p>
<p>In addition to all that mess, applying updates usually means losing all state. Which in the old days, ok, save your files first, then run updates, then you can reload them just as they were. But a lot of modern software doesn't really have any way to save its state at all.</p>
<p>So much is feed-based, context-based, etc. You can reload the same URL, but it will be showing completely different stuff when you do.</p>
<p>And as for not doing the major upgrades - people sticking with things that aren't getting security updates anymore - well, that's the same. The upgrades break everything. There's no backward compatibility.</p>
<p>All of those things (making patches that break things, making software with state that can't be restored, breaking backward compatibility) are really our failings as software developers. It's not the fault of the people who quite reasonably choose to not update.</p>
<p>If people don't apply patches, it's easy to blame them, insult them, whatever, but really it's our fault. If upgrading didn't break things 9 out of 10 times, then maybe people wouldn't think twice about it.</p>
<p>We really have to get a whole lot better about how we make updates if we want people to be willing to apply them. The current state of the art where applying updates is at least as risky as whatever they're trying to protect us against, if not more risky, and they quite often result in at least a lost day of productivity, that just ain't right. We have to do better.</p>
</div>2023-05-21T02:56:19ZBy elviejo79 on /blog/tech/MailingListsVsForumstag:CSpace:blog/tech/MailingListsVsForums:116e2882982106c07327b7bf8e6ef62176adb15eelviejo79https://elviejo79.github.io<div class="wikitext"><p>I agree with you that a forum is the best way to answer questions. And proof of that is stackoverflow.com, which long ago was described as a forum where questions are the threads, with two levels of nesting: answers, and comments on the answers.</p>
<p>However, in your post you put forums and chats (Discord, Slack) in the same category, and they are not.</p>
<p>A forum encourages asynchronous communication whereas a chat encourages synchronous.
So a chat seems more dynamic but fragments our attention with constant interruptions.
Plus in a chat, there are threads, but the default mode is chronological order.</p>
</div>2023-05-13T10:34:32ZFrom 193.219.181.219 on /blog/tech/MailingListsVsForumstag:CSpace:blog/tech/MailingListsVsForums:0d061929e3ee174259012f720f1010b09b892c5eFrom 193.219.181.219<div class="wikitext"><p>Most of my list reading nowadays goes through the NNTP gateways of Gmane and Lore (LKML), and even though it looks more or less the same as an IMAP folder (and lacks the convenience of IMAP), I somehow feel less guilty pressing "Mark all as read" on an NNTP group than a mail folder.</p>
</div>2023-05-12T12:15:08ZBy Ian Z aka nobrowser on /blog/tech/MailingListsVsForumstag:CSpace:blog/tech/MailingListsVsForums:a4db04819305ad75304d65ca76a6e31bfeea4f14Ian Z aka nobrowser<div class="wikitext"><p>Even on forums, there is always a "General" or "Other" category, and that's where most posts will end up. So that makes filtering by category largely useless, in my experience.</p>
<p>As for filtering by thread, all forum interfaces I have seen repeat the same fatal flaw: I have to "open" or "read" a thread at least once before I can mute it. Compare that with reading a mailing list in mutt where you can just go through the thread list (hitting only one key per thread!) and mute by the subject.</p>
<p>I agree that net news was the best discussion medium ever, and as far as I know there is no technical hurdle to adopting it now. I don't know how good the news reader support in mutt / neomutt is, though, and all other readers definitely suck (slrn can't even decode QP and base64, last time I tried it).</p>
</div>2023-05-11T18:34:39ZBy Miksa on /blog/tech/MailingListsVsForumstag:CSpace:blog/tech/MailingListsVsForums:f8f4dcf6a8122f5231d9608c68bd1f8c55e4244cMiksa<div class="wikitext"><p>But it was the newsgroups that handled all of this best, and I rue their demise regularly. You could easily start a new thread. A thread could split into subthreads with different discussions. And you could immediately see how many new messages a group had and read through all of them just by hitting the space bar repeatedly. I just wish forums would build newsgroup interfaces.</p>
<p>On a thread list on a forum it may be hard to spot new threads before they drop to the second page. In fact, I mostly read a forum where most of the discussion has concentrated into "megathreads" thousands or tens of thousands of posts long. I always just open them from bookmarks; it may be months before I check the forums for any new threads. And all these megathreads have multiple ongoing simultaneous discussions, and you need to read all the posts to keep track of them.</p>
<p>I also read a history blog with weekly new posts and great discussions in the comment section of every post. The blog is WP based, the comments have several separate threads, and keeping up with new comments is arduous. Say that when I return to the blog, I last read it on May 5th. I search the page for "May 5" and see there are 25 hits. I refresh the page, search again, and now there are 38 hits, so there are several new comments. I need to jump to every new hit, read it and any related newer comments. When I have read all of "May 5" I then search for "May 6" and repeat until I have caught up. Newsgroups would be a godsend.</p>
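<p>(The manual catch-up routine described above is essentially filtering comments by timestamp, which is exactly what a newsreader does for free. A sketch in Python, with hypothetical data and a hypothetical date format:)</p>

```python
from datetime import datetime

# Hypothetical comment timestamps, as a site might render them.
comments = [
    ("May 5, 2023 10:12", "first comment"),
    ("May 5, 2023 18:40", "reply"),
    ("May 6, 2023 09:03", "new thread"),
    ("May 7, 2023 14:22", "another reply"),
]

def unread_since(comments, last_read):
    """Return comments newer than the last visit, oldest first."""
    out = []
    for stamp, text in comments:
        when = datetime.strptime(stamp, "%B %d, %Y %H:%M")
        if when > last_read:
            out.append((when, text))
    return sorted(out)

# Everything posted since the reader's last visit at noon on May 5th.
new = unread_since(comments, datetime(2023, 5, 5, 12, 0))
```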
</div>2023-05-11T16:03:36ZBy Anonymous on /blog/tech/MailingListsVsForumstag:CSpace:blog/tech/MailingListsVsForums:87c24f8c5993581947b5eee58a31b72374ee07c0Anonymous<div class="wikitext"><p>Forums? Hasn't that gone the way of the dodo?</p>
<p>All the cool kids use chats these days, like Discord for example.</p>
</div>2023-05-11T11:11:10Z