Why email is often not as good as modern communication protocols
I was recently reading Git is already federated & decentralized (via). To summarize the article, it's a reaction to proposals to decentralize large git servers (of the Github/Gitlab variety) by having them talk to each other with ActivityPub. Drew DeVault notes that Git can already be used in a distributed way over email, describes how git forges could talk to each other via email, and contrasts it with ActivityPub. In the course of the article, Drew DeVault proposes using email not just between git forges but between git forges and users, and this is where my sysadmin eyebrows went up.
The fundamental problem with email as a protocol, as compared to more modern ones, is that standard basic email is an 'anonymous push' protocol. You do not poll things you're interested in, they push data to you, and you can be pushed to with no form of subscription on your part required; all that's needed is a widely shared identifier in the form of your email address. This naturally and necessarily leads to spam. An anonymous push protocol is great if you want to get contacted by arbitrary strangers, but that necessarily leads to arbitrary strangers being able to contact you whether or not you're actually interested in what they have to say.
This susceptibility to spam is not desired (to put it one way) in a modern protocol, a protocol that is designed to survive and prosper on today's Internet. Unless you absolutely need the ability to be contacted by arbitrary strangers, a modern protocol should be either pull-based or require some form of explicit and verifiable subscription, such that pushes without the subscription information automatically fail.
(One form of explicit verification is to make the push endpoint different for each 'subscription' by incorporating some kind of random secret in eg the web-hook URL that notifications are delivered to.)
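As an illustration, here's a minimal sketch in Python of per-subscription push endpoints; the URL, function names, and token length are all invented for the example:

```python
import secrets

# Each subscription gets its own random secret, which doubles as the
# unguessable part of the webhook URL handed out to the notifier.
# Pushes to a URL whose secret was never issued simply fail.
subscriptions = {}

def create_subscription(topic):
    token = secrets.token_urlsafe(32)
    subscriptions[token] = topic
    # example.org and the /hook/ path are illustrative, not a real service.
    return "https://example.org/hook/" + token

def accept_push(token, payload):
    # An unknown token means there's no subscription behind this push,
    # so this 'anonymous' push is rejected outright.
    if token not in subscriptions:
        return False
    # ... deliver payload to the subscriber here ...
    return True
```

Revoking a subscription is then just deleting its token, which turns off that one push channel without affecting any other subscriber.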
It's possible to use email as the transport to implement a protocol that doesn't allow anonymous push; you can require signed messages, for example, and arrange to automatically reject or drop unsigned or badly signed ones. But this requires an additional layer of software on top of email; it is not simple, basic email by itself, and that means that it can't be used directly by people with just a straightforward mail address and mail client. As a result, I think that Drew DeVault's idea of using email as the transport mechanism between git forges is perfectly fine (although you're going to want to layer message signatures and other precautions on top), but the idea of extending that to directly involving people's email boxes is not really the right way to go, or at least not the right way to go by itself.
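To sketch what that additional layer of software might look like, here's a toy mail filter in Python that drops messages lacking a valid signature. For simplicity it uses an HMAC with a per-peer shared secret carried in an invented header; a real forge-to-forge deployment would more plausibly verify PGP or S/MIME signatures:

```python
import hmac
import hashlib
from email.message import EmailMessage

# Illustrative shared secret; in practice you'd have one per peer forge.
SHARED_KEY = b"per-peer shared secret"

def sign_body(body):
    return hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()

def accept_message(msg):
    # Reject anything unsigned or badly signed before it gets anywhere
    # near a human mailbox; only verified pushes survive.
    sig = msg["X-Forge-Signature"]  # invented header name
    if sig is None:
        return False
    return hmac.compare_digest(sig, sign_body(msg.get_content()))
```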
(To be blunt, one of the great appeals of Github to me is that I can to a large extent participate in Github without spraying my email address around to all and sundry. It still leaks out and I still get spam due to participating in Github, but it's a lot less than I would if all Github activity took place in mailing lists that were as public as eg Github issues are.)
The challenge of storing file attributes on disk
In pretty much every Unix filesystem and many non-Unix ones, files (and more generally all filesystem objects) have a collection of various basic attributes, things like modification time, permissions, ownership, and so on, as well as additional attributes that the filesystem uses for internal purposes (eg). This means that every filesystem needs to figure out how to store and represent these attributes on disk (and to a lesser extent in memory). This presents two problems, an immediate one and a long term one.
The immediate problem is that different types of filesystem objects have different attributes that make sense for them. A plain file definitely needs a (byte) length that is stored on disk, but that doesn't make any sense to store on disk for things like FIFOs, Unix domain sockets, and even block and character devices, and it's not clear if a (byte) length still makes sense for directories either, given that they're often complex data structures today. There are also attributes that some non-file objects need that files don't; a classical example in Unix is st_rdev, the device ID of special files.
(Block and character devices may have byte lengths in stat() results, but that's a different thing entirely than storing a byte length for them on disk. You probably don't want to pay any attention to the on-disk 'length' for them, partly so that you don't have to worry about updating it to reflect what you'll return in stat(). Non-linear directories definitely have a space usage, but that's usually reported in blocks; a size in bytes doesn't necessarily make much sense unless it's just 'block count times block size'.)
The usual answer for this is to punt. The filesystem will define an on-disk structure (an 'inode') that contains all of the fields that are considered essential, especially for plain files, and that's pretty much it. Objects that don't use some of the basic attributes still pay the space cost for them, and extra attributes you might want either get smuggled in somewhere or usually just aren't present. Would you like attributes for how many live entries and how many empty entry slots are in a directory? You don't get them, because it would be too much overhead to have those attributes there for everything.
The long term problem is dealing with the evolution of your attributes. You may think that they're perfect now (or at least that you can't do better given your constraints), but if your filesystem lives for long enough, that will change. Generally, either you'll want to add new attributes or you'll want to change existing ones (for example, widening a timestamp from 32 bits to 64 bits). More rarely you may discover that existing attributes make no sense any more or aren't as useful as you thought.
If you thought ahead, the usual answer for this is to include unused extra space in your on-disk attribute structure and then slowly start using it for new attributes or extending existing ones. This works, at least for a while, but it has various drawbacks, including that because you only have limited space you'll have long arguments about what attributes are important enough to claim some of it. On the other hand, perhaps you should have long arguments over permanent changes to the data stored in the on-disk format and face strong pressures to do it only when you really have to.
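To make the trade-off concrete, here's a toy fixed-size inode layout expressed in Python's struct notation; the field choices and the 16 reserved bytes are illustrative, not any real filesystem's format:

```python
import struct

# mode, uid, gid, nlink (32-bit each), then size, atime, mtime, ctime
# (64-bit each), then 16 reserved bytes of 'thinking ahead' padding.
# Every object type pays for every field, needed or not.
INODE_FORMAT = "<IIIIQQQQ16x"
INODE_SIZE = struct.calcsize(INODE_FORMAT)  # 64 bytes per inode

def pack_inode(mode, uid, gid, nlink, size, atime, mtime, ctime):
    return struct.pack(INODE_FORMAT, mode, uid, gid, nlink,
                       size, atime, mtime, ctime)
```

Widening a timestamp or adding a new attribute later means carving bytes out of that reserved padding, which is exactly where the arguments about what's important enough begin.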
As an obvious note, the reason that people turn to a fixed on-disk 'inode' structure is that it's generally the most space-efficient option and you're going to have a lot of these things sitting around. In most filesystems, most of them will be for regular files (which will outnumber directories and other things), and so there is a natural pressure to prioritize what regular files need at the possible expense of other things. It's not the only option, though; people have tried a lot of things.
I've talked about the on-disk format for filesystems here, but you face similar issues in any archive format (tar, ZIP, etc). Almost all of them have file 'attributes' or metadata beyond the name and the in-archive size, and they have to store and represent this somehow. Often archive formats face both issues; different types of things in the archive want different attributes, and what attributes and other metadata needs to be stored changes over time. There have been some creative solutions over the years.
Some thoughts on performance shifts in moving from an iSCSI SAN to local SSDs
At one level, we're planning for our new fileserver environment to be very similar to our old one. It will still use ZFS and NFS, our clients will treat it the same, and we're even going to be reusing almost all of our local management tools more or less intact. At another level, though, it's very different because we're dropping our SAN in this iteration. Our current environment is an iSCSI-based SAN using HDs, where every fileserver connects to two iSCSI backends over two independent 1G Ethernet networks; mirrored pairs of disks are split between backends, so we can lose an entire backend without losing any ZFS pools. Our new generation of hardware uses local SSDs, with mirrored pairs of disks split between SATA and SAS. This drastic low level change is going to change a number of performance and failure characteristics of our environment, and today I want to think aloud about how the two environments will differ.
(One reason I care about their differences is that it affects how we want to operate ZFS, by changing what's slow or user-visible and what's not.)
In our current iSCSI environment, we have roughly 200 MBytes/sec of total read bandwidth and write bandwidth across all disks (which we can theoretically get simultaneously) and individual disks can probably do about 100 to 150 MBytes/sec of some combination of reads and writes. With mirrors, we have 2x write amplification from incoming NFS traffic to outgoing iSCSI writes, so 100 Mbytes/sec of incoming NFS writes saturates our disk write bandwidth (and it also seems to squeeze our read bandwidth). Individual disks can do on the order of 100 IOPs/sec, and with mirrors, pure read traffic can be distributed across both disks in a pair for 200 IOPs/sec in total. Disks are shared between multiple pools, which visibly causes problems, possibly because the sharing is invisible to our OmniOS fileservers so they do a bad job of scheduling IO.
Faults have happened at all levels of this SAN setup. We have lost individual disks, we have had one of the two iSCSI networks stop being used for some or all of the disks or backends (usually due to software issues), and we have had entire backends need to be rotated out of service and replaced with another one. When we stop using one of the iSCSI networks for most or all disks of one backend, that backend drops to 100 Mbytes/sec of total read and write bandwidth, and we've had cases where the OmniOS fileserver just stopped using one network so it was reduced to 100 Mbytes/sec to both backends combined.
On our new hardware with local Crucial MX300 and MX500 SSDs, each individual disk has roughly 500 Mbytes/sec of read bandwidth and at least 250 Mbytes/sec of write bandwidth (the reads are probably hitting the 6.0 Gbps SATA link speed limit). The SAS controller seems to have no total bandwidth limit that we can notice with our disks, but the SATA controller appears to top out at about 2000 Mbytes/sec of aggregate read bandwidth. The SSDs can sustain over 10K read IOPs/sec each, even with all sixteen active at once. With a single 10G-T network connection for NFS traffic, a fileserver can do at most about 1 GByte/sec of outgoing reads (which theoretically can be satisfied from a single pair of disks) and 1 GByte/sec of incoming writes (which would likely require at least four disk pairs to get enough total write bandwidth, and probably more because we're writing additional ZFS metadata and periodically forcing the SSDs to flush and so on).
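The back-of-the-envelope arithmetic in the last two paragraphs can be written out directly (numbers from above, treating mirroring as a simple 2x write multiplier):

```python
# Incoming NFS writes are doubled by mirroring before they reach disk.
def disk_write_bw_needed(nfs_write_mb_s, mirror_copies=2):
    return nfs_write_mb_s * mirror_copies

# Old iSCSI environment: 100 MB/s of NFS writes needs 200 MB/s of disk
# writes, which saturates the ~200 MB/s of total disk write bandwidth.
old_needed = disk_write_bw_needed(100)

# New hardware: 1000 MB/s of NFS writes needs 2000 MB/s of disk writes;
# at ~250 MB/s per SSD that's 8 disks, ie four mirrored pairs (more in
# practice, because of ZFS metadata and periodic flushes).
pairs_needed = disk_write_bw_needed(1000) / 250 / 2
```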
As far as failures go, we don't expect to lose either the SAS or the SATA controllers, since both of them are integrated into the motherboard. This means we have no analog of an iSCSI backend failure (or temporary unavailability), where a significant number of physical disks are lost at once. Instead the only likely failures seem to be the loss of individual disks and we certainly hope to not have a bunch fall over at once. I have seen a SATA-connected disk drop from a 6.0 Gbps SATA link speed down to 1.5 Gbps, but that may have been an exceptional case caused by pulling it out and then immediately re-inserting it; this dropped the disk's read speed to 140 MBytes/sec or so. We'll likely want to monitor for this, or in general for any link speed that's not 6.0 Gbps.
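The link speed check mentioned here is simple enough to sketch; how you actually obtain the negotiated speed varies (smartctl output, sysfs, and so on), so this hypothetical function just takes the speeds as input:

```python
EXPECTED_GBPS = 6.0

def degraded_links(link_speeds):
    # link_speeds maps disk name -> negotiated SATA link speed in Gbps.
    # Anything below 6.0 Gbps is worth an alert, per the experience above.
    return [disk for disk, gbps in link_speeds.items()
            if gbps < EXPECTED_GBPS]
```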
(We may someday have what is effectively a total server failure, even if the server stays partially up after a fan failure or a partial motherboard explosion or whatever. But if this happens, we've already accepted that the server is 'down' until we can physically do things to fix or replace it.)
In our current iSCSI environment, both ZFS scrubs to check data integrity and ZFS resilvers to replace failed disks can easily have a visible impact on performance during the workday and they don't go really fast even after our tuning; this is probably not surprising given both total read/write bandwidth limits from 1G networking and IOPs/sec limits from using HDs. When coupled with our multi-tenancy, this means that we've generally limited how much scrubbing and resilvering we'll do at once. We may have historically been too cautious about limiting resilvers (they're cheaper than you might think), but we do have a relatively low total write bandwidth limit.
Our old fileservers couldn't have the same ZFS pool use two chunks from the same physical disk without significant performance impact. On our new hardware this doesn't seem to be a problem, which suggests that we may experience much less impact from multi-tenancy (which we're still going to have, due to how we sell storage). This is intuitively what I'd expect, at least for random IO, since SSDs have so many IOPs/sec available; it may also help that the fileserver can now see that all of this IO is going to the same disk and schedule it better.
On our new hardware, test ZFS scrubs and resilvers have run at anywhere from 250 Mbyte/sec on upward (on mirrored pools), depending on the test pool's setup and contents. With high SSD IOPs/sec and read and write bandwidth (both to individual disks and in general), it seems very likely that we can be much more aggressive about scrubs and resilvers without visibly affecting NFS fileserver performance, even during the workday. With an apparent 6000 Mbytes/sec of total read bandwidth and perhaps 4000 Mbytes/sec of total write bandwidth, we're pretty unlikely to starve regular NFS IO with scrub or resilver IO even with aggressive tuning settings.
(One consequence of hoping to mostly see single-disk failures is that under normal circumstances, a given ZFS pool will only ever have a single failed 'disk' from a single vdev. This makes it much less relevant that resilvering multiple disks at once in a ZFS pool is mostly free; the multi-disk case is probably going to be a pretty rare thing, much rarer than it is in our current iSCSI environment.)
TLS Certificate Authorities and 'trust'
In casual conversation about CAs, it's common for people to talk about whether you trust a CA (or should) and whether a CA is trustworthy. I often bristle at using 'trust' in these contexts, but it's been hard to articulate why. Today, in a conversation on HN prompted by my entry on the first imperative of commercial CAs, I came up with a useful explanation.
Let's imagine that there's a new CA that's successfully set itself up as a copy of how Let's Encrypt operates; it uses the same hardware, runs the same open source software, configures things the same, follows the same procedures, has equally good staff, has been properly audited, and in general has completely duplicated Let's Encrypt's security and operational excellence. However, it has opted for the intellectually pure approach of starting with new root certificates that are not cross-signed by anyone and it is not in any browser root stores yet; as a result, its certificates are not trusted by any browser.
(Let's Encrypt has made this example plausible, because as a non-commercial CA that mostly does things with automation it doesn't have as many reasons to keep how it operates a secret as a commercial CA does.)
In any reasonable and normal sense of the word, this CA is as trustworthy as Let's Encrypt is. It will issue or not issue TLS certificates in the same situations that LE would (ignoring rate limits and pretending that everyone who authorizes LE in CAA records will also authorize this CA and so on), and its infrastructure and procedures are as secure and solid as LE's. If we trust LE, and I think we do, it's hard to say why we wouldn't trust this CA.
If we say that this CA is 'less trustworthy' than Let's Encrypt anyway, what we really mean is 'TLS certificates from this CA currently provoke browser warnings'. This is a perfectly good thing to care about (and it's usually what matters in practice), but it is not really 'trust' and the difference matters because we have a whole tangled set of expectations, beliefs, and intuitions surrounding the idea of trust. When we use the language of trust to talk about technical issues of which CA certificates the browsers accept and when, we create at least some confusion and lose some clarity, and we risk losing sight of what browser-accepted TLS certificates really are, what they tell us, and what we care about with them.
For instance, if we talk about trust and you get a TLS certificate from a CA, it seems to make intuitive sense to say that you need to trust the CA and that it should be trustworthy. But what does that actually mean when we look at the technical details? What should the CA do or not do? How does that affect our security, especially in light of the fundamental practical problem with the CA model?
At the same time, talking about the trustworthiness of a CA is not a pointless thing. If a CA is not trustworthy (in the normal sense of the word), it should not be included in browsers (and eventually will not be). It's just that the trustworthiness of a CA is only loosely correlated with whether TLS certificates from the CA are currently accepted by browsers, which is almost always what we really care about. As we've seen with StartCom, it can take quite a long time to transition from concluding that a CA is no longer trustworthy to having all its TLS certificates no longer accepted by browsers.
There can also be some amount of time when a new CA is trustworthy but is not included in browsers, because inclusion takes a while. This actually happened with Let's Encrypt; it's just that Let's Encrypt worked around this time delay by getting their certificate cross-signed by an existing in-browser CA, so people mostly didn't notice.
(I will concede that using 'trust' casually is very attractive. For example, in the sentence above I initially wrote 'trusted CA' instead of 'in-browser CA', and while that's sort of accurate I decided it was not the right phrasing to use in this entry.)
Sidebar: The one sort of real trust required in the CA model
Browser vendors and other people who maintain sets of root certificates must trust that CAs included in them will not issue certificates improperly and will in general conduct themselves according to the standards and requirements that the browser has. What constitutes improper issuance is one part straightforward and one part very complicated and nuanced; see, for example, the baseline requirements.
Twitter probably isn't for you or me any more
I'm currently feeling unhappy about Twitter, because last Friday I confirmed that my Linux Twitter client is going to stop working in less than two months in Twitter's client apocalypse (my iOS client is very likely to be affected too). This development isn't an exception; instead it's business as usual for Twitter, at least as long term and active users of Twitter see it. Twitter has a long history of making product changes that we don't want and ignoring the ones that we do want, like a return of chronological timelines. Worse, even bigger ones are said to be on the way, with further changes from the classical Twitter experience. Why is Twitter ignoring its long term users this way? Is it going to change its mind? My guess is probably not. Instead, I've come to believe that Twitter has made a cold blooded business decision that it's not for you or me any more.
Here is how I see things at the moment, from my grumpy and cynical perspective.
Twitter is a tech company with a highly priced stock, which current investors want and need to be even higher. To support and increase its stock price, Twitter needs to grow its revenue and grow it fast (no one is going to sit around for five or ten years of slow growth). Like many modern Internet companies, Twitter is currently mostly an advertising company and makes money by showing ads to its users. There are three broad ways for an ad company to increase the money coming in; it can:
- increase the value of its ads, so companies will pay more for them.
- show more ads to current users.
- increase the number of (active) users, so it sells more ads in total.
The first seems unlikely to happen, especially given that Internet ad trends seem to be running the other way. The second generally doesn't work too well and can work against the first. Neither of them, separately or together, seem likely to deliver the sort of major growth Twitter needs (even if they work, they both have limits). So that leaves increasing the number of users.
But Twitter has already been trying to grow its user base for years, generally without much success and certainly without the very visible large scale growth that investors need. As part of this, Twitter has spent years refining and tinkering with the core Twitter product in attempts to draw in more users and get them to be more active, and with moderate exceptions it hasn't worked. The modern Twitter is genuinely more pleasant in various modest ways than it was when I started, but it's manifestly not drawing in hordes of new users.
In this situation, Twitter has a choice. It could double down on its past approach, trying yet more tweaks to the current core Twitter experience in a 'this time for sure' bet even though that's repeatedly failed before. Alternately, it could make a cold blooded business decision to shift to a significantly different core experience that (Twitter feels) has a much better chance of pulling in the vast ocean of users in the world who aren't particularly attracted to the current version of Twitter, and may even be turned off by it.
I believe that Twitter's made the second choice. It's decided to change what 'Twitter' is; as a result, 'Twitter' is no longer for you and me, the people who like it as it is, as a chronological timeline and so on. 'Twitter' the experience is now going to be for the new users that Twitter (the company) needs in order to have a chance of growing revenue enough and keeping its share price up. If the new experience displeases or outright alienates you and me, that's just tough luck for us. The Twitter that we find interesting and compelling, the product that's useful to us, well, it's apparently not capable of growing big enough (for Twitter's investors, at least; it might be a profitable company without the baggage of a high stock price).
(Analogies to the rise and then the stall of syndication feed reading are left as an exercise for the reader, including any arguments that there was or wasn't a natural limit to the number of people who'd ever want to use a feed reader.)
I have no idea and no opinions on where this leaves you and me, the people who like Twitter as it is, or what alternatives we really have, especially if the community we've found on Twitter is important to us. The unpleasant answer may be that things will just dissolve; we'll all walk away in our own separate and scattered directions, as people walked away from Usenet communities once upon a time.
PS: Twitter added 6 million 'monthly active users' in Q1 2018 (and not all of them will be bots), but it also attributed a bunch of this to new experiences and features, not the core product suddenly being more attractive. See also, about Twitter's Q4 2017. Twitter is also apparently making more money from video ads, but there's a limit to how much money growth that can drive; after a certain point, they're (almost) all video ads.
Networks form through usage and must be maintained through it
I recently read The network's the thing (via, itself probably via). One of the things this article is about is how for many companies, the network created by your users is the important piece, not the product itself; you can change the product but retain the network and win, despite the yelling of aggravated users. My impression is that this is a popular view of companies, especially companies with social networks.
(Purely social networks are not the only sort of networks there are. Consider Github; part of its value is from the network of associations among users and repositories. This is 'social' in the sense that it involves humans talking to each other, but it is not 'social' in the sense that Twitter, Facebook, and many other places are.)
On the one hand, this "networks matter but products don't" view is basically true. On the other hand, I think that this is a view that misses an important detail. You see, users do not form (social) networks on services out of the goodness of their hearts. Instead, those social networks form and remain because they are useful to people. More than that, they don't form in isolation; instead they're created as a side effect of the usefulness of the service itself to people, and this usefulness depends on how the service works (and on how people use it). People create the network by using the product, and the network forms only to the extent and in the manner that the product is useful to them.
As a corollary, if the product changes enough, the network by itself will not necessarily keep people present. What actually matters to people is not the network as an abstract thing, it's the use they get out of the network. If that use dries up because the product is no longer useful to them, well, there you go. For example, if people are drawn to Twitter to have conversations and someday Twitter breaks that by changing how tweets are displayed and threaded together so that you stop being able to see and get into conversations, people will drift away. Twitter's deep social network remains in theory, but it is not useful to you any more so it might as well not exist in practice.
In this sense, your network is not and cannot be separated from your core product (or products). It exists only because people use those products and they do that (in general) because the products help them or meet some need they feel. If people stop finding your product useful, the network will wither. If different people find your product useful in different ways, the shape of your (active) network will shift.
(For example, Twitter's network graph will probably look quite different if it becomes a place where most people passively follow a small number of 'star' accounts and never interact with most other people on Twitter.)
At the same time, changes in the product don't necessarily really change the network because they don't necessarily drastically change the use people get out of the service. To pick on Twitter again, the length of tweets or whether or not their length counts people mentioned in them are not crucial to the use most people get out of Twitter, for all that I know I heard yelling about both.
The superficial versus deep appeal of ZFS
Red Hat recently announced Stratis (for a suitable value of 'recently'). In their articles introducing it, Red Hat says explicitly that they looked at ZFS, among other similar things, and implies that they did their best to take the appealing bits from ZFS. So what are Stratis's priorities? Let's quote:
Stratis aims to make three things easier: initial configuration of storage; making later changes; and using advanced storage features like snapshots, thin provisioning, and even tiering.
If you look at ZFS, it's clear where Stratis draws inspiration from both ZFS features and ZFS limitations. But it's also clear what the Stratis people see as ZFS's appeals (especially in their part 2); I would summarize these as flexible storage management (where you have a pool of storage that can be flexibly used by filesystems) and good command line tools.
These are good appeals of ZFS, make no mistake. I like them myself, and chafe when I wind up dealing with less flexible and more cumbersome storage management via LVM. But as someone who's used ZFS for years, it's also my opinion that they are superficial appeals of ZFS. They're the obvious things that you notice right away when you start using ZFS, and for good reason; it's very liberating to not have to pre-assign space to filesystems and so on.
(Casually making a snapshot before some potentially breaking change like switching Firefox versions and then being able to retrieve files from the snapshot in order to revert is also a cool trick.)
However, the longer I've used it the more I've come to see the deep appeal of ZFS as its checksums and how these are deeply integrated into its RAID layer to enable things like self-healing (such deep integration is required for this). You generally can't see this appeal right away, when you just set up and use ZFS. Instead you have to use ZFS for a while, through scrubs, disks that develop problems, and perhaps ZFS noticing and repairing damage to your pool without losing any data. This reassurance that your data is intact and repairable is something I've come to really treasure in ZFS and why I don't want to use anything without checksums any more.
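A toy version of the read path makes it clear why the checksums and the mirroring have to know about each other; this is a deliberately simplified sketch, not ZFS's actual logic:

```python
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

def read_with_self_heal(mirror, block, expected_sum):
    # mirror maps side name -> {block number: data}. Try every copy;
    # a copy that fails its checksum is rewritten from a good one.
    good = None
    bad_sides = []
    for side, blocks in mirror.items():
        data = blocks.get(block)
        if data is not None and checksum(data) == expected_sum:
            good = data
        else:
            bad_sides.append(side)
    if good is None:
        raise IOError("no copy of block %d matches its checksum" % block)
    for side in bad_sides:
        mirror[side][block] = good  # self-heal the damaged copy
    return good
```

Without checksums, a mirrored RAID layer can only tell you that the two copies differ, not which one is right; with them it can both pick the good copy and quietly repair the bad one.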
On the whole, Stratis (or at least the articles about it) provides an interesting mirror on how people see ZFS and how that's different from my view of ZFS. Probably there are lessons for how people view many technologies, and certainly I've experienced this sort of split in other contexts.
Intel versus AMD for me (in 2018)
I don't particularly like Intel for all sorts of reasons (eg), and as a result I want to like AMD (who I think have been very good for x86 in general when they've been competitive). I'm now in the unusual position (for me) of having essentially comparable machines built with a top end CPU from each; my work machine with a Ryzen 1800X, and my home machine with an i7-8700K. This has given me about the best Intel versus AMD comparison for what I want that I could ask for, and unfortunately it's not close.
For everything I care about, my Intel machine pretty much smokes my AMD machine. It has lower practical power consumption, it appears to run cooler (although CPU power usage is unpredictable and variable between CPUs), it's widely recognized as having faster single-CPU performance, and my Intel machine even builds Firefox from source significantly faster than my AMD machine does despite the AMD machine having more CPUs. There are undoubtedly highly parallel tasks that my AMD machine would perform better on than my Intel does, but they're irrelevant to me because I don't do them (at least so far).
(It's possible that RAM speed is one factor in the difference in Firefox build times, but this is a practical comparison. I can get faster non-ECC RAM for the Intel machine than we could get ECC RAM for the AMD machine, and Ryzens have complicated memory speed issues.)
The next issue, of course, is that my Intel machine is quite stable and my Ryzen machine (still) requires magic. My Ryzen does appear to now be stable with its magic, but I'm doing peculiar things to get there that likely have their own side effects and I'm never entirely sure that my Ryzen is completely trustworthy. One of my co-workers has a Ryzen Pro based machine (with the same motherboard and so on), and it hangs in the same way. Perhaps the new 2xxx Ryzens don't have this problem, but who knows, and as far as I'm concerned the mere fact that mysterious problems exist (and haven't been acknowledged by AMD) is a black mark against all Ryzens in practice. They're just not CPUs that I can trust at this point.
In summary, I'm very happy that I wound up choosing Intel for the machine that I spent my own money on. I'm still not particularly happy about my work AMD machine, and I'd be even less happy if I'd spent my own money on it. My office AMD machine works and it's okay, but the Intel one is clearly better, makes me happier, and I trust it significantly more.
(I'm also glad that I talked myself into going all the way up to an Intel i7-8700K, but that involves some extra hand waving that I'm not going to do in this entry.)
In terms of cost, I believe that my Intel machine was cheaper in practice for me. This is not an apples to apples comparison since my AMD machine's Radeon RX 550 is clearly a better GPU than the onboard Intel graphics, but again it's a practical one. With the Intel machine I could completely rely on the motherboard, while the AMD machine required an extra and not entirely cheap component for my usage case (which is one where I still won't use nVidia cards because of lacking open source support).
An incomplete list of reasons why I force-quit iOS apps
As a result of having iOS devices, I've wound up reading a certain number of things about using iOS and how you're supposed to do that. One of the things that keeps coming up periodically is people saying 'don't force-quit iOS apps, it's a bad idea'. What reading a number of these articles has shown me is that people seem to have somewhat different views about why you might want to force-quit iOS apps than I do, and often more narrow ones. So here is an incomplete list of reasons why I end up force-quitting iOS apps:
- To remove the app from the carousel display of 'recently used'
apps. In order to make this carousel usable (especially on my
phone), I curate it down to only the apps that I'm actively
using and I want to switch between on a regular basis. If I use
an app only once in a while, I will typically eject it from the
carousel after use.
(I also eject apps that I consider sensitive, where I don't want their screen showing when I cycle through the apps.)
- To force Termius to require a thumbprint to unlock, even if it's immediately used again. Termius's handling of SSH keys is a bit like sudo, and like sudo I want to get rid of any elevated privileges (such as unlocked keys) the moment that I know I don't need them again. Generally this overlaps with 'remove an unused app from the carousel', since if I'm forcing Termius to be explicitly unlocked again I'm not planning to use it in the near future.
- To get the app back to its initial screen. I've read a proposal that this should be the only thing that a 'force-quit' does in a future iOS version.
- To abort or discard something that an app is doing. Sometimes resetting
an app back to its initial screen is the easiest way to get it out of
some activity, because the app itself is quite insistent that you not
have any easier way of outright cancelling things.
(In the case that comes up for me, the app in question is trying to avoid data loss, but as it happens I want to lose the 'data' in question.)
- To restart an app because it seems to have gotten itself into some
bad or hung state.
- To stop an app from talking to another device because I'm about to
do something to the other device that I know the app will react
badly to, for example restarting the device.
- To hopefully stop an app being active in the background for whatever
reason it thinks it has for doing that. There are some settings that
probably control this, but it's not entirely clear and there are apps
that I sometimes want to be (potentially) active in the background
and sometimes definitely don't want active, for example because their
purpose is over for the moment.
- To force an app out when I don't entirely trust it in general and only want it to be doing anything when I'm actually running it. Sure, I may have set permissions and settings, but the iOS permissions stuff is somewhat intricate and I'm not always sure I've gotten everything. So out it goes as a fairly sure solution.
What strikes me about these different reasons I have for force-quitting apps is how hard they'd be to provide in distinct app UIs or system UIs. Some of them perhaps should be handled in the app (such as locking Termius), but there's only so much room for app controls and there are always more features to include. And it makes sense that an app doesn't want to provide a too-accessible way of doing something that causes data loss, and instead leaves it up to you to do something which you've probably been told over and over is exceptional and brutal.
The other UI advantage of force-quit as a way of resetting an app's state is that it's universal. You don't have to figure out how to exit some particular screen or state inside an app using whatever odd UI the app has; if you just want to go back to the start (more or less), you know how to do that for every app. My feeling is that this does a lot to lessen my frustrations with app UIs and somewhat encourages exploring app features. This is also an advantage for similar effects that I want to be universal, such as cutting off an app's ability to do things in the background.
(In general, if I feel that an app is misbehaving the last thing I want to have to do is trust it to stop misbehaving. I want some outside mechanism of forcing that.)
Registering for things on the Internet is dangerous these days
Back in the old days (say up through the middle of the 00s), it was easily possible to view registering for websites, registering products on the Internet, and so on as a relatively harmless and even positive activity. Not infrequently, signing up was mostly there so you could customize your site experience and preferences, and maybe so that you could get to hear about important news. Unfortunately those days are long over. On today's Internet, registration is almost invariably dangerous.
The obvious problem is that handing over your email address often involves getting spam later, but this can be dealt with in various ways. The larger and more pernicious danger is that registering invariably requires agreeing to people's terms of service. In the old days, terms of service were not all that dangerous and often existed only to cover the legal rears of the service you were registering with. Today, this is very much not the case; most ToSes are full to the brim with obnoxious and dangerous things, and are very often not to your benefit in the least. At the very least, most ToSes will have you agreeing that the service can mine as much data from you as possible and sell it to whoever it wants. Beyond that, many ToSes contain additional nasty provisions like forced arbitration, perpetual broad copyright licensing for whatever you let them get their hands on (including eg your profile picture), and so on. Some but not all of these ToS provisions can be somewhat defanged by using the service as little as possible; on the other hand, sometimes the most noxious provisions cut to the heart of why you want to use the service at all.
(If you're in the EU and the website in question wants to do business there, the EU GDPR may give you some help here. Since I'm not in the EU, I'm on my own.)
Some Terms of Service are benign, but today ToSes are so long and intricate that you can't tell whether you have a benign or a dangerous one (and anyway, many ToSes are effectively self-upgrading). Even with potentially dangerous ToSes, some companies will never exercise the freedom that their ToS nominally gives them, for various reasons. But neither of these is the way to bet when you're facing an arbitrary company with an arbitrary ToS. Today the only safe assumption is that agreeing to someone's Terms of Service is at least a somewhat dangerous act that may bite you at some point.
The corollary to this is that you should assume that anyone who requires registration before giving you access to things when this is not actively required by how their service works is trying to exploit you. For example, 'register to see this report' should be at least a yellow and perhaps a red warning sign. My reaction is generally that I probably don't really need to read it after all.
(Other people react by simply giving up and agreeing to everything, taking solace in the generally relatively low chance that it will make a meaningful difference in their life one way or another. I have this reaction when I'm forced to agree to ToSes; since I can neither meaningfully read the terms nor do anything about them, what they say doesn't matter and I just blindly agree. I have to trust that I'll hear about it if the terms are so bad that I shouldn't agree under any circumstances. Of course this attitude of helplessness plays into the hands of these people.)