Wandering Thoughts archives

2018-05-28

An incomplete list of reasons why I force-quit iOS apps

As a result of having iOS devices, I've wound up reading a certain amount about using iOS and how you're supposed to do it. One of the things that keeps coming up periodically is people saying 'don't force-quit iOS apps, it's a bad idea'. What reading a number of these articles has shown me is that people seem to have somewhat different views than I do about why you might want to force-quit iOS apps, and often narrower ones. So here is an incomplete list of reasons why I end up force-quitting iOS apps:

  • To remove the app from the carousel display of 'recently used' apps. In order to make this carousel usable (especially on my phone), I curate it down to only the apps that I'm actively using and that I want to switch between on a regular basis. If I use an app only once in a while, I will typically eject it from the carousel after use.

    (I also eject apps that I consider sensitive, where I don't want their screen showing when I cycle through the apps.)

  • To force Termius to require a thumbprint to unlock, even if it's immediately used again. Termius's handling of SSH keys is a bit like sudo, and like sudo I want to get rid of any elevated privileges (such as unlocked keys) the moment that I know I don't need them again. Generally this overlaps with 'remove an unused app from the carousel', since if I'm forcing Termius to be explicitly unlocked again I'm not planning to use it in the near future.

  • To get the app back to its initial screen. I've read a proposal that this should be the only thing that a 'force-quit' does in a future iOS version.

  • To abort or discard something that an app is doing. Sometimes resetting an app back to its initial screen is the easiest way to get it out of some activity, because the app itself is quite insistent that you not have any easier way of outright cancelling things.

    (In the case that comes up for me, the app in question is trying to avoid data loss, but as it happens I want to lose the 'data' in question.)

  • To restart an app because it seems to have gotten itself into some bad or hung state.

  • To stop an app from talking to another device because I'm about to do something to the other device that I know the app will react badly to, for example restarting the device.

  • To hopefully stop an app being active in the background for whatever reason it thinks it has for doing that. There are some settings that probably control this, but it's not entirely clear and there are apps that I sometimes want to be (potentially) active in the background and sometimes definitely don't want active, for example because their purpose is over for the moment.

  • To force an app out when I don't entirely trust it in general and only want it to be doing anything when I'm actually running it. Sure, I may have set permissions and settings, but the iOS permissions stuff is somewhat intricate and I'm not always sure I've gotten everything. So out it goes as a fairly sure solution.

What strikes me about these different reasons I have for force-quitting apps is how hard they'd be to provide in distinct app UIs or system UIs. Some of them perhaps should be handled in the app (such as locking Termius), but there's only so much room for app controls and there are always more features to include. And it makes sense that an app doesn't want to provide a too-accessible way of doing something that causes data loss, and instead leaves it up to you to do something that you've probably been told over and over is exceptional and brutal.

The other UI advantage of force-quit as a way of resetting an app's state is that it's universal. You don't have to figure out how to exit some particular screen or state inside an app using whatever odd UI the app has; if you just want to go back to the start (more or less), you know how to do that for every app. My feeling is that this does a lot to lessen my frustrations with app UIs and somewhat encourages exploring app features. This is also an advantage for similar effects that I want to be universal, such as cutting off an app's ability to do things in the background.

(In general, if I feel that an app is misbehaving the last thing I want to have to do is trust it to stop misbehaving. I want some outside mechanism of forcing that.)

IOSAppsForceQuitWhy written at 23:13:11

2018-05-24

Registering for things on the Internet is dangerous these days

Back in the old days (say up through the middle of the 00s), it was easily possible to view registering for websites, registering products on the Internet, and so on as a relatively harmless and even positive activity. Not infrequently, signing up was mostly there so you could customize your site experience and preferences, and maybe so that you could get to hear about important news. Unfortunately those days are long over. On today's Internet, registration is almost invariably dangerous.

The obvious problem is that handing over your email address often involves getting spam later, but this can be dealt with in various ways. The larger and more pernicious danger is that registering invariably requires agreeing to people's terms of service. In the old days, terms of service were not all that dangerous and often existed only to cover the legal rears of the service you were registering with. Today, this is very much not the case; most ToSes are full to the brim with obnoxious and dangerous things, and are very often not to your benefit in the least. At the very least, most ToSes will have you agreeing that the service can mine as much data from you as possible and sell it to whoever it wants. Beyond that, many ToSes contain additional nasty provisions like forced arbitration, perpetual broad copyright licensing for whatever you let them get their hands on (including eg your profile picture), and so on. Some but not all of these ToS provisions can be somewhat defanged by using the service as little as possible; on the other hand, sometimes the most noxious provisions cut to the heart of why you want to use the service at all.

(If you're in the EU and the website in question wants to do business there, the EU GDPR may give you some help here. Since I'm not in the EU, I'm on my own.)

Some Terms of Service are benign, but today ToSes are so long and intricate that you can't tell whether you have a benign or a dangerous one (and anyway, many ToSes are effectively self-upgrading). Even with potentially dangerous ToSes, some companies will never exercise the freedom that their ToS nominally gives them, for various reasons. But neither of those is the way to bet when faced with an arbitrary company and an arbitrary ToS. Today the only safe assumption is that agreeing to someone's Terms of Service is at least a somewhat dangerous act that may bite you at some point.

The corollary to this is that you should assume that anyone who requires registration before giving you access to things when this is not actively required by how their service works is trying to exploit you. For example, 'register to see this report' should be at least a yellow and perhaps a red warning sign. My reaction is generally that I probably don't really need to read it after all.

(Other people react by simply giving up and agreeing to everything, taking solace in the generally relatively low chance that it will make a meaningful difference in their life one way or another. I have this reaction when I'm forced to agree to ToSes; since I can neither meaningfully read the terms nor do anything about them, what they say doesn't matter and I just blindly agree. I have to trust that I'll hear about it if the terms are so bad that I shouldn't agree under any circumstances. Of course this attitude of helplessness plays into the hands of these people.)

DangerousRegistration written at 00:20:59

2018-05-23

Almost no one wants to run their own infrastructure

Every so often, people get really enthused about the idea of a less concentrated, more distributed Internet, one where most of our email isn't inside only a few places, our online chatting happens over federated systems instead of Twitter, there are flower gardens of personal websites and web servers, there are lots of different Git servers instead of mostly Github, and so on. There are many obstacles in the way of this, including that the current large providers don't want to let people go, but over time I have come to think that a large underappreciated one is simply that people don't want to run their own infrastructure. Not even if it's free to do so.

I'm a professional system administrator. I know how to run my own mail and IMAP server, and I know that I probably should and will have to some day. Do I actually run my own server? Of course not. It's a hassle. I have things on Github, and in theory I could publish them outside Github too, on a machine where I'm already running a web server. Have I done so? No, it's not worth the effort when the payoff I'd get is basically only feeling good.

Now, I've phrased this as if running your own infrastructure is easy and the only thing keeping people from doing so is the minor effort and expense involved. We shouldn't underestimate the effects of even minor extra effort and expense, but the reality is that doing a good job of running your own infrastructure is emphatically not a minor effort. There are security, TLS certificates, (offsite) backups, choosing the software, managing configuration, and long term maintenance and updates, and that's assuming that someone else has already built the system and you just have to set up an instance of it.

(And merely setting up an instance of something is often fraught with annoyance and problems, especially for a non-specialist.)

If you use someone else's infrastructure and they're decently good at it, they're worrying about all of that and more things on top (like monitoring, dealing with load surges and DDOSes, and fixing things in the dead of night). Plus, they're on the right side of the issues universities have with running their own email; many such centralized places are paying entire teams of hard-working good people to improve their services (or at least the ones that they consider strategic). I like open source, but it's fairly rare that it can compete head to head with something that a significant company considers a strategic product.

Can these problems be somewhat solved? Sure, but until we get much better 'computing as a utility' (if we ever do), a really usable solution is a single-vendor solution, which just brings us back to the whole centralization issue again. Maybe life is a bit better if we're all hosting our federated chat systems and IMAP servers and Git repo websites in the AWS cloud using canned one-click images, but it's not quite the great dream of a truly decentralized and democratic Internet.

(Plus, it still involves somewhat more hassle than using Github and Twitter and Google Mail, and I think that hassle really does matter. Convinced people are willing to fight a certain amount of friction, but to work, the dream of a decentralized Internet needs to reach even the people who don't really care.)

All of this leads me to the conclusion that any decentralized Internet future that imagines lots of people running their own infrastructure is dead on arrival. It's much more likely that any decentralized future will involve a fair amount of concentration, with many people choosing to use someone else's infrastructure and a few groups running most of it. This matters because running such a big instance for a bunch of people generally requires real money and thus some way of providing it. If there is no real funding model, the whole system is vulnerable to a number of issues.

(See, for example, Mastodon, which is fairly centralized in practice with quite a number of large instances, per the instance statistics.)

NoPersonalInfrastructure written at 00:52:55

2018-05-20

Modern CPU power usage varies unpredictably based on what you're doing

I have both an AMD machine and an Intel machine, both of them using comparable CPUs that are rated at 95 watts TDP (although that's misleading), and I gathered apples to apples power consumption numbers for them. In the process I discovered a number of anomalies in relative power usage between the two CPUs. As a result I've wound up with the obvious realization that modern CPUs have complicated and unpredictable power usage (in addition to all of the other complicated things about them).

In the old days, it was possible to have a relatively straightforward view of how CPU usage related to power draw, where all you really needed to care about was how many CPUs were in use and maybe whether it was integer or floating point code. Not only is that clearly no longer the case, but the factors that change power usage vary from CPU model to CPU model. My power consumption numbers show one CPU to CPU anomaly right away, where an infinite loop in two shells has one shell using more power on a Ryzen 1800X and the other shell using more power on an i7-8700K. These two shells are running the same code on both CPUs and each shell's code is likely to be broadly similar to the other's, but the CPUs are responding to it quite differently, especially when the code is running on all of the CPUs.

Beyond this anomaly, this simple 'infinite shell loop' power measurement also showed a different (and higher) power usage than a simple integer loop in Go. I can make up theories for why, but it's clear that even if you restrict yourself to integer code, a simple artificial chunk of code may not have anywhere near the same power usage as more complex real code. The factors influencing this are unlikely to be simple, and they also clearly vary from CPU to CPU. 'Measure your real code' has always been good advice, but it clearly matters more than ever today if you care about power usage.
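
(For illustration, the sort of 'simple integer loop in Go' I mean is nothing more elaborate than the following sketch. This is hypothetical illustration code, not the actual program behind my measurements; the point is just that it keeps every CPU busy with trivial integer work.)

    // busyloop.go: keep every CPU busy with trivial integer work.
    // A hypothetical sketch for illustration, not the code behind the measurements.
    package main

    import "runtime"

    func spin() {
        var x uint64
        for {
            x++ // simple integer addition, forever
        }
    }

    func main() {
        // Start one spinning goroutine per CPU, then spin in main as well.
        for i := 1; i < runtime.NumCPU(); i++ {
            go spin()
        }
        spin()
    }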

(The corollary of 'measure your real code' is probably that you have to measure real usage too; otherwise you may be running into something like my Bash versus rc effect. This may not be entirely easy, to put it one way.)

It's not news these days that floating point operations and especially the various SIMD instructions such as AVX and AVX-512 use more power than basic integer operations; that's why people reach for mprime as a heavy-duty CPU stress test, instead of just running integer code. MPrime's stress test itself is a series of different tests, and it will probably not surprise you to hear that which specific tests seemed to use the most power varied between my AMD Ryzen 1800X machine and my Intel i7-8700K machine. I don't know enough about MPrime's operation to know if the specific tests differ in what CPU operations they use or only in how much memory they use and how they stride through memory.

(One of the interesting differences was that on my i7-8700K, the test series that was said to use the most power seemed to use less power than the 'maximum heat and FPU stress' tests. But it's hard to say too much about this, since power usage could swing drastically from sub-test to sub-test. I saw swings of 20 to 30 watts from sub-test to sub-test, which does make reporting a single 'mprime power consumption' number a bit misleading.)

Trying to monitor the specific power usage of MPrime sub-tests is about where I decided both that I'd run out of patience and that the specific details were unlikely to be interesting. It's clear that what uses more or less power varies significantly between the Ryzen 1800X system and the i7-8700K system, and really that's all I need to know. I suspect that it basically varies between every CPU micro-architecture, although I wouldn't be surprised if each company's CPUs are broadly similar to each other (on the basis that the micro-architectures and the design priorities are probably often similar to each other).

PS: Since I was measuring system power usage, it's possible that some of this comes from the BIOS deciding to vary CPU and system fan speeds, with faster fan speeds causing more power consumption. But I suspect that fan speed differences don't account for all of the power draw difference.

VaryingCPUPowerDraws written at 01:13:11

2018-05-18

How I usually divide up NFS (operation) metrics

When you're trying to generate metrics for local disk IO, life is generally relatively simple. Everyone knows that you usually want to track reads separately from writes, especially these days when they may have significantly different performance characteristics on SSDs. While there are sometimes additional operations issued to physical disks, they're generally not important. If you have access to OS-level information it can be useful to split your reads and writes into synchronous versus asynchronous ones.

Life with NFS is not so simple. NFS has (data) read and write operations, like disks do, but it also has a large collection of additional protocol operations that do various things (although some of these protocol operations are strongly related to data writes, for example the COMMIT operation, and should probably be counted as data writes in some way). If you're generating NFS statistics, how do you want to break up or aggregate these other operations?

One surprisingly popular option is to ignore all of them on the grounds that they're obviously unimportant. My view is that this is a mistake in general, because these NFS operations can have an IO impact on the NFS server and create delays on the NFS clients if they're not satisfied fast enough. But if we want to say something about these and we don't want to go to the extreme of presenting per-operation statistics (which is probably too much information, and in any case can hide patterns in noise), we need some sort of breakdown.

The breakdown that I generally use is to split up NFS operations into four categories: data reads, data writes (including COMMIT), operations that cause metadata writes such as MKDIR and REMOVE, and all other operations (which are generally metadata reads, for example READDIRPLUS and GETATTR). This split is not perfect, partly because some metadata read operations are far more common (and are far more cached on the server) than other operations; specifically, GETATTR and ACCESS are often the backbone of a lot of NFS activity, and it's common to see GETATTR as by far the most common single operation.

(I'm also not entirely convinced that this is the right split; as with other metrics wrestling, it may just be a convenient one that feels logical.)
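
If you want to see what this split looks like in concrete terms, here is a minimal sketch in Go of how one might bucket NFS v3 operation names into these four categories. The operation names are standard NFS v3 procedures, but the exact assignment of operations to categories is my own choice and is illustrative rather than definitive:

    // nfscat.go: bucket NFS v3 operation names into the four categories
    // discussed above. A rough, illustrative sketch; the assignment of
    // operations to categories is my choice and arguably imperfect.
    package main

    import "fmt"

    func nfsOpCategory(op string) string {
        switch op {
        case "READ":
            return "data read"
        case "WRITE", "COMMIT":
            return "data write"
        case "CREATE", "MKDIR", "SYMLINK", "MKNOD",
            "REMOVE", "RMDIR", "RENAME", "LINK", "SETATTR":
            return "metadata write"
        default:
            // GETATTR, ACCESS, LOOKUP, READDIR, READDIRPLUS, FSSTAT, and so on.
            return "metadata read / other"
        }
    }

    func main() {
        for _, op := range []string{"GETATTR", "WRITE", "MKDIR", "READDIRPLUS"} {
            fmt.Printf("%-12s -> %s\n", op, nfsOpCategory(op))
        }
    }

In a real metrics system you would apply something like this to per-operation counters from the server or client and then sum within each category, rather than looking at operations one at a time.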

Sidebar: Why this comes up less with local filesystems and devices

If what you care about is the impact that IO load is having on the system (and how much IO load there is), you don't entirely care why an IO request was issued, you only care that it was. From the disk drive's perspective, a 16 KB read is a 16 KB read, and it takes as much work to read 16 KB of a file as it does 16 KB of a directory or a free space map. This doesn't work for NFS because NFS is more abstracted, and both the number of operations and the number of bytes that flow over the wire don't necessarily give you a true picture of the impact on the server.

Of course, in these days of SSDs and complicated disk systems, just having IO read and write information may not be giving you a true picture either. With SSDs especially, we know that bursts of writes are different from sustained writes, that writing to a full disk is often different than writing to an empty one, and that apparently giving drives some idle time to do background processing and literally cool down may change their performance. But most metrics are simplifications, so we do the best we can.

(Actual read and write performance is a 'true picture' in one sense, in that it is giving you information about what results the OS is getting from the drive. But it doesn't necessarily help to tell you why, or what you can do to improve the situation.)

NFSMyMetricsSplit written at 01:44:01
