The effects of modest TCP latency (I think) on my experience with some X programs
As I mentioned, I recently had an extended outage on my home Internet. When my Internet came back, it was a little bit different. My old home Internet was DSL with 14 Mbits down, 7 Mbits up, and about 7 millisecond pings to work. The new state of my home Internet is still DSL from the same provider, but now it's 50 Mbits down, 4 Mbits up, and about 18 millisecond pings to work at the moment. When my Internet first came back, I didn't expect to feel or see any real difference in the experience. It turns out that I was naive.
(Almost all of the ping latency is in the first hop, over my DSL link. Because I have a home Prometheus setup, I actually have historical data on ping round trip times, so I can verify the pre-outage details.)
As far as I can tell, my experience of plain text mode SSH sessions is unchanged, with nothing feeling any different. Unfortunately the same is not true of my use of remote X (as forwarded over SSH). Since early 2020, I've become accustomed to doing a number of lightly graphical X things over remote X; after my link came back, all of these started feeling variously laggy and slow (especially my remote exmh, which I normally handle much of my email in). They weren't unusable, but they were sluggish enough to make me unhappy. I would type a key to take some action, and then I'd have a perceptible lag before the program's visible state updated, in a way I hadn't really experienced before.
Interestingly, not all X programs are particularly affected by this. In particular (and conveniently for me), GNU Emacs doesn't seem to be; a remote X session of GNU Emacs is quite snappy and about as responsive as a text mode version for most things (although not all of them). This has led to me suddenly being interested in reading my (N)MH email through GNU Emacs' MH-E mode (and then some other latency issues led me to build a little system for remotely opening URLs without the latency of remotely manipulating X properties). Since X text these days is all graphics (the remote client draws glyphs locally and then sends the drawn glyphs as graphical blobs), I'm not sure why this is, but part of it may be that exmh is written in Tcl/Tk, which hasn't seen much work for a long time, while GNU Emacs these days is based on modern graphical libraries that may have seen more optimization.
Now that I've written it out, it seems obvious at some level that more than doubling my ping round trip times would have a visible effect. But on the other hand, it's not particularly visible in my text typing over SSH, and the vastly increased incoming bandwidth should help with X programs pushing big glyphs or graphics blobs to me.
(I think this impact on some X programs is more likely to be from the increased latency than from my decreased upstream bandwidth, but I admit I can't be sure.)
PS: In a way this was (and is) also an interesting experience in seeing how even a bit of response lag can cause people to be unhappy. Exmh was still pretty prompt in updating to show new messages and things like that, but just a little bit of visible lag between typing an 'n' and seeing the next message displayed was enough to get to me.
What I understand about two-factor/multi-factor authentication (in 2023)
I am broadly a MFA (Multi-Factor Authentication) skeptic (cf) and as a result I don't have much exposure to it. For reasons beyond the scope of this entry, I've recently been needing to understand more than usual about how it works from the perspective of people using it, so here is my current understanding of your generally available non-hardware options that can be used in a desktop environment (security keys are out of scope).
There are three generally available and used approaches to MFA at the moment: SMS, time-based one time passwords (TOTP), and what I've heard called 'push-based approval' using smartphone apps. Of these, I believe that TOTP is the most popular, and a place that simply talks about 'MFA' is probably talking about TOTP, especially if they say they support multiple smartphone apps.
(At some point this list may include WebAuthn, but right now you mostly need a hardware security key to use it on your desktop or laptop.)
In SMS MFA, the place you're trying to log in to sends a SMS message to your phone number with a code that you have to enter (sometimes you can also get the code emailed to you). Websites vary on whether you can enroll more than one phone number for these messages. SMS MFA is considered insecure, partly because it's generally not that difficult to get someone's number 'ported' to a new device under your control, at which point you get their SMS messages. On the other hand, SMS is easy for people to start using, because practically everyone can already receive text messages.
In push-based approval, a special app on your phone gets push notifications of pending logins and asks you whether or not you approve them. With some hand waving, this is pretty secure, as it requires possession of your phone in an unlocked state and perhaps unlocking the app itself. An attacker can't step into the middle to impersonate your phone (or the app on it) the way they can with SMS and ported numbers. Again, websites and companies vary on whether you can enroll multiple devices for push based approvals. One current drawback of push based approvals is that there is no standard protocol for this, so each provider of this service has their own custom app (and, of course, you have to trust their app to not be scraping your phone for every bit of marketable information it can extract).
The third option is TOTP. In TOTP the website and you share a common secret code (often provided as a QR code) and you use a standard public algorithm to combine this secret with the current time to generate a numeric code. If you give the server the right code, the server 'knows' that you know the shared secret at this time (well, within a time window). Unlike push based approvals, there's no explicit communication between any server and the TOTP app on your phone; the app is a standalone, isolated thing. Because the algorithm is standard and public, it's been implemented by many different smartphone apps and those apps aren't tied to the website or provider they're from; any proper TOTP app can do TOTP with any proper TOTP website (or other server).
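That standard public algorithm (RFC 6238, building on RFC 4226's HOTP) is simple enough to sketch with nothing but the Python standard library. This is a minimal illustration, not a production implementation; real TOTP verifiers also check a window of adjacent time steps to tolerate clock skew:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 TOTP code from a base32 shared secret.

    The current time is divided into 'step'-second counters; the counter
    is HMAC-SHA1'ed with the shared secret and dynamically truncated to
    a short numeric code.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # 'dynamic truncation'
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides run this same computation, the server can check your code without any communication with your app; all that has to match is the shared secret and (roughly) the clock.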
Looked at from a suitable angle, TOTP is really a second password (the shared secret) with a weird way of proving to the server that you know this password. It is 'Multi-Factor Authentication' partly because you're normally supposed to use another device to generate the TOTP code, not your desktop or laptop, and partly because you don't memorize the TOTP secret, you store it somewhere. If you're logging in on your smartphone in the first place, TOTP's MFAness boils down to 'your physical phone is what knows the TOTP secret, not you', so only someone in possession of your phone (in an unlocked state) can get at it.
TOTP is a popular way of doing MFA, perhaps the most popular right now, and it's not hard to see why. It's more secure than SMS and doesn't require the website to find (and pay) a SMS provider, and while it's probably less secure than push based approval, it doesn't require a bespoke mobile application along with a push notification backend cloud server setup. There are plenty of client applications for people with smartphones to choose from and, as I understand it, the server support is relatively widely available in open source libraries.
(There are some TOTP desktop applications, but I think your choices aren't as broad or as polished as on phones. On the other hand, you can get them even on Linux.)
Websites using TOTP MFA may allow you to enroll multiple devices, each with their own TOTP secret code. However, even if they don't explicitly offer this option, there is nothing stopping you from loading the same TOTP secret code into multiple TOTP apps on multiple devices, or even directly saving the TOTP secret code so that you can later feed it into whatever you want. Websites often ask you not to do this (and especially tell you to throw away the initial TOTP secret code or QR code, not keep it anywhere people can find it), but they can't force you and they can't tell if you're doing this because there's no explicit communication between them and your TOTP app the way there is with push based approval.
(The advantage of enrolling multiple devices with separate TOTP secret codes is that you can hopefully revoke just one device's TOTP secret code if something goes wrong. If everyone has the same code, everyone has a flag day if it has to be revoked and redone. You might also get better auditing.)
This means that if for some reason you have to add MFA to a shared administrative account on some website, you're generally best off if the website supports TOTP MFA. You can probably get TOTP clients for any environment the relevant people use and load the TOTP secret code into all of them, enabling each person to MFA to the website as the shared account. You can probably even print out the QR code the website generates for you, fold it up, and seal it in a 'break glass in case of emergency' envelope in your password safe.
(Each TOTP app knows the TOTP secret code that's encoded in the original QR code, but they may well not support any way of giving it back to you, especially in usable form, partly because that's a security exposure.)
TLS CA root certificate name constraints for internal CAs
For a long time, one of the pieces of advice for dealing with various TLS certificate problems has been that you should establish your own internal Certificate Authority with its own CA root certificate, have your systems trust it, and then issue certificates from your internal CA with whatever names and other qualities you needed. My reaction to this suggestion has traditionally been that it was extremely dangerous. If your internal CA was compromised in some way you had given an attacker the ability to impersonate anything, and generally properly operating a truly secure internal CA is within neither the skills nor the budget of a typical organization or group (it's certainly not within ours). Fortunately, this issue was obvious to a lot of people for a long time, so as part of RFC 5280 we got name constraints, which restrict the names (in most contexts, the DNS names) that the CA can sign certificates for. You can include only some (sub)domains, or exclude some.
(So, for example, you could make an internal CA for your BMC IPMI web servers that was only valid for '.ipmi.internal.example.com'.)
All of this sounds good. However, in the real world, some things appear to have intervened. To start with, TLS libraries, browsers, and so on didn't immediately add support for these name constraints; as a result, even today you probably want to do some testing to see if your particular environment does (possibly using some resources from BetterTLS). The good news is that according to this 2020 article, browsers now support this, which is probably the most important case. Another issue is that creating TLS CA certificates with name constraints isn't the easiest thing in the world, at least with OpenSSL; other tools may be better, but I haven't looked for any.
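To show what the OpenSSL side looks like, here is a hypothetical sketch of creating a name-constrained internal CA root for the IPMI example above. The file names, domain, and key parameters are all made up for illustration, and you should verify the resulting certificate (as the last command does) rather than trusting that the constraint took effect:

```shell
# Write a minimal OpenSSL config with a critical name constraint.
cat > ipmi-ca.cnf <<'EOF'
[ req ]
distinguished_name = req_dn
x509_extensions = v3_ca
prompt = no

[ req_dn ]
CN = Internal IPMI CA

[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
# Only permit leaf certificates under .ipmi.internal.example.com
nameConstraints = critical, permitted;DNS:.ipmi.internal.example.com
EOF

# Generate a self-signed CA root certificate using those extensions.
openssl req -x509 -newkey rsa:3072 -nodes -keyout ipmi-ca.key \
    -out ipmi-ca.crt -days 1825 -config ipmi-ca.cnf

# Check that the name constraint actually made it into the certificate.
openssl x509 -in ipmi-ca.crt -noout -text | grep -A2 'Name Constraints'
```

(In OpenSSL's syntax a leading dot in the DNS constraint matches subdomains; the exact matching semantics are one of the things worth testing in your environment.)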
(I care about how easy and straightforward it is to add name constraints because if it's tricky, we're going to need to test that we actually did it right. I can imagine unpleasant scenarios where we think we've created a CA root certificate with name constraints but we actually haven't.)
A third issue is that until Chrome 112 in April 2023, Chrome didn't pay any attention to name constraints on CA root certificates, based on its interpretation of RFC 5280 certificate validation. As I understand it, until then Chrome only applied name constraints from intermediate CA certificates; the root CA certificate was unconstrained. This is not exactly useful if you're worried about an attacker managing to compromise your root CA key in some way. Other TLS code and TLS libraries may have similar issues, although if you test them directly you can know for yourself.
(Looking at Go, since it's one of my areas of interest, it appears to support name constraints on CA root certificates and enforces them. See src/crypto/x509/name_constraints_test.go.)
We don't currently have any general internal CAs, although we have a special purpose one for OpenVPN. If we ever set one up for some reason, I'm going to try to make sure to give it a name constraint, and ideally as narrow a one as possible.
YAML is an okay enough configuration file format
Ever since we set up Prometheus, I've had to deal with everyone's favorite configuration syntax to hate, YAML. Although YAML isn't universal in the Prometheus and Grafana ecosystem, it's pretty pervasive and many components and things you want to use are configured using it as the configuration syntax, so I've had to write and read plenty of it. While I have my issues with YAML, over time I've come to feel that it's an okay enough syntax and that often, the big picture issues aren't because of its syntax.
There have definitely been general languages for configuration that I am distinctly not fond of (I have a low opinion of writing XML by hand, for example). I don't find YAML to be like this. The syntax is simultaneously picky and lax, and deeply nested things can be hard to follow, but overall it's inoffensive to write, modify, and read (although you really want to get your editor to cooperate with its indentation; YAML is the one thing I actively configure vim for).
There are simpler formats for simple situations, such as TOML, but YAML has mostly won in practice in the areas that I work in. I believe that Python has steadily moved toward liking TOML, to the extent that tomllib is now in Python's standard library. In a way I wholeheartedly support that; if your program needs enough of a complex configuration that TOML doesn't work well, you probably should take the effort to create a focused configuration language for it instead of leaning on a serialization format. But as what is fundamentally a serialization format, YAML is okay.
(Well, the subset of YAML that people use in practice is okay. There are some esoteric features that people mostly don't touch, for good reason, like repeated nodes that use '&' and '*'.)
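For illustration, the repeated-node feature looks something like this (a made-up, Prometheus-flavored fragment; note that the '<<' merge key is a YAML 1.1 extension that many but not all parsers support):

```yaml
defaults: &defaults          # '&' marks this mapping as an anchor named "defaults"
  scrape_interval: 15s
  scrape_timeout: 10s

job_one:
  <<: *defaults              # '*' refers back to the anchor; '<<' merges its keys in
  scrape_interval: 30s       # locally specified keys override merged ones
```

It's compact, but you can see why people mostly avoid it; following what a deeply nested configuration actually contains gets harder once values come from anchors defined somewhere else in the file.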
It feels a bit heretical to say this, but sometimes there are things that it's not worth having really strongly expressed views on. For me, YAML is one of those things. I may not really like it but I can certainly live with it if a project decides to use it. I'm not going to pick one project over another merely because one uses TOML and the other YAML, for example.
(This is pretty much a system administrator's view, which is to say the view of someone who uses systems configured with YAML and writes their configuration files. Programmers who have to decide how their system is configured can and probably should have stronger views and better reasons for picking one particular format than 'it's inoffensive and it's there'.)
One challenge in reducing TLS certificate lifetimes down to 90 days
Back in March, the Chrome team said that they wanted to reduce the maximum TLS certificate duration down to 90 days (because I'm not always completely in touch with the TLS ecology, I only found out about this recently). In general I'm in favour of short TLS certificate lifetimes and in automation for TLS certificate renewals and deployment, so you might expect me to be all in favour of this. But I actually think that this proposal would cause real problems and get significant pushback from people.
(The reduction in certificate lifetime wouldn't directly affect my group, since we already get all of our TLS certificates from Let's Encrypt, which only gives out 90 day certificates.)
The problem I see is black box devices with TLS that aren't built with support for automated (certificate) management and deployment, and instead only support manual installation of new TLS certificates (for example, through an administrative web interface). One such class of devices that I'm painfully familiar with is server management processors (BMCs). A typical BMC generates (or comes with) a self-signed TLS certificate but provides some way for you to equip it with a proper TLS certificate through its web interface. We don't bother to go through the hassle of giving our BMCs proper public DNS names and then getting proper TLS certificates for them, but I'm sure there are some people who do. And I'm also sure that there are plenty of other types of black box devices and appliances out there that have similar features for their TLS support.
This sort of manual update is tolerable if you only have to do it rarely (and you don't have too many of the things to do it to). If you keep having to do it every 80 days or so, people are going to be rather unhappy. Many of these people will be in small organizations (because that's the kind of place that buys black box devices) and so not well placed to spend a bunch of money to upgrade their devices, or spend a bunch of staff time to try to automate this from the browser, or get their voices heard about the problems.
In an ideal world all of these devices would get replaced with ones that have interfaces and APIs for automated TLS certificate deployment. In the real world, that will take years even if tomorrow all TLS certificates became valid for only 90 days, and so the vendors of these devices were immediately forced into developing it.
(These devices aren't necessarily directly connected to the Internet, so it isn't sufficient for them to have ACME clients, although for some of them it would be a nice extra. In general they need a way to push a TLS certificate to them, often along with a private key for it.)
CLAs create different issues than making (small) open source contributions
I've seen a view expressed that Contributor License Agreements are only a small extra piece of formality over contributing bugfixes and other open source changes. I think this is wrong. Often, the decisions that are made over whether or not to contribute changes to open source projects are significantly different than the decisions that must be made over CLAs, such that my university and similar institutions have little to lose from the former and a great deal to lose from the latter.
When I make a bugfix or some other small change to an open source project's source code as part of my work, my university has only two real options for what to do with it; we can either keep the change private or publish it under the original open source license used by the project. Since universities are nominally not in competition with each other in the way that companies are and are instead into sharing things with the world, this is an easy and uncontroversial call for everyone to make. There is pretty much no reason not to share such small things.
(In a company, sharing your bugfixes for an open source project may help your competitors who also use the project, so you have some reason to keep them private. For large changes, the code I write might in theory reveal intellectual property that the university would like to keep private in order to patent or otherwise license, and in general might give the university some leverage to negotiate license changes or other things with the project. We have no leverage for small bugfixes or changes.)
A Contributor License Agreement is a legal document and a legal agreement. No institution enters into legal agreements without care, and I am specifically not authorized to enter into such agreements on behalf of my institution; very few people are, and they are all busy and senior. As with any legal document, signing a CLA requires the institution's lawyers to scrutinize the terms to see if there's anything dangerous we're accepting in the process, because a CLA may contain all sorts of surprising clauses and grants that the institution specifically agrees that it's giving the other party. This makes CLAs not anywhere near as simple as 'do we publish this change under the project's open source license or keep it private'. Signing a CLA is not at all the same as publishing a change under the open source license its project requires, especially if the project uses a standard, widely known open source license or a very close variation of it.
(And releasing something under what is fundamentally a copyright grant is quite different from executing a signed agreement with a specific counterparty, who may acquire new legal rights or causes of action against you due to clauses in the agreement.)
It's not at all odd or unusual that it's much easier to do one than the other at my institution. Probably this is the case at any number of organizations. This is a big factor in why CLAs impede modest contributions, even if and when the organization is fully in favour of publishing and sharing such things. One corollary is that it's extremely unwise to assume that someone's inability to execute a CLA means that they can't actually publish or share their change.
Requiring a CLA is a strong move by the owner of the project. It says that they would rather have fewer legitimate, fully allowed changes because they want all of the changes they do accept to be fully covered by their chosen license agreement (whatever its terms are, and sometimes these will include 'we can later relicense your code on any terms we choose, including commercial licenses only').
PS: This doesn't make CLAs intrinsically bad. I accept that there are some organizations that are sufficiently large lawsuit targets that they feel they need to take strong defensive measures, and CLAs are one of those measures. I do feel unhappy when such organizations react to bug reports with 'please write the small patch for us, and by the way you need to sign a CLA'.
Contributor License Agreements (CLAs) impede modest contributions
Over on the Fediverse, I said something:
As a university employee, can I sign an individual CLA to contribute a bugfix I made while at work, for something we use at work? I don't know, but I'm also pretty certain that I can't get the university's lawyers and senior management to come near your organizational CLA, and neither my management nor the university's lawyers probably want to even look into the individual CLA issue.
So basically a CLA means I'm not sending in our bug fixes. Not because I'm nasty, but because I can't.
I have some views on CLAs in general, but those only really apply to work I might do on my own. If I'm doing things as part of work, the university can decide whether or not to keep it private or send it upstream and by default not carrying private changes is easier and better (even if this feeds someone's monetization in the end).
However, as far as I know (and I did look), my university has no blanket policy on employees signing individual CLAs to contribute work they did on university time. Obtaining permission from the university would likely take multiple people each spending some time on this. Many of them are busy people, and beyond that you might as well think of this as a meeting where all of us are sitting around a table for perhaps half an hour, and we all know how much meetings cost once you multiply the cost of each person's time out. Universities may feel that staff time is often almost free, but that isn't universal and there are limits.
Things get much worse if the university would have to sign some sort of group or institutional CLA. Officially signing agreements on the behalf of (parts of) the university is a serious matter, as it should be. There is no such thing as a trivial legal agreement for an institution, especially an institution that's engaged in intellectual property licensing (possibly with one of the very companies that it previously executed an institutional CLA with).
The university and its sub-parts could probably overcome all of this if we were doing something large and significant; if someone's research group was collaborating, or a PhD student was doing a major chunk of work, or the like (and research work is somewhat different than work done by staff). But for a modest or trivial change? Forget it.
This doesn't make me happy. If I have a simple bugfix and I can make a trivial change and contribute it as a pull request, that's a win over filing a bug report and forcing other people to duplicate work I may already have done privately. But that's life in the land of CLAs. When you require CLAs, you're creating barriers to contributions.
(The same is true of a requirement for copyright assignment, although probably less obviously.)
A pointless review of my (current) favorite mouse, the Contour optical mouse
The short background is that I'm strongly attached to real three button mice (mice where the middle mouse button is not just a scroll wheel), for good reason, but what I really wanted was a three button mouse that also had a scroll wheel. For a long time people pointed me to the Contour mouse as the single mouse they knew of that featured this (see comments here and here), but I kept balking at the price. Then in late 2015, I finally talked myself into spending the money to get a Contour (it's a bit like talking myself into a decent desk chair), and once I started using it I came to love it. I always told myself I would write a review someday, but I never got around to it. Then I discovered that this mouse has been discontinued, which is why I call this a pointless review.
(Before I got the Contour I tried out a hack with two mice.)
The Contour (Optical) mouse is an ergonomic mouse that has three old fashioned mouse buttons on the top (like the HP 3 button mouse I once reviewed), plus a scroll wheel and a rocker button on the side where your thumb rests (well, the scroll wheel is above and the rocker button below; my thumb naturally rests comfortably between them). Because the scroll wheel has to be on the thumb side, the mouse comes in right and left handed versions, and also in three different sizes. The 'ergonomic' bit is mostly that the back of the mouse is comfortably shaped for my palm and the front mouse button area slopes down to one side (to the right on a right handed mouse).
When I started using the Contour, I thought the rocker button was a bit silly, especially since all it did for me by default was go back and forward in web browsers. Since then I've become quietly addicted to it to the point where I get irritated when it doesn't work in some browser (or some browser context) or browser-like environment. I almost never use the keyboard or click the buttons; I move my thumb down slightly and flick the rocker in the appropriate direction. It's so automatic now that I don't think about it.
(In X, the rocker button generates button 8 and button 9 events, which is apparently the standard for this.)
In general, the Contour is (and has been) everything I could ask from a mouse. It's been comfortable, responsive when I move it around, and the mouse buttons and the scrollwheel all work fine. It feels quite natural to work the scrollwheel with my thumb, especially scrolling down. People who need to move the scroll wheel long ranges might feel slightly differently, but I don't consciously notice re-positioning my thumb to start scrolling again.
Except that we now get to a slight fly in the ointment that was one factor delaying this review, because there have been at least three versions of the Contour mouse. The first version of the Contour that I received had a conventional (black) scroll wheel with conventional and readily apparent click stops in the scrolling action. However, another one I received later had changed the scroll wheel action to be basically free of stops, which was apparently a deliberate change in the name of ergonomics (you're pushing less hard to move the scroll wheel) but had the side effect of making the scroll wheel much more hair trigger. On this Contour, brushing my thumb against the scroll wheel with a little too much friction could trigger an inadvertent scroll action, which was a little too easy to do when just moving the mouse around.
The third version of the Contour is the one that I received when I hastily bought some spares after I found out it had been discontinued. This version has a chunkier knurled scroll wheel (that's not all black), and in a quick test its scroll wheel action is back to the standard click stop style of regular mouse scroll wheels. I haven't used these so I can't comment about how the new scroll wheel feels in real use.
Overall I'm sad to see the Contour mouse be discontinued. Not only was it a good mouse (even in the hair trigger scroll wheel variation), but as far as I know this leaves us with no free standing relatively conventional mouse with three top buttons and a side scroll wheel.
PS: Contour still makes a variety of mice and other ergonomic things, but not this specific 'three buttons with side scroll wheel and rocker button' mouse. Past versions of this mouse were known as the 'Contour Perfit' or 'Perfit Optical'. My two currently active Contour mice report in Linux lsusb as 'Perfit Optical' (the older, clickier scroll wheel) and 'Contour Mouse' (the newer smooth scroll wheel). I believe the newest one also reports as 'Contour Mouse'. These different generations of mice may also use different USB versions, although it's hard for me to tell right now.
(I believe some versions of this mouse may have been wireless. I have the wired USB version, partly because I'm not a believer in wireless mice. Or wireless keyboards.)
The tangled problems of asking for people's '(full) legal name'
One response to my entry on the problems with 'first' and 'last' name data fields is that one should make forms that (only) ask for someone's legally recognized name, which should be unambiguous and complete. While superficially appealing, this is a terrible minefield that you should never step into unless you absolutely have to, which is generally because you are legally required to collect this information.
The first question is what you mean by legally recognized name or 'legal name'. I have several pieces of government ID and some well-attested things like credit cards (which are normally supposed to be in your name), and even the government IDs don't always have exactly the same name, never mind the credit cards. Depending on what you're doing with my name and what you need it to match, I would need to give you some different version of it. If I don't know why you're specifically demanding my legal name, I'm going to have to guess which one you need and the one you get may not be the one you want.
(If you really insist on legally recognized names and you deal with non-English people, be prepared to accept all sorts of Unicode input in non-English languages. The true legal name of a Japanese, Chinese, Korean, Egyptian, etc person is not written in Latin characters, and even Western names are not infrequently written with some accented Latin characters. Legal names absolutely do not fit in plain ASCII. If you're asking for 'legal name, but in the Latin alphabet', well, that's certainly something.)
The second issue (not so much a question) is who you are to be demanding to know the name on my government ID. If you ask for my legally recognized name, I am going to require you to explain why you specifically need that, instead of the name that I commonly go by or that I want to give you. If you are doing this to send me friendly greetings, using my full legal name is not the way to do it; you should be using whatever name I want to give you for this. If you're doing this to show my name to other people, even on purely functional grounds I want you to use the name that those people will know me by, not the full, formal legal name I only use in interactions with the government.
(And I'm someone in a position of privilege where it's not particularly dangerous for me to be known to your random service by my real world name (or even my real world picture, not that I want you to have that either). This is very much not always the case for people; real name only policies are toxic and dangerous for various reasons, and forcing them is being evil.)
The third issue is that people not infrequently have good reasons to not be addressed or known by their current legal name but instead by another name of their choice. One example is that in the West, a number of women (although not all) will change their last name under various circumstances. There are situations where the legal change to their chosen new last name will lag the actual desire to use that last name. If you insist on people using their legally recognized name, you're inflicting pain in the same way that not allowing people to change their logins does, and on the flipside you may be forcing people to broadcast changes in their status before they want to.
There are relatively few situations where you actually need to know someone's legally recognized name as opposed to what they want you to call them, and you should never ask for it unless you're actually in one of those situations. Otherwise, you and everyone else are much better off if you simply ask people for their name, in the sense of 'what do you want to be called'.
(And of course you need to allow people to change their name, legal or otherwise, because people's names do change.)
Good RPC systems versus basic 'RPC systems'
In my entry on how HTTP has become the default, universal communication protocol, I mentioned that HTTP's conceptual model was simple enough that it was easy to view it (plus JSON) as an RPC (Remote Procedure Call) system. I saw some reactions that took issue with this (eg comments here), because HTTP (plus JSON) lacks a lot of features of real RPC systems. This is true, but I maintain that it's incomplete, because there's a difference between a good RPC system and something that people press into service to do RPC with.
Full scale RPC systems have a bunch of features beyond the RPC basics of 'request that <thing> be done and get a result'. In particular, they generally have introspection and metadata related operations, where you can ask what RPC services exist, what operations they support, and perhaps what arguments they take and what they return. Often they have (or will eventually grow) some sort of support for versioning. Although it's usually described as a message bus instead of an RPC system, Linux's D-Bus is a good example of this sort of full scale RPC system (including features like service registration).
(Large scale RPC systems may or may not have explicit schemas that exist outside of the source code, but generally the idea is there. Historically, some large RPC systems have tried to generate both client and server interface code from schemas, and people have sometimes not felt happy with the end result.)
These RPC system features haven't been added because the programmers involved thought they were neat. Full scale RPC systems are designed with these features (or have them added) because these features become increasingly useful when you operate RPC systems at scale, both in how big your systems are now and in how long you'll operate them. Sooner or later you really want ways to find out what versions of what services are registered and active, and introspection tools help supplement never up to date documentation (or reading the source) when you have to interact with someone else's RPC endpoint (or provide a new endpoint for a service where you need to interact with existing callers).
However, programmers don't need these features to do basic RPC things. What programmers often start out wanting (and building) is an interface that looks like 'res, err := MyRPC(some-name).MyCall(...)'. Maybe there's a connection pool and so on behind the scenes in the library, but the programmers using this system don't have to care. And you can easily and naturally use HTTP (with JSON payloads) to implement this sort of basic RPC system. Your 'some-name' is a URL, your MyCall() packs up everything in a JSON payload and returns results usually generated from a JSON reply, and so on. On the server side, your RPC handling is equally straightforward; you attach handlers to URLs, extract JSON, do operations, create reply JSON, and so on. Since HTTP has become so universal, libraries and packages for doing this are widely available, making such a basic RPC system quite straightforward to code up on top of them. Plus, you can test and even use this basic RPC system with readily available tools like 'curl' (for example, using curl to query your service by hand).
(If you need authentication you may need to do some additional work, but this sort of thing is often used for basic internal services.)
It's not particularly easy or straightforward to make a HTTP based system into a good RPC system. But often you can get away with a basic HTTP based 'RPC' system for a surprisingly long time, and it may be the best or easiest option when you're just starting out.
(The history of programming has any number of things that were built to be good general RPC systems, but didn't catch on well enough to survive and prosper. See, for example, this list in the Wikipedia page on RPC; some of these are still alive and in active use, but none of them have achieved the kind of universality that HTTP plus JSON has.)