Wandering Thoughts

2019-04-14

A VPN for me but not you: a surprise when tethering to my phone

My phone supports VPNs, of course, and I have it set up to talk to our work VPN. This is convenient for reasons beyond mere privacy when I'm using it over networks I don't entirely trust; there are various systems at work that can only be reached from 'inside' machines (including the VPN server), or which are easier to use that way.

My phone also supports tethering other devices to it to give them Internet access through the phone's connection (whatever that is at the time). This is built in to iOS as a standard function, not supplied through a provider addition or feature (as far as I know Apple doesn't allow cellular providers any control over whether iOS allows tethering to be used), and is something that I wind up using periodically.

As I found out the first time I tried to do both at once, my phone has what I consider an oddity: only the phone's own traffic uses the VPN, not the traffic from any tethered devices. They're on their own, which is sometimes inconvenient for me. Things would be a fair bit easier if any random machine I tethered to the phone could take advantage of the phone's VPN instead of having to set up a VPN configuration itself.

(In fact we've had problems on our VPN servers in the past when there were multiple VPN connections from the same public IP, which is what I'd get if I had both the phone and a tethered machine using the VPN at the same time. I think those problems have since been fixed, although I'm not sure.)

As far as I know, there is no technical requirement that forces this; in general you certainly could route NAT'd tethered traffic through the VPN connection too. If anything, my phone may have to go out of its way to route locally originated traffic in one way and tethered traffic in another way (although this depends on how NAT and VPNs interact in the iOS kernel). Doing things this way seems likely to be mostly or entirely a policy decision, especially by now (after so many years of iOS development, and a succession of people asking about this on the Internet, and so on).
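(As a purely illustrative sketch, here is roughly what 'route the tethered traffic through the VPN too' looks like on a Linux router, using policy routing and NAT; the subnet, interface name, and routing table number are all made up for the example, and of course this is not what iOS actually does internally.)

    import subprocess

    # Illustration only: push traffic from a hypothetical tethered subnet out a
    # hypothetical VPN interface on a Linux router.  Names and numbers are invented.
    TETHER_NET = "192.168.44.0/24"   # the tethered clients
    VPN_IF = "tun0"                  # the VPN tunnel interface
    TABLE = "100"                    # a spare routing table

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Give the tethered subnet its own routing table whose default route is the
    # VPN, then NAT that traffic out the VPN interface.
    run("ip", "rule", "add", "from", TETHER_NET, "table", TABLE)
    run("ip", "route", "add", "default", "dev", VPN_IF, "table", TABLE)
    run("iptables", "-t", "nat", "-A", "POSTROUTING",
        "-s", TETHER_NET, "-o", VPN_IF, "-j", "MASQUERADE")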

(I don't currently have a position on whether it's a good or a bad policy decision, although I think it is a bit surprising. I certainly expected tethered traffic to be handled just the same way as local traffic from the phone itself.)

IPhoneExclusiveVPN written at 20:59:30

2019-04-03

NVMe and an interesting technology change

Back in the middle of 2015, I wrote an entry on sorting out NVMe as the next way to connect SSDs to your system. Someone I know online was recently reading it, and he mentioned that he'd never heard of the 'U.2' connector that I talked about in that entry, which I in fact described as the way that future NVMe SSDs would be connected to your machine. In an aside in that entry, I wrote:

(PCIe and thus NVMe can also be connected up with a less popular connector standard called M.2. [...])

In 2015 and even 2016, U.2 was a reasonably big thing; you can read one example of it (and see some pictures of a U.2 connector and U.2 SSD) in places like this April 2016 article. Since then it has quietly faded away, swept aside by that 'less popular' M.2 standard that I mentioned in my aside. In fact the Wikipedia page on U.2 is pretty amusing (at least right now) due to its 'U.2 compared to M.2' section, which comes across pretty much as the last cry of people who cannot stand to see their beautiful thing crushed by the marketplace. U.2's fade out evidently didn't take all that long either, since when I did a late 2017 entry on sorting out M.2 and NVMe, it wasn't even on my radar and certainly didn't show up on any of the motherboards I looked at at the time.

(U.2 is apparently still sort of out there, especially in datacenter applications, and it seems to show up every so often on semi-consumer motherboards. See eg here.)

Technology change and failed standards are not exactly new to the PC world, but for me this is still an interesting and impressive example of it in action. U.2 was the obvious thing in the middle of 2015, and then two years later it had just disappeared completely.

(Possibly even in 2015 U.2 was more hype than anything else and I was taken in by it when I wrote my entry.)

Sidebar: Some speculations on why M.2 won out over U.2

I can think of a number of plausible contributing factors:

  • M.2 was easier to put into laptops, because it required fewer parts (eg no cables). That gave it volume and we all know volume drives down price.

  • M.2 was more flexible, since an M.2 connector can be used for either SATA or NVMe. Both manufacturers and consumers could trade off cost versus performance without having to change anything other than the M.2 card in use.

  • Most people don't use more than one or two drives in their computers.

  • Most people prefer the simplicity of plugging an M.2 card into a motherboard connector rather than mounting a separate drive and running cables to it.

  • Intel didn't make enough PCIe lanes available in consumer chipsets to run enough U.2 drives to be attractive. Whether U.2 or M.2, Intel was always only going to give you one or two.

  • M.2 SSDs (whether NVMe or SATA) are cheaper to make than U.2 SSDs.

While U.2 theoretically makes it easier to have larger NVMe SSDs, my impression is that in the consumer market the largest limiting factor on SSD sizes is how much people have been willing to pay for them. This certainly is the case for me.

(In the enterprise market I've read stories saying that the limit is how much data loss people want to be exposed to from one device failing.)

NVMeAndTechChange written at 01:12:39

2019-03-29

My NVMe versus SSD uncertainty (and hesitation)

One of the things I sometimes consider for my current home machine is adding a mirrored pair of NVMe drives (to go with my mirrored pair of SSDs and mirrored pair of HDs). So far I haven't really gotten very serious about this, though, and one of the reasons is that I'm not sure how much faster NVMe drives are going to be for me in practice and thus whether it is worth getting higher end NVMe drives or not.

(Another reason for not doing anything is that the cost per GB keeps dropping, so my inertia keeps being rewarded with more potential storage for the same potential cost.)

NVMe drives clearly have the possibility of being dramatically faster than SATA SSDs can be; their performance numbers generally cite high bandwidth, a high number of IOPS, and low latency (sometimes implicitly, sometimes explicitly). Assuming that these numbers are relatively real, there are two obstacles to achieving them in practice: the OS driver stack and your (my) actual usage.

In order to deliver high bandwidth, many IOPS, and low latency for disk IO, the OS needs to have a highly efficient block IO system and good drivers. Extremely high-speed (and extremely parallel) 'disk' IO devices upend various traditional OS assumptions about how disk IO should be organized and optimized, and I'm not sure what the state of this is today for Linux. NVMe drives are now widely used, especially on high performance servers, so in theory things should be reasonably good here.

(Of course, one answer is that OS performance is only going to improve from here even with today's NVMe drives. But then my inertia whispers in my ear that I might as well wait for that to happen before I get some NVMe drives.)

The actual usage issue is that if you don't use your drives, how well they perform is mostly irrelevant. I have divided thoughts here. On the one hand, I don't really do much IO on my home machine these days, and much of what I do is not high priority (I may compile Firefox from source periodically, but I generally don't care if that's a bit faster or slower since I'm not actually waiting for it). On the other hand, I might well notice a significant improvement in latency even for infrequent disk IO activity, because it's activity that I'm waiting on (whether or not I realize it). Modern systems and programs go to disk for all sorts of reasons, often without making it clear to you that they are.

(In the days of loud HDs you could hear your disks working away, but SSDs are silent.)
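(If I wanted to put rough numbers on that latency difference, even a crude random read timing sketch like the following would do. It's only a sketch; the test file path is arbitrary, and since it makes no attempt to bypass the page cache, repeat runs will mostly measure RAM rather than the drive.)

    import os, random, time

    # Rough sketch: time small random reads from a (hypothetical) large test file.
    PATH = "/var/tmp/testfile"   # some big file on the drive being tested
    BLOCK = 4096
    READS = 1000

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    times = []
    for _ in range(READS):
        offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)
        start = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        times.append(time.perf_counter() - start)
    os.close(fd)

    times.sort()
    print(f"median {times[len(times)//2]*1e6:.1f} us, "
          f"p99 {times[int(len(times)*0.99)]*1e6:.1f} us")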

Even given this, I do sometimes do IO intensive activities that I would really quite like to go fast. One periodic case of this is upgrading Fedora from release to release, which requires updating most of what is now over 5,000 RPM packages (with over 760,000 files and directories). This is a disk intensive process and I saw a real speed difference going from HDs to SSDs; I wouldn't mind seeing another one with a move to NVMe, assuming that NVMe would or could actually deliver one here.

PS: Since my major use of any new space will be for a ZFS pool, another question is how well ZFS itself is prepared for the performance characteristics of NVMe drives (well, ZFS on Linux). See, for example, this recent ZoL issue.

NVMeVsSSDUncertainty written at 00:33:24

2019-03-25

The mystery of my desktop that locks up when it gets too cold

This winter I have been having a sporadic but surprisingly consistent problem with my home desktop computer, which is that, well, here's my Tweet:

So I definitely appear to have the most ironic of things, a computer that doesn't like it when it's too cold. I don't think it's the CPU fan or the CPU, either, which leaves me strongly suspecting the motherboard. I guess it's time to update to the latest BIOS.

(Sadly, updating to the latest BIOS didn't fix the problem.)

What has been happening sporadically all winter is that when the ambient temperature around my home desktop drops too low, the machine will lock up. When the ambient temperature rises again, the machine boots up again. I have not nailed down exactly what 'too low' is, but based on motherboard sensor readings and an external electronic thermometer, it is around 60 F ambient outside my case and no more than 68 F inside it. Since this is a far lower temperature than I'm comfortable with, it only happens at times when I'm nowhere near the computer. The CPU temperature appears to be irrelevant; I have run CPU soakers that kept the CPU temperature fairly toasty warm and the CPU fan actively working away, and still had the machine lock up.
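(If I wanted a continuous record of those motherboard sensor readings to line up against lockups afterward, a rough logger along the following lines would do. It just dumps every hwmon temperature it can find once a minute; exactly which sensors show up depends on the motherboard and its drivers.)

    import glob, time

    # Rough sketch: periodically log all Linux hwmon temperature sensors.
    def read_temps():
        readings = {}
        for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
            try:
                name = open(hwmon + "/name").read().strip()
            except OSError:
                continue
            for temp in glob.glob(hwmon + "/temp*_input"):
                try:
                    value = int(open(temp).read()) / 1000.0  # millidegrees C
                except (OSError, ValueError):
                    continue
                readings[f"{name}:{temp.rsplit('/', 1)[-1]}"] = value
        return readings

    while True:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), read_temps(), flush=True)
        time.sleep(60)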

One time I deliberately created a low temperature situation where I could see the machine in this state after it had locked up, before the temperature rose again and it came back to life. When I observed it in its locked up state, the power lights were on and all of the fans were spinning but power was not being provided to my USB keyboard, and power-cycling the machine didn't bring it back to life (only letting the heat come on did).

(One might wonder why I didn't see this last winter, but as far as I can tell I only put this desktop together in late March of 2018, which is after the really cold weather that induces such low interior ambient temperatures.)

This is a genuine and somewhat frustrating mystery. I have no idea what the cause is, apart from that it seems most likely to be something related to the motherboard, although I suppose it could also be the power supply sagging the voltage on one or more rails when it gets too cold. One of the possibilities that worries me now is that sufficient cold is making various metal parts shrink and move enough that they're creating some sort of short that shuts things down; this could be a fault in the motherboard, or something about how I put the machine together.

(In theory I could completely disassemble and reassemble the machine, to re-seat and re-do all connectors and so on. In practice I have very little enthusiasm for that, especially taking apart the CPU and the CPU cooler.)

ColdLockupMachineMystery written at 21:11:23

2019-02-21

An advantage of tablets and two-in-one devices over small laptops

For reasons beyond the scope of this entry, I spent a certain amount of today just sitting around somewhere. Wisely, I took my iPad along and used it to pass the time, including doing a certain amount of productive work. As a result of my usage today, I have formed an opinion about why I think tablets (with physical keyboards) are often a superior option to a small laptop of roughly the same physical size.

To put it simply, what I found is that my iPad's display wasn't tall enough when in horizontal mode and wasn't wide enough in vertical mode. As a result, I kept going back and forth between the two orientations depending on what I was doing at the time and what I wanted to do (sometimes swapping even in the same SSH session). This limitation is essentially intrinsic to the relatively small form factor; there is no way around it. And the advantage of a tablet is that it swaps fluidly between horizontal and vertical orientation.

(This works fluidly for me with my iPad because the physical keyboard is also the cover and folds away when the iPad is in vertical orientation.)

A traditional laptop is essentially locked to a horizontal orientation because of its physical construction as a clamshell; if you turn it vertically, you still have the keyboard half of the clamshell sitting there in the way, making your handling awkward and distorting the balance. To make it work well in vertical orientation you need something more sophisticated and mechanically complex (and thus more expensive), something that is not really a straightforward laptop any more.

This tablet advantage falls away as the screen size grows so that it's tall enough even when horizontal. Tastes will differ for when this happens; for me, even the XPS 13 is on the edge (and it's significantly bigger than my iPad). But I'm spoiled by my large desktop screens.

(There are also real advantages to a relatively small form factor in a device that is intended to be easily handled and easy to use in a wide variety of situations. My iPad is a comfortable lap device in a way that an XPS 13 sized tablet wouldn't really be.)

PS: Neither the iPad nor a conventional two-in-one device has a good solution for using the device in vertical orientation with a physical keyboard. The iPad's keyboard only works in horizontal mode, and the two-in-one devices I've seen all fold the keyboard away for vertical use. This is an annoying constraint for things like SSH sessions; if I want more vertical space, or just not to flop the iPad sideways temporarily as I take a brief look at something, I have to give up a physical keyboard.

TabletVsSmallLaptopAdvantage written at 23:39:46

2019-02-14

A pleasant surprise with a Thunderbolt 3 10G-T Ethernet adapter

Recently, I tweeted:

I probably shouldn't be surprised that a Thunderbolt 10G-T Ethernet adapter can do real bidirectional 10G on my Fedora laptop (a Dell XPS 13), but I'm still pleased.

(I am still sort of living in the USB 2 'if it plugs in, it's guaranteed to be slow' era.)

There are two parts to my pleasant surprise here. The first part is simply that a Thunderbolt 3 device really did work fast, as advertised, because I'm quite used to nominally high-speed external connection standards that do not deliver their rated speeds in practice for whatever reason (sometimes including that the makers of external devices cannot be bothered to engineer them to run at full speed). Having a Thunderbolt 3 device actually work feels novel, especially when I know that Thunderbolt 3 basically extends some PCIe lanes out over a cable.

(I know intellectually that PCIe can be extended off the motherboard and outside the machine, but it still feels like magic to actually see it in action.)

The second part of the surprise is that my garden variety vintage 2017 Dell XPS 13 laptop could actually drive 10G-T Ethernet at essentially full speed, and in both directions at once. I'm sure that some of this is in the Thunderbolt 3 10G-T adapter, but still; I'm not used to thinking of garden variety laptops as being that capable. It's certainly more than I was hoping for and means that the adapter is more useful than we expected for our purposes.
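(The measurement itself is simple in principle; the following is a bare-bones sketch of a one-way TCP throughput test in Python, with an arbitrary port and block size. In practice you want a real tool such as iperf3, especially for bidirectional testing, not least because Python itself can easily become the bottleneck well before 10G.)

    import socket, sys, time

    # Rough sketch of a one-way TCP throughput test; run with no arguments on the
    # receiver and with the receiver's hostname on the sender.
    PORT = 12345          # arbitrary port for the illustration
    CHUNK = 1 << 20       # move data in 1 MiB pieces
    SECONDS = 10          # how long the sender transmits

    def serve():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                total = 0
                start = time.monotonic()
                while data := conn.recv(CHUNK):
                    total += len(data)
                elapsed = time.monotonic() - start
            print(f"received {total * 8 / elapsed / 1e9:.2f} Gbit/s")

    def send(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            end = time.monotonic() + SECONDS
            while time.monotonic() < end:
                conn.sendall(buf)

    if __name__ == "__main__":
        serve() if len(sys.argv) == 1 else send(sys.argv[1])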

This experience has also sparked some thoughts about Thunderbolt 3 on desktops, because plugging this in to my laptop was a lot more pleasant an experience than opening up a desktop case to put a card in, which is what I'm going to need to do on my work desktop if I need to test a 10G thing with it someday. Unfortunately it's not clear to me if there even are general purpose PC Thunderbolt 3 PCIe cards today (ones that will go in any PCIe x4 slot on any motherboard), and if there are, it looks like they're moderately expensive. Perhaps in four or five years, my next desktop will have a Thunderbolt 3 port or two on the motherboard.

(We don't have enough 10G cards and they aren't cheap enough that I can leave one permanently in my desktop.)

PS: My home machine can apparently use some specific add-on Thunderbolt 3 cards, such as this Asus one, but my work desktop is an AMD Ryzen based machine, and Ryzen systems seem to be out of luck right now. Even the add-on cards are not inexpensive.

Thunderbolt10GSurprise written at 23:07:03

2019-02-10

Open protocols can evolve fast if they're willing to break other people

A while back I read an entry from Pete Zaitcev, where he said, among other things:

I guess what really drives me mad about this is how Eugen [the author of Mastodon] uses his mindshare advanage to drive protocol extensions. All of Fediverse implementations generaly communicate freely with one another, but as Pleroma and Mastodon develop, they gradually leave Gnusocial behind in features. In particular, Eugen found a loophole in the protocol, which allows to attach pictures without using up the space in the message for the URL. When Gnusocial displays a message with attachment, it only displays the text, not the picture. [...]

When I read this, my immediate reaction was that this sounded familiar. And indeed it is, just in another guise.

Over the years, there have been any number of relatively open protocols for federated things that were used by more or less commercial organizations, such as XMPP and Signal's protocol. Over and over again, the companies running major nodes have wound up deciding to de-federate (Signal, for example). When this has happened, one of the stated reasons for it has been that being federated has held back development (as covered in eg LWN's The perils of federated protocols, about Signal's decision to drop federation). At the time, I thought of this as being possible because what was involved was a company moving to a closed product, sometimes the company doing much of the work (as in Signal's case).

What Mastodon (and Pleroma) illustrate here is that this sort of thing can be done even in open protocols where some degree of federation is still being maintained. All it takes is for the people involved to be willing to break protocol compatibility with other implementations that aren't willing to follow along and keep up (either because of lack of time or disagreements over the direction the protocol is being dragged in). Of course this is easier when the people making the changes are behind the dominant implementations, but anyone can do it if they're willing to live with the consequences, primarily a slow tacit de-federation where messages may still go back and forth but increasingly they're not useful for one or both sides.

Is this a good thing or not? I have no idea. On the one hand, Mastodon is moving the protocol in directions that are clearly useful to people; as Pete Zaitcev notes:

[...] But these days pictures are so prevalent, that it's pretty much impossible to live without receiving them. [...]

On the other hand things are clearly moving away from a universal federation of equals and an environment where the Fediverse and its protocols evolve through a broad process of consensus among many or all of the implementations. And there's the speed of evolution too; faster evolution privileges people who can spend more and more time on their implementation and people who can frequently update the version they're running (which may well require migration work and so on). A rapidly evolving Fediverse is one that requires ongoing attention from everyone involved, as opposed to a Fediverse where you can install an instance and then not worry about it for a while.

(This split is not unique to network protocols and federation. Consider the evolution of programming languages, for example; C++ moves at a much slower pace than things like Go and Swift because C++ cannot just be pushed along by one major party in the way those two can be by Google and Apple.)

OpenProtocolsAndFastEvolution written at 20:05:28

2019-02-07

A touchpad is not a mouse, or at least not a good one

One of the things about having a pretty nice work laptop with a screen that's large enough to have more than one real window at once is that I actually use it, and I use it with multiple windows, and that means that I need to use the mouse. I like computer mice in general so I don't object to this, but like most modern laptops my Dell XPS 13 doesn't have a mouse; it has a trackpad (or touchpad, take your pick). You can use a modern touchpad as a mouse, but over my time using the XPS 13 I've come to understand (rather viscerally) that a touchpad is not a mouse, and trying to act as if it was is not a good idea. There are some things that a touchpad makes easy and natural that aren't very natural on a mouse, and a fair number of things that are natural on a mouse but don't work very well on a touchpad (at least for me; they might for people who are more experienced with touchpads).

(There is also a continuum of 'not a mouse'-ness. Physical mouse buttons made my old Thinkpad's touchpad more mouse-like than the multi-touch virtual mouse buttons do on the XPS 13.)

Things like straightforward mouse pointer tracking and left button clicks are not all that different between the two and so I can mostly treat things the same (although I think that moving more or less purely vertically or horizontally is harder on a touchpad). What is increasingly different on a touchpad is things like right or middle clicks, and moving the mouse pointer while a nominal button is 'down'. And of course there's no such thing as chorded mouse clicks on a touchpad, while at the same time a mouse has no real equivalent of multi-finger movement and swiping. The different physical locations of a laptop touchpad and a physical mouse also make a difference in what is comfortable and what isn't.

(On a touchpad, I think the more natural equivalent of moving the mouse with a button down is a single finger touchpad move with some keyboard key. Of course this changes things because now both hands are involved, but at the same time your hands aren't moving as far to reach the 'mouse'.)

For me, the things that are significantly different are moving the pointer while a mouse button is down and middle and right button clicks. For instance, with a physical mouse I'm very fond of mouse gestures in Firefox, but they're made with the right mouse button held down; as a result, I basically don't use them on my laptop touchpad. I'm also thankful that in Firefox, left clicking and middle clicking a link are equivalent if you use keyboard modifiers, because that lets me substitute an easy single finger tap for an uncertain multi-finger tap.

All of this has slowly led me to doing things differently when I'm using the laptop's touchpad, rather than trying to pretend that the touchpad is a mouse and stick to my traditional mouse habits and practices. This is a work in progress for various reasons, and on top of that I'm not sure that the X environment I have on my laptop is entirely well adapted to touchpad use.

(I know that some of my programs aren't. For one glaring example, the very convenient xterm copy and paste model is all about middle mouse clicks and being able to easily move the mouse pointer with the left button down. Selecting and copying text from one terminal window to another with the touchpad is both more awkward and more hit and miss. Probably this means I should set up some keyboard bindings for 'paste', so I can at least avoid wrangling with the multi-finger tap necessary to emulate the middle mouse button.)

(On the one hand this feels pretty obvious now that I've written it down. On the other hand, it's not something that I've really thought about before now and I'm pretty sure that I'm still trying to do a certain amount of 'mouse' things with the touchpad and being frustrated by the so-so results. Hopefully UI designers have been considering this more than I have.)

TouchpadNotAMouse written at 21:35:11

2019-02-03

My temptation of getting a personal laptop

Despite being a computer person and a sysadmin, I don't have a personal laptop; my personal computing is a desktop and, these days, an iPad. The big reason for this is that most of the time, I don't really have a use for a laptop (well, a personal use, especially since a laptop will never be my primary machine). However, when I do have such a use and take my work laptop home, I always wind up getting reminded of how nice it is and how nice it is to have the laptop available, and in turn that tempts me with the thought of getting a personal laptop.

(This is on my mind because I had such a need this weekend, and as a result wound up writing yesterday's entry entirely on the laptop. Doing so was a far more pleasant experience than I've had with drafting entries on my iPad, mostly but not entirely for software reasons. My natural way of writing entries involves a bunch of windows and generally a certain amount of cut and paste, and the iPad does not make this easy or natural. I drafted most of this entry on my iPad under similar circumstances to yesterday's entry, and it was slower and less fluid.)

On some sort of purely objective basis, it doesn't make sense for me to give in to this temptation and get a personal laptop. As mentioned, I usually don't have a use for it and when I do have a planned use, I can usually take home one from work, and a decent ultrabook style laptop (which is what I want) is not cheap. On the other hand, one of my ways of evaluating this sort of decision is to ask myself what I would do if money wasn't an issue, and here my answer is absolutely clear; I would immediately go get such a laptop, by default some version of the Dell XPS 13. I definitely have some uses for it, and in general it would be reassuring to have a second fully capable machine at home (and a portable one, which I could set up and use wherever I wanted to). So on a subjective basis, yes, absolutely, I should at least consider this and do things like look up how long lightly used ultrabooks tend to last these days.

(Now that I have a 4K display, any laptop I get should be able to drive a 4K display at 60 Hz with suitable external hardware. Fortunately I believe this is common these days.)

It's also possible that having a readily available personal laptop would change my behavior, by opening up new options and so on. Right now this seems unlikely, but I've been blind to this sort of thing in the past so it's at least something for me to think about, alongside the countervailing thoughts of how little I would probably use a personal laptop in practice.

(Given my usual habits of not getting around to getting things regardless of how I feel about them, I am unlikely to actually move on getting a laptop any time soon even if I talk myself into it. But writing this entry may have made it slightly more likely, which is part of why I did; I want to at least think about the whole issue, and write down my current thoughts so I can look back at them later.)

PS: I might feel more temptation and interest in a personal laptop if I did things like travel to conferences, but I don't and I'm unlikely to any time in the future.

LaptopTemptation written at 23:52:54

2019-01-16

Perhaps you no longer want to force a server-preferred TLS cipher order on clients

To simplify a great deal, when you set up a TLS connection one of the things that happens in the TLS handshake is that the client sends the server a list of the cipher suites it supports in preference order, and then the server picks which one to use. One of the questions when configuring a TLS server is whether you will tell the server to respect the client's preference order or whether you will override it and use the server's preference order. Most TLS configuration resources, such as Mozilla's guidelines, will implicitly tell you to prefer the server's preference order instead of the client's.

(I say 'implicitly' here because the Mozilla discussion doesn't explicitly talk about it, but the Mozilla configuration generator consistently picks server options to prefer the server's order.)

In the original world where I learned 'always prefer the server's cipher order', the server was almost always more up to date and better curated than clients were. You might have all sorts of old web browsers and so on calling in, with all sorts of questionable cipher ordering choices, and you mostly didn't trust them to be doing a good job of modern TLS. Forcing everyone to use the order from your server fixed all of this, and it put the situation under your control; you could make sure that every client got the strongest cipher that it supported.

That doesn't describe today's world, which is different in at least two important ways. First, today many browsers update every six weeks or so, which is probably far more often than most people are re-checking their TLS best practices (certainly it's far more frequently than we are). As a result, it's easy for browsers to be the more up to date party on TLS best practices. Second, browsers are running on increasingly varied hardware where different ciphers may have quite different performance and power characteristics. An AES GCM cipher is probably the fastest on x86 hardware (it can make a dramatic difference), but may not be the best on, say, ARM based devices such as mobile phones and tablets (and it depends on what CPUs those have, too, since people use a wide variety of ARM cores, although by now all of them may be modern enough to have the ARMv8 cryptography instructions for AES).

If you're going to consistently stay up to date on the latest TLS developments and always carefully curate your TLS cipher list and order, as Mozilla is, then I think it still potentially makes sense to prefer your server's cipher order. But the more I think about it, the more I'm not sure it makes sense for most people to try to do this. Given that I'm not a TLS expert and I'm not going to spend the time to constantly keep on top of this, it feels like perhaps once we let Mozilla restrict our configuration to ciphers that are all strong enough, we should let clients pick the one they think is best for them. The result is unlikely to hurt security much and it may help clients perform better.

(If you're CPU-constrained on the server, then you certainly want to pick the cheapest cipher for you and never mind what the clients would like. But again, this is probably not most people's situation.)
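(To make the mechanics concrete, here is a minimal Python sketch of where this choice lives at the API level; the certificate file names and cipher string are purely illustrative. In OpenSSL-based servers the equivalent is a single directive, for example nginx's ssl_prefer_server_ciphers.)

    import ssl

    # Illustration only: a server-side TLS context where we decide whether to
    # impose the server's cipher preference order on clients.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")   # hypothetical files
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")    # an illustrative curated list

    # OP_CIPHER_SERVER_PREFERENCE is the OpenSSL-level switch for 'use the
    # server's order'; clearing it lets the client's preference order win.
    prefer_server_order = False
    if prefer_server_order:
        ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE
    else:
        ctx.options &= ~ssl.OP_CIPHER_SERVER_PREFERENCE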

PS: As you might guess, the trigger for this thought was looking at a server TLS configuration that we probably haven't touched for four years, and perhaps more. In theory perhaps we should schedule periodic re-examinations and updates of our TLS configurations; in practice we're unlikely to actually do that, so I'm starting to think that the more hands-off they are, the better.

TLSServerCipherPriority written at 01:11:18
