Wandering Thoughts archives

2021-09-24

Understanding EGL, GLX and friends in the Linux and X graphics stack

Earlier this month I wrote a blog entry about Firefox having jank with WebRender turned on, in which I mentioned that the problem had appeared when I stopped using an environment variable $MOZ_X11_EGL that forced Firefox to use "EGL". The blog post led to someone filing a quite productive Firefox bug, where I learned in passing that Firefox is switching to EGL by default in the future. This made me realize that I didn't actually know what EGL was, where it fit into the Linux and X graphics stack, and whether it was old or new. So here is a somewhat simplified explanation of what I learned.

In the beginning of our story is OpenGL, which in the 1990s became the dominant API (and pretty much the only one) for 3D graphics on Unix systems, as well as spreading to other platforms. However, OpenGL is more or less just about drawing things on a "framebuffer". Generally people on Unix and X don't want to just draw 3D things over the entire screen; they want to draw 3D things in an X window (or several) and then have those mix seamlessly with other 3D things being done by other programs in other windows. So you need to somehow connect the OpenGL world and the X world so that OpenGL can draw in a way that will be properly displayed in a specific X window, and so on.

(This involves the action of many parties, especially once hardware acceleration gets involved and you have partially obscured windows with OpenGL rendering happening in them.)

The first version of an interconnection layer was GLX. As you can see from its features, GLX is a very X way of approaching the problem, since its default is to send all your OpenGL operations to the X server so that the X server can do the actual OpenGL things. The result inherits the X protocol's advantage of (theoretical) network transparency, at the cost of various issues. The 'glx' in programs like 'glxinfo' (used to find out whether your X environment has decent OpenGL capabilities) and 'glxgears' (everyone's favorite basic OpenGL on X test program) comes from, well, GLX. As suggested by the 'X' in its name, GLX is specific to the X Window System.

(Other platforms had similar interface layers such as WGL and CGL.)

Eventually, various issues led to a second version of an interconnection layer. This time around the design was intended to be cross-platform (instead of being tied to X), and it was done through the Khronos Group, the OpenGL standards organization. The result is EGL, and you can read (some of?) its manpages here. EGL lets you use more than just OpenGL, such as OpenGL ES (the simpler variant aimed at embedded and mobile systems), and I believe its API is platform and window system independent (although any particular implementation is likely to be specific to some window system). EGL apparently fixes various inefficiencies and design mistakes in GLX and so offers better performance, at least in theory. Also, pretty much everyone working on the Unix graphics stack likes EGL much more than GLX.

On Unix, EGL is implemented in Mesa, works with X, and has been present for a long time (back to 2012); current documentation is here. Wayland requires and uses EGL, which is unsurprising since GLX is specific to X (eg). I suspect that EGL on X is not in any way network transparent, but I don't know and haven't tested much (I did try some EGL programs from the Mesa demos and they mostly failed, although eglinfo printed stuff).

On X, programs can use either the older GLX or the newer EGL to get OpenGL; if they want to use OpenGL ES, I believe they have to use EGL. Which of GLX and EGL works better, has fewer bugs, and performs faster has varied over time and may depend on your hardware. Generally the view of people working on Unix graphics is that everyone should move to EGL (cf), but in practice, well, Firefox has had a bug about it open for nine years now, and in my searches I've seen people say that EGL used to perform much worse than GLX in some environments (eg, from 2018).

While I'm here, Vulkan is the next-generation replacement for OpenGL and OpenGL ES, at least for things that want high performance, also developed by the Khronos Group. As you'd expect for something developed by the same people who created EGL, it was designed with an interconnection layer, Window System Integration (WSI) (also [pdf]). I believe that a Vulkan WSI is already available for X, as well as for Wayland. Vulkan (and its WSI) is potentially relevant for the future partly because of Zink, a Mesa project to implement OpenGL on top of Vulkan. If people like Intel, AMD, and maybe someday NVIDIA increasingly provide Vulkan support (open source or otherwise) that's better than their OpenGL support, Zink and Vulkan may become an important part of the overall stack. I don't know how an application using EGL and OpenGL on top of a Zink backend would interact with a Vulkan WSI, but I assume that Zink plumbs it all through somehow.

On Ubuntu, programs like eglinfo are available in the mesa-utils-extra package. On Fedora, the egl-utils package gives you eglinfo and es2_info, but for everything else you'll need the mesa-demos package.

PS: For one high level view of the difference between OpenGL and OpenGL ES, see here.

linux/EGLAndGLXAndOpenGL written at 22:21:21

Link: Examining btrfs, Linux’s perpetually half-finished filesystem

Ars Technica's Examining btrfs, Linux’s perpetually half-finished filesystem (via) is not very positive, as you might expect from the title. I found it a useful current summary of the practical state of btrfs, which by all accounts is still not really ready for use, even in the redundancy modes that are considered "ready for production". There's probably nothing new in it for people who are actively keeping track of btrfs, but now I have something to point to if people ask why we're not using btrfs and won't be.

links/BtrfsHalfFinished written at 12:07:48

Go generics have a new "type sets" way of doing type constraints

Any form of generics needs some way to constrain what types can be used with your generic functions (or generic types with methods), so that you can do useful things with them. The Go team's initial version of their generics proposal famously had a complicated method for this called "contracts", which looked like function bodies with some expressions in them. I (and other people) thought that this was rather too clever. After a lot of feedback, the Go team's revised second and third proposals took a more boring approach; the final design that was proposed and accepted used a version of Go interfaces for this purpose.

Using standard Go interfaces for type constraints has one limitation: because they only define methods, a standard interface can't express important constraints like 'the type must allow me to use < on its values' (or, in general, any operator). In order to deal with this, the "type parameters" proposal that was accepted allowed an addition to standard interfaces (there's a sketch of what this looked like after the quoted summary). Quoting from the issue's summary:

  • Interface types used as type constraints can have a list of predeclared types; only type arguments that match one of those types satisfy the constraint.
  • Generic functions may only use operations permitted by their type constraints.
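
As I understand the originally accepted version, a constraint with a type list looked something like this. This syntax never shipped in a released Go, so it won't compile today, and the Ordered name is mine:

    // A type list inside an interface, in the originally accepted (and
    // since superseded) design. Only type arguments that matched one of
    // the listed predeclared types satisfied the constraint.
    type Ordered interface {
        type int, int64, float64, string
    }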

Recently this changed to a new, more general, and more complicated approach that goes by the name of "type sets" (see also, and also). The proposal contains a summary of the new state of affairs, which I will quote (from the overview):

  • Interface types used as type constraints can embed additional elements to restrict the set of type arguments that satisfy the constraint:
    • an arbitrary type T restricts to that type
    • an approximation element ~T restricts to all types whose underlying type is T
    • a union element T1 | T2 | ... restricts to any of the listed elements
  • Generic functions may only use operations supported by all the types permitted by the constraint.

Unlike before, these embedded types don't have to be predeclared ones and may be composite types such as maps or structs, although somewhat complicated rules apply.
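
To make the new syntax concrete, here is my own self-contained illustration of an Ordered constraint and a function using it (the names are mine, not from the proposal, and this needs a Go toolchain with the type parameters work):

    package main

    import "fmt"

    // Ordered is satisfied by any type whose underlying type is one of
    // the listed types; '~' marks approximation elements and '|' joins
    // them into a union.
    type Ordered interface {
        ~int | ~int64 | ~float64 | ~string
    }

    // Min can use < on its arguments precisely because every type in
    // Ordered's type set supports <.
    func Min[T Ordered](a, b T) T {
        if a < b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(Min(3, 5))       // works with int
        fmt.Println(Min("ab", "aa")) // and with string
    }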

Type sets are more general and less hard coded than the initial version, so I can see why the generics design has switched over to them. But they're also more complicated (and more verbose), and I worry that they contain a little trap that's ready to bite people. The problem is that I think you'll almost always want to use an approximation element, ~T, but the arbitrary type element T is the default. If you just list off some types, your generics are limited to exactly those types; you have to remember to add the '~' and then use the underlying type.
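
Here is my own sketch of the trap (Token, Exact, and Approx are names I made up):

    package main

    import "fmt"

    // Token is a defined type of mine with underlying type int.
    type Token int

    // Exact lists the predeclared types themselves, so only exactly
    // int or int64 satisfies it; Token does not.
    type Exact interface {
        int | int64
    }

    // Approx uses ~, so any type whose underlying type is int or
    // int64 satisfies it, including Token.
    type Approx interface {
        ~int | ~int64
    }

    func SumExact[T Exact](xs []T) (total T) {
        for _, x := range xs {
            total += x
        }
        return total
    }

    func SumApprox[T Approx](xs []T) (total T) {
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        tokens := []Token{1, 2, 3}
        // fmt.Println(SumExact(tokens)) // compile error: Token does not satisfy Exact
        fmt.Println(SumApprox(tokens)) // prints 6; Token's underlying type is int
    }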

My personal view is that using type declarations on top of predeclared types is a great Go feature, because it leads to greater type safety. I may be using an int for something, but if it's a lexer token or the state of an SMTP connection or the like, I want to make it its own type to save me from mistakes, even if I never define any methods for it. However, if using my own types starts making it harder to use other people's generics implementations (because they've forgotten that '~'), I'm being pushed away from doing so.

Some of the mistakes of leaving out the '~' will be found early, and I think adding it wouldn't create API problems for existing users, so this may not be a big issue in practice. But I wish that the defaults were the other way around, so that you had to go out of your way to restrict generics to specifically those types with no derived types allowed.

(If you just list some types without using a union element, you've most likely created an unsatisfiable constraint with an empty type set. However, you're likely to notice this right away, since presumably you're going to try to use your generics, if only in tests.)
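
As a sketch of that failure mode (Impossible is my own name, and I believe the declaration itself is legal, with the problem only surfacing when you try to instantiate something with it):

    // Each element of an interface must be satisfied simultaneously.
    // No type is both int and string, so Impossible's type set is
    // empty and no type argument can ever satisfy it.
    type Impossible interface {
        int
        string
    }

    // Declaring a generic function with it may be accepted, but any
    // attempt to use the function will fail to type-check:
    //
    //    func Never[T Impossible](v T) {}
    //    Never(0) // error: int does not satisfy Impossible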

programming/GoGenericsTypeSets written at 00:27:07

