Why I'm going to be skipping Fedora 15
I've pretty much decided that I will not be upgrading my machines to Fedora 15 or installing it on new machines; they will stay at Fedora 14. The short answer to why is 'Gnome 3', but not quite for the reason that you might think.
I wrote earlier about how Fedora 15 has a package dependency failure in the sshmenu applet that's an important part of my nice ssh environment for Gnome, ultimately due to the API changes from Gnome 2 to Gnome 3. At the time I casually said that it shouldn't be too much work to write a Python version of the sshmenu applet, since Python was still supported in Gnome 3.
(Given how much of Fedora's various GUI tools are written in Python, I'm pretty sure that Fedora 15 would not have Gnome 3 unless the Python bindings were in good shape.)
As it turns out, I'm both right and wrong. I'm right in that it's not too much work to write a Python equivalent of the Ruby sshmenu applet (although it was kind of annoying, much like all GUI programming seems to be). I'm wrong in that Gnome 3 doesn't support Gnome 2 style 'applets' at all. I didn't notice this because of course I developed my applet on my regular desktop, which is still running Fedora 14; I only discovered this when I got my applet into a decent enough shape that it was worth trying it on a Fedora 15 virtual machine, at which point I discovered that the system was completely ignoring its service definition file.
(This means that it was a red herring that the Ruby bindings haven't been ported to Gnome 3; even if they had been, the applet itself wouldn't work any more.)
This has a number of implications. Obviously, developing a version of the sshmenu applet for Fedora 15 is now a lot more work than I was expecting. Also, I need to be working in a Fedora 15 Gnome environment to develop it, which has certain bootstrapping problems. Finally, this change has removed a lot more than the sshmenu applet; as a start, Fedora 15 also lacks the 'Command Line' applet that I rely on for the other part of my nice ssh environment for Gnome. Getting my usual customized Gnome environment back in Gnome 3 is clearly going to take a bunch of time, if it's even possible.
In theory this only affects my work laptop (the only other sort of machine that I have); my home and work workstations use a completely customized non-Gnome environment (and my home machine is already far out of date and its hardware needs to be replaced anyways). In practice, I've traditionally used my laptop as the pilot for Fedora upgrades and I'm not happy with any of my options here; I get to choose either a blind Fedora upgrade on my primary machine at work and my laptop being stuck behind my workstation, or losing a significant amount of what makes my laptop convenient.
More generally it just seems that right now is too early for Gnome 3; people just haven't yet duplicated lost functionality, updated language bindings, ported programs to the new APIs, written good documentation and howtos, and so on. Coming back in six months seems much more likely to get me a well developed Gnome 3 ecosystem than trying to use Fedora 15 today.
(Of course it's not clear that the laptop even has enough graphics power to run Gnome 3 in the first place, since Gnome 3 demands good 3D acceleration. And I don't know how well Gnome 3 supports such a 'lacking' environment.)
PS: the Gnome 3 fallback environment for machines without good 3D graphics (such as virtual machines) does not run Gnome 2 panel applets, although the look is still relatively classic Gnome.
Sidebar: links and references
(for my own future use if nothing else)
- a discussion of the Gnome 3 shell versus applets
- some documentation on Gnome shell extensions
- a somewhat disturbing Gnome developer view on 'applets'
- more disturbing things from Gnome developers about shell extensions
- the status of Gnome language bindings
- some bits on Python on Gnome 3
- Gnome 3 code examples in various languages
- Gnome 3 tips
Some thoughts on creating simple and sane binary protocols
The best way to create a new simple and sane binary protocol is to not do so; create a text based protocol instead. Text based protocols have any number of advantages; they're easier to write handlers for, they can be debugged and tested by hand, semi-smart proxies are easy to write, it's easy to use network monitoring tools to trace them in live systems, and so on. And protocols like HTTP and (E)SMTP prove that they are viable at large scale and high traffic volumes. Really, your situation is probably not an exception that requires an 'efficient' binary protocol.
But suppose that you've determined that you need a (new) binary protocol for some reason. Because you're nice, you want to make one that irritates programmers as little as possible, i.e. one that is as easy as possible to write protocol encoders and decoders for. Having looked at a number of binary protocols and just recently written a codec for sendmail's milter protocol, I have a few opinions on what you should do.
(Beyond the obvious one of 'document it', which the sendmail people skipped.)
First, wrap your various structures, bitstreams, or whatever in a simple packet format. The important bit of such a format is that packets have a common fixed-size header that includes the packet size, followed by the variable-sized packet data. Having the size up front allows the decoder to know very early on if it has all of the data that it needs for the packet; this simplifies further decoding and enables various sorts of error checks. You want the packet header to be fixed size so that it is easy to unconditionally read and decode.
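To illustrate how simple this makes the decoder, here is a minimal sketch in Python. It assumes a hypothetical framing of a 4-byte big-endian length prefix (counting only the payload); a real protocol's header would likely carry more, but the size-first shape is what matters.

```python
import struct

# Hypothetical header: just a 4-byte big-endian payload length.
HEADER = struct.Struct(">I")

def encode_packet(payload: bytes) -> bytes:
    return HEADER.pack(len(payload)) + payload

def decode_packet(buf: bytes):
    """Return (payload, rest-of-buffer), or None if we don't yet
    have the complete packet."""
    if len(buf) < HEADER.size:
        return None                  # not even a full header yet
    (size,) = HEADER.unpack_from(buf)
    end = HEADER.size + size
    if len(buf) < end:
        return None                  # header says more data is coming
    return buf[HEADER.size:end], buf[end:]
```

Because the size is known before any further decoding starts, `decode_packet` can simply return None on partial data and be called again when more bytes arrive, with no parsing state to save.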
Second, build your messages out of as few primitive field types as possible and make those primitive types as simple as possible to decode and encode. In my view, the simplest field types are fixed sized fields, then (fixed-size) length plus data, and then bringing up the rear are delimited fields (where there is some end marker that you have to scan for). If you create complex encoded field types, expect programmers to hate you.
(In general, creating a field type that can't be encoded with either memcpy() or a printf-like formatter is probably a mistake.)
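As a sketch of why length-plus-data beats delimited fields, compare the two in Python (the 4-byte big-endian length is my assumption, not any particular protocol's choice):

```python
import struct

U32 = struct.Struct(">I")

def encode_str(data: bytes) -> bytes:
    # length-plus-data: fixed-size length, then the bytes themselves
    return U32.pack(len(data)) + data

def decode_str(buf: bytes, offset: int = 0):
    # No scanning: read the length, then slice out exactly that much.
    (n,) = U32.unpack_from(buf, offset)
    start = offset + U32.size
    return buf[start:start + n], start + n

def decode_delimited(buf: bytes, offset: int = 0):
    # A NUL-delimited field forces a scan, and the data itself can
    # never contain the delimiter.
    end = buf.index(b"\x00", offset)
    return buf[offset:end], end + 1
```

The length-prefixed decoder is a fixed amount of work and imposes no restrictions on the data; the delimited one has to scan and quietly bans the delimiter byte from ever appearing in a field.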
Finally, have only a single field that determines the rest of the message's format, and put this field at a fixed early point in the packet. In other words, you have a fixed set of structures (or messages) that are encoded into your binary protocol and then some marker of which message this is. Avoid variable format messages, where how you decode the message depends in part on the message contents; for example, a specification like 'if field A has value X, field B is omitted' creates a variable format message. Variable format messages require conditional encoding and decoding, which complicates everyone's life. By contrast a fixed format message can be decoded to a list of field values based only on knowing the field types and their order (and it can be encoded from such a list in the same way).
(If you have to have variable format messages, the closer you stick to this approach the better. Recursive sub-messages are one obvious approach.)
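With fixed format messages, the decoder really can be driven entirely by a table of field types. Here is a minimal Python sketch under assumed conventions: a one-byte message type at the start of the packet body, and two hypothetical field types, a fixed-size 'u32' and a length-plus-data 'str' (the message table itself is invented for illustration).

```python
import struct

U32 = struct.Struct(">I")

# Hypothetical message table: message type -> (name, ordered field types).
MESSAGES = {
    1: ("hello", ["u32", "str"]),    # protocol version, client name
    2: ("data",  ["str", "str"]),    # key, value
}

def decode_message(body: bytes):
    """Decode a packet body into (message name, list of field values),
    using nothing but the field types and their order."""
    name, fields = MESSAGES[body[0]]
    offset, values = 1, []
    for ftype in fields:
        if ftype == "u32":           # fixed-size field
            (v,) = U32.unpack_from(body, offset)
            offset += U32.size
        else:                        # 'str': u32 length, then the data
            (n,) = U32.unpack_from(body, offset)
            offset += U32.size
            v = body[offset:offset + n]
            offset += n
        values.append(v)
    return name, values
```

There is no conditional logic that depends on field values; adding a new message is just a new table entry, and an encoder can walk the same table in the same order.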
A simple protocol like this can be described in a way that enables quite simple and relatively annoyance free encoding and decoding in modern high level languages. But that's another entry, since this one is already long enough.