2011-01-31
There are two sorts of standards in the world
There are two sorts of standards in the world; let us call them forced standards and voluntary standards.
With a forced standard, someone makes you adhere to the standard. If you don't, you can't get your product licensed or certified or whatever, and you can't sell it or give it away or do whatever you want to do with it. Sometimes this is forced because of legislation, and sometimes this is just forced through the marketplace; no one will buy your product if it is not certified as whatever or if it doesn't work with other people's products.
(Note that it is perfectly possible for a de facto standard to be a forced standard because of interoperability requirements.)
With a voluntary standard, there is nothing in particular that makes you adhere to the standard. People adhere to the standard anyways because it's useful, because they're nice and like it, because it's good PR, or for a number of other reasons. They may also adhere to the standard only in part, picking the bits of it that they like or that they need or that are easy enough to implement.
Forced standards are the only sort of standards where you can write something that people have to implement (well, more or less; ask any home renovator about how many things actually adhere to building codes). This makes forced standards a very attractive idea to the sort of person who wants standards to lead from the front (to put it one way).
Mistaking what sort of standard you're dealing with or creating is a great way to get into a lot of trouble one way or another. On the surface it sounds relatively harmless to mistake a voluntary standard for a forced standard, but in practice it leads to a lot of aggravation all around and people can expend a great deal of effort in arguing about the situation (or, often, yelling at each other). At the extreme it leads to the creation of unwanted 'standards' that are a complete waste of time and effort.
(As with lots of things about standards, there is a continuum in practice. This is especially so if the forcing is happening because of market demands instead of legislation, since market demands change over time.)
2011-01-10
Why really high computer security is not interesting to most people
The US government (and, to a less public extent, other governments) has spent years developing a very robust and theoretically sound model of computer security, complete with rating levels and all sorts of good things. It even documented this thoroughly in what is called the rainbow series of books.
You might think that a government-developed set of computer security standards would be widely adopted by industry and broadly used, in the same way that other government-developed standards generally are. You would be wrong; the industry reaction to all of this government work has generally been complete indifference and utter disinterest. It's tempting to dismiss this as the computer industry just not being interested in security, but I don't think this is the full story. My view is that a good part of why the industry isn't interested in this government security work is that the end result has the wrong priorities for industry.
Computer security involves, among other things, a tradeoff between availability and non-disclosure. Many of the measures that protect your sensitive information can also harm its availability; for example, consider how many copies there are of the key that decrypts sensitive files. The fewer copies you have, the more secure you are, but the easier it is to lose them all and thus have the files become unavailable. The government feels that it has extremely dangerous and sensitive secrets, and it does; it holds information that could get people killed (sometimes a lot of people). As a result, it has historically had a strong bias towards non-disclosure instead of availability, ie in many cases the government would prefer to have information lost and destroyed rather than risk it leaking out or being stolen.
A company's priorities are almost always the reverse. Having information leak out is bad, yes, but losing the information outright is usually worse, often much worse. For most really important sensitive information, loss would probably put the company out of business; eg, Intel would be harmed if the full details of all its chips were leaked, but it would probably be destroyed outright if that information was lost. In many cases even a temporary loss of access is terribly damaging to a company; imagine the impact on a bank if it lost access to a quarter of its customer records for a week.
Since the US government created its security models and standards for its own use, they often reflect the government's bias towards non-disclosure over availability. Since companies generally have the reverse bias, they are of course not going to be too interested in a security system built to the government's specifications; they would have to go against its biases or in some cases break it entirely in order to get something that reflected their priorities.
(Disclaimer: this is my understanding of the situation. It's possible that I'm repeating folklore and misunderstandings, since I'm not a computer security person, just an interested bystander.)
2011-01-07
More modest suggestions for bug trackers
I consider all of these to be corollaries to my first set of modest suggestions, or more exactly to the core idea behind them: you should arrange things so that the open issues in your bug tracker are things that you are fixing, not wishlist items and not things that may get fixed some time in the future. Stuff you are fixing now, or close to now.
This implies that you need a way to explicitly defer issues, in order to deal with things that you know are bugs and want to handle at some point, but which you are certainly not going to deal with now, or before the next release, or whatever. To make this easy to use, the bug tracker should have the idea of 'events'. When an event happens (such as 'release1.1'), it triggers actions like automatically un-deferring issues that have been deferred until the release happened.
(If maintainers have to manually un-defer deferred bugs, they will either never defer bugs because it's too much of a pain to un-defer them later or they will never un-defer bugs that have gotten deferred.)
You could also use the same machinery to automatically close at least some sorts of bugs that had been filed against the old version (thus copying what some people do by hand).
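To make the event idea concrete, here is a minimal sketch in Python of how event-driven deferral might work. Everything in it (the Issue and Tracker classes, the field names, the 'release-1.1' event name) is made up for illustration; it is not how any particular bug tracker actually does it.

    class Issue:
        def __init__(self, num, title):
            self.num = num
            self.title = title
            self.state = 'open'
            self.deferred_until = None   # name of the event this issue is waiting for

        def defer(self, event_name):
            # A deferred issue drops out of the open list until its event fires.
            self.state = 'deferred'
            self.deferred_until = event_name

    class Tracker:
        def __init__(self):
            self.issues = []

        def open_issues(self):
            return [i for i in self.issues if i.state == 'open']

        def fire_event(self, event_name):
            # When an event such as 'release-1.1' happens, automatically
            # un-defer every issue that was waiting for it, so that no one
            # has to remember to do it by hand.
            for issue in self.issues:
                if issue.state == 'deferred' and issue.deferred_until == event_name:
                    issue.state = 'open'
                    issue.deferred_until = None

    # Example of use:
    t = Tracker()
    bug = Issue(42, 'crash on startup with empty config')
    t.issues.append(bug)
    bug.defer('release-1.1')      # not dealing with this before the release
    t.fire_event('release-1.1')   # the release happened; the bug is open again

The fire_event() hook is also the natural place to hang other per-event actions, such as automatically closing certain bugs filed against the old version.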
In general, I think that maintainers should be encouraged to aggressively close (or defer) bug reports if they are not going to deal with them more or less right now. As cruel as it is, I think it's better to push the burden of re-filing or re-opening issues off to the people with the problems instead of the people trying to fix them. At the same time, bug trackers should make it easy (but perhaps not too easy) for people to redo bug reports without having to reenter them completely from scratch.
(If you make it too easy for people to redo or reopen old bugs, they won't confirm that their old bugs are still there in your current version before re-opening them. This leads to bug reports for issues that don't exist any more, which causes maintainers to start ignoring your bug tracker. We all know where that leads.)