2010-11-20
Why I avoid DSA when I have a choice
From Nate Lawson's most recent entry:
Most public key systems fail catastrophically if you ignore any of their requirements. You can decrypt RSA messages if the padding is not random, for example. With DSA, many implementation mistakes expose the signer's private key.
(emphasis mine.)
Even small implementation mistakes are dangerous to crypto systems, but there are degrees of danger. Most of the time, 'all' that happens is that a bad implementation doesn't deliver either the encryption or the endpoint authentication that you thought you had; an attacker can decrypt your messages or impersonate a host. This is still bad, but it is not totally catastrophic.
DSA is not like that. As Nate Lawson has covered, a mistake by a DSA implementation that you use can directly give away your private key. It doesn't matter if your key was securely generated, and it doesn't matter if you only used the bad implementation briefly; your key is compromised now, and you must generate and propagate a new one (assuming that you even realize this has happened).
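To make the failure mode concrete: the classic DSA implementation mistake is reusing the per-signature nonce k. Here is a toy sketch with tiny made-up parameters (nothing like real DSA sizes; all the numbers below are my own, chosen so the arithmetic is checkable by hand) showing how two signatures that share a nonce hand an eavesdropper the private key:

```python
# Toy DSA nonce-reuse demo; p, q, g, x, k are tiny fake values.
p, q, g = 23, 11, 4        # q divides p - 1; g has order q mod p
x = 7                      # the signer's private key
k = 3                      # per-signature nonce, wrongly reused twice

def sign(h, k):
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

h1, h2 = 9, 2              # stand-ins for two message hashes (mod q)
r, s1 = sign(h1, k)
_, s2 = sign(h2, k)        # same r comes out, because k is the same

# Anyone who sees (r, s1, h1) and (r, s2, h2) can solve the two
# signature equations for k, and then for the private key x:
k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
x_rec = ((s1 * k_rec - h1) * pow(r, -1, q)) % q
print(k_rec, x_rec)        # 3 7 -- nonce and private key recovered
```

(`pow(a, -1, q)` for modular inverses needs Python 3.8 or later.) No brute force is involved; it is two lines of modular algebra once the nonce repeats.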
I have no opinion on whether RSA is theoretically stronger or weaker than DSA. I generate RSA keys instead of DSA keys regardless of the relative theoretical merits, because all of the theoretical security in the world doesn't matter when every implementor has to get everything right or give away the house; some of them won't get it right (and some haven't).
Sidebar: when it is theoretically less dangerous to use DSA
In order to disclose a private key, a weak DSA implementation must actually have it. Thus, it is theoretically safe to use a local DSA key to authenticate yourself to a remote party if you trust your local implementation but don't entirely trust the other end. The most obvious case for this is personal SSH keys.
Still, I wouldn't do it. Why take chances if you don't have to?
More on those Python import oddities
From the reddit discussion and also the comments of my previous entry on import, I learned that people both do 'import os.path' and then use os.whatever without explicitly importing os, and do 'import os' and then just use os.path.whatever without explicitly importing os.path.
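Concretely, the two reported patterns look like this (each would normally be tried in a fresh interpreter; both happen to work for os in CPython):

```python
# Pattern 1: import a submodule, then use the parent package.
import os.path
print(os.getcwd())              # os itself was never imported by name

# Pattern 2: import the parent, then use the submodule.
# (Imagine this in a separate, fresh interpreter.)
import os
print(os.path.join("a", "b"))   # os.path was never explicitly imported
```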
First off, I don't think that you should do either of these even if
(and when) they work, for os
or any other module. This is a stylistic
thing, but when doing imports I prefer to be explicit about what other
modules my code uses. Also, I don't like clever tricks like this because
they run a high risk of confusing people who read my code, and this
includes me in the future if I've forgotten this bit of arcane trivia by
then.
Both of these clearly work for os. But are they guaranteed to work in general? The answer is half yes and half no.
As sort of discussed last time, 'import x.y; x.whatever' is guaranteed to work because the semantics of useful multi-level imports require it. 'import x.y' is pointless if you cannot resolve x.y.whatever afterwards, and in order to do that you must have 'x' in your namespace after the dust settles. So Python gives it to you.
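You can check what actually gets bound, here using xml.dom from the standard library as a stand-in for x.y:

```python
import xml.dom

# 'import xml.dom' binds only the top-level name "xml" here; "xml.dom"
# is not a name of its own, just attribute access on the xml package.
assert "xml" in globals()
assert "xml.dom" not in globals()
assert xml.dom.__name__ == "xml.dom"   # ...but the submodule is reachable
```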
A more interesting question is whether 'import x; x.y.whatever' is guaranteed to work. The short answer is no, although I suspect that it often will for relatively small modules. First off, modules that are implemented as single Python files (as with os) have to make this work; as discussed last time, x.y must be defined after x.py has finished executing as part of the import process, because there is no other way for the interpreter to find the x.y module.
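os itself is a nice illustration of this: there is no os/ directory on disk, so os.py builds os.path during its own execution, picking posixpath or ntpath as appropriate and registering the result in sys.modules by hand:

```python
import os
import sys

# os is a single file (os.py); there is no os/path.py anywhere.  While
# os.py executes it selects the platform's path module and installs it
# under the name "os.path" itself, so 'import os.path' finds it later.
assert sys.modules["os.path"] is os.path
assert os.path.__name__ in ("posixpath", "ntpath")
```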
For modules that are implemented as directories (with submodules as either files or subdirectories) there is no requirement that the module's __init__.py import the y submodule for you. The tradeoff is that importing submodules automatically makes 'from x import *' work as people expect, at the cost of loading potentially large submodules that people are not going to use; the larger and less used your set of submodules is, the more this matters. So you can sensibly have a module that requires explicit imports of submodules, and indeed there are modules in the Python standard library that work this way (xml is one example).
Now we come to a piece of import trivia: importing a submodule will actually modify the parent module's namespace. If you do import x.y (in any variant) and y is not defined in x's namespace, Python adds it for you. Once I thought about it, I realized that it had to work this way if Python wanted to support submodules that had to be loaded by hand, but I find it vaguely interesting that Python is willing to drop things in another module's namespace for you as a result of stuff that you do.
(This happens even if you do not get an explicit reference to x, eg if you do 'import x.y as t'.)
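A small demonstration, again using xml in a fresh interpreter: even 'import xml.sax as t', which binds only t in our own namespace, reaches over and adds sax to the xml package's namespace.

```python
import xml

before = hasattr(xml, "sax")   # False in a fresh interpreter

# We never ask for a reference to xml here, only to the submodule...
import xml.sax as t

# ...yet the xml package's namespace was modified behind our back:
assert hasattr(xml, "sax")
assert xml.sax is t
print("xml gained 'sax':", not before)
```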