Why realistic UDP bandwidth testing is hard
Very few real applications just blindly fire UDP packets at a target, so when you want to know what UDP bandwidth you can get, you are generally interested in how fast you can hold a UDP-based conversation between two sides. In other words, you need a bidirectional tester, not just the sort of simple unidirectional one you can use for TCP bandwidth testing.
(Of course, TCP is actually bidirectional too. It's just that the conversation inherent in TCP is already handled by your TCP stack.)
Bidirectionality is not hard in and of itself. But when you write your own code to do conversations, you have to explicitly handle all of the aspects of the conversation. This means that you have to decide things like how to behave in the face of delayed or dropped packets (how do you notice? what is your retry policy?) and how much you're going to do in parallel and how (are you going to have an outstanding window size or multiple simultaneous outstanding requests? the two are subtly different).
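To make those decisions concrete, here is a minimal sketch of a stop-and-wait UDP request/response tester in Python. Everything about it is an illustrative assumption of mine, not any particular protocol's behavior: the `echo_server` and `run_client` names, the four-byte sequence-number framing, and the "timeout means loss, resend up to `max_retries` times" policy are all choices you would have to make differently to imitate a real application.

```python
import socket
import threading

def echo_server(sock, count):
    # Toy responder: echo `count` datagrams back to their senders.
    # Meant to be run in a thread (or another process/host).
    for _ in range(count):
        data, addr = sock.recvfrom(65535)
        sock.sendto(data, addr)

def run_client(server_addr, requests=100, payload=b"x" * 1024,
               timeout=0.5, max_retries=3):
    # Stop-and-wait: exactly one request outstanding at a time.
    # The two policy decisions from the text are explicit here:
    # a socket timeout is how we "notice" a lost packet, and the
    # retry policy is "resend up to max_retries times".  Both are
    # arbitrary, which is the point.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    answered = 0
    for seq in range(requests):
        msg = seq.to_bytes(4, "big") + payload
        for _ in range(max_retries + 1):
            sock.sendto(msg, server_addr)
            try:
                reply, _ = sock.recvfrom(65535)
            except socket.timeout:
                continue          # assume the packet was lost; retry
            if reply[:4] == msg[:4]:
                answered += 1
                break
            # otherwise it was a stale reply to an earlier retry;
            # fall through and resend (crude, but this is a sketch)
    sock.close()
    return answered

if __name__ == "__main__":
    # Demo over loopback: start the echo server in a thread,
    # then run the client against it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    threading.Thread(target=echo_server, args=(srv, 1000),
                     daemon=True).start()
    print(run_client(srv.getsockname(), requests=10, payload=b"x" * 64))
```

Note how even this toy version had to answer the questions above, and how each answer (half a second? three retries? one request at a time?) quietly bakes in assumptions about the application being imitated.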
And this is the real problem: you are presumably testing your UDP bandwidth because you are trying to assess how some real application or protocol will perform. Because everyone writing a UDP-based protocol has to answer these questions themselves, different applications can come up with quite different answers. If your tester's handling of these issues does not match your target application's, your performance numbers may not actually match the real world, which means that your tester is not much good.
So: how do you find out what your target protocol does (or will do) in these situations, so that you can faithfully imitate it in your tester (in all of its possible complexity)?
All I can say is 'good luck with that', because now you know why realistic UDP bandwidth testing is hard. It's not the coding; it's figuring out what to code in order to get meaningful answers.
(Speaking from personal experience, it is very easy to create a UDP bandwidth tester that gives hopelessly optimistic and meaningless answers. And it's probably equally easy to create one that gives hopelessly pessimistic answers.)
Banging rocks together in Python
One of the things I continue to like Python for is what I call 'banging rocks together': quickly programming relatively small but non-trivial things, on the order of a few hundred lines and anywhere from an afternoon to a few days of time. A representative example would be the basic UDP request/response bandwidth tester that I recently wrote; it came to a bit under three hundred lines of Python with some comments, and took me perhaps a day or two to write, tune, make more complex but more useful, and polish a bit.
I find that Python has a number of advantages for this:
- it has some pretty big rocks, so you can get pretty far without too much work.
- it runs surprisingly fast; for example, my UDP ping-pong
  tester could saturate gigabit Ethernet without very much tuning.
  This too means that you can get pretty far without too much work.
- it has enough power that you can scale your program up to more sophistication if you need it. The UDP ping-pong tester started out doing very simple things, but they turned out not to be at all representative of how UDP NFS behaved on problematic networks; I was able to make it more complicated without all that much work.
(My end result turned out to still not be representative of what happens to UDP NFS performance, but at least it stopped giving hopelessly optimistic answers and melting networks down in the process.)
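As an illustration of the kind of "more complicated" step that Python makes cheap, here is a sketch of a windowed variant that keeps several requests in flight at once instead of one. To be clear, this is my own simplified example, not what my actual tester did: the `window` size, the four-byte sequence numbers, and the crude "resend everything still outstanding on timeout" retry policy are all assumptions for illustration.

```python
import select
import socket
import threading
import time

def echo_forever(sock):
    # Toy responder: echo every datagram back to its sender.
    while True:
        data, addr = sock.recvfrom(65535)
        sock.sendto(data, addr)

def windowed_client(addr, requests=200, window=8, size=512, timeout=1.0):
    # Keep up to `window` requests in flight at once; a reply to any
    # outstanding sequence number frees a slot for the next request.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)
    payload = b"\0" * size
    outstanding = set()
    next_seq = completed = 0
    start = time.monotonic()
    while completed < requests:
        # Fill the window with new requests.
        while next_seq < requests and len(outstanding) < window:
            sock.sendto(next_seq.to_bytes(4, "big") + payload, addr)
            outstanding.add(next_seq)
            next_seq += 1
        ready, _, _ = select.select([sock], [], [], timeout)
        if not ready:
            # Crude retry policy: resend everything still outstanding.
            for seq in outstanding:
                sock.sendto(seq.to_bytes(4, "big") + payload, addr)
            continue
        data, _ = sock.recvfrom(65535)
        seq = int.from_bytes(data[:4], "big")
        if seq in outstanding:
            outstanding.discard(seq)
            completed += 1
    elapsed = time.monotonic() - start
    sock.close()
    return completed, elapsed

if __name__ == "__main__":
    # Demo over loopback.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    threading.Thread(target=echo_forever, args=(srv,), daemon=True).start()
    done, secs = windowed_client(srv.getsockname(), requests=100)
    print(done, "exchanges in", secs, "seconds")
```

The structural change from stop-and-wait to a window is only a few lines, but it changes what the test measures; which of the two (if either) resembles your target protocol is exactly the hard question from the first section.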
This is not unique to Python, of course; I'm sure that Perl would be as good, as would a number of other modern 'scripting' languages. What matters is a certain expressive power coupled with large rocks, and Perl certainly has a very good collection of large rocks.