AJAX vs Dialups
But please don't reach for AJAX too fast, because there is such a thing as being too interactive.
AJAX's interactivity comes through communication, and communication takes bandwidth. While it'd be nice if everyone coming to your web site had lots of bandwidth, it's not true (unless you want to make it true by driving away everyone else).
Let's take an example: using AJAX to implement incremental searches. The search box on your web pages uses AJAX to notice when I start typing and does a callback to your web server so it can show me matching results; once I've typed enough to pull up what I want, I can just go there.
So I start typing, entering 'p'. Lightning-fast, your highly interactive AJAX wakes up and sends the request back to your web server. Of course there are a lot of pages that match such a broad criterion, so the reply is not short (the RD light of my modem goes on solid). As I add a 'y' and a 't' the whole process repeats, possibly colliding with the data transfer for the initial 'p' in the process.
This hypothetical web site's great interactivity hasn't helped me, it's frustrated me. Search has turned into a laggy experience where I have to wait for the application to catch up to my typing. The slower a typist I am, the worse it may be; if I type fast I have at least a chance of outracing the AJAX over-interactivity.
So: don't be too interactive. If your AJAX needs results from your web server, you probably can't keep up with the user's interactions in real time. Don't try; wait a bit, let the user get a bit of a head start, give some feedback every so often, and reserve your big efforts for when the user has paused. (Pauses in user input are your big hint that the user is waiting for you now.)
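One way to sketch this 'wait for a pause' idea is a small debounce helper that only fires the expensive search once input has been quiet for a while. This is a toy illustration of my own (the class and names are made up, not from any framework):

```python
import threading

class Debouncer:
    """Run a callback only after input has been quiet for `delay` seconds.

    Each call to poke() restarts the timer, so the callback only fires
    once the user has paused -- the 'big hint' mentioned above.
    """

    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def poke(self, *args):
        if self._timer is not None:
            self._timer.cancel()          # discard the pending search
        self._timer = threading.Timer(self.delay, self.callback, args)
        self._timer.start()
```

Each keystroke calls poke() with the text typed so far; only the value present when the user pauses actually reaches the server.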
Google Suggest shows another solution to this: don't return interactive results until they're small enough to be useful. (In a search interface I do ask that you put up some feedback to the effect of 'searching for "py": too many results to show in the sidebar', so that I can tell the difference between lots of results and no results.)
Whichever you choose, people on dialups (like me at home on my poky 28.8K PPP link) will thank you for considering them. And you may discover that there are more of us than you thought, along with the people using your web site from halfway around the world, the unfortunates stuck behind choked up corporate Internet links, and so on.
You can read about other AJAX design issues here and here. (And this entirely neglects the collection of practical issues one faces when implementing AJAX in the presence of network delays.) Note to self: AJAX is complicated in practice.
You can read more about AJAX in the Wikipedia article.
Iterator & Generator Gotchas
Python iterators are objects (or functions, using some magic) that repeatedly produce values, one at a time, until they get exhausted. Python introduced this general feature to efficiently support things like:
    for line in fp:
        ... do something with each line ...

Before iterators, you would have written this as 'for line in fp.readlines():', and .readlines() would have to read the entire file into memory, split it up into lines, and return a huge list. With an iterator, this code only has one line in memory at any given time, even if the file is tens or hundreds of megabytes.
Generators are functions that magically create iterators instead of just returning values (ignoring some technicalities). Generators are the most common gateway to iterators, and are thus the more commonly used term for the whole area.
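As a minimal illustration (a toy example of my own, not from any standard library), a generator is just a function that uses yield:

```python
def countdown(n):
    # A generator function: calling countdown(3) returns an iterator
    # without running any of this code yet.
    while n > 0:
        yield n        # produce one value, then pause here
        n -= 1

print(list(countdown(3)))   # -> [3, 2, 1]
```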
When iterators were introduced, a number of standard things that had previously returned lists started returning iterators, and using a generator instead of just returning a list became part of the common Python programming idioms.
In many cases it can be tempting, and temptingly easy, to replace things that return lists with generators; it looks like it should just work, and it mostly does. It's similarly tempting to ignore the difference when the standard Python modules make this switch.
But there are some gotchas when you write code like this, and I have the stubbed toes to prove it. At one point or another, I've made all of these iterator-confusion mistakes in my code.
Iterators are always true
    t = generate_list(some, inputs)
    if not t:
        return
    print "Header Line:"
    for item in t:
        .....
If generate_list returns an iterator instead of a list, this code doesn't work right. Unless someone got quite fancy, iterator objects are always true, unlike lists, which are only true if they contain something.
There's really no way to see if an iterator contains anything except to try to get a value from it. And there's no 'push value back onto iterator' operation.
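A quick demonstration of the difference, using a throwaway generator of my own:

```python
def empty_gen():
    return
    yield   # unreachable, but makes this a generator that produces nothing

print(bool([]))            # -> False: an empty list is false
print(bool(empty_gen()))   # -> True: an empty iterator is still true
print(list(empty_gen()))   # -> []: ...even though it contains nothing
```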
Iterators can't be saved
    def cached_lookup(what):
        if what not in cache:
            cache[what] = real_lookup(what)
        return cache[what]
If real_lookup returns iterators, this code doesn't work. When an iterator's exhausted, it's exhausted; if you try to use it again (such as when cached_lookup hands it back as a cached result), it simply produces no further values.
(Technically I believe there are semi-magical ways to copy iterators. I suspect one is best off avoiding them unless you really have to save an iterator copy.)
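The simplest fix is to expand the iterator into a list before caching it, sketched here with a made-up stand-in for real_lookup:

```python
cache = {}

def real_lookup(what):
    # Stand-in for the real function: a generator, so each call
    # returns a fresh iterator that can only be consumed once.
    for i in range(3):
        yield (what, i)

def cached_lookup(what):
    if what not in cache:
        # list() drains the iterator exactly once; the cached list
        # can then be returned and re-iterated as often as needed.
        cache[what] = list(real_lookup(what))
    return cache[what]
```

Now a second cached_lookup() call returns the same results as the first, instead of handing back an exhausted iterator.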
I can't use list methods on iterators
    t = generate_list(some, inputs)
    t.sort()
    t = t[:firstN]
    # ... admire the pretty explosions
Of course, iterators don't have general list operations (.sort(), len(), slicing, and so on). If you want to use those, you have to expand the iterator into a list first:

    t = list(generate_list(some, inputs))
    t.sort()
    t = t[:firstN]

list() will expand the iterator for you and is harmless to apply to real lists, so you can use it without having to care if the generate_list routine changes what it returns.
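Alternatively, some builtins and itertools functions accept any iterable directly; a small sketch (this generate_list is a made-up stand-in):

```python
import itertools

def generate_list():
    # Made-up stand-in for a routine that returns an iterator.
    for n in (3, 1, 2):
        yield n

# sorted() accepts any iterable and always returns a new list...
t = sorted(generate_list())          # -> [1, 2, 3]

# ...and itertools.islice() takes the first N items of an iterator
# without expanding the whole thing into memory first.
first_two = list(itertools.islice(generate_list(), 2))   # -> [3, 1]
```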
Writing recursive generators
Sometimes the most natural structure for a generator is a recursive one. This works, but you have to bear in mind a twist: you cannot simply return the results of the recursive calls. This is because the recursive results are themselves iterators, and if you return them straight your callers get iterators that produce a stream of iterators that produce a stream of iterators that someday, at some level, produce actual results. (But by that time the caller has given up in despair.)
Instead each time you recurse, you have to expand the resulting iterator and return each result, like so:
    def treewalk(node):
        if not node:
            return
        yield node.value
        for val in treewalk(node.left):
            yield val
        for val in treewalk(node.right):
            yield val
This implies that significantly recursive generators can be quite inefficient, as they will spend a great deal of time trickling results up through all the levels involved.
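In modern Python (3.3 and later), 'yield from' is shorthand for those inner loops, though as far as I know it doesn't fundamentally change the cost of trickling each value up through every level. A self-contained sketch, with a minimal Node class of my own for illustration:

```python
class Node:
    # Minimal tree node, just enough for the walk below.
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def treewalk(node):
    if not node:
        return
    yield node.value
    yield from treewalk(node.left)    # delegate to the sub-iterator
    yield from treewalk(node.right)
```

With a root of 1 and children 2 and 3, list(treewalk(...)) produces the values in pre-order.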