== The pragmatic problem with strict XHTML validation
There is a pragmatic problem with strict XHTML validation (well, several,
but I'm only going to pick on one right now). It goes like this:
Strict XHTML validation in the browser clearly [[punishes users
../tech/WhyFailRSSGracefully]]. If actual XHTML problems occur in more
than trace amounts, not doing strict validation is significantly more
user friendly, and thus a significant advantage for any browser that is
not XHTML strict.
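To make the user-visible difference concrete, here is a minimal sketch using Python's standard library. It feeds the same slightly broken page (an unclosed `<br>`, legal in HTML but fatal under XML well-formedness rules) to a strict XML parser and to a lenient HTML parser; the page content and the `Collector` class are illustrative inventions, not anything from a real browser.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# A page with one small mistake: an unclosed <br> tag,
# fine in HTML but a fatal error under strict XML rules.
page = "<html><body><p>Hello<br></p></body></html>"

# Strict XML parsing: one error and the entire page is rejected,
# which is what the user of a strictly validating browser sees.
try:
    ET.fromstring(page)
    print("rendered")
except ET.ParseError as e:
    print("user sees an error page:", e)

# Lenient HTML parsing: the same input is recovered and the
# user still gets to read the content.
class Collector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

c = Collector()
c.feed(page)
print("user sees:", "".join(c.text))
```

The strict parser throws away the whole page over one bad tag; the lenient one delivers "Hello" regardless, which is exactly the user friendliness advantage at stake.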
Given that you are punishing people by failing strictly, you are
effectively engaged in a giant game of chicken with all of the other
browser vendors. A policy of passing up the user friendliness advantage
of non-strict XHTML validation is sustainable only as long as *everyone*
passes it up; the first significant browser vendor to break ranks will
necessarily cause a massive stampede for the 'graceful failure' exit.
And sooner or later someone is going to break ranks.
(This game of chicken is made more unsustainable by the fact that
Microsoft IE is not even playing yet.)
I don't think that hoping for only a trace amount of XHTML validation
failures is realistic. Even with the most optimistic view of content
generation (where all XHTML content is automatically generated or
checked), there are bugs and oversights in automatic page generators, and
for that matter bugs and oversights in validating parsers. My pessimism
says that someone is going to get something wrong sooner or later, even
in widely used software.
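As an illustration of how easily automatic generation slips, here is a hypothetical page generator (my invention, not taken from any real software) that builds XHTML by string templating, a common pattern. It is correct for most inputs but forgets to escape `&`, so ordinary user-supplied text silently produces a page that strict XML parsing rejects outright.

```python
import xml.etree.ElementTree as ET

# A hypothetical generator that templates XHTML as strings and
# forgets to escape '&' in the text it interpolates.
def render(title):
    return f"<html><body><h1>{title}</h1></body></html>"

# Perfectly ordinary user input, with a bare '&' in it.
page = render("Cats & Dogs")

try:
    ET.fromstring(page)
    print("valid XHTML")
except ET.ParseError as e:
    print("invalid XHTML:", e)
```

The generator's author may never test with an ampersand, the output validates on every page they check, and the bug still ships; this is the kind of oversight that makes trace-free XHTML unrealistic.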
(In fact my personal view is that strict XHTML validation has
survived until now only because almost no content [[actually is XHTML
XHTMLValidation]]. In the real world of the web the only commonly used
'XML' formats are syndication feeds, which are often invalid and are
never parsed with strict XML error handling by any feed reader whose
author wants actual users.)