== In practice, there are multiple namespaces for URLs

In theory, the HTTP and URI/URL standards say that all URLs live in a single namespace; it is not the case that _GET_, _POST_, and so on each use different URL namespaces, with some URLs existing only for _POST_ and some only for _GET_. In practice, I believe that web traversal software should behave as if there were two URL namespaces on websites: one for _GET_ and _HEAD_ requests, and a completely independent one for _POST_ requests. Crawling software should not issue 'cross-namespace' URL requests, because you simply can't assume that a URL that is valid in one namespace can even be *used* in the other.

This isn't very hard for _POST_ requests; not much software makes them, and there are a number of things that make it difficult to send useful _POST_ requests to URLs you've only seen in _GET_ contexts. (In theory you could try converting _GET_ requests with parameters into _POST_ form requests with the same parameters, but I suspect this would strike people as at least dangerous and questionable.)

Unfortunately I've seen at least one piece of software that went the other way, issuing _GET_ requests for URLs that only appeared as the targets of _POST_ form actions. Since it tried this inside CSpace, the requests went down in flames, because I'm cautious about anything involving _POST_ (and I get grumpy when things 'rattle the doorknobs').

(The crawler in question was called _SBIder_, from sitesell.com, and this behavior is one reason it is now listed in our [[robots.txt|]].)
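The two-namespace rule could be sketched in code roughly like this (a hypothetical illustration; the function names and the namespace mapping are my own invention, not any real crawler's API):

```python
# Hypothetical sketch: a crawler records which HTTP method each URL was
# discovered under, and refuses 'cross-namespace' requests.
from collections import defaultdict

# Map each URL to the set of HTTP methods it has been seen used with.
seen_in = defaultdict(set)

def record(url, method):
    """Record that a URL appeared in a context using this HTTP method."""
    seen_in[url].add(method.upper())

def may_fetch(url, method):
    """Allow a request only with a method from the URL's own namespace.

    GET and HEAD share one namespace; POST is a separate, independent one.
    """
    namespace = {"GET": "GET", "HEAD": "GET", "POST": "POST"}
    want = namespace[method.upper()]
    return any(namespace.get(m) == want for m in seen_in[url])

# A URL found as an <a href=...> link lives in the GET namespace:
record("https://example.org/page", "GET")
# A URL found only as a <form method=post> action lives in the POST namespace:
record("https://example.org/submit", "POST")

assert may_fetch("https://example.org/page", "HEAD")       # same namespace: fine
assert not may_fetch("https://example.org/submit", "GET")  # cross-namespace: refuse
```

Under this policy, a crawler that only ever issues _GET_ and _HEAD_ requests simply never touches URLs it has only seen as _POST_ form targets.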