You really want to put your switches in server racks
Once upon a time, not that long ago, when we were perhaps smaller and switches were certainly more expensive, we put our switches in network racks over on one side of the machine room, all of our servers in server racks, and ran cables under our machine room's raised floor from the servers to the switches. Please learn from our painful experience and don't do that; put almost all of your switches in server racks.
Yes, really. Even if this requires putting a stack of switches in a rack to get enough ports (or enough subnets, if you sensibly use one switch per subnet). Even if you need to put one switch in the front of the rack and another in the rear, just to get enough in (switches are shallow; you can usually pull this trick off).
Why you want to do this is simple. The more network cables you run under the floor, the more you discover the charms of machine room archaeology and the more time you spend trying to trace and pull old cables when you remove old machines. (That's if you pull old cables at all. Maybe you don't have the time to pull 'harmless' unused cables right now, or you'll get around to it on some slow day, or you're leaving the old cable in place for now because you're pretty sure you're going to put a new machine in the rack in a bit and would just have to re-run a cable, so let's save some work. Then it gets worse.)
Putting as many switches as necessary in your racks means that you'll run roughly one network cable per switch back to your core switch interconnect points, instead of one or more cables per server. This is a lot fewer cables under the floor (or overhead if you use overhead cable trays, and they get messy too), and that is a very good thing. It also makes it a lot easier to remove and add cables as you remove and re-add servers, which usually drastically increases the chances that you'll actually do it.
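To make the cable-count difference concrete, here's a back-of-the-envelope sketch with entirely made-up numbers (our actual machine room is different, and the exact counts don't matter, only the ratio):

```python
# Hypothetical machine room: 10 racks of 30 servers each.
racks = 10
servers_per_rack = 30

# Old approach: every server gets its own cable run under the floor
# to the network racks.
per_server_cables = racks * servers_per_rack  # 300 under-floor cables

# New approach: roughly one uplink per in-rack switch back to the core
# interconnect, with (say) two switches per rack (one front, one rear).
switches_per_rack = 2
per_rack_cables = racks * switches_per_rack   # 20 under-floor cables

print(per_server_cables, per_rack_cables)     # prints: 300 20
```

The server-to-switch cables still exist, of course, but they now live entirely inside each rack, where adding and removing them is easy; only the uplinks go under the floor.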
Four years ago, when I wrote RackNetworking, we had only just begun to think about moving from our old way of doing things to putting a bunch of switches in server racks. Since then we've almost entirely moved to the server rack approach, but we still have a number of machines that were cabled up the old under-the-floor way; every time I have to clean up after one of those machines (as I had to today), I'm reminded of how much better the new approach is.
Sidebar: our answer for uplink bandwidth
One of my concerns back in RackNetworking was uplink bandwidth from the server rack switches to the core interconnect. In practice this has not been an issue for us, because most of our machines are not heavy bandwidth consumers. We continue to run direct connections to the core interconnect switches for the few machines where we think it may actually matter; I wrote the details up in my writeup on how our network is implemented.