The problem with treating RAID arrays as single disks
A lot of hardware RAID systems (whether controller-based or SANs) like to present a multi-disk RAID array to the host operating system as a single device. While this is attractive, it can lead to hard-to-diagnose performance issues under load (well, under a random IO load).
The basic problem is that you get the best performance by keeping all of your disks busy. But when you aggregate multiple disks into what the operating system sees as one disk, the operating system schedules IO as if it were one disk, and the result is not necessarily even loading across the actual disks in the array.
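To make the unevenness concrete, here is a toy sketch (all numbers invented for the example, not any real scheduler): dispatch a pool of random IOs across a 16-disk array and look at how unevenly they land on the disks at any one instant.

```python
# Toy illustration of uneven per-disk loading under random IO.
# DISKS and OUTSTANDING are made-up example numbers.
import random
from collections import Counter

random.seed(42)

DISKS = 16
OUTSTANDING = 64  # total IOs in flight at once

# Each in-flight IO lands on a random disk (a random-IO workload).
in_flight = Counter(random.randrange(DISKS) for _ in range(OUTSTANDING))

depths = [in_flight.get(d, 0) for d in range(DISKS)]
print("average depth:", OUTSTANDING / DISKS)  # 4.0
print("min/max depth:", min(depths), max(depths))
```

Even though the average works out to 4 outstanding commands per disk, on any given run some disks will be well above that and some near idle; only the average is even.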
The usual approach is to use SCSI TCQ or the equivalent to push as many outstanding IO requests into the array as possible and let the array schedule them internally. The problem is that this counts on statistics to keep things balanced, because the array can't selectively accept TCQ requests. If one disk backlogs, more and more IOs for it will pile up in the array's pool, potentially choking out the other drives.
It may not take much of a hiccup, either, because arrays often have surprisingly low limits for how many outstanding commands you can push to them (partly because the controller itself has to store all the outstanding commands). SANs may suffer the most from this, since they often have to split the controller's pool over multiple hosts and multiple arrays.
(In general you may have remarkably low per-disk numbers; a 16-drive array set for 64 outstanding commands is only averaging 4 outstanding commands per drive, for example.)
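You can see the choking effect with another toy model (again, all numbers invented): a shared pool of 64 command slots over 16 disks, where disk 0 is ten times slower than the rest. Completed commands are immediately replaced with new random IOs, so the pool stays full; the question is who ends up holding the slots.

```python
# Toy model of a shared command pool with one slow disk.
# DISKS, POOL, TICKS, and the 10x slowdown are made-up numbers.
import random
from collections import deque

random.seed(1)

DISKS, POOL, TICKS = 16, 64, 10_000
SLOW = 0
cost = [10 if d == SLOW else 1 for d in range(DISKS)]  # ticks per command

queues = [deque() for _ in range(DISKS)]

def submit():
    # A new random IO takes a slot, aimed at a random disk.
    d = random.randrange(DISKS)
    queues[d].append(cost[d])

for _ in range(POOL):
    submit()

for _ in range(TICKS):
    done = 0
    for d in range(DISKS):
        if queues[d]:
            queues[d][0] -= 1        # each disk works on its head command
            if queues[d][0] == 0:
                queues[d].popleft()  # command finished, slot freed
                done += 1
    for _ in range(done):            # the host refills the pool at once
        submit()

depths = [len(q) for q in queues]
print("slow disk holds", depths[SLOW], "of", sum(depths), "slots")
```

The slow disk only gets 1/16th of the new IOs, but because it drains them ten times slower, its backlog grows until it occupies almost the entire pool and the fast disks are starved for commands.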
Even when controllers are capable of lots of outstanding commands, the operating system sees the array as a single disk, and many operating systems default to relatively low per-disk TCQ limits (because that is what makes sense for real, physical disks). In fact, a lot of OS-level queues are sized to be sensible for physical disks, and so may need expanding when your 'disk' is actually a big array.
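On Linux, for instance, two of the relevant limits live in sysfs: the block layer's request queue size (`queue/nr_requests`) and the SCSI device's command queue depth (`device/queue_depth`). The sysfs paths here are the standard Linux ones; the helper functions are my own illustration, and they simply return None when a file isn't there (non-Linux system, or no such device).

```python
# Read a device's per-disk queue limits from Linux sysfs.
# The sysfs paths are real Linux ones; the helpers are illustrative.
from pathlib import Path

def read_sysfs_int(path):
    try:
        return int(Path(path).read_text().strip())
    except (OSError, ValueError):
        return None  # file missing or unreadable

def queue_limits(dev):
    return {
        # block-layer request queue size
        "nr_requests": read_sysfs_int(f"/sys/block/{dev}/queue/nr_requests"),
        # SCSI command queue depth (TCQ), if the device has one
        "queue_depth": read_sysfs_int(f"/sys/block/{dev}/device/queue_depth"),
    }

print(queue_limits("sda"))
```

Both values are writable through the same files, which is how you would raise them for a 'disk' that is really a large array behind a capable controller.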
Also, even if you successfully push lots of commands into the array, you've moved IO scheduling from the operating system to the array, which means that any smart IO scheduling the OS is trying to do is ineffective. (In extreme situations, ordering guarantees may require the OS to stall write IO to the array.)
While one might say that this is no different from how modern disks hide their internal geometry from the outside world, it's not; however complex they are internally, modern disks don't have multiple (fully) independent mechanisms. By contrast, disk arrays fundamentally need to be driven in parallel to deliver their performance.
I wish I had some nice pat conclusion for all this, but I don't. Having OSes see RAID arrays as single disks isn't going to go away any time soon, so all I can suggest is keeping your eyes open about the resulting issues.