How we set up our iSCSI target servers
The SAN backend for our new fileserver environment is a set of Linux servers acting as iSCSI targets. There are two sides to their setup: the hardware and the software.
Hardware-wise, the biggest problem was that there are essentially no generic servers (especially inexpensive ones) that have enough disks. So we punted; we're using 'generic' 1U servers, specifically SunFire X2100 M2s, with the iSCSI data disks in external eSATA-based enclosures. Our current enclosures give us twelve (data) disks in about 5U (counting both the enclosure and the server) at a reasonable cost.
(I would give a brand name but our current supplier recently told us they'd discontinued the model we had bought, so we're evaluating new enclosures from various people.)
Each server is connected to a single enclosure with a PCI-E card; the current cards are SiI 3124-based. This requires a kernel that supports (e)SATA port multipliers, which only really appeared in 2.6.24 (for SiI-based cards) or 2.6.26 (for Marvell-based ones), and it means that we have to build our own custom kernels.

Software-wise, the servers run Red Hat Enterprise Linux 5 with the custom kernel and IET added as the iSCSI target mode driver, and they are configured with mirrored system disks just in case.
(Recent versions of RHEL 5 have some support for SiI-based (e)SATA port multipliers, but 2.6.25 is what we've tested with and its support seems to work better.)
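As an illustration of what the custom kernel needs over a stock one, the relevant build options look roughly like the following. The option names here are from memory of the 2.6.2x kernel series and may not be exact, so treat this as a sketch rather than a recipe:

    # SATA port multiplier support in libata, plus the controller drivers;
    # option names are approximate and can vary between kernel versions.
    CONFIG_SATA_PMP=y       # (e)SATA port multiplier support
    CONFIG_SATA_SIL24=y     # Silicon Image 3124/3132 controllers
    CONFIG_SATA_MV=y        # Marvell controllers, if you go that way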
We don't do anything fancy with the iSCSI data disks (no software RAID, no LVM). Each disk is partitioned into equal chunks of approximately 250 GBytes each and then exported via iSCSI; we make each disk a separate target and each partition a LUN on that target. This makes life simpler when managing space allocation and diagnosing problems. (It also makes the IET configuration file contain a lot of canned text.)
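As a sketch of what this looks like in IET's ietd.conf, the entry for one data disk is simply a target with one LUN per partition. The IQN, device names, and partition count below are made up for illustration:

    # One target per physical data disk, one LUN per ~250 GB partition.
    # The target name and device paths are illustrative, not our real scheme.
    Target iqn.2008-10.com.example.san:disk01
        Lun 0 Path=/dev/sdb1,Type=blockio
        Lun 1 Path=/dev/sdb2,Type=blockio
        Lun 2 Path=/dev/sdb3,Type=blockio

Repeat that stanza for every disk and you can see where all the canned text comes from.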
We've decided to handle iSCSI as if it were a SAN, so we do not run iSCSI traffic over our regular switch fabric and VLANs. Instead, all of the iSCSI servers are connected to two separate iSCSI 'networks', which in this case means a dedicated physical switch and an unrouted private subnet for each; this gives us redundancy in case of switch, cable, or network port failure, and some extra bandwidth. Each server also has a regular 'management' interface on one of our normal internal subnets, so that we can ssh in to them and so on.
(Since they are X2100s, they also have a dedicated remote management processor on yet another internal subnet.)
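To illustrate the network setup (the interface names and addresses here are hypothetical), the RHEL 5 configuration for one of the dedicated iSCSI interfaces is just a static address on an unrouted private subnet, with no gateway:

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- first iSCSI 'network'
    # (device name and RFC 1918 addresses are hypothetical)
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.100.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    # deliberately no GATEWAY= line; this subnet is not routed anywhere

The second iSCSI interface looks the same on a different unrouted subnet, and the management interface is an ordinary interface on one of our internal subnets.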