Dell PowerVault MD32xxi Configuration Guide for VMware ESX4.1 Server Software
3. More than one Network Interface Card (NIC) set aside for iSCSI traffic
4. No Distributed Virtual Switch (DVS) for iSCSI traffic
Not every environment requires all of the steps detailed in this whitepaper.
Users who only want to enable Jumbo Frame support for the iSCSI connection need to follow steps 1 and 2 with the following changes:
Step 1: Configure vSwitch and Enable Jumbo Frames. Follow the instructions with no changes.
Step 2: Add iSCSI VMkernel Ports. Instead of assigning multiple VMkernel ports, assign only a single VMkernel port.
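As an illustrative sketch only, this jumbo-frame-only path could look like the following from the ESX service console. The vSwitch name, NIC name, port group name, and IP address below are placeholders for your environment:

    # Step 1: create a vSwitch and raise its MTU to 9000 to enable jumbo frames
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -m 9000 vSwitch2
    # Link a physical NIC set aside for iSCSI traffic to the vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch2
    # Step 2: add one port group and a single VMkernel port, also at MTU 9000
    esxcfg-vswitch -A iSCSI1 vSwitch2
    esxcfg-vmknic -a -i 192.168.130.11 -n 255.255.255.0 -m 9000 iSCSI1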
Once these two steps are done, the rest of the configuration can be accomplished in the vCenter GUI by
attaching NICs, assigning storage and then connecting to the storage.
The rest of this document assumes the environment will be using multiple NICs and attaching to a Dell
PowerVault SAN utilizing Native Multipathing (NMP) from VMware.
Establishing Sessions to a SAN
Before continuing with the examples, we must first discuss how VMware ESX4.1 establishes its connection to the SAN using the new vSphere4 iSCSI Software Adapter. VMware uses VMkernel ports as the session initiators, so each port that will serve as a path to the storage must be configured. The number of VMkernel ports is independent of the number of network interfaces, although in most configurations it will be a one-to-one relationship. Once these sessions to the SAN are initiated, the VMware NMP takes care of load balancing and spreading the I/O across all available paths.
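For example, assuming the software iSCSI adapter enumerates as vmhba33 and that VMkernel ports vmk1 and vmk2 have been created for iSCSI traffic (both names vary by host), each port is bound to the adapter so that it acts as a session initiator:

    # Bind each iSCSI VMkernel port to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    # Confirm which VMkernel ports are bound to the adapter
    esxcli swiscsi nic list -d vmhba33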
Each volume on the PowerVault array can be used by ESX4.1 as either a Datastore or a Raw Device Map (RDM). To communicate with a volume, the iSCSI software adapter uses the VMkernel ports that were created and establishes a session to the SAN for that volume. With previous versions of ESX, this session was established over a single NIC path, and any additional NICs were there for failover only. With the improvements to vSphere4 and MPIO, administrators can now take advantage of multiple paths to the SAN for greater bandwidth and performance. This does require some additional configuration, which is discussed in detail in this whitepaper.
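As a sketch of that additional configuration, once sessions are established the NMP path selection policy for a volume can be switched to round robin so that I/O is spread across all paths. The naa device identifier below is a hypothetical placeholder; substitute the volume's real identifier:

    # List devices with their current path selection policy
    esxcli nmp device list
    # Switch a PowerVault volume to round robin (naa identifier is a placeholder)
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR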
Each VMkernel port is bound to a physical adapter. Depending on the environment, this can create anywhere from a single session to a volume up to 8 sessions (the ESX4.1 maximum number of connections per volume). For a normal deployment, it is acceptable to use a one-to-one (1:1) ratio of VMkernel ports to physical network cards. This means that with 3 physical NICs, you would establish 1 VMkernel port per physical NIC and associate a separate NIC with each VMkernel port, as shown in the sketch after this paragraph. In this example you would establish 3 sessions to a single volume on the SAN. This scheme can be expanded depending on the number of NICs in the system. As the environment grows, you can establish multiple sessions to the SAN by oversubscribing VMkernel ports to the physical NICs; this creates additional sessions to a volume while still using the same physical NICs as the means to reach it. As more PowerVault members are added, intelligent routing will come into the picture and allow for dynamic allocation of sessions as the SAN group grows.
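A minimal sketch of the 1:1 association for three NICs, assuming port groups iSCSI1 through iSCSI3 and uplinks vmnic1 through vmnic3 already exist on vSwitch2 (all names are placeholders): each port group is stripped of every uplink except its own, using the del-pg-uplink (-N) option of esxcfg-vswitch, so each VMkernel port travels over exactly one physical NIC.

    # Leave each port group exactly one active uplink
    esxcfg-vswitch -p iSCSI1 -N vmnic2 vSwitch2
    esxcfg-vswitch -p iSCSI1 -N vmnic3 vSwitch2
    esxcfg-vswitch -p iSCSI2 -N vmnic1 vSwitch2
    esxcfg-vswitch -p iSCSI2 -N vmnic3 vSwitch2
    esxcfg-vswitch -p iSCSI3 -N vmnic1 vSwitch2
    esxcfg-vswitch -p iSCSI3 -N vmnic2 vSwitch2

With this layout, binding the three VMkernel ports to the software iSCSI adapter as shown earlier results in three sessions to each volume.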