Fundamental iSCSI and MD3000i concepts

Hi all

I am trying to understand the concepts in an MS Windows environment, and especially the MD3000i iSCSI environment, and the online help does not help much. I understand that you need an initiator on a host to connect to a target. A target is a SAN presenting one large logical unit number (LUN) or multiple LUNs. A target may have one or two gigabit Ethernet ports, and each network adapter on the target must have a unique IP address. This is the case with an MD3000i with one controller, or a "simplex" connection. Let's say I set up 4 disk groups: exchange, sharepoint, sql, and files. These 4 disk groups will be mapped to 4 servers. Each server has two network adapters: one on the intranet and one on the iSCSI network. As I understand it, I am only using a single NIC on the MD3000i if I enter that one target IP address when configuring the initiator on all 4 servers. This means a 1 Gb connection is shared among 4 servers. Is this correct?
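For reference, here is roughly what I plan to run on each of the 4 servers (the portal IP below is just an example from my plan, and the target IQN would come out of ListTargets):

    :: Microsoft iSCSI initiator command line, one portal for all 4 servers
    iscsicli QAddTargetPortal 192.168.130.101
    iscsicli ListTargets
    iscsicli QLoginTarget <target-iqn-from-ListTargets>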

Second, by presenting each disk group only to the appropriate host, do I prevent one host from 'touching' another host's disk group?

For example, using the second NIC on the MD3000i, I could split the load in half: point two of the servers to the new IP address in the MS initiator and map the disk groups to those two hosts accordingly in the MD3000i. Is this a correct concept?

Is it possible to combine the two network ports on the MD3000i so that the total throughput/bandwidth is 2 Gb/s?

Now we have a second controller, which makes things more confusing for me. At this point, I'd like to note that I am not talking about high availability of applications. With one controller, that controller owns all 4 disk groups. If controller 0 fails, I lose access to the data. With two controllers, the two controllers are configured active/active. My understanding is that controller 0 can now own one or all four disk groups (or own disk groups 1 and 2 while controller 1 owns the remaining disk groups 3 and 4). So in this case, my servers must point to the two different IP addresses across the two controllers. For simplicity, all IP addresses are on the same subnet. Assuming controller 0 fails, disk groups 1 and 2 change ownership to controller 1, which then owns all 4 disk groups. However, servers 1 and 2 will lose iSCSI network connectivity. To get the data back, I'll have to reconfigure the iSCSI initiator to point at an IP address of controller 1. The question is: which IP addresses on the two network adapters of controller 1?

I understand that if I want to achieve high availability of applications, I need 2 network cards on each server dedicated to iSCSI, one path to each of the two controllers.
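From what I've read, on Windows Server 2008 R2 the inbox MPIO feature can be enabled for the iSCSI sessions roughly like this (just my notes, not verified; Dell's own DSM that ships with the Modular Disk Storage Manager may be the supported path for the MD3000i):

    :: claim all SPC-3 compliant storage devices for MPIO; -r schedules a reboot
    mpclaim -r -i -a ""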

I read this article and it confused me even more:

http://support.Dell.com/support/eDOCS/systems/MD3000i/multlang/TS/FR483-info%20update/FR483A00MR.PDF

According to this article: "the two network ports on a controller can be aggregated to provide optimal performance. A host can be configured to simultaneously use the bandwidth of both ports on a controller for virtual disks owned by that controller". Does this mean the total throughput is 2 Gb/s? In my case, is that possible when each server has only one NIC on the iSCSI network? And how do I configure the host to take advantage of this?
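If aggregation really works that way, my guess at the host side would be something like the following (the IP addresses are placeholders for the two ports of the owning controller; please correct me if this is wrong):

    :: register both portal IPs of the owning controller, then log in once per portal
    iscsicli QAddTargetPortal 192.168.130.101
    iscsicli QAddTargetPortal 192.168.130.102
    :: I believe this sets the global MPIO load-balance policy to round robin (2);
    :: double-check the mpclaim syntax for your OS version
    mpclaim -l -m 2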

I know that I have a lot of questions, since it is a new concept for me. Thanks in advance for your time.


Tags: Dell Products

Similar Questions

  • vSphere and Dell MD3000i iSCSI design

    I installed an ESX 4 server (the first of three for this cluster) and a Dell MD3000i iSCSI SAN connected through two Dell PowerConnect 2816 switches.

    Now I am reading the VMware 'iSCSI SAN Configuration Guide', and I wanted to make sure I did the design correctly.

    I have connected one ESX pNIC (vmnic1) to one of the pSwitches and another (vmnic5) to the second. The SAN's iSCSI ports are connected to these pSwitches so that one switch carries the 10.10.10.x subnet and the other the 10.10.11.x (see IP plan below).

    The two pNICs have a vSwitch each (nothing shared), one port group each, and are configured on different IP subnets (see attached screenshot).

    IP plan:

    Port group iSCSI-10 (VMkernel): 10.10.10.1

    Port group iSCSI-11 (VMkernel): 10.10.11.1

    SAN controller 0-0: 10.10.10.10

    SAN controller 0-1: 10.10.11.10

    SAN controller 1-0: 10.10.10.11

    SAN controller 1-1: 10.10.11.11

    Using this configuration, I could see four different paths for each LUN, and I was able to make the ESX server fail over to the other switch/controller by unplugging the vmnic1 and vmnic5 network cables, one at a time.

    Then I found the configuration guide's section on 'esxcli' and multipathing configuration, which I carried out according to the instructions ("esxcli swiscsi nic add -n vmk0 -d vmhba34" and "esxcli swiscsi nic add -n vmk1 -d vmhba34").

    Now I have eight different paths for each LUN, three of which are 'Active' and one 'Active (I/O)' (see my attached screen capture).

    So, my questions are:

    1. Are eight paths to each LUN really normal in iSCSI designs?

    2. Are my vSwitch/pNIC/pSwitch design choices correct?

    Thanks in advance!

    / Anders

    @hennish,

    As long as your new design continues to meet your failover and workload needs, your infrastructure design should be fine.

    But you might want to take a look at this Dell design. In particular, step 12, which configures multipathing with the round-robin policy using the VI client GUI for each LUN exposed to the ESX server. This enables balancing across the two active paths to the LUN. After completing this step, you should see 2 'Active (I/O)' paths.
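    For reference, the equivalent from the ESX 4 command line should be along these lines (the device ID below is a placeholder; take the real one from the device list first):

        esxcli nmp device list
        esxcli nmp device setpolicy --device <naa.xxxxxxxx> --psp VMW_PSP_RR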

    Kind regards

    KongY@Dell Inc.

  • How to segregate VMs from each other AND management & iSCSI network traffic?

    I want to test how I can keep VMs isolated from each other AND from the management and iSCSI network segments... It's just a proof of concept, so I will not be using redundant paths. I want to know how I can keep everything sheltered from would-be attackers; that is, if one of the virtual machines becomes compromised, how could I stop any would-be attacker from traveling to the other virtual machines or to the management/iSCSI networks? Is there a way to put each virtual machine in its own VLAN on the ESX host? Or is there a better way?

    For the physical hardware installation - does this sound right?

    http://www.gliffy.com/pubdoc/1608095/L.jpg

    TahoeTech wrote:

    "So, to make sure that I understand...". I'll install VMware ESX on

    the physical host machine and then create 3 vSwitches which will be

    related to 1 physical NETWORK card by vSwitch? "

    Since I only need the DMZ network available to the VMs - do I still need 3 vSwitches?

    Yes: one for virtual machines, one for management, and one for iSCSI. You could combine down to 2 vSwitches by putting management and iSCSI together, but it is not recommended.
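    As a rough sketch from the service console (classic ESX; the switch names, NICs, and VLAN ID are examples only):

        # one vSwitch per role, each bound to its own pNIC
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic0 vSwitch1          # management
        esxcfg-vswitch -a vSwitch2
        esxcfg-vswitch -L vmnic1 vSwitch2          # iSCSI
        esxcfg-vswitch -A "iSCSI" vSwitch2
        esxcfg-vswitch -a vSwitch3
        esxcfg-vswitch -L vmnic2 vSwitch3          # VM traffic
        esxcfg-vswitch -A "VM-DMZ" vSwitch3
        esxcfg-vswitch -v 100 -p "VM-DMZ" vSwitch3 # per-VM isolation via VLAN ID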

    Does the host machine present the iSCSI storage to the virtual machines as a "disk"?

    More or less, yes. When you create a virtual machine, you configure the disks for that virtual machine. You will be asked which datastore to use, and you choose the iSCSI datastore that you created when you set up iSCSI and the datastore.

    The host uses the vSwitches? I thought that the host machine would use the physical NICs directly... i.e. pNIC 1 would be management, pNIC 2 would be iSCSI, and pNIC 3 would be linked to a vSwitch for the virtual machines?

    Yes, the host uses the vSwitches; after all, the console etc. is a virtual machine itself.

    Virtual machines will never see a pNIC. You assign a vNIC to the virtual machine when you create it. The vNIC is connected to a port group, which in turn is connected to a vSwitch, which in turn is connected to a pNIC.

    The virtual machines do NOT need access to the management or iSCSI networks (I don't think)?

    Correct.

    The management network connects to the physical ESX HOST (pNIC 1) in order to control the virtual machines, correct?

    The management network gives you access to control everything via vCenter, the VI client, or the command line.

    And the iSCSI network connects to the physical ESX HOST (pNIC 2), where the ESX host will present datastores to the virtual machines as 'physical' disks, correct?

    Yes, it will look just like a physical disk to the VM operating system.

    I guess what I'm trying to understand, or how I see it, is that the ESX host is the only machine that can 'see' the iSCSI network?

    The VMkernel manages iSCSI seamlessly. Guests don't need to know anything about iSCSI.

    Virtual machines see the drives that the ESX host presents? So the virtual machines do not need access to the iSCSI network (unless I need to attach additional storage or shared drives, etc.)?

    Correct.

    Any idea when VI4 is scheduled to be released?

    Rumor is soon. But unless there is a feature that radically changes your design, I wouldn't get too worried about it.

  • Poor performance with TL2000 LTO6 and iSCSI bridge

    Hi, I have a serious performance problem with a TL2000 (LTO6) and an iSCSI bridge.
    Both the TL and the iSCSI bridge run the latest firmware (as of March 29, 2016).

    The backup server is running Windows 2008 R2 SP1 with the latest updates installed. The server uses the driver recommended for the TL.
    The backup software is NetVault 11.

    The TL is connected to the bridge using the provided SAS cable, and the bridge is connected to the server through an N4032 switch (latest firmware here too), using a separate VLAN with jumbo MTU enabled (on the bridge and on the iSCSI NICs in the server too).

    I configured two NICs on the bridge: same subnet, jumbo MTU, and different IP addresses.
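    As a sanity check on the jumbo path, I believe a do-not-fragment ping of 8972 bytes (9000 minus 28 bytes of IP/ICMP headers) from the Windows server should get through if every hop is really at MTU 9000 (the bridge IP is a placeholder):

        ping -f -l 8972 <bridge-ip>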

    I used this guide to connect to the server: www.dell.com/.../EN , and I followed the user guide to configure the iSCSI bridge.

    The problem is that backup performance is unacceptable: backups to tape run at 5 MB/s, while the same backups to disk work perfectly.

    I tried with jumbo MTU on and off, with a single iSCSI connection and with multiple iSCSI connections, using a single NIC on the bridge and then both together again. I tried all the driver versions available on Dell Support for the TL2000. I restarted the TL, the iSCSI bridge, and the server too: tape backups stay incredibly slow.

    The IBM diagnostics tool shows the library and the tape drive as OK.

    If I use a single iSCSI connection, total throughput is 5 MB/s; if I use two connections, total throughput is still 5 MB/s, about 2.5 MB/s per connection.

    I tried disabling TUR (although there is only a single server connected to the TL2000), but afterwards NetVault lost the drive, and in Windows Device Manager the drive showed a yellow exclamation point, saying that the driver could not be initialized.

    Any idea or suggestion about this problem?

    Thank you in advance!

    The cause was a known bug in the latest TL2000 firmware version (D.10, A22). We had to go back to C.30, and then backups started running at 40-50 MB/s.

  • Max size for NFS and VMFS (iSCSI, FCP) datastores on vSphere 4.1 and 5.0

    Hello

    What is the maximum size for NFS and VMFS (iSCSI and FCP) datastores created on vSphere 4.0, 4.1, and 5.0?

    Thank you

    Tony

    Hi Tony,

    You should find the answers in the various Configuration Maximums documents:

    Good luck.

    Regards,

    Franck

  • VMware vSphere Essentials Plus and Dell MD3000i iSCSI - virtual disk not on preferred path

    Hello everyone,

    I am trying to set up our new VMware vSphere Essentials Plus installation. We have 2 x Dell R710 servers running VMware ESX4 and an MD3000i SAN.

    Each server has one network cable to each SAN controller.

    vmserver1 vmnic6, IP 192.168.130.110, connects to controller 0 at IP 192.168.130.101

    vmserver1 vmnic4, IP 192.168.130.111, connects to controller 1 at IP 192.168.130.102

    vmserver2 vmnic6, IP 192.168.131.110, connects to controller 0 at IP 192.168.131.101

    vmserver2 vmnic4, IP 192.168.131.111, connects to controller 1 at IP 192.168.131.102

    Whenever I start a virtual server, the Dell MD3000i starts flashing orange, and the management tool says the virtual disk is not on the preferred path.

    After going into the configuration tool and reassigning the preferred path, starting a VM causes the error again, even with only a single server connected.

    I tried configuring VMware to use both the fixed and the alternate paths.

    I did a quick search on Google and found these sites:

    http://www.delltechcenter.com/page/VMware ESX4.0andPowerVault MD3000i

    http://en.community.Dell.com/support-forums/storage/f/1216/t/19269507.aspx

    Strange configuration.

    Looks like you are using crossover cables?

    Each host should have 2 vSwitches, each with a VMkernel port on one of the two iSCSI networks.

    Each host must be able to reach all 4 MD3000i interfaces (test with the vmkping command).
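    For example, from each host (using the addresses from your post):

        vmkping 192.168.130.101
        vmkping 192.168.130.102
        vmkping 192.168.131.101
        vmkping 192.168.131.102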

    See also:

    http://www.delltechcenter.com/page/VMware ESX4.0andPowerVault MD3000i

    André

  • Setting up iSCSI between ESXi host and MD3000i, firewall option missing

    I am trying to configure iSCSI access, and the MD3000i manual points me to the firewall option in the VI Client. However, when I go to Configuration -> Security Profile, the firewall option appears briefly and then disappears.

    I'm having problems getting the host recognized in the iSCSI host table, and I think this may be due to the firewall.

    Hello

    I believe that in the latest versions of ESX (3.5u3 for sure), port 3260/tcp (which you need for iSCSI) is opened automatically when you turn on iSCSI (software iSCSI, that is).

    You should be able to connect using the VI client, click the host in question, select the 'Configuration' tab, and then the 'Security Profile' item. That should show all the open ports.

    Remember that when you use iSCSI, you need both a VMkernel port and a Service Console connection in your IP storage LAN; that is, two IPs per ESX host. The Console initiates the iSCSI link, and the VMkernel carries the iSCSI connection data.
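    On classic ESX you can check and open the software iSCSI client ports from the service console like this (not applicable to ESXi, see below):

        esxcfg-firewall -q swISCSIClient
        esxcfg-firewall -e swISCSIClient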

    EDIT: Sorry, I only now noticed that you are using ESXi. In that case, you won't need a Service Console port (because ESXi doesn't have one).

    Visit my blog at http://erikzandboer.wordpress.com

  • vSphere 4.1 and Dell MD3000i iSCSI RAID firmware version

    We had a problem upgrading one of our VM servers to version 4.1 last week; it seems the firmware version on the Dell iSCSI RAID SAN is not compatible, and our 4.1 server is not able to see the SAN. Has anyone encountered this? Does anyone know if Dell has released an updated firmware version yet?

    Thanks, Tandrist

    Gen 1 vs. Gen 2 is not a hardware difference, it's the firmware (6.x vs. 7.x). At some point a few years back, they decided that the firmware they had created was too buggy to fix, so they scrapped it and started over. All units purchased in the last two years should have shipped with Gen 2 firmware, and if you have an older one, it is easily (but slowly) upgraded.

    My system is running 07.35.22.61 and it works fine.

    EDIT: Have you checked that the iSCSI initiator name did not change during the upgrade? Don't forget, the Dell array determines which LUNs to show you by how the client identifies itself.
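    Going from memory, you can read the current initiator name on the ESX side with something like this (the vmhba number is a placeholder for your software iSCSI adapter):

        vmkiscsi-tool -I -l vmhba33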

    Jason Litka

    http://www.jasonlitka.com

  • Failed to connect to the MD3000i iSCSI

    Hello!

    Our company bought an MD3000i storage array and a PowerConnect 5448 switch, and we have some OptiPlex 360-390 and 745-755 machines and a workstation. I installed everything step by step, I can configure the storage with MD Storage Manager, and I installed the Microsoft iSCSI initiator.

    The Ethernet ports and iSCSI ports have obtained their IP addresses, and the initiator name matches the host's initiator name.

    Well, the problem is that the initiator cannot connect to the MD3000i.

    What's wrong?


  • MD3000i iSCSI host RHEL4 connection problems

    I have an MD3000i directly connected to a PE1750 running RHEL 4.5. I have the iscsi utils installed, and to the best of my knowledge the MD3000i is configured correctly, but I am not able to fdisk the storage on the server. In the logs I get this when I start iscsi:

    Jul 24 16:24:20 dev3 kernel: sdb: unknown partition table

    Jul 24 16:24:20 dev3 kernel: Attached scsi disk sdb at scsi9, channel 0, id 0, lun 31

    Jul 24 16:24:20 dev3 kernel: Attached scsi generic sg3 at scsi9, channel 0, id 0, lun 31, type 0

    Jul 24 16:24:20 dev3 scsi.agent[6168]: disk at /devices/platform/host9/target9:0:0/9:0:0:31

    Jul 24 16:24:23 dev3 kernel: iscsi-sfnet:host10: Connect failed with rc -113: No route to host

    Jul 24 16:24:23 dev3 kernel: iscsi-sfnet:host10: establish_session failed. Could not connect to the target

    Jul 24 16:24:23 dev3 kernel: iscsi-sfnet:host10: Waiting 1 second before the next connect attempt

    Jul 24 16:24:27 dev3 kernel: iscsi-sfnet:host10: Connect failed with rc -113: No route to host

    Jul 24 16:24:27 dev3 kernel: iscsi-sfnet:host10: establish_session failed. Could not connect to the target

    Jul 24 16:24:27 dev3 kernel: iscsi-sfnet:host10: Waiting 1 second before the next connect attempt

    Jul 24 16:24:31 dev3 kernel: iscsi-sfnet:host10: Connect failed with rc -113: No route to host

    Jul 24 16:24:31 dev3 kernel: iscsi-sfnet:host10: establish_session failed. Could not connect to the target

    Jul 24 16:24:31 dev3 kernel: iscsi-sfnet:host10: Waiting 1 second before the next connect attempt

    Jul 24 16:24:35 dev3 kernel: iscsi-sfnet:host10: Connect failed with rc -113: No route to host

    Jul 24 16:24:35 dev3 kernel: iscsi-sfnet:host10: establish_session failed. Could not connect to the target

    Jul 24 16:24:35 dev3 kernel: iscsi-sfnet:host10: Session giving up after 3 attempts
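    For what it's worth, 'No route to host' makes me suspect the direct-connect interface itself, so this is what I am checking (the target IP is a placeholder for my MD3000i port):

        # basic reachability and routing to the MD3000i iSCSI port
        ping -c 3 <md3000i-port-ip>
        route -n
        # RHEL4's iscsi-sfnet initiator takes its portal from /etc/iscsi.conf:
        #   DiscoveryAddress=<md3000i-port-ip>
        service iscsi restart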


  • Accessing a Dell MD3000i iSCSI SAN from an ESXi 4 Update 1 custom server

    Hello

    I'm brand new to VMware ESX/ESXi products and, for the most part, to virtualization. I have configured several virtual machines with servers 1 and 2. I am trying to connect an ESXi 4 server to a SAN and am starting to become very confused about what I can and cannot do. I hope someone here can break it down for me, because I couldn't find the answer to my specific questions.

    Here's my setup:

    I have a Dell PowerEdge R710 with ESXi 4 Update 1 'Dell Custom' (whatever that means) installed.

    I have a Dell MD3000i SAN using iSCSI with a pre-configured dedicated virtual disk.

    I would like to create virtual machines on said virtual disk.

    I think that ESXi 4 is completely free.

    It seems that to manage the server, I need to use 1 of 2 products: the vSphere Client or the VMware Go web application.

    If I use vSphere:

    I have the ability to add storage through the user interface.

    I'm on a 60-day trial. I thought that it was free.

    If I configure my iSCSI via vSphere and then the eval expires... what happens? Does it all blow up on me, and do I then have to purchase a license to get it back?

    If I use VMware Go:

    It seems I don't really have any options to do any kind of iSCSI configuration in VMware Go. Do I?

    I also have a XenServer configured this way, and it's all free. I wanted to try VMware's bare-metal solution, but now I wonder if I'll have to go back to Xen.

    Can someone please relieve me of my confusion before I have to pull the plug on this experiment? Maybe direct me to the proper documentation on how to configure what I need with what I have, at no cost if possible.

    Thanks a lot for your comments and have a wonderful weekend!

    I'd go through the QuickStart webcast and the ESXi webcast to help get a better understanding.

    Here is a configuration guide for the MD3000i: http://www.dell.com/downloads/global/solutions/esx_storage_deployment_guide_v1.pdf

  • Need help with testing and practicing server concepts at home

    Original title: Hello team

    I need assistance with testing and practicing server concepts at home... I don't have the option of a test laboratory. Please suggest possible ways that I can practice... need help... Please help.

    Hi Sachinadi,

    You can ask your question in the MSDN Forums: http://msdn.microsoft.com/en-us/hh361695.aspx

    Thank you.

  • Port-groups, vSphere 5 and Jumbo (iSCSI) frames

    We will implement a UCS system with EMC iSCSI storage. Since this is my first time, I'm a little unsure about the design, although I have acquired a lot of knowledge reading this forum and elsewhere.

    We will use the 1000V.

    1. Is it allowed to use only ONE uplink port profile with the following traffic types: mgmt, vMotion, iSCSI, VM network, external network?

    My confusion here is about jumbo frames. Shouldn't we have a separate uplink for that traffic? In this design, are all frames using jumbo frames (or is this set per port group?)

    I read something about using a Class of Service for extended frames. Maybe that's the idea here.

    2. I read in a thread not to include mgmt and vMotion in the 1000V, and to put them on a vSS instead. Is this correct?

    In this case, the uplink design would be:

    1: Mgmt + vMotion (2 vNIC, VSS)

    2: iSCSi (2 vNIC, 1000v)

    3 data VM, external traffic (2 vNIC, 1000v)

    All NICs set as active, with virtual port ID teaming.

    Answers inline.

    Kind regards

    Robert

    Atle Dale wrote:

    I have 2 follow-up questions:

    1. What is the reason I cannot use a 1000V uplink profile for vMotion and management? Is it just for simplicity that people do it that way? Or can I do it if I want? What do you do?

    [Robert] There is no reason.  Many customers run all their virtual networking on the 1000v.  This way they don't need vmware admins to manage virtual switches - keeps it all in the hands of the networking team where it belongs.  Management Port profiles should be set as "system vlans" to ensure access to manage your hosts is always forwarding.  With the 1000v you can also leverage CBWFQ which can auto-classify traffic types such as "Management", "Vmotion", "1000v Control", "IP Storage" etc.

    2. Shouldn't I use MTU size 9216?

    [Robert] UCS supports up to 9000 plus assumed overhead. Depending on the switch, you'll want to set it at either 9000 or 9216 (whichever it supports).

    3. How do I do this step:

    "Ensure the switches north of the UCS Interconnects are marking the iSCSI target return traffic with the same CoS marking as UCS has configured for jumbo MTU. You can use one of the other available classes on UCS for this - Bronze, Silver, Gold, Platinum."

    Does the Cisco switch also use the same terms "Bronze", "Silver", "Gold", or "Platinum" for the classes? Should I configure the trunk with the same CoS values?

    [Robert] Plat, Gold, Silver, and Bronze are user-friendly names used in UCS Classes of Service to represent a definable CoS value between 0 and 7 (where 0 is the lowest value and 6 is the highest usable value; CoS 7 is reserved for internal traffic). A CoS value of "any" equals best effort. Weight values range from 1 to 10. The bandwidth percentage can be determined by adding the channel weights for all channels, then dividing the channel weight you wish to calculate the percentage for by the sum of all weights.

    Example. You have UCS and an upstream N5K with your iSCSI target directly connected to an N5K interface. If your vNICs were assigned a QoS policy using "Silver" (which has a default CoS 2 value), then you would want to do the same upstream by a) configuring the N5K system MTU to 9216 and b) tagging all traffic from the iSCSI array target's interface with CoS 2. The specifics for configuring the switch depend on the model and SW version; an N5K is different from an N7K, and both differ from IOS. Configuring jumbo frames and CoS marking is pretty well documented all over.

    Once UCS receives the traffic with the appropriate CoS marking it will honor the QoS and dump the traffic back into the Silver queue. This is the "best" way to configure it but I find most people just end up changing the "Best Effort" class to 9000 MTU for simplicity sake - which doesn't require any upstream tinkering with CoS marking.  Just have to enable Jumbo MTU support upstream.
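    For illustration, the N5K jumbo MTU piece usually looks something like this (a sketch only; exact syntax varies by NX-OS version, and the CoS marking policy is separate):

        policy-map type network-qos jumbo
          class type network-qos class-default
            mtu 9216
        system qos
          service-policy type network-qos jumbo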

    4. Concerning the N1K: Jason Nash has said to include vMotion in the system VLANs. You do not recommend this in previous threads. Why?

    [Robert] You have to understand what a system VLAN is first. I've tirelessly explained this in various posts. System VLANs allow an interface to always be forwarding. You can't shut down a system VLAN interface. Also, when a VEM is rebooted, a system VLAN interface will be forwarding before the VEM attaches to the VSM to securely retrieve its programming. Think of the chicken & egg scenario: you have to be able to forward some traffic in order to reach the VSM in the first place - so we allow a very small subset of interfaces to forward before the VSM sends the VEM its programming - Management, IP Storage, and Control/Packet only. All other non-system VLANs are rightfully blocking until the VSM passes the VEM its policy. This secures interfaces from sending traffic in the event any port profiles or policies have changed since the last reboot or module insertion. Now, keeping all this in mind, can you tell me the instance where you've just rebooted your ESX host and need the vMotion interface forwarding traffic BEFORE communicating with the VSM? If the VSM was not reachable (or both VSMs were down), the VM's virtual interface couldn't even be created on the receiving VEM; any virtual ports moved or created require VSM & VEM communication. So no, the vMotion interface VLANs do NOT need to be set as system VLANs. There's also a max of 16 port profiles that can have system VLANs defined, so why chew up one unnecessarily?

    5. Do I have to set spanning-tree commands and to enable global BPDU Filter/Guard on both the 1000V side and the uplink switch?

    [Robert] The VSM doesn't participate in STP, so it will never send BPDUs. However, since VMs can act like bridges & routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDU Filter: PortFast so the interface forwards faster (since there's no STP on the VSM anyway), and BPDU Filter to ignore any BPDUs received from VMs. I prefer ignoring them to using BPDU Guard, which will shut down the interface if BPDUs are received.
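    On a typical IOS upstream switch, the VEM uplink ports would then carry something like this (the interface name is an example):

        interface GigabitEthernet1/0/1
         switchport mode trunk
         spanning-tree portfast trunk
         spanning-tree bpdufilter enable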

    Thanks,

    Atle, Norway

    Edit:

    Do you have some recommendations on the weighting of the CoS classes?

    [Robert] I don't personally. Other customers can chime in with their suggestions, but each environment is different. vMotion is very bursty, so I wouldn't set that too high. IP storage is critical, so I would bump that up a bit. The rest is up to you. See how it works, check your QoS & CoS verification commands to monitor, and adjust your settings as required.

    E.g:

    IP storage: 35

    Vmotion: 35

    Vmdata: 30

    and I can then assign the management VMkernels to the Vmdata CoS.

    Message was edited by: Atle Dale

  • Datastore configuration and iSCSI target in VMware 5.5U1

    I am working in my home lab, where I have configured a CentOS 6.5 box to serve as an iSCSI target. Here is the current configuration:

    [root@localhost ~]# tgtadm --lld iscsi --op show --mode target
    Target 1: iqn.2012-10.cpd.net:san.target01
        System information:
            Driver: iscsi
            State: ready
        I_T nexus information:
        LUN information:
            LUN: 0
                Type: controller
                SCSI ID: IET 00010000
                SCSI SN: beaf10
                Size: 0 MB, Block size: 1
                Online: Yes
                Removable media: No
                Prevent removal: No
                Readonly: No
                Backing store type: null
                Backing store path: None
                Backing store flags:
            LUN: 1
                Type: disk
                SCSI ID: IET 00010001
                SCSI SN: beaf11
                Size: 268435 MB, Block size: 512
                Online: Yes
                Removable media: No
                Prevent removal: No
                Readonly: No
                Backing store type: rdwr
                Backing store path: /dev/sdc
                Backing store flags:
        Account information:
        ACL information:
            192.168.1.200
            192.168.1.201

    Initially, I had only a single device defined within iqn.2012-10.cpd.net:san.target01, and I was able to create a datastore on the ESXi hosts that I have running in my lab. I then added a second disk (LUN 1 above), but I am still only offered the original datastore. When I look at the 'Storage Adapters' details on another host, the software iSCSI adapter shows 1 connected target, 2 devices, and 2 paths. Shouldn't I be able to mount LUN 1 as a second datastore within the host?

    I don't use iSCSI anywhere in production and am only playing with it so that I can familiarize myself with the iSCSI portion of the VCP5-VTC test. Any help would be greatly appreciated. Thank you.
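    In case it matters, this is roughly how I added the second LUN (from memory; without an entry in /etc/tgt/targets.conf it will not survive a tgtd restart):

        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdc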

    It proved to be a problem with tgtd. I actually had to reboot the server to get the exports to work properly - the normal 'service tgtd restart' command was leaving behind some kind of artifacts that prevented VMware from seeing the changes properly.
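    Once the target side is fixed, a rescan on each ESXi 5.5 host should pick up the new LUN, e.g. (the adapter name is a placeholder for your software iSCSI vmhba):

        esxcli storage core adapter rescan --adapter vmhba33
        esxcli storage core device list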

  • LeftHand and hardware iSCSI initiator

    Hello

    I want to install a LeftHand P4500 iSCSI SAN with 3 x ESXi4 servers and use the hardware iSCSI initiator. Now, my question is: do I plug the hardware adapter directly into the iSCSI storage, or can I plug it into a switch that connects to the storage, as with the software iSCSI initiator?

    Thank you

    You can do both. I would use a switch (or more) to take advantage of multiple paths.

    Marcelo Soares

    VMWare Certified Professional 310/410

    Technical Support Engineer

    Globant Argentina

    Remember to award points for "helpful" or "correct" answers.
