vDS and iSCSI VMKernel

Is it preferable to use standard vSwitches or a vDS for iSCSI VMkernel ports? What is the best way to go?

Is there documentation describing the process somewhere?

aacjao wrote:

Would using a vDS, rather than doing this configuration on each host, make management easier?

For iSCSI VMkernel ports it will not really make management easier, since you would still have to create unique virtual interfaces for each host, so it will be more or less the same amount of work.

aacjao wrote:

Is there any literature about putting this in place on a vDS?

The official iSCSI manual only documents how to implement it on standard vSwitches: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf - page 76.

For how to configure VMkernel interfaces on a distributed switch, you might look at this manual, page 32.

Most of the best-practice information on the web around iSCSI and VMware is based on ordinary vSwitches - which might make them easier to set up and troubleshoot.
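For reference, the standard-vSwitch path from that storage guide boils down to a handful of esxcli commands on ESXi 5.x. This is only a sketch - the vSwitch, uplink, VMkernel, and adapter names (vSwitch1, vmnic2, vmk1, vmhba33) and the address are placeholders for your own environment:

```shell
# Create a standard vSwitch dedicated to iSCSI and give it an uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# Add a port group and a VMkernel interface for iSCSI
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.10 \
    --netmask=255.255.255.0 --type=static

# Bind the VMkernel port to the software iSCSI adapter (port binding)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
```

On a vDS the same VMkernel and binding steps apply; only the port group creation moves to vCenter.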

Tags: VMware

Similar Questions

• 2 questions - Group Manager and iSCSI

    We have a single PS6500 on firmware 6.0.5. Since 3 days ago, Group Manager freezes when I click on the member icon. I can get into all the rest of the GUI with no problems. I have had this firmware installed for about a month, so I don't think that is the problem. Has anyone else seen this?

    2nd issue - we have VMware hosts connected to the EQ through private switches. The VMware hosts have separate vSwitches for vMotion and for Windows iSCSI volumes. vMotion is on the 10.2.0 subnet and the iSCSI volumes are on the 10.2.1 subnet. The EQ is on 10.2.0 with a 255.255.0.0 mask. The switches are set up with separate VLANs for iSCSI and vMotion. So far in Group Manager, I have allowed 10.2.*.* on each volume. I started to change this so that the VM volumes only allow connections from 10.2.0.* (the vMotion subnet) and the Windows volumes only allow connections from 10.2.1.*. Currently I do not use CHAP or initiator names to restrict access. Here is an example for a Windows volume.

    But here is what I get in the Connections tab for this volume, even after rescanning the hosts' storage adapters:

    So either it is completely ignoring the IP restriction settings, or my EQ has a problem. Shouldn't I only have 10.2.1 connections to this volume?

    Any ideas? Thank you.

    None of the fixes listed in the 6.0.6 release notes apply to my EQ.

    BTW, you can add the CHAP ACL to the EQL volume live. This will not affect existing sessions. Just make sure that you first create the CHAP user on the array. Then, after setting the volume ACL, set the ESX iSCSI initiator to use CHAP.

    If you take a look at the RPM, you can find a few common settings left at their defaults. Some of them will improve performance, others stability.

    vMotion has no EQL access at all. It is strictly for copying the memory image of a virtual machine from one ESX host to another.

    Storage vMotion does involve the EQL, but it does not use the vMotion port for anything.

    vMotion traffic must be completely isolated, because the contents being transferred are not encrypted. So no VM network traffic should be on this vSwitch, and the physical switch should be isolated too.
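    One detail from the network layout above is worth double-checking: with the EQ group on 10.2.0.x and a 255.255.0.0 mask, 10.2.0.x and 10.2.1.x are not two subnets from the array's point of view - they are one /16 network. A quick sketch (Python; the host addresses are made up for illustration):

    ```python
    import ipaddress

    # The EQ group interface as described: 10.2.0.x with a 255.255.0.0 mask
    eq_net = ipaddress.ip_network("10.2.0.0/255.255.0.0")

    # Example hosts on the two "separate" subnets from the post
    vmotion_host = ipaddress.ip_address("10.2.0.25")
    iscsi_host = ipaddress.ip_address("10.2.1.25")

    # Both addresses fall inside the array's /16, so the array treats
    # initiators from either range as directly reachable peers
    print(vmotion_host in eq_net)  # True
    print(iscsi_host in eq_net)    # True
    ```

    So even with volume ACLs in place, the array itself can still see and answer initiators from both ranges.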

  • FC and iSCSI

    I have an IBM BladeCenter with 4 blades running ESXi 5.1 Enterprise.

    The blades are connected to storage via FC (Fibre Channel) switch.

    I added a new SAN to the blade center, but due to some unexpected port licensing problems with my Cisco MDS 9000 FC switch, I can connect only one of the 2 FC links to my blades right now.

    The SAN also comes with iSCSI, so my question is: can I have a setup with both FC and iSCSI multipathing? Or is it a bad idea?

    Hi friend

    Here's a thread that discusses your question:

    fibre channel - is it possible to multipath FC and iSCSI? - Server Fault

• Recommend a SCSI-to-iSCSI bridge

    Hi guys,

    Currently I have an ADIC FastStor 2 tape autoloader connected to a physical machine running Backup Exec 12.5, which backs up our running VMs with in-guest agents. Our ESX hosts run ESX 4.1 on 4 HP DL380 G5 servers.

    We are looking into moving to a more VM-focused disk backup over the next 9 months, but we have a short-term need to upgrade our current solution quickly so that we can retire the hardware (it has long been out of support and our backups are starting to fail). For this, we are looking at a quick upgrade to BE 2010 R2 (since we already have support) and putting the media server on a virtual machine. What remains to be seen is how to connect to the tape library. A SCSI-to-iSCSI bridge seems to be the best option to do it quickly and with the least headache.

    Does anyone have experience with these devices? Recommendations for a particular one? The current tape drive uses a 68-pin Ultra2 Wide SCSI connection.

    Thank you

    ...After researching the forums and seeing the difficulty people have, I wasn't convinced.

    The 'problem' posts I see the most are those using non-Adaptec SCSI cards and/or non-parallel-SCSI drives. The most recent posts I've seen are for SATA or SAS tape drives. Basically people trying to use 'new' tape drives rather than reusing 'old' ones.

    Quick is the main concern here. The money is available; it just cannot be a ridiculous amount, since it is mostly going to be thrown away next year (otherwise I'd go straight for an iSCSI tape library).

    Then I recommend the SCSI card, because it would be cheaper than a bridge. Or, if you have an existing workstation, you can re-task it to host your tape drive for a while until you get the rest of your new backup equipment. Used and refurbished workstations can be had for well under $500, often as low as $200, and would be more than powerful enough to host a tape drive. Because you can reuse the workstation after the new equipment arrives, you could get something above the bare minimum for this project. Maybe that's a better use of the money than a SCSI-to-iSCSI bridge you are going to "throw away".

  • VCB server with both FC and iSCSI connections

    Hello.

    Does anyone know if a VCB implementation that has both FC and iSCSI LUNs presented to the proxy server is possible/supported? I see no technical reason why it wouldn't work, but I'm trying to find out whether or not this is a supported configuration.

    Thank you!

    I don't see why that would be a problem from a functionality standpoint, but if you didn't mind the performance hit you could just use the NBD transport mode and not worry about presenting LUNs at all. I guess hot-add would probably work the same way as well.

    If you found this or any other post useful, please consider using the helpful/correct buttons to award points.

  • vDS and vCenter Server loss

    Hypothetical question: if I create a vDS and my vCenter Server goes down, does each host lose this switch, or will its config have been copied to each host automatically? Thank you

    Hi Matt,

    Distributed switch data is stored on the datastores, in folders named .dvsData. So even if vCenter is down, the hosts still have access to their network configuration.

    Regards,

    Franck

• How to save and reuse the network and iSCSI configuration when reformatting?

    We have to reformat all the physical servers and reinstall vSphere (Enterprise edition).

    We would like to avoid rebuilding the whole network and iSCSI configuration on each physical machine.

    Is there a way to save the configuration and apply it again to the reformatted server?

    Thank you

    Tonino

    Hello

    You can save your configuration with the vicfg-cfgbackup.pl --server command.

    Frank
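    To spell that out, saving and restoring with vicfg-cfgbackup looks roughly like this from the vSphere CLI / vMA (the hostname and file path are placeholders):

    ```shell
    # Save the host configuration before the reformat
    vicfg-cfgbackup.pl --server esx01.example.com -s /backups/esx01.cfg

    # After reinstalling the same ESXi build, load it back
    vicfg-cfgbackup.pl --server esx01.example.com -l /backups/esx01.cfg
    ```

    Note that the restore is intended for a host running the same version/build as the one that was backed up.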

  • vDS and vCenter Server on SAN

    I have a very specific question about whether this is possible.

    I have 3 hosts with no local VMFS (ESXi installed on flash cards) and all datastores are on the SAN.

    VCenter Server is a virtual machine residing on the SAN.

    SAN access is configured with iSCSI connected through a vDS.

    What would happen if there is a power failure?

    Say I power on the SAN first, then try to power on the hosts: would the hosts have access to the SAN while vCenter Server is still powered off (since it is a VM)?

    Thanks for your understanding!

    On VMware ESX/ESXi, the vDS configuration is cached in /etc/vmware/dvsdata.db. The .dvsData folders on the datastores are used by HA and store information such as port state and stats.
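    To see both pieces on a host, something like this should work from the ESXi shell (the datastore name is a placeholder):

    ```shell
    # Local cache of the distributed switch configuration on the host
    ls -l /etc/vmware/dvsdata.db

    # Per-datastore .dvsData folders holding port state (used by HA)
    ls /vmfs/volumes/datastore1/.dvsData
    ```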

• Moving from VSS to VDS, and how many to have

    Having some difficulty deciding how many VDS we should have in our organization.

    We have 4 clusters, all under one site. Except for 1 of them, they all run 2 VSS, one for data and one for iSCSI storage. The 1 cluster that is different is the newest Production cluster, which still runs a VSS for iSCSI but has its data networks on a VDS.

    Each cluster's data network is identical and connects to the same core switch; all the VLANs on all clusters are identical.

    Each cluster's iSCSI network runs on its own set of switches and they do not touch each other, so they are fully isolated.

    I'm a fan of joining our data networks onto a VDS, but I've seen several posts on how to configure this for sites, and I was wondering what is best for our environment.

    I've heard that normally it is 1 VDS to rule them all, but I'm leaning towards 1 VDS per cluster rather than one for the whole site.
    The reason behind this is that the role of each cluster is very different, and I would like to work on each cluster in isolation, knowing that if I make a change in one it won't affect another.

    Prod cluster 1 (5 hosts) <== Production cluster for applications outside of our main web-based claims system

    Dev cluster (6 hosts) <== development cluster for all development work

    UAT cluster (2 hosts) <== user acceptance testing before accepting changes to the application

    Prod cluster 2 (2 hosts, the one with the vDS) <== main Production cluster for our web-based system

    So what do you think: 1 per site, 1 per cluster, or stay with VSS?

    First of all, go to vDS, probably :-) As for how many vDS: I don't like the idea of a single vDS for the whole environment (including testing), but one vDS per cluster is too many if you have clusters with similar roles. For this reason I propose one vDS for the Production clusters and another for the Dev/UAT clusters.

  • Nested ESXi 5.1 and iSCSI Shared Storage

    People:

    I am unable to get my nested ESXi servers to see the iSCSI shared storage that I set up for them. I use iSCSI for the ESXi host that holds the ESXi guests, so I have a working iSCSI configuration to use as my reference.

    Here's what I have for the host network config:

    iSCSI targets

    + IP1: 172.16.16.7 (subnet for host IP storage)

    + IP2: 172.16.17.7 (subnet for guest IP storage)

    vSwitch with vmnic2, the IP storage NIC, as its uplink

    + "IP storage" port group containing both ESXi guests' virtual NICs

    + VMkernel port for the host's iSCSI connections: 172.16.16.28

    Here's what I have for the guest network config:

    + Virtual NIC uplinked to the "IP storage" port group

    + vSwitch with only a VMkernel port for the guest's iSCSI connections: 172.16.17.38

    From the iSCSI host, I am able to ping 172.16.16.28. However, I am unable to ping 172.16.17.38, and here's the really confusing part: I am able to get an ARP reply from that VMkernel port with the correct NIC MAC! That eliminates all kinds of potential configuration issues (bad NIC, bad IP, etc.).

    The firewall on the host shows the software iSCSI client's outgoing port 3260 as open. A packet capture on the iSCSI target host reveals NO traffic from the guest's VMkernel IP when rescanning the storage adapters.

    What should I look at? The guest configuration looks identical to the host's, yet one works and the other doesn't...

    -Steve

    In the process of debugging, I turned on promiscuous mode on the vSwitch with vmnic2, the IP storage NIC on the ESXi host, and poof! Everything magically started working. iSCSI traffic should be unicast, so I don't see why promiscuous mode would be necessary, but I can't argue with the observed results. Clearly I have more to learn about nested ESX, which is why I'm playing with it. :-)

    -Steve
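    For reference, promiscuous mode can be enabled on a standard vSwitch from the ESXi 5.x shell like this (the vSwitch name is an assumption - use whichever switch carries the nested hosts' traffic):

    ```shell
    # Allow promiscuous mode on the outer vSwitch
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch1 --allow-promiscuous=true

    # Verify the change took effect
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
    ```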

• Setting up vMotion, management and iSCSI networks

    Hi all

    I have some questions that I'd like to clarify.

    First, vMotion and management traffic.

    I have two network cards. My question is: should I create one VMkernel port for each (enabling the vMotion option on one and management traffic on the other), giving each a different IP address and putting both in one vSwitch - for example, nic1 active for vMotion with nic2 standby, and for management traffic the reverse, nic2 active with nic1 standby? With this I would have two different networks, but both NICs usable in case of failure.

    Or should I create a single VMkernel port, enable both options on it, and set both network cards active?

    Which is the better choice, or the better for performance?

    I have 6 hosts in this cluster, so I need to do this on all of them.

    Also: I use QLogic adapters.

    On the iSCSI side, should I use VMkernel port binding? Or just a normal VMkernel interface on my iSCSI VLAN, without any VMkernel port binding on the software iSCSI adapter?

    I have both running, but don't know which is the best way to get the best performance.

    Thank you

    JL

    Good day!

    In summary, you should do something like this:

    vSwitch0 - Mgmt & vMotion
    ====================

    vmk0 - Management
    Uplink 1: Active
    Uplink 2: Standby

    vmk1 - vMotion
    Uplink 1: Standby
    Uplink 2: Active

    vSwitch1 - iSCSI
    ====================

    vmk2 - iSCSI traffic
    Uplink 3: Active

    If you only have one uplink for iSCSI traffic, you don't have to set up port binding. iSCSI port binding is for multipathing. If you only have one NIC for iSCSI, you cannot have MPIO; you need two or more NICs or uplinks for multipathing.

    Now, for the management and vMotion links, I suggest using one active and one standby uplink for each traffic type. Although vSphere 5 has multi-NIC vMotion, the idea of separating vMotion and management traffic is to keep vMotion and its heavy traffic bursts away from your management traffic, which happens to include your HA traffic (also important). You certainly don't *have* to set it up like that, but keeping vMotion on its own physical link will keep it from trampling your management traffic. In the case of a failed cable, I'm sure you would rather have a congested link than no management or vMotion traffic at all.

    All the best,

    Mike

    http://VirtuallyMikeBrown.com

    https://Twitter.com/#! / VirtuallyMikeB

    http://LinkedIn.com/in/michaelbbrown
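    The active/standby layout above can also be applied from the command line; a sketch with esxcli on ESXi 5.x (the port group and uplink names are assumptions):

    ```shell
    # Management: uplink 1 active, uplink 2 standby
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="Management Network" \
        --active-uplinks=vmnic0 --standby-uplinks=vmnic1

    # vMotion: the reverse order
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="vMotion" \
        --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    ```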

  • iSCSI / vmkernel multipathing vs NIC teaming

    Hello

    I know the VMware SAN Configuration Guide provides information about how to configure iSCSI multipathing with dual VMkernel interfaces on separate uplinks.

    What are the disadvantages of using basic NIC teaming instead - just one VMkernel interface on a vSwitch/dvSwitch with two uplinks? Will it work properly with regard to redundancy and failover?

    Kind regards

    GreyhoundHH wrote:

    What are the disadvantages of using basic NIC teaming instead - just one VMkernel interface on a vSwitch/dvSwitch with two uplinks? Will it work properly with regard to redundancy and failover?

    I guess the difference is that when using port binding, the iSCSI initiator uses the vSphere Pluggable Storage Architecture to handle load balancing and redundancy, which can make better use of the multiple NIC paths available. Otherwise, the initiator relies on the vmkernel network stack, which provides redundancy and balancing in the same way as for normal network traffic.

    I suggest you look at the great multi-vendor post on iSCSI; I think the 2 statements below summarize the difference:

    "

    • However, the biggest performance gain comes from letting the storage stack scale across the number of network adapters available on the system. The idea is that the storage layer can make better use of the multiple paths at its disposal than NIC consolidation at the network layer can.
    • If each physical network adapter on the system looks like a port, and thus a path, to the storage, the storage path selection policies can make the best use of them.

    "


• Migrating software iSCSI to a vDS

    So I'm going through the vDS migration guide, but I'm still a little fuzzy on how I should migrate my software iSCSI VMKernel ports. Here is my setup...

    I have 4 hosts in a test cluster. Two network ports on each host are dedicated to vSwitch1, which has a VMKernel port with an IP address in the 10.5.33.x range.

    How can I migrate the software iSCSI initiator's IP address from a vSS to a vDS without disconnecting it or causing a conflict?


    When you add an ESX host to the vDS, there is an option to migrate the service console & iSCSI VMkernel ports to the vDS. If you have already added this host to the vDS, first remove the host from the vDS.

    vcbMC - 1.0.6 Beta

    Lite vcbMC - 1.0.7

    http://www.no-x.org
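    In ESXi 5.x terms, moving a bound software-iSCSI VMkernel port between switches amounts to unbinding and rebinding it; a sketch (the adapter and interface names are assumptions):

    ```shell
    # Remove the port binding from the software iSCSI adapter first
    esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk1

    # (Migrate vmk1 to the vDS via vCenter, keeping its IP address, then rebind)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

    # Rescan so the sessions are re-established
    esxcli storage core adapter rescan --adapter=vmhba33
    ```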

• ESXi and iSCSI - initiator firewall change

    Hello

    I just bought a Dell EqualLogic PS5000, I'm setting up the iSCSI initiator, and I'm reading the EqualLogic documentation.

    It says to change the firewall security options, but under ESXi I don't have the option to check the box for iSCSI...

    Is this a limitation of ESXi, or am I just doing something wrong?

    Regards,

    Dave

    ESX comes with a firewall, which is part of the Linux service console. ESXi does not have a service console and does not include that firewall. You just need a VMkernel port with connectivity to your iSCSI device.

• Need help setting up iSCSI/VMkernel

    Hello

    I admit I'm a newbie with iSCSI. I'm trying to get an ESX 3.5 U3 machine to see our HP MSA1510i SAN. I have some documents saying that I need to create a VMkernel port for iSCSI to work. I have done it twice. The first time, I used an additional IP address in the same range as my server address, and it blew me out of the water - it was on the same network as my service console NIC. Fortunately, this dev machine is nowhere close to being a 'production' machine. Then, once I got that VMkernel port removed, I set up a new one on another range, 192.168.1.3, using the service console's gateway. It does not work.

    Any help as to what I'm doing wrong would be greatly appreciated.

    Thank you

    Ken

    iSCSI works really well, but there are a few things that you should stick to.

    iSCSI needs its own subnet, and its own service console, if you want to do it properly. Create a new vSwitch and assign one of your free network cards to it. Make sure that network adapter is connected to your dedicated iSCSI subnet. Create a VMkernel port and a service console in this vSwitch; their IP addresses must be in that subnet. You will also need to connect the iSCSI ports on your SAN to this subnet.

    Cheers
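    On ESX 3.5, the dedicated iSCSI vSwitch described above can be built with the esxcfg tools; a sketch (the NIC, names, and addresses are placeholders):

    ```shell
    # New vSwitch with a free NIC connected to the dedicated iSCSI subnet
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2

    # VMkernel port for iSCSI, with an address in the iSCSI subnet
    esxcfg-vswitch -A iSCSI vSwitch2
    esxcfg-vmknic -a -i 192.168.2.10 -n 255.255.255.0 iSCSI

    # Service console port in the same subnet (required for ESX 3.x software iSCSI)
    esxcfg-vswitch -A "SC-iSCSI" vSwitch2
    esxcfg-vswif -a vswif1 -p "SC-iSCSI" -i 192.168.2.11 -n 255.255.255.0
    ```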
