iSCSI best practices

Does anyone know where I can find a guide to best practices for VMware iSCSI? They have NFS best practice guides, but not one for iSCSI. I downloaded the VMware iSCSI guide, but I was expecting a best practices guide. If anyone knows where there is such a guide, please send me a link. Thanks in advance.

The reason there are so many choices is that no single option is best. It depends on your situation, SAN vendor, etc.

http://www.VMware.com/PDF/Perf_Best_Practices_vSphere4.0.PDF

Tags: VMware

Similar Questions

  • Best practices for iSCSI target LUNs

    I am connecting my ESX lab to a QNAP iSCSI target and wonder what the best practices are for creating LUNs on the target. I have two servers, each connecting to its own target. Do I create a LUN for each virtual machine I intend to create in the lab, or is it better to create one large LUN and have multiple VMs per LUN? Does placing several virtual machines on one LUN have consequences for HA or FT?

    Regards

    It is always a compromise...

    SCSI reservations are per LUN. If you put many guests on the same LUN, reservation contention becomes a problem, and yes, we have seen it happen.

    Be sure to thin provision. That way you can make your LUNs smaller and still give several VMs their own LUN each.
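    Thin provisioning can live at the array level (thin LUNs on the QNAP) or at the VMware level. As a small illustration of the latter, here is a hedged sketch of creating a thin-provisioned VMDK from the ESXi shell; the datastore and VM folder names are placeholders.

        # Create a 40 GB thin-provisioned virtual disk (placeholder paths)
        vmkfstools -c 40G -d thin /vmfs/volumes/QNAP-LUN1/testvm/testvm.vmdk

        # Check how much space the thin disk actually consumes on the LUN
        du -h /vmfs/volumes/QNAP-LUN1/testvm/testvm-flat.vmdk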

  • Best practices for iSCSI - ESX4

    I am trying to determine what a good or best practice would be for this situation. We have an ESX4 blade server in VMware that is already connected to our back-end storage system. What I want to know is this:

    Can I install the software iSCSI initiator inside the guest OS to connect and create a second drive (D:\), so that I can grow the disk, etc., and still have it available for failover, etc.?

    Or is there a different or recommended way to do this?

    Thank you

    Yes, you are right.

    Create a new datastore (if that's what you want to do). Present this datastore to all the hosts in your cluster. Add a new disk to the virtual machine, specify the size of the drive, and then choose where you want the VMDK to be stored. You want to store this particular VMDK on the new datastore that you just presented.

    Your .vmx file will know where to find this VMDK at startup.

    Once the datastore is visible to all the hosts, it will be protected by HA (if you have HA licensed as a feature).
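    As a rough illustration, assuming ESXi 5.x-style esxcli (older ESX 4 hosts use the esxcfg-* equivalents), you can rescan and confirm each host sees the newly presented datastore from the shell:

        # Rescan all storage adapters so the host picks up the new LUN
        esxcli storage core adapter rescan --all

        # List mounted filesystems and VMFS extents to confirm visibility
        esxcli storage filesystem list
        esxcli storage vmfs extent list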

  • Dell MD3620i connection to VMware - best practices

    Hello community,

    I bought a Dell MD3620i with 2 x 10GBase-T Ethernet ports on each controller (2 controllers).
    My VMware environment consists of 2 x ESXi hosts (each with 2 x 1GBase-T ports) and an HP LeftHand (also 1GBase-T) storage array. The switches I have are Cisco 3750s, which only have 1GBase-T Ethernet.
    I will replace this HP storage with the Dell storage.
    As I have never worked with Dell storage, I need your help in answering my questions:

    1. What is the best practice for connecting the VMware hosts to the Dell MD3620i?
    2. What is the process to create a LUN?
    3. Can I create multiple LUNs on a single disk group, or is it best practice to create one LUN per disk group?
    4. How do I configure the 10GBase-T iSCSI ports to work with the 1 Gbit/s switch ports?
    5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts without a switch?
    6. The old iSCSI on the HP storage is on another network. Can I vMotion all the VMs from one iSCSI network to the other and then change the iSCSI IP addresses on the VMware hosts without interrupting the virtual machines?
    7. Can I combine the two iSCSI ports into one 2 Gbps interface to connect to the switch? I use two switches, so I want to connect each controller to each switch, limiting their interfaces to 2 Gbps. My question is: would the controller fail over to the other controller if the Ethernet link goes down on the switch (for example during a switch reboot)?

    Thanks in advance!

    TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly attached cables between the server and a SAN iSCSI port) that share the same subnet.

    Data corruption is not very likely if you share the same VLAN for iSCSI; however, performance and overall reliability would be affected.

    With a MD3620i, here are some configuration scenarios using the factory default subnets (and for DAS configurations I have added 4 additional subnets):

    Single switch (not recommended because the switch becomes your single point of failure):

    Controller 0:

    iSCSI port 0: 192.168.130.101

    iSCSI port 1: 192.168.131.101

    iSCSI port 2: 192.168.132.101

    iSCSI port 4: 192.168.133.101

    Controller 1:

    iSCSI port 0: 192.168.130.102

    iSCSI port 1: 192.168.131.102

    iSCSI port 2: 192.168.132.102

    iSCSI port 4: 192.168.133.102

    Server 1:

    iSCSI NIC 0: 192.168.130.110

    iSCSI NIC 1: 192.168.131.110

    iSCSI NIC 2: 192.168.132.110

    iSCSI NIC 3: 192.168.133.110

    Server 2:

    All ports plug into the 1 switch (obviously).

    If you only want to use 2 NICs for iSCSI, have server 1 use subnets 130 and 131, server 2 use 132 and 133, and server 3 use 130 and 131 again. This distributes the I/O load between the iSCSI ports on the SAN.

    Two switches (a single VLAN for all the iSCSI ports on each switch is fine):

    NOTE: Do NOT link the switches together. This ensures that problems that occur on one switch do not affect the other switch.

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 1

    iSCSI port 4: 192.168.133.101 -> switch 2

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 1

    iSCSI port 4: 192.168.133.102 -> switch 2

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 1

    iSCSI NIC 3: 192.168.133.110 -> switch 2

    Server 2:

    Same note as above about using only 2 NICs per server for iSCSI. In this configuration each server always uses both switches, so a switch failure should not take down your server's iSCSI connectivity.

    Four switches (or 2 VLANs on each of the 2 switches above):

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 3

    iSCSI port 4: 192.168.133.101 -> switch 4

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 3

    iSCSI port 4: 192.168.133.102 -> switch 4

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 3

    iSCSI NIC 3: 192.168.133.110 -> switch 4

    Server 2:

    In this case, when using 2 NICs per server, the first server uses the first 2 switches and the second server uses the second pair of switches.

    Directly attached:

    iSCSI port 0: 192.168.130.101 -> server iSCSI NIC 1 (e.g. IP 192.168.130.110)

    iSCSI port 1: 192.168.131.101 -> server iSCSI NIC 2 (e.g. IP 192.168.131.110)

    iSCSI port 2: 192.168.132.101 -> server iSCSI NIC 3 (e.g. IP 192.168.132.110)

    iSCSI port 4: 192.168.133.101 -> server iSCSI NIC 4 (e.g. IP 192.168.133.110)

    Controller 1:

    iSCSI port 0: 192.168.134.102 -> server iSCSI NIC 5 (e.g. IP 192.168.134.110)

    iSCSI port 1: 192.168.135.102 -> server iSCSI NIC 6 (e.g. IP 192.168.135.110)

    iSCSI port 2: 192.168.136.102 -> server iSCSI NIC 7 (e.g. IP 192.168.136.110)

    iSCSI port 4: 192.168.137.102 -> server iSCSI NIC 8 (e.g. IP 192.168.137.110)

    I just left controller 1 on 4 different subnets with the '102' IPs to make future changes easier.
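    Whichever layout you pick, it is worth verifying that each host VMkernel port can reach the matching array port on its own subnet. A minimal check from the ESXi shell might look like this (the vmk names are assumptions, and the -I flag needs a reasonably recent ESXi build):

        # Ping each MD3620i iSCSI portal from the vmkernel port on the same subnet
        vmkping -I vmk1 192.168.130.101
        vmkping -I vmk2 192.168.131.101

        # If jumbo frames are enabled end to end, test with a large, unfragmented payload
        vmkping -I vmk1 -d -s 8972 192.168.130.101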

  • Best Practice Guide for stacked N3024 switches

    Is there a best practices guide for configuring 2 stacked N3024s for the server connections, or is it the same as the EQL iSCSI configuration guide?

    I'm trying to:

    1) reduce single points of failure for the rack.

    (2) make good use of LACP for 2- and 4-NIC server connections

    (3) use a 5224 with a single LACP uplink to the N3024s for single-connection-point devices (i.e. the internet router)

    TIA

    Jim...

    Barrett pointed out many of the common practices suggested for stacking. The best practices, such as using a loop for stacking and distributing your LAGs across multiple switches in the stack, are not specific to any brand or model of switch. The steps described in the user guides or white papers are generally the recommended configuration.
    http://Dell.to/20sLnnc

    Many of the best practice scenarios change from network to network based on what is currently plugged into the switch and on each business's independent network needs/requirements. This has created a situation where the default settings on a switch are pre-programmed with what is optimal for a fresh switch, and the recommended changes are described in detail in white papers for specific scenarios rather than centralized in a single best practices document that attempts to cover every scenario.

    For example, the defaults on the N-series switches are:
    - RSTP is enabled by default.
    - EEE green mode is disabled by default.
    - Flow control is enabled by default.
    - Storm control is disabled by default.

    These things can then change based on the attached gear and the needs/desires of the company as a whole.

    For example, EqualLogic has several guides that detail configuration recommendations for different switches.
    http://Dell.to/1ICQhFX

    Then on the server side, you would want to look more at the OS/server role. For example, there is a VMware white paper that proposes some network settings when running VMware in an iSCSI environment.
    http://bit.LY/2ach2I7

    I suggest making a list of the technology/hardware/software that is used on the network, then using that list to gather white papers for the specific areas, and then using those white papers' best practices to ensure the switch configuration is optimal for the tasks the network requires.
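    As one concrete example of the host-side settings those VMware/EqualLogic papers cover, here is a hedged sketch of enabling jumbo frames on an iSCSI vSwitch and VMkernel port with esxcli; vSwitch1 and vmk1 are assumed names, and the switch ports and array must be set to the same MTU for this to help rather than hurt.

        # Raise the MTU on the iSCSI vSwitch and its vmkernel interface to 9000
        esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
        esxcli network ip interface set --interface-name=vmk1 --mtu=9000

        # Confirm the new MTU values
        esxcli network vswitch standard list
        esxcli network ip interface list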

  • Connecting two separate 6224 stacks - LAG best practices?

    Hello

    I am wondering how I should configure a LAG between two PowerConnect 6224 stacks (2 x PowerConnect 6224 per stack) for iSCSI traffic against 4 EqualLogic members. I intend to use 4 ports from each stack (is a cross-stack LAG possible on the 6224?), and the EqualLogic documentation leaves me in two minds about whether to run LACP or not.

    I am wondering what the recommended best practices are in this scenario.

    Cross-stack LACP with no STP, or am I better off without LACP?

    Thanks in advance

    Cree

    The ports connecting the two stacks together will be configured differently than the ports connecting the stack to the EQL appliance. It is on the switch ports connecting to the EQL controllers that you might want to disable STP.

    "Do not use Spanning Tree (STP) on switch ports that connect to end nodes (iSCSI initiators or storage array network interfaces). However, if you want to use STP or Rapid STP (preferable to STP), you should enable the port settings available on some switches that let the port immediately transition into the STP forwarding state upon link up. This functionality can reduce network interruptions that occur when devices restart, and should only be enabled on switch ports that connect end nodes."

    With the network cards on the EQL devices, as I understand it, there is only one active port and the other is standby. So the switch ports that the EQL plugs into will be in access mode for your iSCSI VLAN. Maybe someone who knows more about EQL can chime in to confirm.
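    To confirm which EQL ports a host is actually logged in to, and whether its paths are active or standby, a quick check from the ESXi shell (ESXi 5.x syntax) looks roughly like this:

        # List current iSCSI sessions (one per logged-in target portal)
        esxcli iscsi session list

        # Show every storage path with its state (active, standby, dead)
        esxcli storage core path list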

    Here are some good white papers.

    www.dell.com/.../Dell_EqualLogic_%20iSCSI_Optimization_for_Dell_Power_onnect_%20Switches.pdf

    docs.danielkassner.com/.../ISCSI_optimization_EQL.pdf

    www.Dell.com/.../EQL-8024f-4-Switch.pdf

  • Best practices for FC

    Gentlemen,

    Does anyone know of a best practices document for configuring an environment with FC SAN storage? In particular regarding zoning?

    Here at work we have some performance problems, hangs, and path losses on hosts backing some LUNs, and we suspect the zoning is poorly defined. I want to get into this area, since of the storage environments in my career I have only configured iSCSI, never FC; the most I have done is support an environment that was already set up, deleting and creating new LUNs and growing datastores in vCenter (increase).

    Awaiting your reply, and thank you.

    Ivanildo,

    There is no better literature than this IBM Redbook, since it focuses on SAN design primarily for VMware environments: IBM Redbooks | IBM SAN Solution Design Best Practices for VMware vSphere, ESXi

  • Best practices for network configuration of vSphere with two subnets?

    So, I am setting up 3 ESXi hosts connected to shared storage using two different subnets. I configured the iSCSI initiator and the iSCSI targets with their own default gateway (192.168.1.1) through a Cisco router, and did the same with the hosts configured with their own default gateway (192.168.2.2). I don't know whether I should have a router in the middle to route traffic between the two subnets, since I use iSCSI port binding and NIC teaming. If I shouldn't use a physical router, how do I route the traffic between the different subnets and use iSCSI port binding at the same time? What are the best practices for implementing a vSphere network with two subnets (ESX host network and iSCSI network)? Thank you in advance.

    In the most common iSCSI installations, the traffic between the hosts and the storage is not routed, because a router could reduce performance.

    If you have VLAN 10 (192.168.1.0/24) for iSCSI, VLAN 20 (192.168.2.0/24) for ESX MGMT, VLAN 30 (192.168.3.0/24) for guest VMs, and VLAN 40 (192.168.4.0/24) for vMotion, a deployment scenario might be something like:

    NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby

    NIC2 - vSwitch 1 - guest virtual machine port group (VLAN 30) active

    NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active

    NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active

    NIC5 - vSwitch 1 - guest virtual machine port group (VLAN 30) active

    NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

    You would place your storage target on VLAN 10 with an IP address of something like 192.168.1.8, and the iSCSI traffic would stay on that VLAN. The default gateway configured in ESXi would be the router on VLAN 20, with an IP address of something like 192.168.2.1. I hope that scenario helps lay out some options.
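    For completeness, this is roughly what iSCSI port binding looks like on ESXi 5.x; vmhba33 and the vmk/port group names are assumptions for illustration. Classic port binding expects all bound VMkernel ports and the targets to sit on the same subnet, which is why the iSCSI VMKs above stay on VLAN 10.

        # Pin each iSCSI port group to a single active uplink (required for binding)
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name=iSCSI-1 --active-uplinks=vmnic3
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name=iSCSI-2 --active-uplinks=vmnic4

        # Bind both vmkernel ports to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2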


  • Fabric zoning best practices (new to FC)

    I'm new to FC (we are an iSCSI shop), but I'm studying for my DCD. I have read various blog posts on zoning, which on the whole I find logical. I came across this blog post and am a little confused by the diagram:

    http://vmfocus.com/2012/05/31/fabric-zoning-best-practices/

    If you look at the diagram it makes sense, but above that, look at the text:

    http://vmfocus.files.WordPress.com/2012/05/capture.PNG

    E1 to S1 via Fabric Switch 1

    E1 to S3 via Fabric Switch 2

    E2 to S2 via Fabric Switch 1

    E2 to S4 via Fabric Switch 2

    E3 to S1 via Fabric Switch 1

    E3 to S3 via Fabric Switch 2

    E4 to S2 via Fabric Switch 1

    E4 to S4 via Fabric Switch 2


    How can the same physical HBA port connect to switch 2 as well? Is the text in the article wrong? I don't claim to know much about the FC zoning topic, but I do not see how E1 can be plugged into more than one switch.


    Totally confused...

    Steven

    Using my amazing art skills, I drew a diagram for you:
    http://i.imgur.com/WU6iGGv.PNG

    A high-level view of a Fibre Channel SAN would look incredibly similar, if not identical, to your iSCSI network. The text is just describing the logical connections that different server ports can make, just as with Ethernet. Let me know if that clears it up, or I'll go into whatever details you want.

  • Design using NetApp best practices

    I am preparing for my VCP5 and I am reading Scott Lowe's new book. The book describes how traffic should be isolated: your vMotion, VMkernel, etc. But in many organizations I see NetApps with some datastores and a few LUNs for CIFS shares. I guess you can have your vMotion on a separate VLAN, but wouldn't it be safer to just configure a Windows VM file server to host your files? In the FreeNAS and Openfiler forums they stress not to run their software in virtual machines in a production environment.   Would physical separation be better than just a VLAN? I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a virtual machine would make the SAN, vMotion, VMkernel, etc. most reliable if you have redundant switches on both sides of the VMware hosts. So if your core switches drop, your VMware environment will not drop.

    > traffic must be isolated.

    Yes, the network traffic must be split on networks separated for various reasons, including performance and safety.

    > NetApps with some datastores and a few LUNs for CIFS shares.

    Yes, if you have a NetApp filer you can serve block-level storage such as FCP or iSCSI, and CIFS or NFS file-level storage.

    > I guess you can have your vMotion on a separate VLAN, but wouldn't it be safer to just configure a Windows VM file server to host your files?

    OK, you lost me.  Yes, you must separate the vMotion traffic to enhance the performance and because the vMotion traffic is not encrypted.

    I don't see where you're going with vMotion to a Windows file server.

    However, if what you are really asking is why you should use your NetApp for CIFS instead of a Windows server:

    You don't need to patch and reboot the NetApp at least once a month.

    Performance is better

    You don't need to buy a Windows license and then maintain Windows

    Snapshots.  NetApp has the best snapshots in the business.  When your Windows box is under heavy I/O, or just decides that because it's Tuesday it will remove all of your VSS snapshots, you will really wish you had a NetApp.

    > In the FreeNAS and Openfiler forums, they stress not to run their software in virtual machines in a production environment.

    Note that there are a ton of storage appliances out there running as VMs and serving NFS for shared storage, including LeftHand, and they have been stable for years.

    > Would physical separation be better than just a VLAN?

    Yes, if you have the infrastructure.  Although this is the first time I've seen you reference a VLAN.  Are you talking about a NetApp like the 2020 series with two network cards, where you need to carry all traffic (management, CIFS and iSCSI) through them via VLANs?

    Like this: http://sostechblog.com/2012/01/08/netapp-fas2xxx-fas3xxx-2-nic-ethernet-scheme/

    > I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a virtual machine would make the SAN, vMotion, VMkernel, etc. most reliable

    CIFS has nothing to do with SAN, vMotion or the VMkernel.  CIFS (SMB) is the protocol used mainly by Windows file sharing.

    > If you have redundant switches on both sides of the VMware hosts, then if your core switches drop, your VMware environment will not drop.

    You always want redundant switches.  No single point of failure is the best practice.

  • What is the best practice for block sizes across several layers: hardware, hypervisor, and VM OS?

    The example below is not a real setup I work with, but it should get the idea across. Here's my example layering for reference:

    (Layer 1) Hardware: the hardware RAID controller

    • 1 TB volume configured with a 4K block size (RAW)?


    (Layer 2) Hypervisor: ESXi datastore

    • 1 TB from the RAID controller formatted with VMFS5 @ 1 MB block size.


    (Layer 3) VM OS: Server 2008 R2 w/SQL

    • 100 GB virtual HD using NTFS @ 4K block size for the OS.
    • 900 GB virtual HD set up using NTFS @ 64K block size to store the SQL database.

    It seems that VMFS5 is limited to a block size of 1 MB only. Would it be preferable for all or some of the block sizes to match across the different layers, and why or why not? What effect do the different block sizes on the other layers have on performance? Could you suggest a better alternative or best practices for the sample configuration above?

    If a SAN were involved instead of a hardware RAID controller in the host, would it be better to store the OS VMDK on the VMFS5 datastore and create a separate iSCSI LUN formatted with a 64K block size, then attach it with the iSCSI initiator in the operating system and format it at 64K? Does matching block sizes across the layers increase performance, and is it advisable? Any help answering and/or explaining best practices is greatly appreciated.

    itsolution,

    Thanks for the helpful response points.  I wrote a blog about this which I hope will help:

    VMware 5 partition alignment and block sizes | blog.jgriffiths.org

    To answer your questions, here goes:

    I have (around) 1 TB of space and create two virtual drives.

    Virtual Drive 1 - 10 GB - used for the hypervisor OS files

    Virtual Drive 2 - 990 GB - used for the VMFS datastore / VM storage

    The default allocation element size on the PERC 6/i is 64 KB, but it can be 8, 16, 32, 64, 128, 256, 512 or 1024 KB.

    What block size would you use for array 1, which is where the actual hypervisor will be installed?

    -> If you have two arrays I would set the block size on the hypervisor array to 8 KB

    What block size would you use for array 2, which will be used as the VM datastore in ESXi?

    -> I'd go with 1024 KB for the VMFS 5 size

    - Do you want 1024 KB to match the VMFS block size that will eventually be formatted on top of it?

    -> Yes

    * Consider that this datastore would eventually contain several virtual hard drives for each OS, SQL database, and SQL logs, formatted with NTFS at the recommended block sizes of 4K, 8K, and 64K.

    -> The problem here is that VMFS will go with 1 MB no matter what you do, so carving things smaller lower down in the RAID causes no problems but does not help either.  You have 4K sectors on the disk, 1 MB RAID stripes, 1 MB VMFS, then 4K, 8K, or 64K in the guest.  Really, the 64K gains get lost a little when the back-end storage is at 1 MB.

    If the RAID stripe element size is set to 1024 KB so that it matches the VMFS 1 MB block size, would that be better practice, or does it not matter?

    -> Whether it's 1024 KB or 4 KB chunks, it doesn't really matter.

    What effect does this have on the OS/virtual HDs, with their respective block sizes, installed on top of the stripe and the VMFS block size?

    -> The effect on performance is minimal, but it exists.  It would be a lie to say it didn't.

    I could be completely off in my overall thinking on the situation, but to me it seems that there must be some kind of correlation between the three different "layers", as I call them, and a best practice in use.

    Hope that helps.  I'll tell you that I have run SQL and Exchange virtualized for a long time without any block size problems and without changing the operating system; I just stuck with the standard Microsoft sizes.  I'd be much more concerned about the performance of the RAID controller in your server.  They keep making these things cheaper and cheaper with less and less cache.  If performance is the primary concern then I would consider the RAID array layout, or a RAID 5/6 solution, or at least look at the amount of cache on your RAID controller (read cache is normally essential for a database).
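    If you want to see what you actually have on a given host, a couple of read-only checks from the ESXi shell can help; the datastore name and device ID below are placeholders.

        # Show the VMFS version, block size and capacity of a datastore
        vmkfstools -Ph /vmfs/volumes/datastore1

        # Show the partition table and start sectors (alignment) of the backing device
        partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx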

    Just my two cents.

    Let me know if you have any additional questions.

    Thank you

    J

  • vSphere 5 networking best practices for using 4 x 1 GB NICs?

    Hello

    I'm looking for networking best practices for using 4 x 1 GB NICs with vSphere 5. I know there is a lot of good practice material for 10 GB, but our current config only supports 1 GB. I need to include management, vMotion, virtual machine (VM) and iSCSI traffic. If there are others you would recommend, please let me know.

    I found a diagram that resembles what I need, but it's for 10 GB. I think it works...

    vSphere 5 - 10GbE SegmentedNetworks Ent Design v0_4.jpg (I got this diagram HERE - rights go to Paul Kelly)

    My next question is how much of a traffic load does each item put on the network, percentage-wise?

    For example, 'Management' is very small and the only time it is really in use is during agent installation; then it uses 70%.

    I need the percentage of bandwidth, if possible.

    If anyone out there can help me, that would be so awesome.

    Thank you!

    -Erich

    Without knowing your environment, it would be impossible to give you an idea of the bandwidth usage.

    That said if you had about 10-15 virtual machines per host with this configuration, you should be fine.
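    If it helps, one common way to express that kind of 4 x 1 GbE split on standard vSwitches is with explicit active/standby uplinks per port group. A rough esxcli sketch (ESXi 5.x; the port group and vmnic names are assumptions, with management and vMotion assumed to share a vSwitch that has vmnic0 and vmnic1 as uplinks):

        # Management active on vmnic0, standby on vmnic1; vMotion the reverse
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name="Management Network" \
            --active-uplinks=vmnic0 --standby-uplinks=vmnic1
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name=vMotion \
            --active-uplinks=vmnic1 --standby-uplinks=vmnic0

        # VM traffic and iSCSI each get a dedicated uplink on their own vSwitches
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name="VM Network" --active-uplinks=vmnic2
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name=iSCSI-1 --active-uplinks=vmnic3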


  • Best practices for vSphere 5 Networking

    Hi all

    Given the following environment:

    (1) 4 physical servers, each with 16 (Gigabit) network interface cards, which will have vSphere 5 Std installed.

    (2) 2 switches with stacking functionality for the SAN storage

    (3) 2 EqualLogic PS4000 SANs (two controllers each)

    (4) 2 switches for the virtual machine traffic

    As for networking, I intend to create vSwitches on each physical server as follows:

    1 vSwitch0 - used for iSCSI storage

    6 network adapters teamed with the IP-hash policy, multipathing to the iSCSI storage, and storage load balancing set to Round Robin (VMware)

    (VMware suggests using 2 NICs per IP storage target, I'm not sure)

    2 vSwitch1 - used for the virtual machine

    6 teamed network adapters for virtual machine traffic, with the IP-hash policy

    3 vSwitch2 - management

    2 network cards are associated

    4 vSwitch3 - vMotion

    2 network cards are associated

    Would you give me some suggestions?

    Alex, the standard set out by the storage vendor and VMware is what Dell uses for their servers; it has been tested on their equipment and published in the document, so it is the recommendation...

    This is the best practice to use for Dell servers with the model mentioned in the document.

    Hope that clarifies...
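    On the multipathing side, if you do go with Round Robin it is set per device. A hedged ESXi 5.x sketch follows; the naa device ID is a placeholder, and the IOPS=3 tweak is a common EqualLogic suggestion rather than a universal rule.

        # List devices and their current path selection policy
        esxcli storage nmp device list

        # Switch one volume to Round Robin (replace the device ID with your own)
        esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

        # Optionally switch paths every 3 I/Os instead of the default 1000
        esxcli storage nmp psp roundrobin deviceconfig set \
            --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=3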

  • Best practices for an ESXi 4.1 installation

    Since ESXi has the SC removed, does it matter if I create different datastores, one for the vmkernel and another for the virtual machines?

    Also, for networking, is it important to create the management network on a separate vSwitch from the VM network?

    What is best practice?

    Traffic will pass through the vmkernel and be distributed. In an environment like yours, separating the vmkernel makes no difference because there isn't any iSCSI or vMotion traffic.
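    For what it's worth, on ESXi 4.1 the classic way to put management and VM traffic on separate vSwitches is with the esxcfg-* commands; a minimal sketch, with the vSwitch and vmnic names assumed:

        # Create a second vSwitch for VM traffic with its own uplink and port group
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic1 vSwitch1
        esxcfg-vswitch -A "VM Network" vSwitch1

        # Verify the resulting layout
        esxcfg-vswitch -l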

  • Nexus 1000v and vSwitch best practices

    I am working on the design of our Nexus 1000v vDS for use on HP BL490 G6 servers. The 8 vmnics are allocated as follows:

    vmnic0, 1: management of ESXi, VSM-CTRL-PKT, VSM - MGT

    vmnic2, 3: vMotion

    vmnic4, 5: iSCSI, FT, Clustering heartbeats

    vmnic6, 7: data server and Client VM traffic

    Should I migrate all the vmnics to the 1000v vDS, or should I leave vmnic0,1 on a regular vSwitch and migrate the others to the vDS? If I migrate all the vmnics, at the very least I would designate vmnic0,1 as system uplinks so that traffic could still flow until the VSM could be reached. My inclination is to migrate all the vmnics, but I've seen comments elsewhere in forums that the VSM-related networks, and possibly the ESX(i) console, are better left off of the vDS.

    Thoughts?

    Here is a best practice / how-to guide specific to the 1000v & HP VC that might be useful.

    See you soon,

    Robert
