VMkernel for vSAN

Hello

I have a couple of questions about vSAN.

1. Can one vCenter host two vSAN clusters? Each vSAN cluster would be its own separate cluster object in vCenter.

2. Both vSAN clusters will use the same physical switch uplinks. How should we separate them? Can they share the same IP address range and the same multicast address, or do we have to use separate IP ranges and multicast addresses?

I use vSAN 6.2.

Thank you.

Hi, to comment further on your second question: if you have multiple vSAN clusters, the best practice is to put them on separate network segments (for example, different VLANs). I'm not sure what you mean by "same multicast address"; every vSAN node sends multicast, so all vSAN VMkernel interfaces take part in the multicast traffic.

For more help with these considerations, please consult the Virtual SAN 6.2 Network Design Guide.
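In case it helps, here is a minimal sketch of that separation from the ESXi shell on one host. The vSwitch, portgroup name, VLAN ID and addresses are placeholders; a second cluster would get its own portgroup, VLAN and subnet:

esxcli network vswitch standard portgroup add -p vSAN-ClusterA -v vSwitch1        # portgroup for this cluster's vSAN traffic
esxcli network vswitch standard portgroup set -p vSAN-ClusterA --vlan-id 103      # put it on its own VLAN
esxcli network ip interface add -i vmk2 -p vSAN-ClusterA                          # VMkernel interface on that portgroup
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.103.11 -N 255.255.255.0
esxcli vsan network ipv4 add -i vmk2                                              # enable vSAN traffic on this interface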

Tags: VMware

Similar Questions

  • 40Gbps adapter support for vSAN

    Is it supported by VMware to use a 40Gbps adapter for all-flash vSAN?

    Yes this is totally supported

  • Intel P3700 drivers for vSAN?

    I worked with our VMware team to design a vSAN build, and it included Intel P3700 PCIe cards as the flash tier, backed by 1.2 TB Seagate 10k SAS disks for the capacity tier.  Unfortunately, even though the P3700 has not yet appeared on the HCL, I am currently "chasing my tail" with latency and outstanding I/O.  I opened an SR, but the recommended H730 drivers do not seem to make any difference, and I'm fairly certain that's not the issue (destaging does not appear to be a problem).  I understand the P3700 isn't on the HCL yet, but that is still in progress, and many white papers have been written with this card as part of the architecture.

    It's all running VMware 6.0b (VMware ESXi 6.0.0 build 2494585 on the hosts).

    Hardware:

    - Dell R630 (x30)

    - 2 x Intel P3700 400 GB flash cards (driver intel-nvme-1.0e.1.1-1OEM.550.0.0.1391871.x86_64.vib)

    - 10 x 1.2 TB Seagate 10k SAS disks

    - Dell H730 RAID controller in HBA mode with cache disabled, etc. (the drivers for this were a concern, but since I assume the initial writes go straight to the PCIe cards, I don't think that's the issue in the long run - firmware 25.3.0.0016, driver 6.606.12.00)

    - Intel X710 with 2 x 10 Gb DAC connections

    - 4 x Juniper QFX5100 10 Gb switches (15 R630s per pair of switches, scaling up to 30 servers across 2 more switches later)


    vSAN traffic goes over uplink 2 (with uplink 1 as standby), everything else goes over uplink 1 (with uplink 2 as standby) - working on a plan to move to LACP, but that is still a work in progress.


    The current workload is very low.  There are only about 6 production VMs running on the cluster for project repositories and kicking the tires, with the idea that 500-1000 VMs will be spun up in the coming months.  For the majority of virtual machines performance is not critical, but some of the problems I'm currently seeing are a bit of a showstopper.


    Problems:

    If I run "bonnie ++ u root" on a virtual machine single, I can see the latency time to go up to 65, 000ms (Yes, really 65 k ms) and the virtual machine is basically responding (100% iowait and very rarely is able to write i/o because of the huge latency).  The write buffer is never very full during this period (stuck at 30% and deactivation does not start even during the race.)  Similar issues occur ATTO Disk Benchmark running on a windows system with a disk queue high (4 seems to be well, 10 kills of the VM)


    I can get very high write speeds (500-800 MB/s or more), but as soon as latency jumps to a few hundred ms, it's all downhill.


    Even with a fairly simple logging VM that all our hosts point to, I get occasional latency spikes (1400 ms+, against an average of 15 ms, which already seems high).  This box just handles a lot of writes to Logstash and a bit of Elasticsearch, with occasional reads when someone pulls something up in Kibana.

    Is there a special driver I should be reading up on for the Intel P3700?  Anything else I really should look into?  I'm tired of chasing my tail and want to start migrating the actual workload to this new cluster.  I tried RAID0 on a small 4-box cluster, but that wasn't much better and is much more annoying to manage.

    I use the 1.6 and 2.0 TB P3700 cards in my vSAN without problems. I have a similar setup, except using R730xd servers - the same RAID card with the same firmware + drivers, also on 6.0b. When I originally set it up I could see high latency spikes of 200-400+ ms; for me that was resolved by updating the NIC drivers. I use a different NIC (Intel X540-AT2), but upgrading the firmware + driver for my NIC brought my latency down to ~3 ms average, with the occasional blip peaking at ~15 ms. It is probably worth updating the firmware on your X710 and using the matching driver from the VMware Compatibility Guide: search for I/O devices.
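    For what it's worth, a quick way to see which driver and firmware a NIC is currently running (so you can compare against the Compatibility Guide); vmnic4 is just an example name:

    esxcli network nic list            # lists the vmnics and the driver bound to each
    esxcli network nic get -n vmnic4   # the Driver Info section shows driver and firmware versions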

    I've also seen really bad latency caused by network configuration problems. In our case we had a 1 Gb link for failover in case our 10 Gb link failed, but instead traffic started getting load-balanced across both, and performance/latency was very poor until we noticed that the 1 Gb link was fully utilized.

    Also try the inbox NVMe driver; I ran on it without problems before Intel released the 1.1 driver you are using now. In my limited testing the Intel driver performs slightly faster, but nothing majorly different.
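    If you want to check which NVMe driver is actually in use, or fall back to the inbox one for a test, something like this should work (the VIB name is taken from the driver listed above, so treat it as an assumption, try it on a test host first, and note the removal requires a reboot):

    esxcli software vib list | grep -i nvme    # show the installed NVMe driver VIB(s) and versions
    esxcli software vib remove -n intel-nvme   # example only: remove the Intel VIB to fall back to the inbox driver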

  • Is DRS required for the vSAN Health Check?

    Sorry, I know I'm going out on a limb here, but is DRS required for the vSAN Health Check plugin, and if so, why?

    We have licenses for vSphere 6 Standard and vSAN.

    I tried to install it via the web GUI and esxcli, but no dice on activating it.

    Thank you

    -Justin

    references

    http://www.VMware.com/files/PDF/products/VSAN/VMW-GDL-VSAN-health-check.PDF?RCT=j&q=&ESRC=s&source=Web&CD=1&ved=0CB8QFjA...

    http://www.VMware.com/files/PDF/products/VSAN/VMware-virtual-SAN6-proof-of-concept-Guide.PDF

    VMware KB: Virtual SAN Health Service - Cluster Health - vSAN Health Service update

    VMware KB: Installation of patches on an ESXi 5.x/6.x host from the command line

    2015_06_29_17_45_25_bridge2_Remote_Desktop_Connection.png

    2015_06_29_17_46_01_bridge2_Remote_Desktop_Connection.png

    I threw in the towel. I installed evaluation licenses on the hosts, set DRS to automated, and I'm now installing the Health Service.  I'll let you know if I see any side effects once I set it back to no DRS and Standard licenses.

  • Host cannot see local disks for vSAN

    Hello

    We are building our first vSAN hosts and have fallen at the first hurdle. Our host config is:

    IBM x3650 M4 7915
    ServeRAID M5110e
    1 x 200 GB SATA SSD
    1 x 600 GB 10k SAS disk
    IBM VMware Hypervisor USB key with the 5.5 U1 image installed.

    The problem is that the ESXi host cannot see the 2 disks that are installed. The storage controller is listed as a MegaRAID SAS Fusion adapter (not sure where it gets the Fusion name from). When I click on the controller, the 2 drives do not appear under devices or paths. If I go into the RAID controller at boot and configure the drives individually as RAID 0, then ESXi sees both disks but, unfortunately, cannot identify the SSD as an SSD. I think there is a workaround for that, but I know it's not ideal, and according to the vSAN HCL the RAID controller above supports passthrough. Getting passthrough to work is turning out to be difficult, but I hope it's just something simple I've missed. Does anyone have an idea where I should start looking?
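    In case it is useful while you chase passthrough: the usual workaround in 5.5 when RAID 0 devices are not detected as SSD is a PSA claim rule. This is only a sketch, and the naa ID is a placeholder for your actual SSD device:

    esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxxxxxxxxxx --option=enable_ssd
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i ssd    # should now report "Is SSD: true"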

    Thanks in advance.

    Thanks for the tip CHogan, it worked, but after reading about the disadvantages of this method in your book "Essential Virtual SAN" I persisted in getting passthrough to work on the RAID controller.

    After some further research it turned out that I had to physically remove the 1 GB RAID flash cache module on the motherboard. Removing it switches the RAID controller into a mode that allows JBOD.

    Thanks to all for your responses.

  • Separate VMkernel for datastore heartbeat traffic?

    Hello.

    I was just reading the Dell EqualLogic tech report 'Configuring iSCSI connectivity with VMware vSphere 5 and Dell EqualLogic PS Series storage' and saw something new on pages 3-4.

    Dell recommends creating a highly available VMkernel port on the iSCSI subnet to serve as the default VMkernel port for datastore heartbeat traffic, so that the heartbeat traffic sits outside the iSCSI software initiator and does not consume any additional iSCSI storage connections. It goes on to say that datastore heartbeat traffic will always use the lowest-numbered VMkernel port on the vSwitch.

    It makes sense, but this is the first I've heard of it. Does everybody do this, with EqualLogic or other iSCSI solutions?

    Thank you

    Brian


    Not sure what Dell is talking about, but datastore heartbeat "traffic" uses the normal iSCSI links. Each host has a file on one of the heartbeat datastores and holds it open, which means there is a lock on the file. The overhead is minimal and there is nothing to worry about at all.

  • Why multiple VMkernels for iSCSI multipathing?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command-line tools to bind them to a new vmhba? How is that different from using a software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about the NICs; it is about pairs of initiators (vmk interfaces) and targets.

    If you have two targets:

    • and you only need failover - 1 vmk is enough (NICs in active/standby mode)
    • if you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in active/active mode)
      • if you cannot use link aggregation - 2 vmks are needed (see the binding sketch below).
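    If it helps, the vmk-to-vmhba binding step the original poster asked about is small. A minimal sketch, assuming the software iSCSI adapter is vmhba33 (the adapter and vmk names are placeholders) and that each vmk's portgroup has only one active uplink:

    esxcli iscsi networkportal add -A vmhba33 -n vmk1    # bind the first iSCSI VMkernel port to the software adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk2    # bind the second one
    esxcli iscsi networkportal list -A vmhba33           # confirm both vmknics are bound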
  • vSAN disks required for host installation

    Hi all

    I have 2 questions about vSAN:

    (1) Is an HP Smart Array controller not needed, since vSAN handles the storage?

    (2) The selected hardware can accommodate 5 disks, so if we use one disk to install ESXi 5.5, can that disk somehow still be used for vSAN, or do we have to live with 4 drives for vSAN in my case?

    Thank you

    Kapil

    You will need a disk controller that supports passthrough at a minimum, and one that is on the vSAN HCL. Judging by the vSAN HCL, you will need to purchase a controller anyway.
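    A quick way to see exactly which controller and driver the host has (to match against the vSAN HCL):

    esxcli storage core adapter list    # lists each HBA with its driver and description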

  • Separate vSAN / Mgmt / vMotion traffic on different switches

    Hello

    I want to build a 4-node hybrid vSAN cluster (6.x).

    I have 2 x 10GbE switches dedicated to vSAN traffic,

    and also 2 x 1GbE switches for management / VM / vMotion traffic.

    Each host has 2 x 10GbE NICs, 1 uplink to each of the 10GbE switches.

    Each host also has 4 x 1GbE uplinks, 2 to each of the 1GbE switches.

    My idea is to use 10GbE just for vSAN traffic (separate dvSwitch, one active, one standby) on each 10GbE switch,

    also to use 2 ports for management traffic (separate dvSwitch, one active, one standby) on each 1GbE switch,

    and to use 2 ports for VM traffic (separate dvSwitch, one active, one standby) on each 1GbE switch.

    vMotion traffic would be on the same uplinks as management, but with active/standby reversed.

    Question: is it possible / useful to attach vMotion to the 10GbE uplink ports (active/standby reversed relative to vSAN traffic)?

    I'd appreciate any useful comments on my planned setup (I'm not a networking guy).

    A couple of thoughts.  You can run several VMkernels for vSAN and vMotion (allowing one on each switch).  For vSAN, use VLANs/subnets (one per switch gives you a nice A/B separation), while vMotion should ideally have all its VMkernels on the same layer 2 subnet.  This allows for maximum reliability, in theory faster failover, and access to more throughput.  It also has the advantage that, if your cluster is not too big (no bigger than, say, 48 ports), you can avoid multicast traffic leaving the switch (which usually means fewer arguments with the network team), since each TOR switch will have its own vSAN network.    Then use NIOC to carve up and protect the traffic classes from each other.  NIOC is not 100% as effective as SR-IOV (but it's good enough), and you will get the most out of your 10 Gbps investment without swamping things.

    • Management network VMkernel interface = explicit failover order = P1 active / P2 standby, 10.1.1.1/24, VLAN 100
    • vMotion VMkernel-A interface = explicit failover order = P1 active / P2 standby, 10.1.2.1/24, VLAN 101
    • vMotion VMkernel-B interface = explicit failover order = P2 active / P1 standby, 10.1.2.2/24, VLAN 101
    • Virtual machine portgroup = explicit failover order = P1 active / P2 standby, 10.1.2.1/22, VLAN 102
    • Virtual SAN VMkernel-A interface = explicit failover order = P1 active / P2 standby (or unused)*, 10.1.3.1/24, VLAN 103
    • Virtual SAN VMkernel-B interface = explicit failover order = P2 active / P1 standby (or unused)*, 10.1.4.1/24, VLAN 104

    In this case, VLANs 100, 101 and 102 would exist on all switches.

    * 103 and 104 could be configured to exist only on their own switch (A/B isolation, with no standby configured on either) or to exist on both switches (and use standby) as described.  This design focuses on keeping host-to-host communication on the same switch (it reduces complications with multicast and lowers latency, since vSAN traffic does not have to hop to another switch unless you run out of switch ports; although with dense 40Gbps switches using 10Gbps breakouts you could in theory hit the 64-node limit on a single switch).

    I'm curious about anyone's thoughts on just disabling failover and forcing each VMkernel to stick to its own switch (accepting loss of communication on that VMkernel in the event of a switch failure).  I want to do some lab testing of both and compare switch/path failover between these two configurations (versus a single-VMkernel configuration).

    However, some people prefer a 'simpler' configuration, though (and I'm not opposed to that).

    Duncan has laid out an active/passive failover configuration with a single VMkernel per host.

    In theory, not being as dependent on NIOC for storage isolation should help latency during the short bursts it takes for NIOC to kick in, compared with an active/active design.

    Virtual SAN and Network IO Control

    • Management network VMkernel interface = explicit failover order = P1 active / P2 standby
    • vMotion VMkernel interface = explicit failover order = P1 active / P2 standby
    • Virtual machine portgroup = explicit failover order = P1 active / P2 standby
    • Virtual SAN VMkernel interface = explicit failover order = P2 active / P1 standby
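    For reference, on a standard vSwitch the explicit failover orders above can be set per portgroup from the shell (on a dvSwitch the equivalent is done in the web client); the portgroup and uplink names here are placeholders:

    esxcli network vswitch standard portgroup policy failover set -p vSAN-A --active-uplinks=vmnic0 --standby-uplinks=vmnic1
    esxcli network vswitch standard portgroup policy failover set -p vSAN-B --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    esxcli network vswitch standard portgroup policy failover get -p vSAN-A    # verify the resulting order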
  • VSAN 6.2 error

    Hello

    I have a problem with vSAN 6.2.
    Deployed on 5 Cisco C240 machines (split into 5 fault domains),

    but the health check displays this message:

    1461897416393.jpg

    vSAN can be created, but within vSAN I cannot create or use anything; it shows this error message:

    1461897791506.jpg

    Please help me!

    Had you used a custom .iso to install ESXi?  Cormac talks about using the VMware .iso image for vSAN here: VSAN and OEM ESXi ISO images - CormacHogan.com.  Glad you found the problem.  Thank you, Zach.
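    One quick way to check which image profile (OEM or stock VMware ISO) a host was installed from:

    esxcli software profile get    # shows the installed image profile name and vendor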

  • vSAN performance tests: how does your cluster behave under real stress?

    Our vSAN node config:

    Dell R730xd

    768 GB of RAM

    PERC H730

    capacity tier: 21 x 1.2 TB SAS disks

    caching tier: 3 x SanDisk SX350-3200

    dedicated 10GbE NIC for vSAN

    We run into problems when a large number of virtual machines on our cluster each generate a large number of IOPS (IOmeter-like workloads).

    We are seeing the following issues:

    - hosts drop out of vCenter

    - .vmx files get corrupted

    This does not happen if we use only one load-generating virtual machine per host. In that case, these are our observations:

    Read % | Write % | Block size (KB) | Outstanding IO | File size (GB) | # Workers | Random % | Sequential % | FTT | # Components | Read cache reservation % | Total IOPS | Total MBps | Avg Lat (ms)
    45 | 55 | 4 | 5 | 10 | 4 | 95 | 5 | 1 | 6 | 0 | 8,092 | 33.15 | 0.6
    45 | 55 | 4 | 5 | 10 | 4 | 95 | 5 | 1 | 12 | 0 | 10,821 | 44.33 | 0.46
    45 | 55 | 4 | 5 | 10 | 4 | 95 | 5 | 1 | 6 | 100 | 12,611 | 51.65 | 0.39
    45 | 55 | 4 | 5 | 10 | 4 | 95 | 5 | 1 | 12 | 100 | 11,374 | 46.59 | 0.43
    45 | 55 | 4 | 64 | 10 | 4 | 95 | 5 | 1 | 12 | 100 | 29,746 | 121.8 | 2.15
    100 | 0 | 4 | 64 | 10 | 4 | 100 | 0 | 1 | 12 | 0 | 50,576 | 207.1 | 1.26
    100 | 0 | 4 | 64 | 10 | 4 | 100 | 0 | 1 | 12 | 100 | 50,571 | 207.1 | 1.26
    100 | 0 | 0.512 | 64 | 10 | 4 | 0 | 100 | 0 | 1 | 100 | 67,330 | 34.47 | 3.8

    These numbers look pretty good, don't they?

    From our point of view it would be fine if latency increased as we raise the load with more virtual machines. But as outlined above, our cluster simply breaks once the number of virtual machines passes a certain point.

    We are currently testing with a subset of four vSAN nodes with the above config. In this group we were able to power on 187 load-generating machines before three of the four hosts went into the "not responding" state.

    We wonder whether anyone else has done large-scale performance tests. If so, we would be very interested in your feedback. Maybe it will help us find the flaw in our setup.

    Kind regards

    Daniel

    With the help of VMware we discovered that the problems we were facing are caused by the load-generating VMs... each VM generates 2048 outstanding IOs... so with 187 VMs on the cluster we get an unreal number of outstanding IOs (382,976... if I remember the graphs for our cluster correctly, it came down to about 13,000 per host), which we will never see in real-world scenarios. With this much outstanding IO, vSAN 6.1 has problems, and VMware is working on a solution... maybe with 6.2 and QoS that is resolved?

    A few other changes:

    - a newer driver for our Intel X710 network cards

    - a new driver and firmware (beta version from Dell) for our PERC H730 controller

    and some advanced settings:

    esxcfg-advcfg -s 100000 /LSOM/diskIoTimeout

    esxcfg-advcfg -s 4 /LSOM/diskIoRetryFactor

    esxcfg-advcfg -s 2047 /LSOM/heapSize (with these settings we were able to create 3 disk groups - each with 7 magnetic disks and 1 flash device - per host)
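    For anyone applying the same settings, the current values can be read back with -g to confirm they took effect:

    esxcfg-advcfg -g /LSOM/diskIoTimeout
    esxcfg-advcfg -g /LSOM/diskIoRetryFactor
    esxcfg-advcfg -g /LSOM/heapSize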

  • vSAN cluster configuration

    I would like to check a few things about vSAN:

    1. How many disk groups can I create, given the cluster sizing below?

    Scenario 1:

    Server 01: 16 disks (2 x 1 TB SSD, 2 x 300 GB SATA, 12 x 1 TB SATA)

    Server 02: 16 disks (2 x 1 TB SSD, 14 x 1 TB SATA)

    In scenario 1, which server configuration is ideal for vSAN? Do I have to install ESXi on a 300 GB drive and use the rest for the disk groups, as in Server 01? Or can I go with the Server 02 configuration - if I install ESXi on one of the 14 disks, can that disk still be included when creating disk groups?

    Scenario 2:

    If I use vSAN together with SRM, do I create a separate datastore for placeholder space outside the vSAN disk groups, or can I create it within vSAN as a separate disk group (which I think is wasteful, because that disk group would tie up 1 SSD just for reserved placeholder datastore space)?

    - Can I create disk groups from SSDs and SATA disks that live in different servers? For example, 3 SSDs (1 each from servers 1-3) and 6 SATA drives (2 each from servers 1-3), grouped as 1 disk group?

    No, you can't.  The SSD and the HDDs behind it must be in the same host to form a disk group.


    I see, so if I install ESXi on that disk, the entire disk is consumed (I assume it becomes the first local datastore the host sees by default) and it cannot be included when creating vSAN disk groups. Am I right? So it is best to have a separate drive to install ESXi on.

    You are right.  A common practice is to install ESXi on an SD card or USB stick, preferably mirrored.  Pay attention to sizing: logs must not reside on vSAN storage, and vSAN logging will fill an 8 GB or 16 GB SD/USB device quite quickly.
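    One common way to handle that (a sketch; the collector address is a placeholder) is to point the host logs at a remote syslog target instead of the SD/USB device:

    esxcli system syslog config set --loghost='udp://10.0.0.50:514'    # send host logs to an external collector
    esxcli system syslog reload                                        # apply the new syslog configuration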

    - I have always done hardware RAID on the SSD and SATA disks; is that still necessary here? If so, what RAID level does VMware recommend? Should the SATA disks be RAID 0, RAID 1 or RAID 6? Or is no RAID needed at all because vSAN pools them into one large datastore (I guess it acts like basic software RAID)?

    No, you do not need any RAID.  OK, you could do RAID 0 if you cannot do passthrough, but vSAN needs to see the disks directly.


    - I guess I can create a local placeholder datastore (on another drive in the local server) for vSphere Replication.

    Yes, you could.


    Thank you, Zach.

  • vSAN re-design options?

    Hi all - I have the following vSAN environment and need input on the direction you think I should take to get the best design.

    Current configuration:

    3-node Dell R720 vSAN

    Dedicated 10Gig network for vSAN

    Each node has two disk groups

    First disk group = 200 GB SSD with 4 x 1.2 TB 10K SAS disks

    Second disk group = 200 GB SSD with 3 x 1.2 TB 10K SAS disks

    I would have liked larger SSDs at purchase time, but the budget was not there.

    I now have the option to buy another server to make it a 4-node vSAN environment. Better for greater redundancy, and my ultimate goal.

    I am hoping to squeeze the cost of larger SSDs for this new server into the budget, so that I can reuse the spare SSDs in the other servers.

    Question 1:

    If the new server has a higher-spec, larger (400 GB) SSD and I add it to the current vSAN cluster, will vSAN have problems with the difference in disk performance and size between the nodes? I don't think so, but asking anyway.

    Question 2:

    If I am able to upgrade one SSD in each existing server, will there be problems running 2 disk groups per server with different SSD sizes (one 400 GB and one 200 GB)? I don't think so, but asking anyway.

    Question 3:

    How would you re-design my new vSAN, given that if I buy new 400 GB SSDs I will still have the 200 GB SSDs available to use somewhere? Should I change the number and size of my disk groups per server, for example? There are so many possibilities, I'd be interested to hear your opinions.

    Thank you.

    Question 1:

    If the new server has a higher-spec, larger (400 GB) SSD and I add it to the current vSAN cluster, will vSAN have problems with the difference in disk performance and size between the nodes? I don't think so, but asking anyway.

    It will work fine.  However, in Cormac and Duncan's Essential Virtual SAN guide it is recommended that all hosts be identical.

    Question 2:

    If I am able to upgrade one SSD in each existing server, will there be problems running 2 disk groups per server with different SSD sizes (one 400 GB and one 200 GB)? I don't think so, but asking anyway.

    Same answer as above.

    Question 3:

    How would you re-design my new vSAN, given that if I buy new 400 GB SSDs I will still have the 200 GB SSDs available to use somewhere? Should I change the number and size of my disk groups per server, for example? There are so many possibilities, I'd be interested to hear your opinions.

    I think you pretty much designed it perfectly.  4 nodes are best because you have room to rebuild if there is a failure.  You have dedicated 10 Gb for vSAN.  All the other components look in order as well.  And I don't think it's a big problem to mix and match SSD sizes in the cache tier as long as they are equivalent from a latency and/or throughput performance perspective.  And if you get to vSphere 6, you can use the health plugin to make sure everything is functioning.  vSphere 6 also offers an add-in for vRealize Operations Manager for additional insight into how the SSD tier is performing.


    What kind of workload is on your vSAN?


    Nice work!  Thank you, Zach.

  • Disk groups are not visible in the cluster. The vSAN datastore exists. 2 hosts (out of 8) in the cluster do not see the vSAN datastore. Their storage is not recognized.

    http://i.imgur.com/pqAXtFl.PNG

    http://i.imgur.com/BnztaDD.PNG

    I don't even know how to tear it down and rebuild it when the disk groups are not visible. The disks show as healthy under each host's storage adapters.

    Currently on the latest version of vCenter 5.5. Hosts are running 5.5 build 2068190.

    Just built. Happy to tear it down and rebuild. I just do not know why it is not visible on those two hosts, and why the disk groups only recognize 3 hosts when more are contributing. Also strange that I can't get the disk groups to populate in vCenter. I tried two different browsers (Chrome and IE).

    I have it working now.

    All ESXi hosts are built identically on 5.5. All hosts are homogeneous from a CPU / disk controller / installed RAM / storage perspective.

    I got it working. I had to manually destroy all traces of vSAN on each individual host node:

    (1) Put the hosts in maintenance mode and remove them from the cluster. I was unable to disable vSAN on the cluster, so I did it on each host node (manually via the CLI below), then logged out of the vCenter web client and back in to finally refresh the ability to disable it on the cluster.

    esxcli vsan cluster get - to check the vSAN status of each host.

    esxcli vsan cluster leave - to remove the host from the vSAN cluster.

    esxcli vsan storage list - to view the disks in the host's disk groups.

    esxcli vsan storage remove -d naa.id_of_magnetic_disks_here - to remove individual magnetic disks from the disk group (you can skip this and use the following command to remove just the SSD, which drops every disk in that host's disk group).

    esxcli vsan storage remove -s naa.id_of_solid_state_disks_here - this removes the SSD and all the magnetic disks in a given disk group.

    After that, I was able to manually add the hosts back to the cluster, exit maintenance mode and configure the disk groups. The aggregate capacity of the vSAN datastore is correct now, and everything is functional.

    Another question for those of you still reading... how do I configure things so that VMs migrated to (or created on) the vSAN datastore immediately pick up the default storage policy I built for vSAN?

    Thanks to anyone who followed along.

  • vSphere Replication 6 VMkernel ports

    Thanks to Jeff Hunter for his recent updates and documentation on vSphere Replication 6.0.  Reading the docs online, I have a few questions about the newly supported dedicated vSphere Replication VMkernel ports.

    Here (vSphere Replication 6.0 Documentation Center) and here (vSphere Replication 6.0 Documentation Center) are notes on configuring VMkernel ports dedicated to VR traffic on a source host and on a target host (one for VR traffic and another for VR NFC traffic, respectively).

    Considering that it is probably common practice to use VR as the replication engine with SRM, with the intention of failing back to the original production site, what is the value in configuring two VMkernel ports for VR?

    At the protected site, you configure a VR VMkernel port to send traffic.  It sends the replicated VM data to the recovery site's VR appliance, which in turn sends that replicated data to the VR NFC VMkernel ports of the recovery site's ESXi hosts.

    For failback, then, the recovery site can (should?) have an additional VR VMkernel port, which sends the replicated VM data to the original protected site's VR appliance, which in turn sends the replicated data to the VR NFC VMkernel ports of the original protected site's ESXi hosts.

    It sounds like there may, or must, be a distinction between site-to-site traffic and VR NFC traffic within a site, since there are two VMkernel traffic types for VR (VR and VR NFC).

    What is the distinction that warrants a dedicated VR NFC VMkernel port? Why not just use the VR VMkernel port? Thank you!

    Edit: I consider these traffic types to be at the same level of importance and security.  I have no problem putting the two VMkernel ports in the same VLAN.  If I did that, it would mean two VMkernel ports per host in the same network segment.  I am wondering why I would not want to do that, rather than just use a single VMkernel port or multiple VLANs.

    Post edited by: Mike Brown

    I think it essentially boils down to options. You don't have to do it, but based on feedback it was felt we had enough requests from customers to provide a mechanism that not only lets you control the path the replication traffic (incoming and outgoing) takes between the source hosts and the VR appliances, and the routes it takes on the network, but also lets you control the NIC used for the VR NFC traffic on the target sites. As you know, VR relies on NFC to push the data down to the target datastores at the target sites, and some customers wanted to be able to separate that traffic as well.

    So in the case of NFC, you can optionally set things up so that the traffic to the storage hosts (by which I mean the hosts VR has determined have access to the target datastores) is sent over a separate physical LAN if you want that... and a lot of people have asked for that flexibility. It lets customers isolate the VR NFC traffic (and VR traffic in general) from "regular" non-VR management traffic.

    Once VRM notices that a host has a vmknic tagged for VR NFC, only that address is reported to the VR server, which means that when we talk to this host in the future we will only use that address for VR NFC traffic.
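    For completeness, the tagging itself is done per VMkernel interface from the ESXi shell; the vmk numbers here are placeholders:

    esxcli network ip interface tag add -i vmk3 -t vSphereReplication       # on a source host: outgoing VR traffic
    esxcli network ip interface tag add -i vmk4 -t vSphereReplicationNFC    # on a target host: incoming VR NFC traffic
    esxcli network ip interface tag get -i vmk4                             # confirm which tags are set on an interface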

    Just my 2 cents on why we did it.
