OTV, FabricPath, and VXLAN

Hi gurus

We want to evaluate Cisco OTV and FabricPath, as well as VXLAN. Could you help with a list of the pros and cons of these technologies? We are looking to simulate 4 interconnected sites in a full mesh, Active/Active setup. We would like to narrow it down to 2 solutions so that the CEP would not be so resource-intensive and time-consuming. Thank you.

Regards,

Aaron

See my comment in

https://supportforums.Cisco.com/discussion/12392916/fabricpath-or-OTV-DCI-pros-and-cons

Tags: Cisco DataCenter

Similar Questions

  • What is the difference between Cisco OTV and VXLAN?

    Hi all

    I know that both VXLAN and OTV are technologies for extending a Layer 2 network (wrapping MAC in IP). Can anyone tell me the difference between them, especially from a design point of view? When would we choose OTV over VXLAN, and vice versa?

    Thank you very much!

    There have been many, many posts on the Internet about this. For starters, Scott Lowe wrote a good 'versus' article - here:

    http://blog.scottlowe.org/2011/12/22/OTV-and-vxlan-layer-3-connectivity-compared/

    OTV is a Cisco-proprietary technology, so your design is limited to a Nexus 7000 if you go this route. A good document on OTV can be found here -

    http://www.Cisco.com/en/us/docs/solutions/Enterprise/Data_Center/DCI/whitepaper/DCI3_OTV_Intro_WP.PDF

    VXLAN, on the other hand, is a standard (which Cisco also participates in). Some details about VXLAN here -

    http://www.borgcube.com/blogs/2011/11/vxlan-Primer-Part-1/

  • NSX logical switches and VGT mode

    Hello

    I think the answer is "", but just to play it safe: can I assume that VGT is not supported for virtualwire port groups?

    The configuration does allow you to define VLAN trunking, but my thought is that we need a one-to-one relationship between a VLAN ID and a VXLAN VNID (if a VLAN ID is used at all, for example for L2GW purposes).

    I ran into issues testing this in a nested environment...

    see you soon

    / Rik

    We do not support passing VLAN guest-tagged traffic through the DLR. It must be L2 only... VM-to-VM (within a VXLAN) or VM-to-VM (software L2 bridging).

  • Different routes for VXLAN interfaces?

    Hi guys,

    According to the NSX v2.1 Design Guide:

    The need for static routing is due to the fact that ESXi supports only two TCP/IP stacks:

    - VXLAN: This stack is dedicated to VMkernel VTEP interface traffic. A dedicated default route 0.0.0.0/0 can then be configured on this stack for each ESXi host, pointing to the gateway deployed on the local ToR, which allows it to communicate with VTEPs deployed in remote transport subnets.

    - Default: This stack is used for all other types of traffic (vMotion, management, storage). It is typical to use the default route for management purposes (given that connections to the vmk0 management interface could come from several remote IP subnets). This means that static routing configuration is necessary to support inter-subnet communication for the other types of traffic.

    Sounds good, but is this special VXLAN stack added by default, or do I need to create and configure it myself? I configured a host with separate management and VXLAN VMkernel interfaces, and I see my VXLAN traffic being pushed through the default gateway on the management network.

    I use static IP pools for both the management and the VXLAN interfaces. Does anyone know how to add this dedicated default route under the VXLAN stack?

    Output from one of my compute hosts:

    vmk0 = management

    vmk1 = VXLAN

    ~ # esxcfg-route -l

    VMkernel Routes:
    Network          Netmask          Gateway          Interface
    172.16.100.0     255.255.255.128  Local Subnet     vmk0
    default          0.0.0.0          172.16.100.1     vmk0

    ~ # esxcfg - vmknic - l

    Interface  Port Group/DVPort  IP Family  IP Address                Netmask          Broadcast        MAC Address        MTU   TSO MSS  Enabled  Type
    vmk0       0                  IPv4       172.16.100.100            255.255.255.128  172.16.100.127   00:0c:29:ab:31:c9  1500  65535    true     STATIC
    vmk0       0                  IPv6       fe80::20c:29ff:feab:31c9  64                                00:0c:29:ab:31:c9  1500  65535    true     STATIC, PREFERRED
    vmk1       2                  IPv4       172.16.200.100            255.255.255.128  172.16.200.127   00:50:56:64:0e:f3  1600  65535    true     STATIC
    vmk1       2                  IPv6       fe80::250:56ff:fe64:ef3   64                                00:50:56:64:0e:f3  1600  65535    true     STATIC, PREFERRED

    Kind regards

    Bobby

    The VTEP interface is created by NSX during host preparation.

    You can see this step in the NSX Getting Started Guide - "Step 3: Prepare ESXi hosts for NSX" (NSX for vSphere Getting Started Guide).

    Once you have your VXLAN stack in ESXi, you can see it via the vCenter UI (as usual).

    Or if you like CLI:

    - Routes:

    root@Lab1_ESXi1:~ # esxcli network ip route ipv4 list -N vxlan

    Network       Netmask        Gateway       Interface  Source
    ------------  -------------  ------------  ---------  ------
    default       0.0.0.0        192.168.20.1  vmk1       MANUAL
    192.168.20.0  255.255.255.0  0.0.0.0       vmk1       MANUAL

    - Ping:

    root@Lab1_ESXi1:~ # ping ++netstack=vxlan 192.168.20.22

    PING 192.168.20.22 (192.168.20.22): 56 data bytes

    64 bytes from 192.168.20.22: icmp_seq=0 ttl=64 time=0.616 ms
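
    For completeness, to answer the original question about adding or adjusting that route by hand: assuming the VXLAN netstack already exists on the host (NSX creates it during host preparation) and that 192.168.20.1 is the local ToR gateway for the VTEP subnet (adjust to your environment), a command along these lines should do it:

    ~ # esxcli network ip route ipv4 add -N vxlan --network default --gateway 192.168.20.1
    ~ # esxcli network ip route ipv4 list -N vxlan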

    Dimitri

  • Monitoring L2 frames in NSX

    How can I capture the L2 frames leaving any VM before and after VXLAN encapsulation, i.e. both before the VXLAN header is added and after?

    Thank you

    There are several tools available to capture packets.

    You can capture directly on the host by using pktcap-uw, which allows you to capture traffic at different points in the stack. Captures can be exported to a file readable by Wireshark.

    VMware KB: Using the pktcap-uw tool in ESXi 5.5 and later

    You can also take advantage of port mirroring on the vDS to capture traffic passing through it, sending the traffic to a monitoring station, among other options.

    You can also capture traffic on the physical wire, on the physical switch side, by using port mirroring there to send the traffic to a monitoring station.
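
    As a rough sketch of the pktcap-uw approach (the port IDs, vmnic and file names below are placeholders; check the KB above for the exact options on your build): capture at the VM's switchport to see the frame before VXLAN encapsulation, and at the uplink to see it after the VXLAN header has been added.

    ~ # esxcli network vm list                                      # find the VM's world ID
    ~ # esxcli network vm port list -w <world-id>                   # find its switchport ID
    ~ # pktcap-uw --switchport <port-id> -o /tmp/pre-encap.pcap     # before encapsulation
    ~ # pktcap-uw --uplink vmnic0 --dir 1 -o /tmp/post-encap.pcap   # after encapsulation, leaving the host

    Both files can then be copied off the host and opened in Wireshark.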

  • NSX vSwitch for ESXi 6

    Hello

    Does the NSX vSwitch driver that must be installed on an ESXi host need to be installed separately on each host (using a VIB file), or can it be installed via the NSX appliance somehow?

    Can this process be automated somehow, or is it still a manual process, e.g. install ESXi 6 and then install the NSX vSwitch driver?

    Thank you.

    The tutorial link you posted was for NSX-MH (MH meaning multi-hypervisor). The installation architecture is a little different. Since you are working with NSX for vSphere, I recommend a blog like http://www.routetocloud.com, or the links to sites listed here: NSX resources | VMware Professional Services

    Unlike NSX-MH, which requires an nsx-vswitch component installed on the host, the installation of NSX for vSphere 6.2 requires only the vsip and vxlan VIBs on the host. The NSX vSwitch you refer to is more commonly called a logical switch. (See Deep Dive: the NSX vSwitch | Virtual Me)

    I would recommend following Mr. Salek's suggestion: work your way through the installation guide for the basic installation and configuration foundation, so that you can create a logical switch.
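
    If you want to confirm the host preparation afterwards, the installed NSX VIBs can be checked from the host shell; on NSX for vSphere 6.2 the names to look for are usually esx-vsip and esx-vxlan (a quick sketch, and the VIB names may vary by version):

    ~ # esxcli software vib list | grep -E 'esx-vsip|esx-vxlan'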

  • FabricPath or OTV between two data centers using direct fiber cable

    Hello

    I have two data centers, both with the same equipment (N7K, N5K and N2K), and we want the data centers to be active/active. I'm really confused about whether to use the OTV or the FabricPath feature. Can someone help me with my scenario and explain which is the best solution, and the advantages and disadvantages of OTV versus FabricPath?

    Many thanks in advance

    Hi Steven,

    No problem, I'll go through your points as completely as possible. I advise you to read more about these protocols; if you have access to INE or similar, watch their videos on this. I would also like to say again that I have not seen any Cisco documentation indicating that FabricPath should be used as a DCI.

    With regard to FabricPath, you asked the following...

     1. We can only use it between two data centers; if we have more we can't, please correct me?

    No, you can use FabricPath with more than two data centers, and likewise with OTV you can use it with more than two data centers.

     2. HSRP localization cannot be implemented as it can with OTV. However, you can have two different gateways at Data Center 1 and 2 using two different HSRP groups. If a server is moved dynamically from... (I didn't understand this point, can you please explain with an example?)

    OK, so this is a GREAT topic. HSRP localization CAN be implemented with OTV, but cannot be implemented with FabricPath. First-hop redundancy protocols can be localized, and this is supported by Cisco with OTV; it basically allows the same default gateway to reside in both of your data centers, providing the active/active configuration. So no matter where your VMs are, they do not change their default gateway, even if your servers move to the other data center. (A rough sketch of the typical filtering config is included near the end of this reply.)

    If we didn't have this, we would have only one active HSRP member shared between the DCs, and things would be extremely troublesome with regard to traffic flows. Say a virtual machine in one VLAN in DC2 needs to talk to a host in VLAN B, but the active default gateway sits entirely in DC1. The frame is sent across the DCI to DC1, where the default gateway routes the packet into VLAN B. That VLAN B host actually lies in DC2, so now the traffic has to go all the way back to DC2. You get my point...? :)

    With localization, this all happens locally within the DC. All servers/VMs in a DC can speak locally to their "own" default gateway.

     3. unknown unicast flooding (can you give me an example?)

    Unknown unicast traffic is unicast packets/frames with an unknown destination MAC address. By default, switches flood this type of traffic to all ports in the VLAN. With FabricPath, that flooding would also cross your DCI, but with OTV it is all handled locally, so there are massive bandwidth savings here and it is much more efficient.

     4. ARP optimization between data centers (can you give an example regarding ARP optimization?)

    This is another function of OTV that makes it far superior to FabricPath here. Essentially, we are reducing the volume of traffic passing across the transport infrastructure (i.e. the DCI).

    When a host in DC1 ARPs for a host that responds in DC2, the request uses the DCI links, and there is packet travel time that might be minimal but is not optimal. The OTV AED (authoritative edge device) snoops the ARP response and from then on knows that this mapping exists. For subsequent ARPs, the AED essentially proxies the ARP reply locally in DC1, so the ARP request does not have to travel to DC2.

     5. Typically two flows (odd VLANs via OTV-VDC-1 and even VLANs via OTV-VDC-2) carry the entire Layer 2 traffic flow between the two data centers. Hence the load balancing across the links is not efficient. (Can you explain and compare with FabricPath if you have an example?)

    IMHO, this is both bad and good. OTV load balances if you have more than one AED on site: odd-numbered VLANs go via one AED, even-numbered VLANs go via the other. Depending on the traffic on those VLANs, this could become unbalanced. FabricPath uses all of its links to 'route' MAC addresses to the respective switch ID (SID) it needs to reach, so there is perhaps a more even split here.

     6. VLAN scalability for OTV is lower than FabricPath as of this writing. (Can you explain what this means? I didn't understand it.)

    I completely disagree with this comment. I too do not understand.

     7. Resiliency of a FabricPath network is better than OTV in some failure scenarios. (Can you give me an example?)

    I also disagree with that. The resilience of FabricPath could be the same as OTV, or perhaps better. However, my personal experience is that with fine tuning, with things like BFD, OTV failover is much faster!

    FabricPath is good because its IS-IS control plane and its operation are admirable, but you could say the same for OTV.

    Let's say one of the DCI links dies: FabricPath forwarding would continue over the other links, so perhaps for low-latency, high-frequency environments that would be beneficial. OTV will change the AED and re-learn the MAC addresses announced by the other AEDs, but as I said, with tuning that time can be extremely minimal. This isn't a big deal unless you need sub-second convergence!

    I hope that I have answered your questions. I recommend using OTV for your DCI and using FabricPath for your local switching inside each DC. This has been implemented repeatedly, and the Cisco validated design links I sent you also point this out.

    Remember - FabricPath was built as a step towards TRILL and a replacement for spanning-tree protocols, while OTV was built specifically for the DCI. They are both built for specific design cases. It makes no sense to get these confused or mixed up, unless there is a real and pressing use case.

    Joel's conclusion is right: use the right tool for the job. If the use case is good for FabricPath then OK; if not, OTV.

    Recommended reading: http://www.packetmischief.ca/2013/04/23/DCI-series-overlay-transport-vir...
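
    To make the HSRP localization point in #2 concrete, this is roughly what the commonly documented FHRP filtering looks like on each OTV edge device. Treat it as a sketch based on the Cisco OTV whitepapers, not a validated config: the VLAN range and names are placeholders, only HSRPv1 addresses are shown, and HSRPv2/VRRP need additional entries.

    ip access-list HSRP_IP
      10 permit udp any 224.0.0.2/32 eq 1985
      20 permit udp any 224.0.0.102/32 eq 1985
    ip access-list ALL_IPs
      10 permit ip any any
    vlan access-map HSRP_Localization 10
      match ip address HSRP_IP
      action drop
    vlan access-map HSRP_Localization 20
      match ip address ALL_IPs
      action forward
    vlan filter HSRP_Localization vlan-list 100-110
    !
    mac-list HSRP_VMAC_Deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
    mac-list HSRP_VMAC_Deny seq 20 permit 0000.0000.0000 0000.0000.0000
    route-map HSRP_Filter permit 10
      match mac-list HSRP_VMAC_Deny
    otv-isis default
      vpn Overlay0
        redistribute filter route-map HSRP_Filter

    The VACL stops HSRP hellos from leaking across the overlay, and the route-map keeps the HSRP virtual MAC out of the OTV control plane. The ARP/ND caching behind point #4 is, as far as I recall, enabled by default on the overlay interface and controlled with the otv suppress-arp-nd command.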

    These are just my thoughts.

    Bilal (CCIE #45032)

  • DLR uplink and ESG internal interface on the same transit VXLAN cannot ping each other.

    To start with, I am running NSX 6.2.2 with the firewall rules set to allow 'all' to 'all' on 'all protocols', in other words effectively disabled...

    I have a transit VXLAN 5000 with a DLR uplink interface attached to it and an ESG internal interface attached to it, and neither of the two can ping the other. So for troubleshooting, I added 2 Windows VMs attached to the same transit VXLAN 5000; one VM is on ESXi host 1 and the other is on ESXi host 4. They can ping each other fine, and both VMs can ping both the DLR uplink and the ESG internal interfaces.

    This has puzzled me because it makes no sense: why can the DLR and the ESG not ping each other, while the 2 VMs on that VXLAN can ping all adjacent devices? I can even point those VMs' gateways at the ESG with a NAT rule and they can get to the internet through the ESG, but no matter what I try, the DLR cannot ping the ESG and the ESG cannot ping the DLR...

    I need to define a static route between the ESG <-> DLR, but if I can't even get the interfaces to answer pings, I'm dead in the water.

    If I put VMs on a DLR internal LAN interface such as WebApp or Database for example, I can ping all the way across the DLR up to the DLR uplink IP, but they cannot ping the ESG internal interface.

    Does anyone have suggestions for troubleshooting? Test commands that I can run? I have tried many things and lots of websites with troubleshooting steps. Everything seems fine, all green checks in the installation steps... All routes, MACs and ARP tables appear as expected when I run test commands on the hosts and controllers. I don't know what the cause is, other than a bug in the code...

    All ideas are welcome... Thank you

    UPDATE:

    So, it turned out it needed a static NAT rule on the ESG...

    In my environment, I added a SNAT rule on interface ESG_Uplink, with source 0.0.0.0/24 translated to 1.1.1.101 (my lab ESG uplink IP).

    It works now... A tenant VM connected to the WebApp portgroup (192.168.13.115) can now ping the DLR gateway, get routed via OSPF to the ESG, and ping the physical gateway of...

    I learned a lot on this one... I'm not going to worry about why the static route I tried in the first post didn't work, since I now have OSPF running instead (which is more appropriate for a realistic scenario in my lab anyway), and this foundation will now suffice to build the rest of this vRA/vRO POC lab...

    Thank you in any case, sometimes it's just nice to have someone to listen.
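
    For anyone else who lands here with the same symptom, a few generic checks that apply (a rough sketch only; interface names and addresses are placeholders): on the ESG and DLR consoles, confirm the transit interfaces, routes and ARP entries, and from each ESXi host confirm the VTEPs can reach one another over the VXLAN netstack.

    On the ESG and DLR CLI:
    show interface
    show ip route
    show arp

    From each ESXi host:
    ~ # ping ++netstack=vxlan <remote-VTEP-IP>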

  • VXLAN on UCS: IGMP with Catalyst 3750, Nexus 5548, Nexus 1000V

    Hello team,

    My lab consists of a Catalyst 3750 with SVIs acting as the router, Nexus 5548s in a vPC setup, UCS in end-host mode, and a Nexus 1000V with the segmentation feature (VXLAN) enabled.

    I have two different VLANs for VXLAN transport (140, 141) to demonstrate connectivity across L3.

    Hosts with their VMkernel on VLAN 140 join the multicast group fine.

    Hosts with their VMkernel on VLAN 141 do not join the multicast group. As a result, VMs on these hosts cannot ping VMs on hosts on VLAN 140, and they can't even ping each other.

    I turned on debug ip igmp on the L3 switch, and the output indicates a timeout while it is waiting for a report from VLAN 141:

    Oct 15 08:57:34.201: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:57:34.201: IGMP(0): Set report delay time to 3.6 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:57:36.886: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:57:36.886: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0
    Oct 15 08:57:38.270: IGMP(0): Send v2 Report for 224.0.1.40 on Vlan140
    Oct 15 08:57:38.270: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.1 for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.66.1 for 0 sources
    Oct 15 08:57:38.270: IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): MRT Add/Update Vlan140 for (*,224.0.1.40) by 0
    Oct 15 08:57:51.464: IGMP(0): Send v2 general Query on Vlan141    <----- it just hangs here until timeout and goes back to Vlan140
    Oct 15 08:58:35.107: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:58:35.107: IGMP(0): Set report delay time to 0.3 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:58:35.686: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:58:35.686: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0

    If I do a show ip igmp interface, I can see that there are no joins for VLAN 141:

    Vlan140 is up, line protocol is up
      Internet address is 172.16.66.1/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      IGMP configured query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      IGMP configured querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 2 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.1 (this system)
      IGMP querying router is 172.16.66.1 (this system)
      Multicast groups joined by this system (number of users):
          224.0.1.40(1)

    Vlan141 is up, line protocol is up
      Internet address is 172.16.66.65/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      IGMP configured query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      IGMP configured querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 0 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.65 (this system)
      IGMP querying router is 172.16.66.65 (this system)
      No multicast groups joined by this system

    Is there a way to check why the hosts on VLAN 141 are not joining successfully? The port-profile configurations on the 1000V for the VLAN 140 and VLAN 141 uplinks and vmkernels are identical, except for the different VLAN numbers.

    Thank you

    Trevor

    Hi Trevor,

    One quick thing to check would be the IGMP config for both VLANs.

    Where did you configure the querier for VLAN 140 and VLAN 141?

    Does the VXLAN transport cross any routers? If so, you would need multicast routing enabled.
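
    Since both SVIs already show IGMP and multicast routing enabled, a quick comparison of the per-VLAN PIM/querier and snooping state is where I would start. A rough sketch (the PIM mode shown is an assumption for a lab like this; adjust to your design):

    show run interface Vlan140
    show run interface Vlan141
    show ip igmp snooping vlan 141
    show ip igmp snooping groups vlan 141

    ! on the 3750, if PIM turns out to be missing on the second SVI
    ip multicast-routing distributed
    interface Vlan141
      ip pim sparse-dense-mode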

    Thank you!

    / Afonso

  • DC design question: extending L2. VPLS? OTV? A direct link?

    Hello to all,

    During a restructuring phase of 2 data centers (located 500 m from each other), we are wondering what to use to extend L2.

    All devices are 6500s in VSS (except the 2 above, which are 4500s); no problem with the links to use (it's a campus, I think we have more or less 20 dark fibers).

    Red: L3 PtP links

    Blue: L2 links

    One solution is in the attached file sol_one: L3 PtP north-south (via EIGRP, and that's OK), L2 over dark fiber east-west... extending the L2 with STP and VTP (!).

    We are thinking about placing a pair of N7004s with OTV and then stretching the VLANs (sol_two).

    We could also use A-VPLS, but it seems more difficult to configure (and more expensive).

    Any help is appreciated.

    L.

    I would like to know if you need more assistance on this topic. Please make sure you rate helpful posts.

    Sent from the Cisco Technical Support iPad App

  • NSX design with Cisco UCS/fabric interconnects and Nexus switches

    Hi Experts

    I am new to NSX design and deployment and am working on a project. We are deploying NSX for 4-tier applications (web, app, db, DC). I will be using logical switches, DLR, ESG and DFW. My next point of confusion: we intend to use static routes..

    1. Do we span all the VLANs from the virtual to the physical environment? For example the mgmt VLAN, the tier VLANs (web, app, db), the VXLAN transport VLAN, or should it be only specific VLANs? In other words, would I have to define all the VLANs in the NSX environment on my physical switching environment?

    2. vDS? Do we create only 1 vDS initially during the vCenter deployment, or more? Are there any special considerations to take into account for the NSX deployment?

    3. Static routes - do we configure static routes on the DLR and the ESG? Should I use default routes upstream? On the physical router, should we be routing all subnets from the virtual environment towards the ESG?

    4. Where and by whom should virtual machines be created? Via vCenter, or via NSX after the NSX deployment?

    5. We have a domain controller tier. Should it be part of the 3 application tiers, or separate with an allow any/any rule on the DFW?

    Thank you

    Sam

    (1) The VLANs which exist for physical machines span to NSX VXLAN logical switches in the following cases:

    • If in the current deployment there are physical machines in the same VLAN and IP subnet as virtual machines, this common VLAN port group is migrated to a VXLAN-backed logical switch port group, and it is not possible to change the IP addresses of the virtual machines; then a DLR (Distributed Logical Router) bridge acts as the conversion between the physical VLAN and the virtual VXLAN
    • If P-to-V conversion of the physical machines continues on this VLAN

    VLANs which contain only virtual machines, or VLANs which contain only physical machines, do not need to be spanned.

    (2) For the NSX deployment, there may be more than 1 vDS or only 1 vDS depending on the design. There may be other types of traffic besides the VXLAN-based VM traffic, such as backup, storage, vMotion and management, and the usual overall design best practices apply here as well. A requirement of NSX is a common vDS that spans the entire cluster. For each cluster, this "common vDS" may be different. Again, this vDS may be a separate vDS dedicated to VTEPs, or the VTEP functionality can be added to an existing vDS. It may be best to have a separate vDS for the VTEPs.

    (3) For the DLR, a default gateway is usually sufficient. If static routes are used, the ESG must then have a default route upstream and static routes downstream, with the DLR as the next hop, for the VM logical switch IP subnets. On the physical router, static routes to the VM logical switch subnets, as well as to the DLR-ESG transit subnet, are required. Managing static routes is easier if route summarization is possible; otherwise it may be a good idea to use dynamic routing such as OSPF or BGP. There are also IP address management features in vRealize and other IPAM solutions if automation is needed for large and dynamic environments. (A small routing sketch is included at the end of this reply.)

    (4) NSX has no VM creation functionality; it only creates network services such as switches, routers, firewalls and load balancers. VM creation continues the same way as before. A point to note is that the logical switches that are created appear as VXLAN-named port groups on the vDS. NSX Manager creates the port groups on the vDS; the only difference is that the name includes VXLAN. The virtual machine is, as before, added to this VXLAN-backed port group, or added to the logical switch from the NSX Manager interface, which itself appears as a plugin in vCenter. So vCenter is the place to create virtual machines and add those VMs to the logical switches.

    (5) The domain controller tier can be a separate tier, or another third-party tier; it may be preferable to keep it separate from the 3 application tiers. Usually it is the same design as without NSX. DFW rules can help protect the domain controllers by allowing only the required ports from the permitted virtual machines or physical machines. DFW rules can apply to NSX VXLAN-based logical switches as well as to VLAN-based DVS port groups, because it is a kernel module.
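
    As a small illustration of point (3): the ESG and DLR routes themselves are configured through the NSX Manager UI or API, while the physical router only needs to know how to reach the virtual subnets via the ESG uplink. Assuming 192.168.10.0/24 and 192.168.20.0/24 are VM logical switch subnets behind the DLR and 10.0.0.2 is the ESG uplink address (placeholder values), the physical side would look something like:

    ip route 192.168.10.0 255.255.255.0 10.0.0.2
    ip route 192.168.20.0 255.255.255.0 10.0.0.2
    ! or, if the virtual subnets summarize cleanly
    ip route 192.168.0.0 255.255.0.0 10.0.0.2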

  • VDS and NSX Cluster

    Hi all

    I want to know the role of the cluster and VDS in NSX

    and what is the difference with the HA or DRS cluster

    Thank you

    Both are prerequisites for installing the NSX components. The cluster is used for VXLAN transport preparation, and the vDS is designed to host the VXLAN port groups. There is no difference with the HA and DRS cluster, because NSX uses the same cluster.

    Here is the documentation for your question: NSX 6 Documentation Center

  • How to change the VXLAN port in the NSX Manager?

    Hi Expert,

    I am using NSX Manager 6.2.2, and the default value for the VXLAN UDP port is 8474.

    I want to change it to the IANA-assigned default UDP port, 4789.

    How can I do that?

    Thank you very much

    Mike

    Also, if you do not want to upgrade for some reason, you should be able to change it via the API, as documented in this blog post:

    NSX for vSphere and the VXLAN UDP port number - Stretch Cloud - Technology Undressed
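
    For reference, the change described in that post is, as far as I recall, a single PUT against NSX Manager; verify the exact URI against the blog and the API guide for your version before running it:

    curl -k -u admin:'password' -X PUT https://<nsx-manager>/api/2.0/vdn/config/vxlan/udp/port/4789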

  • SR-IOV and VXLAN

    Hello

    Does SR-IOV support VXLAN?

    This KB shows VXLAN virtual wires not being supported with SR-IOV in vSphere 5.1 and 5.5: VMware KB: SR-IOV support status FAQ

    Page 139 of the vSphere 6 networking guide also lists VXLAN virtual wires as a feature not available for virtual machines configured with SR-IOV.

  • NSX environment with distributed logical routing but without VXLAN

    I would like to know if it is possible to create an NSX environment with distributed logical routing capabilities, but using regular VLAN-tagged DVS port groups instead of VXLAN?

    I ask this because in all the design and deployment guides I have read so far, the DLR function is always associated with VXLAN.

    Hello

    You can (and it works in real life); however, there are a few conditions. Have a look here, it mentions dvPortgroups several times: NSX 6.2 for vSphere Documentation Center
