iSCSI vNIC a requirement?

If I do not intend to boot from an iSCSI array, do I have to go the extra step of configuring an iSCSI vNIC for use with vSphere, or is it enough to create only standard vNICs with the appropriate settings for jumbo frames, etc.?

I think before UCS 2.0 that was sufficient, but after talking with TAC, it sounds like the iSCSI vNIC is now required with vSphere...

Is there an advantage to creating an iSCSI vNIC (beyond booting from an iSCSI array)?

Thanks in advance,

Lewis Benton

An iSCSI vNIC is only necessary if you are doing iSCSI boot.

If you are not booting from iSCSI, you do not need an iSCSI vNIC in your service profile.

If you want jumbo frames, set the MTU on the vNIC and be sure to set it in the QoS policy as well.

Louis

Tags: Cisco DataCenter

Similar Questions

  • iSCSI vNIC etc.

    Is an iSCSI vNIC created only to allow SAN boot? Once the OS is installed, these vNICs will not show up, and only the overlay NIC will show?

    I'm trying to understand which vNICs are necessary to allow SAN boot and iSCSI SAN connectivity once the operating system is loaded.

    Correct.

    Yes, it is supported.  The VIC's iBFT functionality (I assume you are using that adapter) will let the host boot from iSCSI, and then, to access iSCSI SAN data LUNs over the network (for example a VMFS datastore), the OS will use the software iSCSI initiator built into ESX.

    Robert

  • iSCSI boot needed on B200 M2, but get 'not enough available vNICs'

    Hello

    I'm trying to implement iSCSI boot for ESXi on a B200 M2 with an M71KR-Q card.

    I add vNICs A and B (each bound to one fabric respectively), but when trying to add an iSCSI vNIC I get the error 'not enough vNICs available'.

    Looking around, it seems the M71KR-Q can only support 2 vNICs (which I also confirmed by trying to add a third; same error message as above).

    I just thought that since the iSCSI vNIC overlays a real vNIC, it would work.

    However, even when adding just 1 vNIC and trying to overlay an iSCSI vNIC on top of it, I still get the "there are not enough overall resources, not enough vNICs available" error message.

    Does anyone know if iSCSI boot is even possible on a B200-M2 with an M71KR-Q?

    Thanks in advance!

    Petar

    Please refer to the iSCSI boot section of this document: https://supportforums.cisco.com/docs/DOC-18756 - it is very detailed, if I do say so myself.

    iSCSI boot is only supported on the Cisco VIC and Broadcom mezzanine cards. It is not supported on the M71KR-Q QLogic Gen-1 mezzanine.

    Dave

  • iSCSI Boot target IP Question

    Hi all

    Setting up iSCSI boot for the first time and had a question.

    Setup: UCS blades on a pair of N5Ks (vPC) with a dual-controller NetApp attached.  There are 4 vNICs per UCS blade - two for iSCSI only (as the native VLAN) and two for everything else.

    The odd blades boot from LUNs on NetApp controller A and the even blades boot from LUNs on NetApp controller B.

    So my question.  When I configure the iSCSI targets for an odd blade (for example), do I point both the A and B "NICs" at controller A's IP?  Or NIC-A at controller A's IP and NIC-B at controller B's IP?  I was not sure whether, in this example, controller B would even hold the LUN that this odd blade is booting from.

    Thank you!

    Hi Ian

    You can add up to two iSCSI vNICs to a boot policy. One vNIC acts as the primary iSCSI boot source, and the other acts as the secondary iSCSI boot source.

    "NIC-A points at controller A's IP and NIC-B points at controller B's IP" - YES.

    You would create another boot policy for the even blades, with the primary/secondary order reversed.

  • iSCSI - two subnets on one vSwitch with iSCSI port binding

    Hello

    Is the scenario below supported for software iSCSI port binding (ESXi 6.x)?

    Two iSCSI storage devices (2 controllers) in two different subnets: 192.168.10.x and 192.168.20.x (mask 255.255.255.0).

    ESXi host with one iSCSI vSwitch.

    Four vmkernel ports: two in the 192.168.10.x subnet and two in the 192.168.20.x subnet.

    A software iSCSI port binding is configured for each vmkernel port.

    It is worth noting that this scenario is a little different from the examples in the VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi.

    It does not work this way. iSCSI port binding requires a one-to-one relationship between vmkernel ports and vmnics.

    From https://kb.vmware.com/kb/2045040

    To implement a teaming policy that is compliant with iSCSI port binding, you need 2 or more vmkernel ports on the vSwitch and an equivalent number of physical adapters to bind them to...
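
    As a minimal sketch of a compliant setup from the ESXi shell (vmk1/vmk2 and vmhba33 are hypothetical names - check yours with esxcli iscsi adapter list; each vmkernel port must also have exactly one active uplink):

        # bind one vmkernel port per physical uplink to the software iSCSI adapter
        esxcli iscsi networkportal add -A vmhba33 -n vmk1
        esxcli iscsi networkportal add -A vmhba33 -n vmk2
        # verify the bindings
        esxcli iscsi networkportal list -A vmhba33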

    André

  • UCS/VMware build

    Hello

    I'm looking for clarification on how the Palo VIC presents vNICs, how to tell/control which vNIC maps to which physical port, and how best to configure my environment for VMware.

    Hardware summary:

    B250 blades (full-width) with 2 Palo VIC cards

    5108 chassis with dual FEX (dual uplinks from each FEX)

    2 x 6120 fabric interconnects with 2 x 10 Gbit uplinks each to a pair of Nexus 5020s

    Software summary:

    UCS 1.3.0, boot loader/kernel/system version 4.1(3)N2(1.2d)

    VMWare vSphere Enterprise Plus 4.1

    As I understand it, Cisco and VMware recommend against rate-limiting the interfaces, preferring to go with shares and the ability to burst.

    Therefore, my desired build is the following:

    For each of the 4 available 10 Gbit interfaces in the B250 blade, I want to have 2 vNICs (1 for the VM distributed vSwitch [6 Gbit minimum share, 802.1q trunk], 1 for iSCSI [4 Gbit minimum share, access port/no 802.1q trunk]). This means my service profile must define 8 vNICs altogether.

    Q1: How can I make sure that I get 1 'data' and 1 'iscsi' vNIC per physical port?

    Q2: How do I set a policy to guarantee the minimum bandwidth on each vNIC?

    Q3: How do I control/verify the correct binding order so that I can script my vSphere configuration?

    The VMware build will be a distributed vSwitch containing all the 'data' vNICs, using network IO control with shares defined to manage the various allowances for the VM data, console and VMotion port groups. The iSCSI vNICs will each get a dedicated VMkernel port.

    Thank you very much in advance for any help.

    Our setup is not the same, but the principles should apply since we also use the VIC.  To make sure we had the ports set up properly, I provisioned six vNICs in UCSM for each host.  I provisioned the first vNIC on fabric A and the second vNIC on fabric B - do not check the "Fabric Failover" box...  Repeat for the other four NICs.  This will give you 3 vNICs per fabric.  Note the MAC addresses and which fabric each is attached to.

    Then, in vCenter when configuring your hosts, look at the network adapters and line up the vmnics with the MAC addresses so you know how they map.  When you configure your vSwitches, just make sure you add one vmnic from fabric A and one vmnic from fabric B.  Do this for the Service Console, VMkernel and VM switches.
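
    As a quick sketch of lining them up (this is on classic ESX 4.x; on ESXi 5.x and later the equivalent is esxcli network nic list):

        # list vmnics with their MAC addresses to match against UCSM
        esxcfg-nics -l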

    If you do this in a single-VIC environment, it will ensure that your vSwitches are attached to each fabric.  I assume you can do the same thing with two VICs, since each card should appear independently in UCSM.

    As for bandwidth management and the rest, check out http://bradhedlund.com/ - his recent posts are on UCS networking.

    Adam

  • VM with a snapshot of 900 GB!

    Hello everyone!

    Some information:

    (1) We use ESX 3.5 Update 4 with vCenter 4 Update 1 on NetApp storage connected via NFS to the ESX servers.

    (2) I have a Win2K3 VM that had a D: drive, a 900+ GB virtual disk, from which I migrated all the files to an attached iSCSI volume.

    (3) Before I migrated the data, I took a snapshot of the VM, added an additional vNIC for iSCSI and configured it in the guest OS. And another snapshot was taken just in case.

    (4) The 900+ GB of data was copied to the iSCSI volume.

    (5) The VM was powered off to remove the 900+ GB D: drive.

    (6) The VM was powered on and is using the iSCSI disk OK.

    Unfortunately I forgot to commit the 2 snapshots... and a month later, I noticed that the VM shows up in vCenter as having a 923 GB snapshot!

    The storage view tab for the cluster shows the size of the snapshot!

    Looking in the virtual machine's folder, the delta files are nowhere near that big:

    Looking at the volume via the console, the files do not show a 900 GB snapshot:

    So does vCenter think the virtual machine has a 900 GB snapshot because I removed the 900 GB D: drive's disk file from the VM properties?

    I really don't want to wait 3 years for this snapshot to commit!

    So, theoretically this VM really only has 2 snapshots of about 3 GB each, correct? And they shouldn't take very long to commit, should they?

    Any ideas as to why vCenter thinks this VM has a huge snapshot?

    You should be really happy to know that you do not have a 900 GB snapshot. To commit (consolidate) the other two snapshots, you can clone the virtual machine - that is the safest way.

  • iSCSI vNIC issue

    Hi all

    I'm having a problem trying to get my iSCSI paths onto a separate physical switch from the management vNICs. What I was going to do: install one switch for management traffic and another for vMotion/iSCSI. The problem occurs when I plug the iSCSI vNIC into the switch: there is no DHCP, so I assign a static IP address. After I give it an IP address, pings to the management port start to fail, even though it is not connected to that switch.

    To me, the vNICs should be independent of each other and I'm really confused at this point. Any help would be great.

    Steve

    From the network screenshot, the IP address used for iSCSI (192.168.3.1) is in the same subnet as the management network (192.168.3.196), which most likely results in a routing problem. I would recommend that you use a dedicated, non-routed subnet for iSCSI traffic, such as the (192.168.1.113) one you mentioned earlier.

    André

  • Guest iSCSI - is a second vNIC a good idea?

    When implementing guest operating systems with internal iSCSI initiators, how do you deal with access to the iSCSI network?

    As in, it seems advisable to have a separate iSCSI storage network that guests normally cannot reach. But if some of the guests must run internal initiators to access certain LUNs on the same iSCSI SAN the VMFS datastores are on, how should it be set up?

    I'm thinking about adding a second virtual NIC to those guests and putting this vNIC on the storage network port group with the correct VLAN, as that would at least be restricted to the small number of guests that need to reach the iSCSI network. Is that a viable solution?

    How would you protect the VMFS iSCSI LUNs from being accessed directly, and possibly destroyed, by the guests? CHAP or a SAN security feature?

    I'm thinking about adding a second virtual NIC to those guests and putting this vNIC on the storage network port group with the correct VLAN, as that would at least be restricted to the small number of guests that need to reach the iSCSI network. Is that a viable solution?

    Yes, another vNIC is actually the only way to connect the guests to the storage network.
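
    As a sketch of the guest-side setup for a Windows guest using the built-in Microsoft initiator (the portal IP and IQN below are hypothetical):

        rem register the array's discovery portal, then list and log in to the target
        iscsicli AddTargetPortal 192.168.20.10 3260
        iscsicli ListTargets
        iscsicli QLoginTarget iqn.2001-05.com.example:guest-lun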

    How would you protect the VMFS iSCSI LUNs from being accessed directly, and possibly destroyed, by the guests? CHAP or a SAN security feature?

    Do this on the storage side. Depending on the storage, create hosts or host groups and present the different LUNs only to the guests that should see them.

    André

  • iSCSI HBA on 3.5 U3 - is vSwitch setup required?

    I have ESX 3.5 U3 installed on an IBM 3850 with 2 QLogic HBAs installed.  I did the firmware updates and ESX recognizes the HBAs as storage adapters.  My problem is this: all my other ESX servers use standard NICs, so I create a second vSwitch for my iSCSI LAN and add a 2nd NIC to that switch for the guests.  When I go to create my iSCSI vSwitch, the only available adapters are my NICs; I can't add the HBAs.  How can I configure a vSwitch on my iSCSI LAN that will use/access the HBAs?

    If you use the MS iSCSI initiator inside your virtual machine then you will need to use regular NICs; you wouldn't be able to use the iSCSI HBAs, as they are used by the vmkernel to access the iSCSI SAN.

  • Port-groups, vSphere 5 and Jumbo (iSCSI) frames

    We will implement a UCS system with EMC iSCSI storage. Since this is my first time, I'm a little unsure about the design, although I have picked up a lot of knowledge reading this forum and elsewhere.

    We will use the 1000V.

    1. Is it allowed to use only one uplink port group with the following traffic types: mgmt, vmotion, iscsi, vm network, external network?

    My confusion here is about jumbo frames. Shouldn't we have a separate uplink for those? In this design, are all traffic types using jumbo frames (or is this set per port group)?

    I read something about using a QoS system class of service for jumbo frames. Maybe that's the idea here.

    2. I read in a thread not to include mgmt and VMotion in the 1000V but to put them on a standard vSwitch. Is this correct?

    In this case, the uplink design would be:

    1: Mgmt + vMotion (2 vNICs, VSS)

    2: iSCSI (2 vNICs, 1000v)

    3: VM data, external traffic (2 vNICs, 1000v)

    All NICs set as active, virtual port ID teaming

    Answers inline.

    Kind regards

    Robert

    Atle Dale wrote:

    I have 2 follow-up questions:

    1. What is the reason I cannot use a 1000V uplink profile for the vMotion and management? Is it just for simplicity people do it that way? Or can I do it if I want? What do you do?

    [Robert] There is no reason.  Many customers run all their virtual networking on the 1000v.  This way they don't need vmware admins to manage virtual switches - keeps it all in the hands of the networking team where it belongs.  Management Port profiles should be set as "system vlans" to ensure access to manage your hosts is always forwarding.  With the 1000v you can also leverage CBWFQ which can auto-classify traffic types such as "Management", "Vmotion", "1000v Control", "IP Storage" etc.

    2. Shouldn't I use MTU size 9216?

    [Robert] UCS supports an MTU of up to 9000 plus assumed overhead.  Depending on the switch you'll want to set it at either 9000 or 9216 (whichever it supports).

    3. How do I do this step: "

    Ensure the switch north of the UCS Interconnects are marking the iSCSI target return traffic with the same CoS marking as UCS has configured for jumbo MTU.  You can use one of the other available classes on UCS for this - Bronze, Silver, Gold, Platinum."

    Does the Cisco switch also use the same terms "Bronze", Silver", "Gold" or "Platimum" for the classes? Should I configure the trunk with the same CoSes?

    [Robert] The Plat, Gold, Silver, Bronze are user friendly words used in UCS Classes of Service to represent a definable CoS value between 0 to 7 (where 0 is the lowest value and 6 is the highest value). COS 7 is reserved for internal traffic. COS value "any" equals best effort.  Weight values range from 1 to 10. The bandwidth percentage can be determined by adding the channel weights for all channels, then dividing the channel weight you wish to calculate the percentage for by the sum of all weights.
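
    As a worked example of the weight math (hypothetical weights): with Platinum at weight 10, Silver at 7 and Best Effort at 5, the weights sum to 22, so the Silver class would be guaranteed roughly 7/22, or about 32%, of the bandwidth under congestion.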

    Example.  You have UCS and an upstream N5K with your iSCSI target directly connected to an N5K interface. If your vNICs were assigned a QoS policy using "Silver" (which has a default CoS 2 value), then you would want to do the same upstream by a) configuring the N5K with a system MTU of 9216 and b) tagging all traffic from the iSCSI array target's interface with CoS 2.  The specifics for configuring the switch depend on the model and SW version.  N5K is different from N7K and different from IOS.  Configuring jumbo frames and CoS marking is pretty well documented all over.

    Once UCS receives the traffic with the appropriate CoS marking it will honor the QoS and put the traffic back into the Silver queue. This is the "best" way to configure it, but I find most people just end up changing the "Best Effort" class to 9000 MTU for simplicity's sake - which doesn't require any upstream tinkering with CoS marking.  You just have to enable jumbo MTU support upstream.
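
    As a minimal N5K sketch of the above (the interface and CoS value are assumptions; syntax varies by NX-OS version, and the untagged cos command only applies to untagged traffic from the array):

        ! enable jumbo MTU system-wide via a network-qos policy
        policy-map type network-qos jumbo
          class type network-qos class-default
            mtu 9216
        system qos
          service-policy type network-qos jumbo
        ! mark traffic arriving from the iSCSI target's port with CoS 2
        interface Ethernet1/10
          untagged cos 2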

    4. Concerning N1k: Jason Nash has said to include vMotion in the system VLANs. You do not recommend this in previous threads. Why?

    [Robert] You have to understand what a system vlan is first.  I've tirelessly explained this in various posts.  System VLANs allow an interface to always be forwarding.  You can't shut down a system vlan interface.  Also, when a VEM is rebooted, a system vlan interface will be FWDing before the VEM attaches to the VSM to securely retrieve its programming.  Think of the chicken & egg scenario.  You have to be able to FWD some traffic in order to reach the VSM in the first place - so we allow a very small subset of interfaces to FWD before the VSM sends the VEM its programming - Management, IP Storage and Control/Packet only.  All other non-system VLANs are rightfully BLKing until the VSM passes the VEM its policy.  This secures interfaces from sending traffic in the event any port profiles or policies have changed since the last reboot or module insertion.  Now keeping all this in mind, can you tell me an instance where you've just rebooted your ESX host and need the VMotion interface forwarding traffic BEFORE communicating with the VSM?  If the VSM was not reachable (or both VSMs were down), the VM's virtual interface could not even be created on the receiving VEM.  Any virtual ports moved or created require VSM & VEM communication.  So no, the vMotion interface VLANs do NOT need to be set as system VLANs.  There's also a max of 16 port profiles that can have system vlans defined, so why chew up one unnecessarily?
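
    As a sketch of the distinction in 1000v port profiles (the VLAN numbers are hypothetical):

        ! management: a system vlan, so it forwards before the VEM gets its programming
        port-profile type vethernet Management
          vmware port-group
          switchport mode access
          switchport access vlan 10
          system vlan 10
          no shutdown
          state enabled
        ! vmotion: deliberately NOT a system vlan
        port-profile type vethernet vMotion
          vmware port-group
          switchport mode access
          switchport access vlan 20
          no shutdown
          state enabled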

    5. Do I have to set spanning-tree commands and enable global BPDU Filter/Guard on both the 1000V side and the uplink switch?

    [Robert] The VSM doesn't participate in STP so it will never send BPDUs.  However, since VMs can act like bridges & routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDUFilter.  PortFast so the interface FWDs faster (since there's no STP on the VSM anyway) and BPDUFilter to ignore any received BPDUs from VMs.  I prefer to ignore them rather than using BPDU Guard - which will shut down the interface if BPDUs are received.
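
    On an upstream N5K, for example, that advice might look like the following sketch (the interface is hypothetical; classic IOS uses spanning-tree portfast trunk instead of the edge port type):

        interface Ethernet1/11
          description VEM uplink
          spanning-tree port type edge trunk
          spanning-tree bpdufilter enable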

    Thanks,

    Atle, Norway

    Edit:

    Do you have some recommendations on the weighting of the CoS classes?

    [Robert] I don't personally.  Other customers can chime in with their suggestions, but each environment is different.  VMotion is very bursty so I wouldn't set that too high.  IP storage is critical so I would bump that up a bit.  The rest is up to you.  See how it works, check your QoS & CoS verification commands to monitor, and adjust your settings as required.

    E.g:

    IP storage: 35

    Vmotion: 35

    Vmdata: 30

    and I can then assign the management VMkernels to the Vmdata CoS.

    Message was edited by: Atle Dale

  • UCS iSCSI SAN Configuration

    All,

    I can't find good documentation on connecting a UCS system to an iSCSI SAN.  We run an Equallogic iSCSI SAN and I need to connect our UCS system to it.  I find a lot of documentation on Fibre Channel, but of course that does not help me.  Does anyone know of any iSCSI configuration documentation they can point me to?  I intend to just use it for storage and will NOT try to boot from SAN.

    Thank you

    Ken

    Ken,

    There is no "formal" for iSCSI access guide, but it's pretty simple.  It's just like the implementation of ethernet access to something else. The only consideration you may want to make your MTU size is.  Most of the people set up MTU/frames extended on their iSCSI VLANS.

    This requires the following to accomplish.

    1. create a class system UCS QoS with a MTU of 9000 & apply to the vNIC (s) Service profile-> vNIC iSCSI->

    2. enable extended frames (9000 MTU) on all switches between (UCS) initiator and iSCSI Target (Equalloqic).

    3. turn on the frames on OS adapter (Vmkernel port for ESX, the Windows network adapter advanced properties)

    As you can see the only task-specific UCS is the QoS policy that allows an MTU of 9000 bytes size - assuming you use frames in your environment - very recommneded especially with 10 G.
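
    For step 3 on ESXi, a minimal sketch (vSwitch1 and vmk1 are hypothetical names; on ESX 4.x use esxcfg-vswitch -m and esxcfg-vmknic instead):

        # raise the MTU on the vSwitch, then on the iSCSI vmkernel port
        esxcli network vswitch standard set -v vSwitch1 -m 9000
        esxcli network ip interface set -i vmk1 -m 9000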

    Robert

  • PS-M4110x - management and iSCSI networks. Clarification of the configuration steps. The documentation is at best ambiguous.

    Hi guys,

    I'm having a lot of trouble translating what I know from the PS6000 arrays to the new M4110x.  Here's what I'm building:

    I want my iSCSI traffic completely isolated from all other traffic, and I want to use the CMC network to manage the array.  It should be a simple configuration, but Dell's documentation for it is worse than useless.  It confuses everyone who reads it.  Why is this?

    It seems that I should be able to assign the management IP addresses using the CMC, according to DELL:

    Step 1.  Initialize the storage

    * Once in the CMC, right-click on storage and open the storage initialization GUI.

    * Member name: MY_SAN01
    * Member IP: 192.168.101.10
    * Member gateway: 192.168.101.254
    * Group name: MY_SAN
    * Group IP address: 192.168.101.11
    * Group membership password: groupadmin
    * Group admin password: groupadmin

    It sounds simple enough, and when I apply this I assume I will be disconnected from my M4110x, simply because it currently resides on a separate network (net 2 in the image above).  Now how do I set up the IP address of my CMC (net0 in the picture above) management network?

    Step 2.  Set the management port IP

    According to the Dell documentation, I have to:

    To set the management port:

    * Open a telnet (ssh) session from a computer or console that has access to the PS-M4110 array. The array must be configured beforehand.

    * Connect to the PS-M4110 module using the following racadm command: racadm server 15 connect

    * Log in to the PS-M4110 array as grpadmin

    Once I am in:

    Activate the management controller ports using the following commands in the CLI:
    0. > member select MY_SAN01
    1. (array1) > eth select 1
    2. (array1 eth_1) > ipaddress 10.10.10.17 netmask 255.255.255.0
    3. (array1 eth_1) > up
    4. (array1 eth_1) > exit
    5. (array1) > grpparams
    6. (array1 (grpparams)) > management-network ipaddress 10.10.10.17

    (array1 (grpparams)) > exit

    Is my interpretation correct?  Now my questions:

    1. In step 2, substep 1 - how do I know which ethernet interface to use?  Does step 1 automatically assume eth0?

    2. Am I correct in using the same IP address in both step 2 - substep 2 and substep 6?  Or do I have to assign a different IP address for these?  10.10.10.18 maybe?

    3. In step 2 - substep 6, there doesn't seem to be a network mask. Is that correct?

    4. Compared to the PS6000E - there I set up an IP address for each controller (so 2) and then assigned an IP address for the group.  That's 3 IP addresses.  For this M4110, it seems that I have only one controller.  Is this correct?  The specifications make a point that there are 2 controllers.  What happened to the 2nd controller's IP address?

    Background:

    I plan on building a VMware cluster using Dell's multipathing module, and I built it at the DSC, but a Dell technician set up the array initially and did not set up a dedicated management port.  That configuration required routing management traffic over the iSCSI network.  It is not recommended, and I don't want to set it up this way.

    Currently, this is a blocking problem and I need to get past it ASAP.  I work with a large systems integrator in Texas and plan on ordering these systems built this way from them.  This means that I must be able to explain to them how to proceed.  This issue is standing in the way of progress, and I really hope I can get a satisfactory response from this forum.  Thanks for any helpful answers.

    I think I have the answers to my own questions:

    1. YES.  Step 1 automatically assumes eth0.  There are TWO Ethernet interfaces; eth1 is disabled by default, and unless you use step 2 to set up the management port, this second Ethernet interface is never used.

    2. NO.  I can't use the same IP address for both lines.  In substep 6 I need to use a different IP address on the same network; 10.10.10.18 would work fine.

    3. YES.  It is correct.  Substep 6 assumes the network mask that I included in substep 2.

    4. It's tricky.  There is NO WAY to configure active/active on these arrays.  There are 2 controllers, but one "stays asleep" unless the other fails.  Effectively, the IP address is assigned to an abstraction layer that maintains it.  When one fails, the other controller "wakes up" and just starts accepting traffic; it doesn't care what its IP address is.

    Another point.  Now that my array is initialized and my interfaces are configured, I need to know what IP address to point my ESXi hosts at for their storage.  Use the group IP address assigned in step 1.  It is 192.168.101.11 (there is a typo in the original post).

  • install new PS6510e - no communication on iSCSI network

    Getting desperate here now.

    Bought a refurbished PS6510e from a third-party reseller - and have been categorically denied access to Dell's support site on that basis.

    Connected to 2 R720 servers via teamed 10 GB SFP+ cables through an 8024F switch to the SAN, with redundant 10 GB connections to the two controllers. Configured on a private network for iSCSI-only traffic, with private addresses for each server, the switch, the individual SAN NICs and the SAN group IP.

    Set up on the business/management network for server/SAN/switch management, which works perfectly.

    The 2 servers (2012r2) can communicate with each other via the teamed 20 GB links. No communication with the SAN via ping or iSCSI connections at all. The servers can ping each other and the switch, not the SAN. From the CLI on the SAN - it cannot ping anything - not even its own individual IP addresses. The SAN web interface reports which switch ports (15 + 16) it is plugged into, so it knows it is connected and is apparently happy.

    Configured according to the guidelines in Dell's white papers: jumbo frames, LAG on the switch. I swapped the 10 GB connections between the servers and the SAN, and regardless of the combination of connections, the servers talk, the SAN does not.

    The web interface and SAN HQ report that everything is hunky dory. The one error is about free space, since we have not configured volumes yet.

    Bright ideas about what is fundamentally wrong very gratefully received.

    Hello

    Re: refurbished.  Sorry, but Dell does not offer PS Series support in this way.  Dell partners cannot provide these services either.  Support requires the array to be under warranty or a support contract for access to firmware and other downloads.  In addition, the license to use the array is held in the contract, not the hardware.  This license may not be transferred or resold.

    Re: teamed.  PS Series does not support NIC teaming at all.  You need to configure MPIO on the servers instead.  Also make sure that VLAN tagging is not in use, or is stripped, on all PS Series ports.

    In the GUI, do the network ports show as online?

    On the switch, make sure that you have the current firmware, and that data center bridging (DCB) is disabled.

    Given that you cannot ping anything, I suspect the cabling or switch configuration.  If you use TWINAX cables they must be PASSIVE; active cables are not supported.

    Kind regards

    Don

  • Update the UCS service profile after adding vNICs?

    I've got service profile templates and blades running off these service profiles. Each blade has two vNICs and I must add two more vNICs to them. I want to add the additional vNICs to the templates and let the service profiles "update" with the new vNICs. This would happen automatically if the templates were "updating", but all my templates are currently "initial", so they won't push the new vNIC config to the blades.

    To get the blades to recognize the new vNICs, should I re-bind the templates with the new vNICs to the service profiles, or is there a "push" I can trigger from the template somewhere that I'm missing?

    The blades are running ESX and I don't want an unplanned restart.

    Thank you!!!

    GAK

    Greetings Ganesh,

    When you use static templates (initial), the only way to apply your changes is to unbind/re-bind the SP to the updated template.  As it is an "initial" template, you can make changes to it without them being "pushed" immediately.

    That being the case, the best option for you would be to:

    1. Unbind your SPs from the SP template.

    2. Add the new vNICs to the SP template (which currently has nothing bound to it).

    3. Place your first ESX host in maintenance mode & shut it down.

    4. Re-bind this ESX host's service profile to the template (this will cause a re-association event).

    5. Power on your ESX host & check for the changes.

    6. Repeat steps 3 to 5 for your remaining ESX hosts.

    Let me know if you have any other questions.

    Kind regards

    Robert

    * Note: I much prefer initial templates for the reason that they don't PUSH configuration changes that could potentially cause a reboot.  Updating templates are good if you make profile changes that do not require a reboot to apply - i.e. adding or removing a VLAN.  Apart from that, your initial template will keep your changes "locked in" without risking any accidental config pushes/reboots.
