Nexus 1000v 4.2(1)SV2(2.1a) - License Question

I have two VSMs (VMs) running as an HA pair on two server blades in the same HP chassis. I initially ran the 1000v environment on the evaluation licenses, but I have now installed the Cisco license files that were provided.

My question is this: when I run the command 'show license usage NEXUS1000V_LAN_SERVICES_PKG' I see the 6 installed licenses; however, I also see that 'Installed Licenses in Use' equals 0. Shouldn't I see 2 installed licenses in use?

vdivc01-n1kv# show license usage NEXUS1000V_LAN_SERVICES_PKG

----------------------------------------

Feature Usage Info

----------------------------------------

Installed Licenses: 6

Default Eval Licenses: 0

Max Overdraft Licenses: 16

Installed Licenses in Use: 0

Overdraft Licenses in Use: 0

Default Eval Lic in Use: 0

Default Eval days left: 0

Licenses Available: 22

Shortest Expiry: never

----------------------------------------

vdivc01-n1kv# show module vem license-info

Licenses are Sticky

Mod   License Version   Socket Count   License Usage Count   License State

---   ------------   -------------------   ---------------   --------------

3     2              -                     -                 licensed

4     2              -                     -                licensed

vdivc01-n1kv# show license brief

N1KVFEAT201312300740270040.LIC

N1KVFEAT201312300741200540.LIC

N1KVFEAT201312300742010570.LIC

N1KVFEAT201312300742330820.LIC

N1KVFEAT201312300743060900.LIC

N1KVFEAT201312300743430650.LIC

Any help is very appreciated!

David

Installed licenses are only consumed by the Advanced edition.  I suspect your switch is still running in Essential edition mode, which is why there is no license count or license version shown for those modules.  Check the output of 'show switch edition'.

Example:

n1kv-SV2# show switch edition

Switch Edition: Advanced   <<==

Advanced Features

Feature Name          Feature State

-----------------------------------

cts                   disabled

dhcp snooping         disabled

vxlan-gateway         enabled

Licenses Available: 22

Licenses in Use: 10   <<==

License Expiry Date:
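If 'show switch edition' reports Essential, the edition can be changed from config mode so the installed licenses are actually consumed. A minimal sketch, assuming the Advanced licenses are already installed (the command comes from the 2.1 Essential/Advanced licensing model):

```
n1kv# configure terminal
n1kv(config)# svs switch edition advanced    ! move to Advanced; checks out the installed licenses
n1kv(config)# exit
n1kv# show switch edition                    ! verify the edition and the licenses-in-use count
n1kv# copy running-config startup-config
```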

Tags: Cisco DataCenter

Similar Questions

  • Question about Nexus 1000v licensing and how long an unlicensed ESXi host stays up

    Hello

    I have a license for 12 CPUs, which covers 3 ESXi hosts with 4 processors each,

    so if I add another ESXi host, it will be supported for 60 days.

    I'm pretty sure, but wanted confirmation.

    Thank you

    What version are you using? With 2.1 they moved to an Essential (free) and Advanced model; depending on the features you need, you may be able to upgrade and continue without additional licenses.

    You can certainly also convert your current VSG licenses - contact your Cisco engineer.

    http://blogs.Cisco.com/tag/nexus-1000V/

  • Nexus 1000v VSM compatibility with older VEM versions?

    Hello everyone.

    I would like to upgrade our Nexus 1000v VSM from 4.2(1)SV1(5.1) to 4.2(1)SV2(2.1a) because we are moving from ESXi 5.0 Update 3 to ESXi 5.5 in the near future. I was not able to find a compatibility list for the new version when it comes to VEM versions, so I was wondering if the new VSM supports the older VEMs we are running; that way I don't have to upgrade everything at once. I know that it supports both of our ESXi versions.

    Best regards

    Pete

    As you found in the documentation, moving from 1.5 to the latest code is supported from a VSM perspective.  What is not documented is the detail on the VEM side.  In general, the VSM is backward compatible with older VEMs (to a degree, and that degree is not published).  Although it is not documented (AFAIK), the verbal understanding is that the VEM can be a version or two behind, but you should try to minimize the time that you run in this configuration.

    If you plan to run mixed VEM versions while your hosts get upgraded (totally fine - that's how I do mine), it is better to move to the newer VEM version as you upgrade the hypervisor.  Since you are going from ESXi 5.0 to 5.5, you can create an ISO that contains the Cisco VIBs, your preferred async drivers (if any), and the ESXi 5.5 image all bundled together, so the upgrade for a given host happens in one shot.  You probably already have this technique down cold, but the links generated by the Cisco tool below will show you how to proceed.  It also gives some handy URLs to share with each person performing functions in this upgrade.  Here is the link:

    Nexus 1000V and ESX upgrade utility

    PS - one more thing: take offline clones of your VSMs.  Even though they are fairly easy to recover, having a true clean clone will save some secret sauce that you might otherwise lose in a failure scenario.  Just power off one VSM, then right-click and clone it.  Power that VSM back on, fail over the HA pair, then take the other one down and get a clone of it.  So as a safety measure for this upgrade, get clones of your current 1.5 VSMs now, then some time after your upgrade take new offline clones of the new version.

  • Remove a "system VLAN" from a Nexus 1000V port-profile

    We have a Dell M1000e blade chassis with a number of M605 blade servers running ESXi 5.0, using the Nexus 1000V for networking.  We use 10G Ethernet on fabrics B and C, for a total of four 10G NICs per server.  We do not use the 1G NICs on fabric A.  We currently use one NIC from each of fabrics B and C for virtual machine traffic and the other NIC in each fabric for management/vMotion/iSCSI VM traffic.  We currently use EqualLogic PS6010 iSCSI arrays and have two port groups configured with iSCSI bindings (one to physical NIC vmnic3 and one to physical NIC vmnic5).

    We have added an EMC VNX 5300 unified array at our facility and have configured three extra VLANs on our network - two for iSCSI and one for NFS.  We added vEthernet port-profiles for the three new VLANs, but when we added the new vmk# ports on some of the ESXi servers, they couldn't ping anything.   We opened a TAC case with Cisco and it was determined that only a single iSCSI-bound port group can be bound to a given physical uplink at a time.

    We decided that we would temporarily add the new VLANs to the list of allowed VLANs on the physical switch trunk ports currently used only for VM traffic. We now need to delete the new VLANs from the current Ethernet port-profile, but we are facing a problem.

    The current Nexus 1000V port-profile that must be changed is:

    port-profile type ethernet DenverMgmtSanUplinks

      vmware port-group

      switchport mode trunk

      switchport trunk allowed vlan 2306-2308, 2311-2315

      channel-group auto mode passive

      no shutdown

      system vlan 2306-2308, 2311-2315

      description MGMT SAN UPLINKS

      state enabled

    We need to remove VLANs 2313-2315 from the "system vlan" list in order to remove them from the "switchport trunk allowed vlan" list.

    However, when we try to do so, we get an error that the port-profile is currently in use:

    vsm21a# conf t

    Enter configuration commands, one per line.  End with CNTL/Z.

    vsm21a(config)# port-profile type ethernet DenverMgmtSanUplinks

    vsm21a(config-port-prof)# system vlan 2306-2308, 2311-2312

    ERROR: Cannot delete system VLAN, port-profile in use by interface Po2

    We have 6 ESXi servers connected to this Nexus 1000V.  Originally they were VEM 3-8, but apparently when we did a firmware update they re-registered as VEM 9-14, and the old 6 VEMs and their associated port-channels are orphaned.

    For example, if we look at port-channel 2 in more detail, we see it is related to orphaned VEM 3 and it has no ports associated with it:

    vsm21a(config-port-prof)# show run int port-channel 2

    ! Command: show running-config interface port-channel2

    ! Time: Thu Apr 26 18:59:06 2013

    version 4.2(1)SV2(1.1)

    interface port-channel2

      inherit port-profile DenverMgmtSanUplinks

      vem 3

    vsm21a(config-port-prof)# show int port-channel 2

    port-channel2 is down (No operational members)

      Hardware: Port-Channel, address: 0000.0000.0000 (bia 0000.0000.0000)

      MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,

      reliability 255/255, txload 1/255, rxload 1/255

      Encapsulation ARPA

      Port mode is trunk

      auto-duplex, 10 Gb/s

      Beacon is turned off

      Input flow-control is off, output flow-control is off

      Switchport monitor is off

      Members in this channel: Eth3/4, Eth3/6

      Last clearing of "show interface" counters never

      102 interface resets

    We can probably remove port-channel 2, but we assume the error about the port-profile being in use will cascade to the other port-channels.  We can delete the other orphaned port-channels 4, 6, 8, 10 and 12, as they are associated with the orphaned VEMs, but we expect we will then also get errors on port-channels 13, 15, 17, 19, 21 and 23, which are associated with the active VEMs.

    We are looking to see if there is an easy way to fix this on the VSM, or if we need to break one of the physical uplinks on each server, connect it to a vSS or vDS, and migrate all the vmkernel ports off the Nexus 1000V so it can clean up the VLANs.

    You will not be able to remove the system VLANs until nothing is using that port-profile. We are very protective of any VLAN that is designated on the "system vlan" command line.

    You must clean up the old port-channels and the old VEMs. You can safely do "no interface port-channel X" and "no vem X" on devices which are no longer in use.
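    A minimal sketch of that cleanup, using hypothetical slot and channel numbers taken from the orphaned ranges above:

    ```
    vsm21a# configure terminal
    vsm21a(config)# no interface port-channel 2    ! delete an orphaned port-channel
    vsm21a(config)# no vem 3                       ! remove the orphaned VEM record from the VSM
    ```

    Repeat for each orphaned port-channel (4, 6, 8, 10, 12) and VEM (3-8), then retry the system vlan change.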

    What you can do is create a new uplink port-profile with the settings you want, then move the interfaces into the new port-profile. It is generally easier to create a new one than to attempt to clean up an old port-profile that has system VLANs.

    I would do the following steps:

    Create a new port-profile with the settings you want

    Put the host into maintenance mode if possible

    Pull one NIC out of the old N1Kv eth port-profile

    Add that NIC to the new N1Kv eth port-profile

    Pull the second NIC out of the old eth port-profile

    Add the second NIC to the new port-profile

    You will get some duplicated packets and error messages, but it should work.

    The other option is to remove the host from the N1Kv and add it back using the new eth port-profile.

    Another option is to leave it as is. Unless it really bothers you, no VMs will be able to use those VLANs unless you create a veth port-profile on one of them.

    Louis

  • What does the Nexus 1000v version number mean?

    Can anybody break down the long Nexus 1000v version number, for example 5.2(1)SV3(1.15)?

    And what does SV mean in the version number?

    Thank you

    SV is the abbreviation of "Switched VMware".

    See below for a detailed explanation:

    http://www.Cisco.com/c/en/us/about/Security-Center/iOS-NX-OS-reference-g...

    The Cisco NX-OS software numbering scheme

    Cisco NX-OS software is a data-center-class operating system that provides high availability through a modular design. Cisco NX-OS software is based on Cisco MDS 9000 SAN-OS software, and it supports Cisco Nexus series switches and Cisco MDS 9000 series multilayer switches. Cisco NX-OS software contains a kickstart image and a system image; both images contain a major version identifier, a minor version identifier and a maintenance release identifier, and they may also contain a rebuild identifier, which may also be referred to as a support patch. (See Figure 6.)

    Cisco NX-OS software for the Nexus 7000 series and MDS 9000 series switches uses the numbering scheme that is illustrated in Figure 6.

    Figure 6. Cisco NX-OS software numbering for the Cisco Nexus 7000 and MDS 9000 series switches

    For the other members of the family, Cisco NX-OS software uses a combination of platform-independent and platform-dependent numbering, as shown in Figure 6a.

    Figure 6a. Cisco NX-OS software numbering for the Nexus 4000 and 5000 series switches and Nexus 1000 virtual switches

    The platform indicator is N for Nexus 5000 series switches, E for Nexus 4000 series switches, and S for Nexus 1000 series switches. In addition, the Nexus 1000 virtual switch uses a two-letter platform designation in which the second letter indicates the hypervisor vendor that the virtual switch is compatible with, for example V for VMware. Features and bug fixes present in the platform-independent code are also present in the platform-dependent release; in Figure 6a above, fixes in Cisco NX-OS software release 4.0(1a) are present in release 4.0(1a)N1(1a).
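    Applying this scheme to the example in the question, 5.2(1)SV3(1.15) breaks down as:

    ```
    5.2(1)SV3(1.15)
      5.2(1)   - platform-independent release: major 5, minor 2, maintenance 1
      SV       - platform designator: S = Nexus 1000 series, V = VMware hypervisor
      3(1.15)  - platform-dependent release: major 3, minor/maintenance 1.15
    ```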

  • Nexus 1000V loop prevention

    Hello

    I wonder if there is a mechanism I can use to protect the network against an L2 loop created on the vServer side in a VMware environment with the Nexus 1000V.

    I know the Nexus 1000V can prevent loops on the external links, but there is no information on whether there are features that can prevent a loop caused by bridging configured inside the OS of a VMware virtual server.

    Thank you in advance for an answer.

    Concerning

    Lukas

    Hi Lukas.

    To avoid loops, the N1KV does not pass traffic between physical NICs, and it also silently drops traffic between vNICs that are bridged by the operating system.

    http://www.Cisco.com/en/us/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html#wp9000156

    No explicit configuration is needed on the N1KV.

    Padma

  • Cisco Nexus 1000V Series Virtual Switch Module placement in the Cisco Unified Computing System

    Hi all
    I read a Cisco article entitled "Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers" http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

    A lot of excellent information, but the section that intrigues me has to do with the placement of the VSM module in the UCS. The article lists 4 options in order of preference, but does not provide details or the reasoning underlying the recommendations. The options are the following:

    ============================================================================================================================================================
    Option 1: VSM external to the Cisco Unified Computing System on the Cisco Nexus 1010

    In this scenario, management of the virtual environment is accomplished in a method identical to existing non-virtualized environments. With multiple VSM instances on the Nexus 1010, multiple vCenter data centers can be supported.
    ============================================================================================================================================================

    Option 2: VSM outside the Cisco Unified Computing System on the Cisco Nexus 1000V Series VEM

    This model allows centralized management of the virtual infrastructure, and has proved to be very stable...
    ============================================================================================================================================================

    Option 3: VSM Outside the Cisco Unified Computing System on the VMware vSwitch

    This model allows isolation of the managed devices, and it migrates easily to the virtual services appliance model of the Cisco Nexus 1010. A possible concern here is the management and operational model of the network links between the VSM and VEM devices.
    ============================================================================================================================================================

    Option 4: VSM Inside the Cisco Unified Computing System on the VMware vSwitch

    This model has also been stable in test deployments. A possible concern here is the management and operational model of the network links between the VSM and VEM devices, and that the switching infrastructure is duplicated inside your Cisco Unified Computing System.
    ============================================================================================================================================================

    As a beginner with both the Nexus 1000V and UCS, I hope someone can help me understand the configuration of these options and, equally important, provide a more detailed explanation of each option and the reasoning behind the preferences (pros and cons).

    Thank you
    Pradeep

    No, they are different products. The vASA will be a virtual version of our ASA appliance.

    The ASA is a full-featured, recommended firewall.

  • Nexus 1000v, UCS, and Microsoft Network Load Balancing

    Hi all

    I have a client that is implementing a new Exchange 2010 environment. They have a requirement to configure load balancing for the Client Access servers. The environment consists of VMware vSphere running on top of Cisco UCS blades with the Nexus 1000v dvSwitch.

    Everything I've read so far indicates that I must do the following:

    1. Configure MS load balancing in Multicast mode (selecting the IGMP protocol option).

    2. Create a static ARP entry for the cluster virtual address on the router for the server subnet.

    3. (maybe) Configure a static MAC table entry on the router for the server subnet.

    4. (maybe) Disable IGMP snooping on the appropriate VLAN on the Nexus 1000v.

    My questions are:

    1. Is anyone successfully running a similar configuration?

    2. Are there steps missing from the list above, or steps I shouldn't do?

    3. If I disable IGMP snooping on the Nexus 1000v, should I also disable it on the UCS fabric interconnects and the router?

    Thanks a lot for your time,

    Aaron

    Aaron,

    The steps you have above are correct; you need steps 1-4 for this to operate correctly.  Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.

    To answer your questions

    (1) I have seen multiple clients run this configuration.

    (2) The steps you have are correct.

    (3) You can't toggle IGMP snooping on UCS.  It is enabled by default and is not a configurable option.  There is no need to change anything within UCS regarding MS NLB with the above procedure.  FYI - the ability to disable/enable IGMP snooping on UCS is scheduled for an upcoming 2.1 release.
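    Steps 2 and 3 from the list above can be sketched on an IOS router as follows; the VIP, multicast MAC, VLAN and interface here are hypothetical placeholders:

    ```
    ! Step 2: static ARP entry mapping the NLB cluster VIP to its multicast MAC
    arp 10.10.10.50 0100.5e7f.0a32 arpa
    ! Step 3: static MAC entry so frames for that MAC are forwarded to the CAS-facing port
    mac address-table static 0100.5e7f.0a32 vlan 10 interface GigabitEthernet1/0/1
    ```

    Step 4, if it turns out to be needed, would be "no ip igmp snooping" on the appropriate VLAN of the Nexus 1000v.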


    This is the correct method until such time as we have the option of configuring static multicast MAC entries on
    the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.

    This will give more "push" to our development team to prioritize this request.

    Hopefully some other customers can share their experience.

    Regards,

    Robert

  • How to change the Nexus 1010 and Nexus 1000v IP addresses

    Hi Experts,

    We run two VSMs and a NAM on the Nexus 1010. The Nexus 1010 version is 4.2.1.SP1.4, and the Nexus 1000v version is 4.0.4.SV1.3c. Now we need to change the management IP addresses to new ones. Where can I find an SOP or config template? And is there anything I need to remember?

    If it is only the mgmt0 IP address you are changing, you can simply enter the new address under the mgmt0 interface. It automatically syncs with the VC.

    I guess you are also trying to change the IP address of the VC and the management VLAN. One way to do this is:

    - From the Nexus 1000v, disconnect the connection to the VC (svs connection -> no connect)

    - Change the IP address of the VC under the connection (svs connection -> remote ip address)

    - Change the mgmt0 address of the Nexus 1000v

    - Change the mgmt VLAN on the 1010

    - Change the mgmt address of the 1010

    - Reconnect the Nexus 1000v to the VC (svs connection -> connect)

    Remember also to change the VLAN configuration on the upstream switch, plus the connection from the VC side as well.
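    On the VSM, those steps look roughly like the following sketch; the connection name and addresses are hypothetical:

    ```
    n1kv# configure terminal
    n1kv(config)# svs connection vcenter
    n1kv(config-svs-conn)# no connect                    ! disconnect from the VC
    n1kv(config-svs-conn)# remote ip address 192.0.2.50  ! the VC's new address
    n1kv(config-svs-conn)# exit
    n1kv(config)# interface mgmt 0
    n1kv(config-if)# ip address 192.0.2.10/24            ! new VSM mgmt address
    n1kv(config-if)# exit
    n1kv(config)# svs connection vcenter
    n1kv(config-svs-conn)# connect                       ! reconnect to the VC
    ```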

    Thank you

    Shankar

  • Replacing a failed VEM host on the Nexus 1000v

    I was curious how others accomplish the replacement of a failed ESXi host that is a VEM on the Nexus 1000v. I did this procedure once and it seemed long-winded. The goal is to make the swap transparent to the Nexus 1000v (same VEM #, just a different VMware UUID):

    - Migrate the host back to standard vSwitches

    - Remove the host from the distributed switch via vCenter (right-click the host and remove)

    - Physically swap the blade in the chassis

    - Find the UUID of the new host (via esxcfg-info)

    - Pre-provision the new VEM on the 1000v with this UUID

    - Add the replacement host to the 1000v, where it assumes the VEM number that was just provisioned, then migrate

    Ben,

    Note - 1000v-related issues are better posted in the "Server Networking" forum.  This forum is specific to the UCS.

    https://supportforums.Cisco.com/community/NetPro/data-center/server-network?view=discussions

    The procedure that you are using is the right one.  Another method is to gracefully remove the host from the 1000v, then issue a "no vem X", which removes the VEM record from the VSM.  Swap your hosts, then add the new one back to the 1000v.  Is there a reason you need the same UUID?
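    The pre-provisioning step from the list above can be sketched on the VSM like this; the slot number and UUID are hypothetical:

    ```
    n1kv# configure terminal
    n1kv(config)# vem 4                    ! reuse the failed host's VEM slot number
    n1kv(config-vem-slot)# host vmware id 422f1a2b-3c4d-5e6f-7a8b-9c0d1e2f3a4b
    ```

    When the replacement host (with that UUID) is added to the DVS, it comes up as VEM 4.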

    Kind regards

    Robert

  • VXLAN on UCS: IGMP with Catalyst 3750, Nexus 5548, Nexus 1000V

    Hello team,

    My lab consists of a Catalyst 3750 with SVIs acting as the router, Nexus 5548s in a vPC setup, UCS in end-host mode, and the Nexus 1000V with the segmentation feature (VXLAN) enabled.

    I have two different VLANs for VXLAN (140, 141) to demonstrate connectivity across L3.

    Hosts with a VMkernel on VLAN 140 join the multicast group fine.

    Hosts with a VMkernel on VLAN 141 do not join the multicast group.  As a result, VMs on those hosts cannot ping VMs on hosts on VLAN 140, and they can't even ping each other.

    I turned on 'debug ip igmp' on the L3 switch, and the output indicates a timeout while it is waiting for a report from VLAN 141:

    15 Oct 08:57:34.201: IGMP(0): Send v2 General Query on Vlan140

    15 Oct 08:57:34.201: IGMP(0): Set report delay time to 3.6 seconds for 224.0.1.40 on Vlan140

    15 Oct 08:57:36.886: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1

    15 Oct 08:57:36.886: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources

    15 Oct 08:57:36.886: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1

    15 Oct 08:57:36.886: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0

    15 Oct 08:57:38.270: IGMP(0): Send v2 Report for 224.0.1.40 on Vlan140

    15 Oct 08:57:38.270: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.1 for 224.0.1.40

    15 Oct 08:57:38.270: IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.66.1 for 0 sources

    15 Oct 08:57:38.270: IGMP(0): Updating EXCLUDE group timer for 224.0.1.40

    15 Oct 08:57:38.270: IGMP(0): MRT Add/Update Vlan140 for (*,224.0.1.40) by 0

    15 Oct 08:57:51.464: IGMP(0): Send v2 General Query on Vlan141   <----- it just hangs here until timeout and goes back to Vlan140

    15 Oct 08:58:35.107: IGMP(0): Send v2 General Query on Vlan140

    15 Oct 08:58:35.107: IGMP(0): Set report delay time to 0.3 seconds for 224.0.1.40 on Vlan140

    15 Oct 08:58:35.686: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1

    15 Oct 08:58:35.686: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources

    15 Oct 08:58:35.686: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1

    15 Oct 08:58:35.686: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0

    If I do a 'show ip igmp interface', I see that there are no joins for VLAN 141:

    Vlan140 is up, line protocol is up

    Internet address is 172.16.66.1/26

    IGMP is enabled on the interface

    Current version of IGMP host is 2

    Current version of IGMP router is 2

    IGMP query interval is 60 seconds

    Configured IGMP query interval is 60 seconds

    IGMP querier timeout is 120 seconds

    Configured IGMP querier timeout is 120 seconds

    IGMP max query response time is 10 seconds

    Last member query count is 2

    Last member query response interval is 1000 ms

    Access group incoming IGMP is not defined

    IGMP activity: 2 joins, 0 leaves

    Multicast routing is enabled on the interface

    Threshold multicast TTL is 0

    Multicast designated router (DR) is 172.16.66.1 (this system)

    IGMP querying router is 172.16.66.1 (this system)

    Multicast groups joined by this system (number of users):

    224.0.1.40 (1)

    Vlan141 is up, line protocol is up

    Internet address is 172.16.66.65/26

    IGMP is enabled on the interface

    Current version of IGMP host is 2

    Current version of IGMP router is 2

    IGMP query interval is 60 seconds

    Configured IGMP query interval is 60 seconds

    IGMP querier timeout is 120 seconds

    Configured IGMP querier timeout is 120 seconds

    IGMP max query response time is 10 seconds

    Last member query count is 2

    Last member query response interval is 1000 ms

    Access group incoming IGMP is not defined

    IGMP activity: 0 joins, 0 leaves

    Multicast routing is enabled on the interface

    Threshold multicast TTL is 0

    Multicast designated router (DR) is 172.16.66.65 (this system)

    IGMP querying router is 172.16.66.65 (this system)

    No multicast groups joined by this system

    Is there a way to check why the hosts on VLAN 141 are not joining successfully?  The port-profile configurations on the 1000V for the VLAN 140 and VLAN 141 uplinks and vmkernels are identical, except for the different VLAN numbers.

    Thank you

    Trevor

    Hi Trevor,

    One quick thing to check would be the IGMP config for both VLANs.

    Where did you configure the querier for VLANs 140 and 141?

    Does the VXLAN transport cross any routers? If so, you would need multicast routing enabled.
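    As a sketch of what that check implies on the Catalyst 3750 (addresses taken from the output above; whether PIM is enabled on Vlan141 is exactly the thing to verify):

    ```
    ip multicast-routing distributed     ! global multicast routing on the 3750
    interface Vlan141
     ip pim sparse-dense-mode            ! PIM on the SVI also makes it the IGMP querier
    ```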

    Thank you!

    . / Afonso

  • Port-channel configuration on the Nexus 1000V

    Hello

    I'm new to the Nexus 1000V. The configuration is as follows: a UCS chassis with 4 full-width blades connected to two 6248UP Fabric Interconnects.

    Each FI uplinks to an N5K (no vPC).

    Is there any configuration template for the Nexus 1000v? How do I configure the port-channel?

    Thank you.

    Hello

    We recommend using mac pinning ("channel-group auto mode on mac-pinning") when the N1KV is used on UCS blades.

    The following doc provides a good overview of best practices:

    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers

    http://www.Cisco.com/en/us/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
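    A minimal uplink port-profile sketch using mac pinning; the profile name and VLAN range are hypothetical:

    ```
    port-profile type ethernet UCS-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode on mac-pinning   ! pin veths to uplinks; no upstream port-channel needed
      no shutdown
      state enabled
    ```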

    HTH

    Padma

  • VM-FEX and Nexus 1000v relationship

    Hello

    I'm new to the world of virtualization, and I need to know the relationship between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.

    Concerning

    Ahmed,

    The Nexus 1000v is a distributed switch that lets you manage your VEMs; think of the relationship as supervisor to linecard.

    VM-FEX gives you the ability to bypass the vSwitch (the VEM) running on each ESXi host.

    With VM-FEX, you see the virtual machines as if they were directly connected to the parent switch (an N7K/5K for example), making the parent handle the switching and management (because there is no longer a vSwitch in the middle).

    This is a big topic that is difficult to summarize in a few lines. Did you read something in particular? Any questions or doubts we can help clarify?

    -Kenny

  • Cisco AAA login authentication against RADIUS (MS IAS) on the Nexus 1000v

    Hey

    I am adding my switch to my RADIUS server for login authentication.

    On an IOS switch, I need to do the following:

    Change the attribute number to "1".
    Set the attribute format to "String".
    Type "shell:priv-lvl=15" in the attribute value field.

    But what should I put in the "shell" value so it will work on a Nexus 1000v?

    shell:roles="network-admin"

    (or replace network-admin with whatever role you want to assign to the user)
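    For completeness, the switch-side AAA configuration on the N1KV might look like this sketch; the server address, key and group name are hypothetical:

    ```
    radius-server host 192.0.2.25 key MySharedSecret
    aaa group server radius IAS-SERVERS
      server 192.0.2.25
    aaa authentication login default group IAS-SERVERS
    ```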

  • Nexus 1000v question

    Hi all

    How many Nexus 1000v (DVS) instances can a vCenter Server version 5.1 connect to?

    Thank you.

    Per the release notes:

    Support for a single VMware data center

    The Cisco Nexus 1000V can be connected to a single VMware vCenter Server data center object. Note that this virtual data center can span several physical data centers.

    http://www.Cisco.com/c/en/us/TD/docs/switches/Datacenter/nexus1000/SW/4 _...

    See also Table 1, Configuration Limits for the Cisco Nexus 1000V.
