Binding VMK to vDS portgroup

Hi guys,

I'm trying to add iSCSI storage to ESXi. This is on a vCenter 6 appliance with ESXi hosts that have been upgraded from ESXi 5.5 to 6. I created two iSCSI portgroups on a vDS and migrated the vmks from the vSS to the vDS (they were on a different set of physical NICs on the vSS). I configured the 1:1 relationship between each vmk and physical NIC, but I don't see the vmks when I try to bind them to the storage adapter. What am I missing here? I've done this many times before, so I'm sure I did it correctly. I also checked with esxcli whether anything was bound to the vmhba - nothing. When I try to bind to the vmhba, interestingly I see the physical NIC that was previously attached to the vmk. I think there is a leftover association between the vmk and the physical NIC. If so, how do I remove it?

Update:

If I move the vmk to a vSS or vDS portgroup that has the old physical NIC, I am able to bind the vmk to the vmhba. So it does seem like there is a lingering association between the vmk and the physical NIC. Does anyone know how to remove it?

SOLVED!

Just a quick note that I managed to solve this problem - it may be useful to someone. There was an error in the NPAR configuration in the host's BIOS. Even though I had changed the NPAR settings to enable iSCSI offload on the NICs, the change had never been applied. After setting up the NPAR configuration again, I was able to bind the iSCSI vmks.
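
For reference, the vmk-to-vmhba port bindings mentioned above can also be inspected and cleaned up from PowerCLI via Get-EsxCli. A minimal sketch, assuming a software iSCSI adapter named vmhba33, a vmkernel port vmk1, and PowerCLI 6.3 or later for the -V2 switch (all placeholders, adjust to your environment):

# Assumes an existing Connect-VIServer session
$esxcli = Get-EsxCli -VMHost 'esx01.lab.local' -V2

# Show which vmkernel ports are currently bound to the iSCSI adapter
$esxcli.iscsi.networkportal.list.Invoke()

# Remove a stale binding, then re-add it once the vmk sits on the correct vDS portgroup
$esxcli.iscsi.networkportal.remove.Invoke(@{adapter = 'vmhba33'; nic = 'vmk1'})
$esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba33'; nic = 'vmk1'})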

Tags: VMware

Similar Questions

  • iSCSI vmk migration from vSS to vDS

    ESXi 5.0.0 469512

    vCenter 5.0.0 455964

    I am migrating from standard switches to distributed switches in my lab and would like to know how to migrate the iSCSI vmk from the vSS to the vDS.  I have read through a few articles and a few VMware docs but have not been able to find any useful info.  So far I have moved my management, VM, and vMotion portgroups/vmks without issue.  When I try to migrate the iSCSI vmk to the vDS portgroup via the Add Host wizard, I get the following message saying that the vmk is still bound to the iSCSI adapter.

    "This can only be done while the vmkernel network adapter vmk2 is used by the iSCSI adapter.  Remove the VMKernel network adapter
    adapter for iSCSI to carry out this operation. "

    Do I really need to unbind the vmk from the iSCSI adapter first?  Do I do this manually somehow?  If so, what are the general steps for getting the iSCSI vmk migrated successfully?

    Another question...  I read that it is not a good idea to have vCenter virtualized and using a vDS, and that VMware actually does not support it.  Even if it's a waste of NICs, should vCenter have its own vSS?

    Thank you!

    I had the same error/scenario and that's what I did to make it work.

    (1) Put the host in maintenance mode (optional, but I wasn't sure what the result would be).

    (2) Went into the iSCSI vmhba under Storage Adapters --> Properties --> Network Configuration tab and removed the two vmks I had created on the vSS for iSCSI multipathing.  Once you remove the vmks, your host will still see the iSCSI datastores, but you just won't have active multipathing (I went from 2 devices / 4 paths to 2 devices / 2 paths).

    (3) Went to Home --> Inventory --> Networking, where I had all my dvPortgroups pre-created. Right-clicked the dvSwitch and clicked Manage Hosts.

    (4) Checked the checkbox for the host I was migrating (the one in maintenance mode) and performed the migration as documented in the PDF mentioned above.

    (5) Went back into the iSCSI vmhba under Storage Adapters and added the two new vmks (which should keep the same vmk numbers as the ones that came from the vSS).

    (6) Repeated for the other hosts, one at a time.

    Sorry, I had to type this out quickly.  Let me know if anything was unclear. A scripted version of steps (3)-(5) is sketched below.
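
    The sketch is rough only: the host, switch, portgroup, vmk, vmnic, and adapter names are placeholders (not from the post), and the PowerCLI VDS cmdlets plus Get-EsxCli -V2 are assumed to be available.

    $vmHost = Get-VMHost -Name 'esx01.lab.local'
    $vds    = Get-VDSwitch -Name 'dvSwitch-iSCSI'
    $pg     = Get-VDPortgroup -VDSwitch $vds -Name 'dvPG-iSCSI-A'
    $vmk    = Get-VMHostNetworkAdapter -VMHost $vmHost -Name 'vmk2'
    $pnic   = Get-VMHostNetworkAdapter -VMHost $vmHost -Physical -Name 'vmnic2'

    # Move the physical uplink and the vmkernel port to the vDS portgroup in one operation
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic `
        -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg -Confirm:$false

    # Re-add the port binding on the software iSCSI adapter (step 5)
    $esxcli = Get-EsxCli -VMHost $vmHost -V2
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba33'; nic = 'vmk2'})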

  • How to change IP address for vmk ports on vDS?

    Hello

    I can't find where to change the IP address for vmk ports on a vDS; could anyone shed some light?

    vmware-vds.png

    Thank you very much!

    This is configured per host. From the vSphere Client:

    1. Choose Inventory > Hosts and Clusters
    2. Click on the host whose IP address you want to change
    3. Go to the Configuration tab
    4. Select Networking from the menu on the left
    5. Select vSphere Distributed Switch
    6. Click 'Manage Virtual Adapters' above the vDS you want to change

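    If you prefer to script the change, a minimal PowerCLI sketch (the host name, vmk name, and addresses below are placeholders, not values from the original question):

    # Change the IP of a vmkernel port - works the same whether the vmk lives on a vSS or a vDS
    Get-VMHostNetworkAdapter -VMHost 'esx01.lab.local' -Name 'vmk1' |
        Set-VMHostNetworkAdapter -IP '10.10.10.51' -SubnetMask '255.255.255.0' -Confirm:$false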

  • Host on vDS loses packets, works fine on vSS

    Hello

    I've implemented a vDS on my vCenter and now I want to add a host to it. So I moved the vmnic and the Management Network vmk from the vSS to the equivalent portgroup on the vDS. After that, pinging the host looks like this:

    20:23:58 (0) ~ # ping 10.122.4.9

    PING 10.122.4.9 (10.122.4.9): 56 data bytes
    64 bytes from 10.122.4.9: icmp_seq=1 ttl=64 time=0.256 ms
    64 bytes from 10.122.4.9: icmp_seq=3 ttl=64 time=0.372 ms
    64 bytes from 10.122.4.9: icmp_seq=5 ttl=64 time=0.240 ms
    64 bytes from 10.122.4.9: icmp_seq=7 ttl=64 time=0.192 ms
    64 bytes from 10.122.4.9: icmp_seq=9 ttl=64 time=0.331 ms
    64 bytes from 10.122.4.9: icmp_seq=11 ttl=64 time=9.055 ms
    64 bytes from 10.122.4.9: icmp_seq=13 ttl=64 time=0.193 ms

    --- 10.122.4.9 ping statistics ---
    14 packets transmitted, 7 packets received, 50.0% packet loss
    round-trip min/avg/max/std-dev = 0.192/1.519/9.055/3.077 ms

    So I get ICMP replies for only 50% of the packets. If I restore networking on the host (vmk back on the vSS), connectivity is fine with no packet loss.

    What could cause such behavior?

    Kind regards

    p.

    The problem matches this KB - VMware KB: ESXi 5.5 uplink port issues with the Broadcom bnx2 driver when connected to a vSphere Distributed Switch.
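
    To check whether a host is affected, list the uplink drivers; a short sketch reusing the Get-EsxCli approach shown in a later answer (the host name is a placeholder):

    # bnx2/bnx2x uplinks on ESXi 5.5 match the KB article above
    $esxcli = Get-EsxCli -VMHost 'esx01.lab.local'
    $esxcli.network.nic.list() | Select Name, Driver, Description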

  • Can I use vDS with LACP to reduce iSCSI connections to my SAN group?

    I'm trying to reduce the number of iSCSI connections to my EqualLogic storage group. I currently use a standard vSwitch with two vmk ports per server for the SAN. Can I use a single vmk with vDS/LACP to cut my per-server iSCSI connections in half and still maintain bandwidth/redundancy?

    I'm fairly confident it will work, but hoped for validation. Please correct me if I'm totally off base, or if it would be unsupported.

    My volumes span several physical devices (20 paths across 7 volumes per iSCSI adapter). Can I assume that with load balancing on a LAG using a source/destination hash, I would get at least some balancing between adapters?

    If you have multiple L2-L4 target termination points, yes - for example unique IP addresses, MAC addresses, or ports. Paths, volumes, and devices are not relevant to LACP.

    If I don't choose a LAG, I can't visualize how it would reduce my iSCSI connections to my storage pool. I'm setting targets on my Broadcom cards, and the vmk connects through those cards. Is the logic smart enough to only bring up the active paths on one adapter or another in a LAG? I know I'm not making much sense any more; I might have to play with a LAG on an isolated host for a bit and see exactly how it behaves.

    The adapters are no longer exposed individually. The LAG is a logical interface; traffic can be sent or received through any physical adapter in the LAG.

    Note: I would not use a LAG for iSCSI traffic on a vSphere host; see 'Seriously, Stop Using Port Channels for vSphere Storage Traffic' on Wahl Network.

  • iSCSI and distributed switches

    This could be a long post, but I don't have much time, so I'll start on it now and add to it later.

    I am trying to set up two distributed vSwitches with vmkernel ports on each, and bind them to two physical NICs on their own subnets, connected to an iSCSI device that has two interfaces.

    I'm doing all of this in my nested ESXi lab under Fusion, using OpenFiler as the iSCSI device.

    I currently have two hosts in a cluster with a vCenter Server and OpenFiler iSCSI shared storage, which works perfectly.  All management comms are on the management network subnet, including iSCSI.

    I then created two more Fusion networks, vmnet4 and vmnet5, and added two network interfaces on each host and on the OpenFiler.

    Then I created two dvSwitches, added vmkernel ports to each, and connected the vmnics to the iSCSI networks.

    The hosts are Auto Deploy hosts, by the way, so I then updated my host profile from my reference host and applied it to the other host (I have a couple more Auto Deploy hosts as well that I use for practicing with Auto Deploy, and everything works fine), rebooted everything, and the dvSwitches are up and pings all work.

    Now comes the problem.  I cannot add these vmk/portgroup/vmnic combinations to the vmhba33 software iSCSI initiator...  I also can't enable iSCSI on the vmkernels in the vDS, as it is greyed out.  When I try to add them to the vmhba33 adapter I get the message:

    The selected physical network adapter is not associated with a VMkernel adapter with compliant teaming and failover policy. The VMkernel network adapter must have exactly one active uplink and no standby uplinks to be eligible for binding to the iSCSI HBA.

    I have attached a picture of this.  When I look at the vmkernel ports, iSCSI is greyed out.

    Basically, I'm trying this to get better acquainted with dvSwitches, and I figured that deploying a dvs and assigning it to many hosts would be the fastest way to set up an iSCSI network for many Auto Deployed ESXi hosts.

    The curious bit is that the vmkernel adapter view does not list the dvs portgroups on vmnic2 and vmnic3, which are my iSCSI networks.  The vmk ports are there, in a portgroup with the expected name.

    I've been through every properties dialog and button I can find, and disturbingly a lot of things are greyed out.

    I'm stopping for tonight and hope that someone can tell me, first, whether what I'm doing is actually possible and I'm not doing something stupid... and second, the correct path forward, as I seem to have gone astray.

    Thank you

    Bill

    You can retrace your steps using this procedure to see if you have missed any:

    http://thefoglite.com/2012/06/14/configure-software-iSCSI-port-binding-on-a-VDS-with-dvPorts/
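
    If the compliance error above is the blocker, each iSCSI dvPortgroup needs exactly one active uplink, with every other uplink set to unused (not standby). A hedged PowerCLI sketch, assuming dvPortgroups named dvPG-iSCSI-A/B and uplinks named dvUplink1/dvUplink2 (names are assumptions, not from the post):

    # One active uplink per iSCSI portgroup, the other uplink Unused, so the vmk becomes compliant
    Get-VDPortgroup -Name 'dvPG-iSCSI-A' | Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'dvUplink1' -UnusedUplinkPort 'dvUplink2'
    Get-VDPortgroup -Name 'dvPG-iSCSI-B' | Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'dvUplink2' -UnusedUplinkPort 'dvUplink1'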

  • Useful info from esxcfg-* via PowerCLI

    I often get asked by the network team for NIC / VLAN / MAC info. I found myself running the 3 commands below to explain things to them. Can we get this in PowerCLI and exported to XLS?

    Per VMHost: the NICs, their MACs, which vSwitch each NIC belongs to, and the vmkernel ports and their IPs...

    ~ # esxcfg-vmknic -l

    ~ # esxcfg-nics -l

    ~ # esxcfg-vswitch -l

    Thank you

    You can do all of this with the Get-EsxCli cmdlet.

    Like this

    $esxcli = Get-EsxCli -VMHost MyEsxi

    # esxcfg-vmknic -l
    foreach($vmk in $esxcli.network.ip.interface.list()){
        $esxcli.network.ip.interface.ipv4.get($vmk.Name) |
        Select Name,
            @{N='PortGroup';E={$vmk.Portgroup}},
            @{N='IP Family';E={'IPv4'}},
            @{N='IP Address';E={$_.IPv4Address}},
            @{N='Subnet Mask';E={$_.IPv4Netmask}},
            @{N='Broadcast';E={$_.IPv4Broadcast}},
            @{N='MAC';E={$vmk.MACAddress}},
            @{N='MTU';E={$vmk.MTU}},
            @{N='TSOMSS';E={$vmk.TSOMSS}},
            @{N='Enabled';E={$vmk.Enabled}},
            @{N='Type';E={$_.AddressType}}
    }

    # esxcfg-nics -l
    $esxcli.network.nic.list() |
    where {$_.Name -match "vmnic"} |
    Select Name,
        @{N='PCI';E={$_.PCIDevice}},
        Driver,Speed,Duplex,MACAddress,MTU,Link,Description

    # esxcfg-vswitch -l
    foreach($vss in $esxcli.network.vswitch.standard.list()){
        $vss | Select @{N='Switch Name';E={$_.Name}},
            NumPorts,UsedPorts,ConfiguredPorts,MTU,
            @{N='Uplinks';E={$_.Uplinks -join ','}}
        $esxcli.network.vswitch.standard.portgroup.list() |
        where {$_.VirtualSwitch -eq $vss.Name} |
        Select @{N='Portgroup Name';E={$_.Name}},
            VLANID,
            @{N='Used Ports';E={$_.ActiveClients}},
            @{N='Uplinks';E={
                ($esxcli.network.vswitch.standard.portgroup.policy.failover.get($_.Name) |
                Select -ExpandProperty ActiveAdapters) -join ','}}
    }

    foreach($vds in $esxcli.network.vswitch.dvs.vmware.list()){
        $vds | Select @{N='DVS Name';E={$_.Name}},
            NumPorts,UsedPorts,ConfiguredPorts,MTU,
            @{N='Uplinks';E={$_.Uplinks -join ','}}
        $vds.DVPort |
        Select DVPortgroupID,InUse,@{N='Client';E={$_.Client -join ','}}
    }

  • Changing host vMotion PortGroup & vmk IP on a vDS with PowerCLI

    Hi all...

    I have a lot of hosts where I need to change the IP address (not the netmask, gw, etc.) and the portgroup of my vMotion vmk interfaces.

    I have a list of server names, new IPs, and the target portgroup.  I'm assuming PowerCLI can do this, but I don't know where to start because all my hosts are now on a vDS.

    Any suggestions?

    Thank you!

    Which PowerCLI version do you use? Run:

    Get-PowerCLIVersion

    The most recent version at the moment is 5.1 R2
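
    Once the version is confirmed, a rough starting point for the IP change part, assuming a CSV named hosts.csv with Host, Vmk, and NewIP columns (the file name and columns are assumptions; depending on your PowerCLI version you may also need to pass -SubnetMask):

    # Re-IP the vMotion vmk on each host listed in the CSV
    Import-Csv 'hosts.csv' | ForEach-Object {
        Get-VMHostNetworkAdapter -VMHost $_.Host -Name $_.Vmk |
            Set-VMHostNetworkAdapter -IP $_.NewIP -Confirm:$false
    }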

  • Internal-only portgroup on vDS

    Hello, is it possible to create a portgroup on a vDS (vSphere Enterprise Plus, 5.0) for internal-only virtual machines? We need to configure two virtual machines: APPS and DB. The APPS VM should have access to the DMZ, the DB server, and the internal campus network. The DB VM should have access to the APPS VM only (no Internet, no internal campus network). Would it be better to use a standard vSwitch for this purpose? Would we still be able to use vMotion between hosts?

    Thank you for all the advice and recommendations.

    Hello ndmuser,

    Yes, I forgot that part. If you want your DB VM to be connected to the APP virtual machine only, a simple solution exists. Create two separate vSwitches (or vDS). Do not attach any physical NIC to the first switch (no uplinks). Create a portgroup on this vSwitch (named, say, "secure"). Give the second vSwitch physical NICs as uplinks and create one or more portgroups on it (for example "internal"). Now your DB VM should have only one NIC, connected to the "secure" portgroup. In your APP virtual machine, create two vNICs and attach them to "secure" and "internal". This way the vSwitch with no uplinks will not be connected to any internal or external network.

    In fact you must place the DB and APP VMs on the same host, so traffic between DB and APP never leaves the host, while the APP VM's other traffic goes out to the appropriate location.

    But one of the shortcomings of this approach is that you must vMotion the two virtual machines together and they must always be on the same host (well, you can create a VM-VM affinity rule for this).
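
    A quick PowerCLI sketch of the uplink-less switch idea, using a standard vSwitch (the host, switch, and VM names are placeholders):

    $vmHost = Get-VMHost -Name 'esx01.lab.local'

    # Created without a -Nic argument, so the switch has no uplinks and stays host-internal
    $internalSwitch = New-VirtualSwitch -VMHost $vmHost -Name 'vSwitch-internal'
    New-VirtualPortGroup -VirtualSwitch $internalSwitch -Name 'secure'

    # DB keeps a single vNIC on 'secure'; APP gets an extra vNIC there alongside its 'internal' one
    Get-NetworkAdapter -VM 'DB' | Set-NetworkAdapter -NetworkName 'secure' -Confirm:$false
    New-NetworkAdapter -VM 'APP' -NetworkName 'secure' -StartConnected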

  • Migrating vmnics to a Cisco 1000v portgroup (almost there!)

    I am trying to automate the following steps so that I can configure networking for a host end-to-end (using vCenter 5.1, and PowerCLI 5.1 Release 2 Build 1012425 through PowerCLI 6.0 Release 2 build 3056836). The host has two physical network adapters, one of which (vmnic0) is obviously taken by vSwitch0 during the initial installation. So, I need to:

    (1) Migrate physical vmnic1 to a VDS (1000v)

    (2) Set the uplink portgroup for vmnic1 to "system-uplink-vc02"

    (3) Move the vmk0 Management Network from the standard vSwitch to the VDS portgroup called "vc02-vmsc"

    (4) Move physical vmnic0 from the standard vSwitch to the VDS, adding it to the uplink portgroup "system-uplink-vc02"

    I can only achieve 1-3. I can handle part of 4), moving vmnic0 to the 1000v, but only into the 'Unused_or_Quarantine_Uplink' portgroup and not the one I need.

    Stripped of any error handling, this is the code I used (largely provided by:

    http://blogs.Cisco.com/Datacenter/automate-migrating-ESX-host-interfaces-to-nexus-1000V )

    http://www.virtuallyghetto.com/2013/10/automate-migration-from-virtual.html):

    $esxihost = 'host-0201'
    $vmnic = 'vmnic1'
    $1000vName = 'cisco-1000v-data-centre-1'
    $uplinkName = 'system-uplink-vc02'
    
    # Get the 1000v object that that ESX host will be added to
    $1000vObject = Get-VDSwitch | Where-Object -FilterScript {
        $_.name -eq $1000vName
    }
    
    # Get the ESX host object
    $vmHost = Get-VMHost -Name $esxihost -Erroraction Stop | Get-View
    
    # Create a DVS Configuration Specification object
    $spec = New-Object -TypeName VMware.Vim.DVSConfigSpec
    
    # Create a target host DVS Host member configuration specification and set the operation to add
    $targetHost = New-Object -TypeName VMware.Vim.DistributedVirtualSwitchHostMemberConfigSpec
    $targetHost.operation = 'add'
    
    # Create a Pnic backing object in the target host
    $targetHost.backing = New-Object -TypeName VMware.Vim.DistributedVirtualSwitchHostMemberPnicBacking
    
    # Create a Pnic Device object
    $pnic = $vmHost.Config.Network.Pnic | Where-Object -FilterScript {
        $_.Device -eq $vmnic
    }
    
    $targetHost.Backing.PnicSpec = New-Object -TypeName VMware.Vim.DistributedVirtualSwitchHostMemberPnicSpec
    $targetHost.Backing.PnicSpec[0].pnicDevice = $pnic.Device
    
    # Get the 1000V uplink object:
    $uplinkObj = Get-VDPortgroup | Where-Object { $_.Name -eq $uplinkName }
    $targetHost.Backing.PnicSpec[0].UplinkPortGroupKey = $uplinkObj.Key
    
    # Set the target host to the ESX host object reference
    $targetHost.host = $vmHost.MoRef
    
    # Set the DVS configuration specification object host property, to the target host object reference we've created above:
    $spec.Host = $targetHost
    
    # Get the current status of the 1000v and set the version in the configuration specification version
    $dvSwitch = Get-View -Id $1000vObject.ExtensionData.MoRef
    $dvSwitch.UpdateViewData()
    $spec.ConfigVersion = $dvSwitch.Config.ConfigVersion
    
    # Run  the task
    $taskMoRef = $dvSwitch.ReconfigureDvs_Task($spec)
    
    # Get the status
    $taskID = 'Task-' + $taskMoRef.Value
    while((Get-Task -Id $taskID).PercentComplete -lt "100")
    {
        $percentComplete = (Get-Task -Id $taskID).PercentComplete
        Write-Verbose "Percent Complete: $percentComplete"
        Start-Sleep -Seconds 2
    }
    
    
    # 3) Migrate vmk0 Management Network from vSwitch to the VDS with correct portgroup:
    
    # Get the VMKernel port
    $vNicManagement = Get-VMHostNetworkAdapter -VMHost $esxihost -Name vmk0
    # Get the destination port group:
    $vdPortgroupManagement = Get-VDPortgroup -VDSwitch $1000vName -Name 'vc02-vmsc'
    # Set the physical NIC to use:
    $pnicToUse = Get-VMHostNetworkAdapter -VMHost $esxihost -Physical | Where-Object { $_.Name -eq $vmnic }
    # Migrate:
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $1000vName -VMHostPhysicalNic $pnicToUse -VMHostVirtualNic $vNicManagement -VirtualNicPortGroup $vdPortGroupManagement -ErrorAction Stop
    
    
    
    
    
    
    
    
    
    
    
    

    The bit that should move vmnic0 to the VDS is:

    # Get vmnic0 which is still connected to the vSwitch:
    $lastNic = 'vmnic0'
    $pnicToMove = Get-VMHostNetworkAdapter -VMHost $esxihost -Physical | Where { $_.Name -eq $lastNic }
    # Migrate vmnic0 from vSwitch to VDS:
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $1000vName -VMHostPhysicalNic $pnicToMove -Confirm:$false
    
    
    

    ... but as I said, this only puts it in the Unused_or_Quarantine_Uplink group.

    1000v-migrate-uplink.png

    I tried to repeat the code above, targeting vmnic0 instead of vmnic1, but it reports that the host is already a member of the VDS.

    I suspect the answer lies in a variation of 'Migrate ESXi host physical adapters to a specific dvUplink port' on vBombarded, but I've had no luck so far.

    Help appreciated on the final bit

    Thank you.

    I managed to solve this problem by using:

    $config = New-Object VMware.Vim.HostNetworkConfig
    $config.proxySwitch = New-Object VMware.Vim.HostProxySwitchConfig[] (1)
    $config.proxySwitch[0] = New-Object VMware.Vim.HostProxySwitchConfig
    $config.proxySwitch[0].changeOperation = "edit"
    $config.proxySwitch[0].uuid = $1000vObject.key
    $config.proxySwitch[0].spec = New-Object VMware.Vim.HostProxySwitchSpec
    $config.proxySwitch[0].spec.backing = New-Object VMware.Vim.DistributedVirtualSwitchHostMemberPnicBacking
    $config.proxySwitch[0].spec.backing.pnicSpec = New-Object VMware.Vim.DistributedVirtualSwitchHostMemberPnicSpec[] (1)
    $config.proxySwitch[0].spec.backing.pnicSpec[0] = New-Object VMware.Vim.DistributedVirtualSwitchHostMemberPnicSpec
    $config.proxySwitch[0].spec.backing.pnicSpec[0].pnicDevice = "vmnic0"
    $config.proxySwitch[0].spec.backing.pnicSpec[0].uplinkPortgroupKey = $uplinkObj.key   
    
    $vmhostRef = ($vmhost.MoRef.value).split('-')[1]
    $_this = Get-View -Id "HostNetworkSystem-networkSystem-$vmhostRef"
    $_this.UpdateNetworkConfig($config, "modify")
    
  • Multiple vMotion vmks using a Nexus 1000v?

    Hello

    We would like to use several vmkernel ports for vMotion in an environment with a Nexus 1000v dVS.

    We work in an environment where the vMotion traffic crosses a VMware dVS and it works surprisingly well. Can we use this cool new vSphere 5 feature with the vMotion vmk ports attached to a Nexus 1000v? Is this possible? Does anyone have it in production?

    Thanks in advance.

    Multi-NIC vMotion works with any vMotion-enabled VMkernel portgroup, no matter whether it is on standard switches, a VMware vDS, or a third-party vDS such as the Cisco Nexus 1000v.

    -Andreas
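
    For completeness, flagging multiple vmkernel ports for vMotion from PowerCLI looks roughly like this (the host and vmk names are placeholders):

    # Enable vMotion on both vmkernel ports used for multi-NIC vMotion
    Get-VMHostNetworkAdapter -VMHost 'esx01.lab.local' -VMKernel |
        Where-Object { $_.Name -eq 'vmk1' -or $_.Name -eq 'vmk2' } |
        Set-VMHostNetworkAdapter -VMotionEnabled $true -Confirm:$false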

  • vMotion of VMs between 2 hosts with a standard switch and a vDS

    Good day.

    I have 2 hosts (5.5 and 5.0) under a single vCenter and I need to move all the VMs from the second one (5.0) to the first (5.5). The ESXi 5.5 host uses a distributed vSwitch, so I decided to remove the single vmnic from the vDS and add it to a standard switch. This host has 4 vmk interfaces now - 3 in the vDS (one for management (10.12.0.41) and 2 for vMotion, though the "use for vMotion" flag is now disabled on them) and one vmk (vmk3) in the standard vSwitch. vmk3 is flagged for vMotion only, and its address is 10.12.0.100. The 5.0 host is on another network, with a single vmk0 (192.168.107.94) marked for both management and vMotion traffic. OK, now I put the correct rules on my router, but the vMotion interfaces cannot see each other because of the routing table on ESXi 5.5:

    default 0.0.0.0 10.12.0.1 vmk0 MANUAL
    10.12.0.0 255.255.255.0 0.0.0.0 vmk0 MANUAL
    10.30.1.0 255.255.255.0 0.0.0.0 vmk1 MANUAL

    and if I add a route, it gets added via vmk0, so vmk3 on ESXi 5.5 is still not on the route. What's more, if I use "view routing table" for the vmk in the vSphere Client, I get an empty list. How can I add a route for a specific vmkernel interface?

    P.S. Adding the ESXi 5.0 host to the vDS is not an option for me.

    Wow, I guess my problem is solved. TBH it was not a problem at all. I knew that you cannot vMotion a VM from a standard portgroup on one host to a distributed portgroup on another host, BUT it seems I can use a vMotion vmkernel interface that is in a distributed portgroup on one host and vMotion to a standard portgroup vmk on another host, so I just deleted vmk3 and set vmk0 to be used for vMotion traffic as well.

  • VDS and NSX Cluster

    Hi all

    I want to know the role of the cluster and the VDS in NSX,

    and what the difference is compared to an HA or DRS cluster.

    Thank you

    Both are installation prerequisites for the NSX components and are used for VXLAN transport. The VDS is designed to host the VXLAN portgroups. There is no difference from the HA and DRS cluster, because NSX uses the same cluster.

    Here is the documentation for your question: NSX 6 Documentation Center

  • Get portgroups whose dvUplinks are not used

    Hi, would it be possible to report on distributed switches that have unused dvUplinks, using a PowerCLI script?

    The question is not 100% clear to me, but I guess you want to see the VDS with unused Uplink ports.

    Then you can do something like this:

    Get-VDSwitch |
    where {
        $uplinkPg = Get-View $_.ExtensionData.Portgroup | where {$_.Tag.Key -match "UPLINKPG"}
        $uplinkPg.Config.NumPorts -ne $uplinkPg.PortKeys.Count
    } |
    Select Name,
        @{N='Uplink Ports';E={$uplinkPg.Config.NumPorts}},
        @{N='Uplink Ports Used';E={$uplinkPg.PortKeys.Count}}

  • Hopefully a simple vDS question

    We have a security requirement in our environment to limit the number of virtual ports in each port group to only what is necessary... essentially, no unused virtual ports in the port group.  Ridiculous, I know, but if I change the number of available ports, will that cause disruption?  The port assignment is not sequential, so I think the VMs would get shuffled onto new ports.  This mainly concerns the vStorage and vMotion port groups.  Thank you all for any input/help!

    If I understand correctly, the question could be rephrased as:

    I have a dvPortgroup with static binding (non-elastic) and a fixed size of 128 ports.

    Currently the portgroup properties show dvPorts 501 to 628 belonging to this portgroup.

    There are 2 virtual machines (with single vNIC) connected, occupying ports 544 and 601

    If I reduce the number of ports in that port group, will a new or different range of ports be assigned to my portgroup, and will my VMs be reconnected to different ports?

    I just tried it out in my own environment (5.5) - after changing the number of ports on my portgroup to 2, the portgroup properties now show just those 2 previously occupied ports, i.e. 544 and 601 in the example above. No reconnects / disconnects / shuffling around.
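
    For reference, the port-count change itself can be scripted; a small sketch, assuming a portgroup name of dvPG-vMotion and that your PowerCLI version exposes the -NumPorts parameter on Set-VDPortgroup:

    # Shrink a static-binding dvPortgroup to just the ports currently in use
    Get-VDPortgroup -Name 'dvPG-vMotion' | Set-VDPortgroup -NumPorts 2 -Confirm:$false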

    A portgroup on a dvSwitch is nothing more than a container that allows applying the same settings to a bunch of ports.

    And just as a further thought - on a dvSwitch, a port can exist without a portgroup. If you are concerned about security, you should probably also pay attention to who can connect things to individual ports and change settings on them.

    I hope this helps.
