UC on UCS with iSCSI

Hello

I was wondering if there is any update on UCS support for booting UC applications from an iSCSI SAN over the network?

I would like to implement the following scenario: a B-Series chassis (UCSM 2.1) with a B200 M3 running ESXi 5.1 that hosts the UC applications.

Could we boot this server from an iSCSI SAN (10 Gb network, with a QoS policy on the boot vNIC)?

According to the link below, only FC SAN is supported; is that still correct? Is this improvement on the roadmap?

http://docwiki.Cisco.com/wiki/UC_Virtualization_Supported_Hardware#storage

Thank you!

Dmitri

Hello, Dmitri.

Yes, iSCSI SAN support is planned.

Padma

Tags: Cisco DataCenter

Similar Questions

  • ESXi 5.0 to ESXi 5.5 with iSCSI

    I have some ESXi 5.0 servers with iSCSI configured for my storage. If I move to ESXi 5.5, will I have to reconfigure iSCSI? If so, how different is the configuration? I know we had to set it up again when we went from ESX 4.x to ESXi 5.0, and the configuration options were very different. Is that the case with the new version?

    No, you stay within the same major version, so it should work without any problems. ESX(i) 4 to ESXi 5 was a big step; 5.0 to 5.5 is a smaller one. I have done this upgrade in production environments before without any problem or reconfiguration.

  • vMotion with iSCSI

    Hi all

    I'm doing my first vSphere deployment and I have a question about the network requirements for vMotion. I currently have 1 physical switch, with iSCSI traffic on its own VLAN to reduce broadcast storms. We will get a second switch at a later date.

    I have one EqualLogic PS4000XV SAN, which has 2 active 1 Gbps Ethernet ports and 1 management port.  I read that it is preferable to assign a dedicated VLAN to vMotion. My question is: do I have to assign one SAN port to the iSCSI VLAN and the other SAN port to the vMotion VLAN? I would prefer to keep them on the same VLAN so I can set up a LAG for some extra speed.

    I suggest using different VLANs and, if possible, also different NICs for iSCSI and vMotion.

    See also: best practices for vSwitch and NIC design

    André

  • Can I use a custom TCP/IP stack with iSCSI?

    We have a cluster of 3 identical hosts, each connecting to the SAN through 2 NICs using 2 vmks. Each vmk resides on a separate broadcast domain, so they belong to separate vSwitches. No port binding is used, in accordance with the vSphere guide.

    So, as the title suggests, should I use a separate custom TCP/IP stack for the iSCSI vmks, or leave them on the default stack (even though it is then not an option to tag the vmks as IP storage)?

    As long as your iSCSI vmkernel adapters and their respective iSCSI targets are in the same layer-3 subnet, i.e. you are not routing iSCSI traffic (a vmkX<->TargetSPX, vmkY<->TargetSPY separate-subnet iSCSI design does not count, as the traffic is never routed between the networks), or you do not want to configure custom static routes for iSCSI targets in other subnets, then there is no point in using a separate TCP/IP stack.

    The main point of the new ESXi 6 feature of multiple TCP/IP stacks is to have a completely separate routing table per stack. Most admins were not able to handle very basic layer-3 static routing with dedicated per-subnet routes, so they assigned default gateways on several vmkernel NICs and then wondered why things broke or never worked.

    It also gives you better control over which interfaces send data when you have several layer-3 paths or subnets to communicate with, but that is largely irrelevant here and already an integral part of a well-designed iSCSI network.
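    The routed-versus-local distinction above is easy to check with Python's stdlib ipaddress module. This is a minimal sketch with made-up addresses, not anything read from a real host:

```python
import ipaddress

def is_routed(vmk_ip: str, prefix_len: int, target_ip: str) -> bool:
    """Return True if traffic from the vmk to the target would need a
    router, i.e. the target lies outside the vmk's layer-3 subnet."""
    subnet = ipaddress.ip_network(f"{vmk_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(target_ip) not in subnet

# vmkX and TargetSPX share a subnet: no routing, the default stack is fine.
print(is_routed("10.0.10.11", 24, "10.0.10.50"))   # False
# A target on another subnet would be routed: only then does a custom
# stack (or per-target static routes) buy you anything.
print(is_routed("10.0.10.11", 24, "10.0.20.50"))   # True
```

    If every initiator/target pair falls into the first (local) case, a custom stack adds nothing; only routed targets or custom static routes change the picture.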

  • Using NIOC with iSCSI

    I need to validate whether NIOC is OK for a config where vMotion and iSCSI share the same vmnics.

    My idea is to set up a single vDS with 2 vmks for iSCSI connections: 1 with vmnic2 active and vmnic3 unused, and the other vmk configured inversely with vmnic3 active and vmnic2 unused.

    Now for vMotion I would create a multi-NIC configuration where vmnic2 is active and vmnic3 is unused, and once again the 2nd vMotion vmk would be configured conversely, with vmnic3 active and vmnic2 unused.

    The vMotion networks would be configured with lower priority than the iSCSI networks to give storage traffic precedence.

    Concerns:

    In iSCSI multipathing, the vmnics in the vDS other than the active vmnic must be set to unused when using port binding. Is that also the case for multi-NIC vMotion, or is having the other vmnics in standby mode OK?  Which is the preferred method?

    Is the above configuration officially supported?

    Will NIOC somehow slow down iSCSI network performance?

    Thanks in advance

    In iSCSI multipathing, the vmnics in the vDS other than the active vmnic must be set to unused when using port binding. Is that also the case for multi-NIC vMotion, or is having the other vmnics in standby mode OK?  Which is the preferred method?

    Unlike the iSCSI use case, for multi-NIC vMotion the inactive NICs must be configured as standby, not as unused.

    See VMware KB: Multiple-NIC vMotion in vSphere 5

    Is the above configuration officially supported?

    Yes.

    Will NIOC somehow slow down iSCSI network performance?

    It obviously depends on how you actually configure NIOC, and on the real iSCSI bandwidth consumption versus the available bandwidth. Usually you would not define NIOC bandwidth limits on iSCSI traffic, and you would assign it a higher priority as well.
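    The shares-versus-limits point above can be sketched as a quick calculation. This is a simplified model of proportional NIOC shares under contention; the share values are assumptions for illustration, not defaults read from a host:

```python
def nioc_split(total_gbps: float, shares: dict) -> dict:
    """Divide link bandwidth under contention in proportion to NIOC shares.
    Unlike a hard limit, a share only matters when the link is saturated."""
    pool = sum(shares.values())
    return {name: round(total_gbps * s / pool, 2) for name, s in shares.items()}

# Assumed share values: iSCSI set 'high' (100), vMotion and VM 'normal' (50).
print(nioc_split(10.0, {"iscsi": 100, "vmotion": 50, "vm": 50}))
# → {'iscsi': 5.0, 'vmotion': 2.5, 'vm': 2.5}
```

    When the link is idle, any traffic class can still burst to full line rate; that is why shares plus priority are usually preferred over limits for iSCSI.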

    You can also consider using traffic shaping on the vMotion port group; see the following articles:

    Testing vSphere NIOC host limits on Multi-NIC vMotion traffic | Wahl Network

    Traffic-shaping-based Multi-NIC vMotion bandwidth control | Wahl Network

  • Nested ESXi 5.5 lab for the SRM demo with iSCSI

    Hi all

    I was wondering if it is possible to install ESXi 5.5 nested in ESXi 5.5 to do recovery of Tier-1 Microsoft servers with Site Recovery Manager?

    The Tier-1 server applications are: Exchange Server, SQL Server, SharePoint Server, and AD DC servers, plus 5 x Win 7 test desktops.

    The external VMFS datastore will be on an iSCSI NAS connected to the physical server.

    Any suggestion and input would be greatly appreciated.

    Thank you

    One thing you'll want to do is make sure you use vSphere Distributed Switches on your nested ESXi host cluster so that you can use Network I/O Control (NIOC). For me, I set iSCSI traffic to high priority and everything else to normal priority.

    Using NIOC will allow your nested cluster to operate more smoothly than without it.

    My SRM 5.x lab is fully nested - probably not supported by VMware for a production environment, though.

    Here's a blog post by Ather Beg on his results using NIOC and nested ESXi hosts - it is the reduction in spikes shown in Ather's article that made networking on nested ESXi hosts work so much better:

    http://atherbeg.com/2014/02/04/why-enabling-NIOC-network-IO-control-in-a-home-lab-is-a-good-idea/

    Datto

  • Need assistance with iSCSI storage

    I have a new SAN and I'm trying to migrate my existing data.  I'm running ESXi 5.1.  I use the VMware software iSCSI adapter.

    Network configuration:

    Old SAN target IP: 10.10.10.10

    Connection: ESXi > old switch > old SAN

    ESXi connection: 10.10.10.100 - 1 vSwitch with 2 NICs

    New SAN target IP: 10.10.10.11

    Planned connection: ESXi > new switch > new SAN

    ESXi connection: 10.10.10.101 - 1 vSwitch with 2 NICs

    The ESXi host is currently connected to the old SAN (one datastore in VMware).  The connection is made by 1 vSwitch with two vNICs.  No vmkernel port binding is used on the software iSCSI adapter.

    The problem is that the new volume I set up on the new SAN network just doesn't appear when rescanning the HBA, UNLESS I add the new vSwitch's vmk to the port binding under the network configuration of the VMware iSCSI initiator.  The volume is configured with unrestricted access, no CHAP, IP restrictions, etc.  I'm curious what might prevent the new SAN from appearing without adding the port binding.  My only thought is that there is a routing problem, with both the new SAN switch and the old SAN switch configured on the same subnet, resulting in ESXi attempting the rescan through the old switch instead of the new one?

    You cannot have 2 IP addresses on the same subnet for storage; in that case you need port binding.

    The two targets are also on the same subnet. If you do not use port binding, you should have only one IP address for storage on that subnet.

    Also, those are NICs rather than vNICs, right? Are all 4 NICs on the same physical switch?

    If so, you can just have one vSwitch and add all 4.

    Or put your new storage on a separate network.

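    The same-subnet trap described above is easy to see with stdlib ipaddress. The addresses match the ones in the question; this is only a sketch of the routing-table view, not anything queried from ESXi:

```python
import ipaddress

subnet = ipaddress.ip_network("10.10.10.0/24")
endpoints = {
    "old target": "10.10.10.10",
    "new target": "10.10.10.11",
    "old vmk":    "10.10.10.100",
    "new vmk":    "10.10.10.101",
}
# Every address lands in one broadcast domain, so the host's routing table
# has a single entry covering all four; the stack picks one vmk for both
# targets unless port binding pins each session to a specific vmk.
same = all(ipaddress.ip_address(ip) in subnet for ip in endpoints.values())
print(same)  # True
```

    That is why the new target only shows up once port binding is configured, and why moving the new SAN to a separate subnet is the alternative fix.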

  • How best to configure EtherChannel ports for use with iSCSI?

    I had a quick look and found a number of different discussions about the best solution for similar scenarios, but nothing quite close enough to convince me to implement the proposed solution.

    The configuration I was left with is not ideal, because it is not redundant. So I need to find a solution pretty quickly.

    Two NICs (each NIC goes to a different switch) - Service Console

    Two NICs (both go to the same switch) - vMotion

    Two NICs (both go to the same switch) - Prod (10 VLANs)

    Two NICs (both go to the same switch, in an EtherChannel) - storage (iSCSI)

    The environment is a cluster of three ESXi 4.1 U1 Enterprise Plus hosts, connected to a vCenter Standard.

    Here's what I intend to allocate, but I was hoping someone would be nice enough to suggest the best configuration. I realize the number of connections for Prod is excessive and we will never reach that much bandwidth, but I have free NICs.

    Two NICs (each NIC goes to a different switch) - Service Console

    Two NICs (each NIC goes to a different switch) - vMotion

    Four NICs (two pairs in different switches) - Prod (10 VLANs)

    Four NICs (#) - storage (iSCSI)

    The part that concerns me is getting the configuration right for the storage NICs, since we use EtherChannel, which seems to complicate things from what I've read.

    So in summary, I want to use four NICs for storage without wasting the bandwidth I currently have using EtherChannel. If I can make it work another way without EtherChannel, I'm happy to reconsider.

    If I have not provided enough information to evaluate, please let me know and I'll give you more.

    Thanks in advance for any helpful information.

    So in summary, I want to use four NICs for storage without wasting the bandwidth I currently have using EtherChannel. If I can make it work another way without EtherChannel, I'm happy to reconsider.

    First of all, let me squash the myth that an EtherChannel gives you additional bandwidth for iSCSI. Each iSCSI initiator can only use one uplink to a given iSCSI target, because vSphere is limited to an "IP hash" value. Given that the source and destination IPs are always the same, the hash is always the same.
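    The point about the hash can be sketched in a few lines. This is a simplified model of the "Route based on IP hash" teaming policy (XOR the two addresses, reduce modulo the uplink count); the exact hash function is an implementation detail, and the addresses are made up:

```python
import ipaddress

def ip_hash_uplink(src: str, dst: str, n_uplinks: int) -> int:
    """Approximate the vSwitch 'Route based on IP hash' uplink choice:
    XOR the source and destination addresses, modulo the uplink count."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# One initiator talking to one target: the inputs never change, so the
# chosen uplink never changes -- the EtherChannel adds no bandwidth here.
picks = {ip_hash_uplink("10.0.0.10", "10.0.0.200", 2) for _ in range(1000)}
print(len(picks))  # 1 -- always the same uplink
```

    Port binding sidesteps this entirely by giving the storage stack one vmkernel port per uplink and letting the multipathing policy spread sessions across them.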

    I wrote about it in my post "Seriously, Stop Using Port Channels for vSphere Storage Traffic":

    http://wahlnetwork.com/2013/03/05/stop-using-port-channels-to-vSphere-hosts/

    I recommend using vmkernel port binding instead. It is available in 4.1, but not through the GUI.

    http://pubs.VMware.com/vSphere-4-ESX-vCenter/topic/com.VMware.vSphere.config_iscsi.doc_40/esx_san_config/configuring_iscsi/t_connect_software_iscsi_initiators_to_iscsi_vmkernel_ports.html

  • Upgrade from 5.0 to 5.1 issue with iSCSI

    I have a number of HP BL490c G7 blades connected to an EMC SAN via HP Virtual Connect Flex-10 networking.

    With ESXi 5.0 U1, I could see the LUNs and boot from SAN, which was great.

    I used VUM to update to 5.1 after updating vCenter.

    The update process failed, leaving me with a black screen on the ESXi startup console indicating that the installation script had failed; pressing Enter just rebooted.

    I deleted and then re-created the boot LUN to install 5.1 from scratch. When I boot from the ISO image, it is unable to see any external storage to install to; 5.0 U1 still shows the LUNs fine during the installation process, so I know the hardware is still good and ESXi 5.0 U1 still works fine.

    Has something changed in iSCSI HBA support between 5.0 U1 and 5.1?

    Is there anything I can do to get this working?

    Thanks in advance.

    OK, it looks like the images are missing the be2iscsi driver necessary for independent hardware iSCSI, in both vanilla ESXi and the HP Custom ISO:

    ~ # esxcli software profile get
    (Updated) HP-ESXi-5.1.0-standard-iso
    Name: (Updated) HP-ESXi-5.1.0-standard-iso
    Vendor: host01.local
    Creation time: 2012-09-17T05:22:29
    Modification time: 2012-09-25T08:57:20
    Stateless ready: false
    Description:

    2012-09-17T05:22:29.656115+00:00: The following VIBs are
    installed:
    epsec-mux 5.1.0-757363
    ----------
    2012-09-13T18:30:35.687583+00:00: The following VIBs are
    installed:
    vmware-fdm 5.1.0-799731
    ----------
    HP Customized Image Profile for ESXi 5.1.0 ISO

    [...]

    ~ # esxcli software vib list | grep be2net
    net-be2net 4.1.255.11-1vmw.510.0.0.799733 VMware VMwareCertified 2012-09-12
    ~ # esxcli software vib list | grep be2iscsi

    [nothing]

    HP's image information list doesn't show an updated be2iscsi driver either:

    http://h18004.www1.HP.com/products/servers/Software/VMware-ESXi/driver_version.html

    The 5.0 image has a SCSI-BE2.V00 file that contains the be2iscsi driver. I don't know why they dropped it in ESXi 5.1.

    Try manually adding the be2iscsi driver to your image:

    https://my.VMware.com/Web/VMware/details?downloadGroup=DT-ESX50-Emulex-be2iscsi-413343&ProductID=229

    http://v-front.blogspot.de/p/ESXi-customizer.html

    And regarding your original question about what changed in 5.1 for hardware iSCSI: they added support for jumbo frames:

    http://cormachogan.com/2012/09/10/vSphere-5-1-storage-enhancements-part-5-storageprotocols/

  • HA with iSCSI questions

    Hello world

    Just trying to get a better understanding of HA in vSphere 5. In my test lab, I have two hosts, and both hosts have a single path to an iSCSI datastore. Now, I know in production I would want more than 1 path, but this is just a test lab, and I'm sure that somewhere out there a customer has a similar setup.

    Basically, here's what I did. I migrated every VM off 1 host, with the exception of 1 VM... then I unplugged the network connection on that host that carried iSCSI. The management network could still communicate, because it is on a different port group and uses a different physical NIC, so I was interested to see what would happen, since the virtual machine itself was dead at this point. To my surprise, nothing happened. It was never restarted on another host. That honestly confuses me and made me realize I did not understand HA as well as I thought I did.

    Thanks for any help.

    Mike

    you wrote

    Basically, here's what I did. I migrated every VM off 1 host, with the exception of 1 VM... then I unplugged the network connection on that host that carried iSCSI. The management network could still communicate, because it is on a different port group and uses a different physical NIC, so I was interested to see what would happen, since the virtual machine itself was dead at this point. To my surprise, nothing happened. It was never restarted on another host. That honestly confuses me and made me realize I did not understand HA as well as I thought I did.

    A few simple questions:

    Did you declare the datastore in your datastore heartbeat config?

    What is your selected isolation response?

    What is your selected VM monitoring option?

    What is your HA admission control setting?

    You can check your fdm logs for more information.

    It has already been mentioned (in the previous post) that by default HA checks the management network, which is very true.

  • ESXi 4.1 or ESXi 5.0 with iSCSI SAN

    Hello

    I'm trying to establish an environment that is / will be entirely iSCSI SAN on the backend.  I know from a lot of reading that the best practice is to configure the ESXi host with:

    VMkernel port > vmnic > physical switch port > SAN port (1:1 config - no vmnic teaming)

    Environment:

    ESXi 4.1 & 5.0

    Force10 switches

    EqualLogic iSCSI SAN (PS4000 & 6000)

    Here's my real question (playing the Devil's advocate):

    Why shouldn't I team 3-4 vmnics on a vSwitch with multiple vmkernel ports?

    Given my environment, can someone point me to a technical document that explains what will / could happen if I set up the ESXi environment this way?

    Thank you

    BDAboy22

    So, basically, we want the SAN to take full responsibility for the failover, instead of the network and/or the SAN arguing over who should resolve the failure (which could lead to longer session recovery times).   Do I understand that correctly?

    Yes, that's right: we want the vSphere storage stack to manage path selection and failover - and by connecting each vmknic directly to a physical vmnic, we sort of "simulate" how FC HBAs work and also bypass the network failover mechanisms. As you probably know, you also need to "bind" the software iSCSI adapter to the two vmknics (through the GUI in 5.0, via esxcli in 4.x).

  • Failover with iSCSI datastores

    Hello to all members,

    I'm looking for a solution to create failover (preferably active/active) datastores (iSCSI) with 2 different devices.

    I read about FT, but I think FT does not create failover for datastores (transparent failover), only for hosts, correct?

    I need a solution that reads/writes to both datastores at the same time.

    Thank you very much, and sorry for my English.

    Welcome to the community,

    you are correct: FT, as well as DRS and HA, only care about the VM workload, not storage.

    For that type of requirement (transparent failover), you will need storage-based replication/mirroring such as the HP P4000 (formerly LeftHand).

    André

  • How many network adapters do I need - ESXi 4.1 with iSCSI

    I was wondering if someone could help me make sense of what I've read about the number of NICs for ESXi 4.1.  I came across the following article, but I'm not sure I understand all the traffic types it discusses:

    http://livingonthecloud.blogspot.com/2009/09/ESXi-networking-best-practices.html

    At the moment I have 6 x 1 Gb NICs on each of my 3 servers.  I connect to my SAN via iSCSI, so I know I'll need at least 2 of those NICs for that traffic. I know I'll need at least 2 NICs for the connection between my ESXi server and my switch for the VM network.  What I don't understand is what I really need for the FT VMkernel, vMotion VMkernel, and Management VMkernel.  I don't plan on using vMotion very often at all, so I'm not concerned about its impact on my network.

    Any advice?  Do I need to replace my dual-port NICs with quad-port NICs, or is that overkill?

    Management doesn't need a dedicated NIC. It can share NICs with the virtual machine networks.

    In this case the following configuration would be interesting:

    vSwitch0: (3 NIC uplinks) (assuming vmnic0, vmnic1, vmnic2)

    Port group 1 - Management network (vmnic0 and vmnic1 -> active, vmnic2 -> standby)

    Port group 2 - VM network (vmnic0 and vmnic1 -> active, vmnic2 -> standby)

    Port group 3 - VM network (vmnic0 and vmnic1 -> standby, vmnic2 -> active) - for the "Pervasive SQL"

    vSwitch1: (1 NIC/uplink)

    Port group 1 - VMkernel for vMotion on a separate network (192.168.x.x)

    vSwitch2: (2 NICs/uplinks)

    configured following the best practices of your iSCSI storage provider

    In this way, the "Pervasive SQL" would have a dedicated NIC (vmnic2) and can fail over to another NIC in the event of a network disruption.

    André

  • SMB ESXi with iSCSI SAN

    I'm looking to do an SMB deployment for a client of mine. The new features offered by 4.1 for SMB, such as vMotion and HA, are now in their budget. I've seen Dell offering some Broadcom cards with TOE and iSCSI offload. I know that 4.1 now supports TOE cards, but are these Broadcom cards actual iSCSI HBAs, or just TOE cards with iSCSI offload engines? Will vSphere 4.1 support this card? It's much cheaper than buying full-blown host bus adapters for a small business.

    I'd go with the R710, mainly because of my experience with the model and how solid they are. You won't save all that much money using the R610, so you could spec out a solid R710 for the host. In addition, the R710 has four onboard GbE NICs (with the TOE option on all four) and four open expansion slots for multiple NICs and other cards. With the R610, you will have only two expansion slots. So if you want to add more cards later (more than two), you must either remove ones you've installed to make room, or make another purchase.

    I would always use onboard SAS drives to install ESX/ESXi on. You save money by not getting the HBA you would need to boot from SAN, with the same (or similar) level of performance. It also helps to isolate issues when you're booting from local disks... I would go with a pair of 10k/15k RPM SAS drives (2.5" or 3.5", your choice) in RAID 1 on the supported RAID controller. Confirm with either your VAR or your Dell rep to make sure the controller is 100% compatible with VMware (or check the HCL yourself). They list the 146 GB drives in the configurator (the normal SMB site) at a fairly cheap price.

    I'd also go with Xeon E5620 processors (get a pair in the host) so you'll have plenty of CPU power. Get as much RAM as you can, using no more than 12 sticks (a multiple of six).

    In fact, I configured an R610 and an R710 using the SMB website... The R710 has the 146 GB (10k RPM) drives, the R610 has 73 GB 15k RPM drives. The R610 comes out to almost $300 more than the R710. We're talking same memory (24 GB), TOE enabled on the built-in NICs, no additional NICs at build time (they would get those from another provider, choosing Intel NICs, NOT Broadcom), redundant power supplies, rapid-rail kits (without the cable management arm), two Xeon E5620 processors, and iDRAC 6 Enterprise (so you can do it all remotely; no need for any KVM once the iDRAC 6 is configured). Both builds include the 5x10 hardware-only (NBD onsite) support. I also put the hard drives in RAID 1, so you have redundancy there.

    Having the drives onboard for ESX/ESXi to sit on also makes configuring the host much easier and requires nothing special on the storage side. Otherwise, you need to configure the MD3000i to present a solitary boot LUN to a single host for ESXi. Everyone I've spoken with who implements hosts goes with local disks. Especially if you are looking to save the ~$1k the HBA adds to the build (not to mention the extra complexity). I was talking with some people in charge of a large IBM virtual data center not long ago. They were forced to boot from SAN on their hosts, pushed into it by higher-ranking people within the company. It was decided that because it sounded good on paper, it would be good in practice. Initial tests showed how much pain it would be, and they were not happy about the extra work required to make it happen. Until VMware clearly states that booting from SAN is recommended over local disk (properly configured either way), I will continue to use local drives for ESX/ESXi to reside upon. Even then, we would need a significant performance win to move to the boot-from-SAN model. Maybe when 10 Gb is in the environment as a whole, it will make sense. As it is, you get new 6 Gb SAS disks, everything internal, and you still beat the boot-from-SAN model on performance. In addition, you don't need to worry about someone accidentally doing something to the LUN you are booting from...

    Network administrator

    VMware VCP4

    Please consider awarding points for "helpful" or "correct" answers.

  • Initial ESX network config with iSCSI SAN

    Hi all

    I want to install 2 ESX 3.5 servers which will be connected to an EqualLogic iSCSI SAN.

    The SAN is on a VLAN, 10.x.x.200 with a 255.255.255.224 netmask.  This VLAN is not routable, has no DNS servers, etc.

    What I am trying to understand is this: for the initial setup of ESX, when I set the network config (console), should I enter the IP address for the VLAN, as in example A:

    IP address: 10.x.x.201

    Netmask: 255.255.255.224

    Primary DNS: blank

    Secondary DNS: blank

    Or, as in example B, should I use our 'public' addressing:

    IP address: 129.x.x.201

    Netmask: 255.255.255.0

    Primary DNS: 129.x.x.1

    Secondary DNS: 129.x.x.2

    I know that with the VIC I can add vSwitches, etc. later, but at least for the initial installation, I want the configuration that will provide the smoothest operation. Thanks for any ideas you can provide!

    Chad

    Hello and welcome to the forums.

    What I am trying to understand is this: for the initial setup of ESX, when I set the network config (console), should I enter the IP address for the VLAN, as in example A:

    Use example B (129.x.x.x) for the Service Console (management functions), and after the system is in place, add another vSwitch used to connect to the SAN VLAN.

    Good luck!
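    As a side note on the addressing in example A: 255.255.255.224 is a /27 netmask, which a quick sketch with Python's stdlib ipaddress makes concrete. The specific octets below are made up to stand in for the anonymized "10.x.x" in the post:

```python
import ipaddress

# The SAN VLAN: a /27 network (mask 255.255.255.224), as in example A.
san = ipaddress.ip_network("10.1.1.192/27")
print(san.netmask)            # 255.255.255.224
print(san.num_addresses - 2)  # 30 usable host addresses
# The SAN and the ESX storage vmk must both live inside this range,
# since the VLAN is non-routable (no gateway or DNS needed on it).
print(ipaddress.ip_address("10.1.1.200") in san)  # True
```

    Thirty usable addresses is plenty for a couple of ESX hosts plus SAN interfaces, which is why the non-routable VLAN only matters for the storage vSwitch, not for the Service Console.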

    When I have access to the http://webservername/MBAMComplianceStatusService/StatusReportingService.svc and show the message in the web browser below. You have created a service. To test this service, you will need to create a client and use it to call