Cascading HP ProCurve 1800-24G switches

Hello

I have the following problem: I would like to connect/stack 3 switches, not with RJ45 copper cables (which is what I do now), but via a cascade system or optical cables. In fact, I would prefer cascading, but I don't know whether that is possible with my devices. My system currently looks like this:

[1] ProCurve 1810-24G J9450A, software ver. ?
|
|
[2] ProCurve 1800-24G J9028A, software ver. PB.02.09
|
|
[3] ProCurve 1800-24G J9028A, software ver. PB.02.09
[3A] here: a ProCurve Gigabit-SX-LC Mini-GBIC (full-duplex LC connector)
|
The switch below ([4]) is in a different location, so it does not need to be cascaded! I just wanted to make sure you notice that an optical port is already in use on switch [3]. But I don't know whether this generation of switches supports using two optical ports at the same time?
|
[4] ProCurve 1800-24G J9028A, software ver. PB.02.09
[4b] here: a ProCurve Gigabit-SX-LC Mini-GBIC (full-duplex LC connector)

In the near future, my system will look like this:

[0] Nortel Avaya 4524GT
|   [0] here: an SFP Gigabit Ethernet transceiver connected to another Nortel Avaya 4524GT switch
|?
|
[1] ProCurve 1810-24G J9450A, software ver. ?
|
|
[2] ProCurve 1800-24G J9028A, software ver. PB.02.09
|
|
[3] ProCurve 1800-24G J9028A, software ver. PB.02.09
[3A] here: a ProCurve Gigabit-SX-LC Mini-GBIC (full-duplex LC connector)
|
The switch below is in another location, so it does not need to be cascaded! I just wanted to make sure you notice that an optical port is already in use on switch [3]. But I don't know whether this generation of switches supports using two optical ports at the same time?
|
[4] ProCurve 1800-24G J9028A, software ver. PB.02.09
[4b] here: a ProCurve Gigabit-SX-LC Mini-GBIC (full-duplex LC connector)

Once again, my question: is there a way to cascade [1], [2] and [3]? If so, how, and can I also add [0]?

Thanks a lot for any solutions...

Hello

As mentioned above, I have not used this type of configuration myself, so I don't know for sure, BUT with optical cables it should work.

Kind regards.

Tags: Notebooks

Similar Questions

  • vMotion fails at 9% - Source host cannot connect to the destination host

    Hello

    I wonder if someone could shed some light on why vMotion fails to an ESXi host that has just been restarted in order to test HA. I have the following configuration:

    3 x ESXi 5 DL360 G7s, each with two 4-port NICs

    vSphere vCenter 5

    1 x cluster configured for DRS/HA

    2 x ProCurve 2910-24G

    The switches are not connected to each other.

    Both switches are configured as such:

    [attached screenshot: 1894901.png]

    All ports are untagged. No LACP, no routing.

    Each host's vMotion vSwitch is connected to the two switches by one 1 Gb NIC each.

    I configured a vMotion vSwitch on each host. There are two vmkernel ports with two IPs on the same subnet, and two vmnics attached to the vSwitch. On each port, one vmnic is configured as active while the other is unused. I enabled jumbo frames on the vSwitch and the 2910 switches. A VLAN has been configured on the two 2910s for vMotion, with jumbo frames and the traffic set to untagged. I can successfully vmkping all the vMotion IPs on all ESXi hosts. However, when I test HA by shutting down an ESXi host, I am unable to vMotion into that ESXi host once I restart it. When I test vmkping, I find that the rebooted host can only vmkping itself and no other host can vmkping it. The vMotion attempt fails at 9% and errors with "the source host cannot connect to the destination host". If I restart the two 2910 switches, I can then perform a vMotion and the vmkping succeeds.

    Help, please?

    Thank you

    lansley2000 wrote:

    I have since read up on the various load-balancing options and find that your method is preferable to the "IP hash" method.

    I'll make the change to "port based" and link the two switches.

    Hello Simon, I think that's a good change to make, since IP-hash load balancing is a bit special and really requires both interfaces to connect to the same physical switch, which must also have a specific configuration. Let us know if you like the results after the new changes.
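
    For reference, switching a vSwitch back to the default "route based on originating virtual port ID" policy can also be done from the ESXi 5.x shell; a minimal sketch, assuming the vMotion vSwitch is named vSwitch1 (the name is an assumption, not taken from the thread):

    ~ # esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=portid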

  • ProCurve 2626 switch (J4900B) - VLAN support?

    Hello!

    I tried to set up VLANs because I really need this feature, but I encountered a problem.

    My current setup:

    I have 2 switches that are connected via an optical cable.

    My question is: how do I set up ports, let's say 10 to 15, so that they are on the same network but separated from the rest, across the two switches?

    So that a DHCP server on ports 10-15 does not give addresses to other ports?

    And so that devices on ports 10-15 on one switch and 10-15 on the other switch are on the same network?

    Thank you!

    Hello:

    I suggest that you also post your question in the HP Business Support Forum - ProCurve switches section.

    http://h30499.www3.HP.com/T5/ProCurve-provision-based/BD-p/switching-e-series-Forum
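
    For a rough idea, a minimal sketch of what this could look like on each 2626, assuming the optical uplink is port 26 and picking VLAN ID 10 as an example (both assumptions, not from the thread). Ports 10-15 become one isolated broadcast domain spanning both switches, so a DHCP server on those ports serves only that VLAN:

    vlan 10
       name "ISOLATED"
       untagged 10-15
       tagged 26
       exit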

  • Switch advice

    Hello to all,

    A client of mine has asked me for advice on a switch replacement. He is currently using a Zyxel GS1510... and would like to move to a higher-performing product. I thought of proposing two HP ProCurve 2910 switches. He will tell me the price is too high... can you recommend a valid alternative that costs a bit less but can carry the workload of 2 storage units without a too-cheap switch risking bottlenecks?

    His configuration is the following:

    4 x ESXi 5.1 hosts

    2 x NFS storage units (RAID 10)

    Thanks

    With 2 ProCurve 2530-24Gs you shouldn't have problems.

    They support flow control, jumbo frames, VLANs, LACP, etc.

    The 2530-24G also has its own 56 Gb/s backplane bandwidth, which in your case will not limit you.

    I have 2 of them at a client, with 3 ESXi hosts connected to a NetApp FAS2240 (24 x 10k SAS HDDs) over NFS.

    No performance problems.

    The 2910al-24G, besides being layer 3, has a switching capacity of 128 Gbps and a 6 MB packet buffer versus the 3 MB of the 2530-24G, plus more CPU and RAM.

    If you expect to fill all the ports with sustained iSCSI or NFS traffic, then without a shadow of a doubt it is better to go for the 2910al.

    In your case, however, you shouldn't be using more than 7-8 ports per switch dedicated to this area.

    For the VM traffic, in 99% of cases it makes no difference.

  • Network teaming with port trunks to support an ESX vSphere 4 host with several NICs for load balancing across an HP ProCurve 2810-24G

    We are trying to increase the throughput of our ESX host.

    ESX 4 with 6 NICs connected to HP ProCurve 2810-24G ports 2, 4, 6, 8, 10 and 12.

    The teaming parameters on ESX are rather easy to activate; however, we do not know how to configure the HP switch to support the above connections.

    Could someone please help with a few examples of how to set up the HP switch?

    Help will be greatly appreciated, as we continue to lose RDP sessions through disconnects.

    Best regards, Hendrik

    Disabling spanning-tree protocols on the ProCurve ports connected to the ESX host will promote faster port recovery. Likewise, running global spanning tree is not recommended if you mix iSCSI and data VLANs in the same fabric (i.e. you do not want an STP event to hang storage I/O). If you run spanning tree on your switches, look at PVST (or the ProCurve equivalent) to isolate STP events to individual VLANs.

    As regards load balancing, the default (route based on originating port ID) algorithm requires the least overhead on the ESX hosts. You cannot use LACP on the ProCurve due to the lack of LACP facilities in ESX. You would have to use "route based on IP hash" on the ESX side and static trunks on the ProCurve side (see the sketch after this list). Unless you have specific reasons why your network needs this configuration, I'd caution against it for the following reasons:

    (1) IP hash requires thorough packet inspection by the ESX host, increasing CPU load as packet load increases;

    (2) the static configuration creates a rigid and critical mapping between physical switch ports and ESX host ports. Likewise, all port groups will fail over together, since the ProCurves stack for management only and won't span 802.3ad link-aggregation groups across switches (i.e. all ports of a link-aggregation group must be connected to a single switch) - this isn't a limitation of port-ID routing;

    (3) K.I.S.S.: with a mix of port ID, beacon probing and failover on the port assignments, you get raw traffic segregation without sacrificing redundancy - even across switches.
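
    Should you still want the IP-hash route, a minimal sketch of the corresponding static (non-LACP) trunk on the ProCurve side, using the port numbers Hendrik listed (the trunk name trk1 is arbitrary):

    trunk 2,4,6,8,10,12 trk1 trunk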

    I hope this helps!

    -Collin C. MacMillan

    SOLORI - Oriented Solution, LLC

    http://blog.Solori.NET

    If you find this information useful, please give points to "correct" or "useful".

  • ProCurve 2610

    2910al example HP configuration: 2 switches for LAN and backplane, with management on the LAN (HA cluster), and a backplane interconnect example.

    These instructions assume a configuration for deployment with two switches, LAN and backplane, with LAN management and an interconnect. If you use a different configuration, modify the steps as appropriate.

    To assign ports, follow these steps:
    1. In the VLAN Menu, highlight VLAN Port Assignment and press Enter. The Port Assignment screen opens.
    2. Use the arrow keys in the Actions-> menu to select Edit and press Enter. You can now change the information on this screen.
    3. Set each port to Forbid for the DEFAULT_VLAN by selecting the parameter in that VLAN's column and pressing Space.
    4. Select Port 1 as the LAN/Mgmt VLAN uplink to connect your cluster to the rest of your network.
    5. Set it to Untagged for the LAN VLAN.
    6. Set it to Forbid for the BACKPLANE VLAN.
    7. Select Port 2 as the BACKPLANE VLAN uplink port to connect the two switches together for the BACKPLANE VLAN network.
    8. For the backplane ports, set Ports 17-24 to Untagged to allow 8 nodes without any further switch VLAN adjustment. (For Option 3: single switch, backplane only with LAN management, set Ports 2-24 to Untagged.)

    How do I do steps 4 and 7? And step 8 throws me for a loop, because am I supposed to do step 6 for those ports too?
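
    (For orientation, a CLI sketch of roughly what steps 3-8 amount to; the VLAN IDs 10 and 20 are hypothetical, since the menu text never names them, and the menu's Forbid setting corresponds to the forbid keyword:

    vlan 10
       name "LAN"
       untagged 1
       exit
    vlan 20
       name "BACKPLANE"
       untagged 2,17-24
       exit
    vlan 1
       forbid 1-24

    Steps 4 and 7 simply mean: in the port-assignment screen, move to the row for port 1 or port 2 and set its value in each VLAN column with the space bar.)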

    Hello:

    You can also ask your question in the HP Business Support Forum - ProCurve switches section.

    http://h30499.www3.HP.com/T5/ProCurve-provision-based/BD-p/switching-e-series-Forum#.Uel_BL4o69I

  • HP ProCurve voice VLAN with trunks

    Hi all

    I am a Cisco-trained guy, so I am trying to transfer my knowledge to the HP ProCurve switches, but I need a little help getting VLANs etc. set up.

    What I have is 4 switches: 3 at the access layer and 1 at the core/distribution layer.

    I want the access-layer switches to trunk 2 interfaces each to the core/distribution layer, to raise the uplink speed to 2 gigabit instead of 1. I also want 2 VLANs set up to separate voice and data. I want all ports to be able to support either a PC or a VoIP phone. I have set the phones to automatically tag with the voice VLAN tag, but I want all traffic to be forwarded over the uplink to the shared resources at the core/distribution layer.

    From what I understand, I need to:

    Configure a trunk interface on the access and core/distribution layer switches: trunk b1-b2 trk1 lacp

    Add VLANs for voice and data, and assign the voice VLAN.

    The problem I have is with the tagged/untagged parameters.

    Do I tag the voice VLAN on trk1, set its QoS priority to 6, and then create the data VLAN untagged on trk1?

    the config I've written so far is:

    trunk b1-b2 trk1 lacp
    show trunks
    spanning-tree
    spanning-tree force-version rstp-operation
    vlan 100
       name "voice"
       voice
       tagged trk1
       qos priority 6
       exit
    vlan 200
       name "data"
       untagged trk1
       exit

    Is this correct, or am I missing something here?

    Thanks in advance!

    Hello:

    You can also copy and paste your message into the HP Business Support Forum - ProCurve switches section.

    http://h30499.www3.HP.com/T5/ProCurve-provision-based/BD-p/switching-e-series-Forum

  • Firmware upgrade failed - HP ProCurve 1810G-8 GE, P.1.6

    Hi all,

    I have an HP ProCurve 1810G-8 GE with firmware P.1.6 (eCos-2.0).

    I tried to update the firmware from the initial P.1.6 to P.1.17 or P.1.20, but without success.

    1. These are the steps I've tried:

    HTTP P.1.17 > Image "Backup" (with Windows & Linux Client)

    TFTP P.1.17 > Image "Backup" (with Windows & Linux TFTP Server)

    HTTP P.1.20 > Image "Backup" (with Windows & Linux Client)

    TFTP P.1.20 > Image "Backup" (with Windows & Linux TFTP Server)

    2. environment:

    OS1: Windows 7 x 64 (physical Machine)

    OS2: Ubuntu 14.04 (physical Machine)

    TFTP Server Windows 1: Tftpd32

    TFTP Server Windows 2: SolarWinds TFTP server

    TFTP Server Ubuntu: tftpgui (https://code.google.com/p/tftpgui/)

    No firewall, static IP addresses, no DHCP,

    PC1 & PC2 <> switch - directly connected with LAN (not crossover) cables:

    1 x CAT5e cable (0.5 m)

    1 x CAT6 cable (1 m)

    3. HTTP error messages:

    Download for IMAGE1 failed. Not enough memory to download the "P_1_17.stk".

    Download for valMapImage1 failed. File 'P_1_17.stk' does not exist or is empty.

    Download for IMAGE1 failed. Not enough memory to download the "P_1_20.stk".

    Download for valMapImage1 failed. File 'P_1_20.stk' does not exist or is empty.

    4. TFTP error messages (Windows and Linux):

    Not enough memory to complete the download of the file.

    The file download failed!

    5. Notes:

    - tried the Clear button on the switch > without success

    - tried the Reset button on the switch > without success

    - tried all ports with each cable and each client > without success

    Now I don't know what to do. Does anyone have a solution for me?

    Kind regards,

    Cookie

    P.S.: Sorry for my English, I'm a German speaker. :-)

    Hello:

    You can also ask your question on the HP Business Support Forum - ProCurve switches section.

    http://h30499.www3.HP.com/T5/ProCurve-provision-based/BD-p/switching-e-series-Forum#.VCgRxHl0y9I

  • PowerConnect M6220 - untagged VLAN problem between switches

    Good afternoon.
    I do not have much knowledge of networks and need help.
    I have this M6220 switch connected to an HP ProCurve switch by a trunk.
    My problem is that I cannot pass the untagged VLAN 10 from the ProCurve to the M6220 switch; can you help me?
    The M6220 has firmware version 1.
    Thank you

    On a trunk connection, the native VLAN is the VLAN used to send and receive untagged packets. Here is an example of the configuration.

    console(config)# interface gigabitethernet 1/0/24
    console(config-if-Gi1/0/24)# switchport mode trunk
    console(config-if-Gi1/0/24)# switchport trunk native vlan 10
    console(config-if-Gi1/0/24)# switchport trunk allowed vlan add 10,11,12,13

    If trunk mode does not work, you can also try using general mode.

    console(config)# interface gigabitethernet 1/0/24
    console(config-if-Gi1/0/24)# switchport mode general
    console(config-if-Gi1/0/24)# switchport general pvid 10
    console(config-if-Gi1/0/24)# switchport general allowed vlan add 10 untagged
    console(config-if-Gi1/0/24)# switchport general allowed vlan add 11,12,13 tagged

    Hope this helps

  • Enable SNMP - HP ProCurve 2848

    Hi all

    I don't know why the SNMP protocol does not work on my HP ProCurve 2848 (ping and the web interface work!).

    This is my config.

    SW3_STIPA(config)# show config
    
    Startup configuration:
    
    ; J4904A Configuration Editor; Created on release #I.10.70
    
    hostname "SW3_STIPA"
    snmp-server contact "STIPA"
    snmp-server location "Montreuil"
    no cdp run
    interface 1
       no lacp
    exit
    interface 2
       no lacp
    exit
    interface 3
       no lacp
    exit
    interface 4
       no lacp
    exit
    interface 5
       no lacp
    exit
    interface 6
       no lacp
    exit
    interface 7
       no lacp
    exit
    interface 8
       no lacp
    exit
    interface 9
       no lacp
    exit
    interface 10
       no lacp
    exit
    interface 11
       no lacp
    exit
    interface 12
       no lacp
    exit
    interface 13
       no lacp
    exit
    interface 14
       no lacp
    exit
    interface 15
       no lacp
    exit
    interface 16
       no lacp
    exit
    interface 17
       no lacp
    exit
    interface 18
       no lacp
    exit
    interface 19
       no lacp
    exit
    interface 20
       no lacp
    exit
    interface 21
       no lacp
    exit
    interface 22
       no lacp
    exit
    interface 23
       no lacp
    exit
    interface 24
       no lacp
    exit
    interface 25
       no lacp
    exit
    interface 26
       no lacp
    exit
    interface 27
       no lacp
    exit
    interface 28
       no lacp
    exit
    interface 29
       no lacp
    exit
    interface 30
       no lacp
    exit
    interface 31
       no lacp
    exit
    interface 32
       no lacp
    exit
    interface 33
       no lacp
    exit
    interface 34
       no lacp
    exit
    interface 35
       no lacp
    exit
    interface 36
       no lacp
    exit
    interface 37
       no lacp
    exit
    interface 38
       no lacp
    exit
    interface 39
       no lacp
    exit
    interface 40
       no lacp
    exit
    interface 41
       no lacp
    exit
    interface 42
       no lacp
    exit
    interface 43
       no lacp
    exit
    interface 44
       no lacp
    exit
    interface 45
       name "INTERCO_VERS_SW1"
       no lacp
    exit
    interface 46
       name "INTERCO_VERS_SW1"
       no lacp
    exit
    trunk 45-46 Trk2 Trunk
    ip default-gateway 192.168.12.1
    snmp-server community "public" Operator
    snmp-server community "snmp-private" Operator Unrestricted
    snmp-server host 192.168.12.230 "public"
    snmp-server enable traps authentication
    vlan 1
       name "DEFAULT_VLAN"
       untagged 33-34,36-44,47-48,Trk2
       ip address dhcp-bootp
       no untagged 1-32,35
       exit
    vlan 11
       name "VLAN_STIPA"
       untagged 1-32
       no ip address
       tagged Trk2
       exit
    vlan 12
       name "VLAN_PROCESS"
       untagged 35
       ip address 192.168.12.13 255.255.255.0
       tagged Trk2
       exit
    vlan 20
       name "VLAN_TOIP"
       ip address 10.0.0.3 255.255.255.0
       tagged 1-44,Trk2
       exit
    spanning-tree
    spanning-tree Trk2 priority 4
    spanning-tree priority 4
    ip ssh version 1-or-2
    password manager
    

    From the server (192.168.12.230):

    [root@ces:~/09:41:11]# ping -c 1 10.0.0.3
    PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
    64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=1.15 ms
    
    --- 10.0.0.3 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 1.158/1.158/1.158/0.000 ms
    [root@ces:~/09:41:14]# snmpwalk -v 1 -c public 10.0.0.3
    Timeout: No Response from 10.0.0.3
    

    Thank you for your help!

    Hello:

    I recommend that you also post your question on the HP Business Support Forum - ProCurve switches section.

    http://h30499.www3.HP.com/T5/ProCurve-provision-based/BD-p/switching-e-series-Forum#.Uyr1POlOW9I

  • iSCSI Flow Control vs Jumbo Frames

    It seems that with my setup (HP 1800-24G switches) I'll have to choose between jumbo frames or flow control for iSCSI SAN access. I am running jumbo frames with an MTU of 9000 bytes. Enabling flow control in addition to jumbo frames yields a transfer rate close to zero, so I guess the combination is not supported by the ProCurve 1800-24G.

    Which option would you recommend to improve the performance of a setup with 2 ESX servers (HP DL380 G5, ESX 3.5 U2) and 1 SAN (HP MSA 2012i, two controllers)?

    Virtualized applications include databases (SAP, Exchange) as well as moderately used file servers (user shares, company documents).

    I have read other threads on this topic but did not come to a clear conclusion.

    Best regards, Felix Buenemann

    The 1800-24G has a 500 KB packet buffer. The specs don't say whether this is per port or per chassis; however, since the 1800-8G has 144 KB of buffer space, I guess it's per chassis, which is not very good news for iSCSI.

    If you don't have enough buffer space per port, you can run into frame loss during periods of high throughput. Frame loss means that TCP will retransmit the frame, which is slow compared to normal operation. I would choose flow control over jumbos: flow control tells the end device to stop sending frames until the switch has had time to process them.

    Ben
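
    If you follow that advice and drop jumbo frames, remember the vSwitch MTU has to come back down too; a minimal sketch for the ESX 3.5 service console, assuming the iSCSI vSwitch is named vSwitch1 (an assumption, not from the thread):

    # esxcfg-vswitch -m 1500 vSwitch1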

  • SG200-26 [FW 1.1.2.0] - very high response time: > 1000 ms!

    Hello

    Problem: new SG200-26 Smart Switch with the latest firmware - very high response times of 500-800 ms.

    We have an EdgeMarc 4500 router with 10 VPN tunnels to 10 branch locations. The SG200-26 Smart Switch is connected to 7 servers (2 Terminal Servers, SQL and others); all locations have Verizon FiOS Internet service at 50 Mb download and 20 Mb upload speed.

    According to the Kulvik tool, the response time of this switch is around 500 ms. At the same time, the response time of the EdgeMarc 4500 router is around 40 ms or less.

    We have 60 remote desktop computers connected to our SQL Server database and 40 RDP users via Remote Desktop. The configuration has been the same for 3 years, but we changed the HP 1800-24G switch for the Cisco because of some connection failures. We first suspected the old HP switch, but it now looks like a problem with the EdgeMarc router.

    Is this response time normal? I have attached two screenshots of the Cisco switch and the EdgeMarc router response times for the past 24 hours, according to the Kulvik tool. Any other advice would be greatly appreciated. Thank you.

    Hello Srinath,

    Thank you for participating in the Small Business Support Community. My name is Nico Muselle of Cisco Sofia HWC.

    The response time of the switch can be considered quite normal. The reason is that the switch gives CPU priority to its actual tasks - switching, access lists, VLANs, QoS, multicast, DHCP snooping and so on. As a result, the ping response time of the switch itself does not in any way reflect the proper operation of the switch.

    I invite you to try pinging clients connected to the switch; you should be able to notice that the response times to the clients are much lower than the response time of the switch itself.

    I hope that answers your question!

    Best regards

    Nico Muselle

    Senior Network Engineer - CCNA - CCNA security

  • vmkping cannot send packets larger than 504 bytes? igb driver broken?

    Hi all

    I have an HP ProLiant DL360e Gen8 server with 96 GB RAM installed.

    There are 4 x 1 Gbit NICs (igb driver used):

    ~ # esxcfg-nics -l
    Name    PCI            Driver  Link  Speed     Duplex  MAC Address        MTU   Description
    vmnic0  0000:02:00.00  igb     Up    1000Mbps  Full    38:63:bb:2c:a5:b8  1500  Intel Corporation I350 Gigabit Network Connection
    vmnic1  0000:02:00.01  igb     Up    1000Mbps  Full    38:63:bb:2c:a5:b9  9000  Intel Corporation I350 Gigabit Network Connection
    vmnic2  0000:02:00.02  igb     Down  0Mbps     Half    38:63:bb:2c:a5:ba  1500  Intel Corporation I350 Gigabit Network Connection
    vmnic3  0000:02:00.03  igb     Down  0Mbps     Half    38:63:bb:2c:a5:bb  1500  Intel Corporation I350 Gigabit Network Connection
    ~ #

    I use ESXi 5.5 U2:

    ~ # esxcli system version get

    Product: VMware ESXi

    Version: 5.5.0

    Build: Releasebuild-2718055

    Update: 2

    ~ #

    There are 2 local vSwitches:

    ~ # esxcfg-vswitch -l
    Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
    vSwitch0       2432       4           128               1500  vmnic0

      PortGroup Name  VLAN ID  Used Ports  Uplinks
      VM Network      0        0           vmnic0
      Mgmt            0        1           vmnic0

    Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
    vSwitch1       2432       4           128               9000  vmnic1

      PortGroup Name  VLAN ID  Used Ports  Uplinks
      san             950      1           vmnic1

    ~ #

    ~ # esxcfg-vmknic -l
    Interface  Port Group/DVPort/Opaque Network  IP Family  IP Address      Netmask        Broadcast      MAC Address        MTU   TSO MSS  Enabled  Type
    vmk0       mgmt                              IPv4       192.168.4.232   255.255.255.0  192.168.4.255  38:63:bb:2c:a5:bb  1500  65535    true     STATIC
    vmk1       san                               IPv4       172.25.50.232   255.255.255.0  172.25.50.255  00:50:56:67:28:d2  9000  65535    true     STATIC
    ~ #

    The VMkernel interface vmk1 is configured to connect to the NFS datastore (using the separate NIC vmnic1 and the separate VLAN 950). But there were timeout problems when trying to access the NFS datastore.

    What I realized is that I'm not able to ping the NFS datastore (and vice versa) with a packet size larger than 504 bytes:

    ~ # vmkping -I vmk1 -d 172.25.50.233
    PING 172.25.50.233 (172.25.50.233): 56 data bytes
    64 bytes from 172.25.50.233: icmp_seq=0 ttl=64 time=0.296 ms
    64 bytes from 172.25.50.233: icmp_seq=1 ttl=64 time=0.235 ms
    64 bytes from 172.25.50.233: icmp_seq=2 ttl=64 time=0.236 ms

    --- 172.25.50.233 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.235/0.256/0.296 ms

    ~ # vmkping -I vmk1 -s 504 -d 172.25.50.233
    PING 172.25.50.233 (172.25.50.233): 504 data bytes
    512 bytes from 172.25.50.233: icmp_seq=0 ttl=64 time=0.338 ms
    512 bytes from 172.25.50.233: icmp_seq=1 ttl=64 time=0.268 ms
    512 bytes from 172.25.50.233: icmp_seq=2 ttl=64 time=0.234 ms

    --- 172.25.50.233 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.234/0.280/0.338 ms

    ~ # vmkping -I vmk1 -s 505 -d 172.25.50.233
    PING 172.25.50.233 (172.25.50.233): 505 data bytes

    --- 172.25.50.233 ping statistics ---
    3 packets transmitted, 0 packets received, 100% packet loss

    ~ #

    What's more, I ran the pktcap-uw tool to look at the packets on the network interface.

    For:
    # vmkping -I vmk1 -c 1 -d 172.25.50.233

    the packets are visible:

    12:17:14.50741 [6] captured at EtherswitchDispath point, TSO not enabled, checksum not offloaded and not verified, VLAN tag 950, length 98.

    Segment[0] ---- 98 bytes:

    0x0000: 0050 5667 28d2 0cc4 7a18 3bd4 0800 4500
    0x0010: 0054 1e4e 4000 4001 5e57 ac19 32e9 ac19
    0x0020: 32e8 0000 e6af 4699 0000 555d ccca 0000
    0x0030: c58b 0809 0a0b 0c0d 0e0f 1011 1213 1415
    0x0040: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
    0x0050: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
    0x0060: 3637

    but for:

    # vmkping -I vmk1 -c 1 -s 505 -d 172.25.50.233

    there is nothing visible on the physical interface.

    The packet is visible at the vmk1 layer:

    ~ # pktcap-uw --vmk vmk1
    The name of the vmk is vmk1
    No server port specified, select 39635 as the port
    Output the packet info to console.
    Local CID 2
    Listen on port 39635
    Accept...Vsock connection from port 1028 cid 2

    12:19:52.182469 [1] captured at PortInput point, TSO not enabled, checksum not offloaded and not verified, length 547.

    Segment[0] ---- 547 bytes:

    0x0000: 0cc4 7a18 3bd4 0050 5667 28d2 0800 4500
    0x0010: 0215 1ac7 4000 4001 601d ac19 32e8 ac19
    0x0020: 32e9 0800 de9a b1a5 0000 555d cd68 0002
    0x0030: c87a 0809 0a0b 0c0d 0e0f 1011 1213 1415
    0x0040: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
    0x0050: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
    0x0060: 3637 3839 3a3b 3c3d 3e3f 4041 4243 4445
    0x0070: 4647 4849 4a4b 4c4d 4e4f 5051 5253 5455
    0x0080: 5657 5859 5a5b 5c5d 5e5f 6061 6263 6465
    0x0090: 6667 6869 6a6b 6c6d 6e6f 7071 7273 7475
    0x00a0: 7677 7879 7a7b 7c7d 7e7f 8081 8283 8485
    0x00b0: 8687 8889 8a8b 8c8d 8e8f 9091 9293 9495
    0x00c0: 9697 9899 9a9b 9c9d 9e9f a0a1 a2a3 a4a5
    0x00d0: a6a7 a8a9 aaab acad aeaf b0b1 b2b3 b4b5
    0x00e0: b6b7 b8b9 babb bcbd bebf c0c1 c2c3 c4c5
    0x00f0: c6c7 c8c9 cacb cccd cecf d0d1 d2d3 d4d5
    0x0100: d6d7 d8d9 dadb dcdd dedf e0e1 e2e3 e4e5
    0x0110: e6e7 e8e9 eaeb eced eeef f0f1 f2f3 f4f5
    0x0120: f6f7 f8f9 fafb fcfd feff 0001 0203 0405
    0x0130: 0607 0809 0a0b 0c0d 0e0f 1011 1213 1415
    0x0140: 1617 1819 1a1b 1c1d 1e1f 2021 2223 2425
    0x0150: 2627 2829 2a2b 2c2d 2e2f 3031 3233 3435
    0x0160: 3637 3839 3a3b 3c3d 3e3f 4041 4243 4445
    0x0170: 4647 4849 4a4b 4c4d 4e4f 5051 5253 5455
    0x0180: 5657 5859 5a5b 5c5d 5e5f 6061 6263 6465
    0x0190: 6667 6869 6a6b 6c6d 6e6f 7071 7273 7475
    0x01a0: 7677 7879 7a7b 7c7d 7e7f 8081 8283 8485
    0x01b0: 8687 8889 8a8b 8c8d 8e8f 9091 9293 9495
    0x01c0: 9697 9899 9a9b 9c9d 9e9f a0a1 a2a3 a4a5
    0x01d0: a6a7 a8a9 aaab acad aeaf b0b1 b2b3 b4b5
    0x01e0: b6b7 b8b9 babb bcbd bebf c0c1 c2c3 c4c5
    0x01f0: c6c7 c8c9 cacb cccd cecf d0d1 d2d3 d4d5
    0x0200: d6d7 d8d9 dadb dcdd dedf e0e1 e2e3 e4e5
    0x0210: e6e7 e8e9 eaeb eced eeef f0f1 f2f3 f4f5
    0x0220: f6f7 f8

    If I understand correctly, the packets are lost somewhere between the vmk1 and vmnic1 interfaces?

    I tried changing the physical interfaces, but the behavior is always the same.

    Is my hardware broken?

    Is the igb driver broken?

    Cheers

    Marek

    The problem is resolved.

    The issue was in the configuration of the HP switch (HP ProCurve 2530-24G). There was this line in the config:

    back-filter

    After turning that filter off (# no back-filter), everything started to work.

    Marek

  • Please validate my planned installation

    Preparing a migration + expansion; here is the list of what I have to work with:

    1. 3 x Dell R710 servers with 8 x GbE interfaces each (4 x BCM5716C on board, one 4 x Intel PRO/1000 expansion card)
    2. 2 x Dell 2950 servers with 4 x GbE interfaces each (2 x onboard, 2 x add-on cards)
    3. 2 x NetApp FAS2040 (one chassis with two clustered heads, 4 x GbE interfaces per head) with 1 x DS4243 disk tray (36 x 300 GB 15k disks total)
    4. 4 x 24-port gigabit switches (three are HP ProCurve E2510-24G, one is a 3Com 3824)
    5. 2 x FortiGate 200B (active/passive cluster)
    6. vSphere Essentials Plus 4.1 bundle

    The setup will run several sites with quite a high load on IIS + MS SQL (production and development), Exchange 2007 for about a hundred users, and SQL back ends for several internal applications. It resides in a colocation; all users are remote, and access is through public internet VPN links or site-to-site VPN. IIS runs on multiple load-balanced servers that use NetApp CIFS shares for shared storage.

    The current plan is:

    • Two switches are designated for application traffic, named LAN1 and LAN2
    • Two switches are designated for storage traffic, named SAN1 and SAN2
    • SAN1 and SAN2, define the following VLANs:
      • VLAN2 - NFS
      • VLAN3 - CIFS
      • VLAN21 - iSCSI1
      • VLAN22 - iSCSI2
    • The LAN1 and LAN2, define the following VLANs:
      • VLAN4 - VMotion
      • VLAN5 - DMZ1
      • VLAN6 - DMZ2
      • VLAN7 - DMZ3
      • VLAN8 - LAN
      • VLAN9 - management
    • On SAN1 and SAN2, trunk ports 23-24 on each, assign VLANs 2 and 3 tagged to the trunk, two cables between the switches (see the sketch after this list)
    • On LAN1 and LAN2, trunk ports 21-22 on each and assign VLAN 4 to that trunk; trunk ports 23-24 and assign VLANs 5-9 tagged to that trunk; four cables between the switches
    • On each NetApp head, configure the network as follows:
      • vif0 - single-mode vif on e0a and e0b
        • VLAN2 on vif0
        • VLAN3 on vif0
      • VLAN21 on e0c
      • VLAN22 on e0d
      • Plug e0a and e0c into SAN1, and e0b and e0d into SAN2
      • Assign e0c and e0d to a target portal
    • On each vSphere host, configure the network as follows:
      • vSwitch0 - vmnic0 and vmnic4, explicit failover order, vmnic0 active, vmnic4 standby
        This is something I'm not quite clear on - if I use two non-stacked switches with a link between them, do I have to use an explicit failover order with active/standby per switch, do I leave the default routing based on originating virtual port ID, or something else entirely? Can I, and do I need to, use beacon probing for failover detection?

        • Port group DMZ1, tagged 5
        • Port group DMZ2, tagged 6
        • Port group DMZ3, tagged 7
        • Port group LAN, tagged 8
        • VMkernel port Management, tagged 9
        • Port group Management, tagged 9
      • vSwitch1 - vmnic1 and vmnic5, explicit failover order, vmnic1 active, vmnic5 standby
        • VMkernel port VMotion, tagged 4
      • vSwitch2 - vmnic2 and vmnic6, explicit failover order, vmnic2 active, vmnic6 standby
        • VMkernel port NFS, tagged 2
        • Port group CIFS, tagged 3
      • vSwitch3 - vmnic3
        • Port group iSCSI1, tagged 21
      • vSwitch4 - vmnic7
        • Port group iSCSI2, tagged 22
    • Plug vmnic0 and vmnic1 into LAN1, vmnic2 and vmnic3 into SAN1, vmnic4 and vmnic5 into LAN2, and vmnic6 and vmnic7 into SAN2
    • FortiGate 1 connects to LAN1 and SAN1, FortiGate 2 plugs into LAN2 and SAN2; each trio shares a power circuit so that if a power feed goes down, a full path is left intact
    • vSphere uses NFS to access the VMDKs
    • Each VM that needs CIFS access gets a vNIC connected to the CIFS port group
    • Each VM that requires iSCSI access (SQL, Exchange) gets a vNIC connected to the iSCSI1 port group and a vNIC connected to the iSCSI2 port group; MCS is configured on both links
    • MSSQL is configured as a two-node, single-instance cluster, with the nodes kept on different vSphere hosts
    • The IIS hosts are grouped into web-server farms with nodes on different vSphere hosts; the FortiGate is used as a load balancer and SSL proxy
    • One 2950 server (with 6 x 2 TB SATA drives in a RAID5 configuration) runs vCenter, an SMB share for backups of VMDKs with PHD Virtual, and an SMB share to replicate the contents of the NetApp CIFS share with robocopy. The two onboard Broadcom NICs are configured with BACS3 in an active/standby SLB team and plug into LAN1 and LAN2; the two add-on Intel NICs do the same with SAN1 and SAN2
    • The other 2950 server (with 6 x 300 GB SAS drives in a RAID5 configuration) is a domain controller (two more virtual domain controllers are also present), runs an SQL Server instance used purely as a mirror target for production SQL, and serves MS iSCSI target LUNs for Exchange LCR; its networking is the same as vCenter's
    • The 2950 servers have DRAC5 cards, but the DRAC5 doesn't support VLAN tagging. I will try to configure it to use the onboard BCM5708 ports (NIC selection: shared with failover) and set the LAN1 and LAN2 ports assigned to these servers to both tagged and untagged VLAN9, but I don't know if that will work. If it doesn't, I can switch to the dedicated port, but that will reduce redundancy. Still, the DRAC is not a critical production function
    • The R710s running vSphere have the iDRAC6 Enterprise card, which supports VLAN tagging.
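
    A minimal ProCurve sketch of the SAN1/SAN2 interconnect trunk described in the plan above (the VLAN names and the trunk name trk1 are mine, not fixed by the plan):

    trunk 23-24 trk1 trunk
    vlan 2
       name "NFS"
       tagged trk1
       exit
    vlan 3
       name "CIFS"
       tagged trk1
       exit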

    Possible failures that I am accounting for:

    • If a power feed, a FortiGate and two switches drop, their counterparts pick up the load; CIFS and NFS connections are quickly restored and iSCSI loses one path; all servers, the filer chassis and the disk tray have dual power supplies, and each power circuit is enough to run all the equipment (120V/30A)
    • If a single switch or a FortiGate dies, same thing as above, just with more limited scope
    • If a port or a whole NIC dies, same thing
    • If a host dies, MSCS fails over the SQL instance, the FortiGate load balancer detects dead members and stops sending traffic their way, and VMware HA restarts the affected VMs on the two surviving hosts
    • If a filer head or IOM dies, the other head takes over and picks up the CIFS and NFS connections as well as the iSCSI sessions
    • If the whole storage eats itself for some reason, I have backups of all VMDKs valid to within the last 24 hours (PHD), a copy of all CIFS data valid to within the last six hours (robocopy) and a copy of all SQL and Exchange databases valid to within the last seconds (mirroring/LCR); returning to operations will take some time, mostly limited by the time needed to fix, or buy and install, new hardware
    • If the vCenter host dies, management is affected until I launch a new one, but production is not affected. I also lose my VMDK and file backups until they are recreated, but again, this does not affect production
    • If the physical DC host dies, I lose my SQL mirrors and Exchange LCR copies until it is replaced, but production is not affected (the virtual domain controllers are used)
    • If I need to cold-start the whole system (for example, after a facility failure), I bring up a host first, then vCenter and NetApp, then the physical vSphere DC, then all the virtual machines
    • If I need to reboot a host (patches, hardware maintenance, etc.), I use MSCS to move the running SQL instances, stop or migrate the domain controllers, and VMotion everything else
    • If I need to reload or replace an active switch, I use vif favor on the NetApp controllers and rearrange the active/standby adapters on VMware ESX before taking it down
    • If application data is deleted or damaged, it can be retrieved from NetApp snapshots (the FAS2040 was purchased with a full bundle, so I can use SMSQL, SMBR, FlexClone, SME, etc.)
    • One more point of failure I have not touched on yet: the facility provides only a single external power feed. There is a fifth switch used exclusively to get separation between that feed and the two FortiGate WAN ports, and that switch (and its power supply) is a single point of failure that can take down access to the entire cluster. There is no budget for dual redundant power (they want $1k+/month more for this feature), so this is a known risk. If it fails, the plan is to use 'remote hands' at $150/incident to restore connectivity via a different path (plug the power supply directly into a FortiGate, move the switch's feed to another PDU, or move the WAN network cables to another switch)

    There is no budget for another site, or for a tape library with offsite tape storage to protect against site failure; it is a known risk that management is aware of.

    This is based on a recommendation from the HA section of Duncan Epping's HA/DRS Technical Deepdive book. And also here:

    http://www.yellow-bricks.com/2011/03/22/ESXi-management-network-resiliency/

  • Open source iSCSI target: should I use IET or SCST?

    I recently got IET working with my ESXi vmhost, but while I was debugging my current setup I saw many posts recommending SCST over IET. Is that still the case? I know there is another one out there, but most people stay away from it due to poor performance compared to SCST & IET.

    Is SCST difficult to get running, and are there tricks I should know about in advance? (Example: IET really needs BLOCKIO vs FILEIO, and you really want to change the /sys/block/<device>/queue/scheduler to deadline vs cfq. I saw a HUGE jump in performance by changing that myself!)

    Finally, I use 6 GbE NICs on the SAN, with 5 of them in a bonded interface using ALB load balancing. Is it better to break up the NICs and assign them static IPs within the same subnet, so that my ESXi box using MPIO with multiple vmks can have multiple paths to the target?

    I read the excellent post from Chad here, btw! It has been extremely helpful, and if I ever meet the guy, I'll buy him a beer!

    Thanks for your help!

    -Jeff

    Just for laughs, here's my current setup:

    Switch: HP ProCurve 2810-24G. Flow control & jumbo frames - unfortunately, as most have already found, not at the same time. Currently using only flow control on the SAN VLAN ports.

    vmhost: ESXi 4.1 using 4 GbE NICs, with 3 of them used for the SAN. I created 1 vSwitch with 3 vmks as Chad describes. I use Round Robin for my PSP with type=bytes, bytes=11. I got the best performance with this versus Round Robin with type=iops and iops=1. Currently have 3 paths to the target. With IET, I saw throughput higher than 210 MBps with two Win7 guest VMs running HD Tune at the same time.

    SAN: using an Adaptec RAID 3805 card with 8 x 500 GB Samsung HDs. The OS is OpenSUSE 11.2 with the iscsitarget software (IET; dunno the version). 6 GbE NICs, with 5 of them in a bonded interface. Seriously, I'm wondering about the additional latency of the bonded interface if my vmhost boxes using MPIO can do round-robin over the iSCSI initiators!

    I really haven't done a lot of work with NIC bonding on a Linux target. I don't think any real advantage is gained in this scenario from having NICs bonded to load-balance traffic, since that logic should be handled by the ESX host where MPIO is implemented.
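
    For what it's worth, the round-robin tuning Jeff mentions is set per device on ESX(i) 4.x; a minimal sketch, where the naa.* identifier is a placeholder you would take from esxcli nmp device list:

    # esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --type "bytes" --bytes 11
    # esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --type "iops" --iops 1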
