NICs for private interconnect redundancy

Version of DB/grid: 11.2.0.2
Platform: AIX 6.1

We will install a 2-node RAC on AIX (that thing that is almost as good as Solaris).

Our primary private interconnect is:
### Primary Private Interconnect
169.21.204.1      scnuprd186-privt1.mvtrs.net  scnuprd186-privt1 
169.21.204.4      scnuprd187-privt1.mvtrs.net  scnuprd187-privt1
For cluster interconnect redundancy, the Unix team has attached an additional NIC to each node, connected to an additional Gigabit Ethernet switch for these cards.



### Redundant Private Interconnect attached to the servers
 
169.21.204.2      scnuprd186-privt2.mvtrs.net  scnuprd186-privt2  # Node1's newly attached redundant NIC
169.21.204.5      scnuprd187-privt2.mvtrs.net  scnuprd187-privt2  # Node2's newly attached redundant NIC
Example borrowed from the post of citizen2


Apparently, I have two ways to implement cluster interconnect redundancy:

Option 1. NIC bonding at the OS level
Option 2. Let the Grid Infrastructure software handle it

Question 1. Which is better: Option 1 or Option 2?

Question 2.
Regarding Option 2:
From Googling and OTN, I believe that during the Grid installation you just provide the 169.21.204.0 subnet for the cluster interconnect, and Grid will identify the redundant NIC and switch. And if something goes wrong with the primary interconnect (above), Grid will automatically reroute the interconnect traffic over the redundant NIC configuration. Is this correct?
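
For reference, here is how I plan to verify that after the install (a rough sketch; the grid home path and the en1/en2 interface names are assumptions, not our actual device names):

# Which interfaces Grid Infrastructure knows about, and how they are classified:
/u01/app/11.2.0/grid/bin/oifcfg getif
# expected to show both private NICs registered against 169.21.204.0, roughly:
#   en1  169.21.204.0  global  cluster_interconnect
#   en2  169.21.204.0  global  cluster_interconnect

# HAIP places link-local 169.254.x.x addresses on the private NICs; after a
# NIC or switch failure they should relocate to the surviving interface:
ifconfig -a | grep 169.254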


Question 3.
My colleague told me that unless I set up some multicast configuration (AIX-specific) on the redundant Gigabit switch, I might get errors during installation. He was not clear about what exactly it was. Has anyone run into a multicast-related issue with this?

Hello

My recommendation is to use AIX EtherChannel.
AIX EtherChannel is much more powerful and stable than HAIP.

See how to configure AIX EtherChannel on 10 Gigabit Ethernet interfaces
http://levipereira.wordpress.com/2011/01/26/setting-up-ibm-power-systems-10-gigabit-ethernet-ports-and-aix-6-1-etherchannel-for-oracle-rac-private-interconnectivity/
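
Roughly, the command-line equivalent of a Network Interface Backup EtherChannel looks like this (a sketch only, with assumptions: ent1/ent2 are the two private adapters, the new EtherChannel comes up as ent3/en3, and the addressing follows the question above; smitty etherchannel walks you through the same attributes):

# Create the EtherChannel in backup (NIB) mode: ent1 active, ent2 as backup
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent1 -a backup_adapter=ent2 \
      -a netaddr=169.21.204.4    # example ping target used for failure detection

# Put the private interconnect address on the resulting en3 interface
chdev -l en3 -a netaddr=169.21.204.1 -a netmask=255.255.255.0 -a state=up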

If you choose to use HAIP, I recommend reading this note and looking up all the notes on HAIP bugs on AIX.

11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip (Doc ID 1210883.1)

ASM Crashes as HAIP Does Not Fail Over When Two or More Private Networks Fail (Doc ID 1323995.1)

On multicast, read:
Grid Infrastructure 11.2.0.2 Installation or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)
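
As far as I remember, that note also provides a small test script (mcasttest.pl) you can run before the installation; usage is roughly as below (syntax from memory and the node/interface names are placeholders, so check the note for the exact options):

# test multicast on 230.0.1.0 / 224.0.0.251 across both nodes' private interfaces
perl mcasttest.pl -n scnuprd186,scnuprd187 -i en1,en2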

Kind regards
Levi Pereira

Tags: Database

Similar Questions

  • Could not find an appropriate interface for the private interconnect

    I'm going to install RAC 11.2.0.2 on two Red Hat Linux nodes.

    I ran runcluvfy for 'pre crsinst', and it gave me a warning as follows:
    Could not find an appropriate interface for the private interconnect.

    I have it set up to use a different subnet for the private interconnect than for the public network, and the private IP addresses are also defined in /etc/hosts. The private IP addresses are not defined in DNS.

    ./runInstaller also does not offer the Ethernet interface as a choice for the private interconnect network.

    What should I do to solve this problem?

    Thanks in advance.

    Can you temporarily change the netmask to 255.255.255.0 on eth3 and see if the OUI recognizes eth3?
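
    A sketch of that check (node1/node2 and the 10.0.0.1 address are placeholders for your own private hostnames and IP; the netmask change is only temporary):

    # temporarily reconfigure eth3 with a /24 mask
    ifconfig eth3 10.0.0.1 netmask 255.255.255.0 up

    # re-run the checks to see whether cluvfy / the OUI now pick eth3 up
    ./runcluvfy.sh comp nodecon -n node1,node2 -verbose
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose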

  • Configuration of NIC for ESXi 4.1

    Just wondering what others do for their NIC configuration for ESXi 4.1. I want to use the following:

    pNIC0-> vSwitch0-> Portgroup0 (management)

    pNIC1-> vSwitch0-> Portgroup1 (VMotion)

    pNIC2-> vSwitch1-> Portgroup2 (the VM network)

    pNIC3-> vSwitch1-> Portgroup2 (the VM network)

    pNIC4-> vSwitch2-> Portgroup3 (DMZ network)

    pNIC5-> vSwitch2-> Portgroup3 (DMZ network)

    I don't have that much DMZ traffic, so I'm wondering if I should use one of the DMZ NICs for VMotion or management instead.

    Mike

    It looks good to me; you have redundancy built in.  The only thing we do, and maybe you do the same, is set the NICs on vSwitch0 to active/standby for each other.  The other vSwitches are set to active/active.
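
    A rough sketch of the layout above using the ESX 4.x command-line tools (assuming vmnic0-vmnic5 map to pNIC0-pNIC5; the active/standby NIC order on vSwitch0 is set per port group in the vSphere Client and is not shown here):

    esxcfg-vswitch -a vSwitch0                 # management + VMotion
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "Management" vSwitch0
    esxcfg-vswitch -A "VMotion" vSwitch0

    esxcfg-vswitch -a vSwitch1                 # VM network
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VM Network" vSwitch1

    esxcfg-vswitch -a vSwitch2                 # DMZ network
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2
    esxcfg-vswitch -A "DMZ Network" vSwitch2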

  • Oracle 10g RAC - private interconnect on a non-routable private VLAN

    In our data center there is an existing Oracle 10g RAC configured with a private interconnect VLAN, managed by a different DBA group.

    We are creating a new, separate Oracle 10g RAC environment to support our application.

    When we discussed setting up a private VLAN for our RAC interconnect with our data center people, they suggested using the same private VLAN used by the other existing Oracle RAC configurations. In that case our interconnect IPs will be on the same subnet as the other Oracle RAC configurations.

    For example, if
    RAC1 with 2 nodes uses 192.168.1.1 and 192.168.1.2 in VLAN_1 for the interconnect, they want us to use the same VLAN_1 with interconnect IPs 192.168.1.3 and 192.168.1.4 for our 2-node RAC.

    Is sharing the same subnet on the same private VLAN for the interconnects of different RAC configurations supported?
    Will it cause a drop in performance? It means the interconnect IPs of one RAC configuration can be pinged from the other RAC configuration.

    Has anyone come across such a setup?

    Could not find any info on it on Metalink.

    Thank you

    Yes.
    It is practically very doable... as you would have only 4 machines in the IP subnet... and that is far less than a public subnet, which is what we should keep away from the interconnect.

  • IPCCX 4.05 and/or Call Mgr 4.13: multiple LDAP servers for redundancy

    We run IPCCX 4.05 in high availability (active/standby) and Call Manager 4.13 Pub/Sub. In this configuration, we use AD LDAP for authentication instead of DC Directory (not my choice... things you inherit in life).

    Can the Call Manager and/or the IPCCX servers be set up to point to multiple LDAP servers for redundancy?

    Can CM 4.13 and/or IPCCX 4.05 support LDAPS (as I said, things you inherit)?

    Our sysadmin team took down our main DC server, and with it all the LDAP lookup functions broke. Needless to say, they will be setting up LDAP or LDAPS on our main and backup DCs in the near future.

    Any information/suggestions/recommendations are appreciated.

    Thank you

    -Scott

    Hello

    This IS possible.

    If the CRS admin web interface (/appadmin) is available:

    1. Log in.

    2. Go to System > LDAP Information.

    3. Type the FQDNs / IP addresses (I recommend the latter) of the LDAP servers, separated by commas (for example, in our lab I have something like: "ldapserver.domain.as, 10.1.1.1" - works like a charm).

    4. A window will appear asking whether the LDAP information should be created or whether you just want to add another LDAP server (~ configuration already there). Choose wisely :-)

    5. Restart the server. No, restarting the CRS engine is not enough.

    If the CRS admin web interface is not available (~ as you said, Mr. Sysadmin took down the DC backend), then there's a chance to get rid of this guy ;-) Anyway, there is still a chance you can make it work. Of course, the LDAP server must already contain the appropriate configuration.

    1. Connect to the CRS server using rdesktop/VNC.

    2. Look for this file: C:\Program Files\wfavvid\properties\directory.properties - it's just a plain text file. Look for this: CCNIniFile=c:\\winnt\\system32\\ccn\\ccndir.ini

    In fact, it can be something else too; this is just the default path.

    3. This file contains the information we are looking for: LDAPURL 'ldap://10.1.1.1:389, ldap://10.1.1.2:389' and other important things like passwords and the base DN.

    Change it according to your needs. :-)

    4. Restart the server.

    Good luck.

    G.

  • How to connect a WLC 5508 to 2 switches for redundancy

    Hi, I'm planning an implementation of a WLC 5508, but the documentation is not clear on how to connect the WLC 5508 to two chassis for redundancy. I do this with the WLC 4404, where I have the option of choosing a primary and secondary port for the management interface. I understand that on the 5508 the management interface is not mapped to any port.

    You can still set the primary/secondary ports as long as you have not enabled LAG. If you are trying to use two different switches then you should not have LAG enabled anyway (unless it is a 3750 stack or VSS).

    You can also set up several AP-managers.  By default, the management interface is an AP-manager, but it will not act as the AP-manager on the backup port if you have an AP-manager defined for that port...   Make sense?

    Just think of the 5508 ports as being defined like a 4400's, except that you don't have to have an AP-manager for each port, unless you want one...

  • Siebel connector for OPA Private Cloud 12.1.0

    What is the expected release date of the Siebel connector for OPA Private Cloud 12.1.0?

    The plan is for comparable features to be available in future releases of Siebel. Note that since the mapping is built directly into OPA 12.x, a separate connector may no longer be necessary.

    More information on this approach will be available with the Siebel 15.4 release.

    Davin.

  • IPA files for a private distribution app

    Is there a special way to build the IPA files for a privately distributed app?

    If you mean you want to distribute your app via iTunes, then you need an enterprise DPS account and an enterprise iTunes developer account to do private distribution.

    Neil

  • Redundancy of the RAC private interconnect

    Hi all

    We are creating (the implementation will come later) a 2-node RAC database, with GI version 12.1.0.2 and RDBMS software version 11.2.0.4.

    We want to make the private interconnect redundant, but the sysadmin doesn't have two links of the same bandwidth; he is giving us two NICs of 10GbE and 1GbE respectively.

    I understand that 1GbE alone is sufficient for GES and GCS traffic, but will this architecture work fine? That is, is there any harm in having two links of different bandwidth, other than the performance degradation if the 10GbE interface fails?

    Thank you

    Hemant.

    DO NOT use two different network bandwidths for your cluster interconnect. With two physical network adapters, you will use either NIC bonding or HAIP, the latter being recommended by Oracle Corp. since you are using 12c. In both cases, traffic will flow over both NICs. In other words, part of the private network traffic will be "slower" than the rest of the traffic. You run the risk of having some performance issues with this configuration.

    Also... there are two reasons for implementing multiple cluster interconnect NICs: performance and high availability. I addressed performance above. On the HA side, dual NICs mean that if one link fails, the other link is still available and the cluster can remain operational. There is a law of the universe that says if you have 10GbE on one side and 1GbE on the other, there is a 99% chance that if one link fails, it will be the 10GbE one.   This means you won't have enough bandwidth on the remaining link.

    See you soon,

    Brian

  • Oracle RAC private interconnect problem

    Hello

    We set up Oracle RAC on RHEL 6.5.

    Our reference document is "Deploying Oracle RAC 11g R2 Database on Red Hat Enterprise Linux 6", version 1.0, from the Red Hat Reference Architecture Series.

    According to the document:

    Two subnets are used for the interconnect.

    Two physical switches are used.

    rac.png

    My question: I see that there is a connection between the two switches. What is the purpose of this link?

    Please show me an example configuration of Private Switch A and Private Switch B.

    Thank you.

    Sajeeva.

    Because it uses the Redundant Interconnect feature (as the HAIP function is officially called), I would say the networks need to be in different subnets. So, going by the diagram above, we will end up with the following configuration:

    eth1 on each server runs in the same subnet:

    Server1 - 10.0.1.1

    Server2 - 10.0.1.2

    eth2 on each server runs in the same subnet:

    Server1 - 10.0.2.1

    Server2 - 10.0.2.2

    FAQ on Highly Available IP (HAIP) for release 11.2 (Doc ID 1664291.1):

    "Each NIC is defined as a cluster interconnect (using oifcfg setif) on a given node will have a static ip address assigned to it and each cluster NIC interconnection on a given node must be on a single subnet."

    The only advantage I see with HAIP is that all your interfaces are active at all times, whereas with bonding you almost always end up with an active/passive configuration.
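
    As a sketch, registering both private subnets from the example above would look something like this (run as the grid owner from one node; the eth1/eth2 names and subnets are taken from the example, not necessarily your interfaces):

    oifcfg setif -global eth1/10.0.1.0:cluster_interconnect
    oifcfg setif -global eth2/10.0.2.0:cluster_interconnect
    oifcfg getif    # verify both subnets now show up as cluster_interconnect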

    Kind regards

    EVS

  • Dell MD3200i: correct setup of system IPs and NICs for faster network speeds

    Hi all

    I have been working on this project for a while now. I have done a lot of research and have contacted Dell several times. I need help.

    I have 2 sets of identical servers to use for Hyper-V clusters. We bought a Dell MD3200i in November; we also bought a Dell 5424 switch and a Cisco SLM2048 switch. These servers have 2 Realtek network cards for iSCSI, with jumbo frames set to 9K.

    The Dell MD3200i has two SAN controllers. The Dell 5424 is the iSCSI-optimized switch and the Cisco is a 48-port switch. iSCSI is on its own network with nothing else connected to it. (There is also an Iomega NAS and a Promise VTrak SAN that the current domain runs on, also using iSCSI.)

    Dell MD3200i:
    Controller 0 Port 0 - 192.168.130.101    Controller 1 Port 0 - 192.168.131.101
    Controller 0 Port 1 - 192.168.130.102    Controller 1 Port 1 - 192.168.131.102
    Controller 0 Port 2 - 192.168.130.103    Controller 1 Port 2 - 192.168.131.103
    Controller 0 Port 3 - 192.168.130.103    Controller 1 Port 3 - 192.168.131.104

    The Cisco switch has one VLAN (#1) on it, with the 130-subnet devices in it. There are 8 connections to the switch: 4 from controller 0 and 1 from each of the servers. It has an IP of 192.168.132.4, but that switch IP doesn't make a difference. Jumbo frames are enabled.

    The Dell switch has 2 VLANs: #1 is for various things such as the domain controller, the VTrak, and the NAS. VLAN 2 is the 131 subnet and has a total of 8 connections in it: 4 to controller 1 and 1 from each of the servers. Jumbo frames are enabled. The switch IP address is 192.168.132.2.

    Server groups: each server node runs Windows 2008 R2 SP1 and Hyper-V, with clustered virtual servers / storage on them, and 2 Realtek NICs with jumbo frames set to 9K. ProdNode1 and ProdNode2 have 12-core i7 CPUs with 12 GB of memory; EXNode1 and ExNode2 have 12-core i7 CPUs with 24 GB of memory.

    ProdNode1 iSCSI-1 192.168.130.110 ProdNode1 iSCSI-2 192.168.131.110

    ProdNode2 iSCSI-1 192.168.130.120 ProdNode2 iSCSI-2 192.168.131.120

    ExNode1 iSCSI-1 192.168.130.130 ExNode1 iSCSI-2 192.168.131.130

    ExNode2 iSCSI-1 192.168.130.140 ExNode2 iSCSI-2 192.168.131.140

    The iSCSI discovery portal entries on ProdNode1 pair 192.168.130.110 with 192.168.130.101 and 192.168.131.110 with 192.168.131.101; each of the other servers' discovery portals are paired like this.

    On the iSCSI Targets tab I clicked Add Session to create 8 MPIO sessions, one to each of the ports on the Dell MD3200i. For example, I added one session to 192.168.130.101 using: Add Session --> enable multipath --> Advanced --> local iSCSI adapter --> initiator IP 192.168.130.110 and target 192.168.131.101. This was repeated on all servers, 4 connections for each NIC.

    The network is very slow. It hovers around 2 to 4 MB/min and it should be much faster. Initially we had only the Cisco switch with only 1 VLAN and everything connected directly to it. I read somewhere that MPIO had been the problem, so I went through the same configuration but did not check the MPIO checkbox above. Once this was done, the network speed went up to 1.5 GB/min, but the Dell modular storage was not happy, the virtual servers would fail, and the connection was not very stable.

    What is wrong with my setup? Can I cable the servers directly to the PowerVault, and will that make it stable and still allow for failover and NIC redundancy? I need to get this stable so I can move on to something else.

    Thank you Ron

    You will need to change your IP scheme to this:

    rbramble
    EX-Node1 iSCSI-1 192.168.130.130
    EX-Node1 iSCSI-2 192.168.131.130
    EX-Node2 iSCSI-1 192.168.132.140
    EX-Node2 iSCSI-2 192.168.133.140
    Prod-Node1 iSCSI-1 192.168.130.10
    Prod-Node1 iSCSI-2 192.168.131.10
    Prod-Node2 iSCSI-1 192.168.132.20
    Prod-Node2 iSCSI-2 192.168.133.20
    Reference Dell MD3200i
    Controller 0 Port 0 - 192.168.130.101 Jumbo Frames 9K
    Controller 0 Port 1 - 192.168.131.101 Jumbo Frames 9K
    Controller 0 Port 2 - 192.168.132.101 Jumbo Frames 9K
    Controller 0 Port 3 - 192.168.133.101 Jumbo Frames 9K
    Controller 1 Port 0 - 192.168.130.102 Jumbo Frames 9K
    Controller 1 Port 1 - 192.168.131.102 Jumbo Frames 9K
    Controller 1 Port 2 - 192.168.132.102 Jumbo Frames 9K
    Controller 1 Port 3 - 192.168.133.102 Jumbo Frames 9K

    Your iSCSI sessions will be:

    For EX-Node1:

    -192.168.130.130 to 192.168.130.101

    -192.168.130.130 to 192.168.130.102

    -192.168.131.130 to 192.168.131.101

    -192.168.131.130 to 192.168.131.102

    For EX-Node2:

    -192.168.132.140 to 192.168.132.101

    -192.168.132.140 to 192.168.132.102

    -192.168.133.140 to 192.168.133.101

    -192.168.133.140 to 192.168.133.102

    For Prod-Node1:

    -192.168.130.10 to 192.168.130.101

    -192.168.130.10 to 192.168.130.102

    -192.168.131.10 to 192.168.131.101

    -192.168.131.10 to 192.168.131.102

    For Prod-Node2:

    -192.168.132.20 to 192.168.132.101

    -192.168.132.20 to 192.168.132.102

    -192.168.133.20 to 192.168.133.101

    -192.168.133.20 to 192.168.133.102
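
    Not part of the re-addressing above, but a quick sanity check worth doing afterwards: verify that 9K jumbo frames really pass end-to-end with a do-not-fragment ping from each node to its array ports (8972 bytes of payload = 9000 minus the IP/ICMP headers):

    ping -f -l 8972 192.168.130.101
    ping -f -l 8972 192.168.131.101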

  • Cisco ISE 3395 NIC teaming/redundancy

    Is it possible to implement NIC teaming on a 3395? I see that it is available on the SNS 3400 series; however, I was unable to locate any information about NIC teaming for redundancy purposes on the 3395. Is this feature supported, and if so, how would I go about enabling it correctly? Thank you very much in advance for the help.

    Hello. For now, ISE does not support NIC teaming/bonding of any kind. It has been asked for several times, so I hope Cisco will implement it in a future version.

    Thank you for rating helpful posts!

  • Unplugged the private interconnect cable but the machine did not restart

    Dear all,

    I have a RAC Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64-bit database on Linux.

    I had a strange experience with my private interconnect between the 2 nodes. When I tested disconnecting the private interconnect link on one of the nodes, the machine did not restart. But when I checked the cluster log, it says one of the nodes was restarted.

    2013-10-07 12:34:18.570

    [cssd(7565)]CRS-1612:Network communication with node centaurus22 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.360 seconds

    2013-10-07 12:34:25.572

    [cssd(7565)]CRS-1611:Network communication with node centaurus22 (2) missing for 75% of timeout interval.  Removal of this node from cluster in 7.360 seconds

    2013-10-07 12:34:30.573

    [cssd(7565)]CRS-1610:Network communication with node centaurus22 (2) missing for 90% of timeout interval.  Removal of this node from cluster in 2.360 seconds

    2013-10-07 12:34:32.935

    [cssd(7565)]CRS-1607:Node centaurus22 is being evicted in cluster incarnation 272740834; details at (:CSSNM00007:) in /opt/app/11.2.0/grid/log/centaurus21/cssd/ocssd.log.

    2013-10-07 12:34:34.937

    [cssd(7565)]CRS-1625:Node centaurus22, number 2, was manually shut down

    2013-10-07 12:34:34.952

    [cssd(7565)]CRS-1601:CSSD Reconfiguration complete. Active nodes are centaurus21.

    2013-10-07 12:34:34.965

    [crsd(8720)]CRS-5504:Node down event reported for node 'centaurus22'.

    2013-10-07 12:34:36.427

    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'Generic'.

    2013-10-07 12:34:36.428

    [crsd(8720)]CRS-2773:Server 'centaurus22' has been removed from pool 'ora.SASDB'.

    2013-10-07 18:46:28.633

    Have you ever faced this problem?

    Your help would be greatly appreciated.

    Thank you

    Kind regards

    Moussa Hanafie

    Rebootless node fencing was introduced in Grid Infrastructure 11.2.0.2: instead of restarting the node as in pre-11.2.0.2 when an eviction happens, it will try to stop GI gracefully on the evicted node to avoid restarting the node.

    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Trivadis_oracle_clusterware_node_fencing_v.PDF
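
    Not related to the fencing change itself, but the countdown in those CRS-1612/1611/1610 messages comes from the CSS misscount (30 seconds by default on Linux); a quick way to confirm the value, assuming $GRID_HOME points at your grid installation:

    $GRID_HOME/bin/crsctl get css misscount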

  • NIC teaming with port trunks to support an ESX vSphere 4 host with several NICs for load balancing across an HP ProCurve 2810-24G

    We are trying to increase the throughput of our ESX host.

    ESX 4 with 6 NICs connected to HP ProCurve 2810-24G ports 2, 4, 6, 8, 10 and 12.

    The teaming settings on ESX are rather easy to enable; however, we do not know how to configure the HP switch to support the above connections.

    Could someone please help with a few examples of how to set up the HP switch?

    Help will be greatly appreciated, as we keep losing RDP sessions through disconnects.

    Best regards, Hendrik

    Disabling spanning-tree on the ProCurve ports connected to the ESX host will promote faster port recovery. Likewise, running global spanning tree is not recommended if you mix iSCSI and data VLANs in the same fabric (i.e. you do not want an STP event to hang storage IO). If you need spanning tree on your switches, look at PVST (or ProCurve BPMH) to isolate STP events to individual VLANs.

    As far as load balancing goes, the default algorithm (route based on originating port ID) requires less overhead on the ESX hosts.  You cannot use LACP to the ProCurve, given ESX's lack of LACP support. You would have to use "route based on IP hash" on the ESX side and static trunks on the ProCurve side. Unless you have specific reasons why your network load needs this configuration, I'd caution against it for the following reasons:

    (1) IP hash requires deeper packet inspection by the ESX host, increasing CPU load as packet load increases;

    (2) the static configuration creates a rigid and critical mapping between physical switch ports and ESX host ports. Also, port groups will fail over together, since the ProCurves stack for management only and won't form 802.3ad trunk groups across switches (i.e. all ports of a trunk group must be connected to a single switch) - this isn't a limitation of port ID routing;

    (3) K.I.S.S. - stick with port ID. With a mix of port ID, beacon probing and failover port assignments you will get coarse traffic segregation without sacrificing redundancy - even across switches.

    I hope this helps!

    -Collin C. MacMillan

    SOLORI - Oriented Solution, LLC

    http://blog.solori.net

    If you find this information useful, please award points for "correct" or "helpful".

  • NIC assignments for FT: where is KB 1011966?

    I'm on a quest to find KB 1011966, as referenced in the VMware vSphere 4 Fault Tolerance: Architecture and Performance white paper (see excerpt below).  Anyone know where I can find it?  I searched vmware.com without success.

    Anyone care to join me in this quest?

    2.6 NIC assignments for logging traffic

    FT generates two types of network traffic:

    • Migration traffic to create the secondary virtual machine

    • FT logging traffic

    Migration traffic happens on the NIC designated for VMotion, and it causes network bandwidth utilization to spike for a short period.

    Separate, dedicated NICs are recommended for FT logging and VMotion traffic, especially when several FT virtual machines reside on the same host machine. Sharing the same NIC for both FT logging and VMotion can affect the performance of the FT virtual machines whenever a secondary VM is created for another FT pair or a VMotion operation happens for any other reason.

    VMware vSwitch networking lets you send VMotion and FT traffic to separate NICs, while keeping them redundant, using NIC failover order. See KB article 1011966 for more information.

    The KB is now here: http://kb.vmware.com/kb/1011966

    Rick Blythe

    Social media specialist

    VMware Inc.

    http://twitter.com/vmwarecares
