Load balancing vs. failover servers in OAM

I have three Identity Servers. I can do one of the following from the WebPass:

(1) I can point to two primary Identity Servers and the last one as a failover.
(2) - or - I can point to all three Identity Servers as primary servers.

To test whether failover works for option #2, I brought down one of the Identity Servers and the WebPass still worked. So, what is the advantage of creating a failover server? Failover is effectively already covered when all servers are used to balance the load. Am I missing something?

Thank you!
Kabi

Both techniques are valid ways to provide redundancy.

Multiple primary servers are a way of distributing load and increasing capacity (scaling out).

Adding failover servers is generally about ensuring that you can maintain a minimum service capacity if you start to lose primary resources. I suggest you read up on the 'failover threshold' setting.

It is possible that you run with more capacity than you need and do not configure any failover resources at all. It is also possible, in a geographically dispersed deployment, to cross-configure failover to the resources of the remote sites. It is a very flexible system.
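
As a rough illustration of the difference (the hostnames, port and threshold value below are hypothetical, not taken from the post above):

# Option 1 - two primaries share the load, one server is held back for failover
Primary Identity Servers : idsrv1:6022, idsrv2:6022
Failover Identity Servers: idsrv3:6022
Failover threshold       : 2   # fall back to idsrv3 once fewer than 2 primaries respond

# Option 2 - all three servers are primaries and share the load, no failover pool
Primary Identity Servers : idsrv1:6022, idsrv2:6022, idsrv3:6022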

Hope this adds useful information to your thoughts.

Mark

Tags: Fusion Middleware

Similar Questions

  • Load balanced OAM servers

    I have 2 OAM instances, set up on 2 different machines, working together in a cluster. When I created an 11g webgate to protect oamhost1.mycompany.com:7777/index.html and both OAM servers are up, I'm redirected correctly to authenticate on oamhost1 and it works perfectly. When I take down oamserver1, the redirect fails, I get an error page, and it still redirects me to the (now stopped) oamhost1. But when I manually replace oamhost1.mycompany.com with oamhost2.mycompany.com in the address bar, authentication works properly.

    So the problem is not that the system stops working altogether, but rather that there is no failover of the redirect URL. That is to say, the webgate will always redirect to oamhost1 for authentication even when it is down, even though a redirect to oamhost2 would authenticate fine.

    Does anyone know how I can fix this minor but annoying issue?

    You will need to put a load balancer in front (using OHS or any web server, or a hardware load balancer) and load-balance the 2 OAM servers on port 14100. Example: MatchExpression /oam* WebLogicCluster=node1.oam.com:14100,node2.oam.com:14100 (a fuller sketch follows at the end of this reply).

    After that you will need to change the "OAM Server Host" value under "System Configuration -> Common Settings" and set it to the URL of your load balancer.

    Once the above changes are made, when you hit your protected site you will be redirected to the OAM load balancer URL which, in case of a failure of one OAM node, will fail over to the other node.
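
    For reference, a minimal mod_wl_ohs.conf sketch of the load-balancing entry described above (hostnames and ports are the same placeholders used in this reply, not values from a real deployment):

    <Location /oam>
      SetHandler weblogic-handler
      WebLogicCluster node1.oam.com:14100,node2.oam.com:14100
    </Location>

    The MatchExpression form quoted above is equivalent; either way, restart OHS and then update the OAM Server Host/Port under Common Settings to point at the load balancer.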

  • OAM traffic redirections

    Hello

    We have three OAM 11gR1 servers on separate hardware in a single cluster (z1, z2, z3) and four OHS 11g webgates (v1, v2, v3, v4). We need traffic from webgates v1, v2 and v3 to go to OAM servers z1 and z2, and traffic from v4 to go to z3.

    All four webgates are under one domain, xyz.company.com.

    How can we achieve this?

    In addition,

    Can we have two OAM agents (webgate1 and webgate2) in the OAM console with the same preferred host (xyz.company.com)? If so, how do we make this configuration?

    Thank you

    It is technically possible to achieve the above scenario with a single webgate profile.

    "inactiveReconfigPeriod" is the parameter set by the user in the webgate profile. This profile decides the frequency at which webgate configuration is updated.

    Set 'inactiveReconfigPeriod' to -2, so that the webgate configuration is not updated automatically.

    Go to the OHS host for webgate v4 and locate the ObAccessClient.xml file. Update ObAccessClient.xml so that it contains only OAM server z3 (see the sketch at the end of this reply).

    Monitor connections to make sure that webgate v4 only connects to OAM server z3.

    The downside of this solution is that if you make changes to the webgate configuration you must manually copy the ObAccessClient.xml file to each web server.

    I hope this helps.
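
    For illustration only, the relevant fragment of the edited ObAccessClient.xml would look roughly like this (the element names are from a typical generated file and the host/port values are placeholders; compare with your own artifact rather than copying this verbatim, and remove the entries that point at z1 and z2):

    <ValNameList xmlns="http://www.oblix.com/" ListName="primaryServer1">
      <NameValPair ParamName="host" Value="z3.company.com"/>
      <NameValPair ParamName="port" Value="5575"/>
      <NameValPair ParamName="numOfConnections" Value="1"/>
    </ValNameList>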

  • Parameters of the OAM Agents

    Hi Experts,

    I'm new to OAM11g.
    While browsing the OAM product documentation, I came across the OAM Agent configuration settings. I'm a bit confused between two parameters:
    1. Max Connections
    2. Primary Server List & Secondary Server List: Max Number of Connections

    If I configure the Max Number of Connections values as:

    Primary Server List: Max Number of Connections: 5
    Secondary Server List: Max Number of Connections: 8

    then "" Max Connections"" value should be greater than 13 or 13. Correct me if I'm wrong. Please give a few examples to better understand.


    Thanks in advance

    Hi Sandy,

    Max Connections is the maximum number of connections the WebGate will open across all OAM servers at any one time; the Primary Server List & Secondary Server List "Max Number of Connections" is the maximum number of connections the WebGate will open to each individual OAM server specified in that list.

    In your example:

    Primary Server List: Max Number of Connections: 5
    Secondary Server List: Max Number of Connections: 8

    You must "Max Connections" the value 8, because the WebGate is connected either to primary or secondary, not both at the same time. If you had defined 3 primary OAM servers each with 5 connections and 3 secondary servers with 8 connect, "Max Connections" should be set to 24 so that the WebGate to the benefit of the full contingent of connections to the secondary servers.

    You can check the number of connections using netstat (example command at the end of this reply) - you may notice one or two connections beyond the numbers specified in the oamconsole, which the WebGate creates for management purposes.

    Kind regards
    Colin
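
    A quick way to run the netstat check Colin mentions (5575 here is a placeholder; substitute the OAM proxy/OAP port your WebGate actually uses):

    # count established OAP connections from this web server to the OAM servers
    netstat -an | grep 5575 | grep ESTABLISHED | wc -l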

  • vSwitch best practices

    guys,

    I need some help with vSwitches. While reading up on vSwitches, I still can't work out the best practice.

    Can I use more than one vSwitch when I have more than one VM with the same role (say, 2 Exchange servers on the same vSwitch)? Or should I use separate vSwitches to split VMs with different roles?

    Thanks much for the reply.

    See also: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

    Thread has been moved to the vNetwork area.

  • How to reconfigure an OHS 11g WebGate with OAM 11g?

    Hi all

    Can you please let me know your opinion on the scenario below?

    1. I set up an OHS 11g WebGate with OAM 11g using a single primary server. The WebGate works very well.
    2. Later on, I created a new OAM server with a different proxy port and want to add it as a secondary server to the OHS 11g webgate. My thought is: go to the OAM admin console and change the agent profile to add the secondary server. Is that enough to complete the job? By the way, ObAccessClient.xml is not updated in the RREG_HOME/output artifacts folder. If it is updated automatically after changing the details in the OAM console, then I can just copy it to the WebGate instance.

    The same question arises for a 10g WebGate with OAM 11g. Is it also possible to reconfigure the webgate in the same way as with OAM 10g and 10g webgates?

    -Mango

    Hi Mango,

    You only need to make the change in the oamconsole (change the agent profile as you suggest) and you do not need to re-copy the ObAccessClient.xml file. You may need to wait a few minutes for the change to be picked up by the WebGate, or at worst restart the web server so it acquires the new settings. The webgate diagnostic URL will tell you which OAM servers the WebGate is connected to (http://server:port/ohs/modules/webgate.cgi?progid=1 for an 11g WebGate; see the example at the end of this reply).

    Kind regards
    Colin
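
    A quick command-line check of the diagnostic URL mentioned above (host and port are placeholders for your own OHS instance):

    curl -s "http://webhost.example.com:7777/ohs/modules/webgate.cgi?progid=1"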

  • Outgoing mail server for Apple ID address not working

    I cannot send (but can receive) mail from my Apple ID address on my MacBook Pro. It's a pain because I have to keep switching servers when I try to send. I can switch to another outgoing server to get mail out, but why can't the address have its own Mac outgoing mail server? It has one for incoming. I think it started after I upgraded to El Capitan, but I am not sure.

    Hey studiowriting,

    If I understand correctly, you are having difficulty sending mail from the Mail application in macOS. It sounds like you have had to change servers in order to send. I recommend you read this article; it may help solve the problem.

    If you cannot send or receive e-mail on your Mac.

    You can also use this link to find the settings for your e-mail account
    Search for e-mail settings

    Thank you for using Apple Support Communities. Have a good day.

  • Reference Dell M1000e Blade Server

    Hi all

    We have a Dell M1000e blade chassis with 13 servers, 3 full-height and 10 half-height. The I/O modules are: A1, A2, B1 and B2 are MXL blade switches, and C1 and C2 are Brocade Fibre Channel switches.

    The half-height servers are PowerEdge M630 blade servers, with:

    1. QLogic 57810-K dual-port 10Gb

    2. QLogic 57810-K dual-port 10Gb KR Mezz card for M-Series blade servers

    3. Emulex LPM16002 16Gbps Fibre Channel I/O card

    The full-height blades are PowerEdge M820, with:

    1. Broadcom 57810-K dual-port 10Gb KR

    2. Emulex LPM16002 16Gbps Fibre Channel I/O card

    The problem is that, at first power-on, we received this message for the full-height servers only:

    "A fabric mismatch detected for B1 Dell Blade mezzanine card"

    and the full-height servers do not power up; they just keep flashing red.

    What is the problem?

    It looks like the Emulex card on the M820 may be installed in the wrong mezzanine slot. Mezzanine card slots 1 and 3 support fabric C. Do you know which slot it is currently installed in? Here is some additional info on removal and installation of the mezzanine cards.

    http://Dell.to/2bZ7Ohl

  • WebLogic domain instrumentation when starting from the WebLogic Console

    Hi people,

    I have a question about instrumenting a WebLogic domain with two clusters on two servers, so that the application servers are instrumented when they are started from the WebLogic console (I'm not terribly familiar with WebLogic, so go easy on me).

    The manual is 'almost' clear about instrumenting a stand-alone WebLogic Server or one started via the WebLogic Node Manager, but it does not mention anything about instrumenting WebLogic so that the instrumentation is picked up when the Console is used to stop/start servers.

    The environment includes:

    - A WebLogic domain administration server running on server1, started with this script:

    /bin/startWebLogic.sh

    - Two NodeManagers, one each on server1 and server2, started by this script on each server:

    /bin/startNodeManager.sh

    - Also two WebLogic managed server startup scripts on each server:

    /bin/startManagedWebLogic.sh

    /bin/start_server1.sh

    So far we have instrumented /bin/startManagedWebLogic.sh with:

    QUEST_DEPLOYMENT_DIRECTORY=/foglight/Quest_Software/Foglight_Agent_Manager/agents/JavaEE
    if [ -f "$QUEST_DEPLOYMENT_DIRECTORY/integrate.sh" ]
    then
        QUEST_JAVA_ENV_OPTS=WEBLOGIC:SERVER
        . "$QUEST_DEPLOYMENT_DIRECTORY/integrate.sh"
    else
        echo "Java EE agent not activated"
    fi

    and instrumented /bin/startNodeManager.sh with:

    QUEST_DEPLOYMENT_DIRECTORY=/foglight/Quest_Software/Foglight_Agent_Manager/agents/JavaEE
    if [ -f "$QUEST_DEPLOYMENT_DIRECTORY/integrate.sh" ]
    then
        QUEST_JAVA_ENV_OPTS=WEBLOGIC:NODEAGENT
        . "$QUEST_DEPLOYMENT_DIRECTORY/integrate.sh"
    else
        echo "Java EE agent not activated"
    fi

    This setup works fine as long as the Node Manager and the servers are started from the command line, but nothing is instrumented if they are restarted through the WebLogic Console.

    Questions:

    - which of the scripts above (or all of them) have to be modified, and how, so that WebLogic Server starts instrumented when started from the Console?

    - what am I missing? What else needs to be done so that servers restarted from the Console come up instrumented?

    Thank you

    Ovi,

    When you start the servers using the console, are they not being started by the Node Manager? If they are, the instrumentation on the Node Manager should do the trick. If they are NOT started using the Node Manager, then how do they get their JVM settings?

    I have seen cases in which the server startup settings were not going through the Node Manager; they were taken from the JVM options in the WebLogic server administration console. If you can find that place (roughly the same area where you set your memory settings), you can manually add the flags that get the agent running. You can see the parameters when you start WebLogic from the script (-Xbootclasspath..., -javaagent...); you can just copy and paste those lines and hardcode them into the admin console's Java (JVM) settings for those servers (see the command sketch at the end of this reply).

    Hope this helps.

    Golan
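
    One way to capture the exact flags integrate.sh injects, so they can be pasted into the console's Java/JVM arguments as Golan suggests, is to inspect a managed server that was started from the instrumented script (the grep pattern is an assumption; adjust it to your agent's jar names):

    # print the java command line of the instrumented server, one option per line
    ps -ef | grep java | tr ' ' '\n' | grep -Ei 'javaagent|bootclasspath|quest'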

  • 10GBase-T or SFP +?

    Hello, we want to migrate our data center to 10G, and I'm dealing with the first decision: 10GBASE-T or SFP+. I understand this decision will affect the switches, servers and SAN (iSCSI) and their interoperability. Right?

    My first step is to choose a 10G switch; does it have to support both SFP+ and 10GBASE-T?

    Thank you

    See the Intel Romley launch; Dell has launched 3 Intel 10GBase-T cards, including the X540 NIC, LOM and NDC.

    These should be available soon through Dell and all other providers.

    VJ

  • SNMP management.

    Hello world

    I have a question: how do I add a device to an SNMP community?

    I mean the community already exists and is already being monitored from a PC (monitoring tests are being run ahead of a possible implementation), and there are several devices already being monitored, but we intend to add other devices to monitoring, which makes me wonder: how can new devices be added? What do I have to configure - any ACL?

    Greetings.

    Johnnantan,

    The SNMP community string is what allows access to monitor the device. For a company I generally recommend that the community strings be the same on all devices and allow read-only access. You can have several different software platforms monitoring the devices at the same time. I usually do not define read-write community strings, or if I do, I use completely different strings and ACLs for RW.

    The limiting factor is on the monitoring device: how many devices it can keep track of, and how much information it requests from the monitored equipment.

    If you want to control which SNMP monitors can query your routers/switches/servers, you can apply ACLs to the community strings so that SNMP queries are answered only for specific devices, and only if they present the correct string (a configuration sketch follows at the end of this reply). If you log access-list failures, you can go back and find out what else is scanning your devices.

    Do not forget that SNMP queries take processing power, so the more devices monitoring a router, and the shorter the interval between requests, the greater the performance impact on the router or switch.

    Hope this helps.
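
    A minimal Cisco IOS sketch of the ACL-restricted read-only community described above (the community string, ACL number and monitoring-station address are placeholders):

    ! allow SNMP only from the monitoring station; log anything else that tries
    access-list 10 permit host 192.0.2.50
    access-list 10 deny   any log
    ! read-only community tied to that ACL
    snmp-server community MyR0string RO 10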

  • Buffalo Linkstation does not connect to SLM2024

    I recently installed a Gigabit SLM2024 switch. We have a mix of devices on the switch - servers, computers, printers, wireless access points - and all work OK. We also have four Buffalo LinkStations with Gigabit Ethernet. Each LinkStation has a DHCP reservation, but when one is connected to the SLM2024 no IP address is issued and the LinkStation is not reachable because it falls back to its default IP address. When the LinkStations are connected to a Netgear switch they work as expected. The SLM2024 was purchased to replace the Netgear because the Netgear has only 2 gigabit ports.

    I have not made any changes to the SLM2024 configuration. Any ideas?

    Hi Richard,

    Do all four Buffalo LinkStations show the same symptoms when connected to your SLM2024, or does just one device have the connectivity problem?

    The second screenshot that you pasted shows ZERO Ethernet frames being received from the Buffalo LinkStation. That is not good.

    If all four Buffalo LinkStations fail when connected to the SLM2024, I think there is probably a problem with how the Ethernet link auto-negotiates between the switch and the NAS.

    You have most likely already checked with Buffalo for a software update for the LinkStation; I noticed that Buffalo released new software for the LinkStation a little over a month ago. If you do not have the new software on the Buffalo LinkStation, now is the perfect time to update the code.

    It would also be useful to take the Buffalo LinkStation off your switch and plug it into another device, perhaps the old Netgear switch.

    It is also possible, based on what I saw in your first screenshot, to turn off Ethernet auto-negotiation (or uncheck auto-negotiation on your switch) and hard-set the port speed to 1000 Full Duplex on both your switch port and the Buffalo LinkStation, then see whether connectivity works. Make sure you save any changes on your switch.

    So, steps we can try:

    1. Look for LinkStation software updates and apply them.

    2. If that doesn't get the port's received counters incrementing, turn off Ethernet auto-negotiation on both your switch and the Buffalo LinkStation and see if packets flow.

    [Note: I've highlighted the word 'both' to indicate that if Ethernet auto-negotiation is turned off at one end of an Ethernet link, it should be disabled at the other end of the link as well.]

    Best regards, Dave

  • Where to install vCenter

    Hi all

    I have been put in charge of implementing a Dell VRTX (a 5U consolidated chassis with two server blades, shared storage and an internal switch). I installed ESXi on the two server blades; my question is where do I install vCenter and vSphere to get the environment operational? Do I need another physical server for this?

    Any input would be appreciated.

    Thank you!

    Install vSphere ESXi on the blade servers, connect to the ESXi hosts using the vSphere Client (installed on your office desktop) and create a virtual machine for vCenter Server (Windows version), or deploy the vCenter Server Appliance.

    After you install or deploy vCenter, add the two ESXi hosts to vCenter... and, depending on your vSphere edition, create the cluster and configure HA/DRS.

    Since you have a small environment, you don't need an additional physical server just to act as the vCenter server.

  • Virtual machines can ping throughout the network, but nothing can ping them

    In more detail: we run ESXi 5.1 on our host servers. We then run the vSphere Client on one of our client machines, which reaches the host through a Cisco 3750 switch. So it goes: client machine, Cisco switch, then the host servers. The host server has 2 NICs going to the switch; NIC0 goes to a trunk port and NIC1 goes to an access port. In vSphere we have created a virtual switch with both physical NICs attached. For our management network (the VMkernel port) we use VLAN ID All (4095) with NIC0 as the active adapter. Then we have a "Server" network, the Virtual Machine port group, with VLAN ID 30 (our internal VLAN) and NIC1 as the active adapter. With this in place we can ping from our virtual servers to our default gateway and then anywhere else in the network, but nothing can ping the virtual machines. We tried a traceroute to the VM, but all it shows is: VM NIC, default gateway, and then destination... Ask for more information if you need it.

    Thank you

    Tim.

    Hi and welcome to communities,

    What guest operating system are your virtual machines currently running? If it is Windows Server, there is a chance that network discovery is simply disabled. That will drop the ICMP packets and you may not be able to ping them.

    Is the 3750 port for NIC1 configured as a trunk carrying VLAN ID 30? (A sample switch port configuration follows at the end of this reply.)

    Your Portgroup is configured with VLAN ID 30.

    Inside the guest virtual machines you did not configure any VLAN tagging, right?

    Tim
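
    If the NIC1 uplink turns out to be the problem, a hypothetical 3750 port configuration that carries the tagged VLAN 30 traffic for the "Server" port group would look like this (the interface number is a placeholder):

    interface GigabitEthernet1/0/10
     description Uplink to ESXi NIC1 (VM port group, VLAN 30)
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 30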

  • Experience with View 5.2 and Composer 3.0 upgrade

    I'm planning an upgrade from View 5.1.3 to Horizon View 5.2 for two View installations on vSphere 5.1 (ESXi 5.1, vCenter 5.1).

    Looking at the Horizon View 5.2 upgrade compatibility matrix (http://pubs.vmware.com/view-52/topic/com.vmware.view.upgrade.doc/GUID-E9BB81A4-1054-4BBB-A43B-CB60A6EF909E.html), it says that View Composer 3.0 is not supported with Connection Server 5.2.

    In addition, http://pubs.vmware.com/view-52/topic/com.vmware.view.upgrade.doc/GUID-504805D9-6645-438B-B3C5-F56EF6E38E77.html says: "During an upgrade, Horizon View does not support View Composer provisioning and maintenance operations, View Transfer Server publishing operations, or local mode operations. Operations such as provisioning and recomposing linked-clone desktops, checking desktops in or out, and publishing View Composer base images are not supported during the transition period while any Horizon View servers are still running the previous version. You can perform these operations successfully only when all instances of View Connection Server, View Transfer Server and View Composer have been upgraded."

    On http://pubs.vmware.com/view-52/topic/com.vmware.view.upgrade.doc/GUID-AF61903D-E162-4B30-8102-FEC4D14809C3.html it says: "During the first maintenance window, you will upgrade View Composer. Operations such as provisioning and recomposing linked-clone desktops and publishing View Composer base images are not supported until all Horizon View servers are upgraded", and then "In your next maintenance window, continue with the Horizon View upgrade".

    I need to know exactly what this means for running Connection Server 5.2 with Composer 3.0 temporarily between maintenance windows. I understand this isn't a "supported" combination, but does it still work?

    Here is my situation - I have two separate View environments, Prod and DR, with entirely separate sets of Connection Servers, security servers, vCenters and Composer servers. Each vCenter has its own Composer: one Composer server for the Prod vCenter and another Composer for the DR vCenter, and both Composer/vCenter pairs are configured in both environments. This has been working well, but can be confusing during upgrades.

    My plan is to upgrade the DR View environment first, which includes the DR Composer server that works with the DR vCenter. However, the Prod environment will still be on Connection Server 5.1.3 and will need to use the newly upgraded 5.2 Composer in DR to provision and maintain desktops in both environments.

    The DR View upgrade will take place all at once; that is, Composer, Connection Servers and security servers will all be done the same day.

    However, the Prod View upgrade will take place almost a week later. We will switch users from connecting to desktops managed by Prod to those managed by DR halfway through the week (load balancing via Microsoft NLB, with incoming connections switched to use the DR Connection Servers).

    So what I need to know is: will Prod on 5.1.3 fail to perform any Composer operations against the DR vCenter once the DR Composer is 5.2?

    Conversely, will DR on 5.2 fail to perform any Composer operations against the Prod vCenter while the Prod Composer is still 3.0 (not yet upgraded)?

    If the Composer operations really won't happen at all (failing with errors in its logs or something), then I need to spin up a few more servers and install Composer on them.

    > So what I need to know is: will Prod on 5.1.3 fail to perform any Composer operations against the DR vCenter once the DR Composer is 5.2?

    > Conversely, will DR on 5.2 fail to perform any Composer operations against the Prod vCenter while the Prod Composer is still 3.0 (not yet upgraded)?

    Yes, on both counts; the versions need to match, so you will have to set up additional standalone Composer server instances, or upgrade everything at the same time, as you will not be able to share a Composer across mixed versions of View Connection Server.
