OBIEE 11g high availability/failover using two servers

Hi all

I have two Red Hat Linux 64-bit servers.
By using these two servers, I want to achieve high availability and failover.

I have installed OBIEE on the first server (primary node, using the Enterprise install type) and scaled out the BI domain onto the second server (secondary node).

Now, if the primary node goes down for any reason (for example, a failure), will the Admin Server on the secondary node become active?
My goal is to provide a system with minimal downtime.

Please advise.

Thank you
Nitin Aggarwal

Make sure the EM parameter virtualize = true is defined.

This means that if the Admin Server is no longer available, the BI servers will still be available to all users.

http://docs.Oracle.com/CD/E21764_01/bi.1111/e10543/authentication.htm#BIESC6075
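As a rough illustration (the service-instance and provider names below are assumptions for a typical install, not taken from this thread), the virtualize flag set through EM ends up as a property on the identity-store service instance in DOMAIN_HOME/config/fmwconfig/jps-config.xml, roughly like:

```xml
<!-- Sketch only: the serviceInstance name and provider vary per installation -->
<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <!-- Enables identity virtualization, so authentication keeps working
       against the configured identity stores even when the Admin Server
       (and its embedded LDAP) is unavailable -->
  <property name="virtualize" value="true"/>
</serviceInstance>
```

In practice the flag is added through EM (WebLogic Domain > Security > Security Provider Configuration > Identity Store custom properties) rather than by editing the file by hand.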

Tags: Business Intelligence

Similar Questions

  • Two WLC 5508 anchors with high availability

    Hello.

    Is it possible to use two WLC 5508s as anchors in an active/active scenario?

    For example, if one WLC goes down, would the other keep providing service to the anchored guest clients?

    At the moment we have just one WLC 5508 in anchor mode. What do I have to configure for high availability of the anchor?

    Thank you very much!!!

    You can have redundant WLCs as anchor points, but if an anchor fails, the user must reconnect.

    There is an HA feature on the WLC, but it is mainly for foreign-WLC redundancy, not anchor redundancy. With several guest anchors, the foreign WLCs balance the load between the two. You will not be able to designate one as primary and one as backup.

    Sent from Cisco Technical Support iPhone App

  • Using two 1552 bridges for high availability

    Switches: 3560

    Routers: 2811

    APs: 1552e

    We have four 1552 APs that we would like to install in a point-to-point environment. Currently they run in lightweight mode, but we would make them autonomous for this scenario, unless there is a feature in the controller that would achieve what we need that I don't know about.

    We would like to deploy them in pairs on each side, with the first goal being high availability, so that if one goes down there is still connectivity. Our second objective would be to use both APs for higher bandwidth.

    I thought I'd try to configure a port channel, but from what I've read you can't do that; I would like to know if I'm wrong. I saw where someone else tried the same thing in the forums and there were some responses, but the links don't work for me.

    http://www.Cisco.com/en/us/partner/products/HW/wireless/ps5279/products_tech_note09186a0080736199.shtml (link doesn't work for me)

    I'm just looking to be pointed in the best possible direction. It seems that routing should take care of what we need, but I would like to see if there are any other avenues.

    Thanks in advance.

    The best thing you can do is configure each P2P connection as a trunk and use per-VLAN spanning-tree priority to balance traffic between the two links. You can't really bundle them, because the bridge that sends the traffic expects the return traffic to come back through it. This is the best way to use the two bridges; it is how it has been done in the past, and it still works quite well.
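    As a minimal sketch of that idea (the interface names and VLAN IDs are assumptions for illustration, not from the thread), per-VLAN spanning-tree cost on the two trunk uplinks of a 3560 can steer each VLAN over a preferred bridge pair while the other link remains a standby path:

    ```
    ! Sketch only - adjust interfaces and VLANs to your topology
    interface GigabitEthernet0/1
     description Trunk toward bridge pair 1
     switchport trunk encapsulation dot1q
     switchport mode trunk
     spanning-tree vlan 10 cost 10     ! prefer this link for VLAN 10
     spanning-tree vlan 20 cost 100
    !
    interface GigabitEthernet0/2
     description Trunk toward bridge pair 2
     switchport trunk encapsulation dot1q
     switchport mode trunk
     spanning-tree vlan 10 cost 100
     spanning-tree vlan 20 cost 10     ! prefer this link for VLAN 20
    ```

    If either link fails, spanning tree reconverges and both VLANs flow over the surviving link, which gives the availability goal; the per-VLAN split gives rough load sharing in normal operation.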

    Thank you

    Scott

    Help others by using the rating system and marking answered questions as 'Answered'.

  • OBIEE 11.1.1.7 integration in Workspace 11.1.2.4 (high availability)

    Hi people,

    I used John Goodwin's method to integrate the two environments in my DEV environment, where there is no high-availability configuration, and it works perfectly. However, in my TST environment, which has a high-availability installation, OBIEE SSO does not work.

    Does anyone have a success story with a high-availability setup?

    Thank you

    Fixed the problem: it was the virtualize = true parameter... but strangely, this same setting was not there when I installed the DEV environment, and DEV works very well.

  • OBIEE 11.1.1.5 high availability

    I have a server with the DB and OBIEE 11.1.1.5.

    My client requires a high-availability solution.

    Is there something built into the platform for this, or do I have to develop a mechanism myself?

    Also, should I build a mechanism to synchronize the Oracle DB that OBIEE uses for its configuration data, as well as the scheduler jobs?

    OBIEE does have built-in capability for high availability:

    http://docs.Oracle.com/CD/E25054_01/core.1111/e10106/bi.htm

    Rittman Mead Consulting » Blog Archive » OBIEE 11gR1: development, management, clustering and high availability

  • High availability with two 5508 WLAN controllers?

    Hi all

    We are considering implementing a new wireless solution based on the Cisco WLC 5508 and 1262N access points. We intend to buy about 30 access points and have two options: either buy one WLC 5508-50 or, for redundancy, two 5508-25 controllers.

    Is it possible to configure two WLC 5508s as a high-availability solution, so that all access points are distributed across the two WLCs and, if one WLC breaks, the other one takes over and manages all the APs?

    If we have 30 access points and one of the two WLC 5508-25s breaks, of course only 25 of the 30 access points can be managed by the remaining one. Is there some sort of control to choose which access points must be managed and which not?

    How does such a configuration look in general? Is the implementation of a two-controller installation quite complex, or simple?

    Thank you!

    Michael

    Hi Michael,

    Don't forget that the 5508 works with a licensing system. The hardware can support up to 500 APs, but it depends on the license that you put in.

    I think two 5508s with a 25-AP license will be more expensive than one 5508 with a 50-AP license.

    If you have two WLCs, it is best NOT to spread the access points between the WLCs. Doing so increases roaming complexity (the WLCs have to hand off clients to each other all the time). If your point was to gain speed, it really doesn't matter, as the 5508 can have up to 8 Gbit/s of uplink capacity and has the CPU capacity to handle 50 APs with no problems at all. So I find it best to have all the access points on one WLC; then if something goes wrong, all the APs migrate to the other WLC anyway.

    If you want 50 APs to fail over to a 25-AP WLC, then yes, you can select which ones will join. The APs have a priority system, so you assign priorities. If the WLC is at full capacity but higher-priority APs are trying to join, it will kick off lower-priority APs to allow the high-priority ones to connect.

    WLCs don't do "HA" exactly. It's just that you can have two WLCs working together (as if you had 700 APs and needed two WLCs) and sharing the clients, or all the APs sit on one WLC and, when it breaks down, they join the other available controller.

    The only thing to do is to put both WLCs in the same mobility group so that they know each other.
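    A minimal CLI sketch of those two pieces (the controller names, AP name, MAC, and IPs below are placeholders, not from the thread): adding the peer controller to the mobility group, and enabling AP failover priority so critical APs win the seats on a smaller standby controller:

    ```
    > config mobility group member add 00:1a:2b:3c:4d:5e 10.0.0.2   (peer WLC mgmt MAC and IP)
    > config network ap-priority enable                             (honor AP priorities on failover)
    > config ap priority 4 CRITICAL-AP-01                           (4 = critical, joins first)
    > config ap primary-base WLC1 CRITICAL-AP-01 10.0.0.1           (preferred controller for this AP)
    ```

    With ap-priority enabled, a full controller will disconnect lower-priority APs to make room for higher-priority ones trying to join after a failover.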

  • SNS with high availability option

    I need help understanding the network and connectivity requirements for deploying Cisco Show and Share with the high availability option, where the DC and DR are in different geographical locations.

    As I understand it, to achieve high availability, the DMM and SnS servers require L2 connectivity between the primary and the secondary server. How can this be achieved in a scenario where the two data centers are connected by WAN/MAN links?

    Thank you.

    Chaitanya Datar

    + 91 99225 83636

    Hi Datar,

    I already asked the TAC this question, and unfortunately it is clear that today the HA mode, with the two servers connected back-to-back via Ethernet2, REQUIRES them to be on the same L2 network.

    It is not yet based on the IP routing layer, and therefore this design is not supported when using remote data centers...

    :-(

    It's a shame, I know, and I have pointed it out to the SnS BU.

    Maybe a future version will work over an L3 routed IP network...

    Wait and see.

    Hervé Collignon

    Converged Communications Manager

    Dimension Data Luxembourg.

  • NAC Manager high availability: peer CAM DEAD

    Hello

    I have two NAC Managers in high availability, and I used the eth1 interface on both sides as the heartbeat link.

    I performed the following steps for high availability:

    (1) Synchronized the time between the two CAMs.

    (2) Generated a temporary SSL certificate on each CAM and performed the export-import procedure on the other.

    (3) Made one CAM the primary and the other the secondary.

    But after all this configuration, under Monitoring > Reports I can see that the primary CAM is up on both servers and the redundant CAM is down.

    Also, on the failover tab, I can see: Local CAM - OK [Active], and peer CAM - DEAD.

    I have attached some screenshots so that you can see the same.

    Your help will be much appreciated.

    Thank you

    Try these steps and check that all steps were followed:

    http://www.Cisco.com/c/en/us/support/docs/security/NAC-appliance-clean-access/99945-NAC-cam-HA.html

  • Cisco Fabric Interconnect high availability

    Hi Experts,

    I would like to know,

    (1) What are the Fabric Interconnect cluster ports (L1, L2) used for in high availability?

    (2) Can we form a cluster between a 6100 and a 6200 Fabric Interconnect? If so, how do they exchange state, since the two are completely different hardware?

    Thanks in advance,

    Jako Raj

    Hello

    (1) The L1 and L2 ports are used for management purposes only, basically to synchronize management data and to trigger a failover when needed.

    (2) Different hardware platforms are supported in the same cluster only during a hardware upgrade process. What actually happens in the meantime is that the configuration is copied over and the new FI is promoted to primary. Although it works as a cluster, it is an active-standby cluster, where a single FI acts as the primary.

    Kind regards

  • VPN high availability: dual VPN3k at the hub and PIX as spokes

    Hi Experts.

    In my scenario, I need routing between the spokes and, above all, high availability (HA).

    At the spokes, I have PIX 501/506E, OS ver 6.3. At the hub, I have a pair of redundant VPN3ks.

    Which mechanism is best:

    1 - Hub-and-spoke topology with EzVPN Remote at the spokes - for HA, can I take advantage of the "load balancing" feature of the VPN3k?

    2 - Hub-and-spoke topology with EzVPN Remote at the spokes - for HA, can I take advantage of the "backup server" feature of the VPN3k?

    3 - Any-to-any topology (an IPsec tunnel between any pair of sites) - for HA, can I take advantage of the "LAN-to-LAN backup" feature of the VPN3k?

    Thank you

    Michele

    I'd go with load balancing over the backup-server option. With load balancing, your connections will be spread over the two hubs. If a hub dies, then at least only half of your connections will be affected, rather than all of them, as would happen if your primary died while using the primary/backup server approach.

    If a hub dies, your PIX connections will be dropped for a short period, but they will be able to reconnect automatically without you making any changes.

  • ACS 5.2 software high availability

    Hello

    Can I configure a primary/backup setup with the ACS 5.2 software on VMware? Will replication / high availability work just as if I were using two appliances? I plan to install two ACS 5.2 software instances on different VMware machines.

    Regards,

    Joe

    Hey Joe,

    Yes, it is possible. Make sure that you meet the requirements listed in this document:

    http://www.Cisco.com/en/us/docs/net_mgmt/cisco_secure_access_control_system/5.2/installation/guide/csacs_vmware.html

    Thank you

    Serge

  • ACS high availability

    Hello

    I have the ACS Solution Engine. Currently, it is connected to the switch using a single network adapter. For high availability on the switch side, I want to use the second network card, connecting it to the second core switch, so that if connectivity to one core switch breaks, ACS will be accessible via the second switch.

    The network card's IP address is currently 192.168.200.14/24.

    How do I configure the second network adapter on the ACS in order to achieve high availability?

    Hello

    You cannot use the second network card on the ACS.

    The following link mentions: "ACS supports operating one Ethernet connector, but not both connectors."

    http://www.Cisco.com/en/us/docs/net_mgmt/cisco_secure_access_control_server_for_solution_engine/4.2/installation/guide/solution_engine/ovrvuap.html#wp1053900

    I hope this helps.

    Kind regards

    Anisha

    P.S.: Please mark this message as answered if you feel that your query is resolved. Rate the useful messages.

  • ODI 11g high availability features

    Hello

    Is there a link to any official documentation describing how ODI can be configured for high availability and, more specifically, describing how agents and repositories behave in case of failure?

    I have seen the notes on how to configure load balancing between two or more agents.

    However, I would like to know what happens if an agent fails:

    • Is there a monitoring process that traps the failure and tries to restart the agent?
    • In that case, will the ODI load balancer correctly direct traffic only to the available agents?
    • What happens if the agent fails in the middle of a step of a scenario execution?
    • Will the database session be lost and rolled back?
    • Will the scenario stop, or will the remaining available agents take the running process through to completion?

    Similar questions apply to the repository itself: if the master or work repository database fails, what happens? Assuming the repository is installed in a multi-node cluster environment and only one node fails, will every running scenario fail (and need to be restarted), or is there a transparent failover to the remaining nodes in the cluster hosting the ODI repositories?

    I have a requirement where the ETL (ODI) system must be available 99.5% of the year; resilience is therefore a key factor, and I need to understand the capabilities.

    I need to know whether high availability in ODI simply means that a process is restarted if a node or agent fails, or whether an existing process will continue to completion thanks to a transparent failover.

    Cheers,

    John

    Take a look at the link below:

    http://www.rittmanmead.com/2012/03/deploying-ODI-11g-agents-for-high-availability-and-load-balancing/

    I hope this gives you a clear picture of the topic.

    Apart from this, we use EM to monitor the status of the agent.

    Cheers!

    SH! going

  • vSphere high availability with 2 hosts

    Hi all

    I am trying to achieve the following using the Essentials Plus package. I have two identical machines with 2 TB of storage each. Both are running ESXi, and I want to configure high availability and load balancing. From what I've read, VMware needs shared storage, some kind of SAN, if I want to use high availability. But I don't have such a machine. What I want to achieve is the following: VMware mirrors the disks on both machines (network RAID 1), and if a hardware failure occurs on one of them, the other machine starts all the affected VMs. Likewise, VMware should do load balancing. From what I've read on the internet, VMware used to have VSA, but it has been discontinued. VSAN is not an option, because I don't have 3 machines. One possible option would be, for example, StarWind, but I would have preferred an option from VMware itself.

    So, in short: is there a way to configure high availability and load balancing with 2 hosts and no shared storage? Preferably without third-party software.

    Kind regards

    Sebastian Wehkamp

    Technically, you can use DRBD and Linux (or HAST and FreeBSD, if you're on the dark side of the Moon) to create a fault-tolerant replicated block device between a pair of virtual machines running on the hypervisor nodes. Throw in an NFSv4 failover mount point on top, and you have a nice VMware VM datastore. Tons of software-defined storage vendors have exactly this design inside their virtual storage appliances, if you don't mind the do-it-yourself difference.

    However, as StarWind Virtual SAN is FREE for a 2-node installation in the VMware scenario (Hyper-V licensing is different, if you care) and can run on a free Hyper-V Server (no need to pay for Windows licenses), you CAN end up with a faster path going the StarWind route. It depends on what you want from the feature set (iSCSI? NFS? SMB3? Inline dedupe? Cache?) and which forum you prefer for asking public support questions.
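    As a rough sketch of the DRBD half of that idea (the hostnames, backing disks, and IPs are placeholders, not from the thread), a two-node synchronously replicated resource defined in /etc/drbd.d/r0.res could look like:

    ```
    # Sketch only - hostnames, backing disks, and IPs are assumptions
    resource r0 {
        protocol C;                      # synchronous replication: write acked on both nodes
        on storage-vm-1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;         # local backing disk
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on storage-vm-2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }
    ```

    The /dev/drbd0 device on the current primary is then exported (e.g. over NFS) as the VMware datastore; note that with only two nodes and no third-party witness, a network partition can still produce the split-brain the reply below mentions.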

    --

    Thank you for your response. I thought that since the hardware is exactly the same, the secondary machine would be able to easily start the lost VMs. For the synchronization, I was thinking of something like DRBD: when it can no longer reach the other host, the secondary becomes primary. This works in the case of a hardware failure; in the case of a network failure, a split-brain will occur that must be resolved manually. I will play with the replication idea and reconsider the StarWind software.

  • vSphere high availability 'Election' fails at 99% with 'operation timed out' on 1 of the 2 hosts

    Hello

    We had a system with one ESXi 5.1 host with local disks.

    Now we are adding redundancy by adding an ESXi 5.5 U2 host and a vCenter 5.5 appliance.

    After installing everything and adding it to vCenter, we upgraded the ESXi 5.1 host to ESXi 5.5 U2. The SAN works correctly (vMotion works on its own network card).

    Now, if I try to activate high availability, both servers install the HA agent and start the "Election".

    All four datastores on the SAN are chosen for the HA heartbeat; the isolation response is the default "keep powered on".

    One server will always finish this process, and the other will keep "electing" until it reaches 100% and fails with "operation: election timed out".

    I've seen this problem on both servers, so I think the elected 'master' does not have the problem, only the 'slave'.

    I have checked these articles and followed them, but they did not help:

    VMware KB: Reconfiguring HA (FDM) on a cluster fails with the error: operation timed out

    - The services were running.

    VMware KB: Configuring HA in VMware vCenter Server 5.x does not work, error: Operation timed out

    - All the MTUs were set to 1500.

    VMware KB: VMware High Availability configuration fails with the error: unable to complete the configuration of the HA agent...

    - The default gateway was not the same on the two hosts, but I fixed that. There are no route changes. The HA setting is "leave powered on". After correcting it and disabling/re-enabling HA, the problem is still the same.

    VMware KB: Verify and reinstall the correct version of the VMware vCenter Server agents

    - I ran the "Reinstall ESX host management agents and HA agent on ESXi" procedure, and I checked that the HA agent was uninstalled and reinstalled when HA was re-enabled:

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. This fixed the election problem, and I was even able to run a successful HA test, but when, after that test, I powered off the 2nd server (in order to test HA in the other direction), HA did not fail over to host 1 and everything stayed down. After pressing 'Reconfigure HA', the election problem appeared again on host 1.

    Here are a few extractions of newspapers:

    - The vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:03 - 192.27.224.138

    - vSphere HA agent is healthy - info - 29/11/2014 22:02:56 - 192.27.224.138

    - The vSphere HA availability state of this host changed to Master - info - 29/11/2014 22:02:56 - 192.27.224.138

    - The vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:01:26 - 192.27.224.138

    - vSphere HA agent is healthy - info - 29/11/2014 22:01:22 - 192.27.224.138

    - The vSphere HA availability state of this host changed to Master - info - 29/11/2014 22:01:22 - 192.27.224.138

    - The vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:03:02 - 192.27.224.139

    - Alarm "host vSphere HA state" on 192.27.224.139 changed from green to red - info - 29/11/2014 22:02:58 - 192.27.224.139

    - The vSphere HA agent for this host has an error: the vSphere HA agent may not be properly installed or configured - warning - 29/11/2014 22:02:58 - 192.27.224.139

    - The vSphere HA availability state of this host changed to Initialization error - info - 29/11/2014 22:02:58 - 192.27.224.139

    - The vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:00:52 - 192.27.224.139

    - Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host - info - 29/11/2014 22:00:49 - 192.27.224.139

    - Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host - info - 29/11/2014 22:00:49 - 192.27.224.139

    - Firewall configuration has changed. Operation 'enable' for rule set fdm succeeded. - info - 29/11/2014 22:00:45 - 192.27.224.139

    - The vSphere HA availability state of this host changed to Uninitialized - info - 29/11/2014 22:00:40 - Reconfigure vSphere HA - 192.27.224.139 - root

    - vSphere HA agent on this host is disabled - info - 29/11/2014 22:00:40 - Reconfigure vSphere HA - 192.27.224.139 - root

    - Reconfigure vSphere HA - host 192.27.224.139 - Operation timed out. - root - HOSTSERVER01 - 29/11/2014 22:00:31 - 29/11/2014 22:00:31 - 29/11/2014 22:02:51

    - Configure vSphere HA - host 192.27.224.139 - Operation timed out. - System - HOSTSERVER01 - 29/11/2014 21:56:42 - 29/11/2014 21:56:42 - 29/11/2014 21:58:55

    Can someone please help me with this?

    Or suggest extra things that I can check or provide?

    I'm currently out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESXi upgrade), these problems disappeared.

    While searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: unable to connect to the host

    And this cause could make sense, since the vCenter server changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file, the <hostip> and <serverip> are correct (verified by browsing https://<hostip>/home).

    Tried this again today; no problem at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (this one is an 'upgrade').

    OK, the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license); they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the gateway that was configured existed only for future purposes.

    I have now changed the default gateway to the management address of the switch both hosts are plugged into, which does answer ping requests.

    This solved the problem.
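    An alternative to changing the gateway (not what was done here, just a related knob) is to point vSphere HA at a different pingable isolation address through the cluster's HA advanced options; the IP below is a placeholder:

    ```
    # Cluster > Edit Settings > vSphere HA > Advanced Options (sketch only)
    das.usedefaultisolationaddress = false
    das.isolationaddress0 = 192.27.224.250   # any always-pingable address on the management network
    ```

    This keeps HA's isolation detection working even when the configured default gateway does not answer ping.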
