Cluster high availability

How can I configure a high-availability cluster in GemFire? I need a cluster with two nodes and two servers.

For high availability of data, configure redundancy for your data regions. If you are using partitioned regions, see Configure High Availability for a Partitioned Region in the GemFire User's Guide for all the details and available options. You can also use persistence.
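As an illustration, a minimal cache.xml sketch for a two-server setup could declare a partitioned region with one redundant copy; the region name here is an example, while the redundancy attribute is standard GemFire configuration:

```xml
<cache>
  <!-- Each bucket gets a primary plus one backup copy hosted on a
       different member, so the region survives the loss of one server. -->
  <region name="exampleRegion">
    <region-attributes refid="PARTITION">
      <partition-attributes redundant-copies="1"/>
    </region-attributes>
  </region>
</cache>
```

With only two servers, redundant-copies="1" is the highest level of redundancy that can actually be satisfied.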

For high availability of client/server messaging, see Configuring Highly Available Servers.

For high availability of function execution, see Running a Function in vFabric GemFire.

Tags: VMware

Similar Questions

  • Single point of failure/Cluster/high availability on Essbase 11.1.2

    Hello

    The project I am currently working on has only one Essbase 11.1.2 server on Unix, which is a single point of failure. Management is thinking of upgrading Essbase to a later version, 11.1.2.1, to achieve clustering and high availability, and to overcome the single-point-of-failure situation of having just one Essbase server.

    My concerns here are:

    Note: Essbase in this project is used to generate reports only; write-back of data is not allowed.

    (1) Is clustering / HA / failover to another node not supported in Essbase version 11.1.2?

    (2) If it is supported, can you please share a link to the documentation for version 11.1.2?

    Other Hyperion components such as HFM and FDM in this project use the same version. If failover is not supported in Essbase version 11.1.2, do we need to upgrade the other components along with Essbase, or is upgrading Essbase alone to version 11.1.2.x sufficient?

    Please share your valuable thoughts on that.

    Thanks in advance for your ideas.

    Kind regards

    UB.


    Found the reason why the Essbase cluster does not work with failover / high availability: Essbase Analytics Link (EAL) is used in the project. Because of this, we have stopped the Essbase cluster rollout, and Essbase in my project remains a SPOF (single point of failure).

    Not sure if Oracle Corp. will come up with EAL product improvements that can support Essbase failover.

    "

    1. Essbase Analytics Link (EAL release 11.1.1.4) does not support high-availability facilities. This means that EAL cannot connect to an Essbase cluster; it only ever connects to a regular Essbase server through the service console. Although we can define high availability for Essbase, EAL will use a single Essbase server. If that server fails and another Essbase server takes over, we must manually redefine the EAL bridges on the new Essbase server and re-create the Essbase database. This is not considered high availability.
    2. There is no technical mechanism to point an Essbase Analytics Link to an Essbase cluster. EAL is only supported to connect to a single named Essbase server.
    3. Transparent partitions cannot be supported when Essbase is configured as a high-availability component and EAL is used. The EAL bridges in the current project use transparent partitions.

    "

    Please feel free to correct me if this interpretation is wrong.

    Thank you

    UB.

  • OBIA components on a server cluster for high availability

    Hello

    The requirement is to install OBIA 7.9.6.4 (all components) on a clustered high-availability server.

    OBIA 7.9.6.4 software and versions:
    OBIEE 11.1.1.7
    Informatica 9.1.0
    DAC 11.1.1

    I would like to know what software needs to be installed on the cluster server, and how.

    I can't find any installation document for this.

    Any help is appreciated.

    Thank you
    Aerts

    need an urgent answer!

  • Configuration of high availability.

    Hello

    Please help me configure high availability for an existing Foglight environment. Please send me the steps and prerequisites.

    How many servers can exist in a cluster?

    How much capacity do we need on the primary server and the other servers in case of a failure?

    We currently have 1 federation master and 3 child FMS instances.

    version: 5.6.10

    Thank you

    Vicky

    Vicky,

    There are 2 very useful field guides that go through the requirements and the setup process.

    High Availability Guide - http://edocs.quest.com/foglight/5610/doc/wwhelp/wwhimpl/common/html/frameset.htm?context=field&file=HA-field/index.php&single=true

    Federation Field Guide - http://eDOCS.quest.com/Foglight/5610/doc/wwhelp/wwhimpl/common/HTML/frameset.htm?context=field&file=Federation-field/index.php&single=true

    Note the following known issue:

    http://eDOCS.quest.com/Foglight/5611/doc/wwhelp/wwhimpl/common/HTML/frameset.htm?context=field&file=HA-field/overview.1.php&single=true

    "A Federation Master running in high availability mode is not supported. Only federated children can be run in high availability mode."

    Golan

  • Design tips for high availability of vWorkspace components

    Hi all

    I would like to ask for some advice regarding the design of highly available vWorkspace components. Suppose the vWorkspace components will be deployed on vSphere or SCVMM-managed hypervisors, where HA is in place in case of a host failure. In this situation, do we still need redundant (n + 1 VMs) vWorkspace components?

    On another note, I understand that we can add a pair of connection brokers in the vWorkspace Management Console, and based on KB 99163 it would just work. I'm not sure how the traffic flows when an application is accessed through Web Access, though. As in, I guess the new connection broker would have to be 'defined' for the Web Access site to call that broker. Or is this done automatically? Would Web Access pick randomly among the connection brokers?

    Thanks for any advice in advance

    Kind regards

    Cyril

    Hi Cyril,

    Great questions. As with any layered IT architecture, you must plan HA and redundancy at every point of failure required by your environment or service level agreements (SLAs). For vWorkspace, the center of its universe is SQL, and you must plan for its failure and recovery accordingly. In some environments, full backups can meet the HA requirement. In others, full SQL clustering, mirroring, replication, or Always-On configurations may be required. With our broker, we recommend an N + 1 deployment in most HA scenarios. When you move to peripheral or enabling components, you must evaluate each component's failure impact and its value to determine the appropriate HA approach.

    Load balancing between several brokers is done automatically by logic in the client connectors. In the case of Web Access, when you configure the Web Access site in the Management Console, the broker list is included in the Web Access configuration XML file. Like the client connectors, Web Access includes balancing logic that automatically distributes the client load across the available brokers.
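    The distribution logic described above can be sketched roughly as follows. The function and broker names are illustrative stand-ins, not vWorkspace APIs; in the real product this logic lives inside the client connectors and Web Access:

```python
import random

def pick_broker(brokers, is_reachable):
    """Pick a broker at random, skipping any that fail a reachability probe.

    `brokers` stands in for the list a client reads from its configuration
    (for Web Access, the broker list embedded in the site's XML config);
    `is_reachable` stands in for a connection attempt with a short timeout.
    """
    candidates = list(brokers)
    random.shuffle(candidates)      # spread client load across brokers
    for broker in candidates:
        if is_reachable(broker):    # fall through to the next broker if down
            return broker
    raise RuntimeError("no connection broker reachable")

# Example: only broker2 answers the probe, so it is always the one returned.
print(pick_broker(["broker1:8080", "broker2:8080"],
                  lambda b: b.startswith("broker2")))
```

    The point of the sketch is the failover behavior: a dead broker is simply skipped, so adding a second broker gives both load distribution and redundancy without any manual "defining" step on the client side.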

    If you have any questions about specific components and requirements of HA or architecture, please add them in the discussions.

  • Cisco Fabric Interconnect high availability

    Hi Experts,

    I would like to know,

    (1) What are the fabric interconnect cluster ports (L1, L2) used for in high availability?

    (2) Can we form a cluster from a 6100 fabric interconnect and a 6200 fabric interconnect? If so, how do they exchange state, since the two are completely different hardware?

    Thanks in advance,

    Jako Raj

    Hello

    (1) The L1 and L2 ports are used for management purposes only, basically to synchronize management data and to trigger a failover if needed.

    (2) Different hardware platforms are supported in the same cluster only during a hardware upgrade process. What actually happens in the meantime is that the configuration is copied and the new FI is promoted to primary. Although it works as a cluster, it is an active-standby cluster, where a single FI acts as primary.

    Kind regards

  • The OAM Console in a high-availability deployment architecture

    Hello

    I'm looking to implement an HA environment for OAM, and I have read in various docs that the OAM Console is deployed on the AdminServer. However, I also read the Oracle documentation, the Oracle Fusion Middleware High Availability Guide for Oracle Identity and Access Management.

    It gives the OAM Console URL as the link for testing the managed server, which implies that oamconsole is accessible on the managed server. Is there anywhere I can find a definitive answer on how oamconsole is deployed, and on which server/cluster?

    Ideally, I'd like the oamconsole app deployed to the cluster (instead of the AdminServer) in order to achieve HA for it.

    Thank you

    oamconsole is deployed on the AdminServer, and there can be only one active Admin Server per domain.

    The link you are referring to is about setting up an OAM cluster. Oracle recommends an OAM cluster of 2 instances deployed on 2 physical servers. For the Admin Server, the recommendation is to set up a separate host alias, referred to as a VIP, bound to an IP address separate from that of the OAM instance on the same host. A clone of the Admin instance is then placed on the second physical server, alongside the second OAM instance. If the first OAM instance and the Admin Server become unavailable, a network reconfiguration must be made to move the VIP to the second instance. Once this is done, the Admin Server can be started on the second server.

    This gives you a measure of HA/DR.

  • How to set up a single database instance for high availability and disaster tolerance

    Hi Experts,

    I have a single database and instance that I need to configure for high availability and disaster tolerance.

    What DR options are available for synchronizing the database to a remote site with a delay of 5 to 10 minutes?

    The application connects to the local site; the DR site would be the remote site.

    1. Oracle Fail Safe with SAN?

    2. What failsafe solution is integrated into Linux (CentOS/OEL)?

    3. If the storage is on the SAN (for example), is it possible to set up a shell script that detects if the source database is down for 5 minutes, mounts the SAN-stored database files on the remote computer, and changes the IP in the application so that it never connects to the source IP address?
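    A naive version of the watcher described in point 3 could look like the sketch below. Every name here is illustrative: `check_primary` could be a SQL ping, and `run_failover` a script that mounts the SAN volumes and repoints the IP. Note that such a polling script cannot distinguish a dead primary from a network partition, which is one reason the reply steers toward OS clustering or Data Guard instead:

```python
import time

def monitor_primary(check_primary, run_failover, interval_s=60, grace_checks=5):
    """Poll the primary; after `grace_checks` consecutive failed probes
    (about 5 minutes at a 60 s interval) run the failover action once.

    Illustrative sketch only: a real setup must also fence the old primary
    before mounting its SAN volumes elsewhere, or risk corrupting the files.
    """
    failures = 0
    while True:
        if check_primary():
            failures = 0                  # a healthy probe resets the count
        else:
            failures += 1
            if failures >= grace_checks:  # sustained outage, not a blip
                run_failover()            # mount SAN files, start DB, move IP
                return
        time.sleep(interval_s)
```

    The consecutive-failure counter is what turns "down for 5 minutes" into code: a single missed probe is ignored, and only a sustained outage triggers the failover action.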

    Thank you and best regards,

    IVW

    Hello

    Failure can occur at any level.

    1. Oracle Fail Safe with SAN?

    --> Do check this: if you have a failure in storage, Fail Safe will do nothing to bring the data back.

    --> With Fail Safe, the only assurance is that when the MS cluster moves the disk and services to a different node, the configured services are started.

    2. What failsafe solution is integrated into Linux (CentOS/OEL)?

    --> On Linux, you need to set up the scripts to run, and you can look at the OS cluster option.

    3. If the storage is on the SAN (for example), is it possible to configure a shell script that detects if the source database is down for 5 minutes, mounts the SAN-stored database files on the remote computer, and changes the IP in the application so that it never connects to the source IP address?

    --> For this you will need an OS cluster...

    Points to note:

    ================

    --> If there is a sudden power failure, you can expect lost writes in your data blocks, which will bring inconsistency to your data files and redo logs

    --> And what if there is a problem with your disk?

    --> What if there is a problem with your complete data center (how will you mount the FS on a remote server)?

    Note that what we are discussing is HA, and you are trying to keep an idle server available all the time (a server with spare RAM & CPU).

    Why not look at the RAC option? You can also have an extended cluster...

    And to avoid data loss and to meet the RPO and RTO at all times (even if the data center goes down, storage fails, or the server crashes), you may need to use Oracle Data Guard...

    Ask if you have questions.

    Thank you

  • High availability of Hyperion 11.1.2.3

    Hello

    We need to implement high availability.  The Oracle document shows compact deployment and horizontal scaling (and maybe this is the reason it does not deploy the app server on the second server).  But what if we want to assign a dedicated managed server to each Hyperion component individually?  Say, how do we implement HA exclusively for Shared Services with its own managed server?  On Windows, must I have domain IDs to ensure high availability?  We have no AD servers and have used local IDs to install Hyperion successfully for many implementations.

    Thank you.

    If you have deployed a Java web application to its own managed server, you can scale it out / cluster it across other managed servers.

    Take a look at Clustering Java Web Applications using the EPM System Configurator.

    Cheers

    John

  • EBS high availability vs. business continuity

    Hi all

    I'm confused about high availability and business continuity.

    Are they the same or similar?

    Assume I have a cluster with Apps tier 1 and Apps tier 2 installed on the same instance.

    If Apps tier 1 fails, will users assigned to it be transferred to Apps tier 2, and vice versa?

    If they do fail over to another Apps tier, what do you call this kind of configuration?

    Is this high availability or business continuity? Or is it another type of setup available in EBS?

    How about a disaster recovery site configuration with a Data Guard standby server?

    If PROD fails, it will automatically fail over to the recovery site with Data Guard. Can this be called high availability as well?

    Kindly enlighten me, please...

    Thank you very much

    JC

    For your type of installation, it will work, but you have to take care of the following:

    1.) While the DB side is handled by the Data Guard setup, for the application-tier file system you will need to find a way to transfer files to server 2. Maybe you can use rsync.

    2.) On failure of PROD, you convert the standby DB to the prod role.

    3.) On the application side, you will need to run the adcfgclone/autoconfig combination (depending on what URL you want on the DR side, CM setup, etc.).

    Please be aware that with an EBiz DR setup there will always be some downtime (you can minimize it but cannot avoid it).

    Regards

    Pravin

  • ODI 11g high availability features

    Hello

    Is there a link to any official documentation describing how ODI can be configured for high availability, and more specifically, describing how agents and repositories behave in case of failure?

    I saw the notes on how to configure the load balancing between two or more agents.

    However, I would like to know what happens if an agent fails:

    • Is there a monitoring process that traps the failure and tries to restart the agent?
    • In that case, will the ODI load balancer correctly direct traffic only to available agents?
    • What happens if the agent fails in the middle of a scenario execution step?
    • Will the database session get lost and roll back?
    • Will the scenario stop, or will the remaining available agents carry the process through to completion?

    Similar questions apply to the repository itself: if the master or work repository database fails, what happens? Assuming the repository is installed on a multi-node cluster environment and only one node fails, will every running scenario fail (and need a restart), or is there a transparent failover to the remaining nodes in the cluster hosting the ODI repositories?

    I have a requirement where the ETL (ODI) system must be available 99.5% of the year, so resilience is a key factor and I need to understand the capabilities.

    I need to know if high availability in ODI simply means that a process is restarted if a node or agent fails, or if an existing process will continue to completion thanks to a transparent failover.

    Cheers,

    John

    Take a look at the link below:

    http://www.rittmanmead.com/2012/03/deploying-ODI-11g-agents-for-high-availability-and-load-balancing/

    I hope this gives you a clear picture on this topic.

    Apart from this, we use EM to monitor the status of the agent.

    Cheers!

    SH! going

  • vSphere high availability 'Election' fails at 99% with 'operation timed out' on 1 of the 2 hosts

    Hello

    We had a system with 1 ESXi 5.1 host with local disks.

    Now we are adding redundancy by installing an ESXi 5.5 U2 host and a vCenter 5.5 appliance.

    After installing everything and adding both hosts to vCenter, we upgraded the ESXi 5.1 host to ESXi 5.5 U2. The SAN works correctly (vMotion works on its own NIC).

    Now, if I try to activate high availability, both servers install the HA agent and start the "Election".

    All the datastores (4) on the SAN are selected for the HA heartbeat; the isolation response is "Leave powered on" by default.

    One server always gets this process done, and the other keeps "electing" until it reaches 100% and fails with "operation timed out".

    I've seen this problem on both servers, so I think the elected "master" does not have the problem, only the "slave".

    I have checked these articles and followed them, but they did not work:

    VMware KB: Reconfiguring HA (FDM) on a cluster fails with the error: operation timed out

    -The services were running

    VMware KB: Configuration HA in VMware vCenter Server 5.x does not work, error: Operation Timed out

    - All MTUs were set to 1500

    VMware KB: VMware High Availability configuration fails with the error: unable to complete the configuration of the HA agent...

    - The default gateway was not the same on the two hosts, but I fixed that. There were no route changes. The HA isolation setting is "Leave powered on". After correcting this and deactivating/reactivating HA, the problem is still the same.

    VMware KB: Check and reinstall the correct version of the VMware vCenter Server agents

    - I ran "Reinstall ESX host management agents and HA agent on ESXi", and I checked that the HA agent was uninstalled and reinstalled during the reactivation of HA.

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. This fixed the election problem, and I was able to run an HA test successfully. But when, after this test, I turned off the 2nd server (in order to test HA in the other direction), HA did not fail over to host 1 and everything stayed down. After clicking "Reconfigure HA", the election problem appeared again on host 1.

    Here are a few log extracts:

    - vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:03 - 192.27.224.138

    - vSphere HA agent is healthy - info - 29/11/2014 22:02:56 - 192.27.224.138

    - vSphere HA availability state of this host changed to Master - info - 29/11/2014 22:02:56 - 192.27.224.138

    - vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:01:26 - 192.27.224.138

    - vSphere HA agent is healthy - info - 29/11/2014 22:01:22 - 192.27.224.138

    - vSphere HA availability state of this host changed to Master - info - 29/11/2014 22:01:22 - 192.27.224.138

    - vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:03:02 - 192.27.224.139

    - Alarm "vSphere HA host status" on 192.27.224.139 changed from green to red - info - 29/11/2014 22:02:58 - 192.27.224.139

    - vSphere HA agent for this host has an error: vSphere HA agent may not be properly installed or configured - warning - 29/11/2014 22:02:58 - 192.27.224.139

    - vSphere HA availability state of this host changed to Initialization Error - info - 29/11/2014 22:02:58 - 192.27.224.139

    - vSphere HA availability state of this host changed to Election - info - 29/11/2014 22:00:52 - 192.27.224.139

    - Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host - info - 29/11/2014 22:00:49 - 192.27.224.139

    - Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host - info - 29/11/2014 22:00:49 - 192.27.224.139

    - Firewall configuration has changed. Operation "enable" for rule set fdm succeeded. - info - 29/11/2014 22:00:45 - 192.27.224.139

    - vSphere HA availability state of this host changed to Uninitialized - info - 29/11/2014 22:00:40 - 192.27.224.139 - Reconfigure vSphere HA host - root

    - vSphere HA agent on this host is disabled - info - 29/11/2014 22:00:40 - 192.27.224.139 - Reconfigure vSphere HA host - root

    - Reconfigure vSphere HA host 192.27.224.139 - operation timed out. - HOSTSERVER01 - root - 29/11/2014 22:00:31 - 29/11/2014 22:00:31 - 29/11/2014 22:02:51

    - Configuring vSphere HA 192.27.224.139 - operation timed out. - HOSTSERVER01 - System - 29/11/2014 21:56:42 - 29/11/2014 21:56:42 - 29/11/2014 21:58:55

    Can someone please help me here?

    Or are there extra things that I can check or provide?

    I'm currently running out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESX upgrade), these problems disappeared.

    While searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: unable to connect to the host

    And this cause could make sense, since the vCenter server changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file, the <hostip> and <serverip> are correct (verified using https://<hostip>/home).

    Tried this again today; no problems at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (this one is an "upgrade").

    OK, so the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license), and they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the gateway that was configured existed only for future purposes.

    Now I've changed the default gateway to the management address of the switch that both hosts are plugged into, which answers ping requests.

    This solved the whole problem.
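    For reference, instead of changing the gateway itself, the address HA pings for isolation detection can also be overridden via the cluster's advanced options. These option names are standard vSphere HA settings; the address shown is an example:

```
das.usedefaultisolationaddress = false
das.isolationaddress0 = 192.27.224.1   # any always-pingable address on the management network
```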

  • Configuring high availability alarms in vSphere

    Hello

    We have three ESXi hosts in a cluster and, recently, one of the hosts failed (I have logged a support request with VMware to get help researching why).  The virtual machines on this host automatically migrated to the other two hosts in the cluster, but we did not know this had happened until we saw some VMs showing high CPU ready in vCOps.

    My boss asked me to set up an alert for when the high availability feature is used, as was the case here.  I looked on Google and the VMware website, but I can't find anything specific. I've only worked with VMware products for about a month, though, so it could well be that I'm simply not looking for the right thing.

    I would be grateful for any help. As I said, I am very new to VMware, so I hope I have given all the information that is needed; if not, let me know.

    Cheers,

    Ben.

    There are quite a few standard alarm definitions you can use. Maybe there's a useful alarm among them; you can find them in the 'Alarms' tab.

    Name - Description
    ------------------

    Insufficient vSphere HA failover resources - Default alarm to alert when there are insufficient cluster resources for vSphere HA to guarantee failover

    vSphere HA failover in progress - Default alarm to alert when vSphere HA is failing over virtual machines

    Cannot find vSphere HA master agent - Default alarm to alert when vCenter Server has been unable to connect to a vSphere HA master agent for an extended period

    vSphere HA host status - Default alarm to monitor the health of a host as reported by vSphere HA

    vSphere HA virtual machine failover failed - Default alarm to alert when vSphere HA fails to fail over a virtual machine

    vSphere HA virtual machine monitoring action - Default alarm to alert when vSphere HA resets a virtual machine

    vSphere HA virtual machine monitoring error - Default alarm to alert when vSphere HA fails to reset a virtual machine


  • Infrastructure for high availability

    Hi all!

    I am currently an intern; the project is to implement VMware high availability and to deploy it later.

    After a lot of time spent on tutorials, I realized that a high-availability system needs at least 2 ESX servers and a SAN array.

    Naturally, the vMotion solution seems the best to me. For vMotion to work, vCenter Server must be implemented, integrated into a domain.

    My question is about how many network cards to put in the ESX servers. According to my research, I need 3: 1 for LAN, 1 for SAN, and 1 for vMotion, if I'm not mistaken.

    Then the question arises of where to place vCenter: in a virtual machine or on a physical one?

    From what I've read, the best would be to put it in a virtual machine. In that case, is it possible to put the DC in a VM as well, or should it stay physical, taking into account the possibility of unavailability?

    Another question: do the ESX hosts need a licensed version, or is a free ESXi version sufficient?

    Thank you in advance if someone has an opinion.

    Hi Tatuxp,

    Welcome to the VMware forums.

    For virtualizing a Windows DC, you can find a lot of info in this document:

    You can't set up a cluster with the free version of vSphere. You'll find more info in the FAQ:

    For the network config, it depends a lot on your physical infrastructure. If you have separate physical networks, you normally need two NICs per port group for redundancy. If you only have one big switch and you use VLANs, you trunk the ports and you can team your NICs for redundancy. There is a good document that presents the main principles of virtual networking here:

    Finally, for your DC VMs, having two of them and using anti-affinity rules allows you to always have one available, even if one of the nodes in the cluster fails.

    Good luck!

    A +.

    Franck

  • Exchange 2010 and high availability (redundancy)

    Hello

    If we virtualize Exchange 2010 on VMware and use vMotion for high availability (redundancy), that is, if we do not use Exchange clusters, do we still need two Exchange 2010 licenses?

    Thank you

    Yes, you will need two Exchange server licenses. If you have fewer than 5 databases in Exchange, you can opt for the Standard edition of Exchange. However, for the operating system you will need the Enterprise edition, as it is required for the cluster configuration.
