Implementing shared-disk high availability with TimesTen 11

Hi all

I need to set up shared-disk high availability using an HACMP cluster on AIX.
Could you please share steps/info on using this configuration with TimesTen 11?

Kind regards
-AK

Published by: tt0008 on Sep 9, 2010 02:09

Hello

We do not really recommend using shared-disk HA with TimesTen, since (a) it is somewhat complex to set up and (b) it is not an officially "supported" configuration. Also, failover and recovery times are usually much longer than with an HA configuration based on TimesTen replication. I would invite you to consider using TimesTen replication for HA instead. That is what it is designed for, and it is the recommended approach.
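For reference, the replication-based alternative is typically an active/standby pair. A minimal sketch, assuming an invented DSN "mydb", hosts "host1"/"host2", and credentials (see the TimesTen Replication Guide for the exact procedure):

```shell
# Hypothetical sketch: create a TimesTen active/standby pair.
# DSN "mydb", hosts "host1"/"host2", and credentials are invented examples.

# On host1: define the pair and mark this store as the active.
ttIsql -connStr "DSN=mydb" <<'SQL'
CREATE ACTIVE STANDBY PAIR mydb ON "host1", mydb ON "host2";
call ttRepStateSet('ACTIVE');
call ttRepStart;
SQL

# On host2: duplicate the active store to create the standby,
# then start the replication agent there.
ttRepAdmin -duplicate -from mydb -host host1 -uid admin -pwd secret "DSN=mydb"
ttIsql -connStr "DSN=mydb" -e "call ttRepStart;"
```

Failover is then essentially promoting the standby to active, which is why recovery is much faster than a cold shared-disk restart.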

Kind regards

Chris

Tags: Database

Similar Questions

  • High availability with two 5508 WLAN controllers?

    Hi all

    We are considering implementing a new wireless solution based on Cisco WLC 5508 controllers and 1262N access points. We intend to buy about 30 access points and have two options: either buy one WLC 5508-50 or, for redundancy, two WLC 5508-25 controllers.

    Is it possible to configure two WLC 5508s as a high availability solution, so that all access points are distributed across the two WLCs and, if one WLC fails, the other one takes over and manages all the APs?

    If we have 30 access points and one of the two WLC 5508-25s breaks, of course not all 30 but only 25 access points can be managed by the remaining one. Is there some sort of control to choose which access points must be managed and which not?

    What does such a configuration look like in general? Is the implementation of an installation with two controllers quite complex, or simple?

    Thank you!

    Michael

    Hi Michael,

    Do not forget that the 5508 works with a licensing system. The hardware can support up to 500 APs, but the actual limit depends on the license that you install.

    I think two 5508s with 25-AP licenses will be more expensive than one 5508 with a 50-AP license.

    If you have 2 WLCs, it is best NOT to spread the access points between the WLCs. Doing so increases roaming complexity (the WLCs have to hand off clients to each other all the time). If your point was to gain throughput, it really doesn't matter, as the 5508 has up to 8 Gbit/s of uplink bandwidth and has the CPU capacity to handle 50 APs with no problems at all. So I find it best to have all the access points on 1 WLC; then if something goes wrong, all the APs migrate to the other WLC anyway.

    If you want 50 APs to fail over to a 25-AP WLC, yes, you can select which ones will join. The APs have a priority system, so you assign priorities. If the WLC is at full capacity but higher-priority APs are trying to join, it will kick off lower-priority APs to allow the high-priority ones to connect.

    This is not exactly "HA" for WLCs. It's just that you can have 2 WLCs working together and sharing the clients (as if you had 700 APs and needed 2 WLCs), or all the APs sit on one WLC and, when it breaks down, they join the other available controller.

    The only thing to do is to put each WLC in the same mobility group so that they know about each other.
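    On the AireOS CLI, the mobility group and AP failover priority pieces look roughly like this (a hypothetical sketch; the group name, MAC/IP addresses, and AP name are invented for illustration):

    ```shell
    # Hypothetical WLC CLI sketch; names and addresses are invented.
    # On each controller: put both WLCs in the same mobility group.
    config mobility group domain MYGROUP
    config mobility group member add 00:1a:2b:3c:4d:5e 10.0.0.12

    # Enable AP failover priority and mark critical APs as high priority,
    # so they win a slot on the surviving 25-AP controller.
    config network ap-priority enable
    config ap priority 4 AP-lobby-01
    ```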

  • vSphere high availability with no shared storage? And general problems with a VMware partner/reseller


    Can HA function without shared storage?

    According to the vSphere Availability manual, it cannot.  However, the VMware partner who sold me on the VMware solution said that shared storage is not required for HA.

    This is the same guy who told me that I would not need to buy Windows Server licenses because everything was included in the package (vSphere Essentials Plus).  Now I have no Windows licenses, no shared storage, and a customer who will not be happy that we did not include these costs in the quote for this project.

    Can HA function without shared storage?

    No... you need shared storage for HA.

    This is the same guy who told me that I would not need to buy Windows Server licenses because everything was included in the package (vSphere Essentials Plus).  Now I have no Windows licenses, no shared storage, and a customer who will not be happy that we did not include these costs in the quote for this project.

    Maybe your partner meant the vCenter Server Appliance that is included with vSphere Essentials Plus; you can use this appliance to manage your vSphere/ESXi hosts without needing a Windows VM (and a Windows license) to install vCenter Server on.

  • Must all WebLogic application servers share storage to implement high availability?

    I would like to understand whether shared storage is required for all the application servers that will run WebLogic + Oracle Identity Manager.  I used the following Oracle guides: http://docs.oracle.com/cd/E40329_01/doc.1112/e28391/iam.htm#BABEJGID and http://www.oracle.com/technetwork/database/availability/maa-deployment-blueprint-1735105.pdf and, by my interpretation, both talk about configuring all application servers with access to shared storage.  From an architectural point of view, does this mean all the application servers need access to an external drive?  If shared storage is necessary, what are the steps to implement it?

    Thank you

    user12107187

    You can do it without and it will work. But the Fusion Middleware products' Enterprise Deployment Guides (EDGs) provide guidelines to help you implement high availability.

    Having shared storage will help you recover from a storage failure; otherwise, storage could be a single point of failure.

    Enterprise Deployment Overview

    "An Oracle Fusion Middleware enterprise deployment:

    • Considers various business service level agreements (SLAs) to make high availability best practices as widely applicable as possible
    • Leverages database grid servers and grid storage on low-cost storage to provide a highly resilient, lower-cost infrastructure
    • Uses results from performance impact studies of various configurations to ensure that the high availability architecture is optimally configured to perform and scales to business needs
    • Enables control over the length of time to recover from an outage and the amount of acceptable data loss from a natural disaster
    • Evolves with each Oracle version and is completely independent of hardware and operating system"

    Best regards

    Luz

  • Is it possible and effective: high availability (failover) on 2 hosts with local storage?

    Hello

    I am having a discussion about implementing HA (failover) with 2 hosts with local storage.

    My advice is to go with shared storage. But there is another view: use the MS failover cluster service with local storage to save on shared storage hardware.

    Two "mirrored" machines, one on each host. Is it feasible at all?

    The goal: have 2 virtual machines (a SQL Server and an application server) in a highly available environment. The workload of the machines will be moved to 2 very powerful servers that are about to be purchased.

    1. Please briefly explain a local-storage feature that can provide HA.

    2. I have only done clustering in a virtual environment, so I am not sure whether shared storage is used only in virtual environments.

    Thank you.

    Michael.

    There is software that can replicate a running physical or virtual machine to another location accessible on the network. This could be as simple as between two closely hosted ESXi VMs, or over a high-speed WAN. Take a look at products like Neverfail, Double-Take, and others.  The costs would probably be a bit prohibitive in a smaller environment. It might be possible, and maybe even practical, to use a virtual storage appliance with failover replication: it uses storage on each host and presents it as a datastore. Look at Open-E, for example.

    The Microsoft clustering solution needs shared storage.

  • SNS with high availability option

    I need help understanding the network and connectivity requirements to deploy Cisco Show and Share with the high availability option, where the DC and DR are in different geographical locations.

    As I understand it, to achieve high availability, the DMM and SnS servers require L2 connectivity between the primary and the secondary server. How can this be achieved in a scenario where the two data centers are connected by WAN/MAN links?

    Thank you.

    Chaitanya Datar

    + 91 99225 83636

    Hi, Datar,

    I already asked this question of the TAC, and unfortunately it is clear that today the HA mode, with the 2 servers connected "back to back" via Ethernet2, REQUIRES them to be on the same L2 network.

    It is not yet based on the IP routing layer, and therefore this design is not supported when using remote data centers...

    :-(

    It's a shame, I know, and I have pointed it out to the SnS BU.

    Maybe a future version will work over a routed L3 IP network...

    Wait and see.

    Hervé Collignon

    Converged Communications Manager

    Dimension Data Luxembourg.

  • High availability without shared storage?


    Hi all

    We have created two new ESXi 5.1 servers without any vMotion network and without shared storage. The client asked for high availability of the virtual machines. There are a few things we could look at in this situation, such as FT and HA, but I know they will not work without shared storage. VSA is another option in this case, but it is not flexible and has more limitations than benefits.

    Just checking whether there is another solution we can use to provide high availability for the virtual machines.

    Thanks in advance for the help.

    Greetings

    Nick

    Without shared storage (or VSA), it is difficult to provide high availability. Maybe vSphere Replication is an option, if you can accept losing the latest changes when it comes to the crunch.

    André

  • Can I Storage vMotion two running VMs with shared disks?

    I have an ESX 3.0.2 server with local storage and two ESX 3.5 servers with a Fibre Channel SAN. Two VMs share disks and run on the machine with only local storage. Can I use Storage vMotion to move them to the 3.5 servers and onto the SAN?

    If not, can I migrate the servers (while shut down) to the SAN and the 3.5 servers without breaking the links to the shared disks?

    If you shut down the virtual machines and migrate them, you should still be able to access the disks, as Storage vMotion updates the pointers in the .vmx file.

    If you do move them and lose access to the disks, all you need to do is open the .vmx file for each virtual machine and update the path specified for each disk; not too hard to fix at all. If you don't like that, you can simply remove and re-add the disk in the VM properties.
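    For reference, the per-disk paths live in the .vmx file as scsiX:Y.fileName entries. A hypothetical excerpt of what you would edit (datastore and file names are invented for illustration):

    ```shell
    # Hypothetical .vmx excerpt; update fileName to the new SAN location.
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "/vmfs/volumes/san-datastore/shared/shared-disk1.vmdk"
    scsi0:1.present = "TRUE"
    scsi0:1.fileName = "/vmfs/volumes/san-datastore/shared/shared-disk2.vmdk"
    ```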

  • Is the vmware-aam service required for VMware high availability and DRS? Has it been renamed in ESX 3.5 U3?

    Hello Tech community.

    Is the vmware-aam service required for VMware high availability and DRS? Has it been renamed in ESX 3.5 U3?

    On one of our ESX 3.5 U3 infrastructures, VMware HA and DRS work very well, and the following list of services is started; note vmware-aam.

    Here is output from an ESX 3.5 U3 system:

    service --status-all | grep running

    crond (pid 1900) is running...

    gpm (pid 1851) is running...

    VMware-pass (pid 1942) is running...

    ntpd (pid 1842) is running...

    cimserver (pid 2452) is running...

    snmpd (pid 1557) is running...

    sshd (pid 4074 4037 1566) is running...

    syslogd (pid 1321) is running...

    klogd (pid 1325) is running...

    At least one virtual machine is still running.

    VMware-aam is running

    VMware VMkernel authorization daemon is running (pid 1911).

    VMware-vpxa is running

    webAccess (pid 1880) is running...

    openwsmand (pid 2534) is running...

    xinetd (pid 1827) is running...

    Now, on our production VMware HA/DRS cluster on ESX 3.5 Update 2, we do not find this "vmware-aam" service, but HA and DRS work very well, and the logs in /var/log/vmware/aam indicate that everything is fine.

    service --status-all | grep running

    crond (pid 2033) is running...

    gpm (pid 1882) is running...

    hpasmd is running...

    cmathreshd is running...

    cmahostd is running...

    cmapeerd is running...

    cmastdeqd is running...

    cmahealthd is running...

    cmaperfd is running...

    cmaeventd is running...

    cmaidad is running...

    cmafcad is running...

    cmaided is running...

    cmascsid is running...

    cmasasd is running...

    cmanicd (pid 2703) is running...

    cmasm2d is running... [OK]

    hpsmhd (pid 1986 1955) is running...

    VMware-pass (pid 3502) is running...

    ntpd (pid 1873) is running...

    USAGE: OVCtrl start | stop | restart | START | STOP | enable | disable

    USAGE: OVTrcSrv start | stop | restart

    cimserver (pid 3536) is running...

    Ramchecker is not running

    snmpd (pid 1745) is running...

    sshd (pid 1861 1447 1754) is running...

    syslogd (pid 1684) is running...

    klogd (pid 1688) is running...

    At least one virtual machine is still running.

    VMware VMkernel authorization daemon is running (pid 3472).

    VMware-vpxa is running

    webAccess (pid 1984) is running...

    openwsmand (pid 3642) is running...

    xinetd (pid 1854) is running...

    I did a few Google queries and couldn't find a definitive answer.

    The AAM service is required for HA; it is not required for DRS.

    If you find this or any other answer useful, please consider awarding points by marking the answer as correct or useful.

  • High availability of Hyperion 11.1.2.3

    Hello

    We need to implement high availability.  The Oracle document shows a compact deployment and horizontal scaling (and maybe that is the reason it does not deploy every app server on the second server).  But what if we want to dedicate a server to each Hyperion component individually?  Say, how do we implement HA exclusively for Shared Services with its own managed server?  On Windows, must I have domain IDs to ensure high availability?  We have no AD servers and have used local IDs to install Hyperion successfully for many implementations.

    Thank you.

    If you have deployed a Java web application to its own server, you can scale it out to other servers in the managed cluster.

    Take a look at "Clustering Java Web Applications Using the EPM System Configurator".

    Cheers

    John

  • vSphere high availability "Election" fails at 99% with "operation timed out" on 1 of the 2 hosts

    Hello

    We had a system with one ESXi 5.1 host with local disks.

    Now, we are adding redundancy with a second host on ESXi 5.5 U2 and a vCenter 5.5 appliance.

    After installing everything and adding it to vCenter, we upgraded the ESXi 5.1 host to ESXi 5.5 U2. The SAN works correctly (vMotion works on its own network card).

    Now, if I try to activate high availability, both servers install the HA agent and start the "Election".

    All the datastores (4) on the SAN are chosen for the HA heartbeat; the isolation response is "keep powered on" by default.

    One server always gets through this process, and the other keeps "Electing" until it reaches 100% and fails with "operation timed out".

    I've seen this problem on both servers, so I think the elected "master" does not have the problem, only the "slave".

    I have checked the following articles and carried out their steps, but it didn't work:

    VMware KB: Reconfiguring HA (FDM) on a cluster fails with the error: operation timed out

    -The services were running

    VMware KB: Configuring HA in VMware vCenter Server 5.x fails with the error: Operation timed out

    -All MTUs were set to 1500

    VMware KB: VMware High Availability configuration fails with the error: unable to complete the configuration of the HA agent...

    -The default gateway was not the same on the two hosts, but I fixed that. There are no route changes. The HA isolation setting is "leave powered on." After correcting this and disabling/re-enabling HA, the problem is still the same.

    VMware KB: Check and reinstall the correct version of the VMware vCenter Server agents

    -I ran "Reinstall ESX host management agents and HA agent on ESXi", and I checked that the HA agent was uninstalled and reinstalled during the reactivation of HA:

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. This fixed the election problem and I was able to run an HA test successfully, but when, after this test, I turned off the 2nd server (in order to test HA in the other direction), HA did not fail over to host 1 and everything stayed down. After pressing "Reconfigure HA", the election problem appeared again on host 1.

    Here are a few log extracts:

    - The vSphere HA availability state of this host changed to Election  info  29/11/2014 22:03  192.27.224.138

    - vSphere HA agent is healthy  info  29/11/2014 22:02:56  192.27.224.138

    - The vSphere HA availability state of this host changed to Master  info  29/11/2014 22:02:56  192.27.224.138

    - The vSphere HA availability state of this host changed to Election  info  29/11/2014 22:01:26  192.27.224.138

    - vSphere HA agent is healthy  info  29/11/2014 22:01:22  192.27.224.138

    - The vSphere HA availability state of this host changed to Master  info  29/11/2014 22:01:22  192.27.224.138

    - The vSphere HA availability state of this host changed to Election  info  29/11/2014 22:03:02  192.27.224.139

    - Message "vSphere HA host state" on 192.27.224.139 changed from green to red  info  29/11/2014 22:02:58  192.27.224.139

    - The vSphere HA agent for this host has an error: vSphere HA agent may not be properly installed or configured  warning  29/11/2014 22:02:58  192.27.224.139

    - The vSphere HA availability state of this host changed to Initialization Error  info  29/11/2014 22:02:58  192.27.224.139

    - The vSphere HA availability state of this host changed to Election  info  29/11/2014 22:00:52  192.27.224.139

    - Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host  info  29/11/2014 22:00:49  192.27.224.139

    - Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host  info  29/11/2014 22:00:49  192.27.224.139

    - Firewall configuration has changed. Operation 'enable' for rule set fdm succeeded.  info  29/11/2014 22:00:45  192.27.224.139

    - The vSphere HA availability state of this host changed to Uninitialized  info  29/11/2014 22:00:40  Reconfigure vSphere HA  192.27.224.139  root

    - vSphere HA agent on this host is disabled  info  29/11/2014 22:00:40  Reconfigure vSphere HA  192.27.224.139  root

    - Reconfigure vSphere HA on host 192.27.224.139: operation timed out.  root  HOSTSERVER01  29/11/2014 22:00:31  29/11/2014 22:00:31  29/11/2014 22:02:51

    - Configuring vSphere HA on 192.27.224.139: operation timed out.  System  HOSTSERVER01  29/11/2014 21:56:42  29/11/2014 21:56:42  29/11/2014 21:58:55

    Can someone please help me here?

    Or are there extra things I can check or provide?

    I'm currently running out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESX upgrade), these problems disappeared.

    While searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: unable to connect to the host

    And this cause could make sense, since the vCenter server was changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file, the <hostip> and <serverip> are correct (verified by using https://<hostip>/home).

    Tried this again today, no problem at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (this one is an "upgrade").

    OK, so the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license); they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the gateway had been provisioned only for future purposes.

    I have now changed the default gateway to the management address of the switch plugged into both hosts, which answers ping requests.

    This solved the problem.
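    A related aside: instead of (or in addition to) fixing the gateway, vSphere HA can be told to ping a different, reachable address for isolation detection via the cluster's advanced options (the address below is an invented example):

    ```shell
    # Hypothetical HA cluster advanced options (set in the cluster's
    # "Advanced Options" UI): ping a reachable management-switch IP
    # instead of the default gateway for isolation detection.
    das.isolationaddress0 = 192.27.224.1
    das.usedefaultisolationaddress = false
    ```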

  • High availability deployment of IPCC 4.5

    In a future HA architecture implementation, voice service will be provided by CallManager 5.0, which will integrate with IPCC 4.5. IPCC 4.5 (required with CM 5.0) does not implement high availability. How can we ensure that the help desk continues to operate if the IPCC goes down? One possibility might be to configure CM such that, if the IPCC goes down, all help desk calls are automatically and immediately routed to a hunt group (which includes all help desk extensions). Can this redirection be configured in CM? Is there a better option?

    Thanks in advance,

    SB

    This is your best bet. On the CTI route points for your call center, just set call forward on busy, no answer, and failure to the hunt pilot. That way, when the IPCC Express server is down, calls will be sent to your hunt pilot.

    Please rate useful posts.

    adignan - berbee

  • Using two 1552 bridges for high availability

    Switches: 3560

    routers: 2811

    AP: 1552e

    We have 4 1552 APs that we would like to install in a point-to-point environment. Currently they are in lightweight mode, but we would make them autonomous for this scenario, unless there is a feature in the controller I don't know about that would achieve what we need.

    We would like to deploy them in pairs, one pair on each side, with the first goal being high availability, so that if one goes down there is still connectivity. Our second objective would be to use both APs for higher bandwidth.

    I thought I'd try to configure a port channel, but from what I've read you can't do that; I'd like to know if I'm wrong. I saw someone else trying to do the same thing in the forums and there were some responses, but the links do not work for me.

    http://www.Cisco.com/en/us/partner/products/HW/wireless/ps5279/products_tech_note09186a0080736199.shtml  <-- this link doesn't work for me

    Just looking to be pointed in the best possible direction. It seems routing should take care of what we need, but I'd like to see if there are any other avenues.

    Thanks in advance.

    The best thing you can do is configure each P2P link as a trunk and use per-VLAN spanning-tree priority to balance traffic between the two links.  You can't really bundle them, because the bridge that sends the traffic expects the return traffic to come back through it.  It's the best way to use the two bridges; this is how it has been done in the past, and it still works quite well.
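    That per-VLAN balancing can be sketched on Cisco IOS switches as follows (VLAN IDs and priority values are invented for illustration): make each side the spanning-tree root for half the VLANs, so each bridge link forwards a different set.

    ```shell
    ! Hypothetical IOS sketch on switch A (mirror with swapped values on B).
    ! Switch A becomes root for VLAN 10, so VLAN 10 traffic prefers link A.
    spanning-tree vlan 10 priority 4096
    ! Switch B is configured with priority 4096 for VLAN 20, so VLAN 20
    ! traffic prefers link B; switch A keeps the higher (worse) value here.
    spanning-tree vlan 20 priority 8192
    ```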

    Thank you

    Scott

    Help others by using the rating system and marking answered questions as "answered".

  • vSphere high availability with 2 hosts

    Hi all

    I am trying to achieve the following using the Essentials Plus package. I have 2 identical machines with 2 TB of storage each. Both are running ESXi and I want to configure high availability and load balancing. From what I've read, VMware needs shared storage, some kind of SAN, if I want to use high availability. But I don't have such a machine. What I want to achieve is the following: VMware mirrors the disks on both machines (network RAID 1) and, if a hardware failure occurs on one of them, the other machine starts all the remaining VMs. In the same way, VMware should do load balancing. From what I've read on the internet, VMware had VSA, but this has been discontinued. VSAN is not an option, because I don't have 3 machines. One possible option would be, for example, StarWind, but I would have preferred an option from VMware itself.

    So, in short: is there a way to configure high availability and load balancing with 2 hosts and no shared storage? Preferably without third-party software.

    Kind regards

    Sebastian Wehkamp

    Technically, you can use DRBD on Linux (or HAST on FreeBSD, if you're on the dark side of the Moon) to create a fault-tolerant replicated block device between a pair of virtual machines running on the hypervisor nodes. Throw in NFSv4 mount-point failover on top and you have a nice VMware VM datastore. Tons of software-defined storage vendors have exactly this ideology inside their virtual storage appliances, if you look past the differences.

    However, as StarWind Virtual SAN is FREE for a 2-node setup in the VMware scenario (Hyper-V licensing is different, if you care) and can run on the free Hyper-V Server (no need to pay for Windows licenses), you may well end up with a faster route going with StarWind. It depends on what you want from the feature set (iSCSI? NFS? SMB3? Online dedupe? Cache?) and which forum you prefer for asking public support questions.

    --

    Thank you for your response. I thought that since the hardware is exactly the same, the secondary machine would be able to easily start the lost VMs. For synchronization, I thought of something like DRBD: when it cannot reach the other host, the secondary becomes primary. That works in the case of a hardware failure, but in a network failure a split-brain will occur that must be resolved manually. I will play with the replication and reconsider the StarWind software.
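    For what it's worth, the DRBD mirroring idea above is usually expressed as a resource file like the following (hostnames, device, and addresses are invented; split-brain handling still needs to be configured deliberately):

    ```shell
    # Hypothetical /etc/drbd.d/r0.res: synchronously mirror a local
    # partition between the two hosts over a dedicated link.
    resource r0 {
        protocol C;            # synchronous replication
        device    /dev/drbd0;  # replicated block device exported via NFS
        disk      /dev/sdb1;   # local backing partition
        meta-disk internal;
        on host1 {
            address 192.168.10.1:7789;
        }
        on host2 {
            address 192.168.10.2:7789;
        }
    }
    ```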

  • Single point of failure/Cluster/high availability on Essbase 11.1.2

    Hello

    The project I am currently working on has only one Essbase 11.1.2 server on Unix, which is a threat as a single point of failure. Management is thinking of upgrading Essbase to the higher version 11.1.2.1 to achieve clustering & high availability and to overcome the single-point-of-failure situation of having just one Essbase server.

    My concerns here are;

    Note: Essbase in this project is used to generate reports only; write-back of data is not allowed.

    (1) Does Essbase version 11.1.2 not support cluster / HA / failover to another node?

    (2) If yes, can you please share a link to the documentation for version 11.1.2?

    There are other Hyperion components, such as HFM and FDM, at the same version in this project. If failover is not supported in Essbase version 11.1.2, do we need to upgrade the other components along with Essbase, or is upgrading just Essbase to version 11.1.2.x sufficient?

    Please share your valuable thoughts on that.

    Thanks in advance for your ideas.

    Kind regards

    UB.


    We found the reason why the Essbase cluster does not work when implemented with failover / high availability: Essbase Analytics Link (EAL) is used in the project. Because of this, we have stopped the Essbase cluster implementation, and Essbase in my project remains a SPOF (single point of failure).

    Not sure if Oracle Corp. will come out with EAL product improvements that can support Essbase failover.

    "

    1. Essbase Analytics Link (EAL release 11.1.1.4) does not support high availability facilities. This means that EAL cannot connect to an Essbase cluster; it only ever connects to a single regular Essbase server through the Administration Services console. Although we can define high availability for Essbase, EAL will use a single Essbase server. If that server breaks down and another Essbase server takes over, we must manually redefine the EAL bridges on the new Essbase server and re-create the Essbase database. This is not considered high availability.
    2. There is no technical mechanism to point an Essbase Analytics Link at an Essbase cluster. EAL is only supported connecting to a single named Essbase server.
    3. Transparent partitions cannot be supported when Essbase is configured as a highly available component and EAL is used. The EAL bridges in the active project use transparent partitions.

    "

    Please feel free to correct me if this interpretation is wrong.

    Thank you

    UB.
