High availability & Cluster size with blades

I am currently planning a VMware infrastructure, a project that has been coming along well thanks in large part to the members of this community. I have now reached the HA design and want to make sure I understand the limitations where simultaneous host failures are concerned.

Imagine that I've implemented the following:

  • Two blade chassis, each with 8 blades in them, for a total of 16 hosts

  • The 16 hosts are in a single cluster, with two of the primary nodes residing in one chassis and three in the other

  • The HA requirements are met on each blade (they have access to the same shared storage and virtual network configuration, and DNS is configured correctly)

  • A lack of resources does not prevent all of the virtual machines from running on just 8 hosts

Now, imagine that one chassis and its 8 hosts fail at the same time. Am I right in thinking that only the virtual machines from four of those failed hosts would be successfully restarted by HA on the 8 remaining hosts in the second chassis? I base this assumption on the fact that, since there can only be five primary nodes in a cluster, no more than 4 host failures can be tolerated. Or will it actually be even less than that, because I may only have two primary nodes left on the second chassis?

If I change my setup to consist of two clusters of 8 hosts each (4 in each chassis), am I right in thinking that I can now survive the loss of a full chassis, since each cluster will only have lost 4 hosts? Or will I still have problems because, again, I may only have two primary nodes on the surviving chassis?

If the 8-host cluster is not resilient either, then it would seem that the maximum number of hosts per chassis in a resilient two-blade-chassis design is 4, which is rather limiting.

I will of course test the failure of an entire chassis before putting anything into production on the environment, whichever design I go for. If I shut down one blade chassis, will all 5 primary nodes remain on the second chassis after the first one comes back? If so, then I would obviously need to move some of them back manually after any chassis loss (test or otherwise).

You're all good. As long as there is a single primary node alive somewhere, it will restart everything. So as long as all 5 primaries are not sitting in the same chassis, you're fine.

See http://rodos.haywood.org/2008/12/blade-enclosures-and-ha.html for how to check that the primaries don't all end up in the same chassis, as well as links to Duncan's posts for more details.
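For reference, here is a rough sketch of checking primary node placement from a classic ESX service console. This is the legacy AAM-based HA that Duncan's posts cover; the CLI path, the sub-commands and the host name below are from memory and should be treated as assumptions to verify against those posts:

    # Run on any host in the cluster, from the service console (legacy AAM-based HA)
    /opt/vmware/aam/bin/Cli        # opens the interactive AAM command shell (path may vary by release)
    AAM> ln                        # lists the cluster nodes and shows which ones are Primary
    AAM> promotenode esx05         # 'esx05' is a hypothetical host name; promotes it to primary
    AAM> exit

After a chassis failure test, re-running 'ln' shows whether all five primaries have ended up in the surviving enclosure and need to be redistributed.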

Rodos

Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/

Tags: VMware

Similar Questions

  • vSphere High Availability 'Election' fails at 99% with 'operation timed out' on 1 of the 2 hosts

    Hello

    We had a system with 1 ESXi 5.1 host using local disks.

    Now we are adding redundancy with a second host running ESXi 5.5 U2 and a vCenter 5.5 appliance.

    After installing everything and adding it to vCenter, we upgraded the ESXi 5.1 host to ESXi 5.5 U2. The SAN works correctly (vMotion works over its own NIC).

    Now, if I try to enable High Availability, both servers install the HA agent and start the 'Election' phase.

    All the datastores (4) on the SAN are selected for HA heartbeating, and the isolation response is 'Leave powered on' by default.

    One server always gets through this process, while the other keeps 'electing' until it reaches 100% and fails with 'operation election timed out'.

    I have seen this problem on both servers, so I think the elected 'master' does not have the problem, only the 'slave'.

    I have checked and followed these articles, but they did not help:

    VMware KB: Reconfiguration HA (FDM) on a cluster fails with the error: operation timed out

    -The services were running

    VMware KB: Configuration HA in VMware vCenter Server 5.x does not work, error: Operation Timed out

    -All the MTU has been set at 1500

    VMware KB: VMware High Availability Configuration fails with the error: unable to complete the configuration of the ag of HA...

    - the default gateway was not the same on the two hosts, but I fixed that. There are no route changes. The HA setting is 'Leave powered on'. After correcting this and disabling/re-enabling HA, the problem is still the same.

    VMware KB: Check and reinstall the correct version of the VMware vCenter Server agents

    - I ran the 'Reinstall ESX host management agents and HA agents on ESXi' procedure for the HA agent, and I checked that it was uninstalled and reinstalled when HA was re-enabled.

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. The fix cleared the election problem and I was able to run an HA test successfully, but when, after this test, I powered off the 2nd server (in order to test HA in the other direction), HA did not fail anything over to host 1 and everything stayed down. After clicking 'Reconfigure for vSphere HA', the election problem appeared again on host 1.

    Here are a few log extracts:

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:03   192.27.224.138

    - vSphere HA agent is healthy   info   29/11/2014 22:02:56   192.27.224.138

    - vSphere HA availability state of this host changed to Master   info   29/11/2014 22:02:56   192.27.224.138

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:01:26   192.27.224.138

    - vSphere HA agent is healthy   info   29/11/2014 22:01:22   192.27.224.138

    - vSphere HA availability state of this host changed to Master   info   29/11/2014 22:01:22   192.27.224.138

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:03:02   192.27.224.139

    - Alarm 'vSphere HA host status' on 192.27.224.139 changed from green to red   info   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA agent for this host has an error: the vSphere HA agent may not be properly installed or configured   warning   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA availability state of this host changed to Initialization Error   info   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:00:52   192.27.224.139

    - Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host   info   29/11/2014 22:00:49   192.27.224.139

    - Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host   info   29/11/2014 22:00:49   192.27.224.139

    - Firewall configuration has changed. Operation 'enable' for rule set fdm succeeded.   info   29/11/2014 22:00:45   192.27.224.139

    - vSphere HA availability state of this host changed to Uninitialized   info   29/11/2014 22:00:40   Reconfigure vSphere HA host   192.27.224.139   root

    - vSphere HA agent on this host is disabled   info   29/11/2014 22:00:40   Reconfigure vSphere HA host   192.27.224.139   root

    - Reconfigure vSphere HA host   192.27.224.139   Operation timed out.   root   HOSTSERVER01   29/11/2014 22:00:31   29/11/2014 22:00:31   29/11/2014 22:02:51

    - Configuring vSphere HA   192.27.224.139   Operation timed out.   System   HOSTSERVER01   29/11/2014 21:56:42   29/11/2014 21:56:42   29/11/2014 21:58:55

    Can someone please help me with this?

    Or suggest additional things I can check or provide?

    I'm currently running out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESX upgrade), those problems disappeared.

    When searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: Unable to connect to the host

    That cause could make sense, since the vCenter server changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file, the <hostip> and <serverip> entries are correct (verified using https://<hostip>/home).

    Tried this again today, no problem at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (whereas this one is an 'upgrade').

    OK, so the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license); they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the configured gateway was only there for future use.

    I have now changed the default gateway to the management address of the switch that both hosts are plugged into, which does answer ping requests.

    This solved the problem.
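    For anyone hitting the same symptom, a quick sanity check from the ESXi shell is to confirm that whatever address HA uses for isolation detection (the default gateway, unless overridden with the das.isolationaddress advanced option) actually answers ping from the management VMkernel interface. A minimal sketch, assuming ESXi 5.x and a hypothetical management gateway of 192.27.224.1:

        esxcli network ip route ipv4 list    # shows the default gateway of the management network
        vmkping 192.27.224.1                 # pings it from a VMkernel interface; it must reply for isolation detection to work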

  • vSphere High Availability with 2 hosts

    Hi all

    I am trying to achieve the following using the Essentials Plus kit. I have 2 identical machines, each with 2 TB of storage. Both are running ESXi and I want to configure high availability and load balancing. From what I have read, VMware needs shared storage, i.e. some kind of SAN, if I want to use high availability. But I don't have such a machine. What I want to achieve is the following: VMware mirrors the disks across both machines (network RAID 1), and if a hardware failure occurs on one of them, the other machine starts all the remaining VMs. VMware should do load balancing in the same way. From what I have read on the internet, VMware used to have VSA, but it has been discontinued. VSAN is not an option, because I don't have 3 machines. One possible option would be, for example, StarWind, but I would have preferred an option from VMware itself.

    So, in short, is there a way to configure high availability and load balancing with 2 hosts and no shared storage? Preferably without third-party software.

    Kind regards

    Sebastian Wehkamp

    Technically, you can use DRBD on Linux (or HAST on FreeBSD if you're on the dark side of the Moon) to create a fault-tolerant, replicated block device between a pair of virtual machines running on your hypervisor nodes. Throw in a failover NFSv4 mount point on top and you have a nice VMware VM datastore. Tons of software-defined storage vendors have exactly this ideology inside their virtual storage appliances, if you look under the hood.

    However, as StarWind Virtual SAN is FREE for a 2-node setup in the VMware scenario (Hyper-V licensing is different, if you care) and can run on the free Hyper-V Server (no need to pay for Windows licenses), you may end up with a faster path going the StarWind route. It depends on what you want from the feature set (iSCSI? NFS? SMB3? Inline dedupe? Cache?) and which forum you prefer for asking public support questions.
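    For a rough idea of what the DRBD half of that looks like, here is a minimal sketch assuming DRBD 8.4 on two Linux VMs (the resource name, device paths, host names and IPs below are all hypothetical):

        # /etc/drbd.d/r0.res (identical on both nodes)
        resource r0 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          meta-disk internal;
          on nodeA { address 192.168.10.1:7789; }
          on nodeB { address 192.168.10.2:7789; }
        }

        # then, on both nodes:
        drbdadm create-md r0            # initialise DRBD metadata
        drbdadm up r0                   # bring the resource up
        # on one node only, to start the initial sync:
        drbdadm primary --force r0
        mkfs.ext4 /dev/drbd0            # format once, then export the mounted filesystem over NFS as the datastore

    The NFS failover, fencing and split-brain handling are the hard parts, which is exactly why the packaged virtual storage appliances mentioned above exist.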

    --

    Thank you for your response. I thought that since the setups are exactly the same, the secondary machine would be able to easily start the VMs of the lost one. For the synchronization, I was thinking of something like DRBD: when it can no longer reach the other host, the secondary becomes primary. That works in the case of a hardware failure; with a network failure you get a split-brain that has to be resolved manually. I will play around with replication and reconsider the StarWind software.

  • Cluster high availability

    How can I configure a high availability cluster in GemFire? I need a cluster with two nodes and two servers.

    For high availability of data, configure redundancy for your data regions. If you are using partitioned regions, see the section on configuring high availability for a partitioned region in the GemFire User's Guide for all the details and available options. You can also use persistence.

    For high availability of client/server messaging, see Configuring Highly Available Servers.

    For high availability of function execution, see Function Execution in the vFabric GemFire documentation.
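    As a minimal illustration of the region-redundancy approach (assuming GemFire 7+ with gfsh available; the region name and locator address are hypothetical), creating a partitioned region that keeps a redundant copy of every bucket could look like:

        gfsh -e "connect --locator=localhost[10334]" \
             -e "create region --name=orders --type=PARTITION_REDUNDANT"

    With two cache servers in the distributed system, each bucket then has a primary on one member and a redundant copy on the other, so losing either node does not lose data.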

  • Single point of failure/Cluster/high availability on Essbase 11.1.2

    Hello

    The project I am currently working on has only one Essbase 11.1.2 server, running on Unix, which is a threat as a single point of failure. Management is thinking of upgrading Essbase to a higher version, 11.1.2.1, to get clustering & high availability and to overcome the single-point-of-failure situation of having just one unique Essbase server.

    My concerns here are:

    Note: Essbase in this project is used to generate reports only; writing data back is not allowed.

    (1) Does Essbase version 11.1.2 not support clustering / HA / failover to the other node?

    (2) If it does, can you please share the link to the documentation for version 11.1.2?

    There are other Hyperion components, such as HFM and FDM, used at the same version in this project. If failover is not supported in Essbase version 11.1.2, do we need to upgrade the other components along with Essbase, or is upgrading just Essbase to version 11.1.2.x sufficient?

    Please share your valuable thoughts on that.

    Thanks in advance for your ideas.

    Kind regards

    UB.


    Found the reason why an Essbase cluster does not work with failover / high availability when Essbase Analytics Link (EAL) is used in the project. Because of this we have stopped implementing the Essbase cluster, and Essbase in my project remains a SPOF (single point of failure).

    Not sure whether Oracle Corp. will come out with EAL product improvements that can support Essbase failover.

    "

    1. Essbase Analytics Link (EAL release 11.1.1.4) does not support high availability facilities. This means that EAL cannot connect to an Essbase cluster; it only ever connects to a regular Essbase server through the Administration Services console. Although we can define high availability for Essbase, EAL will use a single Essbase server. If that server goes down and another Essbase server takes over, we must manually redefine the EAL bridges against the new Essbase server and re-create the Essbase database. That is not considered high availability.
    2. There is no technical mechanism for Essbase Analytics Link to point to an Essbase cluster. EAL is only supported connecting to a single, named Essbase server.
    3. Transparent partitions cannot be supported when Essbase is configured as a highly available component and EAL is used. The EAL bridges in the current project use transparent partitions.

    "

    Please feel free to correct me if this interpretation is wrong.

    Thank you

    UB.

  • Is it possible and effective: high availability (failover) on 2 hosts with local storage?

    Hello

    I am having a discussion about implementing HA (failover) with 2 hosts that only have local storage.

    My advice is to go with shared storage. But there is another view: use the MS failover cluster service with local storage, to save on the shared storage hardware.

    2 machines 'mirrored' on each host. Is that feasible at all?

    The aim: have 2 virtual machines (an SQL server and an application server) in a highly available environment. The workload of the machines will be carried by 2 very powerful servers that are due to be bought.

    1. Please explain briefly whether there is any local-storage feature that can provide HA.

    2. I have only had to cluster in a virtual environment, so I assume that shared storage is only used in the virtual environment?

    Thank you.

    Michael.

    There is software that can replicate a running physical or virtual machine to another location reachable over the network. That could be as simple as between VMs on two closely hosted ESXi servers, or across a high-speed WAN. Take a look at things like Neverfail, DoubleTake and others. The costs would probably be a bit prohibitive in a smaller environment. It might be possible, and maybe even practical, to use a virtual storage appliance that offers failover replication. It uses the storage on each host and presents it as the datastore. Open-E is one example.

    The Microsoft clustering solution needs shared storage.

  • Is the vmware-aam service required for High Availability and DRS? Has it been renamed in ESX 3.5 U3?

    Hello Tech community.

    Is the vmware-aam service required for High Availability and DRS? Has it been renamed in ESX 3.5 U3?

    On one of our ESX 3.5 U3 infrastructures, VMware HA and DRS work very well, and the following list of services is running, including vmware-aam.

    Here is the output from an ESX 3.5 U3 system:

    service --status-all | grep running

    crond (pid 1900) is running...

    gpm (pid 1851) is running...

    VMware-pass (pid 1942) is running...

    ntpd (pid 1842) is running...

    cimserver (pid 2452) is running...

    snmpd (pid 1557) is running...

    sshd (pid 4074 4037 1566) is running...

    syslogd (pid 1321) is running...

    klogd (pid 1325) is running...

    At least one virtual machine is still running.

    VMware-aam is running

    VMware VMkernel authorization daemon is running (pid 1911).

    VMware-vpxa is running

    webAccess (pid 1880) is running...

    openwsmand (pid 2534) is running...

    xinetd (pid 1827) is running...

    Now, on our production VMware cluster running ESX 3.5 Update 2, we do not find this "vmware-aam" service, yet HA and DRS work very well, and the logs in /var/log/vmware/aam indicate that everything is fine.

    service --status-all | grep running

    crond (pid 2033) is running...

    gpm (pid 1882) is running...

    hpasmd is running...

    cmathreshd is running...

    cmahostd is running...

    cmapeerd is running...

    cmastdeqd is running...

    cmahealthd is running...

    cmaperfd is running...

    cmaeventd is running...

    cmaidad is running...

    cmafcad is running...

    cmaided is running...

    cmascsid is running...

    cmasasd is running...

    cmanicd (pid 2703) is running...

    cmasm2d is running... [OK]

    hpsmhd (pid 1986 1955) is running...

    VMware-pass (pid 3502) is running...

    ntpd (pid 1873) is running...

    USAGE: OVCtrl start | stop | restart | START | STOP | enable | disable

    USAGE: OVTrcSrv start | stop | restart

    cimserver (pid 3536) is running...

    Ramchecker is not running

    snmpd (pid 1745) is running...

    sshd (pid 1861 1447 1754) is running...

    syslogd (pid 1684) is running...

    klogd (pid 1688) is running...

    At least one virtual machine is still running.

    VMware VMkernel authorization daemon is running (pid 3472).

    VMware-vpxa is running

    webAccess (pid 1984) is running...

    openwsmand (pid 3642) is running...

    xinetd (pid 1854) is running...

    I did a few Google queries and couldn't find a definitive answer.

    The AAM service is required for HA - it is not required for DRS.
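    A quick way to confirm this on a given host is to query the init scripts from the ESX 3.5 service console (a sketch; the script names are assumed to match the processes shown in the listings above):

        service vmware-aam status        # the legacy HA (AAM) agent
        service vmware-vpxa status       # the vCenter management agent, needed whether or not HA is enabled

    DRS itself runs inside vCenter Server, which is why no separate DRS service appears on the hosts.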

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Implementing shared-disk high availability with TimesTen 11

    Hi all

    I need to set up shared-disk high availability using an HACMP cluster on AIX.
    Could you please share the steps/info for implementing this with TimesTen 11?

    Kind regards
    -AK

    Published by: tt0008 on Sep 9, 2010 02:09

    Hello

    We do not really recommend using shared-disk based HA with TimesTen since (a) it is a bit complex to set up and (b) it is not officially a 'supported' configuration. Also, failover and recovery times are usually much longer than with an HA configuration based on TimesTen replication. I would invite you to consider using TimesTen replication for HA. That is what it is designed for, and it is the recommended approach.
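    For orientation, the replication-based approach Chris recommends is usually built around an active standby pair. A heavily simplified sketch (TimesTen 11g syntax as I recall it; the data store and host names are hypothetical, and the standby still has to be instantiated afterwards with the ttRepAdmin -duplicate utility):

        ttIsql master1
        Command> CREATE ACTIVE STANDBY PAIR master1 ON "hostA", master1 ON "hostB";
        Command> CALL ttRepStateSet('ACTIVE');
        Command> CALL ttRepStart;

    Failover is then a matter of promoting the standby to active rather than re-mounting shared disks, which is where the shorter recovery times come from.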

    Kind regards

    Chris

  • Help with array-to-cluster conversion and the cluster size difference, please!

    I will admit to still struggling with LabVIEW arrays, and as usual the behavior in the attached VI makes no sense to me!  The attached VI shows a 6-element cluster being converted to an array, then immediately back to a cluster.  The reconstructed cluster has 9 elements, even though the array size indicator correctly displays 6.  How do I maintain the initial cluster size when converting to, and then back from, an array?

    The f

    Well, if you had the context-sensitive help running you would see:

    "Right-click the function and select Cluster Size from the shortcut menu to set the number of elements in the cluster.

    The default is nine. The maximum cluster size for this function is 256."

    You must set the size. There is no way for the function to know how many elements are in the array.

  • SNS with high availability option

    I need help understanding the network and connectivity requirements for deploying Cisco Show and Share with the high availability option, where the DC and DR sites are in different geographical locations.

    As I understand it, to achieve high availability the DMM and SnS servers require L2 connectivity between the primary and the secondary server. How can this be achieved in a scenario where the two data centers are connected by WAN/MAN links?

    Thank you.

    Chaitanya Datar

    + 91 99225 83636

    Hi, Datar,

    I have already asked the TAC this question, and unfortunately it is clear that today the HA mode, with the 2 servers connected 'back to back' via Ethernet2, REQUIRES them to be on the same L2 network.

    It is not yet based on the IP routing layer, and therefore this design is not supported when using remote data centers...

    :-(

    It's a shame, I know, and I have pointed it out to the SnS BU.

    Maybe a future version will be able to work over a routed L3 IP network...

    Wait and see.

    Hervé Collignon

    Converged Communications Manager

    Dimension Data Luxembourg.

  • High availability with two 5508 WLAN controllers?

    Hi all

    We are considering implementing a new wireless solution based on the Cisco WLC 5508 and 1262N access points. We intend to buy about 30 access points and have two options: either buy one WLC 5508-50 or, for redundancy, two 5508-25 controllers.

    Is it possible to configure two WLC 5508s as a high availability solution, so that all access points are distributed across the two WLCs and, if one WLC fails, the other one takes over and manages all the APs?

    If we have 30 access points and one of the two WLC 5508-25s fails, then of course not all 30 but only 25 access points can be managed by the remaining one. Is there some sort of control over which access points get managed and which do not?

    What does such a configuration look like in general? Is setting up a two-controller installation quite complex, or simple?

    Thank you!

    Michael

    Hi Michael,

    Don't forget that the 5508 works with a licensing system. The hardware can support up to 500 APs, but it depends on the license that you put in.

    I think two 5508s with 25-AP licenses will be more expensive than one 5508 with a 50-AP license.

    If you have 2 WLCs, it is best NOT to spread the access points between the WLCs. Doing so increases the roaming complexity (the WLCs have to hand clients off to each other all the time). If your point was to gain speed, it really doesn't matter, as the 5508 can have up to 8 Gbit/s of uplink bandwidth and has the CPU capacity to handle 50 APs with no problems at all. So I find it best to have all the access points on 1 WLC; then, if something goes wrong, all the APs migrate to the other WLC anyway.

    If you want 50 APs to fail over to a 25-AP WLC, then yes, you can select which ones will join. The APs have a priority system, so you assign priorities. If the WLC is at full capacity but higher-priority APs are trying to join, it will kick off lower-priority APs to let the high-priority ones connect.

    WLCs are not exactly 'HA'. It's just that you either have 2 WLCs working together (as if you had 700 APs and needed 2 WLCs) and handing clients off to each other, or all the APs sit on one WLC and, when it breaks down, they join the other available controller.

    The only thing to do is to put both WLCs in the same mobility group so that they know about each other.

  • Changing the disk cluster size with VMware Converter

    Morning all

    We will probably be replacing our storage environment with one provided by a new vendor.

    Our current vendor recommends a 4 K cluster size, so all our VMDKs are configured with a 4 K cluster size. Happy days.

    However, the new vendor recommends 8 K.

    So I thought of V2V-ing our server estate to the new platform, changing the block size in the Advanced tab of VMware Converter in the process, which would give us systems formatted with 8 K blocks.

    There is, however, an MS requirement that the system partition is 4 K (otherwise it won't boot). It's that pesky little 100 MB partition...

    Now comes the rub:

    The 100 MB system partition is part of the first disk (C:\). If I P2V the first disk with an 8 K cluster size, I of course get a non-bootable server.

    When what I actually want is to V2V drive 1 (C:\) as 8 K, but leave the single small 100 MB system partition at the default of 4 K. However, because they are part of the same disk, and Converter seems to work at the disk level rather than the partition level, Converter just wants to make everything the same, which is the problem.

    Any ideas on how to accomplish the above?

    See you soon

    P

    Hi there mate

    Sorry for the delay; we are testing at the mo and I couldn't devote much time to this.

    Anyway, I cracked it. I was testing on a 2K3 box, where something like MiniTool would be needed to finish the cluster size change on C:\ for the reasons given.

    But when V2V-ing (with VMware Converter) a 2K8 or 2012 box, the system partition is presented to Converter as an independent disk and can therefore stay at 4 K, while C:\, D:\ etc. can be adjusted as required using the advanced settings.

    So in the end you have a system with drives using the required cluster size, and it is still bootable...

    Just for future reference, should it be useful to anybody.

    P

  • OBIA components on a server cluster for high availability

    Hello

    The requirement is to install OBIA 7.9.6.4 (all components) on a clustered, high-availability server.

    OBIA 7.9.6.4 software and versions:
    OBIEE 11.1.1.7
    Informatica 9.1.0
    DAC 11.1.1

    I would like to know what software is to be installed on the cluster server and how.

    I can't find any installation document for this.

    Any help is appreciated.

    Thank you
    Aerts

    need an urgent answer!

  • vWorkspace component high availability design tips

    Hi all

    I would like to ask for some advice regarding designing the vWorkspace components to be highly available. Suppose the vWorkspace components will be deployed on vSphere or SCVMM-managed hypervisors, so HA is in place in case a host fails. In that situation, do we still need redundant vWorkspace components (N+1 VMs)?

    On another note, I understand that we can add a couple of connection brokers for vWorkspace in the vWorkspace Management Console and, based on KB 99163, it would just work. I'm not sure how the traffic flows when an application is accessed through Web Access, though. As in, I would guess the connection broker has to be 'defined' for the Web Access requests to know which broker to call. Or is this done automatically? Does Web Access choose randomly which connection broker to go to?

    Thanks for any advice in advance

    Kind regards

    Cyril

    Hi Cyril,

    Great questions. As with any layered IT architecture, you must plan HA and redundancy at all the points of failure dictated by your environment or Service Level Agreements (SLAs). For vWorkspace, the center of its universe is SQL, and you must plan for its failure and recovery accordingly. In some environments a full backup can meet the HA requirement. In others, full SQL clustering, mirroring, replication, or Always-On configurations may be required. For our broker, we recommend an N+1 deployment in most HA scenarios. When you move out to the peripheral or enabling components, you must evaluate each component, its failure impact and its value in order to determine the appropriate HA approach.

    Load balancing between several brokers is done automatically by logic in the client connectors. In the case of Web Access, when you configure the Web Access site in the Management Console, the broker list is included in the Web Access configuration XML file. Like the client connectors, Web Access includes balancing logic that automatically distributes the client load across the available brokers.

    If you have any questions about specific components and their HA or architecture requirements, please add them to the discussion.

  • How to set up a single database instance for high availability and disaster tolerance

    Hi Experts,

    I have a single database and instance that I need to configure for high availability and disaster tolerance.

    What DR options are available for synchronizing the database to a remote site with a delay of 5 to 10 minutes?

    The application connects to the local site; the DR site would be the remote site.

    1. Oracle Fail Safe with SAN?

    2. What is the integrated fail-safe solution on a Linux CentOS/OEL system?

    3. If the storage is on the SAN (for example), is it possible to set up a shell script that detects whether the source database has been down for 5 minutes, mounts the SAN-stored database files on the remote machine, and changes the IP in the application, so that it no longer connects to the source IP address?

    Thank you and best regards,

    IVW

    Hello

    Failure can occur at any level.

    1. Oracle Fail Safe with SAN?

    --> Do note that if the failure is in the storage itself, Fail Safe will do nothing to bring the data back.

    --> With Fail Safe, the only assurance is that when the MS cluster moves the disk and services to a different node, the configured services are started.

    2. What is the integrated fail-safe solution on a Linux CentOS/OEL system?

    --> Under Linux you need to set up the scripts to run yourself, and you can look at an OS-level cluster option.

    3. If the storage is on the SAN (for example), is it possible to configure a shell script that detects whether the source database has been down for 5 minutes, mounts the SAN-stored database files on the remote machine, and changes the IP in the application, so that it no longer connects to the source IP address?

    --> This is what you would get from an OS cluster...
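    --> As a rough illustration of the polling idea in point 3 (which, as noted, an OS cluster or Data Guard handles far more robustly), the naive detection loop would be something like the sketch below; the connect string and the takeover script are hypothetical placeholders:

        #!/bin/sh
        # Naive DR watcher: after 5 consecutive failed checks (one per minute), run a site-specific takeover
        FAILED=0
        while true; do
          # -L makes sqlplus exit non-zero if the logon fails (e.g. the database is down)
          if echo "select 1 from dual;" | sqlplus -s -L monitor/secret@PRIMDB >/dev/null 2>&1; then
            FAILED=0
          else
            FAILED=$((FAILED+1))
          fi
          if [ "$FAILED" -ge 5 ]; then
            /usr/local/bin/takeover.sh   # hypothetical: mount the SAN LUNs, start the DB, repoint the application IP
            break
          fi
          sleep 60
        done

    A script like this has no fencing, so if the primary is merely unreachable you can end up with the database open in two places, which is one more reason to prefer the cluster or Data Guard route.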

    Points to note:

    ================

    --> If there is a sudden power failure, we can expect there may be lost writes in your data blocks, which will bring inconsistency to your data files and redo.

    --> What if there is a problem with your disk?

    --> What if there is a problem with your complete DC (how will you mount the FS on a remote server?)

    Note that what we are discussing here is HA, and you are trying to keep an idle server around all the time (a server with spare RAM & CPU).

    Why not look at a RAC option? You can also have an extended cluster...

    And to avoid data loss and to meet the RPO and RTO at all times (even if the whole DC goes down, the storage fails or a server crashes), you may need to use Oracle Data Guard...

    Ask if you have any questions.

    Thank you
