Tips on designing vWorkspace components for high availability

Hi all

I'd like to ask for some advice on designing vWorkspace components for high availability. Suppose the vWorkspace components will be deployed on vSphere or on SCVMM-managed hypervisors, so hypervisor-level HA is already in place if a host fails. In that situation, do we still need redundant vWorkspace components (N+1 VMs)?

On another note, I understand that we can add a pair of vWorkspace connection brokers in the vWorkspace Management Console, and based on KB 99163 it should just work. What I'm not sure about is how the traffic flows when an application is launched through Web Access. I assume the connection broker to use would have to be 'defined' somewhere so that Web Access knows which broker to call, or is this handled automatically? Does Web Access simply pick a connection broker at random?

Thanks in advance for any advice.

Kind regards

Cyril

Hi Cyril,

Big questions. As with any layered IT architecture, you need to plan HA and redundancy at every point of failure required by your environment or service level agreements (SLAs). For vWorkspace, the center of its universe is SQL, so you must plan for its failure and recovery accordingly. In some environments full backups can meet the HA requirement; in others full SQL clustering, mirroring, replication, or Always On configurations may be required. For our brokers, we recommend an N+1 deployment in most HA scenarios. As you move out to the peripheral and enabling components, you need to evaluate each component's failure impact and weigh it against your needs to determine the appropriate HA approach.

Load balancing across multiple brokers is handled automatically by logic in the client connectors. In the case of Web Access, when you configure the Web Access site in the Management Console, the broker list is included in the Web Access configuration XML file. Like the client connectors, Web Access includes load-balancing logic that automatically distributes client load across the available brokers.

If you have questions about specific components, HA requirements, or architecture, please add them to the discussion.

Tags: Dell Tech

Similar Questions

  • Moving components in the designer

    I use JDeveloper 11g. Is it possible to move components directly (with the mouse) in the designer and place them where I want (right, left, center, ...)?

    Hello

    You cannot place components wherever you want (as you can in other technologies such as Swing panels, Oracle Forms, etc.). However, you can arrange them as required by placing them in different layouts.

    There are several layouts in ADF Faces that are useful (standalone or in combination with one another, as needed) for designing your page.

    Check out the [ADF Faces demo | http://jdevadf.oracle.com/adf-richclient-demo], which will help you understand the different layouts better.

    HTH.

    Arun-

  • vSphere High Availability 'Election' fails at 99% with 'operation timed out' on 1 of the 2 hosts

    Hello

    We had a system with one ESXi 5.1 host using local disks.

    Now we are adding redundancy with a second host running ESXi 5.5 U2 and a vCenter 5.5 appliance.

    After installing everything and adding it all to vCenter, we upgraded the ESXi 5.1 host to ESXi 5.5 U2. The SAN works correctly (vMotion works on its own NIC).

    Now, when I try to enable High Availability, both servers install the HA agent and start the 'Election' step.

    All the datastores (4) on the SAN are selected for HA heartbeating, and the isolation response is 'Leave powered on' by default.

    One server always completes this process, but the other keeps 'electing' until it reaches 100% and fails with an 'operation timed out' error on the election.

    I've seen this problem on both servers, so I think the elected 'master' does not have the problem, only the 'slave'.

    I have checked and worked through these articles, but they did not help:

    VMware KB: Reconfiguration HA (FDM) on a cluster fails with the error: operation timed out

    -The services were running

    VMware KB: Configuration HA in VMware vCenter Server 5.x does not work, error: Operation Timed out

    -All MTUs are set to 1500

    VMware KB: VMware High Availability Configuration fails with the error: unable to complete the configuration of the HA agent...

    -The default gateway was not the same on the two hosts, but I fixed that. There are no route changes. The HA isolation setting is 'Leave powered on'. After correcting this and disabling/re-enabling HA, the problem is still the same.

    VMware KB: Check and reinstall the correct version of the VMware vCenter Server agents

    -I ran the 'Reinstall ESX host management agents and HA agent on ESXi' procedure, and I checked that the HA agent was uninstalled and reinstalled when HA was re-enabled.

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. It fixed the election problem and I was even able to run an HA test successfully, but when, after this test, I powered off the 2nd server (to test HA in the other direction), HA did not fail anything over to host 1 and everything stayed down. After clicking 'Reconfigure HA', the election problem appeared again on host 1.

    Here are a few log extracts:

    -vSphere HA availability state of this host changed to Election    info    29/11/2014 22:03    192.27.224.138

    -vSphere HA agent on this host is healthy    info    29/11/2014 22:02:56    192.27.224.138

    -vSphere HA availability state of this host changed to Master    info    29/11/2014 22:02:56    192.27.224.138

    -vSphere HA availability state of this host changed to Election    info    29/11/2014 22:01:26    192.27.224.138

    -vSphere HA agent on this host is healthy    info    29/11/2014 22:01:22    192.27.224.138

    -vSphere HA availability state of this host changed to Master    info    29/11/2014 22:01:22    192.27.224.138

    -vSphere HA availability state of this host changed to Election    info    29/11/2014 22:03:02    192.27.224.139

    -Alarm 'vSphere HA host status' on 192.27.224.139 changed from Green to Red    info    29/11/2014 22:02:58    192.27.224.139

    -vSphere HA agent for this host has an error: the vSphere HA agent may not be installed or configured correctly    warning    29/11/2014 22:02:58    192.27.224.139

    -vSphere HA availability state of this host changed to Initialization Error    info    29/11/2014 22:02:58    192.27.224.139

    -vSphere HA availability state of this host changed to Election    info    29/11/2014 22:00:52    192.27.224.139

    -Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host    info    29/11/2014 22:00:49    192.27.224.139

    -Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host    info    29/11/2014 22:00:49    192.27.224.139

    -Firewall configuration has changed. Operation 'enable' for rule set fdm succeeded.    info    29/11/2014 22:00:45    192.27.224.139

    -vSphere HA availability state of this host changed to Uninitialized    info    29/11/2014 22:00:40    Reconfigure vSphere HA host    192.27.224.139    root

    -vSphere HA agent on this host is disabled    info    29/11/2014 22:00:40    Reconfigure vSphere HA host    192.27.224.139    root

    -Reconfigure vSphere HA host    192.27.224.139    The operation has timed out.    HOSTSERVER01    root    29/11/2014 22:00:31    29/11/2014 22:00:31    29/11/2014 22:02:51

    -Configure vSphere HA    192.27.224.139    The operation has timed out.    HOSTSERVER01    System    29/11/2014 21:56:42    29/11/2014 21:56:42    29/11/2014 21:58:55

    Can someone please help me with this?

    Or suggest anything else I can check or provide?

    I'm currently running out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESX upgrade), those problems disappeared.

    While searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: unable to connect to the host

    And this cause could make sense, since the vCenter server changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file the <hostip> and <serverip> values are correct (verified by using https://<hostip>/home).

    Tried this again today, no problem at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (whereas this one is an 'upgrade').

    OK, so the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license) and they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the configured gateway only exists for future purposes and does not actually respond.

    I have now changed the default gateway to the management address of the switch both hosts are plugged into, which does answer ping requests.

    This solved the problem.
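
    For anyone who hits the same thing, this is roughly how you can check it yourself from the ESXi shell (a minimal sketch; the gateway value is whatever your host reports, and das.isolationaddress0 is only needed if you prefer to point HA at a different, pingable address):

    # show the default gateway the host (and therefore HA isolation detection) will use
    esxcli network ip route ipv4 list

    # check that this gateway actually answers ping from the vmkernel interface
    vmkping -c 3 <gateway-ip>

    # look for isolation-address complaints in the HA agent log
    grep -i "isolation" /var/log/fdm.log

    # alternatively, HA can be pointed at a reachable address via the cluster advanced
    # option das.isolationaddress0 (set in the vSphere Client) instead of changing the gateway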

  • Single point of failure/Cluster/high availability on Essbase 11.1.2

    Hello

    The project I am currently working on has only one Essbase 11.1.2 server (on Unix), which is treated as a single point of failure. Management is thinking of upgrading Essbase to a higher version, 11.1.2.1, to achieve clustering and high availability and to remove the single point of failure of having just one Essbase server.

    My concerns here are:

    Note: Essbase in this project is used for reporting only; write-back of data is not allowed.

    (1) Doesn't Essbase version 11.1.2 support clustering / HA / failover to another node?

    (2) If it does, can you please share the link to the documentation for version 11.1.2?

    There are other Hyperion components, such as HFM and FDM, on the same version in this project. If failover is not supported in Essbase 11.1.2, do we need to upgrade the other components along with Essbase, or is upgrading just Essbase to version 11.1.2.x sufficient?

    Please share your valuable thoughts on that.

    Thanks in advance for your ideas.

    Kind regards

    UB.


    I found out why an Essbase cluster does not work for failover / high availability when Essbase Analytics Link (EAL) is used in the project. Because of this we have stopped the Essbase cluster implementation, and Essbase in my project remains a SPOF (single point of failure).

    I'm not sure whether Oracle Corp. will come out with EAL product improvements that can support Essbase failover.

    "

    1. Essbase Analytics Link (EAL release 11.1.1.4) does not support high availability facilities. This means that EAL cannot connect to an Essbase cluster; it only ever connects to a regular Essbase server registered through the Administration Services console. Although we can define high availability for Essbase, EAL will use a single Essbase server; if that server fails and another Essbase server takes over, we must manually redefine the EAL bridges against the new Essbase server and re-create the Essbase database. That is not considered high availability.
    2. There is no technical mechanism to point an Essbase Analytics Link bridge to an Essbase cluster. EAL is only supported connecting to a single, named Essbase server.
    3. Transparent partitions cannot be supported when Essbase is configured as a highly available component and EAL is used. The EAL bridges in the current project use transparent partitions.

    "

    Please feel free to correct me if this interpretation is wrong.

    Thank you

    UB.

  • High-availability cluster

    How can I configure a high-availability cluster in GemFire? I need a cluster with two nodes and two servers.

    For high availability of data, configure redundancy for your data regions. If you are using partitioned regions, see 'Configure High Availability for a Partitioned Region' in the GemFire User's Guide for all the details and available options. You can also use persistence.

    For highly available client/server messaging, see 'Configuring Highly Available Servers'.

    For high availability of function execution, see 'Executing a Function in vFabric GemFire'.
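
    As a concrete illustration of the data-redundancy part, here is a minimal sketch using gfsh (assumes a gfsh-capable GemFire release; member and region names are made up, and older vFabric releases express the same thing with redundant-copies in cache.xml):

    # start a locator and two cache servers on two nodes
    gfsh -e "start locator --name=locator1"
    gfsh -e "start server --name=server1 --locators=localhost[10334]"
    gfsh -e "start server --name=server2 --locators=localhost[10334]"

    # create a partitioned region that keeps one redundant copy of every bucket,
    # so either server can fail without losing data
    gfsh -e "connect --locator=localhost[10334]" -e "create region --name=sessions --type=PARTITION_REDUNDANT"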

  • OBIA components on a clustered server for high availability

    Hello

    The requirement is to install OBIA 7.9.6.4 (all components) on a clustered, highly available server.

    OBIA 7.9.6.4 software and versions:
    OBIEE 11.1.1.7
    Informatica 9.1.0
    DAC 11.1.1

    I would like to know which software needs to be installed on the clustered server, and how.

    I can't find any installation document for this.

    Any help is appreciated.

    Thank you
    Aerts

    need an urgent answer!

  • High-availability deployment of IPCC 4.5

    In an upcoming help desk architecture implementation, voice service will be provided by CallManager 5.0, which will integrate with IPCC 4.5. IPCC 4.5 (required with CM 5.0) does not implement high availability. How can we ensure that the help desk keeps operating if the IPCC goes down? One possibility might be to configure CM so that, if the IPCC goes down, all calls to the help desk number are automatically and immediately forwarded to a group (which includes all help desk extensions). Can this redirection be configured in CM? Is there a better option?

    Thanks in advance,

    SB

    This is your best bet. On the CTI route points for your call center, just set Call Forward Busy, No Answer, and On Failure to the hunt pilot. That way, when the IPCC Express server is down, calls will be sent to your hunt pilot.

    Please rate helpful posts.

    adignan - berbee

  • Remove the secondary and tertiary high availability controller for each access point

    I want to remove the secondary and tertiary high availability controllers for each access point. I have more than 900 APs joined to a WLC 8510 running software version 8.0.121.0. What is the best way to remove the secondary and tertiary controllers?
    Or can I create a template first? We use Prime Infrastructure version 2.2.

    Hello

    Easiest way:

    Yes, you can do this using Cisco Prime Infrastructure: create a Lightweight AP configuration template, specify the primary controller name and IP address, and specify an empty value (choose the blank first option in the drop-down list) and 0.0.0.0 for the secondary and tertiary controllers. Then apply this template to the APs, and it should remove (blank out) the values in those fields.

    Long way:

    Unfortunately there is no clean way to remove them from the CLI; you have to remove them manually on each AP.

    Regards

    Remember to rate useful posts

  • VPN high availability: dual VPN 3000s at the hub and PIXes as spokes

    Hi Experts.

    In my scenario, I need routing between the spokes and, above all, high availability (HA).

    At the spokes I have PIX 501/506E firewalls running OS version 6.3. At the hub I have a pair of redundant VPN 3000 concentrators.

    Which mechanism is best:

    1 - Hub-and-spoke topology with EzVPN Remote on the spokes - for HA, can I take advantage of the 'load balancing' feature of the VPN 3k?

    2 - Hub-and-spoke topology with EzVPN Remote on the spokes - for HA, can I take advantage of the 'backup server' feature of the VPN 3k?

    3 - Any-to-any topology (an IPsec tunnel between every pair of sites) - for HA, can I take advantage of the 'LAN-to-LAN backup' feature of the VPN 3k?

    Thank you

    Michele

    I'd go with load balancing over the backup server option. With load balancing, your connections will be spread across the two hubs. If a hub dies, it will at least only affect half of your connections, rather than all of them, which is what happens if your primary dies when you are using the primary/backup server approach.

    If a hub dies, the affected PIX connections will drop for a short period, but they will reconnect automatically without you making any changes.

  • The OAM Console in a high availability deployment architecture

    Hello

    I'm looking to implement an HA environment for OAM, and I have read in various docs that the OAM Console is deployed on the AdminServer. However, when I read the Oracle documentation, the Oracle Fusion Middleware High Availability Guide for Oracle Identity and Access Management,

    it gives the OAM Console URL as the link to test the managed server, which implies that the OAM Console is accessible on the managed server. Is there anywhere I can find a definitive answer on how the OAM Console is deployed, and on which server / cluster?

    Ideally, I'd like the OAM Console app deployed to the cluster (instead of the AdminServer) in order to achieve HA for it.

    Thank you

    The oamconsole application is deployed on the AdminServer, and there can only be one active admin server per domain.

    The link you are referring to is about setting up an OAM cluster. Oracle encourages an OAM cluster of 2 instances deployed on 2 physical servers. For the admin server, the recommendation is to set up a separate host alias, which they refer to as a VIP, bound to an IP address separate from the OAM instance on the same host. A clone of the admin server instance is then placed on the second physical server, alongside the second OAM instance. If the first OAM instance and the admin server become unavailable, a network reconfiguration has to be made to move the VIP to the second server. Once that is done, the admin server can be started on the second server.

    This gives you a degree of HA/DR.

  • How to set up a single database instance for high availability and disaster tolerance

    Hi Experts,

    I have a single database and instance that I need to configure for high availability and disaster tolerance.

    What DR options are available for synchronizing the database to a remote site with a delay of 5 to 10 minutes?

    The application connects to the local site, and the DR site would be the remote site.

    1. Oracle Fail Safe with SAN?

    2. Is there an integrated failover solution on Linux (CentOS/OEL)?

    3. If the storage is on the SAN (for example), is it possible to set up a shell script

    that detects if the source database has been down for 5 minutes, mounts the SAN-stored database files on the remote machine, and

    changes the IP in the application so that it no longer connects to the source IP address?

    Thank you and best regards,

    IVW

    Hello

    Failure can occur at any level.

    1. Oracle Fail Safe with SAN?

    --> Do consider that if the failure is in the storage itself, Fail Safe will do nothing to bring the data back.

    --> With Fail Safe, the only assurance is that when the MS cluster moves the disks and services to a different node, the configured services are started there.

    2. Is there an integrated failover solution on Linux (CentOS/OEL)?

    --> On Linux you need to set up such scripts yourself, and you can look at an OS-level cluster option.

    3. If the storage is on the SAN (for example), is it possible to set up a shell script that detects if the source database has been down for 5 minutes, mounts the SAN-stored database files on the remote machine, and

    changes the IP in the application so that it no longer connects to the source IP address?

    --> This is essentially what an OS cluster gives you...
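
    As a very rough illustration of the kind of detection script point 3 describes (a naive sketch only: the connect string and credentials are made up, and a real design needs fencing/quorum to avoid split-brain, which is exactly why an OS cluster or Data Guard is the better answer):

    #!/bin/sh
    # poll the primary once a minute; after 5 consecutive failures, start failover steps
    FAILED=0
    while true; do
      if echo "select 1 from dual;" | sqlplus -s -L monitor/secret@PRODDB >/dev/null 2>&1; then
        FAILED=0
      else
        FAILED=$((FAILED+1))
      fi
      if [ "$FAILED" -ge 5 ]; then
        logger "primary unreachable for ~5 minutes - mount SAN LUNs on DR host, start instance, repoint app"
        break    # hand over to the (manual or clusterware-driven) failover procedure
      fi
      sleep 60
    done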

    Points to note:

    ================

    --> If there is a sudden power failure, you can expect lost writes in your data blocks, which will introduce inconsistencies between your data files and redo.

    --> What if there is a problem with the disks themselves?

    --> What if your whole data center has a problem (how will you mount the filesystem on a remote server then)?

    Note that what we are discussing here is HA, and you are trying to keep a server sitting idle all the time (a server with spare RAM & CPU).

    Why not consider the RAC option instead? You can also look at an extended (stretched) cluster...

    And to avoid data loss and meet your RPO and RTO at all times (even if the data center goes down, the storage fails, or the server crashes), you may need to use Oracle Data Guard...
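
    To tie that back to the '5 to 10 minutes' requirement: with a Data Guard physical standby in place, the primary can be forced to ship redo at least every 5 minutes. A minimal sketch (assumes a standby already exists and redo transport is configured; 300 seconds is simply the 5-minute bound from the question):

    # force a log switch at least every 300 seconds so the standby never lags
    # the primary by more than ~5 minutes of redo
    echo "ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 300 SCOPE=BOTH;" | sqlplus -s / as sysdba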

    Ask if you have any questions.

    Thank you

  • Installing Cloud Control 12c for high availability

    Good day,

    I am installing Cloud Control in a high-availability configuration, but once I install the third-party certificate, the services are not available when I check them over HTTP.

    I followed the following guidelines:

    Deploying a Highly Available Enterprise Manager 12c Cloud Control:

    http://www.Oracle.com/technetwork/OEM/framework-infra/WP-em12c-building-HA-Level3-1631423.PDF

    And for the third-party certificate configuration, this note:

    NOTE 1399293.1 - EM 12c Cloud Control: How to Create a Wallet With a Third Party Trusted Certificate that Can Be Imported into the OMS for SSL Communication?

    I have a few questions

    How can I change the CA in the OMS?

    How to export the certificate and key?

    Can someone help me, please

    Best regards

    Alexandra Granados

    Hello

    Please follow the steps below...

    EM11g / EM12c: How to Use the orapki Utility to Create a Wallet with a Third Party Trusted Certificate and Import It into the OMS [ID 1367988.1]
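
    The gist of that note, as a hedged sketch (directory, password, and DN values below are placeholders, and the final emctl step to point the OMS at the wallet is described in the note itself):

    # create an auto-login wallet and a key pair / certificate request for the OMS host
    orapki wallet create -wallet /u01/oms_wallet -auto_login -pwd <wallet_pwd>
    orapki wallet add -wallet /u01/oms_wallet -dn "CN=oms.example.com,O=Example" -keysize 2048 -pwd <wallet_pwd>
    orapki wallet export -wallet /u01/oms_wallet -dn "CN=oms.example.com,O=Example" -request /tmp/oms.csr -pwd <wallet_pwd>

    # have the third-party CA sign oms.csr, then import the CA chain and the signed certificate
    orapki wallet add -wallet /u01/oms_wallet -trusted_cert -cert /tmp/ca_root.cer -pwd <wallet_pwd>
    orapki wallet add -wallet /u01/oms_wallet -user_cert -cert /tmp/oms.cer -pwd <wallet_pwd>

    # verify the wallet contents before handing it to the OMS
    orapki wallet display -wallet /u01/oms_wallet -pwd <wallet_pwd>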

  • Is the vmware-aam service required for High Availability and DRS? Has it been renamed in ESX 3.5 U3?

    Hello Tech community.

    Is the vmware-aam service required for High Availability and DRS? Has it been renamed in ESX 3.5 U3?

    On one of our ESX 3.5 U3 infrastructures, VMware HA and DRS work very well, and the following list of services is running - note vmware-aam in the list.

    Here is the output from an ESX 3.5 U3 system:

    service --status-all | grep running

    crond (pid 1900) is running...

    gpm (pid 1851) is running...

    VMware-pass (pid 1942) is running...

    ntpd (pid 1842) is running...

    cimserver (pid 2452) is running...

    snmpd (pid 1557) is running...

    sshd (pid 4074 4037 1566) is running...

    syslogd (pid 1321) is running...

    klogd (pid 1325) is running...

    At least one virtual machine is still running.

    VMware-aam is running

    VMware VMkernel authorization daemon is running (pid 1911).

    VMware-vpxa is running

    webAccess (pid 1880) is running...

    openwsmand (pid 2534) is running...

    xinetd (1827 pid) is running...

    Now, on our production VMware MCC cluster running ESX 3.5 Update 2, we do not find this 'vmware-aam' service, yet HA and DRS work very well and the logs in /var/log/vmware/aam indicate that everything is fine.

    service --status-all | grep running

    crond (pid 2033) is running...

    gpm (pid 1882) is running...

    hpasmd is running...

    cmathreshd is running...

    cmahostd is running...

    cmapeerd is running...

    cmastdeqd is running...

    cmahealthd is running...

    cmaperfd is running...

    cmaeventd is running...

    cmaidad is running...

    cmafcad is running...

    cmaided is running...

    cmascsid is running...

    cmasasd is running...

    cmanicd (pid 2703) is running...

    cmasm2d is running... [OK]

    hpsmhd (1986 1955 pid) is running...

    VMware-pass (pid 3502) is running...

    ntpd (pid 1873) is running...

    USAGE: OVCtrl start | stop | restart | START | STOP | enable | disable

    USAGE: OVTrcSrv start | stop | restart

    cimserver (pid 3536) is running...

    Ramchecker is not running

    snmpd (pid 1745) is running...

    sshd (pid 1861 1447 1754) is running...

    syslogd (pid 1684) is running...

    klogd (pid 1688) is running...

    At least one virtual machine is still running.

    VMware VMkernel authorization daemon is running (pid 3472).

    VMware-vpxa is running

    webAccess (pid 1984) is running...

    openwsmand (pid 3642) is running...

    xinetd (pid 1854) is running...

    I did a few Google searches and couldn't find a definitive answer.

    The AAM service is required for HA - it is not required for DRS.
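
    A quick way to confirm what is on a given host, from the service console (a sketch; the init script name is taken from the output above):

    # is the HA agent init script present and running?
    ls /etc/init.d/ | grep -i aam
    service vmware-aam status

    # if it is missing or unhealthy, reconfiguring HA for the host from VirtualCenter
    # pushes and restarts the agent; its logs land under /var/log/vmware/aam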

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • List of relational models shown when opening a design is not correct

    I use Data Modeler 3.3.0.747 and I have a design with 11 relational models (no logical models), and I'm using version control.
    Each relational model was added, one after another, using 'Import Data Dictionary' and then added/committed to SVN.
    After the 11 models were created, I saved and closed the design.

    But now, when the design file is reopened, the 'Select relational models' screen shows only 7 of the 11 models.
    The models that do not appear in this list get opened regardless of what is selected.

    So all the relational models are in the design, but it's as if the 'pseudo-index' used by the selection screen when the design is opened is stale.

    The only way I found to fix it was to save the design into another, empty folder, but that workaround does not resolve the problem in the folder where the files are versioned.

    Any suggestions on what might be happening and how I can fix it?
    I would rather not re-import the project from SVN, because tomorrow, when I add a new model, I'll just get the same problem again - I already tried that, and that's exactly what happened.

    Thank you!

    Hello

    You need to delete the dmd_open.xml file in the design directory. There is a bug, and this file has to be deleted after adding a new relational model.
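
    For example (a sketch only - the path is whatever folder holds your design; keep a backup rather than deleting outright):

    cd /path/to/YourDesign             # assumption: the design folder that contains dmd_open.xml
    mv dmd_open.xml dmd_open.xml.bak   # set the stale index aside instead of deleting it outright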

    Philippe

  • Creating actions for UI components in the designer

    Hello

    I am very new to Flex, but I have some programming experience with Java (the old AWT), Visual Basic 6, and Visual C#.
    The first thing I noticed is that there apparently isn't an easy way to create an action for a component in the design view.
    It would be nice, for example, to right-click a button in the design view and have the option to add an 'event listener' or generate the code for its onClick action.
    Maybe I need to spend more time with the manual and this feature is already there, but if not, I think it would be a good idea to introduce it.
    Kind regards
    Marco.

    Marco-
    Feel free to log an enhancement request with the development team: http://bugs.adobe.com/jira/

    Thank you-
    Mac
