VPN high availability: dual VPN 3000 concentrators at the hub and PIX as spokes

Hi Experts.

In my scenario, I need routing between the spokes and, above all, high availability (HA).

At the spokes, I have PIX 501/506E units running OS 6.3. At the hub, I have a pair of redundant VPN 3000 concentrators.

Which mechanism is best:

1 - hub-and-spoke topology with EzVPN Remote on the spokes - for HA, can I take advantage of the "load balancing" feature of the VPN3k?

2 - hub-and-spoke topology with EzVPN Remote on the spokes - for HA, can I take advantage of the "backup server" feature of the VPN3k?

3 - any-to-any topology (an IPsec tunnel between every pair of sites) - for HA, can I take advantage of the "LAN-to-LAN backup" feature of the VPN3k?

Thank you

Michele

I'd go with load balancing over the backup-server option. With load balancing, your connections are spread across the two hubs; if a hub dies, it will only affect half of your connections, rather than all of them, as would happen if your primary died in a primary/backup-server setup.

If a hub dies, your PIX connections will be dropped for a short period, but they will reconnect automatically without any change on your part.
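On the PIX 6.3 side, either hub-side choice looks the same to the spoke: with Easy VPN Remote, the PIX just lists the hub address(es) it dials. A minimal sketch, with hypothetical addresses and group name (with load balancing, the first address would be the cluster's virtual IP; with backup servers, the second address acts as the fallback):

```
vpnclient server 203.0.113.1 203.0.113.2
vpnclient mode network-extension-mode
vpnclient vpngroup SPOKES password s3cr3t
vpnclient enable
```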

Tags: Cisco Security

Similar Questions

  • Can a VPN client go out the same interface on the PIX 515?

    A user VPNs into the PIX and gets an address x.x.x.x from an IP pool on the PIX. Once connected, they need to reach resources on the public network. Is that possible, given that the traffic would go back out the same interface it came in on?

    I can open ports and route subnets on our core routers, but that doesn't seem to work.

    Thank you

    Dwane

    Hi elodie

    You can do this by entering the following command

    same-security-traffic permit intra-interface

    Regards

  • Installing Cloud Control 12c in high availability

    Good day,

    I am installing Cloud Control in high availability, but after I install the third-party certificate, the services are not available when I check over HTTP.

    I followed these guidelines:

    Deploying a Highly Available Enterprise Manager 12c Cloud Control:

    http://www.oracle.com/technetwork/oem/framework-infra/wp-em12c-building-ha-level3-1631423.pdf

    And for the third-party certificate configuration, the note:

    NOTE: 1399293.1 - EM 12c Cloud Control: How to Create a Wallet With a Third-Party Trusted Certificate That Can Be Imported into the OMS for SSL Communication?

    I have a few questions

    How can I change the CA in the OMS?

    How do I export the certificate and key?

    Can someone help me, please

    Best regards

    Alexandra Granados

    Hello

    Please follow the steps in:

    EM11g / EM12c: Using the ORAPKI Utility to Create a Wallet With a Third-Party Trusted Certificate and Import It into the OMS [ID 1367988.1]
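    The note is built around the orapki utility; the general shape of the steps is along these lines (wallet path, DN, and password are hypothetical - follow the note for the exact sequence):

    ```
    orapki wallet create -wallet /u01/oms_wallet -pwd WalletPass1 -auto_login
    orapki wallet add -wallet /u01/oms_wallet -dn "CN=oms.example.com" -keysize 2048 -pwd WalletPass1
    orapki wallet export -wallet /u01/oms_wallet -dn "CN=oms.example.com" -request /tmp/oms.csr
    orapki wallet add -wallet /u01/oms_wallet -trusted_cert -cert /tmp/ca.crt -pwd WalletPass1
    orapki wallet add -wallet /u01/oms_wallet -user_cert -cert /tmp/oms.crt -pwd WalletPass1
    ```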

  • vSphere High Availability 'Election' fails at 99% with 'operation timed out' on 1 of the 2 hosts

    Hello

    We had a system with one ESXi 5.1 host with local disks.

    Now we are adding redundancy with an ESXi 5.5 U2 host and a vCenter 5.5 appliance.

    After installing everything and adding it to vCenter, we upgraded the ESXi 5.1 host to 5.5 U2. The SAN works correctly (vMotion works on its own network card).

    Now, when I try to enable High Availability, both servers install the HA agent and start the 'Election'.

    All four datastores on the SAN are chosen for the HA heartbeat; the isolation response is 'leave powered on' by default.

    One server always completes this process, while the other keeps 'electing' until it reaches 100% and fails with 'operation timed out'.

    I've seen this problem on both servers, so I think the elected 'master' does not have the problem, only the 'slave'.

    I have checked and executed these, but they did not work:

    VMware KB: Reconfiguration HA (FDM) on a cluster fails with the error: operation timed out

    -The services were running

    VMware KB: Configuration HA in VMware vCenter Server 5.x does not work, error: Operation Timed out

    - All MTUs were set to 1500

    VMware KB: VMware High Availability Configuration fails with the error: unable to complete the configuration of the HA agent...

    - The default gateway was not the same on the two hosts, but I fixed that. There are no route changes. The HA isolation setting is 'leave powered on.' After correcting this and disabling/re-enabling HA, the problem is still the same.

    VMware KB: Check and reinstall the correct version of the VMware vCenter Server agents

    - I ran 'Reinstall ESX host management agents and HA agent on ESXi', and I verified that the HA agent was uninstalled and reinstalled when HA was re-enabled.

    cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
    chmod +x /tmp/VMware-fdm-uninstall.sh
    /tmp/VMware-fdm-uninstall.sh

    I did this on both hosts. This fixed the election problem, and I was able to run an HA test successfully, but when, after this test, I powered off the 2nd server (to test HA in the other direction), HA did not fail over to host 1 and everything stayed down. After clicking 'Reconfigure HA', the election problem appeared again on host 1.

    Here are a few log extracts:

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:03   192.27.224.138

    - vSphere HA agent is healthy   info   29/11/2014 22:02:56   192.27.224.138

    - vSphere HA availability state of this host changed to Master   info   29/11/2014 22:02:56   192.27.224.138

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:01:26   192.27.224.138

    - vSphere HA agent is healthy   info   29/11/2014 22:01:22   192.27.224.138

    - vSphere HA availability state of this host changed to Master   info   29/11/2014 22:01:22   192.27.224.138

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:03:02   192.27.224.139

    - 'vSphere HA host state' alarm on 192.27.224.139 changed from green to red   info   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA agent for this host has an error: the vSphere HA agent may not be properly installed or configured   warning   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA availability state of this host changed to Initialization Error   info   29/11/2014 22:02:58   192.27.224.139

    - vSphere HA availability state of this host changed to Election   info   29/11/2014 22:00:52   192.27.224.139

    - Datastore DSMD3400DG2VD2 is selected for storage heartbeating monitored by the vSphere HA agent on this host   info   29/11/2014 22:00:49   192.27.224.139

    - Datastore DSMD3400DG2VD1 is selected for storage heartbeating monitored by the vSphere HA agent on this host   info   29/11/2014 22:00:49   192.27.224.139

    - Firewall configuration has changed. Operation 'enable' for rule set fdm succeeded.   info   29/11/2014 22:00:45   192.27.224.139

    - vSphere HA availability state of this host changed to Uninitialized   info   29/11/2014 22:00:40   Reconfigure vSphere HA   192.27.224.139   root

    - vSphere HA agent on this host is disabled   info   29/11/2014 22:00:40   Reconfigure vSphere HA   192.27.224.139   root

    - Reconfigure vSphere HA   192.27.224.139   Operation timed out.   root   HOSTSERVER01   29/11/2014 22:00:31   29/11/2014 22:00:31   29/11/2014 22:02:51

    - Configure vSphere HA   192.27.224.139   Operation timed out.   System   HOSTSERVER01   29/11/2014 21:56:42   29/11/2014 21:56:42   29/11/2014 21:58:55

    Can someone give me please with help here?

    Or the extra things that I can check or provide?

    I'm currently running out of options.

    Best regards

    Joris

    P.S. I had problems with cold migration during the implementation of the SAN. After setting everything up (vMotion, ESX upgrade), these problems disappeared.

    While searching for this error, I came across this article: VMware KB: VMware vCenter Server displays the error message: unable to connect to the host

    And this cause could make sense, since the vCenter server changed and the IP addressing was changed during the implementation.

    However, in the vpxa.cfg file, the <hostip> and <serverip> are correct (verified using https://<hostip>/home).

    Tried this again today, no problem at all.

    P.P.S. I have set up several of these systems from scratch in the past without problems (this one is an 'upgrade').

    OK, so the problem is solved.

    I contacted Dell Pro Support (the OEM providing the license); they checked the logs (fdm.log) and found that the default IP gateway could not be reached.

    The default gateway is the default host isolation address used by HA.

    Because this is an isolated production system, the configured gateway existed only for future purposes.

    I have now changed the default gateway to the management address of the switch plugged into both hosts, which does answer ping requests.

    This solved the whole problem.
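    If changing the physical gateway is not an option, vSphere HA can also be pointed at a different pingable address for its isolation check through the cluster's advanced options (the address below is hypothetical):

    ```
    das.usedefaultisolationaddress = false
    das.isolationaddress0 = 192.27.224.1
    ```

    These options are read when HA is configured, so disable and re-enable HA (or reconfigure it) after setting them.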

  • NAC Manager high availability: peer CAM DEAD

    Hello

    I have two NAC Managers in high availability, and I used the eth1 interfaces on both sides as the heartbeat link.

    I did the following steps for high availability:

    (1) Synchronized the time between the two CAMs.

    (2) Generated a temporary SSL certificate on each CAM and did the export-import procedure on the other.

    (3) Made one CAM the primary and the other the secondary.

    But after all this configuration, under Monitoring > Reports I can see that the primary CAM is up on both servers and the redundant CAM is down.

    Also, on the failover tab I can see: Local CAM - OK [Active], and peer CAM - DEAD.

    I have attached some screenshots so that you can find the same.

    Your help will be very appreciated.

    Thank you

    Try these steps and check that all of them were followed:

    http://www.cisco.com/c/en/us/support/docs/security/nac-appliance-clean-access/99945-nac-cam-ha.html

  • Site-to-site and remote-access VPN on a PIX - no way?

    Hi,

    I have a problem wrt site-to-site & remote-access VPN on a PIX:

    My setup is as follows: a PIX (6.3) terminates a site-to-site VPN and should also serve remote-access clients using the Cisco VPN Client (4.0.x).

    The problem is with the remote-access VPN clients: they obtain an IP address on their VPN interface, but cannot reach anything. (Please note that the site-to-site VPN runs without problems.)

    To be precise (see config excerpts below):

    The client, which has 212.138.109.20 as its IP address, gets the IP 10.0.100.1 on its VPN adapter, which comes from the pool "vpnpool"

    configured on the PIX. This client then tries to reach servers on the 'inside' interface of the PIX, such as 10.0.1.28.

    However, the client cannot reach *anything* - neither a server on the inside nor anything outside (e.g. the Internet)!

    Using Ethereal traces, I discovered that the packets arrive on the inside interface coming from 10.0.100.1 (the IP address of the

    VPN client). I also see the response from the server (10.0.1.28) to 10.0.100.1. However, for some reason the packets do not make it through

    the PIX back to the client. The PIX logs also show packets to and from the VPN client on the inside interface - and *no* drops. So to my knowledge the packets from the server to the VPN client really should make it through the PIX.

    I have attached the following as separate files:

    (o) the relevant parts of the PIX config

    (o) the PIX log showing packets between the VPN client and the server(s) on the inside interface

    (o) an Ethereal trace taken on the inside interface, also showing packets between the VPN client and server(s)

    I have really scratched my head for a while on this one and tested a lot of things, but I really don't know what could be wrong with my

    config.

    After all, it really should be possible to run site-to-site and remote-access VPN on the same PIX, shouldn't it?

    Thank you very much in advance for your help,

    -ewald

    I think your problem is in your ACL and your crypto map:

    access-list 101 permit ip 10.0.1.0 255.255.255.0 10.0.2.0 255.255.255.0

    access-list 101 permit ip 10.0.0.0 255.255.255.0 10.0.2.0 255.255.255.0

    access-list 101 permit ip 10.0.3.0 255.255.255.0 10.0.2.0 255.255.255.0

    access-list 101 permit ip 10.0.1.0 255.255.255.0 10.0.100.0 255.255.255.0

    crypto map loc2rem 1 match address 101

    This means that this map entry matches those addresses. But your dynamic map is the one that must match the 10.0.100.0 to 10.0.1.0 traffic, because your local IP pool is 10.0.100.x. I think what is happening is that the return traffic from the LAN to the VPN clients is trying to go out the static tunnel, which probably does not exist for those netblocks - you probably have a security association for each pair of netblocks, but not for the VPN clients - and so it fails.
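    For reference, remote-access clients are normally caught by a dynamic crypto map entry attached to the end of the static map, along these lines (map names and transform set are hypothetical):

    ```
    crypto dynamic-map dynmap 10 set transform-set myset
    crypto map loc2rem 65535 ipsec-isakmp dynamic dynmap
    ```

    Because the dynamic entry has the highest sequence number, the static entries are evaluated first - which is why an overlapping static match address can hijack the return traffic.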

    I would recommend adding these lines:

    access-list 105 permit ip 10.0.1.0 255.255.255.0 10.0.2.0 255.255.255.0

    access-list 105 permit ip 10.0.0.0 255.255.255.0 10.0.2.0 255.255.255.0

    access-list 105 permit ip 10.0.3.0 255.255.255.0 10.0.2.0 255.255.255.0

    no crypto map loc2rem 1 match address 101

    crypto map loc2rem 1 match address 105

    Then reapply:

    crypto map loc2rem interface outside

  • High availability of components in a vWorkspace design - tips

    Hi all

    I would like some advice regarding the design of highly available vWorkspace components. Suppose the vWorkspace components are deployed on vSphere or SCVMM-managed hypervisors, where HA is in place in case a host fails. In this situation, do we still need redundant (N+1) vWorkspace component VMs?

    On another note, I understand that we can add a pair of connection brokers in the vWorkspace Management Console and, based on KB 99163, it would just work. I'm not sure how the traffic flows when access is via Web Access, though. I guess the connection broker would have to be 'defined' for the web call to reach a broker. Or is this done automatically? Does Web Access choose randomly among the connection brokers?

    Thanks for any advice in advance

    Kind regards

    Cyril

    Hi Cyril,

    Great questions. As with any layered IT architecture, you must plan HA and redundancy at all points of failure required by your environment or Service Level Agreements (SLAs). For vWorkspace, the center of its universe is SQL, and you must plan for its failure and recovery accordingly. In some environments, full backups can meet the HA requirement. In others, a full SQL cluster, mirroring, replication, or Always-On configurations may be required. With our brokers, we recommend an N+1 deployment in most HA scenarios. When you move to peripheral or enabling components, you must evaluate each component, its impact on failure, and its value, to determine the appropriate HA.

    Load balancing across multiple brokers is done automatically by logic in the client connectors. In the case of Web Access, when you configure the Web Access site in the Management Console, it includes the broker list in the Web Access XML configuration file. Like the client connectors, Web Access includes load-balancing logic that distributes the client load across the available brokers automatically.

    If you have any questions about specific components and requirements of HA or architecture, please add them in the discussions.

  • Deploying IPCC 4.5 in high availability

    In an upcoming help desk architecture implementation, voice service will be provided by CallManager 5.0, which will integrate with IPCC 4.5. IPCC 4.5 (required with CM 5.0) does not implement high availability. How can we ensure that the help desk continues to operate if the IPCC goes down? One possibility might be to configure CM so that, if the IPCC goes down, all calls to the help desk number are automatically and immediately routed to a hunt group (which includes all help desk extensions). Can this redirection be configured in CM? Is there a better option?

    Thanks in advance,

    SB

    This is your best bet. On the route points for your call center, just set Call Forward Busy, No Answer, and Failure to the hunt pilot. That way, when the IPCC Express server is down, calls will be sent to your hunt pilot.

    Please rate useful posts.

    adignan - berbee

  • Removing the secondary and tertiary high availability controllers for each access point

    I want to remove the secondary and tertiary high availability controllers for each access point. I have more than 900 APs associated with a WLC 8510 running software version 8.0.121.0. What is the best way to remove the secondary and tertiary controllers?
    Or can I create a template in Prime first? We use version 2.2.

    Hello

    Easiest way:

    Yes, you can do this using Cisco Prime Infrastructure: create a Lightweight AP configuration template that specifies the primary controller name and IP address, and specify an empty value (choose the first, empty option in the drop-down list) and 0.0.0.0 for the secondary and tertiary controllers.  Then apply this template to the APs, and it will remove (blank out) the values in those fields.

    Long way:

    Unfortunately, there is no clean way to remove it from the CLI; you need to remove it manually on each AP.

    Regards

    Remember to rate useful posts

  • The OAM Console in a high availability deployment architecture

    Hello

    I'm looking to implement an HA environment for OAM, and I have read in various docs that the OAM Console is deployed on the AdminServer. However, the Oracle Fusion Middleware High Availability Guide for Oracle Identity and Access Management

    gives the OAM Console URL as the link to test the managed server, which implies that the OAM Console is accessible on the Managed Server. Is there anywhere I can find a definitive answer on how the OAM Console is deployed, and on which server/cluster?

    Ideally, I'd like the OAM Console app deployed to the cluster (instead of the AdminServer) in order to achieve HA for it.

    Thank you

    oamconsole is deployed on the AdminServer, and there can be only one active Admin Server per domain.

    The link you are referring to is about setting up an OAM cluster. Oracle encourages an OAM cluster of 2 instances deployed on 2 physical servers. For the Admin Server, it is recommended to set up a separate host alias, referred to as a VIP, bound to an IP address separate from that of the OAM instance on the same host. A clone of the Admin instance is then placed on the second physical server alongside the second OAM instance. If the first OAM instance and the Admin Server become unavailable, a network reconfiguration must be made to move the VIP to the second instance. Once this is done, the Admin Server can be started on the second server.

    This gives you a degree of HA/DR.

  • How to set up a single database instance for high availability and disaster tolerance

    Hi Experts,

    I have a single database and instance that I need to configure for high availability and disaster tolerance.

    What DR options are available for synchronizing the database to a remote site with a delay of 5 to 10 minutes?

    The application connects to the local site; the DR site should be the remote site.

    1. Oracle Fail Safe with SAN?

    2. What built-in solution is there on Linux (CentOS/OEL)?

    3. If the storage is on the SAN (for example), is it possible to set up a shell script

    which detects if the source database is down for 5 minutes, mounts the SAN-stored database files on the remote computer, and

    changes the IP in the application, so it will no longer connect to the source IP address?

    Thank you and best regards,

    IVW

    Hello

    A failure can occur at any level.

    1. Oracle Fail Safe with SAN?

    --> Do check: if there is a failure in the storage itself, Fail Safe will do nothing to bring the data back.

    --> With Fail Safe, the only assurance is that when the MS cluster moves the disks and services to a different node, the configured services start there.

    2. What built-in solution is there on Linux (CentOS/OEL)?

    --> On Linux, you need to set up the scripts to run yourself; you can look at the OS clustering options.

    3. If the storage is on the SAN (for example), is it possible to configure a shell script that detects if the source database is down for 5 minutes, mounts the SAN-stored database files on the remote computer, and

    changes the IP in the application, so it will no longer connect to the source IP address?

    --> For this you would need an OS cluster...

    Points to note:

    ================

    --> If there is a sudden power failure, we can expect lost writes in your data blocks, which will bring inconsistency to your data files and redo.

    --> And what if there is a problem with your disk?

    --> What if there is a problem with your complete data center (how will you mount the FS on a remote server)?

    Note that what we are discussing is HA, and you are trying to keep a server idle all the time (a server with extra RAM & CPU).

    Why not look at the RAC option; you can also have an extended cluster...

    And to avoid data loss and to meet your RPO and RTO at all times (even if the DC goes down, the storage fails, or the server crashes), you may need to use Oracle Data Guard...
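    For the 5-10 minute delay requirement, Data Guard's delayed redo apply is the usual fit. A sketch of the primary-side archive destination (service name and DB_UNIQUE_NAME are hypothetical; DELAY is in minutes and is honored only when the standby is not using real-time apply):

    ```
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=drdb_tns DELAY=10 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=drdb'
      SCOPE=BOTH;
    ```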

    Ask questions if you have

    Thank you

  • Is the vmware-aam service required for high availability and DRS? Has it been renamed in ESX 3.5 U3?

    Hello Tech community.

    Is the vmware-aam service required for high availability and DRS? Has it been renamed in ESX 3.5 U3?

    On one of our ESX 3.5 U3 infrastructures, VMware HA and DRS work very well, and the following list of services is started, including vmware-aam.

    Here is the output from an ESX 3.5 U3 system:

    service --status-all | grep running

    crond (pid 1900) is running...

    gpm (pid 1851) is running...

    VMware-pass (pid 1942) is running...

    ntpd (pid 1842) is running...

    cimserver (pid 2452) is running...

    snmpd (pid 1557) is running...

    sshd (pid 4074 4037 1566) is running...

    syslogd (pid 1321) is running...

    klogd (pid 1325) is running...

    At least one virtual machine is still running.

    VMware-aam is running

    VMware VMkernel authorization daemon is running (pid 1911).

    VMware-vpxa is running

    webAccess (pid 1880) is running...

    openwsmand (pid 2534) is running...

    xinetd (1827 pid) is running...

    Now, on our production VMware HA cluster on ESX 3.5 Update 2, we do not find this 'vmware-aam' service, but HA and DRS work very well, and the logs in /var/log/vmware/aam indicate that everything is fine.

    service --status-all | grep running

    crond (pid 2033) is running...

    gpm (pid 1882) is running...

    hpasmd is running...

    cmathreshd is running...

    cmahostd is running...

    cmapeerd is running...

    cmastdeqd is running...

    cmahealthd is running...

    cmaperfd is running...

    cmaeventd is running...

    cmaidad is running...

    cmafcad is running...

    cmaided is running...

    cmascsid is running...

    cmasasd is running...

    cmanicd (pid 2703) is running...

    cmasm2d is running... [OK]

    hpsmhd (1986 1955 pid) is running...

    VMware-pass (pid 3502) is running...

    ntpd (pid 1873) is running...

    USAGE: OVCtrl start | stop | restart | START | STOP | enable | disable

    USAGE: OVTrcSrv start | stop | restart

    cimserver (pid 3536) is running...

    Ramchecker is not running

    snmpd (pid 1745) is running...

    sshd (pid 1861 1447 1754) is running...

    syslogd (pid 1684) is running...

    klogd (pid 1688) is running...

    At least one virtual machine is still running.

    VMware VMkernel authorization daemon is running (pid 3472).

    VMware-vpxa is running

    webAccess (pid 1984) is running...

    openwsmand (pid 3642) is running...

    xinetd (pid 1854) is running...

    I did a few Google queries and couldn't find a definitive answer

    The AAM service is required for HA - it is not required for DRS.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or useful.

  • I double-clicked on 'update programs' and am now prompted to run a legacy CPL elevated. When I click OK it says script error and nothing works.

    original title: run a legacy CPL elevated?

    Hi all. In my Control Panel I double-clicked on 'update programs' and I am now prompted to run a legacy CPL elevated. When I click OK it says script error and nothing works. If I click on details before I click continue, it says: "C:\Windows\System32\RunLegacyCPLElevated.exe" Shell32.dll, Control_RunDLL "C:\Windows\system32\ISUSPM.cpl",Program updates. I'm not very technical, and although I have tried looking in the search engines, I can't find anything to help me.

    Hope someone can advise me please

    Thank you aypee74

    I figured out how to stop my elevation warning. I just got an answer, and it reminded me that I now know why my computer was giving this prompt. It may help a bit here, so I decided to write it up.

    In my case it was my 'SigmaTel Audio Control Panel'.
    I had disabled 'allow reconfiguration pop-ups'.
    This makes the 'Run a legacy CPL elevated' warning appear whenever your system tries to access this audio program. Once I re-checked the option, the warnings went away unless I open the control panel myself. But that is a normal warning, according to the manufacturer of my computer.
    Some of you may have something much more serious, but for the few who are confused by this and have this audio driver, this may help.
  • Configuration of high availability.

    Hello

    Please help me configure high availability for an existing Foglight environment; please send me the steps and prerequisites.

    How many servers can exist in a cluster?

    What capacity do we need on the primary server and on the other servers in case of a failure?

    We currently have 1 Federation Master and 3 child FMS instances.

    version: 5.6.10

    Thank you

    Vicky

    Vicky,

    There are 2 very useful field guides that go through the requirements and the Setup process.

    High Availability Guide - http://edocs.quest.com/foglight/5610/doc/wwhelp/wwhimpl/common/html/frameset.htm?context=field&file=HA-field/index.php&single=true

    Federation field guide -

    http://edocs.quest.com/foglight/5610/doc/wwhelp/wwhimpl/common/html/frameset.htm?context=field&file=Federation-field/index.php&single=true

    Note the following known issue:

    http://edocs.quest.com/foglight/5611/doc/wwhelp/wwhimpl/common/html/frameset.htm?context=field&file=HA-field/overview.1.php&single=true

    "A master of the Federation running in mode high availability is not supported. Only children Federated can be run by high availability. »

    Golan

  • WLC 5508 high availability

    Hello

    Today I have two WLC 5508s (each licensed for 100 APs) on a single site.

    The WLCs work in high availability (active-standby).

    However, we have a new scenario with 2 sites: A and B (see attachment).

    I would like to know if it is possible to work as follows:

    WLC-A as the primary controller of site A, with WLC-B as the backup for WLC-A.

    WLC-B as the primary controller of site B, with WLC-A as the backup for WLC-B.

    For example:

    If WLC-A goes down, site A's access points are managed by WLC-B at site B, and vice versa.

    Is this possible?

    How can I configure the new scenario? Don't forget, there is a site-to-site link between Site A and Site B.

    Another point:

    If I add more than 50 APs at Site A, how does the license count work?

    Should I buy licenses for both WLCs?

    TKS,

    >....

    >...is it possible?

    No. High availability in controller terms means exactly what it says: the backup controller is a dedicated standby and cannot play other roles.

    M.
