Redundancy

Hi all, I am a beginner with VMware.

I have 3 servers and 2 NAS devices.

The first server runs VMware. The other 2 run virtual machines with redundancy. The 2 NAS devices hold the databases.

My question is: if the server that is running VMware goes down due to a hardware or software failure, will the virtual machines on the 2 other servers keep working without any impact?

I would like to be sure that VMware is not required for the virtual machines to keep running properly, because there is no redundancy for the server running VMware.

Thanks for your assistance.

Welcome to the community,

I assume that when you say "the first server running VMWARE" you mean vCenter Server (VMware is the name of the company)?

vCenter Server is required for configuration and for tasks such as vMotion. The hosts - and the virtual machines running on the hosts - will continue to operate even if vCenter Server goes down. HA (high availability) will also continue to work.

André

Tags: VMware

Similar Questions

  • VPN 3030 VRRP redundancy

    Hello

    So I was reading about VRRP redundancy on the VPN 3030. In the example I see on Cisco's web site, http://www.cisco.com/warp/public/471/vrrp.html, it seems that I only need two IP addresses: the main hub uses the VRRP address both as its own interface address and as the actual VRRP address, whereas the backup hub watches the VRRP address and takes over that address when it is no longer answered, while still keeping its own IP address for its interface.

    I can also see three addresses being used for VRRP: one virtual IP address and two others for the physical interfaces on the segment. Has anyone else done this, and am I reading this right?

    Unfortunately I can't really test this except during a brief outage window, so I want to make sure I have everything right.

    Thanks for the replies; I will rate them all.

    Patrick

    You read it right. I have done a couple of these deployments; you can follow that guide to the letter.
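
    The VPN 3000 concentrators are configured through their GUI, but purely to illustrate the three-address layout (one virtual address plus one physical address per hub; all addresses here are made up), the equivalent in generic router VRRP syntax would be roughly:

    ! primary hub interface
    interface FastEthernet0/0
     ip address 192.0.2.2 255.255.255.0
     vrrp 1 ip 192.0.2.1
     vrrp 1 priority 110
    !
    ! backup hub interface - same virtual address, lower priority
    interface FastEthernet0/0
     ip address 192.0.2.3 255.255.255.0
     vrrp 1 ip 192.0.2.1
     vrrp 1 priority 100

    The backup only answers for 192.0.2.1 when the primary stops advertising it, which matches the behavior you describe.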

  • Network redundancy, 2 network cards: active/active or active/standby?

    I have two network cards available for my management network. Most 'design' documents I have seen set one NIC to active and the other to standby. What is the advantage of this approach compared to setting both to active? Assume I have no NIC limitation and these 2 ports are dedicated to management only.

    greenpride32 wrote:

    I have two network cards available for my management network. Most 'design' documents I have seen set one NIC to active and the other to standby. What is the advantage of this approach compared to setting both to active? Assume I have no NIC limitation and these 2 ports are dedicated to management only.

    If you have no other traffic on this vSwitch then you can leave both of them active with no problems.

    Sometimes the vMotion VMkernel interface is placed on the same vSwitch as the management VMkernel interface, and if so, it is good to separate them onto different vmnics with active/standby overrides, as in the sketch below.
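
    A rough sketch of that layout on a standard vSwitch (assuming vmnic0/vmnic1 are the two uplinks and the portgroups are named "Management Network" and "vMotion"; verify the names on your host first):

    # Management: vmnic0 active, vmnic1 standby
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
    # vMotion: the reverse, so each traffic type normally has its own uplink
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="vMotion" --active-uplinks=vmnic1 --standby-uplinks=vmnic0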

  • Acceptable latency reference for Dell EQL PS4100xv

    Hello

    I have a Dell PS4100xv with three servers connected via 2 x PC6224 switches.

    The total IOPS consumption does not exceed 600 IOPS; most of the time it is around 300 IOPS. Looking at SAN HQ, the maximum IOPS supported by the array on RAID 6 should be about 1500.

    The latency is about 20 to 50 ms. Reads occasionally hit about 80 to 100 ms, but it is not sustained.

    What is the acceptable latency for the EQL? I have Windows 2008 R2 SP1 and some Linux virtual machines.

    Application performance is acceptable.

    Thank you

    Paul

    OK, that will almost certainly be delayed ACK and/or LRO. In addition, you should check that MPIO is using Round Robin with IOs per path set to 3 (the default for RR is 1000 IOs before switching paths); a sketch of the commands is below.
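
    For reference, a sketch of how that is typically checked and set on an ESXi 5.x host (the device ID is a placeholder; follow the vendor guide linked below for the exact recommendation):

    # show the current path selection settings for each device
    esxcli storage nmp device list
    # change paths every 3 IOs instead of the default 1000 for the EQL volume
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6090a0xxxxxxxxxx --type=iops --iops=3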

    The link below is a PDF that covers all the best practices for ESX + EQL. There are some very important parameters for performance and redundancy in that guide.

    en.Community.Dell.com/.../20434601.aspx

    The boot question is probably separate from the latency issue, given that very little IO to the SAN occurs during startup. If you look at the EQL GUI, you can see how quickly the connections are established.

  • B-Series firmware update

    I have a chassis with four B200 blades running code 2.0(2q). ESXi 5 is running on the blades. There are two UCS Managers in a cluster and I want to upgrade to 2.0(3a).

    I upgraded the 1.4 code a few months back, but the system was new and was not running any VMs at the time, so I could reboot at will.

    It is now in production and I am concerned about upgrading while the system is live. Is it possible to perform the upgrade while VMs are running? I know the blades must restart when some components are upgraded, but can I vMotion the virtual machines off each blade, one at a time?

    If anyone has any experience with this, please let me know. The upgrade guide states 'no disruption to data traffic if the correct sequence of steps is followed'.

    Thank you

    That is right. It is possible to use vMotion to move production virtual machines around while you activate the adapter and BIOS images (which requires server restarts). Assuming that you have both SAN & LAN redundancy on each host, activating the other components should not affect your production systems (other than a slight temporary performance hit from running on a single uplink).

    We always say: if you can plan around a failure - do. Then proceed as the upgrade guide indicates to make the process as non-disruptive as possible. Always plan for the worst case. Of the hundreds of upgrades I have run using vMotion, there was no impact 99% of the time. That said, Murphy's law intervenes from time to time and s* happens: either a device goes down, or SAN/LAN redundancy wasn't tested before the upgrade and the upgrade causes an outage.

    Make sure that your LAN & SAN fabrics are all healthy and your hosts are configured for full redundancy, and I am confident that you will be fine.

    Kind regards

    Robert

  • Static NAT & DMVPN Hub

    Hello

    I don't think this will be a problem since DMVPN supports spokes behind NAT devices, but I am planning to rearrange my network for security and redundancy reasons and to put a pair of ASA firewalls in at my Internet colocation. Right now I have a 3845 running DMVPN, NAT & ZBFW. I'm going to remove the ZBFW and move the NAT to the ASA, leaving only the DMVPN hub and routing. If I create a static NAT mapping on my ASA pointing to the DMVPN hub, will that work?

    I think it will be, but I just wanted to be 110% sure.

    Thank you!

    Hi Brantley,

    DMVPN with static NAT on the hub is a supported setup. Just be aware that there are some limitations (a configuration sketch follows the list):

    1. All DMVPN routers, hub and spokes, must be running at least 12.3(9a) or 12.3(11)T code.

    2. You must use IPsec transport mode.

    3. If you need dynamic spoke-to-spoke tunnels, the hub should be running at least 12.3(13), 12.3(14)T or 12.3(11)T3 code.
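
    A minimal sketch of the two relevant pieces (the ASA part uses 8.3+ object NAT syntax; names and addresses are made up, so adjust them to your environment):

    ! on the DMVPN hub (IOS) - IPsec transport mode per point 2
    crypto ipsec transform-set DMVPN-SET esp-aes 256 esp-sha-hmac
     mode transport
    !
    ! on the ASA - static NAT to the hub's inside address, plus the inbound permits
    object network DMVPN-HUB
     host 10.1.1.10
     nat (inside,outside) static 203.0.113.10
    access-list OUTSIDE_IN extended permit udp any host 10.1.1.10 eq isakmp
    access-list OUTSIDE_IN extended permit udp any host 10.1.1.10 eq 4500
    access-list OUTSIDE_IN extended permit esp any host 10.1.1.10
    access-group OUTSIDE_IN in interface outside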

    See the configuration guide

    http://www.Cisco.com/en/us/docs/iOS/sec_secure_connectivity/configuration/guide/sec_DMVPN_ps6350_TSD_Products_Configuration_Guide_Chapter.html#wp1122466

    HTH,

    Lei Tian

  • Site-to-site VPN between ASA and ASR1001

    Hello

    We have 2 ASR routers connected to the ISP at headquarters, and there are new remote sites that must be connected to HQ over site-to-site VPN. Each remote branch will have an ASA; the outside IPs of the two ASRs are in the same subnet.

    1. Is it possible to achieve redundancy on the HQ side in this design?

    2. Can I create L2L tunnels to both ASRs? If yes, how can I make one tunnel active and the other secondary?

                                         |--- ASR1
    Users---L3SW---ASA---ISP---CPE---|
                                         |--- ASR2

    Any suggestions are welcome

    Thank you

    There are two ways:

    1. IPSec stateful failover
      http://www.Cisco.com/c/en/us/TD/docs/iOS-XML/iOS/sec_conn_vpnav/configuration/15-Mt/sec-VPN-availability-15-Mt-book/Sec-State-fail-IPSec.html
      http://packetlife.net/blog/2009/Aug/17/fun-IPSec-stateful-failover/
    2. A VPN configuration with two peers on the ASA.
      Here you have two individual tunnels on the HQ side, and the ASA has two tunnel-groups for the two peers but only a single sequence entry in the crypto map. The peer statement of that entry has both HQ IPs configured; a sketch is below.
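
    A minimal sketch of option 2 on the branch ASA (ASA 8.4+ syntax; peer addresses, the ACL and the transform-set names are made up):

    tunnel-group 203.0.113.1 type ipsec-l2l
    tunnel-group 203.0.113.1 ipsec-attributes
     ikev1 pre-shared-key MyKey1
    tunnel-group 203.0.113.2 type ipsec-l2l
    tunnel-group 203.0.113.2 ipsec-attributes
     ikev1 pre-shared-key MyKey2
    !
    crypto map OUTSIDE_MAP 10 match address HQ-TRAFFIC
    crypto map OUTSIDE_MAP 10 set peer 203.0.113.1 203.0.113.2
    crypto map OUTSIDE_MAP 10 set ikev1 transform-set ESP-AES-SHA
    crypto map OUTSIDE_MAP interface outside

    The ASA always tries the first peer (ASR1) and only fails over to the second (ASR2) when the first is unreachable, which gives you the active/secondary behavior you asked about.
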
  • Nexus 1010 migration question

    Hi all

    We set up a demo box (Nexus 1010, installed standalone) at our customer's site. It has a demo license which will expire on Dec 26th. We have already configured a VSM on this box and installed a couple of VEMs. The customer got a little ahead of things and has already migrated a couple of live VMs behind some of the VEMs. They have bought two new Nexus 1010s, and now I am looking for a migration scenario. Here are the steps I think I should follow to migrate to the new Nexus 1010s with no problems. I'd really appreciate it if someone could comment on these steps.

    1. Convert the current demo Nexus 1010 box from standalone to 'HA primary'.
    2. Install one of the new Nexus 1010s as 'HA secondary'.
    3. Power off the demo Nexus 1010 box that is currently acting as 'HA primary'.
    4. Then convert the new Nexus 1010 box (installed in step 2) to standalone, and then to 'HA primary'.
    5. Then install the license on the 'HA primary'.
    6. Install the other new Nexus 1010 as 'HA secondary'.

    Thanks in advance.

    Dumlu

    Please note that TAC will only provide the Nexus 1010 image under rare & approved circumstances. The redundancy role can be changed after erasing the configuration, but it cannot be changed 'on the fly'.

    TAC will more than likely RMA the unit should there be problems with the management software.

    Kind regards

    Robert

  • EtherChannel between two 3560s

    Hello

    I am trying to bundle an EtherChannel between two 3560 L3 switches in Cisco Packet Tracer. I am able to ping across the channel, but when I shut down one of the ports in the channel I am no longer able to ping between the switches. I want to connect my two locations with two 3560s with redundancy. The output is below; can you suggest what the issue might be?

    hostname SW1

    !
    interface FastEthernet0/1
     switchport access vlan 100
    !
    interface FastEthernet0/23
     no switchport
     channel-group 1 mode on
     no ip address
     duplex full
     speed auto
    !
    interface FastEthernet0/24
     no switchport
     channel-group 1 mode on
     no ip address
     duplex full
     speed auto
    !
    interface Port-channel1
     no switchport
     ip address 10.10.10.1 255.255.255.252
    !
    interface Vlan1
     no ip address
     shutdown
    !
    interface Vlan100
     ip address 192.9.100.100 255.255.255.0
    !
    ip classless
    ip route 192.9.193.0 255.255.255.0 10.10.10.2

    end
    SW1#
    SW1#show etherchannel port-channel
                    Channel-group listing:
                    ----------------------

    Group: 1
    ----------
                    Port-channels in the group:
                    ---------------------------

    Port-channel: Po1
    ------------

    Age of the Port-channel   = 00d:00h:36m:35s
    Logical slot/port   = 2/1       Number of ports = 2
    GC                  = 0x00000000      HotStandBy port = null
    Port state          = Port-channel
    Protocol            =   PAgP
    Port security       = Disabled

    Ports in the Port-channel:

    Index   Load   Port     EC state        No of bits
    ------+------+------+------------------+-----------
      0     00     Fa0/24   On                 0
      0     00     Fa0/23   On                 0

    Time since last port bundled:    00d:00h:07m:00s    Fa0/23

    SW1#
    SW1#show etherchannel summary
    Flags:  D - down        P - in port-channel
            I - stand-alone s - suspended
            H - Hot-standby (LACP only)
            R - Layer3      S - Layer2
            U - in use      f - failed to allocate aggregator
            u - unsuitable for bundling
            w - waiting to be aggregated
            d - default port

    Number of channel-groups in use: 1
    Number of aggregators:           1

    Group  Port-channel  Protocol    Ports
    ------+-------------+-----------+----------------------------------------------

    1      Po1(SU)         PAgP      Fa0/23(P)   Fa0/24(P)

    ----------------------------------------------------------------
    !
    hostname SW2

    interface FastEthernet0/1
     switchport access vlan 193
    !
    interface FastEthernet0/23
     no switchport
     channel-group 1 mode on
     no ip address
     duplex full
     speed auto
    !
    interface FastEthernet0/24
     no switchport
     channel-group 1 mode on
     no ip address
     duplex full
     speed auto
    !
    interface Port-channel1
     no switchport
     ip address 10.10.10.2 255.255.255.252
    !
    interface Vlan1
     no ip address
     shutdown
    !
    interface Vlan193
     ip address 192.9.193.100 255.255.255.0
    !
    ip classless
    ip route 192.9.100.0 255.255.255.0 10.10.10.1
    !

    end

    SW2#
    SW2#show etherchannel port-channel
                    Channel-group listing:
                    ----------------------

    Group: 1
    ----------
                    Port-channels in the group:
                    ---------------------------

    Port-channel: Po1
    ------------

    Age of the Port-channel   = 00d:00h:33m:10s
    Logical slot/port   = 2/1       Number of ports = 2
    GC                  = 0x00000000      HotStandBy port = null
    Port state          = Port-channel
    Protocol            =   PAgP
    Port security       = Disabled

    Ports in the Port-channel:

    Index   Load   Port     EC state        No of bits
    ------+------+------+------------------+-----------
      0     00     Fa0/24   On                 0
      0     00     Fa0/23   On                 0

    Time since last port bundled:    00d:00h:08m:00s    Fa0/23

    SW2#
    SW2#show etherchannel summary
    Flags:  D - down        P - in port-channel
            I - stand-alone s - suspended
            H - Hot-standby (LACP only)
            R - Layer3      S - Layer2
            U - in use      f - failed to allocate aggregator
            u - unsuitable for bundling
            w - waiting to be aggregated
            d - default port

    Number of channel-groups in use: 1
    Number of aggregators:           1

    Group  Port-channel  Protocol    Ports
    ------+-------------+-----------+----------------------------------------------

    1      Po1(SU)         PAgP      Fa0/23(P)   Fa0/24(P)
    SW2#

    Hello

    I tested this and got the same result as you, so I think you should stay away from Packet Tracer for testing EtherChannels: stick to GNS3 if you can live with mode 'on', or use real devices if you want to use PAgP or LACP.

    Kind regards.

    Alain

  • Limitation on the number of entries in a split tunnel ACL

    Hey Cisco community!

    I am facing a problem with a Cisco hub-and-spoke solution.

    We have 2 hubs (Cisco 7200s - 2 for redundancy). Every client site has a spoke router (Cisco 881). The spokes stay connected 24/7 to the 2 hubs (2 DMVPN tunnels) to give us access to our monitoring and support equipment.

    Each spoke has a NAT table with a specific NAT range per spoke. That way, we can reach every device through a unique IP address within the VPN.

    For example:

    - Spoke_001 has a NAT IP range of 10.80.0.0 255.255.254.0

    - Spoke_002 has a NAT IP range of 10.80.2.0 255.255.254.0

    ...

    To connect to the hubs from our mobile devices, we use the Cisco VPN client. We have different profiles created on the hubs:

    - An Admin profile with an ACL that allows connectivity to every spoke

    - Integrator profiles, which allow an integrator connectivity to certain defined spokes.

    So the integrator profile looks like this on the hub:

    crypto isakmp client configuration group [NAME]
     key [password]
     domain [domain]
     pool [NAME]
     acl [NAME_VPN_Split]
    !
    crypto isakmp profile [NAME]
     description VPN clients profile for group [NAME]
     match identity group [NAME]
     client authentication list VPN_Client_AUTHEN
     isakmp authorization list VPN_Client_AUTHOR
     client configuration address respond
    !
    ip local pool [NAME] ...

    And the access list associated with this group:

    ip access-list extended [NAME_VPN_Split]
     permit ip 10.82.20.0 0.0.1.255 any
     permit ip 10.82.24.0 0.0.1.255 any
     permit ip 10.81.238.0 0.0.1.255 any
     permit ip 10.82.4.0 0.0.1.255 any
     permit ip 10.82.44.0 0.0.1.255 any
     permit ip 10.81.242.0 0.0.1.255 any

    ...

    In the access list we can adjust the subnets to reduce the number of entries, but some groups need access to specific spokes whose NAT IP ranges can each only be summarized down to one line (see the example above).

    The question we have is: when we have more than 50 entries in the ACL, the 51st entry does not work:

    - The VPN client does not receive the route for that network; the route is not added on the connected PC

    - Even if the route is added manually on the PC, the network in the 51st ACL entry is still not reachable.

    Do you know why there is a limit of 50 entries in a split tunnel ACL?

    Do you know if there is a solution to avoid this problem?

    The problem is that if we cannot summarize an ACL down to fewer than 50 lines, we will have to create a second profile and keep track of which one to use for which network... Not really a good solution.

    Thanks in advance!

    Version:

    ROM: System Bootstrap, Version 12.3(4r)T3, RELEASE SOFTWARE (fc1)

    BOOTLDR: 7200 Software (C7200-KBOOT-M), Version 12.3(15), RELEASE SOFTWARE (fc3)

    System image file is "disk2:c7200-advsecurityk9-mz.151-4.M2.bin"

    Yes, there is a hard limit of 50 split tunnel ACL entries when you set it up using the old-fashioned way of VPN configuration (i.e. crypto maps).

    If you use dynamic VTI for the configuration, then there is no limitation on the split tunnel ACL.

    Here is an example configuration for dynamic VTI:

    http://www.Cisco.com/en/us/docs/iOS-XML/iOS/sec_conn_vpnips/configuration/15-Mt/sec-IPSec-virt-tunnl.html#GUID-E9EB4518-6269-42E8-908C-57BA5D6334A5
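
    For orientation, a minimal sketch of the dynamic VTI approach (reusing the group and list names from the configuration above; the transform set, IPsec profile and interface are made-up placeholders - see the linked guide for the complete example):

    crypto ipsec profile VTI_PROF
     set transform-set ESP-AES-SHA
    !
    crypto isakmp profile [NAME]
     match identity group [NAME]
     client authentication list VPN_Client_AUTHEN
     isakmp authorization list VPN_Client_AUTHOR
     client configuration address respond
     virtual-template 1
    !
    interface Virtual-Template1 type tunnel
     ip unnumbered GigabitEthernet0/0
     tunnel mode ipsec ipv4
     tunnel protection ipsec profile VTI_PROF

    Each connecting client gets its own virtual-access interface cloned from the template, so the split tunnel ACL applied to the group is no longer subject to the 50-entry crypto map limit.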

    Hope that answers your question.

  • Windows won't load and the hard disk shows as RAW

    Recently I have had this problem: when my laptop started, it began checking for errors on disk D:. After it sat at 100% for more than an hour, I thought it was stuck and shut the laptop off with the power button. Now whenever Windows loads, the disk check on D: starts again but it gets stuck at 0%.

    I tried automatic recovery, but it failed. I opened advanced recovery, then troubleshooting, and then the command prompt.

    From the command prompt I could access the C, E and F drives (my 1 TB HD is divided into C, D, E, F), but opening D: shows a cyclic redundancy check error. Running chkdsk gives an error:
    Chkdsk is not available for RAW drives...

    I tried to skip the disk check, but Windows won't load; the loading screen just keeps spinning.

    Thanks for the help.

    Hello

    You might try to wipe the disk, but it may not make a difference...

    You have nothing to lose but some time...
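
    If you do go the wipe route, a minimal sketch from the recovery command prompt (double-check the disk number with 'list disk' first - 'clean' erases the entire selected disk, not just the D: partition):

    diskpart
    list disk
    select disk 0
    clean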

    You are probably seeing RAW because the drive was damaged when the chkdsk run was interrupted... It basically lost its formatting\partitioning information...

    The drive may already have been failing, which would be the reason for the chkdsk in the first place...

    Another thing you could try that would be faster and MAY recover the drive is to reinstall Windows and choose a CUSTOM install...  You'll get a screen where you can remove all the partitions and/or reformat the drive...

    Just delete them all and see if Windows installs and creates what it needs...

    Otherwise, try again and this time format the drive as NTFS... Windows will probably repartition it again...

    If your installation media does not restore to factory settings, it might be a good time to move up to Win 8.1 or 10 and start fresh...

    You may also simply need a new hard drive...

  • Quorum failgroup question

    Environment

    2-node RAC cluster, each node running:

    ----------------------------------------

    Oracle Linux 6 Update 5 (x86-64)

    Oracle Grid Infrastructure 12cR1 (12.1.0.2.0)

    Oracle Database 12cR1 (12.1.0.2.0)

    I am not understanding the role of the quorum failure group for voting files with Oracle ASM.

    Failure groups (FGs) ensure that there is separation of whatever risk you are trying to mitigate. For user data the separation is at the extent level; for voting files, each file is placed on a separate disk in a separate FG within the disk group (DG). If the voting files are stored in a disk group (DG) with normal redundancy, a minimum of two FGs is required, and 3 FGs are recommended, so that the Partner Status Table (PST) always has at least one other FG against which it can be compared in case an FG fails. Must the FGs that store voting files be QUORUM failgroups? What is the role of the quorum failgroup? When is it necessary?

    Hello

    I'll start with what the Quorum means:

    A quorum is the minimum number of members (a majority) of a set required to keep operating in the event of a failure. (A general computing concept.)

    There are several kinds of quorum: votedisk quorum, OCR quorum, network quorum, PST quorum, etc.

    We must be clear about which quorum we are concerned with.

    The PST quorum is different from the votedisk quorum, although everything works together.

    PST quorum:

    A PST contains information about all the disks in an ASM diskgroup - disk number, disk status, partner disk numbers, heartbeat info and failgroup info.

    A disk group must be able to access a quorum of the Partner Status Tables (PSTs) to mount the diskgroup.

    When a diskgroup mount is requested, the instance reads all disks in the disk group to find and verify all available PSTs. Once it confirms that there are enough PSTs for a quorum, it mounts the disk group.

    There is a good post on this here: ASM Support Guy: Partnership and Status Table

    Votedisk quorum:

    This is the minimum number of voting disks that must be online for the cluster to be operational; there must always be a quorum of votedisks.

    When you configure the votedisk in a normal redundancy diskgroup, you get 3 votedisks, one in each failgroup. For the cluster to be operational you need a quorum of at least 2 votedisks online.

    Quorum Failgroup (clause):

    A quorum failgroup is an option of the diskgroup setup.

    Do not confuse this option with the voting quorum: the voting quorum and the quorum failgroup are different things.

    For example: in a normal redundancy diskgroup I can lose my quorum failgroup and the cluster will stay online with the 2 regular failgroups; the quorum failgroup is just part of the setup.

    A failgroup declared as a 'quorum failgroup' has one specific purpose: it stores only the votedisk, to suit the way the Oracle Grid Infrastructure is deployed.

    There is no obligation to use a 'quorum failgroup' in a diskgroup containing the votedisk.

    Now, back to your question:

    If your failure groups have only 1 ASM disk each, then shouldn't the recommendation be to use high redundancy (5 failure groups), so that in case of an ASM disk failure a quorum of PSTs (3 PST copies) would still be possible?

    About the PST quorum: you must be aware that if you have 5 PSTs, you need a quorum of at least 3 PSTs to mount the diskgroup.

    If you have 5 failgroups and each failgroup has only one ASM disk, you will have one PST per ASM disk, which means you can lose at most 2 PSTs and still have a quorum of 3 PSTs to keep the diskgroup mounted, or to mount it.

    The bold italic Oracle documentation above seems to say that if you allocate 3 disk devices, 2 will be used by normal redundancy failure groups; in addition, a quorum failure group will exist which will use all of the disk devices. What does that mean?

    I am not sure what the documentation is saying there; it is quite confusing. I will try to contact someone at Oracle to check it.

    But I will attempt to clarify some things:

    Suppose you set up a diskgroup as follows (a SQL sketch of this layout follows the list):

    DATA DiskGroup

    Failgroup data01
      * /dev/hdisk1 and /dev/hdisk2

    Failgroup data02
      * /dev/hdisk3 and /dev/hdisk4

    Failgroup data03
      * /dev/hdisk5 and /dev/hdisk6

    Quorum failgroup data_quorum
      * /nfs/votedisk
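
    Purely as an illustration, this layout could be created from the ASM instance roughly like this (a sketch; the disk paths are the ones from the layout above and the attribute value is an assumption):

    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP data01 DISK '/dev/hdisk1', '/dev/hdisk2'
      FAILGROUP data02 DISK '/dev/hdisk3', '/dev/hdisk4'
      FAILGROUP data03 DISK '/dev/hdisk5', '/dev/hdisk6'
      QUORUM FAILGROUP data_quorum DISK '/nfs/votedisk'
      ATTRIBUTE 'compatible.asm' = '12.1';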

    When you add the votedisk to this DATA diskgroup, CSSD will store it as follows:

    CSSD randomly chooses one ASM disk per failgroup to hold a votedisk, but it always chooses an ASM disk from the quorum failgroup (if one exists).

    Therefore, after you add the votedisk to the diskgroup above, you might have voting files on:

    * Failgroup data01 (/dev/hdisk2)

    * Failgroup data03 (/dev/hdisk5)

    * Failgroup data_quorum (/nfs/votedisk)

    To mount the DATA diskgroup, failgroups data01, data03 and data_quorum must be available at mount time, otherwise the diskgroup will not come up.

    The documentation (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD91889) is a bit confusing:

    Normal redundancy

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and the amount of disk space you require. If the voting files are in a disk group, the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups, because the voting files are stored in quorum failure groups.

    What it is saying is: when using a quorum failgroup, you will have a higher minimum number of failure groups than other disk groups...

    But remember that the quorum failgroup is optional; it is meant for those who use a single storage array or an odd number of hardware storage arrays.

    A normal redundancy disk group for Oracle Clusterware files requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR plus one OCR mirror. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.

    To try to clarify:

    - The votedisk in a diskgroup with normal redundancy requires three disk devices. (When using a quorum failgroup: two of the three disks are used by regular failgroups and one of the three disks is used by the quorum failgroup, but all three disks (the regular failgroups and the quorum failgroup that store the votedisk) count when mounting the diskgroup.)

    - OCR and a mirror of OCR:

    That part really is confusing, because the OCR mirror must be placed in a different diskgroup, since the OCR is stored similarly to how Oracle database files are stored: its extents are spread across all disks in the diskgroup.

    I am not sure what it is referring to there - the mirroring done by diskgroup redundancy, or an OCR mirror location.

    Per the note and, especially, the documentation below, it is not possible to store the OCR and the OCR mirror in the same diskgroup:

    RAC FAQ (Doc ID 220970.1)

    How is the Oracle Cluster Registry (OCR) stored when using ASM?

    And (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD90964)

    * At least two OCR locations if the OCR is configured on an Oracle ASM disk group. You should configure the OCR in two independent disk groups. Typically this is the work area and the recovery area.

    High redundancy:

    For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR plus two OCR mirrors. With high redundancy, the cluster can survive the loss of two failure groups.

    Three of the five disks are used? And two mirrors of the OCR? In a single diskgroup?

    Now things get confusing.

    As far as I can test and see, with a quorum failgroup holding a votedisk, four (not three) of the five disks are used, and all five count.

  • Unrecoverable or not?

    2-node RAC on 11.2.0.3.5 on Solaris Sparc 11.1

    When I run this query:

    SQL> select file#, unrecoverable_time, unrecoverable_change# from v$datafile where unrecoverable_time is not null;

         FILE# UNRECOVER UNRECOVERABLE_CHANGE#
    ---------- --------- ---------------------
             5 10-NOV-12               1774222
            94 10-NOV-12              15263028
           106 10-NOV-12              15262676
           240 10-NOV-12              15258111
           244 10-NOV-12              15259701
           250 10-NOV-12              15262273
           254 10-NOV-12              15260190
           256 10-NOV-12              15261471
           291 10-NOV-12               1776582
           318 10-NOV-12              15256886
           349 10-NOV-12              15261876
           351 10-NOV-12              15260644

    12 rows selected.

    ...it tells me that there are 12 data files that are unrecoverable. This is something the raccheck utility picks up, and it gives me a 'FAIL' for datafile recoverability accordingly.

    But the database has force logging enabled (turned on in June of this year). So when I check with RMAN:

    RMAN> report unrecoverable;

    using target database control file instead of recovery catalog

    Report of files that need backup due to unrecoverable operations

    File Type of Backup Required  Name
    ---- ----------------------- -----------------------------------

    RMAN>

    ...it tells me nothing is unrecoverable, which is what I originally expected.

    Is there anything I can do to clear this data from dba_data_files so that the raccheck utility doesn't choke on it every time? Or is there a real problem that raccheck is picking up and RMAN is ignoring?

    I think you are misunderstanding the meaning of these V$DATAFILE columns. From the documentation:

    UNRECOVERABLE_CHANGE#: Last unrecoverable change number made to this datafile. If the database is in ARCHIVELOG mode, this column is updated when an unrecoverable operation completes. If the database is not in ARCHIVELOG mode, this column is not updated.

    UNRECOVERABLE_TIME: Timestamp of the last unrecoverable change. This column is updated only if the database is in ARCHIVELOG mode.

    As you can see, on 10 November 2012 there were transactions against these data files that were not logged, probably because of NOLOGGING operations. It will be a problem only if you want to restore a backup from 2012 and recover these data files to that particular SCN. These columns show the SCN of the LAST UNRECOVERABLE operation performed on each data file; you would still be able to recover them to any point in time after November 10, 2012.

    RMAN does not report this because it takes your current backups into account. RMAN is telling you that everything is OK to be restored to any point in time within your retention policy (time based or redundancy based). For example, if you did a NOLOGGING insert it would show up in RMAN, but it goes away once you take a full backup of the data file after the NOLOGGING operation; a short sketch follows.
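
    As an illustration (a minimal sketch, with file numbers taken from the query output above), backing up the affected datafiles after the NOLOGGING work makes the unrecoverable report come up clean:

    # back up the datafiles touched by the NOLOGGING operations
    RMAN> backup datafile 5, 94, 106;
    # the report should then list nothing for those files
    RMAN> report unrecoverable;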

    Regards

  • Voting disk recommendation for 11.2.0.3

    Grid version: 11.2.0.3

    Platform: Oracle Enterprise Linux 5.8

    We want to keep our voting disk in an ASM disk group. Since this RAC cluster is going to host a very critical database, we want to make sure the voting disk is well 'protected' against disk failure.

    All our LUNs are allocated from a high-end Hitachi storage frame (Hitachi VSP). No multiple storage arrays. I think that is the case for most RAC implementations worldwide; I could be wrong.

    So, I guess I can't specify normal or high redundancy because there is only one failure group.

    I have 4 questions.

    1. Given the situation described above (all LUNs from the same storage frame), how many LUNs should there be in my OCR_VOTE disk group?

    2. What redundancy should I set up for the OCR_VOTE diskgroup? Normal, high or external?

    3. Is there an advantage to having an odd number of LUNs (3) for the OCR_VOTE disk group in my case?

    4. Is it good to have separate diskgroups for the voting disk and the OCR?

    So, I guess I can't specify normal or high redundancy because there is only one failure group.

    I have 4 questions.

    1. Given the situation described above (all LUNs from the same storage frame), how many LUNs should there be in my OCR_VOTE disk group?

    It is recommended to create 3 small LUNs (one in each storage array if you have that option), and you can create a failgroup for each LUN. (The idea is to multiplex the voting files to protect against logical or physical corruption of an array.)

    Most storage arrays have MPIO and RAID configured, so hardware redundancy (a disk or controller failure) is handled at the storage level, but you can still protect the voting disk and OCR against failures of the array itself.

    2. What redundancy should I set up for the OCR_VOTE diskgroup? Normal, high or external?

    External - if you make sure that your hardware is highly protected against failures, with RAID and MPIO configured. But you will not be protected against logical or physical corruption or array failures.

    Normal - you are protected against physical or logical corruption, and the recommended layout is to place each LUN in a different array on the storage side.

    You should use at least normal redundancy if you use two or more storage systems.

    High - if your cluster has more than 5 nodes and you have two or more storage systems, high redundancy is recommended.

    3. Is there an advantage to having an odd number of LUNs (3) for the OCR_VOTE disk group in my case?

    To place the voting files in a diskgroup with normal redundancy, you will need at least 3 LUNs, since voting files on normal redundancy require 3 or more ASM disks.

    The 3 LUNs are not a diskgroup requirement but a CSS requirement; a sketch is below.
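
    For illustration only (a sketch with made-up disk paths), the 3-LUN normal redundancy diskgroup and the move of the voting files into it would look roughly like:

    CREATE DISKGROUP ocr_vote NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/mapper/ocrvote1'
      FAILGROUP fg2 DISK '/dev/mapper/ocrvote2'
      FAILGROUP fg3 DISK '/dev/mapper/ocrvote3';

    # then, as the Grid Infrastructure owner:
    crsctl replace votedisk +OCR_VOTE
    crsctl query css votedisk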

    4. Is it good to have separate diskgroups for the voting disk and the OCR?

    The only reason to separate them (if you have RAID and MPIO at the hardware level) is to protect the OCR against the failure of a diskgroup.

    If you have the OCR_VOTE diskgroup set up correctly, then also protecting the OCR in another diskgroup is a plus.

    P.S. The voting files can only be placed in one diskgroup; the OCR can be placed in several diskgroups.

    Recently I saw a customer lose 12 disks (failed) at the same time in a DS8000 storage array with over 100 physical disks.

    We thought that was impossible, but it can happen.

  • RMAN: backups not expiring after the retention window

    
    

    Hello

    I run backups to sbt_tape using the run block below:

    run{
    ALLOCATE CHANNEL C1 TYPE SBT_TAPE PARMS='SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libddobk.so,
    ENV=(STORAGE_UNIT=CH2-nwtdb-Oracle, BACKUP_HOST=172.28.136.50, ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)';
    ALLOCATE CHANNEL C2 TYPE SBT_TAPE PARMS='SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libddobk.so,
    ENV=(STORAGE_UNIT=CH2-nwtdb-Oracle, BACKUP_HOST=172.28.136.50, ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)';
    ALLOCATE CHANNEL C3 TYPE SBT_TAPE PARMS='SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libddobk.so,
    ENV=(STORAGE_UNIT=CH2-nwtdb-Oracle, BACKUP_HOST=172.28.136.50, ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)';
    ALLOCATE CHANNEL C4 TYPE SBT_TAPE PARMS='SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libddobk.so,
    ENV=(STORAGE_UNIT=CH2-nwtdb-Oracle, BACKUP_HOST=172.28.136.50, ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)';
    backup incremental level 0 database filesperset=1 format '%d_L0_DF%f_%T_%u.dbf' tag 'DB_L0_DF_#_DT_UNQNO';
    backup current controlfile format '%d_L0_CF_%T_%u.ctl' tag 'DB_L0_CF_DT_UNQNO';
    backup incremental level 0 archivelog all filesperset=1 format '%d_L0_AL_%h_%e_%T_%u.arc' tag 'DB_L0_ARC#_DT_UNQNO';
    crosscheck backup of database;
    crosscheck backup of archivelog all;
    delete noprompt expired backup of database;
    delete noprompt expired backup of archivelog all;
    }
    

    My backups work perfectly, but for some reason my backups never expire even though my retention is set to 7 days.

    In my backup destination, I have backups as old as March 13, which should have expired a long time ago.

    Any ideas what could be wrong?

    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters for database with db_unique_name IWTPR are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libddobk.so, ENV=(STORAGE_UNIT=CH2-nwtdb-Oracle, BACKUP_HOST=172.28.136.50, ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)';
    CONFIGURE MAXSETSIZE TO UNLIMITED;
    CONFIGURE ENCRYPTION FOR DATABASE OFF;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128';
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+IWTPRFRA/IWTPR/snapcf_IWTPR.f';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+IWTPRFRA/iwtpr/snapcf_iwtpr.f';
    

    Thanks in advance,

    aBBy.

    Details of env: 4-node RAC cluster running 11.2.0.3.4 GI/RDBMS on RHEL 5.6

    AB007 wrote:

    My backups work perfectly, but for some reason my backups never expire even though my retention is set to 7 days. In my backup destination, I have backups as old as March 13, which should have expired a long time ago. Any ideas what could be wrong?

    You are confusing "expired" with "obsolete".

    When a backupset is created, a record of it is placed in the control file.

    When you 'crosscheck backup', RMAN checks these records against reality. If a backup turns out to have disappeared, its record is marked as 'expired'. Follow that up with a 'delete expired backup', and the records of those missing (expired) backups are removed from the repository.

    What you want is 'delete obsolete', which removes any backup that is no longer needed to satisfy the retention policy.

    BTW, 'redundancy' and 'recovery window of x days' are mutually exclusive retention policies.
