Compressed log shipping

Hello

1. Can someone please explain what compressed log shipping is in Oracle?
2. What are the benefits of using it with the existing functionality?
3. Does it require a license to use it? If yes, how would it be calculated?

Thanks in advance

See http://www.oracle.com/technetwork/articles/sql/11g-dataguard-083323.html
where Compression is identified as a "new feature" available in 11g.
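For reference, 11g turns on redo transport compression per log archive destination via the COMPRESSION attribute. A minimal sketch (the service and DB_UNIQUE_NAME "stby" are placeholders, not values from this thread):

```sql
-- On the primary: compress redo shipped to this standby destination
-- (service name and db_unique_name below are placeholder values)
alter system set log_archive_dest_2 =
  'SERVICE=stby ASYNC COMPRESSION=ENABLE DB_UNIQUE_NAME=stby' scope=both;
```

On licensing (question 3): redo transport compression requires the separately licensed Advanced Compression option; how that is priced for your environment is best confirmed with Oracle.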

Hemant K Collette

Tags: Database

Similar Questions

  • Redo log shipping after stopping a physical standby

    Hello,

    I just have a question about redo log shipping

    in a simple primary/standby 11g configuration.

    When a physical standby instance is shut down, is redo shipping still active? Does the primary still transfer redo?

    Thanks in advance

    Hello;

    I'm going to say no. I still defer shipping on the primary side before I shut down the standby; it keeps false alarms from happening.

    Test

    Using this query

    http://www.mseberg.NET/data_guard/monitor_data_guard_transport.html

    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
    ---------- -------------- ------------ ----------- -------------- -------
    TDBGIS01B  PRIMARY        5558         5557        21-OCT/12:51   1

    Stop the standby:

    SQL> alter database recover managed standby database cancel;

    Database altered.

    SQL > shutdown

    ORA-01109: database is not open

    The database is dismounted.

    Force a log switch on the primary side:

    SQL> alter system switch logfile;

    System altered.

    (done 3 times)

    The log before was 5558, so check to make sure 5559 or higher arrives.

    Last log is

    o1_mf_1_5558_c2hn5vdf_.arc

    Waited a few minutes; no change.

    Best regards

    mseberg
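    A common way to avoid the false alarms mentioned above is to defer the shipping destination on the primary before taking the standby down, then re-enable it afterwards. A sketch, assuming destination 2 on the primary ships to this standby:

    ```sql
    -- On the primary, before shutting down the standby (dest 2 is an assumption):
    alter system set log_archive_dest_state_2 = defer;

    -- ... shut down / restart the standby ...

    -- Re-enable shipping and force a switch so any gap resolves:
    alter system set log_archive_dest_state_2 = enable;
    alter system switch logfile;
    ```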

  • The archive logs are shipped but the older logs are not applied...

    Environment:

    Oracle 11.2.0.3 on Solaris 10.5

    The Data Guard environment has been working fine for 6 months.

    I recently had disk space problems on the standby and had to copy some archive log files to a different mount point. These files had been shipped to the standby but not yet applied, because I had to cancel the log apply process.

    After clearing the space problem and turning log shipping and log apply back on (alter database recover managed standby ...), the new logs are shipped, but my standby is a bunch of log files behind. I can see it in V$ARCHIVED_LOG on the standby, where the last SEQUENCE# applied is 8083 in my case and the most current SEQUENCE# is currently more than 8200.

    I still have all the archive logs on the 'other' mount point and can move them back to the correct archive directory as needed.

    Since the next file that the apply process requires is seq# 8084, I moved this file into the /arch directory on the standby for it to 'see' and apply. No luck.

    Searching the alert log just after executing 'alter database recover managed standby database disconnect;', I see the following:
    Media Recovery Waiting for thread 1 sequence 8084
    and also:
    Fetching gap sequence in thread 1, gap sequence 8084-8084
    and just below that:
    FAL[client]: Error fetching gap sequence, no FAL server specified
    I thought I read in the docs that FAL_SERVER and FAL_CLIENT are no longer needed. (?)

    I also found a CKPT script that lists information about the DG environment. The primary side looks fine, but the standby produces this output:
    NAME                           DISPLAY_VALUE
    ------------------------------ ------------------------------
    db_file_name_convert           /u01/oradata/APSMDMP1, /u01/or
                                   adata/APSMDMP1, /u02/oradata/A
                                   PSMDMP1, /u02/oradata/APSMDMP1
                                   , /u03/oradata/APSMDMP1, /u03/
                                   oradata/APSMDMP1, /u04/oradata
                                   /APSMDMP1, /u04/oradata/APSMDM
                                   P1
    
    db_name                        APSMDMP1
    db_unique_name                 APSMDMP1STBY
    dg_broker_config_file1         /u01/app/oracle/product/11203/
                                   db_2/dbs/dr1APSMDMP1STBY.dat
    
    dg_broker_config_file2         /u01/app/oracle/product/11203/
                                   db_2/dbs/dr2APSMDMP1STBY.dat
    
    dg_broker_start                FALSE
    fal_client
    fal_server
    local_listener
    log_archive_config             DG_CONFIG=(APSMDMP1,APSMDMP1ST
                                   BY)
    
    log_archive_dest_2             SERVICE=APSMDMP1 ASYNC VALID_F
                                   OR=(ONLINE_LOGFILES,PRIMARY_RO
                                   LE) DB_UNIQUE_NAME=APSMDMP1 MA
                                   X_CONNECTIONS=2
    
    log_archive_dest_state_2       ENABLE
    log_archive_max_processes      4
    log_file_name_convert          /u01/oradata/APSMDMP1, /u01/or
                                   adata/APSMDMP1
    
    remote_login_passwordfile      EXCLUSIVE
    standby_archive_dest           ?/dbs/arch
    standby_file_management        AUTO
    
    NAME       DB_UNIQUE_NAME                 PROTECTION_MODE DATABASE_R OPEN_MODE
    ---------- ------------------------------ --------------- ---------- --------------------
    APSMDMP1   APSMDMP1STBY                   MAXIMUM PERFORM PHYSICAL S READ ONLY WITH APPLY
                                              ANCE            TANDBY
    
    
       THREAD# MAX(SEQUENCE#)
    ---------- --------------
             1           8083
    
    PROCESS   STATUS          THREAD#  SEQUENCE#
    --------- ------------ ---------- ----------
    ARCH      CLOSING               1       7585
    ARCH      CLOSING               1       7588
    ARCH      CLOSING               1       6726
    ARCH      CLOSING               1       7581
    RFS       RECEIVING             1       8462
    MRP0      WAIT_FOR_GAP          1       8084
    RFS       IDLE                  0          0
    
    NAME                           VALUE      UNIT                           TIME_COMPUTED                  DATUM_TIME
    ------------------------------ ---------- ------------------------------ ------------------------------ ------------------------------
    transport lag                  +00 00:01: day(2) to second(0) interval   08/16/2012 13:24:04            08/16/2012 13:23:35
                                   59
    
    apply lag                      +01 23:08: day(2) to second(0) interval   08/16/2012 13:24:04            08/16/2012 13:23:35
                                   43
    
    apply finish time              +00 00:17: day(2) to second(3) interval   08/16/2012 13:24:04
                                   21.072
    
    estimated startup time         15         second                         08/16/2012 13:24:04
    
    NAME                                                            Size MB    Used MB
    ------------------------------------------------------------ ---------- ----------
                                                                          0          0
    I do not use a fast recovery area.

    What should I do to get the old logs registered so that apply begins again?

    The alert log says something about registering the media, but I don't know why I would need that since I have the log files themselves.

    Any help is appreciated.

    Also, I spent a significant amount of time reading through this forum and many articles on the web, but couldn't find exactly the solution I was looking for.

    Thank you very much!!

    -gary

    Try to register the log file:

    alter database register logfile '<full path of the sequence 8084 archive log>';

    and check: select message from v$dataguard_status;
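    For the standby to fetch missing sequences itself, FAL_SERVER must name a service that can supply the archives. A sketch (the TNS alias and the archive file name below are placeholders, not the poster's actual values):

    ```sql
    -- On the standby: point the fetch-archive-log client at the primary
    alter system set fal_server = 'APSMDMP1' scope=both;

    -- Or register a manually copied archive so MRP can see it
    alter database register logfile '/arch/1_8084_123456789.dbf';

    -- Then watch progress:
    select message from v$dataguard_status;
    ```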

  • Deferring log shipping to a physical standby when applying a patch

    I am applying a CPU (group of hotfixes) to a Data Guard primary and physical standby per Doc ID 278641.1. I need to temporarily stop archive log shipping from the primary to the standby. The documentation gives this example command:

    ALTER SYSTEM SET log_archive_dest_state_X=DEFER SCOPE=BOTH SID='*';

    Where X is the number of the destination used for shipping redo to the standby site.

    So, what is X? In online research, the answer is almost always log_archive_dest_state_2. Querying the primary, 2 is unused and shipping seems to be on 1, so I think I should use 1. Which should I use?

    On the primary:

    select status, dest_name, destination from v$archive_dest where status = 'VALID';

    VALID
    LOG_ARCHIVE_DEST_1
    prod1s

    VALID
    LOG_ARCHIVE_DEST_10
    USE_DB_RECOVERY_FILE_DEST

    show parameter log_archive_dest_1

    log_archive_dest_1   string   SERVICE=prod1s LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod1s

    log_archive_dest_10  string   LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY


    show parameter log_archive_dest_state

    NAME                                 TYPE   VALUE
    ------------------------------------ ------ ------------------------------
    log_archive_dest_state_1             string ENABLE
    log_archive_dest_state_10            string ENABLE
    log_archive_dest_state_2             string ENABLE
    log_archive_dest_state_3             string ENABLE
    log_archive_dest_state_4             string ENABLE
    log_archive_dest_state_5             string ENABLE
    log_archive_dest_state_6             string ENABLE
    log_archive_dest_state_7             string ENABLE
    log_archive_dest_state_8             string ENABLE
    log_archive_dest_state_9             string ENABLE


    On the physical standby:

    select status, dest_name, destination from v$archive_dest where status = 'VALID';

    VALID
    LOG_ARCHIVE_DEST_2
    prod1

    VALID
    LOG_ARCHIVE_DEST_10
    USE_DB_RECOVERY_FILE_DEST

    VALID
    STANDBY_ARCHIVE_DEST
    USE_DB_RECOVERY_FILE_DEST

    show parameter log_archive_dest_

    log_archive_dest_10  string  LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY
    log_archive_dest_2   string  SERVICE=prod1 LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod1

    show parameter log_archive_dest_state

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    log_archive_dest_state_1             string      DEFER
    log_archive_dest_state_10            string      ENABLE
    log_archive_dest_state_2             string      ENABLE
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      ENABLE
    log_archive_dest_state_5             string      ENABLE
    log_archive_dest_state_6             string      ENABLE
    log_archive_dest_state_7             string      ENABLE
    log_archive_dest_state_8             string      ENABLE
    log_archive_dest_state_9             string      ENABLE

    DataGuard with physical standby
    DB to: 10.2.0.4.0
    No RAC, ASM or DG broker

    Published by: JJ on May 2, 2012 10:05


    Hello;

    You want this

    ALTER SYSTEM SET log_archive_dest_state_2=DEFER; (you need to do this for the log_archive_dest_state_n / DEST ID that applies to your system; looks like 2 as well)

    I'll post my full patch how-to here in a moment.

    0. Disable log shipping from the Primary
    1. Shutdown Standby
    2. Install patch on Standby software only
    3. Startup Standby in recovery mode (do NOT run any SQL at the standby)
    4. Shutdown Primary
    5. Install patch on Primary
    6. Run SQL on Primary
    7. Re-enable log shipping
    8. Monitor the redo apply from Primary to Standby --- this will also upgrade the Standby 
    
    See Oracle support article : How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration [ID 278641.1]
    

    Please consider closing some of your old questions.

    Best regards

    mseberg

    Published by: mseberg on May 2, 2012 09:16
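    Steps 0 and 7 of the how-to above boil down to a defer/enable pair on the primary. A sketch; use the destination number that actually ships to your standby:

    ```sql
    -- Step 0: on the primary, stop shipping to the standby before patching
    alter system set log_archive_dest_state_2 = defer scope=both sid='*';

    -- Step 7: after patching both sides, resume shipping
    alter system set log_archive_dest_state_2 = enable scope=both sid='*';
    ```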

  • SQL Log Shipping 14420, 14421 false alerts

    I've implemented SQL Server 2014 transaction log shipping to copy data from a primary to a secondary server. Backup, copy, and restore are all working properly; however, the alert job configured on the secondary server reports that the last backup time is the time of the initial full backup taken before transaction log shipping started. This is the information reported when I query the msdb.dbo.log_shipping_monitor_primary table. Therefore, when the monitoring alert runs, it reports:

    Msg 14420, Level 16, State 1, Procedure sp_check_log_shipping_monitor_alert, Line 61
    The log shipping primary database XXXXXXX.xxxxxxx has backup threshold of 60 minutes and has not performed a backup log operation for 1120 minutes. Check agent log and logshipping monitor information.

    SQL Server was originally installed before the server was renamed to its current name; however, I've renamed the instance, and all database queries (for example SELECT @@SERVERNAME) return the correct server name.

    Any help would be appreciated.

    This issue is beyond the scope of this site (which is for consumers). To be sure you get the best (and fastest) reply, please ask on either TechNet (for IT pros) or MSDN (for developers).
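    For reference, the monitor table the poster queried can be inspected directly. A sketch (column names per msdb's log shipping monitor tables):

    ```sql
    -- Run on the server holding the log shipping monitor data
    SELECT primary_server,
           primary_database,
           last_backup_file,
           last_backup_date   -- the stale value that drives the 14420 alert
    FROM   msdb.dbo.log_shipping_monitor_primary;
    ```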
  • vROps to Log Insight log shipping using the LI agent

    I am trying to use the vROps-to-Log Insight logging solution (Cloud Management Marketplace | Solution Exchange) and I'm having a few problems, as indicated in this post on the vROps forum... vROps configuration automatically changes Log Insight (in a bad way)

    Has someone gotten these solutions to work together and had the same problem (vROps changing the LI configuration automatically)?

    I had to exclude some logging, because since my vROps installation I was getting about 20 million events per day for vROps alone, almost all of it INFO/DEBUG.

    Does someone have a guideline on which configuration can be done?

    There seems to be only a little documentation about log configuration... I've seen some articles that talk about the log4j files, but they seem to reset themselves to defaults if I change the configuration through the graphical user interface as well.

    Regards

    Francesco

    Confirmed - sorry about the back and forth on this. My tests were not done with 6.0.1. So it seems that you can't use the content pack and remote syslog UI with 6.0.1. Thank you for reporting it to us - I will get it tracked with the vR Ops team. Have you opened a support request for this issue? If not, could you, and ensure that it is assigned to the vR Ops team?

  • Verifying successful archive log shipping

    RDBMS version: 11.2.0.2
    Platform: Solaris 10

    How can I check whether an individual archived log (or a set of archived logs after a certain time) on the primary DB server was successfully sent to the standby DB server?

    Hello

    If you are using dataguard broker, I think this might help you

    http://docs.Oracle.com/CD/B10501_01/server.920/a96629/dbpropref.htm#92312
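    Without the broker, a common check is V$ARCHIVED_LOG on the primary, which records per-destination shipping (and apply) status. A sketch, assuming dest_id 2 is the standby destination:

    ```sql
    -- On the primary: was each sequence shipped and applied at the standby destination?
    select sequence#, archived, applied, completion_time
    from   v$archived_log
    where  dest_id = 2
    and    first_time > sysdate - 1   -- logs from the last day
    order  by sequence#;
    ```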

  • Log Insight Agent Compression application

    I'm sure I know the answer, but can someone confirm: is it possible to disable Log Insight compression?

    No, it is not possible to disable compression today.

  • BlackBerry Smartphones Mail Synchronization Log to Support

    I can find no support e-mail address. I can't ship the log to support. I get "program error occurred at line 131 in .\Error.cpp" and "Please send Log to Technical Support."

    I have full Sync logging turned on, and I can find the log file. But I have nowhere to send the log.

    You must contact BlackBerry support through your carrier, AT&T, in order to send logs to support.

  • ORA-16086: Redo data cannot be written to the standby redo log

    Hello

    I have enough disk space: the disks are 85% full on both servers, and v$flash_recovery_area_usage shows 56% of the recovery dest area used.

    So after checking that the standby redo logs and online redo logs have matching sizes, and that there is enough space allocated to the recovery area and enough disk space on the server, I don't know what else could cause a problem with archive log shipping.

    Any ideas?

    My online redo logs and standby redo logs were the same size:

    SQL> select group#, bytes from v$log;

        GROUP#      BYTES
    ---------- ----------
             1   52428800
             2   52428800
             3   52428800

    SQL> select group#, bytes from v$standby_log;

        GROUP#      BYTES
    ---------- ----------
             4   52428800
             5   52428800
             6   52428800
             7   52428800

    On the standby:

    SQL> select group#, bytes from v$log;

        GROUP#      BYTES
    ---------- ----------
             1   52428800
             3   52428800
             2   52428800

    SQL> select group#, bytes from v$standby_log;

        GROUP#      BYTES
    ---------- ----------
             4   52428800
             5   52428800
             6   52428800
             7   52428800

    However, on the standby, the standby redo logs are all unassigned:

    SQL> select group#, status from v$standby_log;

        GROUP# STATUS
    ---------- ----------
             4 UNASSIGNED
             5 UNASSIGNED
             6 UNASSIGNED
             7 UNASSIGNED

    Finally, on the primary:

    SQL> select status, error from v$archive_dest where dest_id = 2;

    STATUS    ERROR
    --------- -----------------------------------------------------------------
    ERROR     ORA-16086: Redo data cannot be written to the standby redo log

    Hello

    As you can see, your flash_recovery_area is 99% used: 57% consumed by the archive logs and 42% consumed by the flashback logs.

    FILE_TYPE        PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    ---------------- ------------------ ------------------------- ---------------
    ARCHIVED LOG                  57.19                         0             439
    FLASHBACK LOG                 42.44                         0             178

    You have 3 options:

    1. increase the flash recovery area size (db_recovery_file_dest_size) if you have space left in db_recovery_file_dest

    2. check your archive log retention and remove the old archives

    3. do you really use the flashback function? If not, you can disable it. If you do need it, check whether there are restore points (sometimes, if you have them and no longer intend to use them, they will cause flashback logs to be kept longer than they should):

    select * from v$restore_point;

    Also check the flashback retention with:

    show parameter flashback
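    For option 2, RMAN can trim old archive logs out of the recovery area cleanly. A sketch; only delete archives you have verified are backed up and already applied on any standby:

    ```sql
    RMAN> crosscheck archivelog all;
    RMAN> delete archivelog all completed before 'sysdate - 2';
    ```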

  • Log Insight 3.3 shows duplicate host entries consuming licenses - can I clean them up?

    Hi all

    So I installed Log Insight 3.3 for vCenter and it helped me set up log shipping. Everything works well except for two things:

    1. Duplicate hosts (see below) consume all my OSI licenses. Does anyone know how I can clean up one of the entries? (Of course, I can add the FQDN to the ESXi host name if that is useful and supported)
    2. ESXi 5.5 hosts are not in the host list - configuration double-checked and syslog restarted. Possibly because the OSI licenses are consumed by the duplicate host entries?

    Release notes:

    The host table can display devices more than once.
    The host table can display devices more than once, with each in a different format, including a combination of IP address, hostname, and fully qualified domain name. For example, a device called foo.bar.com may appear as both foo and foo.bar.com.
    The host table uses the hostname field that is defined in the syslog RFC. If an event sent by a device via the syslog protocol does not have a hostname, vRealize Log Insight uses the source as the hostname. This can cause the device to be listed repeatedly, as vRealize Log Insight cannot determine whether the two formats point to the same device.

    Advice would be much appreciated.

    Thank you

    #1: There is no way to manually clear entries - for /admin/hosts, the entry will be deleted once all data from that host has aged out (i.e. based on the retention period). For /admin/license, if you click the question mark next to Active OSIs, it says "The average active OSI count is the daily average number of hosts sending events to Log Insight." The big question is why you are seeing duplicates. Duplicates are seen if forward AND reverse DNS are not configured correctly. Duplicates can also result from malformed syslog events.

    #2: The question is not about duplicated OSIs - if this does not work, it means that something is wrong. It could be the network path, including DNS resolution on the ESXi host, or the network firewall configuration (not the host firewall configuration). You'll probably want to connect to an ESXi 5.5 host and check things like syslog configuration validity, confirming network connectivity to LI, confirming that DNS resolution of the syslog destination works, etc.

    I hope this helps!

  • Oracle archive log compression

    Dear all,

    Please help me: how do I compress archive logs in Oracle 10g?

    Thank you
    Manas

    There is nothing to test. The command is undocumented and unsupported, so it can create more problems than good. Since it isn't a good idea, to propose other solutions: compressed backupsets or O/S-level compression can help.

    Aman...

  • Error: A connection has been established with the server, but then an error occurred during the connection process.

    Hello

    I have MS SQL running in a cluster environment and recently faced a problem when a security agent was installed on the MS SQL server; the agent does nothing but capture local database activity. The error logged is as below:

    Step ID             1
    Server              NIBKSQLCLUST
    Job name            LSBackup_DRIB
    Step name           Log shipping backup job step.
    Duration            00:00:02
    SQL severity        0
    SQL message ID      0
    Operator emailed
    Operator net sent
    Operator paged
    Retries attempted   0

    Message

    2011-03-21 08:00:02.62  *** Error: Could not retrieve backup settings for primary ID '26f46141-a676-41b2-8653-11f1b13de43a'. (Microsoft.SqlServer.Management.LogShipping) ***

    2011-03-21 08:00:02.63  *** Error: Failed to connect to server NIBKSQLCLUST. (Microsoft.SqlServer.ConnectionInfo) ***

    2011-03-21 08:00:02.63  *** Error: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (.Net SqlClient Data Provider) ***

    2011-03-21 08:00:02.63  ----- END OF TRANSACTION LOG BACKUP -----

    The process runs correctly when I turn off the security agent. Kindly advise on the cause of this problem and where configurations should be set/changed on the MS SQL server.

    Thank you

    Boonlep coulibaly

    Hello

    I suggest you post your question at the links below and check:

    http://msdn.Microsoft.com/en-us/hh361695.aspx

    http://msdn.Microsoft.com/en-us/library/bb545450.aspx

  • CPID v2.0 installation fails on Windows Server 2008 R2 with SP1

    When installing CPID v2.0 on Windows Server 2008 R2 with SP1, the setup log reported that the installation failed with the error "CustomAction RollBackSccmComponents returned error code 1603".

    I ran the setup with this command after opening the command prompt as 'Run as administrator':

    MsiExec.exe /i Dell_Client_Integration_Pack.msi

    I even put the command above into a batch file and ran it, with the same result. Using Trace32, I also found this error to be the main problem causing the installation failure:

    "Custom action InstallSccmComponents unexpectedly closed the hInstall (type MSIHANDLE) handle provided to it. The custom action should be fixed to not close this handle."

    Googling around, I came across this link: http://en.community.dell.com/techcenter/enterprise-client/f/4448/t/19416814.aspx and tried to perform the steps below, but when I did, the installation just rolled back without even beginning:

    Steps in the link above:

    *********************************************************************************************************

    "We have found this to be a problem with the way that the custom action is running in the MSI file, it is executed in the context of the user who was unable, we changed this to run in the context of the system that solved the problem. To do is relatively simple;

    First unzip the files from the downloaded executable deployment package. Then, download Orca here, http://blogs.msdn.com/astebner/archive/2004/07/12/180792.aspx. Once installed Orca, do a right click Extract MSI and select, "Edit with Orca. Once Orca lance, do not panic, just click on the top menu and select, "Transform", then select "Transformation." Then go to the 'Custom Actions' Table on the left and find the "InstallSccmComponents" entry on the right; Change the value in the column "Type" 3073, then go to "transform", "Generate Transform", save the transformation in the same directory as your MSI and select, and then close Orca.

    To install, use the following command line;

    MsiExec.exe /i \Dell_Server_Deployment_Pack_v1.2_for_ConfigMgr_A01.msi TRANSFORMS =-number

    = The path to the directory containing the package deployment installation files and your STDS. »

    *****************************************************************************************************

    Has anyone seen this problem before? And if so, how to solve?

    Thank you

    TeeDarling77

    I ended up opening a ticket with Dell to get this problem finally solved. After I shipped the install log to Dell Support, they asked me to follow these steps:

    1. If the SQL DB is on a separate machine, try to create the "SMS_TaskSequence_Action" class in WMI under \\.\root\sms\site_<site code>. You can download WMI Explorer to browse if you want to see whether this class already exists. Running the script below will not hurt if it already exists.

    a. Here are the instructions to create the custom class. First open PowerShell as an administrator.

    b. set-executionpolicy RemoteSigned

    c. Change the extension of the attached file to .ps1 and edit the file to replace "site_abc" with "site_<your 3-letter site code>".

    d. Change to the storage location of the ps1 file.

    e. .\WMIClassAdd.ps1

    f. Restart and try the installation again.

    3. If it still does not work, send me the install log again to look at.

    Note:

    Here is what was in the attachment in step C.

    ****************************************************************************

    $newClass = New-Object System.Management.ManagementClass ("root\sms\site_abc", [String]::Empty, $null);

    $newClass["__CLASS"] = "SMS_TaskSequence_Action";

    $newClass.Qualifiers.Add("Static", $true)

    $newClass.Put()

    *************************************************************************

    After you run the instructions above and restarted the server, installing v2.0 CPID ended successfully.

    I hope that these instructions will help someone in the future... :))

    TeeDarling77

  • Rolling forward a standby database with an incremental backup when a data file is dropped on the primary

    Hello

    I'm doing nologging operations and dropping some files on the primary, and I want to roll forward the standby using an SCN-based incremental backup.

    How do I do it, in particular, given that the files are dropped?

    I came across ( Doc ID 1531031.1 ), which explains how to roll forward when a data file is added.

    If I follow the same steps, minus the step to restore the newly added data file, will it work in my case?

    Can someone please clarify?

    Thank you

    San

    I was wondering: if 'recover noredo' is performed before the controlfile is restored, will Oracle apply the incremental backup to the files error-free, and in that case, what would be the status of the data file in the control file?

    Why do you think that recovering the standby first and then restoring the controlfile will lead to problems? Please read my first post on this thread - I clearly mentioned that you would not face problems if you go with this method.

    Here is a demo for you with force logging disabled. The standby is recovered first and the controlfile is restored afterwards:

    Primary: oraprim

    Standby: orastb

    The data file of tablespace MYTS is removed on the primary:

    SYS@oraprim> select force_logging from v$database;

    FORCE_LOGGING

    ---------------------------------------

    NO

    Currently the tablespace has 2 data files.

    SYS@oraprim> select file_name from dba_data_files where tablespace_name = 'MYTS';

    FILE_NAME

    -------------------------------------------------------

    /U01/app/Oracle/oradata/oraprim/myts01.dbf

    /U01/app/Oracle/oradata/oraprim/myts02.dbf

    On the standby, the tablespace has 2 data files:

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /U01/app/Oracle/oradata/orastb/myts01.dbf

    /U01/app/Oracle/oradata/orastb/myts02.dbf

    Deferring log shipping to the standby on the primary:

    SYS@oraprim> alter system set log_archive_dest_state_3 = defer;

    System altered.

    Dropped 1 MYTS datafile on the primary.

    SYS @ oraprim > alter tablespace myts drop datafile ' / u01/app/oracle/oradata/oraprim/myts02.dbf';

    Tablespace altered.

    Removed some archive logs to create a gap.

    [oracle@ora12c-1 2016_01_05]$ rm -rf *31*

    [oracle@ora12c-1 2016_01_05]$ ls -lrt

    13696 total

    -rw-r----- 1 oracle oinstall 10534400 Jan  5 18:46 o1_mf_1_302_c8qjl3t7_.arc
    -rw-r----- 1 oracle oinstall  2714624 Jan  5 18:47 o1_mf_1_303_c8qjmhpq_.arc
    -rw-r----- 1 oracle oinstall   526336 Jan  5 18:49 o1_mf_1_304_c8qjp7sb_.arc
    -rw-r----- 1 oracle oinstall    23552 Jan  5 18:49 o1_mf_1_305_c8qjpsmh_.arc
    -rw-r----- 1 oracle oinstall    53760 Jan  5 18:50 o1_mf_1_306_c8qjsfqo_.arc
    -rw-r----- 1 oracle oinstall    14336 Jan  5 18:51 o1_mf_1_307_c8qjt9rh_.arc
    -rw-r----- 1 oracle oinstall     1024 Jan  5 18:53 o1_mf_1_309_c8qjxt4z_.arc
    -rw-r----- 1 oracle oinstall   110592 Jan  5 18:53 o1_mf_1_308_c8qjxt34_.arc

    [oracle@ora12c-1 2016_01_05] $

    Current main MYTS data files:

    SYS@oraprim> select file_name from dba_data_files where tablespace_name = 'MYTS';

    FILE_NAME

    -------------------------------------------------------

    /U01/app/Oracle/oradata/oraprim/myts01.dbf

    Current data of MYTS standby files:

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /U01/app/Oracle/oradata/orastb/myts01.dbf

    /U01/app/Oracle/oradata/orastb/myts02.dbf

    A gap is created:

    SYS@orastb> select process, status, sequence# from v$managed_standby;

    PROCESS   STATUS       SEQUENCE#
    --------- ------------ ----------
    ARCH      CLOSING             319
    ARCH      CLOSING             311
    ARCH      CONNECTED             0
    ARCH      CLOSING             310
    MRP0      WAIT_FOR_GAP        312
    RFS       IDLE                  0
    RFS       IDLE                  0
    RFS       IDLE                  0
    RFS       IDLE                320

    9 rows selected.

    An SCN-based RMAN incremental backup is taken on the primary.

    RMAN> backup incremental from scn 2686263 database format '/u02/bkp/%d_inc_%U.bak';

    Starting backup at 05-JAN-16

    using target database control file instead of recovery catalog

    configuration for DISK channel 2 is ignored

    configuration for DISK channel 3 is ignored

    configuration for DISK channel 4 is ignored

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=41 device type=DISK

    channel ORA_DISK_1: starting full datafile backup set

    channel ORA_DISK_1: specifying datafile(s) in backup set

    input datafile file number=00001 name=/u01/app/oracle/oradata/oraprim/system01.dbf

    input datafile file number=00003 name=/u01/app/oracle/oradata/oraprim/sysaux01.dbf

    input datafile file number=00004 name=/u01/app/oracle/oradata/oraprim/undotbs01.dbf

    input datafile file number=00006 name=/u01/app/oracle/oradata/oraprim/users01.dbf

    input datafile file number=00057 name=/u01/app/oracle/oradata/oraprim/myts01.dbf

    channel ORA_DISK_1: starting piece 1 at 05-JAN-16

    channel ORA_DISK_1: finished piece 1 at 05-JAN-16

    piece handle=/u02/bkp/ORAPRIM_inc_42qqkmaq_1_1.bak tag=TAG20160105T190016 comment=NONE

    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

    Finished backup at 05-JAN-16

    Backed up the controlfile for standby on the primary:

    RMAN> backup current controlfile for standby format '/u02/bkp/ctl.ctl';

    Cancel recovery on the standby:

    SYS@orastb> alter database recover managed standby database cancel;

    Database altered.

    Recover the standby using the above backup pieces:

    RMAN> recover database noredo;

    Starting recover at 05-JAN-16

    configuration for DISK channel 2 is ignored

    configuration for DISK channel 3 is ignored

    configuration for DISK channel 4 is ignored

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=26 device type=DISK

    channel ORA_DISK_1: starting incremental datafile backup set restore

    channel ORA_DISK_1: specifying datafile(s) to restore from backup set

    destination for restore of datafile 00001: /u01/app/oracle/oradata/orastb/system01.dbf

    destination for restore of datafile 00003: /u01/app/oracle/oradata/orastb/sysaux01.dbf

    destination for restore of datafile 00004: /u01/app/oracle/oradata/orastb/undotbs01.dbf

    destination for restore of datafile 00006: /u01/app/oracle/oradata/orastb/users01.dbf

    destination for restore of datafile 00057: /u01/app/oracle/oradata/orastb/myts01.dbf

    channel ORA_DISK_1: reading from backup piece /u02/bkp/ORAPRIM_inc_3uqqkma0_1_1.bak

    channel ORA_DISK_1: piece handle=/u02/bkp/ORAPRIM_inc_3uqqkma0_1_1.bak tag=TAG20160105T190016

    channel ORA_DISK_1: restored backup piece 1

    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

    Finished recover at 05-JAN-16

    Restored the controlfile and mounted the standby:

    RMAN> shutdown immediate

    database dismounted

    Oracle instance shut down

    RMAN> startup nomount

    connected to target database (not started)

    Oracle instance started

    Total System Global Area     939495424 bytes

    Fixed Size                     2295080 bytes

    Variable Size                348130008 bytes

    Database Buffers             583008256 bytes

    Redo Buffers                   6062080 bytes

    RMAN> restore standby controlfile from '/u02/ctl.ctl';

    Starting restore at 05-JAN-16

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=20 device type=DISK

    channel ORA_DISK_1: restoring control file

    channel ORA_DISK_1: restore complete, duration: 00:00:01

    output file name=/u01/app/oracle/oradata/orastb/control01.ctl

    output file name=/u01/app/oracle/fast_recovery_area/orastb/control02.ctl

    Finished restore at 05-JAN-16

    RMAN> alter database mount;

    Statement processed

    released channel: ORA_DISK_1

    Now the data file does not exist on the standby:

    SYS @ orastb > alter database recover managed standby database disconnect;

    Database altered.

    SYS@orastb> select process, status, sequence# from v$managed_standby;

    PROCESS   STATUS       SEQUENCE#
    --------- ------------ ----------
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CLOSING             329
    RFS       IDLE                  0
    RFS       IDLE                330
    RFS       IDLE                  0
    MRP0      APPLYING_LOG        330

    8 rows selected.

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /U01/app/Oracle/oradata/orastb/myts01.dbf

    Hope that gives you a clear picture. You can use this approach to roll forward the standby using the SCN: Roll forward a physical standby database using an RMAN incremental backup | Shivananda Rao

    -Jonathan Rolland
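    Condensing the demo above into the bare sequence (paths and the SCN are taken from the demo and would differ in your environment):

    ```sql
    -- On the primary (RMAN): SCN-based incremental backup plus a standby controlfile
    backup incremental from scn 2686263 database format '/u02/bkp/%d_inc_%U.bak';
    backup current controlfile for standby format '/u02/bkp/ctl.ctl';

    -- On the standby: stop apply, then roll forward with the incrementals (no redo)
    alter database recover managed standby database cancel;   -- SQL*Plus
    recover database noredo;                                  -- RMAN, after cataloging the pieces

    -- Swap in the fresh standby controlfile and restart apply
    shutdown immediate;
    startup nomount;
    restore standby controlfile from '/u02/bkp/ctl.ctl';      -- RMAN
    alter database mount;
    alter database recover managed standby database disconnect;
    ```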
