Flashback data archive

Hello,
I heard about Flashback Data Archives.
Can I get some short examples to understand them in a practical way?

Thank you

Please post your comments. I got the answer from your post.

I'm confused about what you want to get from us when you already have the answer, but... :)

I think the doc link above will answer your current and future questions about FDA (Flashback Data Archive).

Regards,
Girish Sharma
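For the record, a minimal practical FDA example looks roughly like this (the archive, tablespace and table names are illustrative, and FDA requires the appropriate edition/licensing):

```sql
-- 1. Create an archive with a retention period
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts QUOTA 1G RETENTION 1 YEAR;

-- 2. Track a table in it
ALTER TABLE employees FLASHBACK ARCHIVE fda1;

-- 3. Later, query the table as of a past time, beyond undo retention
SELECT * FROM employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '90' DAY);
```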

Tags: Database

Similar Questions

  • Flashback Data Archive issue

    Hi all

    There is this question on the web:

    Q: Identify the statement that is true about Flashback Data Archive:

    A. You can use multiple tablespaces for an archive, and each archive can have its own maintenance time.

    B. You can have an archive, and for each tablespace which is part of the archive, you can specify a different retention period.

    C. You can use multiple tablespaces for an archive, and you can have more than one default archive per retention period.

    D. If you specify a default archive, it must exist in one tablespace only.

    Everyone says the correct answer is B. Isn't it supposed to be A?

    The retention period is specified at the flashback archive level, not per tablespace, so B is incorrect.
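    This is visible in the syntax itself: retention is an attribute of the archive, while tablespaces are just storage added to it. A minimal sketch (archive and tablespace names are illustrative):

    ```sql
    -- One retention period for the whole archive...
    CREATE FLASHBACK ARCHIVE fla_demo TABLESPACE ts1 RETENTION 5 YEAR;
    -- ...even when the archive spans multiple tablespaces
    ALTER FLASHBACK ARCHIVE fla_demo ADD TABLESPACE ts2;
    ```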

  • Flashback data archive commit-time performance - a bug or a problem?

    Hi all

    I use Oracle 11g R2 on a 64-bit Windows environment. I just want to run some tests with flashback data archive. I created one and added a table to it. About 1.8 million records exist in this table. The table is built from one of the Oracle sample schemas, SH.SALES. I created the table from SH.SALES and then inserted the same data a second time:
    -- not a SH session
    
    Create Table Sales as select * from sh.sales;
    
    insert into sales select * from sh.sales;
    Commit;
    The insert operation takes a few seconds. Sometimes the commit in this code takes more than *20 minutes*, sometimes 0 seconds. If the commit after the insert is brief, I can update the table and then commit again:
    update sales set prod_id = prod_id; -- update with same data
    commit;
    The update takes a few seconds more. If the first commit (after the insert) took very little time, the second commit, after the update, takes more than 20 minutes. While that commit is running, my CPU becomes overloaded at 100%.

    The system Oracle runs on is good enough for test use: an i7 with 4 real cores, 8 GB RAM, an SSD disk, etc.

    When I looked at Enterprise Manager performance monitoring, I saw this SQL in my SQL list:
    insert /*+ append */ into SYS_MFBA_NHIST_74847  select /*+ leading(r) 
             use_nl(v)  PARALLEL(r,DEFAULT) PARALLEL(v,DEFAULT)  */ v.ROWID "RID", 
             v.VERSIONS_STARTSCN "STARTSCN",  v.VERSIONS_ENDSCN "ENDSCN", 
             v.VERSIONS_XID "XID" ,v.VERSIONS_OPERATION "OPERATION",  v.PROD_ID 
             "PROD_ID",  v.CUST_ID "CUST_ID",  v.TIME_ID "TIME_ID",  v.CHANNEL_ID 
             "CHANNEL_ID",  v.PROMO_ID "PROMO_ID",  v.QUANTITY_SOLD 
             "QUANTITY_SOLD",  v.AMOUNT_SOLD "AMOUNT_SOLD"  from SYS_MFBA_NROW r, 
             SYS.SALES versions between SCN :1 and MAXVALUE v where v.ROWID = 
             r.rid
    This consumes my resources for more than 20 minutes. All I do is update 1.8 million records (the update itself really takes little time) and commit (which kills my system).

    What is the reason for this?

    Info:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    I see that in Guy Harrison's example the SYS_MFBA_NROW table contains a very large number of rows, and the query is forced into a nested loop join on this table (as is your query). In your case the nested loop is into a view (and there is no sign of a pushed predicate).

    If you have a very large number of rows in that table, the resulting code is bound to be slow. To check, I suggest you run the test again (from scratch) with sql_trace enabled or statistics_level set to ALL, so that you can get the rowsource execution statistics for the query and see where the time is being spent and where the volume comes from. You may have to raise this with Oracle - if this observation is correct, then FBDA is suitable only for OLTP systems, not DSS or DW.

    Regards,
    Jonathan Lewis
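    The diagnostic approach suggested above can be sketched like this (a minimal outline; the table name is from the thread, and you would substitute the sql_id of the recursive SYS_MFBA insert found in V$SQL when pulling its plan):

    ```sql
    -- Collect rowsource execution statistics for the slow commit test
    ALTER SESSION SET statistics_level = ALL;

    UPDATE sales SET prod_id = prod_id;
    COMMIT;  -- the slow step; the FBDA recursive SQL runs around here

    -- Show the plan with actual row counts and timings; for the recursive
    -- insert, pass its sql_id (looked up in V$SQL) instead of NULL
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    ```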

  • Unlimited retention on the flashback data archive

    11.2.0.4

    An FDA is created with a retention time in days, months or years.

    https://docs.Oracle.com/CD/E11882_01/server.112/e41084/statements_5010.htm#BABIFIBJ

    Is there a retention setting that makes retention unlimited/forever? Or do you just put something like 100 years?

    Yes
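    To my knowledge there is no UNLIMITED keyword in the RETENTION clause, so a very long period is the usual workaround. A minimal sketch (archive and tablespace names are hypothetical):

    ```sql
    -- Effectively unlimited retention via a very long period
    CREATE FLASHBACK ARCHIVE fda_forever
      TABLESPACE fda_ts
      RETENTION 100 YEAR;
    ```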

  • Problem activating flashback data archive!

    Hello

    When I try to create a flashback archive by running this code:

    conn system/manager as sysdba

    CREATE FLASHBACK ARCHIVE test_archive
      TABLESPACE example
      QUOTA 1M
      RETENTION 1 DAY;

    I get:
    ERROR at line 1:
    ORA-00439: feature not enabled: Flashback Data Archive

    and when I try to turn it on for a table by running this code:

    ALTER TABLE table_name FLASHBACK ARCHIVE;

    I get:

    ERROR at line 1:
    ORA-00439: feature not enabled: Flashback Data Archive.

    M.L

    Check with this query:
    SQL> SELECT * FROM v$option WHERE parameter LIKE 'Flash%';

    If the value is FALSE, you don't have flashback. Probably, as he said, you must upgrade your Oracle Standard Edition license to Oracle Enterprise Edition.

  • Snapshot standby to physical - flashback data missing

    Environment: Oracle 11.2.0.3 EE on Solaris


    I think I know the answer to this already, but I wanted to have confirmation of experts.


    BTW, thanks Brian, Michael and CKPT for help with my previous question.


    My goal was to run a Data Pump export of the physical standby database to move production data into a test environment.


    After research, I decided to use the method of converting the physical standby to a snapshot standby, performing the Data Pump export, and then converting back.


    I did a small amount of preparation work to make the physical standby ready to become a snapshot standby, and that seemed to go smoothly. It involved renaming the UNDO tablespace, adding a temp file to the temporary tablespace, and setting DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE.


    The conversion to snapshot standby and the subsequent Data Pump export went well.


    When I went to convert back to the physical standby, I received several messages like the following:


    ORA-38753: Cannot flashback data file 22; no flashback log data.

    ORA-01110: data file 22: ' / u02/oradata/APSKTLP1/aps_datastore_dt01_13.dbf'

    It seems I missed the step in my process to enable flashback logging on the database.


    Is it possible to recover from this or do I have to recreate the physical Standby from scratch?


    Any help is GREATLY appreciated.


    -gary

    Gary;

    That is not good. The point of a snapshot standby is having the flashback data available to convert back to standby mode.

    I think you will have to re-create the standby. I don't know of another option.

    On a non-Data Guard system, you can sometimes take the data file offline and continue with the flashback.

    Best regards

    mseberg
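    For reference, the snapshot standby round trip depends on flashback logs being available; a rough sketch of the commands involved, run on the standby (paths and sizes are illustrative, and the database must be in the appropriate mount state for the conversions):

    ```sql
    -- Before converting: flashback logging must be possible
    ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
    ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';

    -- Convert the physical standby to a snapshot standby
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

    -- ... run the Data Pump export here ...

    -- Convert back; this is the step that fails with ORA-38753 when
    -- flashback log data is missing for any data file
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    ```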

  • Query Archive for Data Archive Manager


    Hello

    Recently I created a PeopleSoft query (Type: Archive, and Public) to use with Data Archive Manager. However, after I saved and closed it, when I searched for it again I couldn't find it to change it. I know the query was created, because I can see it when I set up the archive, and when I try to create the same query again the system asks me whether to overwrite it.

    Is that just how this type of query works? How can I change the SQL of the query I just created?

    Appreciate anyone's help.

    A simple search in Query Manager defaults to Query Type = User, unless you explicitly change the type to Archive. If you want to search by type and name, use the advanced search.

    Kind regards

    Bob

  • Indexing & Data Archive Manager

    Hello forums.

    I work with Data Archive Manager on PeopleTools 8.52, using the PSACCESSLOG table as indicated on the PeopleSoft wiki here:
    http://PeopleSoft.wikidot.com/data-archive-manager

    Basically, the problem I encounter is during the creation of a template: when I select the archive object and check it as a base object, I get an error warning me that a unique index does not exist on the record. Here's the message:
    http://d.PR/i/llxl

    However, the only way I know of that PeopleSoft manages indexes is by specifying key fields for the record. Here are the table definitions; both have the same key fields:
    http://d.PR/i/G0E6
    http://d.PR/i/h2Fz

    The indexes for the two records/tables were created using the build script generated by Application Designer:
    DROP INDEX PS_PSACCESSLOG_HST
    /
    CREATE INDEX PS_PSACCESSLOG_HST ON PS_PSACCESSLOG_HST (PSARCH_ID,
       PSARCH_BATCHNUM,
       OPRID,
       LOGIPADDRESS,
       LOGINDTTM) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
     MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL NOLOGGING
    /
    ALTER INDEX PS_PSACCESSLOG_HST NOPARALLEL LOGGING
    /
    Is there a specific way to create keys/indexes so that the record can be used as the base of an archive? I tried to create a unique index by editing the CREATE INDEX SQL manually and specifying "CREATE UNIQUE INDEX...".
    There must be something I am missing. As always, any input is much appreciated, thank you.

    Best regards.

    PeopleBooks
    Nonunique indexes
    The SQL generated by Data Archive Manager assumes that the index keys identify unique rows. Therefore, the base table of the base object must have a unique index.

    Check whether the table has a unique index as pictured below. You can change this by editing the index and marking it unique:
    http://docs.Oracle.com/CD/E28394_01/pt852pbh1/Eng/psbooks/tapd/IMG/sm_ChangeRecordIndexesDialog_tapd.PNG

    That said, while you can create a unique index, I advise against defining one with keys that closely match the PSACCESSLOG table.
    Current keys:
    -OPRID
    -LOGIPADDRESS
    -LOGINDTTM

    Meaning: a user logs in from a workstation at a certain point in time.

    If you place a unique index on this table, it means that theoretically you cannot log in to PeopleSoft at the same time from one workstation as the same user using, say, two different browsers such as IE and Firefox.
    That would cause unique constraint errors to be presented to the user at logon, which is why there is not a unique index on that table.

    Halin

  • Data archiving and reuse

    Hello

    I would like to see some examples of archiving data and then using it again. I've heard of partitioning, shrink and other methods, but I just want to know how to use them.
    An example is appreciated.


    Thank you
    Also, in addition to that, you can use partitioning: exchange the archived table with a partition of the work table, so that the archived rows are logically added to the rows of the work table.

    Can you please give a simple example of the above statement?

    Example for a single table. What you can use as a partition key may depend on your existing data structure and the table's rows. If the data allows, this can be optimized by exchanging an empty partition with an existing table (which is very fast) (not shown here).

    1. CREATE TABLESPACE ARCHIVE1900 DATAFILE '/replaceable_disk/data/archive1900.dbf' ...;

    2. CREATE TABLE WORKING_PARTITIONED (YEAR NUMBER /* this will be the partition key */, OTHER_COL NUMBER, YETANOTHER_COL VARCHAR2(...), ...)
       PARTITION BY RANGE (YEAR)
       (PARTITION ARCHIVE1900 VALUES LESS THAN (2000) TABLESPACE ARCHIVE1900,
        PARTITION DATA2000 VALUES LESS THAN (2010) TABLESPACE DATA2000,
        PARTITION DATA2010 VALUES LESS THAN (2020) TABLESPACE DATA2020);

    3. Move the data from WORK into WORKING_PARTITIONED. Drop WORK. Rename WORKING_PARTITIONED to WORK.

    4. Archiving:
    4.1 CREATE TABLE WORKING_ARCHIVE1900 TABLESPACE ARCHIVE1900 AS SELECT * FROM WORK WHERE 1 = 0;
    4.2 ALTER TABLE WORK EXCHANGE PARTITION ARCHIVE1900 WITH TABLE WORKING_ARCHIVE1900 UPDATE GLOBAL INDEXES;
        All data from partition ARCHIVE1900 of WORK is now in the WORKING_ARCHIVE1900 table.
        Partition ARCHIVE1900 of WORK is empty.
    4.3 ALTER TABLESPACE ARCHIVE1900 READ ONLY;
    4.4 ALTER TABLESPACE ARCHIVE1900 OFFLINE;
    4.5 At the OS level, unmount or eject /replaceable_disk.

    5. Restoring the archive:
    5.1 Mount /replaceable_disk so that '/replaceable_disk/data/archive1900.dbf' is available.
    5.2 ALTER TABLESPACE ARCHIVE1900 ONLINE;
    5.3 ALTER TABLE WORK EXCHANGE PARTITION ARCHIVE1900 WITH TABLE WORKING_ARCHIVE1900 UPDATE GLOBAL INDEXES;
        All data in WORKING_ARCHIVE1900 is now in partition ARCHIVE1900 of WORK. WORKING_ARCHIVE1900 is empty.

    Enjoy.

    The example is approximate; there may be syntax errors, so test the approach carefully before implementing it.

  • Flashback Data Archive RETENTION not applied?

    Hello

    We are on Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64-bit Production.

    For testing purposes, I created a Flashback Data Archive with RETENTION 1 DAY and added a table to this archive. Then I inserted/updated/deleted records in the table and verified that there is data in the history table (SYS_FBA_HIST_<object_id>) reflecting my DML.

    When revisiting the history table after a few days, the records are still there, even though the USER_FLASHBACK_ARCHIVE.LAST_PURGE_TIME column shows that the archive has been purged.

    I expected the records in the history table to have disappeared; can someone explain why they are still there?

    Hello

    You probably hit a bug. Try performing a manual purge; if the records are still there after that, then the FBDA process is not able to pick up the partition for the split and cannot truncate it.

    -Pavan Kumar N
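    A manual purge, as suggested, can be issued like this (the archive name is hypothetical):

    ```sql
    -- Purge history older than the retention that should have been applied
    ALTER FLASHBACK ARCHIVE fda_test
      PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
    ```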

  • flashback vs archive log mode

    I want to ask: is there a relationship between flashback and ARCHIVELOG mode?

    I mean, if I switch to NOARCHIVELOG mode, can I still use the flashback DATABASE feature?

    Thank you

    fadiwilliam wrote:
    I want to ask: is there a relationship between flashback and ARCHIVELOG mode?

    I mean, if I switch to NOARCHIVELOG mode, can I still use the flashback DATABASE feature?

    Thank you

    No. For flashback to be enabled on the database, you must have the database running in ARCHIVELOG mode.

    Refer to this http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm

    fadiwilliam

    Handle: fadiwilliam
    Status level: Beginner
    Join date: December 15, 2000
    Total messages: 37
    Total questions: 13 (10 open)
    Name: Fadi William

    Please consider closing your questions by providing appropriate points and marking them as answered. Please clean the forum!
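    The flashback and log-mode status of the database can be checked directly:

    ```sql
    -- FLASHBACK_ON must be YES and LOG_MODE must be ARCHIVELOG
    SELECT flashback_on, log_mode FROM v$database;
    ```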

  • Data Guard and Flashback Data Archive

    Hello

    I have an 11.2.0.1 DB with a standby database.
    If I put a few tables into the flashback data archive, will the FDA supporting tables be copied to the standby db as well?

    concerning

    Hello;

    I hesitate to answer because you don't seem to close your old questions or give points to those who help. But I also believe in giving the benefit of the doubt.

    If the FRA is configured on your standby, then the answer is yes.

    While it is not a requirement, I would always use the FRA with Data Guard.

    db_recovery_file_dest = '/u01/app/oracle/flash_recovery_area'

    The following are some sample parameter values that might be used to configure a physical standby database to archive its standby redo log to the fast recovery area:
    
    LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)' LOG_ARCHIVE_DEST_STATE_2=ENABLE
    

    Source

    Data Guard Concepts and Administration 11g Release 2 (11.2) E10700-02

    Using RMAN Effectively in a Data Guard Environment [ID 848716.1]

    Best regards

    mseberg

    Published by: mseberg on December 1, 2012 13:53

  • What is the best way to check the data

    What is the best way to audit the actual changes to the data, i.e., to be able to see each insert, update and delete on a given row, when it happened, who did it, and what the row looked like before and after the change?

    Currently, we have implemented our own audit infrastructure, where we generate standard triggers and an audit table to store the OLD (the row's values before the change) and NEW (the row's values after the change) values for each change.

    I question this strategy because of the performance impact (significant, to say the least) and because it's something that a developer (confession: I'm the developer) came up with, rather than something a database administrator came up with. I looked at Oracle auditing, but it doesn't seem like we would be able to go back and see what a row looked like at some point in time. I also looked at flashback, but it seems like it would take a monumental amount of storage just to be able to go back a week, much less the years we currently keep this data.

    Thank you
    Matt Knowles

    Published by: mattknowles on January 10, 2011 08:40

    mattknowles wrote:
    What is the best way to audit the actual changes to the data, i.e., to be able to see each insert, update and delete on a given row, when it happened, who did it, and what the row looked like before and after the change?

    Currently, we have implemented our own audit infrastructure, where we generate standard triggers and an audit table to store the OLD (the row's values before the change) and NEW (the row's values after the change) values for each change.

    You can either:
    1. Roll your own custom auditing (as you do now)
    2. Use Flashback Data Archive (11g). Requires licensing.
    3. Version-enable your tables with Workspace Manager.

    >

    I question this strategy because of the performance impact (significant, to say the least) and because it's something that a developer (confession: I'm the developer) came up with, rather than something a database administrator came up with. I looked at Oracle auditing, but it doesn't seem like we would be able to go back and see what a row looked like at some point in time. I also looked at flashback, but it seems like it would take a monumental amount of storage just to be able to go back a week, much less the years we currently keep this data.

    Unfortunately, auditing data always takes a lot of space. You should also consider performance, as custom triggers and Workspace Manager will perform much more slowly than FDA if there is heavy DML on the table.
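    With FDA enabled on a table, the row history described above can be read back with a flashback versions query; a minimal sketch (the table and column names are illustrative):

    ```sql
    -- Who changed each row, when, and how, over the last day
    SELECT versions_starttime,
           versions_operation,  -- I, U or D
           versions_xid,
           emp_id, salary
    FROM   employees
           VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY)
                        AND SYSTIMESTAMP
    ORDER  BY versions_starttime;
    ```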

  • DataGuard and Flashback database

    Does it make sense to enable flashback database when already using Data Guard?
    We are not talking about uptime here.

    When would you use which feature? (to back out upgrades? as protection against human errors?)
    1. flashback database
    2. flashback data archive

    Also, should we run Data Guard in a physical standby or a logical standby configuration?
    1. physical standby
    2. and when would we use a logical standby?

    Kind regards
    Taj

  • I want to know what happens when we issue a TRUNCATE TABLE statement in Oracle.

    When we issue a TRUNCATE TABLE statement in Oracle, no log of the rows is written to the redo log, yet we can recover the data using flashback or FDA. I want to know where the data for a truncated table is actually stored in the Oracle database. Please explain it to me in detail, step by step.

    >
    I understand the FDA part. But I want to know where the truncated data is logged; in the redo log there is no entry for the truncate, so I want to know where the data is brought back from...
    >
    If you are still wondering after getting the answer, then you don't "understand".

    You have received the reply above: the flashback archive stores the data.

    See the link provided above to CREATE FLASHBACK ARCHIVE in the SQL language doc:
    http://docs.Oracle.com/CD/E11882_01/server.112/e26088/statements_5010.htm
    >
    TABLESPACE clause

    Specify the tablespace in which the data archived for this flashback data archive is to be stored. You can specify only one tablespace with this clause. However, you can later add tablespaces to the flashback archive with an ALTER FLASHBACK ARCHIVE statement.
    >
    The data is moved to the archive storage and read back from there when you run a flashback query against the archive.
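    The ALTER FLASHBACK ARCHIVE route mentioned in the quoted doc looks like this (the archive and tablespace names are illustrative):

    ```sql
    -- Start with one tablespace, then grow the archive later
    CREATE FLASHBACK ARCHIVE fla1 TABLESPACE tbs1 QUOTA 10G RETENTION 1 YEAR;
    ALTER  FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs2 QUOTA 10G;
    ```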
