Flashback Data Archive issue

Hi all

There is this question on the web:

Q: Identify the statement that is true about Flashback Data Archive:

A. You can use multiple tablespaces for an archive, and each archive can have its own retention period.

B. You can have an archive, and for each tablespace that is part of the archive, you can specify a different retention period.

C. You can use multiple tablespaces for an archive, and you can have more than one default archive, each with its own retention period.

D. If you specify a default archive, it must exist in one tablespace only.

Everyone says that the correct answer is B. Isn't it supposed to be A?

The retention period is specified at the flashback archive level, not the tablespace level, so B is incorrect.
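
For reference, a minimal sketch (hypothetical archive and tablespace names) showing that retention is an attribute of the flashback archive itself, while several tablespaces can back the same archive:

CREATE FLASHBACK ARCHIVE fda_hr
  TABLESPACE fda_ts1 QUOTA 10G   -- first tablespace backing the archive
  RETENTION 1 YEAR;              -- retention is declared once, for the whole archive

-- More tablespaces can be added to the same archive later,
-- but they all share that single retention period:
ALTER FLASHBACK ARCHIVE fda_hr ADD TABLESPACE fda_ts2 QUOTA 10G;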

Tags: Database

Similar Questions

  • Flashback data archive

    Hello
    I heard about Flashback Data Archives.
    Can I get some short examples to understand it in a practical way?

    Thank you

    Please put your comments. I got the answer from your post.

    I'm confused; what do you want to get from us when you already have the answer... :)

    I think the doc link above will answer your remaining/future questions about FDA (Flashback Data Archive).

    Regards
    Girish Sharma
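
    In the spirit of the short practical example requested above, a minimal sketch (hypothetical table and archive names; assumes the necessary flashback archive privileges and an existing tablespace):

    CREATE FLASHBACK ARCHIVE fda_demo
      TABLESPACE users QUOTA 1G
      RETENTION 30 DAY;

    CREATE TABLE emp_demo (empno NUMBER PRIMARY KEY, sal NUMBER);
    ALTER TABLE emp_demo FLASHBACK ARCHIVE fda_demo;   -- start tracking history

    INSERT INTO emp_demo VALUES (1, 1000);
    COMMIT;
    UPDATE emp_demo SET sal = 2000 WHERE empno = 1;
    COMMIT;

    -- See the row as it looked before the update:
    SELECT * FROM emp_demo AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);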

  • Flashback data archive commit time performance - a bug problem?

    Hi all

    I use Oracle 11g R2 in a 64-bit Windows environment. I just want to do some tests with flashback data archive. I created one and added it to a table. About 1.8 million records exist in this table. Furthermore, this table is based on one of the Oracle sample tables, SH.SALES. I created another table from it and inserted the same data twice:
    -- in a session other than SH
    
    Create Table Sales as select * from sh.sales;
    
    insert into sales select * from sh.sales;
    Commit;
    The insert operation takes a few seconds. Sometimes, in this code, the commit takes more than *20 minutes*, sometimes 0 seconds. If the commit time is brief after the insert, I then update the table and commit again:
    update sales set prod_id = prod_id; -- update with same data
    commit;
    The update takes a few seconds longer. If the first commit (after the insert) took very little time, the second commit, after the update, takes more than 20 minutes. During this time, while that commit is working, my CPU becomes overloaded, at 100% load.

    The system that Oracle runs on is good enough for test purposes: an i7 with 4 real CPU cores, 8 GB RAM, an SSD disk, etc.

    When I looked at the Enterprise Manager performance monitoring pages, I saw this SQL in my top SQL list:
    insert /*+ append */ into SYS_MFBA_NHIST_74847  select /*+ leading(r) 
             use_nl(v)  PARALLEL(r,DEFAULT) PARALLEL(v,DEFAULT)  */ v.ROWID "RID", 
             v.VERSIONS_STARTSCN "STARTSCN",  v.VERSIONS_ENDSCN "ENDSCN", 
             v.VERSIONS_XID "XID" ,v.VERSIONS_OPERATION "OPERATION",  v.PROD_ID 
             "PROD_ID",  v.CUST_ID "CUST_ID",  v.TIME_ID "TIME_ID",  v.CHANNEL_ID 
             "CHANNEL_ID",  v.PROMO_ID "PROMO_ID",  v.QUANTITY_SOLD 
             "QUANTITY_SOLD",  v.AMOUNT_SOLD "AMOUNT_SOLD"  from SYS_MFBA_NROW r, 
             SYS.SALES versions between SCN :1 and MAXVALUE v where v.ROWID = 
             r.rid
    This consumes my resources for more than 20 minutes. All I do is update 1.8 million records (the update itself really takes little time) and commit (which kills my system).

    What is the reason for this?

    Info:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    I see that in Guy Harrison's example the SYS_MFBA_NROW table contains a very large number of rows - and the query is forced into a nested loop join against this table (as is your query). In your case the nested loop is into a view (and there is no sign of the predicate being "pushed").

    If you have a very large number of rows in that table, the resulting code is bound to be slow. To check, I suggest you run the test again (from scratch) with sql_trace enabled, or with statistics_level set to ALL, so that you can get the rowsource execution statistics for the query and check where the time is being spent and where the volume appears (a tracing sketch follows at the end of this thread). You will have to raise this one with Oracle - if the observation is correct, then FBDA is only suitable for OLTP systems, not DSS or DW.

    Regards
    Jonathan Lewis
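
    A minimal sketch of the tracing step suggested above (session-level settings; the trace file location depends on your diagnostic_dest):

    -- In the test session, before re-running the insert/update/commit test:
    ALTER SESSION SET statistics_level = ALL;
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the INSERT / UPDATE / COMMIT steps here ...
    ALTER SESSION SET events '10046 trace name context off';
    -- Then process the trace file with tkprof, or inspect plan statistics with:
    -- SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));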

  • Unlimited retention on the flashback data archive

    11.2.0.4

    The FDA is created with a retention time, in days, months or years.

    https://docs.Oracle.com/CD/E11882_01/server.112/e41084/statements_5010.htm#BABIFIBJ

    Is there a retention setting that makes the retention unlimited/forever?  Or do you just put in something like 100 years?

    Yes
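
    If a long fixed retention is acceptable, a minimal sketch along the lines of the "100 years" idea above (hypothetical archive and tablespace names):

    CREATE FLASHBACK ARCHIVE fda_longterm
      TABLESPACE fda_ts QUOTA 50G
      RETENTION 100 YEAR;

    -- An existing archive's retention can also be raised later:
    ALTER FLASHBACK ARCHIVE fda_longterm MODIFY RETENTION 200 YEAR;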

  • Problem activating flashback data archive!

    Hello

    When I try to create a flashback archive by running this code:

    CONNECT system/manager AS SYSDBA

    CREATE FLASHBACK ARCHIVE test_archive
      TABLESPACE example
      QUOTA 1M
      RETENTION 1 DAY;

    I get:
    ERROR at line 1:
    ORA-00439: feature not enabled: Flashback Data Archive

    and when I try to turn it on for a table by running this code:

    ALTER TABLE table_name FLASHBACK ARCHIVE;

    I get:

    ERROR at line 1:
    ORA-00439: feature not enabled: Flashback Data Archive.

    M.L

    Check this query:
    SQL> SELECT * FROM V$OPTION WHERE parameter LIKE 'Flash%';

    If it shows FALSE, you don't have Flashback. Probably, as was said, you must change from an Oracle Standard Edition license to Oracle Enterprise Edition.
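
    For completeness, a minimal verification sketch in the same vein (standard dictionary views; the last query needs DBA privileges):

    -- Confirm the edition in use:
    SELECT banner FROM v$version;

    -- Confirm the relevant option flags:
    SELECT parameter, value FROM v$option WHERE parameter LIKE 'Flash%';

    -- Once the feature is available, list any existing flashback archives:
    SELECT flashback_archive_name, retention_in_days, status
    FROM   dba_flashback_archive;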

  • Snapshot standby to physical standby - flashback data missing

    Environment: Oracle 11.2.0.3 EE on Solaris


    I think I know the answer to this already, but I wanted confirmation from the experts.


    BTW, thanks Brian, Michael and CKPT for my previous question.


    My goal was to make a Data Pump export of the physical standby database to move production data into a test environment.


    After research, I decided to use the method of converting the physical standby to a snapshot standby, performing the Data Pump export, and then converting back (a sketch of the commands appears at the end of this thread).


    I had a small amount of implementation work to do to make the physical standby ready to become a snapshot standby, but that seemed to go smoothly. It involved renaming the UNDO tablespace, adding a temp file to the temporary tablespace, and setting the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters.


    The conversion to snapshot standby and the subsequent Data Pump export went well.


    When I went to convert back to the physical standby, I received several messages like the following:


    ORA-38753: Cannot flashback data file 22; no flashback log data.

    ORA-01110: data file 22: '/u02/oradata/APSKTLP1/aps_datastore_dt01_13.dbf'

    It seems I missed the step in my process to enable flashback on the database.


    Is it possible to recover from this, or do I have to recreate the physical standby from scratch?


    Any help is GREATLY appreciated.


    -gary

    Gary;

    It is not good. The whole point of a snapshot standby is having flashback available so it can return to standby mode.

    I think you must re-create the standby. I don't know of another option.

    On a non-Data Guard system, you can sometimes take the data file offline and continue with the flashback.

    Best regards

    mseberg
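
    A minimal sketch of the conversion cycle described above (run with SQL*Plus on the standby; assumes a fast recovery area is configured and managed recovery is stopped first):

    -- On the physical standby, before converting:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    ALTER DATABASE OPEN;      -- run the Data Pump export from here

    -- To go back (flashback logs are used to discard the changes):
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    -- then restart managed recovery on the physical standby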

  • Query Archive for Data Archive Manager


    Hello

    Recently I created a PeopleSoft query (Type value Archive, and Public) to use with Data Archive Manager. However, after I saved and closed it, when I try to search for it again I cannot find it in order to change it. I know that the query was created, because I can see it when I set up the archive, and when I try to create the same query again, the system asks me to overwrite it.

    Is this just how this type of query works? How can I change the SQL of the query I just created?

    Appreciate anyone's help.

    The basic search in Query Manager defaults to Query Type = User, unless you explicitly change the type to Archive in the search. If you want to search by type and name, use the advanced search.

    Kind regards

    Bob

  • Indexing & Data Archive Manager

    Hello forums.

    I am working with Data Archive Manager on PeopleTools 8.52, using the PSACCESSLOG table as described on the PeopleSoft Wiki here:
    http://PeopleSoft.wikidot.com/data-archive-manager

    Basically, the problem I encounter is during the creation of a template: when I select the archive object and check it as a base object, I get an error warning me that a unique index does not exist on the record. Here's the message:
    http://d.PR/i/llxl

    However, the only way I know of that PeopleSoft manages indexes is by specifying the key fields of a record. Here are the table definitions; both have the same key fields:
    http://d.PR/i/G0E6
    http://d.PR/i/h2Fz

    Indexes for the two records/tables were created from Application Designer; here is the generated build script:
    DROP INDEX PS_PSACCESSLOG_HST
    /
    CREATE INDEX PS_PSACCESSLOG_HST ON PS_PSACCESSLOG_HST (PSARCH_ID,
       PSARCH_BATCHNUM,
       OPRID,
       LOGIPADDRESS,
       LOGINDTTM) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
     MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL NOLOGGING
    /
    ALTER INDEX PS_PSACCESSLOG_HST NOPARALLEL LOGGING
    /
    Is there a specific way to create keys/indexes so that the record can be used as the base object of an archive? I tried to create it as a unique index by editing the CREATE INDEX SQL manually, specifying "CREATE UNIQUE INDEX...".
    There must be something that I am missing. As always, any input is much appreciated, thank you.

    Best regards.

    From PeopleBooks:
    Nonunique indexes
    The SQL generated by Data Archive Manager assumes that the index keys identify unique rows. Therefore, the base table of the base object must have a unique index.

    Check whether the table has a unique index as pictured below. You can change this by editing the index and ticking the Unique Index option:
    http://docs.Oracle.com/CD/E28394_01/pt852pbh1/Eng/psbooks/tapd/IMG/sm_ChangeRecordIndexesDialog_tapd.PNG

    Having said what you can do to create a unique index, I advise against creating a unique index with keys that exactly match the PSACCESSLOG table.
    Current keys:
    -OPRID
    -LOGIPADDRESS
    -LOGINDTTM

    Meaning:
    a person
    logs in from a workstation
    at a certain point in time.

    When you place a unique index on this table, it means that theoretically you cannot log in to PeopleSoft at the same time from the same workstation with the same user, say with different browsers such as IE and Firefox.
    This would cause unique constraint errors to be presented to the user at logon, which is the reason there is not a unique index on that table.

    Halin

  • Data archival and reuse

    Hello

    I would like to know some examples of archiving data and then using it again later. I've heard of partitioning, shrink and other methods, but I just wanted to know how to use them.
    An example is appreciated.


    Thank you
    Also, in addition to that, you can use partitions: by exchanging the archived table with a partition of the work table, the archived rows are logically added to the rows of the work table.

    Can you please give a simple example of the above statement?

    Here is an example for a single table. What you can use as a partition key may depend on your existing data structure and the layout of the table. If the data allows, step 3 can be optimized by exchanging the empty partition with the existing table (which is very fast) (not shown here).

    1. create tablespace ARCHIVE1900 datafile '/replaceable_disk/data/archive1900.dbf'

    2. create table WORKING_PARTITIONED (YEAR number /* this will be the partition key */, OTHER_COL number, YETANOTHER_COL varchar...)
       partition by range (YEAR)
       (partition ARCHIVE1900 values less than (2000) tablespace ARCHIVE1900,
        partition DATA2000 values less than (2010) tablespace DATA2000,
        partition DATA2010 values less than (2020) tablespace DATA2020);

    3. Move the data from WORK into WORKING_PARTITIONED, drop WORK, then rename WORKING_PARTITIONED to WORK (see the sketch after this post).

    4. Archive
    4.1 create table WORKING_ARCHIVE1900 tablespace ARCHIVE1900 as select * from WORK where 1 = 0;
    4.2 alter table WORK exchange partition ARCHIVE1900 with table WORKING_ARCHIVE1900 update global indexes;
    All data from the ARCHIVE1900 partition of WORK is now in the WORKING_ARCHIVE1900 table.
    The ARCHIVE1900 partition of WORK is empty.
    4.3 alter tablespace ARCHIVE1900 read only;
    4.4 alter tablespace ARCHIVE1900 offline;
    4.5 At the OS level, unmount or eject /replaceable_disk.

    5. Restore the archive
    5.1 Mount /replaceable_disk so that '/replaceable_disk/data/archive1900.dbf' is available.
    5.2 alter tablespace ARCHIVE1900 online;
    5.3 alter table WORK exchange partition ARCHIVE1900 with table WORKING_ARCHIVE1900 update global indexes;
    All the data from the WORKING_ARCHIVE1900 table is now in the ARCHIVE1900 partition of WORK. The WORKING_ARCHIVE1900 table is empty.

    Enjoy.

    The example is approximate and may contain syntax errors, so test the approach carefully before implementing it.
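
    A minimal sketch of step 3 above (table names taken from the example; assumes WORK and WORKING_PARTITIONED have matching columns):

    -- Copy the existing rows into the partitioned table, then swap the names.
    INSERT /*+ APPEND */ INTO working_partitioned SELECT * FROM work;
    COMMIT;
    DROP TABLE work;
    RENAME working_partitioned TO work;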

  • Data store related issues

    Hello

    I am new to ODI. I have a couple of data store related issues. In fact, I am facing some problems, so I would like some clarification.

    1. I imported the definition of a database table into a model. Later, I added a few columns to the database table. However, after reverse-engineering the model again, the new columns did not appear as columns in the data store. When I click View Data, I can see the new columns. I expected the new columns to be displayed in the model tree.

    2. The foreign key constraints were defined in the database schema before reverse-engineering the model. However, these do not appear in the model, whereas the primary key constraints are listed.
    2A. I manually created the FK constraints in the model. On the Definition tab, I gave the same name as appears in the database. Also, I chose the constraint type Database Reference, and I checked the Active on Database check box. Are these three steps correct? The Columns tab was filled in as one might expect.

    Are these (1 and 2) really issues, or am I missing a step? If I do have to create the FK constraints manually as in point 2, are my steps in 2A correct?

    I use ODI 10.1.3.5 and Oracle 11g Release 2.

    Thanks in advance,

    Maury.

    Yes. You can install ODI 11g at the same time as ODI 10g.

  • Flashback Data Archive RETENTION not applied?

    Hello

    We are on Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production.

    For testing purposes, I created a Flashback Data Archive with RETENTION 1 DAY and
    added a table to this archive. Then I inserted/updated/deleted records in the table
    and verified that there is data in the history table (SYS_FBA_HIST_<ObjectID>) that
    reflects my DML.

    When revisiting the history table after a few days, the records are still there, even
    though the USER_FLASHBACK_ARCHIVE.LAST_PURGE_TIME column shows that the
    archive has been purged.

    I expected the records in the history table to have disappeared; can someone explain why they are still there?

    Hello

    Probably you hit a bug. Try performing a manual purge (see the sketch below); if you still see the problem after that, then the FBDA process is not able to pick up the partition for the split and couldn't truncate it.

    -Pavan Kumar N
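
    A minimal sketch of a manual purge (hypothetical archive name; standard ALTER FLASHBACK ARCHIVE syntax):

    -- Purge history older than a point in time:
    ALTER FLASHBACK ARCHIVE fda_test
      PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);

    -- Or purge all history data kept in the archive:
    ALTER FLASHBACK ARCHIVE fda_test PURGE ALL;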

  • Oracle EPM Planning 11.1.2.3.500 data loading issue

    We do a nightly load of our plan and dimensions from a CSV file. We use a Unix shell script that runs the build command to start this process. First, we do a dry run using the /N switch; if this is successful, we then update and save our outline using the /C switch. The problem is that the dry run only checks the file to make sure it is valid; it does not test the file against the outline. I was wondering if anyone knows of a switch I can use to test the CSV file against the outline WITHOUT saving it. That way, if there is an error in the file, it will not update our plan with incorrect data.

    Thank you.

    I agree with Sunil.

    You will need to run the file through a few scripts to check some things that you are looking for.

    I had a similar problem at another client; they also checked that there weren't any issues (for example with aliases) in the dimension build when running it day to day.

    The solution we came up with (I don't know if it would work in your scenario) was to process our dimension file generation towards the end of the business day.

    So we had a run the evening before, and if there were problems they were sorted out before the nightly run. Minor problems could be corrected there and then, whereas if there were too many issues from a change in the hierarchy, we would not process it that day and would get the business to fix their hierarchies before we rebuilt again.

    Regards

    Jimmy

  • Date performance issue

    Hi gurus,

    I use OBIEE 11.1.6.8. One of my reports is having a performance issue. When I dug into it, I found that the date filter is not applied in the SQL generated and sent to the DB, due to which there is a table scan; but the strange thing is that the report still displays data based on the date range filter. It only occurs with the date dimension; all other dimensions are working properly. I'm not sure what is missing.

    Thanks in advance.

    Regards,

    Mohammed.

    I found the problem; it was in the DB features. I clicked Query DB to get the DB feature settings, and it works now.

  • Creation date '1969' issue.

    Hello-

    I just posted a question in another thread, so I apologize for flooding the forums.

    I just came across a really strange problem in my archives. It seems that almost all of my old film scans that live within Lightroom have had their 'creation date' year changed to 1969.

    Just to give you some information on my machine:

    I run a 2008 MacBook Pro 5,1 with 8 GB of RAM, a 128 GB SSD as my boot drive, and a 256 GB drive in my old optical bay that I use as a storage drive. I have Lightroom and all my apps on this 'storage' disk, which I use for storing large files.


    I did simple copy-and-paste backups for a while and then started to use ChronoSync two years ago. I know that this problem occurred within the last year, and I am sure it has something to do with Lightroom because of the information I could bring up in the Metadata panel.


    I have attached a bunch of screenshots below. I'd appreciate any help I can get with this... it is a really strange issue. I'm fine with it for now, because all the files that this problem has affected still have the correct creation date in their modification date. The files were also renamed by their creation date some time ago. It's only going to cause a problem when I try to sort files by year in Bridge and they don't sort correctly.


    All of the problem files (which is actually my entire Lightroom film catalog) are waiting for a big keywording session and DNG conversion. Then I was planning on moving them to an archive drive where they will live. I'm not too comfortable with letting files go into an archive with incorrect date information... even if the file names reflect the exact date... I just feel like it could cause a problem down the road.


    Please let me know if you need more information from me.


    Screen Shot 2014-10-14 at 3.31.06 PM.png

    Screen Shot 2014-10-14 at 3.31.33 PM.png

    Screen Shot 2014-10-14 at 3.31.36 PM.png


    The symptoms that you observe have nothing to do with LR.

    There are two different sets of date/times associated with a photo: file times managed by the operating system (Mac OS), and metadata times that are stored inside the image files and are set by cameras and by software applications such as LR.

    The Finder shows you the file times - the time the file was created, and when it was last changed by an application.  Different applications handle the creation and modification times differently, and not all backup/synchronization programs preserve them.  The creation time shown in the screenshot, 12/31/69 19:00, is a special time: it is 01/01/70 0:00 UTC (assuming your computer is currently set to the US Eastern time zone), which many programs represent as the number 0.  This suggests that one of the programs you use to copy your files has a bug and sets the creation time to 0.

    LR does not set file times - when it creates or changes a file, Mac OS sets the creation or modification time automatically to the current time.

    LR shows you the times that are stored inside the metadata of the image files, and the most important of them is what LR calls "capture time".  Cameras set the capture time from their built-in clocks, and LR allows you to change it using the Metadata > Edit Capture Time command.  LR shows the capture time in the right-hand Metadata panel and next to the thumbnails in the Library.

    If an image does not have a capture time in its metadata, LR will use the file's modification time (older versions of LR used the creation time).  Scanners generally do not set the capture time, so, as in your screenshot above, LR shows the modification time next to the thumbnails.

    In general, it is dangerous to rely on file times to carry information about your images - too many programs will do unpredictable things with them.  Use programs like LR to set the image's capture-time metadata (when the shutter was pressed on the film).  Industry standards also define an additional metadata time, usually called "date/time digitized", i.e. when the original image was digitized - that is, converted to digital bits by a scanner.  LR does not provide commands for maintaining the digitized date/time (it is set for images captured by digital cameras), so if you also need to maintain the digitized date/time, you will need to use another program such as ExifTool (free).

  • flashback vs archive log mode

    I want to ask: is there a relationship between flashback and ARCHIVELOG mode?

    I mean, if I go to NOARCHIVELOG mode, can I still use the FLASHBACK DATABASE feature?

    Thank you

    fadiwilliam wrote:
    I want to ask: is there a relationship between flashback and ARCHIVELOG mode?

    I mean, if I go to NOARCHIVELOG mode, can I still use the FLASHBACK DATABASE feature?

    Thank you

    No. For flashback to be enabled on the database, your database must be running in ARCHIVELOG mode (a sketch of the steps follows at the end of this thread).

    Refer to this http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm

    fadiwilliam

    Handle: fadiwilliam
    Status level: Beginner
    Join date: December 15, 2000
    Total messages: 37
    Total questions: 13 (10 open)
    Name: Fadi William

    Please consider closing your questions by awarding appropriate points and marking them as answered. Please help keep the forum clean!
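
    A minimal sketch of the prerequisite steps discussed above (run as SYSDBA; assumes a fast recovery area has already been configured via db_recovery_file_dest and db_recovery_file_dest_size):

    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;     -- Flashback Database requires ARCHIVELOG mode
    ALTER DATABASE FLASHBACK ON;   -- start writing flashback logs
    ALTER DATABASE OPEN;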
