MVIEW log truncation

Hi all

I have a question about materialized view (MView) logs.

While analyzing my AWR reports, I found that most of the top queries in the report are against my MView logs. How can I optimize these queries?

I intend to truncate the MLOG$ tables. After truncating them, should I do a complete refresh, or is my regular fast refresh still correct?

We use Oracle 11g Release 2.

Let me know if any more information is required for the same.

Thank you

AJ

You can follow this document for your reference:

236233.1
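
As a hedged aside: rather than truncating the MLOG$ table by hand, the supported route is DBMS_MVIEW.PURGE_LOG, which removes only rows that registered MViews no longer need (the master table name T below is a placeholder):

-- purge log rows no longer needed by any registered MView;
-- flag => 'delete' also drops rows needed only by the least
-- recently refreshed MView
BEGIN
  DBMS_MVIEW.PURGE_LOG(master => 'T', num => 1, flag => 'delete');
END;
/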

Tags: Database

Similar Questions

  • Additional data capture using Mview log

I want to create an MView log on the source table of a change data capture process. Every day I use the data in the MLOG$ table to update/insert into the target table. After that I truncate the MLOG$ table.

    I am facing two problems:

(1) When updating the source table, I get the value 'U' in the OLD_NEW$$ column. How do I know whether a row is old or new? I just want 'O' or 'N' so that I can identify which is the new value.

(2) Suppose a record is updated several times in the source table, or a record is inserted and then deleted. In that case it is not possible to simply use OLD_NEW$$ = 'N' to fetch the differentials. How can I identify the net new records?

CREATE TABLE t (key NUMBER, val VARCHAR2(1), CONSTRAINT t_pk PRIMARY KEY (key));

CREATE MATERIALIZED VIEW LOG ON t WITH SEQUENCE (val), PRIMARY KEY INCLUDING NEW VALUES;
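
As a hedged aside on (2): a log created WITH SEQUENCE gains SEQUENCE$$, DMLTYPE$$ and OLD_NEW$$ columns, so one way to get the net effect of multiple DMLs is to take, per key, only the last entry in SEQUENCE$$ order. A minimal sketch against the log above:

-- net change per key: the highest SEQUENCE$$ wins;
-- a final DMLTYPE$$ of 'D' means the net effect is a delete
SELECT key, val, dmltype$$
FROM  (SELECT m.key, m.val, m.dmltype$$,
              ROW_NUMBER() OVER (PARTITION BY m.key
                                 ORDER BY m.sequence$$ DESC) rn
       FROM   mlog$_t m)
WHERE  rn = 1;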

The problem you have is that you are not using the documented procedures.

A fast refresh consumes the log.
A complete refresh truncates the log.
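
For instance, a minimal sketch - assuming a fast-refreshable MView t_mv had been defined over the table t above (the MView name is illustrative):

-- rows currently queued in the log
SELECT COUNT(*) FROM mlog$_t;

-- a fast refresh consumes exactly those rows
EXEC DBMS_MVIEW.REFRESH('T_MV', method => 'F');

-- the log is empty again
SELECT COUNT(*) FROM mlog$_t;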

That said, what you are doing is:
- buying a car
- removing the engine
- using it as a bike

You shouldn't mess with the internal logs. It will create havoc. Mark my words. You have been warned.
    ---------
    Sybrand Bakker
    Senior Oracle DBA

  • Activity log truncated every 14 days

    Hello

We are on APEX 3.2.1. We would like to see how many users are using our APEX applications, but apparently the activity logs are truncated every 14 days! Is this expected behavior? Is it possible to configure this? I thought the retention used to be longer; did it change with a recent version of APEX?

    Thank you
    Matthias

Matthias,

It is covered in the documentation on logging. There are two tables that rotate at roughly 14 days each, holding in total about one month's worth of data depending on the load. A view then exposes the table currently being logged to. I needed to track our users, so I combined a lot of different things I've learned on this site into a complete solution for our business needs. It's here: I then created an APEX application with an interactive report to query the data in my table.

I created a new DB schema to pull this information daily and keep it for 13 months.

    DB user: apex_management

The table was created with: create table APEX_ACTIVITY_LOG_ARCHIVE as select * from apex_activity_log where 1 = 2;

The apex_activity_log view is provided by APEX; it is a composite of the two underlying tables, each holding about 14 days' worth of data.

It is worth noting that apex_activity_log requires the workspace security_group_id to be set before you can access it (outside APEX, e.g. via dbms_scheduler), while the underlying tables do not. The only downside is that after every upgrade you need to fix the job to point to the new schema (or wherever the underlying table's name changed); the view itself is more likely not to change.

Currently there are two procedures: one adds new data to the archive, and the other purges old data per our needs. This is handled by a dbms_scheduler job: APEX_ACTIV_LOG_ARCHIVE_REFRESH.

    This task runs every day.

BEGIN
  sys.dbms_scheduler.create_job(
    job_name        => '"APEX_MANAGEMENT"."APEX_ACTIV_LOG_ARCHIVE_REFRESH"',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          refresh_apex_activity_archive;
                        end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=21;BYMINUTE=30',
    start_date      => systimestamp at time zone 'America/New_York',
    job_class       => '"DEFAULT_JOB_CLASS"',
    comments        => 'Used to synchronize the APEX activity log into a separate schema for monitoring application usage',
    auto_drop       => FALSE,
    enabled         => FALSE);
  sys.dbms_scheduler.set_attribute(
    name      => '"APEX_MANAGEMENT"."APEX_ACTIV_LOG_ARCHIVE_REFRESH"',
    attribute => 'raise_events',
    value     => dbms_scheduler.job_failed + dbms_scheduler.job_broken + dbms_scheduler.job_disabled);
  sys.dbms_scheduler.set_attribute(
    name      => '"APEX_MANAGEMENT"."APEX_ACTIV_LOG_ARCHIVE_REFRESH"',
    attribute => 'restartable',
    value     => TRUE);
  sys.dbms_scheduler.enable('"APEX_MANAGEMENT"."APEX_ACTIV_LOG_ARCHIVE_REFRESH"');
END;
/

create or replace
PROCEDURE POPULATE_APEX_ACTIVITY_ARCHIVE
AS
BEGIN
  INSERT INTO apex_activity_log_archive
    (
      time_stamp,
      component_type,
      component_name,
      component_attribute,
      information,
      elap,
      num_rows,
      userid,
      ip_address,
      user_agent,
      flow_id,
      step_id,
      session_id,
      sqlerrm,
      sqlerrm_component_type,
      sqlerrm_component_name,
      page_mode,
      application_info
    )
  SELECT a.time_stamp,
         a.component_type,
         a.component_name,
         a.component_attribute,
         a.information,
         a.elap,
         a.num_rows,
         a.userid,
         a.ip_address,
         a.user_agent,
         a.flow_id,
         a.step_id,
         a.session_id,
         a.sqlerrm,
         a.sqlerrm_component_type,
         a.sqlerrm_component_name,
         a.page_mode,
         a.application_info
  FROM   apex_activity_log a,
         apex_activity_log_archive x
  WHERE  a.time_stamp = x.time_stamp (+)
  AND    a.session_id = x.session_id (+)
  AND    x.ROWID IS NULL;

  COMMIT;

EXCEPTION
  WHEN OTHERS THEN
    -- raise_application_error(-20000,'Error.');
    raise;
  -- If you look at my predicates in the insert statement, I am doing an
  -- anti-join (search asktom for this). I do a left outer join from the
  -- view to the archive table on the timestamp and the APEX session id.
  -- If there is a match, I get a rowid from my table (rowid is not null).
  -- If there is no match, the rowid is null (and hence the row does not
  -- yet exist in the archive).
  --
  -- WHERE alog.view_date = x.view_date (+)
  --   AND alog.apex_session_id = x.apex_session_id (+)
  --   AND alog.application_schema_owner = USER
  --   AND x.ROWID IS NULL;
END POPULATE_APEX_ACTIVITY_ARCHIVE;
/

create or replace
PROCEDURE REFRESH_APEX_ACTIVITY_ARCHIVE
IS
BEGIN
  -- Procedure handles refreshing and purging of the long-term copy of apex_activity_log.
  -- The security_group_id attribute must be set in order to see the data for a workspace.
  -- Could use the underlying tables wwv_flow_activity_log1$ and wwv_flow_activity_log2$
  -- (no need to set security_group_id for those), but they end up in a renamed schema
  -- after each APEX upgrade, while the view is more likely to stay put.
  -- create table APEX_ACTIVITY_LOG_ARCHIVE as select * from apex_activity_log where 1 = 2;
  -- CREATE INDEX "APEX_ACTIVITY_LOG_ARCHIVE_IDX1" ON "APEX_MANAGEMENT"."APEX_ACTIVITY_LOG_ARCHIVE" ("TIME_STAMP");
  --

  DELETE FROM apex_activity_log_archive WHERE time_stamp < sysdate - 395;  -- keep ~13 months

  --
  -- The security_group_id can be obtained from an export file, or from
  -- Home > Manage Workspaces > Manage Workspace to Schema Assignments.

  wwv_flow_api.set_security_group_id(p_security_group_id => 1786101047996118);
  populate_apex_activity_archive;

  wwv_flow_api.set_security_group_id(p_security_group_id => 1734204950462651);
  populate_apex_activity_archive;

  wwv_flow_api.set_security_group_id(p_security_group_id => 6555628123884835);
  populate_apex_activity_archive;

  --

END REFRESH_APEX_ACTIVITY_ARCHIVE;
/

    Hope this helps,
    Justin

  • Generate the ddl to create materialized view log...

    Hi gurus...

    Oracle db: 11g

I am trying to generate/extract the MView log creation scripts from prod, to be applied on test.
I tried

dbms_metadata.get_ddl(decode(object_type,'materialized_view_log'), object_name, owner)
from dba_objects
where owner = <ABC>;

The above fails; any ideas?

    Thank you

>
Oracle db: 11g

I am trying to generate/extract the MView log creation scripts from prod, to be applied on test.
I tried

dbms_metadata.get_ddl(decode(object_type,'materialized_view_log'), object_name, owner)
from dba_objects
where owner=;

The above fails; any ideas?
>
Please use code tags - you need to add

{code}

on the line before and the line after your code, instead of what you are using.

The object type names are case-sensitive, so you need to use

DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW_LOG', object_name, owner)

Why are you using DECODE?
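
As a hedged aside: MView logs appear in DBA_OBJECTS only as their underlying MLOG$_ tables (object type TABLE), so it is usually easier to drive the extraction from DBA_MVIEW_LOGS; for MATERIALIZED_VIEW_LOG, GET_DDL takes the master table name. A minimal sketch (the owner 'ABC' is a placeholder):

-- generate the DDL for every MView log owned by ABC
SELECT DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW_LOG', l.master, l.log_owner)
FROM   dba_mview_logs l
WHERE  l.log_owner = 'ABC';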
    
  • Hi, Pls help

    Hello

We have a log table; once a large number of rows have been inserted into it, the application runs into performance problems, so every time we truncate the log table. Is there an alternative to truncating the table?

A PL/SQL loop is a very slow way to copy rows.

Use SQL statements.

    -- single execution
    create table log_archive as select * from log where 1=2 ;  --- to be done only ONCE to initially create the table
    -- monthly executions
    insert into log_archive select * from log ;
    truncate table log;
    

    Hemant K Collette

  • refresh the materialized view, but skip delete

I have a master table named DATA with an MView log created on it.

DATA_MVIEW is created as SELECT * FROM DATA.

For some reason, I want to refresh DATA_MVIEW every hour, but apply only UPDATE/INSERT actions and skip DELETE actions.

One way to achieve this is to write additional code:
1. Detect deleted records by looking in MLOG$_DATA (the MView log) for rows where DMLTYPE$$ is 'D'.
2. Pull these records into a separate temporary table.
3. Refresh DATA_MVIEW.
4. Insert the records from the temporary table into DATA_MVIEW.

My question is: is there a better, simpler way to skip DELETEs when refreshing the MView?

Thank you very much; waiting for tips - points will be awarded!

There is another convoluted option (see the sketch after this list):
1. Create an ON DELETE trigger on the DATA table that copies each deleted row into a shadow/history table.
2. Refresh the MV.
3. Create a view (not an MV) that is a UNION ALL of the shadow/history table with the MV.
4. Reference this view when you want 'all' rows, and the MV when you want only the rows that still exist.

Of course, if your MV has filters (a WHERE clause) that define it as a subset of the DATA table, you must also make sure the shadow/history table applies the same filters so that it stays consistent with the MV.
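
A minimal sketch of that trigger option, assuming DATA has just a numeric ID primary key and a VAL column (both names are illustrative):

-- shadow/history table with the same shape as the master
CREATE TABLE data_deleted AS SELECT * FROM data WHERE 1 = 2;

-- copy every deleted row into the shadow table
CREATE OR REPLACE TRIGGER trg_data_keep_deleted
AFTER DELETE ON data
FOR EACH ROW
BEGIN
  INSERT INTO data_deleted (id, val) VALUES (:OLD.id, :OLD.val);
END;
/

-- 'all rows' view: current MV rows plus rows deleted from the master
CREATE OR REPLACE VIEW data_all_vw AS
SELECT id, val FROM data_mview
UNION ALL
SELECT id, val FROM data_deleted;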

    Hemant K Collette

  • How to abort dbms_redefinition and start all over again

I am partitioning an existing table in Oracle 10g with DBMS_REDEFINITION. I made a mistake and had to drop the intermediate table. I didn't know that Oracle creates an MView and an MView log when you use DBMS_REDEFINITION. The result is that now, when I try to redo my partitioning, I get an error when I use

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SL', 'SERL');

ORA-12091: cannot online redefine table "SL"."SERL" with materialized views
ORA-06512: at "SYS.DBMS_REDEFINITION", line 137
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1478
ORA-06512: at line 1

SERL is the name of the existing table that I am trying to partition. SL is the owner. The MView log is called MLOG$_SERL. There are no MViews in the user schema.
My interim table is called SERL_P; it now exists with all its partitions and subpartitions. It is empty.

How can I clean this up so I can start the partitioning process again? I cannot drop my existing table SERL. Your help would be appreciated.

I guess you did not use DBMS_REDEFINITION.ABORT_REDEF_TABLE? That procedure should have cleaned up the MV and the MV log.

    Can you try running it now?
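
A hedged sketch of that call, using the names from the post (uname / orig_table / int_table are the documented parameters):

BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE(
    uname      => 'SL',
    orig_table => 'SERL',
    int_table  => 'SERL_P');
END;
/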

Otherwise, you would need to remove the MV manually - but I would suggest raising an SR with Oracle Support to confirm that there is no other 'intermediate' object that DBMS_REDEFINITION creates.

    Hemant K Collette

  • Can we use the MOVE command on an MV and MV log?

    Hi friends,

    DB: 9.2.0.7
    OPERATING SYSTEM: AIX 5.3

Can we use the MOVE command against an MV and MV log to place them in a different tablespace?
Something like "ALTER MATERIALIZED VIEW ... MOVE ..." etc.

If someone could post the command too, it would be awesome!

    Thanks in advance

MViews and MView logs are just tables. Any code will refer to the schema and table, not the tablespace.

Why not just run a test to confirm? It would be quicker than asking a question here.

    SQL> alter materialized view emp_mv move tablespace test;
    
    Materialized view altered.
    
    SQL> alter materialized view log on emp move tablespace test;
    
    Materialized view log altered.
    
    SQL> alter index A.SYS_C0011458 rebuild;
    
    Index altered.
    
    SQL> insert into emp values (3,'peter');
    
    1 row created.
    
    SQL> exec dbms_mview.refresh('EMP_MV');
    
    PL/SQL procedure successfully completed.
    
    SQL> select * from emp_mv;
    
            ID NAME
    ---------- ----------
             1 john
             2 robert
             3 peter
    

Or... a dirtier solution would be to just move the tables.

    SQL> exec dbms_mview.refresh('EMP_MV');
    
    PL/SQL procedure successfully completed.
    
    SQL> create tablespace test;
    
    Tablespace created.
    
    SQL> alter table MLOG$_EMP move tablespace test;
    
    Table altered.
    
    SQL> alter table EMP_MV move tablespace test;
    
    Table altered.
    
    SQL> alter index A.SYS_C0011458 rebuild;
    
    Index altered.
    
    SQL> insert into emp values (2,'robert');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> exec dbms_mview.refresh('EMP_MV');
    
    PL/SQL procedure successfully completed.
    
    SQL> select * from emp_mv;
    
            ID NAME
    ---------- ----------
             1 john
             2 robert
    

Edited by: Robert Geier on December 8, 2009 13:10

Edited by: Robert Geier on December 8, 2009 13:12

  • production.log keeps growing and is never truncated

I noticed that production.log keeps growing and is never truncated. It is now up to 350 MB.

I'm not sure this is normal behavior. Is this a bug, and if not, how can it be changed?

We spoke last week with a VMware technician, and this is indeed current behavior. The developers are aware of the problem. The technician was not able to tell us when to expect a solution. We told him it would be nice if the logs were rotated in blocks, with older files being replaced.

    KM

  • Truncate a table without removing the redo log

    Hi Experts,

Could you please tell me whether it is possible to recover a truncated table from the redo log. In fact, I know it is not possible, but somebody asked me the same question, and I want to be sure before answering him.

    Help, please.

Regards
    Rajat

>
I'm not familiar with these terms (Flashback or RMAN backup), but I do have DBA access. Could you please tell me the procedures.
>
You can restore the data from a backup or an export.

You cannot use FLASHBACK to recover the data unless you are on 11g Release 2 and have prepared for this in advance (using Total Recall).
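
For context, a hedged sketch of the kind of preparation Total Recall (Flashback Data Archive) involves - all names here are illustrative, and which DDL (TRUNCATE included) is permitted on an archive-enabled table varies by release:

-- one-time preparation, done long before any accidental truncate
CREATE FLASHBACK ARCHIVE fba1 TABLESPACE users RETENTION 1 YEAR;
ALTER TABLE emp FLASHBACK ARCHIVE fba1;

-- later, historical rows can be queried as of a past point in time
SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);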

See Tubby's response in this thread:
    Flashback the truncate table is possible in 11g?

  • Truncate logs in vmware vdi

Hi, we use VMware View 3.0 for desktop virtualization, and we set the user data disk size to 128 MB. Everything worked very well until today, when we ran into a problem: the VDM logs are created in the user profile, under Documents and Settings\All Users\Application Data\VMware\VDM\logs.

Every now and then users get a low disk space error on their VDI, and when we go to that \VDM\logs location we find many daily log files around 10 MB each, which we then have to remove manually...

Is there any option or feature through which we can cap these logs so that they cannot grow beyond 10 MB or so?

That is the maximum size per file; you will also need to set a maximum number of debug files (the default is 10).

  • Truncate the Essbase log

    Hello
Does anyone know how to truncate the Essbase logs and error files? I didn't see an option for it. It just keeps appending to the existing log.

    -app

    Hello

Are you talking about the ODI logs created when you load dimensions/data? If so, you could create a package and use the OdiFileDelete tool.

Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Refresh materialized views without generating archive logs

    Hello gurus,

I am facing an embarrassing situation.
I have a nightly job that performs a dbms_refresh on several materialized views, but there are 2 MViews that each take almost 30 minutes; worse, their refresh generates almost 10 GB of archive logs and my file system fills up in no time.

I did an ALTER MATERIALIZED VIEW ... NOLOGGING, but that changed nothing.

Does someone have an idea how to do a refresh without generating archive logs, or without logging?

    thxs in advance
    C.

10g by default does a DELETE and INSERT for a COMPLETE refresh.

It is the DELETE part that can generate significant undo and redo. You can change the behavior to TRUNCATE and INSERT if you do the refresh with the ATOMIC_REFRESH flag set to FALSE.

    for example

    DBMS_MVIEW.REFRESH('MY_MV',ATOMIC_REFRESH=>FALSE);
    

However, if your refresh is currently a FAST refresh, you should compare the cost of a FAST versus a COMPLETE refresh. (You don't have to drop and recreate the MV; you can refresh it with COMPLETE manually, while the other automated refreshes remain FAST.)
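
A minimal sketch of that one-off manual refresh, reusing the MY_MV name from the example above:

-- non-atomic COMPLETE refresh: TRUNCATE + INSERT instead of DELETE + INSERT
EXEC DBMS_MVIEW.REFRESH('MY_MV', method => 'C', atomic_refresh => FALSE);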

    Hemant K Collette
    http://hemantoracledba.blogspot.com

  • Problems plotting small values on a log scale

I have a strange problem with plotting values in graphs with NI Measurement Studio WPF. The problem occurs when the axis is log-based and the values on the axis span from below 1 to above 1 (e.g. 0.001 to 1000). The chart automatically changes the lower x-axis limit to 1. The behavior is similar to the one reported here (http://forums.ni.com/t5/Measurement-Studio-for-NET/log-scale-won-t-scale-to-show-small-values/m-p/27...) but in my case none of the mapped values is 0 or below the lower limit, and the axis minimum is set to greater than 0.

    The code below will reproduce the problem:

Point[] pt = new Point[100];
for (int i = 1; i <= 100; i++)
{
    pt[i - 1] = new Point(0.1 * i, i);
}

Plot pl = new Plot("test");

pl.Data = pt;

graph.Plots.Add(pl);

The x-axis of the graph is log10 scale (and the range is defined as {0.1, 10} in the .xaml), and the y-axis is linear (the scale doesn't really matter). The chart automatically sets the x-axis minimum to 1; it can be changed back to 0.1, but it reverts to 1 whenever the Visible property of the chart changes.

This behavior doesn't happen if the x-axis values are all less than 1 or all greater than 1; i.e. the code below draws correctly and the axis is not reset to 1:

Point[] pt = new Point[100];
for (int i = 1; i <= 100; i++)
{
    pt[i - 1] = new Point(0.001 * i, i);
}

Plot pl = new Plot("test");

pl.Data = pt;

graph.Plots.Add(pl);

Does anyone know why it behaves like this, and what can be done to fix it?

What seems to be happening is that the default FitLoosely range adjuster sees values ranging from 10^-1 to 10^1 and then chooses a range of {0, 10} for the data. Since zero cannot be represented on a logarithmic scale, it forces the scale to an arbitrary value one decade below the maximum, giving a final range of {1, 10} and truncating the low values. I have created a task to fix this problem.

To work around the problem, you can change the Adjuster on the scale to another value, such as FitExactly.

  • OPT_ESTIMATE hints in the MERGE of MView fast refresh operations

    Hello

In a system I am trying to optimize (Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production), there are materialized views with "fast refresh on commit", and there are many queries of the following pattern:

/* MV_REFRESH (MRG) */ MERGE INTO "XXX"."YYY" "SNA$" USING (SELECT /*+ OPT_ESTIMATE (QUERY_BLOCK MAX=485387) */ ...)

As far as I can see, the queries show the structure explained in Alberto Dell'Era's blog post on the fast refresh algorithm for materialized aggregate views with SUM (in the section "Refresh for mixed-DML, TMPDLT") - the best resource on MView refresh algorithms I know of. But I could not find any information on how the value in the OPT_ESTIMATE hint is determined. In the database I can see that the values in the hint change:

select st.sql_id
     , substr(st.sql_text, instr(st.sql_text, 'OPT_ESTIMATE'), 40) sql_text
from dba_hist_sqltext st
where st.sql_text like '/* MV_REFRESH (MRG) */ MERGE INTO "XXX"."YYY"%'
and ...


SQL_ID        SQL_TEXT
------------- ----------------------------------------
6by5cwg0v6zaf OPT_ESTIMATE (QUERY_BLOCK MAX=485387) */
2b5rth5uxmaa2 OPT_ESTIMATE (QUERY_BLOCK MAX=485387) */
4kqc15tb2hvut OPT_ESTIMATE (QUERY_BLOCK MAX=490174) */
fyp1rn4qvxcdb OPT_ESTIMATE (QUERY_BLOCK MAX=490174) */
a5drp0m9wt53k OPT_ESTIMATE (QUERY_BLOCK MAX=407399) */
2dcmwg992pjaz OPT_ESTIMATE (QUERY_BLOCK MAX=485272) */
971zzvq5bdkx6 OPT_ESTIMATE (QUERY_BLOCK MAX=493572) */
46434kbmudkq7 OPT_ESTIMATE (QUERY_BLOCK MAX=493572) */
4ukc8yj73a3h3 OPT_ESTIMATE (QUERY_BLOCK MAX=491807) */
8k46kpy4zvy96 OPT_ESTIMATE (QUERY_BLOCK MAX=491807) */
3h1n5db3vdugt OPT_ESTIMATE (QUERY_BLOCK MAX=493547) */
5340ukdznyqr6 OPT_ESTIMATE (QUERY_BLOCK MAX=493547) */
7fxhdph8ymyz8 OPT_ESTIMATE (QUERY_BLOCK MAX=407399) */
15f3st5gdvwp3 OPT_ESTIMATE (QUERY_BLOCK MAX=491007) */
083ntxzh8wnhg OPT_ESTIMATE (QUERY_BLOCK MAX=491007) */
cg17yjx3qay5z OPT_ESTIMATE (QUERY_BLOCK MAX=491452) */
5qt37uzwrwkgw OPT_ESTIMATE (QUERY_BLOCK MAX=491452) */
byzfcg7vvj859 OPT_ESTIMATE (QUERY_BLOCK MAX=485272) */
aqtdpak3636y5 OPT_ESTIMATE (QUERY_BLOCK MAX=493572) */
dcrkruvsgpz3u OPT_ESTIMATE (QUERY_BLOCK MAX=492226) */
7mmt5px6sd7xg OPT_ESTIMATE (QUERY_BLOCK MAX=492226) */
9c6v714pbjvc0 OPT_ESTIMATE (QUERY_BLOCK MAX=485336) */
fbpsz02yq2qxv OPT_ESTIMATE (QUERY_BLOCK MAX=485336) */
0q04g2rh9j84y OPT_ESTIMATE (QUERY_BLOCK MAX=491217) */
gp3u5d5702dpb OPT_ESTIMATE (QUERY_BLOCK MAX=491638) */
9f35swtju24aa OPT_ESTIMATE (QUERY_BLOCK MAX=491638) */
a70jwxnrxtfjn OPT_ESTIMATE (QUERY_BLOCK MAX=491217) */
93mbf02cjq2ny OPT_ESTIMATE (QUERY_BLOCK MAX=491217) */

So obviously the cardinalities in the OPT_ESTIMATE hint are not static, and the sql_id changes as a result. This change prevents me from using SQL plan baselines to guarantee a stable plan (essentially to avoid parallel operations for the refresh, while still not preventing parallel access for queries). I did a quick check with 11.2.0.1 and I see the same pattern there:

drop materialized view t_mv;
drop table t;

create table t
as
select rownum id
     , mod(rownum, 50) col1
     , mod(rownum, 10) col2
     , lpad('*', 50, '*') col3
from dual
connect by level <= 100000;

exec dbms_stats.gather_table_stats(user, 't')

create materialized view log on t with rowid (id, col1, col2, col3) including new values;

create materialized view t_mv
refresh fast on commit
as
select col1
     , sum(col2) sum_col2
     , count(*) cnt
     , count(col2) cnt_col2
from t
group by col1;

update t set col2 = 0 where col1 = 1;

commit;

SQL_ID  4gnafjwyvs79v, child number 0
-------------------------------------
/* MV_REFRESH (MRG) */ MERGE INTO "TEST"."T_MV" "SNA$" USING (SELECT
/*+ OPT_ESTIMATE (QUERY_BLOCK MAX=1000) */ "DLT$0"."COL1" "GB0",
SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1) * DECODE(("DLT$0"."COL2"),
NULL, 0, 1)) "D0", SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1)) "D1",
NVL(SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1) * ("DLT$0"."COL2")), 0)
"D2" FROM (SELECT CHARTOROWID("MAS$"."M_ROW$$") RID$,
"MAS$"."COL1", "MAS$"."COL2", DECODE("MAS$".OLD_NEW$$, 'N', 'I', 'D')
DML$$, "MAS$"."DMLTYPE$$" "DMLTYPE$$" FROM "TEST"."MLOG$_T" "MAS$"
WHERE "MAS$".XID$$ = :1) "DLT$0" GROUP BY "DLT$0"."COL1") "AV$" ON
(SYS_OP_MAP_NONNULL("SNA$"."COL1") = SYS_OP_MAP_NONNULL("AV$"."GB0"))
WHEN MATCHED THEN UPDATE SET "SNA$"."CNT_COL2" = "SNA$"."CNT_COL2" + "AV$"."D0",
"SNA$"."CNT" = "SNA$"."CNT" + "AV$"."D1",
"SNA$"."SUM_COL2" = DECODE("SNA$"."CNT_COL2" + "AV$"."D0", 0, NULL,
NVL("SNA$"."SUM_COL2", 0) + "AV$"."D2") DELETE WHERE ("SNA$"."CNT" = 0)
WHEN NOT MATCHED THEN INSERT ("SNA$"."COL1", "SNA$"."CNT_COL2", "SNA$"."CNT",

Plan hash value: 2085662248

-----------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | MERGE STATEMENT          |         |       |       |    24 (100)|          |
|   1 |  MERGE                   | T_MV    |       |       |            |          |
|   2 |   VIEW                   |         |       |       |            |          |
|*  3 |    HASH JOIN OUTER       |         |    40 |  4640 |    24   (9)| 00:00:01 |
|   4 |     VIEW                 |         |    40 |  2080 |    20   (5)| 00:00:01 |
|   5 |      SORT GROUP BY       |         |    40 |  1640 |    20   (5)| 00:00:01 |
|*  6 |       TABLE ACCESS FULL  | MLOG$_T |    40 |  1640 |    19   (0)| 00:00:01 |
|   7 |     MAT_VIEW ACCESS FULL | T_MV    |    50 |  3200 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access(SYS_OP_MAP_NONNULL("SNA$"."COL1") = SYS_OP_MAP_NONNULL("AV$"."GB0"))

   6 - filter("MAS$"."XID$$" = :1)

Note
-----

   - dynamic sampling used for this statement (level=2)

So I see an OPT_ESTIMATE (QUERY_BLOCK MAX=1000), and this estimate seems to be fairly stable when I change the number of rows, the number of blocks, use partitioning, etc. I checked a trace with event 10046 but could not find the source of the value 1000. I also disabled cardinality feedback ("_optimizer_use_feedback" = false), and there is no SQL profile in my system (according to dba_sql_profiles) - but the OPT_ESTIMATE hint is still there.

So the question is: is anything known about the values used in the OPT_ESTIMATE hint for materialized view fast refresh operations? Thanks in advance for your input.

Regards

    Martin Preiss

    Martin Preiss wrote:

Regarding point 1: I initially created my test table T as a partitioned table, starting with 1000K rows, then 100K, then 10K (using the old example from my blog post on materialized view fast refresh): in all cases I got OPT_ESTIMATE (QUERY_BLOCK MAX = 1000). My first problem in this context is that I have not come up with a test case showing different OPT_ESTIMATE values - the way I see them in the prod system.

Using a SQL profile and the force_match option looks promising - I'll check if I can use that here.

Regards

    Martin

    Hi Martin,

but perhaps the OPT_ESTIMATE hint is based on the stats of the MLOG or MV table - hence my question. Since this is the 'MAX' option of OPT_ESTIMATE (limiting the maximum number of rows for the query block), the 1000 looks like a 'low' default value that is used if the stats are missing or the MLOG / MV table has fewer than 1000 rows?

    Randolf
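
A hedged way to test that hypothesis against the test case above would be to compare the hint value with the dictionary stats of the log table:

-- check whether the MV log has statistics at all; a missing NUM_ROWS
-- would support the "default used when stats are absent" theory
SELECT table_name, num_rows, last_analyzed
FROM   user_tables
WHERE  table_name IN ('MLOG$_T', 'T_MV');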
