RowId question!

Assume a SQL statement like this:

    Select mv.RowId, mv.field1, mv.field2, ot.field1, ot.field2
      from table1 mv
      join table2 ot on ( mv.pkField = ot.otherField )

Let's say the SQL above drives a cursor FOR loop (for c in ...). Can we use c.RowId to update table1?

    Update table1
       set ...
     where table1.RowId = c.RowId;

Can we trust this ROWID? I mean, couldn't this ROWID relate to the cursor's result set rather than being the ROWID from table1?

Thank you;

Marcos Ortega
Brazil

Hello

> Can we trust this ROWID? I mean, couldn't this ROWID relate to the cursor's result set rather than being the ROWID from table1?

The ROWID is an "internal key" used by Oracle to locate a record precisely. From that:
Yes, it is reliable information.
The ROWID belongs to the table rows, not to cursors.
Never store a ROWID yourself (e.g. declare it as a column of ROWID data type): it is only a physical pointer, and if records are moved "sharply" (e.g. you move the table to another tablespace) your pointer becomes useless...
To update the record, I would use FOR UPDATE and the WHERE CURRENT OF syntax.
Here is an example:

DECLARE
   my_emp_id NUMBER(6);
   my_job_id VARCHAR2(10);
   my_sal    NUMBER(8,2);
   CURSOR c1 IS SELECT employee_id, job_id, salary FROM employees FOR UPDATE;
BEGIN
   OPEN c1;
   LOOP
      FETCH c1 INTO my_emp_id, my_job_id, my_sal;
      EXIT WHEN c1%NOTFOUND;
      IF my_job_id = 'SA_REP' THEN
        UPDATE employees SET salary = salary * 1.02 WHERE CURRENT OF c1;
      END IF;
   END LOOP;
   CLOSE c1;
END;
/

Check this doc:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/sqloperations.htm
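If you do want to drive the update from the join query's ROWID instead, a cursor FOR loop along these lines should also work. This is a minimal sketch using the table1/table2 names and columns from the question; note the ROWID column needs an alias so PL/SQL can reference it:

```sql
DECLARE
   -- FOR UPDATE locks the table1 rows, so their ROWIDs cannot change underneath us
   CURSOR c IS
      SELECT mv.rowid AS rid, ot.field1 AS new_val
        FROM table1 mv
        JOIN table2 ot ON (mv.pkField = ot.otherField)
         FOR UPDATE OF mv.field1;
BEGIN
   FOR r IN c LOOP
      -- r.rid is the physical address of the table1 row, not a cursor artifact
      UPDATE table1 SET field1 = r.new_val WHERE rowid = r.rid;
   END LOOP;
   COMMIT;
END;
/
```

WHERE CURRENT OF, as in the example above, remains the cleaner idiom when only one table is being updated.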

Tags: Database

Similar Questions

  • bitmap to rowid conversion question

    I can't really understand how Oracle converts the bits in a bitmap index into rowids.

    Let's say there is a huge table T with a column C that has the value 'V' for the nth row and 'X' for all other rows.

    Let's say one issues the following query: SELECT * FROM T WHERE C = 'V'.

    Oracle does not know how many rows are inside each block, so how can it know that the nth row is in a certain block? Does it scan some blocks around the right one until it finds it?

    The bitmap-to-rowid conversion algorithm is intended to be internal, but maybe someone has some clues on this topic.

    Thanks in advance

    Claire wrote:
    OK, but my question is: how does this feature do it? Oracle does not know how many rows are inside a block, so how can it convert a position to a rowid?

    Have you read the
    "
    Each bit in the bitmap corresponds to a possible rowid.
    "
    part of that quote? So why do you think Oracle NEEDS to know how many rows are inside a block? How would that information be useful, given this quote?

    Claire wrote:
    Let's say you know you want to pick up the fourth row: how do you know where that row is?

    What is 'the fourth row', and why do you think Oracle cares about it? There is no concept of randomly grabbing the nth row from a result set. If you need a particular row, you must specify it in your query...

    Usually something like

    select ...
    from
    (
       select ..., row_number() over (order by ...) as rn
       from ...
       where ...
    )
    where rn <= 10

    The inner query is resolved first, and each row is assigned a value by the analytic clause. The inner query must be resolved, and its output ordered, before RN can be assigned.

    That is where (if applicable) the bitmap index gets used: in determining the results of the subquery. The rows are filtered (via bitmap index access, say) and the analytic clause is then applied to the resulting rows.
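    To make the template concrete, a hypothetical top-N query against the table T from the question might look like this (the ID column is made up for illustration):

    ```sql
    select id, c
    from
    (
       select id, c, row_number() over (order by id) as rn
       from T
       where c = 'V'   -- this filter is where a bitmap index on C could be used
    )
    where rn <= 10;
    ```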

  • ROWID question when passing an item value in an f?p URL

    Hi all

    Recently I encountered a problem trying to pass a ROWID as a parameter to a hidden item in an f?p URL. No matter whether the item is hidden or just a text field, APEX somehow cuts the '+' symbol if it is contained in the ROWID.
    Therefore, when I reference this item, I get the error ORA-01410: invalid ROWID.

    The reference is as follows:

    f?p=106:3:2171715767421110:3:P3_SAMPLE_RECORD_ID,P3_BRANCH,P3_CUSTOMER_MAIN_NAME:AAAeGdAAIAAAAFb+VGA,1,ARNOLD - and when it is passed, the ROWID arrives in the form 'AAAeGdAAIAAAAFb AAG': without the '+' symbol, but with a space instead.

    I don't know how to solve this problem. I would be very grateful if someone could help with this.

    Try this: http://docs.oracle.com/cd/E23903_01/doc.41/e21676/apex_util.htm#AEAPI190
    :)
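    The underlying cause is that URL decoding turns a bare '+' into a space, so the value has to be escaped before it goes into the link. A minimal sketch using APEX_UTIL.URL_ENCODE; the page/item numbers are taken from the example above and the table name is a placeholder:

    ```sql
    -- build the link target with the ROWID percent-encoded,
    -- so '+' arrives as '%2B' and survives URL decoding
    select 'f?p=106:3:' || :APP_SESSION || '::::P3_SAMPLE_RECORD_ID:'
           || apex_util.url_encode(rowidtochar(t.rowid)) as link
      from some_table t;
    ```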

  • find the most recently inserted record by using rowid

    Yes, I suspect that it is a bad idea!

    I work with an application that was modelled incorrectly. A historical status table was modelled with a primary key on a business key and the effective date, instead of on the end date of the status record. Only one status is possible at a time.

    Logic was developed to insert new status records with null end dates, and then a trigger was provided that populates the end dates of all records except the one with MAX(ROWID). So the hope is that ROWID will always get bigger.

    Now the database has been moved between promotion environments using Data Pump. In our development environments it always seems to work, but not in the customers' promotion environments.

    My question is: is there a setting associated with Data Pump that the client used which does not preserve ROWID order as in our development environment?

    I'm trying to determine if there is a way to keep using the old code without immediate redevelopment.

    Thanks in advance for your comments.

    > so the hope is that ROWID will always get bigger.

    That is an expectation with no guarantee of being correct. There is a very good chance that a new row will have a lower ROWID.

    > My question is: is there a setting associated with Data Pump that the client used which does not preserve ROWID order as in our development environment?

    There is no such setting. Data Pump just inserts rows into a table. Since there is no guarantee that the next row will have a higher ROWID, what you are looking for does not exist.

    It is time to rethink this misguided approach.

    Cheers,
    Brian
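    If redevelopment has to wait, a smaller change than relying on ROWID order is to add a sequence-populated column and use that to define "most recent". A rough sketch with hypothetical names (status_hist, biz_key, ins_seq):

    ```sql
    create sequence status_seq;

    alter table status_hist add (ins_seq number);

    -- populate on insert; a stored value survives Data Pump reloads,
    -- unlike physical ROWID order
    create or replace trigger status_hist_seq_trg
    before insert on status_hist
    for each row
    begin
       :new.ins_seq := status_seq.nextval;
    end;
    /

    -- "the most recent record" per business key is now well defined
    select *
    from status_hist h
    where h.ins_seq = (select max(h2.ins_seq)
                       from status_hist h2
                       where h2.biz_key = h.biz_key);
    ```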

  • Question about cardinality (Rows) in the explain plan

    I have two tables (names have been changed to protect the innocent):


    TABLE1:

    Column               Null?    Type
    -------------------- -------- ------
    Table1_Primary_Key   NOT NULL NUMBER
    (more than 10 other columns)

    TABLE2:

    Column               Null?    Type
    -------------------- -------- ------
    Table2_Primary_Key   NOT NULL NUMBER
    (more than 8 other columns)

    TABLE1 has 1097172 rows.

    TABLE2 has 160960 rows.


    I am analyzing this query and get the explain plan below:

    SELECT t1.Table1_Primary_Key
    --
    FROM TABLE1 t1,
         TABLE2 t2
    --
    WHERE t1.Table1_Primary_Key = t2.Table2_Primary_Key
    AND t2.Table2_Primary_Key = 3432798
    /


    ------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |           |     1 |    21 |     5   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS                |           |     1 |    21 |     5   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID| TABLE2    |     1 |    12 |     3   (0)| 00:00:01 |
    |*  3 |    INDEX UNIQUE SCAN         | TABLE2_PK |     1 |       |     2   (0)| 00:00:01 |
    |   4 |   TABLE ACCESS BY INDEX ROWID| TABLE1    | 1096K |  9634K|     2   (0)| 00:00:01 |
    |*  5 |    INDEX UNIQUE SCAN         | TABLE1_PK |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------


    As you can see, TABLE2 yields exactly 1 row and joins to TABLE1 on a single-row match.

    My question is this:

    Why does the explain plan seem (at least to me) to indicate that it looked at all the rows in TABLE1?


    Thank you


    Thomas

    The optimizer's decisions are based on object (and maybe system) statistics, so it's a good idea to provide as much information as possible in those statistics. Basically, there is nothing wrong with the automatic statistics collection job - I would count on it unless I had a very good reason to use anything else. Of course, there are situations in which it's a good idea to make a few adjustments to the automatic collection: sometimes too many histograms are created, sometimes too few (basically you want histograms where the distribution of the data is not uniform). And if there are columns with correlated values that are used together in filter conditions, then creating extended statistics may be a good idea. To make these adjustments you can use the dbms_stats preference routines. And sometimes it may even be a good idea not to collect statistics for an object at all and to use dynamic sampling (dynamic statistics) instead, to get more detailed information on cardinality, data distribution and joins.

    In my opinion, Jonathan Lewis's book Cost-Based Oracle Fundamentals still contains the best explanation of the optimizer's use of statistics - and Christian Antognini's Troubleshooting Oracle Performance also provides a lot of valuable information about statistics and their gathering. Of course, the documentation also explains the basics in detail: Managing Optimizer Statistics - 11g Release 2 (11.2). And if you want a shorter summary, you can always take a look at Tim Hall's website: https://oracle-base.com/articles/misc/cost-based-optimizer-and-database-statistics.
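    For example, the adjustments mentioned above might look like this. A sketch only: OWNER, T and the column names are placeholders, and the METHOD_OPT value is just one possible policy:

    ```sql
    -- steer histogram creation for one table instead of changing the global default
    begin
       dbms_stats.set_table_prefs(
          ownname => 'OWNER', tabname => 'T',
          pname   => 'METHOD_OPT',
          pvalue  => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 STATUS');
    end;
    /

    -- extended statistics on a correlated column group
    select dbms_stats.create_extended_stats('OWNER', 'T', '(COL1, COL2)') from dual;

    -- re-gather so the new preferences and column group take effect
    exec dbms_stats.gather_table_stats('OWNER', 'T');
    ```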

  • More on my full table scan question...

    Ok, apologies for starting a new thread; the other one became very complicated... If anyone can help I'd be very grateful, because this results in a significant performance problem...

    XE 11.2

    Re this query:

    Select ENTITY_BUDGET_CAT,
           sum(actual) as actual
    from (
      Select br.ENTITY_BUDGET_CAT_ID as ENTITY_BUDGET_CAT,
             sum(br.BRI_CREDIT) as actual
      from bri_recon br
      group by br.ENTITY_BUDGET_CAT_ID
    )
    where ENTITY_BUDGET_CAT = (SELECT "EBC"."ID" FROM "ENTITY_BUDGET_CAT" "EBC" WHERE "ENTITY_ID" = 55)
    group by ENTITY_BUDGET_CAT

    The query results in a full table scan of BRI_RECON, even though there is an index on BR.ENTITY_BUDGET_CAT_ID... If I put the where condition on the inner query, it works fine.

    The subselect retrieving entity_budget_cat in the where clause returns a single value.

    If I replace the subselect that gets the entity_budget_cat with a literal value, it uses the index on ENTITY_BUDGET_CAT and is much faster, with a much lower cost.

    If I remove the outer sum(actual) and just retrieve the value with no group by, it also uses an index scan for the inner query on BRI_RECON.

    I need the statement in this format because it will actually expand quite a bit, with unions summing over 4 tables, and the outer query then retrieves a single value per group... The docs say the filter should be pushed into the inner SQL, but everything I've tried that changes the query while leaving the inner SQL without a where clause and applying it in the outer query (as I would with a view) results in a full table scan.

    Here is a simple example of the view I have, as an example of how the larger view needs to sum over tables and return a single sum per group...

    Create view union_sum_view as (
      Select sum(a) as a, b
      from (
        Select nvl(sum(2), 0) as a, 'b' b from dual
        union all
        Select nvl(sum(2), 0) as a, 'b' b from dual
      )
      group by b
    )

    Select * from union_sum_view where b = (select 'b' from dual)

    I've tried putting a hint on it, and it doesn't seem to make a difference...

    I've gathered full statistics on the schema too...

    So my question is: what is causing the full table scan (which is clearly much less efficient)? And given that I need to build a view in this way, how can I change it to use an index in this format, or what do I do to make it work...   All the variants and traces are below...

    Select ENTITY_BUDGET_CAT,
           sum(actual) as actual
    from (
      Select br.ENTITY_BUDGET_CAT_ID as ENTITY_BUDGET_CAT,
             sum(br.BRI_CREDIT) as actual
      from bri_recon br
      group by br.ENTITY_BUDGET_CAT_ID
    )
    where ENTITY_BUDGET_CAT = (SELECT "EBC"."ID" FROM "ENTITY_BUDGET_CAT" "EBC" WHERE "ENTITY_ID" = 55)
    group by ENTITY_BUDGET_CAT

    ---------------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |                                |   230 |  3910 |  3335   (3)| 00:00:41 |
    |   1 |  HASH GROUP BY                |                                |   230 |  3910 |  3335   (3)| 00:00:41 |
    |*  2 |   VIEW                        |                                |   230 |  3910 |  3333   (3)| 00:00:40 |
    |   3 |    HASH GROUP BY              |                                |   230 |  1610 |  3333   (3)| 00:00:40 |
    |   4 |     TABLE ACCESS FULL         | BRI_RECON                      |  589K |  4031K|  3287   (2)| 00:00:40 |
    |   5 |   TABLE ACCESS BY INDEX ROWID | ENTITY_BUDGET_CAT              |     1 |     8 |     2   (0)| 00:00:01 |
    |*  6 |    INDEX RANGE SCAN           | ENTITY_BUDGET_CAT_ENTITY_IDX1 |     1 |       |     1   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    2 - filter("ENTITY_BUDGET_CAT"= (SELECT "EBC"."ID" FROM "ENTITY_BUDGET_CAT" "EBC" WHERE "ENTITY_ID"=55))
    6 - access("ENTITY_ID"=55)

    Select ENTITY_BUDGET_CAT,
           sum(actual) as actual
    from (
      Select br.ENTITY_BUDGET_CAT_ID as ENTITY_BUDGET_CAT,
             sum(br.BRI_CREDIT) as actual
      from bri_recon br
      group by br.ENTITY_BUDGET_CAT_ID
    )
    where ENTITY_BUDGET_CAT = 382
    group by ENTITY_BUDGET_CAT

    ------------------------------------------------------------------------------------------------------
    | Id  | Operation                       | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                |                   |     1 |    17 |    54   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT           |                   |     1 |    17 |    54   (0)| 00:00:01 |
    |   2 |   VIEW                          |                   |     1 |    17 |    54   (0)| 00:00:01 |
    |   3 |    SORT GROUP BY NOSORT         |                   |     1 |     7 |    54   (0)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID | BRI_RECON         |   808 |  5656 |    54   (0)| 00:00:01 |
    |*  5 |      INDEX RANGE SCAN           | BRI_RECON_IDX_EBC |   808 |       |     4   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    5 - access("BR"."ENTITY_BUDGET_CAT_ID" = 382)

    Richard Legge wrote:

    The subselect retrieving entity_budget_cat in the where clause returns a single value.

    If I replace the subselect that gets the entity_budget_cat with a literal value, it uses the index on ENTITY_BUDGET_CAT and is much faster, with a much lower cost.

    The reason the query runs faster when you use a literal is that Oracle merges the inner queries and views, so that it looks like the following:

    Select br.ENTITY_BUDGET_CAT_ID as ENTITY_BUDGET_CAT,
           sum(br.BRI_CREDIT) as actual
    from bri_recon br
    where br.ENTITY_BUDGET_CAT_ID = ...
    group by br.ENTITY_BUDGET_CAT_ID

    However, when a subquery is used, as with the ENTITY_BUDGET_CAT column, the subquery must be run before the outer query can process more data, so that the outer query only returns results that match records based on the subquery.

    Richard Legge wrote:

    I need the statement in this format because it will actually expand quite a bit, with unions summing over 4 tables, and the outer query then retrieves a single value per group...

    Maybe you should rewrite the query as follows, so that the optimizer gets the opportunity to merge the view and give you better performance...

    Select entity_budget_cat,
           sum(actual) as actual
    from (select br.entity_budget_cat_id as entity_budget_cat,
                 sum(br.bri_credit) as actual
          from bri_recon br
          group by br.entity_budget_cat_id
         ) a1,
         (select ebc.id from entity_budget_cat ebc where entity_id = 55) b1
    where a1.entity_budget_cat = b1.id
    group by a1.entity_budget_cat;

    or maybe just...

    Select a1.entity_budget_cat_id entity_budget_cat,
           sum(a1.bri_credit) as actual
    from bri_recon a1,
         (select ebc.id from entity_budget_cat ebc where entity_id = 55) b1
    where a1.entity_budget_cat_id = b1.id
    group by a1.entity_budget_cat_id

  • XmlIndex subsetting question


    I have a table EMP that contains an XMLType column called empinfo:

    CREATE TABLE EMP
    (
    event_date DATE,
    empinfo SYS.XMLTYPE
    )
    XMLTYPE empinfo STORE AS BINARY XML

    This XMLType column contains unstructured XML documents. I am interested in indexing on 'empno'.
    I am able to create an index, but it only works on the first level.
    I am able to create an index, but it will be only works on the first level.


    CREATE INDEX subset_xmlindex ON emp (empinfo)
    indextype IS xdb.xmlindex
    parameters ('PATHS (INCLUDE (//empno))
    path table subset_pathtable (tablespace ...)');

    When I run the following XMLExists, it finds all the empno values, but it does not use the index:

    xmlexists('//*[empno=171943270]' passing empinfo)

    When I run this one, it finds the empno at the first level and uses the index, but it does not find the empno at the <empinfo> level:

    xmlexists('/person/*[empno=1073630546]' passing empinfo)

    Question: how can I use the index on all the 'empno' elements, wherever they are in the doc?
    Is it even possible? The table contains millions of records.

    The docs suggest that using //empno in the index INCLUDE will make the index work on all the empno elements, regardless of where they are.

    -- xml1

    <person>
      <personrecord>
        <name>
        .....
        <empno>
        ...
      </personrecord>
    </person>

    -- xml2

    <person>
      <personinfo>
        <name>
        ....
        <empinfo>
          <empno>
          ....
        </empinfo>
        ....
      </personinfo>
    </person>

    Database version, please?

    And another question: is there always a single 'empno' node per document?

    The following works for me.

    Make sure you pass the bind as a string variable, not a number (unless you create an additional numeric index on the path VALUE):

    SQL> select *
      2  from emp
      3  where xmlexists('//empno[.=$empno]'
      4          passing empinfo
      5                , '7777' as "empno"
      6        )
      7  ;
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1962240707
    
    -------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                        | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                 |                                |     1 |   145 |     5  (20)| 00:00:01 |
    |   1 |  NESTED LOOPS                    |                                |     1 |   145 |     5  (20)| 00:00:01 |
    |   2 |   VIEW                           | VW_SQ_1                        |     1 |    12 |     3   (0)| 00:00:01 |
    |   3 |    HASH UNIQUE                   |                                |     1 |    47 |            |          |
    |   4 |     NESTED LOOPS                 |                                |       |       |            |          |
    |   5 |      NESTED LOOPS                |                                |     1 |    47 |     3   (0)| 00:00:01 |
    |   6 |       TABLE ACCESS BY INDEX ROWID| X$PT72R2I5S5V9A9VE0GOACG0GKWIG |     1 |    12 |     2   (0)| 00:00:01 |
    |*  7 |        INDEX RANGE SCAN          | X$PR72R2I5S5V9A9VE0GOACG0GKWIG |     1 |       |     1   (0)| 00:00:01 |
    |*  8 |       INDEX RANGE SCAN           | SYS166546_SUBSET_XM_VALUE_IX   |     7 |       |     0   (0)| 00:00:01 |
    |*  9 |      TABLE ACCESS BY INDEX ROWID | SYS166546_SUBSET_XM_PATH_TABLE |     1 |    35 |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY USER ROWID     | EMP                            |     1 |   133 |     1   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       7 - access(SYS_PATH_REVERSE("PATH")>=HEXTORAW('021FD8')  AND
                  SYS_PATH_REVERSE("PATH")
    

    Alternatively, if each doc contains at most one occurrence of 'empno', you can add a virtual column that extracts it and create an index on this column:

    SQL> alter table emp add empno number generated always as (
      2    xmlcast(xmlquery('//empno' passing empinfo returning content) as number)
      3  )
      4  virtual ;
    
    Table altered.
    
    SQL>
    SQL> create index emp_empno_i on emp (empno);
    
    Index created.
    
    SQL> select * from emp where empno = 7777;
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1114809652
    
    -------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |             |     1 |   133 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP         |     1 |   133 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | EMP_EMPNO_I |     1 |       |     1   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("EMPNO"=7777)
    
  • Merge Partition - Space Question

    We have a large table in 10g which is list-partitioned. As time passed, data with a valid value that is not in any of the partitions' value lists was inserted several times. This data falls into the default partition.

    So we have something like this:

    Table ERRORS

    Partition E_1, values ('A', 'B', 'C')

    Partition E_2, values ('L', ..., 'N') - size 600G

    Default partition E_DEFAULT (but contains data with a single value, 'Z') - size 50G

    Now the vendor would like to fix this by adding the value 'Z' to E_2.

    They propose the following:

    ALTER TABLE ERRORS SPLIT PARTITION E_DEFAULT VALUES ('Z') INTO (PARTITION E_TEMP, PARTITION E_DEFAULT);

    ALTER TABLE ERRORS MERGE PARTITIONS E_TEMP, E_2 INTO PARTITION E_2;

    I'm fine with the strategy. I have no doubt that it will work.

    My fear is that, because the partition is so large, the merge will require 650G of extra free space in the tablespace. The vendors seem to think that it will just add the 50G to the 600G.

    Looking at the documentation, the merge partition section says:

    "Use the ALTER TABLE ... MERGE PARTITION statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding indexes."

    This says to me that it takes the two and creates a third, thus requiring 650G of extra free space.

    But is this still the case if the third partition is actually one of the two being merged?

    For what it's worth, my plan was:

    Split the 'Z' data out of the default partition

    Exchange the new partition with an empty temp table

    Drop the new partition

    Modify E_2 to add the new value

    INSERT into ERRORS select * from the temp table

    Drop the temp table

    I think this requires only 50G of free space.

    Thank you

    My main question, however, and the one I can't test (or don't know how to test), is: does the vendor's proposed merge-partition solution require free space equal to the two original partitions (650G in this case) in order to work? The vendor says no. I think yes, but cannot find documentation or proof.

    If the vendor says no, ask them for proof. I'm not aware that the internal process has been documented. But you should be able to test it quite easily.

    The only reason a new segment would be created is if the data were actually moved. So test whether the data is moved.

    If the data is not moved, it's the same segment as before.

    1. Put a row in P1 - examine the ROWID

    2. Put a row in P2 - examine the ROWID

    3. MERGE P1 and P2

    4. Determine if the data moved - which ROWIDs changed?

    The ROWID will be different if the data has been moved to a new segment.

    create table part_list (state varchar2(2))
    partition by list (state)
    (partition p_ca values ('CA'),
    partition p_wa values ('WA'),
    partition p_default values (default)
    );

    insert into part_list values ('CA');

    insert into part_list values ('WA');

    insert into part_list values ('NV');

    select state, rowid rid, dbms_rowid.rowid_relative_fno(rowid) fno,
           dbms_rowid.rowid_block_number(rowid) blkno,
           dbms_rowid.rowid_row_number(rowid) rowno
    from part_list;

    STATE, RID, FNO, BLKNO, ROWNO
    CA, AAAV71AAEAAAAs0AAA, 4, 2868, 0
    WA, AAAV72AAEAAAAtEAAA, 4, 2884, 0
    NV, AAAV73AAEAAAAtcAAA, 4, 2908, 0

    alter table part_list merge partitions p_ca, p_wa into partition p_ca;

    select state, rowid rid, dbms_rowid.rowid_relative_fno(rowid) fno,
           dbms_rowid.rowid_block_number(rowid) blkno,
           dbms_rowid.rowid_row_number(rowid) rowno
    from part_list;

    STATE, RID, FNO, BLKNO, ROWNO
    CA, AAAV74AAEAAAAtjAAA, 4, 2915, 0
    WA, AAAV74AAEAAAAtjAAB, 4, 2915, 1
    NV, AAAV73AAEAAAAtcAAA, 4, 2908, 0

    Note that the ROWIDs for the 'CA' and 'WA' values are different. This means that those rows have been moved. The file number is the same, but the block numbers have changed, as has the row number for 'WA'.

    The fact that the existing rows were moved at all should answer your question about what Oracle really does. In other words, it does NOT just add the 'WA' rows to the existing partition.

    If you do a SPLIT where the data of the existing partition is NOT affected, no data movement is performed. Check this example and see that all the existing ROWID components remain the same:

    alter table part_list split partition p_default values ('OR')
    into (partition p_or, partition p_default);

    There was no 'OR' data in the default partition, so Oracle simply left that segment alone and created a new one for the new partition.
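    Instead of eyeballing full ROWIDs, you can also compare just the data object number component before and after the merge: if the segment was recreated, it changes. A small sketch building on the part_list example:

    ```sql
    -- each segment (partition) has its own data object number;
    -- run before and after the MERGE and compare
    select state,
           dbms_rowid.rowid_object(rowid) as data_object_no
    from part_list;
    ```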

  • simple question on block splits

    Hello Experts,

    My question is: doesn't the following DML perform 90-10 block splits? Because it was said that it performs 50-50 block splits.

    SQL> CREATE TABLE album_sales_IOT(album_id number, country_id number, total_sals number, album_colour varchar2(20),
           CONSTRAINT album_sales_iot_pk PRIMARY KEY(album_id, country_id)) ORGANIZATION INDEX;

    Table created.

    SQL> BEGIN
      2    FOR i IN 5001..10000 LOOP
      3      FOR c IN 201..300 LOOP
      4        INSERT INTO album_sales_iot VALUES(i,c,ceil(dbms_random.value(1,5000000)), 'Yet more new rows');
      5      END LOOP;
      6    END LOOP;
      7    COMMIT;
      8  END;
      9  /

    PL/SQL procedure successfully completed.


    SQL> BEGIN
      2    FOR i IN 1..5000 LOOP
      3      FOR c IN 101..200 LOOP
      4        INSERT INTO album_sales_iot
      5        VALUES(i,c,ceil(dbms_random.value(1,5000000)), 'Some new rows');
      6      END LOOP;
      7    END LOOP;
      8    COMMIT;
      9  END;
     10  /

    PL/SQL procedure successfully completed.

    Hi NightWing,

    Well, you missed the creation of the initial data set here (and mixed something up in between): http://richardfoote.wordpress.com/2012/01/10/index-organized-tables-an-introduction-of-sorts-pyramid-song/

    I did the same thing with your (first) PL/SQL procedure (my first run used "Yet more new rows"). You see 90-10 block splits only. However, 50-50 block splits occur when the second PL/SQL procedure (with "Some new rows") runs afterwards. It's the same thing Richard describes in his second blog post, though in my case the initial data set was different, based on your information: http://richardfoote.wordpress.com/2012/04/26/iot-secondary-indexes-the-logical-rowid-guess-component-part-i-lucky/

    However, he executed the PL/SQL procedure (with "Yet more new rows") as a third run (http://richardfoote.wordpress.com/2012/05/08/iot-secondary-indexes-the-logical-rowid-guess-component-part-ii-move-on/), and in his case the monotonically increasing PK values built on the previously created data sets. So no 50-50 block splits (as he also says) - all logical and all right.

    > My question is: doesn't the following DML perform 90-10 block splits? Because it was said that it performs 50-50 block splits.

    He never said that. He said the following about the run with "Yet more new rows" (the first PL/SQL procedure in your post):

    "We notice that the PCT_DIRECT_ACCESS value is unchanged. So no 50-50 block splits, and no degradation of PCT_DIRECT_ACCESS with regard to secondary indexes."

    You can cross-check this on your own by querying the session statistics or by using Tanel Poder's Snapper.

    Regards

    Stefan
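    The session-statistics cross-check mentioned above could look like this (run from the session doing the inserts; 50-50 splits are the difference between the two counters):

    ```sql
    -- 'leaf node splits' counts all leaf splits for this session;
    -- 'leaf node 90-10 splits' counts only the rightmost-leaf variety
    select n.name, s.value
    from v$statname n
    join v$mystat s on s.statistic# = n.statistic#
    where n.name in ('leaf node splits', 'leaf node 90-10 splits');
    ```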

  • simple question about access and filter predicates

    Hello Experts,

    I know this is maybe a very simple and fundamental question. I have read a lot of articles on explain plans and am trying to understand what 'access' and 'filter' mean.
    Please correct me if I'm wrong: I guess 'access' is chosen when the explain plan can use an index for the predicate, and 'filter' is chosen when the plan goes with a full table scan (without an index).

    My last question is: can you recommend an article or document that explains explain plans in clear language at a basic level?

    Thanks in advance.

    Hello,

    As the names suggest, an access predicate is used when data is accessed based on a certain condition; a filter predicate is used when the data is filtered by the condition after being read.

    For example, if you have SELECT * FROM T1 WHERE X = :x AND Y = :y, where column X is indexed but column Y is not, you can get a plan with an INDEX RANGE SCAN with access predicate X = :x (because this condition can be used when selecting the data to read, so only the index leaf blocks that satisfy it are read) and a TABLE ACCESS BY ROWID with filter predicate Y = :y (because this condition cannot be checked until after the table block has been read).

    I'm not aware of any good articles on the subject, and unlike others I don't find the Oracle documentation detailed enough. I suggest you read a book, for example Christian Antognini's "Troubleshooting Oracle Performance".

    Best regards

    Nikolai
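    A small experiment along the lines of Nikolai's example (t1, its columns, and the literal values are hypothetical):

    ```sql
    create table t1 as
       select rownum x, mod(rownum, 10) y, rpad('a', 100) pad
       from dual connect by level <= 10000;

    create index t1_x_idx on t1 (x);

    -- the plan should show access("X"=42) on the INDEX RANGE SCAN
    -- and filter("Y"=5) on the TABLE ACCESS BY INDEX ROWID step
    select * from t1 where x = 42 and y = 5;

    select * from table(dbms_xplan.display_cursor);
    ```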

  • Question on SQL query tuning

    We have moved from a 9i database to an 11g database.  The query below works perfectly under 9i (about 1 second) but takes 60 to 70 seconds on 11g.  The strange thing is that it has an outer query and a subquery.  If I remove the where clause of the outer query, it runs in less than a second.  Here's the query...

    SELECT period, biweek_start, biweek_end, pay_period_complete, worker_count
      FROM (SELECT to_char(en.start_date, 'mm/dd/yyyy') || ' to ' ||
                   to_char(en.start_date + 13, 'mm/dd/yyyy') period,
                   en.start_date biweek_start,
                   en.start_date + 13 biweek_end,
                   decode(sign(sysdate - en.start_date - 13),
                          -1, 'Ongoing',
                          'Done') pay_period_complete,
                   ta_mssauth_pkg.actual_endorser(en.start_date, '33811') worker_count
              FROM (select worker_id,
                           default_endorser resolved_endorser,
                           effective_date,
                           expiration_date
                      from endorse_delegate_history_v
                     where default_endorser = '33811'
                    union
                    select worker_id,
                           temporary_delegate resolved_endorser,
                           effective_date,
                           expiration_date
                      from endorse_delegate_history_v
                     where temporary_delegate = '33811'
                       and temporary_delegate is not null) de,
                   endorse_activity_v en
             WHERE en.worker_id = de.worker_id
               AND en.endorse_status = n
               AND en.start_date <= de.expiration_date
               AND en.start_date + 13 >= de.effective_date
             GROUP BY en.start_date)
     WHERE worker_count > 0;

    The function used in the inner query to generate the worker_counts is quite complex.  But it runs subsecond if you run it manually.  And it works fine when you run the query without the "where worker_count > 0" part.

    So the where clause of the outer query seems to be the cause of the problem - it relies on a function whose value should be calculated FIRST, and then the outer query should run on the result of the inner query.  Here are the explain plans for the two different versions:

    BAD QUERY - 60 to 70 seconds

    Object owner name cardinality bytes cost object description
    SELECT STATEMENT, GOAL = 41 1 617 ALL_ROWS
    HASH GROUP BY 41 1 617
    NESTED LOOPS
    616 1 41 NESTED LOOPS
    VIEW CA17062 58 186 4464
    SORT UNIQUE 58 186 5392
    UNION-ALL
    TABLE ACCESS BY INDEX ROWID TAS_AUTH ENDORSE_DELEGATE_HISTORY_TBL 42 139 4170
    INDEX RANGE SCAN TAS_AUTH ENDORSE_DELEGATE_HIST_DEF_IDX 3 139
    TABLE ACCESS BY INDEX ROWID TAS_AUTH 14 47 1222 ENDORSE_DELEGATE_HISTORY_TBL
    INDEX RANGE SCAN 3 47 ENDORSE_DELEGATE_HIST_TEMP_IDX TAS_AUTH
    INDEX RANGE SCAN TAS_AUTH 2 1 TA_SCLENDORSE_PK

    TABLE ACCESS BY INDEX ROWID 3 1 17 ENDORSE_ACTIVITY_TBL TAS_AUTH

    GOOD QUERY - < 1 second

    Object owner name cardinality bytes cost object description
    SELECT STATEMENT, GOAL = 1025 25 617 ALL_ROWS
    HASH GROUP BY 1025 25 617
    NESTED LOOPS
    616 25 1025 NESTED LOOPS
    VIEW CA17062 58 186 4464
    SORT UNIQUE 58 186 5392
    UNION-ALL
    TABLE ACCESS BY INDEX ROWID TAS_AUTH ENDORSE_DELEGATE_HISTORY_TBL 42 139 4170
    INDEX RANGE SCAN TAS_AUTH ENDORSE_DELEGATE_HIST_DEF_IDX 3 139
    TABLE ACCESS BY INDEX ROWID TAS_AUTH 14 47 1222 ENDORSE_DELEGATE_HISTORY_TBL
    INDEX RANGE SCAN 3 47 ENDORSE_DELEGATE_HIST_TEMP_IDX TAS_AUTH
    INDEX RANGE SCAN TAS_AUTH 2 1 TA_SCLENDORSE_PK
    TABLE ACCESS BY INDEX ROWID 3 1 17 ENDORSE_ACTIVITY_TBL TAS_AUTH

    These explain plans look almost identical to me - can someone tell me why having the where clause on the outer query is causing so much trouble, and how I can solve this problem?  I tried using the NO_MERGE and PUSH_SUBQ hints, but nothing I did made the bad query any better.

    Thanks in advance!

    Cory

    You can also try taking advantage of scalar subquery caching by changing

    ta_mssauth_pkg.actual_endorser (en.start_date, '33811') worker_count

    TO

    (select ta_mssauth_pkg.actual_endorser(en.start_date, '33811') from dual) worker_count

    which could cache the results for repeated start dates, massively reducing the number of calls.

    I think the NO_PUSH_PRED hint is a better approach, because the cache is less reliable at eliminating duplicate calls - it may cache a value but could already have flushed the matching value from the cache; there is no guaranteed behaviour, and the more distinct en.start_date values the query meets, the less reliable the caching will be.
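    To illustrate the caching effect being discussed, here is a self-contained sketch using a hypothetical counting function (demo_pkg is invented purely for the demonstration; the caching is an optimization the database may apply, not a guarantee):

    ```sql
    -- hypothetical package that counts how often its function is called
    create or replace package demo_pkg as
      calls number := 0;
      function f(p number) return number;
    end demo_pkg;
    /
    create or replace package body demo_pkg as
      function f(p number) return number is
      begin
        calls := calls + 1;   -- count every real invocation
        return p * 2;
      end f;
    end demo_pkg;
    /

    -- direct call: roughly one invocation per row (1000 calls)
    select demo_pkg.f(t.n)
      from (select mod(level, 10) n from dual connect by level <= 1000) t;

    -- scalar subquery: Oracle can cache the result per distinct input,
    -- so demo_pkg.calls typically grows far more slowly afterwards
    select (select demo_pkg.f(t.n) from dual)
      from (select mod(level, 10) n from dual connect by level <= 1000) t;
    ```

    Comparing demo_pkg.calls after each run shows how many invocations the cache eliminated for that particular data distribution.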

  • The question of performance - cached data from a large table

    Hi all

    I have a general question about caching; I am using an Oracle 11g R2 server.

    I have a large table of about 50 million rows which is accessed very often by my application. Some queries run slowly and some are OK. But (of course) when the table data is already in the cache (so basically when a user asks for the same thing twice or more) it runs very quickly.

    Does anyone have any recommendations on caching data / a table of this size?

    Thank you very much.

    Chiwatel wrote:

    With better formatting (I hope) - sorry, I'm not used to the new forum!

    Hash value of plan: 2501344126

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    | Id  | Operation                      | Name           | Starts | E-Rows | E-Bytes | Cost (%CPU) | Pstart | Pstop | A-Rows |   A-Time    | Buffers | Reads |  OMem |  1Mem | Used-Mem |

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    |   0 | SELECT STATEMENT               |                |      1 |        |         |  7232 (100) |        |       |  68539 | 00:14:20.06 |    212K | 87545 |       |       |          |

    |   1 |  SORT ORDER BY                 |                |      1 |   7107 |    624K |  7232   (1) |        |       |  68539 | 00:14:20.06 |    212K | 87545 | 3242K |  792K | 2881K (0)|

    |   2 |   NESTED LOOPS                 |                |      1 |        |         |             |        |       |  68539 | 00:14:19.26 |    212K | 87545 |       |       |          |

    |   3 |    NESTED LOOPS                |                |      1 |   7107 |    624K |  7230   (1) |        |       |  70492 | 00:07:09.08 |    141K | 43779 |       |       |          |

    |*  4 |     INDEX RANGE SCAN           | CM_MAINT_PK_ID |      1 |   7107 |    284K |    59   (0) |        |       |  70492 | 00:00:04.90 |     496 |   453 |       |       |          |

    |   5 |     PARTITION RANGE ITERATOR   |                |  70492 |      1 |         |     1   (0) |    KEY |   KEY |  70492 | 00:07:03.32 |    141K | 43326 |       |       |          |

    |*  6 |      INDEX UNIQUE SCAN         | D1T400P0       |  70492 |      1 |         |     1   (0) |    KEY |   KEY |  70492 | 00:07:01.71 |    141K | 43326 |       |       |          |

    |*  7 |    TABLE ACCESS BY INDEX ROWID | D1_DVC_EVT     |  70492 |      1 |      49 |     2   (0) |  ROWID | ROWID |  68539 | 00:07:09.17 |   70656 | 43766 |       |       |          |

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')

    6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")

    7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))

    Your user has run a query that returns 68,000 rows - what sort of user is that? A human being can't possibly cope with that much data, and it's not entirely surprising that it might take a long time to return.

    One thing I would check is whether you always get the same execution plan - Oracle's estimates here are out by a factor of nearly ten (7,100 predicted rows vs 68,500 returned), so part of your variation in timing may relate to changes of plan.

    If you check the numbers you will see that about half your time went on probing the unique index, and half on visiting the table. In general it is hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it is possible that your best strategy is to protect this index at the expense of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the aim is to arrange things so that the table blocks you revisit do not push out of memory the index blocks you will revisit.
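    One way to implement that suggestion, sketched with the object names from the plan above (the pool size is a hypothetical example - check your SGA budget and test before changing production):

    ```sql
    -- carve out a small recycle pool (64M is only an illustrative size)
    alter system set db_recycle_cache_size = 64M scope = both;

    -- send the big table's blocks to the recycle pool, so random
    -- revisits age out quickly instead of evicting index blocks
    alter table d1_dvc_evt storage (buffer_pool recycle);

    -- leave the index in the default pool (or a keep pool if you size one)
    alter index d1t400p0 storage (buffer_pool default);
    ```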

    Another detail to consider is that if you visit the index and the table completely randomly (for 68,500 locations), you may end up re-reading blocks several times during the visit. If you ordered the intermediate result set from the driving table first, you might find you walk the index and the table in order and don't have to re-read any blocks. That's something only you can know, though.  The code would have to change to include an inline view with no_merge and no_eliminate_oby hints.
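    A sketch of that ordering idea - the column and table names standing in for the real driving query are hypothetical, and the ellipses mark details only the original poster can fill in:

    ```sql
    -- order the driving rows before probing the index/table, so the
    -- blocks revisited are adjacent rather than scattered at random;
    -- no_merge keeps the inline view intact, no_eliminate_oby keeps its sort
    select /*+ no_merge(v) */ ...
      from (select /*+ no_eliminate_oby */ e.dvc_evt_id, ...
              from driving_table e
             where ...
             order by e.dvc_evt_id) v,
           d1_dvc_evt d
     where d.dvc_evt_id = v.dvc_evt_id;
    ```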

    Concerning

    Jonathan Lewis

  • Optimizer question on a 9.2 -> 11.2 upgrade

    Well, I spent time formatting and cleaning this up only to get an error when I tried to post it; I hope it comes out OK.

    Solaris 10

    Versions of Oracle 9.2.0.6 and 11.2.0.3

    We are in the middle of testing an upgrade of a 9.2.0.6 SAP system to 11.2.0.3, with the production upgrade due in September.

    We had application teams test various SAP transactions before and after the upgrade and record the transaction times. Fortunately, we have a sandbox environment which is still at 9.2.0.4, so we can easily test side by side with an upgraded test environment. One specific SAP transaction takes 30-60 seconds on the 11.2 system while it runs in 2 to 5 seconds on 9.2.0.4.

    Most of the time in v11 is spent in this query, which is executed 1000 times, as shown by an SAP trace of the transaction.

      SELECT "KUNNR" , "KUNN2"
        FROM "KNVP"
       WHERE "MANDT" = :A0 AND
             "KUNN2" IN ( :A1 , :A2 , :A3 , :A4 , :A5 ) AND
             "VKORG" = :A6 AND
             "VTWEG" = :A7 AND
             "SPART" = :A8 AND
             "PARVW" IN ( :A9 , :A10 , :A11 )
    
    
    

    An individual execution of the query is not slow, but combined over 1000 executions it accounts for most of the runtime.

    I threw together an anonymous PL/SQL loop which runs this 1000 times, like the SAP transaction does. In the V9 environment it runs in a few seconds; with no changes, in the V11 environment it runs in about 60 seconds.

    Just to gather some information I ran an autotrace in versions 9 and 11.

    Version 9 plan and autotrace.

    Execution plan

    ----------------------------------------------------------

    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=18 Bytes=576)

    1 0 INLIST ITERATOR

    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'KNPV' (Cost=1 Card=18 Bytes=576)

    3 2 INDEX (RANGE SCAN) OF 'KNVP___K' (NON-UNIQUE) (Cost=3 Card=18)

    Statistics

    ----------------------------------------------------------

    0 recursive calls

    0 db block Gets

    9 consistent gets

    0 physical reads

    0 redo size

    Autotrace and plan for version 11

    --------------------------------------------------------------------------------------------

    | ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time |

    --------------------------------------------------------------------------------------------

    |   0 | SELECT STATEMENT |             |    36.  1404 |     2 (0) | 00:00:01 |

    |   1.  INLIST ITERATOR.             |       |       |            |          |

    |*  2 |   TABLE ACCESS BY INDEX ROWID | KNPV |    36.  1404 |     2 (0) | 00:00:01 |

    |*  3 |    INDEX RANGE SCAN | KNVP___K |    36.       |     1 (0) | 00:00:01 |

    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    2 - filter("VKORG"=:A6 AND "VTWEG"=:A7 AND "SPART"=:A8)

    3 - access("MANDT"=:A0 AND ("KUNN2"=:A1 OR "KUNN2"=:A2 OR "KUNN2"=:A3 OR
               "KUNN2"=:A4 OR "KUNN2"=:A5))

    filter("PARVW"=:A9 OR "PARVW"=:A10 OR "PARVW"=:A11)

    Statistics

    ----------------------------------------------------------

    0 recursive calls

    0 db block Gets

    1009 consistent gets

    0 physical reads

    0 redo size

    The difference lies in the number of consistent gets. Since the plan was the same, I started experimenting in the V11 environment, trying to get it down to the same number of consistent gets as V9.

    It turns out this seems to be due to a change in the optimizer between versions 10.1 and 10.2. I could set optimizer_features_enable in my session and change the behavior of the query. Doing some research on google/support I hit on a brief comment by J. Lewis in this forum about a change to inlist optimization in 10.2. I also found various parameters that change the behaviour, and I did a few tests with them.

    Everything from here on is on version 11 and comes from the PL/SQL loop I ran the query 1000 times with.

    Results from v$sql. The first run has optimizer_features_enable set to 9.2.0.4; the second has nothing changed.

    SQL_ID        HASH_VALUE PLAN_HASH_VALUE    ELAPSED        CPU BUFFER_GETS EXECUTIONS

    ------------- ---------- --------------- ---------- ---------- ----------- ----------

    cdb5x2hthdjk2  856081986      4271490216        .06        .05        8000       1000

    7p32q202dbdw8   81115016      4271490216      74.43      74.40     1009000       1000

    dbms_xplan output for the 2 sql_ids.

    SQL_ID cdb5x2hthdjk2, child number 0

    -------------------------------------

    SELECT /*+ opt_param('optimizer_features_enable','9.2.0') loop_test */

    "KUNNR", "KUNN2" FROM SAPR3."KNPV" WHERE "MANDT"=:B12 AND "KUNN2"

    IN (:B11, :B10, :B9, :B8, :B7) AND "VKORG"=:B6 AND "VTWEG"=

    :B5 AND "SPART"=:B4 AND "PARVW" IN (:B3, :B2, :B1)

    Hash value of plan: 4271490216

    --------------------------------------------------------------------------------------------

    | ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time |

    --------------------------------------------------------------------------------------------

    |   0 | SELECT STATEMENT |             |       |       |     3 (100) |          |

    |   1.  INLIST ITERATOR.             |       |       |            |          |

    |*  2 |   TABLE ACCESS BY INDEX ROWID | KNPV |    36.  1404 |     3 (0) | 00:00:01 |

    |*  3 |    INDEX RANGE SCAN | KNVP___K |    36.       |     2 (0) | 00:00:01 |

    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    2 - filter(("VKORG"=:B6 AND "VTWEG"=:B5 AND "SPART"=:B4))

    3 - access("MANDT"=:B12 AND (("KUNN2"=:B11 OR "KUNN2"=:B10 OR "KUNN2"=:B9 OR

    "KUNN2"=:B8 OR "KUNN2"=:B7)) AND (("PARVW"=:B3 OR "PARVW"=:B2 OR "PARVW"=:B1)))

    SQL_ID 7p32q202dbdw8, child number 0

    -------------------------------------

    SELECT /*+ loop_test_11 */ "KUNNR", "KUNN2" FROM SAPR3."KNPV" WHERE

    "MANDT"=:B12 AND "KUNN2" IN (:B11, :B10, :B9, :B8, :B7) AND

    "VKORG"=:B6 AND "VTWEG"=:B5 AND "SPART"=:B4 AND "PARVW" IN (:B3,

    :B2, :B1)

    Hash value of plan: 4271490216

    --------------------------------------------------------------------------------------------

    | ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time |

    --------------------------------------------------------------------------------------------

    |   0 | SELECT STATEMENT |             |       |       |     2 (100) |          |

    |   1.  INLIST ITERATOR.             |       |       |            |          |

    |*  2 |   TABLE ACCESS BY INDEX ROWID | KNPV |    36.  1404 |     2 (0) | 00:00:01 |

    |*  3 |    INDEX RANGE SCAN | KNVP___K |    36.       |     1 (0) | 00:00:01 |

    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    2 - filter(("VKORG"=:B6 AND "VTWEG"=:B5 AND "SPART"=:B4))

    3 - access("MANDT"=:B12 AND (("KUNN2"=:B11 OR "KUNN2"=:B10 OR "KUNN2"=:B9 OR

    "KUNN2"=:B8 OR "KUNN2"=:B7)))

    filter(("PARVW"=:B3 OR "PARVW"=:B2 OR "PARVW"=:B1))

    Apart from a few differences in the numbers, the difference between the plans is in the predicate section. In version 9, PARVW is used in the access predicates section, while the version 11 default uses it only as a filter.

    The definition of the index

    INDEX_NAME POS COLUMN_NAME

    ----------------------------------------------------------------

    KNVP___K 1 MANDT

    KNVP___K 2 KUNN2

    KNVP___K 3 PARVW

    There are various options to deal with this, including two hidden parameters, _optimizer_better_inlist_costing and _optim_peek_user_binds, both of which in an initial test made the query run as expected. Before setting either of those we would have to do more testing and verification with support, etc. So I'm hoping for a possibly different fix, perhaps in the way we gather statistics or something along those lines.

    Is there anything in the stats to look out for which could contribute to this?

    The default statistics collection was 'for all columns size 1'. I have played with various other options with no luck yet.

    Our testing is not finished, but we are a little worried that this could be a more general question than just one query, since we have several transactions showing longer runtimes on 11.2.

    Post edited by: 892918

    I haven't sent/posted those traces because, working with support, I think they have found the problem and we have implemented a workaround. Fortunately, so far it's only this one query we have found with this behavior.

    This seems to be caused by the fix for bug 4600710 - improved INLIST costing in some cases (Doc ID 4600710.8). This query doesn't fit the description exactly, but it's close: the index has 3 columns A, B, C, where A and C are not selective but B is selective, and C is used in an IN-list.

    We now have a few options at our disposal to deal with this query and any others that pop up.

    (1) turn off the fix_control for this bug fix.

    (2) manually lower the number of distinct values for column B, which makes the optimizer use all 3 columns.

    (3) use hints - first_rows(1), turning off _optimizer_better_inlist_costing, etc.

    (4) create a new index on C, B and use a baseline to ensure that it is used.

    We'll go with option 4 at this stage. It was the option with the least risk of affecting the rest of the environment.
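    For reference, options 1 and 2 can be sketched like this (the bug number comes from the post; the schema, column, and distinct-count figures are hypothetical examples - test with support before using either in production):

    ```sql
    -- option 1: disable the single bug fix rather than a whole
    -- optimizer feature set (session level shown; can be set system-wide)
    alter session set "_fix_control" = '4600710:off';

    -- option 2: manually lower the recorded number of distinct values
    -- for the selective middle index column (names/figure hypothetical)
    begin
      dbms_stats.set_column_stats(
        ownname => 'SAPR3',
        tabname => 'KNVP',
        colname => 'KUNN2',
        distcnt => 100);   -- hypothetical figure, deliberately low
    end;
    /
    ```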

    Thanks for your help.

  • ROWID and data block

    Hello
    I'm using Oracle 11g R2 on Oracle Linux 5.6, 64-bit.
    I have a simple question. A ROWID contains: the data object number, the datafile number, the *block* number, and the row number.
    When it says data block, does it mean an Oracle block or a disk block?
    Thank you all.

    Published by: Laus on June 3, 2013 04:16

    Oracle block. The doc for dbms_rowid makes this clear, http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_rowid.htm#CIHDGIGD
    --
    John Watson
    Oracle Certified Master s/n
    http://skillbuilders.com
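    As a quick illustration, DBMS_ROWID can decompose a rowid into those components (DUAL is used here only so the statement is self-contained; any table you own works):

    ```sql
    -- break a rowid into object, relative file, block and row number
    select dbms_rowid.rowid_object(rowid)        data_object_id,
           dbms_rowid.rowid_relative_fno(rowid)  file_no,
           dbms_rowid.rowid_block_number(rowid)  block_no,
           dbms_rowid.rowid_row_number(rowid)    row_no
      from dual;
    ```

    The block number returned is the Oracle data block within the datafile, not a physical disk block.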

  • Materialized view index questions...

    Hi all..


    I created a "REFRESH FAST" materialized view on a remote database table that has 100 million records.
    Previously, I had a query that joins a local_db table and the remote_table via a db_link.
    After I created the MV, I use the MV for the join in this query.



    (1) The performance of the query has not changed much after I used the MV. I was under the impression that the MV stays local
    and improves performance. What can I do?

    (2) I intend to create indexes on the MV. Will that improve performance?
    Do I have to drop and recreate the MV if I create indexes on it?
    I don't want to drop and recreate the MV all the time. What other options do I have?
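    On question 2: an index can be created on an existing materialized view just like on a table - no drop/recreate of the MV is needed, and refreshes maintain the index automatically. A sketch matching the join columns in the posted query (the index name is hypothetical):

    ```sql
    -- index the MV on the columns the query joins on;
    -- the MV itself is untouched and fast refresh keeps the index current
    create index plates_mv_plate_idx
      on plates_mv (lic_plate_nbr, lic_plate_state);
    ```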


    EXPLAIN THE PLAN WITH MV:
    ====================
    SELECT STATEMENT, GOAL = ALL_ROWS      198190  45  2070
     FILTER          
      SORT AGGREGATE        1  23
       TABLE ACCESS BY INDEX ROWID  VP_OWNER  VIOLATORS  4  1  23
        INDEX RANGE SCAN  VP_OWNER  VLR_PLATE  3  1  
     HASH GROUP BY      198190  45  2070
      VIEW  SNALLADB    198189  45  2070
       HASH UNIQUE      198189  45  7245
        WINDOW SORT      198189  45  7245
         HASH JOIN OUTER      198187  45  7245
          PARTITION RANGE SINGLE      4520  44  6028
           TABLE ACCESS FULL  VP_OWNER  VIOLATIONS  4520  44  6028
          MAT_VIEW ACCESS FULL  VP_OWNER  PLATES_MV  193637  8310922  199462128
    EXPLAIN PLAN WITH JOIN TO THE REMOTE TABLE:
    ==========================================
    SELECT STATEMENT, GOAL = ALL_ROWS      4699  44  2024
     FILTER          
      SORT AGGREGATE        1  23
       TABLE ACCESS BY INDEX ROWID  VP_OWNER  VIOLATORS  4  1  23
        INDEX RANGE SCAN  VP_OWNER  VLR_PLATE  3  1  
     HASH GROUP BY      4699  44  2024
      VIEW  SNALLADB    4698  44  2024
       HASH UNIQUE      4698  44  7348
        WINDOW SORT      4698  44  7348
         NESTED LOOPS OUTER      4696  44  7348
          PARTITION RANGE SINGLE      4520  44  6028
           TABLE ACCESS FULL  VP_OWNER  VIOLATIONS  4520  44  6028
          REMOTE    PLATES  4  1  30
    QUERY:
    =====
     
     select viol_date, dmv_sts, lane_id, count(1) cnt, sum(rev) revenue
       from (select violation_id,
                    trunc(viol_date) viol_date,
                    lane_id,
                    rev,
                    'LP-' || (case
                      when dmv_sts in ('NDMV', 'NV-TIME') then
                       dmv_sts
                      when exists (select count(1)
                              from violators vr
                             where vr.lic_plate_nbr = vt.lic_plate_nbr
                               and vr.lic_plate_state = vt.lic_plate_state
                               and vt.viol_date between vr.usage_begin_date and
                                   nvl(vr.usage_end_date, sysdate)
                             having count(1) > 1) then
                       'M-VLTR'
                      else
                       'OTHER'
                    end) dmv_sts,
                    business_type
               from (SELECT DISTINCT v.violation_id,
                                     v.viol_date,
                                     v.lane_id,
                                     v.toll_due rev,
                                     v.lic_plate_nbr,
                                     v.lic_plate_state,
                                     DECODE(v.origin_type,
                                            'F',
                                            'Z',
                                            v.origin_type) business_type,
                                     MIN(case
                                           when p.lic_plate_nbr is null THEN
                                            'NDMV'
                                           when p.start_date is not null and
                                                v.viol_date between p.start_date and
                                                nvl(p.end_date, sysdate) THEN
                                            'IN-DMV'
                                           ELSE
                                            'NV-TIME'
                                         end) over(PARTITION BY violation_id) dmv_sts
                       FROM violations v
                       LEFT OUTER JOIN --plates@home_dmv.world
                     vp_owner.plates_mv p -- *** I am using the MV over there
                         ON v.lic_plate_nbr = p.lic_plate_nbr
                        AND v.lic_plate_state = p.lic_plate_state
                      WHERE v.viol_date >= to_date('7/5/2007', 'MM/DD/YYYY')
                        AND v.viol_date < to_date('7/31/2007', 'MM/DD/YYYY') + 1
                        AND v.viol_status IN ('ZH', 'WJ', 'A')
                        and v.violator_id is null 
                        and v.lic_plate_state = 'TX') vt)
      group by viol_date, dmv_sts, lane_id, business_type
    Thank you

    You have posted more than enough times to know that you need to provide your 4-digit Oracle version.
    >
    (1) the performance of the query has not much changed after I used MV. I was in the impression that the MV remains locally
    and improves the peformace. What can I do?
    >
    Yes - drop the MV. Why did you create an MV if your tests showed that it did not provide any advantage?
    >
    (2) I intend to create indexes on the MV. It improves performance?
    Should I drop and create the MV, if I create indexes on it?
    I don't want to drop and create the MV all the time. Are there other options I have?
    >
    Why do you "plan to create indexes on the MV"? Your statement in #1 says that the MV does not provide any advantage.
    >
    Can someone help me please with this question...
    >
    What question? You have not presented any issue to get help with.

    You need to STOP focusing on a solution and get back to determining what the problem is, or even whether you have a problem.

    1. determine and document the existence of a problem.
    2. determine the cause of the problem
    3. identify possible solutions that will eliminate or alleviate the problem
    4. select one or two options for further evaluation and testing
    5. implement your "best" solution

    You seem to be at step #5 but have not posted anything that indicates you did steps 1-4.

    Post the detailed information for steps 1 and 2.
