Question on the composite index and index skip scan

Hello
I have a point of confusion.
I read Burleson's post on composite index column ordering (http://www.dba-oracle.com/t_composite_index_multi_column_ordering.htm) where he writes:

"... for composite indexes the most restrictive column (the column with the most unique values) should be placed first to cut down the result set..."


But the 10g Performance Tuning Guide says this on the subject of INDEX SKIP SCAN:

"... Index scan Skip allows a composite index that is logically divided into smaller subindex. In Dumpster
scanning, the first column in the composite index is not specified in the query. In other words, it is ignored.
The number of logic subindex is determined by the number of distinct values in the first column.

Skip scanning is advantageous if there are few distinct values in the main column of the composite index and many distinct values in the key do not tip of the index... »

If we design a composite index according to what Burleson says, then how can we take advantage of an index skip scan? These two statements contradict each other, don't they?

Can someone explain this?

Even if you're not skip scanning, it is best to put the column with fewer distinct values as the leading column of the index.

If a query specifies both key columns as equality predicates, it doesn't really matter how the columns are ordered in the index.
If a query specifies a range predicate on one key column, that column should most likely be the second column of the index.
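As a rough sketch of those two points (the table and column names below are invented purely for illustration): with the low-cardinality equality column leading and the range column trailing, both predicates can be used as index access predicates in a single range scan.

-- Hypothetical example: STATUS has few distinct values, CREATED_DT is filtered by range.
create table orders (order_id number, status varchar2(10), created_dt date);
create index orders_status_dt_i on orders (status, created_dt);

-- Both predicates can serve as index access predicates here, because the
-- equality column leads and the range column follows:
select *
from   orders
where  status = 'OPEN'
and    created_dt >= date '2013-01-01';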

BTW, sometimes even a 3-column or 4-column index is useful, so don't simply restrict yourself to 2 columns. However, do not create too many indexes - especially if there is overlap between the indexes.

Hemant K Collette

Tags: Database

Similar Questions

  • Question about composition settings and rendering

    I looked for an answer to my question but have had to ask here as a last resort, sorry; it's pretty basic, but I think a direct answer is needed.

    I imported a Premiere Pro project with two layers of still images, with motion and effects applied. I have been working for a few weeks now with variations on these images and have done a good number of AE exports through the render queue.

    After importing the Premiere compositions into CC, my starting composition and the other compositions' settings are all HDTV 1080 25, but the resolution setting, which unfortunately I have only just noticed, is defined as 'Half' (960 * 540)... New compositions also somehow default to half resolution...

    My question is about my previous and subsequent exports. Does setting the composition resolution to 'Half' affect the quality of my compositions exported through the render queue? So far, all my renders are 1080p by default and not 960 * 540. Have these been uprezzed to 1080 from 960? I have not changed any resolution setting in the render module - they all came out at 1080.

    Thank you

    Does setting the composition resolution to 'Half' affect the quality of my compositions exported through the render queue?

    No, not unless you change the default render setting, which is 'Best Settings', to 'Current Settings'.

  • Why don't multi-column function-based indexes use an index skip scan?

    Hi all

    I have just been hired by a new company and I am exploring its database infrastructure. Interestingly, I see several function-based indexed columns used on all the tables. I found it strange, but they said: "We use Axapta to connect to Oracle; function-based indexes should be used to improve performance. Therefore, our DBAs create several function-based indexes for each table in the database." Unfortunately, I cannot judge their business logic.

    My question: I just created similar tables in my local database in order to understand the behavior of multi-column function-based indexes. In order to create the function-based indexes (substr and nls_lower), I have to declare the columns as VARCHAR2, and in my company our DBAs had created a number of columns with the VARCHAR2 data type. I created two exactly identical tables for my experiment: I created a multi-column function-based index on the my_first table, and a normal multi-column index on the my_sec table. The interesting thing is that an index skip scan cannot be performed on the multi-column function-based index (table my_first), while it can be performed on the normal multi-column index on the my_sec table. I hope I have expressed myself clearly.

    Note: I also asked about the logic of the function-based index rule. They said that when they index a column they use the ((column length) * 2 + 1) formula. For example, to create an index on the zip code column, data type VARCHAR2(3), I have to use 3 * 2 + 1 = 7, i.e. substr(nls_lower(areacode), 1, 7). The nested substr(nls_lower()) notation is used for every function-based index. I know these things are quite illogical, but they told me they use this type of implementation for Axapta.

    Anyway, in this thread my question relates to function-based indexes and the index skip scan, not the business logic, because I cannot change the business logic.

    Also, can you please give any hints or pointers on multi-column function-based indexes?

    Thanks for your help.


    SQL > create table my_first as select '201' codeZone, to_char(100 + rownum) account_num, dbms_random.string('A', 10) name from dual connect by level <= 5000;

    Table created.

    SQL > create table my_sec as select '201' codeZone, to_char(100 + rownum) account_num, dbms_random.string('A', 10) name from dual connect by level <= 5000;

    Table created.

    SQL > alter table my_first modify account_num varchar2(12);

    Table altered.


    SQL > alter table my_sec modify account_num varchar2(12);

    Table altered.

    SQL > alter table my_first modify codeZone varchar2(3);

    Table altered.

    SQL > alter table my_sec modify codeZone varchar2(3);

    Table altered.

    SQL > create index my_first_i on my_first (substr(nls_lower(codezone), 1, 7), substr(nls_lower(account_num), 1, 15));

    Index created.

    SQL > create index my_sec_i on my_sec (codezone, account_num);

    Index created.

    SQL > analyze table my_first compute statistics for all indexed columns for all indexes;

    Table analyzed.

    SQL > analyze table my_sec compute statistics for all indexed columns for all indexes;

    Table analyzed.

    SQL > exec dbms_stats.gather_table_stats (USER, 'MY_FIRST');

    PL/SQL procedure successfully completed.

    SQL > exec dbms_stats.gather_table_stats (USER, 'MY_SEC');

    PL/SQL procedure successfully completed.

    SQL > desc my_first
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    CODEZONE VARCHAR2 (3)
    ACCOUNT_NUM VARCHAR2 (12)
    NAME VARCHAR2 (4000)

    SQL > desc my_sec
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    CODEZONE VARCHAR2 (3)
    ACCOUNT_NUM VARCHAR2 (12)
    NAME VARCHAR2 (4000)

    SQL > select * from my_sec where account_num = '4000';


    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1838048852

    ----------------------------------------------------------------------------------------
    | Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |          |     1 |    19 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| MY_SEC   |     1 |    19 |     3   (0)| 00:00:01 |
    |*  2 |   INDEX SKIP SCAN           | MY_SEC_I |     1 |       |     2   (0)| 00:00:01 |
    ----------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("ACCOUNT_NUM"='4000')
           filter("ACCOUNT_NUM"='4000')


    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
              7  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    SQL > select * from my_first where substr(nls_lower(account_num), 1, 25) = '4000';


    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1110109060

    ------------------------------------------------------------------------------
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |          |     1 |    20 |     9  (12)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| MY_FIRST |     1 |    20 |     9  (12)| 00:00:01 |
    ------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter(SUBSTR(NLS_LOWER("MY_FIRST"."ACCOUNT_NUM"),1,15)='4000'
                  AND SUBSTR(NLS_LOWER("ACCOUNT_NUM"),1,25)='4000')


    Statistics
    ----------------------------------------------------------
             15  recursive calls
              0  db block gets
             26  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    SQL > select /*+ INDEX_SS(MY_FIRST) */ * from my_first where substr(nls_lower(account_num), 1, 25) = '4000';


    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2466066660

    --------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |    20 |    17   (6)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| MY_FIRST   |     1 |    20 |    17   (6)| 00:00:01 |
    |*  2 |   INDEX FULL SCAN           | MY_FIRST_I |     1 |       |    16   (7)| 00:00:01 |
    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter(SUBSTR(NLS_LOWER("ACCOUNT_NUM"),1,25)='4000')
       2 - access(SUBSTR(NLS_LOWER("ACCOUNT_NUM"),1,15)='4000')
           filter(SUBSTR(NLS_LOWER("ACCOUNT_NUM"),1,15)='4000')


    Statistics
    ----------------------------------------------------------
             15  recursive calls
              0  db block gets
            857  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Check MoS for a bug with FBIs and skip scans - it sounds like it could be a bug.

    On 11.2.0.4 with your sample code, the 10053 trace shows the optimizer considering an index FULL scan at the point where it should be considering an index SKIP scan for the single table access path.

    Perhaps someone with a 12.1.0.1 instance to hand would like to run your test and see if it's fixed in that version.
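    For reference, a minimal way to capture such a 10053 trace for the problem statement would be something like the following (the statement and hint are taken from the test case above; the tracefile identifier is just an arbitrary label):

    -- Sketch only: enable the CBO trace, hard-parse the statement, then disable it.
    alter session set tracefile_identifier = 'fbi_skip_scan';
    alter session set events '10053 trace name context forever, level 1';
    select /*+ INDEX_SS(MY_FIRST) */ * from my_first
    where substr(nls_lower(account_num), 1, 25) = '4000';
    alter session set events '10053 trace name context off';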

    Regards

    Jonathan Lewis

  • doubt about the Index Skip Scan

    Hi all

    I am reading the Oracle Performance Tuning Guide (version 11.2, chapter 11). I just wanted to see an index skip scan with an example, so I created a table called t and inserted test data. When I queried the table, the optimizer did not use the index skip scan path.

    Can you please let me know what mistake I am making here.

    Thanks a lot for your help in advance.

    SQL > create table t (empno number
      2  , ename varchar2(2000)
      3  , gender varchar2(1)
      4  , email_id varchar2(2000));

    Table created

    SQL >
    SQL > -- test data
    SQL > insert into t
      2  select level, 'suri' || (level), 'M', 'suri.king' || level || '@gmail.com'
      3  from dual
      4  connect by level <= 20000
      5  /

    20000 rows inserted

    SQL >
    SQL > insert into t
      2  select level + 20000, 'surya' || (level + 20000), 'F', 'surya.princess' || (level + 20000) || '@gmail.com'
      3  from dual
      4  connect by level <= 20000
      5  /

    20000 rows inserted

    SQL > create index t_gender_email_idx on t (gender, email_id);

    Index created

    SQL > explain plan for
      2  select *
      3  from t
      4  where email_id = '[email protected]';

    Explained.

    SQL > select *
      2  from table(dbms_xplan.display);

    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------
    Plan hash value: 1601196873

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |     4 |  8076 |   103   (1)| 00:00:02 |
    |*  1 |  TABLE ACCESS FULL| T    |     4 |  8076 |   103   (1)| 00:00:02 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("EMAIL_ID"='[email protected]')

    Note
    -----
       - dynamic sampling used for this statement (level=2)

    17 rows selected.

    Cheers,

    Suri

    You have just demonstrated how your execution plan gets screwed up if you do not have your statistics gathered:

    SQL > create table t
      2  (
      3    empno number
      4  , ename varchar2(2000)
      5  , gender varchar2(1)
      6  , email_id varchar2(2000)
      7  );

    Table created.

    SQL > insert into t
      2  select level, 'suri' || (level), 'M', 'suri.king' || level || '@gmail.com'
      3  from dual
      4  connect by level <= 20000
      5  /

    20000 rows created.

    SQL > insert into t
      2  select level + 20000, 'surya' || (level + 20000), 'F', 'surya.princess' || (level + 20000) || '@gmail.com'
      3  from dual
      4  connect by level <= 20000
      5  /

    20000 rows created.

    SQL > create index t_gender_email_idx on t (gender, email_id);

    Index created.

    SQL > set autotrace traceonly explain
    SQL >
    SQL > select *
      2  from t
      3  where email_id = '[email protected]';

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2153619298

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |     3 |  6057 |    79   (4)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T    |     3 |  6057 |    79   (4)| 00:00:01 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("EMAIL_ID"='[email protected]')

    Note
    -----
       - dynamic sampling used for this statement

    SQL > exec dbms_stats.gather_table_stats(user, 't', cascade => true)

    PL/SQL procedure successfully completed.

    SQL > select *
      2  from t
      3  where email_id = '[email protected]';

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2655860347

    --------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                    |     1 |    44 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T                  |     1 |    44 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX SKIP SCAN           | T_GENDER_EMAIL_IDX |     1 |       |     1   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("EMAIL_ID"='[email protected]')
           filter("EMAIL_ID"='[email protected]')

    SQL >

  • INDEX RANGE SCAN against INDEX SKIP SCAN

    Dear,

    Let me introduce you to the model, and then I'll ask my question
    SQL> select * from v$version;
    
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for Solaris: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    
    SQL> create table t1
      2     as select rownum                  id1,
      3      mod(rownum,1000)                  id2,
      4      lpad(rownum,10,'0')              small_vc,
      5      rpad('x',1000)                   padding
      6  from dual
      7  connect by level <= 10000;
    
    Table created.
    
    SQL> create index t1_ind_id1 on t1(id1);
    
    Index created.
    
    SQL> create index t1_ind_id2 on t1(id2, id1);
    
    Index created.
    
    SQL> exec dbms_stats.gather_table_stats(user, 't1', cascade => true);
    
    PL/SQL procedure successfully completed.
    
    SQL> select index_name, num_rows, clustering_factor
      2  from user_indexes
      3  where index_name in ('T1_IND_ID1','T1_IND_ID2');
    
    INDEX_NAME                       NUM_ROWS CLUSTERING_FACTOR
    ------------------------------ ---------- -----------------
    T1_IND_ID1                          10000              1429
    T1_IND_ID2                          10000             10000
    
    
    SQL> select *
      2  from t1
      3  where id1=6;
    
     Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2367654148
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T1_IND_ID1 |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
    So far so good.

    What I want to know is how I can reproduce a real-life example where an index skip scan is chosen by the CBO despite the presence of the 'adequate' index.

    Below, I tried several examples:
    SQL> alter index t1_ind_id1 unusable;
    
    Index altered.
    
    SQL> select *
      2  from t1
      3  where id1=6;
    
      Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2497247906
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |  1004   (1)| 00:00:03 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |  1004   (1)| 00:00:03 |
    |*  2 |   INDEX SKIP SCAN           | T1_IND_ID2 |     1 |       |  1003   (1)| 00:00:03 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
           filter("ID1"=6)
    That's predictable. Let's make the index usable again and change its clustering factor:
    SQL> alter index t1_ind_id1 rebuild;
    
    Index altered.
    
    SQL> select *
      2  from t1
      3  where id1=6;
    
         
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2367654148
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T1_IND_ID1 |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
    
    SQL> exec dbms_stats.set_index_stats(user, 'T1_IND_ID1',clstfct => 20000);
    
    PL/SQL procedure successfully completed.
    
    SQL> select index_name, num_rows, clustering_factor
      2  from user_indexes
      3  where index_name in ('T1_IND_ID1','T1_IND_ID2');
    
    INDEX_NAME                       NUM_ROWS CLUSTERING_FACTOR
    ------------------------------ ---------- -----------------
    T1_IND_ID1                          10000             20000
    T1_IND_ID2                          10000             10000
    
    
    SQL> select *
      2  from t1
      3  where id1=6;
    
        
    Execution Plan
    ------------------------------------------------------------------------------------------
    Plan hash value: 2367654148
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |     3   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T1_IND_ID1 |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
    Still no success in producing an INDEX SKIP SCAN on T1_IND_ID2 in the presence of the T1_IND_ID1 index.

    Any suggestions?

    Thank you

    Mohamed Houri
    www.hourim.WordPress.com

    What I want to know is how I can reproduce a real-life example where an index skip scan is chosen by the CBO despite the presence of the 'adequate' index

    If, for the sake of the investigation, you want to 'force' the index skip scan, you must do two things:

    1. Change the clustering factor of T1_IND_ID1 to make it more expensive.

    While Hemant and Nikolay make good points about the fact that the clustering factor SHOULD BE irrelevant for a single-row lookup, you are using a non-unique index, so it is still part of the cost calculation for a range scan.

    Had it been a unique index, then hacking the clustering factor would have been ineffective.

    But because the cost calculation involves selectivity * clustering factor, you have to change it by an order of magnitude (relative to num_distinct, obviously) to make a significant change.

    For example:

    SQL> exec dbms_stats.set_index_stats(user, 'T1_IND_ID1',clstfct => 20000000);
    
    PL/SQL procedure successfully completed.
    
    SQL> explain plan for
      2  select /*+ index(t1 t1_ind_id1) */ *
      3  from t1
      4  where id1=6;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------------------------
    Plan hash value: 3180815200
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |  2002   (1)| 00:00:25 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |  2002   (1)| 00:00:25 |
    |*  2 |   INDEX RANGE SCAN          | T1_IND_ID1 |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
    
    14 rows selected.
    
    SQL>
    

    This pushes the cost of the range scan up above that of the full table scan:

    SQL> explain plan for
      2  select *
      3  from t1
      4  where id1=6;
    
    Explained.
    
    SQL>  select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    -----------------------------------------------------------------------------------------------
    Plan hash value: 3617692013
    
    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |     1 |  1019 |   322   (1)| 00:00:04 |
    |*  1 |  TABLE ACCESS FULL| T1   |     1 |  1019 |   322   (1)| 00:00:04 |
    --------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("ID1"=6)
    
    13 rows selected.
    
    SQL>
    

    So, now to the next step.

    2. We need to artificially lower the cost of the skip scan - and the best way to do that is by changing the number of distinct values of the leading column of the index (currently 1000):

    SQL> begin
      2     DBMS_STATS.SET_COLUMN_STATS
      3     (ownname       => USER,
      4      tabname       => 'T1',
      5      colname       => 'ID2',
      6      partname      => NULL,
      7      stattab       => NULL,
      8      statid        => NULL,
      9      distcnt       => 1,
     10      density       => 1,
     11      nullcnt       => 0,
     12      srec          => NULL,
     13      avgclen       => 4,
     14      flags         => NULL,
     15      statown       => NULL,
     16      no_invalidate => FALSE,
     17      force         => TRUE);
     18  end;
     19  /
    
    PL/SQL procedure successfully completed.
    
    SQL> 
    

    And so a skip scan is now chosen by default:

    SQL> explain plan for
      2  select *
      3  from t1
      4  where id1=6;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------------------
    Plan hash value: 3198394326
    
    ------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |  1019 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1         |     1 |  1019 |     3   (0)| 00:00:01 |
    |*  2 |   INDEX SKIP SCAN           | T1_IND_ID2 |     1 |       |     2   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("ID1"=6)
           filter("ID1"=6)
    
    15 rows selected.
    
    SQL>
    

    Hope this helps

    Edited by: Dom Brooks on October 24, 2012 12:49
    Reworded

  • index skip scan

    Hi all
    an excerpt from page 11-18 of the 11g Release 2 Performance Tuning Guide:
    Skip scanning is advantageous when there are few distinct
    values in the leading column of the composite index and many distinct values in the
    nonleading key of the index
    Could anyone explain what is meant by the "nonleading key of the index"?

    Best regards
    Val

    The key of an index is all of the columns included in the index. The 'leading' key(s) are the first column(s) in the index. So, with an index on col1, col2, col3, you could issue a query like:

    select * from table
    where col1 = value
    

    and have it use the index for a range scan. In this case, col1 is the leading key. You could also do:

    select * from table
    where col1 = value and
          col2 = value
    

    In this case, col1 and col2 are the leading keys. However, something like:

    select * from table
    where col2 = value and
          col3 = value
    

    does not use the leading key of the index. Depending on a number of factors, Oracle may be able to skip scan this index to answer your query.
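    For instance (purely illustrative object names), with an index on (col1, col2, col3), a query that filters only on nonleading columns cannot do an ordinary range scan, but the optimizer may still choose a skip scan, probing one logical subindex per distinct value of col1:

    create index demo_tab_c123_i on demo_tab (col1, col2, col3);

    -- col1 is not referenced, so a range scan driven by the leading key is not possible;
    -- the INDEX_SS hint merely suggests a skip scan, costing still decides.
    select /*+ index_ss(demo_tab demo_tab_c123_i) */ *
    from   demo_tab
    where  col2 = :val2
    and    col3 = :val3;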

    John

  • Basic query - Index Skip Scan

    Hello

    I have a very basic question.

    I use Autotrace to check the Plan for this query

    Table definition:
    ------------------------------------
    create table tb_emp(
    sextype varchar2(1),
    empid number
    );

    Table values
    -----------------------------------------------
    insert into tb_emp values ('F', 98);
    insert into tb_emp values ('F', 100);
    insert into tb_emp values ('F', 102);
    insert into tb_emp values ('F', 104);
    insert into tb_emp values('M',101);
    insert into tb_emp values('M',103);
    insert into tb_emp values('M',105);
    commit;

    Index:
    -----------------------------------------------------------------------------
    create index EMP_SEXTYPE_EMP_IDX on tb_emp (SEXTYPE, empid);


    Query:
    --------------------------------------------------------------------------------------------------------------
    Select * from tb_emp where empid = 101;

    ---------------------------------------------------------------------------------------------------------------
    Execution Plan
    ----------------------------------------------------------
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=0 Card=1 Bytes=15)
       1    0   INDEX (FULL SCAN) OF 'EMP_SEXTYPE_EMP_IDX' (INDEX) (Cost=0 Card=1 Bytes=15)

    According to b14211 this should result in an index skip scan.

    Any pointers on what I am missing, or on other parameters that could affect the execution plan?
    Thank you and best regards,
    Ashish.

    The test case you used is not a realistic one. With 7 records in a table, does it matter whether the optimizer goes for an index skip scan or a full scan?

    Make it a little bigger and try:

    SQL> truncate table tb_emp
      2  /
    
    Table truncated.
    
    SQL> set autotrace off
    
    SQL> edit
    Wrote file afiedt.buf
    
      1  insert into tb_emp
      2  select decode(mod(level,2),0,'M','F'), level
      3    from dual
      4* connect by level <= 10000
    SQL> /
    
    10000 rows created.
    
    SQL> commit
      2  /
    
    Commit complete.
    
    SQL> select sextype, count(*) from tb_emp group by sextype
      2  /
    
    S   COUNT(*)
    - ----------
    M       5000
    F       5000
    
    SQL> exec dbms_stats.gather_table_stats(user,'TB_EMP',cascade=>true)
    
    PL/SQL procedure successfully completed.
    
    SQL> set autotrace traceonly explain
    SQL> select * from tb_emp where empid = 3000
      2  /
    
    Execution Plan
    ----------------------------------------------------------
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=1 Bytes=5)
       1    0   INDEX (SKIP SCAN) OF 'EMP_SEXTYPE_EMP_IDX' (INDEX) (Cost=3 Card=1 Bytes=5)
    
  • question on the layer effects and filters - Jews Becker

    Hello

    I have a question about the layer effects and filters

    First, is there a way to do layer effects in Flash (as in Photoshop: 'coup', 'outer glow', etc...)?

    Second, are there any special-effects plug-ins - perhaps with particles or pretty effects - that I can buy?

    Thank you very much!

    (as you may have guessed I'm very new at this and appreciate all the advice I can get!)

    -Jews Becker

    You won't find anything exactly like Photoshop's filters for use in Flash. There are filters in Flash; they are available at the bottom of the Properties panel for a MovieClip or Button object on the stage. Alternatively, you can apply and animate a filter in ActionScript. There is a good third-party developer, http://www.greensock.com, where you can find a few very useful effects tools. Most are free.

  • the composite index question

    If I have to create a composite index, why should we put the most selective column first?

    Suppose I have a table T that has columns C1, C2, C3, C4...
    Suppose I have a composite index on C1, C2 and my queries always contain C1 and C2 in the WHERE clause.

    Suppose that C1 is more selective than C2.

    Why should the order of the 2 columns in the index matter, i.e. (C1, C2) versus (C2, C1)?

    Isn't the height the same?

    Or is Oracle able to store the values of C2 directly inside the leaf blocks, saving a branch level?

    Claire wrote:
    If I have to create a composite index, why should we put the most selective column first?

    Suppose I have a table T that has columns C1, C2, C3, C4...
    Suppose I have a composite index on C1, C2 and my queries always contain C1 and C2 in the WHERE clause.

    Suppose that C1 is more selective than C2.

    Why should the order of the 2 columns in the index matter, i.e. (C1, C2) versus (C2, C1)?

    Isn't the height the same?

    Or is Oracle able to store the values of C2 directly inside the leaf blocks, saving a branch level?

    The order of the columns can make a difference.

    Select * from T where C1 = :C1 and C2 = :C2;
    Select * from T where C1 = :C1;

    In the above cases, if you have a composite index on (C1, C2), then that index order would be better.

    And

    Select * from T where C2 = :C2 and C1 = :C1;
    Select * from T where C2 = :C2;

    Thus, the order of the columns in your index depends on HOW YOUR QUERIES are written. Nothing
    else (the selectivity of a or b does not matter at all).
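    A minimal sketch of that point, reusing the hypothetical T, C1 and C2 from the question (the index DDL you choose simply mirrors the predicates your queries actually supply):

    -- Queries that always supply C1 (alone or together with C2) are served by:
    create index t_c1_c2_i on t (c1, c2);
    select * from t where c1 = :c1 and c2 = :c2;
    select * from t where c1 = :c1;

    -- Queries that sometimes supply only C2 are better served by leading with C2:
    create index t_c2_c1_i on t (c2, c1);
    select * from t where c2 = :c2;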

    Source - the author himself. http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:5671539468597

  • need advice on the composite index...

    Hello
    I created 2 indexes on 2 columns as below.

    CREATE INDEX IDX_COMP_1 ON EMP (SAL, HIREDATE)
    CREATE INDEX IDX_COMP_2 ON EMP (HIREDATE, SAL)

    DELETE FROM PLAN_TABLE;
    EXPLAIN PLAN FOR SELECT * FROM EMP WHERE SAL = 5000;
    EXPLAIN PLAN FOR SELECT * FROM EMP WHERE HIREDATE = '17-NOV-1981';

    EXPLAIN PLAN FOR SELECT * FROM EMP WHERE HIREDATE = '03-DEC-1981' AND SAL = 950;
    EXPLAIN PLAN FOR SELECT * FROM EMP WHERE SAL = 3000 AND HIREDATE = '03-DEC-1981';

    SELECT P.PLAN_ID, P.TIMESTAMP, P.OPERATION, P.OPTIONS, P.OBJECT_NAME, P.OBJECT_TYPE, P.ACCESS_PREDICATES, P.OPTIMIZER, P.CPU_COST
    FROM PLAN_TABLE P ORDER BY 1, 2;

    o/p -
    PLAN_ID  TIMESTAMP           OPERATION         OPTIONS         OBJECT_NAME  OBJECT_TYPE     ACCESS_PREDICATES                        OPTIMIZER  CPU_COST
        131  9/12/2011 22:24:12  SELECT STATEMENT                                                                                        ALL_ROWS       8461
        131  9/12/2011 22:24:12  TABLE ACCESS      BY INDEX ROWID  EMP          TABLE                                                    ANALYZED       8461
        131  9/12/2011 22:24:12  INDEX             UNIQUE SCAN     PK_EMP       INDEX (UNIQUE)  "EMPNO"=7788                             ANALYZED       1050
        132  9/12/2011 22:24:12  SELECT STATEMENT                                                                                        ALL_ROWS      15223
        132  9/12/2011 22:24:12  TABLE ACCESS      BY INDEX ROWID  EMP          TABLE                                                    ANALYZED      15223
        132  9/12/2011 22:24:12  INDEX             RANGE SCAN      IDX_COMP_1   INDEX           "SAL"=5000                               ANALYZED       7521
        133  9/12/2011 22:24:12  SELECT STATEMENT                                                                                        ALL_ROWS      15223
        133  9/12/2011 22:24:12  TABLE ACCESS      BY INDEX ROWID  EMP          TABLE                                                    ANALYZED      15223
        133  9/12/2011 22:24:12  INDEX             RANGE SCAN      IDX_COMP_2   INDEX           "HIREDATE"='17-NOV-1981'                 ANALYZED       7521
        134  9/12/2011 22:24:12  SELECT STATEMENT                                                                                        ALL_ROWS      14733
        134  9/12/2011 22:24:12  TABLE ACCESS      BY INDEX ROWID  EMP          TABLE                                                    ANALYZED      14733
        134  9/12/2011 22:24:12  INDEX             RANGE SCAN      IDX_COMP_1   INDEX           "SAL"=950 AND "HIREDATE"='03-DEC-1981'   ANALYZED       7321
        135  9/12/2011 22:24:12  SELECT STATEMENT                                                                                        ALL_ROWS      14733
        135  9/12/2011 22:24:12  TABLE ACCESS      BY INDEX ROWID  EMP          TABLE                                                    ANALYZED      14733
        135  9/12/2011 22:24:12  INDEX             RANGE SCAN      IDX_COMP_1   INDEX           "SAL"=3000 AND "HIREDATE"='03-DEC-1981'  ANALYZED       7321


    Here, I need to know, regarding queries no. 3 and 4...
    (1) why did Oracle choose IDX_COMP_1 in both cases, and why not IDX_COMP_2?
    (2) how does Oracle decide which index should be chosen, given that I created IDX_COMP_1 & IDX_COMP_2 on the same two columns, only with a different column order?
    (3) in the case of a composite key, is there any ordering logic? Also, if I create a composite index on col1, col2, col3, but in the WHERE clause I only use col2, then what will Oracle's behavior be with regard to using the index?

    Any other information anyone can offer about the above will be good for me...

    thnx... PC

    Here, I need to know, regarding queries no. 3 and 4...
    (1) why did Oracle choose IDX_COMP_1 in both cases, and why not IDX_COMP_2?
    (2) how does Oracle decide which index should be chosen, given that I created IDX_COMP_1 & IDX_COMP_2 on the same two columns, only with a different column order?
    (3) in the case of a composite key, is there any ordering logic? Also, if I create a composite index on col1, col2, col3, but in the WHERE clause I only use col2, then what will Oracle's behavior be with regard to using the index?

    Hello
    This is what the CBO does: it takes the lowest cost to build your execution plan.

    Everything depends on your data and how skewed it is. When you collect statistics, the CBO will decide which path is better to take, which index to use, etc...

    The order in the composite index should be such that the column with the most distinct values comes first...

    Kind regards
    Rizwan

    Edited by: Rizwan on Sep 12, 2011 22:03

  • update global indexes question

    I have a partitioned table with local partitioned indexes in 9i. In the past, when I added a new partition, the partitioned indexes became UNUSABLE, so I had to rebuild the index partition back to the USABLE state:

    ALTER INDEX myindex REBUILD PARTITION my_partition NOLOGGING;

    I just found out that when you use the UPDATE GLOBAL INDEXES clause, there is no need to rebuild the local indexes on the underlying table. It seems that UPDATE GLOBAL INDEXES also updates the local indexes to USABLE status.

    create tablespace you see

    ALTER TABLE my_table SPLIT PARTITION "TYPE_DEFAULT" VALUES ('new') INTO (PARTITION "TYPE_NEW" TABLESPACE "you see", PARTITION "TYPE_DEFAULT") UPDATE GLOBAL INDEXES;

    I wonder whether UPDATE GLOBAL INDEXES also updates local indexes, as it appears to in this case. I thought that UPDATE GLOBAL INDEXES only updated global indexes.

    Edited by: user7435395 on March 8, 2013 14:33

    Edited by: user7435395 on March 8, 2013 14:36

    Edited by: user7435395 on March 8, 2013 15:15

    >
    I just found out that when you use the UPDATE GLOBAL INDEXES clause, there is no need to rebuild the local indexes on the underlying table. It seems that UPDATE GLOBAL INDEXES also updates the local indexes to USABLE status.
    >
    Interesting! Can you post the DDL that you used for the table and indexes, so we can try to reproduce it?

    This seems to contradict what the documentation says, but that would not be anything new.

    See "Split of Partitions" in the Guide DBA 9i
    http://docs.Oracle.com/CD/B10501_01/server.920/a96521/partiti.htm#6736
    >
    Index behavior

    Regular (heap-organized) table:
    Oracle marks the new partitions (there are two) UNUSABLE in each local index.

    Unless you specify UPDATE GLOBAL INDEXES, any global indexes, or all partitions of partitioned global indexes, are marked UNUSABLE and must be rebuilt
    >
    Also you not "add" a partition you split one and creates two new partitions, which both can have data. These two partitions of the local index must be rebuilt.

    When you "add" a partition it lacks all the data so there is not need to rebuild the local index for this 'new' partition.

  • FPGA reference issues between development and run-time station options? Error code 63195

    I will do my best to describe the problem I am seeing.  Note that I have tried a few other posts that sort of touch on my problem, but they never seem to have a definitive solution.

    Background-

    I have 3 VIs:

    VI 1) opens/runs the FPGA bitfile reference and stores the reference in a global VI so that I can call the reference from other VIs. I need to do this as opposed to opening a new reference, because I use the FPGA for digital communications and it adds about 100 ms to open a new reference whenever I need to read/write between the host and the FPGA target (100 ms is a long time in the digital world!).  Some people use shift registers.  I eventually call my whole set of FPGA VIs from TestStand to run an automated test, so it was easier to break my functions up into open/close/read/write.

    VI 2) reads the FPGA reference from the global VI, passes it to a read/write control node to change settings on the target, then passes the reference out of the node back into the global VI.

    VI 3) calls Close FPGA VI Reference and is passed the reference from the global VI.

    Question-

    I do not understand why this method works fine when I run it either from TestStand with the station options set for development (not run-time) mode, or when I open a VI that calls these 3 VIs individually in sequential order, but it does NOT work when I try to run these VIs individually (Run VI 1 - open -> write ref to the global VI -> Run VI 2 - read global VI -> read/write control function -> error -63195), or when I run the same TestStand sequence but with the station options set for run-time.  Maybe I need to change the TestStand sequence to load all the modules at startup?

    Why does the global reference get lost when switching between development and run-time, and why can't LabVIEW keep the stored reference?  Is there a workaround?

    Finally found a solution.  I do not understand why it is necessary when reading from the FPGA and not when writing (in fact I do not understand why it worked at all), but I ended up changing the step properties in my TestStand sequence for the step calling my DTL_READ.vi to Properties -> Run Options -> Unload Option -> "Unload after step executes".  That seemed to do the trick to get my TestStand test sequence to work in run-time mode.

    I still don't understand why this was not required when running the same sequence in the development environment, and why it is not needed when I call my DTL_WRITE.vi.

    The DTL_READ.vi and DTL_WRITE.vi pass the "FPGA VI Reference" parameter from the LabVIEW global .vi into a read/write node.  I don't know if the root of the problem is in my TestStand host or my LabVIEW FPGA VI...

    Thanks for all the suggestions, guys!  I'm happy it works, but I am still confused by the solution.

  • Issues after BIOS and driver updates - Venue 11 Pro

    Hi all

    I own a Dell Venue 11 Pro model 7140 running 64-bit Windows 10, and about 5 days ago I installed the latest drivers and the latest BIOS version using the automatic detection on the Dell support page www.dell.com/.../drivers

    Since then, I have run into the following two issues:

    1: the Dell automatic driver detection no longer works

    2: the system does not detect when a headset is connected; the audio always comes out of the tablet speakers. I reinstalled the latest drivers, but it made no difference.

    I wonder if anyone else has experienced these problems, and what steps you have taken to solve them.

    Thank you

    I managed to solve the audio problem: it was caused by an incorrect Realtek driver installation. For some reason "Realtek Audio Manager" was not being loaded at startup, and that is why the system did not detect the status of the audio jack.

    I still have the issue with the automatic driver detection, but at least I have a working tablet again!

  • Question about consumed and active memory

    I have ESX 4.1 running on three DL585s. I have about 100 active VMs running, and I have a small question.

    My VMs are all 2008 R2 Datacenter and I gave them 1 CPU and 4 GB of RAM. When I pick a single virtual machine and look at the summary page, I see Consumed Host Memory of 4075 MB and Active Guest Memory of 81 MB. My question is: can I reduce the VM's memory to 2 GB without noticing a difference inside the virtual machine?

    It looks like you can.   To be absolutely sure, you'd need to monitor active memory long enough to get a good idea of what the average is and what the peaks are.

  • Question about the database character set and UNICODE

    Hello

    I have to configure a database for an IBM product where there is a requirement:
    "Databases must be created using the UNICODE database and National characters as UTF8 games, AL32UTF8 or AL16UTF16 or."

    My current database character set is WE8MSWIN1252. Changing the current database character set is not an option, so I have to create a new DB for this requirement.

    My question is:
    (1) how do I ensure that "databases are created using the UNICODE database character set"?

    (2) to take care of the character sets, I will use this in the database creation script:

    "
    create database db1 ...
    ..
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    .."
    Will that take care of it?

    create database db1 ...
    ..
    CHARACTER SET UTF8
    NATIONAL CHARACTER SET AL16UTF16
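    Once the new database exists, a quick way to confirm both character sets is to query the standard dictionary view (a minimal check, not specific to the IBM product):

    select parameter, value
    from   nls_database_parameters
    where  parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');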
