Index rebuild vs. ANALYZE INDEX COMPUTE STATISTICS vs. DBMS_STATS.GATHER_INDEX_STATS

Hello

I have a few doubts about the topics listed below. Please explain each topic and where each should be used. Can I use any one of them for the same purpose? What are the advantages and disadvantages of each?

(1) index rebuild and index rebuild online
(2) ANALYZE INDEX ... COMPUTE STATISTICS and ANALYZE INDEX ... VALIDATE STRUCTURE
(3) DBMS_STATS.GATHER_INDEX_STATS

In order to make good use of the cost-based optimizer, you create statistics for the data in the database.

Tags: Database

Similar Questions

  • Analysis of a free space calculation

    Could someone please explain this result?

    I need an explanation for the last column - I have no idea where these numbers come from.

    select d.tablespace_name,
           round(sum(d.maxbytes/1024/1024)) "max mb",
           round(sum(v.create_bytes/1024/1024)) "created mb",
           round(sum(v.bytes/1024/1024)) "current mb",
           round(sum(f.bytes/1024/1024)) "free mb"
      from v$datafile v, dba_data_files d, dba_free_space f
     where v.file# = d.file_id
     group by d.tablespace_name;

    The output is attached as an Excel file.

    What can you draw from this? Simple: you have a missing join condition, and your statement is therefore inaccurate.

    Only v$datafile and dba_data_files are joined; dba_free_space is not.

    Note: you will need an OUTER join

    f.file_id (+) = d.file_id

    because there is no row in dba_free_space when the space is completely full.

    -------------

    Sybrand Bakker

    Senior Oracle DBA

  • Gather statistics on a table with a text index only?

    I gathered statistics for a table that contains a text index:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'CONADDR', estimate_percent => 10, block_sample => TRUE, cascade => TRUE);

    There are a lot of tables/indexes that are never analyzed (e.g. DR$TI_CONADDR$I). Do I have to analyze those tables, too? The Oracle Text tuning guide only mentions analyzing the "base" table.

    Oracle DB version is 10.2.0.4.
    select table_name, last_analyzed, num_rows from dba_tables where table_name like '%CONADDR%';
    CONADDR     11.08.2010 10:29:37     17944660
    DR$TI_CONADDR$I          
    DR$TI_CONADDR$R          
    DR$TI_CONADDR$K          
    DR$TI_CONADDR$N          
    
    select index_name, table_name, last_analyzed, num_rows from dba_indexes where table_name like '%CONADDR%';
    SYS_IL0003730268C00004$$     CONADDR          
    IDX_CONADDR                     CONADDR     11.08.2010 10:29:46     17106050
    SYS_IL0003731165C00006$$     DR$TI_CONADDR$I          
    SYS_IOT_TOP_3731168             DR$TI_CONADDR$K          
    SYS_IL0003731170C00002$$     DR$TI_CONADDR$R          
    SYS_IOT_TOP_3731173             DR$TI_CONADDR$N          
    DR$TI_CONADDR$X                     DR$TI_CONADDR$I     11.08.2010 10:05:05     67585029
    TI_CONADDR                     CONADDR     11.08.2010 10:29:46     

    DR$ tables do NOT need to be analyzed - and should not be.

    As "secondary objects", they are not analyzed by the statistics-gathering jobs, and it is strongly recommended not to analyze them manually. All statements that access these tables are handled correctly without optimizer statistics on them.

  • Computed item value used in a condition

    I have found that if I have a text field item in a region and I set its value in a computation (after region), that item will not work in a condition that is evaluated.

    The text field item is FirstName, with no default value and "always replace...".
    The computation (after regions) assigns the static value JOHN.

    There is a second computation (after regions) for another item, with a condition. The condition is "Value of Item in Expression 1 = Expression 2",
    with FirstName in Expression 1 and JOHN in Expression 2.

    When I look at the debug output, the session state for FirstName is set to JOHN, but the second computation does not run.

    Now, if I give FirstName a default value of JOHN where the item is defined, then the second computation runs.
    It appears to me that something different happens with a computed value that prevents it from being used in a condition. Can anyone help?
    Thank you very much.

    Hello

    I tried to reproduce your case:

    I have a region containing two items, P6_X1 and P6_X2 (no default value and "always replace..."):
    For P6_X1, I created a static computation (after region, sequence 1).
    For P6_X2, I created a static computation (after region, sequence 10) with the condition "Value of Item / Column in Expression 1 = Expression 2", where Expression 1 = P6_X1 and Expression 2 = JOHN.

    And in my case it works:

    You can see what is happening in my debug output:

    0.09112     0.00030     Computation point: After Box Body
    0.09142     0.00031     ...Perform computation of item: P6_X1, type=STATIC_ASSIGNMENT
    0.09173     0.00046     ...Performing static computation
    0.09219     0.00093     ...Session State: Save "P6_X1" - saving same value: "JOHN"
    0.09312     0.00031     ...Perform computation of item: P6_X2, type=STATIC_ASSIGNMENT
    0.09342     0.00039     ...Performing static computation
    0.09381     0.00038     ...Session State: Save "P6_X2" - saving same value: "LITTLE"          
    

    Have you checked the sequence of your computations?
    What is your exact condition for the computation? For Expression 2, did you use JOHN or "JOHN"?

    Kind regards
    Aljaz

  • Range-hash versus hash partitioning, purely for report performance

    Hello

    We are evaluating different partitioning strategies and considering hash and range-hash partitioning.

    While range partitioning would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.

    Oracle 11.2.0.3 data warehouse; the central fact table is estimated to reach 500 GB, a star schema with surrogate keys, and the largest dimension hash-partitioned on its surrogate key.

    In your experience, purely on the query-performance side, has anyone found hash partitioning significantly better than partitioning by date range with hash subpartitioning underneath?

    Queries do not use partition pruning.

    The greatest benefit we hope to win from partitioning is parallel query + partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).

    Thank you

    >
    We are evaluating different partitioning strategies and considering hash and range-hash partitioning.

    While range partitioning would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.

    Oracle 11.2.0.3 data warehouse; the central fact table is estimated to reach 500 GB, a star schema with surrogate keys, and the largest dimension hash-partitioned on its surrogate key.

    In your experience, purely on the query-performance side, has anyone found hash partitioning significantly better than partitioning by date range with hash subpartitioning underneath?

    Queries do not use partition pruning.

    The greatest benefit we hope to win from partitioning is parallel query + partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).
    >
    The statements of objectives in this thread have some of the same problems and missing information as your other thread.
    Re: Compress all the existing table ain dat

    So I would say the same things that I suggested there, with minor changes.

    You are giving us your preferred solution instead of giving us information about the PROBLEM you're trying to solve.

    You must first focus on the problem:

    1. define the problem - state the desired objectives
    2. identify the options and resources available to address the problem
    3. select one or a small number of those options for evaluation and testing
    4. test the possible solutions
    5. select and implement what you consider the "best" solution
    6. monitor the results
    >
    We are evaluating different partitioning strategies and considering hash and range-hash partitioning.
    >
    Why? What is the problem that you are trying to address, and what are your desired goals? Partitioning is a solution - what's the problem?
    >
    While range partitioning would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.
    >
    Great! Do you really need, or even want, housekeeping options? If so, which ones? Do you do bulk loads? Do you purge data periodically? How often? Monthly? Yearly?

    What is the relationship, in your analysis, between partitioning and your reports running "as fast as possible"? Give us some details. Why do you think partitioning in general (or range or range-hash in particular) will somehow make your reports run faster? What kind of reports? How much data do they access to produce their output? How much data is actually returned? How often do the reports run? How much of a problem are the reports now? Do they generally meet their SLA? Or do they RARELY meet their SLA?

    Partitioning is not a performance remedy for badly written queries. Often the most effective way to improve report performance is to resolve any issues in the queries themselves or to add appropriate indexes. Have you exhausted these possibilities? Have you created and examined the execution plans for your key reports? What does that analysis show?
    >
    In your experience, purely on the query-performance side, has anyone found hash partitioning significantly better than partitioning by date range with hash subpartitioning underneath?
    >
    For a partitioned table, all data is stored in individual segments: one segment for each partition.

    For a subpartitioned table, all data is stored in individual segments: one segment for each subpartition. There is NO data stored at the partition level.

    The type of partitioning (hash versus range-hash) and the logical level of the data (partition versus subpartition) have no relevance in terms of performance.

    Query performance is directly proportional to the number of segments that have to be accessed, the type of access (via an index or a full scan) and the size of the segments (i.e. the amount of data).

    The number of segments that have to be accessed depends on Oracle's ability to prune partitions, either statically during parsing or dynamically at run time.
    >
    Queries do not use partition pruning.
    >
    Then partitioning will generally not be valuable for performance, but only for maintenance operations.
    >
    The greatest benefit we hope to win from partitioning is parallel query + partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).
    >
    Please explain why you think that partitioning will provide this benefit.

    Oracle does PARALLEL query very well on non-partitioned tables without using partition-wise joins. Recent versions of Oracle also have a DBMS_PARALLEL_EXECUTE package that provides additional features for performing PARALLEL operations in many more use cases.

    Partitioning lends itself to a natural method of CHUNKing based on the partitions/subpartitions, but that is not necessary to get the benefit of PARALLEL execution. The exception would be if partitioning provided segments that are on different spindles or that decrease disk I/O contention.

    Another missing piece of key information is the number and type of indexes that your reports need. Will you be able to use mainly LOCAL partitioned indexes? GLOBAL indexes tend to destroy any maintenance benefit that can be gained from partitioning.

  • How to hold temporary data

    Hello

    Here are my requirements:

    Select a set of data from table A - I have created a cursor for this.

    Loop through the rows in the cursor:
        Do analysis/validation
        Insert the necessary data temporarily into a table or other object, until the analysis has been done for every cursor row
    End of loop

    Once all rows in table A are validated, run an aggregation query on the temporarily stored data
    and insert the summary data into table B.

    My question here is: what should I use to store the data temporarily (there will be 3 fields - Emp_No, Rank, Sal)?

    And how do I run an aggregate function on this data, e.g. select Emp_No, sum(Sal) from <<object>> group by Rank?

    Oracle version: 10g

    Thanks in advance.

    RizlyFaisal wrote:
    Thanks to you all.

    Karthik & William, it is impossible to do this in ONE insert statement; I need to run the insert for each valid record, because...

    Because what?
    It is extremely rare for such a task to be impossible as a single INSERT ... SELECT statement.

    What analysis and calculations could you possibly need to perform on the data that actually require row-by-row (slow-by-slow) processing?

    That said, a global temporary table will allow you to store data temporarily, allow indexes to be used against it, and allow you to run further SQL against it much more easily than processing the data in collection types. Personally, though, I would do this as a single SQL statement. With analytic functions and/or the MODEL clause, it is possible to do almost anything.

  • Photosmart 6520: Cannot scan to computer

    I can no longer scan to my computer. I get a message telling me to ensure that "Scan to Computer" is enabled via the printer software. Where can I find it? The printer otherwise works fine.

    Hello

    Thanks for the additional comments.

    You can always try our suggestion: in order to enable scanning, you need to install the FULL-featured software and then activate "Scan to Computer" as shown.

    Hope that helps

    Thank you.

  • Buttons

    Hello DIAdem users

    I'm a new DIAdem user. I created my own analysis and calculation function and made an interface for it with a dialog box. I want to create a new button in the DIAdem interface so that, when it is clicked, the user dialog box appears. I tried to implement that, but it doesn't work and always gives an error. Here is my code:

    Dim MyBar

    If Not BarManager.Bars.Exists("MyBar") Then
        Set MyBar = BarManager.Bars.Add("MyBar")
    Else
        Set MyBar = BarManager.Bars("MyBar")
        Call MyBar.UsedActionObjs.RemoveAll
    End If

    Dim MyButton

    If Not BarManager.ActionObjs.Exists("Sensitivity") Then
        Set MyButton = BarManager.ActionObjs.Add("Sensitivity", "CustomButton")
    End If

    MyButton.OnClickCode.Code = "Call SUDDlgShow("Dlg1", "Sens.SUD", Null)"

    PS: the error comes from the last line!

    If someone has implemented this before, I look forward to your reply. I just want to know how to call the dialog box when the user clicks the button.

    Thank you very much in advance

    Aladdin

    Design engineer

    Hello Madishah,

    in your last line:

    MyButton.OnClickCode.Code = "Call SUDDlgShow("Dlg1", "Sens.SUD", Null)"

    you must double the quotation marks inside the string, so the line should be:

    MyButton.OnClickCode.Code = "Call SUDDlgShow(""Dlg1"", ""Sens.SUD"", Null)"

    That is how you embed a quotation mark in a string in VBScript.

  • Change the image based on a dashboard prompt

    Hello

    I'm using OBIEE 11g 11.1.1.5.

    I need to show a company logo image on the dashboard based on a dashboard prompt. I have 5 companies and each has its own logo. A single company can be selected.

    Is there a way to build that?

    Thank you

    Edited by: Andres on 16-Jan-2012 12:52

    Hi Andres,

    We can make use of conditional display of sections within the dashboard, based on the results of an analysis. One option is to write different analyses (static text or narrative view) containing your company logos and embed them in different sections of the dashboard. These sections would then be displayed conditionally based on the dashboard prompt. You can create analyses that each read the presentation variable (associated with the prompt) and return a record only if a particular company is chosen. (E.g.: create an analysis with a single column such as @{variables.CompanyPrompt}. Create a filter on it such as "equals 'Oracle'". This analysis would then return a record only if the presentation variable is set to 'Oracle'.)

    Let me know if this does not work for you and I will suggest an alternative.

    I hope this helps.

    Thank you
    Diakité

  • Number of Photos in folders

    I just did some reorganization of my Lightroom folders and files on my machine and noticed that for many folders Lightroom does not show a number indicating how many photos they contain, just a "...". The pictures are there when I click on the folder (no "?" symbol or indication of a broken link). Any idea why this happened and whether I can fix it?

    Thank you!

    The "..." is what LR displays while it is examining a group of folders to count how many photos they contain. I don't know why it would not complete this process in a reasonable period of time, unless your folder structure is very large and complex and your computer is slow. Are the correct numbers eventually displayed?

    HAL

  • Crash analysis: Tibia Index (ChnTICalc)

    Hello

    I have a few questions about the Tibia Index calculation (ChnTICalc) in DIAdem (2010, 2012, 2014).

    The DIAdem help (2014) tells me under "Dialog boxes > DIAdem ANALYSIS > Crash Analysis > Crash Analysis (TI)":

    Moment Mx specifies the channel with CFC600-filtered data...
    Moment My specifies the channel with CFC600-filtered data...
    Compression force Fz specifies the channel with CFC600-filtered data...

    On the other hand, under "Programming reference > Alphabetical programming reference > Commands and functions > Command: ChnTICalc":

    Note: The input channels must be filtered with CFC600.

    Question 1: Does ChnTICalc expect the unfiltered channels, or channels already filtered with CFC600?

    The DIAdem help (2014) tells me under "Basics > DIAdem ANALYSIS > Crash Analysis criteria > ... > TI":

    Note: Only the axial compression forces are used in the calculation. The tensile forces must be set to 0.

    Question 2: Does ChnTICalc set the tensile forces to 0?

    ChnTICalc calculates the TI result channel but returns only the maximum value in a variable.

    I need the whole channel for a report plot and for limit curves.

    Question 3: Is it possible to keep the whole TI result channel?

    Thank you

    GEMÜ

    Hi Andreas,

    Thanks for your workaround suggestions.

    I have implemented the calculation on my own.

    See the code below. Maybe someone else also needs the TI curve.

    Dim oAg, oTI
    Set oAg = Data.Root.ActiveChannelGroup.Channels
    Set oTI = ChnTICalcObj(oAg("11TIBILEUPH3MOXB"), oAg("11TIBILEUPH3MOYB"), oAg("11TIBILEUPH3FOZB"), _
                           225.0, 35.9, "11TIINLEUPH3000B", "Tibia Index Curve")

    MsgBox oTI.Properties.Item("Maximum").Value

    Function ChnTICalcObj(ByVal MxObj_Nm, _
                          ByVal MyObj_Nm, _
                          ByVal FzObj_kN, _
                          ByVal Mcr_Nm, _
                          ByVal Fcz_kN, _
                          ByVal sTI_ResultName, _
                          ByVal sTI_ResultDescription)

        Set ChnTICalcObj = MxObj_Nm.ChannelGroup.Channels.Add(sTI_ResultName, DataTypeFloat64)
        With ChnTICalcObj
            .Name = sTI_ResultName
            .Properties.Item("unit_string").Value = ""
            .Properties.Item("description").Value = sTI_ResultDescription
        End With

        Dim sFormula, aSymbols(5), aValues(5)

        sFormula = "TI = abs(sqrt(Mx^2 + My^2) / Mcr) + abs((Fz * (Fz < 0.0)) / Fcz)"

        aSymbols(0) = "TI"
        aSymbols(1) = "Mx"
        aSymbols(2) = "My"
        aSymbols(3) = "Fz"
        aSymbols(4) = "Mcr"
        aSymbols(5) = "Fcz"

        Set aValues(0) = ChnTICalcObj
        Set aValues(1) = MxObj_Nm
        Set aValues(2) = MyObj_Nm
        Set aValues(3) = FzObj_kN
        aValues(4) = Mcr_Nm
        aValues(5) = Fcz_kN

        Call Calculate(sFormula, aSymbols, aValues)
    End Function

    Regards

    GEMÜ

  • Performance reporting - earned value index calculations

    Hello
    On the Reports tab, Performance link, in the earned value section on the lower right, how do we get the Cost Performance Index, Schedule Performance Index and To Complete Performance Index to calculate? Our plans display blanks for these values.
    Thank you, Kevin

    Hello

    Earned value

    Planned value is the total budgeted cost up to the date of the analysis.
    Earned value is calculated for the task using the following formula:

    Earned value = current budget x physical percent complete

    If the physical percent complete rollup for the workplan is Effort, then earned value is an effort value. Otherwise, earned value is based on cost. Depending on the actual cumulative percent complete method, Oracle Projects uses the baseline planned effort or cost of the task to determine the current budget.

    Schedule Performance Index

    Calculated for the task using the following formula:

    Schedule Performance Index = earned value / planned value

    Cost Performance Index

    Calculated for the task using the following formula:

    Cost Performance Index = earned value / actual cost
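    The three formulas above can be sketched with made-up figures (the budget, percent complete, planned value and actual cost below are hypothetical, purely for illustration):

```python
# Hypothetical task figures (not from any real plan).
current_budget = 100_000.0        # current budget for the task
physical_pct_complete = 0.40      # physical percent complete
planned_value = 50_000.0          # budgeted cost up to the analysis date
actual_cost = 44_000.0            # actual cost to date

# Earned value = current budget x physical percent complete
earned_value = current_budget * physical_pct_complete

# Schedule Performance Index = earned value / planned value
spi = earned_value / planned_value

# Cost Performance Index = earned value / actual cost
cpi = earned_value / actual_cost

print(earned_value, spi, round(cpi, 2))  # 40000.0 0.8 0.91
```

    An SPI or CPI below 1.0 indicates the task is behind schedule or over budget, respectively.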

    If the values are still null after running the project performance reporting requests (jobs), it seems earned value is not getting calculated; cross-check the workplan's rollup method, budgets, and physical percent complete.

    Thank you
    Krishna

  • How does Oracle decide whether to use an index or a full scan (statistics)?

    Hi guys,

    Let's say I have an index on a column.
    Table and index statistics have been gathered (without histograms).

    Let's say I run: select * from table where a = 5;
    Oracle performs a full scan.
    But from which statistics can it actually know that the greater part of the column = 5 (histograms are not used)?

    After gathering statistics, we have the following:
    Statistical table:
    (NUM_ROWS)
    (BLOCKS)
    (EMPTY_BLOCKS)
    (AVG_SPACE)
    (CHAIN_COUNT)
    (AVG_ROW_LEN)

    Index statistics:
    (BLEVEL)
    (LEAF_BLOCKS)
    (DISTINCT_KEYS)
    (AVG_LEAF_BLOCKS_PER_KEY)
    (AVG_DATA_BLOCKS_PER_KEY)
    (CLUSTERING_FACTOR)

    Thank you





    Values in indexed column (A)
    ======
    1
    1
    2
    2
    5
    5
    5
    5
    5
    5

    I had prepared a few explanations and did not notice that the topic had been marked as answered.

    My sentence is not quite true.

    A column "without histograms" means that the column has only one bucket.

    More correctly: even without a histogram there is data in dba_tab_histograms which can be considered a single bucket for the whole column. In fact, this data is extracted from hist_head$, not from histgrm$ as for usual buckets.
    Technically, without histograms there are no buckets at all.

    Let's create a table with an asymmetric data distribution.

    SQL> create table t as
      2  select least(rownum,3) as val, '*' as pad
      3    from dual
      4  connect by level <= 1000000;
    
    Table created
    
    SQL> create index idx on t(val);
    
    Index created
    
    SQL> select val, count(*)
      2    from t
      3   group by val;
    
           VAL   COUNT(*)
    ---------- ----------
             1          1
             2          1
             3     999998
    

    So, we have a table with a very uneven data distribution.
    We gather statistics without histograms.

    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);
    
    PL/SQL procedure successfully completed
    
    SQL> select blocks, num_rows  from dba_tab_statistics
      2   where table_name = 'T';
    
        BLOCKS   NUM_ROWS
    ---------- ----------
          3106    1000000
    
    SQL> select blevel, leaf_blocks, clustering_factor
      2    from dba_ind_statistics t
      3   where table_name = 'T'
      4     and index_name = 'IDX';
    
        BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
    ---------- ----------- -----------------
             2        4017              3107
    
    SQL> select column_name,
      2         num_distinct,
      3         density,
      4         num_nulls,
      5         low_value,
      6         high_value
      7    from dba_tab_col_statistics
      8   where table_name = 'T'
      9     and column_name = 'VAL';
    
    COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS      LOW_VALUE      HIGH_VALUE
    ------------ ------------ ---------- ---------- -------------- ---------------
    VAL                     3 0,33333333          0           C102            C104
    

    Therefore, Oracle assumes that the values between 1 and 3 (raw C102 to C104) are distributed uniformly and that the density of the distribution is 0.33.
    Let's explain the plan:

    SQL> explain plan for
      2  select --+ no_cpu_costing
      3         *
      4    from t
      5   where val = 1
      6  ;
    
    Explained
    
    SQL> @plan
    
    --------------------------------------------------
    | Id  | Operation         | Name | Rows  | Cost  |
    --------------------------------------------------
    |   0 | SELECT STATEMENT  |      |   333K|   300 |
    |*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
    --------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter("VAL"=1)
    Note
    -----
       - cpu costing is off (consider enabling it)
    

    An excerpt from the 10053 trace:

    BASE STATISTICAL INFORMATION
    ***********************
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    ***************************************
    SINGLE TABLE ACCESS PATH
      -----------------------------------------
      BEGIN Single Table Cardinality Estimation
      -----------------------------------------
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
      -----------------------------------------
      END   Single Table Cardinality Estimation
      -----------------------------------------
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 2377.00  resc_cpu: 0
        ix_sel: 0.33333  ix_sel_with_filters: 0.33333
        Cost: 2377.00  Resp: 2377.00  Degree: 1
      Best:: AccessPath: TableScan
             Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
    

    The FTS here costs 300 and the Index Range Scan costs 2377.
    I disabled CPU costing, so the selectivity does not affect the cost of the FTS.
    The cost of the Index Range Scan is calculated as
    blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017 * 0.33333 + 3107 * 0.33333) = 2377.
    Oracle believes that it must read 2 root/branch index blocks, 1339 index leaf blocks and 1036 table blocks.
    Note that the selectivity is the main component of the cost of the Index Range Scan.
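    That arithmetic can be reproduced in a few lines (a sketch assuming each component is rounded up to whole blocks, which matches the trace figures here; Oracle's exact rounding may differ):

```python
from math import ceil

# Statistics from the 10053 trace above.
blevel, leaf_blocks, clustering_factor = 2, 4017, 3107
num_distinct = 3

# Without a histogram, selectivity = density = 1 / NDV.
selectivity = 1 / num_distinct

# blevel + leaf_blocks * selectivity + clustering_factor * selectivity
cost = blevel + ceil(leaf_blocks * selectivity) + ceil(clustering_factor * selectivity)
print(cost)  # 2 + 1339 + 1036 = 2377
```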

    Now let's gather histograms:

    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);
    
    PL/SQL procedure successfully completed
    

    If you look at dba_tab_histograms, you can now see more:

    SQL> select endpoint_value,
      2         endpoint_number
      3    from dba_tab_histograms
      4   where table_name = 'T'
      5     and column_name = 'VAL'
      6  ;
    
    ENDPOINT_VALUE ENDPOINT_NUMBER
    -------------- ---------------
                 1               1
                 2               2
                 3         1000000
    

    ENDPOINT_VALUE is the value of the column (as a number, for any data type) and ENDPOINT_NUMBER is the cumulative number of rows.
    Number of rows for any ENDPOINT_VALUE = ENDPOINT_NUMBER for this ENDPOINT_VALUE - ENDPOINT_NUMBER of the previous ENDPOINT_VALUE.
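    The differencing can be sketched as follows, using the three endpoint rows above:

```python
# endpoint_value -> endpoint_number (cumulative), as in dba_tab_histograms above.
histogram = {1: 1, 2: 2, 3: 1_000_000}

# Rows per value = this ENDPOINT_NUMBER minus the previous ENDPOINT_NUMBER.
rows_per_value = {}
previous = 0
for value, endpoint_number in sorted(histogram.items()):
    rows_per_value[value] = endpoint_number - previous
    previous = endpoint_number

print(rows_per_value)  # {1: 1, 2: 1, 3: 999998}
```

    The result matches the group-by counts from the table itself: 1, 1 and 999998 rows.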

    Explain plan and 10053 trace of the same query:

    ------------------------------------------------------------
    | Id  | Operation                   | Name | Rows  | Cost  |
    ------------------------------------------------------------
    |   0 | SELECT STATEMENT            |      |     1 |     4 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
    |*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
    ------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("VAL"=1)
    Note
    -----
       - cpu costing is off (consider enabling it)
    
    ***************************************
    BASE STATISTICAL INFORMATION
    ***********************
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    ***************************************
    SINGLE TABLE ACCESS PATH
      -----------------------------------------
      BEGIN Single Table Cardinality Estimation
      -----------------------------------------
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
        Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
      -----------------------------------------
      END   Single Table Cardinality Estimation
      -----------------------------------------
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 4.00  resc_cpu: 0
        ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IDX
             Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
    

    Pay attention to the selectivity: ix_sel: 1.0000e-06.
    The cost of the FTS is still the same, 300,
    but the cost of the Index Range Scan is now 4: 2 root/branch blocks + 1 leaf block + 1 table block.
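    With the frequency histogram, the same cost sketch (again rounding each component up to whole blocks; Oracle's exact rounding may differ) reproduces the new figure:

```python
from math import ceil

# Index statistics from the 10053 trace.
blevel, leaf_blocks, clustering_factor = 2, 4017, 3107
num_rows = 1_000_000

# The frequency histogram says only 1 of the 1,000,000 rows has val = 1.
selectivity = 1 / num_rows  # 1.0000e-06, the ix_sel in the trace

cost = blevel + ceil(leaf_blocks * selectivity) + ceil(clustering_factor * selectivity)
print(cost)  # 2 + 1 + 1 = 4
```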

    So, the conclusion: histograms allow a more accurate selectivity to be calculated, with the goal of producing more efficient execution plans.

    Alexander Anokhin
    http://alexanderanokhin.WordPress.com/

  • Doubt about index rebuilding

    Hello

    Can someone tell me the criteria for rebuilding an index? In what situations should one go for an index rebuild?

    ora_2009 wrote:
    You can run the command "ANALYZE INDEX ... VALIDATE STRUCTURE" on the relevant index - each invocation of this command creates a single row in the INDEX_STATS view.
    This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The "badness" of the index can then be judged by the ratio of DEL_LF_ROWS to LF_ROWS.

    Do not forget that this command locks the table for the duration of the analysis.
    How do you (personally) interpret the ratio of del_lf_rows to lf_rows? Would you, for example, rebuild based on the following figures:

    LF_ROWS                       : 328158
    DEL_LF_ROWS                   : 7354
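    For reference, those figures correspond to a deleted-leaf-row fraction of about 2.2% (a trivial check):

```python
# INDEX_STATS figures quoted above.
lf_rows = 328_158
del_lf_rows = 7_354

deleted_fraction = del_lf_rows / lf_rows
print(f"{deleted_fraction:.1%}")  # 2.2%
```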
    

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke

  • I have followed these steps several times, but it still won't rebuild the index. Is there something else getting in the way?

    I have tried to rebuild the Spotlight index several times, but it didn't work. I followed the steps through System Preferences, but rebuilding the index produces no result. Is there another way to do it, or is there another problem preventing it from working?

    Do you mean the following steps:

    Rebuild the Spotlight index on your Mac - Apple Support
