index statistics
Hi all,
Once each week we gather statistics on all schemas. But when I do:
SQL> select HEIGHT,BLOCKS from index_stats where name='FND_CONCURRENT_REQUESTS_N1';
no rows selected
SQL> select count(*) from index_stats;
COUNT(*)
----------
0
1 row selected.
INDEX_STATS has no rows, and our environment has statistics_level set to TYPICAL.
Thank you,
Baskar.l
Are you logged in as APPLSYS? INDEX_STATS reports only for the current schema.
Hemant K Chitale
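As a hedged sketch of that point: to see rows for FND_CONCURRENT_REQUESTS_N1 you would need to run the validation from a session connected as the owning schema (APPLSYS is assumed here from the question) and query INDEX_STATS in that same session, since the view is session-local and holds only the last validated index:

```
-- Connect as the schema that owns the index (assumed APPLSYS here)
CONNECT applsys/password

-- Populates INDEX_STATS for this session only; note this takes a
-- DML-blocking lock on the table, so run it in a quiet window
ANALYZE INDEX fnd_concurrent_requests_n1 VALIDATE STRUCTURE;

-- Query in the SAME session, before the next VALIDATE overwrites it
SELECT height, blocks, lf_rows, del_lf_rows
FROM   index_stats
WHERE  name = 'FND_CONCURRENT_REQUESTS_N1';
```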
SQL> analyze index SALES_SALE_DATE_NDX validate structure;
Index analyzed.
SQL> select * from index_stats;
HEIGHT BLOCKS NAME
---------- ---------- ------------------------------
PARTITION_NAME LF_ROWS LF_BLKS LF_ROWS_LEN LF_BLK_LEN
------------------------------ ---------- ---------- ----------- ----------
BR_ROWS BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
---------- ---------- ----------- ---------- ----------- ---------------
DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY
------------- ----------------- ----------- ---------- ---------- ------------
BLKS_GETS_PER_ACCESS PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
-------------------- ---------- ------------ -------------- ----------------
3 2816 SALES_SALE_DATE_NDX
1000000 2653 19000000 7996
2652 6 37063 8028 0 0
1000000 1 21261556 19037063 90 1
4 0 0 0 0
SQL> analyze index SALES_SALE_DATE_NDX validate structure online;
Index analyzed.
SQL> select * from index_stats;
HEIGHT BLOCKS NAME
---------- ---------- ------------------------------
PARTITION_NAME LF_ROWS LF_BLKS LF_ROWS_LEN LF_BLK_LEN
------------------------------ ---------- ---------- ----------- ----------
BR_ROWS BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
---------- ---------- ----------- ---------- ----------- ---------------
DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY
------------- ----------------- ----------- ---------- ---------- ------------
BLKS_GETS_PER_ACCESS PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
-------------------- ---------- ------------ -------------- ----------------
3 2816 SALES_SALE_DATE_NDX
1000000 2653 19000000 7996
2652 6 37063 8028 0 0
1000000 1 21261556 19037063 90 1
4 0 0 0 0
SQL> analyze index SYS_C009094 compute statistics;
Index analyzed.
SQL> select * from index_stats;
HEIGHT BLOCKS NAME
---------- ---------- ------------------------------
PARTITION_NAME LF_ROWS LF_BLKS LF_ROWS_LEN LF_BLK_LEN
------------------------------ ---------- ---------- ----------- ----------
BR_ROWS BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
---------- ---------- ----------- ---------- ----------- ---------------
DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY
------------- ----------------- ----------- ---------- ---------- ------------
BLKS_GETS_PER_ACCESS PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
-------------------- ---------- ------------ -------------- ----------------
3 2816 SALES_SALE_DATE_NDX
1000000 2653 19000000 7996
2652 6 37063 8028 0 0
1000000 1 21261556 19037063 90 1
4 0 0 0 0
SQL> set pages600
SQL> set linesize 132
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition ....
SQL> connect
Enter user-name: user/password
Connected.
SQL> l
1* select * from index_stats
SQL> /
no rows selected
SQL>
Also note that INDEX_STATS is not populated by a VALIDATE STRUCTURE ONLINE:
SQL> l
1* select * from index_stats
SQL> /
no rows selected
SQL> analyze index SALES_SALE_DATE_NDX validate structure online;
Index analyzed.
SQL> select * from index_stats;
no rows selected
SQL> analyze index SALES_SALE_DATE_NDX validate structure;
Index analyzed.
SQL> select * from index_stats;
HEIGHT BLOCKS NAME PARTITION_NAME LF_ROWS LF_BLKS LF_ROWS_LEN LF_BLK_LEN
---------- ---------- ------------------------------ ------------------------------ ---------- ---------- ----------- ----------
BR_ROWS BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE
---------- ---------- ----------- ---------- ----------- --------------- ------------- ----------------- ----------- ----------
PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
---------- ------------ -------------------- ---------- ------------ -------------- ----------------
3 2816 SALES_SALE_DATE_NDX 1000000 2653 19000000 7996
2652 6 37063 8028 0 0 1000000 1 21261556 19037063
90 1 4 0 0 0 0
SQL>
Edited by: Hemant K Chitale on June 10, 2010 10:42
Edited by: Hemant K Chitale on June 10, 2010 10:44
Tags: Database
Similar Questions
-
Hi all
The DB is 11.2.0.3 on a Linux machine.
In my production DB, I issued the following command to copy table/index statistics between partitions:
EXEC DBMS_STATS.COPY_TABLE_STATS (OWNNAME => 'AA', TABNAME => 'AA_TABLE', SRCPARTNAME => 'P201302', DSTPARTNAME => 'P201303');
Can the command above cause the index statistics to be gathered freshly instead of copying the table/index statistics?
After I issued the command, some of the index partitions' statistics appear to have been freshly gathered (LAST_ANALYZED changed).
I'm still not sure whether the two events are related as cause and effect.
Thanks in advance.
Best regards.

Please check the doc below:
http://oracledoug.com/Serendipity/index.php?/archives/1596-statistics-on-partitioned-tables-part-6a-COPY_TABLE_STATS.html
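To check what actually happened, one hedged approach is to re-run the copy and then compare LAST_ANALYZED on the affected index partitions (owner and object names below are taken from the question; the dictionary views are standard):

```
-- Copy partition-level table (and dependent index) statistics
BEGIN
  DBMS_STATS.COPY_TABLE_STATS(
    ownname     => 'AA',
    tabname     => 'AA_TABLE',
    srcpartname => 'P201302',
    dstpartname => 'P201303');
END;
/

-- A copied partition's stats keep the LAST_ANALYZED of the copy operation;
-- compare source and destination partitions to see copied vs regathered
SELECT index_name, partition_name, num_rows, last_analyzed
FROM   dba_ind_partitions
WHERE  index_owner = 'AA'
AND    partition_name IN ('P201302', 'P201303')
ORDER  BY index_name, partition_name;
```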
Hope this helps,
Regards,
-
compute vs estimate index statistics
Hello
Does anyone know what criteria can be used to decide when it is necessary to compute statistics vs estimate statistics on an index?
I'm in a situation where I'd like to gather statistics on all indexes in a production DB. The indexes are on tables that vary considerably in size, from very small (< 10 records) to very large (> 150 million records). Based on previous experience, I think estimated stats would be appropriate on the indexes of the largest tables and computed stats would be appropriate on the indexes of the smaller tables. The reason is that when I tried to compute statistics on the large indexes, each took several hours to complete. That was not ideal, because I have only a small window in which to gather the index statistics. Conversely, there are some medium-sized indexes whose stats are currently estimated, and I'm fairly certain they would benefit from computed stats.
I thought I had read some Oracle documentation somewhere that explains how to evaluate when it is appropriate to compute vs estimate, but I can't seem to find it.
EDIT: I should mention that this is a 10gR2 database (10.2.0.3).
Advice or assistance in this matter would be greatly appreciated.
Thank you very much.
Edited by: x94qvi on November 14, 2010 19:02

Hello,
Please read these articles by Tom, which will give you a clear understanding of compute vs estimate for any object, be it a table or an index:
http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:432169917482
http://download.Oracle.com/docs/CD/B19306_01/server.102/b14211/stats.htm
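On 10gR2 the usual middle ground is to let Oracle choose the sample size per object instead of hand-tuning compute vs estimate; a minimal sketch (the table names are placeholders, not from the question):

```
-- Let Oracle choose the sample size per object (reasonable default on 10.2)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'BIG_TABLE',        -- placeholder name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);              -- gather index stats too
END;
/

-- For a small table where a full read is cheap, 100% is effectively COMPUTE
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'SMALL_TABLE',      -- placeholder name
    estimate_percent => 100,
    cascade          => TRUE);
END;
/
```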
Thank you
Edited by: CKPT on November 15, 2010 08:21
-
Hello
I have the following settings in a database:
How should I interpret this info about the index — is the index good, fragmented, or ... ?

SQL> select * from index_stats;

no rows selected

statistics_level      string   TYPICAL
timed_os_statistics   integer  0
timed_statistics      boolean  TRUE
db_block_size         integer  8192

10g Enterprise Edition Release 10.2.0.2.0 - 64bit
Thanks a ton.

Hello,
should I rebuild this index or not...
If done correctly, rebuilding or coalescing an index is 100% safe.
Have you considered an index coalesce instead?
But first, what is your motivation?
- To recover disk space after a significant deletion?
- To improve performance?
- Another reason?
Please read this carefully; it explains the problem:
http://www.DBA-Oracle.com/t_index_rebuilding_issues.htm
Article talks about index_stats
Yes, INDEX_STATS collects additional details on an index when you use the ANALYZE ... VALIDATE STRUCTURE command.
It is mainly used to justify an index rebuild, and it is risky to run casually because it consumes a lot of resources and can cause locking issues...
BTW, that link is just one part of a 4-part series; be sure to read the other parts:
http://jonathanlewis.WordPress.com/2009/09/15/index-explosion-4/#comment-34512
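If you do run VALIDATE STRUCTURE to evaluate a rebuild, the figures usually examined are the deleted-leaf-row ratio and space usage; a sketch (the index name is a placeholder, and the thresholds people apply to pct_deleted are rules of thumb, not documented limits):

```
ANALYZE INDEX my_index VALIDATE STRUCTURE;   -- my_index is a placeholder

-- INDEX_STATS holds one row, for the current session only
SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows * 100 / NULLIF(lf_rows, 0), 2) AS pct_deleted,
       pct_used
FROM   index_stats;
```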
I hope this helps...
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: the definitive reference".
http://www.rampant-books.com/t_oracle_tuning_book.htm
"Time flies like an arrow; fruit flies like a banana." -
How does Oracle decide whether to use an index or a full scan (statistics)?
Hi guys,
Let's say I have an index on a column.
Tables and index statistics were collected. (without the histograms).
Let's say I run: select * from table where a = 5;
Oracle performs a full scan.
But from which statistics can it know that the greater part of the column = 5? (histograms are not used)
After analysis, we get the following:
Table statistics:
(NUM_ROWS)
(BLOCKS)
(EMPTY_BLOCKS)
(AVG_SPACE)
(CHAIN_COUNT)
(AVG_ROW_LEN)
Index statistics:
(BLEVEL)
(LEAF_BLOCKS)
(DISTINCT_KEYS)
(AVG_LEAF_BLOCKS_PER_KEY)
(AVG_DATA_BLOCKS_PER_KEY)
(CLUSTERING_FACTOR)
Thank you
Index of column (A)
======
1
1
2
2
5
5
5
5
5
5

I have prepared a few explanations and did not notice that the topic had been marked as answered.
My earlier sentence is not quite true.
A column "without histograms" means that the column has only one bucket.
More correctly: even without a histogram there is data in dba_tab_histograms which can be considered a single bucket for the whole column. In fact, this data is extracted from hist_head$, not from histgrm$ like normal buckets.
Technically there are no buckets without gathered histograms.

Let's create a table with a skewed data distribution.
SQL> create table t as
  2  select least(rownum,3) as val, '*' as pad
  3  from dual
  4  connect by level <= 1000000;

Table created

SQL> create index idx on t(val);

Index created

SQL> select val, count(*)
  2  from t
  3  group by val;

       VAL   COUNT(*)
---------- ----------
         1          1
         2          1
         3     999998
So we have a table with a very uneven data distribution.
We gather statistics without histograms:

SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);

PL/SQL procedure successfully completed

SQL> select blocks, num_rows from dba_tab_statistics
  2  where table_name = 'T';

    BLOCKS   NUM_ROWS
---------- ----------
      3106    1000000

SQL> select blevel, leaf_blocks, clustering_factor
  2  from dba_ind_statistics t
  3  where table_name = 'T'
  4  and index_name = 'IDX';

    BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
---------- ----------- -----------------
         2        4017              3107

SQL> select column_name,
  2         num_distinct,
  3         density,
  4         num_nulls,
  5         low_value,
  6         high_value
  7  from dba_tab_col_statistics
  8  where table_name = 'T'
  9  and column_name = 'VAL';

COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS LOW_VALUE  HIGH_VALUE
------------ ------------ ---------- ---------- ---------- ----------
VAL                     3 0.33333333          0 C102       C104
Therefore, Oracle assumes that the values between 1 and 3 (raw C102 to C104) are distributed uniformly, and the density of the distribution is 0.33.
Let's explain the plan:

SQL> explain plan for
  2  select --+ no_cpu_costing
  3         *
  4  from   t
  5  where  val = 1
  6  ;

Explained

SQL> @plan
--------------------------------------------------
| Id  | Operation         | Name | Rows  | Cost  |
--------------------------------------------------
|   0 | SELECT STATEMENT  |      |   333K|   300 |
|*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
--------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("VAL"=1)

Note
-----
   - cpu costing is off (consider enabling it)
An excerpt from the 10053 trace:
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: T  Alias: T
    #Rows: 1000000  #Blks: 3106  AvgRowLen: 5.00
Index Stats::
  Index: IDX  Col#: 1
    LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
***************************************
SINGLE TABLE ACCESS PATH
  -----------------------------------------
  BEGIN Single Table Cardinality Estimation
  -----------------------------------------
  Column (#1): VAL(NUMBER)
    AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
  Table: T  Alias: T
    Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
  -----------------------------------------
  END   Single Table Cardinality Estimation
  -----------------------------------------
  Access Path: TableScan
    Cost:  300.00  Resp: 300.00  Degree: 0
      Cost_io: 300.00  Cost_cpu: 0
      Resp_io: 300.00  Resp_cpu: 0
  Access Path: index (AllEqRange)
    Index: IDX
    resc_io: 2377.00  resc_cpu: 0
    ix_sel: 0.33333  ix_sel_with_filters: 0.33333
    Cost: 2377.00  Resp: 2377.00  Degree: 1
  Best:: AccessPath: TableScan
         Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
The FTS here costs 300 and the Index Range Scan costs 2377.
I disabled CPU costing, so the selectivity does not affect the cost of the FTS.
The cost of the Index Range Scan is calculated as
blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017 * 0.33333 + 3107 * 0.33333) = 2377.
Oracle believes it must read 2 root/branch index blocks, 1339 index leaf blocks and 1036 table blocks.
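The arithmetic can be verified directly from SQL with the same figures as the 10053 trace:

```
-- blevel + ceil(leaf_blocks * sel) + ceil(clustering_factor * sel)
SELECT 2 + CEIL(4017 * 0.33333) + CEIL(3107 * 0.33333) AS index_cost
FROM   dual;
-- 2 + 1339 + 1036 = 2377, matching resc_io in the trace
```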
Note that the selectivity is the main component of the Index Range Scan cost.

Now let's gather histograms:
SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);

PL/SQL procedure successfully completed
If you look at dba_tab_histograms you can now see more:
SQL> select endpoint_value,
  2         endpoint_number
  3  from dba_tab_histograms
  4  where table_name = 'T'
  5  and column_name = 'VAL'
  6  ;

ENDPOINT_VALUE ENDPOINT_NUMBER
-------------- ---------------
             1               1
             2               2
             3         1000000
ENDPOINT_VALUE is the value of the column (in number for any type of data) and ENDPOINT_NUMBER is the cumulative number of lines.
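Since ENDPOINT_NUMBER is cumulative, the per-value row counts can be derived with an analytic function; a sketch against the same T/VAL example:

```
SELECT endpoint_value,
       endpoint_number
       - LAG(endpoint_number, 1, 0)
           OVER (ORDER BY endpoint_number) AS num_rows
FROM   dba_tab_histograms
WHERE  table_name  = 'T'
AND    column_name = 'VAL';
-- Expected: 1 row for value 1, 1 row for value 2, 999998 rows for value 3
```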
Number of rows for any ENDPOINT_VALUE = ENDPOINT_NUMBER of this ENDPOINT_VALUE - ENDPOINT_NUMBER of the previous ENDPOINT_VALUE.

Let's explain the plan and take the 10053 trace for the same query:
------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Cost  |
------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |     4 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
|*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("VAL"=1)

Note
-----
   - cpu costing is off (consider enabling it)
***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: T  Alias: T
    #Rows: 1000000  #Blks: 3106  AvgRowLen: 5.00
Index Stats::
  Index: IDX  Col#: 1
    LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
***************************************
SINGLE TABLE ACCESS PATH
  -----------------------------------------
  BEGIN Single Table Cardinality Estimation
  -----------------------------------------
  Column (#1): VAL(NUMBER)
    AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
    Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
  Table: T  Alias: T
    Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
  -----------------------------------------
  END   Single Table Cardinality Estimation
  -----------------------------------------
  Access Path: TableScan
    Cost:  300.00  Resp: 300.00  Degree: 0
      Cost_io: 300.00  Cost_cpu: 0
      Resp_io: 300.00  Resp_cpu: 0
  Access Path: index (AllEqRange)
    Index: IDX
    resc_io: 4.00  resc_cpu: 0
    ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
    Cost: 4.00  Resp: 4.00  Degree: 1
  Best:: AccessPath: IndexRange  Index: IDX
         Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
Note the selectivity now: ix_sel: 1.0000e-06.
The cost of the FTS is still the same = 300, but the cost of the Index Range Scan is now 4: 2 root/branch blocks + 1 leaf block + 1 table block.

So, the conclusion: histograms let the optimizer calculate more accurate selectivity. The goal is more efficient execution plans.
Alexander Anokhin
http://alexanderanokhin.WordPress.com/ -
Impdp error ORA-39083: object type INDEX failed to create
Hi Experts,
I get the following error when importing the HR schema after dropping it. The DB is 11.2.0.3 and EBS is R12.1.3.
I did the export with this command:
expdp hr/hr schemas=hr directory=TEST_DIR dumpfile=HR.dmp logfile=expdpHR.log statistics=none
Then I dropped the user HR with the cascade option.
And tried to import the HR schema into the database with the following:
impdp system/manager schemas=hr directory=TEST_DIR dumpfile=HR.dmp logfile=expdpHR.log statistics=none
Here is the error:
. . imported "HR"."PQH_SS_PRINT_DATA"                0 KB  0 rows
. . imported "HR"."PQH_TJR_SHADOW"                   0 KB  0 rows
. . imported "HR"."PQH_TXN_JOB_REQUIREMENTS"         0 KB  0 rows
. . imported "HR"."PQH_WORKSHEET_BUDGET_SETS_EFC"    0 KB  0 rows
. . imported "HR"."PQH_WORKSHEET_DETAILS_EFC"        0 KB  0 rows
. . imported "HR"."PQH_WORKSHEET_PERIODS_EFC"        0 KB  0 rows
. . imported "HR"."PQP_ALIEN_TRANSACTION_DATA"       0 KB  0 rows
. . imported "HR"."PQP_ANALYZED_ALIEN_DATA"          0 KB  0 rows
. . imported "HR"."PQP_ANALYZED_ALIEN_DETAILS"       0 KB  0 rows
. . imported "HR"."PQP_EXCEPTION_REPORTS_EFC"        0 KB  0 rows
. . imported "HR"."PQP_EXT_CROSS_PERSON_RECORDS"     0 KB  0 rows
. . imported "HR"."PQP_FLXDU_FUNC_ATTRIBUTES"        0 KB  0 rows
. . imported "HR"."PQP_FLXDU_XML_TAGS"               0 KB  0 rows
. . imported "HR"."PQP_GAP_DURATION_SUMMARY"         0 KB  0 rows
. . imported "HR"."PQP_PENSION_TYPES_F_EFC"          0 KB  0 rows
. . imported "HR"."PQP_SERVICE_HISTORY_PERIODS"      0 KB  0 rows
. . imported "HR"."PQP_VEHICLE_ALLOCATIONS_F_EFC"    0 KB  0 rows
. . imported "HR"."PQP_VEHICLE_DETAILS_EFC"          0 KB  0 rows
. . imported "HR"."PQP_VEHICLE_REPOSITORY_F_EFC"     0 KB  0 rows
. . imported "HR"."PQP_VEH_ALLOC_EXTRA_INFO"         0 KB  0 rows
. . imported "HR"."PQP_VEH_REPOS_EXTRA_INFO"         0 KB  0 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle text error:
DRG-50857: error oracle in drvxtab.create_index_tables
ORA-00959: tablespace "APPS_TS_TX_IDX_NEW" does not exist
Failing sql is:
CREATE INDEX "HR"."IRC_POSTING_CON_TL_CTX" ON "HR"."IRC_POSTING_CONTENTS_TL" ("NAME") INDEXTYPE IS "CTXSYS"."CONTEXT" PARALLEL 1
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 11:16:07
SQL> select count(object_name), object_type from dba_objects where owner = 'HR' group by object_type;
OBJECT_TYPE        COUNT(OBJECT_NAME)
------------------ -------------------
INDEX PARTITION                     37
SEQUENCE                           799
TABLE PARTITION                     12
LOB                                 70
PACKAGE BODY                         4
PACKAGE                              4
TRIGGER                              3
INDEX                             2936
TABLE                             1306
Could you please advise?
Thank you
MZ
MZ,
I get the following error when importing the HR schema after dropping it. The DB is 11.2.0.3 and EBS is R12.1.3.
Export and import of individual seeded Oracle E-Business Suite schemas is not supported, as it will violate referential integrity (except for custom schemas, provided you have no dependencies).
Thank you
Hussein
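Leaving the EBS support question aside, the immediate ORA-00959 could presumably be worked around by remapping the missing tablespace at import time; a hedged sketch reusing the commands from the question (the target tablespace name APPS_TS_TX_IDX is illustrative — substitute one that exists in your target database):

```
impdp system/manager schemas=hr directory=TEST_DIR dumpfile=HR.dmp \
      logfile=impdpHR.log statistics=none \
      remap_tablespace=APPS_TS_TX_IDX_NEW:APPS_TS_TX_IDX
```

Alternatively, creating a tablespace named APPS_TS_TX_IDX_NEW in the target before the import should let the CREATE INDEX succeed as written.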
-
Hello
We have added a new (global) index to a range-hash partitioned table using Oracle 11.2.0.3.
My understanding is that for new indexes there is no need to gather statistics — they are generated automatically on index creation.
Is this correct?
Thank you
Yes — per the Oracle 11g Release 2 Administrator's Guide, page 21-8: "these operations also collect index statistics."
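This is easy to verify: on 11g, LAST_ANALYZED is populated as soon as the index is created. A minimal sketch (the demo table/index names are made up):

```
CREATE TABLE t_demo AS
  SELECT level AS id FROM dual CONNECT BY level <= 1000;

CREATE INDEX t_demo_idx ON t_demo(id);

-- On 11g the CREATE INDEX itself computes the optimizer statistics,
-- so NUM_ROWS/DISTINCT_KEYS/LAST_ANALYZED are already filled in
SELECT index_name, num_rows, distinct_keys, last_analyzed
FROM   user_indexes
WHERE  index_name = 'T_DEMO_IDX';
```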
-
Hi all
11.2.0.1
Is it a good idea to create indexes on all columns used in the WHERE conditions of all queries?
Thank you
Petra k.
Is it a good idea to create indexes on all columns used in the WHERE conditions of all queries?
No - you do not need an index in order to query the data in a table.
Don't add indexes without specific reason to add them.
The two primary reasons/use cases for adding an index are:
1. To support INSERT/UPDATE operations and ensure the integrity of the data. For example, to enforce a primary key or a unique constraint. This prevents duplicate data from ever getting INTO the table.
2. To improve the performance of queries. An index can improve the performance of certain queries, and whether or not it does MAY depend on the predicates used in the WHERE clause. But an index can also reduce the performance of other queries, as John has shown you. In addition, an index can degrade the performance of DML operations, because EACH index may need to be updated when the data is changed. The more indexes you add, the greater the possible performance decline.
Statistics are just as important as the index. Oracle needs table and index statistics to make the right decision on whether to use a particular index. Adding an index without appropriate, current statistics is a waste of time.
But back to what I said earlier: don't add indexes without a specific reason to add them. That means doing appropriate tests to determine:
1. what impact the index has on the queries you're trying to improve
2. what impact the index has on other statements - as mentioned previously, there could be NEGATIVE impacts on other queries if Oracle uses your new index for them when it shouldn't.
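One hedged way to run the tests described above (all object names here are placeholders, not from the thread):

```
-- 1. Impact on the query you want to improve
CREATE INDEX emp_dept_idx ON emp(deptno);   -- candidate index (placeholder)

EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 2. Impact on other statements: re-explain the other critical queries,
--    and time representative DML, since every index adds maintenance cost
EXPLAIN PLAN FOR UPDATE emp SET deptno = 20 WHERE empno = 7369;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Back out cleanly if the trade-off is bad
DROP INDEX emp_dept_idx;
```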
-
Index question - huge cost in the plan
Hello
I have a table of 112 million rows that I copied to a new tablespace, which worked well.
I am now adding indexes to the copied table, but I get enormous costs in the plans when trying to run a query using the new index, compared to querying the original table/index.
First I tried creating my main index in another tablespace with a larger initial extent, and even when I removed all that and put it in the same tablespace as the original index, created the same way, I get the same huge difference.
In most of my system this will cause the CBO not to use the index for queries, and the applications get very slow. If I do a simple select statement and the index is selected, even though the cost is enormous, the actual query time is the same as on the original table/query. I can't get the index cost to change by trying different dbms_stats.gather_index_stats settings.
Can someone tell me why this new index comes up with such a huge cost, and what I can do to lower the cost so that the index actually gets chosen?
Here is some basic information:
This is Oracle 10gR2 (10.2.0.4) EE.
The simplified create statement for the original and the new index:
CREATE INDEX IDX_POS_DATE ON FMC_POS
(UTC_DATE DESC, FMC_LOGVES_ID)

CREATE INDEX IDX_POS_DATE_P2 ON FMC_POS_P2
(UTC_DATE DESC, FMC_LOGVES_ID)
The two indexes in dba_indexes:
Original:
OWNER: VTRACK
INDEX_NAME: IDX_POS_DATE
INDEX_TYPE: FUNCTION-BASED NORMAL
TABLE_OWNER: VTRACK
TABLE_NAME: FMC_POS
TABLE_TYPE: TABLE
UNIQUENESS: NONUNIQUE
COMPRESSION: DISABLED
PREFIX_LENGTH:
TABLESPACE_NAME: USERS
INI_TRANS: 2
MAX_TRANS: 255
INITIAL_EXTENT: 65536
NEXT_EXTENT:
MIN_EXTENTS: 1
MAX_EXTENTS: 2147483645
PCT_INCREASE:
PCT_THRESHOLD:
INCLUDE_COLUMN:
FREELISTS:
FREELIST_GROUPS:
PCT_FREE: 10
LOGGING: YES
BLEVEL: 3
LEAF_BLOCKS: 439239
DISTINCT_KEYS: 108849202
AVG_LEAF_BLOCKS_PER_KEY: 1
AVG_DATA_BLOCKS_PER_KEY: 1
CLUSTERING_FACTOR: 79021005
STATUS: VALID
NUM_ROWS: 110950137
SAMPLE_SIZE: 2177632
LAST_ANALYZED: 05/09/2011 23:38:15
DEGREE: 1
INSTANCES: 1
PARTITIONED: NO
TEMPORARY: N
GENERATED: N
SECONDARY: N
BUFFER_POOL: DEFAULT
USER_STATS: NO
DURATION:
PCT_DIRECT_ACCESS:
ITYP_OWNER:
ITYP_NAME:
PARAMETERS:
GLOBAL_STATS: YES
DOMIDX_STATUS:
DOMIDX_OPSTATUS:
FUNCIDX_STATUS: ENABLED
JOIN_INDEX: NO
IOT_REDUNDANT_PKEY_ELIM: NO
DROPPED: NO
New:
OWNER: VTRACK
INDEX_NAME: IDX_POS_DATE_P2
INDEX_TYPE: FUNCTION-BASED NORMAL
TABLE_OWNER: VTRACK
TABLE_NAME: FMC_POS_P2
TABLE_TYPE: TABLE
UNIQUENESS: NONUNIQUE
COMPRESSION: DISABLED
PREFIX_LENGTH:
TABLESPACE_NAME: USERS
INI_TRANS: 2
MAX_TRANS: 255
INITIAL_EXTENT: 65536
NEXT_EXTENT:
MIN_EXTENTS: 1
MAX_EXTENTS: 2147483645
PCT_INCREASE:
PCT_THRESHOLD:
INCLUDE_COLUMN:
FREELISTS:
FREELIST_GROUPS:
PCT_FREE: 10
LOGGING: YES
BLEVEL: 3
LEAF_BLOCKS: 408128
DISTINCT_KEYS: 111888565
AVG_LEAF_BLOCKS_PER_KEY: 1
AVG_DATA_BLOCKS_PER_KEY: 1
CLUSTERING_FACTOR: 88607794
STATUS: VALID
NUM_ROWS: 112757727
SAMPLE_SIZE: 112757727
LAST_ANALYZED: 23/06/2011 07:57:14
DEGREE: 1
INSTANCES: 1
PARTITIONED: NO
TEMPORARY: N
GENERATED: N
SECONDARY: N
BUFFER_POOL: DEFAULT
USER_STATS: NO
DURATION:
PCT_DIRECT_ACCESS:
ITYP_OWNER:
ITYP_NAME:
PARAMETERS:
GLOBAL_STATS: NO
DOMIDX_STATUS:
DOMIDX_OPSTATUS:
FUNCIDX_STATUS: ENABLED
JOIN_INDEX: NO
IOT_REDUNDANT_PKEY_ELIM: NO
DROPPED: NO
The simple selects and costs:
Original table/index:

select * from fmc_pos where utc_date > sysdate-10

Plan:
SELECT STATEMENT ALL_ROWS
  Cost: 5  Bytes: 5,350  Cardinality: 50
  2 TABLE ACCESS BY INDEX ROWID TABLE VTRACK.FMC_POS
      Cost: 5  Bytes: 5,350  Cardinality: 50
    1 INDEX RANGE SCAN INDEX VTRACK.IDX_POS_DATE
        Cost: 4  Cardinality: 1

New table/index:

select * from fmc_pos_p2 where utc_date > sysdate-10

Plan:
SELECT STATEMENT ALL_ROWS
  Cost: 3,067  Bytes: 2,708,856  Cardinality: 25,082
  2 TABLE ACCESS BY INDEX ROWID TABLE VTRACK.FMC_POS_P2
      Cost: 3,067  Bytes: 2,708,856  Cardinality: 25,082
    1 INDEX RANGE SCAN INDEX VTRACK.IDX_POS_DATE_P2
        Cost: 2,927  Cardinality: 177
Look at the table stats, not the index statistics. The cardinalities are extremely different.
Specifically, what are the high and low values on the utc_date column.
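Checking those low/high values can be sketched as follows (a hedged example; the RAW-to-DATE conversion uses the DBMS_STATS.CONVERT_RAW_VALUE overload for DATE, and owner/table/column names are taken from the listings above):

```
SET SERVEROUTPUT ON

DECLARE
  l_low  DATE;
  l_high DATE;
BEGIN
  FOR r IN (SELECT table_name, low_value, high_value
            FROM   dba_tab_col_statistics
            WHERE  owner       = 'VTRACK'
            AND    table_name IN ('FMC_POS', 'FMC_POS_P2')
            AND    column_name = 'UTC_DATE')
  LOOP
    -- Decode the internal RAW representation into real dates
    DBMS_STATS.CONVERT_RAW_VALUE(r.low_value,  l_low);
    DBMS_STATS.CONVERT_RAW_VALUE(r.high_value, l_high);
    DBMS_OUTPUT.PUT_LINE(r.table_name || ': ' || l_low || ' - ' || l_high);
  END LOOP;
END;
/
```

If the original table's stats are weeks old, its HIGH_VALUE will be well before sysdate, which makes the `utc_date > sysdate-10` cardinality estimate tiny (and the cost correspondingly low).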
(The old table statistics are probably days or weeks out of date.)

Regards,
Jonathan Lewis -
Do indexes work only with number columns?
I created a table with a column of data type varchar2.
Without an index, I executed the query:
Execution Plan
----------------------------------------------------------
Plan hash value: 2608167055

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     8 |  2896 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BOPS_1 |     8 |  2896 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------
After creating a B-tree index on this column, I executed the same query:
Execution Plan
----------------------------------------------------------
Plan hash value: 2608167055

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     8 |  2896 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BOPS_1 |     8 |  2896 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------
There is no change between these two explain plans, and no performance difference.
Is it that B-tree indexes only work well with number data types?
Thank you
REDA

There could be several reasons. For example, gather statistics after you create the index. Also, maybe your query returns a huge part of the table, which makes the index useless — if you ask for 80 percent of a table, the optimizer will probably not use an index. Another reason: perhaps your table is too small, so Oracle reads the whole table anyway and there is no need to use the index. Etc...
It is not about the varchar2 data type; if one of these conditions is true, your index will not be used whatever its data type.
Edited by: Mustafa KALAYCI on 20.Oct.2010 10:14
-
DBMS_STATS Table and Index Stats Question?
All,
Oracle database version: 10.2.0.3
Operating system: Solaris 10 X 86-64
We have a packaged application where we gather object stats with cascade => TRUE, so the index statistics are also gathered as part of the stats collection. We use estimate_percent => 30 and method_opt => 'FOR ALL COLUMNS SIZE AUTO' — no computed statistics on these tables, which are huge (1M – 192M rows). The index statistics gathered are therefore also based on the estimate_percent specified in the command.
We found that some of the SQL had improved performance after computing the index statistics in our Test environment, which is a copy of production refreshed daily.
We are considering changing the way we gather statistics:
gather table statistics with estimate_percent, and compute stats on the indexes.
We are making changes to our scripts and testing the approach in our Test environment.
In the meantime, I would like to know the pros and cons of this approach?
Thank you
Anantha

You run your own stats collection routine — do I understand correctly that you have disabled the one Oracle provides and schedules automatically on 10g?
What results did you get with the Oracle-supplied routine using AUTO_SAMPLE_SIZE?
Estimating the table and then computing the index is something I saw recommended in the past with ANALYZE, and if you see better results in test, I think it would be worthwhile in production.
No two applications, environments, or user populations are exactly the same, but the approach appears reasonable based on your observations.
I suggest gathering some before and after snapshots of query performance so that you have documentation of your results.
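The proposed split — estimate on the table, compute on the indexes — can be sketched with DBMS_STATS like this (cascade is turned off so the indexes can be gathered separately with their own sample size; owner/object names are placeholders):

```
BEGIN
  -- Table: 30% estimate as today, but without cascading to the indexes
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',        -- placeholder
    tabname          => 'BIG_TABLE',        -- placeholder
    estimate_percent => 30,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => FALSE);

  -- Index: estimate_percent => NULL means a full compute (100% sample)
  DBMS_STATS.GATHER_INDEX_STATS(
    ownname          => 'APP_OWNER',
    indname          => 'BIG_TABLE_IDX',    -- placeholder
    estimate_percent => NULL);
END;
/
```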
IMHO - Mark D Powell-
-
Refreshing a schema with Data Pump
Hello
Database version: 11.2.0.1.0
Can we refresh a schema in the database without creating a dump file, similar to network mode?
for example
Say I have a database DB1 containing two schemas, test and production. Can we refresh test with production data without creating a dump file on the database server?
The whole idea is to perform the export and import of data in a single step, to reduce the time and human intervention.
Currently, I've followed the steps below:
(1) export the production data
(2) drop the test schema
(3) create the test schema
(4) import production data into the test schema
Thank you
Sery
Hello
It is possible:
SQL> create public database link impdpt connect to system identified by abc123 using 'DG1';
Database link created.
SQL >
-----
[oracle@prima admin]$ impdp system/abc123 network_link=impdpt schemas=hr remap_schema=hr:hrtn logfile=test_same.log directory=DATA_PUMP_DIR
Import: Release 11.2.0.4.0 - Production on Tue Dec 29 02:53:24 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** directory=DATA_PUMP_DIR network_link=impdpt schemas=hr remap_schema=hr:hrtn logfile=test_same.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "HRTN"."COUNTRIES"      25 rows
. . imported "HRTN"."DEPARTMENTS"    27 rows
. . imported "HRTN"."EMPLOYEES"     107 rows
. . imported "HRTN"."JOBS"           19 rows
. . imported "HRTN"."JOB_HISTORY"    10 rows
. . imported "HRTN"."LOCATIONS"      23 rows
. . imported "HRTN"."REGIONS"         4 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 02:54:09 elapsed 0 00:00:44
[oracle@prima admin] $
-
Data Pump: table row counts do not match after import
Hello
I exported the schema 'ORAMSCA' and imported it as schema 'ORAMSCA_TEST151223'. I found differences between the table row counts. Could you please advise?
Here are the details.
TABLE_NAME                     ORAMSCA_TEST151223    ORAMSCA
------------------------------ ------------------ ----------
OM_AUDIT_TRAIL                               2952       3367
OM_CONFIG_OPTIONS                              33         40
OM_COUNTRY_STATES                            3456       3456
OM_ENTITY_MENU                                 86         91
OM_FUNFACTS                                    64         69
OM_INSTANCES                                   81         61
OM_JOBS                                       139        111
OM_JOB_LOGS                                   226        132
OM_JOB_PARAMS                                  37         19
OM_LICENSES                                    15         15
OM_LOGIN_HISTORY                             1289       1594
OM_LOOKUP_CODES                               900        904
OM_LOOKUP_TYPES                                31         31
OM_ORG_ORGANIZATIONS                         9625       8225
OM_ROLE_RIGHTS                                 36         36
OM_USERS                                       29         48
OM_USER_ENTITY_MENU_ACCESS                    728       1983
OM_USER_ORGANIZATION_ACCESS                    40        156
Export log file:
;;;
Export: Release 11.2.0.1.0 - Production on Fri Dec 18 09:22:16 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=oramsca dumpfile=ORAMSCA_TEST1.dmp logfile=expdp.log directory=DUMP
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.937 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ORAMSCA"."OM_ORG_ORGANIZATIONS"         620.3 KB  8225 rows
. . exported "ORAMSCA"."OM_AUDIT_TRAIL"               285.2 KB  3283 rows
. . exported "ORAMSCA"."OM_ENTITY_MENU"               145.2 KB    90 rows
. . exported "ORAMSCA"."OM_COUNTRY_STATES"            186.1 KB  3456 rows
. . exported "ORAMSCA"."OM_CONFIG_OPTIONS"            67.10 KB    40 rows
. . exported "ORAMSCA"."OM_LOGIN_HISTORY"             118.2 KB  1525 rows
. . exported "ORAMSCA"."OM_LOOKUP_CODES"              68.61 KB   904 rows
. . exported "ORAMSCA"."OM_USER_ENTITY_MENU_ACCESS"   78.60 KB  1644 rows
. . exported "ORAMSCA"."OM_FUNFACTS"                  15.06 KB    67 rows
. . exported "ORAMSCA"."OM_INSTANCES"                 17.37 KB    61 rows
. . exported "ORAMSCA"."OM_JOBS"                      19.54 KB   111 rows
. . exported "ORAMSCA"."OM_JOB_LOGS"                  19.02 KB   132 rows
. . exported "ORAMSCA"."OM_JOB_PARAMS"                9.726 KB    19 rows
. . exported "ORAMSCA"."OM_LICENSES"                  10.25 KB    15 rows
. . exported "ORAMSCA"."OM_LOOKUP_TYPES"              11.50 KB    31 rows
. . exported "ORAMSCA"."OM_ROLE_RIGHTS"               14.43 KB    36 rows
. . exported "ORAMSCA"."OM_USERS"                     21.57 KB    44 rows
. . exported "ORAMSCA"."OM_USER_ORGANIZATION_ACCESS"  15.56 KB   147 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/U01/oracle11/dump/ORAMSCA_TEST1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:23:14
Import log file:
;;;
Import: Release 11.2.0.1.0 - Production on Wed Dec 23 01:00:58 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** directory=DUMP dumpfile=expdp_ORAMSCA_151221.dmp logfile=impdp.log remap_schema=ORAMSCA:ORAMSCA_TEST151223 remap_tablespace=ORAMSCA:ORAMSCA_TEST151223
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"ORAMSCA_TEST151223" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ORAMSCA_TEST151223"."OM_ORG_ORGANIZATIONS"         724.5 KB  9625 rows
. . imported "ORAMSCA_TEST151223"."OM_AUDIT_TRAIL"               258.6 KB  2952 rows
. . imported "ORAMSCA_TEST151223"."OM_ENTITY_MENU"               166.7 KB    86 rows
. . imported "ORAMSCA_TEST151223"."OM_CONFIG_OPTIONS"            59.26 KB    33 rows
. . imported "ORAMSCA_TEST151223"."OM_COUNTRY_STATES"            186.1 KB  3456 rows
. . imported "ORAMSCA_TEST151223"."OM_LOGIN_HISTORY"             102.8 KB  1289 rows
. . imported "ORAMSCA_TEST151223"."OM_LOOKUP_CODES"              68.36 KB   900 rows
. . imported "ORAMSCA_TEST151223"."OM_USER_ENTITY_MENU_ACCESS"   39.46 KB   728 rows
. . imported "ORAMSCA_TEST151223"."OM_FUNFACTS"                  14.87 KB    64 rows
. . imported "ORAMSCA_TEST151223"."OM_INSTANCES"                 19.66 KB    81 rows
. . imported "ORAMSCA_TEST151223"."OM_JOBS"                      22.07 KB   139 rows
. . imported "ORAMSCA_TEST151223"."OM_JOB_LOGS"                  25.73 KB   226 rows
. . imported "ORAMSCA_TEST151223"."OM_JOB_PARAMS"                10.74 KB    37 rows
. . imported "ORAMSCA_TEST151223"."OM_LICENSES"                  10.25 KB    15 rows
. . imported "ORAMSCA_TEST151223"."OM_LOOKUP_TYPES"              11.50 KB    31 rows
. . imported "ORAMSCA_TEST151223"."OM_ROLE_RIGHTS"               14.43 KB    36 rows
. . imported "ORAMSCA_TEST151223"."OM_USERS"                     19.38 KB    29 rows
. . imported "ORAMSCA_TEST151223"."OM_USER_ORGANIZATION_ACCESS"  10.53 KB    40 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-04052: error occurred when looking up remote object APPS.FND_APPLICATION_ALL_VIEW@SYSTEM_LINK_OM_VISMA
ORA-00604: error occurred at recursive SQL level 3
ORA-12154: TNS:could not resolve the connect identifier specified
Failing sql is:
ALTER FUNCTION "ORAMSCA_TEST151223"."OM_APPLICATION_VISMA" COMPILE PLSQL_OPTIMIZE_LEVEL= 2 PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'IDENTIFIERS:NO' REUSE S
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-04052: error occurred when looking up remote object APPS.GL_CODE_COMBINATIONS_V@SYSTEM_LINK_OM_VISMA
ORA-00604: error occurred at recursive SQL level 3
ORA-12154: TNS:could not resolve the connect identifier specified
Failing sql is:
ALTER FUNCTION "ORAMSCA_TEST151223"."OM_CODE_COMBINATION_VISMA" COMPILE PLSQL_OPTIMIZE_LEVEL= 2 PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'IDENTIFIERS:NO' REUS
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 3 error(s) at 01:01:58
Thank you
Jeremy.
It looks like you did not import the file that you exported:
Export: dumpfile = ORAMSCA_TEST1.dmp
Import: dumpfile = expdp_ORAMSCA_151221.dmp
AJ
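Beyond matching the dump file names, a direct way to cross-check two schemas after an import is to count the rows live rather than rely on possibly stale dictionary statistics. A sketch, using one of the tables from the question (repeat per table, or generate the queries from DBA_TABLES):

```sql
-- Live row counts for one table in both schemas.
SELECT (SELECT COUNT(*) FROM ORAMSCA.OM_AUDIT_TRAIL)            AS oramsca_count,
       (SELECT COUNT(*) FROM ORAMSCA_TEST151223.OM_AUDIT_TRAIL) AS test_count
FROM   dual;
```

Note also that the export log is dated Dec 18 while the import ran Dec 23, so some drift in active tables is expected even once the correct dump file is used.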
-
What do these mean? There is not much about them in the docs.
AVG_LEAF_BLOCKS_PER_KEY - average number of leaf blocks per key
AVG_DATA_BLOCKS_PER_KEY - average number of data blocks per key
These two are in fact just derived values based on the DISTINCT_KEYS, LEAF_BLOCKS and CLUSTERING_FACTOR index statistics.
AVG_LEAF_BLOCKS_PER_KEY is essentially LEAF_BLOCKS/DISTINCT_KEYS and so basically shows the average number of leaf blocks required to store a single/specific indexed value.
AVG_DATA_BLOCKS_PER_KEY is essentially CLUSTERING_FACTOR/DISTINCT_KEYS and so basically shows the average number of distinct table blocks that need to be visited to return the data for a single/specific indexed value.
The lowest value either can be is 1.
Cheers
Richard Foote
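The derivation above can be checked directly against the data dictionary. A sketch (MY_INDEX is a placeholder; the stored averages are rounded, so compare against rounded ratios):

```sql
-- Compare the stored per-key averages with the ratios they derive from.
SELECT index_name,
       avg_leaf_blocks_per_key,
       ROUND(leaf_blocks / distinct_keys)       AS calc_leaf_per_key,
       avg_data_blocks_per_key,
       ROUND(clustering_factor / distinct_keys) AS calc_data_per_key
FROM   user_indexes
WHERE  index_name = 'MY_INDEX';
```

For a reasonably selective index the two pairs of columns should agree, and neither average can drop below 1.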
-
How can I speed up an update of 27,000 rows?
I used the following PL/SQL code with BULK COLLECT to update a table called dwd_candidate_new, but it takes nearly 20 minutes to update 27,000 records. Is there a workaround to make it faster?
Thank you.
declare
  TYPE t_ServiceId IS TABLE OF VARCHAR2(30);
  TYPE t_Comments  IS TABLE OF VARCHAR2(30);
  v_ServiceId t_ServiceId;
  v_Comments  t_Comments;
begin
  SELECT s.c1, s.c4
  BULK COLLECT INTO v_ServiceId, v_Comments
  FROM dwd_report_stg s
  s.file_id = p_file_id;
  FORALL indx IN v_Comments.FIRST .. v_Comments.LAST
    UPDATE dwd_candidate_new
    SET sdt_receive  = sysdate,
        sdt_comments = v_Comments(indx)
    WHERE service_id = v_ServiceId(indx);
end;
Why are you using BULK COLLECT and FORALL at all? If performance is a problem, I would start by trying to do this as a single SQL UPDATE or MERGE statement. Make sure the statistics on the tables and indexes are up to date, and see what the cost-based optimizer comes up with.
Something like the following should work (untested, so please excuse any syntax mistakes):
MERGE INTO dwd_candidate_new tgt
USING ( SELECT c1, c4
        FROM   dwd_report_stg
        WHERE  file_id = p_file_id ) src
ON ( src.c1 = tgt.service_id )
WHEN MATCHED THEN UPDATE
  SET tgt.sdt_receive  = sysdate,
      tgt.sdt_comments = src.c4;
In addition, there are a few mistakes in your PL/SQL - the WHERE keyword appears to be missing from the SELECT statement, and the variable p_file_id is undeclared.
It would help if you could post CREATE TABLE statements, sample data and working code.
Hope that helps
Dan