Poor cardinality estimate causing bad query performance
I have a query whose performance is unsatisfactory. It produces (on 11.2.0.2) the following trace. The slowest part of the query is the UNION-ALL operating over two index fast full scans of the PRODUCTS_DATES indexes. Those indexes are on the two tables that make up a view, V_SALES_ALL.
The cardinality estimate for the fast full scans seems to be way out - 100 rows against 78,000,000 and 1,703,000 actual rows respectively. The estimate of 100 looks suspiciously like a default, because the two tables sit, as I said, behind an inline view. In fact, if I break the view up into its constituent tables, the query runs in a tenth of the time.
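For illustration, the view is presumably of this general shape (the table and column names below are hypothetical, since the actual definition isn't shown); "breaking it up" means querying the two branches directly instead of going through V_SALES_ALL:

CREATE OR REPLACE VIEW v_sales_all AS
SELECT product_id, sale_date FROM sales_current      -- branch covered by PRODUCTS_DATES_IDX
UNION ALL
SELECT product_id, sale_date FROM sales_archive;     -- branch covered by PRODUCTS_DATES_IDX_HARD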
How can I fix this misestimate, which is most likely caused by the view?
Am I reading the trace correctly?
Regards
Johnnie
Rows     Row Source Operation
-------  ---------------------------------------------------
    321  SORT GROUP BY (cr=6759441 pr=176970 pw=176955 time=480 us cost=63 size=896 card=14)
5322875   NESTED LOOPS (cr=6759441 pr=176970 pw=176955 time=109327744 us)
5322875    NESTED LOOPS (cr=241360 pr=176970 pw=176955 time=55796544 us cost=62 size=896 card=14)
5322875     HASH JOIN (cr=241049 pr=176970 pw=176955 time=7774711 us cost=48 size=280 card=14)
80445738     VIEW V_SALES_ALL (cr=241001 pr=0 pw=0 time=569162368 us cost=4 size=1800 card=200)
80445738      UNION-ALL (cr=241001 pr=0 pw=0 time=404890176 us)
78742696       INDEX FAST FULL SCAN PRODUCTS_DATES_IDX (cr=235954 pr=0 pw=0 time=85524904 us cost=2 size=900 card=100) (object id 221975)
1703042        INDEX FAST FULL SCAN PRODUCTS_DATES_IDX_HARD (cr=5047 pr=0 pw=0 time=1850486 us cost=2 size=900 card=100) (object id 241720)
   2238      VIEW index$_join$_003 (cr=48 pr=0 pw=0 time=14474 us cost=44 size=24618 card=2238)
   2238       HASH JOIN (cr=48 pr=0 pw=0 time=9737 us)
   2238        INDEX RANGE SCAN PRODUCTS_GF_INDEX2 (cr=8 pr=0 pw=0 time=2609 us cost=6 size=24618 card=2238) (object id 255255)
  16206        INDEX FAST FULL SCAN PRODUCTS_GF_PK (cr=40 pr=0 pw=0 time=20415 us cost=45 size=24618 card=2238) (object id 255253)
5322875     INDEX UNIQUE SCAN DATES_PK (cr=311 pr=0 pw=0 time=0 us cost=0 size=0 card=1) (object id 151306)
5322875    TABLE ACCESS BY INDEX ROWID DATES (cr=6518081 pr=0 pw=0 time=0 us cost=1 size=44 card=1)
Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
    321   SORT (GROUP BY)
5322875    HASH JOIN
5322875     TABLE ACCESS   MODE: ANALYZED (FULL) OF 'DATES' (TABLE)
5322875     HASH JOIN
80445738     VIEW OF 'index$_join$_003' (VIEW)
80445738      HASH JOIN
78742696       INDEX   MODE: ANALYZED (RANGE SCAN) OF
                   'PRODUCTS_GF_INDEX2' (INDEX)
1703042        INDEX   MODE: ANALYZED (FULL SCAN) OF
                   'PRODUCTS_GF_PK' (INDEX (UNIQUE))
   2238      VIEW OF 'V_SALES_ALL' (VIEW)
   2238       UNION-ALL
   2238        INDEX   MODE: ANALYZED (FULL SCAN) OF
                   'PRODUCTS_DATES_IDX' (INDEX)
  16206        INDEX   MODE: ANALYZED (FULL SCAN) OF
                   'PRODUCTS_DATES_IDX_HARD' (INDEX)
Johnnie d wrote:
I have a query whose performance is unsatisfactory. It produces (on 11.2.0.2) the following trace. The slowest part of the query is the UNION-ALL operating over two index fast full scans of the PRODUCTS_DATES indexes. Those indexes are on the two tables that make up a view, V_SALES_ALL.
The cardinality estimate for the fast full scans seems to be way out - 100 rows against 78,000,000 and 1,703,000 actual rows respectively. The estimate of 100 looks suspiciously like a default, because the two tables sit, as I said, behind an inline view. In fact, if I break the view up into its constituent tables, the query runs in a tenth of the time.
Rows     Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: FIRST_ROWS
Are you running with optimizer_mode = first_rows_100?
If so then you may have found a bug in the first_rows_N code. The 100 seems to have been pushed inside the view in a way that has made Oracle cost the index fast full scans while "intending" to find only 100 rows in each table. It then takes the limit of 100 as the actual limit when deciding which row source to use as the hash (build) table and which to use as the probe.
If you want the entire result set, or a large part of it, you could add the all_rows hint.
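For example, a minimal sketch of the hint (the real query text isn't shown, so the column names here are only illustrative):

SELECT /*+ all_rows */ d.some_column, COUNT(*)
FROM   v_sales_all s
       JOIN dates d ON d.date_id = s.date_id
GROUP BY d.some_column;

Alternatively, ALTER SESSION SET optimizer_mode = all_rows would have the same effect for the whole session.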
Regards
Jonathan Lewis
http://jonathanlewis.WordPress.com
Author: Oracle Core
Tags: Database
Similar Questions
-
Poor query performance when joining CONTAINS to another table
We just recently started evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows. Each user may have visibility to only a very small portion of those rows. The goal is to have a single Oracle Text index that covers all the searchable columns in the table (a multi-column datastore) and provides a score for each search result so that we can sort the results in descending score order. What we see is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we first try to reduce the rows that the CONTAINS query must search by using a join, we find that query performance degrades significantly.
For example, we can find all the records that a user has access to from our base table with the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query runs in < 100 ms. In this example it returns close to 1,200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT SCORE(1), d.*
FROM duns d
WHERE CONTAINS (text_key, :search, 1) > 0
ORDER BY SCORE(1) DESC;
The :search value in this example will be 'highway'. The query returns about 246K rows in about 2 seconds.
2 seconds is good, but we should be able to get a much quicker response if the query did not have to search the entire table, right? Since each user can only 'view' records they are assigned to, if the search operation only had to scan a tiny, tiny percentage of the TEXT index we should see faster (and more relevant) results. So we now write the following query:
WITH subset AS
(SELECT d.duns_loc
 FROM duns d
 JOIN primary_contact pc
 ON d.duns_loc = pc.duns_loc
 AND pc.emp_id = :employeeID
)
SELECT SCORE(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS (text_key, :search, 1) > 0
ORDER BY SCORE(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to run than the sum of the times of its contributing parts: it takes more than 6 seconds. Neither we nor our DBA can understand why this query runs worse than the wide-open search. The wide-open search is not ideal because the query ends up returning records the user does not have access to view.
Has anyone ever encountered something like this? Any suggestions on what to look at or where to go next? If someone wants more information to help diagnose it, let me know and I'll be happy to post it here.
Thank you!

Since you're using two tables, you will probably get better performance with an index that uses a section group and a user_datastore that uses a procedure. It should be able to retrieve all of the data with a simple query and hit a single index. Please see the demo below. Indexing may be slower, but searching should be faster. If you have your primary and foreign keys in place and current statistics before you create the index, it should speed up indexing.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
  (duns_loc        NUMBER,
   business_name   VARCHAR2 (15),
   business_name2  VARCHAR2 (15),
   address_line    VARCHAR2 (30),
   city            VARCHAR2 (15),
   state           VARCHAR2 (2),
   business_phone  VARCHAR2 (15),
   contact_name    VARCHAR2 (15),
   contact_title   VARCHAR2 (15),
   text_key        VARCHAR2 (1),
   CONSTRAINT duns_pk PRIMARY KEY (duns_loc))
/
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
  (duns_loc  NUMBER,
   emp_id    NUMBER,
   CONSTRAINT primary_contact_pk
     PRIMARY KEY (emp_id, duns_loc),
   CONSTRAINT primary_contact_fk FOREIGN KEY (duns_loc)
     REFERENCES duns (duns_loc))
/
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (1, 'highway')
/
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (2, 'highway')
/
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
/
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (2, 2)
/
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line)
  SELECT object_id, object_name
  FROM all_objects
  WHERE object_id > 2
/
76029 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
  SELECT object_id, namespace
  FROM all_objects
  WHERE object_id > 2
/
76029 rows created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- procedure:
SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE duns_proc
  (p_rowid IN ROWID,
   p_clob  IN OUT NOCOPY CLOB)
AS
BEGIN
  FOR d IN
    (SELECT duns_loc,
            '' ||
            business_name || ' ' ||
            business_name2 || ' ' ||
            address_line || ' ' ||
            city || ' ' ||
            state || ' ' ||
            business_phone || ' ' ||
            contact_name || ' ' ||
            contact_title ||
            ' '
            AS duns_cols
     FROM duns
     WHERE ROWID = p_rowid)
  LOOP
    DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (d.duns_cols), d.duns_cols);
    FOR pc IN
      (SELECT '' || emp_id || ' ' AS pc_col
       FROM primary_contact
       WHERE duns_loc = d.duns_loc)
    LOOP
      DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (pc.pc_col), pc.pc_col);
    END LOOP;
  END LOOP;
END duns_proc;
/
Procedure created.
SCOTT@orcl_11gR2> SHOW ERRORS
No errors.
SCOTT@orcl_11gR2> -- user datastore, section group with field section:
SCOTT@orcl_11gR2> begin
  ctx_ddl.create_preference ('duns_store', 'USER_DATASTORE');
  ctx_ddl.set_attribute ('duns_store', 'PROCEDURE', 'duns_proc');
  ctx_ddl.set_attribute ('duns_store', 'OUTPUT_TYPE', 'CLOB');
  ctx_ddl.create_section_group ('duns_sg', 'BASIC_SECTION_GROUP');
  ctx_ddl.add_field_section ('duns_sg', 'emp_id', 'emp_id', true);
end;
/
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- text index with user datastore and section group:
SCOTT@orcl_11gR2> CREATE INDEX duns_context_index
  ON duns (text_key)
  INDEXTYPE IS CTXSYS.CONTEXT
  FILTER BY duns_loc
  PARAMETERS
    ('DATASTORE duns_store
      SECTION GROUP duns_sg
      SYNC (ON COMMIT)')
/
Index created.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> SELECT SCORE(1), d.*
  FROM duns d
  WHERE CONTAINS
        (text_key,
         :search || ' AND ' ||
         :employeeid || ' WITHIN emp_id',
         1) > 0
/

  SCORE(1)   DUNS_LOC BUSINESS_NAME   BUSINESS_NAME2  ADDRESS_LINE                   CITY            ST BUSINESS_PHONE
---------- ---------- --------------- --------------- ------------------------------ --------------- -- ---------------
CONTACT_NAME    CONTACT_TITLE   T
--------------- --------------- -
         3          1                                 highway

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2241294508

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |    38 |  1102 |    12   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DUNS               |    38 |  1102 |    12   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | DUNS_CONTEXT_INDEX |       |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH||' AND '||:EMPLOYEEID||' WITHIN emp_id',1)>0)

SCOTT@orcl_11gR2>
-
Poor cardinality estimate using bind variables
Hi, I'm using Oracle version 11.2.0.4.0. I have a query that gets a poor execution plan because of poor cardinality estimates on two tables (I've extracted and posted only that part); as shown below, it is the individual conditions for which the estimate goes bad, and that shifts the whole query onto a different execution path.
These conditions are on two tables, and we currently use bind variables for them in our code; I notice the estimate is much better with literals. I need to know how to handle this scenario, because I need this query to work for all volumes. Is there something I can do without changing the code, since it works well for most executions? In the current scenario the main query that uses these tables gets a plan (index + nested loop) that works very well for small volumes, but runs for 10 hr+ for large volumes because it sticks to the same plan.
And yes, most of the time this query will be hit for small volumes, but the occasional large-volume execution kills the apparent performance of the query.
Here are the values of the bind variables.

VARIABLE B1 VARCHAR2 (32);
VARIABLE B2 VARCHAR2 (32);
VARIABLE B3 NUMBER;
VARIABLE B4 VARCHAR2 (32);
VARIABLE B7 VARCHAR2 (32);
VARIABLE B5 NUMBER;
VARIABLE B6 NUMBER;

EXEC :B1 := 'NONE';
EXEC :B2 := NULL;
EXEC :B3 := 0;
EXEC :B4 := NULL;
EXEC :B7 := NULL;
EXEC :B5 := 0;
EXEC :B6 := 0;

---- For TABLE1 -------
-- Posted actual vs estimated cardinality

-- With bind values
select * from TABLE1 SF
WHERE ( (SF.C1_IDCODE = :B4) OR (NVL (:B4, 'NONE') = 'NONE'))
  AND ( (SF.C2_ID = :B3) OR (NVL (:B3, 0) = 0));

Plan hash value: 2590266031

------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                  | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT           |        |      1 |        |  28835 |00:00:00.08 |    2748 |    46 |       |       |          |
|* 1 |  TABLE ACCESS STORAGE FULL | TABLE1 |      1 |     11 |  28835 |00:00:00.08 |    2748 |    46 |  1025K|  1025K|          |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - storage((("SF"."C1_IDCODE"=:B4 OR NVL(:B4,'NONE')='NONE') AND ("SF"."C2_ID"=:B3 OR NVL(:B3,0)=0)))
       filter((("SF"."C1_IDCODE"=:B4 OR NVL(:B4,'NONE')='NONE') AND ("SF"."C2_ID"=:B3 OR NVL(:B3,0)=0)))

-- With literals
select * from TABLE1 SF
WHERE ( (SF.C1_IDCODE = null) OR (NVL (null, 'NONE') = 'NONE'))
  AND ( (SF.C2_ID = 0) OR (NVL (0, 0) = 0));

Plan hash value: 2590266031

----------------------------------------------------------------------------------------------------------------------
| Id | Operation                  | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT           |        |      1 |        |  28835 |00:00:00.03 |    2748 |       |       |          |
|  1 |  TABLE ACCESS STORAGE FULL | TABLE1 |      1 |  28835 |  28835 |00:00:00.03 |    2748 |  1025K|  1025K|          |
----------------------------------------------------------------------------------------------------------------------

-------- For TABLE2 -----------------------
-- Posted the Autotrace plan, as it was taking a long time to complete; actual cardinality is 45M, but with the bind values it estimates 49

-- With bind values
select * from TABLE2 MTF
WHERE ( (MTF.C6_CODE = TRIM (:B2)) OR (NVL (:B2, 'NONE') = 'NONE'))
  AND ( (MTF.C3_CODE = :B1) OR (NVL (:B1, 'NONE') = 'NONE'))
  AND ( (MTF.C4_CODE = :B7) OR (:B7 IS NULL))
  AND ( (MTF.C5_AMT <= :B6) OR (NVL (:B6, 0) = 0))
  AND ( (MTF.C5_AMT >= :B5) OR (NVL (:B5, 0) = 0));

Execution Plan
----------------------------------------------------------
Plan hash value: 1536592532

------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name   | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |        |   49 | 10437 |   358K  (1)| 01:11:43 |       |       |
|  1 |  PARTITION RANGE ALL        |        |   49 | 10437 |   358K  (1)| 01:11:43 |     1 |     2 |
|* 2 |   TABLE ACCESS STORAGE FULL | TABLE2 |   49 | 10437 |   358K  (1)| 01:11:43 |     1 |     2 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - storage(("MTF"."C4_CODE"=:B7 OR :B7 IS NULL) AND ("MTF"."C3_CODE"=:B1 OR NVL(:B1,'NONE')='NONE') AND
       ("MTF"."C5_AMT"<=TO_NUMBER(:B6) OR NVL(:B6,0)=0) AND ("MTF"."C5_AMT">=TO_NUMBER(:B5) OR NVL(:B5,0)=0) AND
       ("MTF"."C6_CODE"=TRIM(:B2) OR NVL(:B2,'NONE')='NONE'))
       filter(("MTF"."C4_CODE"=:B7 OR :B7 IS NULL) AND ("MTF"."C3_CODE"=:B1 OR NVL(:B1,'NONE')='NONE') AND
       ("MTF"."C5_AMT"<=TO_NUMBER(:B6) OR NVL(:B6,0)=0) AND ("MTF"."C5_AMT">=TO_NUMBER(:B5) OR NVL(:B5,0)=0) AND
       ("MTF"."C6_CODE"=TRIM(:B2) OR NVL(:B2,'NONE')='NONE'))

-- With literals
select * from TABLE2 MTF
WHERE ( (MTF.C6_CODE = TRIM (null)) OR (NVL (null, 'NONE') = 'NONE'))
  AND ( (MTF.C3_CODE = 'NONE') OR (NVL ('NONE', 'NONE') = 'NONE'))
  AND ( (MTF.C4_CODE = null) OR (null IS NULL))
  AND ( (MTF.C5_AMT <= 0) OR (NVL (0, 0) = 0))
  AND ( (MTF.C5_AMT >= 0) OR (NVL (0, 0) = 0));

Execution Plan
----------------------------------------------------------
Plan hash value: 1536592532

------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name   | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |        |  45M | 9151M |   358K  (1)| 01:11:41 |       |       |
|  1 |  PARTITION RANGE ALL        |        |  45M | 9151M |   358K  (1)| 01:11:41 |     1 |     2 |
|  2 |   TABLE ACCESS STORAGE FULL | TABLE2 |  45M | 9151M |   358K  (1)| 01:11:41 |     1 |     2 |
------------------------------------------------------------------------------------------------------

select column_name, num_nulls, num_distinct, density
from dba_tab_col_statistics
where table_name = 'TABLE2'
and column_name in ('C3_CODE','C4_CODE','C5_AMT','C6_CODE');

C3_CODE    0        65     0.0153846153846154
C4_CODE    0        2      0.5
C5_AMT     0        21544  4.64166357222429E-5
C6_CODE    1889955  71     0.0140845070422535
933257 wrote:
((SF.C1_IDCODE = :B4) OR (NVL (:B4, 'NONE') = 'NONE'))
In fact for literals I did not find any Predicate Information section after running the SQL with "set autotrace traceonly explain" enabled.
The main problem is with another large query whose cardinality is underestimated because of the presence of these tables (TABLE1, TABLE2) with the above-mentioned clauses; with bind values the query goes for an index + nested loops plan and takes 10 hr+, whereas with literals it completes in ~8 minutes with a full table scan + hash join.
Your real problem is that you are trying to have just one single SQL query handle all POSSIBLE filters through the use of embedded 'either/or' filters in the WHERE clause. You want only ONE SELECT to run whatever filters have been chosen at run time by the user or the application using it. And that will never work well. You really need DIFFERENT SELECT queries for the different combinations of filter conditions.
Why? Think for a minute about how Oracle works internally. A SQL SELECT query gets parsed, and an execution plan is produced which is stored in the library cache and gets REUSED on all subsequent executions of that query - except in certain cases where several plans may exist across several child cursors. So with only ONE SELECT query you get only ONE execution plan in the library cache, to be used by ALL executions of that query, regardless of the values of your run-time bind variables.
Let's put it another way - each execution plan in the library cache is associated with one SQL statement. If you want a DIFFERENT execution plan then you need to run a DIFFERENT SQL statement. That's how you get a different execution plan - by running a different SQL statement. Running the SAME SQL query will generally get you the SAME execution plan every time.
In addition, because of the 'either/or' filters you are using, you will generally end up with a Full Table Scan on each of the referenced tables. Why? Because the optimizer must produce an execution plan that handles all possible contingencies for all possible bind variable values in that one SELECT. If the optimizer chose to use an index based on one of these 'either/or' filters, it would only help performance when a real value was supplied, but it would be really bad if a NULL value was supplied. So the optimizer ends up ignoring the indexes, because they are not optimal for all possible input values, and instead chooses a plan that is 'good enough' for all possible input values. That means it will use a Full Table Scan.
I hope you can see that this is exactly what is happening with your query. You have ONE SELECT to handle the different combinations of filters, which leads to only one execution plan, which leads to Full Table Scans on the tables referenced in those 'either/or' filters.
The solution? Build DIFFERENT SELECT queries depending on whether the input values are NULL. How do you do that? Read this Ask Tom article, which tells you:
http://www.Oracle.com/technetwork/issue-archive/2009/09-Jul/o49asktom-090487.html
To sum up - when you have a real value for a bind variable 'bind_var1', add the following filter to your SELECT:
AND column_name1 = :bind_var1
When the bind variable is NULL, add the following filter to your SELECT:
AND (1 = 1 OR :bind_var1 IS NULL)
Now you have 2 SELECT queries that can be executed, which have exactly the same number of bind variables in the same order, which is important. When you run one of these variations, Oracle can parse and optimize each one SEPARATELY, with one execution plan per SELECT query.
When you provide a real value, the filter is a normal 'column = value', so the optimizer can use any index on that column, because no NULL values are involved.
When there is no real value, the optimizer will parse the '1 = 1 OR' and realize that '1 = 1' is always TRUE, and so the whole OR is TRUE regardless of whether the bind variable is null or not. This means the optimizer will effectively REMOVE this filter, because it filters nothing - it is always TRUE. You end up with an execution plan based on the other filters in the query, which is what you want because you have no filter on this column.
That's it - by producing distinct SELECT queries depending on whether you have a real value to filter on or not, you end up with DIFFERENT execution plans for each of them, and each of them is OPTIMAL for that particular set of filters. Now you get good performance for each variation of the SELECT, rather than sometimes good and sometimes very bad performance when using only one SELECT. It is impossible to get multiple 'optimal' execution plans out of a single SELECT query. That's why you get mediocre performance for different bind variable values.
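A rough sketch of the two variants, based on the TABLE1 predicates posted above (this is not the poster's actual application code):

-- Variant built when :B4 has a real value but :B3 was not supplied
SELECT *
FROM   table1 sf
WHERE  sf.c1_idcode = :b4                  -- normal, index-friendly filter
AND    (1 = 1 OR :b3 IS NULL);             -- always TRUE, optimizer removes it, bind count preserved

-- Variant built when :B4 is NULL but :B3 has a real value
SELECT *
FROM   table1 sf
WHERE  (1 = 1 OR :b4 IS NULL)              -- always TRUE, optimizer removes it, bind count preserved
AND    sf.c2_id = :b3;                     -- normal, index-friendly filter

Each variant is a distinct SQL text, so each gets its own cursor and its own plan.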
John Brady
-
Satellite Pro S500-11C - poor audio performance
Hi people,
I have a Satellite Pro S500-11C that suffers from the poor audio performance mentioned in the reviews of the S500-11C (yes, I *did* look at the reviews, and excuse the pun).
This makes it very difficult to use this laptop for music, VOIP and Skype without having to carry a headset of some kind for those applications.
Has anyone else experienced this problem and found a solution? Any help from Toshiba?
Best regards
Andrew
Sorry, but what kind of problem do you mean? Is the sound not loud enough, or what?
I want to say that this laptop has standard stereo speakers and not high-end harman/kardon speakers inside. I have a Satellite L300 with the same stereo speakers. The sound is OK, but not as good as on other laptops with h/k speakers.
By the way: what kind of help do you expect? A new driver that makes the speakers louder, maybe?
-
When you invite someone to a game, you get this error:
Request error (invalid_request). Your request could not be processed. This could be caused by a misconfiguration or possibly a badly formed request.
For assistance, contact your network support team.

Hi Paolombana,
· Which game are you trying to play?
· Do you get this error message when you try to invite one particular contact?
· What operating system is installed on your computer?
Method 1
If you are using Windows Vista, you can run the built-in network diagnostic tool and see if that fixes the problem.
Network connection problems
http://Windows.Microsoft.com/en-us/Windows-Vista/troubleshoot-network-connection-problems
Method 2
If you are using Windows 7, run the network troubleshooter utility.
Using the network troubleshooter in Windows 7
http://Windows.Microsoft.com/en-us/Windows7/using-the-network-troubleshooter-in-Windows-7
Hope the information helps. Please post back and let us know.
Regards
Joel S
Microsoft Answers Support Engineer
Visit our Microsoft answers feedback Forum and let us know what you think. -
Does BLASTP_ALIGN query performance decrease as the size of the reference table increases?
Newbie here.
I'm using Oracle 11.2.0.3.
I am currently running, and looping through, the following cursor, which uses Oracle's BLASTP_ALIGN tool:
FOR MyALIGN_TAB IN
(
Select a.query_string, H.AA_SEQUENCE target_string, t_seq_id, pct_identity, alignment_length, mismatches, positive, gap_openings, gap_list, q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame, score, expect
from (select t_seq_id, pct_identity, alignment_length, mismatches, positive, gap_openings, gap_list, q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame, score, expect
      from table (BLASTP_ALIGN ((SELECT p_INPUT_SEQUENCE query_string FROM DUAL),
                  CURSOR (Select GB_ACCESSION, AA_SEQUENCE from HUMAN_DB1.HUMAN_PROTEINS),
                  1, -1, 0, 0, 'PAM30', .1, 10, 1, 2, 0, 0)
                 )
     ),
     (SELECT p_INPUT_SEQUENCE query_string FROM DUAL) a,
     HUMAN_DB1.HUMAN_PROTEINS H
WHERE UPPER (t_seq_id) = UPPER (H.gb_accession) and gap_openings = 0
)
LOOP
This initial query works relatively well (about 2 seconds) against a target table of approximately 20,000 records (shown above as the HUMAN_DB1.HUMAN_PROTEINS table). However, if I select a target table that contains approximately 170,000 records, query performance degrades significantly, to about 45 seconds. The two tables have identical indexes.
I was wondering if there are ways to improve the performance of BLASTP_ALIGN on large tables? There doesn't seem to be a lot of documentation on BLASTP_ALIGN. I could find this (http://docs.oracle.com/cd/B19306_01/datamine.102/b14340/blast.htm), but it wasn't that useful.
Any ideas would be greatly appreciated.
In case anyone is interested... it turned out that the AA_SEQUENCE column in the following cursor: CURSOR (Select GB_ACCESSION, AA_SEQUENCE from HUMAN_DB1.HUMAN_PROTEINS) was a CLOB field. In my second target table, the corresponding column was VARCHAR2. One hypothesis is that BLASTP_ALIGN does a VARCHAR2 -> CLOB conversion internally. I changed the table to have a CLOB column and successfully ran BLASTP_ALIGN against the 170,000 records in about 8 seconds (not great, but better than 45).
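A minimal sketch of one way to do that column conversion (the table and column names here are placeholders, since the real DDL wasn't posted):

-- assume target_proteins(gb_accession VARCHAR2(30), aa_sequence VARCHAR2(4000))
ALTER TABLE target_proteins ADD (aa_sequence_clob CLOB);
UPDATE target_proteins SET aa_sequence_clob = TO_CLOB(aa_sequence);
COMMIT;
ALTER TABLE target_proteins DROP COLUMN aa_sequence;
ALTER TABLE target_proteins RENAME COLUMN aa_sequence_clob TO aa_sequence;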
I will mark it as answered.
-
How to tune the query performance, any1 help me to impropve... Thanks in advanceCURSOR c_exercise_list IS SELECT DECODE(v_mfd_mask_id ,'Y',' ',o.opt_id) opt_id, DECODE(v_mfd_mask_id ,'Y',' ',o.soc_sec) soc_sec, P.plan_id plan_id, E.exer_id exer_id, E.exer_num, DECODE(G.sar_flag, 0, DECODE(G.plan_type, 0, '1', 1, '2', 2, '3', 3, ' ', 4,'5', 5, '6', 6, '7', 7, '8', 8, '9', '0'), ' ') option_type, TO_CHAR(G.grant_dt, 'YYYYMMDD') grant_dt, TO_CHAR(E.exer_dt, 'YYYYMMDD') exer_dt, E.opts_exer opts_exer, E.mkt_prc mkt_prc, E.swap_prc swap_prc, E.shrs_swap shrs_swap, decode(e.exer_type,2,decode(xe.cash_partial,'Y','A','2'),TO_CHAR(E.exer_type)) exer_type, E.sar_shrs sar_shrs, NVL(ROUND(((xe.sar_shrs_withld_optcost - (e.opts_exer * g.opt_prc) / e.mkt_prc) * e.mkt_prc),2),0)+e.sar_cash sar_cash, NVL(f.fixed_fee1,0) fixed_fee1, NVL(f.fixed_fee2,0) fixed_fee2, NVL(f.fixed_fee3,0) fixed_fee3, NVL(f.commission,0) commission, NVL(f.sec_fee,0) sec_fee, NVL(f.fees_paid,0) fees_paid, NVL(ct.amount,0) cash_tend, E.shrs_tend shrs_tend, G.grant_id grant_id, NVL(G.grant_cd, ' ') grant_cd, NVL(xg.child_symbol,' ') child_symbol, NVL(xg.opt_gain_deferred_flag,'N') defer_flag, o.opt_num opt_num, --XO.new_ssn, DECODE(v_mfd_mask_id ,'Y',' ',xo.new_ssn) new_ssn, xo.use_new_ssn ,xo.tax_verification_eligible tax_verification_eligible ,(SELECT TO_CHAR(MIN(settle_dt),'YYYYMMDD') FROM tb_ml_exer_upload WHERE exer_num = E.exer_num AND user_id=E.user_id AND NVL(settle_dt,TO_DATE('19000101','YYYYMMDD'))>=E.exer_dt) AS settle_dt ,xe.rsu_type AS rsu_type ,xe.trfbl_det_name AS trfbl_det_name ,o.user_txt1,o.user_txt2,xo.user_txt3,xo.user_txt4,xo.user_txt5,xo.user_txt6,xo.user_txt7 ,xo.user_txt8,xo.user_txt9,xo.user_txt10,xo.user_txt11, xo.user_txt12, xo.user_txt13, xo.user_txt14, xo.user_txt15, xo.user_txt16, xo.user_txt17, xo.user_txt18, xo.user_txt19, xo.user_txt20, xo.user_txt21, xo.user_txt22, xo.user_txt23, xo.user_dt2, xo.adj_dt_hire_vt_svc, xo.adj_dt_hire_vt_svc_or, xo.adj_dt_hire_vt_svc_or_dt, xo.severance_plan_code, xo.severance_begin_dt, xo.severance_end_dt, xo.retirement_bridging_dt ,NVL(xg.pu_var_price ,0) v_pu_var_price ,NVL(xe.ficamed_override,'N') v_ficmd_ovrride ,NVL(xe.vest_shrs,0) v_vest_shrs ,NVL(xe.client_exer_id,' ') v_client_exer_id ,(CASE WHEN xg.re_tax_flag = 'Y' THEN pk_xop_reg_outbound.Fn_GetRETaxesWithheld(g.grant_num, E.exer_num, g.plan_type) ELSE 'N' END) re_tax_indicator -- 1.5V ,xe.je_bypass_flag ,xe.sar_shrs_withld_taxes --Added for SAR july 2010 release ,xe.sar_shrs_withld_optcost --Added for SAR july 2010 release FROM (SELECT exer.* FROM exercise exer WHERE NOT EXISTS (SELECT s.exer_num FROM suspense s WHERE s.exer_num = exer.exer_num AND s.user_id = exer.user_id AND exer.mkt_prc = 0))E, grantz G, xop_grantz xg, optionee o, xop_optionee xo, feeschgd f, cashtendered ct, planz P,xop_exercise xe WHERE E.grant_num = G.grant_num AND E.user_id = G.user_id AND E.opt_num = o.opt_num AND E.user_id = o.user_id AND (G.grant_num = xg.grant_num(+) AND G.user_id=xg.user_id(+)) AND (o.opt_num = xo.opt_num(+) AND o.user_id=xo.user_id(+)) AND E.plan_num = P.plan_num AND E.user_id = P.user_id AND E.exer_num = f.exer_num(+) AND E.user_id = ct.user_id(+) AND E.exer_num = ct.exer_num(+) AND E.user_id = ct.user_id(+) AND E.exer_num=xe.exer_num(+) AND E.user_id=xe.user_id(+) AND G.user_id = USER AND NOT EXISTS ( SELECT tv.exer_num FROM tb_xop_tax_verification tv--,exercise ex WHERE tv.exer_num = e.exer_num AND tv.user_id = e.user_id AND tv.user_id = v_cms_user AND tv.status_flag IN (0,1,3,4, 5)) -- Not Processed ;
Published by: BluShadow on February 21, 2013 08:14
corrected {noformat}{noformat} tags. Please read {message:id=9360002} and learn how to post code correctly.
956684 wrote:
I got CPU cost: 458.50, time: 1542.90. Is there anything we can do to improve performance? There is no full table scan applied to any of the mentioned tables, and most of the columns are accessed by index unique scans... can someone help me to find a solution?

Your request reads like: "my car doesn't work, the car's colour is grey. Can you solve this problem?"
Please read the FAQ I already posted and follow the instructions.
-
Does DIMINFO affect query performance?
Hi all
Can a well-defined USER_SDO_GEOM_METADATA.DIMINFO improve query performance?
For all the tables in my system, the USER_SDO_GEOM_METADATA view looks like this:
DIMINFO
X; -2147483648; 2147483648; 5E-5
Y; -2147483648; 2147483648; 5E-5
Z; -2147483648; 2147483648; 5E-5
Thanks to you all

The simple answer is yes - it provides an alternative and faster I/O path.
The real question is whether it suits the data model and its intended use.
So your question is similar to asking whether indexing a varchar2 column is good or not. The answer is "+ it depends +".
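For reference, a sketch of registering data-derived metadata for one layer instead of the catch-all bounds above (the bounds, tolerance and SRID below are purely illustrative values):

DELETE FROM user_sdo_geom_metadata
WHERE  table_name = 'MY_SPATIAL_TABLE' AND column_name = 'GEOM';

INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('MY_SPATIAL_TABLE', 'GEOM',
        SDO_DIM_ARRAY(
          SDO_DIM_ELEMENT('X', 400000, 900000, 0.005),   -- actual data extent, not -2^31 .. 2^31
          SDO_DIM_ELEMENT('Y', 5000000, 5600000, 0.005)),
        2154);
COMMIT;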
-
I have a table that records users' visits to pages on our website. The information takes the following structure within our records table:
VisitID | VisitorID | VisitPage | VisitDate
Index | UniqueID | VisitPage | Date/time
I need to get the VisitorIDs that visited within a user-defined date range for a report that is being written, and then get a count of the distinct days on which each user visited our website. I have a working query attached that will get me the result set I want, but it is so _very_ slow. Query Analyzer shows that 84% of it is table scans. I hope someone has a suggestion on how to optimize it. I am currently working on an MSSQL 8.0 server, so I have no access to the trunc() function that I would prefer to use on the dates, but that's a minor inconvenience.
Thank you
-Daniel
Quote:
Posted by: Dan Bracuk
Do you have an index on visitdate?

Does visitdate contain real times, or are all the time parts 0:00? If they are all 00:00, you don't need the convert function. Otherwise, you might have better luck selecting all the data from your database and using Q of Q (query of queries) for the counts.
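A rough sketch of the indexed approach on MSSQL 8.0 / SQL Server 2000 (the table name and the @StartDate/@EndDate parameters are placeholders, since the actual query wasn't posted):

-- supporting index for the date-range filter
CREATE INDEX IX_Visits_VisitDate ON Visits (VisitDate, VisitorID);

-- distinct visit days per visitor within the report range
SELECT VisitorID,
       COUNT(DISTINCT CONVERT(varchar(8), VisitDate, 112)) AS VisitDays  -- yyyymmdd, no trunc() needed
FROM   Visits
WHERE  VisitDate >= @StartDate
  AND  VisitDate <  DATEADD(dd, 1, @EndDate)
GROUP BY VisitorID;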
Dan was right on this one. Looking at the table design, the index was missing. Once I added an index my query performance improved dramatically, enough so that I don't have many worries any more. Thanks for the suggestion.
-Daniel
-
After bulk import - query performance initially very bad but OK the next day?
Hello
We have noticed twice so far that, after creating a new schema and bulk importing data, queries against the spatial tables are very slow.
(We did not check the non-spatial tables, however.) After importing the data, indexes and statistics are created.
All the data is quite small (fewer than 250,000 features spread over several tables). The slowdown was still evident hours after creating the indexes and statistics. But after coming back to work the next day, query performance was good - as originally expected. Nobody did anything in the meantime. The database is 11gR2.
I vaguely remember reading that statistics etc. cannot be used immediately after creation/update, but I didn't know they took so long to kick in. Is there an explanation for this behaviour? It isn't really a problem for us, but I would like to know why it happens.
Thank you, Rob

Rob,
Note that since 10g there is an automated DBMS_STATS statistics collection job that runs overnight. It appears that this job creates appropriate statistics that result in a good execution plan. Take a look in Enterprise Manager to see the details of this job.
You mentioned that you collect statistics after running your bulk load - I guess the problem is that you are not gathering them with the same options as the nightly job. Can you post the command you run to collect the statistics after the bulk load?
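For comparison, a typical post-load gather that stays close to the defaults the nightly job uses might look like this (the schema and table names are placeholders):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'GIS_OWNER',
    tabname          => 'ROADS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);   -- also gathers statistics on the indexes
END;
/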
John
-
We are experiencing very slow performance with the query below. The explain plan is attached too. Can anyone help us with suggestions? We expect that the nested loops could be avoided by following certain steps or by some changes to the structure of the query.
SELECT assoc.name_first || ' ' || assoc.name_last AS client_manager
FROM   t1 assoc,
       t2 ce,
       t3 aa,
       t4 act,
       t5 cc
WHERE  ce.ent_id = act.primary_ent_id (+)
AND    ce.ent_id = :p_ent_id
AND    assoc.id = CASE WHEN aa.assoc_id IS NULL THEN (SELECT DISTINCT ca.assoc_id
                                                      FROM   t6 ca
                                                      WHERE  ca.comp_id = :p_ent_id
                                                      AND    ca.cd_code IN
                                                             ('CMG', 'GCR',
                                                              'BCM', 'CCM',
                                                              'BAE'))
                       ELSE aa.assoc_id
                  END
AND    nvl (act.activity_id, 0) = nvl (aa.activity_id, 0)
AND    assoc.bk_code = cc.cpy_no
AND    assoc.center = cc.cct_no
AND    aa.role_code IN ('CMG', 'GCR', 'BCM', 'MAC', 'BAE')
Operation  (owner.object)  Cost  Cardinality  Bytes
-------  ---------------------------------------------------
SELECT STATEMENT, GOAL = CHOOSE  29 2 232
NESTED LOOPS  27 2 232
NESTED LOOPS  26 2 214
NESTED LOOPS  25 2 82
NESTED LOOPS OUTER  3 2 50
INDEX UNIQUE SCAN  DEC.COREENT_PK  1 1 6
VIEW  CBD.T4  2 2 38
NESTED LOOPS  2 2 48
TABLE ACCESS BY INDEX ROWID  CBD.T4  1 8 128
INDEX RANGE SCAN  PEH.PRIMATY_CLIENT_IDX  1 8
TABLE ACCESS BY INDEX ROWID  PEH.ACTIVITY_TYPE  1 1 8
INDEX UNIQUE SCAN  PEH.ACTIVITY_TYPE_PK  1 1
INDEX FULL SCAN  CDB.ACTIVITY_ASSOCIATE_PK  11 1 16
TABLE ACCESS BY INDEX ROWID  SEC.T1  1 1 66
INDEX UNIQUE SCAN  DEC.ASSOC_PK  1 1
SORT UNIQUE NOSORT  16 1 2
INDEX RANGE SCAN  DEC.COMPASSOC_PK  1 1 16
INDEX UNIQUE SCAN  DEC.CST_CNTR_PK  1 1 9
I appreciate your time and efforts.

Maybe try this:
SELECT assoc.name_first || ' ' || assoc.name_last AS client_manager
FROM   t1 assoc, t2 ce, t3 aa, t4 act, t5 cc, t6 ca
WHERE  ce.ent_id = act.primary_ent_id(+)
AND    ce.ent_id = :p_ent_id
AND    ca.comp_id = ce.ent_id
AND    ca.cd_code = aa.role_code
AND    assoc.id = nvl(aa.assoc_id, ca.assoc_id)
AND    assoc.bk_code = cc.cpy_no
AND    assoc.center = cc.cct_no
AND    (act.activity_id = aa.activity_id OR (act.activity_id is null and aa.activity_id is null))
AND    aa.role_code IN ('CMG', 'RCM', 'BCM', 'CCM', 'BAE')
Untested code. Please check whether it gives the correct result.
-
Bad Photoshop performance on a high-end machine.
Hi all
New to the forum but not new to Photoshop; I have been using it for many years now and understand a bit about its performance issues. However, this one is new to me and something I cannot understand at all.
Let's start with the question and then run through the specs of the computer I'm using.
Question:
Photoshop will work fine for about an hour and then, when opening multiple files, it will slow down to the point of lag. Closing and reopening will not fix the problem. The only solution I have right now is to reset Photoshop by holding Alt, Shift and Ctrl when opening it. This resets it back to the first hour or so of good speed.
I thought it was a RAM problem or a graphics problem but have not noticed any spikes in my system while it is running. This is a work tool for me, so I need to understand what the problem is.
Here is the Spec:
System:
Manufacturer: Hewlett-Packard. Model: HP Z820 workstation. Total amount of system memory: 64.0 GB RAM. System type: 64-bit operating system. Number of processor cores: 8. Storage:
(LOTS) - cannot display, but there is more than 1000 GB. (Sorry for the inconvenience)
Graphics card:
Graphics card type: NVIDIA Quadro K5000. Total available graphics memory: 36569 MB. Dedicated graphics memory: 4096 MB. Dedicated system memory: 0 MB. Shared system memory: 32473 MB. Display adapter driver version: 9.18.13.5306. Primary monitor resolution: 1920 x 1200. Secondary monitor resolution: 1200 x 1920. DirectX version: DirectX 10. Processor:
Intel Xeon CPU E5-2643 0 @ 3.30 GHz
System: Windows 7 Professional
No clue as to why it would run slowly, because the specs seem to be way more than necessary.
Thank you all
-A-
That is a good point. There is a process called CEPHtmlEngine that often consumes a gigantic chunk of CPU and often runs in several instances. From my experience it lengthens Photoshop's start-up time, so it doesn't quite match a workstation slowing down over time, but it is worth checking for. If you find that CEPHtmlEngine is running, then close the Libraries panel and restart Photoshop. As long as the Libraries panel is closed, the process does not run. It is a known bug in all of the Creative Suite and is planned to be fixed in the next version.
-
I have two schemas in two databases.
When I check the SQL plan, the two schemas behave differently (one goes for a full table scan and the other for an index range scan).
Both schemas have almost similar kinds of data, indexes and loads.
What is causing the difference in SQL performance?

In the second plan, the optimizer expects the range scan on IDX_TSK_ID in step 5 to return only 14 rows and decides it's a good idea to join the second table, TB_TRANS_MSTR, with a nested loops join (looping over the 14 TASK_INSTANCE results and doing a lookup on each iteration using the PK_TRANS_ID index).
In the first plan, the optimizer decides to read TB_TRANS_MSTR (containing 978 rows) and build an in-memory hash table from the result - and then probe it with the TASK_INSTANCE row set.
The next question is: which plan is more suitable and gives better performance? The chances are high that the better plan is the one based on the more fitting cardinality estimates. Those estimates are based on fairly simple arithmetic (more or less) - and they depend on the table and column statistics. So the dba_tab_columns entries Swen W. mentioned would be useful. In addition, the text of the query would probably shed some light on the question.
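As a starting point, comparing the relevant column statistics in the two databases could look like this (the column filter is illustrative; substitute the columns used in the join and filter predicates):

SELECT owner, table_name, column_name,
       num_distinct, num_nulls, density, histogram, last_analyzed
FROM   dba_tab_col_statistics
WHERE  table_name IN ('TASK_INSTANCE', 'TB_TRANS_MSTR')
ORDER BY owner, table_name, column_name;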
-
Partitioning strategy for OBIEE query performance
I am using partitioning for the first time and I am having trouble determining whether I have partitioned my fact table in a way that will let partition pruning work with the queries OBIEE generates. I have set up a simple example, using a query I wrote, to illustrate my problem. In this example I have a star schema with one fact table, and I join in two dimensions. My fact table is LIST partitioned on JOB_ID with RANGE subpartitions on TIME_ID, and those are the keys that link to the two dimensions I use in this query.
Select sum (boxbase)
from TEST_RESPONSE_COE_JOB_QTR A
join DIM_STUDY C on A.job_id = C.job_id
join DIM_TIME B on A.response_time_id = B.time_id
where C.job_name = 'FY14 CSAT'
and B.fiscal_quarter_name = 'Quarter 1';
As far as I can tell, because the query actually filters on columns in the dimensions instead of the corresponding columns in the fact table, the pruning isn't actually happening. I actually see slightly better performance from a non-partitioned table, even though I wrote this query specifically for the partitioning strategy that is now in place.
If I run the next statement, it runs a lot faster and the explain plan is very simple; it looks to me like it prunes down to a single subpartition, as I hoped. But this is not what any query generated by OBIEE will look like.
Select sum (boxbase)
from TEST_RESPONSE_COE_JOB_QTR
where job_id = 101123480
and response_time_id < 20000000;
Any suggestions? I get some benefit from partition exchange loading with this configuration, but if I'm going to sacrifice reporting performance then maybe it isn't worthwhile, or at the very least I would need to get rid of my subpartitions if they are not providing any benefit.
Here are the explain plans that I got for the two queries in my original post:

First query (filtering through the dimension tables):

| Operation                         | Object name                                   | Rows  | Bytes | Cost  | Pstart        | Pstop         |
| SELECT STATEMENT  Mode=ALL_ROWS   |                                               |     1 |       | 20960 |               |               |
|  SORT AGGREGATE                   |                                               |     1 |    13 |       |               |               |
|   VIEW                            | SYS.VW_ST_5BC3A99F                            |  101K |    1M | 20960 |               |               |
|    NESTED LOOPS                   |                                               |  101K |    3M | 20950 |               |               |
|     PARTITION LIST SUBQUERY       |                                               |  101K |    2M |  1281 | KEY(SUBQUERY) | KEY(SUBQUERY) |
|      PARTITION RANGE SUBQUERY     |                                               |  101K |    2M |  1281 | KEY(SUBQUERY) | KEY(SUBQUERY) |
|       BITMAP CONVERSION TO ROWIDS |                                               |  101K |    2M |  1281 |               |               |
|        BITMAP AND                 |                                               |       |       |       |               |               |
|         BITMAP MERGE              |                                               |       |       |       |               |               |
|          BITMAP KEY ITERATION     |                                               |       |       |       |               |               |
|           BUFFER SORT             |                                               |       |       |       |               |               |
|            INDEX SKIP SCAN        | CISCO_SYSTEMS.DIM_STUDY_UK                    |     1 |    17 |     1 |               |               |
|           BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12    |       |       |       | KEY           | KEY           |
|         BITMAP MERGE              |                                               |       |       |       |               |               |
|          BITMAP KEY ITERATION     |                                               |       |       |       |               |               |
|           BUFFER SORT             |                                               |       |       |       |               |               |
|            VIEW                   | CISCO_SYSTEMS.index$_join$_052                |   546 |    8K |     9 |               |               |
|             HASH JOIN             |                                               |       |       |       |               |               |
|              INDEX RANGE SCAN     | CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX            |   546 |    8K |     2 |               |               |
|              INDEX FULL SCAN      | CISCO_SYSTEMS.TIME_ID_PK                      |   546 |    8K |     8 |               |               |
|           BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11  |       |       |       | KEY           | KEY           |
|     TABLE ACCESS BY USER ROWID    | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR       |     1 |    15 | 19679 | ROWID         | ROW L         |

Second query (filtering directly on the fact-table partition keys):

| Operation                        | Object name                              | Rows  | Bytes | Cost | Pstart | Pstop |
| SELECT STATEMENT  Mode=ALL_ROWS  |                                          |     1 |       | 1641 |        |       |
|  SORT AGGREGATE                  |                                          |     1 |    13 |      |        |       |
|   PARTITION LIST SINGLE          |                                          |  198K |    2M | 1641 | KEY    | KEY   |
|    PARTITION RANGE SINGLE        |                                          |  198K |    2M | 1641 |      1 |     1 |
|     TABLE ACCESS FULL            | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR  |  198K |    2M | 1641 |     36 |    36 |
Does it seem unreasonable to think that relying on our indexes on a non-partitioned table (or one partitioned in a way focused only on helping ETL) could actually work better than partitioning in a way where we can get some dynamic pruning, but never static pruning?
Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and predicates that are typically used and the number of rows generally returned.
Partition pruning eliminates partitions ENTIRELY - regardless of the number of rows in the partition or table. An index, on the other hand, is ignored if the query predicate needs a significant number of rows, since Oracle can determine that it is cheaper to simply use multiblock reads and do a full scan.
A table with 1 million rows and a query predicate that wants 100K of them probably will not use an index at all. But the same table with two partitions could easily have one of the partitions pruned, making the "effective number of rows" only 500K or less.
If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.
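One way to check that (a sketch using the query from the original post):

EXPLAIN PLAN FOR
  SELECT SUM(boxbase)
  FROM   test_response_coe_job_qtr a
         JOIN dim_study c ON a.job_id = c.job_id
         JOIN dim_time  b ON a.response_time_id = b.time_id
  WHERE  c.job_name = 'FY14 CSAT'
  AND    b.fiscal_quarter_name = 'Quarter 1';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Look at the Pstart/Pstop columns: literal partition numbers or KEY(...) mean pruning is happening,
-- while PARTITION LIST ALL / PARTITION RANGE ALL steps mean that step reads every partition.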
Select sum (boxbase)
from TEST_RESPONSE_COE_JOB_QTR A
join DIM_STUDY C on A.job_id = C.job_id
join DIM_TIME B on A.response_time_id = B.time_id
where C.job_name = 'FY14 CSAT'
and B.fiscal_quarter_name = 'Quarter 1';
So, what does a typical value of 'A.response_time_id' look like? And what does a 'B.time_id' represent?
Because one way of providing explicit partition keys would be to use a range of 'response_time_id' values on the FACT table rather than a value of 'fiscal_quarter_name' on the DIMENSION table.
As if 'Quarter 1' corresponded to a range of dates from '01/01/yyyy' to '03/31/yyyy'.
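A sketch of that rewrite (the bind names are placeholders; the boundary values would be looked up from DIM_TIME or hard-coded per reporting period):

SELECT SUM(boxbase)
FROM   test_response_coe_job_qtr a
       JOIN dim_study c ON a.job_id = c.job_id
WHERE  c.job_name = 'FY14 CSAT'
AND    a.response_time_id >= :quarter_start_time_id   -- explicit fact-table partition keys
AND    a.response_time_id <  :quarter_end_time_id;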
Also, you said you are partitioning on JOB_ID and TIME_ID.
But if your queries relate mainly to DATES / TIMES, you might be better off using TIME_ID for the PARTITIONS and JOB_ID, if necessary, for the subpartitioning.
Date range partitioning is one of the most common schemes around, and it serves both performance and ease of maintenance (deleting/archiving old data).
-
Questions after a first TimesTen trial: memory footprint and query performance
Hello!
I'm testing the TimesTen In-Memory Database cache to see if it could help with some ad hoc reporting queries that take too long to run in our Oracle database.
Here is the configuration:
1.) TimesTen server: 2 quad-core CPUs with 32 GB of RAM, running Windows 2003 x64.
2.) Set up two read-only cache groups: a small one for a quick test, and the real one that maps to a database table as follows.
The database table looks like this:
The Oracle table has 1,367,336,329 rows and the table segments are approximately 61 GB, so an average row takes about 46 bytes.
CREATE TABLE "TB_BD"
( "VALUE"          NUMBER NOT NULL ENABLE,
  "TIME_UTC"       TIMESTAMP (6) NOT NULL ENABLE,
  "ASSIGNED_TO_ID" NUMBER NOT NULL ENABLE,
  "EVENT_ID"       NUMBER,
  "ID"             NUMBER NOT NULL ENABLE,
  "ID_LABEL"       NUMBER NOT NULL ENABLE,
  "ID_ALARM"       NUMBER,
  CONSTRAINT "PK_TB_BD" PRIMARY KEY ("ID")
);
Since I have 32 GB on the TimesTen machine, I created the cache group with a WHERE predicate on the ID column so that only the 98,191,284 most recent rows get into the cache group. In the Oracle database this is around 4.2 GB of data.
After loading the cache group, dssize returns:
Command> dssize
PERM_ALLOCATED_SIZE:    26624000
PERM_IN_USE_SIZE:       19772852
PERM_IN_USE_HIGH_WATER: 26622892
TEMP_ALLOCATED_SIZE:    32768
TEMP_IN_USE_SIZE:       10570
TEMP_IN_USE_HIGH_WATER: 14192
(Note: the high PERM_IN_USE_HIGH_WATER comes from a first test where I tried to cache too many rows.)
I then ran, on the TimesTen machine:
ttIsql> select avg(value) from tb_bd;
It is still running after 10 hours, so I can already tell that the query execution time does not really meet my expectations. :-)
In the Windows Task Manager I see that ttIsql constantly uses 13% CPU (= 100% / 8 cores), so it is only using one core; but even if it used all the cores and the execution time were 1/8th, it still wouldn't meet my expectations. :-)
I also see in the Windows Task Manager that the 'Mem Usage' of my ttIsql process slowly gets higher and higher, currently about 14 GB. I believe this is shared memory being mapped that is already mapped by the TimesTen process, which has approximately 24 GB mapped. The query is probably 53% through, so the total query time may be around 20 hours.
My questions:
1.) From what I have tested, 1 GB of data in the Oracle table needs about 4-5 GB of memory in the TimesTen database. I read a post on the forum that explained this with "data is optimized for performance, not space, in TT", but I don't quite buy it. A factor of 4-5 means that the CPU must move 4 to 5 times the amount of data. The data is not compressed in the Oracle database; it is in its natural binary form. I would like to understand why the data takes so much more space in TT - for example, when you have a NUMBER in Oracle, what does TT do with it to make it 4-5 times bigger, and why does it do that?
2.) Regarding query performance: how long should it even take, at baseline, to scan about 20 GB of data in memory, count the number of rows and sum a NUMBER column, with a division to get avg(<column>)? Is there something flawed in my setup?
Thanks for the ideas!
Kind regards
Marcus
Published by: user11973438 on 06.09.2012 23:27

I agree that using 4-5 times more memory than Oracle is far from optimal. Your schema is unfortunately a little pathological; normally we see more like 2-3 times (which is still more than we would really like). There are many internal differences between Oracle and TimesTen in the way data is stored internally. Some are historical, and some are due to optimizing for performance rather than storage efficiency.
For example:
1. Oracle rows are always variable length in storage, while TimesTen rows are always fixed length in storage.
2. In Oracle, a column defined as NUMBER occupies only the space needed by the stored value. In TimesTen a NUMBER column always occupies the space needed to store the maximum possible precision, and therefore takes up 22 bytes. You can reduce this by restricting it explicitly using NUMBER(n) or NUMBER(n,p).
3. TimesTen does not support any kind of parallel query within a single data store. Every query executes using at most one CPU core; Oracle DB supports parallel query, and that can make a big difference for certain types of operation.
4. NUMBER is implemented in software and is relatively inefficient. Calculating the average of almost 100M rows will take time... You could try changing this to a native binary type (TT_INTEGER, TT_BIGINT or BINARY_DOUBLE depending on your data); this will no doubt give a good improvement (but see point 5 below, and the sketch after this list).
5. With a database of this size, it is possible that Windows is doing a lot of paging while the query runs. I have also myself observed on Windows that there seems to be a penalty when a process touches/maps a page for the first time. You should monitor paging activity via Task Manager while the query is running. Any significant paging will really hurt performance. Also, try executing the query a second time without disconnecting ttIsql; that may also show a benefit. On Unix/Linux platforms we provide an option (MemoryLock=4) to lock the entire database in physical memory to prevent any paging, but it is not available on Windows.
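As a rough sketch of point 4 (hypothetical DDL, since the real cache group definition wasn't posted), the cached table could declare native TimesTen numeric types instead of unconstrained NUMBER; the same column definitions would then go into the CREATE CACHE GROUP statement:

CREATE TABLE tb_bd_native
( value          BINARY_DOUBLE NOT NULL,   -- was NUMBER
  time_utc       TIMESTAMP(6)  NOT NULL,
  assigned_to_id TT_BIGINT     NOT NULL,   -- was NUMBER
  event_id       TT_BIGINT,
  id             TT_BIGINT     NOT NULL,
  id_label       TT_BIGINT     NOT NULL,
  id_alarm       TT_BIGINT,
  PRIMARY KEY (id));

Whether each NUMBER column can safely become TT_BIGINT or BINARY_DOUBLE depends on the actual data and on the cache type-mapping rules, so treat this only as an assumption to test.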
Chris