SQL Access Advisor vs SQL Tuning Advisor question
I am trying to understand a question, which is: Which of the following identifies and creates an index to reduce the DB time for a given SQL statement?
(a) SQL Tuning Advisor
(b) SQL Access Advisor
I think the right answer is (a) SQL Tuning Advisor, because the question is about a given SQL statement, and the Tuning Advisor can also be configured to automatically create an index when it runs as part of the automated maintenance tasks. However, the marked answer is (b) SQL Access Advisor. Which is correct, and why?
Thank you!
Waldrfm,
I guess this OCP question is also asking whether STA or SAA makes its recommendations in terms of DB Time (%) or in terms of workload cost (%).

When you look at the details of an STA recommendation, it shows recommendations by DB time (which includes DB service time + DB queue time), individually for each SQL statement.

However, when you look at the details of an SAA recommendation, it shows recommendations by improvement in total cost (the CBO cost, SUM(IO + CPU)) rather than by the DB time of each SQL statement.
For example:

When you tune 10 identified SQL statements using STA, it tunes each SQL statement individually, acting on that statement's DB time.

SAA tunes the 10 identified SQL statements as a workload, making recommendations based on improving the total cost of the entire workload.
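The distinction can be sketched with a toy calculation (a hypothetical illustration, not Oracle output - the statement names, times and costs below are invented): STA ranks each statement by its own DB time, while SAA judges the workload as a whole by total optimizer cost, SUM(IO + CPU).

```python
# Toy illustration of the two ranking bases (all numbers are made up).
workload = [
    # (statement_id, db_time_secs, io_cost, cpu_cost)
    ("stmt1", 40.0, 900, 100),
    ("stmt2", 12.0, 300, 700),
    ("stmt3", 5.0, 2000, 500),
]

# STA-style view: each statement judged individually by its own DB time.
by_db_time = sorted(workload, key=lambda s: s[1], reverse=True)
print("Tune first (DB time):", by_db_time[0][0])

# SAA-style view: the workload judged as a whole by total CBO cost, SUM(IO + CPU).
total_cost = sum(io + cpu for _, _, io, cpu in workload)
print("Total workload cost:", total_cost)
```

Note that the two views can disagree: stmt3 has the lowest DB time but the highest optimizer cost, so the two advisors could prioritise it very differently.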
Tags: Database
Similar Questions
-
Application of SQL statement tuning
1. SQL: a query that never ends. 11 hours running, but nothing gets inserted into the tables.
2. Database version:

SELECT * FROM V$VERSION;

BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
(i)
SQL> SHOW PARAMETER OPTIMIZER

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      10.2.0.4
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
SQL>

(III)
SQL> SHOW PARAMETER DB_FILE_MULTI

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     16
SQL>

(IV)
SQL> SHOW PARAMETER DB_BLOCK_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_size                        integer     8192
SQL>

4. Timing and Autotrace output

SQL> SHOW PARAMETER CURSOR_SHARING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT
SQL>
(a)
(b)
SQL> SET AUTOTRACE TRACEONLY
SQL> query;

99999 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 888060805

------------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        | 99999 |   171M|  6452   (1)| 00:01:18 |
|*  1 |  COUNT STOPKEY     |        |       |       |            |          |
|   2 |   TABLE ACCESS FULL| STGING | 99999 |   171M|  6452   (1)| 00:01:18 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<100000)

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
      33379  consistent gets
      24108  physical reads
          0  redo size
  177773283  bytes sent via SQL*Net to client
      46901  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99999  rows processed
6. Explain Plan output

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
rem Couldn't do SET AUTOTRACE TRACEONLY as the query takes a long time.
SQL> query;

Execution Plan
----------------------------------------------------------
Plan hash value: 696991379

---------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 | 99999 |   171M|       | 77733   (1)| 00:15:33 |
|   1 |  HASH UNIQUE                   |                 | 99999 |   171M|   390M| 77733   (1)| 00:15:33 |
|*  2 |   CONNECT BY WITHOUT FILTERING |                 |       |       |       |            |          |
|   3 |    VIEW                        |                 | 99999 |   171M|       | 40120   (1)| 00:08:02 |
|*  4 |     COUNT STOPKEY              |                 |       |       |       |            |          |
|   5 |      TABLE ACCESS FULL         | STG_OLD_RUBRIC1 |   621K|  1066M|       | 40120   (1)| 00:08:02 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=(LENGTH("STRING")-LENGTH(REPLACE("STRING",' Criterion:')))/10)
   4 - filter(ROWNUM<100000)
SQL>
7. TKPROF output

SQL> ed
Wrote file afiedt.buf

  1  EXPLAIN PLAN SET STATEMENT_ID = 'A' FOR
  2  query;
  3  /
Explained.

SQL> SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 696991379

------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        | 99999 |   171M|       | 77733   (1)| 00:15:33 |
|   1 |  HASH UNIQUE                   |        | 99999 |   171M|   390M| 77733   (1)| 00:15:33 |
|*  2 |   CONNECT BY WITHOUT FILTERING |        |       |       |       |            |          |
|   3 |    VIEW                        |        | 99999 |   171M|       | 40120   (1)| 00:08:02 |
|*  4 |     COUNT STOPKEY              |        |       |       |       |            |          |
|   5 |      TABLE ACCESS FULL         | STGING |   621K|  1066M|       | 40120   (1)| 00:08:02 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=(LENGTH("STRING")-LENGTH(REPLACE("STRING",' Criterion:')))/10)
   4 - filter(ROWNUM<100000)

19 rows selected.
SQL>
Any advice? Why is the query not able to generate a trace file, and why does it take so long?

SQL> alter session set timed_statistics = TRUE;
Session altered.
SQL> alter session set sql_trace = TRUE;
Session altered.
SQL> query;
rem it is still running; no idea what is going on.
Let me know if further information is needed.
Thank you.

Something like this...?
WITH T AS (SELECT 'Criterion: Crit1.
Proficient (points 2): Crit1 text.Criterion: Crit2.
Basic (points 1): Crit2 text.Criterion: Crit3.
Proficient (points 2): Crit3 text.Criterion: Crit4.
Basic (points 1): Crit4 text.Criterion: Crit5.
Proficient (points 2): Crit5 text.
Proficient (points 2): Crit5 text.' latest_comment FROM DUAL union all SELECT 'Criterion: Crit1.
Proficient (points 2): Crit1 text.Criterion: Crit2.
Basic (points 1): Crit2 text.Criterion: Crit3.
Proficient (points 2): Crit3 text.Criterion: Crit4.
Basic (points 1): Crit4 text.Criterion: Crit5.
Proficient (points 2): Crit5 text.
Proficient (points 2): Crit5 text.' latest_comment FROM DUAL union all SELECT 'Criterion: Crit1.
Proficient (points 2): Crit1 text.Criterion: Crit2.
Basic (points 1): Crit2 text.Criterion: Crit3.
Proficient (points 2): Crit3 text.Criterion: Crit4.
Basic (points 1): Crit4 text.Criterion: Crit5.
Proficient (points 2): Crit5 text.
Proficient (points 2): Crit5 text.' latest_comment FROM DUAL
)
SELECT SUBSTR(REGEXP_SUBSTR(latest_comment, 'Criterion:[^<]+', 1, n.column_value), 20) column1,
       SUBSTR(REGEXP_SUBSTR(latest_comment, 'points [^\)]+', 1, n.column_value), 8) column2,
       SUBSTR(REGEXP_SUBSTR(latest_comment, '\):[^<]+', 1, n.column_value), 3) column3,
       SUBSTR(REGEXP_SUBSTR(latest_comment, 'blockquote>[^<]+', 1, n.column_value), 12) column4,
       SUBSTR(latest_comment, INSTR(latest_comment, '>', -1) + 1) column5,
       n.column_value
FROM   t,
       TABLE(CAST(MULTISET(SELECT LEVEL FROM dual
                           CONNECT BY LEVEL <= (LENGTH(latest_comment)
                                                - LENGTH(REPLACE(latest_comment, 'Criterion:'))) / 10)
                  AS sys.OdciNumberList)) n;

Kind regards
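As a cross-check of the parsing logic (a hypothetical Python equivalent, not the poster's code - the sample text, pattern, and group names are my own), the same "one row per Criterion:" split can be sketched like this:

```python
import re

latest_comment = (
    "Criterion: Crit1.\n"
    "Proficient (points 2): Crit1 text.Criterion: Crit2.\n"
    "Basic (points 1): Crit2 text.Criterion: Crit3.\n"
    "Proficient (points 2): Crit3 text."
)

# One match per "Criterion: ... (points N): text" block, much as the
# REGEXP_SUBSTR(..., n.column_value) calls produce one row per occurrence.
pattern = re.compile(
    r"Criterion: (?P<crit>.*?)\.\s*"
    r"(?P<level>\w+) \(points (?P<pts>\d+)\): "
    r"(?P<text>.*?)(?=Criterion:|\Z)",
    re.S,
)
rows = [
    (m.group("crit"), m.group("level"), int(m.group("pts")), m.group("text").strip())
    for m in pattern.finditer(latest_comment)
]
print(rows)
```

The original query counts occurrences with (LENGTH - LENGTH(REPLACE(...)))/10, i.e. the number of 10-character 'Criterion:' substrings; `latest_comment.count("Criterion:")` gives the same count here.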
Bob -
My account has been blocked and I forgot my security question answers. Could someone help me sort out how to access my account with the security questions unanswered?
Hello
You must work with Yahoo support and their forums.
Yahoo help and support
http://help.Yahoo.com/l/us/Yahoo/helpcentral/ - Yahoo products and services
http://everything.Yahoo.com/us/

I hope this helps.
Rob Brown - Microsoft MVP
-
SQL Access Advisor recommends a BITMAP index on the partition key
I ran the SQL Access Advisor as below and it recommends a bitmap index on p_key... Given that the cust table is partitioned on p_key, does it make sense to create an index on p_key?

I am not sure whether the SQL Access Advisor looks only at the WHERE-clause conditions/predicates and the cardinality of those columns, without looking at whether the table is partitioned or not. Is that it?

Is it wise to create BITMAP indexes on the partition key? If so, in what scenario would it be beneficial?
SELECT * FROM cust WHERE 'T' = 'T' AND part_key IN (2, 3, 4) AND (p_key, act_key) IN (SELECT p_key, act_key FROM account WHERE act_type = 'PENDING' AND p_key IN (2, 3, 4))
user4529833 wrote:
Jonathan, I have exactly one value per partition for the partition key. However, most SQL statements use 'IN' as the partition-pruning predicate, so the global statistics are always used, and they are always a bit stale compared to the partition-level statistics. Could this have led to the SQL Access Advisor recommendations? Yes, the Advisor also recommended creating bitmap indexes on act_type... Given this, does it make sense to have the bitmap index on the partition key?
I'll post the execution plan as soon as I have access to the system...
The fact that the statistics are "a little stale" won't make much of a difference.

The fact that Oracle has probably used table-level statistics is likely to be the underlying issue. Is the table list-partitioned, or have you faked it with range partitioning? If you faked list partitioning using range partitioning, that MAY have contributed to the issue (but that's an assumption I have not tested).

Informally, the optimizer has said something like:

"There are 25 possible values for p_key and 4 possible values for act_type, so there seem to be 100 different combinations - so your query will pick up X rows. 80% of those rows will be packed into Y blocks and 20% of them will be scattered through Z blocks; if we have the two bitmap indexes we will have to do SSS db file sequential reads. If we scan through the affected partitions instead, we do MMM db file scattered reads. Whichever of those is cheaper wins."

You know how wrong that is - so you know whether or not you need the p_key index. You also know how effective the act_type column is at identifying data, so you can decide whether or not an index on act_type may be useful.
Regards
Jonathan Lewis
http://jonathanlewis.WordPress.com
http://www.jlcomp.demon.co.uk

"The temptation to form premature theories upon insufficient data is the bane of our profession."
Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear" -
Hello
I feel the hash join is what subsequently causes the performance problem. Please take a look and tell me whether my interpretation is fair or not:
Database version: 10.2.0.3SELECT COUNT(*),d.file_id,d.PRINT_LOCATION,m.corporate_id,m.file_uploaded_on,r.PROCESS_DATE as AuthorizeDate,r.AUTHORIZATION_STATUS,r.AUTHORIZATION_LEVEL as AuthorizationLevel FROM CHQPRINT.T_DATA_MASTER_FIELD_DETAILS d,CHQPRINT.T_DATA_CORP_AUTHORIZATION r,CHQPRINT.t_data_file_details m,CHQPRINT.T_DATA_RECORD_DETAILS N WHERE d.file_id=r.FILE_ID and d.RECORD_REFERENCE_NO=r.RECORD_REFERENCE_NO and r.file_id=m.file_id AND N.FILE_ID=R.FILE_ID AND N.RECORD_REFERENCE_NO=R.RECORD_REFERENCE_NO and d.file_id=m.file_id and N.CORPORATE_AUTHORIZATION_DONE='Y' AND TO_DATE(m.FILE_UPLOADED_ON) between ('01-OCT-2010') and ('28-OCT-2010') AND(N.PRINTING_STATUS<>'C' OR N.PRINTING_STATUS IS NULL) GROUP BY d.file_id,d.PRINT_LOCATION,m.corporate_id,m.file_uploaded_on,r.PROCESS_DATE,r.AUTHORIZATION_STATUS,r.AUTHORIZATION_LEVEL Plan hash value: 904523798 -------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | -------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 80 | | 5464 (1)| 00:01:06 | | 1 | HASH GROUP BY | | 1 | 80 | | 5464 (1)| 00:01:06 | | 2 | NESTED LOOPS | | 1 | 80 | | 5463 (1)| 00:01:06 | |* 3 | HASH JOIN | | 36 | 2340 | 6376K| 5401 (1)| 00:01:05 | | 4 | TABLE ACCESS BY INDEX ROWID| T_DATA_CORP_AUTHORIZATION | 408 | 9792 | | 75 (0)| 00:00:01 | | 5 | NESTED LOOPS | | 116K| 5003K| | 2535 (1)| 00:00:31 | |* 6 | TABLE ACCESS FULL | T_DATA_FILE_DETAILS | 286 | 5720 | | 202 (4)| 00:00:03 | |* 7 | INDEX RANGE SCAN | PK_DATA_CORPAUTHORIZATION | 408 | | | 6 (0)| 00:00:01 | | 8 | INDEX FAST FULL SCAN | IDX_FILE_REF_PLOC | 911K| 18M| | 1120 (1)| 00:00:14 | |* 9 | TABLE ACCESS BY INDEX ROWID | T_DATA_RECORD_DETAILS | 1 | 15 | | 2 (0)| 00:00:01 | |* 10 | INDEX UNIQUE SCAN | PK_DATA_RECORD_DETAILS | 1 | | | 1 (0)| 00:00:01 | 
-------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("D"."FILE_ID"="R"."FILE_ID" AND "D"."RECORD_REFERENCE_NO"="R"."RECORD_REFERENCE_NO" AND "D"."FILE_ID"="M"."FILE_ID") 6 - filter(TO_DATE(INTERNAL_FUNCTION("M"."FILE_UPLOADED_ON"))>=TO_DATE('2010-10-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND TO_DATE(INTERNAL_FUNCTION("M"."FILE_UPLOADED_ON"))<=TO_DATE('2010-10-28 00:00:00', 'yyyy-mm-dd hh24:mi:ss')) 7 - access("R"."FILE_ID"="M"."FILE_ID") 9 - filter(("N"."PRINTING_STATUS" IS NULL OR "N"."PRINTING_STATUS"<>'C') AND "N"."CORPORATE_AUTHORIZATION_DONE"='Y') 10 - access("N"."FILE_ID"="R"."FILE_ID" AND "N"."RECORD_REFERENCE_NO"="R"."RECORD_REFERENCE_NO") Elapsed: 00:00:08.49 Statistics ---------------------------------------------------------- 22 recursive calls 0 db block gets 1149987 consistent gets 383 physical reads 1560 redo size 14528 bytes sent via SQL*Net to client 679 bytes received via SQL*Net from client 19 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 268 rows processed
Kind regards

Santi says:
Hi Charles, I made the change to the query according to your suggestion. Here's the amended plan:
(snip) Even though the cost shown in the new plan is about 2,000 higher than in the previous plan, I see a big gain in consistent gets:
Elapsed: 00:00:00.64

Statistics
----------------------------------------------------------
         22  recursive calls
          0  db block gets
      27392  consistent gets
          0  physical reads
          0  redo size
       2441  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         29  rows processed
Kind regards
What you have done is to give the Oracle optimizer much better estimates of the actual number of rows that will be returned by the operations in the execution plan, by eliminating the TO_DATE function wrapped around m.FILE_UPLOADED_ON, so the calculated costs are expected to increase along with the cardinality estimates. Note that the T_DATA_FILE_DETAILS table is now expected to return not 286 rows but 1,900 rows - with 3,993 rows actually being returned. The cardinality estimate is still 2.1 times lower than reality, but that is better than being 13.96 times lower than reality with a Cartesian join passed off as a nested loops join. The T_DATA_CORP_AUTHORIZATION table is probably the biggest contributor to the execution time in the new execution plan (this may or may not be a performance issue). During the first run it required 10,156 physical block reads (all of the physical block reads for that run - check that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is left unset to potentially help multiblock read performance in the future). During the second run no physical reads were performed, so the execution time decreased. 13,309 of the 27,392 consistent gets are a direct consequence of accessing this T_DATA_CORP_AUTHORIZATION table - we might see the number of consistent gets drop by using an index to access this table, but performance might suffer if physical block reads are required.
There is a small chance that you could see slightly better performance by using more indexes and fewer full table scans, but I suspect that the performance may not improve much more. You could, of course, test by temporarily adding index hints to the SQL statement (where the GATHER_PLAN_STATISTICS hint is currently placed) to see how the performance changes. Once you are satisfied with the performance of the SQL statement, make sure you remove the GATHER_PLAN_STATISTICS hint from the SQL statement.
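The cardinality/index effect described above can be reproduced in miniature (a sqlite3 sketch, not the original Oracle system - the table, index, and column names are invented for illustration): wrapping an indexed column in a function hides the column from the index, while comparing the raw column lets the index be used.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_details (file_id INTEGER, file_uploaded_on TEXT)")
conn.execute("CREATE INDEX idx_uploaded ON file_details (file_uploaded_on)")
conn.executemany(
    "INSERT INTO file_details VALUES (?, ?)",
    [(i, f"2010-10-{(i % 28) + 1:02d}") for i in range(1000)],
)

def plan(sql):
    # Concatenate the EXPLAIN QUERY PLAN detail strings for inspection.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function wrapped around the column: the optimizer cannot use the index.
wrapped = plan("SELECT * FROM file_details WHERE date(file_uploaded_on) >= '2010-10-01'")
# Raw column comparison: the index can be used for the range predicate.
raw = plan("SELECT * FROM file_details WHERE file_uploaded_on >= '2010-10-01'")

print("wrapped:", wrapped)  # full scan of file_details
print("raw:", raw)          # search using idx_uploaded
```

The same principle is what removing the TO_DATE wrapper achieved in the thread above, although Oracle's costing is far more elaborate than this sketch.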
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.WordPress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
MS SQL or Exchange Storage Tuning Question
Does anyone know how best to tune storage for SQL or Exchange?

We are creating a 2 TB LUN using 8 x 300 GB SAS disks on an IBM DS3400 FC SAN.
(1) When creating the LUN in IBM Storage Manager, what segment size should we use? The only sizes offered are 128 KB or 256 KB (512 KB can be set using the CLI).
(2) What VMFS block size should be used? Does this affect SQL disk performance?
(3) Is there any special NTFS formatting required in the operating system (Windows Server 2003) to get better disk performance?
Advice or guidance would be appreciated.
Regards
Nicholas
From reading posts by various virtualization experts, the VMFS block size has little impact on performance. If you have virtual disks larger than 256 GB, then a larger block size would be needed.

Also, be sure to create the VMFS partition with Virtual Center, as this will automatically align the VMFS.
You need to align your XXXX in Windows Server 2003 for best performance. VMware has a guide that can be referenced here:
-
I have a query that needs to be tuned... This query is executed many times during the day, so the total run time is high... Here are the details.
Please let me know if more info is needed.

QUERY:

SELECT col1, col2, col3
FROM table1
WHERE col4 = '200'
AND col2 IN ('123ABC','234/AF','AKJF/R','67KJAF/S','AD45/R')
AND col1 IN ('NEWY','OHIO','WADC','CALI','PHYL','ILLI')
AND col5 = ' ';

SQL> select banner from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for Solaris: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production

SQL> SHOW PARAMETER OPTIMIZER

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
_optimizer_mjc_enabled               boolean     FALSE
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      10.2.0.2
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     20
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE

SQL> show parameter db_file_multi

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     128

SQL> show parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT

SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;

SNAME                PNAME                     PVAL1 PVAL2
-------------------- -------------------- ---------- --------------------
SYSSTATS_INFO        STATUS                          COMPLETED
SYSSTATS_INFO        DSTART                          11-10-2009 13:57
SYSSTATS_INFO        DSTOP                           11-10-2009 13:57
SYSSTATS_INFO        FLAGS                         1
SYSSTATS_MAIN        CPUSPEEDNW             1417.876
SYSSTATS_MAIN        IOSEEKTIM                11.022
SYSSTATS_MAIN        IOTFRSPEED            15576.989
SYSSTATS_MAIN        SREADTIM                  2.844
SYSSTATS_MAIN        MREADTIM                   .829
SYSSTATS_MAIN        CPUSPEED                    715
SYSSTATS_MAIN        MBRC                          8
SYSSTATS_MAIN        MAXTHR                 19869696
SYSSTATS_MAIN        SLAVETHR

13 rows selected.

SQL> explain plan for
  2  SELECT col1, col2, col3 FROM table1
  3  WHERE col4 = '200' AND col2 IN ('123ABC','234/AF','AKJF/R','67KJAF/S','AD45/R')
  4  AND col1 IN ('NEWY','OHIO','WADC','CALI','PHYL','ILLI') AND col5 = ' ';
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
----------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |    27 |   783 |    12   (9)|
|   1 |  INLIST ITERATOR             |        |       |       |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| TABLE1 |    27 |   783 |    11   (0)|
|   3 |    INDEX RANGE SCAN          | INDEX1 |    27 |       |     7   (0)|
----------------------------------------------------------------------------

Note
-----
   - 'PLAN_TABLE' is old version

13 rows selected.
Published by: njafri on January 5, 2010 13:33
-
Cannot access SQL*Plus from users other than oracle
Hello
I installed Oracle 11g R2 on Red Hat 6. My problem is that I can't access sqlplus from any user other than oracle.

I have set PATH, ORACLE_HOME and ORACLE_SID to the correct values.
Session1
Connect as: oracle
[email protected] password:
Last login: Sun Aug 24 01:24:46 2014 from 192.168.202.1
[oracle@localhost ~] $ echo $PATH
/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin
[oracle@localhost ~] $ echo $ORACLE_HOME
/home/oracle/app/oracle/product/11.2.0/dbhome_1
[oracle@localhost ~] $ echo $ORACLE_SID
ORCL
[oracle@localhost ~] $ sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Sun Aug 24 03:17:32 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Enter user-name:
Session2
Connect as: nada
[email protected] password:
Last login: Sun Aug 24 02:31:49 2014 from 192.168.202.1
[prithwish@localhost ~] $ export PATH=$PATH:/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin
[prithwish@localhost ~] $ export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
[prithwish@localhost ~] $ export ORACLE_SID=orcl
[prithwish@localhost ~] $ echo $ORACLE_HOME
/home/oracle/app/oracle/product/11.2.0/dbhome_1
[prithwish@localhost ~] $ echo $ORACLE_SID
ORCL
[prithwish@localhost ~] $ echo $PATH
/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/prithwish/bin:/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin
[prithwish@localhost ~] $ sqlplus
-bash: sqlplus: command not found
I guess that I could not access sqlplus because the nada user has no access to X11R6.

[prithwish@localhost ~] $ cd ..
[prithwish@localhost home] $ ls -ltr
total 8
drwx------. 30 oracle oinstall 4096 Aug 23 23:51 oracle
drwx------. 27 nada nada 4096 Aug 24 02:33 prithwish
[prithwish@localhost home] $
My question, though it may be quite stupid, is twofold:
(1) Do I have to change the permissions of the oracle directory to 755 for any other user to access sqlplus?
(2) Why is the user directory under /home created with permissions of 700, even when the umask is set to 002? Will changing the permissions of the oracle directory under /home cause security problems?

My apologies for being naïve. It's the first time I have installed Oracle on UNIX.
Kind regards
Prithwsh
Actually, the default procedure is NOT to install oracle in X11R6, but in /opt/oracle, or maybe /usr/local/oracle.

This is what the installation documentation will recommend, and doing it that way (not forgetting to run rootpre.sh and root.sh), I never had security problems and could run Oracle from any account.
Sybrand Bakker
Senior Oracle DBA
-
Aironet 1140 access point / 1941 router question
I currently have:
-router (not wireless) 1941
-Access point 1140
It looks like I got the controller-based AP instead of the standalone version. My question is: does the 1941 (not the 1941W) have a wireless controller? If not, is there a controller module I can add to my router? Or should I exchange the AP for the standalone version (or the 1941 for the 1941W)?
Thanks in advance for any advice / help.
There is no controller that can go in the 1941. Controllers are quite expensive, so you probably don't want to go that way in any case. Before you send the AP back, ask TAC and see if you can convert it to standalone. Some APs support this, but I'm not sure about the 1140 series.

Hope this helps.
-
The SQL below returns its result in development in a few seconds, but in production it just keeps running. The explain plans also differ.

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
CURSOR_SHARING is set to EXACT (production) and FORCE (development) - is this the reason?
select wonum from workorder where worktype in ('EM','CM') and siteid = 'DWS_DSS' and historyflag = 0 and (exists (select null from dcw_ddotpermits b where workorder.wonum = b.wonum and workorder.siteid = b.siteid and b.permittype in ('Construction Permit', 'Occupancy Permit') and b.permitenddate > sysdate group by b.wonum, b.permittype having count(wonum) > 1));
Explain plan from production (which is slow)
------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | ------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 26 | 162K (17)| 00:32:30 | | | |* 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| WORKORDER | 1491 | 38766 | 4259 (1)| 00:00:52 | 1 | 1 | |* 2 | INDEX RANGE SCAN | WORKORDER_NDX20 | 2399 | | 988 (1)| 00:00:12 | | | |* 3 | FILTER | | | | | | | | | 4 | HASH GROUP BY | | 1 | 35 | 6 (17)| 00:00:01 | | | |* 5 | TABLE ACCESS BY INDEX ROWID | DCW_DDOTPERMITS | 1 | 35 | 5 (0)| 00:00:01 | | | |* 6 | INDEX RANGE SCAN | W_DDOTPERMITS_NDX2 | 3 | | 2 (0)| 00:00:01 | | | ------------------------------------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("WORKTYPE"='CM' OR "WORKTYPE"='EM') 2 - access("SITEID"='DWS_DSS' AND "HISTORYFLAG"=0) filter( EXISTS (SELECT 0 FROM "MAXIMO"."DCW_DDOTPERMITS" "B" WHERE "B"."WONUM"=:B1 AND ("B"."PERMITTYPE"='Construction Permit' OR "B"."PERMITTYPE"='Occupancy Permit') AND "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B2 GROUP BY "B"."WONUM","B"."PERMITTYPE" HAVING COUNT(*)>1)) 3 - filter(COUNT(*)>1) 5 - filter(("B"."PERMITTYPE"='Construction Permit' OR "B"."PERMITTYPE"='Occupancy Permit') AND "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B1) 6 - access("B"."WONUM"=:B1)
Explain plan from development (which is fast)
----------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 25 | 28247 (17)| 00:05:39 | |* 1 | FILTER | | | | | | |* 2 | VIEW | index$_join$_001 | 7991 | 195K| 985 (1)| 00:00:12 | |* 3 | HASH JOIN | | | | | | | 4 | INLIST ITERATOR | | | | | | |* 5 | INDEX RANGE SCAN | WWORKORDER_NDX32 | 7991 | 195K| 279 (2)| 00:00:04 | |* 6 | INDEX RANGE SCAN | WORKORDER_NDX20 | 7991 | 195K| 973 (1)| 00:00:12 | |* 7 | FILTER | | | | | | | 8 | HASH GROUP BY | | 1 | 39 | 6 (17)| 00:00:01 | |* 9 | TABLE ACCESS BY INDEX ROWID| DCW_DDOTPERMITS | 1 | 39 | 5 (0)| 00:00:01 | |* 10 | INDEX RANGE SCAN | W_DDOTPERMITS_NDX2 | 3 | | 2 (0)| 00:00:01 | ----------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter( EXISTS (SELECT 0 FROM "MAXIMO"."DCW_DDOTPERMITS" "B" WHERE "B"."WONUM"=:B1 AND ("B"."PERMITTYPE"=:SYS_B_4 OR "B"."PERMITTYPE"=:SYS_B_5) AND "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B2 GROUP BY "B"."WONUM","B"."PERMITTYPE" HAVING COUNT(*)>:SYS_B_6)) 2 - filter("HISTORYFLAG"=:SYS_B_3 AND "SITEID"=:SYS_B_2 AND ("WORKTYPE"=:SYS_B_0 OR "WORKTYPE"=:SYS_B_1)) 3 - access(ROWID=ROWID) 5 - access("WORKTYPE"=:SYS_B_0 OR "WORKTYPE"=:SYS_B_1) 6 - access("SITEID"=:SYS_B_2 AND "HISTORYFLAG"=:SYS_B_3) 7 - filter(COUNT(*)>:SYS_B_6) 9 - filter(("B"."PERMITTYPE"=:SYS_B_4 OR "B"."PERMITTYPE"=:SYS_B_5) AND "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B1) 10 - access("B"."WONUM"=:B1)
It looks more like a problem with the size of the data and the related statistics. (It may simply be outdated statistics on dev.)

With the larger production data, the NDX20 index on its own looks too expensive to the optimizer, so it combines it with the NDX32 index to avoid visiting the table.

The first step in studying the problem would be to check whether the estimated row counts are realistic on the individual systems, and then to determine whether the data clustering on the production system is much better than Oracle thinks it is - if so, then (in newer versions) setting the table preference 'table_cached_blocks' to a realistic value can help ( https://jonathanlewis.wordpress.com/2015/11/02/clustering_factor-4/ ). If all else fails and dev is good, then hinting and capturing an SQL Plan Baseline may be required.
Regards
Jonathan Lewis
-
SQL Query Tuning (large table)
Hi all
Requesting your help tuning the simple query below.

SELECT DISTINCT BROKER_CODE FROM PROCESSED_TRXNS WHERE FOLIO_NO = '101302'

The above query takes about 15 seconds to give the output.
Explain plan:

Plan hash value: 2775832988

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |    29 |   609 | 38241   (1)| 00:07:39 |
|   1 |  SORT UNIQUE NOSORT|                    |    29 |   609 | 38241   (1)| 00:07:39 |
|*  2 |   INDEX SKIP SCAN  | PROCTRAN_BRC_FN_C1 |    29 |   609 | 38240   (1)| 00:07:39 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FOLIO_NO"='101302')
       filter("FOLIO_NO"='101302')
Additional info:

SELECT COUNT(1) FROM PROCESSED_TRXNS -- 135989170

There is a composite index on (BROKER_CODE, FOLIO_NO) on the table.
The optimizer expects the skip scan to take 7:39 min - so the 15 seconds are not so bad. And I'm sure the operation is faster than a full table scan on PROCESSED_TRXNS. If you do not add a better-suited index, then the skip scan might indeed be the best available strategy. I would expect an index on PROCESSED_TRXNS (FOLIO_NO, BROKER_CODE) to be more effective for this query, because it should allow an index range scan which reads only the part of the index with the given FOLIO_NO. But creating additional indexes will obviously slow down DML and could have an impact on other query execution plans (and not necessarily a positive impact).
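The column-order point can be illustrated with a small sqlite3 sketch (not Oracle, and the table and data are invented; SQLite will not skip-scan an unanalyzed index the way Oracle does here, which makes the contrast with the (FOLIO_NO, BROKER_CODE) ordering visible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_trxns (broker_code TEXT, folio_no TEXT)")
conn.executemany(
    "INSERT INTO processed_trxns VALUES (?, ?)",
    [(f"B{i % 50}", f"{100000 + i}") for i in range(5000)],
)

def plan(sql):
    # Concatenate the EXPLAIN QUERY PLAN detail strings for inspection.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT DISTINCT broker_code FROM processed_trxns WHERE folio_no = '101302'"

# Index with the filtered column second: the leading column is unconstrained,
# so the filter cannot be used as an index probe.
conn.execute("CREATE INDEX ix_broker_folio ON processed_trxns (broker_code, folio_no)")
p1 = plan(query)

# Index with the filtered column first: a plain equality probe works.
conn.execute("CREATE INDEX ix_folio_broker ON processed_trxns (folio_no, broker_code)")
p2 = plan(query)

print(p1)  # a scan (no index probe possible)
print(p2)  # a search via ix_folio_broker
```

In Oracle the (BROKER_CODE, FOLIO_NO) index is still usable via the skip scan shown in the plan above, but only at the cost of probing every distinct BROKER_CODE value.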
-
Practical SQL training/exam question
Hello
I am studying for the SQL Fundamentals 1Z0-051 exam, and one of the sample questions on the Oracle University site is the following:
3. Examine the structure of the EMP table:
EMP
Name Null? Type
------------------------------------------------------------------------------------------
EMPNO NOT NULL NUMBER (3)
ENAME VARCHAR2(25)
SALARY NUMBER(10,2)
COMM_PCT NUMBER(4,2)
You want to generate a report which meets the following requirements:
1 shows the names of employees and the commission amounts
2 excludes employees who do not have a commission
3 displays a zero for the employees whose SALARY has no value
You run the following SQL statement:
SQL> SELECT ename, NVL(salary * comm_pct, 0)
FROM emp
WHERE comm_pct <> NULL;
What is the result?
A it generates an error
B it runs successfully but displays no results
C it runs successfully but displays results that meet only the requirements 1 and 3
D it is running successfully and displays results which fulfil all the requirements
The answer provided on the Web site is: B it runs successfully but displays no results
and I don't understand why...
The WHERE clause excludes rows where comm_pct is NULL. So for the NVL function in the SELECT clause, the value of comm_pct will never be NULL, but salary *can* be NULL (there is no NOT NULL constraint on it), so for rows where salary is NULL, salary * comm_pct would evaluate to NULL and the NVL function would turn those NULLs into 0.
Am I missing something? I realize that for some people on this forum the answer may be obvious, but I am just learning for the first level of SQL certification, so for now I still have a lot to discover.
Thanks in advance,
JM
x <> NULL
x = NULL
Neither is ever true. NULL is never equal to anything (including NULL), and NULL is never not-equal to anything either; a comparison with NULL evaluates to NULL (unknown), not TRUE.
X IS NOT NULL
X IS NULL
are the right way to test for NULL.
http://docs.Oracle.com/CD/B19306_01/server.102/b14200/sql_elements005.htm
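This three-valued logic is easy to demonstrate. A minimal sketch with Python's built-in sqlite3 module (SQLite follows the same SQL rule as Oracle here: comparing anything to NULL yields NULL, which is not TRUE, so the row is filtered out):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A comparison with NULL evaluates to NULL (unknown), never TRUE,
# so the WHERE clause rejects every row -- hence "no rows selected".
assert con.execute("SELECT 1 WHERE 5 <> NULL").fetchall() == []
assert con.execute("SELECT 1 WHERE 5 = NULL").fetchall() == []

# IS NULL / IS NOT NULL are the correct null tests.
assert con.execute("SELECT 1 WHERE 5 IS NOT NULL").fetchall() == [(1,)]
assert con.execute("SELECT 1 WHERE NULL IS NULL").fetchall() == [(1,)]
```

This is exactly why answer B is correct: the statement runs, but `comm_pct <> NULL` is never TRUE for any row.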
-
Please help translate MS Access SQL to Oracle
Hello
I am working on migrating legacy MS Access SQL processes to Oracle 11g, which we already use for a lot of other business processes.
I need advice on the code below, if someone could help please.
SELECT ((ABC.NAME = ABC.SURNAME) + (ABC.INITS = ABC.INITIALS OR IsNull(ABC.INITS) OR IsNull(ABC.INITIALS)) + (ABC.BDAY = ABC.BIRTHDAY OR IsNull(ABC.BDAY))) AS [success]
The success column gets a numeric value: 0 = no fields match, 1 = 1 field matches, 2 = 2 fields match, 3 = 3 fields match.
It runs three comparisons, and the success column is filled with a numeric value that reflects how many of them matched.
In general it looks almost like a WHERE clause used in the SELECT list, and Access runs it happily.
There is a lot more code, but happily it is fairly standard once the brackets [] around table names with spaces are removed, and the legacy ! between table and column names is searched-and-replaced.
Your suggestions are greatly appreciated.
Cheers, Peter
so you'd:
SELECT DECODE(ABC.NAME, ABC.SURNAME, 1, 0)
     + DECODE(NVL(ABC.INITS, ABC.INITIALS), NVL(ABC.INITIALS, ABC.INITS), 1, 0)
     + DECODE(NVL(ABC.BDAY, ABC.BIRTHDAY), ABC.BIRTHDAY, 1, 0) AS success
FROM ...
Each DECODE returns 1 for a match (so the sum is 0 for no matches, 1 for one, 2 for two, ...).
The matches are: name and surname identical (both NULL is considered identical too)
inits and initials identical (both NULL, or either one NULL, is considered identical too)
bday and birthday identical (both NULL, or bday NULL, is considered identical too)
HTH
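A quick way to sanity-check this match-counting logic outside Oracle is Python's built-in sqlite3 module: SQLite's `IS` operator is a null-safe equality, comparable to the DECODE/NVL trick above (DECODE also treats two NULLs as equal). The column names follow the question; the sample row is made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# In SQLite, "x IS y" is TRUE when both sides are equal OR both are NULL,
# mirroring DECODE's NULL-matches-NULL behaviour in Oracle.
row = con.execute("""
    SELECT (name IS surname)
         + (inits IS initials OR inits IS NULL OR initials IS NULL)
         + (bday IS birthday OR bday IS NULL) AS success
    FROM (SELECT 'SMITH' AS name, 'SMITH' AS surname,
                 'J'     AS inits, NULL  AS initials,
                 NULL    AS bday,  '1970-01-01' AS birthday)
""").fetchone()
# name matches, initials match via a NULL side, NULL bday counts as a match
print(row)
```

For this sample row all three comparisons count as matches, so success is 3.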
-
Help with MS Access SQL Developer Import (problem of DateTime field)
So here's my dilemma...
I am currently trying to import from MS Access into SQL Developer, by exporting from MS Access to MS Excel and then importing the exported Excel sheet into my table. What happens is that when I import the Excel sheet, it does not keep the date/time format. I have tried several different formats, but each one seems to drop either the time or the date.
My field of date/time is currently set up like this on SQL Developer:
DD/RRRR/MM HH:MM:SS AM/PM
And the Excel sheet, I'm importing the field corresponding, defined as follows:
mm/dd/yyyy/hh/mm/ss AM/PM
The record I'm importing is as follows:
Date/time               server_id  physical_cpu
01/11/2013 12:01:23 AM  scc415     6.03999996
When the record is imported, the time is dropped, so the imported data are:
Date/time   server_id  physical_cpu
01/11/2013  scc415     6.03999996
Any idea how to get this timestamp to import properly? Ideas are much appreciated in advance!
Looks like I found the solution. For some reason, if you export from Access to Excel first and then open and save the file as a .csv instead of an .xlsx, the date/time information is preserved. This problem has been resolved.
Found the solution on the following link for anyone interested.
-
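For what it's worth, the time component of the sample value survives a plain-text round trip as long as the format mask matches. A minimal sketch in Python (the exact separator between date and time in the .csv is an assumption based on the sample row):

```python
from datetime import datetime

# Sample date/time value from the question, assuming a space separates
# date and time once the value lands in the .csv.
raw = "01/11/2013 12:01:23 AM"

# %I is the 12-hour clock and %p the AM/PM marker;
# "12:01:23 AM" therefore parses to 00:01:23 in 24-hour time.
dt = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")
print(dt.isoformat())
```

If the time still disappears after import, comparing the raw .csv text against the import format mask character by character is usually the quickest diagnostic.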
How to connect to and access a SQL Server database from SQL Developer?
Hi all
Is it possible to connect to and access a remote SQL Server database using SQL Developer?
Do you connect as 'sa' or as a normal user?
When you connect as a normal user, does that user own any tables or views?