Query performance problem
I have two schemas in two databases.
When I check the SQL plans, the two schemas produce different plans (one does a full table scan, the other a systematic index range scan).
Both schemas have almost the same kind of data, indexes, and load.
What is causing the difference in SQL performance?
In the second plan, the optimizer expects the range scan on IDX_TSK_ID at step 5 to return only 14 rows and decides it's a good idea to join to the second table, TB_TRANS_MSTR, with a nested loops join (loop over the 14 TASK_INSTANCE results and do a lookup on each iteration using the PK_TRANS_ID index).
In the first plan, the optimizer decides to read TB_TRANS_MSTR (containing 978 rows), build an in-memory hash table of the results, and then probe it against the second row set, TASK_INSTANCE.
The next question is: which plan is better suited and translates into better performance? The chances are high that the better plan is the one based on the more accurate cardinality estimates. These estimates come from fairly simple arithmetic (more or less), and they depend on the table and column statistics. So the dba_tab_columns entries Swen W. mentioned would be useful. In addition, the text of the query would probably shed some light on the question.
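To compare the statistics driving those cardinality estimates, a query along these lines could be run in both schemas (the owner and table names below are placeholders, not names from the thread):

```sql
-- Compare the optimizer statistics that feed cardinality estimates
-- (schema and table names are placeholders).
SELECT column_name,
       num_distinct,    -- distinct values seen at last analyze
       num_nulls,
       density,
       histogram,
       last_analyzed
FROM   dba_tab_columns
WHERE  owner      = 'MY_SCHEMA'
AND    table_name = 'TASK_INSTANCE'
ORDER  BY column_name;
```

Differences in num_distinct, density, or histogram presence between the two schemas would explain the differing estimates.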
Tags: Database
Similar Questions
-
A query performance problem. Need help.
It is essentially a performance problem. I hope someone can help me with that.
Basically, I have four tables: Master (150,000 records), Child1 (100,000 records), Child2 (50 million records!), and Child3 (10,000+ records)
(please forgive the aliases).
Each record in the master may have more than one matching record in each child table (one-to-many).
Also, there may be no record at all in any or all of the child tables for a particular master record.
Now, I need to get the maximum last_updated_date for each master record in each of the 3 child tables, and then find the maximum of
the three last_updated_dates obtained from the 3 tables.
For example: for Master ID 100, query Child1 for all Master ID 100 records and get the max last_updated_date.
Do the same for the other 2 tables and take the maximum of those three values.
(I also need to deal with cases where no record may be found in a child table for a Master ID.)
Writing a procedure that uses cursors to query each of the child tables performs
terribly. And that's how I'd need to get the last_updated_date for each master record (all 150,000 of them). It would probably take days to do this.
SELECT MAX(C1.LAST_UPDATED_DATE),
       MAX(C2.LAST_UPDATED_DATE),
       MAX(C3.LAST_UPDATED_DATE)
FROM   CHILD1 C1,
       CHILD2 C2,
       CHILD3 C3
WHERE  C1.MASTER_ID = 100
OR     C2.MASTER_ID = 100
OR     C3.MASTER_ID = 100
I tried the above, but I got a temp tablespace error. I don't think that approach is any good at all.
(The OR clause is there to take care of missing records in a child table. If it were an AND, then the join would select
nothing when there is no record in one child table, even though there are valid values in the other 2 tables.)
Thank you very much.
Published by: user773489 on December 16, 2008 11:49
You want to alias that field then.
SELECT MAX(C.LAST_UPDATED_DATE)
FROM  (select child1_master_id MASTER_ID, field2, field3, ... field4 from CHILD1
       UNION ALL
       select child2_master_id MASTER_ID, field2, field3, ... field4 from CHILD2
       UNION ALL
       select child3_master_id MASTER_ID, field2, field3, ... field4 from CHILD3) C
WHERE C.MASTER_ID = 100
Do something like that, and explicitly list the columns you want.
Edit: for something like a specific query for a MASTER_ID...
SELECT MAX(C.LAST_UPDATED_DATE)
FROM  (select child1_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD1 where child1_master_id = 100
       UNION ALL
       select child2_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD2 where child2_master_id = 100
       UNION ALL
       select child3_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD3 where child3_master_id = 100) C
WHERE C.MASTER_ID = 100
That should give you very good performance for retrieving a single record. But a better idea, as indicated, would be to get it all at once with one SQL statement:
SELECT MASTER_ID, MAX(C.LAST_UPDATED_DATE)
FROM  (select child1_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD1
       UNION ALL
       select child2_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD2
       UNION ALL
       select child3_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD3) C
GROUP BY MASTER_ID
This will give you the max for each MASTER_ID in one SQL statement, without a cursor.
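The original poster also needs masters that have no child rows at all; a hedged sketch of one way to cover that case is to outer-join the master table to the same UNION ALL (MASTER and its MASTER_ID column are assumed names, not confirmed by the thread):

```sql
-- Masters with no rows in any child table still appear, with a NULL max date.
SELECT m.MASTER_ID,
       MAX(c.LAST_UPDATED_DATE) AS LAST_UPDATED_DATE
FROM   MASTER m
LEFT OUTER JOIN (SELECT child1_master_id AS MASTER_ID, LAST_UPDATED_DATE FROM CHILD1
                 UNION ALL
                 SELECT child2_master_id, LAST_UPDATED_DATE FROM CHILD2
                 UNION ALL
                 SELECT child3_master_id, LAST_UPDATED_DATE FROM CHILD3) c
       ON c.MASTER_ID = m.MASTER_ID
GROUP  BY m.MASTER_ID;
```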
Published by: tk-7381344, December 16, 2008 12:12
-
Query performance problem in APEX
Hi all
I use
Select Address1, address2, address3, city, place, pincode, siteid, bpcnum_0, contactname, fax, mobile, phone, website, rn
from (select Address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
             contactname, fax, mobile, phone, website,
             dense_rank() over (order by contactname, Address1) as rn,
             row_number() over (partition by contactname, Address1
                                order by contactname, Address1) as rn1
      from vw_sub_cl_add1
      where siteid = v('P10_SITENO')
        and bpcnum_0 = v('P10_CLNO')) emp
where rn1 = 1
  and rn >= v('P10_RN')
The query above extracts the details from a view, in a PL/SQL region.
Paging also works very well;
that is to say, only 4 records at a time are displayed.
The problem is
that it takes 1 minute and 5 seconds to display the set of records.
Please, could anyone tell me how to reduce the page rendering time?
Thanks in advance
Good bye
Sonny_starck
If it's really true that the query is fast when using bind variables, can you rewrite the query in your application to use bind variables?
Try to rewrite your query to use :P10_SITENO instead of v('P10_SITENO') etc., if possible. You can find more details on Patrick Wolf's blog: http://www.inside-oracle-apex.com/2006/12/drop-in-replacement-for-v-and-nv.html
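A hedged sketch of the suggested rewrite (the item names come from the thread; whether your particular APEX region type supports direct bind syntax is an assumption to verify):

```sql
-- Referencing the page items as bind variables lets Oracle reuse the cursor
-- and avoids calling the v() function repeatedly at parse/execution time.
select Address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
       contactname, fax, mobile, phone, website
from   vw_sub_cl_add1
where  siteid   = :P10_SITENO
and    bpcnum_0 = :P10_CLNO
```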
-
APEX 4.2 performance problem
Hi all
We are facing a performance problem when selecting one of the "area of responsibility" values that exist in the Organization list. If we select APEX_01 Reporting ALL, then we can see all the organizations such as ETL, BRN, CHICKEN etc. on the home page, but I face the performance problem when the form selects that area of responsibility onto the home page. Please indicate how to check this page's performance in APEX and also advise how to solve this problem.
LnTInfotech wrote:
so you are saying the statement below takes more time and we need to tune this query?
15.34901 224.01663 ... Run the statement:
SELECT DISTINCT papf.full_name a, papf.full_name b
FROM   po_agents pa,
       per_all_people_f papf,
       org_organization_definitions org
WHERE  1 = 1
AND    papf.person_id = pa.agent_id
AND    org.organization_CODE = NVL(:P1_WARE_HOUSE, org.organization_CODE)
AND    TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND    EXISTS (SELECT DISTINCT 1
               FROM   po_headers_all poh
               WHERE  poh.agent_id = pa.agent_id
               AND    poh.org_id = org.operating_unit)
This query seems to be missing a join. Having to use DISTINCT in application code usually indicates that there is something seriously wrong with the data model or the query...
-
We have a database that was recently moved to the cloud and has started having query performance problems. I tried to detect possible bottlenecks at the database level and troubleshoot performance by actively monitoring the execution of these queries, generating and analyzing AWR reports, and testing the optimizer with alternative plans, but even after running Toad SQL Optimizer, no better execution plan was found.
We still have the former production database for test purposes, and the same queries, with exactly the same execution plan, take less than a minute to run there, while the production environment in the cloud takes 42 minutes to complete. Sessions are not hanging, as far as it is possible to check through V$SESSION, V$SQL and other sources.
Here's the problem:
After a comparison between the two databases, apart from the platform each of them runs on, the only difference I could find was these three parameters, which exist on the old production server and are not supported on RDS:
_complex_view_merging
_gby_hash_aggregation_enabled
_optimizer_push_pred_cost_based
RDS Support reported that these settings are not supported for Oracle running in the cloud.
My main questions are, assuming these differences can be the root cause:
How can I test the query against these parameters, to check whether or not the query relies on them? Just as we can do 'alter index index_name monitoring usage' ... to check whether an index is used by a specific query.
Note: the suggestion to "try disabling the settings in the test environment and run the query" cannot be done at this time, given that the instance serves a large number of schemas used by the Dev team. Any other recommendations will be appreciated.
Thanks in advance.
I learned that as of Oracle 10g, hash aggregation was introduced, driving the optimizer to choose HASH UNIQUE instead of SORT UNIQUE for queries containing the DISTINCT clause. In fact, this behavior is driven by one of the mentioned underscore parameters, specifically _gby_hash_aggregation_enabled, which had been set to false. Because RDS does not support the _* parameters, the solution was to rewrite the original query using the /*+ NO_USE_HASH_AGGREGATION */ hint.
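A minimal sketch of that workaround (the table and column names here are made up for illustration; the hint itself is the one named in the thread):

```sql
-- The hint disables hash aggregation for this statement only, so the
-- optimizer falls back to SORT UNIQUE / SORT GROUP BY, mimicking
-- _gby_hash_aggregation_enabled = false without changing instance parameters.
SELECT /*+ NO_USE_HASH_AGGREGATION */
       DISTINCT customer_id
FROM   orders;
```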
-
Performance problem - oracle 10.2.0.4
HI Experts,
Today I ran into a performance problem. I ran a simple query to get the number of records in a table that exists in both the PROD and QA databases. The two databases have the same settings at the parameter level. But when I run the query, the Prod database takes 10 minutes to return the result. Not sure why the prod database takes so much time.
Additional information:
---------------------
The number of rows is almost the same in the two databases
OPERATING SYSTEM: IBM AIX
Oracle: 10.2.0.4
QA Database - explain plan
==========================
06:40:49 SQL > select count(*) from Siebel.s_prod_baseline;
COUNT (*)
----------
42146408
Elapsed time: 00:00:45.35 ==> just 45 seconds
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Hash value of plan: 645963650
-------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |  1165   (1)| 00:00:14 |
|   1 |  SORT AGGREGATE   |                    |     1 |            |          |
|   2 |   INDEX FULL SCAN | S_PROD_BASELINE_P1 |   42M |  1165   (1)| 00:00:14 |
-------------------------------------------------------------------------------
Database of PROD explain plan
======================
06:42:59 SQL > select count(*) from Siebel.s_prod_baseline;
COUNT (*)
----------
42730261
Elapsed time: 00:10:13.43 ==> took more than 10 minutes
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Hash value of plan: 645963650
-------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |  4160   (1)| 00:00:50 |
|   1 |  SORT AGGREGATE   |                    |     1 |            |          |
|   2 |   INDEX FULL SCAN | S_PROD_BASELINE_P1 |   42M |  4160   (1)| 00:00:50 |
-------------------------------------------------------------------------------
Could you please let me know why Oracle takes so much time on the prod database?
Thank you..
> Is there another option to check why the prod database behaves differently?
Yes. TRACE the query and review the trace file (or the tkprof output).
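A hedged sketch of how such a trace could be captured at session level (the tracefile identifier is an arbitrary label; tkprof is run afterwards on the server):

```sql
-- Enable extended SQL trace (waits + binds) for the current session,
-- run the slow statement, then disable tracing and format with tkprof.
ALTER SESSION SET tracefile_identifier = 'prod_count_test';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
SELECT COUNT(*) FROM Siebel.s_prod_baseline;
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;
-- Then on the server: tkprof <tracefile>.trc report.txt sys=no
```

Comparing the wait-event breakdown in the two tkprof reports should show where the extra 9+ minutes go on PROD.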
Hemant K Chitale
-
Hello
I have a performance problem involving high response times due to row lock contention waits that I can't explain.
The application updates a specific table about 1,200 times in my AWR report covering 2 hours.
This update represents 98% of the DB time, and the wait on enq: TX - row lock contention is:
- 83,796 waits
- 245,441 seconds
The application updates one row at a time, without a preceding SELECT FOR UPDATE. This is the query:
update my_table set creationtime = :1, modificationdate = :2, version = :3, creationuserid = :4, fortesting = :5, modificationuserid = :6, participation_uuid = :7, actimenucontact = :8, allegroid = :9, campaign_uuid = :10, desjardinsemployeetype = :11, effectiveparticipationtype = :12, family_uuid = :13, fetchpasseport = :14, healthratingfilldate = :15, hoursofsleeptimecommitment = :16, igaid = :17, initialsubscriptiontype = :18, minutesofsleeptimecommitment = :19, goal = :20, partner_uuid = :21, promotionalemails = :22, readrules = :23, sccsubscriptionprovenance = :24, subscriptionprovenance = :25, supportemails = :26 where uuid = :27 and version = :28
The WHERE clause filters on uuid, which is the primary key of my_table.
The uuid is specific to one user, and two users would not update the same row, so what could be the reason for this wait event?
Thank you
Why guess when you can know?
Look at the ASH data for these enqueue events.
It will tell you the SQL id as well.
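A hedged sketch of such an ASH lookup (column selection is illustrative; querying V$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack license):

```sql
-- Who was blocked on TX row lock contention, on which SQL and which block,
-- and which session was holding the lock at each sample.
SELECT sample_time, session_id, sql_id,
       blocking_session, blocking_session_serial#,
       current_obj#, current_file#, current_block#
FROM   v$active_session_history
WHERE  event = 'enq: TX - row lock contention'
ORDER  BY sample_time;
```

If blocking_session keeps pointing at the same few sessions, the "two users never touch the same row" assumption is what to re-examine.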
-
How to tune the query performance, any1 help me to impropve... Thanks in advanceCURSOR c_exercise_list IS SELECT DECODE(v_mfd_mask_id ,'Y',' ',o.opt_id) opt_id, DECODE(v_mfd_mask_id ,'Y',' ',o.soc_sec) soc_sec, P.plan_id plan_id, E.exer_id exer_id, E.exer_num, DECODE(G.sar_flag, 0, DECODE(G.plan_type, 0, '1', 1, '2', 2, '3', 3, ' ', 4,'5', 5, '6', 6, '7', 7, '8', 8, '9', '0'), ' ') option_type, TO_CHAR(G.grant_dt, 'YYYYMMDD') grant_dt, TO_CHAR(E.exer_dt, 'YYYYMMDD') exer_dt, E.opts_exer opts_exer, E.mkt_prc mkt_prc, E.swap_prc swap_prc, E.shrs_swap shrs_swap, decode(e.exer_type,2,decode(xe.cash_partial,'Y','A','2'),TO_CHAR(E.exer_type)) exer_type, E.sar_shrs sar_shrs, NVL(ROUND(((xe.sar_shrs_withld_optcost - (e.opts_exer * g.opt_prc) / e.mkt_prc) * e.mkt_prc),2),0)+e.sar_cash sar_cash, NVL(f.fixed_fee1,0) fixed_fee1, NVL(f.fixed_fee2,0) fixed_fee2, NVL(f.fixed_fee3,0) fixed_fee3, NVL(f.commission,0) commission, NVL(f.sec_fee,0) sec_fee, NVL(f.fees_paid,0) fees_paid, NVL(ct.amount,0) cash_tend, E.shrs_tend shrs_tend, G.grant_id grant_id, NVL(G.grant_cd, ' ') grant_cd, NVL(xg.child_symbol,' ') child_symbol, NVL(xg.opt_gain_deferred_flag,'N') defer_flag, o.opt_num opt_num, --XO.new_ssn, DECODE(v_mfd_mask_id ,'Y',' ',xo.new_ssn) new_ssn, xo.use_new_ssn ,xo.tax_verification_eligible tax_verification_eligible ,(SELECT TO_CHAR(MIN(settle_dt),'YYYYMMDD') FROM tb_ml_exer_upload WHERE exer_num = E.exer_num AND user_id=E.user_id AND NVL(settle_dt,TO_DATE('19000101','YYYYMMDD'))>=E.exer_dt) AS settle_dt ,xe.rsu_type AS rsu_type ,xe.trfbl_det_name AS trfbl_det_name ,o.user_txt1,o.user_txt2,xo.user_txt3,xo.user_txt4,xo.user_txt5,xo.user_txt6,xo.user_txt7 ,xo.user_txt8,xo.user_txt9,xo.user_txt10,xo.user_txt11, xo.user_txt12, xo.user_txt13, xo.user_txt14, xo.user_txt15, xo.user_txt16, xo.user_txt17, xo.user_txt18, xo.user_txt19, xo.user_txt20, xo.user_txt21, xo.user_txt22, xo.user_txt23, xo.user_dt2, xo.adj_dt_hire_vt_svc, xo.adj_dt_hire_vt_svc_or, xo.adj_dt_hire_vt_svc_or_dt, 
xo.severance_plan_code, xo.severance_begin_dt, xo.severance_end_dt, xo.retirement_bridging_dt ,NVL(xg.pu_var_price ,0) v_pu_var_price ,NVL(xe.ficamed_override,'N') v_ficmd_ovrride ,NVL(xe.vest_shrs,0) v_vest_shrs ,NVL(xe.client_exer_id,' ') v_client_exer_id ,(CASE WHEN xg.re_tax_flag = 'Y' THEN pk_xop_reg_outbound.Fn_GetRETaxesWithheld(g.grant_num, E.exer_num, g.plan_type) ELSE 'N' END) re_tax_indicator -- 1.5V ,xe.je_bypass_flag ,xe.sar_shrs_withld_taxes --Added for SAR july 2010 release ,xe.sar_shrs_withld_optcost --Added for SAR july 2010 release FROM (SELECT exer.* FROM exercise exer WHERE NOT EXISTS (SELECT s.exer_num FROM suspense s WHERE s.exer_num = exer.exer_num AND s.user_id = exer.user_id AND exer.mkt_prc = 0))E, grantz G, xop_grantz xg, optionee o, xop_optionee xo, feeschgd f, cashtendered ct, planz P,xop_exercise xe WHERE E.grant_num = G.grant_num AND E.user_id = G.user_id AND E.opt_num = o.opt_num AND E.user_id = o.user_id AND (G.grant_num = xg.grant_num(+) AND G.user_id=xg.user_id(+)) AND (o.opt_num = xo.opt_num(+) AND o.user_id=xo.user_id(+)) AND E.plan_num = P.plan_num AND E.user_id = P.user_id AND E.exer_num = f.exer_num(+) AND E.user_id = ct.user_id(+) AND E.exer_num = ct.exer_num(+) AND E.user_id = ct.user_id(+) AND E.exer_num=xe.exer_num(+) AND E.user_id=xe.user_id(+) AND G.user_id = USER AND NOT EXISTS ( SELECT tv.exer_num FROM tb_xop_tax_verification tv--,exercise ex WHERE tv.exer_num = e.exer_num AND tv.user_id = e.user_id AND tv.user_id = v_cms_user AND tv.status_flag IN (0,1,3,4, 5)) -- Not Processed ;
Published by: BluShadow on February 21, 2013 08:14
corrected {noformat}{noformat} tags. Please read {message:id=9360002} and learn how to post code correctly.
956684 wrote:
I got CPU cost: 458.50, time: 1542.90, so is there anything that can be done to improve performance? There is no full table scan on the mentioned tables, and most of the column access is via index unique scans... can someone help me find a solution?
His request reads like: "My car doesn't work; the car's color is gray. Can you solve this problem?"
Please read the FAQ I already posted and follow the instructions.
-
question about a view that I have created to solve performance problems
Dear all,
I have an interesting problem. I created a view to help solve some performance problems I've had with my query.
See below.
I tried to test the view using the following syntax:

create or replace view view_test as
select trunc(c.close_date, 'YYYY-MM-DD') as close_date, t.names
from   tbl_component c, tbl_joborder t
where  c.t_id = t.p_id
and    c.type = 'C'
group by trunc(c.close_date, 'YYYY-MM-DD'), t.names;

select k.close_date, k.names
from   view_test k
where  k.names = 'Kay'
and    k.close_date between to_date('2010-01-01', 'YYYY-MM-DD')
                        and to_date('2010-12-31', 'YYYY-MM-DD');

However, I get the error message below. I Googled it and tried a lot of things online, but I can't solve the problem, unfortunately, and I don't know why.

ORA-01898: too many precision specifiers
What are you trying to accomplish with that TRUNC format?

SQL> select trunc(sysdate, 'YYYY-MM-DD') from dual;
select trunc(sysdate, 'YYYY-MM-DD') from dual
*
ERROR at line 1:
ORA-01898: too many precision specifiers

I think you simply meant TRUNC(c.close_date).
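For reference, a short sketch of valid TRUNC(date) formats: TRUNC takes a single format element, not a TO_CHAR-style mask like 'YYYY-MM-DD'.

```sql
-- TRUNC takes one format element, e.g. truncate to day, month, or year.
SELECT TRUNC(SYSDATE)         AS day_start,    -- default 'DD': today at 00:00
       TRUNC(SYSDATE, 'MM')   AS month_start,  -- first day of the month
       TRUNC(SYSDATE, 'YYYY') AS year_start    -- January 1st of the year
FROM   dual;
```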
-
Performance problem in production; Please help me out
Hi all,
I'd really appreciate if someone can help me with this.
Every night the server swaps, and the sysadmin has had to add more swap disk space every night for the last 4 days.
I ran an ADDM report from 22:00 to 04:00 (when the server runs out of memory).
It identified a performance problem with this query:
I can't find what the problem is or why it is a source of performance trouble. Could you help me, please?

RECOMMENDATION 4: SQL Tuning, 4.9% benefit (1329 seconds) ACTION: Investigate the SQL statement with SQL_ID "b7f61g3831mkx" for possible performance improvements. RELEVANT OBJECT: SQL statement with SQL_ID b7f61g3831mkx and PLAN_HASH 881601692
*WORKLOAD REPOSITORY SQL Report* Snapshot Period Summary DB Name DB Id Instance Inst Num Release RAC Host ------------ ----------- ------------ -------- ----------- --- ------------ **** 1490223503 **** 1 10.2.0.1.0 NO **** Snap Id Snap Time Sessions Curs/Sess --------- ------------------- -------- --------- Begin Snap: 9972 21-Apr-10 23:00:39 106 3.6 End Snap: 9978 22-Apr-10 05:01:04 102 3.4 Elapsed: 360.41 (mins) DB Time: 451.44 (mins) SQL Summary DB/Inst: ****/**** Snaps: 9972-9978 Elapsed SQL Id Time (ms) ------------- ---------- b7f61g3831mkx 1,329,143 Module: DBMS_SCHEDULER GATHER_STATS_JOB select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact u se_weak_name_resl dynamic_sampling(0) no_monitoring */ count(*),count("P_PRODUCT _ID"),count(distinct "P_PRODUCT_ID"),count("NAME"),count(distinct "NAME"),count( "DESCRIPTION"),count(distinct "DESCRIPTION"),count("UPC"),count(distinct "UPC"), ------------------------------------------------------------- SQL ID: b7f61g3831mkx DB/Inst: ***/*** Snaps: 9972-9978 -> 1st Capture and Last Capture Snap IDs refer to Snapshot IDs witin the snapshot range -> select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_shari... 
Plan Hash Total Elapsed 1st Capture Last Capture # Value Time(ms) Executions Snap ID Snap ID --- ---------------- ---------------- ------------- ------------- -------------- 1 881601692 1,329,143 1 9973 9974 ------------------------------------------------------------- Plan 1(PHV: 881601692) ---------------------- Plan Statistics DB/Inst: ***/*** Snaps: 9972-9978 -> % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100 Stat Name Statement Per Execution % Snap ---------------------------------------- ---------- -------------- ------- Elapsed Time (ms) 1,329,143 1,329,142.7 4.9 CPU Time (ms) 26,521 26,521.3 0.7 Executions 1 N/A N/A Buffer Gets 551,644 551,644.0 1.3 Disk Reads 235,239 235,239.0 1.5 Parse Calls 1 1.0 0.0 Rows 1 1.0 N/A User I/O Wait Time (ms) 233,212 N/A N/A Cluster Wait Time (ms) 0 N/A N/A Application Wait Time (ms) 0 N/A N/A Concurrency Wait Time (ms) 0 N/A N/A Invalidations 0 N/A N/A Version Count 2 N/A N/A Sharable Mem(KB) 71 N/A N/A ------------------------------------------------------------- Execution Plan --------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | --------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 24350 (100)| | | | | 1 | SORT GROUP BY | | 1 | 731 | | | | | | 2 | PARTITION RANGE SINGLE| | 8892 | 6347K| 24350 (1)| 00:04:53 | KEY | KEY | | 3 | PARTITION LIST ALL | | 8892 | 6347K| 24350 (1)| 00:04:53 | 1 | 5 | | 4 | TABLE ACCESS SAMPLE | PRODUCT | 8892 | 6347K| 24350 (1)| 00:04:53 | KEY | KEY | --------------------------------------------------------------------------------------------------- Full SQL Text SQL ID SQL Text ------------ ----------------------------------------------------------------- b7f61g3831mk select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_ 
_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitori ng */ count(*), count("P_PRODUCT_ID"), count(distinct "P_PRODUCT_ ID"), count("NAME"), count(distinct "NAME"), count("DESCRIPTION") , count(distinct "DESCRIPTION"), count("UPC"), count(distinct "UP C"), count("ADV_PRODUCT_URL"), count(distinct "ADV_PRODUCT_URL"), count("IMAGE_URL"), count(distinct "IMAGE_URL"), count("SHIPPING _COST"), count(distinct "SHIPPING_COST"), sum(sys_op_opnsize("SHI PPING_COST")), substrb(dump(min("SHIPPING_COST"), 16, 0, 32), 1, 120), substrb(dump(max("SHIPPING_COST"), 16, 0, 32), 1, 120), cou nt("SHIPPING_INFO"), count(distinct "SHIPPING_INFO"), sum(sys_op_ opnsize("SHIPPING_INFO")), substrb(dump(min(substrb("SHIPPING_INF O", 1, 32)), 16, 0, 32), 1, 120), substrb(dump(max(substrb("SHIPP ING_INFO", 1, 32)), 16, 0, 32), 1, 120), count("P_STATUS"), count (distinct "P_STATUS"), sum(sys_op_opnsize("P_STATUS")), substrb(d ump(min(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), substrb (dump(max(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), count ("EXTRA_INFO1"), count(distinct "EXTRA_INFO1"), sum(sys_op_opnsiz e("EXTRA_INFO1")), substrb(dump(min(substrb("EXTRA_INFO1", 1, 32) ), 16, 0, 32), 1, 120), substrb(dump(max(substrb("EXTRA_INFO1", 1 , 32)), 16, 0, 32), 1, 120), count("EXTRA_INFO2"), count(distinct "EXTRA_INFO2"), sum(sys_op_opnsize("EXTRA_INFO2")), substrb(dump (min(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), substrb (dump(max(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), co unt("ANALISIS_DATE"), count(distinct "ANALISIS_DATE"), substrb(du mp(min("ANALISIS_DATE"), 16, 0, 32), 1, 120), substrb(dump(max("A NALISIS_DATE"), 16, 0, 32), 1, 120), count("OLD_STATUS"), count(d istinct "OLD_STATUS"), sum(sys_op_opnsize("OLD_STATUS")), substrb (dump(min("OLD_STATUS"), 16, 0, 32), 1, 120), substrb(dump(max("O LD_STATUS"), 16, 0, 32), 1, 120) from "PARTNER_PRODUCTS"."PRODUCT " sample ( 12.5975349658) t where TBL$OR$IDX$PART$NUM("PARTNER_PR ODUCTS"."PRODUCT", 0, 
4, 0, "ROWID") = :objn
Dear friend,
Why do you think you have problems with the shared pool? In the AWR report you provided there were just 2.5 average active sessions and 170 queries during this period; that is far too low for shared pool problems.
You have some queries that use literals; it would be better to replace the literals with bind variables if possible, or you can set the init parameter cursor_sharing to FORCE or SIMILAR (it is a dynamic parameter).
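A minimal sketch of the dynamic parameter change mentioned (system-wide and affecting all sessions, so test it first):

```sql
-- Make Oracle replace literals with system-generated bind variables,
-- so statements that differ only in literal values share one cursor.
ALTER SYSTEM SET cursor_sharing = FORCE;
-- Revert to the default when done testing:
ALTER SYSTEM SET cursor_sharing = EXACT;
```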
But it is not such a dramatic problem in your case! From your AWR report we can see that the top wait events are "CPU + wait for CPU", "RMAN backup & recovery I/O" and "log file sync", and they account for 65% of your database wait time. The same holds for the background wait events.
If I read the report correctly, you have two members in your redo log groups and a problem with log writer I/O speed; check the distribution of files across the disks, because a slow log writer causes the other sessions to wait. The high
CPU can be related to RMAN compression. As best we can see, GATHER_STATS_JOB consumes 16% of the activity, RMAN consumes 33%, and only 21% is your applications; there is also something running in
SQL*Plus under the SYS (?) account. From the top SQL section, for the SQL of your application, the "db file scattered read" event indicates that full scans are taking place; is that normal for your application? If yes, then try
running them in parallel: as we can see in the "Top Sessions running PQs" section of your report there is no parallel execution, but as I understand it you have 8 processors, so try parallel execution, or avoid the full scans. But consider that
when you do full scans in parallel, PGA memory is used rather than the SGA buffer cache, so decrease the SGA and increase pga_aggregate_target accordingly. Is there another application or program running on the server besides Oracle?
Also, 90% of performance problems are generally due to poor SQL and poor execution plans.
Also, if you use automatic memory management you can still set the memory parameters, but you must know that in this case the values you set act as minimums; this is why you can define them at lower values so that Oracle can manage them
entirely.
Don't increase your SGA at this stage; get the AWR report using @$ORACLE_HOME/rdbms/admin/awrrpt.sql and check your snapshots. BUT first, you must change your backup strategy (look at my first post), and after that check performance again; until you do that it will be very difficult to help you.
Good luck
-
SEM_MATCH queries performance problems
Hello
We are running into performance problems when using SEM_MATCH queries.
We have a data model (ABox) containing 12,000,000 triples.
We have a schema model (TBox) containing 800 triples.
We ran an "OWLprime" entailment and built the model and entailment index with the SEM_APIS commands.
The number of triples after running the entailment was 35,000,000.
We use the following hardware configuration:
OS: Windows Server 2008
CPU: Intel CPU Xeon X 5460 @3.15 GHz (2 CPUs).
64-bit operating system.
Memory: 32 GB
From the results below, it seems that whenever we execute a query that uses the entailment (inferred data), execution time increases significantly.
Here are the results:
1. Single-pattern query using the entailment:

SELECT x
FROM TABLE(
  SEM_MATCH('(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)',
            SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
            SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL)
);

Execution time: 0.02 seconds
2.
a. Double-pattern query using the entailment:

SELECT x, y
FROM TABLE(
  SEM_MATCH('(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
             (?x rdf:type <http://www.we.com/weo.owl#patient>)
             (?x <http://www.we.com/weo.owl#id> ?y)',
            SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
            SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL)
);

Execution time: 127 seconds
b. Double-pattern query without the entailment:

SELECT x, y
FROM TABLE(
  SEM_MATCH('(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
             (?x rdf:type <http://www.we.com/weo.owl#patient>)
             (?x <http://www.we.com/weo.owl#id> ?y)',
            SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
            NULL, NULL, NULL, NULL)
);

Execution time: 2.5 seconds
3.
a. Triple-pattern query using the entailment:

SELECT x, y, z
FROM TABLE(
  SEM_MATCH('(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
             (?x rdf:type <http://www.we.com/weo.owl#patient>)
             (?x <http://www.we.com/weo.owl#id> ?y)
             (?x <http://www.we.com/weo.owl#gender> ?z)',
            SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
            SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL)
);

Execution time: 146 seconds
b. Triple-pattern query without the entailment:

SELECT x, y, z
FROM TABLE(
  SEM_MATCH('(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
             (?x rdf:type <http://www.we.com/weo.owl#patient>)
             (?x <http://www.we.com/weo.owl#id> ?y)
             (?x <http://www.we.com/weo.owl#gender> ?z)',
            SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
            NULL, NULL, NULL, NULL)
);

Execution time: 9 seconds
Thank you
Doron
If you are using 11.1.0.7.0 and have installed patch 7600122, please use:
- the curly-brace syntax for the SEM_MATCH query
- a virtual model (combining SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA') and SDO_RDF_RULEBASES('OWLPrime'))
- ALLOW_DUP=T (as part of the options parameter of SEM_MATCH)
For more details, please see
http://download.Oracle.com/docs/CD/B28359_01/AppDev.111/b28397/sdo_rdf_newfeat.htm
-
Will a defective battery in a tablet cause speed performance problems?
Hello
Sometimes it can cause a performance problem. Visit the following link to learn how to take care of the battery:
Taking care of your laptop battery
http://Windows.Microsoft.com/en-us/Windows7/taking-care-of-your-laptop-battery
In order to increase the performance of Windows XP please visit the link below:
Windows XP performance
http://TechNet.Microsoft.com/en-us/library/bb457057.aspx
Reference: PC slow? Optimize your computer for peak performance
http://www.Microsoft.com/athome/Setup/optimize.aspx
Note: as long as the tablet is connected to a power source, the battery shouldn't affect performance in itself. However, a bad battery can itself be a performance problem.
-
WRT54GS - wireless performance problem
Dear someone
I am currently having performance problems with my WRT54GSv1.1
I just received my new 30 Mbit internet connection. Unfortunately, I am not able to download at that speed. If I connect via one of the Linksys router's ethernet ports, I'm a happy person, but using the wireless makes me a little less happy.
Wireless: 15Mbit/s
Through the cable: 27Mbit/s
Network information:
The distance between the router and the laptop is 3-4 meters
Linksys WRT54GSv1.1 - Firmware Version: v4.71.4
Network wireless - g only mode
Channel - 6 (because all the surrounding neighbors use 10-11; I noticed this using the wireless discovery in Cain & Abel 4.9.29)
Security: WPA2-Personal (TKIP + AES)
All other settings are default (just reset to the factory settings).
Laptop:
Windows Vista Business SP1 32-bit
Intel PRO/Wireless 3945ABG (installed the latest drivers from the website of intel, default settings)
When I check the Intel diagnostic tools, they tell me the link is 54 Mbit/s and meter the packets at 54 Mbit/s, while all lower rates remain the same.
More diagnostic information:
Percent missed beacons: 0
Percent transmit errors: 18
Current Tx power: ~ 32 mW (100%)
Supported power levels: 1.0 mW - 32.0 mW.
Hope someone can help me here.
Thanks in advance!
Kind regards
Ski Klesman
Although wireless G "connects" at 54 Mbit/s, the maximum possible wireless "data rate" (in ideal laboratory conditions) is only about 20 to 25 Mbit/s. Unfortunately, the phrase "in ideal laboratory conditions" generally excludes the home environment!
The over-the-air overhead for wireless connections is higher than for wired connections. Thus, most people find that their Wi-Fi connection runs at 50% to 70% of their wired (LAN) speed. With your wired LAN connection at 27 Mbps, your 15 Mbps wireless speed is within normal limits.
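The 50%-70% rule of thumb above is simple arithmetic; here is an illustrative sketch of it (my own example, not a Linksys formula):

```java
// Rough estimate of expected Wi-Fi throughput from measured wired throughput.
// The 50%-70% factors are the rule of thumb quoted above, not exact values.
public class WifiEstimate {
    // Returns {low, high} expected wireless throughput in Mbit/s.
    public static double[] expectedWirelessRange(double wiredMbit) {
        return new double[] { wiredMbit * 0.50, wiredMbit * 0.70 };
    }

    public static void main(String[] args) {
        double[] range = expectedWirelessRange(27.0); // wired speed from the post
        System.out.printf("Expected wireless: %.1f - %.1f Mbit/s%n", range[0], range[1]);
        // The reported 15 Mbit/s falls inside that window, i.e. within normal limits.
    }
}
```

For a 27 Mbps wired link this gives roughly 13.5 to 18.9 Mbit/s, so the 15 Mbit/s measurement is unremarkable.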
You might be able to tweak a bit more speed from your wireless network by optimizing all your wireless settings. Here are a few suggestions:
I assume you actually want WPA2 encryption. If so, set it to AES only. When you set the router to TKIP + AES, you are actually telling the router to accept either a WPA or a WPA2 connection.
Also, give your network a unique SSID. Do not use "linksys". If you use "linksys", your computer may try to connect to your neighbor's router instead. Also set 'SSID Broadcast' to 'enabled'. This will help your computer find and lock onto your router's signal.
Bad wireless connections are often caused by interference from other 2.4 GHz devices. This includes cordless phones, wireless baby monitors, microwave ovens, wireless mice and keyboards, wireless speakers, and your neighbors' wireless networks. In rare cases, Bluetooth devices can interfere. Even some 5+ GHz phones also use the 2.4 GHz band. Disconnect these devices and see if that solves your problem.
In your router, try another channel. There are 11 channels in the 2.4 GHz band; channel 1, 6, or 11 generally works best. Scan your neighbors' networks and see which channels they use. Because the channels overlap, try to stay at least 5 channels above or below your strongest neighbors. For example, if you have a strong neighbor on channel 9, try any channel from 1 to 4.
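The channel-spacing rule above can be sketched as a small helper (a hypothetical illustration; the preferred channels 1/6/11 and the distance-of-5 rule come from the advice above):

```java
import java.util.List;

// Picks a 2.4 GHz channel at least 5 channels away from every strong neighbor.
public class ChannelPicker {
    // Non-overlapping channels, tried first.
    private static final int[] PREFERRED = {1, 6, 11};

    // Returns a channel 1-11 at distance >= 5 from all neighbors, or -1 if none fits.
    public static int pickChannel(List<Integer> neighborChannels) {
        for (int c : PREFERRED) {
            if (isClear(c, neighborChannels)) return c;
        }
        for (int c = 1; c <= 11; c++) {
            if (isClear(c, neighborChannels)) return c;
        }
        return -1; // every channel overlaps a neighbor; pick the least-bad manually
    }

    private static boolean isClear(int channel, List<Integer> neighbors) {
        for (int n : neighbors) {
            if (Math.abs(channel - n) < 5) return false;
        }
        return true;
    }
}
```

With neighbors on channels 10-11, as in this post, the helper would suggest channel 1 rather than the currently configured channel 6 (which is only 4 channels away from 10).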
Also, try putting the router about 4 to 6 feet above the ground in an open area. Do not place it behind your monitor, other computer equipment, or speakers. The antenna must be vertical.
In addition, on the computer, go to your wireless software and open 'Preferred Networks' (sometimes called 'Profiles'). There are probably a few networks listed. Remove any network called "linksys". Also remove any network that you don't recognize or no longer use. If your current network is not listed, enter its information (SSID, encryption (if any), and key (if any)). Select your current network and make it the default, then set it to connect automatically. You may need to go to 'Settings' to do this, or you may need to right-click on your network and select 'Properties' or 'Settings'.
If you continue to have problems, try the following:
For wireless G routers, try setting the transmission rate to a fixed 54 Mbps.
If you still have problems, download and install the latest firmware for your router. After a firmware update, you must reset the router to factory defaults and then configure it again from scratch. If you have a saved router configuration file, DO NOT use it.
I hope this helps.
-
Oracle 9i Java component performance problems
Hello
I am reworking an inherited Java component because of performance problems.
A Java stored procedure is used to trigger a shell script that runs a Java component. The component connects to a series of remote directories via FTP, gets the files, and loads them using SQL*Loader. Is there a better way to do this? I saw a few articles about using an FTP interface directly from PL/SQL, skipping the Java component entirely. Would that be preferable to the current solution?
Thanks in advance,
Pedro

"I am reworking an inherited Java component because of performance problems."
The first step is to identify what the performance problems are, where they occur, and what causes them.
Post details on:
1. WHAT you do
2. HOW you do it
3. WHAT results you get
4. WHAT results you expect to get
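For comparison, if the Java component stays, the single-file fetch itself needs no shell script: the JDK's built-in ftp: URL protocol handler can stream a remote file directly. A minimal sketch, where the user, host, and path are hypothetical placeholders:

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal FTP download using the JDK's built-in ftp: URL protocol handler.
public class FtpFetch {
    // Builds an ftp: URL; ";type=i" requests binary (image) transfer mode.
    public static String buildFtpUrl(String user, String pass, String host, String remotePath) {
        return "ftp://" + user + ":" + pass + "@" + host + "/" + remotePath + ";type=i";
    }

    // Streams the remote file to a local path.
    public static void download(String ftpUrl, Path localFile) throws Exception {
        URL url = new URL(ftpUrl);
        try (InputStream in = url.openStream()) {
            Files.copy(in, localFile);
        }
    }

    public static void main(String[] args) {
        // Hypothetical credentials and path, for illustration only.
        String url = buildFtpUrl("loader", "secret", "ftp.example.com", "out/data.csv");
        // download(url, Path.of("/tmp/data.csv")); // would connect to the server
        System.out.println(url);
    }
}
```

Whether moving the whole transfer into PL/SQL would be faster depends on where the time actually goes, so measuring each step, as suggested above, should come first.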
-
ViewObject range Paging performance problem
Hi all
I am facing a performance problem implementing a requirement to programmatically add a number of extra WHERE-clause parameters (using bind variables) in combination with range paging.
My code looks like this
...
ApplicationModule am = Configuration.createRootApplicationModule("services.DossierAM", "DossierAMLocal");
ViewObject vo = am.findViewObject("DossierListView");

// apply programmatic view criteria
ViewCriteria vc = vo.createViewCriteria();
ViewCriteriaRow vcr = vc.createViewCriteriaRow();
vcr.setAttribute("Reference", "15/%");
vc.addElement(vcr);
vo.applyViewCriteria(vc, true);

// enable range paging
vo.setAccessMode(RowSet.RANGE_PAGING);
vo.setIterMode(RowIterator.ITER_MODE_LAST_PAGE_PARTIAL);
vo.setRangeSize(50);

// Causes a java.sql.SQLException: Missing IN or OUT parameter at index...
// Debugging showed that the :vc_temp_1 bind variable is not filled.
vo.scrollToRangePage(5);

// vo.scrollToRange(250); // Causes the same SQLException for the same reason
...
I found 2 solutions, but they both require an additional database call which, performance-wise, is not acceptable.
The first solution is to slip in an additional call to executeQuery() before the call to scrollToRangePage(int) or scrollToRange(int).
The second solution is to use the method setRangeStart(int) instead of the scrollToRangePage/scrollToRange variants. This method also performs 2 database calls.
My question to you:
My question to you: is there another way to satisfy the requirement of programmatically adding a number of extra WHERE-clause parameters (using bind variables) in combination with range paging, without the need to perform 2 database queries?
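For reference, range paging only changes which window of rows is fetched; the page-to-row mapping is simple arithmetic. A simplified sketch of the idea (not ADF's internal code, whose exact indexing may differ):

```java
// Maps a 1-based page number to the 0-based index of the first row in that page.
public class RangePaging {
    public static int pageToStartIndex(int page, int rangeSize) {
        if (page < 1 || rangeSize < 1) {
            throw new IllegalArgumentException("page and rangeSize must be >= 1");
        }
        return (page - 1) * rangeSize;
    }

    public static void main(String[] args) {
        // With rangeSize 50, page 5 covers 0-based rows 200..249 -
        // the window requested by scrollToRangePage(5) in the code above.
        System.out.println(pageToStartIndex(5, 50));
    }
}
```

The cost complained about here is not this arithmetic but the extra query each workaround issues to populate the bind variable.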
The code was tested with JDeveloper 11.1.2.4.0 and 12.1.3.0.0 and behaves the same on both versions.
Kind regards
Steven.
Have you tried creating the view criteria with an explicit bind variable (rather than using the implicit bind variable created by the framework)?
Something like: http://www.jobinesh.com/2010/10/creating-view-criteria-having-bind.html
Dario