SQL Developer vs TOAD - query performance issue
Someone pointed out that the same queries run slower in SQL Developer than in TOAD. I'm curious about this, since I understand Java is 'slow', but I can't find another thread on this point. I don't use TOAD, so I can't compare... Could it be linked to the amount of data returned by the query? What other reasons could make SQL Developer slower on the same query?
Thank you
Attila
It also occurs to me that TOAD always uses the equivalent of the 'thick' JDBC driver. SQL Developer can use either the 'thin' driver or the 'thick' driver, but connections are usually configured with the 'thin' driver, since you need an Oracle client installed to use the 'thick' driver.
The difference is that 'thin' drivers are written entirely in Java, while 'thick' drivers consist of only a small Java layer that calls the native client libraries (that's why you need an Oracle client) to do most of the work. In theory, a thick driver is faster because most of its code does not have to be interpreted by the Java virtual machine. However, I have heard that the performance difference is not that big. The only way to know for sure is to set up a SQL Developer connection that uses the thick driver and see if it is faster (I would use a stopwatch).
Correct me if I'm wrong, but I believe that if you use 'TNS' as your connection type, SQL Developer uses the thick driver, while the default 'Basic' connection type uses the thin driver. Otherwise, you need to use the 'Advanced' connection type and enter a custom JDBC URL for the thick driver.
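For anyone who wants to try the comparison, these are the usual shapes of the two JDBC URLs (the host, port, service name and TNS alias below are placeholders): the thin driver takes a host/service URL, while the thick (OCI) driver normally takes a TNS alias resolved through the Oracle client's tnsnames.ora.

```text
jdbc:oracle:thin:@//dbhost:1521/myservice   -- thin driver, pure Java, no client install needed
jdbc:oracle:oci:@MYTNSALIAS                 -- thick (OCI) driver, requires an Oracle client install
```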
Tags: Database
Similar Questions
-
Database query performance issues
I use database polling to detect changes for an OLTP application.
My doubts are:
(1) Will it affect the performance of the OLTP application?
(2) If so, what would be the impact on the OLTP application?
(3) How can we improve performance?
(4) Any link to get more information about it?

(1) Will it affect the performance of the OLTP application?
No, it won't affect it, because an OLTP application has:
* Transactions that involve small amounts of data
* Indexed access to data
* Many users
* Frequent queries and updates
* Responsiveness
(2) If so, what would be the impact on the OLTP application?
N/A
(3) How can we improve performance?
Don't poll the table at a 30-second interval; use intervals of at least 45 seconds.
(4) Any link to get more information about it?
N/A
-
Performance issue with an Oracle SQL query
Dears
Good evening
I am new to Oracle SQL, and I have a SQL query which takes a long time to run in production.
Each table in the query contains 1.2 million records, and the DBA suggested using Oracle hints. I don't have good knowledge of this subject and am having difficulties implementing this advice to improve performance. In production the Informatica jobs are failing and stuck because of this query performance problem.
I am asking this forum urgently to help me solve this problem using hints. Kindly help me.
SELECT
  case.id,
  case.dtype,
  case.version,
  case.external_ref,
  case.creation_ts,
  rq.type
FROM
  pas_case case,
  pas_request rq,
  pas_context cn
WHERE rq.case_id = case.id
AND rq.id = cn.request_id
AND rq.dtype = 'MAINREQUEST'
AND rq.type IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
AND CAST(cn.creation_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND CAST(cn.creation_ts AS DATE) < TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
-- 2nd query
SELECT
  ra.id,
  ra.version,
  ra.request_id,
  ra.name,
  ra.value,
  ra.lob_id,
  ra.dtype,
  ra.creation_ts,
  task.main_req_type
FROM pas_requestattribute ra
JOIN
(
  SELECT tsk.id, tsk.type, main_req.main_req_type
  FROM pas_request tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.type AS main_req_type
    FROM pas_request rq,
         pas_context cn
    WHERE rq.id = cn.request_id
    AND rq.dtype = 'MAINREQUEST'
    AND rq.type IN
    (
      'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
      'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
    )
    AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
    AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
    AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
    AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON ra.request_id = task.id
AND ra.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
UNION
SELECT
  ra.id,
  ra.version,
  ra.request_id,
  ra.name,
  ra.value,
  ra.lob_id,
  ra.dtype,
  ra.creation_ts,
  main_req.type
FROM pas_requestattribute ra
JOIN
(
  SELECT rq.id, rq.type
  FROM pas_request rq,
       pas_context cn
  WHERE rq.id = cn.request_id
  AND rq.dtype = 'MAINREQUEST'
  AND rq.type IN
  (
    'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
    'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
  )
  AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
  AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
  AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
  AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
  AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
) main_req
ON ra.request_id = main_req.id
-- 3rd query
SELECT
  rb.id,
  rb.dtype,
  rb.version,
  rb.type,
  rb.creation_ts,
  rb.task_id,
  rb.color,
  rb.global_result,
  task.main_req_type
FROM pas_resultblock rb
JOIN
(
  SELECT tsk.id, tsk.type, main_req.main_req_type
  FROM pas_request tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.type AS main_req_type
    FROM pas_request rq,
         pas_context cn
    WHERE rq.id = cn.request_id
    AND rq.dtype = 'MAINREQUEST'
    AND rq.type IN
    (
      'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
      'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
    )
    AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
    AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
    AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
    AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON rb.task_id = task.id
AND rb.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
AND rb.type IS NOT NULL
AND rb.type IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
UNION
SELECT
  rb.id,
  rb.dtype,
  rb.version,
  rb.type,
  rb.creation_ts,
  rb.task_id,
  rb.color,
  rb.global_result,
  task.main_req_type
FROM pas_resultblock rb
JOIN
(
  SELECT tsk.id, tsk.type, main_req.main_req_type
  FROM pas_request tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.type AS main_req_type
    FROM pas_request rq,
         pas_context cn
    WHERE rq.id = cn.request_id
    AND rq.dtype = 'MAINREQUEST'
    AND rq.type IN
    (
      'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
      'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
    )
    AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
    AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
    AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
    AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON rb.task_id = task.id
AND rb.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
AND rb.type IS NULL
-- 4th query
SELECT
  ri.id,
  ri.dtype,
  ri.version,
  ri.resultblock_id,
  ri.name,
  ri.value,
  ri.unit,
  ri.color,
  ri.lob_id,
  ri.creation_ts,
  ri.sequence,
  ri.detaillevel,
  res_blk.main_req_type
FROM pas_resultitem ri
JOIN
(
  SELECT rb.id, rb.type AS rb_type, task.type AS task_type, task.main_req_type
  FROM pas_resultblock rb
  JOIN
  (
    SELECT tsk.id, tsk.type, main_req.main_req_type
    FROM pas_request tsk
    JOIN
    (
      SELECT cn.id AS context_id, rq.type AS main_req_type
      FROM pas_request rq,
           pas_context cn
      WHERE rq.id = cn.request_id
      AND rq.dtype = 'MAINREQUEST'
      AND rq.type IN
      (
        'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
        'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
      )
      AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
      AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
      AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
      AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
    ) main_req
    ON tsk.parent_context_id = main_req.context_id
    AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
    AND tsk.id BETWEEN 20141800000000000 AND 20141999999999999
  ) task
  ON rb.task_id = task.id
  AND rb.id BETWEEN 20141800000000000 AND 20141999999999999
  AND rb.type IN
  (
    'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
    'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
  )
) res_blk
ON ri.resultblock_id = res_blk.id
WHERE ri.name IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
AND ri.id BETWEEN 20141800000000000 AND 20141999999999999
AND ri.name IS NOT NULL
UNION
SELECT
  ri.id,
  ri.dtype,
  ri.version,
  ri.resultblock_id,
  ri.name,
  ri.value,
  ri.unit,
  ri.color,
  ri.lob_id,
  ri.creation_ts,
  ri.sequence,
  ri.detaillevel,
  res_blk.main_req_type
FROM pas_resultitem ri
JOIN
(
  SELECT rb.id, rb.type AS rb_type, task.type AS task_type, task.main_req_type
  FROM pas_resultblock rb
  JOIN
  (
    SELECT tsk.id, tsk.type, main_req.main_req_type
    FROM pas_request tsk
    JOIN
    (
      SELECT cn.id AS context_id, rq.type AS main_req_type
      FROM pas_request rq,
           pas_context cn
      WHERE rq.id = cn.request_id
      AND rq.dtype = 'MAINREQUEST'
      AND rq.type IN
      (
        'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
        'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
      )
      AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
      AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
      AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
      AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
    ) main_req
    ON tsk.parent_context_id = main_req.context_id
    AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
    AND tsk.id BETWEEN 20141800000000000 AND 20141999999999999
  ) task
  ON rb.task_id = task.id
  AND rb.id BETWEEN 20141800000000000 AND 20141999999999999
) res_blk
ON ri.resultblock_id = res_blk.id
WHERE ri.id BETWEEN 20141800000000000 AND 20141999999999999
AND ri.name IS NULL
-- 5th query
SELECT
  tsk.id,
  tsk.version,
  tsk.dtype,
  tsk.case_id,
  tsk.type,
  tsk.correlation_id,
  tsk.initiator,
  tsk.executor,
  tsk.category,
  tsk.parent_context_id,
  tsk.creation_ts,
  main_req.main_req_type
FROM pas_request tsk
JOIN
(
  SELECT cn.id AS context_id, rq.type AS main_req_type
  FROM pas_request rq,
       pas_context cn
  WHERE rq.id = cn.request_id
  AND rq.dtype = 'MAINREQUEST'
  AND rq.type IN
  (
    'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
    'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
  )
  AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
  AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
  AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
  AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
  AND rq.id BETWEEN 20141800000000000 AND 20141999999999999
) main_req
ON tsk.parent_context_id = main_req.context_id
AND tsk.dtype IN ('ANALYSIS_TASK', 'DECISION_TASK')
AND tsk.id BETWEEN $$LOW_ID1 AND $$HIGH_ID1
UNION
SELECT
  mreq.id,
  mreq.version,
  mreq.dtype,
  mreq.case_id,
  mreq.type,
  mreq.correlation_id,
  mreq.initiator,
  mreq.executor,
  mreq.category,
  mreq.parent_context_id,
  mreq.creation_ts,
  mreq.type
FROM pas_request mreq,
     pas_context cn
WHERE mreq.id = cn.request_id
AND mreq.dtype = 'MAINREQUEST'
AND mreq.type IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
AND cn.status IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
AND CAST(cn.end_ts AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND CAST(cn.end_ts AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
AND cn.id BETWEEN 20141800000000000 AND 20141999999999999
AND mreq.id BETWEEN 20141800000000000 AND 20141999999999999
Hints may or may not be necessary (it depends on the cardinalities involved).
select pc.id, pc.dtype, pc.version, pc.external_ref, pc.creation_ts, rq.type
from pas_case pc,
     pas_request rq,
     pas_context cn
where rq.case_id = pc.id
and rq.id = cn.request_id
and rq.dtype = 'MAINREQUEST'
and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
                'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
               )
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.creation_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.creation_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff')
-- 2nd query
with
main_request as
(select /*+ materialize */
        cn.id as context_id, rq.type as main_req_type
 from pas_request rq
 join
      pas_context cn
 on rq.id = cn.request_id
 and rq.dtype = 'MAINREQUEST'
 and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.chc.polling',
                 'bgc.tbf.repair.vap', 'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution',
                 'bgc.cbm', 'bgc.dar.e2etest.preparation'
                )
 and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
 and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
 and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff')
 and cn.id between 20141800000000000 and 20141999999999999
 and rq.id between 20141800000000000 and 20141999999999999
)
select ra.id, ra.version, ra.request_id, ra.name, ra.value, ra.lob_id, ra.dtype, ra.creation_ts, task.main_req_type
from pas_requestattribute ra
join
(select tsk.id, tsk.type, main_req.main_req_type
 from pas_request tsk
 join
      main_request main_req
 on tsk.parent_context_id = main_req.context_id
 and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
 and tsk.id between $$low_id1 and $$high_id1
) task
on ra.request_id = task.id
and ra.id between $$low_id1 and $$high_id1
union
select ra.id, ra.version, ra.request_id, ra.name, ra.value, ra.lob_id, ra.dtype, ra.creation_ts, main_req.main_req_type
from pas_requestattribute ra
join
     main_request main_req
on ra.request_id = main_req.context_id
-- 3rd query
select rb.id, rb.dtype, rb.version, rb.type, rb.creation_ts, rb.task_id, rb.color, rb.global_result, task.main_req_type
from pas_resultblock rb
join
(select tsk.id, tsk.type, main_req.main_req_type
 from pas_request tsk
 join
 (select cn.id as context_id, rq.type as main_req_type
  from pas_request rq
  join
       pas_context cn
  on rq.id = cn.request_id
  and rq.dtype = 'MAINREQUEST'
  and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution',
                  'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
                  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                  'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                 )
  and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
  and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
  and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff')
  and cn.id between 20141800000000000 and 20141999999999999
  and rq.id between 20141800000000000 and 20141999999999999
 ) main_req
 on tsk.parent_context_id = main_req.context_id
 and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
 and tsk.id between $$low_id1 and $$high_id1
) task
on rb.task_id = task.id
and rb.id between $$low_id1 and $$high_id1
where rb.type is null
or (rb.type is not null
    and rb.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
                    'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                    'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                   )
   )
-- 4th query
select ri.id, ri.dtype, ri.version, ri.resultblock_id, ri.name, ri.value, ri.unit, ri.color,
       ri.lob_id, ri.creation_ts, ri.sequence, ri.detaillevel, res_blk.main_req_type
from pas_resultitem ri
join
(select rb.id, rb.type as rb_type, task.type as task_type, task.main_req_type
 from pas_resultblock rb
 join
 (select tsk.id, tsk.type, main_req.main_req_type
  from pas_request tsk
  join
  (select cn.id as context_id, rq.type as main_req_type
   from pas_request rq
   join
        pas_context cn
   on rq.id = cn.request_id
   and rq.dtype = 'MAINREQUEST'
   and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution',
                   'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
                   'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                   'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                  )
   and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
   and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
   and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff')
   and cn.id between 20141800000000000 and 20141999999999999
   and rq.id between 20141800000000000 and 20141999999999999
  ) main_req
  on tsk.parent_context_id = main_req.context_id
  and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
  and tsk.id between 20141800000000000 and 20141999999999999
 ) task
 on rb.task_id = task.id
 and rb.id between 20141800000000000 and 20141999999999999
 and rb.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
                 'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                )
) res_blk
on ri.resultblock_id = res_blk.id
and ri.id between 20141800000000000 and 20141999999999999
where ri.name is null
or (ri.name is not null
    and ri.name in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
                    'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                    'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                   )
   )
-- 5th query
with
main_request as
(select /*+ materialize */
        mreq.id, mreq.version, mreq.dtype, mreq.case_id, mreq.type, mreq.correlation_id, mreq.initiator, mreq.executor,
        mreq.category, mreq.parent_context_id, mreq.creation_ts, mreq.type as main_req_type
 from pas_request mreq
 join
      pas_context cn
 on mreq.id = cn.request_id
 and mreq.dtype = 'MAINREQUEST'
 and mreq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
                   'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
                   'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
                  )
 and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
 and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
 and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff')
 and cn.id between 20141800000000000 and 20141999999999999
 and mreq.id between 20141800000000000 and 20141999999999999
)
select tsk.id, tsk.version, tsk.dtype, tsk.case_id, tsk.type, tsk.correlation_id, tsk.initiator, tsk.executor,
       tsk.category, tsk.parent_context_id, tsk.creation_ts, main_req.main_req_type
from pas_request tsk
join
     main_request main_req
on tsk.parent_context_id = main_req.id
and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.id between $$low_id1 and $$high_id1
union
select id, version, dtype, case_id, type, correlation_id, initiator, executor,
       category, parent_context_id, creation_ts, main_req_type
from main_request
Regards
Etbin
-
Does using APEX_ITEM in SQL cause performance issues?
Hello
I was wondering if using APEX_ITEM.* in the SQL source for a report could cause performance issues? I expect the report to return a little more than 3000 records. When I developed it, it worked fine, but at that point we had only 150 cases or so; now that we have migrated the application to our test system, the page takes about 2 minutes to load. Here is my SQL for the report:
select distinct
       initcap(mpa.pa_name) || ' (' || sd.designation_code || ')' site,
       frc.report_description report_category,
       mf.feature_desc function,
       decode(cmf.selected_for_qa, 'Y', 'X', 'N', ' ') qa,
       apex_item.select_list_from_query(
         21,
         cpf.assign_to,
         'select ss.firstname || '' '' || ss.surname d, ss.staff_number r
          from snh_staff ss,
               snh_management_units smu,
               m_pa_snh_area psa
          where ss.mu_unit_id = smu.unit_id
          and smu.unit_id = psa.unit_id
          and ss.currently_employed = ''Y''
          and psa.scm_lead = ''Y''
          and psa.main_area = ''P''
          and psa.pa_code = ' || mpa.pa_code,
         null,
         'YES',
         null,
         ' ') assign_to,
       apex_item.select_list_from_query(
         22,
         decode(to_char(cpf.planned_fieldwork, 'DD/MM/'),
                '30/06/', 'Q1 ' || to_char(planned_fieldwork, 'YYYY'),
                '30/09/', 'Q2 ' || to_char(planned_fieldwork, 'YYYY'),
                '31/12/', 'Q3 ' || to_char(planned_fieldwork, 'YYYY'),
                '31/03/', 'Q4 ' || to_char(planned_fieldwork - 365, 'YYYY'),
                to_char(cpf.planned_fieldwork, 'YYYY')),
         'select r d, from cm_cycle_q_years') planned_fieldwork,
       apex_item.select_list_from_query(
         23,
         decode(to_char(cpf.planned_cmf, 'DD/MM/'),
                '30/06/', 'Q1 ' || to_char(planned_cmf, 'YYYY'),
                '30/09/', 'Q2 ' || to_char(planned_cmf, 'YYYY'),
                '31/12/', 'Q3 ' || to_char(planned_cmf, 'YYYY'),
                '31/03/', 'Q4 ' || to_char(planned_cmf - 365, 'YYYY'),
                to_char(cpf.planned_cmf, 'YYYY')),
         'select r d, from cm_cycle_q_years') planned_cmf,
       apex_item.select_list_from_query(
         24,
         cpf.monitoring_method_id,
         'select method, monitoring_method_id from cm_monitoring_methods where active_flag = ''Y''') monitoring_method,
       apex_item.text(
         25,
         cpf.pre_cycle_comments,
         15,
         255,
         'title="' || cpf.pre_cycle_comments || '"',
         'annualPlanningComments' || to_char(cpf.plan_mon_feature_id)) comments,
       apex_item.text(
         26,
         to_char(cpf.contract_let, 'MON-DD-YYYY'),
         11,
         11) contract_let,
       apex_item.text(
         27,
         to_char(cpf.contract_report_planned, 'MON-DD-YYYY'),
         11,
         11) contract_report,
       apex_item.text(
         28,
         cpf.advisor_data_entry,
         11,
         11) advisor_entry,
       cms.complete_percentage || ' ' || cms.description status,
       apex_item.text(
         29,
         to_char(cpf.result_sent_to_oo, 'MON-DD-YYYY'),
         11,
         11) result_to_oo,
       cpf.plan_mon_feature_id,
       cmf.monitored_feature_id,
       mpa.pa_code,
       mpf.site_feature_id
from   fm_report_category frc,
       m_feature mf,
       m_pa_features mpf,
       m_protected_area mpa,
       snh_designations sd,
       cm_monitored_features cmf,
       cm_plan_mon_features cpf,
       cm_monitoring_status cms,
       cm_cycles cc,
       m_pa_snh_area msa,
       snh_management_units smu,
       snh_sub_areas ssa
where  frc.report_category_id = mf.report_category_id
and    mf.feature_code = mpf.feature_code
and    mpa.pa_code = mpf.pa_code
and    mpa.designation_id = sd.designation_id
and    mpf.site_feature_id = cmf.site_feature_id
and    cmf.monitored_feature_id = cpf.monitored_feature_id
and    cms.monitoring_status_id = cmf.monitoring_status_id
and    cc.cycle# = cmf.cycle#
and    msa.pa_code = mpa.pa_code
and    msa.unit_id = smu.unit_id
and    msa.sub_area_id = ssa.sub_area_id
and    cc.current_cycle = 'Y'
and    msa.main_area = 'P'
and    msa.scm_lead = 'Y'
and    mpf.interest_code in (1, 2, 3, 9)
and    ((nvl(:P6_REPORTING_CATEGORY, 'ALL') = 'ALL'
         and to_char(frc.fca_feature_category_id) =
             case nvl(:P6_BROAD_CATEGORY, 'ALL') when 'ALL' then to_char(frc.fca_feature_category_id) else :P6_BROAD_CATEGORY end)
        or (nvl(:P6_REPORTING_CATEGORY, 'ALL') != 'ALL'
         and to_char(mf.report_category_id) =
             case nvl(:P6_REPORTING_CATEGORY, 'ALL') when 'ALL' then to_char(mf.report_category_id) else :P6_REPORTING_CATEGORY end))
and    ((nvl(:P6_SNH_SUB_AREA, 'ALL') = 'ALL'
         and to_char(msa.unit_id) =
             case nvl(:P6_SNH_AREA, 'ALL') when 'ALL' then to_char(msa.unit_id) else :P6_SNH_AREA end)
        or (nvl(:P6_SNH_SUB_AREA, 'ALL') != 'ALL'
         and to_char(msa.sub_area_id) =
             case nvl(:P6_SNH_SUB_AREA, 'ALL') when 'ALL' then to_char(msa.sub_area_id) else :P6_SNH_SUB_AREA end))
and    ((nvl(:P6_SITE, 'ALL') != 'ALL'
         and mpa.pa_code = :P6_SITE)
        or nvl(:P6_SITE, 'ALL') = 'ALL')
As you can see, I have 9 calls to the APEX_ITEM API, and when I take them out the report performs as I expect.
Has anyone else run into this problem?
We're currently on APEX 3.0.1.00.08 and using Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64 bit, Production.
Thanks in advance,
Paul.

Try removing all but one of the apex_item.select_list_from_query calls, and then rewrite that one to use subquery factoring. Then compare timings for the same query with and without subquery factoring. Also, is 500 rows really a realistic number to expect someone to change at once?
-
How to avoid performance issues in PL/SQL?
To my knowledge, below are a few points for avoiding performance problems in PL/SQL.
Are there any other points for avoiding performance problems?
1. Use FORALL instead of a FOR loop, and BULK COLLECT, to avoid looping row by row.
2. EXECUTE IMMEDIATE is faster than DBMS_SQL.
3. Use NOCOPY for OUT and IN OUT parameters if the original value need not be retained. The overhead of keeping a copy of the OUT value is avoided.

Thanks for your comments Justin!
BC explains things well on the SQL part...
http://www.DBA-Oracle.com/art_sql_tune.htm
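As a minimal sketch of point 1, assuming a hypothetical table big_tab(id, status), a single BULK COLLECT fetch plus a FORALL update replaces a row-by-row FOR loop:

```sql
DECLARE
  TYPE t_ids IS TABLE OF big_tab.id%TYPE;  -- big_tab is a made-up table for illustration
  l_ids t_ids;
BEGIN
  -- One fetch for the whole set instead of one fetch per row
  SELECT id BULK COLLECT INTO l_ids
    FROM big_tab
   WHERE status = 'OLD';

  -- One SQL engine context switch for the whole batch instead of one per row
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE big_tab
       SET status = 'ARCHIVED'
     WHERE id = l_ids(i);

  COMMIT;
END;
/
```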
-
I have a table that lists user visits to pages on our website. The information takes the following structure within our record table:
VisitID | VisitorID | VisitPage | VisitDate
Index | UniqueID | VisitPage | Date/time
I need to get the VisitorIDs who visited within a user-defined date range for a report that is to be written, and then get a count of the distinct days on which each user visited our website. I have a working query attached that gets me the result set I want, but it's so _very_ slow. Query Analyzer shows that 84% of the cost is in table scans. I hope someone has a suggestion on how to optimize it. I am currently working on an MSSQL 8.0 server, so I have no access to the trunc() function that I would prefer to use on the dates, but that's a minor inconvenience.
Thank you
-Daniel
Quote:
Posted by: Dan Bracuk
Do you have an index on VisitDate? Does VisitDate contain real times, or are all the time parts 0:00? If they are all 00:00, you don't need the convert function. Otherwise, you might have better luck selecting all the data from your database and using Q of Q for the counts.
Dan was right on this one. Looking at the table design, the index was absent. Once I added an index my query performance improved dramatically, enough so that I don't have many worries anymore. Thanks for the suggestion.
-Daniel
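For the record, a sketch of the approach that worked here, assuming the table is named Visits and the visitor column is VisitorID (both names are made up; the post does not give them). Style 112 in CONVERT yields YYYYMMDD, standing in for the missing trunc(), and the open-ended date range keeps the predicate index-friendly:

```sql
-- Hypothetical index name; this is what fixed the table scans
CREATE INDEX IX_Visits_VisitDate ON Visits (VisitDate, VisitorID);

-- Count distinct calendar days per visitor within the range
SELECT VisitorID,
       COUNT(DISTINCT CONVERT(varchar(8), VisitDate, 112)) AS VisitDays
  FROM Visits
 WHERE VisitDate >= @StartDate
   AND VisitDate <  DATEADD(day, 1, @EndDate)
 GROUP BY VisitorID;
```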
-
Hi gurus,
I am using OBIEE 11.1.6.8. One of my reports has a performance issue; when I dug into it I found that the date filter is not applied in the SQL code generated and sent to the DB, which causes a table scan. The strange thing is that the report still displays data based on the date-range filter. It only occurs with the date dimension; all other dimensions work properly. I'm not sure what is missing.
Thanks in advance.
Regards
Mohammed.
I found the problem; it is in the DB features settings. After checking the query option in the DB features, it works now.
-
Poor query performance when joining CONTAINS to another table
We just recently started evaluating Oracle Text for a search solution. We must be able to search a table which can have over 20 million rows. Each user has visibility to only a very small portion of those rows. The goal is to have a single Oracle Text index that covers all the searchable columns of the table (a multi-column datastore) and provides a score for each search result so that we can sort the results in descending score order. What we see is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we first try to reduce the rows that the CONTAINS query must search by using a join, we find the query performance degrades significantly.
For example, we can find all the records that a user has access to in our base table with the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
  ON d.duns_loc = pc.duns_loc
 AND pc.emp_id = :employeeID;
This query runs in < 100 ms and, in this example, returns close to 1200 rows of the duns_loc primary key.
Our search query looks like this:
SELECT SCORE(1), d.*
FROM duns d
WHERE CONTAINS (text_key, :search, 1) > 0
ORDER BY SCORE(1) DESC;
The :search value in this example is 'highway'. The query returns about 246K rows in about 2 seconds.
2 seconds is good, but we should be able to get a much quicker response if the query did not have to search the entire table, right? Since each user can only 'view' records they are assigned to, the search operation should only have to scan a tiny, tiny percentage of the Text index, and we should see faster (and more relevant) results. So we then wrote the following query:
WITH subset AS
  (SELECT d.duns_loc
   FROM duns d
   JOIN primary_contact pc
     ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID
  )
SELECT SCORE(1), d.*
FROM duns d
JOIN subset s
  ON d.duns_loc = s.duns_loc
WHERE CONTAINS (text_key, :search, 1) > 0
ORDER BY SCORE(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to run than the sum of the times of its contributing parts; it takes more than 6 seconds to run. Neither we nor our DBA can understand why it runs worse than a wide-open search. The wide-open search is not ideal, because the query ends up returning records that the user does not have access to view.
Has anyone ever encountered something like this? Any suggestions on what to look at or where to go next? If anyone needs more information to help diagnose it, let me know and I'll be happy to post it here.
Thank you!!
Since you're using two tables, you will probably get better performance with an index that uses a section group and a user_datastore procedure. It should be able to retrieve all of the data with a simple query that hits a single index. Please see the demo below. Indexing may be slower, but searching should be faster. If you have your primary and foreign keys in place and statistics current before you create the index, it should speed up indexing.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
  2  (duns_loc NUMBER,
  3   business_name VARCHAR2 (15),
  4   business_name2 VARCHAR2 (15),
  5   address_line VARCHAR2 (30),
  6   city VARCHAR2 (15),
  7   state VARCHAR2 (2),
  8   business_phone VARCHAR2 (15),
  9   contact_name VARCHAR2 (15),
 10   contact_title VARCHAR2 (15),
 11   text_key VARCHAR2 (1),
 12   CONSTRAINT duns_pk PRIMARY KEY (duns_loc))
 13  /

Table created.

SCOTT@orcl_11gR2> CREATE TABLE primary_contact
  2  (duns_loc NUMBER,
  3   emp_id NUMBER,
  4   CONSTRAINT primary_contact_pk
  5     PRIMARY KEY (emp_id, duns_loc),
  6   CONSTRAINT primary_contact_fk FOREIGN KEY (duns_loc)
  7     REFERENCES duns (duns_loc))
  8  /

Table created.

SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (1, 'highway')
  2  /

1 row created.

SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (2, 'highway')
  2  /

1 row created.

SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
  2  /

1 row created.

SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (2, 2)
  2  /

1 row created.

SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line)
  2  SELECT object_id, object_name
  3  FROM all_objects
  4  WHERE object_id > 2
  5  /

76029 rows created.

SCOTT@orcl_11gR2> INSERT INTO primary_contact
  2  SELECT object_id, namespace
  3  FROM all_objects
  4  WHERE object_id > 2
  5  /

76029 rows created.

SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')

PL/SQL procedure successfully completed.

SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')

PL/SQL procedure successfully completed.

SCOTT@orcl_11gR2> -- procedure:
SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE duns_proc
  2    (p_rowid IN ROWID,
  3     p_clob  IN OUT NOCOPY CLOB)
  4  AS
  5  BEGIN
  6    FOR d IN
  7      (SELECT duns_loc,
  8              '
' ||
  9              business_name || ' ' ||
 10              business_name2 || ' ' ||
 11              address_line || ' ' ||
 12              city || ' ' ||
 13              state || ' ' ||
 14              business_phone || ' ' ||
 15              contact_name || ' ' ||
 16              contact_title ||
 17              ' '
 18              AS duns_cols
 19       FROM duns
 20       WHERE ROWID = p_rowid)
 21    LOOP
 22      DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (d.duns_cols), d.duns_cols);
 23      FOR pc IN
 24        (SELECT '' || emp_id || ' ' AS pc_col
 25         FROM primary_contact
 26         WHERE duns_loc = d.duns_loc)
 27      LOOP
 28        DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (pc.pc_col), pc.pc_col);
 29      END LOOP;
 30    END LOOP;
 31  END duns_proc;
 32  /

Procedure created.

SCOTT@orcl_11gR2> SHOW ERRORS
No errors.
SCOTT@orcl_11gR2> -- user datastore, section group with field section:
SCOTT@orcl_11gR2> begin
  2    ctx_ddl.create_preference ('duns_store', 'USER_DATASTORE');
  3    ctx_ddl.set_attribute ('duns_store', 'PROCEDURE', 'duns_proc');
  4    ctx_ddl.set_attribute ('duns_store', 'OUTPUT_TYPE', 'CLOB');
  5    ctx_ddl.create_section_group ('duns_sg', 'BASIC_SECTION_GROUP');
  6    ctx_ddl.add_field_section ('duns_sg', 'emp_id', 'emp_id', true);
  7  end;
  8  /

PL/SQL procedure successfully completed.

SCOTT@orcl_11gR2> -- text index with user datastore and section group:
SCOTT@orcl_11gR2> CREATE INDEX duns_context_index
  2  ON duns (text_key)
  3  INDEXTYPE IS CTXSYS.CONTEXT
  4  FILTER BY duns_loc
  5  PARAMETERS
  6    ('DATASTORE duns_store
  7      SECTION GROUP duns_sg
  8      SYNC (ON COMMIT)')
  9  /

Index created.

SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1

PL/SQL procedure successfully completed.

SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'

PL/SQL procedure successfully completed.

SCOTT@orcl_11gR2> -- query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> SELECT SCORE(1), d.*
  2  FROM duns d
  3  WHERE CONTAINS
  4          (text_key,
  5           :search || ' AND ' ||
  6           :employeeid || ' WITHIN emp_id',
  7           1) > 0
  8  /

  SCORE(1)   DUNS_LOC BUSINESS_NAME   BUSINESS_NAME2  ADDRESS_LINE                   CITY            ST BUSINESS_PHONE
---------- ---------- --------------- --------------- ------------------------------ --------------- -- ---------------
CONTACT_NAME    CONTACT_TITLE   T
--------------- --------------- -
         3          1                                 highway

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2241294508

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |    38 |  1102 |    12   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DUNS               |    38 |  1102 |    12   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | DUNS_CONTEXT_INDEX |       |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH||' AND '||:EMPLOYEEID||' WITHIN emp_id',1)>0)

SCOTT@orcl_11gR2>
-
SQL Developer not accepting some keyboard keys
Hello
I've been using SQL Developer for 5 or 6 months, with minor problems and great satisfaction.
A few days ago, just out of the blue, in the middle of the day, SQL Developer started accepting only the following keys:
Ctrl+V, Ctrl+C, F5, F9 (I think all the function keys), Backspace, Delete, the arrow keys, and Enter.
This happens only in the text editor part. I am able to use these keys in the results pane, whether it is script output or query results.
SQL Developer otherwise works fine (if I paste with the mouse, paste works... and so do all the other mouse actions).
I have checked the PC for viruses using AVG; no virus was found. (My AVG is updated daily and runs scheduled daily virus checks, and this issue has been here for 1 or 2 weeks, so if it were a virus I think it would have been discovered. In addition, the keys still work elsewhere in SQL Developer, just not in the text editor component.)
It goes without saying, but if I run Notepad and type, the keys work properly.
I also reinstalled it to a different folder, with no luck.
Then tried restarting... no luck.
Tried Safe Mode... no luck.
I am using SQL Developer version 1.5.4, build MAIN-5940... on Windows XP.
I suspect there is an option that I disabled without paying attention...
Help!
Kind regards
Charles
All that time trying possible solutions is understandable, but then typing up your post... when searching the forum would have given you the solution in under 1 minute:
Preferences - Accelerators - Load Preset - Default
Hope that helps,
K. -
SQL Developer 4.1.2.20, build MAIN-20.64: I am not able to drag and drop a file from Windows Explorer into the SQL Developer editor window if the file name or path includes a hash '#'. Unfortunately, my main directory structure contains a '#' in one of the parent folder names, and I use drag and drop all the time... that is, I used to. :-)
I am running Windows 7 Enterprise 64-bit with Service Pack 1.
This wasn't a problem in the previous version, SQL Developer 4.1.1.19, build MAIN-19.59.
Thanks for digging in deeply and providing a repeatable test case. It is a very strange edge case. Particularly interesting is...
The same issue exists in JDeveloper Studio Edition version 12.2.1.0.0.
From my tests in SQL Developer 4.1.2, the issue seems to be, as you say, that trying to open any file (I tried sql, xml, and pkb types) by drag and drop from Windows Explorer onto an open target editor fails whenever there is a hash symbol somewhere in the file specification.
At first, as a workaround, I thought I could recommend dragging and dropping from our View > Files browser rather than Windows Explorer. That avoids the issue, and you can even drop onto the Start Page tab; no worksheet or other editor needs to be open beforehand. However, there is a completely different problem with that: trying to close the last open XML editor tab hangs the entire product.
Since you are not reporting this against an Early Adopter release, where our team would log the bug, the standard procedure is for you to open a service request with Oracle Support. My search did not turn up any recent bug like this logged against SQL Developer or JDeveloper.
Edit: In fact, just double-clicking instead of using drag and drop from View > Files avoids both issues; the incorrect file name and the hang at the end.
-
Is SQL Developer 1.5.5 compatible with SQL Server 2008?
We are trying to perform a database migration, and we keep getting 'Source Location Plugin has failed'. We have the jTDS drivers from SourceForge and have tried several versions, including 1.0, 1.2, 1.2.2, 1.2.4, 1.2.5, 1.2.7 and 1.2.8... and I came across a thread about third-party migration problems in SQL Dev,
and I was wondering: is SQL Developer 1.5.5 compatible with Microsoft SQL Server 2008?
Those versions are old; older even than SS2008.
Get SQL Developer version 4.0.3 and the 1.3.1 jTDS driver.
-
Performance issues with a large number of nodes
I am creating an application to display (large) graphs, for example:
But I have run into some performance issues, even for a relatively small number of nodes in the scene graph (±2000 in the picture above). The graph is built step by step, adding circles and paths to a StackPane. The circles and paths can be semi-transparent. For a small number of nodes I get a solid 60 FPS, but this decreases over time to about 5 frames per second. As soon as I stop adding new nodes, the framerate shoots back up to 60 frames per second. The framerate drops even when all the nodes are outside the viewport.
My questions are:
* Is calling Platform.runLater() 2000 times a minute too much?
* Might this just be a problem with my graphics card? (I have an Intel HD Graphics 3000)
* The JavaFX pulse logger reports things like the following; is there meaningful information in it that I'm missing?
PULSE: 1287 [163ms:321ms]
T14 (0 +0ms): CSS Pass
T14 (0 +5ms): Layout Pass
T14 (6 +152ms): Waiting for previous rendering
T14 (158 +0ms): Copy state to render graph
T12 (159 +0ms): Dirty Opts Computed
T12: Slow shape path for null
T12 (159 +160ms): Painted
T12 (319 +2ms): Presentable.present
T12 (321 +0ms): Finished Presenting Painter
Counters:
Cached region background image used: 14
NGRegion renderBackgroundShape slow path: 1
Nodes rendered: 1839
Nodes visited during render: 1840
Kind regards
Yuri
Basically, try some performance optimizations, ranging from the simple to the complex. Each of the changes below may give you a performance increase; some will probably help much more than others (depending on where the real bottleneck is). I would probably start by replacing the Paths with Lines and reducing the number of Platform.runLater calls (as well as only adding nodes that fall within the viewport).
> The framerate drops even when all the nodes are outside the viewport.
Only place in the graph the nodes which fall inside the viewport.
> Is calling Platform.runLater() 2000 times a minute too much?
Yes; there is no reason to call it more often than the framerate, which JavaFX caps at 60 frames per second by default.
> Might this just be a problem with my graphics card? (I have an Intel HD Graphics 3000)
Yes, that is a relatively low-end graphics system. But see the developer comment below: your CPU and your choice of graphics primitives can also affect rendering speed.
> Adding circles and paths to a StackPane
You don't need a StackPane for this; a Group is a simpler and probably better container.
> may be semi-transparent
Removing the transparency *may* speed things up.
----
You may be running into:
https://JavaFX-JIRA.Kenai.com/browse/RT-20405 : Improve path rendering performance
Perhaps if you use Lines rather than Paths, performance will improve.
A comment by a developer on that performance tweak request is:
"It is quite normal for applications that use arbitrary paths (whether Path, SVGPath, Polyline, or Polygon nodes), because those paths are rendered in software. Whereas Circle, Ellipse, Line and Rectangle map very easily to primitive operations that can be performed entirely on the GPU, which makes them essentially cheap. There is no comparing the rendering of a complicated shape with the rendering of simple primitives, for this reason."
----
Setting the cache hints can help, but probably only if you animate the nodes.
----
Add a level-of-detail capability to your graph, so that a zoomed-out chart does not render as many nodes as a zoomed-in one.
----
Are you running any computation-heavy code on the JavaFX application thread that could stall it?
Can you make a test case available so that others can reproduce your problem?
-
Adobe AIR performance issues?
I recently finished a huge project in Adobe Muse, and I love the new responsive design tools. However, as the project continued to grow, Muse became increasingly slow to react, and interacting with the responsive slider was almost impossible. As someone who worked for Apple, I'd be the first to point the finger at my Mac and offer to buy a new computer, but it is a new, maxed-out 15-inch MacBook Pro. All the other Adobe CC applications work without problems. When Muse was first released, as I remember, it was written on top of the Adobe AIR platform instead of being written natively in C++. Could that have something to do with the performance issues I am seeing? If so, is there anything I can do to improve the performance?
AIR has not been the basis for Muse for a long time now. The natively rewritten Muse was released in early 2014.
In my view, the Muse team is actively working on performance boosts, but I think we should not expect any miracles: no other application I know of computes and displays dynamically changing, interactive objects in real time across endless pages the way Muse does.
In my opinion, the only way to significantly increase the speed would be to gray out all assets while the scrubber is being moved.
-
Hello
Oracle version: Oracle Database 11g Release 11.1.0.7.0 - 64-bit Production
SQL Developer version: 4.0.2.15, build 15.21
I have a question about drawing an entity-relationship diagram with SQL Developer, and I would be grateful if you could kindly give me a helping hand.
I would like to draw an entity-relationship diagram in order to better visualize the relationships between multiple tables. After a Google search, I found the following interesting article describing the procedure to follow:
http://www.Oracle.com/technetwork/issue-archive/2014/14-may/o34sqldev-2193423.html
(What I need, and what I am trying to produce, is Figure 4 in the article above.)
I did exactly as stated in the article, and apparently everything worked well enough; in other words, I got several UML-like rectangles showing the tables' columns, their data types, primary and foreign keys, and arrows indicating the links between each parent/child table.
Later, I tried to do the same thing on a different Oracle instance and a different database using the same procedure. Yet something strange happened this time. SQL Developer again printed the tables on screen,
inside the rectangles with types, keys, and so on, but there were no arrows showing the links between the tables. I just saw the selected tables on the grid, but without any association/relationship shown (as an arrow symbol)
between them. I repeated the procedure on another development instance, with a few test tables I created with primary and foreign keys, just to make sure I had followed the procedure correctly. Once again
the test was successful, but when I repeated it on the problem instance, the same issue persisted; in other words, no arrows (relationships) between the tables.
Any idea?
Could this be due to a lack of privileges on the instance? If so, what should be granted, in terms of roles?
Thanks in advance
I think what you are seeing is that not all databases have foreign keys; some applications hard-code the relationships instead of letting the database handle them.
Connect to that other Oracle instance and browse the affected tables: do they have FK constraints defined on them?
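One quick way to check, assuming you can connect as the owning schema (the table names below are placeholders for the affected tables):

-- Foreign key (referential) constraints on the tables in question;
-- no rows here means there is nothing for the diagram to draw.
SELECT table_name,
       constraint_name,
       r_constraint_name,
       status
FROM   user_constraints
WHERE  constraint_type = 'R'
AND    table_name IN ('PARENT_TABLE', 'CHILD_TABLE');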
-
Hello! My first post here, so go easy on me 8)
I work remotely in a Windows 7 Enterprise environment. From there, I open Oracle SQL Developer (sqldeveloper-4.0.2.15.21-no-jre). The software works without problems. However, when I leave my Remote Desktop session and reconnect, SQL Developer has crashed. I tried sqldeveloper-4.0.2.15.21-x64 and the same thing happens. I have not changed any settings, and I have no problems with any other software (e.g., Toad or other dev tools).
Does anyone have suggestions on what this might be?
Thank you
Nick
Upgrading the JDK version is a reasonable approach if you do not want to pursue a bug report using the hs_err_pid4848.log. Just be aware that getting SQL Developer to use the new JDK requires updating the SetJavaHome line in your user settings on the remote machine.
For SQL Developer 4.0.x versions, the standard location for SetJavaHome is %AppData%\sqldeveloper\1.0.0.0.0\product.conf.
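For reference, the line in product.conf looks like this (the JDK path shown is only an example; point it at your actual install):

SetJavaHome C:\Program Files\Java\jdk1.7.0_80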
To see which JDK your running SQL Developer instance uses, look at Help > About > Properties and search for "jdk" with the search widget.
P.S.: As usual, I forgot to add a reminder that the AppData directory is one of those hidden Windows files/folders under C:\Users\.
Maybe you are looking for
-
Cannot open SPDY links in FF 13.0 beta 1
I am using Firefox 13.0b1 on the Win8 CP, but I can't open any SPDY links in Firefox. I checked about:config, and yes, 'network.http.spdy.enabled' is 'true'.
-
Why, all of a sudden, do I have to start my desktop computer twice?
When I turn on my computer, no matter what, I have to shut it down with the power button, wait, then push it again, and then it starts up as it should. This just started about 2 weeks ago. Once it's up and running it works fine! But the starting-twice thing is annoying.
-
Read/write file in Unicode (UTF-16)
Hi, I have a problem writing a file in Unicode (UTF-16). I have to read a file with LabVIEW, change some settings, and write the new data back to the same file. The file uses Unicode UTF-16. I downloaded a library from here: https://decibel.ni.com/content/d
-
Is it possible to launch and run a VI in a project from a simple icon?
I am trying to launch and run a VI that must be opened within a project (it has FPGA links, etc.) directly from Windows, without having to open the project first and then select the VI from the project. Basically, I want it to behave as if the code were deployed.
-
Windows 7 64-bit PnP monitor problem; can I remove it?
I am having problems with my Dell 1749 on Windows 7 64-bit. I had to do a fresh install of Windows, and this PnP monitor appeared, which is causing me problems with resolution and no 3D available. Can I remove the PnP monitor and use PC of Linda? I'm on a 17