SQL query to create artificial rows (performance)
Database: Oracle 11g
The above is a table of operations for a given product id. For each master job there are child jobs, and for each child job there is an 'insert' entry. When we remove the master job we only store one record, keyed by the master job id. However, in some reports we need to show each child job id with a 'master_removed' status, as below. Basically, each such logical record gets the sequence number of the master's removal entry - in this example sequence # 4. tab1 is a very large table.
my query:
with tab1 as
(select 1 sequence_num, 'p1' productid, 'jm1' job_master_id, 'jm11' job_id, 'insert' status
   from dual
 union all
 select 2 sequence_num, 'p1' productid, 'jm1' job_master_id, 'jm12' job_id, 'insert' status
   from dual
Can we rewrite this query as a single select statement, without three full table scans on the same table? Something using CONNECT BY or similar... Thanks in advance...
Something like:
with tab1 as (
select 1 sequence_num, 'p1' productid, 'jm1' job_master_id, 'jm11' job_id, 'insert' status from dual union all
select 2 sequence_num, 'p1' productid, 'jm1' job_master_id, 'jm12' job_id, 'insert' status from dual union all
select 3 sequence_num, 'p1' productid, 'jm1' job_master_id, 'jm13' job_id, 'insert' status from dual union all
select 4 sequence_num, 'p1' productid, null job_master_id, 'jm1' job_id, 'master_removed' status from dual union all
select 1 sequence_num, 'p2' productid, 'jm2' job_master_id, 'jm21' job_id, 'insert' status from dual
)
select sequence_num,
       productid,
       nvl (job_master_id, job_id) job_master_id,
       nvl2 (job_master_id, job_id, prior job_id) job_id,
       status,
       case level
         when 1 then 'physical'
         else 'logical'
       end record,
  from tab1
connect by productid = prior productid
       and job_id = prior job_master_id
       and status = 'master_removed'
order by productid,
         job_master_id,
         job_id
/
SEQUENCE_NUM PR JOB_ JOB_ STATUS         RECORD
------------ -- ---- ---- -------------- --------
           4 p1 jm1  jm11 master_removed logical
           1 p1 jm1  jm11 insert         physical
           2 p1 jm1  jm12 insert         physical
           4 p1 jm1  jm12 master_removed logical
           4 p1 jm1  jm13 master_removed logical
           3 p1 jm1  jm13 insert         physical
           4 p1 jm1       master_removed physical
           1 p2 jm2  jm21 insert         physical

8 rows selected.

SQL>
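Not from the thread, but for comparison: the same eight rows can also be manufactured with a plain self-join instead of CONNECT BY (still two passes over tab1); a sketch against the same tab1:

```sql
-- logical rows: one per insert row whose master has a master_removed entry
select mr.sequence_num, ins.productid, ins.job_master_id, ins.job_id,
       mr.status, 'logical' record
  from tab1 ins
  join tab1 mr
    on mr.productid = ins.productid
   and mr.job_id    = ins.job_master_id
   and mr.status    = 'master_removed'
union all
-- physical rows: every stored row, with the removal row normalized
select sequence_num, productid,
       nvl (job_master_id, job_id),
       nvl2 (job_master_id, job_id, null),
       status, 'physical'
  from tab1;
```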
SY.
Tags: Database
Similar Questions
-
I am trying to produce a report for certain POIs; unfortunately I do not know how to formulate the query correctly. Some sample data and the rules for the report are described below:
with my_tab as (
select 1 id, 'ACME' supplier, 1 poi, 'A' attr1, 'B' attr2 from dual union all
select 2, 'TOMA', 2, 'C', 'D' from dual union all
select 3, 'ACME', 3, 'A', 'B' from dual union all
select 4, 'TOMA', 3, 'C', 'D' from dual union all
select 5, 'ACME', 4, null, null from dual union all
select 6, 'TOMA', 4, 'C', 'D' from dual union all
select 7, 'ACME', 5, null, 'B' from dual union all
select 8, 'TOMA', 5, 'C', 'D' from dual union all
select 9, 'ACME', 6, null, null from dual union all
select 10, 'TOMA', 6, null, 'D' from dual)
rules for the result set:
(a) each distinct poi must appear exactly once
(b) if there is more than one row with the same poi id, the ACME supplier's attributes are preferred
(c) if ACME cannot provide an attribute, the TOMA supplier's value must be used
The result should look like this:
POI | attr1 | attr2
1   | A     | B
2   | C     | D
3   | A     | B
4   | C     | D
5   | C     | B
6   | null  | D
Published by: user8914294 on December 21, 2009 11:58
Published by: user8914294 on December 21, 2009 11:59

Hello
Welcome to the forum!
Here's one way:
SELECT poi
     , MAX (attr1) KEEP (DENSE_RANK FIRST
            ORDER BY CASE WHEN attr1 IS NOT NULL THEN 1 END
                   , CASE supplier WHEN 'ACME' THEN 1 WHEN 'TOMA' THEN 2 END
           ) AS attr_1
     , MAX (attr2) KEEP (DENSE_RANK FIRST
            ORDER BY CASE WHEN attr2 IS NOT NULL THEN 1 END
                   , CASE supplier WHEN 'ACME' THEN 1 WHEN 'TOMA' THEN 2 END
           ) AS attr_2
FROM my_tab
GROUP BY poi
ORDER BY poi
;
It doesn't assume that there are only two suppliers, or that the combination (poi, supplier) is unique, or that the order of preference for suppliers matches alphabetical order.
This is an example of a Top-N query, where you want the top N elements (N = 1 in this case) from a sorted list. The problem is how to order the values so that the most desirable ones come to the top of the list.
There are two things that make one value more desirable than another:
(1) non-NULL values are better than NULLs. After that:
(2) rows where supplier = 'ACME' are more desirable than rows where supplier = 'TOMA'.
In this case, both of these require CASE expressions (or something similar, such as NVL2) to sort on.

Thanks for posting the sample data in a useful format. That helps a lot.
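Not part of the original answer: since this data happens to have exactly two suppliers, the same preference rules can also be written with plain conditional aggregation. Unlike the KEEP version, this hardcodes the two supplier names; a minimal sketch against the same my_tab:

```sql
-- per attribute: take ACME's value if it is non-NULL, else fall back to TOMA's
select poi,
       coalesce (max (case when supplier = 'ACME' then attr1 end),
                 max (case when supplier = 'TOMA' then attr1 end)) attr1,
       coalesce (max (case when supplier = 'ACME' then attr2 end),
                 max (case when supplier = 'TOMA' then attr2 end)) attr2
  from my_tab
 group by poi
 order by poi;
```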
-
Write a SQL query to turn rows into columns
Hi all,
I need help in writing a SQL query that turns rows into columns; let me give you an example...
drop table activity;
CREATE TABLE activity
(
  project_wid NUMBER(22,0) NOT NULL,
  project_no  VARCHAR2(150 CHAR),
  name        VARCHAR2(800 CHAR)
);
Insert into ACTIVITY (PROJECT_WID, PROJECT_NO, NAME) values (1683691, '10007', '12-121');
Insert into ACTIVITY (PROJECT_WID, PROJECT_NO, NAME) values (1684994, '10008', '12-122');
Insert into ACTIVITY (PROJECT_WID, PROJECT_NO, NAME) values (1686296, '10009', '12-123');
Insert into ACTIVITY (PROJECT_WID, PROJECT_NO, NAME) values (2225222, '9040', '12-124');
drop table lonet;
CREATE TABLE lonet
(
  name       VARCHAR2(150 CHAR),
  root       NUMBER,
  entryvalue VARCHAR2(150 CHAR)
);
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('GAC', 1683691, 'LDE');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('NAM', 1683691, 'LME');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('BAG', 1683691, 'ICE');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('PAP', 1683691, 'IKE');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('NAM', 1686291, 'QTY');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('PAP', 1686291, 'MAX');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('GAC', 1684994, 'MTE');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('PAP', 1684994, 'MAC');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('FMT', 1684994, 'NICE');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('FMR', 1684994, 'RAY');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('BAG', 1686296, 'CAQ');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('PAP', 1686296, 'QAQ');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('VANESSA', 1686296, 'THEW');
INSERT INTO LONET (NAME, ROOT, ENTRYVALUE) VALUES ('ANDR', 1686296, 'REYL');
commit;
Link: activity.project_wid = lonet.root
The output should look like:

PROJECT_WID PROJECT_NO NAME   GAC  NAM  BAG  RAC
1683691     10007      12-121 LDE  LME  LCE  LKE
1684994     10008      12-122 MTE  null null MAC
1686296     10009      12-123 null null CAQ  QAQ
2225222     9040       12-124 null null null null

I am running into a few problems:
1. I don't know how we can simply convert the rows into columns
2. for root = 1683691 there are duplicate NAM and RAC entries in the lonet table... ideally that data should not be there, but since it is, we can take a MAX so that a single value is returned
3. there are unwanted names that should be ignored
Again, my thought process is to join activity to 4 aliases of the lonet table.
Asking for your help with this.
Thank you
Hello
This is called pivoting.
Here's a way to do it:
WITH relevant_data AS
(
  SELECT a.project_wid, a.project_no, a.name
       , l.name AS lonet_name, l.entryvalue
  FROM activity a
  LEFT OUTER JOIN lonet l ON l.root = a.project_wid
)
SELECT *
FROM relevant_data
PIVOT ( MAX (entryvalue)
        FOR lonet_name IN ( 'GAC' AS gac
                          , 'NAM' AS nam
                          , 'BAG' AS bag
                          , 'RAC' AS rac
                          )
      )
ORDER BY project_wid
;
Output:
PROJECT_WID PROJECT_NO NAME       GAC        NAM        BAG        RAC
----------- ---------- ---------- ---------- ---------- ---------- ----------
    1683691 10007      12-121     LDE        LME        LCE        LKE
    1684994 10008      12-122     MTE                              MAC
    1686296 10009      12-123                           CAQ        QAQ
    2225222 9040       12-124
To learn more about pivoting, see the Forum FAQ: Re: 4. How can I convert rows to columns?
Thanks for posting the CREATE TABLE and INSERT statements; that's very helpful!
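As a side note (not from the original answer): on versions before 11.1, where the PIVOT clause is not available, the same result can be produced with conditional aggregation; a sketch against the same tables:

```sql
-- pre-11g pivot: one MAX(DECODE(...)) per wanted column
SELECT a.project_wid, a.project_no, a.name
     , MAX (DECODE (l.name, 'GAC', l.entryvalue)) AS gac
     , MAX (DECODE (l.name, 'NAM', l.entryvalue)) AS nam
     , MAX (DECODE (l.name, 'BAG', l.entryvalue)) AS bag
     , MAX (DECODE (l.name, 'RAC', l.entryvalue)) AS rac
FROM activity a
LEFT OUTER JOIN lonet l ON l.root = a.project_wid
GROUP BY a.project_wid, a.project_no, a.name
ORDER BY a.project_wid;
```

Names that are not in the DECODE list (the "unwanted" ones) simply never match, so they are ignored automatically.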
-
Hi all..
It's SOA 11.1.1.7... I created the schemas and storage via RCU, but when I try to configure the JDBC component schemas it throws this error:
CFGFWK-60850: Test failed!
CFGFWK-60853: A connection to the database was established, but no rows were returned from the test SQL query.
I created the 3 RCU users and all were created successfully, with no errors... yet only one of the three shows up in the SOA configuration tree...
No idea why it is like that...
Thank you
Aerts
I solved this by installing EMP 11.1.2.2 with RCU and SOA 11.1.1.6...
-
SQL query returning no rows vs. multiple rows
Hello
I am trying to get the result of a simple sql query,
If no rows are returned, it should have 'NO ROWS' in the result set.
Otherwise, all the rows containing data are needed.
Let me know how this can be achieved. The query below works for the latter case and not the first, as shown below.
OUTPUT
(case 1) when we use B_ID = 123456:

IDS
-----
'NO ROWS'

(case 2) when we use B_ID = 12345:

IDS
-----
11112345
22212345
create table TEMP_AAA
( A_ID VARCHAR2(10),
  B_ID VARCHAR2(10)
);
INSERT INTO TEMP_AAA (A_ID, B_ID) VALUES ('111', '12345');
INSERT INTO TEMP_AAA (A_ID, B_ID) VALUES ('222', '12345');
INSERT INTO TEMP_AAA (A_ID, B_ID) VALUES ('333', '12000');
INSERT INTO TEMP_AAA (A_ID, B_ID) VALUES ('444', '10000');
WITH MATCH_ROWS AS
( SELECT A_ID || B_ID IDS FROM TEMP_AAA WHERE B_ID = 12345
),
MATCH_ROW_COUNT AS
( SELECT COUNT(1) AS COUNTS FROM MATCH_ROWS
),
PROCESSED_ROWS AS
( SELECT CASE
           WHEN COUNTS = 0
           THEN (SELECT NVL ((SELECT IDS FROM TEMP_AAA WHERE B_ID = 12345), 'NO ROWS') IDS FROM DUAL)
           ELSE MATCH_ROWS.IDS
         END IDS
  FROM MATCH_ROWS, MATCH_ROW_COUNT
)
SELECT * FROM PROCESSED_ROWS;
Hello
I think you want to put this on a report or something; it would be easier to do that logic there. But if you want, it is possible in SQL.
Like this:

with temp_aaa as
     (select '111' a_id, '12345' b_id from dual union all
      select '222', '12345' from dual union all
      select '333', '12000' from dual union all
      select '444', '10000' from dual
     ),
     wanted_rows as
     (select '123456' b_id from dual)
select temp_aaa.a_id || temp_aaa.b_id ids
  from temp_aaa, wanted_rows
 where temp_aaa.b_id = wanted_rows.b_id
union all
select 'NO ROWS'
  from dual
 where (select count(*)
          from temp_aaa, wanted_rows
         where temp_aaa.b_id = wanted_rows.b_id) = 0
In addition, you are mixing varchar and number. Always use the same type, or convert explicitly (TO_NUMBER or TO_CHAR):

WITH MATCH_ROWS AS
( SELECT A_ID || B_ID IDS FROM TEMP_AAA WHERE B_ID = 12345   -- B_ID is varchar2(10), 12345 is a number
), ...

And again in:

THEN (SELECT NVL ((SELECT IDS FROM TEMP_AAA WHERE B_ID = 12345), 'NO ROWS') IDS FROM DUAL)   -- B_ID is varchar2(10), 12345 is a number
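A minimal sketch of the explicit version being suggested here: compare the VARCHAR2 column against a string literal, so no implicit TO_NUMBER is applied to every row (and no ORA-01722 risk on non-numeric data):

```sql
-- string-to-string comparison; the column keeps its declared type
SELECT a_id || b_id AS ids
FROM temp_aaa
WHERE b_id = '12345';
```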
Kind regards
Peter
-
Performance problem with a SQL query that does not use the primary key index
Hello!
I have a performance problem with a single SQL query (Oracle 10g).
I could work around the problem by using an INDEX hint, but I would like to know WHY this is happening.
*Tables*

create table Job (
  id   number(5) not null,
  name varchar2(100),
  constraint Job_PK primary key (id)
)
/
-- Record count: 298

create table Comp (
  id   integer not null,
  name varchar2(100),
  constraint Comp_PK primary key (id)
)
/
-- Record count: 193

-- m:n relation
create table JobComp (
  id      integer not null,
  id_job  integer not null,
  id_comp integer not null,
  constraint JobComp_PK primary key (id),
  constraint JobComp_UK unique (id_job, id_comp),
  constraint JobComp_FK_Job  foreign key (id_job)  references Job (id),
  constraint JobComp_FK_Comp foreign key (id_comp) references Comp (id)
)
/
create index JobComp_IX_Comp on JobComp (id_comp)
/
create index JobComp_IX_Job on JobComp (id_job)
/
-- Record count: 6431
*The query*

When I run this query, the execution plan shows index usage (JobComp_PK and JobComp_IX_Comp).
No problem.

Select JobComp.*
  from JobComp
  join Job
    on Job.id = JobComp.id_job
 where JobComp.id_comp = 134
/
-- runs in 0.20 sec
But when I add the 'name' field of the Job table, the plan uses a full table scan on the Job table:

Select JobComp.*, Job.name
  from JobComp
  join Job
    on Job.id = JobComp.id_job
 where JobComp.id_comp = 134
/
-- runs in 2.70 sec
With the index hint:

Select /*+ INDEX (Job Job_PK) */
       JobComp.*, Job.name
  from JobComp
  join Job
    on Job.id = JobComp.id_job
 where JobComp.id_comp = 134
/
-- runs in 0.20 sec
*Question*

Is this behavior correct?
PS: I tried to recompute the statistics, but nothing changed:

analyze table Job compute statistics;
/
alter index Job_PK rebuild compute statistics;
/
begin
  dbms_utility.analyze_schema (sys_context ('userenv', 'current_schema'), 'COMPUTE');
end;
/
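An aside not from the original thread: on 10g the supported way to gather optimizer statistics is DBMS_STATS rather than ANALYZE; a minimal sketch, assuming the table lives in the current schema:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,     -- current schema
    tabname => 'JOB',
    cascade => TRUE);    -- also gather statistics on the table's indexes
END;
/
```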
Gustavo Ehrhardt

Gus.EHR wrote:
Hello.
I'm sorry for the unformatted plan.
The execution times of the queries "without the name field" and "with the name field plus the hint" are equal.
There is no caching problem; I obtained the plans for the different queries in sequence and repeated the runs. The result is always the same.

I don't think there is any problem with Oracle switching from a NESTED LOOP to a HASH JOIN when you include the name field; this should be the expected behavior. But it seems that your JOB table has a degree of parallelism set on it, which is causing the query to run in parallel (since the JOB table is now accessed with a full table scan instead of the earlier indexed access). It could be that the parallel execution is contributing to the extra runtime.
(a) Do you know why the degree of parallelism on the JOB table was set? Do you need it?
Can you see if the following query provides a better response time?
select /*+ NOPARALLEL(JOB) */ JobComp.*, Job.Name
  from JobComp
  join Job
    on Job.id = JobComp.id_job
 where JobComp.id_comp = 134
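Not from the original thread, but a quick way to check the parallelism point above; a sketch assuming the table is in your own schema:

```sql
-- a DEGREE other than 1 means the optimizer may cost full scans
-- cheaply for parallel execution:
SELECT table_name, degree
FROM   user_tables
WHERE  table_name = 'JOB';

-- remove the table-level parallelism if it is not needed:
ALTER TABLE job NOPARALLEL;
```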
-
SQL query to find the time intervals between different transaction rows
(See the two images in the attachment to get a clear picture of the data and understand the question correctly.)
I have a set of data like this in one of my tables. (This is a simplified representation of the original data.)
Reference: table1.jpg

Id      | Type  | Value | Start_date | End_date
--------+-------+-------+------------+---------
ZTR0098 | ALLOW | 0     | 1-JUN      | 2-JUN
ZTR0098 | ADTAX | 0     | 1-JUN      | 2-JUN
ZTR0098 | MXTAX | 0     | 1-JUN      | 9-JUN
ZTR0098 | ALLOW | 4     | 3-JUN      | 15-JUN
ZTR0098 | ADTAX | 44.00 | 3-JUN      | 17-JUN
ZTR0098 | MXTAX | 2     | 10-JUN     | 17-JUN
ZTR0098 | ALLOW | 5     | 16-JUN     | 20-JUN
ZTR0098 | ADTAX | 55.34 | 18-JUN     | 22-JUN
ZTR0098 | MXTAX | 1     | 18-JUN     | 25-JUN
ZTR0098 | MXTAX | 6     | 26-JUN     | 31-AUG
ZTR0098 | ADTAX | 20.09 | 23-JUN     | 23-JUL
ZTR0098 | ALLOW | 8     | 21-JUN     | 31-AUG
ZTR0098 | ADTAX | 45    | 24-JUL     | 31-AUG
Each row has a type and an id attached to it. The id belongs to another parent table. The value of each type is given, and the validity of each value is tracked by the start_date and end_date fields.
All values start from 1-JUN and expire on 31-AUG. Now my requirement is to produce a report that gives three columns for the three different types (ALLOW, ADTAX and MXTAX), with one row per unique combination of values and its effective time interval. The result I want is below.
Reference: table2.jpg

Id      | ALLOW | ADTAX | MXTAX | Start_date | End_date
--------+-------+-------+-------+------------+---------
ZTR0098 | 0     | 0     | 0     | 1-JUN      | 2-JUN
ZTR0098 | 4     | 44.00 | 0     | 3-JUN      | 9-JUN
ZTR0098 | 4     | 44.00 | 2     | 10-JUN     | 15-JUN
ZTR0098 | 5     | 44.00 | 2     | 16-JUN     | 17-JUN
ZTR0098 | 5     | 55.34 | 1     | 18-JUN     | 20-JUN
ZTR0098 | 8     | 55.34 | 1     | 21-JUN     | 22-JUN
ZTR0098 | 8     | 20.09 | 1     | 23-JUN     | 25-JUN
ZTR0098 | 8     | 20.09 | 6     | 26-JUN     | 23-JUL
ZTR0098 | 8     | 45    | 6     | 24-JUL     | 31-AUG
As you can see, there are no duplicate rows for a combination of (ALLOW, ADTAX and MXTAX) with their respective effective dates, which is what produces the table above. The first step is to convert rows to columns, which is fairly obvious to do by grouping on the start_date and end_date columns, but the real deal is finding the time interval during which the combination of values (ALLOW, ADTAX and MXTAX) remained constant.
I wrote the query below using GROUP BY.
Select id,
       nvl (max (decode (type, 'ALLOW', value)), 0) as allow,
       nvl (max (decode (type, 'ADTAX', value)), 0) as adtax,
       nvl (max (decode (type, 'MXTAX', value)), 0) as mxtax,
       start_date,
       end_date
  from my_table
 group by start_date, end_date, id
 order by start_date, end_date
The results it gives are like this:
Reference: table3.jpg

Id      | ALLOW | ADTAX | MXTAX | Start_date | End_date
--------+-------+-------+-------+------------+---------
ZTR0098 | 0     | 0     | 0     | 1-JUN      | 2-JUN
ZTR0098 | 0     | 0     | 2     | 1-JUN      | 9-JUN
ZTR0098 | 4     | 0     | 0     | 3-JUN      | 15-JUN
ZTR0098 | 0     | 44.00 | 0     | 3-JUN      | 17-JUN
ZTR0098 | 0     | 0     | 2     | 10-JUN     | 17-JUN
ZTR0098 | 5     | 0     | 0     | 16-JUN     | 20-JUN
ZTR0098 | 0     | 55.34 | 0     | 18-JUN     | 22-JUN
... and so on.

But I'm not able to determine the constant time intervals using this query.
with
table1 as
(select 'ZTR0098' id, 'ALLOW' type, 0 val, to_date('1-JUN','dd-MON') start_date, to_date('2-JUN','dd-MON') end_date from dual union all
 select 'ZTR0098', 'ADTAX', 0, to_date('1-JUN','dd-MON'), to_date('2-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'MXTAX', 0, to_date('1-JUN','dd-MON'), to_date('9-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'ALLOW', 4, to_date('3-JUN','dd-MON'), to_date('15-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'ADTAX', 44.00, to_date('3-JUN','dd-MON'), to_date('17-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'MXTAX', 2, to_date('10-JUN','dd-MON'), to_date('17-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'ALLOW', 5, to_date('16-JUN','dd-MON'), to_date('20-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'ADTAX', 55.34, to_date('18-JUN','dd-MON'), to_date('22-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'MXTAX', 1, to_date('18-JUN','dd-MON'), to_date('25-JUN','dd-MON') from dual union all
 select 'ZTR0098', 'MXTAX', 6, to_date('26-JUN','dd-MON'), to_date('31-AUG','dd-MON') from dual union all
 select 'ZTR0098', 'ADTAX', 20.09, to_date('23-JUN','dd-MON'), to_date('23-JUL','dd-MON') from dual union all
 select 'ZTR0098', 'ALLOW', 8, to_date('21-JUN','dd-MON'), to_date('31-AUG','dd-MON') from dual union all
 select 'ZTR0098', 'ADTAX', 45, to_date('24-JUL','dd-MON'), to_date('31-AUG','dd-MON') from dual
),
days as
(select level - 1 + to_date('1-JUN','dd-MON') dte
   from dual
 connect by level <= to_date('31-AUG','dd-MON') - to_date('1-JUN','dd-MON') + 1
)
select id, allow, adtax, mxtax, min(dte) start_date, max(dte) end_date
  from (select id, dte, max(allow) allow, max(adtax) adtax, max(mxtax) mxtax,
               row_number() over (order by dte) -
               row_number() over (partition by max(allow), max(adtax), max(mxtax) order by dte) gr
          from (select id, dte,
                       case when type = 'ALLOW' and dte between start_date and end_date then val else 0 end allow,
                       case when type = 'ADTAX' and dte between start_date and end_date then val else 0 end adtax,
                       case when type = 'MXTAX' and dte between start_date and end_date then val else 0 end mxtax
                  from table1 t,
                       days d
                 where d.dte between t.start_date and t.end_date
               )
         group by id, dte
       )
 group by id, gr, allow, adtax, mxtax
 order by id, gr
ID      ALLOW ADTAX MXTAX START_DATE END_DATE
ZTR0098 0     0     0     01/06/2015 02/06/2015
ZTR0098 4     44    0     03/06/2015 09/06/2015
ZTR0098 4     44    2     10/06/2015 15/06/2015
ZTR0098 5     44    2     16/06/2015 17/06/2015
ZTR0098 5     55.34 1     18/06/2015 20/06/2015
ZTR0098 8     55.34 1     21/06/2015 22/06/2015
ZTR0098 8     20.09 1     23/06/2015 25/06/2015
ZTR0098 8     20.09 6     26/06/2015 23/07/2015
ZTR0098 8     45    6     24/07/2015 31/08/2015

Regards,
Etbin
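The gr expression in the answer above is the row_number-difference ("Tabibitosan") gaps-and-islands trick: consecutive dates with the same value combination get the same difference between a global row number and a per-combination row number. A minimal sketch of just that idea, against a hypothetical table t(x, val):

```sql
-- rows consecutive in x with the same val share one grp value,
-- so each "island" collapses to a single row
select val, min(x) start_x, max(x) end_x
  from (select x, val,
               row_number() over (order by x)
             - row_number() over (partition by val order by x) grp
          from t)
 group by val, grp
 order by start_x;
```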
-
SQL query of a row-based OWB map
If I trace a session that runs a row-based OWB map, will the trace file contain the actual SQL query?
My problem is that when I execute this row-based OWB map, it throws a CursorFetchMapTerminationRTV20007 error, BUT (strangely) when I take the intermediate SQL insert query and run it by hand, it works fine (and in a very short time too).
The execution status = COMPLETE
message text = ORA-06502: PL/SQL: numeric or value error: character string buffer too small
message text = CursorFetchMapTerminationRTV20007
No. of task errors = 0
No. of task warnings = 2
No. of errors = 1
Since this OWB map (truncate/insert) is row-based, I can't get the generated SQL from the PL/SQL package that OWB generates, so I was wondering: if I trace the session and check the trace file, maybe I'll be able to see the exact SQL query that was generated. But I wanted to confirm that first.
Yes, the actual SQL run in the session will be in the trace file.
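Not part of the original answer, but one common way to enable such a trace for another session; the session id and serial# below are placeholders you would look up in V$SESSION first:

```sql
-- enable extended SQL trace (with waits and bind values) for a session:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);

-- ... run the OWB map, then switch the trace off again:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
```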
-
Performance issue with an Oracle SQL query
Dears,
Good evening.
I am new to Oracle SQL tuning. I have a SQL query which takes too long to run in production.
Each table in the query contains 1.2 million records, and the DBA suggested using Oracle hints. I don't have good knowledge of this subject and am having difficulties applying hints to improve the performance. In production the Informatica jobs are failing, stuck on this query performance problem.
I'm asking this forum urgently to help me solve this problem using hints. Kindly help me.
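Before adding any hints, it is worth seeing what the optimizer is actually doing with each statement; a minimal sketch (not from the thread) using the standard plan-display API, with a placeholder where the slow query goes:

```sql
-- capture the optimizer's plan for the slow statement...
EXPLAIN PLAN FOR
SELECT /* ...paste the slow query here... */ * FROM dual;

-- ...and display it, including access paths and join methods:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```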
SELECT
  CASE.ID,
  CASE.DTYPE,
  CASE.VERSION,
  CASE.EXTERNAL_REF,
  CASE.CREATION_TS,
  RQ.TYPE
FROM
  PAS_CASE CASE,
  PAS_REQUEST RQ,
  PAS_CONTEXT CN
WHERE rq.case_id = case.id
  AND rq.id = cn.request_id
  AND rq.DTYPE = 'MAINREQUEST'
  AND rq.TYPE IN
  (
    'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
    'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
  )
  AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
  AND CAST (cn.CREATION_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
  AND CAST (cn.CREATION_TS AS DATE) <  TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')

-- 2nd query
SELECT
  RA.ID,
  RA.VERSION,
  RA.REQUEST_ID,
  RA.NAME,
  RA.VALUE,
  RA.LOB_ID,
  RA.DTYPE,
  RA.CREATION_TS,
  TASK.MAIN_REQ_TYPE
FROM PAS_REQUESTATTRIBUTE RA
JOIN
(
  SELECT tsk.ID, tsk.TYPE, main_req.main_req_type
  FROM PAS_REQUEST tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.TYPE main_req_type
    FROM PAS_REQUEST rq,
         PAS_CONTEXT cn
    WHERE rq.id = cn.request_id
      AND rq.DTYPE = 'MAINREQUEST'
      AND rq.TYPE IN
      (
        'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
        'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
      )
      AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
      AND CAST (cn.END_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND CAST (cn.END_TS AS DATE) <= TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
      AND cn.ID BETWEEN 20141800000000000 AND 20141999999999999
      AND rq.ID BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.DTYPE IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON RA.REQUEST_ID = task.ID
AND RA.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
UNION
SELECT
  RA.ID,
  RA.VERSION,
  RA.REQUEST_ID,
  RA.NAME,
  RA.VALUE,
  RA.LOB_ID,
  RA.DTYPE,
  RA.CREATION_TS,
  MAIN_REQ.TYPE
FROM PAS_REQUESTATTRIBUTE RA
JOIN
(
  SELECT rq.id, rq.type
  FROM PAS_REQUEST rq,
       PAS_CONTEXT cn
  WHERE rq.id = cn.request_id
    AND rq.DTYPE = 'MAINREQUEST'
    AND rq.TYPE IN
    (
      'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
      'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
    )
    AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
    AND CAST (cn.END_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    AND CAST (cn.END_TS AS DATE) <= TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
    AND cn.ID BETWEEN 20141800000000000 AND 20141999999999999
    AND rq.ID BETWEEN 20141800000000000 AND 20141999999999999
) main_req
ON RA.REQUEST_ID = main_req.ID

-- 3rd query
SELECT
  RB.ID,
  RB.DTYPE,
  RB.VERSION,
  RB.TYPE,
  RB.CREATION_TS,
  RB.TASK_ID,
  RB.COLOR,
  RB.GLOBAL_RESULT,
  TASK.MAIN_REQ_TYPE
FROM PAS_RESULTBLOCK RB
JOIN
(
  SELECT tsk.ID, tsk.TYPE, main_req.main_req_type
  FROM PAS_REQUEST tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.TYPE main_req_type
    FROM PAS_REQUEST rq,
         PAS_CONTEXT cn
    WHERE rq.id = cn.request_id
      AND rq.DTYPE = 'MAINREQUEST'
      AND rq.TYPE IN
      (
        'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
        'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
      )
      AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
      AND CAST (cn.END_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND CAST (cn.END_TS AS DATE) <= TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
      AND cn.ID BETWEEN 20141800000000000 AND 20141999999999999
      AND rq.ID BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.DTYPE IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON rb.task_id = task.ID
AND rb.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
AND RB.TYPE is not null
AND RB.TYPE IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
UNION
SELECT
  RB.ID,
  RB.DTYPE,
  RB.VERSION,
  RB.TYPE,
  RB.CREATION_TS,
  RB.TASK_ID,
  RB.COLOR,
  RB.GLOBAL_RESULT,
  TASK.MAIN_REQ_TYPE
FROM PAS_RESULTBLOCK RB
JOIN
(
  SELECT tsk.ID, tsk.TYPE, main_req.main_req_type
  FROM PAS_REQUEST tsk
  JOIN
  (
    SELECT cn.id AS context_id, rq.TYPE main_req_type
    FROM PAS_REQUEST rq,
         PAS_CONTEXT cn
    WHERE rq.id = cn.request_id
      AND rq.DTYPE = 'MAINREQUEST'
      AND rq.TYPE IN
      (
        'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
        'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
      )
      AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
      AND CAST (cn.END_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
      AND CAST (cn.END_TS AS DATE) <= TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
      AND cn.ID BETWEEN 20141800000000000 AND 20141999999999999
      AND rq.ID BETWEEN 20141800000000000 AND 20141999999999999
  ) main_req
  ON tsk.parent_context_id = main_req.context_id
  AND tsk.DTYPE IN ('ANALYSIS_TASK', 'DECISION_TASK')
  AND tsk.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
) task
ON rb.task_id = task.ID
AND rb.ID BETWEEN $$LOW_ID1 AND $$HIGH_ID1
AND RB.TYPE is null
-- 4th query
SELECT
  RI.ID,
  RI.DTYPE,
  RI.VERSION,
  RI.RESULTBLOCK_ID,
  RI.NAME,
  RI.VALUE,
  RI.UNIT,
  RI.COLOR,
  RI.LOB_ID,
  RI.CREATION_TS,
  RI.SEQUENCE,
  RI.DETAILLEVEL,
  RES_BLK.MAIN_REQ_TYPE
FROM PAS_RESULTITEM RI
JOIN
(
  SELECT rb.ID, rb.TYPE rb_type, task.TYPE AS task_type, task.main_req_type
  FROM pas_resultblock rb
  JOIN
  (
    SELECT tsk.ID, tsk.TYPE, main_req.main_req_type
    FROM PAS_REQUEST tsk
    JOIN
    (
      SELECT cn.id AS context_id, rq.TYPE main_req_type
      FROM PAS_REQUEST rq,
           PAS_CONTEXT cn
      WHERE rq.id = cn.request_id
        AND rq.DTYPE = 'MAINREQUEST'
        AND rq.TYPE IN
        (
          'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
          'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
        )
        AND cn.STATUS IN ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
        AND CAST (cn.END_TS AS DATE) >= TO_DATE ('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
        AND CAST (cn.END_TS AS DATE) <= TO_DATE ('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
        AND cn.ID BETWEEN 20141800000000000 AND 20141999999999999
        AND rq.ID BETWEEN 20141800000000000 AND 20141999999999999
    ) main_req
    ON tsk.parent_context_id = main_req.context_id
    AND tsk.DTYPE IN ('ANALYSIS_TASK', 'DECISION_TASK')
    AND tsk.ID BETWEEN 20141800000000000 AND 20141999999999999
  ) task
  ON rb.task_id = task.ID
  AND rb.ID BETWEEN 20141800000000000 AND 20141999999999999
  AND RB.TYPE IN
  (
    'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
    'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
  )
) res_blk
ON ri.resultblock_id = res_blk.ID
WHERE RI.NAME IN
(
  'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
  'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
AND RI.ID BETWEEN 20141800000000000 AND 20141999999999999
AND RI.NAME is not null
UNIONSelect
RI.ID,
UII DTYPE,
UII Version
UII RESULTBLOCK_ID,
RI.NAME,
UII VALUE,
UII UNIT,
UII Color
UII LOB_ID,
UII CREATION_TS,
UII SEQUENCE,
UII DETAILLEVEL,
RES_BLK. MAIN_REQ_TYPE
of pas_resultitem ri
Join
(
Select
rb.ID, rb. TYPE rb_type, task. TYPE as the task_type, task. pas_resultblock rb main_req_type
Join
(
Select tsk.ID, tsk. TYPE, main_req_type main_req.
of tsk PAS_REQUEST
Join
(
Select cn.id as context_id, rq. TYPE main_req_type
of PAS_REQUEST rq
Cn PAS_CONTEXT
where rq.id = cn.request_id
and rq. DTYPE = "MAINREQUEST."
and rq. TYPE in
(
'bgc.dar.vap.resolution.advice ',' bgc.dar.e2etest.execution ',' bgc.tbf.repair.vap', ' bgc.dar.e2etest ',.
'bgc.dar.e2etest.intermediate.execution ',' bgc.cbm ',' bgc.dar.e2etest.preparation', ' bgc.chc.polling '.
)
and cn. STATUS ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and CAST (cn. END_TS AS DATE) > = TO_DATE ('2014-05-06 00:00:00 ',' ' YYYY-MM-DD HH24:MI:SS)
and CAST (cn. END_TS AS DATE) < = TO_DATE ('2014-05-06 23:59:59 ',' ' YYYY-MM-DD HH24:MI:SS)
and cn.ID between the 20141999999999999 AND 20141800000000000
and rq.ID between the 20141999999999999 AND 20141800000000000
) main_req
On tsk.parent_context_id = main_req.context_id
and tsk. DTYPE in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.ID between the 20141999999999999 AND 20141800000000000
) task
We rb.task_id = task.ID
and rb.ID between the 20141999999999999 AND 20141800000000000
) res_blk
On ri.resultblock_id = res_blk.ID
where RI.ID between 20141800000000000 20141999999999999 AND
and RI.NAME is null-REQUEST OF 5HT
SELECT
tsk.ID,
tsk.VERSION,
tsk.DTYPE,
tsk.CASE_ID,
tsk.TYPE,
tsk.CORRELATION_ID,
tsk.INITIATOR,
tsk.EXECUTOR,
tsk.CATEGORY,
tsk.PARENT_CONTEXT_ID,
tsk.CREATION_TS,
main_req.MAIN_REQ_TYPE
FROM PAS_REQUEST tsk
JOIN
(
SELECT cn.id as context_id, rq.TYPE as main_req_type
FROM PAS_REQUEST rq,
PAS_CONTEXT cn
WHERE rq.id = cn.request_id
and rq.DTYPE = 'MAINREQUEST'
and rq.TYPE in
(
'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.STATUS in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and CAST(cn.END_TS AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and CAST(cn.END_TS AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
and cn.ID between 20141800000000000 AND 20141999999999999
and rq.ID between 20141800000000000 AND 20141999999999999
) main_req
ON tsk.parent_context_id = main_req.context_id
and tsk.DTYPE in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.ID between $$LOW_ID1 AND $$HIGH_ID1
UNION
SELECT
mreq.ID,
mreq.VERSION,
mreq.DTYPE,
mreq.CASE_ID,
mreq.TYPE,
mreq.CORRELATION_ID,
mreq.INITIATOR,
mreq.EXECUTOR,
mreq.CATEGORY,
mreq.PARENT_CONTEXT_ID,
mreq.CREATION_TS,
mreq.TYPE
FROM PAS_REQUEST mreq,
PAS_CONTEXT cn
WHERE mreq.id = cn.request_id
and mreq.DTYPE = 'MAINREQUEST'
and mreq.TYPE in
(
'bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm', 'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.STATUS in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and CAST(cn.END_TS AS DATE) >= TO_DATE('2014-05-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and CAST(cn.END_TS AS DATE) <= TO_DATE('2014-05-06 23:59:59', 'YYYY-MM-DD HH24:MI:SS')
and cn.ID between 20141800000000000 AND 20141999999999999
and mreq.ID between 20141800000000000 AND 20141999999999999
Hints may or may not be necessary (depending on the cardinalities involved).
-- 1st query
select pc.id, pc.dtype, pc.version, pc.external_ref, pc.creation_ts, rq.type
from pas_case pc,
pas_request rq,
pas_context cn
where rq.case_id = pc.id
and rq.id = cn.request_id
and rq.dtype = 'MAINREQUEST'
and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.creation_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.creation_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff6')
-- 2nd query
with
main_request as
(select /*+ materialize */
cn.id as context_id, rq.type as main_req_type
from pas_request rq
join
pas_context cn
on rq.id = cn.request_id
and rq.dtype = 'MAINREQUEST'
and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.chc.polling',
'bgc.tbf.repair.vap', 'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution',
'bgc.cbm', 'bgc.dar.e2etest.preparation'
)
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff6')
and cn.id between 20141800000000000 and 20141999999999999
and rq.id between 20141800000000000 and 20141999999999999
)
select ra.id, ra.version, ra.request_id, ra.name, ra.value, ra.lob_id, ra.dtype, ra.creation_ts, task.main_req_type
from pas_requestattribute ra
join
(select tsk.id, tsk.type, main_req.main_req_type
from pas_request tsk
join
main_request main_req
on tsk.parent_context_id = main_req.context_id
and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.id between $$low_id1 and $$high_id1
) task
on ra.request_id = task.id
and ra.id between $$low_id1 and $$high_id1
union
select ra.id, ra.version, ra.request_id, ra.name, ra.value, ra.lob_id, ra.dtype, ra.creation_ts, main_req.main_req_type
from pas_requestattribute ra
join
main_request main_req
on ra.request_id = main_req.context_id
-- 3rd query
select rb.id, rb.dtype, rb.version, rb.type, rb.creation_ts, rb.task_id, rb.color, rb.global_result, task.main_req_type
from pas_resultblock rb
join
(select tsk.id, tsk.type, main_req.main_req_type
from pas_request tsk
join
(select cn.id as context_id, rq.type as main_req_type
from pas_request rq
join
pas_context cn
on rq.id = cn.request_id
and rq.dtype = 'MAINREQUEST'
and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution',
'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff6')
and cn.id between 20141800000000000 and 20141999999999999
and rq.id between 20141800000000000 and 20141999999999999
) main_req
on tsk.parent_context_id = main_req.context_id
and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.id between $$low_id1 and $$high_id1
) task
on rb.task_id = task.id
and rb.id between $$low_id1 and $$high_id1
where rb.type is null
or (rb.type is not null
and rb.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
)
-- 4th query
select ri.id, ri.dtype, ri.version, ri.resultblock_id, ri.name, ri.value, ri.unit, ri.color, ri.lob_id,
ri.creation_ts, ri.sequence, ri.detaillevel, res_blk.main_req_type
from pas_resultitem ri
join
(select rb.id, rb.type as rb_type, task.type as task_type, task.main_req_type
from pas_resultblock rb
join
(select tsk.id, tsk.type, main_req.main_req_type
from pas_request tsk
join
(select cn.id as context_id, rq.type as main_req_type
from pas_request rq
join
pas_context cn
on rq.id = cn.request_id
and rq.dtype = 'MAINREQUEST'
and rq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution',
'bgc.tbf.repair.vap', 'bgc.dar.e2etest',
'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff6')
and cn.id between 20141800000000000 and 20141999999999999
and rq.id between 20141800000000000 and 20141999999999999
) main_req
on tsk.parent_context_id = main_req.context_id
and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.id between 20141800000000000 and 20141999999999999
) task
on rb.task_id = task.id
and rb.id between 20141800000000000 and 20141999999999999
and rb.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
) res_blk
on ri.resultblock_id = res_blk.id
and ri.id between 20141800000000000 and 20141999999999999
where ri.name is null
or (ri.name is not null
and ri.name in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
)
-- 5th query
with
main_request as
(select /*+ materialize */
mreq.id, mreq.version, mreq.dtype, mreq.case_id, mreq.type, mreq.correlation_id, mreq.initiator, mreq.executor,
mreq.category, mreq.parent_context_id, mreq.creation_ts, mreq.type as main_req_type
from pas_request mreq
join
pas_context cn
on mreq.id = cn.request_id
and mreq.dtype = 'MAINREQUEST'
and mreq.type in ('bgc.dar.vap.resolution.advice', 'bgc.dar.e2etest.execution', 'bgc.tbf.repair.vap',
'bgc.dar.e2etest', 'bgc.dar.e2etest.intermediate.execution', 'bgc.cbm',
'bgc.dar.e2etest.preparation', 'bgc.chc.polling'
)
and cn.status in ('FINISHED', 'CANCEL', 'TIMEOUT', 'ERROR')
and cn.end_ts >= to_timestamp('2014-05-06 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
and cn.end_ts <= to_timestamp('2014-05-06 23:59:59.999999', 'yyyy-mm-dd hh24:mi:ss.ff6')
and cn.id between 20141800000000000 and 20141999999999999
and mreq.id between 20141800000000000 and 20141999999999999
)
select tsk.id, tsk.version, tsk.dtype, tsk.case_id, tsk.type, tsk.correlation_id, tsk.initiator, tsk.executor,
tsk.category, tsk.parent_context_id, tsk.creation_ts, main_req.main_req_type
from pas_request tsk
join
main_request main_req
on tsk.parent_context_id = main_req.id
and tsk.dtype in ('ANALYSIS_TASK', 'DECISION_TASK')
and tsk.id between $$low_id1 and $$high_id1
union
select id, version, dtype, case_id, type, correlation_id, initiator, executor,
category, parent_context_id, creation_ts, main_req_type
from main_request
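One way to confirm that a factored, hinted subquery like main_request above is really evaluated only once is to look for a TEMP TABLE TRANSFORMATION step in the execution plan. A minimal sketch (table names taken from the queries above, filters trimmed for brevity; assumes DBMS_XPLAN, available since 9iR2):

```sql
-- Display the plan of a cut-down version of the rewritten 2nd query and look
-- for "TEMP TABLE TRANSFORMATION" / "SYS_TEMP_..." rows, which indicate that
-- the /*+ materialize */ WITH clause was spooled to a temp segment once.
EXPLAIN PLAN FOR
with main_request as
(select /*+ materialize */ cn.id as context_id, rq.type as main_req_type
 from   pas_request rq
 join   pas_context cn on rq.id = cn.request_id
 where  rq.dtype = 'MAINREQUEST')
select ra.id, ra.name, ra.value, mr.main_req_type
from   pas_requestattribute ra
join   main_request mr on ra.request_id = mr.context_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```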
Regards,
Etbin
-
Cannot create view object from SQL query
I'm having a lot of trouble creating a view object from a SQL query.
The query is as follows:
Select CalBruker.BRUK_ID,
CalBruker.EMAIL,
CalBruker.ETTERNAVN,
CalBruker.PASSORD,
CalBruker.DATO_OPPRETTET,
CalBruker.AKTIV,
CalBruker.FORNAVN,
CalBruker.ROLL_ROLL_ID,
CalBruker.AVDE_AVDE_ID,
CalRoller.NAVN,
CalRoller.ROLL_ID,
CalAvdelinger.NAVN AS NAVN1,
CalAvdelinger.AVDE_ID,
CalRoller.BESKRIVELSE,
CalForlag.NAVN AS NAVN2,
CalForlag.FORL_ID,
CalAvdelinger.navn,
CalForlag.navn AS BrukForlag,
CalBruker.FRIEKS_PROSENT_GRENSE,
CalBruker.LOGIN_NAVN
From CAL_BRUKER CalBruker, CAL_ROLLER CalRoller, CAL_AVDELINGER CalAvdelinger, CAL_FORLAG CalForlag
Where CalBruker.ROLL_ROLL_ID = CalRoller.ROLL_ID AND CalBruker.AVDE_AVDE_ID = CalAvdelinger.AVDE_ID AND CalAvdelinger.FORL_FORL_ID = CalForlag.FORL_ID
If I create a new view object and paste the SQL query in there, I get no automatic attribute mappings, and I can't understand how I'm supposed to map the attributes manually.
Basically, I get a view object without any attributes.
JDeveloper version 11.1.2.0
Does it help if you give alias names to your columns?
Something like:
Select CalBruker.BRUK_ID BRUK_ID,
CalBruker.EMAIL EMAIL,
CalBruker.ETTERNAVN ETTERNAVN,
... -
How to get the row number in each row of a SQL query?
Hello
I fetch some rows with a SQL query. Is it possible to get the row number in each row while the rows are read?
Like this:
RowNum data1 data2
1 abc era
2 nbh ioi
Yes.
ROWNUM
http://download.Oracle.com/docs/CD/E11882_01/server.112/e17118/pseudocolumns009.htm#SQLRF00255
select rownum, data1, data2 from yourtable;
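One caveat worth adding (yourtable, data1, data2 are the poster's example names): ROWNUM is assigned to rows as they are fetched, before any ORDER BY is applied, so to number rows of a sorted result you either order in an inline view first or use the analytic ROW_NUMBER function:

```sql
-- ROWNUM is assigned before ORDER BY, so wrap the ordered query
-- in an inline view to number the sorted rows:
select rownum, t.data1, t.data2
from (select data1, data2 from yourtable order by data1) t;

-- Equivalent using the analytic function (Oracle 8i+):
select row_number() over (order by data1) as rn, data1, data2
from yourtable;
```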
-
Updatable SQL query report with row selector - update process
I have an updatable SQL query report with row selectors.
How do I identify the row selector in an update process on the page?
I want to update some columns with a value from a select list on the page.
Using the basic syntax:
UPDATE table_name
SET Column1 = value
WHERE some_column = some_value
I would need to do:
UPDATE table_name
SET column1 =: P1_select
WHERE [row selector] = ?
Not sure how to identify [row selector] and/or validate that it is checked.
Thank you
Bob
Identify the name of the checkbox from the page source; it should be of the fXX format (f01, f02 ... f50).
Suppose it is f01:
for i in 1 .. apex_application.g_f01.count loop
UPDATE CONTRACTS
SET SIP_LOAD_FLAG = :P16_STATUS
where primary_key = apex_application.g_f01(i); -- i'th checked record's primary key
end loop;
-
How to measure the performance of a SQL query?
Hi Experts,
How do I measure the performance cost, efficiency and CPU usage of a SQL query?
What measures are available for a SQL query?
How do I identify the optimal way to write a query?
I use Oracle 9i...
It will be useful for me to write efficient queries...
Thanks and regards
PSRAM wrote:
Could you tell me how to activate the PLUSTRACE role?
First hit when you do a search on PLUSTRACE: http://forums.oracle.com/forums/search.jspa?threadID=&q=plustrace&objID=f75&dateRange=all&numResults=15&rankBy=10001
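For reference, a minimal sketch of the usual 9i-era tooling: SQL*Plus AUTOTRACE (which is what the PLUSTRACE role enables) and EXPLAIN PLAN with DBMS_XPLAN. The emp/deptno query is just a made-up example:

```sql
-- In SQL*Plus, once PLUSTRACE is granted, show the plan plus execution
-- statistics (consistent gets, physical reads, sorts) without the rows:
SET AUTOTRACE TRACEONLY
SELECT * FROM emp WHERE deptno = 10;
SET AUTOTRACE OFF

-- Or display only the optimizer's estimated plan and cost:
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```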
Kind regards
Rob. -
DA on SQL query (updatable report) items using a class
Application Express 4.2.5.00.08
Oracle 10g-11g
I have a page with a link/button; when it is clicked, a modal region appears. The modal region holds a tabular form (Type: SQL Query (updateable report)). All works very well so far.
However, I've now added something more complex.
e.g. my region is based on a SQL Query (updateable report):
SELECT id, surname, traghetti, area, d_date, start_time, end_time, shift, role FROM table WHERE id = :P10_ID;
I can also add a new row, of course.
My problem: the Shift item's display is based on the value of the Start_Time item, so I assigned a class to both items.
e.g. Start_Time (class="st_tm") and Shift (class="Shift_time"). Start_Time is based on an LOV, and Shift is set from the Start_Time item when it changes.
And now, I created the DA.
1. event: change
Selection type: jQuery Selector
jQuery Selector: .st_tm
Condition: No.
1. action: set value
Set type: Expression Javascript
The JavaScript Expression: $(this.triggeringElement).val();
The element affected: P10_X1 (it is a hidden item)
2. action: Plsql Code
Code: begin null; end;
Page items to submit: P10_X1
3. action: Plsql Code
Code: begin
if :P10_X1 between '03' and '11' then
:P10_X1 := 'EARLIES';
elsif :P10_X1 between '12' and '18' then
:P10_X1 := 'LATES';
else
:P10_X1 := 'NIGHTS';
end if;
end;
Page items to submit: P10_X1
NOW another DA
Event: change
Item(s): P10_X1
Condition: in list
Values: EARLIES, LATES, NIGHTS
Action:
Set type: Expression Javascript
The JavaScript Expression: $(this.triggeringElement).val();
Affected item:
Selection type: jQuery Selector
jQuery Selector: .Shift_time
Well, I expected all of the above to work, but I'm having a problem.
If I have more than one row in the modal region and I change the Start_Time value for one row, the Shift item changes, but it changes on all rows.
I want: if I make a change on row #2, only row #2's Shift item should change, and no other rows.
Help, please!
Kind regards
RI
I found the solution myself, thank you.
I removed all the DAs mentioned above and created a new one.
1. event: change
Selection type: jQuery Selector
jQuery Selector: .st_tm
Condition: No.
Event scope: dynamic, and fire on page load.
Action (Javascript code)
row_id = $(this.triggeringElement).attr('id').substr(4);
if ($(this.triggeringElement).val() >= 3 && $(this.triggeringElement).val() <= 11) {
$('#f11_' + row_id).val('EARLIES');
}
else if ($(this.triggeringElement).val() > 11 && $(this.triggeringElement).val() <= 18) {
$('#f11_' + row_id).val('LATES');
}
else if ($(this.triggeringElement).val() >= 19) {
$('#f11_' + row_id).val('NIGHTS');
}
I also created another DA because I have disabled report items.
Event: before page submit.
$(':disabled').removeAttr('disabled');
Finally, I created a manual DML process for the tabular form and everything works fine (insert, update, and delete).
Kind regards.
-
SQL Query - store the result for optimization?
Good day experts,
I'm looking for advice on a report. I use a lot of analytic functions to get the base data for my report, and the SQL takes about 50 minutes to finish. With these data I need to create 3 different reports, and I can't use the same SQL for each since there is a lot of aggregation (for example, grouped by product in one case and by customer in the second). For each of these different group bys I need a separate report.
So how do I create 3 reports from 1 SQL query without running the query 3 times?
The first thing that comes to mind is to store the result set in a work table and then query that table, since the base data are only about 300 rows, and then perform the different group bys.
Best regards
Igor
So how do I create 3 reports from 1 SQL query without running the query 3 times?
You already know the obvious answer - store the data 'somewhere'.
What that 'somewhere' should be depends on your needs, and you have not provided ALL of them.
MV - if the query is always the same, you might use a materialized view and do a complete refresh whenever you want new data. The data are permanent and can be queried by other sessions, but the query that produces the data is frozen in the MV definition.
GTT (global temporary table) - if a FRESH load of data AND the three reports will ALWAYS be executed by a single session, and the data are not needed afterwards, then a GTT may work. The query that loads the GTT can be different for each run, but the data will only be visible to that single session and ONLY for the duration of that session. So if something goes wrong and the session ends, the data are gone.
The first thing that comes to mind is to store the result set in a work table and then query that table, since the base data are only about 300 rows, and then perform the different group bys.
That is commonly called a 'REPORT-READY' table. Those are useful when data must be permanent and available to multiple sessions/users. Generally there is a batch job (for example a packaged procedure) that periodically refreshes/updates the data during an outage window. Or the table can have a column (for example AS_OF) that allows it to contain multiple data sets, so the refresh process leaves the existing data alone and creates a new set of data.
If your base data are about 300 rows, you could consider a report table and even use it to hold multiple data sets. The reports can then be written to query the data using the appropriate AS_OF value so they return the proper data set. You don't need an outage window since the older data are still available (but they can be removed when you no longer need them).
If you only need one set of data, you can use a partitioned work table (with only one partition) to collect the new data set, then an EXCHANGE PARTITION to 'swap in' the new data. The exchange takes only a fraction of a second and avoids an outage window. Once the swap is done, any user query will get the new data.
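As a concrete illustration of the GTT option, here is a minimal sketch; all object names (report_base, expensive_analytic_view) are made up for the example:

```sql
-- Session-private staging table; rows survive commits
-- but disappear when the session ends:
CREATE GLOBAL TEMPORARY TABLE report_base
( product_id VARCHAR2(10)
, cust_id    VARCHAR2(10)
, amount     NUMBER )
ON COMMIT PRESERVE ROWS;

-- Run the expensive analytic query once per session to load it
-- (expensive_analytic_view stands in for the 50-minute SQL):
INSERT INTO report_base
SELECT product_id, cust_id, amount FROM expensive_analytic_view;

-- The three reports then re-aggregate the ~300 cached rows cheaply:
SELECT product_id, SUM(amount) FROM report_base GROUP BY product_id;
SELECT cust_id, SUM(amount) FROM report_base GROUP BY cust_id;
```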