SQL query to fix corrupted data
Hi all, I am stuck with a request to correct some corrupted data.
CREATE TABLE EX1
(
EMPID INTEGER,
DW_EFF_DT DATE,
DW_EXPR_DT DATE
);
INSERT INTO EX1 VALUES(1,'04-MAR-1998','13-MAR-1999');
INSERT INTO EX1 VALUES(1,'14-MAR-1999','02-MAY-2000');
INSERT INTO EX1 VALUES(1,'03-MAY-2000','01-MAY-2013');
INSERT INTO EX1 VALUES(1,'02-MAY-2013','31-DEC-9999');
I have empid along with other attributes, but they are not relevant to the story. Some of the data is corrupted and we need to fix it.
Any DW_EFF_DT that is less than 1 February 2005 should default to 31 January 2005.
The data must be corrected so that my output looks like below:
EMPNO DW_EFF_DT  DW_EXPR_DT
1     2005-01-27 2005-01-28
1     2005-01-29 2005-01-30
1     2005-01-31 2013-05-01
1     2013-05-02 9999-12-31
I used the LEAD and LAG functions, but they apply sequentially.
How can I get the dates by subtracting 1 for each earlier row?
I tried the query below and got about halfway there.
SELECT A.*,
       COALESCE(LEAD(NEW_DW_EFF) OVER (ORDER BY NEW_DW_EFF) - 1,
                TO_DATE('31-DEC-9999','DD-MON-YYYY')) AS NEW_DW_EXPR_DT
FROM
(
  SELECT EMPID,
         DW_EFF_DT,
         DW_EXPR_DT,
         CASE WHEN DW_EFF_DT < TO_DATE('01-FEB-2005','DD-MON-YYYY')
              THEN TO_DATE('31-JAN-2005','DD-MON-YYYY')
              ELSE DW_EFF_DT
         END AS NEW_DW_EFF
  FROM EX1
) A
EMPID DW_EFF_DT   DW_EXPR_DT  NEW_DW_EFF  NEW_DW_EXPR_DT
1     04-MAR-1998 13-MAR-1999 31-JAN-2005 30-JAN-2005
1     14-MAR-1999 02-MAY-2000 31-JAN-2005 30-JAN-2005
1     03-MAY-2000 01-MAY-2013 31-JAN-2005 01-MAY-2013
1     02-MAY-2013 31-DEC-9999 02-MAY-2013 31-DEC-9999
Please help me in this regard.
Thanks in advance,
KVB
KVB says:
EMPNO DW_EFF_DT  DW_EXPR_DT
1     2005-01-27 2005-01-28
1     2005-01-29 2005-01-30
1     2005-01-31 2013-05-01
1     2013-05-02 9999-12-31
In fact, the last record is the current record.
The 3rd record is the one adjusted, because its effective date is lower than 2005-02-01; we set it to 2005-01-31.
The 2nd record's expiration date is then adjusted to the 3rd record's effective date minus 1.
Then the 2nd record's effective date is set to its own expiration date minus 1, and so on back to the first record. In fact I cannot explain much better, business-wise, why they do this. Technically, we must achieve this through SQL.
Regards
Well, that makes more sense, thanks :)
Probably not the prettiest... but should work (you may need to tweak it a bit)
ME_TUBBZ? select
    empid
  , to_date('2005-02-01','yyyy-mm-dd') - (2 * rn) + 1 as new_eff_date
  , case
      when rn = 1
      then dw_expr_dt
      else to_date('2005-02-01','yyyy-mm-dd') - (2 * (rn - 1))
    end as new_exp_date
  , dw_eff_dt
  , dw_expr_dt
from
(
  select
      empid
    , dw_eff_dt
    , dw_expr_dt
    , row_number() over (partition by empid order by dw_eff_dt desc) as rn
  from ex1
  where dw_eff_dt < to_date('2005-02-01','yyyy-mm-dd')
)
order by dw_eff_dt asc
/
EMPID NEW_EFF_DATE NEW_EXP_DATE DW_EFF_DT DW_EXPR_DT
------------------ -------------------- -------------------- -------------------- --------------------
1 27-JAN-2005 00:00:00 28-JAN-2005 00:00:00 04-MAR-1998 00:00:00 13-MAR-1999 00:00:00
1 29-JAN-2005 00:00:00 30-JAN-2005 00:00:00 14-MAR-1999 00:00:00 02-MAY-2000 00:00:00
1 31-JAN-2005 00:00:00 01-MAY-2013 00:00:00 03-MAY-2000 00:00:00 01-MAY-2013 00:00:00
3 rows selected.
Elapsed: 00:00:00.01
ME_TUBBZ?
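If the corrected dates need to be written back to EX1 rather than just selected, the query above could feed a MERGE. This is only a sketch under the same assumptions as the query above (rows with DW_EFF_DT before 2005-02-01 are the ones to fix); it joins on ROWID so the effective-date column itself can be updated (updating a column referenced in the ON clause raises ORA-38104). Test it on a copy first.

```sql
-- Sketch: write the recalculated dates back to EX1 (untested against the OP's data).
-- ROWID join avoids ORA-38104 when updating dw_eff_dt.
MERGE INTO ex1 t
USING (
  SELECT rid,
         TO_DATE('2005-02-01','yyyy-mm-dd') - (2 * rn) + 1 AS new_eff_date,
         CASE WHEN rn = 1
              THEN dw_expr_dt
              ELSE TO_DATE('2005-02-01','yyyy-mm-dd') - (2 * (rn - 1))
         END AS new_exp_date
  FROM (
    SELECT ROWID AS rid,
           dw_expr_dt,
           ROW_NUMBER() OVER (PARTITION BY empid
                              ORDER BY dw_eff_dt DESC) AS rn
    FROM ex1
    WHERE dw_eff_dt < TO_DATE('2005-02-01','yyyy-mm-dd')
  )
) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN UPDATE
  SET t.dw_eff_dt  = s.new_eff_date,
      t.dw_expr_dt = s.new_exp_date;
```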
Regards,
Tags: Database
Similar Questions
-
SQL query to represent the data in the graph bar
Hello
JDev 11.1.1.5.0
We need to create a dashboard to track translations. We have created an ADF table with 'Create', 'Update' and 'Delete' buttons to manipulate the table.
Our DB table structure is:
File_id (PK), File_Name, ToSpanish (YES/NO), ToSpanish-Date, ToChina (YES/NO), ToChina-Date... etc.
Once a file is translated into the required language, the user must update the DB table using the "Update" button.
So far, we have implemented the requirement above.
We now need to represent the translation status in a bar graph, with the language on the X axis and the file count on the Y axis, e.g. Spanish - 100 files, China - 200 files.
Please suggest a SQL query to retrieve the necessary info from the DB table so it can be represented in the bar graph. Also, it would be great if you could provide pointers on creating a bar chart.
Thanks in advance,
MSR.
If you set your major and minor increments to values greater than 1, you won't show decimal points. You can try setting these to 10 or 100 to reach your goal.
Subtype = "BAR_VERT_CLUST" >
-
Need a sql query to get several dates in rows
Hi all
I need a query that returns the dates of the last 7 days, with each date on its own row...
but select sysdate from dual... gives only one row...
Expected output:
Dates:
01-Oct-2013
30-Sep-2013
29-Sep-2013
28-Sep-2013
27-Sep-2013
26-Sep-2013
Try:
SQL> SELECT sysdate - 7 + LEVEL FROM DUAL
     CONNECT BY LEVEL <= 7
     ORDER BY 1 DESC
SQL> /
SYSDATE-7+LEVEL
-----------------------------
01-OCT-2013 13:04:52
30-SEP-2013 13:04:52
29-SEP-2013 13:04:52
28-SEP-2013 13:04:52
27-SEP-2013 13:04:52
26-SEP-2013 13:04:52
25-SEP-2013 13:04:52
7 rows selected.
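The timestamps in the output above come from SYSDATE carrying a time-of-day component. If only the dates are wanted, wrapping SYSDATE in TRUNC is a small tweak of the same query (a sketch, not run against the poster's instance):

```sql
-- Same row generator, with TRUNC stripping the time component
SELECT TRUNC(SYSDATE) - 7 + LEVEL AS the_date
FROM dual
CONNECT BY LEVEL <= 7
ORDER BY 1 DESC;
```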
-
SQL query to get the dates between two dates
Hello
We have a table with a start date and an end date... Now I need to get all the dates between the start date and the end date...
The table looks like below:
create table date_table (start_date date, end_date date);
The table data will be as below:
start_date end_date
January 1, 2013 January 4, 2013
February 1, 2013 February 3, 2013
............... .................
............... ..................
............... .................
May 1, 2013 may 3, 2013
I want a result like below...
holiday_dates
January 1, 2013
January 2, 2013
January 3, 2013
January 4, 2013
February 1, 2013
February 2, 2013
February 3, 2013
.................
.................
.................
.................
May 1, 2013
May 2, 2013
May 3, 2013
Can anyone help... ?
Ramesh9158 wrote:
Hello
Your query will not work for our case...
First... we do not know the number of rows in the table... so we cannot use the union...
Second... the dates are hard-coded... but we do not know what dates will be in the table... it could be anything, not only the ones shown in the example...
Hi,
My code will work anywhere; I only used the WITH ... UNION ALL statement to create the example data.
Try the query:
-- the main query
select d1 + level - 1 as holiday
from t
connect by level <= d2 - d1 + 1
   and prior iden = iden
   and prior sys_guid() is not null
----
Ramin Hashimzade
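On Oracle 11gR2 and later, the same per-row date expansion can also be written with recursive subquery factoring, which avoids the PRIOR SYS_GUID() trick. This is a sketch against the date_table from the question (untested here):

```sql
-- Recursive alternative (Oracle 11gR2+): one output row per date in each range
WITH expanded (start_date, end_date, holiday_date) AS (
  SELECT start_date, end_date, start_date
  FROM   date_table
  UNION ALL
  SELECT start_date, end_date, holiday_date + 1
  FROM   expanded
  WHERE  holiday_date + 1 <= end_date
)
SELECT holiday_date AS holiday_dates
FROM   expanded
ORDER  BY holiday_date;
```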
-
SQL query for the region of the tree
Hello
I was wondering if someone could help me work out a SQL query to format the data in the table into the required tree region... I've never used a tree and I'm struggling to get the data right (if possible).
The data in the table looks like this:
I want to put it in a tree, using level1 to level6, with a final layout that would look like this:
As you can see, the data is formatted from level6 down in the tree, but is filled in the table from level1 up. Not all of the columns will be filled, so level2 for Person 4 (France) is the equivalent level in the tree to level6 for Person 1 (Spain).
This is Apex 4.2.5
Oracle 11.2.0.3.0
Sample data:
CREATE TABLE employees
(
  employee VARCHAR2(100),
  level1   VARCHAR2(100),
  level2   VARCHAR2(100),
  level3   VARCHAR2(100),
  level4   VARCHAR2(100),
  level5   VARCHAR2(100),
  level6   VARCHAR2(100)
);
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person1','Team One','Recruitment','Human Resources','Fictituous Company','Murcia','Spain');
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person2','Team Four','Testing','IT','Big Corporate','Hanover','Germany');
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person3','Big Corporate','Hanover','Germany', null, null, null);
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person4','Brittany','France', null, null, null, null);
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person5','Team Three','Testing','IT','Big Corporate','Hanover','Germany');
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person6','Public Relations','Government Agency','Brittany','France', null, null);
Edited to add example data.
Hello
Apex-user wrote:
Thanks Frank, this is a good example I can work with! Your assumptions are correct.
One question that came up, however, is that I have a segment of data that comes across badly formatted, so to speak... where only the lower levels (level1-level2, etc.) have been filled, from the bottom up.
An example of this data would be:
INSERT INTO employees (employee, level1, level2, level3, level4, level5, level6) VALUES ('Person7','Team One','Testing', null, null, null, null);
As you can see, if you rerun the select, 'Testing' is now duplicated, both at the root level and in the tree where it should be.
I'm not sure if the SQL can be adjusted to account for this, or if it's too hard?
It is obviously a data issue and I am trying to solve that separately (the data is extracted from another system outside my control).
Thank you!
Sorry, I'm confused.
You say that my assumptions were correct. Those assumptions included "if 'Testing' occurs under 'IT' in one row, then everywhere 'Testing' occurs, it must be under 'IT'". Right after saying the assumptions are correct, you give an example where 'Testing' occurs under 'IT' in one row but is not under 'IT' in another row, and where 'Team One' is under 'Testing' in one row but under 'Recruitment' in another row.
When a situation like this occurs, how do you want it handled? Whenever you have a problem, please post the exact results you want from the given sample data, and an explanation of how you get those results. If you don't know what the ideal results would be, or if you are flexible about the exact results, then at least give an example and explain your reasoning.
Maybe you want to change the got_parent subquery like this:
WITH unpivoted_data AS
(
  SELECT *
  FROM   employees
  UNPIVOT (node_name
           FOR lvl IN (level1 AS 1,
                       level2 AS 2,
                       level3 AS 3,
                       level4 AS 4,
                       level5 AS 5,
                       level6 AS 6
                      )
          )
),
got_parent AS
(
  SELECT   c.node_name,
           MIN (p.node_name) AS parent
  FROM     unpivoted_data c
  LEFT OUTER JOIN unpivoted_data p ON  p.employee = c.employee
                                   AND p.lvl      = c.lvl + 1
  GROUP BY c.node_name
)
SELECT LPAD (' ', 2 * (LEVEL - 1)) || node_name AS entity
FROM   got_parent
START WITH parent IS NULL
CONNECT BY parent = PRIOR node_name
;
This way, if 'Testing' is under 'IT' in one row but under nothing in another row, the query will consider it to be under 'IT' and not a root. If 'Team One' is under 'Testing' in one row but under 'Recruitment' in another row, it will (arbitrarily) be considered to be under 'Recruitment'.
-
Need help to write a SQL query complex
I have the source table as below
-> SOURCE_TABLE
NAME  CUST_ID SVC_ST_DT  SVC_END_DT
TOM   1       31/08/2009 23/03/2011
DOCK  2       01/01/2004 31/05/2010
HARRY 3       28/02/2007 31/12/2009
I want to load the target table as below
-> TARGET_TABLE
NAME  CUST_ID SVC_ST_DT  SVC_END_DT
TOM   1       31/08/2009 31/12/2009
TOM   1       01/01/2010 31/12/2010
TOM   1       01/01/2011 23/03/2011
DOCK  2       01/01/2004 31/12/2004
DOCK  2       01/01/2005 31/12/2005
DOCK  2       01/01/2006 31/12/2006
DOCK  2       01/01/2007 31/12/2007
DOCK  2       01/01/2008 31/12/2008
DOCK  2       01/01/2009 31/12/2009
DOCK  2       01/01/2010 31/05/2010
HARRY 3       28/02/2007 31/12/2007
HARRY 3       01/01/2008 31/12/2008
HARRY 3       01/01/2009 31/12/2009
Is it possible to write a SQL query that returns the data in the form of the target table above?
Published by: AChatterjee on April 30, 2012 07:14
Or like this...
SQL> ed
Wrote file afiedt.buf

with t as (select 'TOM' as NAME, 1 as CUST_ID, date '2009-08-31' as SVC_ST_DT, date '2011-03-23' as SVC_END_DT from dual union all
           select 'DOCK', 2, date '2004-01-01', date '2010-05-31' from dual union all
           select 'HARRY', 3, date '2007-02-28', date '2009-12-31' from dual)
--
-- end of test data
--
select name, cust_id, svc_st_dt, svc_end_dt
from (
  select name
        ,cust_id
        ,greatest(svc_st_dt, add_months(trunc(svc_st_dt,'YYYY'),yr*12)) as svc_st_dt
        ,least(svc_end_dt, add_months(trunc(svc_st_dt,'YYYY'),(yr+1)*12)-1) as svc_end_dt
  from t
  cross join (select rownum-1 as yr
              from dual
              connect by rownum <= (select extract(year from max(svc_end_dt))
                                         - extract(year from min(svc_st_dt)) + 1
                                    from t)
             )
)
where svc_st_dt <= svc_end_dt
order by 2, 3
/

NAME  CUST_ID SVC_ST_DT            SVC_END_DT
----- ------- -------------------- --------------------
TOM   1       31-AUG-2009 00:00:00 31-DEC-2009 00:00:00
TOM   1       01-JAN-2010 00:00:00 31-DEC-2010 00:00:00
TOM   1       01-JAN-2011 00:00:00 23-MAR-2011 00:00:00
DOCK  2       01-JAN-2004 00:00:00 31-DEC-2004 00:00:00
DOCK  2       01-JAN-2005 00:00:00 31-DEC-2005 00:00:00
DOCK  2       01-JAN-2006 00:00:00 31-DEC-2006 00:00:00
DOCK  2       01-JAN-2007 00:00:00 31-DEC-2007 00:00:00
DOCK  2       01-JAN-2008 00:00:00 31-DEC-2008 00:00:00
DOCK  2       01-JAN-2009 00:00:00 31-DEC-2009 00:00:00
DOCK  2       01-JAN-2010 00:00:00 31-MAY-2010 00:00:00
HARRY 3       28-FEB-2007 00:00:00 31-DEC-2007 00:00:00
HARRY 3       01-JAN-2008 00:00:00 31-DEC-2008 00:00:00
HARRY 3       01-JAN-2009 00:00:00 31-DEC-2009 00:00:00

13 rows selected.
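The core of the approach above is the inline year generator in the CROSS JOIN: it emits one zero-based offset per calendar year the data spans, and GREATEST/LEAST then clamp each generated year to the real service range. In isolation it looks like this (a sketch, with the row count hard-coded for the sample data instead of the subquery):

```sql
-- One row per year offset: 0..7 covers 2004 through 2011 in the sample data
select rownum - 1 as yr
from dual
connect by rownum <= 8;
```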
-
Load rule for a SQL query extract
Hello
I have built an HFM extended analytics extract and then wrote a SQL query to organize the data the way I need it for Essbase. The SQL output file is a simple text file with a "|" delimiter. However, when I open the file in the load rule in Essbase, I see a few random characters and none of the fields. The output file looks fine in Notepad. Any ideas? Thank you.
EDIT: The problem was with the coding. Open in Notepad and saved in the UTF-8 format and it fine in Essbase. Now I need to figure out how to do this by SQL.
Published by: patdawg on December 28, 2009 11:04
If you need to convert the file to UTF-8 and do not know how to do it in SQL Server, Essbase comes with a utility to do so (you can embed it in your automation script; it is called essutf8.exe and is located in your arborpath\bin directory). The technical reference has a lot of info about this utility.
-
Failed to parse the SQL query by user
Hi all
In my application, I have a text item with a "Submit" button. Into this item I type a name, and a report region below the item shows me the result(s). This works perfectly for all my users (> 2000), but one user gets an error in the report region:
Failed to parse the SQL query:
ORA-01403: no data found
We tried this with the same search string on the same computer/browser. If I am logged in, the result is OK; if that user is logged in, the error message appears. If I try it on the user's computer with my login, the result is OK. If the user tries it on another PC, the error appears.
I have a production and a development workspace. In the development workspace the user can try this without errors; only in the production workspace does the error appear.
The SQL select in the report is very simple:
select id,
       name,
       raum
from   table
where  instr(upper(name), upper(:P60_SEARCH)) > 0
All other users can use this search box and report perfectly; only this one user gets the error. There are no restrictions or conditions on this item or region.
Can someone help me?
Hi Carsten,
I don't think that there is a way to do it.
Could you please mark the correct and useful answers in this thread? Otherwise I'll never get near Andy ;) :)
Denes Kubicek
------------------------------------------------------------------------------
http://deneskubicek.blogspot.com/
http://www.Opal-consulting.de/training
http://Apex.Oracle.com/pls/OTN/f?p=31517:1
------------------------------------------------------------------------------ -
SQL query to get the range of Date values
Hello
The database is Oracle11i.
I'm looking for a way to generate a list of dates from a fixed date in the past (which could be hardcoded) up to today (sysdate).
In other words, if the fixed date is June 19, 2011, and assuming today is June 24, 2011, the SQL must be able to generate the
following:
June 19, 2011
June 20, 2011
June 21, 2011
June 22, 2011
June 23, 2011
June 24, 2011
And the constraint is that I cannot make any change to the database in question. I can only fire a (SELECT) SQL query. No
time-dimension type of approach (a time dimension is not available here) and no PL/SQL procedures, etc. Is it possible?
Thank you
Jaimeen Shah wrote: (question quoted above)
SQL> def date_start = '13/11/2010'
SQL> def date_end = '22/11/2010'
SQL> with data as (
       select to_date('&date_start', 'DD/MM/YYYY') date1,
              to_date('&date_end', 'DD/MM/YYYY') date2
       from dual
     )
     select to_char(date1+level-1, 'DD/MM/YYYY') the_date
     from data
     connect by level <= date2-date1+1
     /

THE_DATE
----------
13/11/2010
14/11/2010
15/11/2010
16/11/2010
17/11/2010
18/11/2010
19/11/2010
20/11/2010
21/11/2010
22/11/2010
-
Search syntax for a SQL query against a varchar2 data type, preserving valid data.
We have a data model that we are not allowed to change, and the column in question is a varchar2(20). The column at this stage has no foreign key to a list of valid values. So, until we can get those who control the data model to make the adjustments, we need a SQL query that roots out the bad data at fixed intervals.
What we expect to be good data is below:
- Whole numbers, no floating point
- Length of 5 or less (greater than zero but less than 99999)
- The text 'No_RP' can exist.
The demo query below works most of the time, except that 'or Column1 is null' is not catching the null record. I tried rearranging the logical conditions, but still have not found the correct layout. So help would be greatly appreciated if someone could set me straight on how to properly include the null value in the recordset selected along with the other error types, so end users can correct their mistakes. Another thing: I suppose there could be a syntactically better approach to finding all offending characters such as *, &, ( and so on.
WITH sample_data AS (
  SELECT '0' col FROM DUAL UNION ALL
  SELECT '2' col FROM DUAL UNION ALL
  SELECT '99999' col FROM DUAL UNION ALL
  SELECT '100000' col FROM DUAL UNION ALL
  SELECT '1 a' col FROM DUAL UNION ALL
  SELECT 'ABCD' col FROM DUAL UNION ALL
  SELECT 'A1' col FROM DUAL UNION ALL
  SELECT '*' col FROM DUAL UNION ALL
  SELECT '/' col FROM DUAL UNION ALL
  SELECT '-' col FROM DUAL UNION ALL
  SELECT ' ' col FROM DUAL UNION ALL
  SELECT NULL col FROM DUAL UNION ALL
  SELECT '4 5 6' col FROM DUAL UNION ALL
  SELECT '24.5' col FROM DUAL UNION ALL
  SELECT '-3' col FROM DUAL UNION ALL
  SELECT 'A' col FROM DUAL UNION ALL
  SELECT 'F' col FROM DUAL UNION ALL
  SELECT 'Z' col FROM DUAL UNION ALL
  SELECT 'Bye' col FROM DUAL UNION ALL
  SELECT 'Hello World' col FROM DUAL UNION ALL
  SELECT '=' col FROM DUAL UNION ALL
  SELECT '+' col FROM DUAL UNION ALL
  SELECT '_' col FROM DUAL UNION ALL
  SELECT '-' col FROM DUAL UNION ALL
  SELECT '(' col FROM DUAL UNION ALL
  SELECT ')' col FROM DUAL UNION ALL
  SELECT '&' col FROM DUAL UNION ALL
  SELECT '^' col FROM DUAL UNION ALL
  SELECT '%' col FROM DUAL UNION ALL
  SELECT '$' col FROM DUAL UNION ALL
  SELECT '#' col FROM DUAL UNION ALL
  SELECT '@' col FROM DUAL UNION ALL
  SELECT '!' col FROM DUAL UNION ALL
  SELECT '~' col FROM DUAL UNION ALL
  SELECT '`' col FROM DUAL UNION ALL
  SELECT '.' col FROM DUAL
)
SELECT col FROM sample_data
WHERE (TRANSLATE (col, '_0123456789', '_') IS NOT NULL
       OR LENGTH (col) > 5
       OR col = '0'
       OR col IS NULL)
  AND (UPPER (col) <> 'NO_RP');
One more thing: I also tried the regular-expression approach, but could not figure it out. If anyone knows how to do it that way, I would appreciate learning that method as well. Below is as close as I got. I could not get a range such as 'between 0 and 100000' to work, I am guessing because of comparing varchar2 and numbers; I even attempted to_char and to_number.
select to_number(column1) from testsql where REGEXP_LIKE (column1, '^[[:digit:]]+$') ORDER BY to_number(column1) ASC;
Thanks in advance to anyone who can help.
Nick
Hello,
Thanks for posting the sample data in a usable form.
It would also help if you posted the exact results you want from this data. Do you want the same results as those produced by the query you posted, except that nulls should be included? If so:
SELECT col
FROM   sample_data
WHERE  CASE
         WHEN UPPER (col) = 'NO_RP' THEN 1
         WHEN col IS NULL THEN -1
         WHEN LTRIM (col, '0123456789') IS NOT NULL THEN -2
         WHEN LENGTH (col) > 5 THEN -3
         ELSE TO_NUMBER (col)
       END NOT BETWEEN 1 AND 99999;
The requirement that col != 0 makes it that much more difficult. You could easily test for an integer of 1 to 5 digits, but then you need a separate condition to make sure that the string was not '0', '00', '000', '0000' or '00000'.
(Unlike Solomon, I assume you do not want to pick up non-zero numbers starting with 0, such as '007' or '02138'.)
Using regular expressions, you may save a few keystrokes, but you also lose a lot of clarity:
SELECT col
FROM   sample_data
WHERE  REGEXP_LIKE ( col , '^0{1,5}$' )
   OR  NOT REGEXP_LIKE ( NVL ( UPPER (col) , 'BAD' )
                       , '^(([1-9][0-9]{0,4})|NO_RP)$'
                       );
Published by: Frank Kulash, December 13, 2010 21:50
Published by: Frank Kulash, December 13, 2010 22:11
Added regular expression solution -
Single SQL query to find the bill of entry date for item codes in a stock table
Dear all,
Please tell us a single SQL query for the below,
We have a stock table (STOCK_TABLE) as shown below:

ITEM_CODE    BAT_NO     TXN_CODE            DOC_NO     BOE_DT
(item code)  (lot no.)  (transaction code)  (number)   (bill of entry date)
I1           B1
I2
I3           B70
I4           B80
I5           B90        T102                1234       02-JUL-2015
I6           B100
We have to find the bill of entry date (i.e. the date when the items came into this particular table) for items that are not attached to any document (that is, whose TXN_CODE, DOC_NO and BOE_DT fields are NULL).
For each such item in the stock table, the bill of entry date is derived as follows:
- If the (item code, lot number) combination is present in HISTORY_TABLE, the bill of entry date will be the UPDT_DT where the transaction code (TXN_CODE) is an IN transaction (which can be determined from the TRANSACTIONS table).
- If the (item code, lot number) combination is NOT present in HISTORY_TABLE, or the transaction code for that item/lot combination is an OUT transaction, the bill of entry date will be the document date (DOC_DT) from one of the 3 IN_TABLE_HEAD tables that contains the item of that particular lot.
- If both case 1 and case 2 fail, the bill of entry date will be the latest document date (DOC_DT) from one of the 3 IN_TABLE_HEAD tables containing that particular item, and the BAT_NO in the expected results will be the one corresponding to that document if available, otherwise NULL.
- If case 1 or case 2 succeeds, the value of the last field (BATCH_YN, in the expected output shown further below) will be 'Y', because the lot matched. Otherwise it will be 'N'.
-
SQL query for retrieving data based on Certain model
Hi all
I want to retrieve the IDs of all the people who were continuously sitting during the last hour.
The data is as below:
-- Create the activity table
CREATE TABLE activity_log
(
  UserID    NUMBER,
  Activity  VARCHAR2(30),
  StartTime VARCHAR2(6),
  EndTime   VARCHAR2(6)
);
-- Populate with sample data
INSERT INTO activity_log VALUES('39','Walking','09:01','09:05');
INSERT INTO activity_log VALUES('39','Walking','09:06','09:10');
INSERT INTO activity_log VALUES('39','Sitting','09:11','09:15');
INSERT INTO activity_log VALUES('39','Sitting','09:16','09:20');
INSERT INTO activity_log VALUES('39','Sitting','09:21','09:25');
INSERT INTO activity_log VALUES('39','Standing','09:26','09:30');
INSERT INTO activity_log VALUES('39','Standing','09:31','09:35');
INSERT INTO activity_log VALUES('39','Sitting','09:36','09:40');
INSERT INTO activity_log VALUES('39','Sitting','09:41','09:45');
INSERT INTO activity_log VALUES('39','Sitting','09:46','09:50');
INSERT INTO activity_log VALUES('39','Sitting','09:51','09:55');
INSERT INTO activity_log VALUES('39','Sitting','09:56','10:00');
INSERT INTO activity_log VALUES('39','Sitting','10:01','10:05');
INSERT INTO activity_log VALUES('39','Sitting','10:06','10:10');
INSERT INTO activity_log VALUES('39','Sitting','10:11','10:15');
INSERT INTO activity_log VALUES('39','Sitting','10:16','10:20');
INSERT INTO activity_log VALUES('39','Sitting','10:21','10:25');
INSERT INTO activity_log VALUES('39','Sitting','10:26','10:30');
INSERT INTO activity_log VALUES('39','Sitting','10:31','10:35');
INSERT INTO activity_log VALUES('39','Standing','10:36','10:40');
INSERT INTO activity_log VALUES('39','Standing','10:41','10:45');
INSERT INTO activity_log VALUES('39','Walking','10:46','10:50');
INSERT INTO activity_log VALUES('39','Walking','10:51','10:55');
INSERT INTO activity_log VALUES('39','Walking','10:56','11:00');
INSERT INTO activity_log VALUES('40','Walking','09:01','09:05');
INSERT INTO activity_log VALUES('40','Walking','09:06','09:10');
INSERT INTO activity_log VALUES('40','Sitting','09:11','09:15');
INSERT INTO activity_log VALUES('40','Sitting','09:16','09:20');
INSERT INTO activity_log VALUES('40','Sitting','09:21','09:25');
INSERT INTO activity_log VALUES('40','Standing','09:26','09:30');
INSERT INTO activity_log VALUES('40','Standing','09:31','09:35');
INSERT INTO activity_log VALUES('40','Sitting','09:36','09:40');
INSERT INTO activity_log VALUES('40','Sitting','09:41','09:45');
INSERT INTO activity_log VALUES('40','Sitting','09:46','09:50');
INSERT INTO activity_log VALUES('40','Sitting','09:51','09:55');
INSERT INTO activity_log VALUES('40','Walking','09:56','10:00');
INSERT INTO activity_log VALUES('40','Sitting','10:01','10:05');
INSERT INTO activity_log VALUES('40','Standing','10:06','10:10');
INSERT INTO activity_log VALUES('40','Standing','10:11','10:15');
INSERT INTO activity_log VALUES('40','Walking','10:16','10:20');
INSERT INTO activity_log VALUES('40','Walking','10:21','10:25');
INSERT INTO activity_log VALUES('40','Walking','10:26','10:30');
INSERT INTO activity_log VALUES('40','Sitting','10:31','10:35');
INSERT INTO activity_log VALUES('40','Sitting','10:36','10:40');
INSERT INTO activity_log VALUES('40','Sitting','10:41','10:45');
INSERT INTO activity_log VALUES('40','Standing','10:46','10:50');
INSERT INTO activity_log VALUES('40','Walking','10:51','10:55');
INSERT INTO activity_log VALUES('40','Walking','10:56','11:00');
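Note that StartTime/EndTime are stored as VARCHAR2 rather than DATE, so any duration arithmetic needs an explicit conversion. A small sketch of the conversion (assuming all times fall within one day, as in the sample data):

```sql
-- Minutes between two 'HH24:MI' strings (sketch; assumes same-day times)
SELECT ROUND((TO_DATE('10:35','HH24:MI') - TO_DATE('09:36','HH24:MI')) * 24 * 60)
       AS minutes_elapsed
FROM dual;
-- minutes_elapsed = 59
```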
Based on the data, user ID 39 must be found, since it has been sitting from 09:36 to 10:35, which is one continuous hour.
Any guidance on how to do this with a SQL query would be a great help and much appreciated.
Thank you very much
Kind regards
Bilal
So what exactly is wrong with the query that I already gave you?
Just as a reminder, an untested (because of the earlier lack of insert statements) rewrite according to your new data:
with grp as (
  select
    userid,
    activity,
    starttime,
    endtime,
    row_number() over (partition by userid order by starttime)
      - row_number() over (partition by userid, activity order by starttime) rn
  from activity_log
)
select
  userid,
  min(starttime) starttime,
  max(endtime) endtime,
  max(activity) activity
from grp
group by userid, rn
having round((to_date(max(endtime), 'HH24:MI')
            - to_date(min(starttime), 'HH24:MI')) * 24 * 60) >= 59
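On Oracle 12c and later, the same "one continuous hour of Sitting" requirement can be expressed more directly with MATCH_RECOGNIZE. This is an untested sketch; it assumes the column names from the CREATE TABLE in the question (userid, activity, starttime, endtime) and that the 'HH24:MI' strings sort correctly within a single day:

```sql
-- Sketch (Oracle 12c+): users with at least ~1 hour of consecutive 'Sitting' rows
SELECT DISTINCT userid
FROM   activity_log
MATCH_RECOGNIZE (
  PARTITION BY userid
  ORDER BY starttime
  MEASURES FIRST (starttime) AS sit_start,
           LAST  (endtime)   AS sit_end
  ONE ROW PER MATCH
  PATTERN ( s+ )                      -- a maximal run of Sitting rows
  DEFINE s AS activity = 'Sitting'
)
WHERE ROUND((TO_DATE(sit_end,   'HH24:MI')
           - TO_DATE(sit_start, 'HH24:MI')) * 24 * 60) >= 59;
```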
-
How to identify columns that have the same data in a SQL query or function?
Dear all,
How can I identify columns that have the same data in a SQL query or function? I have sample data as below:
DEPT_ID EMP_ID DOB          CITY    STATE COUNTRY
1       1      01-JUN-1983  DELHI   HR    INDIA
1       2      18-JAN-1987  DELHI   HR    INDIA
1       3      28-NOV-1985  DELHI   HR    INDIA
1       4      04-JUN-1985  DELHI   HR    INDIA
2       5      05-JUN-1983  MUMBAI  HD    INDIA
2       6      06-JUN-1983  MUMBAI  HD    INDIA
2       7      07-JUN-1983  MUMBAI  HD    INDIA
2       8      08-JUN-1983  MUMBAI  HD    INDIA
3       9      09-JUN-1983  GURGAON DL    INDIA
3       10     10-JUN-1983  GURGAON DL    INDIA
Now, I want to identify the columns that have the same data for the same department ID.
Is it possible in a single SQL statement, or do I have to write a function for this? Please help with how to write it.
Thanks in advance.
Can you try this?
WITH t1 (dept_id, emp_id, dob, city, state, country) AS
(
  SELECT 1,  1, DATE '1983-06-01', 'DELHI',   'HR', 'INDIA' FROM dual UNION ALL
  SELECT 1,  2, DATE '1987-01-18', 'DELHI',   'HR', 'INDIA' FROM dual UNION ALL
  SELECT 1,  3, DATE '1985-11-28', 'DELHI',   'HR', 'INDIA' FROM dual UNION ALL
  SELECT 1,  4, DATE '1985-06-04', 'DELHI',   'HR', 'INDIA' FROM dual UNION ALL
  SELECT 2,  5, DATE '1983-06-05', 'MUMBAI',  'HD', 'INDIA' FROM dual UNION ALL
  SELECT 2,  6, DATE '1983-06-06', 'MUMBAI',  'HD', 'INDIA' FROM dual UNION ALL
  SELECT 2,  7, DATE '1983-06-07', 'MUMBAI',  'HD', 'INDIA' FROM dual UNION ALL
  SELECT 2,  8, DATE '1983-06-08', 'MUMBAI',  'HD', 'INDIA' FROM dual UNION ALL
  SELECT 3,  9, DATE '1983-06-09', 'GURGAON', 'DL', 'INDIA' FROM dual UNION ALL
  SELECT 3, 10, DATE '1983-06-10', 'GURGAON', 'DL', 'INDIA' FROM dual
)
SELECT dept_id,
       RTRIM (XMLAGG (XMLELEMENT (a, vals || ',')).EXTRACT ('//text()'), ',') AS columns_with_duplicate
FROM (
  SELECT *
  FROM (
    SELECT DISTINCT
           dept_id,
           CASE WHEN ceid     > 1 THEN 'YES' ELSE 'NO' END AS emp_id,
           CASE WHEN cdob     > 1 THEN 'YES' ELSE 'NO' END AS dob,
           CASE WHEN ccity    > 1 THEN 'YES' ELSE 'NO' END AS city,
           CASE WHEN cstate   > 1 THEN 'YES' ELSE 'NO' END AS state,
           CASE WHEN ccountry > 1 THEN 'YES' ELSE 'NO' END AS country
    FROM (
      SELECT dept_id,
             COUNT (*) OVER (PARTITION BY dept_id, emp_id)  AS ceid,
             COUNT (*) OVER (PARTITION BY dept_id, dob)     AS cdob,
             COUNT (*) OVER (PARTITION BY dept_id, city)    AS ccity,
             COUNT (*) OVER (PARTITION BY dept_id, state)   AS cstate,
             COUNT (*) OVER (PARTITION BY dept_id, country) AS ccountry
      FROM t1
    )
  )
  UNPIVOT (cols FOR vals IN (emp_id, dob, city, state, country))
  WHERE cols = 'YES'
)
GROUP BY dept_id;
OUTPUT:
DEPT_ID COLUMNS_WITH_DUPLICATE
------- ----------------------
1       CITY,COUNTRY,STATE
2       CITY,COUNTRY,STATE
3       CITY,COUNTRY,STATE
Post edited by: Parth272025
-
Can I use session variables in data model BI publisher SQL query?
Hi Experts,
We are implementing data-level security in BI Publisher 11g.
In OBIEE we do this using session variables, so I wanted to ask if we can use the same session variables in BI Publisher as well.
That is, can we include a where clause in the data-set SQL such as:
WHERE ORG_ID = @{biServer.variables['NQ_SESSION.INV_ORG']}
I would like to know your opinion on this.
PS: We are implementing EBS R12 security in BI Publisher.
Thank youRead this-> OBIEE 11 g: error: "[nQSError: 23006] the session variable, NQ_SESSION.» LAN_INT, has no definition of value. "When you create a SQL query using the session NQ_SESSION variable. LAN_INT in BI Publisher [ID 1511676.1]
Follow the ER - BUG: 13607750 -NEED TO be able TO SET up a SESSION IN OBIEE VARIABLE AND use it IN BI PUBLISHER
HTH,
SVS -
SQL query for sets of data rows, not values
Hello!
I use Oracle 10g (10.2.0.1.0) and I need help solving this difficult task. I have a huge table with more than 145000 records; here is only a sample of it:
ID      DT        HOMETEAM         AWAYTEAM         HPROB  DPROB  APROB  FT
324813  31/8/2012 DEN HAAG         GRONINGEN        1.90   3.30   3.10   2
324823  31/8/2012 MAINZ            GREUTHER FÜRTH   1.75   3.25   3.65   2
324805  31/8/2012 GAZELEC          DIJON            1.60   3.15   4.75   1
324810  31/8/2012 ÖREBRO           DJURGÅRDEN       2.80   3.25   2.05   2
324795  31/8/2012 FC KÖLN          COTTBUS          1.85   3.20   3.35   2
324837  31/8/2012 PORTLAND TIMBERS COLORADO RAPIDS  2.00   3.20   2.95   1
324828  31/8/2012 DROGHEDA UNITED  DUNDALK          1.45   3.65   5.25   1
324827  31/8/2012 CORK CITY        SHAMROCK ROVERS  3.30   3.80   1.70   2
324833  31/8/2012 BARUERI          ASA              2.45   3.20   2.30   1
324798  31/8/2012 GENÇLERBIRLIGI   ORDUSPOR         2.00   3.10   3.00   X
324814  31/8/2012 ALMERE CITY FC   OSS              1.80   3.50   3.20   2
324830  31/8/2012 CRICIÚMA         BRAGANTINO       1.25   4.35   8.00   1
324820  31/8/2012 VOLENDAM         FC EINDHOVEN     1.80   3.25   3.45   1
324818  31/8/2012 MVV MAASTRICHT   TELSTAR          1.40   4.00   5.25   X
324819  31/8/2012 DORDRECHT        VEENDAM          1.80   3.25   3.45   1
324834  31/8/2012 CEARÁ            GUARATINGUETÁ    1.40   3.85   5.50   X
This table consists of:
dates
teams (hometeam, awayteam)
numbers for the homewin, draw, awaywin probabilities
and the final result as 1, X, 2
What I want is an SQL query that returns, for each hometeam and awayteam (if possible in a single line),
all records of the hometeam that had the same prior probability figures. For example:
for CEARÁ (last line), I would like the SQL to show me all the records of CEARÁ that had _1.40, 3.85 or 5.50_ in any of the three probability columns, BUT the problem is that I don't want separate lines... All I can do so far is to calculate a sum over home probability, draw probability and away probability for each team, but the same records are counted again for each probability column!
This means that if CEARÁ had 1.40-3.85-5.50 in the past, it should return this line only once, not 3 times (one for each probability column).
I hope that I've done my duty,
Thank you for your time and insights
N. Saridakis

Is this what you are looking for:
select hometeam, awayteam, hw, hd, hl
from (select hometeam,
             awayteam,
             ft,
             SUM(CASE ft WHEN '1' THEN 1 ELSE 0 END) OVER (PARTITION BY hometeam) hw,
             SUM(CASE ft WHEN 'X' THEN 1 ELSE 0 END) OVER (PARTITION BY hometeam) hd,
             SUM(CASE ft WHEN '2' THEN 1 ELSE 0 END) OVER (PARTITION BY hometeam) hl
      from plays)
where ft is null
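Not from the thread: the SUM(CASE ... ) OVER (PARTITION BY hometeam) counts in the reply above can be checked with a small Python sketch. The `plays` data and the `home_record` helper are illustrative names, not from the post.

```python
# Sketch of the analytic counts: per hometeam, tally home wins ('1'),
# draws ('X') and away wins ('2') across all of its matches.
from collections import defaultdict

plays = [
    # (hometeam, awayteam, ft) -- illustrative sample
    ("CEARA", "GUARATINGUETA", "X"),
    ("CEARA", "BARUERI", "1"),
    ("CEARA", "ASA", "1"),
]

def home_record(plays):
    rec = defaultdict(lambda: [0, 0, 0])  # [hw, hd, hl] per hometeam
    for home, away, ft in plays:
        if ft == "1":
            rec[home][0] += 1  # home win
        elif ft == "X":
            rec[home][1] += 1  # draw
        elif ft == "2":
            rec[home][2] += 1  # home loss
    return dict(rec)

print(home_record(plays))  # {'CEARA': [2, 1, 0]}
```

Unlike the SQL, this collapses each team to one line directly, which is the "no separate lines" behaviour the poster asked for.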
HISTORY_TABLE
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | UPDT_DT
I1        | B1     | T1       | 1234   | 03-JAN-2015
I1        | B20    | T20      | 4567   | 03-MAR-2015
I1        | B30    | T30      | 7890   | 05-FEB-2015
I2        | B40    | T20      | 1234   | 01-JAN-2015
TRANSACTION
TXN_CODE | TXN_TYPE
T1       | IN
T20      | OUT
T30      | ALL
T50      | IN
T80      | IN
T90      | IN
T60      | ALL
T70      | ALL
T40      | ALL
IN_TABLE_HEAD_1
H1_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE
H1ID1                   | T1       | 1234   | 01-JAN-2015
H1ID2                   | T70      | 1234   | 01-FEB-2015
IN_TABLE_ITEM_1
I1_SYS_ID | H1_SYS_ID (foreign key referencing H1_SYS_ID in IN_TABLE_HEAD_1) | ITEM_CODE
I1ID1     | H1ID1 | I1
I1ID2     | H1ID1 | I100
I1ID3     | H1ID2 | I3
IN_TABLE_BATCH_1
B1_SYS_ID | TXN_CODE | DOC_NO (as in IN_TABLE_HEAD_1) | BAT_NO
B1ID1     | T1       | 1234 | B1 (can be empty)
B1ID2     | T70      | 1234 | B70
IN_TABLE_HEAD_2
H2_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE
H2ID1                   | T30      | 4567   | 03-FEB-2015
H2ID2                   | T60      | 1234   | 03-JAN-2015
IN_TABLE_ITEM_2
I2_SYS_ID | H2_SYS_ID (foreign key referencing H2_SYS_ID in IN_TABLE_HEAD_2) | ITEM_CODE
I2ID1     | H2ID1 | I1
I2ID2     | H2ID1 | I200
I2ID3     | H2ID2 | I2
IN_TABLE_BATCH_2
B2_SYS_ID | I2_SYS_ID (foreign key referencing I2_SYS_ID in IN_TABLE_ITEM_2) | BAT_NO
B2ID1     | I2ID1 | B30 (can be null)
B2ID2     | I2ID2 | B90
B2ID3     | I2ID3 | B60
IN_TABLE_HEAD_3
H3_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE
H3ID1                   | T50      | 1234   | 02-JAN-2015
H3ID2                   | T80      | 1234   | 03-JAN-2015
H3ID3                   | T90      | 1234   | 04-JAN-2015
H3ID4                   | T40      | 1234   | 05-AUG-2015
IN_TABLE_ITEM_3
I3_SYS_ID | H3_SYS_ID (foreign key referencing H3_SYS_ID in IN_TABLE_HEAD_3) | ITEM_CODE | BAT_NO
I3ID1     | H3ID1 | I2 | B50
I3ID2     | H3ID2 | I4 | B40
I3ID3     | H3ID3 | I4 |
I3ID4     | H3ID4 | I6 |
There is no IN_TABLE_BATCH_3
Please find below the expected results.
OUTPUT
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | BOE_DT      | BATCH_YN
I1        | B1     | T1       | 1234   | 03-JAN-2015 | Y
I1        | B30    | T30      | 7890   | 05-FEB-2015 | N
I2        | B60    | T60      | 1234   | 03-JAN-2015 | N
I3        | B70    | T70      | 1234   | 01-FEB-2015 | Y
I4        |        | T90      | 1234   | 04-JAN-2015 | N
I6        |        | T40      | 1234   | 05-AUG-2015 | N
Database commands to create the tables above and insert the records:
CREATE TABLE stock_table (item_code VARCHAR2(80), bat_no VARCHAR2(80), txn_code VARCHAR2(80),
doc_no VARCHAR2(80), boe_dt DATE);
INSERT INTO stock_table
VALUES ('I1', 'B1', '', '', '');
INSERT INTO stock_table
VALUES ('I1', '', '', '', '');
INSERT INTO stock_table
VALUES ('I2', '', '', '', '');
INSERT INTO stock_table
VALUES ('I3', 'B70', '', '', '');
INSERT INTO stock_table
VALUES ('I4', 'B80', '', '', '');
INSERT INTO stock_table
VALUES ('I5', 'B90', 'T102', '1234', '02-JUL-2015');
INSERT INTO stock_table
VALUES ('I6', 'B100', '', '', '');
SELECT *
FROM stock_table;
CREATE TABLE history_table (item_code VARCHAR2(80), bat_no VARCHAR2(80), txn_code VARCHAR2(80),
doc_no VARCHAR2(80), updt_dt DATE);
INSERT INTO history_table
VALUES ('I1', 'B1', 'T1', '1234', '03-JAN-2015');
INSERT INTO history_table
VALUES ('I1', 'B20', 'T20', '4567', '03-MAR-2015');
INSERT INTO history_table
VALUES ('I1', 'B30', 'T30', '7890', '05-FEB-2015');
INSERT INTO history_table
VALUES ('I2', 'B40', 'T20', '1234', '01-JAN-2015');
SELECT *
FROM history_table;
CREATE TABLE transaction1 (txn_code VARCHAR2(80), txn_type VARCHAR2(80));
INSERT INTO transaction1
VALUES ('T1', 'IN');
INSERT INTO transaction1
VALUES ('T20', 'OUT');
INSERT INTO transaction1
VALUES ('T30', 'ALL');
INSERT INTO transaction1
VALUES ('T40', 'ALL');
INSERT INTO transaction1
VALUES ('T50', 'IN');
INSERT INTO transaction1
VALUES ('T60', 'ALL');
INSERT INTO transaction1
VALUES ('T70', 'ALL');
INSERT INTO transaction1
VALUES ('T80', 'IN');
INSERT INTO transaction1
VALUES ('T90', 'IN');
SELECT *
FROM transaction1;
CREATE TABLE in_table_head_1 (h1_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
CREATE TABLE in_table_head_2 (h2_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
CREATE TABLE in_table_head_3 (h3_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
INSERT INTO in_table_head_1
VALUES ('H1ID1', 'T1', '1234', '01-JAN-2015');
INSERT INTO in_table_head_1
VALUES ('H1ID2', 'T70', '1234', '01-FEB-2015');
INSERT INTO in_table_head_2
VALUES ('H2ID1', 'T30', '4567', '03-FEB-2015');
INSERT INTO in_table_head_2
VALUES ('H2ID2', 'T60', '1234', '03-JAN-2015');
INSERT INTO in_table_head_3
VALUES ('H3ID1', 'T50', '1234', '02-JAN-2015');
INSERT INTO in_table_head_3
VALUES ('H3ID2', 'T80', '1234', '03-JAN-2015');
INSERT INTO in_table_head_3
VALUES ('H3ID3', 'T90', '1234', '05-JAN-2015');
INSERT INTO in_table_head_3
VALUES ('H3ID4', 'T40', '1234', '05-AUG-2015');
CREATE TABLE in_table_item_1 (i1_sys_id VARCHAR2(80) PRIMARY KEY,
h1_sys_id VARCHAR2(80) REFERENCES in_table_head_1(h1_sys_id), item_code VARCHAR2(80));
CREATE TABLE in_table_item_2 (i2_sys_id VARCHAR2(80) PRIMARY KEY,
h2_sys_id VARCHAR2(80) REFERENCES in_table_head_2(h2_sys_id), item_code VARCHAR2(80));
CREATE TABLE in_table_item_3 (i3_sys_id VARCHAR2(80) PRIMARY KEY,
h3_sys_id VARCHAR2(80) REFERENCES in_table_head_3(h3_sys_id), item_code VARCHAR2(80),
bat_no VARCHAR2(80));
INSERT INTO in_table_item_1
VALUES ('I1ID1', 'H1ID1', 'I1');
INSERT INTO in_table_item_1
VALUES ('I1ID2', 'H1ID1', 'I100');
INSERT INTO in_table_item_1
VALUES ('I1ID3', 'H1ID2', 'I3');
INSERT INTO in_table_item_2
VALUES ('I2ID1', 'H2ID1', 'I1');
INSERT INTO in_table_item_2
VALUES ('I2ID2', 'H2ID1', 'I200');
INSERT INTO in_table_item_2
VALUES ('I2ID3', 'H2ID2', 'I2');
INSERT INTO in_table_item_3
VALUES ('I3ID1', 'H3ID1', 'I2', 'B50');
INSERT INTO in_table_item_3
VALUES ('I3ID2', 'H3ID2', 'I4', 'B40');
INSERT INTO in_table_item_3
VALUES ('I3ID3', 'H3ID3', 'I4', '');
INSERT INTO in_table_item_3
VALUES ('I3ID4', 'H3ID4', 'I6', '');
SELECT *
FROM in_table_item_1;
SELECT *
FROM in_table_item_2;
SELECT *
FROM in_table_item_3;
CREATE TABLE in_table_batch_1 (b1_sys_id VARCHAR2(80) PRIMARY KEY,
txn_code VARCHAR2(80), doc_no VARCHAR2(80), bat_no VARCHAR2(80));
CREATE TABLE in_table_batch_2 (b2_sys_id VARCHAR2(80) PRIMARY KEY,
i2_sys_id VARCHAR2(80) REFERENCES in_table_item_2(i2_sys_id), bat_no VARCHAR2(80));
INSERT INTO in_table_batch_1
VALUES ('B1ID1', 'T1', '1234', 'B1');
INSERT INTO in_table_batch_1
VALUES ('B1ID2', 'T70', '1234', 'B70');
INSERT INTO in_table_batch_2
VALUES ('B2ID1', 'I2ID1', 'B30');
INSERT INTO in_table_batch_2
VALUES ('B2ID2', 'I2ID2', 'B90');
INSERT INTO in_table_batch_2
VALUES ('B2ID3', 'I2ID3', 'B60');
Please advise a solution for the same.
Thank you and best regards,
Séverine Suresh
Heavily contrived (subquery factoring used to allow easy testing/verification; may work with these test data only):
with
case_1 as
  (select s.item_code,
          s.bat_no,
          h.txn_code,
          h.doc_no,
          h.updt_dt boe_dt,
          case when s.bat_no = h.bat_no then 'Y' else 'N' end batch_yn,
          case when h.txn_code is not null
                and h.doc_no is not null
                and h.updt_dt is not null
               then 'case 1'
          end refers_to
     from (select item_code, bat_no, txn_code, doc_no, boe_dt
             from w_stock_table
            where bat_no is null
               or txn_code is null
               or doc_no is null
               or boe_dt is null
          ) s
          left outer join
          w_history_table h
       on s.item_code = h.item_code
      and s.bat_no = h.bat_no
      and exists (select null
                    from w_transaction1
                   where txn_code = nvl(s.txn_code, h.txn_code)
                     and txn_type in ('IN', 'ALL')
                 )
  ),
case_2 as
  (select s.item_code,
          nvl(s.bat_no, h.bat_no) bat_no,
          nvl(s.txn_code, h.txn_code) txn_code,
          nvl(s.doc_no, h.doc_no) doc_no,
          nvl(s.boe_dt, h.updt_dt) updt_dt,
          case when s.bat_no = h.bat_no then 'Y' else 'N' end batch_yn,
          case when h.txn_code is not null
                and h.doc_no is not null
                and h.updt_dt is not null
               then 'case 2'
          end refers_to
     from (select item_code, bat_no, txn_code, doc_no, boe_dt
             from case_1
            where refers_to is null
          ) s
          left outer join
          w_history_table h
       on s.item_code = h.item_code
      and exists (select null
                    from w_transaction1
                   where txn_code = nvl(s.txn_code, h.txn_code)
                     and txn_type in ('IN', 'ALL')
                 )
      and not exists (select null
                        from case_1
                       where item_code = h.item_code
                         and bat_no = h.bat_no
                         and txn_code = h.txn_code
                         and doc_no = h.doc_no
                         and updt_dt = h.updt_dt
                     )
  ),
case_31 as
  (select s1.item_code,
          nvl(s1.bat_no, w1.bat_no) bat_no,
          nvl(s1.txn_code, w1.txn_code) txn_code,
          nvl(s1.doc_no, w1.doc_no) doc_no,
          nvl(s1.updt_dt, w1.doc_dt) updt_dt,
          case when s1.bat_no = w1.bat_no then 'Y' else 'N' end batch_yn,
          case when w1.txn_code is not null
                and w1.doc_no is not null
                and w1.doc_dt is not null
               then 'case 31'
          end refers_to
     from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
             from case_2
            where refers_to is null
          ) s1
          left outer join
          (select i1.item_code, h1.txn_code, h1.doc_no, h1.doc_dt, b1.bat_no
             from w_in_table_item_1 i1
                  inner join
                  w_in_table_head_1 h1
               on i1.h1_sys_id = h1.h1_sys_id
                  inner join
                  w_in_table_batch_1 b1
               on h1.txn_code = b1.txn_code
              and h1.doc_no = b1.doc_no
          ) w1
       on s1.item_code = w1.item_code
  ),
case_32 as
  (select s2.item_code,
          nvl(s2.bat_no, w2.bat_no) bat_no,
          nvl(s2.txn_code, w2.txn_code) txn_code,
          nvl(s2.doc_no, w2.doc_no) doc_no,
          nvl(s2.updt_dt, w2.doc_dt) updt_dt,
          case when s2.bat_no = w2.bat_no then 'Y' else 'N' end batch_yn,
          case when w2.txn_code is not null
                and w2.doc_no is not null
                and w2.doc_dt is not null
               then 'case 32'
          end refers_to
     from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
             from case_2
            where refers_to is null
          ) s2
          left outer join
          (select i2.item_code, h2.txn_code, h2.doc_no, h2.doc_dt, b2.bat_no
             from w_in_table_item_2 i2
                  inner join
                  w_in_table_head_2 h2
               on i2.h2_sys_id = h2.h2_sys_id
                  inner join
                  w_in_table_batch_2 b2
               on i2.i2_sys_id = b2.i2_sys_id
          ) w2
       on s2.item_code = w2.item_code
  ),
case_33 as
  (select s3.item_code,
          w3.bat_no,
          nvl(s3.txn_code, w3.txn_code) txn_code,
          nvl(s3.doc_no, w3.doc_no) doc_no,
          nvl(s3.updt_dt, w3.doc_dt) updt_dt,
          case when s3.bat_no = w3.bat_no then 'Y' else 'N' end batch_yn,
          case when w3.txn_code is not null
                and w3.doc_no is not null
                and w3.doc_dt is not null
               then 'case 33'
          end refers_to
     from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
             from case_2
            where refers_to is null
          ) s3
          left outer join
          (select i3.item_code, h3.txn_code, h3.doc_no, h3.doc_dt, i3.bat_no
             from w_in_table_item_3 i3
                  inner join
                  w_in_table_head_3 h3
               on i3.h3_sys_id = h3.h3_sys_id
          ) w3
       on s3.item_code = w3.item_code
  )
select item_code, bat_no, txn_code, doc_no, boe_dt, batch_yn
  from case_1
 where refers_to is not null
union all
select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
  from case_2
 where refers_to is not null
union all
select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
  from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn,
               row_number() over (partition by item_code order by updt_dt desc) rn
          from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_31
                 where refers_to is not null
                union all
                select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_32
                 where refers_to is not null
                union all
                select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_33
                 where refers_to is not null
               )
       )
 where rn = 1
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | BOE_DT           | BATCH_YN |
----------|--------|----------|--------|------------------|----------|
I1        | B1     | T1       | 1234   | JANUARY 3, 2015  | Y        |
I1        | B30    | T30      | 7890   | FEBRUARY 5, 2015 | N        |
I2        | B60    | T60      | 1234   | JANUARY 3, 2015  | N        |
I3        | B70    | T70      | 1234   | FEBRUARY 1, 2015 | Y        |
I4        | -      | T90      | 1234   | JANUARY 5, 2015  | N        |
I6        | -      | T40      | 1234   | AUGUST 5, 2015   | N        |
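Not part of the thread: the core move in the query above is NVL-style coalescing of a partially filled stock row against candidate source rows, then keeping only the most recent candidate per item (the ROW_NUMBER ... rn = 1 step). A rough Python sketch of that idea follows; the field names mirror the tables above, but the data and the `complete` helper are illustrative only and do not reproduce the full case_1/case_2/case_3x precedence.

```python
# Sketch: fill missing fields of a stock row from a candidate row
# (like NVL), then keep the candidate with the latest date
# (like ROW_NUMBER() OVER (PARTITION BY item_code ORDER BY updt_dt DESC)).
from datetime import date

FIELDS = ("item_code", "bat_no", "txn_code", "doc_no", "updt_dt")

def coalesce(a, b):
    return a if a is not None else b  # behaves like Oracle NVL

def complete(stock, candidates):
    merged = [
        {k: coalesce(stock.get(k), c.get(k)) for k in FIELDS}
        for c in candidates
    ]
    return max(merged, key=lambda m: m["updt_dt"])  # latest date wins

stock = {"item_code": "I4", "bat_no": "B80"}  # incomplete stock row
candidates = [
    {"item_code": "I4", "txn_code": "T80", "doc_no": "1234", "updt_dt": date(2015, 1, 3)},
    {"item_code": "I4", "txn_code": "T90", "doc_no": "1234", "updt_dt": date(2015, 1, 5)},
]
print(complete(stock, candidates)["txn_code"])  # T90
```

The later T90 row wins the tie for I4, which matches why the expected output keeps T90 rather than T80 for that item.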
Regards,
Etbin