Nth salary using the analytic function
I use the query below to calculate the second-highest salary per department, along with empno and deptno.
Is it possible to get the same result with another query that uses only analytic/window functions, without the join condition? Is it possible to get the desired output with window functions alone?
SELECT e.empno,
       e.deptno,
       tmp.sal AS second_highest_salary
FROM   emp e,
       (SELECT empno,
               deptno,
               sal,
               DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal) AS rnk
        FROM   emp
       ) tmp
WHERE  tmp.deptno = e.deptno
AND    tmp.rnk = 2
EMPNO DEPTNO SAL
---------- ---------- ----------
7934 10 2450
7782 10 2450
7839 10 2450
7876 20 1100
7369 20 1100
7902 20 1100
7788 20 1100
7566 20 1100
7900 30 1250
7844 30 1250
7654 30 1250
7521 30 1250
7499 30 1250
7698 30 1250
Here's my solution:
SELECT empno,
       deptno,
       FIRST_VALUE (sal) OVER (PARTITION BY deptno ORDER BY sal DESC)
FROM  (
       SELECT empno,
              deptno,
              DECODE (DENSE_RANK () OVER (PARTITION BY deptno ORDER BY sal DESC), 1, -sal, sal) sal
       FROM   emp
      )
/
EMPNO      DEPTNO     FIRST_VALUE(SAL) OVER (PARTITION BY DEPTNO ORDER BY SAL DESC)
---------- ---------- -----------------------------------------------------
7782       10         2450
7934       10         2450
7839       10         2450
7566       20         2975
7876       20         2975
7369       20         2975
7788       20         2975
7902       20         2975
7499       30         1600
7844       30         1600
7654       30         1600
7521       30         1600
7900       30         1600
7698       30         1600
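The DENSE_RANK approach translates directly to any database with window functions. As a runnable sketch, here it is against SQLite via Python's sqlite3 module (SQLite rather than Oracle, and a small hypothetical subset of the scott.emp data, so the setup is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER, sal INTEGER)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [(7839, 10, 5000), (7782, 10, 2450), (7934, 10, 1300),
     (7788, 20, 3000), (7566, 20, 2975), (7369, 20, 800)],
)

# Rank salaries per department from highest to lowest, keep rank 2.
rows = conn.execute("""
    SELECT empno, deptno, sal
    FROM (SELECT empno, deptno, sal,
                 DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS rnk
          FROM emp)
    WHERE rnk = 2
""").fetchall()
print(sorted(rows))  # [(7566, 20, 2975), (7782, 10, 2450)]
```

Ordering the analytic by `sal DESC` gives the second-highest; the question's query ordered ascending, which is why its output shows the second-lowest salary per department.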
Tags: Database
Similar Questions
-
SQL using the analytic function
Hi all, I want help creating an SQL query to retrieve the data described below:
I have a sample table containing the data below:
ID  DSC  STATUS
1   T1   DISABLED
2   T2   ACTIVE
3   T3   SUCCESS
4   T4   DISABLED
What I want to do is select all the rows with ACTIVE status from the table, but if there is no ACTIVE status, the query should give me the last row with DISABLED status.
Can I do this in a single query, for example by using an analytic function? If yes, can you help me build the query?
Kind regards
Raluce
Something like that?
I had to fix it.
with testdata as (
  select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
  select 2 id, 'T2' dsc, 'ACTIVE' status from dual union all
  select 3 id, 'T3' dsc, 'SUCCESS' status from dual union all
  select 4 id, 'T4' dsc, 'DISABLED' status from dual
)
select
  id,
  dsc,
  status
from testdata
where
  status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
      then 'ACTIVE'
      else 'DISABLED'
    end
  and (
    id in (select id from testdata where status = 'ACTIVE')
    or
    id = (select max(id) from testdata where status = 'DISABLED')
  )

ID DSC STATUS
2  T2  ACTIVE
Maybe it's more efficient
select
  id,
  dsc,
  status
from testdata
where
  status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
      then 'ACTIVE'
      else 'DISABLED'
    end
  and id = (
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
      then id
      else (select max(id) from testdata where status = 'DISABLED')
    end
  )
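As a runnable sketch of the same ACTIVE-or-fallback logic (on SQLite via Python's sqlite3 module rather than Oracle, using the testdata values from this thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testdata (id INTEGER, dsc TEXT, status TEXT)")
conn.executemany("INSERT INTO testdata VALUES (?, ?, ?)",
                 [(1, "T1", "DISABLED"), (2, "T2", "ACTIVE"),
                  (3, "T3", "SUCCESS"), (4, "T4", "DISABLED")])

query = """
    SELECT id, dsc, status
    FROM testdata
    WHERE status = CASE WHEN (SELECT COUNT(*) FROM testdata
                              WHERE status = 'ACTIVE') > 0
                        THEN 'ACTIVE' ELSE 'DISABLED' END
      AND (id IN (SELECT id FROM testdata WHERE status = 'ACTIVE')
           OR id = (SELECT MAX(id) FROM testdata WHERE status = 'DISABLED'))
"""
r1 = conn.execute(query).fetchall()
print(r1)  # [(2, 'T2', 'ACTIVE')]

# Remove the ACTIVE row: the same query now falls back to the last DISABLED row.
conn.execute("DELETE FROM testdata WHERE status = 'ACTIVE'")
r2 = conn.execute(query).fetchall()
print(r2)  # [(4, 'T4', 'DISABLED')]
```

Running the query twice, before and after deleting the ACTIVE row, shows both branches of the CASE doing their job.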
Post edited by: chris227 (correction)
Post edited by: chris227 (extended)
-
Oracle 11g Release 2
I'm assuming that the best solution is the use of analytical functions.
create table test3
( part_type_id varchar2(50)
, group_id number
, part_desc_id number
, part_cmt varchar2(50)
)
/
insert into test3 values( 'ABC123',1,10,'comment1');
insert into test3 values( 'ABC123',1,10,'comment2');
insert into test3 values( 'ABC123',2,15,'comment1');
insert into test3 values( 'ABC123',2,15,'comment2');
insert into test3 values( 'EFG123',25,75,'comment3');
insert into test3 values( 'EFG123',25,75,'comment4');
insert into test3 values( 'EFG123',25,75,'comment5');
insert into test3 values( 'XYZ123',1,10,'comment6');
insert into test3 values( 'XYZ123',2,15,'comment7');
commit;
select * from test3;

PART_TYPE_ID         GROUP_ID   PART_DESC_ID PART_CMT
-------------------- ---------- ------------ --------------------
ABC123               1          10           comment1
ABC123               1          10           comment2
ABC123               2          15           comment1
ABC123               2          15           comment2
EFG123               25         75           comment3
EFG123               25         75           comment4
EFG123               25         75           comment5
XYZ123               1          10           comment6
XYZ123               2          15           comment7

9 rows selected.

Desired output:

PART_TYPE_ID         GROUP_ID   PART_DESC_ID PART_CMT
-------------------- ---------- ------------ --------------------
ABC123               1          10           comment1
ABC123               2          15           comment1
XYZ123               1          10           comment6
XYZ123               2          15           comment7

RULE: where one part_type_id has multiple (2 or more distinct) combinations of group_id/part_desc_id.
NOTE: There are about 12 columns in the table; for brevity I only included 4.
Post edited by: orclrunner (updated desired output and rule)
Hello
Here's one way:
WITH got_d_count AS
(
    SELECT part_type_id, group_id, part_desc_id
    ,      MIN (part_cmt) AS min_part_cmt
    ,      COUNT (*) OVER (PARTITION BY part_type_id) AS d_count
    FROM   test3
    GROUP BY part_type_id, group_id, part_desc_id
)
SELECT DISTINCT
       part_type_id, group_id, part_desc_id, min_part_cmt
FROM   got_d_count
WHERE  d_count > 1
;
Output:
PART_TYPE_ID GROUP_ID PART_DESC_ID MIN_PART_CMT
------------ -------- ------------ ------------
ABC123 1 10 comment1
ABC123 2 15 comment1
XYZ123 1 10 comment6
XYZ123 2 15 comment7
Analytic functions, such as COUNT and MIN, have aggregate counterparts, and in many cases either form can give the same results. Use the analytic version when each row of output corresponds to exactly 1 row of input, and the aggregate/GROUP BY version when each row of output corresponds to a group of 1 or more input rows. In this problem, each row of output appears to represent a group of input rows having the same group_id, part_type_id, and part_desc_id (I'm just guessing; this was never stated), so I used GROUP BY to get 1 row of output for each such group of input rows.
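The mix described above -- aggregate GROUP BY first, then an analytic COUNT over the resulting groups -- can be sketched on SQLite through Python's sqlite3 module (an assumption: SQLite here stands in for Oracle, and the data is a trimmed copy of the thread's test3 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test3 (part_type_id TEXT, group_id INTEGER,
                                    part_desc_id INTEGER, part_cmt TEXT)""")
conn.executemany("INSERT INTO test3 VALUES (?, ?, ?, ?)", [
    ("ABC123", 1, 10, "comment1"), ("ABC123", 1, 10, "comment2"),
    ("ABC123", 2, 15, "comment1"), ("ABC123", 2, 15, "comment2"),
    ("EFG123", 25, 75, "comment3"), ("EFG123", 25, 75, "comment4"),
    ("XYZ123", 1, 10, "comment6"), ("XYZ123", 2, 15, "comment7"),
])

# The window COUNT runs after grouping, so d_count is the number of
# distinct group_id/part_desc_id combinations per part_type_id.
rows = conn.execute("""
    WITH got_d_count AS (
        SELECT part_type_id, group_id, part_desc_id,
               MIN(part_cmt) AS min_part_cmt,
               COUNT(*) OVER (PARTITION BY part_type_id) AS d_count
        FROM test3
        GROUP BY part_type_id, group_id, part_desc_id
    )
    SELECT part_type_id, group_id, part_desc_id, min_part_cmt
    FROM got_d_count
    WHERE d_count > 1
""").fetchall()
print(sorted(rows))
```

EFG123 has only one group_id/part_desc_id combination, so its d_count is 1 and it drops out, exactly as in the Oracle output.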
-
Using the analytic function to get the right output
Hello all;
I have the following sample data below:
create table temp_one
( id number(30)
, placeid varchar2(400)
, issuedate date
, person varchar2(400)
, failures number(30)
, primary key(id)
);
insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
This is the output I want:
placeid issueperiod             failures
NY      02/28/2011 - 03/06/2011 10
Mexico  02/28/2011 - 03/06/2011 3
Mexico  03/14/2011 - 03/20/2011 12
London  03/14/2011 - 03/20/2011 8
Any help is appreciated. I'll post my attempt as soon as I can think of a good logic for this...
Hello
user13328581 wrote:
... Please note, I'm still learning how to use analytical functions.
It doesn't matter; analytic functions will not help in this problem. The SUM aggregate function is all you need.
But what do you need to GROUP BY? What will each row of the result represent? A placeid? Yes, each row will represent one placeid, but divided further: you want a separate row of output for each placeid and each week, so you'll want to GROUP BY placeid and the week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 in separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures. This produces the output you posted from the sample data you posted:
SELECT    placeid
,         TO_CHAR ( TRUNC (issuedate, 'IW'), 'MM/DD/YYYY' )
          || ' - '
          || TO_CHAR ( TRUNC (issuedate, 'IW') + 6, 'MM/DD/YYYY' ) AS issueperiod
,         SUM (failures) AS sumfailures
FROM      temp_one
GROUP BY  placeid
,         TRUNC (issuedate, 'IW')
;
You could use a subquery to compute TRUNC (issuedate, 'IW') just once. The code would be about as complicated, the efficiency probably would not improve substantially, and the results would be the same.
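The TRUNC(issuedate, 'IW') trick can be sketched on SQLite through Python's sqlite3 module, where the date modifiers `'-6 days', 'weekday 1'` compute the Monday of the ISO week containing a date (SQLite standing in for Oracle is an assumption; the data is the thread's temp_one rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE temp_one (id INTEGER, placeid TEXT,
                issuedate TEXT, person TEXT, failures INTEGER)""")
conn.executemany("INSERT INTO temp_one VALUES (?, ?, ?, ?, ?)", [
    (1, "NY", "2011-03-04", "John", 3),
    (2, "NY", "2011-03-03", "Adam", 7),
    (3, "Mexico", "2011-03-04", "Wendy", 3),
    (4, "Mexico", "2011-03-14", "Gerry", 3),
    (5, "Mexico", "2011-03-15", "Zick", 9),
    (6, "London", "2011-03-16", "Mike", 8),
])

# Group by place and by Monday-of-week, summing failures per group.
rows = conn.execute("""
    SELECT placeid,
           date(issuedate, '-6 days', 'weekday 1') AS week_start,
           SUM(failures) AS sumfailures
    FROM temp_one
    GROUP BY placeid, week_start
    ORDER BY week_start, placeid
""").fetchall()
print(rows)
```

This reproduces the four requested groups: NY and Mexico in the week of 2011-02-28, and Mexico and London in the week of 2011-03-14.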
-
Using the analytical function.
Hello
I have this scenario.
We have two locations for a given item_id: primary and bulk.
with t as (
  select 21009 item_id, 9 primary_available, 450 max_qty, 100 bulk_available, 12122 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 2775 bulk_available, 8704 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 524 bulk_available, 15614 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 3300 bulk_available, 15654 bulk_locator_id from dual
)
select t.* from t;
I'm trying to get a SELECT statement out of this, where I replenish the primary quantity from the bulk locations, BUT taking from the smallest bulk location first. Once the primary is filled up to max_qty, I shouldn't take any more product.
Is there an analytic function that would do this?
That's the max I could come up with.
So, in this scenario, I want to replenish 100 from bulk_locator_id 12122 and 341 from bulk_locator_id 15614. That's all; zero from the other locations (bulk_locator_id). If the question is not clear, please let me know.
with t as (
  select 21009 item_id, 9 primary_available, 450 max_qty, 100 bulk_available, 12122 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 2775 bulk_available, 8704 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 524 bulk_available, 15614 bulk_locator_id from dual union all
  select 21009 item_id, 9 primary_available, 450 max_qty, 3300 bulk_available, 15654 bulk_locator_id from dual
)
select t.*,
       max_qty - (primary_available + SUM(bulk_available) over (PARTITION BY item_id ORDER BY bulk_available)) replen_this_much
from t;
Published by: RPuttagunta on September 11, 2009 16:23
Hello
Thanks for posting the sample data.
It would be helpful if you also posted the output you want. Is it this?

ITEM_ID PRIMARY_AVAILABLE MAX_QTY BULK_AVAILABLE BULK_LOCATOR_ID REPLEN_THIS_MUCH
------- ----------------- ------- -------------- --------------- ----------------
21009   9                 450     100            12122           100
21009   9                 450     524            15614           341
21009   9                 450     2775           8704            0
21009   9                 450     3300           15654           0
If so, you can get to this:
SELECT    t.*
,         GREATEST ( 0
          , LEAST ( TO_NUMBER (bulk_available)
                  , TO_NUMBER (max_qty) - ( TO_NUMBER (primary_available)
                                          + NVL ( SUM (TO_NUMBER (bulk_available))
                                                      OVER ( PARTITION BY item_id
                                                             ORDER BY TO_NUMBER (bulk_available)
                                                             ROWS BETWEEN UNBOUNDED PRECEDING
                                                                      AND 1 PRECEDING )
                                                , 0 )
                                          )
                  )
          ) AS replen_this_much
FROM      t
ORDER BY  item_id
,         TO_NUMBER (bulk_available)
;
You should really store your numbers in NUMBER columns.
You essentially posted the analytic function you need. The problem was just wrapping that analytic function (or something very close to it) in LEAST and GREATEST, so that the replen_this_much column is always between 0 and TO_NUMBER (bulk_available).
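The GREATEST/LEAST wrapper around the running SUM can be sketched on SQLite via Python's sqlite3 module, where the two-argument scalar MAX() and MIN() play the roles of GREATEST and LEAST (SQLite in place of Oracle is an assumption; the data is the thread's `t`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t (item_id INTEGER, primary_available INTEGER,
                max_qty INTEGER, bulk_available INTEGER, bulk_locator_id INTEGER)""")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?, ?)", [
    (21009, 9, 450, 100, 12122),
    (21009, 9, 450, 2775, 8704),
    (21009, 9, 450, 524, 15614),
    (21009, 9, 450, 3300, 15654),
])

# The frame '1 PRECEDING' excludes the current row, so each row sees
# how much the *smaller* bulk locations have already contributed.
rows = conn.execute("""
    SELECT bulk_locator_id,
           MAX(0, MIN(bulk_available,
                      max_qty - (primary_available
                                 + COALESCE(SUM(bulk_available) OVER (
                                       PARTITION BY item_id
                                       ORDER BY bulk_available
                                       ROWS BETWEEN UNBOUNDED PRECEDING
                                                AND 1 PRECEDING), 0))
           )) AS replen_this_much
    FROM t
    ORDER BY item_id, bulk_available
""").fetchall()
print(rows)  # [(12122, 100), (15614, 341), (8704, 0), (15654, 0)]
```

The clamp keeps each row's take between 0 and its own bulk_available, so once the 441-unit shortfall is covered, the remaining locations contribute zero.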
-
Using an analytic function to sort without showing that column in the result
Hello
We use Oracle 11.2.
How to use Oracle Analytics to sort the output of a query without showing this column in the result?
Here's my query:
Select distinct nvl(SRC_CHR_NM,'0') || '#' || NVL(EXPL_SRC,'0') || '#' || NVL(DIM_NM,'0') || '#' || NVL(DIM_CHR_NM,'0') || '#' || NVL(DIM_CHR_ABR,'0') || '#' ||
Decode (InStr (NVL(SRC_COL_NM,'0'), 'trim('), 1, Replace (NVL(SRC_COL_NM,'0'), 'trim(', 'trim(PRM_DTL.'), 'PRM_DTL.' || NVL(SRC_COL_NM,'0')) AS ALLOFIT,
DIM_NM
from EXPL_CONFIG where SBJ_AREA_CD = 'PRMDTL'
I want to sort by DIM_NM, but I don't want to show it; I want only the ALLOFIT column to appear in the output.
Something like: row_number() over (order by DIM_NM desc).
Thank you!
Hello
If you use SELECT DISTINCT, then you can only ORDER BY things in the SELECT clause. (Do you really need SELECT DISTINCT? Often it's just an inefficient way to remove rows that should not be there at all.)
Do the SELECT DISTINCT in a subquery, and do the ORDER BY (and nothing else) in the main query.
I hope that answers your question.
If it doesn't, please post a small set of sample data (CREATE TABLE and INSERT statements for only the relevant columns) and also post the results you want from that data.
Explain, using specific examples, how you get these results from these data.
Always say what version of Oracle you are using (for example, 11.2.0.2.0).
See the FAQ forum: https://forums.oracle.com/message/9362002#9362002
-
update the table by using the analytic function
I have a table like this one:
I want a single statement that will update seq_no with a unique sequence number within each test value.
create table fred (test varchar2(50), seq_no number(18,0));
insert into fred (test) values ('A');
insert into fred (test) values ('A');
insert into fred (test) values ('A');
insert into fred (test) values ('B');
insert into fred (test) values ('C');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('E');
insert into fred (test) values ('E');
insert into fred (test) values ('E');
for example like this:
I am sure this should be an easy thing, but for the life of me I can't figure out how to do it in a single statement. I should know this... hangs head in shame.
SQL> select test, row_number() over (partition by test order by test) from fred
  2  /

TEST  ROW_NUMBER()OVER(PARTITIONBYTESTORDERBYTEST)
----- --------------------------------------------
A     1
A     2
A     3
B     1
C     1
D     1
D     2
D     3
D     4
E     1
E     2
E     3
Use MERGE
SQL> merge into fred
  2  using (select rowid rid, test, row_number() over (partition by test order by test) sno
  3           from fred) fred1
  4  on (fred.rowid = fred1.rid)
  5  when matched then update
  6  set seq_no = sno
  7  /

12 rows merged.

SQL> select * from fred
  2  /

TEST  SEQ_NO
----- ------
A     1
A     2
A     3
B     1
C     1
D     1
D     4
D     3
D     2
E     1
E     2
E     3

12 rows selected.
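MERGE is Oracle-specific. On a database without it, the same per-value numbering can be sketched as a correlated UPDATE; here on SQLite via Python's sqlite3 module, keyed on rowid (an assumption: this reproduces the effect of the MERGE, not its syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fred (test TEXT, seq_no INTEGER)")
conn.executemany("INSERT INTO fred (test) VALUES (?)",
                 [("A",), ("A",), ("A",), ("B",), ("C",),
                  ("D",), ("D",), ("D",), ("D",), ("E",), ("E",), ("E",)])

# Number the rows within each test value by physical rowid -- the same
# effect as merging in ROW_NUMBER() OVER (PARTITION BY test).
conn.execute("""
    UPDATE fred
    SET seq_no = (SELECT COUNT(*) FROM fred f2
                  WHERE f2.test = fred.test AND f2.rowid <= fred.rowid)
""")
rows = conn.execute("SELECT test, seq_no FROM fred ORDER BY rowid").fetchall()
print(rows)
```

Each row's seq_no is the count of rows with the same test value and a rowid no greater than its own, i.e. 1, 2, 3, ... within each group.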
-
How to achieve using the analytical functions - please help
Oracle version: 10g
999 means ALL the values for this key.
with input_parameters as (select 10 filter_key , '10ACCC' filter_value from dual union all select 50 filter_key ,'10ACCC0001' filter_value from dual union all select 60 filter_key , 'PIP' filter_value from dual union all select 70 filter_key , 'A' filter_value from dual) select * from input_parameters;
with profile_search as( select 100 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 100 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all select 100 profile_id , 50 filter_key , '10ACCC0002' filter_value from dual union all select 100 profile_id , 60 filter_key , '999' filter_value from dual union all select 100 profile_id , 70 filter_key , '999' filter_value from dual union all select 101 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 101 profile_id , 50 filter_key , '10ACCC001' filter_value from dual union all select 101 profile_id , 60 filter_key , 'PIP' filter_value from dual union all select 101 profile_id , 70 filter_key , '999' filter_value from dual union all select 102 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 102 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all select 102 profile_id , 60 filter_key , 'PIP' filter_value from dual union all select 102 profile_id , 70 filter_key , 'A' filter_value from dual) select filter_key , wm_concat(filter_value) from profile_search group by profile_id, filter_key ;
KEY 70 has the highest weight; I need to identify the profile that matches the input parameters.
102 is the first match because it matches exactly.
101 is the second match because 999 can match any value.
100 is the third match.
KEY 60 has the next highest weight, and KEY 50 the next highest after that.
required results:
profile_id: 102
Published by: devarade on January 19, 2010 20:01
I guess that there is a typo in your sample:
select 101 profile_id, 50 filter_key, '10ACCC001' filter_value from dual union all
must be:
select 101 profile_id, 50 filter_key, '10ACCC0001' filter_value from dual union all
Then:
with input_parameters as ( select 10 filter_key , '10ACCC' filter_value from dual union all select 50 filter_key ,'10ACCC0001' filter_value from dual union all select 60 filter_key , 'PIP' filter_value from dual union all select 70 filter_key , 'A' filter_value from dual ), profile_search as( select 100 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 100 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all select 100 profile_id , 50 filter_key , '10ACCC0002' filter_value from dual union all select 100 profile_id , 60 filter_key , '999' filter_value from dual union all select 100 profile_id , 70 filter_key , '999' filter_value from dual union all select 101 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 101 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all select 101 profile_id , 60 filter_key , 'PIP' filter_value from dual union all select 101 profile_id , 70 filter_key , '999' filter_value from dual union all select 102 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all select 102 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all select 102 profile_id , 60 filter_key , 'PIP' filter_value from dual union all select 102 profile_id , 70 filter_key , 'A' filter_value from dual ) select profile_id, sum(direct_match_cnt) || ' direct matches out of ' || (select count(*) from input_parameters) match from ( select b.profile_id, b.filter_key, b.filter_value, sum(case when b.filter_value = a.filter_value then 1 else 0 end) direct_match_cnt from input_parameters a, profile_search b where b.filter_key = a.filter_key and (b.filter_value = a.filter_value or b.filter_value = '999') group by b.profile_id, b.filter_key, b.filter_value ) x group by profile_id having count(*) = (select count(*) from input_parameters) order by sum(direct_match_cnt) desc /

PROFILE_ID MATCH
---------- ----------------------------------------
102        4 direct matches out of 4
101        3 direct matches out of 4
100        2 direct matches out of 4

SQL>
SY.
-
How to use Group by in the analytic function
I need to return the department that has the minimum salary, in one row. It must be done with an analytic function, but I have a problem with GROUP BY: I can't use min() without GROUP BY.
Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) rn from emp group by deptno) WHERE rn < 20 order by deptno;
Published by: senza on 6.11.2009 16:09
Hello
senza wrote:
I need to return the department that has the minimum salary, in one row. It must be done with an analytic function.
So it has to be an analytic function? This sounds like homework.
The best way to get these results is with an aggregate function, not an analytic one:
SELECT MIN (deptno) KEEP (DENSE_RANK FIRST ORDER BY sal) AS dept_with_lowest_sal FROM scott.emp ;
Note that you do not need a subquery.
This can be modified if, for example, you want the department with the lowest sal for each job. But if your assignment is to use an analytic function, then that's what you have to do.
but I have problem in group by. I can't use min() without group by.
Of course, you can use MIN without GROUP BY. Almost all aggregate functions (including MIN) have analytic equivalents.
However, in this problem, you don't need to. The best analytic approach uses only RANK, not MIN. If you ORDER BY sal, the rows with rank = 1 will have the minimum salary.
Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) rn from emp group by deptno) WHERE rn < 20 order by deptno;
Try selecting plain old sal instead of MIN (sal), and get rid of the GROUP BY clause.
Adding ROWNUM to the ORDER BY clause makes RANK return the same results as ROW_NUMBER: whenever there is a tie on sal, the output will still be distinct numbers. Which row gets the lower number is quite arbitrary, and not necessarily the same every time you run the query. For example, MARTIN and WARD have exactly the same salary, 1250. The query you posted would assign rn = 4 to one of them and rn = 5 to the other. Who gets 4? It's a toss-up. It could be MARTIN the first time you try, and WARD the next. (In fact, in a very small table like scott.emp, it probably will be consistent, but it is still arbitrary.) If this is what you want, it would be clearer and simpler just to use ROW_NUMBER instead of RANK.
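The RANK-vs-ROW_NUMBER distinction on MARTIN and WARD's tied 1250 salary can be sketched on SQLite via Python's sqlite3 module (a hypothetical subset of scott.emp; rowid serves as the deterministic tie-breaker that ROWNUM plays in the Oracle query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("SMITH", 800), ("JAMES", 950), ("WARD", 1250), ("MARTIN", 1250)])

# RANK gives tied rows the same number; ROW_NUMBER with a tie-breaker
# always gives distinct, deterministic numbers.
rows = conn.execute("""
    SELECT ename, sal,
           RANK()       OVER (ORDER BY sal)        AS rnk,
           ROW_NUMBER() OVER (ORDER BY sal, rowid) AS rn
    FROM emp
    ORDER BY sal, rowid
""").fetchall()
print(rows)
```

WARD and MARTIN share rank 3, but receive row numbers 3 and 4.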
-
A question about an analytic function used with the GROUP BY clause
Hi all
I created the following table named myenterprise
CITY       STOREID    MONTH_NAME TOTAL_SALES
---------- ---------- ---------- ----------------------
paris      id1        January    1000
paris      id1        March      7000
paris      id1        April      2000
paris      id2        November   2000
paris      id3        January    5000
london     id4        Janaury    3000
london     id4        August     6000
london     id5        September  500
london     id5        November   1000
If I want to find the total sales by city, I'll run the following query:
SELECT city, SUM(total_sales) AS TOTAL_SALES_PER_CITY FROM myenterprise GROUP BY city ORDER BY city, TOTAL_SALES_PER_CITY;
That works very well and produces the expected result, i.e.
CITY       TOTAL_SALES_PER_CITY
---------- ----------------------
london     10500
paris      17000
Now, in one of my SQL books (Mastering Oracle SQL) I found another method using SUM, but this time as an analytic function. Here's what the book suggests as an alternative:
SELECT city, SUM(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY FROM myenterprise GROUP BY city ORDER BY city, TOTAL_SALES_PER_CITY;
I know that analytic functions are executed after the GROUP BY clause has been processed completely and that, unlike regular aggregate functions, they return their result for each row belonging to the partitions specified in the partition clause (if a partition clause is defined).
Now my problem is that I do not understand why we have to use two SUM functions. If we use only one, i.e.
SELECT city, SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY FROM myenterprise GROUP BY city ORDER BY city, TOTAL_SALES_PER_CITY;
this generates the following error:
Error starting at line 2 in command: SELECT city, SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY FROM myenterprise GROUP BY city ORDER BY city, TOTAL_SALES_PER_CITY
Error at Command Line:2 Column:11
Error report: SQL Error: ORA-00979: not a GROUP BY expression
00979. 00000 - "not a GROUP BY expression"
The error is reported for line 2, column 11, that is, for the expression SUM(total_sales). Well, it's true that total_sales does not appear in the GROUP BY clause, but this should not be a problem: it is used in an analytic function, so it is evaluated after the GROUP BY clause.
So here's my question:
Why use SUM (SUM (total_sales)) instead of SUM (total_sales)?
Thanks in advance!
:)
In case you are interested, that's my definition of the table:
DROP TABLE myenterprise;
CREATE TABLE myenterprise(
  city VARCHAR2(10),
  storeid VARCHAR2(10),
  month_name VARCHAR2(10),
  total_sales NUMBER);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('paris', 'id1', 'January', 1000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('paris', 'id1', 'March', 7000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('paris', 'id1', 'April', 2000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('paris', 'id2', 'November', 2000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('paris', 'id3', 'January', 5000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('london', 'id4', 'Janaury', 3000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('london', 'id4', 'August', 6000);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('london', 'id5', 'September', 500);
INSERT INTO myenterprise(city, storeid, month_name, total_sales) VALUES ('london', 'id5', 'November', 1000);
Edited by: dariyoosh on April 9, 2009 04:51
It is clear that the analytic function is redundant here...
You can even use AVG or any other analytic function:
SQL> SELECT city,
  2         avg(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
  3  FROM myenterprise
  4  GROUP BY city
  5  ORDER BY city, TOTAL_SALES_PER_CITY;

CITY       TOTAL_SALES_PER_CITY
---------- --------------------
london     10500
paris      17000
-
I have two tables: employees and their departments. I'm calculating the total salary by department and the total salary of the entire company. I know I have to use the SUM function, but I can only calculate the department and company totals separately. I need to get this result:
DEPT_NAME  DEPT_TOTAL_SALARY COMPANY_TOTAL_SALARY
RESEARCH   10875             29025
SALES      9400              29025
ACCOUNTING 8750              29025
This is my code:
SELECT department_name, SUM(salary) as total_salary FROM employee, department WHERE employee.department_id = department.department_id GROUP BY department_name;
SELECT SUM(salary) FROM employee;
Can somebody help please? Thank you in advance.
Published by: user13675672 on January 30, 2011 14:29
Published by: user13675672 on January 30, 2011 14:31
Hello
Something like:
SELECT dname, dept_tot_sal, SUM (dept_tot_sal) OVER () comp_tot_sal FROM (SELECT dname, SUM (sal) dept_tot_sal FROM dept, emp WHERE dept.deptno = emp.deptno GROUP BY dname);
There might be a smarter way, without the re-select.
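Peter's pattern -- aggregate per department in a subquery, then an analytic SUM() OVER () for the company total -- can be sketched on SQLite via Python's sqlite3 module (SQLite in place of Oracle is an assumption; the dept/emp data below is hypothetical, sized to reproduce the thread's 29025 total):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
conn.execute("CREATE TABLE emp (deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)",
                 [(10, "ACCOUNTING"), (20, "RESEARCH"), (30, "SALES")])
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(10, 8750), (20, 10875), (30, 9400)])

# SUM() OVER () with an empty window spec totals over all rows of the
# (already grouped) subquery result.
rows = conn.execute("""
    SELECT dname, dept_tot_sal,
           SUM(dept_tot_sal) OVER () AS comp_tot_sal
    FROM (SELECT dname, SUM(sal) AS dept_tot_sal
          FROM dept JOIN emp ON dept.deptno = emp.deptno
          GROUP BY dname)
    ORDER BY dname
""").fetchall()
print(rows)
```

Every row carries its own department total plus the shared 29025 company total, with no second query.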
Concerning
Peter
Analytic functions:
http://download.Oracle.com/docs/CD/E11882_01/server.112/e17118/functions004.htm
-
How to define the condition in the analytic function
Oracle 10g version
Hi all
I have the following data samples:
Examples of data
WITH data AS
(
    SELECT 100 case_id, to_date('01-feb-2015','dd-mon-yyyy') next_date, to_date('01-jan-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-mar-2015','dd-mon-yyyy') next_date, to_date('01-feb-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-apr-2015','dd-mon-yyyy') next_date, to_date('01-may-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-jun-2015','dd-mon-yyyy') next_date, to_date('01-apr-2015','dd-mon-yyyy') crt_date FROM dual
)
SELECT d.*
,      MIN (next_date) OVER (PARTITION BY case_id) AS min_dt_analytical
,      (
         SELECT MIN (next_date)
         FROM   data dd
         WHERE  dd.case_id   = d.case_id
         AND    dd.next_date > d.crt_date
       ) AS min_dt_sub_query
FROM   data d
;
My question is that I get min_dt_sub_query using a subquery, but I want to use an analytic function instead, so I created the min_dt_analytical column. However, I don't know how to express the condition AND dd.next_date > crt_date analytically, so that I can get the same result as min_dt_sub_query. Thanks in advance.
Concerning
MIT
Do not know if I understood your needs... but... something like that?
WITH data AS
(
    SELECT 100 case_id, to_date('01-feb-2015','dd-mon-yyyy') next_date, to_date('01-jan-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-mar-2015','dd-mon-yyyy') next_date, to_date('01-feb-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-apr-2015','dd-mon-yyyy') next_date, to_date('01-may-2015','dd-mon-yyyy') crt_date FROM dual UNION ALL
    SELECT 100 case_id, to_date('01-jun-2015','dd-mon-yyyy') next_date, to_date('01-apr-2015','dd-mon-yyyy') crt_date FROM dual
)
SELECT d.*
,      MIN (next_date) OVER (PARTITION BY case_id) AS min_dt_analytical
,      MIN (CASE WHEN next_date > crt_date THEN next_date ELSE NULL END) OVER (PARTITION BY case_id) AS min_dt_sub_query2
FROM   data d;
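The conditional-MIN idiom above -- pushing the filter into a CASE expression inside the analytic MIN, since OVER() has no WHERE of its own -- can be sketched on SQLite via Python's sqlite3 module (SQLite in place of Oracle is an assumption; dates are ISO strings, which compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (case_id INTEGER, next_date TEXT, crt_date TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?)", [
    (100, "2015-02-01", "2015-01-01"),
    (100, "2015-03-01", "2015-02-01"),
    (100, "2015-04-01", "2015-05-01"),
    (100, "2015-06-01", "2015-04-01"),
])

# Rows failing the condition contribute NULL, which MIN ignores.
rows = conn.execute("""
    SELECT case_id, next_date, crt_date,
           MIN(CASE WHEN next_date > crt_date THEN next_date END)
               OVER (PARTITION BY case_id) AS min_dt
    FROM data
""").fetchall()
print(rows)
```

Every row of the partition gets the same conditional minimum, 2015-02-01, because the analytic MIN only sees next_date values that passed the CASE test.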
HTH
-
Hi gurus,
I ask someone to enlighten me on the analytical functions.
I used the following query:
select * from (select filename,count(regexp_substr(filename,'.*era'))over( partition by regexp_substr(filename,'.*era') ) cnt FROM l_x12n_835_fileinfo where partitionnum=76000 and clntsysnum=76500 and radt>='01-JAN-2010') where cnt>1
The sample result set is:
FILENAME                            CNT
rsmedcalwa.20100105.chpwr.072.era   4
rsmedcalwa.20100105.chpwr.072.era.1 4
rsmedcalwa.20100105.chpwr.072.era.2 4
rsmedcalwa.20100105.chpwr.072.era.3 4
rsmedcalwa.20100105.chpwr.081.era   3
rsmedcalwa.20100105.chpwr.081.era.1 3
rsmedcalwa.20100105.chpwr.081.era.2 3
rsmedcalwa.20100106.chpwr.088.era   3
rsmedcalwa.20100106.chpwr.088.era.1 3
rsmedcalwa.20100106.chpwr.088.era.2 3
rsmedcalwa.20100108.chppr.363.era.3 4
rsmedcalwa.20100108.chppr.363.era.1 4
rsmedcalwa.20100108.chppr.363.era.2 4
rsmedcalwa.20100108.chppr.363.era   4
Now, I changed the query to:
select * from (select filename,count(regexp_substr(filename,'.*era'))over( partition by regexp_substr(filename,'.*era') order by filename ) cnt FROM l_x12n_835_fileinfo where partitionnum=76000 and clntsysnum=76500 and radt>='01-JAN-2010') where cnt>1
The result set was:
FILENAME                            CNT
rsmedcalwa.20100105.chpwr.072.era.1 2
rsmedcalwa.20100105.chpwr.072.era.2 3
rsmedcalwa.20100105.chpwr.072.era.3 4
rsmedcalwa.20100105.chpwr.081.era.1 2
rsmedcalwa.20100105.chpwr.081.era.2 3
rsmedcalwa.20100106.chpwr.088.era.1 2
rsmedcalwa.20100106.chpwr.088.era.2 3
rsmedcalwa.20100108.chppr.363.era.1 2
rsmedcalwa.20100108.chppr.363.era.2 3
rsmedcalwa.20100108.chppr.363.era.3 4
rsmedcalwa.20100112.chpwr.175.era.1 2
rsmedcalwa.20100112.chpwr.175.era.2 3
(1) I don't understand how adding the ORDER BY clause changes the count. Could someone explain, please?
When I change the ORDER BY to order by regexp_substr(filename,'.*era'), it gives me the correct count.
My requirement is to check how many similar file names I have.
(2) If there is a better way, please let me know.
Hello
Analytic functions always operate on a window of the result set, which can be smaller than the whole result set.
If you have a PARTITION BY clause, the window for each row includes rows with the same values from all the PARTITION BY expressions.
If you have an ORDER BY clause, the window includes only a section of consecutive rows within the partition, as defined by a windowing clause (that is, ROWS or RANGE). The default is "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW". In your case, if the analytic clause is:
over( partition by regexp_substr(filename,'.*era') )
and the row you are looking at has filename = 'rsmedcalwa.20100105.chpwr.072.era.2', the window takes up the entire partition, that is, all rows whose name includes 'rsmedcalwa.20100105.chpwr.072.era'. In other words, the window is identical to the partition, because there is no ORDER BY clause.
But if you add an ORDER BY clause:
over( partition by regexp_substr(filename,'.*era') order by filename )
then the window (potentially) shrinks. Since there is no windowing clause, the default 'RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW' is used, which means that only the rows with a file name less than or equal to 'rsmedcalwa.20100105.chpwr.072.era.2' (using normal string comparison) will affect the result.
The analytic ORDER BY clause is required for certain functions (such as ROW_NUMBER) that only make sense with respect to some ordering. For most functions (including COUNT), the ORDER BY clause is optional, and you don't have to use one. In this case, it seems that you don't want the effect of a window smaller than the partition, so simply don't use an analytic ORDER BY clause for this function.
Remember, the analytic ORDER BY clause is completely independent of the query's ORDER BY clause. If you want the results presented in a certain order, use an ORDER BY clause at the end of the query. It will not change the results of the analytic functions.
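The whole-partition vs running COUNT behavior described above can be sketched on SQLite via Python's sqlite3 module, with a hypothetical three-filename table (SQLite's default frame with ORDER BY matches the RANGE default described here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f (filename TEXT)")
conn.executemany("INSERT INTO f VALUES (?)",
                 [("a.era",), ("a.era.1",), ("a.era.2",)])

# Without ORDER BY the window is the whole partition; with ORDER BY the
# default frame ends at the current row, giving a running count.
rows = conn.execute("""
    SELECT filename,
           COUNT(*) OVER ()                  AS whole_partition,
           COUNT(*) OVER (ORDER BY filename) AS running
    FROM f
    ORDER BY filename
""").fetchall()
print(rows)  # [('a.era', 3, 1), ('a.era.1', 3, 2), ('a.era.2', 3, 3)]
```

The first count is 3 on every row; the second climbs 1, 2, 3, which is exactly the difference the poster observed between the two queries.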
-
2.1 EA Bug: AutoComplete generates a GROUP BY for analytic functions
Hello
When you use an analytic function in the SQL text, SQL Developer automatically generates a GROUP BY clause in the SQL text.
Kind regards
Ingo

Auto-generation of the GROUP BY can be disabled:
In Tools > Preferences > Code Editor > Completion Insight you will find the option
"Autogenerate GROUP BY clause"
Regards,
Ernst
-
A job for the analytic function PARTITION BY?
Hello
I'm still a little fuzzy on the use of partitions, but this looks like a possible candidate to me.
I need to count the number of different customers who visit an office in one day. If a customer visits an office more than once in a single day, that counts as 1.
Input

OFFICE   CUSTOMER  TRAN_DATE
     1         11  01-APR-09
     1         11  01-APR-09
     1         11  01-APR-09
     1         11  02-APR-09
     2         22  02-APR-09
     2         22  02-APR-09
     2         33  02-APR-09
SELECT a.office AS "OFFICE",
       a.customer AS "CUSTOMER",
       a.tran_date AS "TRAN_DATE",
       COUNT (*)
FROM
   (SELECT 1 AS office, 11 AS customer, '01-APR-2009' AS tran_date FROM dual
    UNION ALL
    SELECT 1, 11, '01-APR-2009' FROM dual
    UNION ALL
    SELECT 1, 11, '01-APR-2009' FROM dual
    UNION ALL
    SELECT 1, 11, '02-APR-2009' FROM dual
    UNION ALL
    SELECT 2, 22, '02-APR-2009' FROM dual
    UNION ALL
    SELECT 2, 22, '02-APR-2009' FROM dual
    UNION ALL
    SELECT 2, 33, '02-APR-2009' FROM dual
   ) a;
Desired result

OFFICE  TRAN_DATE  CNT
     1  01-APR-09    1
     1  02-APR-09    1
     2  02-APR-09    2
Is this possible with partitions, or do I have to use subqueries or some other method?
Thanks in advance for your help,
Lou
Edited by: wind in the face on April 15, 2009 13:34

Hi, Lou,
PARTITION BY is not a function.
COUNT is a function. There is an aggregate COUNT function and also an analytic COUNT function. (Almost all aggregate functions have analytic counterparts.) How can you tell whether a function is being used as an aggregate or an analytic function? The analytic form will have "OVER (...)" after its argument list; the aggregate form will not.
PARTITION BY is one of the elements that may form part of the analytical clause.
"PARTITION BY x, y" in an analytic function corresponds to "GROUP BY x, y" when using aggregate functions. You can get the same results for a large number of problems using either the aggregate or the analytic version of a function.
For example, both the aggregate and the analytic COUNT can tell you that only 1 customer visited office 1 on April 1, but 2 customers visited office 2 on April 2.
If you use the aggregate COUNT function and "GROUP BY office, tran_date", as John suggested, you will get only one row for each distinct combination of office and tran_date. In other words, even though there are 3 rows in your table where office = 1 and tran_date = April 1, the result set will have only one row where office = 1 and tran_date = April 1.
Since that is exactly what you want, you can use the aggregate COUNT function, as John showed.

If you use the analytic COUNT function, there will be one row of output for each row in your table.
So with the sample data you posted, this query:

SELECT office,
       tran_date,
       COUNT (DISTINCT customer) OVER (PARTITION BY office, tran_date) AS cnt
FROM   table_x;
will produce these results:

OFFICE  TRAN_DATE    CNT
     1  01-APR-2009    1
     1  01-APR-2009    1
     1  01-APR-2009    1
     1  02-APR-2009    1
     2  02-APR-2009    2
     2  02-APR-2009    2
     2  02-APR-2009    2
To get exactly the results you want, you can use SELECT DISTINCT, like this:

SELECT DISTINCT
       office,
       COUNT (DISTINCT customer) OVER ...
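As a cross-check of the COUNT (DISTINCT customer) OVER (PARTITION BY office, tran_date) logic above (some databases, SQLite for one, do not allow DISTINCT inside a window function), the same per-(office, tran_date) distinct count can be sketched in plain Python on the sample rows from the question:

```python
# Sketch (assumption: a plain-Python stand-in for
# COUNT(DISTINCT customer) OVER (PARTITION BY office, tran_date),
# since not every database allows DISTINCT in window functions).
rows = [
    (1, 11, '01-APR-09'),
    (1, 11, '01-APR-09'),
    (1, 11, '01-APR-09'),
    (1, 11, '02-APR-09'),
    (2, 22, '02-APR-09'),
    (2, 22, '02-APR-09'),
    (2, 33, '02-APR-09'),
]

# Collect the distinct customers seen in each (office, tran_date) partition.
customers = {}
for office, customer, tran_date in rows:
    customers.setdefault((office, tran_date), set()).add(customer)

# One row per partition, as SELECT DISTINCT would leave it.
result = sorted((office, tran_date, len(custs))
                for (office, tran_date), custs in customers.items())
for row in result:
    print(row)
```

This matches Lou's desired output: 1 distinct customer for office 1 on each day, and 2 for office 2 on April 2.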