Analytical SQL

CREATE TABLE CUSTOMERS

(CUSTOMER_ID NUMBER(10),

CLASS VARCHAR2(10)

);

Insert into CUSTOMERS (CUSTOMER_ID, CLASS) values (210000, 'PREM');

Insert into CUSTOMERS (CUSTOMER_ID, CLASS) values (210001, 'WO');

Insert into CUSTOMERS (CUSTOMER_ID, CLASS) values (210002, 'HIGH');

Insert into CUSTOMERS (CUSTOMER_ID, CLASS) values (210003, 'HIGH');

COMMIT;

CREATE TABLE ACCOUNT

(CUSTOMER_ID NUMBER(10),

ACCOUNT_ID NUMBER(10),

BALANCE NUMBER(10,1)

);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210003, 110005, -450.0);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210000, 110000, -200.0);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210000, 110001, 300.0);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210001, 110002, 3000.0);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210002, 110003, -405.0);

Insert into ACCOUNT (CUSTOMER_ID, ACCOUNT_ID, BALANCE) values (210002, 110004, 805.0);

COMMIT;

I want the output built as follows: whenever a customer has an account with a negative balance (overdrawn), I want to see that account together with the same customer's other accounts that are in the positive, plus the sum of the balances for that customer.

Here is the output I mocked up in Excel:

CUSTOMER_ID  ACCOUNT_ID  BALANCE  CLASS  CUSTOMER_TOTAL
210000       110000         -200  PREM              100
210000       110001          300  PREM              100
210002       110003         -405  HIGH              400
210002       110004          805  HIGH              400
210003       110005         -450  HIGH             -450

Select
c.CUSTOMER_ID,
a.ACCOUNT_ID,
a.BALANCE,
c.CLASS,
sum(a.balance) over (partition by c.CUSTOMER_ID) CUSTOMER_TOTAL
from customers c, account a
where
c.customer_id = a.customer_id
and
c.customer_id in (
  Select customer_id
  from account
  where balance < 0
)
order by
c.CUSTOMER_ID,
a.ACCOUNT_ID;

Post edited by: chris227 - sorry, I had not seen that there were two tables. Join added.
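As a hedged alternative sketch (my addition, not from the thread): the "has at least one negative account" filter can also be expressed with a second analytic instead of the IN subquery, so the data is read only once. MIN(balance) OVER (PARTITION BY customer_id) below is an assumption about the intent ("at least one account below zero").

-- Hedged alternative sketch: one pass over the join, then filter on the per-customer minimum.
SELECT customer_id, account_id, balance, class, customer_total
FROM (
  SELECT c.customer_id,
         a.account_id,
         a.balance,
         c.class,
         SUM(a.balance) OVER (PARTITION BY c.customer_id) AS customer_total,
         MIN(a.balance) OVER (PARTITION BY c.customer_id) AS min_balance
  FROM   customers c
  JOIN   account   a ON a.customer_id = c.customer_id
)
WHERE  min_balance < 0
ORDER  BY customer_id, account_id;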

Tags: Database

Similar Questions

  • analytical sql question

    Dear all,
I have the following problem: I need to produce the output table from the entry table shown below.
    Do you have any idea to do this with the analytical help of sql?
P.S. I did it using a pure PL/SQL block, which is too slow for a large amount of data. What is given below is just a sample; in reality I have millions of rows of data.

    Entry table:
    TIME  USER  VALUE

       1    A    X
       2    A    X
       3    B    Y
       4    B    Y
       5    A    X
       5    B    X
       6    A    Y
       7    B    Y
       7    A    Y



    Table of outputs:
    START_TIME  END_TIME  USER  VALUE
    1          2          A     X
    5          5          A     X
    6          7          A     Y
    3          4          B     Y
    5          5          B     X
    7          7          B     Y
    create table mytable (time,myuser,value) as
    select 1 col1, 'A' col2, 'X' col3 from dual union all
    select 2 col1, 'A' col2, 'X' col3 from dual union all
    select 3 col1, 'B' col2, 'Y' col3 from dual union all
    select 4 col1, 'B' col2, 'Y' col3 from dual union all
    select 5 col1, 'A' col2, 'X' col3 from dual union all
    select 5 col1, 'B' col2, 'X' col3 from dual union all
    select 6 col1, 'A' col2, 'Y' col3 from dual union all
    select 7 col1, 'B' col2, 'Y' col3 from dual union all
    select 7 col1, 'A' col2, 'Y' col3 from dual union all
    select 8 col1, 'A' col2, 'Y' col3 from dual;
    
    select min(time),max(time),myuser,value
    from (select time,myuser,value,
           dense_rank() over(order by time)
          -Row_Number() over(partition by myuser,value order by time)
          as distance
           from mytable)
    group by myuser,value,distance
    order by myuser,min(time);
    
    MIN(TIME)  MAX(TIME)  M  V
    ---------  ---------  -  -
            1          2  A  X
            5          5  A  X
            6          8  A  Y
            3          4  B  Y
            5          5  B  X
            7          7  B  Y
    

    I used the Tabibitosan method B-)
    window function
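    For readers new to the trick, a hedged illustration (my addition, not from the thread): the "distance" computed as dense_rank() minus row_number() is constant within each consecutive run of the same (myuser, value) pair, which is why grouping on it isolates the runs.

    -- Hedged illustration only: expose the intermediate "distance" column.
    select time, myuser, value,
           dense_rank() over (order by time) as drnk,
           row_number() over (partition by myuser, value order by time) as rn,
           dense_rank() over (order by time)
         - row_number() over (partition by myuser, value order by time) as distance
    from mytable
    order by myuser, value, time;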

  • Need help with patterns of comparison (analytical SQL?)

    Hello forum users,

    I have a strange problem on my hands that I cannot think of an effective way to code in Oracle SQL...

    (1) background

    Two tables:

    MEASURES
    ID NUMBER
    M1 NUMBER
    M2 NUMBER
    M3 NUMBER
    M4 NUMBER

    INTERESTING_VALUES
    ID NUMBER
    M1 NUMBER
    M2 NUMBER
    M3 NUMBER
    M4 NUMBER

    (2) OUTPUT NEEDED

    For each row of MEASURES, count the number of matches in INTERESTING_VALUES (all rows). Please note that the matches may not be in the same column; for example, for a given MEASURES row, M1 may match INTERESTING_VALUES.M3.

    I need to count the 2-, 3- and 4-value matches and report those counts.

    I'm not interested in partial matching (for example "greater than" or "less than"); the numbers need to match exactly.

    (You can use features up to 11g.)

    Thank you for your help.

    Well Yes, here you go...

    SQL> select * from measurement;
    
            ID         M1         M2         M3         M4
    ---------- ---------- ---------- ---------- ----------
             1         30         40        110        120
             2         12         24        175        192
             3         22         35        147        181
    
    SQL> select * from interesting;
    
            ID         M1         M2         M3         M4
    ---------- ---------- ---------- ---------- ----------
             1         16        171         30        110
             2         40        171         30        110
             5        181        147         35         22
             4        175         12        192         24
             3        175         86        192         24
    
    SQL>  SELECT id,
      2    SUM(
      3    CASE
      4      WHEN LEN=1
      5      THEN 1
      6      ELSE 0
      7    END) repeat2,
      8    SUM(
      9    CASE
     10      WHEN LEN=2
     11      THEN 1
     12      ELSE 0
     13    END) repeat3,
     14    SUM(
     15    CASE
     16      WHEN LEN=3
     17      THEN 1
     18      ELSE 0
     19    END) repeat4
     20     FROM
     21    (SELECT id,
     22      spath   ,
     23      (LENGTH(spath)-LENGTH(REPLACE(LOWER(spath),',') ))/ LENGTH(',') LEN
     24       FROM
     25      (SELECT t.* ,
     26        ltrim(sys_connect_by_path(cvalue,','),',') spath
     27         FROM
     28        (SELECT p.*,
     29          row_number() over (partition BY id, id1 order by cvalue) rnum
     30           FROM
     31          (SELECT m.id,
     32            m.m1      ,
     33            m.m2      ,
     34            m.m3      ,
     35            m.m4      ,
     36            (SELECT COUNT(*)
     37               FROM interesting q
     38              WHERE q.id=I.id
     39            AND (q.m1   = column_value
     40            OR q.m2     =column_value
     41            OR q.m3     =column_value
     42            OR m4       =column_value)
     43            ) cnt              ,
     44            column_value cvalue,
     45            i.id id1           ,
     46            i.m1 m11           ,
     47            i.m2 m21           ,
     48            i.m3 m31           ,
     49            i.m4 m41
     50             FROM measurement m                   ,
     51            TABLE(sys.odcinumberlist(m1,m2,m3,m4)),
     52            interesting I
     53         ORDER BY id,
     54            id1     ,
     55            cvalue
     56          ) p
     57          WHERE cnt=1
     58        ) t
     59        WHERE level        >1
     60        CONNECT BY prior id=id
     61      AND prior id1        =id1
     62      AND prior rnum      <=rnum-1
     63        --start with rnum=1
     64     ORDER BY id,
     65        id1     ,
     66        cvalue
     67      )
     68   ORDER BY 1,2
     69    )
     70  GROUP BY id
     71  ORDER BY 1;
    
            ID    REPEAT2    REPEAT3    REPEAT4
    ---------- ---------- ---------- ----------
             1          4          1          0
             2          9          5          1
             3          6          4          1
    

    I'll write the code without line numbers so that it is easier for you to copy:

    select id,sum(case when len=1 then 1 else 0 end) repeat2,sum(case when len=2 then 1 else 0 end) repeat3,sum(case when len=3 then 1 else 0 end) repeat4 from
    (select id,spath,(LENGTH(spath)-LENGTH(REPLACE(LOWER(spath),',') ))/ LENGTH(',') len from
    (select t.*
    ,ltrim(sys_connect_by_path(cvalue,','),',') spath
    from (select p.*,row_number() over (partition by id, id1 order by cvalue) rnum from (SELECT m.id,
        m.m1      ,
        m.m2      ,
        m.m3      ,
        m.m4      ,
        (SELECT COUNT(*)
           FROM interesting q
          WHERE q.id=I.id
        AND (q.m1    = column_value
        OR q.m2     =column_value
        OR q.m3     =column_value
        OR m4       =column_value)
        ) cnt              ,
        column_value cvalue,
        i.id id1           ,
        i.m1 m11           ,
        i.m2 m21           ,
        i.m3 m31           ,
        i.m4 m41
        FROM measurement m                     ,
        TABLE(sys.odcinumberlist(m1,m2,m3,m4)),
        interesting I
      order by id,id1,cvalue
      ) p where cnt=1
    ) t where level>1
    connect by prior id=id and prior id1=id1 and prior rnum<=rnum-1
    order by id,id1,cvalue) order by 1,2) group by id order by 1
    

    Ravi Kumar

    Published by: ravikumar.sv on 15 Sep 2009 15:54
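    A hedged, simpler baseline (my addition, not the approach above): if all that is needed is, for each (MEASURES, INTERESTING_VALUES) pair, how many of the four values match, a plain cross join with CASE expressions is enough. Note this counts matched values per pair; it does not enumerate the size-2/3/4 combinations the query above produces, so the result numbers differ.

    SELECT m.id AS measure_id,
           i.id AS interesting_id,
           CASE WHEN m.m1 IN (i.m1, i.m2, i.m3, i.m4) THEN 1 ELSE 0 END
         + CASE WHEN m.m2 IN (i.m1, i.m2, i.m3, i.m4) THEN 1 ELSE 0 END
         + CASE WHEN m.m3 IN (i.m1, i.m2, i.m3, i.m4) THEN 1 ELSE 0 END
         + CASE WHEN m.m4 IN (i.m1, i.m2, i.m3, i.m4) THEN 1 ELSE 0 END AS matched_values
    FROM   measurement m
    CROSS JOIN interesting i;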

  • Analytics / requirement model

    This is a puzzle I set myself after a discussion about choosing between SQL and PL/SQL - it is not urgent, not important and not serious.

    If I query dba_extents for a given table (e.g. sys.source$), the extent information looks like this:

    Select file_id, block_id, blocks

    from dba_extents

    where owner = 'SYS'

    and segment_name = 'SOURCE$'

    order by file_id, block_id

    ;

    FILE_ID   BLOCK_ID     BLOCKS

    ---------- ---------- ----------

             1       1504          8
             1       8168          8
             1       8176          8
             1       8192          8
             1       8288          8
             1       8440          8
             1      10072          8

    ...

             1      77568        128
             1      77696        128
             1      77824        128
             1      78080        128
             1      89984        128

    ...

             1      90752       1024

    80 rows selected.

    I have a piece of code that reads the extent list, joins it to a list of numbers to enumerate each block in each extent, sorts the blocks by file_id and block_id, applies ntile(12) to the result set, and then selects the first and last block in each tile to produce an output of essentially 12 rows (first_file_id, first_block_id, last_file_id, last_block_id) - which I can convert into a set of covering rowid ranges for the table. (This is essentially what dbms_parallel_execute does when you create rowid chunks - except that it uses PL/SQL to do it.)

    My SQL does exactly the work needed, but it is significantly slower than the equivalent PL/SQL - we're talking only a few seconds of extra elapsed time for very large objects, so the difference is not relevant for actual production purposes - largely, I think, because I have to expand the initial result set from the number of extents up to the number of blocks and then shrink it back down again, whereas the PL/SQL can simply walk through the extent definitions doing simple arithmetic.

    I'm sure there is a MODEL clause approach that avoids the explosion, and I'd like to see one if anyone has the time, but I keep thinking that I'm close to an analytic solution and just can't quite get there. So if anyone can find an analytic solution, that would be even better than a MODEL solution - or, failing that, a demonstration of how effectively it can be done in simple analytic SQL.

    UPDATE: I forgot to state explicitly that the point of the block explosion and ntile() was that it was a simple strategy for getting the same number (+/- 1) of blocks into every rowid range.

    Regards

    Jonathan Lewis

    Post edited by: Jonathan Lewis

    The solution:

    [Update: please see https://stewashton.wordpress.com/2015/07/01/splitting-a-table-into-rowid-ranges-of-equal-size/ for an updated, somewhat cleaner solution. Thanks chris227, I learned about the WIDTH_BUCKET function!]

    with data as (
      select (select blocks / 12 from my_segments) blocks_per_chunk,
      object_id
      from my_objects
    )
    , extents as (
      select
      nvl(sum(blocks) over(
        order by file_id, block_id
        rows between unbounded preceding and 1 preceding
      ),0) cumul_start_blocks,
      sum(blocks) over(order by file_id, block_id) - 1 cumul_end_blocks,
      blocks, block_id, file_id,
      data.*
      from my_extents, data
    )
    , extents_with_chunks as (
      select
      trunc(cumul_start_blocks / blocks_per_chunk) first_chunk,
      trunc((cumul_end_blocks) / blocks_per_chunk) last_chunk,
      round(trunc(cumul_start_blocks / blocks_per_chunk)*blocks_per_chunk) first_chunk_blocks,
      round(trunc((cumul_end_blocks+1.0001) / blocks_per_chunk)*blocks_per_chunk)-1 last_chunk_blocks,
      e.* from extents e
    )
    , expanded_extents as (
      select first_chunk + level -1  chunk,
      cumul_start_blocks, file_id, block_id,
      case level when 1 then cumul_start_blocks
          else round((first_chunk + level -1)*blocks_per_chunk)
        end start_blocks,
        case first_chunk + level -1 when last_chunk then cumul_end_blocks
          else round((first_chunk + level)*blocks_per_chunk)-1
        end end_blocks
      from (
        select * from extents_with_chunks
        where first_chunk_blocks = cumul_start_blocks
          or last_chunk_blocks = cumul_end_blocks
          or first_chunk < last_chunk
      )
      connect by cumul_start_blocks = prior cumul_start_blocks
      and first_chunk + level -1 <= last_chunk
      and prior sys_guid() is not null
    )
    select chunk,
    min(file_id) first_file_id,
    max(file_id) last_file_id,
    min(block_id + start_blocks - cumul_start_blocks)
      keep (dense_rank first order by cumul_start_blocks) first_block_id,
    max(block_id + end_blocks - cumul_start_blocks)
      keep (dense_rank last order by cumul_start_blocks) last_block_id,
    max(end_blocks) + 1 - min(start_blocks) blocks
    from expanded_extents
    group by chunk
    order by chunk;
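    A hedged follow-on sketch (my addition, not part of the solution above): once each chunk's first/last file and block are known, DBMS_ROWID.ROWID_CREATE can turn them into rowid range endpoints. The bind :data_object_id (the table's DATA_OBJECT_ID from DBA_OBJECTS) and the name chunk_boundaries (standing in for the result of the query above) are assumptions for illustration only.

    -- Hedged sketch: rowid endpoints per chunk; 32767 is a conventional upper bound
    -- for the row slot within a block.
    SELECT chunk,
           DBMS_ROWID.ROWID_CREATE(1, :data_object_id, first_file_id, first_block_id, 0)     AS start_rowid,
           DBMS_ROWID.ROWID_CREATE(1, :data_object_id, last_file_id,  last_block_id,  32767) AS end_rowid
    FROM   chunk_boundaries;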
    
  • Using pipelined table functions with other tables

    I'm on DB 11.2.0.2 and have used pipelined table functions sparingly, but I plan to use them for a project that is fairly big (many rows). In my tests, selecting from the pipelined table performs well enough (whether directly from the pipelined function or from the view I created over it). Where I start to see some degradation is when I try to join the pipelined table to other tables and add WHERE conditions.


    SELECT A.empno, A.ename, A.job, B.sal

    FROM EMP_VIEW A, EMP B

    WHERE A.empno = B.empno AND

    B.mgr = '7839'

    I've seen articles and blogs that mention this as a cardinality issue and offer some undocumented methods to try to work around it.

    Can someone please give me some tips or tricks on this. Thank you!

    I created a simple example using the emp table below to help illustrate what I'm doing.

    DROP TYPE EMP_TYPE;

    DROP TYPE EMP_SEQ;

    CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT

    (EMPNO NUMBER(10),

    ENAME VARCHAR2(100),

    JOB VARCHAR2(100));

    /

    CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;

    /

    CREATE OR REPLACE FUNCTION get_emp RETURN EMP_TYPE PIPELINED AS

    BEGIN

    FOR cur IN (SELECT

    empno,

    ename,

    job

    FROM emp

    )

    LOOP

    PIPE ROW (EMP_SEQ (cur.empno,

    cur.ename,

    cur.job));

    END LOOP;

    RETURN;

    END get_emp;

    /

    create or replace view EMP_VIEW as select * from table(get_emp());

    /

    SELECT A.empno, A.ename, A.job, B.sal

    FROM EMP_VIEW A, EMP B

    WHERE A.empno = B.empno AND

    B.mgr = '7839'

    bobmagan wrote:

    The ability to join would give me the most flexibility.

    Pipelined functions can be joined. But they are PL/SQL code - not tables. And they have no indexes.

    Consider a view:

    create or replace view sales_personel as select * from emp where job_type = 'SALES';

    And you use the view to determine the salespeople in department 123:

    Select * from sales_personel where dept_id = 123

    Oracle recognises that, logically, the predicate can be pushed into the view, so the statement is equivalent to:

    select * from emp where job_type = 'SALES' and dept_id = 123


    If the two columns in the filter are indexed, for example, it may well decide to use an index merge to determine which EMP rows are sales staff in department 123.

    Now consider the exact same scenario with a pipeline. The internals of a pipeline are opaque to the SQL engine. It cannot tell the pipeline's internal code, "Hey, only give me the employees of department 123."

    It needs to run the entire pipeline. It must evaluate each piped row and apply the "dept_id = 123" predicate to it. In essence, you must treat the complete pipeline as a full table scan - and a slower one than simply reading rows from disk, since it performs data transformation as well.

    So yes - you can use predicates on pipelines, you can join them, use analytic SQL on them and so on - but expecting them to behave like a table in terms of SQL/CBO optimisation is not realistic, and it points to a somewhat flawed understanding of what a pipeline is and how it should be designed and used.
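    A hedged design sketch (my addition, not the poster's code): one common way to avoid scanning the whole pipeline is to pass the filter into the pipelined function as a parameter, so that the predicate is applied inside the driving cursor, where indexes can be used. The function name and columns below follow the EMP example above.

    CREATE OR REPLACE FUNCTION get_emp_by_mgr (p_mgr IN NUMBER) RETURN EMP_TYPE PIPELINED AS
    BEGIN
      -- The predicate runs inside the cursor, so an index on EMP(MGR) can be used.
      FOR cur IN (SELECT empno, ename, job
                    FROM emp
                   WHERE mgr = p_mgr)
      LOOP
        PIPE ROW (EMP_SEQ(cur.empno, cur.ename, cur.job));
      END LOOP;
      RETURN;
    END get_emp_by_mgr;
    /

    SELECT a.empno, a.ename, a.job, b.sal
    FROM   TABLE(get_emp_by_mgr(7839)) a
    JOIN   emp b ON b.empno = a.empno;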

  • How to find the lowest grade for each student using a Select query?

    Hey I'm having a little trouble here

    I have this table


    Student - Grade

    John - 8
    Richard - 9
    Louis - 9
    Francis - 5
    John - 13
    Richard - 10
    Peter - 12

    Grades may vary from 0 to 20.

    Name - varchar(50)
    Grade - number

    I'm trying to write a SQL query to find the lowest grade for each student.

    So far, I did:

    Select s.name, s.grade
    from student s
    where s.grade = (select min(grade) from student)

    The result of this query returns only the lowest grade of all the grades listed above, which is Francis - 5.

    I want to find the lowest grade, not only for Francis, but for every student in the table.

    How do I do that?

    Thank you in advance.

    Ok

    Now we head into analytical SQL:

    with student as (select 'John' name, 8 grade, 'fail' result from dual union all
                     select 'John' name, 13 grade, 'pass' result from dual union all
                     select 'Francis',     5 grade,  'fail' from dual union all
                     select 'Peter', 12, 'pass' from dual)
    -- End of your test data
    SELECT   name,
             MAX (result) KEEP (DENSE_RANK FIRST ORDER BY grade) result,
             MIN (grade) min_grade
    FROM     student
    GROUP BY name
    

    Please note that I passed ;)

    Regards
    Peter

    To learn more:
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14200/functions056.htm#SQLRF00641
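    A hedged analytic variant (my addition, not the answer above): if the whole lowest-grade row per student is wanted, rather than aggregated columns, ROW_NUMBER() per student does the same job.

    -- Hedged sketch: rank each student's grades ascending and keep the first row.
    SELECT name, grade
    FROM  (SELECT s.name,
                  s.grade,
                  ROW_NUMBER() OVER (PARTITION BY s.name ORDER BY s.grade) AS rn
           FROM   student s)
    WHERE rn = 1;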

  • Issue Analytics ORA-30089

    Hello-
    I haven't worked with analytic SQL in a little while, so maybe I'm just rusty.

    Goal: run a query that returns 1) data for the current week and 2) a rolling 52-week average.

    Sample data and the ddl:

    DDL:
    CREATE TABLE my_fact
    (product_id NUMBER,
     customer_id NUMBER,
     week_no number,
     transaction_date DATE, 
     stock_on_hand NUMBER,
     stock_in_transit number);
    
    INSERT INTO my_fact VALUES (1,45, 29, SYSDATE, 100,70);
    INSERT INTO my_fact VALUES (1,45, 29, SYSDATE, 200,170);
    INSERT INTO my_fact VALUES (1,45, 15, SYSDATE -100, 300,70); -- added
    INSERT INTO my_fact VALUES (1,55, 29, SYSDATE, 100,70);
    INSERT INTO my_fact VALUES (2,32, 15, SYSDATE -100, 100,70);
    INSERT INTO my_fact VALUES (2,32, 10, SYSDATE-130, 100,70);
    INSERT INTO my_fact VALUES (3,32, 20, SYSDATE-66, 100,70);
    INSERT INTO my_fact VALUES (3,78, 29, SYSDATE, 100,70);
    now my query:
    SELECT product_id, customer_id, week_no,
    avg(stock_on_hand)  avg_current_week, 
    avg(avg(stock_on_hand)) OVER (ORDER BY (trunc(transaction_date)) RANGE BETWEEN INTERVAL '364' DAY(3) PRECEDING AND INTERVAL '1' PRECEDING) avg_52week
    FROM my_fact
    WHERE to_char(SYSDATE, 'WW') = week_no -- OTHER THAN THE ANALYTICAL AGGREGATE JUST WANT TO LOOK AT CURRENT WEEK
    GROUP BY product_id, customer_id, week_no, TRUNC(transacion_date)
    I get error ORA-30089 and don't know why.
    Also, given my goal above, is the query structured correctly (the analytic part)?

    Thank you!

    Published by: padawan on July 19, 2010 11:32

    Hello

    It would be better to post the full results you want, not just one row.

    I don't think analytics are needed for this problem. Just use the AVG aggregate, with a CASE expression to get an average based on a subset of the results (for example, the current week).

    SELECT        product_id
    ,       customer_id
    ,       MAX (week_no)          AS week_no
    ,       AVG ( CASE
                  WHEN  week_no           = TO_CHAR (SYSDATE, 'WW')
                  AND       transaction_date     > SYSDATE - 7     -- to guard against same week in an earlier year
                  THEN  stock_on_hand
              END
               )               AS avg_current_week
    ,       AVG (stock_on_hand)     AS avg_52_week
    FROM       my_fact
    WHERE       transaction_date     >= TRUNC (SYSDATE) - 364
    AND       transaction_date     <  TRUNC (SYSDATE) + 1
    GROUP BY  product_id
    ,       customer_id
    HAVING       MAX (week_no)     = TO_CHAR (SYSDATE, 'WW')
    ;
    

    If you averaged per week and then took the average of those averages, you would be weighting the averages; in that case the average of 150 for the current week and 300 for another week would be (150 + 300) / 2 = 225, which is not what you want: you want each of the rows that contributed to the first average (the 150) to be counted separately.
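    A hedged aside on the original ORA-30089 (my guess, not confirmed in the thread): in a RANGE window ordered by a DATE, both bounds must be valid interval literals, and INTERVAL '1' PRECEDING has no unit. A syntactically complete version of the analytic part might look like the sketch below; whether to add PARTITION BY product/customer is a separate question left as in the original.

    SELECT product_id, customer_id, week_no,
           AVG(stock_on_hand) AS avg_current_week,
           AVG(AVG(stock_on_hand)) OVER (
             ORDER BY TRUNC(transaction_date)
             RANGE BETWEEN INTERVAL '364' DAY(3) PRECEDING
                       AND INTERVAL '1'   DAY    PRECEDING
           ) AS avg_52week
    FROM   my_fact
    GROUP BY product_id, customer_id, week_no, TRUNC(transaction_date);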

  • Subtracting values in the same column?

    Hello

    How can I subtract two values in the same column of a table? E.g. there is a table with two columns, date and cumulative sales.

    Date     Total_sales
    ======   ===========
    23-Feb            68
    24-Feb           122
    25-Feb           150
    26-Feb           200
    27-Feb           223

    I need to know the date on which sales were at a maximum. As we can see, on 24-Feb sales were 54; we get this by subtracting 122 - 68.
    How can we subtract values in the same column?

    Thank you

    Take a look at the LAG() analytic SQL function, which should give you what you need.

    Cheers,
    Harry
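    A hedged sketch of what that might look like (my illustration; the table and column names sales_table, sales_date and total_sales are assumptions based on the post):

    -- Hedged sketch: daily sales as the difference from the previous day's cumulative total.
    select sales_date,
           total_sales,
           total_sales - lag(total_sales) over (order by sales_date) as daily_sales
    from   sales_table
    order  by daily_sales desc nulls last;  -- largest daily increase first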

  • SQL using the analytic function


    Hi all

    I would like some help creating a SQL query to retrieve the data described below.

    I have a sample test table containing the data below:

    ID  Desc  Status

    1   T1    DISABLED

    2   T2    ACTIVE

    3   T3    SUCCESS

    4   T4    DISABLED

    What I want to do is select all the rows with an ACTIVE status from the table, but if there is no ACTIVE status, the query should give me the last row with DISABLED status.

    Can I do this in a single query, for example by using an analytic function? If so, can you help me with writing that query?

    Kind regards

    Raluce

    Something like that?

    I had to fix it.

    with testdata as (
    select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
    select 2 id, 'T2' dsc, 'ACTIVE' status from dual union all
    select 3 id, 'T3' dsc, 'SUCCESS' status from dual union all
    select 4 id, 'T4' dsc, 'DISABLED' status from dual
    )

    select
    id,
    dsc,
    status
    from testdata
    where
    status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then 'ACTIVE'
    else 'DISABLED'
    end
    and (
    id in (select id from testdata where status = 'ACTIVE')
    or
    id = (select max(id) from testdata where status = 'DISABLED')
    )

    ID  DSC  STATUS

    2   T2   ACTIVE

    Maybe this is more efficient:

    select
    id,
    dsc,
    status
    from testdata
    where
    status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then 'ACTIVE'
    else 'DISABLED'
    end
    and
    id = (
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then id
    else
    (select max(id) from testdata where status = 'DISABLED')
    end
    )

    Post edited by: chris227 - correction

    Post edited by: chris227
    extended
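    A hedged analytic alternative (my addition, not from the thread): the same logic in a single pass, using window aggregates to detect whether any ACTIVE row exists and which DISABLED row is last.

    select id, dsc, status
    from  (select t.*,
                  max(case when status = 'ACTIVE' then 1 else 0 end) over () as any_active,
                  max(case when status = 'DISABLED' then id end) over ()     as last_disabled_id
           from testdata t)
    where (any_active = 1 and status = 'ACTIVE')
       or (any_active = 0 and id = last_disabled_id);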

  • Help sql analytic function

    Table with 2 columns, pro_id and sub_ver_id; I need only 5 pro_id rows for each sub_ver_id.

    SQL> select * from test1 order by SUB_VER_ID;

    PRO_ID SUB_VER_ID
    ---------- ----------
             1          0
             2          0
             3          0
             4          0
             5          0
             6          0
            10          1
            15          1
            16          1
            11          1
            12          1

    PRO_ID SUB_VER_ID
    ---------- ----------
            13          1
            14          1
            11          2
            12          3

    .............................

    I'm new to analytic functions. I came up with the query below, but I am not able to work out how to limit SRLNO to only 5 rows for each SUB_VER_ID. Any advice would be much appreciated.

    select distinct sub_ver_id, pro_id, row_number() over (order by sub_ver_id) srlno
    from test1 order by sub_ver_id

    It can be done as below...

    select *
    from
    (
    select sub_ver_id,pro_id, row_number () over (partition by sub_ver_id order by null) srlno
    from test1
    ) where srlno <=5 order by sub_ver_id
    

    Thank you...

  • Can you explain why the word "analytic" is used in SQL?

    I found the phrase "analytic function" defined as "a piece of syntax that is originating excessive."
    I don't understand why it is called "analytic". In English, "analytic" comes from "to analyse", which means examining something. So it seems an "analytic function" should examine/analyse something - but all functions examine/analyse something. If I GROUP BY an item in my query, then all the aggregate functions examine the data, so why not call them "analytic(al)" too? Can you explain why the word "analytic" is used in the SQL world?

    CharlesRoos wrote:
    I found the phrase "analytic function" defined as "a piece of syntax that is originating excessive."
    I don't understand why it is called "analytic". In English, "analytic" comes from "to analyse", which means examining something. So it seems an "analytic function" should examine/analyse something - but all functions examine/analyse something. If I GROUP BY an item in my query, then all the aggregate functions examine the data, so why not call them "analytic(al)" too? Can you explain why the word "analytic" is used in the SQL world?

    Aggregate functions bring data together, that is they sum or count etc. once it has been grouped. That is not just examining, it is grouping.
    Analytic functions examine/analyse the other rows of the data without having to group them in the result: they can summarise a set of values for a particular group (partition) of the data, or they can simply retrieve values from other rows (for example the LEAD, LAG, FIRST_VALUE, LAST_VALUE functions). They are able to look across the data without aggregating it. That is why they are analytic.

  • What version of Oracle has analytic SQL?

    Can someone tell me in which Oracle version "analytic" SQL appeared?

    Thank you!

    Hello

    Mark1970 wrote:
    Can someone tell me in which Oracle version "analytic" SQL appeared?

    Thank you!

    Analytic functions (in other words, functions that use the OVER keyword, such as

    RANK () OVER (ORDER BY hiredate)
    

    ) were introduced in Oracle 8.1.

    (It's no coincidence that in-line views were documented at the same time, since so many uses of analytic functions require subqueries.
    In-line views were documented for the first time in Oracle 8.1, but they already worked in Oracle 8.0.)

  • No view merging for SQL using analytic functions

    Hi SQL tuning specialists:
    I have a question about inline view merging.

    I have a simple view with an analytic function inside. When I query it, it does not use the index.


    CREATE OR REPLACE VIEW ttt
    AS
    SELECT empno, deptno,
    ROW_NUMBER() over (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa

    -- That will do a full table scan of emp
    Select * from ttt
    WHERE empno = 7369


    -- If I do not use the view and run the query directly, the index is used
    SELECT empno, deptno,
    ROW_NUMBER() over (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa
    WHERE empno = 7369


    The question is: how can I force the first query to use the index?

    Thank you

    MScallion wrote:
    What happens if you use the push_pred hint?

    Nothing will happen. And it would be a bug if it did.

    select * from ttt
    WHERE empno=7369
    

    and

    SELECT empno,deptno,
    row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa
    WHERE empno=7369
    

    are two logically different queries. Analytic functions are applied after the whole result set is built. So the first query selects all rows in the emp table, assigns ROW_NUMBER() to the retrieved rows, and only then selects the row with empno = 7369. The second query selects from table emp the row with empno = 7369 and only then applies ROW_NUMBER() - so, since emp.empno is unique, the ROW_NUMBER returned by the second query will always be equal to 1:

    SQL> select * from ttt
      2  WHERE empno=7369
      3  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          4
    
    SQL> SELECT empno,deptno,
      2  row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
      3  FROM emp aaa
      4  WHERE empno=7369
      5  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          1
    
    SQL> 
    

    SY.
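    A hedged addition (my note, not from the reply above): the optimizer can generally push a predicate into a view containing analytic functions when the predicate is on the PARTITION BY columns, because restricting to whole partitions does not change the window results. So a filter such as the one below is a candidate for pushdown and index use, unlike the empno filter:

    -- Hedged sketch: filtering on the PARTITION BY column keeps whole partitions intact.
    SELECT *
    FROM   ttt
    WHERE  deptno = 20;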

  • SQL Server, Dynamics GP analytical accounting

    SQL Server, Dynamics GP 2010, and we are preparing to introduce the travel and expense feature.  Originally, we were told to add the Integration Manager module and there would be no problem.  Now we are told that we have to do customization, due to the use of Analytical Accounting.  Is there any combination of MS modules that would solve the integration question without the need for custom software?

    There seems to be no forum topic dealing with this type of multiple-MS-product question.

    Hi Helen,

    For any questions about Dynamics GP, visit the forum here.

    Thanks for posting your question in the Microsoft answers Forum!

  • SQL question, perhaps with analytic functions?

    I have a small problem:

    I have a table with:

    DAY_ID, PAGE_ORDER, SID, TIME, CONTENT.

    I want only the first (min page_order) of each run of rows with the same content, whenever more than one row with the same content follows in sequence.

    The data are:

    DAY       PAGE_ORDER  SID   TIMES              CONTENT

    20150825   1          4711  25.08.15 06:38:43  /body/home

    20150825   2          4711  25.08.15 06:39:10  home/aufmacher/Home/42303938

    20150825   3          4711  25.08.15 06:39:15  home/aufmacher/Home/42303938

    20150825   4          4711  25.08.15 06:39:20  home/aufmacher/Home/42303938

    20150825   5          4711  25.08.15 06:39:24  home/aufmacher/Home/42303938

    20150825   6          4711  25.08.15 06:39:32  home/aufmacher/Home/42303938

    20150825   7          4711  25.08.15 06:39:39  home/aufmacher/Home/42303938

    20150825   8          4711  25.08.15 06:39:46  home/aufmacher/Home/42303938

    20150825   9          4711  25.08.15 06:39:49  home/aufmacher/Home/42303938

    20150825  10          4711  25.08.15 06:39:51  home/aufmacher/Home/42303938

    20150825  11          4711  25.08.15 06:41:17  pol/art/2015/08/24/paris

    20150825  12          4711  25.08.15 06:42:36  /body/home

    20150825  13          4711  25.08.15 07:06:09  /body/home

    20150825  14          4711  25.08.15 07:06:36  reg/article/memo

    I want as a result:

    20150825   1          4711  25.08.15 06:38:43  /body/home

    20150825   2          4711  25.08.15 06:39:10  home/aufmacher/Home/42303938

    20150825  11          4711  25.08.15 06:41:17  pol/art/2015/08/24/paris

    20150825  12          4711  25.08.15 06:42:36  /body/home

    20150825  14          4711  25.08.15 07:06:36  reg/article/memo

    Does anyone know a good way to do this?

    Thank you very much

    This sounds like a simple grouping solution. You group by content and maybe a few other columns such as day and sid. Then you show some value from inside each group. Several different aggregate functions can do this.

    Not tested, due to the lack of table create and insert scripts.

    select day, sid, content
            ,min(page_order) as page_order
            ,min(times) as times -- if the first page_order also has the first time
            ,min(times) keep (dense_rank first order by page_order) as times2 -- this is needed in case the first page_order is at a later time
    from yourTable
    group by day, sid, content
    

    If Solomon is right and several separate runs of identical content can exist (the example data shows that), then we can use the Tabibitosan method to create the groups.

    with step1 as (select t1.*, row_number() over (partition by day, sid, content order by page_order) rn
                         from yourTable t1
                         )
    select  day, sid, content
             , page_order - rn as group_number
             , min(page_order) as page_order
             , min(times) as times -- if the first page_order also has the first time
             , min(times) keep (dense_rank first order by page_order) as times2 -- this is needed in case the first page_order is at a later time
    from step1
    group by day, sid, content, page_order - rn
    order by day, sid, content, group_number;
    
