Help to identify duplicate lines.

I have a table that looks like the following.

create table Testing(
  inv_num  varchar2(100),
  po_num   varchar2(100),
  line_num varchar2(100)
);

It is populated with the following data:

Insert into Testing (INV_NUM,PO_num,line_num) values ('19782594','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19782594','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19968276','P0254836',1);
Insert into Testing (INV_NUM,PO_num,line_num) values ('19968276','P0254836',1);

What I'm trying to do is identify the multiple entries within the table that have the same po_num but different inv_num values.

I have tried this:

SELECT  T1.inv_num,
        T1.po_num,
        T1.line_num,
        count(*) over (partition by T1.inv_num) myRecords
FROM    testing T1
WHERE   T1.po_num = 'P0254836'
GROUP BY
        T1.inv_num,
        T1.po_num,
        T1.line_num
ORDER BY T1.inv_num

but I end up with a result similar to the following.

"INV_NUM"                 "PO_NUM"                  "LINE_NUM"                "MYRECORDS"
'19782594 '.'P0254836 '.« 1 »« 1 »
"19968276" "P0254836" "1"                       "1"                     

I really want to end up with a result similar to the following.

"INV_NUM"                 "PO_NUM"                  "LINE_NUM"                "MYRECORDS"
'19782594 '.'P0254836 '.« 1 »« 1 »
'19968276 '.'P0254836 '.« 1 »« 2 »

In substance, I want to count the invoices within each purchase order, and I also want to include the row id so that I can move the records with a count greater than 1 to a separate table.

Please note that this is part of a much larger project, and I have taken only a small subset to show here.

Thanks for the help in advance.

Hello

mlov83 wrote:

Hi Etbin

Thanks for the reply.

I really need the result to be:

INV_NUM   PO_NUM    LINE_NUM  RN
19782594  P0254836  1         1
19782594  P0254836  1         1
19968276  P0254836  1         2
19968276  P0254836  1         2

Let me explain a little better. For this po_num, the two 19782594 rows are duplicates of each other; whichever is the first dup I want to keep, and the second instance I want to remove. Then the 19968276 rows for this po_num I want to identify as a separate set, and finally remove those duplicates from my table as well.

Hope that makes sense.

That's what you asked for:

SELECT    inv_num, po_num, line_num,
          DENSE_RANK () OVER (PARTITION BY po_num
                              ORDER BY     inv_num
                             ) AS rn
FROM      testing
ORDER BY  po_num, inv_num
;

Output:

INV_NUM    PO_NUM     LINE_NUM   RN
---------- ---------- ---------- -----
19782594   P0254836   1          1
19782594   P0254836   1          1
19968276   P0254836   1          2
19968276   P0254836   1          2

But I do not see how this will help to eliminate duplicates. It does not give a different value to records that are duplicates of each other. To distinguish the first of a series of duplicates from the others, Etbin's query is much more useful.
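
For reference, a minimal sketch of a ROW_NUMBER-based dedupe along those lines, using the Testing table from the original post (assumptions on my part: rows count as duplicates when all three columns match, and ROWID is used only to tell otherwise-identical rows apart):

-- keep the first occurrence of each (po_num, inv_num, line_num); delete the rest
DELETE FROM testing
WHERE  rowid IN (SELECT rid
                 FROM   (SELECT rowid AS rid,
                                ROW_NUMBER () OVER (PARTITION BY po_num, inv_num, line_num
                                                    ORDER BY     rowid) AS rn
                         FROM   testing)
                 WHERE  rn > 1);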


Similar Questions

  • GROUP BY date range to identify duplicates revisited!

    Good afternoon

    This is a continuation of a previous discussion; I previously created "GROUP BY date range to identify duplicates".

    Your help with the following would be appreciated (examples of data below)

    I've highlighted what I expect to be returned as duplicates below:

    example4.jpg

    Definition of a duplicate (note: this is slightly different from the previous post):

    the same account_num

    tran_effective_date at most 20 days apart

    tran_processed_date at most 10 days apart

    the same tran_amount

    However, I do not want to report a pair as duplicates unless both rows have a tran_priced_date populated.

    So, in light of the foregoing, I don't expect the following account_numbers to be marked as duplicate:

    N100283 - only one of the records has tran_priced_date populated

    N101640 - none of the records have the tran_priced_date filled

    N102395 - same as N101640

    N102827 - same as N101640

    N108876 - although both records have tran_priced_date populated, the tran_effective_dates are more than 20 days apart.

    BUT for the rest of the accounts, N100284 and N102396, I want to apply the following logic:

    Compare the 3rd row to the 4th and ask the following questions:

    Is tran_effective_date at most 20 days apart?

    Is tran_processed_date at most 10 days apart?

    If yes, then report it as a dupe.

    Then compare row 4 to row 5 and ask the same questions, continuing until the last row. When everything is done, I want to consider only the transactions that have a status of Normal, and if the above questions are true for both rows, return them in my result set as dupes. (A sketch of this logic appears after the reply below.)

    I hope that makes sense!

    BEGIN
      EXECUTE IMMEDIATE 'DROP TABLE samp_data';
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE = -942 THEN
          DBMS_OUTPUT.put_line('');
        ELSE
          RAISE;
        END IF;
    END;
    /
    
    
    CREATE TABLE samp_data (
      ACCOUNT_NUM             VARCHAR2(17),
      TRAN_ID                 NUMBER(10),
      TRAN_TYPE               VARCHAR2(50),
      TRAN_EFFECTIVE_DATE     TIMESTAMP(6),
      TRAN_PROCESSED_DATE     TIMESTAMP(6),
      TRAN_STATUS             VARCHAR2(17),
      TRAN_PRICED_DATE        TIMESTAMP(6),
      TRAN_AMOUNT             NUMBER(13,2)
      );
    
    
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N100283',140119178,'Regular With',to_timestamp('01-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.34.235000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),200);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N100283',140158525,'Regular With',to_timestamp('13-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.39.14.090000000 AM','DD-MON-RR HH.MI.SS.FF AM'),'Normal', null,200);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N100284',140118826,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.19.072000000 AM','DD-MON-RR HH.MI.SS.FF AM'),'Normal', to_timestamp('20-MAY-15 03.25.05.438000000 AM','DD-MON-RR HH.MI.SS.FF AM'),450);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N100284',140158120,'Regular With',to_timestamp('06-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('23-MAY-15 08.38.42.064000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Reversed', to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),450);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N100284',140158120,'Regular With',to_timestamp('06-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('02-JUN-15 08.38.42.064000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('31-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),450);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N101640',140118957,'Regular With',to_timestamp('18-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.25.015000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', null,120);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N101640',140158278,'Regular With',to_timestamp('22-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.56.228000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', null,130);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102395',140118842,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.19.665000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', null,250);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102395',140158235,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.53.093000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', null,250);
    
    
    
    
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102396',140118823,'Regular With',to_timestamp('09-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('18-MAY-15 07.00.18.931000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('19-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102396',140158099,'Regular With',to_timestamp('16-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('24-MAY-15 08.38.39.443000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Reversed', to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102396',140158099,'Regular With',to_timestamp('16-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('29-MAY-15 08.38.39.443000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('30-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102396',140158099,'Regular With',to_timestamp('12-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 08.38.39.443000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Reversed', to_timestamp('30-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102396',140158099,'Regular With',to_timestamp('14-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('23-MAY-15 08.38.39.443000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Reversed', to_timestamp('30-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102827',140118850,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.20.045000000 AM','DD-MON-RR HH.MI.SS.FF AM') , 'Normal',null,157.84);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N102827',140158118,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.41.861000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', null,157.84);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N108876',139840720,'Regular With',to_timestamp('01-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('11-MAY-15 08.35.34.646000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('20-MAY-15 03.25.05.438000000 AM','DD-MON-RR HH.MI.SS.FF AM'),1000);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE, TRAN_STATUS, TRAN_PRICED_DATE,TRAN_AMOUNT) 
    values ('N108876',139889880,'Regular With',to_timestamp('22-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('12-MAY-15 08.49.29.080000000 AM','DD-MON-RR HH.MI.SS.FF AM'), 'Normal', to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),1000);
    
    
    select * from samp_data
    ORDER BY account_num, tran_effective_date, tran_processed_date;
    

    Please continue the discussion in your original post.
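
    A minimal sketch of the consecutive-row comparison described above, under stated assumptions: only 'Normal' rows are compared, each row is checked against the next one for the same account, and both rows of a pair must have tran_priced_date populated. LEAD peeks at the following row; the query returns the earlier row of each qualifying pair.

    WITH ranked AS (
      SELECT s.*,
             LEAD (tran_effective_date) OVER (PARTITION BY account_num
                                              ORDER BY     tran_effective_date) AS next_eff,
             LEAD (tran_processed_date) OVER (PARTITION BY account_num
                                              ORDER BY     tran_effective_date) AS next_proc,
             LEAD (tran_priced_date)    OVER (PARTITION BY account_num
                                              ORDER BY     tran_effective_date) AS next_priced,
             LEAD (tran_amount)         OVER (PARTITION BY account_num
                                              ORDER BY     tran_effective_date) AS next_amt
      FROM   samp_data s
      WHERE  tran_status = 'Normal'
    )
    SELECT account_num, tran_id, tran_effective_date, tran_processed_date, tran_amount
    FROM   ranked
    WHERE  next_eff  BETWEEN tran_effective_date - INTERVAL '20' DAY
                         AND tran_effective_date + INTERVAL '20' DAY
    AND    next_proc BETWEEN tran_processed_date - INTERVAL '10' DAY
                         AND tran_processed_date + INTERVAL '10' DAY
    AND    next_amt  = tran_amount
    AND    tran_priced_date IS NOT NULL
    AND    next_priced      IS NOT NULL;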

  • Change the size of an array and remove duplicate rows

    Hello

    I run a program that has a for loop running 3 times. In each iteration, data is collected for x and y points (10 in this example). How can I output all 30 points as a simple 2D array (size 2 x 30)?

    Also, how do I delete duplicate rows from the array that contain the same value of x?

    Thank you

    Hi Ni,

    It would probably have helped to attach a file that has consecutive duplicate lines.

    Here is a simple solution.

  • Need help to identify the object type in a PL/SQL loop

    Hello

    I need help with the object type declared below, which is used in a procedure as shown:

    I need to pass a parameter to the procedure as an object TYPE and also refer to variables of the object type in a loop.

    create or replace type TEST_VALIDATION_REC is RECORD (
      order_num         varchar2(30),
      inventory_item_id number,
      reserved_YN       varchar2(1),
      error_flag        varchar2(1),
      error_message     varchar2(2000)
    );

    CREATE OR REPLACE TYPE VALD_TBL AS VARRAY (10000) OF TEST_VALIDATION_REC;

    PROCEDURE ADD_TO_ORD (
      p_lot_number_list   IN  VALD_TBL,
      p_ord_number        IN  VARCHAR2,
      p_user_id           IN  NUMBER   := fnd_profile.value('USER_ID'),  -- change 1.10
      p_responsibility_id IN  NUMBER   := fnd_profile.value('RESP_ID'),  -- change 1.10
      p_application_id    IN  VARCHAR2 := 'PO',                          -- change 1.10
      x_error_flag        OUT VARCHAR2,
      x_error_msg         OUT VARCHAR2
    )

    In the above procedure, I used VALD_TBL. Is that OK?

    And how do I loop over the records if I use:

    FOR indx1 IN 1 .. p_lot_number_list.COUNT
    LOOP
      BEGIN
        SELECT inventory_item_id
        INTO   ln_item_id
        FROM   dummy_lot_tab
        WHERE  lot_number = p_lot_number_list(indx1);  -- how do I reference the item here?
      EXCEPTION
        WHEN OTHERS THEN
          ln_item_id := NULL;
      END;
    END LOOP;

    Records are PL/SQL constructs; they are not SQL objects. You can only create a SQL TYPE (schema level) as a collection (varrays and nested tables only).

    Therefore, your first statement is syntactically incorrect:

    CREATE OR REPLACE TYPE TEST_VALIDATION_REC IS RECORD
    (order_num VARCHAR2(30),
    inventory_item_id NUMBER,
    reserved_YN VARCHAR2(1),
    error_flag VARCHAR2(1),
    Error_message VARCHAR2(2000)
    );
    

    You must put it in an anonymous PL/SQL block or a stored procedure:

    DECLARE
    
       TYPE test_validation_rec IS RECORD
       (
        order_num VARCHAR2(30),
        inventory_item_id NUMBER,
        reserved_YN VARCHAR2(1),
        error_flag VARCHAR2(1),
        error_message VARCHAR2(2000)
       );
    
       TYPE vald_tbl IS VARRAY(10000) OF test_validation_rec;
    
       lv_tbl vald_tbl;
    
    BEGIN
    
       lv_tbl := vald_tbl();
      -- insert your code here 
    
    END;
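
    To answer the loop question, a short usage sketch under the same declarations (the lot_number field and the dummy_lot_tab lookup are hypothetical, added only to make the record access concrete):

    DECLARE
       TYPE test_validation_rec IS RECORD
       (
        order_num  VARCHAR2(30),
        lot_number VARCHAR2(30)   -- hypothetical field, used in the lookup below
       );
       TYPE vald_tbl IS VARRAY(10000) OF test_validation_rec;
       lv_tbl     vald_tbl := vald_tbl();
       ln_item_id NUMBER;
    BEGIN
       lv_tbl.EXTEND;
       lv_tbl(1).lot_number := 'LOT001';
       FOR indx1 IN 1 .. lv_tbl.COUNT
       LOOP
          BEGIN
             SELECT inventory_item_id
             INTO   ln_item_id
             FROM   dummy_lot_tab
             WHERE  lot_number = lv_tbl(indx1).lot_number;  -- reference a field of the record element
          EXCEPTION
             WHEN NO_DATA_FOUND THEN
                ln_item_id := NULL;
          END;
       END LOOP;
    END;
    /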
    
  • Hello. I need help identifying the right choice of processor for upgrading my old PC system.

    Hello. I would be eternally grateful if someone could help me identify which processors will work with my system.

    The HP site says that my system supports up to Pentium 4 Hyper Threading 3.8 GHz processor.

    The main reason is that I want to install the Windows 7 operating system.

    I bought a CPU on eBay but it did not work when inserted. When you turn on the PC, no light comes on and the system does not start. Looking inside the case, the processor cooling fan runs, but that's all; no lights or anything else, nothing but the fan starts.

    The processor is a Pentium 4 HT 661 SL96H 3.6 GHz 2 MB 800 MHz LGA775

    My system is...

    Compaq Presario SR1339uk

    Serial number: {removed privacy}

    System number: PS267AA

    Motherboard: Asus PTGD1 - LA

    Chipset: I915P Northbridge

    Southbridge ICH6 Intel i/o Controller Hub 6

    BIOS version: 3.28 23/01/06

    Operating system: Windows XP Home Edition 32-bit

    Any help much appreciated... Thank you...

    Sorry for not being clear. I was referring to all the HT 600-series processors, i.e. the supported 64-bit processors.

  • help to identify a font

    can someone help me identify this font?

    It looks like Franklin.

    font.jpg

    Clearface Gothic, it seems.

  • Duplicate line Macro information

    Hello

    I'm having some trouble creating a button or macro for this PDF file.

    I currently have a row of input cells that I wish to duplicate at the click of a button.

    The reason I do not simply repeat these rows up front is, first of all, to reduce the size of the file, and to let users build their own request form with the number of items they are requesting.

    I am running Adobe LiveCycle Designer ES 8.2.1

    Thank you

    Re: Duplicate line Macro information

    If you want to send the form to [email protected] I will take a look. Please include a description of what you're trying to do.

    Paul

  • Oracle Answers and duplicate rows

    Hello


    I have a SQL view called "DUP" which shows all rows that are the same
    for the 4 main fields INCNUM, TAKE, SAMP, CORR.
    Now, some rows are duplicates considering only these 4 key columns, BUT
    there are also rows that are 100% duplicates.

    Oracle Answers seems to hide rows that are 100% duplicates (identical in all columns, not only the 4), even though there is no DISTINCT in my SQL.
    How can I avoid this behavior?

    Also, how can I create a whole new second report showing the 100% duplicate rows, i.e.
    those that are the same in every single column?

    Thank you
    metalray

    You just have to do the reverse then. Uncheck the DISTINCT supported feature in the RPD and you will no longer get a DISTINCT in your SQL.

    Thank you
    Prash

  • Can I create an index on a column containing duplicate values?

    Hello

    I tried to create a unique index on an existing table that contains duplicate rows.

    I want to create an index of some type on this column that contains the duplicate values.

    Please suggest how, if possible.


    Kind regards
    Vincent.

    Yes.
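
    For what it's worth: a unique index will fail on a column that already contains duplicate values (ORA-01452), but a non-unique index can be created. A minimal sketch with hypothetical table and column names:

    -- this fails while duplicates exist:
    --   CREATE UNIQUE INDEX t_dup_col_ux ON t (dup_col);

    -- a non-unique index works regardless of duplicates:
    CREATE INDEX t_dup_col_ix ON t (dup_col);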

  • Help! Trying to sum amounts across duplicate rows

    I have a database that has the following columns: barcode, quantity, etc. The barcode column contains duplicate barcodes, and the corresponding quantities differ. I have worked for hours trying to figure out how to sum the quantities for each distinct barcode and replace the table with one row per barcode and its corresponding total quantity. For example, instead of:

    971386447563
    971386447562
    971386447561
    971386605655
    971386605654
    971386617771
    971386622241
    971386704031
    971386792391
    971387149781
    971387162621
    971387396676
    971387396677
    971387396678

    I would like it to be the following:

    971386447566
    971386605659
    971386617771
    971386622241
    971386704031
    971386792391
    971387149781
    971387162621
    9713873966721

    I used the following code, which selects only duplicates.

    <cfquery name="CountBarcodesCart1" datasource="inventory">
        SELECT *
        FROM cart1
        WHERE barcode IN
            (SELECT barcode
             FROM cart1
             GROUP BY barcode
             HAVING COUNT(barcode) > 1)
    </cfquery>

    Thanks to anyone who can help with this.

    I know there are much better ways to do this, but I'll share the following code and hope it helps someone else. What I did: copy table CART1 into a TEMP table, sum the duplicates there, then copy the corrected data (with no duplicates) into a new data table. Then I delete the TEMP table.



    SELECT barcode, SUM(quantity) AS quantitysum
    FROM TEMP
    GROUP BY barcode

    INSERT INTO cart1 (barcode, quantity)
    VALUES (#CountBarcodesTemp.barcode#, #CountBarcodesTemp.quantitysum#)

    #barcode# - #quantitysum#

    DELETE FROM TEMP
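
    For what it's worth, the consolidation can also be sketched as a single statement, assuming a cart1 table with barcode and quantity columns (Oracle-style CTAS shown; cart1_summed is a hypothetical name and the syntax varies by database):

    -- build one summed row per barcode, then swap this table in for cart1
    CREATE TABLE cart1_summed AS
    SELECT   barcode, SUM(quantity) AS quantity
    FROM     cart1
    GROUP BY barcode;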

  • GROUP BY date range to identify duplicates

    Good afternoon

    Your help with the following would be appreciated (examples of data below)

    I've highlighted what I expect to be returned as duplicates:

    example.jpg

    Duplicate definition:

    the same account_num

    tran_effective_date at most 20 days apart

    tran_processed_date at most 10 days apart

    the same tran_amount

    However, I do not want to return rows as duplicates unless they both have a tran_priced_date populated.

    So, in light of the foregoing:

    N100283 would not qualify: even though the tran_effective_dates and tran_processed_dates are within the 20 and 10 day limits respectively, one row has tran_priced_date populated but the other does not

    N101640 & N102395 is not eligible because the two did not have the full trab_priced_date

    N108876 is not eligible as a duplicate because the tran_effective_dates are more than 20 days apart.

    Your help would be much appreciated.

    BEGIN
      EXECUTE IMMEDIATE 'DROP TABLE samp_data';
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE = -942 THEN
          DBMS_OUTPUT.put_line('');
        ELSE
          RAISE;
        END IF;
    END;
    /
    
    
    CREATE TABLE samp_data (
      ACCOUNT_NUM             VARCHAR2(17),
      TRAN_ID                 NUMBER(10),
      TRAN_TYPE               VARCHAR2(50),
      TRAN_EFFECTIVE_DATE     TIMESTAMP(6),
      TRAN_PROCESSED_DATE     TIMESTAMP(6),
      TRAN_PRICED_DATE        TIMESTAMP(6),
      TRAN_AMOUNT             NUMBER(13,2)
      );
    
    
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N100283',140119178,'Regular With',to_timestamp('01-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.34.235000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),200);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N100283',140158525,'Regular With',to_timestamp('13-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.39.14.090000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,200);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N100284',140118826,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.19.072000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('20-MAY-15 03.25.05.438000000 AM','DD-MON-RR HH.MI.SS.FF AM'),450);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N100284',140158120,'Regular With',to_timestamp('06-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.42.064000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),450);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N101640',140118957,'Regular With',to_timestamp('18-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.25.015000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,120);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N101640',140158278,'Regular With',to_timestamp('22-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.56.228000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,130);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102395',140118842,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.19.665000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,250);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102395',140158235,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.53.093000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,250);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102396',140118823,'Regular With',to_timestamp('09-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.18.931000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('19-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102396',140158099,'Regular With',to_timestamp('16-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.39.443000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),750);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102827',140118850,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('22-MAY-15 07.00.20.045000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,157.84);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N102827',140158118,'Regular With',to_timestamp('03-JUN-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('26-MAY-15 08.38.41.861000000 AM','DD-MON-RR HH.MI.SS.FF AM'),null,157.84);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N108876',139840720,'Regular With',to_timestamp('01-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('11-MAY-15 08.35.34.646000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('20-MAY-15 03.25.05.438000000 AM','DD-MON-RR HH.MI.SS.FF AM'),1000);
    Insert into samp_data (ACCOUNT_NUM,TRAN_ID,TRAN_TYPE,TRAN_EFFECTIVE_DATE,TRAN_PROCESSED_DATE,TRAN_PRICED_DATE,TRAN_AMOUNT) values ('N108876',139889880,'Regular With',to_timestamp('22-MAY-15 12.00.00.000000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('12-MAY-15 08.49.29.080000000 AM','DD-MON-RR HH.MI.SS.FF AM'),to_timestamp('21-MAY-15 03.26.18.954000000 AM','DD-MON-RR HH.MI.SS.FF AM'),1000);
    
    select * from samp_data
    ORDER BY account_num, tran_effective_date, tran_processed_date;
    

    Hello

    Here's one way:

    SELECT *
    FROM   samp_data m
    WHERE  EXISTS (
               SELECT 1
               FROM   samp_data o
               WHERE  o.account_num = m.account_num
               AND    o.tran_effective_date BETWEEN m.tran_effective_date - INTERVAL '20' DAY
                                                AND m.tran_effective_date + INTERVAL '20' DAY
               AND    o.tran_processed_date BETWEEN m.tran_processed_date - INTERVAL '10' DAY
                                                AND m.tran_processed_date + INTERVAL '10' DAY
               AND    o.tran_priced_date IS NOT NULL
               AND    o.tran_id <> m.tran_id
           )
    AND    tran_priced_date IS NOT NULL
    ;

    I guess that tran_id is unique.

    The EXISTS subquery returns TRUE if there is at least one other row similar to the row under study. The condition

    o.tran_id <> m.tran_id

    guarantees that it is another row, not the same row.

  • Need help to identify the application responsible for the inserts...

    Hello

    I'm an MSSQL guy and Oracle is still a little mysterious to me. I'm looking for help tracking down which application in an environment is responsible for particular inserts.

    For example, I have 10 different programs running and inserting values to a DB.  I need to know which of them inserts a '0' in a particular column.

    I found a trigger that will tell me the user; unfortunately these programs use a shared credential, so that does not help me... here is what I have:

    CREATE OR REPLACE TRIGGER check_for_zero_insert
    AFTER INSERT
    ON data_table_0001
    FOR EACH ROW
    DECLARE
        v_username VARCHAR2(10);
    BEGIN
        -- Find the user name of the session doing the INSERT on the table
        SELECT user INTO v_username
        FROM dual;

        -- Insert a record into the audit table
        INSERT INTO audit_table
          (instance,
           seems,
           username)
        VALUES
          (:new.instance,
           :new.seems,
           v_username);
    END;

    But once again, the user name will not help me; I need the originating source if possible (the name of the executable, a PID, or something to identify the specific application on the other side).

    There are other columns of information in v$session - you might find PROGRAM, MACHINE or PROCESS (the client OS process id) useful.

    DECLARE
        v_user    VARCHAR2(30);
        v_program VARCHAR2(64);
        v_machine VARCHAR2(64);
        v_process VARCHAR2(24);
    BEGIN
        SELECT osuser, program, machine, process
        INTO   v_user, v_program, v_machine, v_process
        FROM   v$session
        WHERE  sid = SYS_CONTEXT('userenv', 'SID');
    END;
    /
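
    Putting the two together, a sketch of an audit trigger that captures the client program and machine from v$session (assumptions: some_column is the column being watched for zeros, audit_table has matching columns, and the trigger owner has SELECT granted directly on v$session):

    CREATE OR REPLACE TRIGGER check_for_zero_insert
    AFTER INSERT ON data_table_0001
    FOR EACH ROW
    WHEN (NEW.some_column = 0)   -- hypothetical watched column
    DECLARE
        v_program v$session.program%TYPE;
        v_machine v$session.machine%TYPE;
    BEGIN
        -- identify this session's client executable and host
        SELECT program, machine
        INTO   v_program, v_machine
        FROM   v$session
        WHERE  sid = SYS_CONTEXT('userenv', 'SID');

        INSERT INTO audit_table (username, program, machine)   -- hypothetical audit columns
        VALUES (user, v_program, v_machine);
    END;
    /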

  • Deleting duplicate lines

    Hello
    We need help removing duplicate rows from a table please.

    We get the following table:
    SSD@ermd> desc person_pos_history
     Name                                                                     Null?    Type
     ------------------------------------------------------------------------ -------- ------------------------
    
     PERSON_POSITION_HISTORY_ID                                               NOT NULL NUMBER(10)
     POSITION_TYPE_ID                                                         NOT NULL NUMBER(10)
     PERSON_ID                                                                NOT NULL NUMBER(10)
     EVENT_ID                                                                 NOT NULL NUMBER(10)
     USER_INFO_ID                                                                      NUMBER(10)
     TIMESTAMP                                                                NOT NULL DATE
    We discovered that a few person_ids are repeated for a particular event (3):
    select PERSON_ID, count(*)
    from person_pos_history
    group by PERSON_ID, EVENT_ID
    having event_id=3
         and count(*) > 1
    order by 2
    
     PERSON_ID   COUNT(*)
    ---------- ----------
        217045        356
        216993        356
        226198        356
        217248        364
        118879        364
        204471        380
        163943        384
        119347        384
        208884        384
        119328        384
        218442        384
        141676        480
        214679        495
        219149        522
        217021        636
    If we look at the first person_id, 217045, we can see that it repeats 356 times for event_id 3.
    SSD@ermd> select POSITION_ASSIGNMENT_HISTORY_ID, POSITION_TYPE_ID, PERSON_ID,EVENT_ID, to_char(timestamp, 'YYYY-MM-DD HH24:MI:SS')
      2  from person_pos_history
      3  where EVENT_ID=3
      4  and person_id=217045
      5  order by timestamp;
    
       PERSON_POSITION_HISTORY_ID  POSITION_TYPE_ID  PERSON_ID   EVENT_ID TO_CHAR(TIMESTAMP,'
    ------------------------------ ---------------- ---------- ---------- -------------------
                            222775               38     217045         03 2012-05-07 10:29:49
                            222774               18     217045         03 2012-05-07 10:29:49
                            222773                8     217045         03 2012-05-07 10:29:49
                            222776                2     217045         03 2012-05-07 10:29:49
                            300469               18     217045         03 2012-05-07 10:32:05
    ...
    ...
                           4350816               38     217045         03 2012-05-08 11:12:43
                           4367970                2     217045         03 2012-05-08 11:13:19
                           4367973                8     217045         03 2012-05-08 11:13:19
                           4367971               18     217045         03 2012-05-08 11:13:19
                           4367972               38     217045         03 2012-05-08 11:13:19
    
    356 rows selected.
    It is safe to assume that the row per person and event with the earliest timestamp is the one that was loaded first; consequently, that is the one we want to keep, and the rest should be deleted.

    Could you kindly help us with the SQL for the removal of the duplicates, please.

    Regards

    Hello

    rsar001 wrote:
    ... We must keep one row of data for each unique position_type_id. The query you kindly provided deletes rows keeping only one row for person_id 119129, regardless of the position_type_id.

    In fact, the statement that I posted earlier keeps one row for each distinct combination of person_id and event_id; that is what the analytic PARTITION BY clause means. That would have been clearer if you had posted more varied sample values. You posted a rather large sample set, about 35 rows, but every row has the same person_id, and every row has the same event_id. In your real table, you probably have different values in these columns. If so, you should test with data that looks more like your real table, with different values in these columns.

    How can we change the query so that it takes the position_type_id into account before deleting the duplicate rows, please?

    You can include position_type in the analytic PARTITION BY clause. Maybe that's what you want:

    ...     ,     ROW_NUMBER () OVER ( PARTITION BY  person_id
                                       ,             event_id
                                       ,             position_type     -- *****  NEW  *****
                                       ORDER BY      tmstmp
                                     )         AS r_num
    ...
    

    Without CREATE TABLE and INSERT statements for your sample data, and the results you want from that data, I'm only guessing.
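
    For completeness, a sketch of the resulting delete under the assumptions above (column names taken from the posted DESC output; it keeps the earliest row per person_id, event_id and position_type_id):

    DELETE FROM person_pos_history
    WHERE  rowid NOT IN (
             SELECT MIN(rowid) KEEP (DENSE_RANK FIRST ORDER BY timestamp)
             FROM   person_pos_history
             GROUP  BY person_id, event_id, position_type_id
           );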

  • Need to delete duplicate lines

    I have two tables, MEMBER_ADRESS and MEMBER_ORDER. MEMBER_ADRESS has rows that are duplicates based on MEMBER_ID and the computed MATCH_VALUE. These duplicate ADDRESS_IDs have been used in MEMBER_ORDER. I need to remove the duplicates from MEMBER_ADRESS, making sure I keep the one with PRIMARY_FLAG = 'Y', and update MEMBER_ORDER.ADDRESS_ID to the MEMBER_ADRESS.ADDRESS_ID I kept.

    I'm on 11gR1.

    Thanks for the help.
    CREATE TABLE MEMBER_ADRESS
    (
      ADDRESS_ID               NUMBER,
      MEMBER_ID                NUMBER,
      ADDRESS_1                VARCHAR2(30 BYTE),
      ADDRESS_2                VARCHAR2(30 BYTE),
      CITY                     VARCHAR2(25 BYTE),
      STATE                    VARCHAR2(2 BYTE),
      ZIPCODE                  VARCHAR2(10 BYTE),
      CREATION_DATE            DATE,
      LAST_UPDATE_DATE         DATE,
      PRIMARY_FLAG             CHAR(1 BYTE),
      ADDITIONAL_COMPANY_INFO  VARCHAR2(40 BYTE),
      MATCH_VALUE              NUMBER(38) GENERATED ALWAYS AS (TO_NUMBER( REGEXP_REPLACE ("ADDITIONAL_COMPANY_INFO"||"ADDRESS_1"||"ADDRESS_2"||"CITY"||"STATE"||"ZIPCODE",'[^[:digit:]]')))
    );
    
    
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (200, 30, '11 Hourse Rd.', 
        '58754', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:10', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 1158754);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (230, 12, '101 Banks St', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:35:42', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (232, 12, '101 Banks Street', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:41:15', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (228, 12, '101 Banks St.', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:38:19', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (221, 25, '881 Green Road', 
        '58887', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:18', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 88158887);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (278, 28, '2811 Brown St.', 
        '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:36:11', 'MM/DD/YYYY HH24:MI:SS'), 'N', 281158224);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (280, 28, '2811 Brown Street', 
        '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:45:00', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 281158224);
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ADDRESS_2, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
     Values
       (300, 12, '3421 West North Street', 'x', 
        '58488', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:42:04', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 342158488);
    COMMIT;
    
    
    CREATE TABLE MEMBER_ORDER
    (
      ORDER_ID    NUMBER,
      ADDRESS_ID  NUMBER,
      MEMBER_ID   NUMBER
    );
    
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (3, 200, 30);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (4, 230, 12);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (5, 228, 12);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (6, 278, 28);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (8, 278, 28);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (10, 230, 12);
    COMMIT;

    Chris:

    In fact, it's a little trickier than it looks at first glance, but it seems to work, at least on your sample data.

    The update, with before and after views of the sample data:

    SQL> SELECT * FROM member_adress;
    
    ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
    ---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
           228         12 101 Banks St.          08/11/2000 10:56:25 am 08/12/2005 10:38:19 am N    10158487
           230         12 101 Banks St           08/11/2000 10:56:25 am 08/12/2006 10:35:42 am N    10158487
           232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
           300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
           221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
           278         28 2811 Brown St.         08/11/2000 10:56:25 am 08/12/2006 10:36:11 am N   281158224
           280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
           200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754
    
    SQL> SELECT * FROM member_order
      2  ORDER BY member_id, order_id;
    
      ORDER_ID ADDRESS_ID  MEMBER_ID
    ---------- ---------- ----------
             4        228         12
             5        230         12
            10        230         12
            11        232         12
            12        300         12
             6        278         28
             8        278         28
             3        200         30
    
    SQL> UPDATE member_order mo
      2  SET address_id = (SELECT address_id
      3                    FROM (SELECT address_id, member_id, match_value
      4                          FROM (SELECT address_id, member_id, match_value,
      5                                        ROW_NUMBER () OVER (PARTITION BY member_id, match_value
      6                                                            ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
      7                                                                          ELSE 2 END,
      8                                                                    GREATEST(NVL(creation_date,TO_DATE('1/1/0001','MM/DD/YYYY')),
      9                                                                             NVL(last_update_date,TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
     10                                FROM member_adress)
     11                          WHERE rn = 1) ma
     12                    WHERE mo.member_id = ma.member_id and
     13                          ma.match_value = (SELECT match_value from member_adress maa
     14                                            WHERE maa.address_id = mo.address_id));
    
    SQL> SELECT * FROM member_order
      2  ORDER BY member_id, order_id;
    
      ORDER_ID ADDRESS_ID  MEMBER_ID
    ---------- ---------- ----------
             4        232         12
             5        232         12
            10        232         12
            11        232         12
            12        300         12
             6        280         28
             8        280         28
             3        200         30
    

    Then, to remove the duplicates, something like:

    SQL> DELETE FROM member_adress
      2  WHERE rowid NOT IN (SELECT rowid
      3                      FROM (SELECT rowid,
      4                                   ROW_NUMBER () OVER (PARTITION BY member_id, match_value
      5                                                       ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
      6                                                                     ELSE 2 END,
      7                                                                GREATEST(NVL(creation_date,TO_DATE('1/1/0001','MM/DD/YYYY')),
      8                                                                         NVL(last_update_date,TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
      9                            FROM member_adress)
     10                          WHERE rn = 1);
    
    SQL> SELECT * FROM member_adress;
    
    ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
    ---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
           232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
           300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
           221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
           280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
           200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754
    

    John

  • Deleting duplicate line

    Hello

    I have a tricky situation.

    my query is like this:
    select count(*), source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode, proj_code, res_code, role_code, job_id, extl_id, taskid, rate_curr, cost_curr,ext_source_id, chg_code, inp_type
    from transimp
    where ttype = 'X'
    and res_code = 'NLAB'
    and tdate >= '01-MAR-2009'
    group by source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode, proj_code, res_code, role_code, job_id, extl_id, taskid, rate_curr, cost_curr,ext_source_id, chg_code, inp_type
    having count(*) > 1
    This query returns 700 records with count(*) = 3,
    which means there are duplicated records in the table (please note we do have a primary key, but these records are duplicates according to the business definition).

    I want to remove these duplicates, i.e. after the deletion, if I run the same query again it must return no rows (every count(*) must be 1).

    any thoughts?

    Thanks in advance.

    Dear Caitanya!

    Here is a DELETE statement that deletes duplicate rows:

    
    DELETE FROM our_table
    WHERE rowid NOT IN
    (SELECT MIN(rowid)
     FROM our_table
     GROUP BY column1, column2, column3...);
    

    Here column1, column2, column3, ... are the columns that define a duplicate for each record.
    Be sure to replace our_table with the name of the table from which you want to remove duplicate rows. GROUP BY is used on the columns that make up the duplicate key. This statement deletes every row in each group except the one with the lowest rowid.
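
    Applied to the transimp table from the question, a sketch that reuses the GROUP BY list and filters from the duplicate-finding query above (untested against the real data):

    DELETE FROM transimp
    WHERE  ttype = 'X'
    AND    res_code = 'NLAB'
    AND    tdate >= DATE '2009-03-01'
    AND    rowid NOT IN
           (SELECT MIN(rowid)
            FROM   transimp
            WHERE  ttype = 'X'
            AND    res_code = 'NLAB'
            AND    tdate >= DATE '2009-03-01'
            GROUP  BY source, ttype, tdate, qty, rate, cost, charge, impstatus,
                      errorcode, proj_code, res_code, role_code, job_id, extl_id,
                      taskid, rate_curr, cost_curr, ext_source_id, chg_code, inp_type);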

    I hope this could be of help to you!

    Yours sincerely

    Florian W.
