Find duplicate records on the key fields.

Hi all

My table x has five VARCHAR fields: A, B, C, D and E. The four fields A, B, C, D are key fields.

I would like to find the rows that are duplicated on the fields A, B, C, D.

Please suggest a query.

Thank you
KSG

Hello
Simply:

SELECT A, B, C, D, Count(*)
  FROM your_table
 GROUP BY A, B, C, D HAVING Count(*) > 1;
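
If you also need to see the entire duplicated rows (including the non-key column E), a common follow-up is an analytic count. This is only a sketch, assuming the table really is named x with columns A to E as described above:

SELECT a, b, c, d, e
  FROM (SELECT t.*,
               COUNT(*) OVER (PARTITION BY a, b, c, d) AS dup_cnt
          FROM x t)
 WHERE dup_cnt > 1;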

Tags: Database

Similar Questions

  • finding duplicate records in the DB table, or transposing the data

    Hello

    I have a question...

    Key | UID | Start Dt  | End Dt    | Desc
    ----+-----+-----------+-----------+-------
      1 | 101 | 12-Mar-09 | 30-May-09 | UID101
      2 | 101 | 01-Jan-09 | 25-Feb-09 | UID101
      3 | 102 | 13-Mar-09 | 30-Mar-09 | UID102
      4 | 103 | 13-Mar-09 | 30-Mar-09 | UID103
      5 | 103 | 13-Mar-09 | 01-Apr-09 | UID103
      6 | 104 | 13-Mar-09 | 30-May-09 | UID104
      7 | 104 | 25-Feb-09 | 29-May-09 | UID104
      8 | 105 | 15-Feb-09 | 01-Mar-09 | UID105
      9 | 105 | 01-Apr-09 | 30-May-09 | UID105

    The query must identify which UIDs are duplicates in the above data, which is stored in the same form in a table. The definition of a duplicate UID is:

    (1) The UID repeats itself (appears on at least two records), e.g. 101, 103, 104 and 105.
    (2) Each record for the UID has a start date and an end date.
    (3) The date ranges for that UID overlap. For example, key #4 has UID 103 with dates 13-Mar-09 to 30-Mar-09, and there is also another record, key #5, with UID 103 and dates 13-Mar-09 to 01-Apr-09. The date ranges of key #4 and key #5 overlap (intersect), so this UID is a duplicate UID by definition.

    Of the data above, only UIDs 103 and 104 fall under this definition and should be selected: UID 102 has only a single row, the two date ranges of UID 105 are mutually exclusive (they do not overlap), and the same is true for UID 101.

    Is there a DB function available that I can make use of?


    I do not want to delete the records or the duplicate records.

    I need a report to display these duplicate records.

    It would also be good if I could get the data transposed per UID, i.e. from

    4 | 103 | 13-Mar-09 | 30-Mar-09 | UID103
    5 | 103 | 13-Mar-09 | 01-Apr-09 | UID103

    to

    UID | Start Dt 1 | End Dt 1  | Start Dt 2 | End Dt 2
    103 | 13-Mar-09  | 30-Mar-09 | 13-Mar-09  | 01-Apr-09

    Any advice or ideas would be great.

    Thank you...

    It can also be done without Analytics:

    WITH test_data AS (
      SELECT  1 AS KEY, 101 AS UD, TO_DATE('03/12/2009','MM/DD/YYYY') AS START_DT, TO_DATE('05/30/2009','MM/DD/YYYY') AS END_DT, 'UD101' AS DSC FROM DUAL UNION ALL
      SELECT  2 AS KEY, 101 AS UD, TO_DATE('01/01/2009','MM/DD/YYYY') AS START_DT, TO_DATE('02/25/2009','MM/DD/YYYY') AS END_DT, 'UD101' AS DSC FROM DUAL UNION ALL
      SELECT  3 AS KEY, 102 AS UD, TO_DATE('03/13/2009','MM/DD/YYYY') AS START_DT, TO_DATE('03/30/2009','MM/DD/YYYY') AS END_DT, 'UD102' AS DSC FROM DUAL UNION ALL
      SELECT  4 AS KEY, 103 AS UD, TO_DATE('03/13/2009','MM/DD/YYYY') AS START_DT, TO_DATE('03/30/2009','MM/DD/YYYY') AS END_DT, 'UD103' AS DSC FROM DUAL UNION ALL
      SELECT  5 AS KEY, 103 AS UD, TO_DATE('03/13/2009','MM/DD/YYYY') AS START_DT, TO_DATE('04/01/2009','MM/DD/YYYY') AS END_DT, 'UD103' AS DSC FROM DUAL UNION ALL
      SELECT  6 AS KEY, 104 AS UD, TO_DATE('03/13/2009','MM/DD/YYYY') AS START_DT, TO_DATE('05/30/2009','MM/DD/YYYY') AS END_DT, 'UD104' AS DSC FROM DUAL UNION ALL
      SELECT  7 AS KEY, 104 AS UD, TO_DATE('02/25/2009','MM/DD/YYYY') AS START_DT, TO_DATE('05/29/2009','MM/DD/YYYY') AS END_DT, 'UD104' AS DSC FROM DUAL UNION ALL
      SELECT  8 AS KEY, 105 AS UD, TO_DATE('02/15/2009','MM/DD/YYYY') AS START_DT, TO_DATE('03/01/2009','MM/DD/YYYY') AS END_DT, 'UD105' AS DSC FROM DUAL UNION ALL
      SELECT  9 AS KEY, 105 AS UD, TO_DATE('04/01/2009','MM/DD/YYYY') AS START_DT, TO_DATE('05/30/2009','MM/DD/YYYY') AS END_DT, 'UD105' AS DSC FROM DUAL
    )
    select  t1.ud,
            t1.key, t1.start_dt, t1.end_dt,
            t2.key, t2.start_dt, t2.end_dt
    from    test_data t1,
            test_data t2
    where   t1.ud = t2.ud
    and     t1.key < t2.key
    and     ((t1.end_dt - t1.start_dt) + (t2.end_dt - t2.start_dt)) >
            (greatest(t1.end_dt, t2.end_dt) - least(t1.start_dt, t2.start_dt))
    /
    

    Result:

            UD        KEY START_DT   END_DT            KEY START_DT   END_DT
    ---------- ---------- ---------- ---------- ---------- ---------- ----------
           103          4 13-03-2009 30-03-2009          5 13-03-2009 01-04-2009
           104          6 13-03-2009 30-05-2009          7 25-02-2009 29-05-2009
    

    In addition, you may need to adjust the date comparison a little, depending on whether you consider two periods where the end_date of the first equals the start_date of the second to be duplicates or not.
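
    For example, if two periods that merely touch (first end_dt equal to second start_dt) should not count as duplicates, the overlap test can also be written with explicit comparisons instead of the length arithmetic. This is just a sketch reusing the test_data above; it returns the same 103 and 104 rows for this data:

    select  t1.ud, t1.key, t1.start_dt, t1.end_dt,
            t2.key, t2.start_dt, t2.end_dt
    from    test_data t1,
            test_data t2
    where   t1.ud = t2.ud
    and     t1.key < t2.key
    and     t1.start_dt < t2.end_dt   -- change < to <= if touching
    and     t2.start_dt < t1.end_dt   -- boundaries should count as overlap
    /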

    Published by: tijmen on December 21, 2009 06:17

  • How to find duplicates of a field value?

    How to find duplicates of a field value? For example - in a field, I have values like {123,345,346,123}, now I want to remove the duplicate value in the field so that my output looks like {123,345,346}

    If it's an array you want to deduplicate then here is a script [for use in the Script Processor] I prepared earlier:

    var result = new Array();
    var added = new Object();

    if (input1[0] != null)
    {
        for (var i = 0; i < input1[0].length; i++)
        {
            var item = input1[0][i];
            if (!added[item])
            {
                added[item] = 1;
                result[result.length] = item;
            }
        }
    }

    output1 = result;

    Kind regards

    Nick

  • ADF | Duplicate validation on a field.

    Hello

    Jdev: 11.1.1.1.6.0

    I have a requirement to allow the user to edit the data in an editable table. My question is: for one of the columns, I should not allow the user to enter a duplicate value, at the JSF page level.

    It is not a primary key field. How can I validate that column for duplicates on click of the Save button?

    Validation should work only at the page level, not at the DB or OS level.

    Where there is a duplicate, it should show an error message.

    I'm looking for Java bean code, instead of using the unique key validator in the entity's business rules -> validators.

    Please help me with the Java code if anyone knows how.

    Thank you

    Hello

    Finally I got the answer, thank you for helping me.

    The code I used is:

    private void ItemIdValidator() {
        P2PWebAMImpl am = (P2PWebAMImpl) resolvElDC("P2PWebAMDataControl");
        PoShipmentLinesVOImpl shipmentlineView2 = (PoShipmentLinesVOImpl) am.getPoShipmentLines2();
        DCIteratorBinding dciter = (DCIteratorBinding) bindings.get("PoShipmentLines2Iterator");

        // Value of ItemId on the row the user is editing
        Row r = dciter.getCurrentRow();
        Number itemidValue = (Number) r.getAttribute("ItemId");

        // Count the rows in the view object that already carry this ItemId
        Row[] filteredRowsInRange = shipmentlineView2.getFilteredRows("ItemId", itemidValue);
        int i = filteredRowsInRange.length;

        String msg = "ItemId with the same number found. Please select another ItemId.";

        if (i > 1) {
            JSFUtils.addFacesErrorMessage(regClientIDPrefix + msg);
            throw new ValidatorException(new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, null));
        }
    }

    private Object resolvElDC(String data) {
        FacesContext fc = FacesContext.getCurrentInstance();
        Application app = fc.getApplication();
        ExpressionFactory elFactory = app.getExpressionFactory();
        ELContext elContext = fc.getELContext();
        ValueExpression valueExp =
            elFactory.createValueExpression(elContext, "#{data." + data + ".dataProvider}", Object.class);
        return valueExp.getValue(elContext);
    }

    Sainaba...

  • Alternative to find duplicate records

    Hello

    My requirement is to find duplicate records as in the sample below. But it takes a long time to generate the output when working with about 10 lakh (one million) records.

    Is there an alternative approach that avoids the join? Thanks in advance.

    with aaa as
    (select 101 as id, 1 as seq, 'asthma' as event, 'medical' as reporter from dual union
     select 101, 3, 'asthma', 'medi' from dual union
     select 101, 2, 'lag', 'meddi' from dual union
     select 102, 2, 'whooping', 'LP' from dual union
     select 102, 1, 'whooping', 'LPS' from dual union
     select 102, 4, 'whooping', 'LPWS' from dual union
     select 102, 3, 'ddd', 'dd' from dual union
     select 103, 1, 'asthma', 'm' from dual union
     select 103, 2, 'asta', 'm' from dual union
     select 104, 2, 'whooping', 'xx' from dual
    )
    select x.*
    from aaa x,
         (select id, event, count(*)
          from aaa
          group by id, event
          having count(*) > 1
         ) b
    where x.id = b.id
    and x.event = b.event

    Something along the lines of:

    with aaa as
    
    (select 101 as id, 1 as seq, 'asthma' as event, 'medical' as reporter from dual union
    
    select 101, 3, 'asthma', 'medi' from dual union
    
    select 101, 2, 'lag', 'meddi' from dual union
    
    select 102,2, 'whooping', 'LP' from dual union
    
    select 102,1, 'whooping', 'LPS' from dual union
    
    select 102,4, 'whooping', 'LPWS' from dual union
    
    select 102,3, 'ddd', 'dd' from dual union
    
    select 103, 1, 'asthma', 'm' from dual union
    
    select 103, 2, 'asta', 'm' from dual union
    
    select 104,2, 'whooping', 'xx' from dual
    
    )
    select * from (
    select aaa.* , count(*) over (partition by id,event) rn from aaa
    ) where rn > 1;
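
    If you only need the duplicated (id, event) combinations themselves, and not every underlying row, a plain aggregate avoids both the self-join and the analytic pass. A small sketch on the same aaa test data:

    select id, event, count(*) cnt
    from aaa
    group by id, event
    having count(*) > 1;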
    

    Hope this helps

    Alvinder

  • How to find duplicate records in a table, then delete them.

    Hi all

    I'm working on an 11gR2 database under Linux. Recently, we created a unique index on two columns in place of a single-column index. When we try to create this index in pre-production and prod, we get an error message saying that duplicate values were found. Now my team has asked me to write a PL/SQL package or procedure to find these duplicate values and remove them, or any other way to do this for them. But I'm not familiar with PL/SQL, or with how to perform this task at the data level.
    Please help me with this issue: how can I proceed?
    Thanks in advance for your help.

    Try this:

    
    CREATE TABLE z_test2
    AS
       SELECT 1 a, 'aaa' b FROM DUAL
       UNION ALL
       SELECT 1 a, 'aaa' b FROM DUAL
       UNION ALL
       SELECT 1 a, 'bbbb' b FROM DUAL
       UNION ALL
       SELECT 12 a, 'aaa' b FROM DUAL
       UNION ALL
       SELECT 12 a, 'aaa' b FROM DUAL
       UNION ALL
       SELECT 12 a, 'aaa' b FROM DUAL
       UNION ALL
       SELECT 13 a, 'aaa' b FROM DUAL;
    
    DELETE FROM z_test2 x
          WHERE EXISTS
                   (SELECT '*'
                      FROM (SELECT a,
                                   b,
                                   ROW_NUMBER ()
                                      OVER (PARTITION BY a, b ORDER BY a)
                                      rn
                              FROM z_test2) y
                     WHERE x.a = y.a AND x.b = y.b AND rn > 1);
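
    After running the delete (or before, to see what the unique index would complain about), a quick sanity check is to count the remaining duplicate combinations. Just a sketch against the same z_test2 test table:

    SELECT a, b, COUNT(*)
      FROM z_test2
     GROUP BY a, b
    HAVING COUNT(*) > 1;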
    
                     
    
  • Files or folders are not displayed when the drive is accessed via a UNC path (or mapped drive) and the machine is not joined to the domain

    We have some users who are mobile, and domain authentication across the site-to-site VPN does not always succeed.  For this reason, I give them UNC paths as a 'network place' shortcut, or otherwise map a drive through a script, for example:

    NET USE S: \\10.10.22.11\Sharename password /USER:domain\username /PERSISTENT:NO

    This normally gives the user access to the shares they want from outside the domain.

    However, there are times where a folder or file saved from within the domain is not accessible to the remote user via the link above.

    I generally assumed it was a matter of timing and replication, and if I browse from an affected computer and use the full path including the file name, the file opens. If, however, the full path with the file name is not used, the files or folders remain invisible.

    Since it is intermittent, I'm afraid I have little additional information.

    My hope is that this is common, or at least known, and that there is something I can do to alleviate the problem.

    Thanks in advance for any help.

    Stuart

    Hi Stuart,

    Thanks for posting your question on the Microsoft Community forums.

    Your question would be better suited to the professional audience on the TechNet forums.

    I would recommend posting your query in the TechNet forums.
     
    TechNet Forum
    http://social.technet.Microsoft.com/forums/en-us/home?category=w7itpro

    Thank you

  • How to find duplicates in the table

    I have a table with 3 columns.

    Table name - employee

    empcode  firstname  lastname
    123      XYZ        pk
    456      yzz        pk
    101      kkk        jk


    ALTER TABLE employee
    ADD CONSTRAINT employee_pk PRIMARY KEY
    (empcode, firstname, lastname);


    All three columns together are the primary key. We are migrating the data, and there are problems with the data: the combination of all three results in duplicates. The last column is expected to contain duplicate values, but the first two columns should not be duplicated, and a complete row of the table (the combination) should have no duplicates.

    I need a query to find the duplicates and validate the whole rows.

    (B-)

    select empcode,firstname,lastname,count(*)
    from employee
    group by empcode,firstname,lastname
    having count(*)>1;
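
    If the end goal is to actually add the primary key (as in the migration scenario above), a common follow-up is to delete the extra copies and keep one row per combination. This is only a sketch, assuming exact full-row duplicates and that keeping an arbitrary one of them is acceptable:

    DELETE FROM employee e
     WHERE e.rowid NOT IN (SELECT MIN(rowid)
                             FROM employee
                            GROUP BY empcode, firstname, lastname);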
    
  • ROW_NUMBER and duplicate records

    Hello

    I am trying to delete duplicate records. The code below works, but it returns all of the duplicate records, rather than only those with row number > 1:

    SELECT academic_period, load_week, sub_academic_period, person_uid, course_number,
           course_reference_number, rowid AS rid,
           row_number() over (partition by academic_period, load_week, sub_academic_period,
                              person_uid, course_number, course_reference_number
                              order by academic_period, load_week, sub_academic_period,
                              person_uid, course_number, course_reference_number)
    FROM cea
    WHERE (academic_period, load_week, sub_academic_period, person_uid, course_number,
           course_reference_number)
       IN (SELECT academic_period, load_week, sub_academic_period, person_uid, course_number,
                  course_reference_number
           FROM cea
           GROUP BY academic_period, load_week, sub_academic_period, person_uid, course_number,
                    course_reference_number
           HAVING COUNT(*) > 1)


    If I try to add 'rn' as the alias and filter with 'and rn > 1', I get ORA-00933: SQL command not properly ended:

    SELECT academic_period, load_week, sub_academic_period, person_uid, course_number,
           course_reference_number, rowid AS rid,
           row_number() over (partition by academic_period, load_week, sub_academic_period,
                              person_uid, course_number, course_reference_number
                              order by academic_period, load_week, sub_academic_period,
                              person_uid, course_number, course_reference_number) rn
    FROM cea
    WHERE (academic_period, load_week, sub_academic_period, person_uid, course_number,
           course_reference_number)
       IN (SELECT academic_period, load_week, sub_academic_period, person_uid, course_number,
                  course_reference_number
           FROM cea
           GROUP BY academic_period, load_week, sub_academic_period, person_uid, course_number,
                    course_reference_number
           HAVING COUNT(*) > 1)
      and rn > 1

    I tried removing the 'as' before "rn" and writing just "rn", which also gave me a syntax error. The 'and rn > 1' clause gets a syntax error either way. I have gone through a bunch of different web sites, and all of them indicate that the syntax I am using should work. However, any query I run in TOAD always errors when I include the 'rn > 1'.

    Any ideas? Thank you!

    Victoria

    You are mixing two ways of identifying duplicates.

    One way is HAVING:

    SELECT academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number
    FROM cea
    GROUP BY academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number
    HAVING COUNT(*) > 1
    

    That just tells you which combinations of your GROUP BY columns occur more than once.

    Another way is to use analytic functions:

    select *
    from (
      SELECT academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number,
             count(*) over (partition by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number) cnt
      FROM cea
    )
    where cnt > 1
    

    This will return all the duplicate records - if some combination occurs three times, then this will return all three rows.

    If you use the ROW_NUMBER() analytic function in place of the COUNT() analytic function, you get this:

    select *
    from (
      SELECT academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number,
             row_number() over (partition by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number order by rowid) rn
      FROM cea
    )
    where rn > 1
    

    All of the rows without duplicates will get rn = 1. Those with duplicates get rn = 1, rn = 2, and so on. Filtering on rn > 1 gets all the "unnecessary" records - the ones you want to remove.

    So a deletion might look like:

    delete cea
    where rowid in (
      select rid
      from (
        SELECT ROWID as rid,
               row_number() over (partition by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number order by rowid) rn
        FROM cea
      )
      where rn > 1
    )
    

    Or if you want to manually inspect before you delete ;-):

    select *
    from cea
    where rowid in (
      select rid
      from (
        SELECT ROWID as rid,
               row_number() over (partition by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number order by rowid) rn
        FROM cea
      )
      where rn > 1
    )
    

    In another answer, you can find a way to remove duplicates with HAVING.
    The point is that you do it either with HAVING or with ROW_NUMBER() - not both ;-)
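
    For completeness, a removal that identifies the duplicates with GROUP BY / HAVING rather than ROW_NUMBER() might look like this sketch (not necessarily the exact statement from that other answer); it keeps the row with the smallest ROWID per combination:

    delete cea
    where (academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number)
          in (
            select academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number
            from cea
            group by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number
            having count(*) > 1
          )
    and rowid not in (
            select min(rowid)
            from cea
            group by academic_period, load_week, sub_academic_period, person_uid, course_number, course_reference_number
          )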

  • How to populate a field on a page with the primary key value from the previous page

    I am trying to create a patient monitoring system for a group of doctors. There is an Add Patient form based on the Patient table (which has Patient_Id as primary key) which branches to a preoperative evaluation form. I would like to populate the Patient_Id field of the pre-surgical assessment with the Patient_Id generated by the Add Patient page during processing, which comes from the Patient_seq sequence. How can I do this? I tried to use a computation on the pre-operative evaluation form, but no matter what I put, the field remains empty.

    Hello

    If I understand you correctly, once the patient record is created you branch to the next page (the preoperative assessment form). First create a static hidden item on the patient form: *PX_PATIENT_ID_COPY*.

    Create an on-submit process (after computations and validations) on the patient form to get the newly inserted patient_id from the patient table. Here's the code:

    SELECT patient_id
      INTO :PX_PATIENT_ID_COPY
      FROM (SELECT patient_id FROM patient_table ORDER BY patient_id DESC)
     WHERE rownum = 1;

    I would not use the sequence to get the current value, because if the inserted record was later removed from your patient table, the current value of the sequence would not find a matching record in the patient table.

    Basically, when the record is inserted, the process gets the value and puts it into the hidden item.

    Pass the value of the hidden item to the next page by setting it in the page branch. Under the Action section of the page branch, set the 'Set these items' field to *:PX_PATIENT_ID* (the next page's item for patient_id), and type *&PX_PATIENT_ID_COPY.* in the 'With these values' field. Make sure that you include the dot (.) after *&PX_PATIENT_ID_COPY*.

    Try it now. Hope that helps.

    Kind regards

    Djelloul

    Blog: http://aspblog.whitepagesbd.com

    Web: http://tajuddin.whitepagesbd.com

  • Why does the loop skip fields?

    The PDF has several pages created from templates. My function is to search for all the fields with a given prefix and remove those fields, to visually clean the final document and reduce the size of the file. For some reason, the function must be called multiple times to find and remove all the fields, even though they all meet the same criteria. Here's the function:

    "Miss" is the prefix of the target
    function selfDestruct (theSecond) {}

    Create an empty variable for target field
    var myFullFieldName;

    Is the number of fields in the PDF file, it traverses the
    for (var i = 0; i < this.numFields; i ++) {}

    Gets the name of the current field
    var fieldInSight = this.getNthFieldName (i);

    Splits the name field into sections and put them in a table
    var fieldNamePieces = fieldInSight.split(".");

    Checks if the field is a target
    If (fieldNamePieces [2] == alinea2d) {}

    If true, it includes the name of the field
    myFullFieldName = fieldNamePieces [0] + '. ' + fieldNamePieces [1] + '. ' + fieldNamePieces [2] + "." + fieldNamePieces [3];
    Show me what is removed
    Console.println (myFullFieldName);
    //
    this.removeField (myFullFieldName);

    } / / end of the if
    } / / end of for
    } / / end of function

    My argument is 'single', 'double' and so on, to find and remove fields like the following (the first prefix varies, of course), for example:

    • P2.actions.single.LeftColumn
    • P2.actions.single.rightcolumn
    • P2.actions.single.resetTextBox
    • P2.actions.single.TopRow
    • P2.actions.single.BottomRow

    Whenever I try it (either from the console or from a button that executes the script), if there is a group of five (see above), it removes three the first time, the fourth on the second run, and the fifth/last field on the third try. It happens consistently and in the same order each time. I have a 'group' with only two fields sharing the same prefix, and each time it removes one, then the other the second time I call the function. I can't understand why it does this or how it determines the order. Whether there are two or twenty pages with these fields, it always happens in the same way and in the same order.

    It gets weirder: if I spawn new pages between calls to the function, it leap-frogs and gets the 'fourth' field it missed the last time as well as the fields on the newly created pages.

    Please enlighten me. Thanks in advance.

    It is an "implicit" table, Yes, and you should treat numFields-1-0.

  • How can I update several rows based on comparing fields in table 1 and table 2?

    I created the following SQL to test a method I found for updating records where the fields below match, but what ends up happening is that the update works the first time and then basically updates everything to NULL. I don't understand what I'm doing wrong. I am new to this.

    UPDATE SLS_HDR B
    SET (B.ORD_DT, B.SHIP_ADD_CD, B.INV_ADD_CD, B.LOB, B.STATUS,
         B.ORD_TYPE, B.HDR_ROUTE, B.PRICE_LIST, B.CUST_ORDER, B.REF_A, B.REF_B,
         B.ORD_REF, B.ORD_DISC, B.SREP, B.SREP2, B.PLAN_DEL_DT, B.TXTA, B.TXTB,
         B.INV_CONTACT, B.SHIP_CONTACT, B.SOLD_CONTACT, B.PAY_CONTACT, B.ORD_AMT, B.UPDATED_DT)
      = (SELECT
         A.ORD_DT, A.SHIP_ADD_CD, A.INV_ADD_CD, A.LOB, A.STATUS,
         A.ORD_TYPE, A.HDR_ROUTE, A.PRICE_LIST, A.CUST_ORDER, A.REF_A, A.REF_B,
         A.ORD_REF, A.ORD_DISC, A.SREP, A.SREP2, A.PLAN_DEL_DT, A.TXTA, A.TXTB,
         A.INV_CONTACT, A.SHIP_CONTACT, A.SOLD_CONTACT, A.PAY_CONTACT, A.ORD_AMT, SYSDATE
         FROM SLS_HDR_TEMP A
         WHERE
         A.FIN_COMP = B.FIN_COMP AND
         A.LOG_COMP = B.LOG_COMP AND
         A.ORD_NO = B.ORD_NO AND
         A.TRANS_DT = B.TRANS_DT AND
         A.BP_TYPE = B.BP_TYPE AND
         A.STATUS <> B.STATUS);

    Can someone advise?
    Thank you

    Published by: 903292 on December 19, 2011 13:15

    You don't have a WHERE clause on your UPDATE, so non-matched rows will be updated with NULL values.

    UPDATE SLS_HDR B
    SET ( B.ORD_DT, B.SHIP_ADD_CD, B.INV_ADD_CD, B.LOB, B.STATUS,
    B.ORD_TYPE, B.HDR_ROUTE, B.PRICE_LIST, B.CUST_ORDER, B.REF_A, B.REF_B,
    B.ORD_REF, B.ORD_DISC, B.SREP, B.SREP2, B.PLAN_DEL_DT, B.TXTA, B.TXTB,
    B.INV_CONTACT, B.SHIP_CONTACT, B.SOLD_CONTACT, B.PAY_CONTACT, B.ORD_AMT,B.UPDATED_DT )
    = (
              SELECT
              A.ORD_DT, A.SHIP_ADD_CD, A.INV_ADD_CD, A.LOB, A.STATUS,
              A.ORD_TYPE, A.HDR_ROUTE, A.PRICE_LIST, A.CUST_ORDER, A.REF_A, A.REF_B,
              A.ORD_REF, A.ORD_DISC, A.SREP, A.SREP2,A.PLAN_DEL_DT, A.TXTA, A.TXTB,
              A.INV_CONTACT, A.SHIP_CONTACT, A.SOLD_CONTACT,A.PAY_CONTACT, A.ORD_AMT, SYSDATE
              FROM
              SLS_HDR_TEMP A
              WHERE
              A.FIN_COMP = B.FIN_COMP AND
              A.LOG_COMP = B.LOG_COMP AND
              A.ORD_NO = B.ORD_NO AND
              A.TRANS_DT = B.TRANS_DT AND
              A.BP_TYPE = B.BP_TYPE AND
              A.STATUS= B.STATUS
         )
    WHERE EXISTS
    (
         select null from SLS_HDR_TEMP A
         WHERE
         A.FIN_COMP = B.FIN_COMP AND
         A.LOG_COMP = B.LOG_COMP AND
         A.ORD_NO = B.ORD_NO AND
         A.TRANS_DT = B.TRANS_DT AND
         A.BP_TYPE = B.BP_TYPE AND
         A.STATUS= B.STATUS
    )
    

    Or using MERGE:

    Merge into SLS_HDR B
    using SLS_HDR_TEMP A
    on (A.FIN_COMP = B.FIN_COMP AND
    A.LOG_COMP = B.LOG_COMP AND
    A.ORD_NO = B.ORD_NO AND
    A.TRANS_DT = B.TRANS_DT AND
    A.BP_TYPE = B.BP_TYPE AND
    A.STATUS= B.STATUS)
    when matched then update set
    -- STATUS is part of the ON clause, so it is not (and cannot be) updated here
    B.ORD_DT = A.ORD_DT, B.SHIP_ADD_CD = A.SHIP_ADD_CD, B.INV_ADD_CD = A.INV_ADD_CD,
    B.LOB = A.LOB, B.ORD_TYPE = A.ORD_TYPE, B.HDR_ROUTE = A.HDR_ROUTE,
    B.PRICE_LIST = A.PRICE_LIST, B.CUST_ORDER = A.CUST_ORDER, B.REF_A = A.REF_A,
    B.REF_B = A.REF_B, B.ORD_REF = A.ORD_REF, B.ORD_DISC = A.ORD_DISC,
    B.SREP = A.SREP, B.SREP2 = A.SREP2, B.PLAN_DEL_DT = A.PLAN_DEL_DT,
    B.TXTA = A.TXTA, B.TXTB = A.TXTB, B.INV_CONTACT = A.INV_CONTACT,
    B.SHIP_CONTACT = A.SHIP_CONTACT, B.SOLD_CONTACT = A.SOLD_CONTACT,
    B.PAY_CONTACT = A.PAY_CONTACT, B.ORD_AMT = A.ORD_AMT, B.UPDATED_DT = SYSDATE
    

    HTH...

    Thank you

  • Find duplicates using multiple criteria on a field - help...

    I have a table and I want to select the following fields:


    CUST_ID          NUMBER
    BANK             NUMBER
    CONNECTION_TYPE  VARCHAR2  (the client's type: A, B or C)
    STAMP_DATE       VARCHAR2  (example 072010, i.e. MMYYYY - not a real date/timestamp)


    Basically, if the CUST_ID appears twice, it's ok, as long as the CONNECTION_TYPE is not the same.

    So I need to find the duplicates on CUST_ID where the CONNECTION_TYPE is also the same.

    This would be fine:
    9122222     326     1      102010
    9122222     326     2      082010
    This would not be permitted:
    9122222     326     1      102010
    9122222     326     1      082010
    This is what I used, but I can't figure out how to add the second condition on CONNECTION_TYPE:


    select cust_id, stamp_date, connect_type
    from Table1
    where cust_id in
         (select cust_id
          from Table1
          group by cust_id
          having count(cust_id) > 1)
    order by cust_id;

    Thank you.

    Try this

    WITH TABLE1 AS (
        SELECT 9122222 CUST_ID,     326 STORE,
              '1' CONNECTION_TYPE,      '102010' STAMP_DATE from dual union all
        select 9122222,     326     ,'2',      '082010' from dual union all
        SELECT 9122222,     326,     '1',      '082010' FROM DUAL
    )
    SELECT CUST_ID, STORE, CONNECTION_TYPE, STAMP_DATE
    FROM (
        SELECT T1.*,
               COUNT(*) OVER ( PARTITION BY CUST_ID, STORE, CONNECTION_TYPE ) X
        FROM TABLE1 T1
    )
    WHERE X > 1
    ORDER BY cust_id;
    
    CUST_ID                STORE                  CONNECTION_TYPE STAMP_DATE
    ---------------------- ---------------------- --------------- ----------
    9122222                326                    1               102010
    9122222                326                    1               082010     
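
    Note that the partition above also includes STORE. If two rows with the same CUST_ID and CONNECTION_TYPE should count as duplicates even when the STORE differs, a variant (sketch only, on the same TABLE1 test data) is to partition on just those two columns:

    SELECT CUST_ID, STORE, CONNECTION_TYPE, STAMP_DATE
    FROM (
        SELECT T1.*,
               COUNT(*) OVER ( PARTITION BY CUST_ID, CONNECTION_TYPE ) X
        FROM TABLE1 T1
    )
    WHERE X > 1
    ORDER BY cust_id;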
    
  • Check for duplicate data entry in a multi-record block, on a required field

    Hi all
    I have a situation where I have to check for duplicate data entry (on an item which is a mandatory field, i.e. it cannot be skipped by the user without entering a value) while keying data into a multi-record block.

    For reference, I used logic such as:
    1> In a WHEN-VALIDATE-RECORD trigger of this block, I assign the value of the current item to an array-type (collection-type) variable.
    This trigger fires every time I leave the record, so it keeps assigning the current value, and this process continues.

    Then
    2> I wrote a WHEN-VALIDATE-ITEM trigger on the corresponding item (i.e. the trigger is at item level), which compares the value of the current item with the values stored in the array-type variable from the WHEN-VALIDATE-RECORD trigger. If the value of the current item matches a value stored in the array variable, I show the message 'Duplicate Record' and raise FORM_TRIGGER_FAILURE.


    This code works very well for checking duplicate values of this field in the multi-record block.

    The problem is that if the user enters a value in this field, then goes to the next field, enters a value there, and then presses the 'Enter Query' icon, the validation triggers fire. As a result, WHEN-VALIDATE-RECORD fires first and stores the value, and then WHEN-VALIDATE-ITEM fires, so it shows the 'Duplicate Record' message.


    Please give me code or a logical approach to solve this problem.



    Any other logic to solve this problem is also welcome.

    If you query the data and it shows the existing values in the block, then it should work as expected. But if you don't query, and just open the form and try to insert records, then for the first record it does not display this duplication message. For that case, you can write your own query to check for duplication against the table.
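
    For that "write your own query" check, a minimal sketch (the table and column names here are purely hypothetical, since the actual block and table names are not given) is to count matching rows before accepting the value:

    -- Hypothetical names: base table MY_TABLE, mandatory column MY_COL,
    -- :BLOCK.MY_COL holding the value just keyed in.
    SELECT COUNT(*)
    FROM   my_table
    WHERE  my_col = :block.my_col;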

    For the image of the form, you can use any image upload site. Do a search on Google.

    -Clément

  • Find/replace across multiple records at the same time?

    Hello

    I wonder if there is a way to apply the Find/Change options I select to all of the records I have data-merged into my InDesign file.

    The placeholders will not import the spaces from my database. For example, if I have:

    InDesign template:  <<bird>>
                        <<beak>> <<beak value>>

    .csv:               bird, beak, beak value
                        Eagle, beak:, hung

    I get, in InDesign after the data merge:

    Eagle
    beak: hung

    No spaces came through with the merge.  If there is an option to preserve the spaces, that would be the simplest solution. However, I'm not finding such a command, so I would like to add a special character after my colon in the .csv and then use InDesign's Find/Change to replace this character in each of my files with a space or two.  Any ideas?

    Thank you.

    Edit: even on a single record, Find/Change is not applied to my output document either... It outputs the special character that I typed in the .csv fields instead of the space it is supposed to be transformed into.

    Post edited by: EX_Achilles

    Yes.

    Make sure that you set the scope of the Find/Change to Document. It may have been set to Selection or Story.
