duplicate data

Hi all
In most Discoverer reports I get the following duplicated data:


VALUE AMNT1 AMNT2 DAVID ANUM HELP
060635a $ $200,000 19 100 18 December 06-0-564
$ $200,000 19 100 18 December 06 0 565
$ $200,000 19 800 18 December 06-0-564
$ $200,000 19 800 18 December 06 0 565


Instead of appearing once, AID repeats twice. Any ideas?

Thank you

Hello
Thanks for letting me know. I'm glad that you found the problem and could fix it.

Happy to discover :-)

If my answer has helped you, please check the little box that says your question has been answered. That way others will know that this solution works.

Best wishes
Michael

Tags: Business Intelligence

Similar Questions

  • OWB Mapping, how to remove similar or duplicate data?

    Dear all,

    I intend to create a mapping to move data from my data source. I have this kind of data

    Employee_ID   Date         Shift   Type        fpin    fpout   workhour

    1234          29/08/2014   1       trip duty   08.00   17.00   8

    1234          29/08/2014   1                   07.45   17.30   8.45

    This table has no primary key or constraints. In the ETL process, the target must not be loaded with duplicate Employee_ID, Date and Shift combinations as shown above. When the data is loaded, only the rows that have a value in the Type column should be taken; therefore, only the first row should be loaded.

    I already use the deduplicator component to remove duplicate data, but it does not work in this case.

    Any idea how to filter such data in OWB?

    Best regards

    Akhmad H Gumas

    Hello

    There are several options here:

    (1) use an Aggregation operator grouping on Employee_ID (and perhaps on Date as well, since that seems to be your goal).

    All other attributes could use the MIN, MAX, FIRST or LAST aggregate functions.

    But this way it is difficult to get all of the data from the first row (the one where Type is filled).
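
    As a plain-SQL sketch of option (1), using the sample table ttt from the SQL example in option (2) further down; note that, as mentioned, the aggregated values may come from different source rows:

    select   Employee_ID,
             trunc(Date_w) as Date_w,    -- truncate the time part so both sample rows group together
             shift,
             max(Type_w)   as Type_w,    -- a non-null Type wins over null
             min(fpin)     as fpin,
             max(fpout)    as fpout,
             max(workhour) as workhour
    from     ttt
    group by Employee_ID, trunc(Date_w), shift;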

    (2) use analytic functions to assign a rank to each record, followed by a filter that keeps the first row only.

    This can be done with a sequence of operators like this:

    source_table -> expression -> filter

    using the following expression:

    INPUTGRP1 (Employee_ID, Type)

    OUTPUTGRP1 (DUPROWSEQ: row_number() over (partition by INGRP1.Employee_ID order by INGRP1.Type desc))

    The filter will have the following expression:

    INOUTGRP1.DUPROWSEQ = 1

    As an example, here is the equivalent SQL code:

    create table ttt (Employee_ID number, Date_w date, shift number, Type_w varchar2(20), fpin number, fpout number, workhour number);

    insert into ttt values (1234, sysdate, 1, 'travel duty', 8.0, 17.0, 8);
    insert into ttt values (1234, sysdate, 1, null, 7.45, 17.30, 8.45);

    commit;

    select * from
    (
    select row_number() over (partition by Employee_ID order by Type_w desc nulls last) DupRowSeq, a.*   -- nulls last so the row with Type filled ranks first
    from ttt a
    )
    where DupRowSeq = 1;

    Hope this helps you further.

    Regards,
    Bertram

  • Why does this code give me duplicate data

    Hi all,
    Why does this code give me duplicate data?
    SELECT (G.NAME_1 ||' '||G.NAME_2||' '||G.NAME_3||' '||G.NAME_4) AS NAME,
                  R.RES_NUM
    FROM    GUST G , RESERVATION R,ROOM_DETAILS S,ROOMS RR
    WHERE   G.RES_NUM = R.RES_NUM
    AND     R.RES_NUM = S.RES_NUM
    AND     RR.OCCUPIED = 'Y'
    RES_NUM
    --------
    1282
    1282
    1282
    1282
    1280
    1280
    1280
    1280
    1281
    1281
    1281
    1281
    1310
    1310
    1310
    1310
    
    16 rows selected

    Try this:

    SELECT DISTINCT (G.NAME_1 ||' '||G.NAME_2||' '||G.NAME_3||' '||G.NAME_4) AS NAME,
                  R.RES_NUM
    FROM    GUST G , RESERVATION R,ROOM_DETAILS S,ROOMS RR
    WHERE   G.RES_NUM = R.RES_NUM
    AND       R.RES_NUM = S.RES_NUM
    AND       RR.OCCUPIED = 'Y'
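
    For what it's worth, DISTINCT only hides the symptom here: ROOMS RR appears in the FROM list without any join condition to the other tables, so every matching row is multiplied by every occupied room (a cartesian product). A rough sketch of a joined version follows; RR.RES_NUM is an assumed join column, since the real ROOMS columns are not shown in the thread:

    SELECT  (G.NAME_1 ||' '||G.NAME_2||' '||G.NAME_3||' '||G.NAME_4) AS NAME,
            R.RES_NUM
    FROM    GUST G, RESERVATION R, ROOM_DETAILS S, ROOMS RR
    WHERE   G.RES_NUM  = R.RES_NUM
    AND     R.RES_NUM  = S.RES_NUM
    AND     RR.RES_NUM = R.RES_NUM   -- assumed join column; adjust to your schema
    AND     RR.OCCUPIED = 'Y'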
    
  • Check for duplicate data entry in a multi-record block, on a required field

    Hi all
    I have a situation where I have to check for duplicate data entry (on a field that is mandatory, i.e. the user cannot skip it without entering a value) while keying data into a multi-record block.

    As a reference, I used logic such as:
    1 > In a WHEN-VALIDATE-RECORD trigger of this block I assign the value of the current item to a variable of a collection (table) type.
    This trigger fires every time I leave the record, so it keeps storing the current value, and the process continues.

    Then
    2 > I wrote a WHEN-VALIDATE-ITEM trigger on the corresponding item (i.e. the trigger is at item level) that compares the value of the current item with the values stored in the collection variable by the WHEN-VALIDATE-RECORD trigger. If the value of the current item matches a stored value, I show the message 'Duplicate Record' and raise FORM_TRIGGER_FAILURE.


    This code works well for checking duplicate values of this field across the multi-record block.

    The problem is that if the user enters a value in this field, moves to the next field, enters a value there, and then presses the 'Enter Query' icon, the block validation triggers fire. As a result WHEN-VALIDATE-RECORD fires first and stores the value, and then WHEN-VALIDATE-ITEM fires, so it shows the 'Duplicate Record' message.


    Please give me code or a logical approach to solve this problem.



    Any other logic to solve this problem is also welcome

    If you query the data and it shows the existing values in the block, then it should work as expected. But if you don't execute a query and just open the form and try to insert records, then for the first record it will not display this duplication message. For that, you can write your own query to check for duplicates against the table.
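
    A minimal sketch of such a check in a WHEN-VALIDATE-ITEM trigger; the table name emp_table, the column dept_code and the item :emp_block.dept_code are placeholders, not names from the original form:

    DECLARE
        v_cnt NUMBER;
    BEGIN
        -- count committed rows that already hold the value being entered
        SELECT COUNT(*)
          INTO v_cnt
          FROM emp_table
         WHERE dept_code = :emp_block.dept_code;

        IF v_cnt > 0 THEN
            MESSAGE('Duplicate Record');
            RAISE FORM_TRIGGER_FAILURE;
        END IF;
    END;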

    For the form's image, you can use any image download site; a quick Google search will turn one up.

    -Clément

  • Duplicate data from the proxy server

    Hello

    I am working on a project and use Google Maps to mark several spots in a city, such as cabs or hospitals. For this I request a URL with three parameters: latitude, longitude and miles. Latitude and longitude describe your position, and the back end generates KML for the cabs within x miles around it.
    Now, when I make the first request, it generates a KML file and saves it on the server in a directory, and then I request another URL for that file.kml.

    If I request the KML a second time with different parameters, it follows the same steps and rewrites file.kml on the server. When I try to access file.kml on the second request, the BlackBerry proxy server returns the (previous) cached copy of file.kml because it thinks I am asking for the same file again. I cannot display the correct data if there are less than 5 minutes between two requests.

    How can I work around this?

    Any suggestion will be appreciated.

    Thank you and best regards,
    Puneet Kumar Dubey

    It seems that you are not using your random number in the URL args when you pass them as an argument.

    System.out.println("Accessing file at  :  "+"http://74.54.137.138:2039/Geocode_"+pin+"_"+randomNo+".kml");
                            String[] args = {"http://74.54.137.138:2039/Geocode_"+pin+".kml"};
    

    TO:

    System.out.println("Accessing file at  :  "+"http://74.54.137.138:2039/Geocode_"+pin+"_"+randomNo+".kml");
                            String[] args = {"http://74.54.137.138:2039/Geocode_"+pin+"_"+randomNo+".kml"};
    
  • Problems with duplicate DATA when the data file was added after the backup completes

    Hello

    I am facing a problem when running a database duplication with the RMAN DUPLICATE DATABASE command on a 10g database. If I perform the duplication from a full backup that is missing a data file that was added to the database after the full backup was taken, I get the following error message:
    Starting restore at 10-10-2009 18:00:38
    
    released channel: t1
    released channel: t2
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 10/10/2009 18:00:39
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06100: no channel to restore a backup or copy of datafile 43
    The redo log that was current at the time data file 43 was created is also available in the backups. It seems that RMAN cannot use the archived redo log information to restore the contents of data file 43. I guess that is because the failure is already reported in the RESTORE and not in the RECOVER phase, so the archived redo logs are not applied yet. I get the same message even if I take another backup of data file 43 (i.e. a backup that is not in the same backup set as the backup of all the other data files).
    Looking at the script that the DUPLICATE command produces, I guess RMAN reads the contents of the source database control file and tries to find a backup that contains all the data files to restore the auxiliary database; if such a backup is not found, it fails.


    Of course, if I try to perform a restore/recovery of the source database, it works without problem:
    RMAN> restore database;
    
    Starting restore at 13.10.09
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=156 devtype=DISK
    
    creating datafile fno=43 name=F:\ORA10\ORADATA\SOVDEV\SOMEDATAFILE01.DBF
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to F:\ORA10\ORADATA\SOVDEV\SYSTEM01.DBF
    .....
    Data file 43 is re-created and then redo is applied in the course of the recovery.

    So, does anyone know whether DUPLICATE DATABASE can use archived redo logs to recreate the contents of a data file, as a normal restore/recovery does? If not, it would be necessary to perform a full backup before each duplication whenever a data file has been added after the last full backup.

    Thanks in advance for any answers.

    Kind regards
    Swear

    Hi Swear,

    I hit exactly the same problem during duplication.
    Because we back up archive logs every 6 hours with RMAN, I added an additional run block with this script:
    run
    {
      backup incremental level 0
        format '%d_%s_%p_bk_%t'
        filesperset 4
        database not backed up;
    }

    (I also hit a bug in the catalog, which was resolved by patching the catalog database from 11.1.0.6 to 11.1.0.7.)

    This backs up only the data files that are not yet part of any RMAN backup, while skipping the data files for which a backup already exists.

    Kind regards

    Tycho

  • Is it possible in Windows 7 to create a CD copy of duplicate data

    Is it possible in Windows 7 to make a copy of a CD (I am referring to a data CD, not a music CD)

    Hello

    A data CD is pretty easy to copy.

    If you have just a single drive, you can insert the data CD, copy all of the files to a temporary folder, and then burn those files to a blank CD.

    If you have both a CD reader and a CD burner, you can select all the files on the CD in the reader, drag and drop them onto the burner, and perform the burn.

    I hope this helps.

    Thank you for using Windows 7

    Ronnie Vernon MVP
  • Select only the lines with the duplicate data in columns

    Hello

    I am trying to find duplicate rows here.  I have created a script that gives me the employee, the name, the person status, the job title, the end date and the union_id.

    select
    p.emp_no,
    p.surname || ', ' || p.forenames,
    p.person_status,
    j.job_title,
    j.end_date,
    j.union_id

    from person p
    inner join appointee_history ah on trim (ah.emp_no) = trim (p.emp_no) and ah.change_date = (select max (change_date) from appointee_history where ah.emp_no = emp_no)
    inner join job j on trim (j.job_title) = trim (ah.job_title) and trim (j.org_unit) = trim (ah.org_unit)
    where j.change_date = (select max (j2.change_date) from job j2
    where trim (j2.job_title) = trim (j.job_title) and trim (j.org_unit) = trim (j2.org_unit))

    and p.person_status like 'A%'
    and p.emp_no not like '9%'
    and j.union_id is not null
    order by j.union_id

    However, I am looking only for duplicates of the union_id. I tried a HAVING COUNT(*) > 1, but the emp_no has to stay distinct, and I get a "not a single-group group function" error.

    Can anyone recommend a way to do this please?   I would be very grateful.

    Hello

    2729819 wrote:

    ...  So the expected results.

    See you soon

    This is what the query I posted earlier produces, if you use the new table and columns and (unlike me) write everything out correctly:

    WITH got_cnt AS
    (
        SELECT  emp_no,
                person_status,
                function,
                union_id,
                COUNT (*) OVER (PARTITION BY union_id) AS cnt
        FROM    sample
    )
    SELECT    person_status, emp_no, union_id, function
    FROM      got_cnt
    WHERE     cnt > 1
    ORDER BY  union_id
    ;

    You could get the same results with the COUNT aggregate function, but this version should be faster, because it requires only a single pass through the table.
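
    For reference, a rough sketch of that aggregate alternative, reusing the placeholder table name sample from the query above; the join back to the table is the extra pass that makes it slower:

    SELECT    s.person_status, s.emp_no, s.union_id, s.function
    FROM      sample s
    JOIN     (SELECT union_id
              FROM   sample
              GROUP  BY union_id
              HAVING COUNT(*) > 1) d
        ON    d.union_id = s.union_id
    ORDER BY  s.union_id;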

  • Get only the first line of duplicate data

    Hello

    I have the following situation...
    I have a table called employees with columns (re, name, function), where "re" is the primary key.

    I import data into this table using an XML file. The problem is that some employees may be repeated inside this XML file, which means I'll have repeated "re" values, and that column is the primary key of my table.

    To work around this problem, I created a table called employees_tmp that has the same structure as employees but without the primary key constraint; this way I can import the XML into the employees_tmp table.

    What I need now is to copy the data from employees_tmp into employees, but for the cases where the "re" is repeated, I just want to copy the first row found.

    Just an example:

    EMPLOYEES_TMP
    ---------------------
    RE      NAME      FUNCTION
    0987    GABRIEL   ANALYST
    0987    GABRIEL   MANAGER
    0978    RANIERI   ANALYST
    0875    RICHARD   VICE PRESIDENT


    I want the data copied to employees to look like:

    EMPLOYEES
    ---------------------
    RE      NAME      FUNCTION
    0987    GABRIEL   ANALYST
    0978    RANIERI   ANALYST
    0875    RICHARD   VICE PRESIDENT

    How would I do that?

    I really appreciate any help.

    Thank you

    Try,

    SELECT re, NAME, FUNCTION
      FROM (SELECT re,
                   NAME,
                   FUNCTION,
                   ROW_NUMBER () OVER (PARTITION BY re ORDER BY NAME) rn
              FROM employees_tmp)
     WHERE rn = 1
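
    And, if it helps, a sketch of the full copy step built on the same query, assuming the employees columns really are re, NAME and FUNCTION as described:

    INSERT INTO employees (re, NAME, FUNCTION)
    SELECT re, NAME, FUNCTION
      FROM (SELECT re,
                   NAME,
                   FUNCTION,
                   ROW_NUMBER () OVER (PARTITION BY re ORDER BY NAME) rn
              FROM employees_tmp)
     WHERE rn = 1;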
    

    G.

  • to duplicate data

    Hello everyone, how do I duplicate a record in the database in ADF?

    Is there a specific method for that?

    You can use create with params:
    http://www.Oracle.com/technetwork/developer-tools/ADF/learnmore/13-create-with-params-169140.PDF

  • CDC inserting duplicate data into change tables

    I use CDC to capture changes on a 10.2.0.4 db for an 11.1.0.7 db.

    The source tables have primary key constraints, but the change tables have no constraints on them.

    I see many duplicate rows in the change tables, and these duplicates do not exist in the source.

    How can I get rid of these?

    Should I create constraints on the change table? If I do, and a duplicate record is then inserted, the whole process stops, doesn't it?
    Any suggestions?

    If you do two updates on a record, then CDC will record two changes against the primary key...

    This is how CDC works... If you don't want duplicates, do not use CDC (use Streams or triggers instead).
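
    If the requirement is only to consume the latest change per key, one option (rather than constraining the change table) is to rank the rows when querying it. A rough sketch, where emp_ct, emp_id and the ordering column cscn$ are placeholders for your change table, its key and its commit-SCN control column:

    SELECT *
      FROM (SELECT c.*,
                   ROW_NUMBER () OVER (PARTITION BY emp_id ORDER BY cscn$ DESC) rn
              FROM emp_ct c)
     WHERE rn = 1;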

  • Eliminate duplicates while counting dates

    Hi all

    Version: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0.


    How to count the dates and eliminate duplicate dates.

    For example:

    Name    From Date     To Date
    AAA     05/31/2014    06/13/2014
    AAA     06/14/2014    06/27/2014
    AAA     06/21/2014    07/20/2014
    AAA     07/21/2014    08/20/2014

    We want the output to be:

    Name    No. of days
    AAA     14
    AAA     14
    AAA     23   (in this case 7 days must be eliminated from '06/21/2014 - 07/20/2014' because those dates were already counted in the previous range '06/14/2014 - 06/27/2014', so we want the output 23)
    AAA     31

    Thank you

    Pradeep D.

    Not carefully tested... but it may give you an idea:

    -----

    WITH t
         AS (SELECT 'AAA' nm,
                    TO_DATE ('05/31/2014', 'MM/DD/YYYY') from_dt,
                    TO_DATE ('06/13/2014', 'MM/DD/YYYY') TO_DT
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('06/14/2014', 'MM/DD/YYYY'),
                    TO_DATE ('06/27/2014', 'MM/DD/YYYY')
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('06/21/2014', 'MM/DD/YYYY'),
                    TO_DATE ('07/20/2014', 'MM/DD/YYYY')
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('07/21/2014', 'MM/DD/YYYY'),
                    TO_DATE ('08/20/2014', 'MM/DD/YYYY')
               FROM DUAL),
         tt
         AS (SELECT nm,
                    from_dt,
                    to_dt,
                    (to_dt - from_dt) total_diff,
                    CASE
                       WHEN   LEAD (from_dt, 1)
                                 OVER (PARTITION BY nm ORDER BY from_dt)
                            - to_dt < 0
                       THEN
                            LEAD (from_dt, 1)
                               OVER (PARTITION BY nm ORDER BY from_dt)
                          - to_dt
                          - 1
                       ELSE
                          0
                    END
                       diff
               FROM t)
    SELECT nm,
           from_dt,
           to_dt,
             1
           + total_diff
           + LAG (diff, 1, 0) OVER (PARTITION BY nm ORDER BY from_dt) final_diff
      FROM tt
    

    Output:

    ------------

    NM    FROM_DT     TO_DT       FINAL_DIFF
    AAA   5/31/2014   6/13/2014   14
    AAA   6/14/2014   6/27/2014   14
    AAA   6/21/2014   7/20/2014   23
    AAA   7/21/2014   8/20/2014   31

    Cheers,

    Manik.

  • Page-level validation to prevent duplicate data entry into the database

    Hello

    Can someone please help me with this problem.

    I have a form with two items based on a table. I already have an item-level validation to check for nulls. Now, I would like to create a page-level validation to verify that duplicate data is not saved to the database. I would like to check the database when the user clicks the 'Create' button, to ensure they are not inserting duplicate records. If the data already exists, then display an error message and redirect them to another page. I use APEX 3.2.
    Thank you

    Hello

    Have you tried to write a PLSQL function to check for this?

    I've not tested this precisely, but something like this should work:

    (1) create a page-level validation
    (2) choose PL/SQL for the method
    (3) select "Function Returning Boolean" for the type

    For the validation code, you could do something like this:

    DECLARE
    
        v_cnt number;
    
    BEGIN
    
        select count(*)
        into v_cnt
        from
        your_table
        where
        col1 = :P1_field1 AND
        col2 = :P2_field2;
    
        if v_cnt > 0 then return false;
        else return true;
        end if;
    
    END;
    

    If the function returns false, your error message will be displayed.

    I'm not sure how you would handle redirecting to another page after this, though. Maybe just allow the user to try again with a different value (in case they made a mistake), or simply press a "Cancel" button instead of creating a new record.
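
    Independently of the APEX validation, a database-level unique constraint is often added as a safety net so duplicates are rejected even if the page validation is bypassed; a sketch using the your_table, col1 and col2 placeholders from the example above:

    ALTER TABLE your_table
        ADD CONSTRAINT your_table_uq UNIQUE (col1, col2);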

    Amanda.

  • Remove entries from duplicate contacts

    When I log in as a user and go to my contacts, I have many duplicates listed, which makes for a large and unmanageable list.  How do I remove duplicates without having to do it one at a time?

    If the contacts are synced with your Android Gmail account, I suggest you do this from the web and let the changes sync back to your computers and devices.  Here's how:

    1 - Open Gmail.com and sign in with that account
    2 - Go to your contact list
    3 - In the "More" menu on the page, choose "Find and merge duplicates". It will let you merge them all at once, and no information is lost (it combines the duplicate data, or adds the extra information to a person where possible).
  • KEEP the SECUREFILE DUPLICATES

    Hello

    I have a table that contains about 100 rows, and nearly 50 of them are duplicates. I had specified DEDUPLICATE so that only one copy of the LOB data is stored, based on the calculated hash value.

    Suppose I now specify KEEP DUPLICATES: will it take effect for the existing data, or will it affect only new incoming data? Will the rows then hold duplicate data and increase the segment size?

    Thank you.

    Will it take effect for the existing data, or will it affect only new incoming data?

    If you are using SECUREFILE LOBs, then yes, it will read the entire LOB segment and the segment size will increase; if not, you are using the default Oracle storage mechanism anyway, i.e. KEEP DUPLICATES.
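
    A small sketch of switching an existing SECUREFILE LOB column back to keeping duplicates; the table and column names are placeholders, and it is worth testing the effect on segment size on a copy first:

    -- switch the LOB column from deduplicated storage back to keeping duplicates
    ALTER TABLE docs MODIFY LOB (doc_body) (KEEP_DUPLICATES);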

    Using Oracle SECUREFILE LOBs

    ORACLE-BASE - SecureFiles in Oracle Database 11g Release 1
