Avoiding duplicates

Hi all

I have a detail block that shows 3 records.

I have to prevent duplicate records in this detail block.

Suppose I enter 3619919 as a credit note number, and the next time, if I enter 3619919 again, it should say duplicate and raise an error.

Please advise.

Thank you
Smail

Hello

One solution is to use a calculated item.

François
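François's reply is terse; the usual idea behind it - validate each newly entered credit note number against the ones already in the block, and raise an error on a match - can be sketched outside Forms. This is a plain-Python illustration, not Forms code (in Oracle Forms the check would typically live in an item-level validation trigger such as WHEN-VALIDATE-ITEM); the function name is invented:

```python
# Hypothetical sketch of a duplicate check for a detail block: reject a
# new credit-note number if it already appears among the entered records.
def validate_credit_note(existing, new_number):
    """Raise ValueError if new_number is already in the block."""
    if new_number in existing:
        raise ValueError(f"Duplicate credit note number: {new_number}")
    existing.append(new_number)

block = []
validate_credit_note(block, 3619919)       # first entry: accepted
try:
    validate_credit_note(block, 3619919)   # same number again: rejected
except ValueError as e:
    print(e)  # Duplicate credit note number: 3619919
```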

Tags: Oracle Development

Similar Questions

  • How can I avoid duplicate emails in my O.E. Inbox?

    duplicate emails

    How can I avoid duplicate emails in my O.E. Inbox? I also receive emails in BT Yahoo Mail, but only one copy each time. Thank you. Brian

    Check out these links. I don't know if they are up to date, but they should get you going in the right direction.

    http://service.McAfee.com/faqdocument.aspx?ID=TS100408

  • How to import images in the latest Lightroom so it doesn't take FOREVER, and avoid duplicates?

    I made the mistake of installing the "update" (a highly questionable term for it). Now import takes FOREVER and I don't see the option to avoid duplicates?

    Any help much appreciated.

    I don't see the option to avoid duplication?

    Adobe removed this feature (and many others) from the new import in CC 2015.2 / 6.2.  They have apologized for that and for having introduced a large number of bugs, and in the next version they will be restoring the old import feature: Lightroom 6.2 version update and apologies, Lightroom 6.2 import update.

    Meanwhile, while you could try to solve the problems with 2015.2.1 / 6.2.1, I recommend rolling back to 2015.1.1 / 6.1.1, the last stable release: http://www.lightroomqueen.com/how-do-i-roll-back-to-lightroom-2015-1-1-or-lightroom-6-1-1/. Many people have done so, and Adobe itself now recommends it if you have serious problems (and you very probably will).

  • Avoid duplicates when importing XMLs?

    A few versions ago, the problem of getting duplicate items when importing one project into another was fixed.  This was very helpful.

    However, I have noticed that when importing XMLs, Premiere does not apply the same level of... discretion.

    Is it possible to import XMLs without creating duplicate items?  I found a workaround by first importing the XML file into a new project and then importing that project into my main project... but when I'm on a big documentary film that takes 3-4 minutes to open each time, it's not ideal.

    This is particularly important for people using PluralEyes, because we organize footage in Premiere first and then export, and then must bring XMLs back in, and it would be great if we didn't have to do a bunch of manual media management to avoid duplicates in the project file.


    Thank you!

    R

    Hi Ryan,

    Get your feature request to the appropriate channel by filling out this form: http://adobe.ly/feature_request

    Thank you

    Kevin

  • Avoid duplicates in the query

    Hello..

    I wrote the following query to meet the requirement... but it returns duplicates:
    SELECT   a.rpper, a.idate, a.icycle, b.rpper,
             b.idate, b.icycle
        FROM i_sum a, i_sum b
       WHERE a.rc = b.py
         AND a.py = b.rc
         AND a.prd = b.prd
         AND a.svc = b.svc
         AND a.itype = 'bieral'
         AND b.itype = 'bieral'
         AND a.dir = '-'
         AND b.dir = '-'
         AND (   (a.idate != b.idate)
              OR (a.icycle != b.icycle)
             )
    ORDER BY a.rpper, b.rpper
    The output that I get is:
    A-G-11/9-SMS    30-Nov-09    112   G-A-11/9-SMS   31-Aug-09    113   
    --------
    ------
    G-A-11/9-SMS   31-Aug-09    113    A-G-11/9-SMS    30-Nov-09   112
    -----
    -----
    In the output above, the 2nd row is a duplicate in my output...
    How can I avoid these duplicates?


    Thank you...

    Smile says:
    "I need to meet the condition a.icycle != b.icycle,
    so I can't use the condition a.icycle > b.icycle."

    Yes you can. Think of it like this: suppose you have a list of numbers: 1, 2 and 3.

    You want to join this list of numbers to itself, ending up with pairs of numbers, but you don't want the same number appearing twice in any pair, nor do you want reversed pairs to occur.

    That is to say, (1, 1), (2, 2) etc. are not valid (this is the equivalent of your a.icycle != b.icycle). In addition, (1, 2) and (2, 1) must not both appear.

    Then, joining the list to itself based only on the num1 != num2 condition will give you:

    1, 2
    1, 3
    2, 1
    2, 3
    3, 1
    3, 2

    You can see that you now have reversed pairs appearing - i.e. (1, 2) and (2, 1), (1, 3) and (3, 1) - which, did I mention, we do not want.

    To prevent that from happening, you must stop joining with numbers lower than the number you're joining from. That is to say:

    1, 2
    1, 3
    2, 3   <----- can't join with 1, as 1 is less than 2, and we already have (1, 2) in the list. Nor can you join with 2, as 2 = 2 and we are excluding those. Therefore we start joining with 3.

    Thus, the new list becomes:

    1, 2
    1, 3
    2, 3

    This is the equivalent of joining with a.icycle < b.icycle.

    ETA: Perhaps a slightly more graphic response would help:

    {code}
    with no condition on the join:

        1 2 3 4
      1 x x x x
      2 x x x x
      3 x x x x
      4 x x x x

    with row != col as the join:

        1 2 3 4
      1   x x x
      2 x   x x
      3 x x   x
      4 x x x

    with row < col as the join:

        1 2 3 4
      1   x x x
      2     x x
      3       x
      4
    {code}

    Published by: Boneist on January 18, 2011 14:07
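    Boneist's explanation can be verified with a quick script. The following is a small Python/SQLite sketch (not the original i_sum query): it joins a three-number list to itself once with != and once with <, reproducing the pair lists above.

```python
# Sketch: self-join with != keeps reversed pairs; < keeps each unordered
# pair exactly once.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(1,), (2,), (3,)])

# All ordered pairs with different members - reversed pairs survive.
ne = conn.execute(
    "SELECT a.n, b.n FROM nums a JOIN nums b ON a.n != b.n ORDER BY 1, 2"
).fetchall()

# Only pairs where the left number is lower - one row per unordered pair.
lt = conn.execute(
    "SELECT a.n, b.n FROM nums a JOIN nums b ON a.n < b.n ORDER BY 1, 2"
).fetchall()

print(ne)  # [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
print(lt)  # [(1, 2), (1, 3), (2, 3)]
```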

  • avoid duplicates when dragging and dropping from a browser

    Hello Mac users,

    as you may have guessed, I come from years of using windows,

    Sorry if this question seems stupid to some of you,

    but I couldn't really find an answer during a few hours on other forums.

    What I would do is really basic:

    - Drag and drop a photo from a browser (Safari, Chrome, same result) into a folder

    - I expect (or want) that, when a file with the same name already exists in the folder, a dialog pops up asking whether I want to replace the existing file or keep both

    - but the Mac just keeps both files and appends an incrementing number to the duplicate

    Is this behavior standard? Is it possible to get something close to what I expect?

    I found tons of answers about files merging and similar.

    I'm really stuck on something more basic.

    Thanks in advance for your attention!

    Pier

    Am I really the only one having this problem? ^^

    Help me please!

  • HP LaserJet Enterprise 600 M602: Avoid duplicate copies of HP LaserJet Enterprise 600 M602

    Hello

    We have several HP LaserJet Enterprise 600 M602 printers installed at different locations. The printers at each location are connected via USB to a docking station; when a user goes from one location to another and docks the Toughpad in the dock station, another copy of the printer gets installed (Copy 1, Copy 2...).

    We don't want multiple copies of the printer installed, because we have the same model at all locations...

    Can you please advise how to avoid multiple copies, so that one generic installed printer prints at all sites...

    Kind regards

    Atif

    Since they are attached via USB, you don't have any options.

    Each USB printer is recognized as a different device, so you can't print to one using the driver installed for another.

    If my post solved your problem, click the Accept as Solution button under it.

    To thank a Tech for a post, click the thumbs-up button under the post.

    You can even click both buttons...

  • How to avoid duplicates

    When I copy the contents of a folder to a USB key, I am sometimes given options to keep both copies or skip that file (and others like it) in the folder I'm copying into. I keep video files in a video folder on a USB key and I try to copy C/disc/videos as often as possible. But sometimes I don't know where I stopped the last time I copied, and I select files already in the destination folder. Why isn't there an option to always skip duplicates? Is there a way to prevent duplicate files when doing this?

    What type of USB key are you using? I use a 500 GB USB external hard drive and can't reproduce your problem, but mine is (almost) permanently connected.

    I always organize the items I copy into folders, so when I add new files they are copied to a folder on the external drive. If I inadvertently duplicate work on something, the only place it is likely to be found is in the previously copied folder, and it's a simple matter to check.

    Another point: when you have copied items to the external hard drive, do you keep the files on the source drive or do you delete them?  - Thanks - r.

  • How to avoid duplicates on a column with condition

    Hi all

    I need some advice here. At work, we have an Oracle APEX application that allows the user to add new records with a decision number that auto-increments based on the year and the group name.

    Say they add the first record for group name AA in 2013: they get decision number AA 1 2013 for their case record in the displayed report page.

    The second record of AA in 2013 will be AA 2 2013.

    If we add about 20 records, it will be AA 20 2013.

    The first record for 2014 will be AA 1 2014.

    However, we recently got a complaint from a user that two records with the same group name have the same decision number.

    When I looked in the history table, I found that the time gap between the two records is about 0.1 seconds.

    In addition, we have a lookup table which lets the admin user update the starting sequence number, with the constraint that it must be greater than the maximum number for the current group name in the current year.

    This starting sequence number and the group name are stored together in a table.

    And in some other cases, the user can add a duplicate decision number for a related record. (This is a new feature.)

    The current logic of the procedure that adds a new record in the application is:

    - Get the max decision_number from the record table for the selected group name and the current year.

    - INSERT the new record into the record table with that decision number + 1.

    - Update the stored sequence number to the decision number just added.

    So instead of using APEX's built-in automatic table editing process, I wrote a procedure that combines all three steps.

    I ran a loop to perform this procedure continually, and it seems it can automatically generate unique new decision numbers when the records are about 0.1 seconds apart.

    However, when I increased the number of entries to 200 and let two users run 100 each, with a time gap of about 0.01 seconds, duplicate decision numbers appeared.

    What can I do to prevent duplicates?

    I can't just apply a unique constraint on the three columns, because duplicates are allowed under some special conditions. I don't know much about using locks or their impact.

    Here is the content of my procedure:

    create or replace
    PROCEDURE add_new_case (
        -- ID just uses the trigger
        p_case_title       IN VARCHAR2,
        p_year             IN VARCHAR2,
        p_group_name       IN VARCHAR2,
        -- decision_number is computed here
        p_case_file_number IN VARCHAR2,
        -- active defaults to 'Y'
        p_user             IN VARCHAR2
    )
    AS
        default_value          NUMBER;
        caseCount              NUMBER;
        seqNumber              NUMBER;
        previousDecisionNumber NUMBER;
    BEGIN
        -- execute immediate q'[alter session set nls_date_format = 'dd/mm/yyyy']';
        SELECT count(*)
          INTO caseCount
          FROM CASE_RECORD
         WHERE GROUP_ABBR = p_group_name
           AND to_number(to_char(create_date, 'yyyy')) = to_number(to_char(date_utils.get_current_date, 'yyyy'));

        SELECT max(decision_number)
          INTO previousDecisionNumber
          FROM CASE_RECORD
         WHERE GROUP_ABBR = p_group_name
           AND to_number(to_char(create_date, 'yyyy')) = to_number(to_char(date_utils.get_current_date, 'yyyy'));

        IF p_group_name IS NULL THEN
            seqNumber := 0;
        ELSE
            SELECT Seq_number INTO seqNumber FROM GROUP_LOOKUP WHERE ABBREVIATION = p_group_name;
        END IF;

        IF caseCount > 0 THEN
            default_value := greatest(seqNumber, previousDecisionNumber) + 1;
        ELSE
            default_value := 1;
        END IF;

        INSERT INTO CASE_RECORD (case_title, decision_year, GROUP_ABBR, decision_number, case_file_number, active_yn, created_by, create_date)
        VALUES (p_case_title, p_year, p_group_name, default_value, p_case_file_number, 'Y', p_user, sysdate);

        -- Need to update the sequence here also
        UPDATE GROUP_LOOKUP
           SET SEQ_NUMBER = default_value
         WHERE ABBREVIATION = p_group_name;

        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN
            logger.error( p_message_text => SQLERRM
                        , p_message_code => SQLCODE
                        , p_stack_trace  => dbms_utility.format_error_backtrace
                        );
            RAISE;
    END;

    Many thanks in advance,

    Ann

    It's easier to solve the case where p_group_name is not null. In that case you update a GROUP_LOOKUP row, so you can select that row FOR UPDATE at the beginning, to prevent two cases for the same group being added at the same time. To do this, change the select from GROUP_LOOKUP to:

    SELECT Seq_number INTO seqNumber FROM GROUP_LOOKUP WHERE ABBREVIATION = p_group_name FOR UPDATE OF SEQ_NUMBER;

    and move this to be the first thing the procedure does - before it reads any CASE_RECORD rows.

    In the case when p_group_name is null, you need some object to lock. I think the best you can do is to lock the entire GROUP_LOOKUP table:

    LOCK TABLE GROUP_LOOKUP IN EXCLUSIVE MODE WAIT 100;

    The WAIT 100 means it will wait up to 100 seconds before giving up and raising an error; in practice, it should only ever wait a moment.

    Exclusive mode allows others to read, but not to update the table.

    UPDATEs and LOCK TABLE statements from other sessions will wait for this transaction to commit. Queries from other sessions are not affected.

    The locks are released when you commit or roll back.

  • Avoid duplicates in datagrid

    There are 2 dataGrids: G1 and G2. Rows can be dragged from G1 and dropped into G2. There may be duplicate rows in G1, but duplicate rows must be avoided in G2.

    Ideally, a STOP cursor should be displayed when there is an attempt to drop a duplicate row into G2.

    How is that possible?

    Thank you.

    in your dragDrop handler, get the rows being dragged, use getItemIndex() on the datagrid's underlying dataProvider ArrayCollection to see if the row is already there; if so, just call event.preventDefault() and set the drag feedback to stop (rejecting the drop).

    http://www.Adobe.com/cfusion/webforums/Forum/MessageView.cfm?forumid=60&CATID=585&ThreadId = 1318924 & highlight_key = y & keyword1 = preventdefault

    ATTA

  • Avoid duplicates in a table

    I have a tabular form with 3 fields: one is hidden, the second has a foreign key default value, and the third is a LOV.

    Query:
    Select WAR_FIGHTG_FUNC display_value, WAR_FIGHTG_FUNC_C return_value
    from WAR_FIGHTG_FUNC_LU
    order by 1

    How do I create a validation in the tabular form to do this?

    Thank you
    Mary

    Published by: Lucky on February 12, 2010 15:50

    I just updated the example and added validation for duplicate entries.

    Denes Kubicek
    -------------------------------------------------------------------
    http://deneskubicek.blogspot.com/
    http://www.Opal-consulting.de/training
    http://Apex.Oracle.com/pls/OTN/f?p=31517:1
    http://www.Amazon.de/Oracle-Apex-XE-Praxis/DP/3826655494
    -------------------------------------------------------------------

  • How can I avoid duplicates with cfloop

    I'm using some Next-N records code on a list of books obtained from an MS SQL database.
    The Next-N records code only works using a <CFLOOP QUERY="">; the way I have my SQL set up, I'm getting duplicates under CFLOOP but not under CFOUTPUT, because with CFOUTPUT I can use the GROUP="" parameter.

    Can you please look at my query and tell me how to structure my relationships so that this JOIN returns all the information in the same record rather than creating two (duplicate) records?

    This is the basic layout of my tables

    Table of languages
    Language PK | Name of the language
    1 - English
    2 - French

    Table Books
    Book PK | Book name | Original language | Translation language
    1 - book Test - 2-1

    Output on the site I want:
    Test book. O: French | T: English

    Output on the site I get:
    Test book. O: French | T: English
    Test book. O: French | T: English
    (the dump reveals that a duplicate is produced due to the relationship of both Orig and Trans with the Languages table)

    Here's my query:
    SELECT bookName, bookOrigLang, bookTransLang, langPK, bookPK, langLanguage
    (SELECT bookOrigLang FROM Books WHERE Books.bookOrigLang = Languages.langPK) AS original,
    (SELECT bookTransLang FROM Books WHERE Books.bookTransLang = Languages.langPK) AS translation
    FROM dbo.Books, dbo.Languages

    -------------------
    I am grateful for any suggestions you might be able to

    Quote:
    Posted by: sjlsam2
    I get the following error though:
    Ambiguous column name 'cllangPK'.

    I can't be precise, because most of the columns in the query don't have a table name or alias, so I'm not sure which columns belong to which tables. But adding a source table name or alias to all the columns should fix the "ambiguous column" errors.

    Example:
    SELECT theSourceTable.clcoPK, theSourceTable.clcoCountry...

    Give each column a different name to refer to it in your cfoutput

    Example:
    SELECT
    ol.cllangLanguage AS OriginalLanguage,
    tl.cllangLanguage AS TranslatedLanguage,
    ....

    You will need to choose a single join syntax and use it consistently. So either do all your joins this way

    FROM TableA, TableB
    WHERE TableA.ID = TableB.ID

    ... or this way

    FROM TableA a INNER JOIN TableB b ON a.ID = b.ID
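    The double join the reply describes - aliasing the Languages table twice, once per language column - can be sketched end-to-end. This is a Python/SQLite mock-up of the table layouts shown above, not the poster's MS SQL schema:

```python
# Sketch: join the Languages table twice with distinct aliases, so each
# book resolves both its original and translation language in one row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE languages (langPK INTEGER, langLanguage TEXT);
    INSERT INTO languages VALUES (1, 'English'), (2, 'French');
    CREATE TABLE books (bookPK INTEGER, bookName TEXT,
                        bookOrigLang INTEGER, bookTransLang INTEGER);
    INSERT INTO books VALUES (1, 'Test book', 2, 1);
""")

rows = conn.execute("""
    SELECT b.bookName,
           ol.langLanguage AS OriginalLanguage,
           tl.langLanguage AS TranslatedLanguage
      FROM books b
      INNER JOIN languages ol ON b.bookOrigLang  = ol.langPK
      INNER JOIN languages tl ON b.bookTransLang = tl.langPK
""").fetchall()
print(rows)  # [('Test book', 'French', 'English')] - one row, no duplicate
```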

  • Photos vs iPhoto: avoid duplicates

    My Photos library is 35 GB and my iPhoto library 29 GB; I think the pictures are duplicated. How can I make sure I don't delete any photos? How can I combine all iPhoto and Photos pictures? Is that possible? Thanks in advance!

    You can delete the iPhoto library after having migrated to Photos.

  • avoid duplicates in segment1

    Hello

    I have the query with me
    select DESCRIPTION, list_price_per_unit,segment1,segment2,segment3,PRIMARY_UOM_CODE,PRIMARY_UNIT_OF_MEASURE ,organization_id
    from mtl_system_items 
    where  INVENTORY_ITEM_STATUS_CODE ='Active'
    and    (segment1 like '205%' or segment1 like '7%')
    and    organization_id  IN (105,121)
    
    
    output is 
    
    DESCRIPTION    list_price_per_unit  SEGMENT1          segment2   segment3     PRIMARY_UOM_CODE   PRIMARY_UNIT_OF_MEASURE       organization_id
    
    SAFETY  GOGGLES    1                   7010101001                       613017           EA                             Each                                          105
    SAFETY  GOGGLES    1                   7010101001                       613017           EA                             Each                                          121
    CARGO LASHING - 10 MTR  2.85          7010101003                           613011           EA                              Each                                          105
    CARGO LASHING - 10 MTR  2.85          7010101003                            613011           EA                              Each                                          121
    
    POWDER, BRONZE FLUX    0.25           2059901230                                            BOX                             Box                                           121
    POWDER, BRONZE FLUX    0.25           2059901230                                            BOX                             Box                                           105
    
    
    
    well, I need each segment1 value only once. I understand that e.g. 7010101001 appears twice as it is in organization_id 121 and 105,
    
    
    but is there any way to make it appear only once:
    
    
    DESCRIPTION    list_price_per_unit  SEGMENT1          segment2   segment3     PRIMARY_UOM_CODE   PRIMARY_UNIT_OF_MEASURE       organization_id
    
    SAFETY  GOGGLES    1                  7010101001                       613017           EA                             Each                                          105
    CARGO LASHING - 10 MTR  2.85          7010101003                       613011           EA                              Each                                          105
    
    POWDER, BRONZE FLUX    0.25           2059901230                                            BOX                             Box                                           121
    
    
    I tried distinct(segment1) but it doesn't work.
    Kindly help me.

    thanking in advance

    Published by: makdutakdu on September 23, 2010 09:32

    Published by: makdutakdu on September 23, 2010 09:34

    Check this query

    select DESCRIPTION, list_price_per_unit,segment1,segment2,segment3,PRIMARY_UOM_CODE,PRIMARY_UNIT_OF_MEASURE ,
    organization_id
    from mtl_system_items where
    rowid IN
    (
    select rowid from
    (
    select rowid, row_number() over (partition by SEGMENT1 order by null) rn from mtl_system_items
    where  INVENTORY_ITEM_STATUS_CODE ='Active'
    and    (segment1 like '205%' or segment1 like '7%')
    and    organization_id  IN (105,121)
    )
    where rn = 1
    )
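    The row_number() approach in the reply can be demonstrated on a mock-up. This sketch uses Python with SQLite and sample rows taken from the output above; the real mtl_system_items table and its other columns are omitted:

```python
# Sketch: keep one row per segment1 value using row_number() partitioned
# by segment1, mirroring the reply's Oracle query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (segment1 TEXT, organization_id INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [
    ("7010101001", 105), ("7010101001", 121),
    ("7010101003", 105), ("7010101003", 121),
    ("2059901230", 121), ("2059901230", 105),
])

rows = conn.execute("""
    SELECT segment1, organization_id FROM (
        SELECT segment1, organization_id,
               row_number() OVER (PARTITION BY segment1
                                  ORDER BY organization_id) rn
          FROM items
    ) WHERE rn = 1
    ORDER BY segment1
""").fetchall()
print(rows)  # one row per segment1 value
```

    DISTINCT(segment1) alone can't work here because the other selected columns (like organization_id) still differ between the rows; row_number() instead picks one representative row per partition.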
    
  • How to avoid duplicates on an OAF page

    Hello

    I have developed a search page in OAF, and after a search it queries a few rows into the table region. One item is a textInputItem called "sequence".
    In this field the user enters values such as (1, 2, 3, 4, 5... etc). My requirement is to stop the user from entering a sequence value twice... meaning that if he enters (1, 2, 3, 3, 3, 4, 5), it must raise an error.

    Let me know if my requirement is unclear...

    I developed something like this:

    public void checkseq()
    {
        String sequence = "";
        OAViewObject vo = (OAViewObject) findViewObject("XXBCR12ToolkitVO1");
        vo.executeQuery();
        System.out.println("rows are " + vo.getAllRowsInRange().length);
        Row[] r = vo.getAllRowsInRange();
        int n = r.length;
        for (int i = 0; i < n; i++)
        {
            // TODO: compare the "sequence" attribute of r[i] against the others
        }
    }


    Kind regards
    Parag

    Hello
    Whatever error you get, paste it here.

    Nani :)
