Primary key and unique key not Null

Hello gurus,

Asked in an interview about the difference between primary and unique keys.
I said that unique keys can accept null values, but the values must be unique throughout the table. So the next question was: "Can a unique key with a NOT NULL constraint be treated as the primary key?"

Tricky question! :)
I said yes! It meets the requirements to be a primary key for this particular table, but since it isn't really a primary key, it can't be referenced by foreign keys.

The interviewer wanted just a Yes or no.

Can someone please shed some light on this?

Thanks in advance!

Two things.
1. A unique key can also be referenced by a foreign key, so your statement "but since this isn't really a primary key, it can't be referenced by foreign keys" is not true (see the sketch below).
2. A primary key and a unique key are different (for example, you can have more than one UK in a table, but only one PK); to understand the difference, read up on RDBMS concepts.
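
A minimal sketch of point 1, with hypothetical table names (not from the original thread) - a foreign key can reference a NOT NULL unique key just as it can a primary key:

    create table parent_t
    (
      pk_col number primary key,
      uk_col number not null unique   -- NOT NULL + unique, but not the primary key
    );

    create table child_t
    (
      id      number primary key,
      ref_col number references parent_t (uk_col)   -- FK against the unique key is legal
    );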

I'm curious to know: what was the outcome of your interview? ;)

Tags: Database

Similar Questions

  • Not Null validation on an auto-generated primary key

    Hello

    I have a wizard-generated form whose primary key is populated automatically by a before-insert trigger. In my form, the primary key is initially empty; after the user saves the form, the trigger generates a number and it is filled in on the form. I currently have a Not Null validation on the form item, which fires after the user submits the form and wants to update. How can I keep the Not Null validation on the form while still letting the user save a new record to the database with an empty primary key field? The form wizard generated this validation from the table's constraints, and I would like to know whether the validation is required.


    Thank you
    Mary

    Why do you have a Not Null validation on the primary key item, if it causes a problem when the user creates a record? Set it to fire 'never' or remove it. It shouldn't be necessary if you display the item as 'read only'.
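
    For reference, the usual shape of such a before-insert trigger - a minimal sketch with hypothetical table, column, and sequence names, since the original trigger was not posted:

    create or replace trigger my_form_table_bi
    before insert on my_form_table
    for each row
    begin
      -- fill the primary key from a sequence only when the form left it empty
      if :new.id is null then
        select my_form_table_seq.nextval into :new.id from dual;
      end if;
    end;
    /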

    Kind regards
    Hari

  • Where in the CASE statement can I put null or not null or primary key into the result?

    create table test_target
    (
    letter_grade varchar2(2) null,
    grade_point number(3,2) not null,
    max_grade number(3),
    min_grade number(4)
    );

    create table test_source
    (
    letter_grade varchar2(3),
    grade_point number(3,2),
    max_grade number(3,2),
    min_grade number(3),
    created_by date,
    modified_by date
    );

    with src as
    (
    select src.table_name src_table_name, src.column_name src_col_name, src.data_type src_data_type, src.data_length src_data_len, src.data_precision src_data_precision, src.data_scale src_data_scale, src.nullable src_nullable
    from user_tab_columns src
    where table_name = 'TEST_SOURCE'
    ),
    tgt as
    (
    select tgt.table_name tgt_table_name, tgt.column_name tgt_col_name, tgt.data_type tgt_data_type, tgt.data_length tgt_data_len, tgt.data_precision tgt_data_precision, tgt.data_scale tgt_data_scale, tgt.nullable tgt_nullable
    from user_tab_columns tgt
    where table_name = 'TEST_TARGET'
    ),
    col_details as
    (
    select src.src_table_name, nvl(tgt.tgt_table_name, first_value(tgt_table_name) over (order by tgt_table_name nulls last)) tgt_table_name,
    src.src_col_name, src.src_data_type, src.src_data_len, src.src_data_precision, src.src_data_scale, src.src_nullable,
    tgt.tgt_col_name, tgt.tgt_data_type, tgt.tgt_data_len, tgt.tgt_data_precision, tgt.tgt_data_scale, tgt.tgt_nullable
    from src
    left outer join tgt
    on (
    src.src_col_name = tgt.tgt_col_name
    )
    )
    select *
    from (
    select case
    when tgt_data_type != src_data_type or tgt_data_len != src_data_len or tgt_data_precision != src_data_precision or tgt_data_scale != src_data_scale or tgt_nullable != src_nullable
    then 'alter table ' || tgt_table_name || ' modify ' || tgt_col_name || ' ' || src_data_type ||
    case when src_data_type in ('DATE') then null
    else
    case
    when src_data_type in ('VARCHAR', 'VARCHAR2')
    then ' (' || nvl(to_char(src_data_len), ' ') || ')'
    else decode(nvl(src_data_precision, -1), -1, null, ' (' || nvl(to_char(src_data_precision), ' ') || ', ' || nvl(to_char(src_data_scale), ' ') || ')')
    end
    end
    when tgt_col_name is null
    then 'alter table ' || tgt_table_name || ' add ' || src_col_name || ' ' || src_data_type ||
    case when src_data_type in ('DATE') then null
    else
    case
    when src_data_type in ('VARCHAR', 'VARCHAR2')
    then ' (' || nvl(to_char(src_data_len), ' ') || ')'
    else decode(nvl(src_data_precision, -1), -1, null, ' (' || nvl(to_char(src_data_precision), ' ') || ', ' || nvl(to_char(src_data_scale), ' ') || ')')
    end
    end
    end alter_statement
    from col_details
    )
    where alter_statement is not null;

    result:

    alter table TEST_TARGET modify LETTER_GRADE VARCHAR2 (3)   -- whether it is null or not null or primary key, I want to see that in the result, e.g.: alter table TEST_TARGET modify LETTER_GRADE VARCHAR2 (3) null / not null / primary key

    alter table TEST_TARGET modify MAX_GRADE NUMBER (3, 2)

    alter table TEST_TARGET modify MIN_GRADE NUMBER (3, 0)

    alter table TEST_TARGET add CREATED_BY DATE

    alter table TEST_TARGET add MODIFIED_BY DATE

    Please try:

    drop table test_target purge;
    drop table test_source purge;

    create table test_target
    (
    letter_grade varchar2(2) primary key,
    grade_point number(3,2) not null,
    max_grade number(3) unique,
    min_grade number(4),
    min_age number(4)
    );

    create table test_source
    (
    letter_grade varchar2(3),
    grade_point number(3,2),
    max_grade number(3,2),
    min_grade number(3),
    created_by date,
    modified_by date
    );

    with src as
    (
    select src.table_name src_table_name, src.column_name src_col_name, src.data_type src_data_type, src.data_length src_data_len, src.data_precision src_data_precision, src.data_scale src_data_scale,
    src.nullable src_nullable, decode(t.constraint_type, 'P', ' Primary Key', 'U', ' Unique', '') as src_cons
    from user_tab_columns src
    left join (select cc.column_name, uc.constraint_type
    from user_cons_columns cc, user_constraints uc
    where cc.constraint_name = uc.constraint_name
    and cc.table_name = uc.table_name) t
    on t.column_name = src.column_name
    where table_name = 'TEST_SOURCE'
    ),
    tgt as
    (
    select tgt.table_name tgt_table_name, tgt.column_name tgt_col_name, tgt.data_type tgt_data_type, tgt.data_length tgt_data_len,
    tgt.data_precision tgt_data_precision, tgt.data_scale tgt_data_scale, tgt.nullable tgt_nullable,
    decode(t.constraint_type, 'P', ' Primary Key', 'U', ' Unique', '') as tgt_cons
    from user_tab_columns tgt
    left join (select cc.column_name, uc.constraint_type
    from user_cons_columns cc, user_constraints uc
    where cc.constraint_name = uc.constraint_name
    and cc.table_name = uc.table_name) t
    on t.column_name = tgt.column_name
    where table_name = 'TEST_TARGET'
    ),
    col_details as
    (
    select src.src_table_name, nvl(tgt.tgt_table_name, first_value(tgt_table_name) over (order by tgt_table_name nulls last)) tgt_table_name,
    src.src_col_name, src.src_data_type, src.src_data_len, src.src_data_precision, src.src_data_scale, src.src_nullable, src_cons,
    tgt.tgt_col_name, tgt.tgt_data_type, tgt.tgt_data_len, tgt.tgt_data_precision, tgt.tgt_data_scale, tgt.tgt_nullable, tgt_cons
    from src
    full outer join tgt
    on (
    src.src_col_name = tgt.tgt_col_name
    )
    )
    select *
    from (
    select case
    when tgt_data_type != src_data_type or tgt_data_len != src_data_len or tgt_data_precision != src_data_precision or tgt_data_scale != src_data_scale or tgt_nullable != src_nullable
    then 'alter table ' || tgt_table_name || ' modify ' || tgt_col_name || ' ' || src_data_type ||
    case when src_data_type in ('DATE') then null
    else
    case
    when src_data_type in ('VARCHAR', 'VARCHAR2')
    then ' (' || nvl(to_char(src_data_len), ' ') || ')'
    else decode(nvl(src_data_precision, -1), -1, null, ' (' || nvl(to_char(src_data_precision), ' ') || ', ' || nvl(to_char(src_data_scale), ' ') || ')')
    end
    end
    ||
    case when tgt_nullable = 'Y' then ' null'
    else ' not null' end
    || tgt_cons
    when tgt_col_name is null
    then 'alter table ' || tgt_table_name || ' add ' || src_col_name || ' ' || src_data_type ||
    case when src_data_type in ('DATE') then null
    else
    case
    when src_data_type in ('VARCHAR', 'VARCHAR2')
    then ' (' || nvl(to_char(src_data_len), ' ') || ')'
    else decode(nvl(src_data_precision, -1), -1, null, ' (' || nvl(to_char(src_data_precision), ' ') || ', ' || nvl(to_char(src_data_scale), ' ') || ')')
    end
    end
    || tgt_cons
    when src_col_name is null
    then 'alter table ' || tgt_table_name || ' drop column ' || tgt_col_name
    end alter_statement
    from col_details
    )
    where alter_statement is not null;

    The primary key and unique key are picked up from user_constraints.

    Check it and adjust it for your conditions.

    Regards,
    Mr. Mahir Quluzade

  • Automatic numbering on a primary key (without a sequence)

    Hello

    I was wondering how to auto-number a primary key on insert without using a sequence.

    Right now I use trigger:
    TRIGGER "scott"."do_numbers" BEFORE INSERT ON "scott"."test" 
    REFERENCING NEW AS NEW FOR EACH ROW 
    
    declare
     pragma autonomous_transaction;
    begin
    
    SELECT
     nvl(MAX(ID),0)+1
     INTO :NEW.ID
      FROM test;
    commit;
    END;
    The trigger above works when I insert row by row. But now I tried to insert 300 rows at a time, and here the insert fails... I think there would have to be a commit after each insert (or so I think).

    Can someone explain to me how this "automatic" numbering can be done when inserting a large number of rows from one table into another at the same time?

    Thank you!

    user13071990 wrote:
    Hello

    I know how to do this with a sequence; I am just asking how it can be done without the help of one.

    You would not do it any other way than with a sequence; otherwise you will have problems in a multiuser environment.

    The closest thing would be to use an update statement in your trigger code...

    UPDATE test
    SET id = NVL(id,0)+1
    RETURNING id INTO :new.id;
    

    Although this is not ideal, and you really should use sequences.

    And you should most certainly not use autonomous transactions within a trigger. That approach is simply wrong, especially when you are trying to manipulate the data of the table the trigger is on.
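
    For completeness, a minimal sketch of the sequence-based approach recommended above (the sequence and trigger names are made up; the direct :new.id := ... assignment needs 11g, older versions need a SELECT ... INTO ... FROM dual):

    create sequence test_seq start with 1 increment by 1;

    create or replace trigger test_bi
    before insert on test
    for each row
    begin
      -- a sequence hands out unique values safely, even for bulk inserts
      -- and concurrent sessions
      :new.id := test_seq.nextval;
    end;
    /

    -- a 300-row insert-select now works without any special handling
    -- (source_table and some_col are hypothetical names)
    insert into test (some_col) select some_col from source_table;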

  • "No primary key column is defined" error

    Hi all

    First of all, thanks to DWFAQ for advice a tutorials.

    Secondly, I built a contact form with the ADDT update record form wizard - http://dwfaq.info/home.php?id=4

    I hit a problem: when running the form I get the following error:

    Error:
    Internal error.
    Details of Developer:
    tNG_update.prepareSQL:
    No primary key column is defined. (UPD_NO_PK_SET)
    tNG backtrace - REVIEWS

    * tNG_update.executeTransaction
    o STARTER. Trigger_Default_Starter
    o tNG_update.doTransaction
    + BEFORE. Trigger_Default_saveData
    # tNG_update.saveData
    + BEFORE. Trigger_Default_FormValidation
    + tNG_update.prepareSQL*
    o tNG_update.getRecordset
    o tNG_update.getFakeRsArr
    o tNG_update.getLocalRecordset
    o tNG_update.getFakeRecordset

    Could someone please help - I'm so close to getting this right ;-)

    Thank you

    NJ

    Heya,

    I received your email and took a look at your script. It seems that the issue was in the primary key part of the transaction. I have sent the file back to you with some modifications; where it said:

    $upd_contact-> setPrimaryKey ("id", "NUMERIC_TYPE", "GET", "id");

    I changed it to

    $upd_contact-> setPrimaryKey ("id", "NUMERIC_TYPE", "VALUE", "id");

    There are other things that I've cleaned up a bit, but I think that's what was causing your error.

    Hope that helps!

  • What is the difference between a primary key and a unique index with an enforced not null?

    Is a primary key = a unique index + not null?

    The short answer is Yes.

    However, even though the primary key enforces both uniqueness and not null, there is a notion of it being "special".

    You can have only one primary key per table, but you can have multiple unique indexes and not null constraints.

    See: https://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:8743855576462
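
    A quick sketch of that equivalence, with hypothetical table names - both forms reject duplicate and NULL values, but a table can declare only one of them as its primary key:

    -- option 1: primary key constraint
    create table t_pk (id number primary key);

    -- option 2: not null constraint plus unique index
    create table t_ux (id number not null);
    create unique index t_ux_id on t_ux (id);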

  • How to reference the primary key column of newly inserted rows in a tabular form

    Hello

    I use APEX 4.2.0.00.27 with Oracle DB 11.2.0.3.0.

    I am working with a wizard-created tabular form for inserting into and updating a table, using the built-in MRU (multi-row update) process (sequence 10).  I am trying to use an anonymous PL/SQL block process (sequence 30) to do further table manipulation after the records have been inserted or updated.  The manual process is associated with my tabular form, and I use the column-name bind variables in my block.

    My table (rsn_test) has 3 columns: test_id (number), test_nbr (number), test_id2 (number).  The test_id column is identified as the primary key, and its source type is an existing sequence, rsn_test_seq.  Column test_id2 gets its default value of 0 from a hidden page item.

    I want to use my manual process to update the value of the test_id2 column.  If it is 0, then I want to set it to the value of the test_id column.  If it is any other value, then it must remain at that value.  My logic works fine for an existing row, but I am running into a problem with newly added rows.  The new rows get inserted, but the test_id2 column keeps the default value 0.  I can tell from the debugger that the MRU process fires first and inserts the row, and then my manual process fires.  The problem seems to be that the bind variable :TEST_ID for the primary key column remains NULL after the insert.  I don't know how to get the test_id column value of my newly created row to use in my PL/SQL block for my update.

    Process of PL/SQL:

    DECLARE
    BEGIN
       :P7_SHOW := NULL;
       :P7_SHOW := NVL(:TEST_ID2,555) || ' and ' || NVL(:TEST_ID,787) || ' and ' || NVL(:TEST_NBR,9999);
       IF :TEST_ID2 = 0 AND :TEST_ID IS NOT NULL THEN
          UPDATE rsn_test
             SET test_id2 = :TEST_ID
           WHERE test_id = :TEST_ID;
       ELSE
          :TEST_ID2 := :TEST_ID2;
       END IF;
    END;
    
    

    Excerpt from the debugger:

    0.01625 0.00010 Processes - point: ON_SUBMIT_BEFORE_COMPUTATION
    0.01635 0.00008 Branch point: Before Computation
    0.01643 0.00003 Process point: AFTER_SUBMIT
    0.01646 0.00022 Tabs: Perform Branching for Tab Requests
    0.01668 0.00008 Branch point: Before Validation
    0.01676 0.00024 Validations:
    0.01700 0.00135 Perform basic and predefined validations:
    0.01835 0.00020 Perform custom validations:
    0.01855 0.00049 ...Validation "TEST_NBR must be numeric" - Type: ITEM_IS_NUMERIC
    0.01904 0.00007 ......Skip for row 1 because row hasn't changed
    0.01911 0.00016 ......Skip for row 2 because row hasn't changed
    0.01927 0.00012 ...Validation "TEST_ID2 must be numeric" - Type: ITEM_IS_NUMERIC
    0.01939 0.00007 ......Skip for row 1 because row hasn't changed
    0.01945 0.00018 ......Skip for row 2 because row hasn't changed
    0.01964 0.00005 Branch point: Before Processing
    0.01968 0.00004 Processes - point: AFTER_SUBMIT
    0.01972 0.00588 ...Process "ApplyMRU" - Type: MULTI_ROW_UPDATE
    0.02560 0.00154 ...Execute Statement: declare function x return varchar2 is begin begin for c1 in ( select "RSN_TEST_SEQ".nextval pk from sys.dual ) loop return c1.pk; end loop; end; return null; end; begin wwv_flow.g_value := x; end;
    0.02714 0.00140 ......Row 3: insert into "APPPCSRSN"."RSN_TEST" ( "TEST_ID", "TEST_NBR", "TEST_ID2") values ( :b1, :b2, :b3)
    0.02854 0.00011 ...Process "ApplyMRD" - Type: MULTI_ROW_DELETE
    0.02865 0.00004 ......Skip because condition or authorization evaluates to FALSE
    0.02869 0.00015 ...Process "Process Submit" - Type: PLSQL
    0.02884 0.00007 ......Skip for row 1 because row hasn't changed
    0.02891 0.00012 ......Skip for row 2 because row hasn't changed
    0.02903 0.00012 ......Process row 3
    0.02915 0.00429 ...Execute Statement: begin DECLARE BEGIN :P7_SHOW := NULL; :P7_SHOW := NVL(:TEST_ID2,555) || ' and ' || NVL(:TEST_ID,787) || ' and ' || NVL(:TEST_NBR,9999); IF :TEST_ID2 = 0 AND :TEST_ID IS NOT NULL THEN UPDATE rsn_test SET test_id2 = :TEST_NBR WHERE test_id = :TEST_ID; ELSE :TEST_ID2 := :TEST_ID2; END IF; END; end;
    0.03344 0.00013 ...Session State: Saved Item "P7_SHOW" New Value="0 and 787 and 1300"
    0.03356 0.00004 Branch point: After Processing
    0.03360 0.00048 ...Evaluating Branch: "AFTER_PROCESSING" Type: REDIRECT_URL Button: (No Button Pressed) Condition: (Unconditional)
    0.03407 0.00013 Redirecting to f?p=290:7:8717971109610:::::&success_msg=0%20row(s)%20updated%2C%201%20row(s)%20inserted.Success%2FEBD244168556408CBA714E3974918C09%2F
    0.03420 0.00012 Stop APEX Engine detected
    0.03432 0.00007 Stop APEX Engine detected
    0.03439 - Final commit
    
    

    Any suggestions?

    I have run tests on

    https://apex.Oracle.com/pls/apex/f?p=83488:1 (demo/demo)

    to look into your problem.

    I have 2 solutions for your problem.
    The first: I added a process that is NOT tabular, just a usual PL/SQL block:

    BEGIN
      FOR i IN (SELECT TEST_ID FROM RSN_TEST WHERE TEST_ID2 = 0)
      LOOP
          UPDATE RSN_TEST
             SET test_id2 = test_id
           WHERE test_id = i.TEST_ID;
      END LOOP;
    END;

    and it works very well, as you can see in the sample.

    The other solution is to show the newly generated TEST_ID by adding the sequence as a default value for the column in the tabular form field:

    Adding a sequence as a default value for a column in a table field

    and then execute your procedure.

    Let me know how it goes; good luck.

    Regards

  • How to create a single "not null" validation for all items on a page?

    Hello world

    How can I create a single "not null" validation for all items on a page? I have several text fields, and rather than creating a "not null" validation for each item, I would like to create a single validation that covers them all.

    Thanks and greetings
    Umer

    Nice1 wrote:
    Bob, like you said, I did the following:

    (1) Under the Create button there are 9 items, and for each item I set Required to 'Yes'.
    (2) Under the Delete button there is 1 item, and I set Required to 'Yes' for that item.
    (3) I defined a page validation for the 9 items under the Create button and set it to fire when the Create button is clicked.
    (4) I defined a page validation for the 1 item under the Delete button and set it to fire when the Delete button is clicked.

    Now, when I click the Create button, it also complains that the item under the Delete button is a required item.

    Sorry, I did not see that note. The Required approach will not work; there is no way to attach it to the button.

    The best solution is the one in the reply a few answers up:

    Create 2 page validations of type PL/SQL with the code:

    1st validation
    
    :P1_ITEM1 IS NOT NULL and :P1_ITEM2 IS NOT NULL ...... and :P1_ITEM9 IS NOT NULL  include all 9 items
    
    Set the When Button Pressed to the CREATE button
    
    2nd validation
    
    :P1_ITEM10 IS NOT NULL
    
    Set the When Button Pressed to the DELETE button
    

    I think that this will be the best way to do it.

    Edited by: Bob37 on April 27, 2012 12:02

  • Unique constraint that still allows null values

    Hello

    I have a table in Oracle XE (10g) with four matching columns:

    PrimaryKey
    Book_FKEY
    Subject_FKEY
    Author_FKEY

    The rules for this table are:
    A row can contain Book_FKEY and Subject_FKEY (Author_FKEY is null).
    A row can contain Book_FKEY and Author_FKEY (Subject_FKEY is null).

    A row can contain only Book_FKEY, and a row must ALWAYS have a Book_FKEY.
    A row cannot contain Book_FKEY, Subject_FKEY, and Author_FKEY all at once.

    I tried to use a unique key constraint in Oracle SQL Developer (v2.1.1), but it failed.
    Can you help me please?

    Thanks a lot for your help.

    I would resort to triggers only after exhausting all other constraint possibilities. You already have the primary key and foreign key constraints in place, which is good. Technically, I believe all you need is an extra check constraint to accomplish what you want; I think your best bet is to add a combination of Not Null, Unique, and Check constraints on this table.

    First of all, Book_FKEY must always have a value, so make the column Not Null, if it isn't already...

    alter table <table_name> modify Book_FKEY not null;
    

    Second, I am guessing that your primary key is a surrogate (an automatically generated number), so you will want a unique constraint to enforce a single book/subject/author instance...

    alter table <table_name> add constraint <constraint_name> unique (Book_FKEY, Subject_FKEY, Author_FKEY);
    

    And finally, you will need a check constraint to enforce the other rules...

    alter table <table_name> add constraint <constraint_name> check
    (
      (Book_FKEY is not null and Subject_FKEY is not null and Author_FKEY is     null) or
      (Book_FKEY is not null and Subject_FKEY is     null and Author_FKEY is not null)
    );
    

    I hope it works for you,
    Mark
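
    To see those rules in action, here is a hypothetical end-to-end test (the table and constraint names are made up, since the original post's names were lost in formatting):

    create table book_link
    (
      primarykey   number primary key,
      book_fkey    number not null,
      subject_fkey number,
      author_fkey  number,
      constraint book_link_uq unique (book_fkey, subject_fkey, author_fkey),
      constraint book_link_ck check (
        (book_fkey is not null and subject_fkey is not null and author_fkey is null) or
        (book_fkey is not null and subject_fkey is null and author_fkey is not null)
      )
    );

    insert into book_link values (1, 10, 20, null);   -- OK: book + subject
    insert into book_link values (2, 10, null, 30);   -- OK: book + author
    insert into book_link values (3, 10, null, null); -- fails the check constraint
    insert into book_link values (4, 10, 20, 30);     -- fails the check constraint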

  • Validate that a file browse item is not null?

    Hello, using APEX 5.0.1. I have a process that needs to run only if a file browse item is not null - a PL/SQL Expression validation. The file storage type is: BLOB column specified in the source attribute of the item. For example: :P14_STATUS_ID_CURRENT_VALUE = 0 AND :P14_IS_CANCELLED = 'N' AND :P14_FILEBROWSE_ITEM IS NOT NULL. Even though I upload a file, the validation fails!??? Then I tried this validation, but no result: select FILENAME from wwv_flow_files where name = :p$_fname and :P14_STATUS_ID_CURRENT_VALUE = 0 AND :P14_IS_CANCELLED = 'N'. Is this a bug? Is there a workaround?

    Solved: It should be: dbms_lob.getlength(:P14_FILEBROWSE_ITEM) > 0
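
    So, presumably, the complete PL/SQL Expression validation would read as follows (a sketch combining the fix with the conditions from the post):

    :P14_STATUS_ID_CURRENT_VALUE = 0
    AND :P14_IS_CANCELLED = 'N'
    AND dbms_lob.getlength(:P14_FILEBROWSE_ITEM) > 0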

  • Select only values that are not null from my database using Reports 6i

    Hello world
    I have 2 fields in my database, and I want my report to show only the data that is not empty.
    I have the following sample code, but I don't know where to put it in my report or how to use it:
    Can someone help me please?
    SELECT column1
          ,column2
          ,DECODE (GRADE_Num,NULL, GRADE_Letter,GRADE_NUM) AS grade
    FROM ....

    Hello..

    It depends on what you want to do...
    It could go in your main query, in a formula column, or perhaps within a function or procedure... so where you place it depends on your needs.
    The DECODE line will always return a result even if the value is null, because you are saying: if grade_num is null, then show the letter grade... otherwise (i.e. grade_num is not null), you will see grade_num...
    This can be achieved using the NVL function as well...
    Instead of the DECODE you could write NVL(grade_num, grade_letter)...
    The NVL function works as follows: if the first argument is null, then substitute the second argument.
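
    One caveat (my addition, not from the original reply): NVL takes its datatype from the first argument, so if grade_num is a NUMBER and grade_letter is a VARCHAR2, Oracle would try to convert the letter to a number and raise an error; an explicit TO_CHAR avoids that:

    SELECT column1
          ,column2
          ,NVL(TO_CHAR(grade_num), grade_letter) AS grade
    FROM   your_table;   -- hypothetical table name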

    If your goal is to extract only the rows where certain columns are not NULL, then you can also say something like this...

    SELECT column1, Column2
    From table_name
    where (Column1 is not null) AND (column2 is not null);

  • How to find history records that have a not null value

    Hi all

    I want to find, for each ID (identification number), the record immediately prior to LOG_REVIEW_STAGE = 'HHH' that has (DT_LOG_BEGIN is not null and DT_LOG_END IS NULL and LOG_STATUS = 'Pending').

    Thanks in advance
    For example, my output should be 
    
    ID     SORT_ORDER     LOG_STATUS     LOG_REVIEW_STAGE     DT_LOG_BEGIN     DT_LOG_END
    -----------------------------------------------------------------------------------------------
    20     700          Pending          FFF               1/26/2004     
    ID     SORT_ORDER     LOG_STATUS     LOG_REVIEW_STAGE     DT_LOG_BEGIN     DT_LOG_END
    -----------------------------------------------------------------------------------------------
    10     100          Complete     AAA               1/13/2004     1/13/2004
    10     200          Complete     BBB               1/23/2004     1/23/2004
    10     300          Pending          CCC               1/23/2004     
    10     400                    DDD          
    10     601                    EEE          
    10     700                    FFF          
    10     800                    GGG          
    10     900                    HHH         ---------------------->>>>>>>>>
    10     1000                    JJJ          
    10     1100                    KKK          
    20     100          Complete     AAA               1/13/2004     1/13/2004
    20     200          Complete     BBB               1/23/2004     1/23/2004
    20     300          Complete     CCC               1/23/2004     1/23/2004
    20     400          Complete     DDD               1/24/2004     1/24/2004
    20     601          Complete     EEE               1/25/2004     1/25/2004
    20     700          Pending          FFF               1/26/2004     
    20     900                    HHH          ---------------------->>>>>>>>>
    20     1000                    JJJ          
    20     1100                    KKK          
    create table TEMP_TABLE
    (
      id               NUMBER(2),
      sort_order       NUMBER(10),
      log_status       VARCHAR2(50),
      log_review_stage VARCHAR2(50),
      dt_log_begin     DATE,
      dt_log_end       DATE
    )
    ;
    
    
    
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 100, 'Complete', 'AAA', to_date('13-01-2004', 'dd-mm-yyyy'), to_date('13-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 200, 'Complete', 'BBB', to_date('23-01-2004', 'dd-mm-yyyy'), to_date('23-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 300, 'Pending', 'CCC', to_date('23-01-2004', 'dd-mm-yyyy'), null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 400, null, 'DDD', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 601, null, 'EEE', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 700, null, 'FFF', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 800, null, 'GGG', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 900, null, 'HHH', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 1000, null, 'JJJ', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (10, 1100, null, 'KKK', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 100, 'Complete', 'AAA', to_date('13-01-2004', 'dd-mm-yyyy'), to_date('13-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 200, 'Complete', 'BBB', to_date('23-01-2004', 'dd-mm-yyyy'), to_date('23-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 300, 'Complete', 'CCC', to_date('23-01-2004', 'dd-mm-yyyy'), to_date('23-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 400, 'Complete', 'DDD', to_date('24-01-2004', 'dd-mm-yyyy'), to_date('24-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 601, 'Complete', 'EEE', to_date('25-01-2004', 'dd-mm-yyyy'), to_date('25-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 700, 'Complete', 'FFF', to_date('26-01-2004', 'dd-mm-yyyy'), to_date('26-01-2004', 'dd-mm-yyyy'));
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 800, 'Pending', 'GGG', to_date('27-01-2004', 'dd-mm-yyyy'), null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 900, null, 'HHH', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 1000, null, 'JJJ', null, null);
    insert into TEMP_TABLE(id, sort_order, log_status, log_review_stage, dt_log_begin, dt_log_end)
    values (20, 1100, null, 'KKK', null, null);
    commit;

    Thanks for providing sample data, as well as an explanation of your logic. It is very helpful!

    I think that's what you want:

    SELECT id
         , sort_order
         , log_status
         , log_review_stage
         , dt_log_begin
         , dt_log_end
    FROM
    (
            SELECT id
                 , sort_order
                 , log_status
                 , log_review_stage
                 , dt_log_begin
                 , dt_log_end
                 , LEAD(log_review_stage) OVER (PARTITION BY id ORDER BY sort_order) AS next_log_review_stage
            FROM   temp_table
    )
    WHERE  log_status            = 'Pending'
    AND    dt_log_begin          IS NOT NULL
    AND    dt_log_end            IS NULL
    AND    next_log_review_stage = 'HHH'
    ;
    

    When run on your dataset, it returns:

                      ID           SORT_ORDER LOG_STATUS      LOG_REVIEW_STAGE                                   DT_LOG_BEGIN        DT_LOG_END
    -------------------- -------------------- --------------- -------------------------------------------------- ------------------- -------------------
                      20                  800 Pending         GGG                                                01/27/2004 00:00:00
    

    If this is incorrect, please explain why.

    Thank you!

  • Difference in access path between a primary key and a unique index

    Hi all

    Is there a specific way the Oracle optimizer treats a primary key differently from a unique index?

    Oracle Version
    SQL> select * from v$version;
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    
    SQL> 
    Sample test data for a normal index
    SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
    
    Table created.
    
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    
    Sequence created.
    
    SQL>  insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    
    99999 rows created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    
    PL/SQL procedure successfully completed.
    
    SQL> select col1 from t_test_tab;
    
    99999 rows selected.
    
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1565504962
    
    --------------------------------------------------------------------------------
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    --------------------------------------------------------------------------------
    
    
    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
           6915  consistent gets
            259  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    
    SQL> create index idx_t_test_tab on t_test_tab(col1);
    
    Index created.
    
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true); 
    
    PL/SQL procedure successfully completed.
    
    SQL> select col1 from t_test_tab;
    
    99999 rows selected.
    
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1565504962
    
    --------------------------------------------------------------------------------
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    --------------------------------------------------------------------------------
    
    
    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    
    SQL> 
    Sample test data when using a primary key
    SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
    
    Table created.
    
    SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
    
    Sequence created.
    
    SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
     
    99999 rows created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    
    PL/SQL procedure successfully completed.
    
    SQL> select col1 from t_test_tab1;
    
    99999 rows selected.
    
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1727568366
    
    ---------------------------------------------------------------------------------
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |             | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB1 | 99999 |   488K|    74   (3)| 00:00:01 |
    ---------------------------------------------------------------------------------
    
    
    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    
    SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
    
    Table altered.
    
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true); 
    
    PL/SQL procedure successfully completed.
    
    SQL> select col1 from t_test_tab1;
    
    99999 rows selected.
    
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2995826579
    
    ---------------------------------------------------------------------------------------
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    59   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 |   488K|    59   (2)| 00:00:01 |
    ---------------------------------------------------------------------------------------
    
    
    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
           6867  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    
    SQL> 
    As you can see, even though statistics were gathered in both cases:
    * On the 1st table, T_TEST_TAB, the query still uses a FULL table access after the index is created.
    * On the 2nd table, T_TEST_TAB1, the query uses the PRIMARY KEY index as expected.

    Any comments?

    Kind regards
    BPat

    >
    * On the 1st table, T_TEST_TAB, the query still uses a FULL table access after the index is created.
    * On the 2nd table, T_TEST_TAB1, the query uses the PRIMARY KEY index as expected.
    >
    Yes - for the first table a full table scan is used, because the selected column is nullable and B-tree indexes do not include rows whose indexed columns are all NULL.

    The index can be used for the second query, since all the data (the first column) is available in the index and there can be no NULL values because of the primary key. If you check the constraints, you will find that there is now a CHECK constraint ensuring that the first column cannot be null.
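
    To confirm that nullability is the deciding factor (my own suggested check, not part of the original thread), you could mark the indexed column NOT NULL and re-run the query; the optimizer should then be able to answer it from the index alone:

    SQL> alter table t_test_tab modify col1 not null;

    Table altered.

    SQL> select col1 from t_test_tab;
    -- the plan should now show INDEX FAST FULL SCAN on IDX_T_TEST_TAB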

    For a full and interesting discussion, see the explanation of this and a related issue in the thread where I asked:
    What SYS tables (not views) contain the NULL /not null/ spec of a column definition? - and the response posted there on 23 April 2012 09:02.

    The question I asked was based on a question here that is similar to yours:
    Columns becoming nullable after a drop of primary key?

  • Potential problems for tables without primary keys and unique keys

    GoldenGate 11.2.1.0.3/Solaris 10
    DB: Oracle for Oracle (Source and target is 11.2.0.3)
    Topology: unidirectional


    In our one-way GG configuration, a few of the tables being replicated have neither a primary key nor a unique key.

    Last week, when we set up GG for testing, we received warnings like the one below for these tables.
    GGSCI > add trandata WMHS_UD.crtn_dtl
    
    2013-01-12 11:34:33  WARNING OGG-00869  No unique key is defined for table 'CRTN_DTL'. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.
    
    Logging of supplemental redo data enabled for table WMHS_UD.crtn_dtl.
    Replication seems to work very well for these tables.

    From googling, I gather that there may be performance degradation when you replicate tables without a PK or UK.

    But are there other potential problems, such as certain kinds of data not being replicated, due to the lack of a PK/UK?

    It really depends on the data.

    By default, GG combines all columns as a virtual primary key, but it does no conflict checking by default. So as long as you can be sure that every record you insert into the table is unique, it will work.
    BUT as soon as you insert the same record again - one that is already inserted - you will run into problems.

    Let me show what happens when you use an initial load, because it makes this easier to describe:
    We start the capture for a table at 10.00. Now you insert a record into the table at 10.01. When you then start an initial load at 10.02, the record you inserted at 10.01 will be replicated twice: once by the initial load (since it is taken at 10.02 and therefore includes the data from 10.01), AND again through the capture/replicat process.
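
    As the OGG-00869 warning notes, KEYCOLS can be used to declare a logical key for such tables. A minimal sketch for the parameter files (the key column names here are hypothetical):

    -- Extract parameter file
    TABLE wmhs_ud.crtn_dtl, KEYCOLS (crtn_id, crtn_seq);

    -- Replicat parameter file
    MAP wmhs_ud.crtn_dtl, TARGET wmhs_ud.crtn_dtl, KEYCOLS (crtn_id, crtn_seq);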

  • Does a composite primary key need a unique ID column or not?

    Hello dear,
    I have a table whose primary key is composed of two columns, and there are about 5 more information columns. Do you think I should use the two columns as the primary key of this table, or create a unique ID column?
    Example:
    Employees table:
    Departments table:

    The problem table is the Department Managers table:
    Now, I'll add the employee id and department id in this table, plus manager info etc... (info that does not exist in the Employees table). Should I use those two columns as the primary key, or create an id column in this table?

    Thank you very much.

    You should use a new ID column as the primary key (alter table ... add constraint ... primary key ...); you cannot use a foreign key as the primary key :)
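
    A minimal sketch of the surrogate-key option suggested above, with hypothetical table and column names; note that the natural pair should still be kept unique:

    create table department_managers
    (
      dm_id    number primary key,      -- surrogate key, e.g. filled from a sequence
      emp_id   number not null references employees (emp_id),
      dept_id  number not null references departments (dept_id),
      mgr_info varchar2(100),
      constraint dm_emp_dept_uq unique (emp_id, dept_id)   -- keep the natural key unique
    );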
