LMS huge trace file created

Hi all

Two-node RAC database on Windows Server 2003. The database version is 10.2.0.3. My question is: why do we get huge LMS trace files in the bdump directory, and why do they keep growing? The trace file contains entries like the following. Thanks a lot for your help, Shirley

2009-12-29 10:11:01.926
KJM_HISTORY: OP (12) of STALL RCVR context 0 elapsed 651716247 US
KJM HIST LMS3:
12:651716247 7:1 6:0 10:1 17:1 16:0 15:1 12:15475015 7:1 6:1
10:0 17:2 16:1 15:291 12:651692189 7:1 6:1 10:0 17:1 16:1
12:12345971 15:1 7:0 6:1 10:0 17:1 16:0 15:1 12:12020 7:0
6:1 10:0 17:1 16:1 15:0 12:11977 7:1 6:0 10:0 17:1
16:1 15:0 12:12054 7:1 6:0 10:0 17:1 16:1 15:0 12:12016
7:1 6:0 10:0 17:1 16:1 12:12017 7:1 6:0 10:0 15:0
17:1 16:1 15:0 12:11692
----------------------------------------
SO: 000000012A3B11D8, type: 4, owner: 000000012A0041B8, flag: INIT /-/-/ 0x00
(session) sid: 543 trans: 0000000000000000, creator: 000000012A0041B8, flag: (51) USR /-BSY /-/ - /-/ - / -.
DID: 0000-0000-00000000, DID short term: 0000-0000-00000000
TXN branch: 0000000000000000
Oct: 0, prv: 0, sql: 0000000000000000, psql: 0000000000000000, users: 0/SYS
last wait for 'gcs remote message' blocking sess=0x0000000000000000 seq=10 wait_time=651716241 seconds since wait started=300
waittime=18, poll=0, event=0
Dumping Session Wait History
for 'gcs remote message' count=1 wait_time=651716241
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=15475008
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=651692179
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=12345963
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=12017
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=11974
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=12052
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=12013
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=12014
waittime=18, poll=0, event=0
for 'gcs remote message' count=1 wait_time=11688
waittime=18, poll=0, event=0
temporary object counter: 0
----------------------------------------
UOL used: 0 locks (used = 0, free = 0)
KGX atomic operation Log 000007FFE6FF5600
Mutex 0000000000000000 (0, 0) oper idn 0 NONE
Library Cache 543 DTS uid 0 w/h 0 slp 0
KGX atomic operation Log 000007FFE6FF5648
Mutex 0000000000000000 (0, 0) oper idn 0 NONE
Library Cache 543 DTS uid 0 w/h 0 slp 0
KGX atomic operation Log 000007FFE6FF5690
Mutex 0000000000000000 (0, 0) oper idn 0 NONE
Library Cache 543 DTS uid 0 w/h 0 slp 0

Check MetaLink note:
Excessive LMS and LMD trace file sizes generated on Windows RAC (Doc ID 437101.1)
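While working through that note, one stopgap is to cap how large any single trace file can grow (a sketch; the 50M value is just an example, and this does not fix the root cause the note describes):

```sql
-- Limit the size of each per-process trace file (does not affect the alert log)
ALTER SYSTEM SET max_dump_file_size = '50M' SCOPE = BOTH;

-- Verify the current setting
SHOW PARAMETER max_dump_file_size
```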

HTH
-André

Tags: Database

Similar Questions

  • Trace file created for the wrong session

    DB version: 10.2.0.3.0
    Version of the operating system: Solaris 5.10
    2 node RAC

    Using DBMS_MONITOR.SESSION_TRACE_ENABLE, I tried to trace an Oracle session started by a C++ application.

    I determined which instance the session was connected to by querying gv$session, logged in as SYS on that node's instance, and then issued
    execute dbms_monitor.session_trace_enable(4371,98124, true, false);
    After enabling it, I saw a new trace file being generated. But this trace is actually tracing the SYS session that executed dbms_monitor.session_trace_enable, not session (4371,98124). I tested by running basic queries in the SYS session, such as
    select sysdate from dual
    and I see all of those statements in the trace file. What could be the cause of this problem?

    I noticed that
    SHARED_SERVERS = 1
    on both nodes. Could that be the origin of this strange problem? As a follow-up: if I disable shared servers with
    alter system set SHARED_SERVERS = 0
    could it cause connection problems for client applications because of their tnsnames.ora settings?

    Hi, you can also try this one in conjunction with dbms_monitor:

    DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION

    Regards
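    A sketch of the suggested approach (the SID and serial# below are the ones from the question; the username filter is an assumption — substitute your own values from gv$session):

    ```sql
    -- Find the target session (username 'APPUSER' is hypothetical)
    SELECT inst_id, sid, serial#, username
    FROM   gv$session
    WHERE  username = 'APPUSER';

    -- Enable tracing, let the application run, then disable
    EXEC dbms_system.set_sql_trace_in_session(4371, 98124, TRUE);
    EXEC dbms_system.set_sql_trace_in_session(4371, 98124, FALSE);
    ```

    Note that, like dbms_monitor, this must be run on the instance the target session is connected to.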

  • Gathering schema statistics produces a HUGE trace file

    Hello
    I gather schema stats on my E-Biz 12.1.3 / 11.2.0.3 system.
    Each week, we gather schema stats.

    I noticed that when the job completes, a 3 GB trace file is produced under the Oracle RDBMS home.

    I tried setting the concurrent request submission option to DO NOT SAVE OUTPUT FILE, but the file is still produced.

    I don't want it and don't understand why it is produced. Does anyone out there have a resolution?

    We are not really using HRMS, so perhaps trace was enabled for it somehow? How can I check if this is the case, please?

    Please run the query in (Query To Get Enabled Trace/Log/Debug Profile Options (Doc ID 559618.1)).

    Also, from the System Administrator responsibility, navigate to (Security > AuditTrail > Install) and see if auditing is enabled.

    Thank you
    Hussein

  • Huge trace files generated in the trace location

    Hello

    Huge trace files are generated in my EBS R12.1.3 instance, database version 11.2.0.2.0.

    These trace files fill the file system every day.

    The trace files have names like SID_ora_26369_APPS.trc.

    Please help on this issue.

    Thanks in advance.

    Where are these trace files created? In which directory?

    Please see (Query To Get Enabled Trace/Log/Debug Profile Options (Doc ID 559618.1)) to determine if tracing is enabled at any level.

    Can you post the contents of the trace file here? If the content is large, then you will need to upload the file to a free hosting site and post the link here.

    Thank you

    Hussein

  • Can I control the trace files in bdump?

    Good morning, experts...

    Question of BDUMP

    In BDUMP, I have the following files...

    -rw-r--r-- 1 oracle oinstall 112687 Feb 19 13:41 alert_testdb.log
    -rw-r--r-- 1 oracle oinstall  33068 Feb 19 12:03 alert_TSH1.log
    -rw-r----- 1 oracle oinstall  20301 Feb 14 09:13 testdb_arc0_15379.trc
    -rw-r----- 1 oracle oinstall    632 Feb  5 04:56 testdb_arc0_17339.trc
    -rw-r----- 1 oracle oinstall   2118 Feb  5 05:22 testdb_arc0_17409.trc
    ...

    294 trace files in total...


    I checked some .trc files; almost all have the same information.

    ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
    System name: Linux
    Node name: xxxxxxxxxxxxxx
    Release: xxxxxxxxxxxxxxxxxxxxxxxxxx
    Version: xxxxxxxxxxxxxxxxxxxxxxxxx
    Machine: xxxxxx
    Instance name: testdb
    Redo thread mounted by this instance: 1
    Oracle process number: 0

    My questions are:

    1. If the alert log contains the error details, what is the purpose of the trace files in bdump?
    2. Why are so many trace files created without useful information (almost all with the same information)?
    3. What type of information is usually stored in .trc files?

    What I know about trace files:

    Each background process writes a trace file if an internal error has occurred.
    If I'm wrong, please correct me.

    Trace files and the alert log serve different purposes. A simple way to think about it is that trace files are used when diagnosing problems, while the alert log shows you what events are occurring in the database in general, without flooding you with unnecessary details. If the database crashed, the alert log will tell you when it happened, but the details of the process that crashed would (hopefully) be in a trace file.

    Some trace files are huge, and you certainly don't want them in the alert log because it would make it too big to manage or read.

    For example, if a process crashes, the trace file that the process dumps would be useful when working with Oracle Support to identify the problem. Or, if you want to see what a specific session is doing, you can turn on tracing for it and then format the trace file with tkprof to understand what the session did.

    The documentation has a good summary:

    Trace files

    A trace file is an administrative file containing diagnostic data used to investigate problems. Trace files can also provide guidance for tuning applications or an instance, as explained in "Performance and Tuning Diagnostics".

    Types of Trace files

    Each server process and background process can periodically write to an associated trace file. The files contain information on the process environment, status, activities, and errors.

    The SQL trace facility also creates trace files, which provide performance information on individual SQL statements. To enable tracing for a client identifier, service, module, action, session, instance, or database, you must execute the appropriate procedures in the DBMS_MONITOR package or use Oracle Enterprise Manager.

    A dump is a special type of trace file. Whereas a trace tends to be continuous output of diagnostic data, a dump is typically a one-time output of diagnostic data in response to an event (for example, an incident). When an incident occurs, the database writes one or more dumps to the incident directory created for that incident. Incident dumps also contain the incident number in the file name.
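    The session-tracing workflow mentioned above can be sketched as follows (the trace file name in the tkprof comment is hypothetical; find the real one in user_dump_dest):

    ```sql
    -- In the session to be diagnosed
    ALTER SESSION SET sql_trace = TRUE;
    -- ... run the statements of interest ...
    ALTER SESSION SET sql_trace = FALSE;
    -- Then, at the OS prompt, format the resulting trace file, e.g.:
    --   tkprof orcl_ora_12345.trc report.txt sys=no sort=exeela
    ```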

  • Trace file is not getting created - TKPROF

    Hello

    the trace file is not created when trying to get output for TKPROF

    I did the following steps:

    1. ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

    2.

    @?/rdbms/admin/utlxplan.sql

    CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;

    GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;

    3.

    ALTER SESSION SET SQL_TRACE = TRUE;

    SELECT COUNT(*) FROM DUAL;

    ALTER SESSION SET SQL_TRACE = FALSE;

    Then I checked the USER_DUMP_DEST directory mentioned in the init.ora file... but I couldn't find any trace file.

    NOTE: file init.ora

    # define directories to store trace and alert files

    background_dump_dest=%ORACLE_HOME%/Admin/clustdb/bdump

    user_dump_dest=%ORACLE_HOME%/Admin/clustdb/

    Then I created a new directory object named USER_DUMP_DEST, as below:

    CREATE OR REPLACE DIRECTORY user_dump_dest AS 'D:\app\user1\product\11.2.0\dbhome_1\admin\clustdb';

    GRANT ALL ON DIRECTORY user_dump_dest TO PUBLIC;

    Then I ran:

    ALTER SESSION SET SQL_TRACE = TRUE;

    SELECT COUNT(*) FROM DUAL;

    ALTER SESSION SET SQL_TRACE = FALSE;

    Even then it is not created. What could be the problem?

    Thanks in advance

    Regards

    Gowtham

    Post the results of:

    SQL> SHOW PARAMETER USER_DUMP_DEST
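    One more thing worth checking on this home (the CREATE DIRECTORY path suggests 11.2): from 11g onward, trace files go to the ADR trace directory, which can differ from user_dump_dest. A quick way to see where the current session's trace file would land:

    ```sql
    -- ADR locations (11g+); 'Default Trace File' is this session's own trace file
    SELECT name, value
    FROM   v$diag_info
    WHERE  name IN ('Diag Trace', 'Default Trace File');
    ```

    Also note that creating a DIRECTORY object named user_dump_dest has no effect on where trace files are written; directory objects only matter for PL/SQL file I/O.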

  • Oracle XE generating a huge number of trace files

    Hi all

    one of our customers has Oracle XE installed on Windows 32-bit, and that XE is generating a lot (GBs) of trc files in BDUMP... These trace files vary from 200 MB to 1 GB, creating havoc on my drive; I'm running out of disk space...

    I checked that sql_trace is set to false, and no code in our application sets sql_trace = true, so I have no idea about this unexpected behavior of Oracle XE...

    What would be the steps to stop this... any suggestions would be appreciated...

    Thanks and greetings
    VD

    Can you post the first 20 lines of one of these trace files?

    Could it be a job queue process?

  • CoreTelephony trace file error - no more disk space (El Capitan 10.11.3)

    Hello. My Mac (MacBook Air 2013) started crashing, most often when I have 3+ tabs open in Chrome, or when I play Football Manager with Chrome open, as if it's getting overworked. Normally I could easily do this, and I've done it since I got my Mac. But suddenly it crashes, freezes, and shuts down, and when I reboot I get this message before startup: "CoreTelephony trace file error: a file operation for CoreTelephony tracing failed; you might be running out of disk space."

    I have 56 GB of 120 GB left, so it makes no sense.

    Has anyone had the same problem, or maybe a solution? Thank you.

    These steps must be carried out as an administrator. If you have only one user account, you are the administrator.

    Please launch the Console application in one of the following ways:

    ☞ Enter the first letters of its name into a Spotlight search. Select it in the results (it should be at the top).

    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.

    ☞ Open Launchpad and start typing the name.

    In the Console window, select

    DIAGNOSTIC AND USAGE INFORMATION ▹ System Diagnostic Reports

    (not Diagnostic and Usage Messages) in the list of logs on the left. If you don't see this list, select

    View ▹ Show Log List

    in the menu bar.

    There is a disclosure triangle to the left of the list item. If the triangle is pointing to the right, click it so that it points down. You will see a list of reports. A panic report has a name that begins with "Kernel" and ends with ".panic". Select the most recent one. The contents of the report are displayed on the right. Copy and paste all of the content into a reply (text, not a screenshot).

    If you don't see any reports but you know there was a panic, you may have chosen Diagnostic and Usage Messages from the log list. Choose DIAGNOSTIC AND USAGE INFORMATION instead.

    In the interest of privacy, I suggest that, before posting, you edit out the "Anonymous UUID", a long string of letters, numbers, and dashes in the header of the report, if it is present (it may not be).

    Please do not post other types of diagnostic report.

    I know the report is long, perhaps several hundred lines. Please post all of it anyway.

    When you post the report, an error message may appear on the web page: "you have included content in your post that is not allowed", or "the message contains invalid characters". That's a bug in the forum software. Post the text on Pastebin, then post a link here to the page you created.

    If you have a Pastebin account, please do not select "Private" from the exposure menu when pasting, because then no one but you will be able to see it.

  • Windows 7 disk at 100% for recently created files

    Problem: I noticed that when new files are created on the computer, the "System" process reacts, usually a day later, by analyzing these files, consuming 100% of the disk I/O and slowing down the whole computer. I develop large-scale software applications on my computer, so builds and new releases create a large number of files. The high disk activity can sometimes take 2 hours before it slows down.

    Things I have already checked: I have already ruled out the Search Indexer. It isn't the Symantec anti-virus software I installed.

    Computer details: Core i5, 8 GB of RAM, Windows 7, Dell Latitude, drive is BitLockered

    Disk I/O details:

    Process - system

    Activity - seems to be scheduled to run in the morning

    Trigger - seems to be analysis of new files created the previous day

    Files with most of the write activity during this process:

    C:\$MFT (NTFS Master File Table),

    C:\$LogFile (NTFS volume journal),

    C:\pagefile.sys (swap file),

    Please let me know if this is something I can regulate, or if there is a configuration that would allow me to have certain files ignored.

    Thank you.

    JBS

    To diagnose your problem, we need to run the Windows Performance Toolkit; the instructions are in this wiki.

    If you have any questions, do not hesitate to ask.

    Please run the trace when you encounter the problem.

  • DataGuard no temporary file created?



    MariaKarpa (MK)

    Hi all

    11.2.0.3.10

    AIX6

    I have properly configured and created a standby database for our UAT database.

    But when I tested by switching logs, the logs were not applied.

    I checked the standby DB alert log and it shows this:

    DDE: Problem Key 'ORA 1110' was flood controlled (0x1) (incident)

    ORA-01110: data file 201: '/u21/ORACLE/ORADATA/SITDR/temp01.dbf'

    ORA-27037: unable to get file status

    IBM AIX RISC System/6000 error: 2: no such file or directory

    Additional information: 3

    Why is the TEMP file not created?

    I also tried to copy the archive logs and apply them manually, and they are applied successfully.

    My problem is that they are not automatically transferred and applied.

    I checked the primary DB alert log and I see an error like:

    FAL[server, ARC5]: FAL archive failed, see trace file.

    ARCH: FAL archive failed. Archiver continuing.

    ORACLE Instance SIT - Archival Error. Archiver continuing.

    Help, please...

    Thank you

    MK

    ID  STATUS    DB_MODE         TYPE  RECOVERY_MODE            PROTECTION_MODE      ACTIVE  ARCHIVED_SEQ#
    --- --------- --------------- ----- ------------------------ -------------------- ------ --------------
    1   VALID     OPEN            ARCH  INACTIVE                 MAXIMUM PERFORMANCE  0      836
    2   DEFERRED  OPEN_READ-ONLY  LGWR  MANAGED REAL TIME APPLY  MAXIMUM PERFORMANCE  1      523


    It is clear... The state of dest_2 is DEFERRED; as a result, no logs will be sent to that destination. To enable it, do this:

    SQL> alter system set log_archive_dest_state_2 = 'enable' scope=both sid='*';
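    After enabling the destination, the fix can be verified from the primary (a sketch):

    ```sql
    -- Force a log switch, then check that dest_2 is now VALID with no error
    SQL> alter system switch logfile;
    SQL> select dest_id, status, error
         from   v$archive_dest_status
         where  dest_id = 2;
    ```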

  • Cannot generate the SQL * Net trace file on Linux

    Oracle 11g Release 1

    RHEL 6.3

    --------------------

    SQLNET.ora file:

    SQLNET.AUTHENTICATION_SERVICES = (DOB, TCPS, NTS)

    SSL_VERSION = 0

    SSL_CLIENT_AUTHENTICATION = TRUE

    SSL_CIPHER_SUITES = (SSL_RSA_EXPORT_WITH_RC4_40_MD5)

    LOG_DIRECTORY_CLIENT = /home/oracle

    LOG_FILE_CLIENT = SQLNET.log

    TRACE_DIRECTORY_CLIENT = /home/oracle

    TRACE_FILE_CLIENT = SQLNET.trc

    DIAG_ADR_ENABLED = OFF

    TRACE_UNIQUE_CLIENT = ON

    The sqlnet.ora file above does not produce a trace file when I use SQL*Plus to connect to the remote database.

    What could be the problem? (The user has write access to the directory where the log file and the .trc file must be created.)

    Answered my own question. The OS was RHEL 5.3, which caused some compatibility problems. Updated to 6.5 and all is fine.

  • How to find the trace file

    Hi all

    I generated a trace file with:

    SQL> alter system set events '1940 trace name errorstack level 3';

    Modified system.

    SQL > drop user RHUNTER1;
    Drop user RHUNTER1
    *
    ERROR on line 1:
    ORA-01940: cannot delete a user who is currently logged on

    SQL> alter system set events '1940 trace name errorstack off';

    Modified system.


    It's Oracle Database 10g Release 2. Now, I checked udump for the trace file, but there are a lot of trace files generated every minute. How can I find the exact trace file?

    Please help me find it.

    Thank you

    Here is the SQL that will tell you what will be your trace file name:

    MPOWEL01> l
      1   select i.value||'_ora_'||p.spid||'.trc'
      2    from v$process p, v$session s,
      3      (select value from v$parameter where name = 'instance_name') i
      4    where p.addr = s.paddr
      5*  and s.sid = userenv('sid')
    MPOWEL01> /
    
    I.VALUE||'_ORA_'||P.SPID||'.TRC'
    --------------------------------------------------------------------------------
    XXXX_ora_3014738.trc
    

    I built my own code to do this, but the above comes from a blog link I saved about reading the trace file that you created in your session. See
    http://dioncho.WordPress.com/2009/03/19/

    HTH - Mark D Powell.
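    A related trick, assuming you can reproduce the error: tag the trace file name before raising the error, so it is easy to spot among the many files in udump:

    ```sql
    -- The identifier string is arbitrary; it becomes part of the trace file name
    ALTER SESSION SET tracefile_identifier = 'ERR1940';
    -- reproduce the ORA-01940 here, then look for *ERR1940*.trc in udump
    ```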

  • Trace files not written to the background dump dest in Oracle 10g

    Hi all

    OS version: RHEL 5.7
    DB version: 10.2.0.4
    Cluster: 2-node RAC database

    Today I faced strange behavior on one of our production databases. It's a 2-node RAC database. Trace files are not automatically generated in the background dump destination on the first node. I can see the trace files on the second node in its background dump dest, but the strange behavior occurs on the first node. I can see the alert logfile in the background dump dest, and although an error message names a generated trace file, no such file exists in bdump. Here is the error, but physically no trace file is generated:
    Errors in file /oracle/db/admin/<sid>/bdump/<sid>j0011558.trc:
    ORA-12012: error on auto execute of job 94377
    ORA-12008: error in materialized view refresh path
    Can someone offer any idea about this strange behavior? There is no maintenance script removing trace files.

    Kind regards
    Imran Khan

    If it works after you re-created your synonym, then no, you should not need to recreate the MV again.

  • Generation of Oracle Trace files for a session

    Hello

    I use Oracle 10g (10.2.0.5) on an RHEL5 server. I used exec dbms_system.set_sql_trace_in_session(147,3,TRUE); to trace a particular session. I can trace successfully, but I created a new table in that session and I cannot find the CREATE TABLE statement in the generated trace file, although I can find the INSERT statements that I ran against the newly created table.

    Are DDL statements not recorded by the trace events?

    Kind regards

    007

    Hi;

    Please find below the docs which can help you:

    DBMS_SYSTEM. SET_SQL_TRACE_IN_SESSION GENERATES NO TRACE FOR THE SESSION ACTIVE [ID 236178.1]
    How to enable the SQL Trace for all new Sessions [ID 178923.1]

    Follow-up to the Sessions to Oracle using the package DBMS_SUPPORT [ID 62160.1]

    Regards,
    Helios

  • Determining the blocker from the deadlock trace file (self-deadlock)

    Hello

    Recently, I had a problem on a 10.2.0.4 single-instance database where deadlocks occurred. The following test case reproduces the problem (I create three parent tables, a child table with indexed foreign keys to all three parent tables, and a procedure that performs an insert into the child table in an autonomous transaction):
    create table parent_1(id number primary key);
    
    create table parent_2(id number primary key);
    
    create table parent_3(id number primary key);
     
    create table child( id_c number primary key,
                       id_p1 number,
                       id_p2 number,
                       id_p3 number,
                       constraint fk_id_p1 foreign key (id_p1) references parent_1(id),
                       constraint fk_id_p2 foreign key (id_p2) references parent_2(id),
                       constraint fk_id_p3 foreign key (id_p3) references parent_3(id)
                       );
     
    create index i_id_p1 on child(id_p1);
    
    create index i_id_p2 on child(id_p2);
    
    create index i_id_p3 on child(id_p3);
    
    create or replace procedure insert_into_child as
    pragma autonomous_transaction;
    begin
      insert into child(id_c, id_p1, id_p2, id_p3) values(1,1,1,1);
      commit;
    end;
    /
     
    insert into parent_1 values(1);
    
    insert into parent_2 values(1);
    
    commit;
    And now the action that causes the deadlock:
    SQL> insert into parent_3 values(1);
    
    1 row created.
    
    SQL> exec insert_into_child;
    BEGIN insert_into_child; END;
    
    *
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    ORA-06512: at "SCOTT.INSERT_INTO_CHILD", line 4
    ORA-06512: at line 1
    My question is: how can I determine what the insert into the CHILD table was waiting for? It could have been waiting on any of PARENT_1, PARENT_2, PARENT_3, or even on CHILD itself if I had tried to insert a duplicate primary key into CHILD. Since we have the full test case, we know that it was waiting on PARENT_3 (or, better said, it was waiting for the "parent" transaction to commit or roll back), but is it possible to determine that from the deadlock trace file alone? I ask because, to identify the problem, I had to perform redo log mining, PL/SQL tracing with DBMS_TRACE, and manual debugging on a clone of the production database restored to an SCN just before the deadlock occurred. So I had to do a lot of work to get to the blocking table, and if this information were already in the deadlock trace file, it would have saved me a lot of time.

    Here is the deadlock trace file. From the "DML LOCK" section, I can see that the child table (tab=227042) holds a mode 3 lock (SX) and all three parent tables hold mode 2 locks (SS), but from this excerpt I cannot tell that it is parent_3 (tab=227040) that blocks the child insert:
    Deadlock graph:
                           ---------Blocker(s)--------  ---------Waiter(s)---------
    Resource Name          process session holds waits  process session holds waits
    TX-00070029-00749150        23     476     X             23     476           S
    session 476: DID 0001-0017-00000003     session 476: DID 0001-0017-00000003
    Rows waited on:
    Session 476: obj - rowid = 000376E2 - AAA3biAAEAAA4BwAAA
      (dictionary objn - 227042, file - 4, block - 229488, slot - 0)
    Information on the OTHER waiting sessions:
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    INSERT INTO CHILD(ID_C, ID_P1, ID_P2, ID_P3) VALUES(1,1,1,1)
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    3989eef50         4  procedure SCOTT.INSERT_INTO_CHILD
    391f3d870         1  anonymous block
    .
    .
    .
    .
            SO: 397691978, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227042 flg=11 chi=0
                      his[0]: mod=3 spn=35288
            (enqueue) TM-000376E2-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x398341fe8, mode: SX, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398341ff8
            ----------------------------------------
            SO: 397691878, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227040 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376E0-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x3983386e8, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x3983386f8
            ----------------------------------------
            SO: 397691778, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227038 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376DE-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x398340f58, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398340f68
            ----------------------------------------
            SO: 397691678, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227036 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376DC-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x39833f358, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x39833f368
          ----------------------------------------
    Thanks in advance for your comments,
    Swear

    user633661 wrote:

    My question is: how can I determine what the insert into the CHILD table was waiting for? It could have been waiting on any of PARENT_1, PARENT_2, PARENT_3, or even on CHILD itself if I had tried to insert a duplicate primary key into CHILD. Since we have the full test case, we know that it was waiting on PARENT_3 (or, better said, it was waiting for the "parent" transaction to commit or roll back), but is it possible to determine that from the deadlock trace file alone?

    There is no way to get the answer from the deadlock trace.

    At this stage, and with your example, the waiting session waits on a TX (transaction) lock: this means it has no information about (and no interest in) the actual data involved; it is simply waiting for a transaction slot in an undo segment header to clear.

    An easy way to demonstrate this is as follows:


    create the parent and child tables with the FK constraint enabled
    Session 1 - set a savepoint, then insert a row into the parent but do not commit
    Session 2 - insert a matching row into the child - the session will wait on a TX lock held by the parent transaction
    Session 1 - roll back to the savepoint

    Because the rollback is to a savepoint, session 1 still holds its TX lock in exclusive mode, even though it no longer holds any TM (table) locks.
    Session 2 will still wait for session 1 to commit or roll back, even though the required parent row no longer exists, not even in an uncommitted state.
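    The demonstration above can be sketched against the thread's own test case (session labels are comments; run the statements in two separate sessions):

    ```sql
    -- Session 1: start a transaction with a savepoint, insert an uncommitted parent row
    SAVEPOINT sp1;
    INSERT INTO parent_3 VALUES (2);

    -- Session 2: this insert waits on session 1's TX lock (uncommitted parent key)
    INSERT INTO child (id_c, id_p1, id_p2, id_p3) VALUES (2, 1, 1, 2);

    -- Session 1: roll back to the savepoint; the TM locks are released,
    -- but the TX lock is still held, so session 2 keeps waiting
    ROLLBACK TO SAVEPOINT sp1;
    ```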

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    Author: Oracle Core
