Query on dba_free_space hangs waiting on "db file sequential read" events

Hi all

Env: 10gR2 on Windows NT

I ran the query:
Select tablespace_name, sum(bytes)/1024/1024 from dba_free_space group by tablespace_name;
and it is still waiting.
I checked the wait event in v$session and it is "db file sequential read".

I put a trace on the session before launching the above query:
 

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.06          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.06       0.06          0          0          0           0

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     13677        0.16        151.34
  SQL*Net message to client                       1        0.00          0.00
  db file scattered read                        281        0.01          0.53
  latch: cache buffers lru chain                  2        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    13703      0.31       0.32          0          0          0           0
Execute  14009      0.75       0.83          0          0          0           0
Fetch    14139      0.48       0.74         26      56091          0       15496
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    41851      1.54       1.89         26      56091          0       15496

Misses in library cache during parse: 16
Misses in library cache during execute: 16

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        26        0.00          0.12

    1  user  SQL statements in session.
14010  internal SQL statements in session.
14011  SQL statements in session.
I took an AWR report (for 1 hour) and the top 5 events come out like this:

Event                                 Waits    Time (s)  Avg Wait (ms)  % Total Call Time  Wait Class
------------------------------ ------------ ----------- -------------- ------------------ ----------
db file sequential read           1,134,643       7,580              7               56.8   User I/O
db file scattered read              940,880       5,025              5               37.7   User I/O
CPU time                                            967                               7.2
control file sequential read          4,987           3              1                0.0 System I/O
control file parallel write           2,408           1              1                0.0 System I/O
The PHYRDS value (from dba_hist_filestatxs) for my system01.dbf is 161,028,980 in the latest snapshot.

Could someone shed some light on what is happening here?

TIA,
JJ

In certain circumstances, querying the dictionary can be slow, usually due to bad execution plans caused by stale or missing statistics. Try gathering statistics with dbms_stats.gather_fixed_objects_stats; it has worked for me before.
You can also read Note 414256.1, "Poor Performance for the Tablespaces Page in Grid Control Console", which also points to a possible problem with the recycle bin.
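
For illustration, a minimal sketch of that suggestion (run as a suitably privileged user; adding dictionary statistics and the recycle bin purge are my additions based on the note's theme, not steps from this thread):

exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.gather_dictionary_stats;
-- if Note 414256.1 applies (bloated recycle bin), purging it as SYSDBA is a separate option:
-- purge dba_recyclebin;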

HTH

Enrique


Similar Questions

  • "the db file sequential read" waiting for event slow down an application.

    "the db file sequential read" waiting for event slow down an application.

    It is a rather strange problem. There is an update statement that hangs on the wait event 'db file sequential read' and until you restart the database, the query works fine. It happens once a week, usually Monday or after several days of large amount of work.

    I checked the processor and is fine, memory is very good, although the SGA and PGA have taken maximum memory. Flow of the disc seems to be ok since each another session on the basis of data looks very good.

    I guess that there is a missing configuration to avoid having to restart the database each week.

    Any help is greatly appreciated.

    Hello

    If you want the same join order of the tables as the plan you get after a restart, just use the ORDERED hint:

    UPDATE item_work_step
    SET user_name = :b1,
    terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'),
    status_cd = 'IN PROCESS'
    WHERE item_work_step_route_id =
    (SELECT item_work_step_route_id
    FROM (SELECT /*+ORDERED */ iws.item_work_step_route_id
    FROM user_role ur,
    work_step_role wsr,
    work_step ws,
    app_step aps,
    item_work_step iws,
    item_work iw,
    item i
    WHERE wsr.role_cd = ur.role_cd
    AND ws.work_step_id = wsr.work_step_id
    AND aps.step_cd = ws.step_cd
    AND iws.work_step_id = ws.work_step_id
    AND iws.work_id = ws.work_id
    AND iws.step_cd = ws.step_cd
    AND iws.status_cd = 'READY'
    AND iw.item_work_id = iws.item_work_id
    AND iw.item_id = iws.item_id
    AND iw.work_id = iws.work_id
    AND i.item_id = iws.item_id
    AND i.item_id = iw.item_id
    AND i.deleted = 'N'
    AND i.item_type_master_cd = :b3
    AND ur.user_name = :b1
    AND aps.app_name = :b2
    AND ( iws.assignment_user_or_role IS NULL
    OR ( iws.assignment_user_or_role IN (
    SELECT ur.role_cd
    FROM user_role ur
    WHERE ur.user_name = :b1
    UNION ALL
    SELECT :b1
    FROM dual)
    AND iws.assignment_expiration_time > SYSDATE
    )
    OR ( iws.assignment_user_or_role IS NOT NULL
    AND iws.assignment_expiration_time <= SYSDATE
    )
    )
    AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE
    )
    ORDER BY aps.priority,
    LEAST (NVL (iw.priority, 9999),
    NVL ((SELECT NVL (priority, 9999)
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42),
    9999
    )
    ),
    DECODE (i.a3, NULL, 0, 1),
    NVL (iw.sla_deadline,
    (SELECT sla_deadline
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42)
    ),
    i.parent_id,
    i.item_id) unclaimed_item_work_step
    WHERE ROWNUM <= 1)
    

    If you want to get rid of the nested loops, use USE_HASH:

    UPDATE item_work_step
    SET user_name = :b1,
    terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'),
    status_cd = 'IN PROCESS'
    WHERE item_work_step_route_id =
    (SELECT item_work_step_route_id
    FROM (SELECT /*+ORDERED USE_HASH(ur wsr ws aps iws iw i) */ iws.item_work_step_route_id
    FROM user_role ur,
    work_step_role wsr,
    work_step ws,
    app_step aps,
    item_work_step iws,
    item_work iw,
    item i
    WHERE wsr.role_cd = ur.role_cd
    AND ws.work_step_id = wsr.work_step_id
    AND aps.step_cd = ws.step_cd
    AND iws.work_step_id = ws.work_step_id
    AND iws.work_id = ws.work_id
    AND iws.step_cd = ws.step_cd
    AND iws.status_cd = 'READY'
    AND iw.item_work_id = iws.item_work_id
    AND iw.item_id = iws.item_id
    AND iw.work_id = iws.work_id
    AND i.item_id = iws.item_id
    AND i.item_id = iw.item_id
    AND i.deleted = 'N'
    AND i.item_type_master_cd = :b3
    AND ur.user_name = :b1
    AND aps.app_name = :b2
    AND ( iws.assignment_user_or_role IS NULL
    OR ( iws.assignment_user_or_role IN (
    SELECT ur.role_cd
    FROM user_role ur
    WHERE ur.user_name = :b1
    UNION ALL
    SELECT :b1
    FROM dual)
    AND iws.assignment_expiration_time > SYSDATE
    )
    OR ( iws.assignment_user_or_role IS NOT NULL
    AND iws.assignment_expiration_time <= SYSDATE
    )
    )
    AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE
    )
    ORDER BY aps.priority,
    LEAST (NVL (iw.priority, 9999),
    NVL ((SELECT NVL (priority, 9999)
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42),
    9999
    )
    ),
    DECODE (i.a3, NULL, 0, 1),
    NVL (iw.sla_deadline,
    (SELECT sla_deadline
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42)
    ),
    i.parent_id,
    i.item_id) unclaimed_item_work_step
    WHERE ROWNUM <= 1)
    

    and for small tables you can also try adding, for example, FULL(ur) FULL(wsr).

    The query could be rewritten in a different way, but this is the fastest way to try out how it would behave if you rewrote it. Check the explain plan in case some of the partially ordered tables end up not joined, because you could get a Cartesian join; otherwise it should be OK.

    You can check the query's execution in the EM console.

    Regards

  • query to get the wait events (I/O latency) for a sql_id

    I'm using Oracle 11.2.0.3 and have the Diagnostics Pack license to access AWR / ASH.

    Does someone have SQL to get something like the "Top 5 timed foreground wait events" for a SQL_ID? In particular, I would like to know the wait events, the number of waits, and the average wait (ms) for a given sql_id.

    Something like this for that particular sql_id:

    Event                     Waits    Total Wait (s)   Avg Wait (ms)
    db file sequential read   36,895                              351

    It is not really sensible to use ASH for this, because it is sampled data - every second for V$ACTIVE_SESSION_HISTORY and every 10 seconds for DBA_HIST_ACTIVE_SESS_HISTORY - so you cannot get a true picture.

    You have aggregations in DBA_HIST_SQLSTAT and/or the AWR SQL sections.

    It is expensive to record this level of granularity for each unique SQL execution, which is why you have to enable it on demand - with SQL trace.
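
    As a rough illustration only (ASH is sampled, so the numbers are approximate; the 10-second multiplier for DBA_HIST samples is my assumption), a sketch for one sql_id could be:

    select event,
           count(*)      as samples,
           count(*) * 10 as approx_seconds  -- DBA_HIST samples are roughly 10 seconds apart
    from   dba_hist_active_sess_history
    where  sql_id = :sql_id
    and    session_state = 'WAITING'
    group  by event
    order  by samples desc;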

  • Reason for 'control file sequential read' wait?

    Hello

    We have a 10.2.0.4.0 two-node RAC database on Windows 2003 (all 64-bit).

    Looking at the "Top 5 timed events" section of AWR reports (for 1 hour), we still see "CPU time" as the number one event (due to our application - certain queries are hopefully being looked at by the developers now...), but recently I see "control file sequential read" as the number two event, with 3,574,633 waits and 831 seconds of time. I was hoping to find out what was causing this high number of waits. I started by trying to find a particular query that experienced this wait often, so I ran this SQL:
    select sql_id, count(*)
    from dba_hist_active_sess_history
    where event_id = (select event_id from v$event_name where name = 'control file sequential read')
    group by sql_id
    order by 2 desc ;
    As I hoped, the top sql_id returned really stands out, with a count of 14,182 (the next sql_id has a count of 68). This is the SQL text for that id:
    WITH unit AS( 
              SELECT UNIQUE S.unit_id
              FROM STOCK S, HOLDER H
              WHERE  H.H_ID = S.H_ID 
                AND  H.LOC_ID  = :B2 
                AND  S.PROD_ID   = :B1 
                ) 
    SELECT  DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL) 
     FROM   unit
    WHERE   ROWNUM = 1
    ;
    (OK, the code is a little strange, but I already have them changing it.)

    My question is:

    Why / what would this code be reading from the control file?


    Kind regards

    ADOS

    PS - I also checked the block number (p2) in dba_hist_active_sess_history for this sql_id and event_id, and it is always one of the first 5 blocks of the controlfile. I have dumped the controlfile, but I don't see anything interesting (admittedly, it is the first time I have dumped a controlfile, so I have no real idea what to look for!).

    Hello

    ADO wrote:

    WITH unit AS(
              SELECT UNIQUE S.unit_id
              FROM STOCK S, HOLDER H
              WHERE  H.H_ID = S.H_ID
                AND  H.LOC_ID  = :B2
                AND  S.PROD_ID   = :B1
    )
    SELECT  DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL)
    FROM   unit
    WHERE   ROWNUM = 1
    ;
    

    This query contains a subquery factoring clause, and since it refers to "unit" twice in the main part, chances are the subquery factoring is being materialized into a global temporary table in the SYS schema with a SYS_TEMP_% name and accessed twice later in the execution plan (check for the presence of a TEMP TABLE TRANSFORMATION step). The step that fills this intermediate table has to write to the temporary tablespace, and that is done via direct writes. Each direct write to the segment (i.e. directly to the file on disk) requires a data file status check - is it online or offline? - which is done by accessing the control file. So that is why you see these waits. This is one of many reasons why subquery factoring is not ideal for production OLTP environments.
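
    A quick way to confirm this (standard DBMS_XPLAN usage; running it right after the statement in the same session is my suggestion, not something from the original post):

    -- run the WITH query first, then in the same session:
    select * from table(dbms_xplan.display_cursor(null, null, 'BASIC'));
    -- a TEMP TABLE TRANSFORMATION / LOAD AS SELECT step over a SYS_TEMP_... object
    -- means the WITH clause was materialized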

  • getting the event structure inside the while loop to wait for an event before executing

    Hello

    I have a small problem. I raise an event using a button's value change, which works very well.  The problem is that the VI does not wait for me to raise an event and instead runs the same event again, even though I have not pressed the button again.  The mechanical action of the button is "Switch When Released".

    I was wondering how you get the event structure to wait for a user event after it has executed the first time.

    James.Morris wrote:

    There is no reason the event should be raised twice, since the only way it fires in your code is by the user clicking on "Hall measure only".

    Oh yes, there is, and I deserve a kudo for this one.  The mechanical action on the button is set to "Switch Until Released", so one click generates an event both on the press and on the release; attached is an example.

    Boy, do my students hate this question, and to be honest, I hated it too.  When would you ever do that intentionally?  Honestly?  Anyway, change the button back to normal ("Latch When Released" is the default), move the button's terminal into the event structure case where it is handled, and it will work as usual.

  • Standby redo log file missing when restoring the primary database using an RMAN backup taken on the physical standby database

    Here is my question, after tons of research and testing without finding the right solution.

    Target:

    (1) I have a 12.1.0.2 Enterprise Edition single-instance primary database "testdb" running on server "node1".

    (2) I created a physical standby database "stbydb" on server "node2".

    (3) Data Guard is running in MaxAvailability mode (SYNC) with real-time apply, the 12c default.

    (4) The primary database has 3 single-member redo log groups (/oraredo/testdb/redo01.log, redo02.log, redo03.log).

    (5) I've created 4 standby redo log files (/oraredo/testdb/stby01.log, stby02.log, stby03.log, stby04.log).

    (6) I take the RMAN backup (database and archivelog) on the standby site only.

    (7) I want to use this backup for a full restore of the primary database.

    This is a DR test to simulate the scenario where both the primary and standby servers are totally lost.

    Here is how I take the backup, on the standby database:

    (1) Run 'alter database recover managed standby database cancel' to ensure consistent data files.

    (2) RMAN > backup database;

    (3) RMAN > backup archivelog all;

    I take the backup pieces and copy them to the primary DB server, something like:

    /home/oracle/backupset/o1_mf_nnndf_TAG20151002T133329_c0xq099p_.bkp (data files)

    /home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp (spfile & controlfile)

    /home/oracle/backupset/o1_mf_annnn_TAG20151002T133357_c0xq15xf_.bkp (archivelogs)

    And here is how I restore, on the primary site:

    I remove all the existing files first (data files, control files, etc. - all gone).

    (1) Restore the spfile to a pfile

    RMAN> startup nomount

    RMAN> restore spfile to pfile '/home/oracle/pfile.txt' from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';

    (2) Modify the pfile to convert its content for the primary DB. The pfile is shown below:

    *.audit_file_dest='/opt/Oracle/DB/admin/testdb/adump'
    *.audit_trail='db'
    *.compatible='12.1.0.2.0'
    *.control_files='/oradata/testdb/control01.ctl','/orafra/testdb/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_name_convert='/testdb/','/testdb/'
    *.db_name='testdb'
    *.db_recovery_file_dest='/orafra'
    *.db_recovery_file_dest_size=10737418240
    *.db_unique_name='testdb'
    *.diagnostic_dest='/opt/Oracle/DB'
    *.fal_server='stbydb'
    *.log_archive_config='dg_config=(testdb,stbydb)'
    *.log_archive_dest_2='service=stbydb SYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=stbydb'
    *.log_archive_dest_state_2='ENABLE'
    *.log_file_name_convert='/testdb/','/testdb/'
    *.memory_target=1800m
    *.open_cursors=300
    *.processes=300
    *.remote_login_passwordfile='EXCLUSIVE'
    *.standby_file_management='AUTO'
    *.undo_tablespace='UNDOTBS1'

    (3) Restart the DB with the updated pfile

    SQL> create spfile from pfile='/home/oracle/pfile.txt';

    SQL> shutdown

    SQL> startup nomount

    (4) restore controlfile

    RMAN> restore primary controlfile from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';

    RMAN> alter database mount;

    (5) Catalog all the backup pieces

    RMAN> catalog start with '/home/oracle/backupset/';

    (6) restore and recover the database

    RMAN > restore database;

    RMAN> recover database until scn XXXXXX; (this SCN is the highest one in the archivelog backups, which extends beyond the SCN of the datafile backup)

    (7) open resetlogs

    RMAN > alter database open resetlogs;

    Everything seems perfect, except that one of the standby redo log files is not re-created:

    SQL> select * from v$standby_log;

    ERROR:

    ORA-00308: cannot open archived log '/oraredo/testdb/stby01.log'

    ORA-27037: unable to get file status

    Linux-x86_64 error: 2: no such file or directory

    Additional information: 3

    no rows selected

    I intend to use the same backup to restore both the primary and the standby database, to save transfer traffic and downtime between them in the real production world.

    So I performed exactly the same steps (except RESTORE STANDBY CONTROLFILE, and no recovery after the database restore) to restore the standby database.

    And I got the same missing standby log file.

    The problem is:

    (1) The alert.log fills up with this error; that is not the main concern here.

    (2) Real-time redo apply now won't work, since the standby side always shows "WAITING_FOR_LOG" for the LGWR transport.

    (3) I can't drop and re-create this log file.

    Then I tried several things and found:

    The missing standby logfile was still 'ACTIVE' at the time the RMAN backup was taken.

    For example, on the standby DB below, group #4 (stby01.log) would be lost after the restore.

    SQL> select group#, sequence#, used, status from v$standby_log;

        GROUP#  SEQUENCE#       USED STATUS
    ---------- ---------- ---------- ----------
             4         19     133632 ACTIVE
             5          0          0 UNASSIGNED
             6          0          0 UNASSIGNED
             7          0          0 UNASSIGNED

    So before I take the backup, I tried this on the primary database:

    SQL> alter system set log_archive_dest_state_2 = defer;

    After this, standby log group #4 on the standby side was released:

    SQL> select group#, sequence#, used, status from v$standby_log;

        GROUP#  SEQUENCE#       USED STATUS
    ---------- ---------- ---------- ----------
             4          0          0 UNASSIGNED
             5          0          0 UNASSIGNED
             6          0          0 UNASSIGNED
             7          0          0 UNASSIGNED

    Then the backup restored correctly, without a missing standby logfile.

    However, changing this on the primary database means breaking Data Guard protection while performing the backup. That is not acceptable in a production environment.

    Finally, my real questions:

    (1) What should I do, or not do, apart from this parameter change?

    (2) I know I can drop the standby redo log before and re-create it afterwards. Is there any simple/fast way to avoid losing the standby logfile, or to recreate the lost one?

    I understand that there are a number of ways to work around this - keeping a copy of the standby log file before the restore and copying the missing one back, etc., etc.

    And yes, I could always run without real-time apply ("using archived logfile"), but that is also not an acceptable protection mode for production.

    I just want proof that the design (which is shown in a few Oracle documents - Doc ID 602299.1 is one of them) of taking backups on the standby actually works and can be used to restore both sites, without spending extra time re-taking backups or putting load on the primary database to build the standby.

    Your ideas are very much appreciated.

    Thank you!

    Hello

    1 --> When we take a backup via RMAN, RMAN does not back up redo log (ORL or SRL) files, so we cannot expect ORLs or SRLs to be restored.

    2 --> When we open the database, the ORLs are cleared and re-created.

    3 --> Likewise, the SRLs should not be an issue; we should be able to drop and re-create them.

    DR sys@cdb01 SQL> select thread#, sequence#, group#, status from v$standby_log;

       THREAD#  SEQUENCE#     GROUP# STATUS
    ---------- ---------- ---------- ----------
             1        233          4 ACTIVE
             1        238          5 ACTIVE

    DR sys@cdb01 SQL> select * from v$logfile;

        GROUP# STATUS  TYPE    MEMBER                          IS_ CON_ID
    ---------- ------- ------- ------------------------------- --- ------
             3         ONLINE  /u03/cdb01/cdb01/redo03.log     NO       0
             2         ONLINE  /u03/cdb01/cdb01/redo02.log     NO       0
             1         ONLINE  /u03/cdb01/cdb01/redo01.log     NO       0
             4         STANDBY /u03/cdb01/cdb01/stdredo01.log  NO       0
             5         STANDBY /u03/cdb01/cdb01/stdredo02.log  NO       0

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo01.log

    ls: cannot access /u03/cdb01/cdb01/stdredo01.log: No such file or directory

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo02.log

    -rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:32 /u03/cdb01/cdb01/stdredo02.log

    DR sys@cdb01 SQL> alter database clear logfile group 4;

    alter database clear logfile group 4

    *

    ERROR at line 1:

    ORA-01156: recovery or flashback in progress may need access to files

    DR sys@cdb01 SQL > alter database recover managed standby database cancel;

    Database altered.

    DR sys@cdb01 SQL> alter database clear logfile group 4;

    Database altered.

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo01.log

    -rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:33 /u03/cdb01/cdb01/stdredo01.log

    DR sys@cdb01 SQL >

    So if you do this, you can re-create the standby redo log without having to re-create the controlfile...

    If you still think this is not acceptable, you should open an SR with Support to analyze why it does not allow dropping the SRL while the controlfile_type is "CURRENT".

    Thank you

  • Do I have to create new groups for standby redo log files?

    I have 10 redo log groups with 2 members each for my primary database. Do I need to create new groups of standby redo log files for the standby database?

    Group #    Members
    ==================
          1          2
          2          2
          3          2
          4          2
          5          2
          6          2
          7          2
          8          2
          9          2
         10          2

    If so, is the following statement correct, or not?

    ALTER DATABASE ADD STANDBY LOGFILE GROUP 1 ('D:\Databases\epprod\StandbyRedoLog\REDO01.LOG', 'D:\Databases\epprod\StandbyRedoLog\REDO01_1.log');

    Please correct me if I am doing it wrong,

    because when I run the statement I get an error message saying the group is already created.

    Thanks, John

    I just found the answer

    Yes, it is recommended to add new groups; for instance, if I have 10 groups numbered 1 to 10, then the standby groups should be numbered 11 to 20.

    Thanks, I found the answer.
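
    As an illustration only (hypothetical file names; the 50M size is an assumption - match your online redo log size), adding standby groups numbered above the existing ones would look like:

    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
      ('D:\Databases\epprod\StandbyRedoLog\SRL11_A.LOG',
       'D:\Databases\epprod\StandbyRedoLog\SRL11_B.LOG') SIZE 50M;
    -- repeat for groups 12 to 20; the usual guideline is one more standby
    -- group per thread than the number of online log groups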

  • What is the purpose of standby redo log files?

    Hello

    What is the purpose of standby redo log files in DR?

    What happens if the standby redo log files are created, and what if they are not created?

    Please explain

    Thank you

    Re: what is the difference between onlinelog and standbylog

    I explained the purpose of standby redo log files in DR in the thread above.

    Regards
    Girish Sharma

  • WAITING FOR DICTIONARY REDO: FILE

    Hi all

    I am getting the "WAITING FOR DICTIONARY REDO: FILE <name>" status from

    SELECT STATUS FROM V$STREAMS_CAPTURE;

    I tried to troubleshoot it and found that some redo log file is missing.
    Could someone please let me know what could be the cause of the missing redo log file?
    If the cause is that some redo log file was not registered, then how would I register it?
    I have already queried the data dictionary views DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE, and I get a few rows.

    Thank you all.

    Regards
    Nick

    It should be reflected in the alert log.

    You can use the following query as well:

    SQL> select name from v$archived_log where <required SCN> between first_change# and next_change#;
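
    A hedged sketch of putting the two together (REQUIRED_CHECKPOINT_SCN does exist in DBA_CAPTURE; the bind-style placeholder for the SCN is mine):

    select capture_name, required_checkpoint_scn from dba_capture;

    select name, first_change#, next_change#
    from   v$archived_log
    where  :required_checkpoint_scn between first_change# and next_change#;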

  • Worx mobile app for iOS - Acrobat Reader MDX file

    Hi all

    In our firm we use Citrix XenMobile. Adobe already provides support for Android devices, but since we mainly use iOS devices, we would be very happy about an Acrobat Reader MDX file for iOS. Are there any plans for this?

    Hello

    As far as I know, the Acrobat Reader mobile products (for Android, iOS, Windows Phone) do not include any special support for MDX files.  Please note that each mobile operating system has different limitations and restrictions.

    Could you submit a feature request form?

    Adobe - feature request/Bug Report Form

    The product management team will take your request into consideration for a future release, based on user feedback and usage data.

    We are not able to speculate in the Adobe user forums about any future plans for Adobe products and online services. Sorry for the inconvenience.

  • How long do you normally wait for files to be reviewed?

    Hi, I'm new here and have a couple of files that have been waiting a week for review.

    So I was wondering how long it normally takes.

    It says: "Thank you for your submission, your files will be reviewed by the moderation team in the coming days."

    Thank you

    Doug

    Hello Doug,

    I found that your account was not validated when it was created, as an important step was missed. I have fixed that for you, so your content should be placed at the top of the queue and will be reviewed shortly.

    Kind regards

    Mat Hayward

  • Wait event "SQL*Net message from client" - total time waited 178577 units

    Hello

    I am watching the wait events of a long-running query in TOAD.
    I start the query in one TOAD instance, and open a session browser connected from another instance.
    But I am surprised to see that in the "Total waited" column on the right,
    "SQL*Net message from client" has the longest time and is already at 178,577 units, even though I have only just started the query.

    Meanwhile, the current wait correctly shows "db file scattered read" for a few seconds.

    Please suggest.

    user8941550 wrote:
    Hello. No explanation for this... :-(

    Hello

    The links do work here, you know?
    I think Tom Kyte explains it well enough: this wait event means your database session is waiting for the client to tell it to do something.

    So it is not related to the database, but to your application.
    Also, as it is a session-level wait event, your session may simply have been idle for some time (doing nothing).

    If you want to check the wait events properly, I suggest using tkprof and starting a new session in SQL*Plus, as shown by Tom Kyte in the link I posted.

    Then run your query in SQL*Plus with tracing enabled, and exit as soon as your statement completes.
    That is:

    -- mytest.sql
    alter session set events '10046 trace name context forever, level 12';
    SELECT ... -- your query here
    exit
    

    Run in sqlplus in this way:

    sqlplus user/password@db @mytest.sql
    

    Then check with tkprof.
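
    For completeness, a hedged example of the tkprof step (the trace file name and path are hypothetical; pick up the real file from the session's trace directory):

    tkprof /u01/app/oracle/diag/rdbms/db/db/trace/db_ora_12345.trc mytest_report.txt sys=no waits=yes sort=exeela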

    Kind regards.
    Al

  • Waiting for CPU: what exactly does that mean?

    Hi all

    I have been working on this database (11.2.0.3 on AIX 6.1) trying to improve the performance of certain batch jobs of an ERP system developed by my company. These are all processes that work very well in other environments, but here the wall-clock times are horrible for the load, and I am seeing 10% CPU, 90% wait for CPU for almost every process. Below is the last section of a very long trace file where you can see the invisible "wait".

    I know one way to cause this: if I run something in the OS with a higher priority than Oracle, this is what I would get. "CPU starvation", I believe, is the name for it. My question is: what else can cause this? If the OS says nothing else was running on the machine, how can I investigate the root cause?
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse   762529      3.24      16.60          0          3          0           0
    Execute 8334641   5593.35   51349.14     344238    2115862   12349440     1341634
    Fetch   7048666   1142.66    5978.90     385152   58108531       2068     7944263
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total   16145836   6739.25   57344.65     729390   60224396   12351508     9285897
    
    Misses in library cache during parse: 734
    Misses in library cache during execute: 731
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    620864        0.62       4858.21
      Disk file operations I/O                       42        0.00          0.00
      latch: shared pool                              8        0.04          0.06
      asynch descriptor resize                        6        0.00          0.00
      direct path write temp                          2        0.01          0.01
      direct path read temp                          94        0.03          0.57
      db file scattered read                       2129        0.44         23.90
      log file switch completion                     15        0.12          0.98
      latch: cache buffers lru chain                  2        0.00          0.00
      resmgr:cpu quantum                             15        0.00          0.03
      latch: object queue header operation            2        0.00          0.00
    
     8180  user  SQL statements in session.
      609  internal SQL statements in session.
     8789  SQL statements in session.
      288  statements EXPLAINed in this session.
    ********************************************************************************
    Trace file: dbkpv_ora_811258_ARREC_BXA_AUTOMATICA.trc
    Trace file compatibility: 11.1.0.7
    Sort options: prsela  fchela  exeela  
           1  session in tracefile.
        8180  user  SQL statements in trace file.
         609  internal SQL statements in trace file.
        8789  SQL statements in trace file.
         754  unique SQL statements in trace file.
         288  SQL statements EXPLAINed using schema:
               KIPREV.prof$plan_table
                 Default table was used.
                 Table was created.
                 Table was dropped.
     25357662  lines in trace file.
       57345  elapsed seconds in trace file.

    marcusrangel wrote:
    don't you think it is possible that the tracing itself would cause such a huge overhead?

    Your trace file contains 25 million lines; that is a significant number of write operations, to be compared with the 620K single-block reads that consumed 4,858 seconds.
    It should be noted that writing to the trace file inflates the measured time only when the write happens inside the elapsed-time measurement of a database call. But even when a write happens outside the timing of one database call (for example, writing the information about a call, such as a FETCH # line), it can still fall inside some parent call and will be included in the elapsed time of that parent.
    If you are interested in the details of this, I am preparing an article on the topic for my blog.

    I would therefore recommend reproducing a similar load without tracing.

    Edited by: Alexander Anokhin on 12.07.2012 22:07

  • Log writer waits & buffer busy waits

    Hello

    Version 10.2.0.3

    I have a problem with a database showing high buffer busy waits and log file sync waits in the top 5 events of the AWR report. I noticed that the redo logs are stored on a SAN partition. Not sure why they did that. Should online redo logs be stored on a SAN partition? Please advise.

    Thank you
    PAVN

    In your first message you mention buffer busy waits and log file sync waits.
    Now you show two servers, a huge amount of I/O, nothing about buffer busy waits, and log file sync waits a little way down the list.

    Your I/O subsystem is completely overloaded on server B - and it is probably because it is overloaded that remote updates can hit the lock wait timeout (60 seconds by default, I believe) on server A.

    Take care when doing a lot of work with LOBs. In your case it seems you have them declared NOCACHE LOGGING at both ends - which helps explain the direct path reads and writes.

    A few oddities - there are no direct path writes visible on A - which suggests it is handling the direct path writes with asynchronous methods and therefore hiding their impact. The other server, B, shows a lot of direct path writes (overloaded, perhaps because it is not async) AND direct path reads - why are you re-reading LOBs on B (possibly something about the way your code writes them)?

    One strategy to ease the burden - make the LOBs CACHE (but assign them to a reasonably sized RECYCLE cache, or to a cache for a non-standard block size). If you can keep them cached for a few seconds you will not have to re-read them when you migrate them - reducing your total I/O.
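
    As an illustration only (hypothetical table and column names; test the effect on your buffer cache first), switching a LOB to CACHE looks like this:

    ALTER TABLE my_table MODIFY LOB (my_lob_col) (CACHE);
    -- CACHE READS is a middle ground that caches the LOB for read operations only:
    -- ALTER TABLE my_table MODIFY LOB (my_lob_col) (CACHE READS);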

    HW enqueue - this is a bit of a classic with LOBs (it is space allocation as the segment grows - movement of the high-water mark). There are a few bugs with LOBs and ASSM tablespaces that could be behind this - check Metalink for your version of Oracle. Log file sync - at the moment there is not much you can do about it if the writes are large and go to a heavily hammered system.

    Server B - too much I/O: check where it is going - maybe it is the LOB-handling code, but look for other sources to explain the db file sequential reads and "read by other session" waits - maybe you have too many db file scattered reads happening as well.

    Check "SQL ordered by Reads" and "Segments by Physical Reads" for clues. There are probably a few heavy hitters.

    For the transfer between systems (although it will probably not help much) you could look at configuring the SQL*Net components (SDU/TDU, TCP/IP jumbo frames, tx and rx buffers) that take care of pumping LOB-sized data through the link.

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "All experts it is a equal and opposite expert."
    Clarke

  • standby redo log deleted

    Oracle 11gR2 (11.2.0.3), Linux x86_64

    A silly mistake was made. While re-creating standby redo logfiles, I managed to remove (rm command) the current standby redo log file (before apply was finished with it) at the OS level. Now I see this in the alert log:

    ORA-00313: open failed for members of log group 7 of thread 1
    ORA-00312: online log 7 thread 1: ..../standby_redo7.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    

    What are my options at this point? Have I lost any transactions? The apply seems to be moving along fine. Where do I stand, people?

    Thanks to you all.

    Hello

    1. ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    2. ALTER DATABASE CLEAR LOGFILE GROUP 7;

    3. ALTER DATABASE DROP STANDBY LOGFILE GROUP 7;

    4. ALTER DATABASE ADD STANDBY LOGFILE GROUP 7;  or ALTER DATABASE ADD STANDBY LOGFILE MEMBER '' TO GROUP 7;

    5. Restart apply: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

    HTH

    Tobi


    Having had my live account running successfully for more than a year on the hub as all of a sudden, it won't work. Can not see that I changed anything, apart from on my laptop where I added the account to outlook 2010. Any ideas what server and detai