db file sequential reads on full table scan and LRU (new)

I would like to ask a question on this subject:

Full table scan and LRU

According to MOS document 1457693.1:

«... the scattered reads of a tablescan against blocks that are already cached can be broken up into a number of smaller multiblock reads and standalone single-block reads. »

The question is whether the db file sequential reads issued by a FULL TABLE SCAN operation will be cached at the LRU or the MRU end of the list.

I'm afraid this heats up/floods the buffer cache with a lot of db file sequential reads driven by FULL SCANs.

Assume that, for whatever reason, serial direct path reads are not applicable, so using that new 11g feature is out of scope for this question.

Thank you for your interest,

Rainer Stenzel

There are a few different patterns of behaviour depending on the size of the table (relative to the size of the buffer cache), but the key question is probably "will the reads increment the touch count" - because if they don't, the blocks will fall off the LRU list fairly quickly; if they do, then the blocks could (after a few tablescans) eventually be promoted into the hot half of the cache.

I did some quick tests (which need a little care in the setup) which suggest the touch count was not incremented and therefore had no effect on whether the blocks would get preferential treatment when they reached the end of the LRU.

I'm a little puzzled by your expression "cached on LRU or MRU list" - they are not two different lists; people talk about "the MRU end of the LRU list".
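For anyone who wants to repeat that touch count check, here is a minimal sketch of the sort of query involved (run as SYS; the owner and table names are hypothetical placeholders, and x$bh exposes the touch count in its tch column):

    -- buffers currently holding blocks of the test table, grouped by touch count
    SELECT tch, COUNT(*)
      FROM x$bh
     WHERE obj = (SELECT data_object_id
                    FROM dba_objects
                   WHERE owner = 'TEST_USER'          -- hypothetical owner
                     AND object_name = 'T1')          -- hypothetical table
     GROUP BY tch
     ORDER BY tch;

Run it before and after a few tablescans; if tch stays at 0 or 1 for the table's buffers, the scans are not incrementing the touch count.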

Regards

Jonathan Lewis

Tags: Database

Similar Questions

  • db file sequential read and direct path read

    Hello

    Could someone please clear my doubts about 'db file sequential read' and 'direct path read', and help me understand the tkprof report correctly?
    Please suggest whether my understanding of the scenario below is correct.

    We have an 11.2.0.1 '2-node RAC cluster + ASM' production environment and its test environment, which is a stand-alone database.
    The query performs poorly in production compared to the test database.
    The table has 254+ columns (264) with many LOB columns; however, no LOB column is selected in the query.
    I read in Metalink that a table with more than 254 columns has intra-row chaining, which causes "db file sequential read" waits during a full table scan.

    Here are some details of the table, which is similar in prod and test; the block size is 8K:
    TABLE                             UNUSED BLOCKS     TOTAL BLOCKS  HIGH WATER MARK
    ------------------------------  ---------------  ---------------  ---------------
    PROBSUMMARYM1                               0          17408          17407
    What I understand from the tkprof in the production environment for a given session is:
    1 - the query resulted in 19378 disk reads and 145164 consistent reads.
    2 - of the 19378 disk reads, 2425 disk reads gave rise to the wait event 'db file sequential read'.
    Is it correct to say that the remaining disk reads were also "db file sequential reads", but so quick that no wait event was tied to them?
    3 - there were also 183 'direct path read' waits. Is that because of the order by clause of the query?

    SQL ID: 72tvt5h4402c9
    Plan Hash: 1127048874
    select "NUMBER" num 
    from
     smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' 
      order by num asc
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.53       4.88      19378     145164          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        3      0.53       4.88      19378     145164          0           0
    
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  SORT ORDER BY (cr=145164 pr=19378 pw=0 time=0 us cost=4411 size=24 card=2)
          0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=145164 pr=19378 pw=0 time=0 us cost=4410 size=24 card=2)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      ges message buffer allocation                   3        0.00          0.00
      enq: KO - fast object checkpoint                2        0.00          0.00
      reliable message                                1        0.00          0.00
      KJC: Wait for msg sends to complete             1        0.00          0.00
      Disk file operations I/O                        1        0.00          0.00
      kfk: async disk IO                            274        0.00          0.00
      direct path read                              183        0.01          0.72
      db file sequential read                      2425        0.05          3.71
      SQL*Net message from client                     1        2.45          2.45
    The same query, when run in the non-RAC, non-ASM stand-alone test database, gave the tkprof below.
    Does this mean that:
    1 - here too, reads happened through "db file sequential read", but they were so fast that they did not register as wait events?
    2 - the "direct path read" waits are because of the order by clause in the query?
    SQL ID: 72tvt5h4402c9
    Plan Hash: 1127048874
    select "NUMBER" num 
    from
     smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' 
      order by num asc
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.06          0          0          0           0
    Fetch        1      0.10       0.11      17154      17298          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        3      0.10       0.18      17154      17298          0           0
    
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  SORT ORDER BY (cr=17298 pr=17154 pw=0 time=0 us cost=4694 size=12 card=1)
          0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=17298 pr=17154 pw=0 time=0 us cost=4693 size=12 card=1)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      Disk file operations I/O                        1        0.00          0.00
      db file sequential read                         3        0.00          0.00
      direct path read                              149        0.00          0.03
      SQL*Net message from client                     1        2.29          2.29
    In the trace files for both the Production and Test databases, I see that the 'direct path read' waits are against the same data file that stores the table.
    So how can these 'direct path read' waits be due to the order by clause of the query, when the sort should have happened in the sort area / PGA?
    Or does a direct path read pull data from disk straight into the PGA, while a "db file sequential read" does not?
    My understanding was that 'direct path read' is the wait event seen when data is read from disk directly into the PGA, or when a sort segment or temp tablespace is used.

    Here is a sample of the trace file from the Production database:
    *** 2013-01-04 13:49:15.109
    WAIT #1: nam='SQL*Net message from client' ela= 11258483 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555109496
    CLOSE #1:c=0,e=9,dep=0,type=1,tim=1357278555109622
    =====================
    PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278555109766 hv=138414473 ad='cfc02ab8' sqlid='72tvt5h4402c9'
    select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
    END OF STMT
    PARSE #1:c=0,e=98,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109765
    EXEC #1:c=0,e=135,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109994
    WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555110053
    WAIT #1: nam='ges message buffer allocation' ela= 3 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555111630
    WAIT #1: nam='enq: KO - fast object checkpoint' ela= 370 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555112098
    WAIT #1: nam='reliable message' ela= 1509 channel context=3691837552 channel handle=3724365720 broadcast message=3692890960 obj#=-1 tim=1357278555113975
    WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114051
    WAIT #1: nam='enq: KO - fast object checkpoint' ela= 364 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555114464
    WAIT #1: nam='KJC: Wait for msg sends to complete' ela= 9 msg=3686348728 dest|rcvr=65536 mtype=8 obj#=-1 tim=1357278555114516
    WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114680
    WAIT #1: nam='Disk file operations I/O' ela= 562 FileOperation=2 fileno=6 filetype=2 obj#=85520 tim=1357278555115710
    WAIT #1: nam='kfk: async disk IO' ela= 5 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555117332
    
    *** 2013-01-04 13:49:15.123
    WAIT #1: nam='direct path read' ela= 6243 file number=6 first dba=11051 block cnt=5 obj#=85520 tim=1357278555123628
    WAIT #1: nam='db file sequential read' ela= 195 file#=6 block#=156863 blocks=1 obj#=85520 tim=1357278555123968
    WAIT #1: nam='db file sequential read' ela= 149 file#=6 block#=156804 blocks=1 obj#=85520 tim=1357278555124216
    WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555124430
    WAIT #1: nam='db file sequential read' ela= 4826 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555129317
    WAIT #1: nam='db file sequential read' ela= 987 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555130427
    WAIT #1: nam='db file sequential read' ela= 3891 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555134394
    WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156912 blocks=1 obj#=85520 tim=1357278555134645
    WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156920 blocks=1 obj#=85520 tim=1357278555134866
    WAIT #1: nam='db file sequential read' ela= 234 file#=6 block#=156898 blocks=1 obj#=85520 tim=1357278555135332
    WAIT #1: nam='db file sequential read' ela= 204 file#=6 block#=156907 blocks=1 obj#=85520 tim=1357278555135666
    WAIT #1: nam='kfk: async disk IO' ela= 4 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555135850
    WAIT #1: nam='direct path read' ela= 6894 file number=6 first dba=72073 block cnt=15 obj#=85520 tim=1357278555142774
    WAIT #1: nam='db file sequential read' ela= 4642 file#=6 block#=156840 blocks=1 obj#=85520 tim=1357278555147574
    WAIT #1: nam='db file sequential read' ela= 162 file#=6 block#=156853 blocks=1 obj#=85520 tim=1357278555147859
    WAIT #1: nam='db file sequential read' ela= 6469 file#=6 block#=156806 blocks=1 obj#=85520 tim=1357278555154407
    WAIT #1: nam='db file sequential read' ela= 182 file#=6 block#=156826 blocks=1 obj#=85520 tim=1357278555154660
    WAIT #1: nam='db file sequential read' ela= 147 file#=6 block#=156830 blocks=1 obj#=85520 tim=1357278555154873
    WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156878 blocks=1 obj#=85520 tim=135727855515
    Here is the trace file from the test database:
    *** 2013-01-04 13:46:11.354
    WAIT #1: nam='SQL*Net message from client' ela= 10384792 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371354075
    CLOSE #1:c=0,e=3,dep=0,type=3,tim=1357278371354152
    =====================
    PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278371363427 hv=138414473 ad='c7bd8d00' sqlid='72tvt5h4402c9'
    select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
    END OF STMT
    PARSE #1:c=0,e=9251,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371363426
    EXEC #1:c=0,e=63178,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371426691
    WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371426766
    WAIT #1: nam='Disk file operations I/O' ela= 1133 FileOperation=2 fileno=55 filetype=2 obj#=93574 tim=1357278371428069
    WAIT #1: nam='db file sequential read' ela= 51 file#=55 block#=460234 blocks=1 obj#=93574 tim=1357278371428158
    WAIT #1: nam='direct path read' ela= 31 file number=55 first dba=460235 block cnt=5 obj#=93574 tim=1357278371428956
    WAIT #1: nam='direct path read' ela= 47 file number=55 first dba=136288 block cnt=8 obj#=93574 tim=1357278371429099
    WAIT #1: nam='direct path read' ela= 80 file number=55 first dba=136297 block cnt=15 obj#=93574 tim=1357278371438529
    WAIT #1: nam='direct path read' ela= 62 file number=55 first dba=136849 block cnt=15 obj#=93574 tim=1357278371438653
    WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=136881 block cnt=7 obj#=93574 tim=1357278371438750
    WAIT #1: nam='direct path read' ela= 35 file number=55 first dba=136896 block cnt=8 obj#=93574 tim=1357278371438855
    WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=136913 block cnt=7 obj#=93574 tim=1357278371438936
    WAIT #1: nam='direct path read' ela= 19 file number=55 first dba=137120 block cnt=8 obj#=93574 tim=1357278371439029
    WAIT #1: nam='direct path read' ela= 36 file number=55 first dba=137145 block cnt=7 obj#=93574 tim=1357278371439114
    WAIT #1: nam='direct path read' ela= 18 file number=55 first dba=137192 block cnt=8 obj#=93574 tim=1357278371439193
    WAIT #1: nam='direct path read' ela= 16 file number=55 first dba=137201 block cnt=7 obj#=93574 tim=1357278371439252
    WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=137600 block cnt=8 obj#=93574 tim=1357278371439313
    WAIT #1: nam='direct path read' ela= 15 file number=55 first dba=137625 block cnt=7 obj#=93574 tim=1357278371439369
    WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=137640 block cnt=8 obj#=93574 tim=1357278371439435
    WAIT #1: nam='direct path read' ela= 702 file number=55 first dba=801026 block cnt=126 obj#=93574 tim=1357278371440188
    WAIT #1: nam='direct path read' ela= 1511 file number=55 first dba=801154 block cnt=126 obj#=93574 tim=1357278371441763
    WAIT #1: nam='direct path read' ela= 263 file number=55 first dba=801282 block cnt=126 obj#=93574 tim=1357278371442547
    WAIT #1: nam='direct path read' ela= 259 file number=55 first dba=801410 block cnt=126 obj#=93574 tim=1357278371443315
    WAIT #1: nam='direct path read' ela= 294 file number=55 first dba=801538 block cnt=126 obj#=93574 tim=1357278371444099
    WAIT #1: nam='direct path read' ela= 247 file number=55 first dba=801666 block cnt=126 obj#=93574 tim=1357278371444843
    WAIT #1: nam='direct path read' ela= 266 file number=55 first dba=801794 block cnt=126 obj#=93574 tim=1357278371445619
    Thanks & Rgds,
    Vijay

    911786 wrote:

    Direct path reads can be done for serial tablescans in your version of Oracle, but if you have chained rows in the table then Oracle can read the start of a row by direct path read, yet has to do a single block read into the cache (the db file sequential read) to get the next piece of the row.

    It is possible that your production system has a lot of chained rows while your test system does not. As a corroborating (though not conclusive) indicator of this, you might notice that if you take (disk reads - db file sequential reads) - which should get you close to the total number of blocks read by direct path - the numbers are very similar.

    I'm not 100% convinced that this is the answer for the difference in behaviour, but it's worth a look. If you can force an indexed access path into the table, do something like "select /*+ index(tab {pk}) */ max(last_column_in_table) from tab" and check whether the number of "table fetch continued row" is close to your number of db file sequential reads. (There are other options for counting the chained rows that could be faster.)
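    A minimal sketch of that check, using hypothetical names (t1 for the table, t1_pk for its primary key index, last_column_in_table as in the suggestion above) and reading the session statistic afterwards:

    -- force an index-driven visit of every row
    SELECT /*+ index(t t1_pk) */ MAX(last_column_in_table) FROM t1 t;

    -- how many continued-row fetches has this session done?
    SELECT sn.name, ms.value
      FROM v$mystat ms, v$statname sn
     WHERE sn.statistic# = ms.statistic#
       AND sn.name = 'table fetch continued row';

    If that count is close to the number of db file sequential reads seen in the tablescan, chained rows are a likely explanation.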

    Regards
    Jonathan Lewis

  • "the db file sequential read" waiting for event slow down an application.

    "the db file sequential read" waiting for event slow down an application.

    It is a rather strange problem. There is an update statement that hangs on the wait event 'db file sequential read', and once you restart the database the query works fine again. It happens once a week, usually on Monday, or after several days of heavy workload.

    I checked the CPU and it is fine, and memory is very good, although the SGA and PGA have reached their maximum sizes. Disk throughput seems to be OK, since every other session on the database looks fine.

    I guess there is some missing configuration that would avoid having to restart the database each week.

    Any help is greatly appreciated.

    Hello

    If you want the same join order of the tables as the explain plan shows after a restart, just go with the ORDERED hint:

    UPDATE item_work_step
    SET user_name = :b1,
    terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'),
    status_cd = 'IN PROCESS'
    WHERE item_work_step_route_id =
    (SELECT item_work_step_route_id
    FROM (SELECT /*+ORDERED */ iws.item_work_step_route_id
    FROM user_role ur,
    work_step_role wsr,
    work_step ws,
    app_step aps,
    item_work_step iws,
    item_work iw,
    item i
    WHERE wsr.role_cd = ur.role_cd
    AND ws.work_step_id = wsr.work_step_id
    AND aps.step_cd = ws.step_cd
    AND iws.work_step_id = ws.work_step_id
    AND iws.work_id = ws.work_id
    AND iws.step_cd = ws.step_cd
    AND iws.status_cd = 'READY'
    AND iw.item_work_id = iws.item_work_id
    AND iw.item_id = iws.item_id
    AND iw.work_id = iws.work_id
    AND i.item_id = iws.item_id
    AND i.item_id = iw.item_id
    AND i.deleted = 'N'
    AND i.item_type_master_cd = :b3
    AND ur.user_name = :b1
    AND aps.app_name = :b2
    AND ( iws.assignment_user_or_role IS NULL
    OR ( iws.assignment_user_or_role IN (
    SELECT ur.role_cd
    FROM user_role ur
    WHERE ur.user_name = :b1
    UNION ALL
    SELECT :b1
    FROM dual)
    AND iws.assignment_expiration_time > SYSDATE
    )
    OR ( iws.assignment_user_or_role IS NOT NULL
    AND iws.assignment_expiration_time <= SYSDATE
    )
    )
    AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE
    )
    ORDER BY aps.priority,
    LEAST (NVL (iw.priority, 9999),
    NVL ((SELECT NVL (priority, 9999)
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42),
    9999
    )
    ),
    DECODE (i.a3, NULL, 0, 1),
    NVL (iw.sla_deadline,
    (SELECT sla_deadline
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42)
    ),
    i.parent_id,
    i.item_id) unclaimed_item_work_step
    WHERE ROWNUM <= 1)
    

    If you want to get rid of the nested loops, use USE_HASH:

    UPDATE item_work_step
    SET user_name = :b1,
    terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'),
    status_cd = 'IN PROCESS'
    WHERE item_work_step_route_id =
    (SELECT item_work_step_route_id
    FROM (SELECT /*+ORDERED USE_HASH(ur wsr ws aps iws iw i) */ iws.item_work_step_route_id
    FROM user_role ur,
    work_step_role wsr,
    work_step ws,
    app_step aps,
    item_work_step iws,
    item_work iw,
    item i
    WHERE wsr.role_cd = ur.role_cd
    AND ws.work_step_id = wsr.work_step_id
    AND aps.step_cd = ws.step_cd
    AND iws.work_step_id = ws.work_step_id
    AND iws.work_id = ws.work_id
    AND iws.step_cd = ws.step_cd
    AND iws.status_cd = 'READY'
    AND iw.item_work_id = iws.item_work_id
    AND iw.item_id = iws.item_id
    AND iw.work_id = iws.work_id
    AND i.item_id = iws.item_id
    AND i.item_id = iw.item_id
    AND i.deleted = 'N'
    AND i.item_type_master_cd = :b3
    AND ur.user_name = :b1
    AND aps.app_name = :b2
    AND ( iws.assignment_user_or_role IS NULL
    OR ( iws.assignment_user_or_role IN (
    SELECT ur.role_cd
    FROM user_role ur
    WHERE ur.user_name = :b1
    UNION ALL
    SELECT :b1
    FROM dual)
    AND iws.assignment_expiration_time > SYSDATE
    )
    OR ( iws.assignment_user_or_role IS NOT NULL
    AND iws.assignment_expiration_time <= SYSDATE
    )
    )
    AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE
    )
    ORDER BY aps.priority,
    LEAST (NVL (iw.priority, 9999),
    NVL ((SELECT NVL (priority, 9999)
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42),
    9999
    )
    ),
    DECODE (i.a3, NULL, 0, 1),
    NVL (iw.sla_deadline,
    (SELECT sla_deadline
    FROM item_work
    WHERE item_id = i.parent_id
    AND work_id = 42)
    ),
    i.parent_id,
    i.item_id) unclaimed_item_work_step
    WHERE ROWNUM <= 1)
    

    and for small tables you can try adding, for example, FULL(ur) FULL(wsr).

    It could be rewritten in a different way, but this is the fastest way to try out how the query would behave if you rewrote it. Check the explain plan (see the sketch below): if some of the tables in the forced order are joined without a join condition you can get a Cartesian join, but it looks like it should be OK.
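    One way to do that check, sketched here with a deliberately cut-down stand-in statement (in practice you would explain the full hinted update above):

    EXPLAIN PLAN FOR
    SELECT /*+ ORDERED USE_HASH(ur wsr) */ COUNT(*)
      FROM user_role ur, work_step_role wsr
     WHERE wsr.role_cd = ur.role_cd;

    -- look for MERGE JOIN CARTESIAN steps in the output
    SELECT * FROM TABLE(dbms_xplan.display);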

    Watch the query's progress in the EM console.

    Regards

  • Reason for 'control file sequential read' wait?

    Hello

    We have a 10.2.0.4.0 2-node RAC database on Windows 2003 (all 64-bit).

    Looking at the 'Top 5 Timed Events' section of our AWR reports (for a 1-hour window), we always see 'CPU time' as the number one event (due to our application; certain queries are hopefully being looked at by the developers now...), but recently I have been seeing 'control file sequential read' as the number two event, with 3,574,633 waits and 831 seconds of wait time. I was hoping to find out what was causing this high number of waits. I started by trying to find a particular query that experienced this wait often, so I ran this SQL:
    select sql_id, count(*)
    from dba_hist_active_sess_history
    where event_id = (select event_id from v$event_name where name = 'control file sequential read')
    group by sql_id
    order by 2 desc ;
    As I hoped, the top sql_id returned really stands out, with a count of 14,182 (the next sql_id has a count of 68). This is the SQL text for that id:
    WITH unit AS( 
              SELECT UNIQUE S.unit_id
              FROM STOCK S, HOLDER H
              WHERE  H.H_ID = S.H_ID 
                AND  H.LOC_ID  = :B2 
                AND  S.PROD_ID   = :B1 
                ) 
    SELECT  DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL) 
     FROM   unit
    WHERE   ROWNUM = 1
    ;
    (OK, the code is a little strange, but I have already asked them to change it.)

    My question is:

    Why / what in this code would need to read from the control file?


    Kind regards

    ADOS

    PS - I also checked p2 (the block number) in dba_hist_active_sess_history for this sql_id and event_id, and it is always one of the first 5 blocks of the controlfile. I've dumped the controlfile, but don't see anything interesting (although, to be fair, this is the first time I've dumped a controlfile, so I really have no idea what to look for!).
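    For reference, a minimal sketch of one common way to dump the controlfile contents to a trace file (run with SYSDBA privileges; level 9 is just a commonly used level):

    -- writes the control file contents into the session's trace file
    ALTER SESSION SET EVENTS 'immediate trace name controlf level 9';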

    Hello

    ADO wrote:

    WITH unit AS(
              SELECT UNIQUE S.unit_id
              FROM STOCK S, HOLDER H
              WHERE  H.H_ID = S.H_ID
                AND  H.LOC_ID  = :B2
                AND  S.PROD_ID   = :B1
    )
    SELECT  DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL)
    FROM   unit
    WHERE   ROWNUM = 1
    ;
    

    This query contains a subquery factoring clause, and as it refers to the unit subquery twice in the main part, chances are the subquery factoring result is materialized into a global temporary table in the SYS schema with a SYS_TEMP_% name and is accessed twice later in the execution plan (check for the presence of a TEMP TABLE TRANSFORMATION step). The step that fills this intermediate table has to write to the temporary tablespace - and that is done via direct writes. Each direct write to the segment (i.e. directly into the file on disk) requires a data file status check - whether it is online or offline - which is done by accessing the control file. That is why you see these waits. This is one of many reasons why subquery factoring is not good for production OLTP environments.
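    A minimal sketch of how to confirm that, reusing the query from the question (the sql_id is a placeholder you would look up yourself, and the INLINE hint is undocumented, so treat it as a test only):

    -- pull the actual execution plan and look for TEMP TABLE TRANSFORMATION
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('your_sql_id', NULL, 'BASIC'));

    -- if the materialization shows up, compare with the subquery forced inline
    WITH unit AS (
      SELECT /*+ INLINE */ UNIQUE s.unit_id
        FROM stock s, holder h
       WHERE h.h_id = s.h_id AND h.loc_id = :b2 AND s.prod_id = :b1
    )
    SELECT DECODE((SELECT COUNT(*) FROM unit), 1, unit_id, NULL)
      FROM unit
     WHERE ROWNUM = 1;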

  • Read data from an E$ table and insert into the staging table

    Hi all

    I'm new to ODI. I need your help to understand how to read data from an 'E$' table and insert it into a staging table.

    Scenario:

    Two columns from a flat file, the employee name and the employee id, must be loaded into a datastore EMP+. A check constraint is added so that only rows whose employee names are in capital letters are loaded into the datastore. The check is run as a static control: right-click on the datastore, select Control, then Check. The rows that violated the check constraint are kept in the E$_EMP+ table.

    Problem:

    The problem is that I want to read the data in the E$_EMP+ table, transform the employee name to capital letters, and move the corrected data from E$_EMP+ to EMP+. Please advise me on how to automatically handle these 'soft' exceptions in ODI.

    Thank you

    If I understand correctly, you want to change the columns in the E$ tables and then load them into the target.

    Now, if you look at how ODI recycles errors, there is an incremental update to the target using the E$ table after it has populated the I$ table.

    I think you can do the same thing by creating an interface that uses the E$ table as the source and implements the business logic needed to fill the target, as sketched below.
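    As an illustration only, the kind of statement such an interface would end up generating might look like this (table and column names are simplified, hypothetical versions of the EMP example in the question):

    -- reload rejected rows after fixing the check-constraint violation
    INSERT INTO EMP (emp_id, emp_name)
    SELECT emp_id, UPPER(emp_name)      -- employee names must be upper case
      FROM E$_EMP;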

  • query on dba_free_space ends up waiting on db file sequential read events

    Hi all

    Env: 10gR2 on Windows NT

    I issued the query:
    select tablespace_name, sum(bytes)/1024/1024 from dba_free_space group by tablespace_name;
    and it just keeps waiting.
    I checked the wait event in v$session and it is «db file sequential read».

    I put a trace on the session before launching the above query:
     

    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

    call     count       cpu    elapsed       disk      query    current        rows
    -----
    Parse        1      0.06       0.06          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    -----
    total        2      0.06       0.06          0          0          0           0

    Misses in library cache during parse: 1

    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------
      db file sequential read                     13677        0.16        151.34
      SQL*Net message to client                       1        0.00          0.00
      db file scattered read                        281        0.01          0.53
      latch: cache buffers lru chain                  2        0.00          0.00


    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

    call     count       cpu    elapsed       disk      query    current        rows
    ------
    Parse    13703      0.31       0.32          0          0          0           0
    Execute  14009      0.75       0.83          0          0          0           0
    Fetch    14139      0.48       0.74         26      56091          0       15496
    ------
    total    41851      1.54       1.89         26      56091          0       15496

    Misses in library cache during parse: 16
    Misses in library cache during execute: 16

    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------
      db file sequential read                        26        0.00          0.12

        1  user  SQL statements in session.
    14010  internal SQL statements in session.
    14011  SQL statements in session.
    I took the AWR report (for 1 hour) and the top 5 events came out like this:

    Event                                 Waits    Time (s)  Avg(ms)  % Call Time  Wait Class
    ------
    db file sequential read           1,134,643       7,580      7   56.8   User I/O
    db file scattered read              940,880       5,025      5   37.7   User I/O
    CPU time                                            967           7.2
    control file sequential read          4,987           3      1    0.0 System I/O
    control file parallel write           2,408           1      1    0.0 System I/O
    The PHYRDS (from dba_hist_filestatxs) on my system01.dbf is 161,028,980 at the latest snapshot.

    Could someone shed some light on what is happening here?

    TIA,
    JJ

    In certain circumstances querying the dictionary can be slow, usually due to bad execution plans caused by bad statistics; try gathering statistics with dbms_stats.gather_fixed_objects_stats(); it has worked for me before.
    You can also read Note 414256.1, about poor performance for the Tablespace page in the Grid Control console, which also points to a possible problem with the recycle bin.
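    A minimal sketch of those two suggestions (run as a suitably privileged user, e.g. SYS):

    -- refresh statistics on the fixed (X$) objects the dictionary views sit on
    BEGIN
      dbms_stats.gather_fixed_objects_stats;
    END;
    /

    -- if Note 414256.1 applies, emptying the recycle bin may also help
    -- (this permanently removes all dropped objects it holds)
    PURGE DBA_RECYCLEBIN;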

    HTH

    Enrique

  • Read access to the PeopleSoft tables

    Hello
    On FSCM 9.1, PeopleTools 8.52, Oracle 11gR2 DB.
    I want to create a user that is able to read all the tables of the PeopleSoft owner SYSADM.

    I granted it:
    grant select any table;

    But when that user selects from psoprdefn (for example), Oracle says the table does not exist, because the user would have to issue:
    Select * from sysadm.psoprdefn;

    Other than creating synonyms for all the sysadm tables, is there another possibility (to avoid creating 65000 synonyms)?
    We don't want to issue an alter session statement on each connection.
    Thank you.

    No, I really do mean myuser[sysadm]@mydb.
    You don't have to know the sysadm password; you connect with your own user acting as sysadm. Then your queries run against the sysadm tables as if you were sysadm.
    To learn more:
    http://docs.Oracle.com/CD/E11882_01/network.112/e10744/concepts.htm#DBIMI223
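    A minimal sketch of the proxy setup being described (MYUSER is a hypothetical account name; run the ALTER USER as a DBA):

    ALTER USER sysadm GRANT CONNECT THROUGH myuser;

    -- then connect through the proxy, e.g. in SQL*Plus:
    --   CONNECT myuser[sysadm]/myuser_password@mydb
    -- unqualified table names now resolve in the SYSADM schema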

    Nicolas.

  • When I generate the table of contents from a new .book, I get blank pages between each page of the table of contents

    With FrameMaker 2015: when I generate the table of contents from a new .book, I get blank pages between each page of the table of contents; they are totally blank, without even a text frame. I click Add > Create Standalone TOC and keep the default settings, and that's what I'm left with.

    TOC Frame 2015 small.png

    I can get rid of them via Special > Delete Pages, but I never had to do that before the 2015 version. I tried with older files generated in FrameMaker 10 - same thing. Has anyone else encountered this?

    Master Pages are part of each document.

    You can import master pages from another document (i.e. a 'template') into your current document using the File > Import > Formats... option and selecting Page Layouts in the list (deselect all the others).

    If you have fixed the Left master page problem in the current TOC, then simply save that file and use the Edit > Update Book option so that the TOC is regenerated using the new master pages.

  • How to insert an image from MySQL into the table using PHP and create checkboxes in the table?

    How can I insert an image from MySQL into the table using PHP and create a checkbox for each row as a vote? Here is my code...

    WELCOME

    if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); }
    $sql = "SELECT no, Calon, ID, class, Image FROM candidates";
    $result = $conn->query($sql);
    if ($result->num_rows > 0) {
        echo "<table><tr><th>NO</th><th>Candidate INFO</th><th>Vote</th></tr>";
        while ($row = $result->fetch_assoc()) {   // output data of each row
            echo "<tr><td>" . $row["no"] . "</td><td>" . $row["Calon"] . " - " . $row["ID"] . " - " . $row["class"] . "</td><td></td></tr>";
        }
        echo "</table>";
    } else { echo "0 results"; }
    $conn->close();
    ?>

    Hope someone can help me because I am a newbie at this... I need to finish this project... Thank you.

    If you have saved the file name in the database, it's pretty simple.

    echo '<img src="' . $row['filename'] . '" alt="' . $row['description'] . '">';   // assumes the stored file name is in a column such as 'filename'
    

    It's the same for the checkbox:

    echo '<input type="checkbox" name="vote[]" value="' . $row['id'] . '">';   // assumes a candidate id column identifies the vote
    

    If you have saved the image file in the database, it is much more complicated. I recommend you store only the file name in the database.

  • Who holds the keys for the AES encryption mentioned in the table under "iCloud security and features"?

    Who holds the keys for the AES encryption mentioned in the table under "iCloud security and features"?

    Article

    iCloud security and privacy overview - Apple Support

    has a useful table in the section entitled "iCloud security and features".

    The table shows the types of keys used to secure the different types of data.

    Does Apple hold these keys, such that they could be requested from Apple by third parties?

    Hmmm... You definitely raise a good and valid question that I don't know the answer to, but if I had to guess, I would say no one does.  Would that be possible?  I know I've heard Cook mention that they "don't hold the keys", but is that the same thing he was referring to?  It would make a very interesting topic of discussion.

  • Converting a Word 2011 file to PDF - table of contents and hyperlinks are broken

    When converting a document from Word 2011 (Mac) to PDF using Acrobat Pro, the table of contents and hyperlinks are broken and do not work.  I bought Acrobat Pro 11 to try to resolve this problem.

    I use Mac Version 10.10.1

  • Reading the text of a TextInput field and executing a condition based on the current text

    I want to read the text of a TextInput field, validate it, and run a condition based on the current text (AS3.0). Depending on the text the user enters, Flash should decide which frame to go to, or which web page to open with the URLRequest function, or use gotoAndPlay() for frames on the timeline. Should I use a dynamic text field or a TextInput object? I am stuck on reading the input. ---------------------------------------------------------

    OBJECTS: text input object: textoEntrada

    button for verification: botaolink

    ----------------------------------------------------------

    All the code is in frame 1. Here is the function:

    Code:

    stop();

    function charges(event:MouseEvent):void {
        var textoEntrada:String;
        if (textoEntrada == "sim") {
            var link:URLRequest = new URLRequest("http://google.com");
            navigateToURL(link);
        } else {
            gotoAndPlay(2); // frame 2, the other option in the timeline
        }
    }

    botaolink.addEventListener(MouseEvent.CLICK, charges);

    I COPIED THE CODE FROM THE LINK BELOW AND IT WAS NOT WORKING!

    http://forums.Adobe.com/message/4199479#4199479

  • Difference in table size in Oracle and TimesTen

    Hi all


    I have a large table with 2 million records,

    I see a big difference in the size of the table in Oracle and in TimesTen:
    in Oracle the table size is about 4 GB, but in TimesTen it is around 15 GB (using ttSize for 2M rows).

    Could you please tell me what could be the cause of this difference?
    Is the size of a table in TimesTen always larger than in Oracle?
    What are the factors and parameters affecting the perm size (PermSize) usage?

    It is typical for the storage needs of a given dataset to be significantly larger in TimesTen than in Oracle. This is due to the very different internal storage organization in TT compared to Oracle; Oracle is optimized to save space while TT is optimized for performance.

    Ways to minimize these costs are:

    1. Make sure you use TimesTen 11.2.1. This has some (minor) storage compression features compared to earlier versions.

    2. Assess the use of numeric types; the native TimesTen types (TT_TINYINT, TT_SMALLINT, TT_INTEGER and TT_BIGINT) use less space than NUMBER and are also more efficient for computation (see the sketch just after this list).

    3. Check the use of variable-length data (VARCHAR2, VARBINARY, NVARCHAR) and the trade-offs between inline and out-of-line storage (see the TT documentation for the trade-offs between these storage options).
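    As an illustration of point 2, a minimal sketch using a hypothetical table (not one from the original post):

    -- Oracle-style declaration
    CREATE TABLE order_fact_num (
      order_id NUMBER(10),
      qty      NUMBER(5),
      line_no  NUMBER(3)
    );

    -- same logical table using TimesTen native integer types,
    -- which are cheaper to store and to compute with
    CREATE TABLE order_fact_tt (
      order_id TT_INTEGER,
      qty      TT_SMALLINT,
      line_no  TT_TINYINT
    );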

    Even when you apply the above, you will still see significant storage 'inflation' in TT compared to Oracle.

    Chris

  • problems with the table of contents and index

    I am doing localization for Japan, so everything is fine in Japanese (topics, table of contents and index); however, after compilation, only the TOC and index are in a font that does not support Japanese characters.
    Does anyone have any ideas how to solve this problem?
    Thank you

    Thank you Pete, that's part of the solution.
    In fact, what I discovered on a German RoboHelp forum is to set the machine's locale to the target country, and then the localization works. I had installed the language support pack before my problem appeared, and so I was able to translate the topics. The problem appeared when I tried my first compilation. The topics I had translated were still in Japanese, but the table of contents and the index were the only parts written in a totally unknown script (not Japanese, not English, and not any other language I know).
    Now that I've changed my system locale, it works.
    So I think this should be a first improvement for RoboHelp, right? They boast that they support 35 languages, yet the support isn't quite... finished, huh?
    Furthermore, the error message keeps appearing even after all these changes, but I don't worry too much about that; I get what I need.
    Thanks again Pete for the solution and the quick response.

  • Reading an XML file from a CLOB column in a staging table

    Hello

    I am trying to use the database adapter to query a staging table that has a CLOB column containing an XML file. How do I extract the XML from the CLOB and map its fields to a variable of the other, final schema?

    Thank you

    Published by: chaitu123 on Sep 20, 2009 08:16

    (1) When you create a DBAdapter on a table that has a CLOB column, look closely at the XSD created for the DBAdapter; the CLOB column element must be of String data type.

    (2) Create an XSD for the XML files and create a variable based on that XSD element.

    (3) Use ora:parseEscapedXML("yourDBAdapterClobElement") to populate the XML file variable.
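    As a hedged aside, if the goal is only to pull a few fields out of the CLOB, the extraction can also be sketched in plain SQL against the staging table (table, column and XML element names below are hypothetical):

    SELECT x.emp_no, x.emp_name
      FROM staging_tbl s,
           XMLTABLE('/Employee'
                    PASSING XMLTYPE(s.xml_clob)
                    COLUMNS emp_no   NUMBER       PATH 'EmpNo',
                            emp_name VARCHAR2(50) PATH 'EmpName') x;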

    Krishna
