DML changes in the DB buffer Cache?

DB version: 11.2.0.4

OS: RHEL 6.5

Oracle documentation defines "database buffer cache" in the following way

"The database buffer cache, also called the buffer cache, is the memory area that stores copies of data blocks read from data files"

http://docs.Oracle.com/CD/E25054_01/server.1111/e25789/memory.htm#i10221

Let's say that the following UPDATE statement updates 100,000 records. The server process fetches all the blocks that contain the matching records, places them in the DB buffer cache, and updates the blocks there. The metadata of this change is recorded in the redo log buffer as the update progresses. At the next checkpoint, the changed blocks are written to the data files. Right?

UPDATE employee
SET salary = salary * 1.05
WHERE deptnum = 8;

Basically: yes, once again. Oracle writes dirty buffers (i.e. buffers with different content than the corresponding blocks on disk) to disk under certain conditions. If the operation is rolled back, the blocks must be read into the cache again and the change must be undone (and written).
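A quick way to see the dirty-buffer side of this is to count the modified buffers a segment currently holds in the cache. This is only a sketch: it assumes the EMPLOYEE table from the example above and SELECT privileges on V$BH and DBA_OBJECTS.

```sql
-- Count dirty buffers (content differs from the block on disk)
-- belonging to the EMPLOYEE segment, right after the UPDATE
SELECT COUNT(*) AS dirty_buffers
FROM   v$bh bh,
       dba_objects o
WHERE  bh.objd       = o.data_object_id
AND    o.object_name = 'EMPLOYEE'   -- table name assumed from the example
AND    bh.dirty      = 'Y';
```

Run it before and after ALTER SYSTEM CHECKPOINT and the count should drop as DBWR writes the dirty buffers out.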

Tags: Database

Similar Questions

  • What else is stored in the database buffer cache?

    What else is stored in the database buffer cache, besides the data blocks read from data files?

    For the nitty gritty on this point, you'd have to ask someone waaay smarter than me.

  • CKPTQ in the database buffer cache and LRU

    Hi experts


    This applies to the database buffer cache in Oracle 10.2 or higher.
    Sources: OTN forums and the 11.2 Concepts guide

    According to my reading, to improve functionality and manageability, the database buffer cache is divided into several zones which are called work areas. Now let's zoom in more:

    each of these work areas stores multiple lists to manage the buffers inside the database buffer cache.

    Each work area can have one or more lists to keep the buffers in order. The lists each work area has are therefore the LRU list and the CKPTQ list. The LRU list

    is a list of pinned, free and dirty buffers, and the CKPTQ is a list of dirty buffers. We can say that the CKPTQ is a set of dirty buffers ordered by low RBA and ready to be flushed from the cache to disk.

    The CKPTQ list is maintained in order of low RBA.
    As a novice, let me first clarify low RBA and high RBA.

    The RBA is stored in the header of the block and gives information on when this block was changed and how many times.

    Low RBA: the low RBA is the redo address of the first change that was applied to the block since it was last clean.
    High RBA: the high RBA is the redo address of the most recent change that was applied to the block.

    Now back to the CKPTQ.
    It can look like this (pathetic CKPTQ diagram):

    low RBA  --------------------------  high RBA
    (head of the CKPTQ)                  (tail of the CKPTQ)

    The CKPTQ is a list of dirty buffers. Per the RBA concept, the most recently modified buffer is at the tail of the CKPTQ.

    Now an Oracle process starts and tries to get a buffer from the DB buffer cache; if it finds the buffer, it puts it at the MRU end of the LRU list, and that buffer becomes the most

    recently used.

    Now, if the process cannot find the buffer it needs, it will first try to find a free buffer on the LRU list. If it finds one, it reads the data block from the data file into the

    place where the free buffer was sitting. (Fair enough.)

    Now, if the process can't find a free buffer on the LRU list, the first step is that it will find some dirty buffers at the LRU end of the LRU list and place them on the

    CKPTQ (don't forget, it organizes the CKPT queue in low-RBA order). And now the Oracle process will read the required buffer and place it at the MRU end of the LRU list (because space was reclaimed by moving the dirty buffers to the CKPTQ).

    What I do not know is how the CKPTQ buffers (to be more precise, the dirty buffers) will move to the data files. All buffers are lined up on the CKPTQ lowest-RBA first. But

    flushed to the datafile how, in what way, and on what event?

    That's what I understand after these last three days of flipping through blogs, forums and the Concepts guide. Now please set me straight on the following.

    I can't tie in the following features at this point:

    (1) How does incremental checkpointing work with this CKPTQ?

    (2) Now, what is this 3-second timeout?

    (Every 3 seconds the DBWR process will wake up and look for anything to write to the data files; for this, DBWR will check only the CKPTQ.)

    (3) Apart from the 3-second mechanism, when will CKPTQ buffers be moved? (Is it when a process is unable to find any space on the CKPTQ to keep LRU buffers? Is that the

    moment when CKPTQ buffers are written to disk?)

    (4) Can you please explain when the control file is updated with the checkpoint, so that it can reduce recovery time?

    That is a lot of questions, but I'm trying to build the entire process in my mind as it operates. I may be wrong at any phase or any stage, so please correct me along the way and

    take me to the end of the flow.


    Thank you
    Philippe
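    Regarding question (4), the checkpoint information recorded in the control file can at least be observed from SQL (a sketch, assuming the standard 10g/11g V$ views):

    ```sql
    -- Checkpoint SCN recorded in the control file for the whole database
    SELECT checkpoint_change# FROM v$database;

    -- Per-datafile checkpoint SCN and time, also read from the control file
    SELECT file#, checkpoint_change#, checkpoint_time FROM v$datafile;
    ```

    Comparing these values before and after ALTER SYSTEM CHECKPOINT shows the control file being updated.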

    Hi Aman,

    Yes, I have a soft copy of the ppt / white paper by "Harald van Breederode" from 2009.

    -Pavan Kumar N

  • clarification of term required for the DB buffer Cache

    Hello!!

    Here, I have a very basic conceptual question. The DB buffer cache contains data read from the data files, so as to reduce disk I/O. If a table is constantly queried and remains in the DB buffer cache all the time, how does Oracle ensure the user gets the latest information?

    The database buffer cache is a part of the System Global Area (SGA) which is responsible for caching the frequently accessed blocks of a segment. Subsequent transactions involving the same blocks can then access them from memory instead of from disk. The database buffer cache works on a least recently used (LRU) basis, according to which the most frequently accessed blocks are kept in memory while the less frequently accessed ones are gradually aged out.

    See this link

    http://www.Stanford.edu/dept/ITSS/docs/Oracle/10G/server.101/b10743/memory.htm :)

  • Read data larger than the DB buffer Cache

    DB version: 10.2.0.4
    OS: Solaris 5.10


    We have a DB with 1 GB for DB_CACHE_SIZE. Automatic shared memory management is disabled (SGA_TARGET = 0).

    If a query is fired on a table and it will fetch 2 GB of data, will this session hang? How does Oracle handle this?

    Tom wrote:
    If the retrieved blocks get automatically aged out of the buffer cache by the LRU algorithm once they are fetched, then Oracle should handle this without any problem. Right?

    Yes. No problem, in that only the +fetch size+ (for example, a fetch of rows from the 2 GB being selected) needs to fit completely in the db buffer cache (only 1 GB in size).

    As Sybrand mentioned - everything in this case gets flushed out as more recent data blocks are read... and those are flushed shortly thereafter as even more recent data blocks are read.

    The cache hit ratio will be low.

    But this will not cause Oracle errors or problems - it simply degrades performance, as the volume of data being processed exceeds the capacity of the cache.

    It's like running a very large program that requires more RAM than is available on a PC. The 'additional RAM' is the swap file on disk. The app will be slow because its memory pages (some on disk) must be swapped in and out of memory as needed. It would run faster if the PC had enough RAM. However, the o/s is designed to deal with this exact situation of requiring more RAM than is physically available.

    It is a similar situation when processing a larger chunk of data than the buffer cache has capacity for.

  • Quality of the data buffer Cache

    Hi all

    Can someone please tell me some ways in which I can improve the quality (hit ratio) of the data buffer cache? It is currently at 51.2%. The DB is 10.2.0.2.0.

    I want to know all the factors I should keep in mind if I want to increase DB_CACHE_SIZE.

    Also, I want to know how I can find the cache hit ratio?

    In addition, I want to know which are the most frequently accessed objects in my DB?

    Thank you and best regards,

    Nick.

    Bolle wrote:
    You can try to reduce the size of the buffer cache to increase the hit ratio.

    Huh? That's new! How would that happen?
    Aman...
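    For reference, the classic hit-ratio calculation and a sizing aid can be queried like this (a sketch; statistic names as exposed by V$SYSSTAT in 10g, and the V$DB_CACHE_ADVICE query assumes the advisory is enabled and an 8 KB block size):

    ```sql
    -- Buffer cache hit ratio from the standard V$SYSSTAT statistics
    SELECT 1 - (phy.value / (db.value + cons.value)) AS cache_hit_ratio
    FROM   v$sysstat phy, v$sysstat db, v$sysstat cons
    WHERE  phy.name  = 'physical reads cache'
    AND    db.name   = 'db block gets from cache'
    AND    cons.name = 'consistent gets from cache';

    -- Predicted physical-read factor for other DB_CACHE_SIZE values
    SELECT size_for_estimate, estd_physical_read_factor
    FROM   v$db_cache_advice
    WHERE  name = 'DEFAULT' AND block_size = 8192;

    -- Which objects currently occupy the most buffers in the cache
    SELECT o.object_name, COUNT(*) AS buffers
    FROM   v$bh bh, dba_objects o
    WHERE  bh.objd = o.data_object_id
    GROUP  BY o.object_name
    ORDER  BY buffers DESC;
    ```

    Note that the hit ratio on its own is a poor tuning target; V$DB_CACHE_ADVICE gives a much more direct answer to the DB_CACHE_SIZE question.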

  • Updated data are larger than the buffer cache

    Hi Experts,

    I have a small request. I have a table called CONTENT holding 12 GB of data. Suppose I fire a single update statement that updates 8 GB of the CONTENT table's data, using a 1 GB database buffer cache.

    How will 1 GB of database buffer cache be used to update 8 GB of data? At the architectural level, will anything additional happen (beyond the usual) when executing an update on data larger than the buffer cache?

    Could someone please respond. Thank you

    Database: 10.2.0.5

    OS: Power 5-64 bit AIX system

    Hello

    the basic mechanism is the following:

    the data blocks that need updating are read from the data files and cached in memory (the buffer cache); the update is made in the buffer cache, and the before (UNDO) image is stored in the undo segments; the operation (here, the update) is recorded in the redo buffer until it goes to the redo files. If the buffer cache is small, or we need more space in the buffer cache, or we have a checkpoint, or... Oracle writes the modified blocks back to the data files to free buffer memory for more blocks.

    While the update transaction runs, other transactions can read the before image from UNDO. On commit, at the end of the transaction the change is confirmed and the commit is recorded in the redo. On rollback, at the end of the transaction the before image is "restored" and the rollback is recorded in the redo as well.
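    To watch the undo side of this mechanism while such a large UPDATE runs, the transaction's undo consumption can be polled from another session (a sketch; 'CONTENT_OWNER' is a made-up schema name for the updating session):

    ```sql
    -- Undo blocks and undo records consumed by the running transaction
    SELECT s.username, t.used_ublk, t.used_urec
    FROM   v$transaction t, v$session s
    WHERE  t.addr     = s.taddr
    AND    s.username = 'CONTENT_OWNER';   -- hypothetical updating user
    ```

    USED_UBLK should grow steadily during the update, while the cache itself cycles through far fewer buffers than the 8 GB being touched.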

    Regards

  • Why the blocks of temporary tables are placed in the buffer cache?

    I read the following statement, which seems quite plausible to me: "Oracle 7.3 and onwards generates db file sequential read events when a dedicated server process reads data from a temporary segment on disk. Older versions of Oracle would read temporary segment data into the database buffer cache using db file scattered reads. Recent releases exploit the heuristic that temporary segment data is unlikely to be shareable or revisited, so a server process reads it directly into its program global area (PGA)."

    To verify this statement (and also for the pleasure of seeing one of these rare db file sequential read events against a temporary segment), I ran a little experiment on my Oracle 10.2 on Linux (see below). Not only does it seem that, contrary to the statement above, the blocks of temporary tables are placed in the buffer cache, but also V$BH.OBJD for these blocks does not refer to an existing object in the database (at least not one that is listed in DBA_OBJECTS). Incidentally, I traced the session and did not see any db file sequential read events.

    So, I have the following questions:
    (1) Are my experimental set-up and my conclusions correct (i.e. are blocks of temporary tables really placed in the buffer cache)?
    (2) If so, what is the reason for placing blocks of temporary tables in the buffer cache? As these blocks contain session-private data, the buffers cannot be reused by another session. So why incur all the buffer cache management overhead of putting the blocks in the buffer cache (and eventually aging them out) rather than caching them in session-private memory?
    (3) What does V$BH.OBJD refer to for blocks belonging to temporary tables?

    Thanks for any help and information
    Kind regards
    Martin

    Experiment I ran (on 10.2/Linux)
    =============================
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Oct 24 22:25:07 2010
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    SQL> create global temporary table temp_tab_4 on commit preserve rows as select * from dba_objects;
    
    Table created.
    
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    SQL> select count(*), status from v$bh group by status order by 1 desc;
    
      COUNT(*) STATUS
    ---------- -------
          4208 free
          3 xcur
    
    SQL> select count(*) from temp_tab_4;
    
      COUNT(*)
    ----------
         11417
    
    SQL> -- NOW THE BUFFER CACHE CONTAINS USED BLOCKS, THERE WAS NO OTHER ACTIVITY ON THE DATABASE
    select count(*), status from v$bh group by status order by 1 desc;
    SQL> 
      COUNT(*) STATUS
    ---------- -------
          4060 free
           151 xcur
    
    SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD HAVE BLOCK# THAT CORRESPOND TO THE TEMPORARY SEGMENT DISPLAYED
    -- IN V$TEMPSEG_USAGE
    select count(*), status, objd from v$bh where status != 'free' group by status, objd order by 1 desc;
    SQL> SQL> 
      COUNT(*) STATUS      OBJD
    ---------- ------- ----------
           145 xcur       4220937
          2 xcur        257
          2 xcur        237
          1 xcur        239
          1 xcur    4294967295
    
    SQL> -- THE OBJECT REFERENCED BY THE NEWLY USED BLOCKS IS NOT LISTED IN DBA_OBJECTS
    select * from dba_objects where object_id = 4220937 or data_object_id = 4220937;
    
    SQL> 
    no rows selected
    
    SQL> SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD ARE MARKED AS TEMP IN V$BH
    select distinct temp from v$bh where objd = 4220937;
    SQL> 
    T
    -
    Y
    
    SQL> 
    Edited by: user4530562 on 25.10.2010 01:12

    Edited by: user4530562 on 25.10.2010 04:57

    The reason to put the blocks of a global temporary table in the buffer cache is the same reason you put ordinary table blocks in the buffer cache -> you want some of them to be in memory.

    If you ask why we don't somehow keep temporary tables in the PGA - well, what happens if this temporary table grows to 50 GB? 32-bit platforms cannot even handle that, and you do not want a process to become uncontrollably large.

    Moreover, GTTs allow you to roll back, back to a savepoint (or the implied savepoint when an error occurs during a DML call), and this requires protection by undo. Placing rows / undo in the PGA would have complicated the implementation even further... As it is now, a GTT is almost a regular table which just happens to reside in temp files.

    If you really want to put data in the PGA only, then you can create PL/SQL collections and even access them through SQL (CAST(coll AS xyz_type)) where xyz_type is a TABLE OF some object type.
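    A minimal sketch of that collection approach (the type name and values here are made up):

    ```sql
    -- A SQL-level collection type that TABLE(CAST(...)) can query
    CREATE TYPE num_tab_type AS TABLE OF NUMBER;
    /

    -- The collection lives in the session's PGA, yet SQL can read it
    DECLARE
      v_nums num_tab_type := num_tab_type(10, 20, 30);
      v_sum  NUMBER;
    BEGIN
      SELECT SUM(column_value)
      INTO   v_sum
      FROM   TABLE(CAST(v_nums AS num_tab_type));
      dbms_output.put_line('sum = ' || v_sum);   -- should print: sum = 60
    END;
    /
    ```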

    --
    Tanel Poder
    New online seminars!
    http://tech.e2sn.com/Oracle-training-seminars

  • Interaction between physical I/O and buffer cache hash buckets

    I am reading the book Expert Oracle Practices, on issues of lock contention. While I was reading this chapter, I got a little bit confused about the behavior of the buffer cache when physical I/O occurs. According to Tom Kyte, when data blocks are read from disk (on a cache miss), the following steps occur. Ask Tom "How does the Database Buffer Cache work?"

    (a) access to the buffer cache and search for block

    (b) if the block isn't here, perform physical i/o and put it in the cache

    (c) return the block from the memory cache

    However, I wonder about what happens at step b, i.e. putting the data block in the buffer cache. For this, is the data block added to the associated buffer cache hash bucket?

    As far as I know, in order to find a block in the cache, the address of the applicable block is mapped to a buffer cache hash bucket; the hash function is computed over the data block address. Then (after acquiring the child latch) the cache buffer chain is searched for the data block address to locate the buffer in the buffer cache.

    My second question is, at step a (go to the buffer cache and search for the block), how does Oracle look for the block? I mean, where does it look? My third question relates to my second: if Oracle discovers that the block is in the buffer cache, how does it know where to find it? I guess Oracle does not just know where to locate it in the buffer cache; therefore, it uses the cache buffer hash buckets. Am I wrong?

    My last question is, I'm just trying to understand how buffer cache buffers are linked - the buffer cache hash buckets and chains, how do they work?

    Thanks in advance.

    > What happens if the expected rows reside in other data blocks of the table? How can you get the other data block addresses? And how do you know which rows are located in which data block?

    See this demo:

    Microsoft Windows XP [Version 5.1.2600]

    Copyright (C) 1985-2001 Microsoft Corp.

    C:\Documents and Settings\Administrateur > sqlplus scott/tiger

    SQL*Plus: Release 11.2.0.1.0 Production on Wed Dec 18 09:01:50 2013

    Copyright (c) 1982, 2010, Oracle.  All rights reserved.

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> drop table test purge;

    Table dropped.

    SQL> create table test as select * from all_objects;

    Table created.

    SQL> set lines 200
    SQL> column segment_name for a20
    SQL> select segment_name, segment_type, header_file, header_block from dba_segments where segment_name = 'TEST';

    SEGMENT_NAME         SEGMENT_TYPE       HEADER_FILE HEADER_BLOCK
    -------------------- ------------------ ----------- ------------
    TEST                 TABLE                        4         1218

    Meaning: for table TEST, the segment header block is 1218, which resides in file number 4.

    SQL> SELECT
      2  dbms_rowid.rowid_relative_fno (rowid) REL_FNO,
      3  dbms_rowid.rowid_block_number (rowid) BLOCKNO
      4  from test where object_name = 'EMP';

       REL_FNO    BLOCKNO
    ---------- ----------
             4       2443

    SQL> variable dba varchar2(30)
    SQL> exec :dba := dbms_utility.make_data_block_address(4, 2443);

    PL/SQL procedure successfully completed.

    SQL> print dba

    DBA
    --------------------------------
    16779659

    SQL> SELECT
      2  dbms_rowid.rowid_relative_fno (rowid) REL_FNO,
      3  dbms_rowid.rowid_block_number (rowid) BLOCKNO
      4  from test where object_name = 'I_AUDIT';

       REL_FNO    BLOCKNO
    ---------- ----------
             4       1223

    SQL> exec :dba := dbms_utility.make_data_block_address(4, 1223);

    PL/SQL procedure successfully completed.

    SQL> print dba

    DBA
    --------------------------------
    16778439

    SQL>

    So I got two DBAs for different rows that reside in block no. 2443 and block no. 1223.

    Regards

    Girish Sharma

  • Changing data in the undo tablespace or db Oracle buffer cache

    Hello

    I have a serious doubt about the Oracle architecture. When a user issues an update statement, the data blocks are brought into the db buffer cache - but where are the data changes made? Is a copy of the data block stored in the db buffer cache and the changes made to the block in the buffer cache? Or is a copy of the data block stored in the undo tablespace and the changes made to the blocks in the undo tablespace?

    In short, are the changes to the data blocks made in the db buffer cache or in the undo tablespace?


    Thanks in advance

    Kind regards
    007

    Did you have a look on the Internet for the answer?

    In short, if an Oracle process wants to update a record in a table, it does the following:

    - Read the record to be changed into the buffer cache
    - Checks (is the record already locked, is the update allowed)
    - Put the record (the so-called before image) in a rollback segment in the UNDO tablespace
    - Write redo information about this change to the UNDO tablespace
    - Lock the record
    - Write the change to the record in the buffer cache
    - Put the change in the redo buffer as well
    - Sit and wait... (for a commit or a rollback)

    When committing:
    - Release the lock
    - Flush the rollback record from the UNDO tablespace
    - Write a record of this change in the redo

    When rolling back:
    - Release the lock
    - Flush the changed record from the buffer cache
    - Read back the original value
    - Flush the rollback record from the UNDO tablespace
    - Write a record of this in the redo

    There is some more specific complexity here, e.g. when a checkpoint occurs between the change and the commit / rollback, or when the redo buffer has been flushed to the redo files

    Please, any other DBA, correct me if I'm wrong...

    Cheers
    FJFranken

  • The write list of the database buffer Cache

    What is the write list in the database buffer cache? What is its function?

    Thanks in advance.

    >
    What is the write list in the database buffer cache? What is its function?
    >
    The write list holds dirty buffers that have not been written to disk yet.

    See "Database Buffer Cache" in the Concepts of database 11g doc
    http://docs.Oracle.com/CD/B28359_01/server.111/b28318/memory.htm
    >
    Organization of the Database Buffer Cache

    The buffers in the cache are organized into two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been changed but not yet written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers contain no useful data and are available for use. Pinned buffers are currently being accessed.

  • Unexpected CR copies in the buffer cache

    Hello

    While trying to understand the mechanisms of the Oracle buffer cache, I ran a little experiment and observed an unexpected result. I believe that my expectation was wrong, and I would be grateful if someone could explain to me what I misunderstood.
    From what I understand, a consistent read (CR) copy of a buffer is created in the cache when an older version of the buffer's content has to be read, for example to ignore changes made by a not-yet-committed transaction when you query a table. I also thought that CR copies in the buffer cache can be reused by subsequent queries requiring a rolled-back image of the corresponding block.

    Now, I ran the following experiment on a 10.2 DB.
    1. I create a table BC_TEST (in a non-ASSM tablespace)
    -> V$BH shows one buffer A with status XCUR for this table - V$BH.CLASS# is 4, which according to various sources on the internet indicates a segment header.
    2. Session 1 inserts a row into the table (and does not commit)
    -> Now V$BH shows 8 buffers attached to the BC_TEST table. I think these are the blocks of an extent being allocated to the table (I had expected a single data block to be loaded into the cache in addition to the header that was already there in step 1). There is still the buffer A with CLASS# = 4 from step 1, a buffer B with status XCUR and CLASS# = 1, which according to various sources on the internet indicates a data block, and 6 more blocks with status FREE and CLASS# = 14 (this value is decoded differently in different sources on the internet).
    3. Session 2 issues a "select * from bc_test"
    -> V$BH shows 2 extra buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2. I understand that a consistent read copy needs to be made to undo the uncommitted changes of step 2 - however, I do not understand why *2* of these copies are created.
    Note: in a slight variation of the experiment, if I issue the "select * from bc_test" in session 2 between steps 1 and 2, then I only get 1 CR copy in step 3 (as I expected).
    4. Session 2 issues "select * from bc_test" again
    -> V$BH shows yet another extra buffer with status CR and the same FILE#/BLOCK# as buffer B from step 2 (i.e. 3 such buffers in total). Here I don't understand why the query cannot reuse the CR copy already created in step 3 (which already shows buffer B without the changes of the uncommitted transaction of step 2).
    5. Session 2 repeatedly issues "select * from bc_test" again
    --> The number of buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2 increases by one with each additional query, up to a total of 5. After that, the number of these buffers remains constant over additional queries. However, various statistics of session 2 ("consistent gets", "CR blocks created", "consistent changes", "data blocks consistent reads - undo records applied", "no work - consistent read gets") suggest that session 2 continues to generate consistent read copies with each "select * from bc_test" (perhaps the buffers in the buffer cache are simply reused from there on?).



    To summarize, I have the following questions:
    (I) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?
    (II) Why does the first select statement on the table (step 3) create 2 CR copies of the (single used) data block of the table (instead of one, as I expected)?
    (III) Why do further queries create more CR copies of this single data block (instead of reusing the CR copy created by the first select statement)?
    (IV) What limits the number of CR copies created to 5 (is there some parameter controlling this value, does it depend on some sizing of the cache, or is it simply hard-coded)?
    (V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    Thank you very much for any answer
    Kind regards
    Martin

    P.S. Please find below the protocol of my experiment

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> drop table bc_test;

    Table dropped.

    SQL> create table bc_test (col number(9)) tablespace local01;

    Table created.

    SQL> SELECT bh.file#, bh.block#, bh.class#, bh.status, bh.dirty, bh.temp, bh.ping, bh.stale, bh.direct, bh.new
      2  from v$bh bh
      3  , dba_objects o
      4  WHERE bh.objd = o.data_object_id
      5  and o.object_name = 'BC_TEST'
      6  order by bh.block#;

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    Y N N N N N


    --------------------------------------------------
    Session 1
    --------------------------------------------------
    SQL> insert into bc_test values (1);

    1 row created.

    --------------------------------------------------
    Control session
    --------------------------------------------------

    SQL> /

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    Y N N N N N
        5     210       1  xcur    Y N N N N N
        5     211      14  free    N N N N N N
        5     212      14  free    N N N N N N
        5     213      14  free    N N N N N N
        5     214      14  free    N N N N N N
        5     215      14  free    N N N N N N
        5     216      14  free    N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------

    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
             28  recursive calls
              0  db block gets
             13  consistent gets
              0  physical reads
            172  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  xcur    N N N N N N
        5     211      14  free    N N N N N N
        5     212      14  free    N N N N N N
        5     213      14  free    N N N N N N
        5     214      14  free    N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  xcur    Y N N N N N
        5     211      14  free    N N N N N N
        5     213      14  free    N N N N N N
        5     214      14  free    N N N N N N

    8 rows selected.

    SQL>


    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  xcur    Y N N N N N
        5     211      14  free    N N N N N N
        5     213      14  free    N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE#  BLOCK#  CLASS#  STATUS  D T P S D N
        5     209       4  xcur    N N N N N N
        5     210       1  xcur    Y N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     210       1  cr      N N N N N N
        5     211      14  free    N N N N N N
        5     213      14  free    N N N N N N

    9 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL > /.

    FOLDER # BLOCK # CLASS # STATUS D T P S D N
    5 209 4 xcur N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 xcur O N N N N N

    7 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
    0 recursive calls
    0 db block gets
    5 consistent gets
    0 physical reads
    108 redo size
    272 bytes sent via SQL*Net to client
    374 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    0 rows processed

    SQL >

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
    5 209 4 xcur N N N N N N
    5 210 1 xcur O N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N
    5 210 1 cr N N N N N N

    7 rows selected.

    Which version of 10.2, on what platform, and is it RAC-enabled? And what exactly was the tablespace definition?

    (i) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?

    It sounds as if you may have formatted the whole of the first extent - assuming you are using 8KB blocks and system-allocated extents. But that didn't happen when I checked 10.2.0.3 on a single-instance version of Oracle.

    Class 14 is reported as "unused" in every version of Oracle I have seen. That would be consistent with the blocks being formatted but not yet below the high water mark. You could "alter system dump datafile N block min X block max Y" to get the segment header block, a used block and an unused block into the dump.
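    As a concrete sketch of that check (the file and block numbers here are taken from the v$bh output above and are illustrative assumptions, not a recipe):

    ```sql
    -- Dump a range of blocks from datafile 5, so the segment header block,
    -- a used block and an "unused" (class 14) block all land in the trace file.
    ALTER SYSTEM DUMP DATAFILE 5 BLOCK MIN 209 BLOCK MAX 216;

    -- Cross-check what the buffer cache holds for the same range.
    SELECT file#, block#, class#, status, dirty
    FROM   v$bh
    WHERE  file# = 5
    AND    block# BETWEEN 209 AND 216
    ORDER  BY block#;
    ```

    The trace file appears in the user dump destination of the session that issued the dump.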

    (ii) Why does the first select against the table (step 3) create 2 CR copies of the table's single used data block (instead of one, as I would expect)?

    Possibly because the first copy cleans out uncommitted transactions and the second copy clones the result rolled back to a given SCN - but that's just a guess; I hadn't noticed this behaviour before, so I would have to do some experiments to find out why it happens.

    (iii) Why do subsequent queries create further CR copies of this single data block (instead of reusing the CR copy created by the first select)?

    The first CR block you create represents a moment earlier than the start of the second query, so your session has to start again from the current block. If you fixed the session's SCN by setting the isolation level (for example, "set transaction read only") before the first query, I believe you would see that the number of CR copies does not increase after the initial creation.

    (iv) What limits the number of CR copies created to 5 (is there some parameter controlling this value, is it derived from some sizing of the cache, or is it simply hard-coded)?

    There is a hidden parameter _db_block_max_cr_dba which defaults to 6. The code doesn't always seem to obey this limit, but generally you will see at most 6 copies of a block in the cache.
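    If you want to see the value on your own system, one common way to read a hidden parameter is to query the x$ tables as SYS (a sketch; the x$ structures are undocumented and can change between versions):

    ```sql
    -- Run as SYS: x$ tables are not visible to ordinary users.
    SELECT n.ksppinm  AS parameter,
           v.ksppstvl AS value
    FROM   x$ksppi  n
    JOIN   x$ksppcv v ON n.indx = v.indx
    WHERE  n.ksppinm = '_db_block_max_cr_dba';
    ```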

    (v) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    There are at least two options: (a) a session needs to see a copy of a block with recent changes removed, either because there are uncommitted transactions or because the session needs to see the block as it was when it started running a particular query or transaction; (b) a session wants to change a block as it gets hold of the current version - and in certain circumstances (possibly when the update is driven through a scan) it will make a copy of the block, mark the previous copy as CR, mark the new copy as xcur, and change the new copy.

    Incidentally, xcur is a RAC status - it is possible to have blocks in the buffer which are xcur but have not been "gotten in current mode".

    Concerning
    Jonathan Lewis

  • Basic tuning question about the buffer cache

    Database: Oracle 10g
    Host: Sun Solaris, 16 CPU server



    I am looking at the behaviour of some simple queries as I start tuning our data warehouse.

    Using SQL*Plus and AUTOTRACE, I ran this query twice in a row:

    SELECT *
    FROM PROCEDURE_FACT
    WHERE PROC_FACT_ID BETWEEN 100000 AND 200000

    It picks the index on PROC_FACT_ID and performs an index range scan to access the table data by rowid. The first time it ran there were about 600 physical block reads, as the table data was not in the buffer cache. The second time there were 0 physical block reads, because the blocks were all in the cache. All of this was expected behaviour.

    Then I ran this query twice:

    SELECT DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD

    As expected, it does a full table scan, because there is no index on DATA_SOURCE_CD, and then hashes the results to find the distinct DATA_SOURCE_CD values. The first run gave these results:

    consistent gets 190496
    physical reads 169696

    The second run had these results

    consistent gets 190496
    physical reads 170248


    NOT what I expected. I would have thought the second run would find many of the blocks already in the buffer cache from the first execution, so that the number of physical reads would drop significantly.

    Any help to understand this would be greatly appreciated.

    And is there something that can be done to keep the table PROCEDURE_FACT (the central table of our star schema) "pinned" in the buffer cache?

    Thanks in advance.

    - Chris Curzon

    Christopher Curzon wrote:
    Your comment about the buffer cache being used for smaller objects that benefit from it is something I have wondered about a good deal. It sounds as if tuning the buffer cache will have little impact on queries that scan entire tables.

    Chris,

    If you can afford it, and you think it is a reasonable approach given the remaining segments that are supposed to benefit from the buffer cache, you could consider marking your table segment with "CACHE", which changes the caching behaviour of a full table scan on a large segment (Oracle treats small and large segments differently when performing full table scans, as far as marking buffers in the cache is concerned; you can override that treatment with the CACHE/NOCACHE keyword). Or you could move your fact table to a KEEP pool by setting one up (ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = <size>), altering the segment accordingly (ALTER TABLE ... STORAGE (BUFFER_POOL KEEP)) and performing a full table scan to load the blocks into the KEEP cache.

    Note that the disadvantage of the KEEP pool approach is that you have less memory available for the default buffer cache (unless you add more memory to your system). An object marked with CACHE still competes with the other objects in the default buffer cache, so it could still be aged out (the same applies to the KEEP pool: if the segment is too large, or too many segments are assigned to it, blocks will age out there as well).
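    A minimal sketch of the KEEP pool steps described above (the 512M size is purely an illustrative assumption, and the hint simply forces the full scan used to prime the pool):

    ```sql
    -- 1. Carve a KEEP pool out of the SGA (size shown is illustrative only).
    ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 512M;

    -- 2. Assign the fact table's segment to the KEEP pool.
    ALTER TABLE procedure_fact STORAGE (BUFFER_POOL KEEP);

    -- 3. Prime the pool by forcing a full table scan.
    SELECT /*+ FULL(pf) */ COUNT(*) FROM procedure_fact pf;
    ```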

    So my question: how can I get a parallel scan for queries that use a full table scan, such as the one I posted in my previous message? Is it a matter of supplying the "parallel" hint, or is there an init.ora parameter I should try?

    You can use a PARALLEL hint in your statement:

    SELECT /*+ PARALLEL(PROCEDURE_FACT) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    or you could mark an object as PARALLEL in the dictionary:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL;
    

    Note that since you have 16 CPUs (or 16 cores that Oracle sees as 32? Check the CPU_COUNT parameter), the default parallel degree would usually be 2 * 16 = 32, which means Oracle starts at least 32 parallel slaves for a parallel operation (it could be a further set of 32 slaves if the operation includes, for example, a GROUP BY) if you are not using the PARALLEL_ADAPTIVE_MULTI_USER parameter (which can reduce the parallelism when several parallel operations run concurrently).
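    To check the inputs to that default-degree calculation on your own instance, you could query the relevant parameters (a sketch; exact parameter availability varies slightly by version):

    ```sql
    -- Default parallel degree is roughly CPU_COUNT * PARALLEL_THREADS_PER_CPU.
    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('cpu_count',
                    'parallel_threads_per_cpu',
                    'parallel_max_servers',
                    'parallel_adaptive_multi_user');
    ```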

    I recommend choosing a parallel degree lower than your default of 32, because you usually don't gain much from such a high degree; you can often get the same performance with a lower setting like this:

    SELECT /*+ PARALLEL(PROCEDURE_FACT, 4) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    The same can be applied when parallelising the object:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL 4;
    

    Note that when you define the object as PARALLEL, many operations will be parallelised (even DML can run in parallel if you enable parallel DML, which has some special restrictions), so I recommend using it with caution and starting with an explicit hint in those statements where you know it will be useful.

    Also check that your PARALLEL_MAX_SERVERS is high enough when you use parallel operations - it should be by default in your version of Oracle.

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Question about the keep buffer Pool and the Recycle Buffer Pool

    What do the KEEP buffer pool and the RECYCLE buffer pool actually contain within the database buffer cache - in particular, what kind of objects? I know the definitions, but I need to know the practical aspects.

    918868 wrote:
    What do the KEEP buffer pool and the RECYCLE buffer pool actually contain within the database buffer cache - in particular, what kind of objects? I know the definitions, but I need to know the practical aspects.

    When all else fails, read the Fine Manual:

    http://docs.Oracle.com/CD/E11882_01/server.112/e16638/memory.htm#PFGRF94285

  • Concept of database buffer cache

    Being a newbie, I am learning the Oracle architecture, starting with the database buffer cache in the SGA and its different parts, such as pinned buffers, dirty blocks & buffers. The bookish language seems a bit complex, so it would be great if someone here could find a little time to help me understand the database buffer cache and its components and features in simple, brief language.

    All entries of my friends here will be highly appreciated.

    Published by: 916438 on March 24, 2012 23:19

    916438 wrote:
    I went through the links; just one question - what are the difference and the relationship between a dirty buffer and a pinned buffer?

    A dirty buffer is a modified image of a buffer that was originally brought into memory. As an example, suppose a buffer contains the value 10 on disk. Later that buffer is accessed, brought into the buffer cache, and changed to 11 by an UPDATE statement (with no commit issued yet). Since it is a modified image of the buffer that was originally read into memory, it is called a dirty buffer. Pinning essentially means accessing: once a buffer is pinned, it will not be aged out by the buffer cache's flushing algorithm. To access a buffer for anything, query or change, it must be pinned. A buffer can only be made dirty after it has first been pinned.
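    You can watch the dirty flag with the same kind of v$bh query used by the "control session" earlier in this thread; for example (a sketch, assuming SELECT privilege on v$bh and dba_objects):

    ```sql
    -- Count clean vs dirty buffers per object; DIRTY = 'Y' marks buffers
    -- whose cached image differs from the block on disk.
    SELECT o.object_name,
           b.dirty,
           COUNT(*) AS buffers
    FROM   v$bh b
    JOIN   dba_objects o ON o.data_object_id = b.objd
    GROUP  BY o.object_name, b.dirty
    ORDER  BY buffers DESC;
    ```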

    HTH
    Aman...
