Updated data are larger than the buffer cache

Hi Experts,

I have a small question. I have a table called CONTENTS that holds 12 GB of data. I ran a single UPDATE statement that modifies 8 GB of the CONTENTS table's data, with only 1 GB of database buffer cache.

How can a 1 GB database buffer cache be used to update 8 GB of data? At the architectural level, does anything happen differently (compared to the usual case) when the updated data is larger than the buffer cache?

Could someone please respond? Thank you.

Database: 10.2.0.5

OS: Power 5-64 bit AIX system

Hello

The basic mechanism is the following:

The data blocks that need to be updated are read from the data files and cached in memory (the buffer cache). The update is applied to the buffers in the cache, the before (undo) image is stored in the undo segments, and the operation (here, the update) is recorded in the redo buffer until it is written to the redo log files. If the buffer cache is small, or we need more space in it, or a checkpoint occurs, or..., Oracle writes the modified blocks back to the data files to free buffers for further blocks.

While the update runs, other transactions can read the before image from undo. If the transaction commits, the change is confirmed at the end of the transaction and the commit is recorded in the redo. If it is rolled back, the before image is "restored" at the end of the transaction and the rollback is also recorded in the redo.
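The mechanism above can be made concrete with a toy Python model of an LRU buffer cache (the class, its method names, and the block counts are invented for illustration; this is not Oracle's actual implementation). It shows how a 1 GB cache can carry an update touching 8 GB: each block is read in, dirtied, and written back by the equivalent of DBWR when its buffer is needed for another block.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: reads blocks on demand, evicts the least
    recently used buffer when full, and 'writes back' dirty buffers
    (the job DBWR does in Oracle) before reusing them."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.buffers = OrderedDict()          # block_id -> dirty flag
        self.physical_reads = 0
        self.dirty_writebacks = 0

    def get_for_update(self, block_id):
        if block_id not in self.buffers:
            self.physical_reads += 1          # read block from data file
            if len(self.buffers) >= self.capacity:
                _, dirty = self.buffers.popitem(last=False)  # evict LRU buffer
                if dirty:
                    self.dirty_writebacks += 1  # write it back to disk first
        self.buffers[block_id] = True         # buffer is now dirty
        self.buffers.move_to_end(block_id)

# 1 GB cache of 8 KB blocks vs. an update touching 8 GB of blocks
cache = BufferCache(capacity_blocks=131_072)       # 1 GB / 8 KB
for block in range(1_048_576):                     # 8 GB / 8 KB
    cache.get_for_update(block)

print(cache.physical_reads)      # -> 1048576: every block read once
print(cache.dirty_writebacks)    # -> 917504: most dirty buffers written back mid-update
```

In the real system the dirty buffers still in the cache at the end are written at the next checkpoint; the point is simply that the cache never needs to hold all 8 GB at once.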

Regards

Tags: Database

Similar Questions

  • Shared pool larger than the buffer cache

    Hi all

    My database is 10.2.0.4 running on the Linux platform.

    Number of CPUs: 2, RAM: 2 GB

    SGA_TARGET was set to 1 GB.

    Initially the memory was configured with a shared pool of around 300 MB and a buffer cache of about 600 MB.

    When I queried the V$SGA_RESIZE_OPS view, I found some interesting results.

    Many grow and shrink operations had happened, and the current size of the shared pool is about 600 MB while the buffer cache is 300 MB. (This happened during the last 1.)

    I assumed that the buffer cache should always be larger than the shared pool. Is my assumption right?

    Is it because of SQL that does not use bind variables, resulting in shared pool growth? Reloads and invalidations are almost negligible, so I think that should not be the case.

    Also, no lock events are listed in the top 5 wait events.

    I've also seen that 15% of the shared pool is marked as KGH: NO ACCESS, which means that part is being used for the buffer cache.

    Should I set lower limits for the shared pool and the buffer cache, or can I just ignore this?

    Thank you
    rajdhanvi

    You have changed your question now... your original question was whether a shared pool larger than the buffer cache is acceptable... Check your own second post... Your new question is why the shared pool keeps increasing and is partly used as buffer cache... The explanation is given by Tanel Poder about what happens when ASMM is used... As for KGH: NO ACCESS, it means that nothing else can touch that memory...
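    The literal-vs-bind effect raised above (as one possible cause of shared pool growth) can be sketched with a toy Python model of a statement cache. The table and column names are invented, and a real shared pool stores far more than SQL text (parse trees, plans, etc.); this only illustrates why every distinct literal creates a new cached cursor while a bind variable keeps one reusable entry.

```python
# Toy model of the shared pool's statement cache: each distinct SQL text
# becomes its own cached cursor. (Illustrative only - not Oracle code.)

def cache_statements(statements):
    cursor_cache = set()
    for sql in statements:
        cursor_cache.add(sql)   # hard parse + new pool entry if the text is new
    return cursor_cache

# An application issuing the same lookup 10,000 times with literals...
literal_sql = [f"SELECT * FROM orders WHERE order_id = {n}" for n in range(10_000)]
# ...versus the same lookup with a bind variable
bind_sql = ["SELECT * FROM orders WHERE order_id = :oid"] * 10_000

print(len(cache_statements(literal_sql)))  # -> 10000 distinct cursors in the pool
print(len(cache_statements(bind_sql)))     # -> 1 shared cursor
```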

    Regards
    Karan

  • Why are the blocks of temporary tables placed in the buffer cache?

    I read the following statement, which seems quite plausible to me: "Oracle 7.3 and later generate db file sequential read events when a dedicated server process reads temporary segment data from disk. Older versions of Oracle would read temporary segment data into the database buffer cache using db file scattered reads. Recent releases exploit the heuristic that temporary segment data is unlikely to be shareable or revisited, so the server process reads it directly into its program global area (PGA)."

    To verify this statement (and also for the pleasure of seeing one of these rare db file sequential read events), I ran a little experiment on my Oracle 10.2 / Linux instance (see below). Not only does it seem that, contrary to the statement above, the blocks of temporary tables are placed in the buffer cache, but also V$BH.OBJD for these blocks does not refer to an existing object in the database (at least not one listed in DBA_OBJECTS). Incidentally, I traced the session and did not see any db file sequential read events.

    So, I have the following questions:
    (1) Are my experimental set-up and my conclusions correct (i.e. are blocks of temporary tables really placed in the buffer cache)?
    (2) If so, what is the reason for placing blocks of temporary tables in the buffer cache? As these blocks contain session-private data, they cannot be reused by another session. So why incur all the buffer cache management overhead of putting these blocks in the buffer cache (and possibly aging them out), rather than caching them in session-private memory?
    (3) What does V$BH.OBJD refer to for blocks belonging to temporary tables?

    Thanks for any help and information
    Kind regards
    Martin

    Experiment I ran (on 10.2 / Linux)
    =============================
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Oct 24 22:25:07 2010
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    SQL> create global temporary table temp_tab_4 on commit preserve rows as select * from dba_objects;
    
    Table created.
    
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    SQL> select count(*), status from v$bh group by status order by 1 desc;
    
      COUNT(*) STATUS
    ---------- -------
          4208 free
          3 xcur
    
    SQL> select count(*) from temp_tab_4;
    
      COUNT(*)
    ----------
         11417
    
    SQL> -- NOW THE BUFFER CACHE CONTAINS USED BLOCKS, THERE WAS NO OTHER ACTIVITY ON THE DATABASE
    SQL> select count(*), status from v$bh group by status order by 1 desc;
    
      COUNT(*) STATUS
    ---------- -------
          4060 free
           151 xcur
    
    SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD HAVE BLOCK# THAT CORRESPOND TO THE TEMPORARY SEGMENT DISPLAYED
    SQL> -- IN V$TEMPSEG_USAGE
    SQL> select count(*), status, objd from v$bh where status != 'free' group by status, objd order by 1 desc;
    
      COUNT(*) STATUS       OBJD
    ---------- ------- ---------
           145 xcur      4220937
             2 xcur          257
             2 xcur          237
             1 xcur          239
             1 xcur   4294967295
    
    SQL> -- THE OBJECT REFERENCED BY THE NEWLY USED BLOCKS IS NOT LISTED IN DBA_OBJECTS
    SQL> select * from dba_objects where object_id = 4220937 or data_object_id = 4220937;
    
    no rows selected
    
    SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD ARE MARKED AS TEMP IN V$BH
    SQL> select distinct temp from v$bh where objd = 4220937;
    
    T
    -
    Y
    
    SQL> 
    Edited by: user4530562 on 25.10.2010 01:12

    Edited by: user4530562 on 25.10.2010 04:57

    The reason to put the blocks of a global temporary table in the buffer cache is the same as the reason to put ordinary table blocks in the buffer cache -> you want some of them to be in memory.

    If you ask why we don't somehow keep temporary tables in the PGA - well, what happens if that temporary table grows to 50 GB? 32-bit platforms cannot even address that much, and you do not want a process becoming uncontrollably large.

    Moreover, GTTs allow you to roll back, either to a savepoint (or to the implicit savepoint when an error occurs during a DML call), and this requires protection by undo. Placing rows/undo in the PGA would have complicated the implementation even further... as it stands, a GTT is almost a regular table which just happens to reside in temp files.

    If you really want to keep data in the PGA only, then you can create PL/SQL collections and even access them through SQL (CAST(coll AS xyz_type)) where xyz_type is a TABLE of some object type.

    --
    Tanel Poder
    New online seminars!
    http://tech.e2sn.com/Oracle-training-seminars

  • Can you remove some Microsoft updates that are older than the recent updates being installed?

    I need to know if I can remove old Microsoft updates after upgrades.

    Thanks for any help I receive

    *E-mail address removed for privacy.*

    No, you cannot uninstall updates made before the operating system was upgraded, because another edition of Windows is now installed.

    MowGreen
    Expert in Windows IT Pro - consumer safety

    * - 343-* FDNY
    NEVER FORGOTTEN

  • Migrating Time Machine to a new drive; backups are larger than the original drive

    I am currently migrating my Time Machine backups from a 2 TB WD My Passport for Mac to a new 6 TB WD My Book for Mac. I have 63 backups on the old disk, and Finder says the drive has 1.96 TB filled.

    However, each individual backup is about 350 GB, which certainly does not add up. I would normally think the Finder was just listing the wrong sizes, except that when I copy the backups to the new disk, they actually copy at 350 GB apiece. I initially tried to copy the entire ~2 TB Backups.backupdb at once, but it was so large that the copy failed, so I am copying the backups one at a time into a folder that I planned to rename "Backups.backupdb" at the end. At this rate, I will fill the new 6 TB drive long before copying all the backups over.

    How is that possible?

    Just to be complete, I'm using a 2012 Retina MacBook Pro, 2.6 GHz Intel Core i7, 16 GB 1600 MHz DDR3, Intel HD 4000 1536 MB graphics, 512 GB of storage, running OS X 10.11.3.

    Time Machine keeps previous backups, so the total backup size keeps growing until there is no more space on the backup drive. This is quite normal, because Time Machine is an archival backup utility.

  • Read data larger than the DB buffer Cache

    DB version: 10.2.0.4
    OS: Solaris 5.10


    We have a DB with 1 GB for DB_CACHE_SIZE. Automatic shared memory management is disabled (SGA_TARGET = 0).

    If a query is run against a table that will fetch 2 GB of data, will the session hang? How does Oracle handle this?

    Tom wrote:
    If the fetched blocks get automatically aged out of the buffer cache by the LRU algorithm once they have been retrieved, then Oracle should handle this without any problem. Right?

    Yes. There is no problem, in that the full fetch (for example, the 2 GB worth of rows being selected) does not need to fit completely in the db buffer cache (only 1 GB in size).

    As Sybrand mentioned - in this case blocks are flushed out as more recent data blocks are read... and those are flushed shortly thereafter as even more recent data blocks are read.

    The cache hit ratio will be low.

    But this will not cause Oracle errors or problems - it simply degrades performance, as the volume of data being processed exceeds the capacity of the cache.

    It is like running a very large program that requires more RAM than is available on a PC. The "additional RAM" is the swap file on disk. The application will be slow because its memory pages (some on disk) must be swapped in and out of memory as needed. It would run faster if the PC had enough RAM. However, the o/s is designed to handle this exact situation of needing more RAM than is physically available.

    It is a similar situation when processing a larger volume of data than the buffer cache has capacity for.
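    The low hit ratio described above can be illustrated with a small Python sketch of an LRU cache (block counts are scaled down and arbitrary, and Oracle's real replacement policy is more subtle than plain LRU; this is only a toy model). A table larger than the cache gets no reuse even when scanned twice, because the blocks needed next have always just been aged out.

```python
from collections import OrderedDict

def scan(cache, capacity, blocks):
    """Read a sequence of blocks through an LRU cache; return the hit ratio."""
    hits = 0
    for b in blocks:
        if b in cache:
            hits += 1
            cache.move_to_end(b)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # age out the LRU block
            cache[b] = True
    return hits / len(blocks)

CACHE_BLOCKS = 1_000    # stand-in for a 1 GB cache
TABLE_BLOCKS = 2_000    # stand-in for a 2 GB table

cache = OrderedDict()
first = scan(cache, CACHE_BLOCKS, range(TABLE_BLOCKS))
second = scan(cache, CACHE_BLOCKS, range(TABLE_BLOCKS))
print(first, second)    # -> 0.0 0.0: even a repeat scan finds nothing cached
```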

  • The names of source files are larger than is supported by the file system...

    "The source file names are larger than is supported by the file system. Try moving to a location that has a shorter path name, or try renaming to a shorter name(s) before performing this operation."

    There are about 10 subfolders on the computer. None can be deleted, moved, or renamed without that message appearing. I tried UNLOCKER ASSISTANT & DELINVFILE to get rid of the file. Once more, neither worked. The command prompt did not work either. Please help me; this issue will not go away on my desktop.

    First, try removing it while bypassing the Recycle Bin, using SHIFT-DEL (thanks, Michael Murphy), rather than just Delete.

    Try using one of the following free products to remove the file/folder.  Unlocker at: http://www.softpedia.com/get/System/System-Miscellaneous/Unlocker.shtml or FileASSASSIN at http://www.malwarebytes.org/fileassassin.php (with or without forcing the deletion; you may have to force the deletion in this case: http://www.mydigitallife.info/2008/12/27/force-delete-cannot-delete-locked-or-in-used-files-or-folders-with-fileassassin/). These programs often work when Vista's normal delete functions do not work correctly.  You said you tried Unlocker (I don't know if it's the same as Unlocker Assistant), but if it is different, give it a try.

    Here is an article on this topic (related to another topic, but essentially the same problem) with a number of suggestions - which can work for you: http://www.howtogeek.com/forum/topic/how-to-delete-source-file-names-are-larger-than-is.

    I hope that one of these options helps.  If not, post back and we will see if we can find another answer.

    Good luck!

    Lorien - MCSA/MCSE/Network+/A+ - if this post solves your problem, please click the "Mark as Answer" or "Helpful" button at the top of this message. By marking a post as Answered or Helpful, you help others find the answer more quickly.

  • File names are larger than is supported by the file system

    I tried to upgrade from Vista to 7, but the upgrade failed and attempted to restore Vista.  To exit a reboot loop, I booted from the W7 upgrade disc, went to the command prompt, and found a folder with 25 subfolders.  After moving them to another directory, I was able to get the rollback to Vista to complete.  I then tried to delete the folder with the 25 subfolders.  But I get the error messages "the folder contains items whose names are too long for the Recycle Bin" and "the source file names are larger than is supported by the file system.  Try moving to a location that has a shorter path name, or try renaming to a shorter name(s) before performing the operation".  I tried Shift-Delete to bypass the Recycle Bin, but it does not work.  I tried moving, and was able to copy to a different folder, after which I was able to remove the old folders but could not delete the new folder.  Now, for some reason, I can rename the subfolder names (they were all named "Downloads"), but when I did, it created several more subfolders named "Downloads".  So now I have 60-70 subfolders, rather than the original 25.

    Does anyone have a suggestion or a program to help me remove the offending folder?  Should I go to my recovery drive and restore Vista from the D: drive to the C: drive?  If that doesn't work, will it reformat the hard drive?  If I reformat, how do I get to Windows 7?  Thanks for any help.

    Hi secoch,

    Thanks for posting your queries.

    I hope you have solved this problem by now, but since this forum is here to help everyone, I want to add something too; I tried this myself and it really helped.

    The first thing is that you do not need to download any programs for this.

    Windows has problems dealing with extra-long paths/filenames. If the combination of the path and file name approaches 255 characters (127 for Windows 95/98/ME), Windows will probably not manage it well if you try to delete or rename it (you can create it, but not change it). The first thing to try is renaming some of the folders which lead to the file itself. Display the directory one level up, and then select Rename on the folder the file is located in. Try a single letter (of course, keep a record of what you called each folder so you can reverse the process!). Then go back to the file and see if Windows now lets you rename or remove it. Otherwise, go back yet another folder level and rename that one, and so on. At some point, you should be able to find a series of folder names of reduced length which then let you rename the file to a name short enough, recreate the original directory tree, and still work with the file in question.  In fact, the main goal is to shorten the total path length, not only the file name or the folder name.
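    The strategy above amounts to shortening the total path length, which can be sketched in Python (the folder names and the 255-character figure are illustrative; the exact Windows limit depends on the API and file system in use).

```python
# Toy illustration of the renaming strategy: shortening ancestor folder
# names brings the *total* path length under the classic MAX_PATH limit.
MAX_PATH = 255

def total_length(parts):
    return len("\\".join(parts))

# A deeply nested path whose combined length is too long to delete/rename
parts = ["C:"] + ["Downloads_backup_copy_of_old_profile"] * 8 + ["report.txt"]
assert total_length(parts) > MAX_PATH

# Rename each ancestor folder to a single letter (keeping a record to undo it)
record = {chr(ord("a") + i): name for i, name in enumerate(parts[1:-1])}
shortened = ["C:"] + list(record.keys()) + ["report.txt"]

print(total_length(parts))       # -> 309: far over the limit
print(total_length(shortened))   # -> 29: now short enough to work with the file
```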

    I hope this option helps as well.  If not, please post back or ping me.

    I am always available by mail too -> *e-mail address removed for privacy*.

  • Icons on the home page are larger than normal; how can I make them smaller?

    My desktop icons are larger than normal; how can I make them smaller?

    Right-click on the desktop and click on View, and there should be three options for selecting the size of the icons.  If they are too big, choose one of the smaller options and see if it suits you better.

    I hope this helps.

    Good luck!

    Lorien - MCSA/MCSE/Network+/A+ - if this post solves your problem, please click the "Mark as Answer" or "Helpful" button at the top of this message. By marking a post as Answered or Helpful, you help others find the answer more quickly.

  • Unexpected CR copies in the buffer cache

    Hello

    While trying to understand the mechanisms of the Oracle buffer cache, I ran a little experiment and observed an unexpected result. I believe that my expectation was wrong, and I would be so grateful, if someone could explain to me what I misunderstood.
    From what I understand, a consistent read (CR) copy of a buffer in the cache is created when old content of a buffer has to be read, for example to ignore changes made by a not-yet-committed transaction when you query a table. I also thought that CR copies in the buffer cache can be reused by subsequent queries requiring a rolled-back image of the corresponding block.

    Now, I ran the following experiment on a 10.2 DB.
    1. I create a table BC_TEST (in a non-ASSM tablespace).
    -> V$BH shows one buffer A with status XCUR for this table - its V$BH.CLASS# is 4, which according to various sources on the internet indicates a segment header.
    2. Session 1 inserts a row into the table (and does not commit).
    -> Now V$BH shows 8 buffers with status XCUR belonging to the BC_TEST table. I think these are the blocks of an extent being allocated to the table (I had expected only a data block to be loaded into the cache, in addition to the header that was already there in step 1). There is still the buffer A with CLASS# = 4 from step 1, a buffer B with status XCUR and CLASS# = 1, which according to various sources on the internet indicates a data block, and 6 further blocks with status FREE and CLASS# = 14 (this value is decoded differently by different sources on the internet).
    3. Session 2 issues a "select * from bc_test".
    -> V$BH shows 2 extra buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2. I understand that a consistent read copy has to be made to undo the uncommitted changes from step 2 - however, I do not understand why *2* of these copies are created.
    Note: in a slight variation of the experiment, if I do not run "select * from bc_test" in session 2 between steps 1 and 2, then I only get 1 CR copy in step 3 (as I expected).
    4. Session 2 issues "select * from bc_test" again.
    -> V$BH shows yet another extra buffer with status CR and the same FILE#/BLOCK# as buffer B from step 2 (i.e. 3 such buffers in total). Here I do not understand why the query cannot reuse the CR copy already created in step 3 (which already shows buffer B without the changes of the uncommitted transaction from step 2).
    5. Session 2 issues "select * from bc_test" repeatedly.
    --> The number of buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2 increases by one with each further query, up to a total of 5. After that, the number of these buffers remains constant over further queries. However, various statistics of session 2 ("consistent gets", "CR blocks created", "consistent changes", "data blocks consistent reads - undo records applied", "no work - consistent read gets") suggest that session 2 continues to generate consistent read copies with each "select * from bc_test" (perhaps the buffers in the buffer cache are simply reused from then on?).



    To summarize, I have the following questions:
    (I) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?
    (II) Why does the first select on the table (step 3) create 2 CR copies of the (single used) data block of the table (instead of one, as I expected)?
    (III) Why do further queries create more CR copies of this single data block (instead of reusing the CR copy created by the first select)?
    (IV) What limits the number of CR copies created to 5 (is there some parameter controlling this value, does it depend on some sizing of the cache, or is it simply hard-coded)?
    (V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    Thank you very much for any answer
    Kind regards
    Martin

    P.S. Please find below the protocol of my experiment.

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> drop table bc_test;

    Table dropped.

    SQL> create table bc_test (col number(9)) tablespace local01;

    Table created.

    SQL> SELECT bh.file#, bh.block#, bh.class#, bh.status, bh.dirty, bh.temp, bh.ping, bh.stale, bh.direct, bh.new
      2  from v$bh bh
      3     , dba_objects o
      4  where bh.objd = o.data_object_id
      5  and o.object_name = 'BC_TEST'
      6  order by bh.block#;

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   Y N N N N N


    --------------------------------------------------
    Session 1
    --------------------------------------------------
    SQL> insert into bc_test values (1);

    1 row created.

    --------------------------------------------------
    Control session
    --------------------------------------------------

    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   Y N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    212     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N
        5    215     14 free   N N N N N N
        5    216     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------

    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
             28  recursive calls
              0  db block gets
             13  consistent gets
              0  physical reads
            172  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   N N N N N N
        5    211     14 free   N N N N N N
        5    212     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N

    8 rows selected.

    SQL>


    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 xcur   Y N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N

    9 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N

    7 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 xcur   Y N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N

    7 rows selected.

    What version of 10.2, on what platform, and is RAC enabled? What exactly was the tablespace definition?

    (I) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?

    It sounds as if you may have formatted all of the first extent - assuming that you are using 8 KB blocks and system-allocated extents. But that didn't happen when I checked 10.2.0.3 on a single-instance version of Oracle.

    Class 14 is interpreted as "unused" in all versions of Oracle that I have seen. That would be consistent with formatting the whole extent but not bringing it below the high water mark. You could "alter system dump datafile N block min X block max Y" for the segment header block, the used block, and an unused block, and examine the dump.

    (II) Why does the first select on the table (step 3) create 2 CR copies of the (single used) data block of the table (instead of one, as I expected)?

    Maybe because the first copy cleans out the uncommitted transaction and the second copy clones the result to take it back to a given SCN - but that's just a guess; I had not noticed this behavior before, so I would have to do some experiments to find out why it happens.

    (III) Why do further queries create more CR copies of this single data block (instead of reusing the CR copy created by the first select)?

    The first CR block you create relates to a moment earlier than the start of the second query, so your session has to start again from the current block. If you fix the session's isolation level to a single SCN (for example, "set transaction read only") before the first query, I believe you would see that the number of CR copies does not increase after the initial creation.

    (IV) What limits the number of CR copies created to 5 (is there some parameter controlling this value, does it depend on some sizing of the cache, or is it simply hard-coded)?

    The hidden parameter _db_block_max_cr_dba, which by default is 6. The code does not always seem to obey this limit, but generally you will see only 6 copies of a block in the buffer cache.
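    Assuming that behavior, the cap can be sketched with a toy Python model (the data structures and function are invented for illustration; Oracle's real cache management is far more involved). New clones are still built for each query, but the oldest clone is discarded so at most 6 buffers per block address stay cached, which matches the observed 1 xcur + 5 cr buffers.

```python
# Toy sketch of a per-block clone cap like _db_block_max_cr_dba (default 6,
# counting the current buffer). Illustrative model only - not Oracle code.
MAX_BUFFERS_PER_DBA = 6   # current (xcur) buffer + up to 5 CR clones

def make_cr_clone(buffers_for_block, clones_created):
    """Add one CR clone for a block, evicting the oldest clone at the cap."""
    clones_created += 1
    cr = [b for b in buffers_for_block if b[0] == "cr"]
    if len(cr) + 1 >= MAX_BUFFERS_PER_DBA:    # cap would be exceeded
        buffers_for_block.remove(cr[0])       # throw out the oldest clone
    buffers_for_block.append(("cr", clones_created))
    return clones_created

block_buffers = [("xcur", 0)]    # the current version of block (5, 210)
created = 0
for _ in range(20):              # 20 queries, each building a fresh clone
    created = make_cr_clone(block_buffers, created)

print(created)             # -> 20: clones keep being created...
print(len(block_buffers))  # -> 6: ...but only 6 buffers stay cached per block
```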

    (V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    There are at least two options - (a) a session needs to see a copy of a block with recent changes removed, either because there are uncommitted transactions, or because the session needs to see the block as it was when it started running a particular query or transaction; (b) a session changes a block when it has got hold of the current version - and in certain circumstances (possibly when the update is through a tablescan), it will make a copy of the block, mark the previous copy as CR, mark the new copy as xcur, and change the new copy.

    Incidentally, xcur is a RAC status - it is possible to have blocks in the buffer cache which are xcur but not "gotten in current mode".

    Regards
    Jonathan Lewis

  • Basic tuning question, about the buffer cache

    Database: Oracle 10g
    Host: Sun Solaris, 16 CPU server



    I am looking at the behavior of some simple queries as I begin tuning our data warehouse.

    Using SQL*Plus and AUTOTRACE, I ran this query two times in a row:

    SELECT *
    FROM PROCEDURE_FACT
    WHERE PROC_FACT_ID BETWEEN 100000 AND 200000

    It used the index on PROC_FACT_ID and performed an index range scan, accessing the table data by rowid. The first time it ran, there were about 600 physical block reads, as the table data was not in the buffer cache. The second time, there were 0 physical block reads, because the blocks were all in the cache. All of this was expected behavior.

    Then I ran this query twice in a row:

    SELECT DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD

    As expected, it did a full table scan, because there is no index on DATA_SOURCE_CD, and then hashed the results to find the distinct DATA_SOURCE_CD values. The first run had these results:

    consistent gets 190496
    physical reads 169696

    The second run had these results:

    consistent gets 190496
    physical reads 170248


    This was NOT what I expected. I would have thought that the second run would find many of the blocks already in the buffer cache from the first execution, so that the number of physical reads would drop significantly.

    Any help to understand this would be greatly appreciated.

    And is there something that can be done to keep the table PROCEDURE_FACT (the central table of our star schema) "pinned" in the buffer cache?

    Thanks in advance.

    - Chris Curzon

    Christopher Curzon wrote:
    Your comment about the buffer cache being used for smaller objects that benefit from it is something I have wondered about a good deal. It sounds as if tuning the buffer cache will have little impact on queries that scan entire tables.

    Chris,

    If you can afford it, and you think it is a reasonable approach with regard to the remaining segments that are supposed to benefit from the buffer cache, you could consider marking your table segment with 'CACHE', which changes the behavior of a full table scan on a large segment (Oracle treats small and large segments differently when deciding whether to cache buffers during a full table scan; you can override this treatment using the CACHE / NOCACHE keyword). Alternatively, move your fact table to a KEEP pool by establishing one (ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = <size>), modifying the segments accordingly (ALTER TABLE ... STORAGE (BUFFER_POOL KEEP)), and performing a full table scan to load the blocks into the KEEP cache.

    Note that the disadvantage of the KEEP pool approach is that you have less memory available for the default buffer cache (unless you add more memory to your system). An object marked as CACHE still competes with the other objects in the default buffer cache, so it could still be aged out (the same applies to the KEEP pool: if the segment is too large, or too many segments are assigned to it, blocks will age out there as well).
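
    Putting the KEEP pool suggestion together, the sequence might look like the following sketch (the 200M size is an arbitrary placeholder, not a recommendation, and the ALTER SYSTEM requires appropriate privileges):

    ```sql
    -- Carve out a KEEP pool (size is a placeholder).
    ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 200M;

    -- Assign the fact table's segment to the KEEP pool.
    ALTER TABLE PROCEDURE_FACT STORAGE (BUFFER_POOL KEEP);

    -- Warm the pool with a full table scan.
    SELECT /*+ FULL(PROCEDURE_FACT) */ COUNT(*) FROM PROCEDURE_FACT;
    ```

    Memory given to the KEEP pool comes out of what is available for the default cache, so size it against the segments you actually intend to keep.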

    So my question: how can I get a parallel scan for queries that use a full table scan, such as the one I posted in my previous message? Is it a matter of supplying the "parallel" hint, or is it an init.ora parameter I should try?

    You can use a PARALLEL hint in your statement:

    SELECT /*+ PARALLEL(PROCEDURE_FACT) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    or you could mark an object as PARALLEL in the dictionary:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL;
    

    Note that since you have 16 processors (or 16 cores that look like 32 to Oracle? Check the CPU_COUNT parameter), the default parallel degree would usually be 2 times 16 = 32, which means Oracle spawns at least 32 parallel slaves for a parallel operation (it could be another set of 32 slaves if the operation includes, for example, a GROUP BY), if you do not use the PARALLEL_ADAPTIVE_MULTI_USER parameter (which can reduce the parallelism if several parallel operations run concurrently).

    I recommend choosing a parallel degree lower than your default of 32, because usually you do not gain much from such a high degree; you can often get the same performance with a lower setting, like this:

    SELECT /*+ PARALLEL(PROCEDURE_FACT, 4) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    The same can be applied when marking the object as PARALLEL:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL 4;
    

    Note that when the object is defined as PARALLEL, many operations will be parallelized (even DML can run in parallel if you enable parallel DML, which has some special restrictions), so I recommend using it with caution and starting with an explicit hint in those statements where you know it will be useful.

    Also check that your PARALLEL_MAX_SERVERS is high enough when you use parallel operations; the default should be sufficient in your version of Oracle.
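
    As a quick sketch (not in the original reply), the parallel-related settings mentioned above can be inspected from v$parameter:

    ```sql
    -- Inspect the parameters that drive the default degree of parallelism
    -- and the size of the parallel slave pool.
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('cpu_count',
                    'parallel_threads_per_cpu',
                    'parallel_max_servers',
                    'parallel_adaptive_multi_user');
    ```

    With these values in hand you can sanity-check whether the default degree (roughly cpu_count * parallel_threads_per_cpu) and the server pool fit your workload.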

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Error: "The file size for all paging drives can be a little larger than the size you specified" when starting Windows in Windows 7

    Original title: paging notice when open windows on mac, divide the hard disk

    I have a hard drive split between Mac and Windows, with QuickBooks installed on the Windows side. When I start Windows, I get a paging file notice: "The file size for all paging drives can be a little larger than the specified size." I'm using Boot Camp for the Windows side. It is showing the memory as almost full, and I have not yet put much data in QB. I don't know what I need to do.

    Hello

    1. Did you make any software or hardware changes to the computer before this problem?

    2. Did it work correctly previously?

    3. How many hard drives are in the computer?

    4. Is the problem with a particular partition or hard drive?

    Try the steps from the link.

    Change the size of virtual memory

    http://Windows.Microsoft.com/en-us/Windows7/change-the-size-of-virtual-memory

    What is virtual memory?
    http://Windows.Microsoft.com/en-us/Windows7/what-is-virtual-memory

    Hope this information is useful.

  • "File <unspecified file name> is larger than the maximum size supported by datastore <unspecified datastore>"

    I know this issue has been discussed a lot in the forums, but the answers always say to make sure your block sizes are set to 8 MB; mine already are. Let me explain:

    I have a virtual machine with a large amount of attached storage: something along the lines of 10 x 1.99 TB disks. All the VMDKs sit on VMFS partitions with an 8 MB block size, including the VM's configuration (and pagefile location).

    Every time I try to snapshot the virtual machine, I see the "File <unspecified file name> is larger than the maximum size supported by datastore <unspecified datastore>" error. All other virtual machines snapshot fine, but no other VM has a similar amount of storage to the problem VM.

    I have now moved the virtual machine's configuration files to a new 1.91 TB VMFS 5 partition, but the snapshot error persists. Most of the drives are sitting on VMFS 3.33 or 3.46. It will take me a while to move everything to VMFS 5 to see if that solves the problem.

    VMware.log for VM reports:

    2011-10-09T09:55:55.328Z| vcpu-0|  DiskLibCreateCustom: Unsupported disk capacity or disk capacity (2186928627712 bytes) is too large for vmfs file size.
    2011-10-09T09:55:55.328Z| vcpu-0| DISKLIB-LIB   : Failed to create link: The destination file system does not support large files (12)
    2011-10-09T09:55:55.328Z| vcpu-0| SNAPSHOT: SnapshotBranchDisk: Failed to branch disk: '/vmfs/volumes/4dc30ba3-b13c5026-92d8-d485643a1de4/spoon-app/spoon-app_2.vmdk' -> '/vmfs/volumes/4dc30ba3-b13c5026-92d8-d485643a1de4/spoon-app/spoon-app_2-000001.vmdk' : The destination file system does not support large files (12)
    
    

    My VMDKs and volumes are smaller than 2032 GB. I don't understand why this would be a problem.

    Anyone have any ideas?

    Although ESXi 5 supports larger LUNs as physical raw devices (up to 64 TB), the maximum size of a virtual disk has not changed yet. Note that the log shows a requested capacity of 2186928627712 bytes, which is roughly 2036.7 GB, so that disk is in fact larger than the 2032 GB limit you mention.

    André

  • Fonts are larger than normal

    Some websites are not displaying correctly, unlike on the computer.
    While browsing some websites, some paragraph fonts are larger than normal.
    I am facing this problem on this site
    http://www.hackforums.NET

    Hey, please try setting the text size level to the smallest option in the Firefox for Android settings. For more information, see https://support.mozilla.org/en-US/questions/976944#answer-499625

  • The screen now pans as if the desktop surface were larger than the actual screen size. I checked the display options, but how do I turn off this oversized screen?

    Hello

    My mother had a family member try to help her increase the size, but it went wrong, and now her screen, in any program or even just the desktop, is larger than the screen itself. She can move the mouse to the left to bring the far left into view, and the same to the right, but she can never get the whole screen in view. How do I restore it to normal for her? I tried Control Panel - Display, but no luck... please help.

    Hi M Carmel,

    Thanks for posting in the Microsoft community.

    I understand that you are facing an issue with the screen display resolution.

    I suggest you to see the links and check.

    Change the resolution of your monitor

    http://Windows.Microsoft.com/en-us/Windows-XP/help/Setup/change-monitor-resolution

    Change your screen resolution

    http://Windows.Microsoft.com/en-us/Windows-XP/help/change-screen-resolution

    Please follow these recommended steps, review the additional information provided, and post back if you still experience the issue. I will be happy to provide additional options you can use to get this resolved.

Maybe you are looking for