Why are the blocks of temporary tables placed in the buffer cache?

I read the following statement, which seems quite plausible to me: "Oracle 7.3 and above generate db file sequential read events when a dedicated server process reads data from a temporary segment on disk. Older versions of Oracle would read temporary segment data into the database buffer cache using db file scattered reads. Recent releases exploit the heuristic that temporary segment data is unlikely to be shareable or revisited, so they read it directly into the server process's program global area (PGA)."

To verify this statement (and also for the pleasure of seeing one of these rare db file sequential read events), I ran a little experiment on my Oracle 10.2 on Linux (see below). Not only does it seem that, contrary to the statement above, the blocks of temporary tables are placed in the buffer cache, but V$BH.OBJD for these blocks does not refer to any existing object in the database (at least not one that is listed in DBA_OBJECTS). Incidentally, I traced the session and did not see any db file sequential read events.

So, I have the following questions:
(1) Are my experimental set-up and my conclusions correct (i.e. are blocks of temporary tables really placed in the buffer cache)?
(2) If so, what is the reason for placing blocks of temporary tables in the buffer cache? As these blocks contain session-private data, they cannot be reused by another session while they sit in the buffer cache. So why pay all the buffer-cache management costs of putting these blocks in the buffer cache (and possibly aging them out), rather than caching them in session-private memory?
(3) What does V$BH.OBJD refer to for blocks belonging to temporary tables?

Thanks for any help and information
Kind regards
Martin

The experiment I ran (on 10.2/Linux)
=============================
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Oct 24 22:25:07 2010

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create global temporary table temp_tab_4 on commit preserve rows as select * from dba_objects;

Table created.

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*), status from v$bh group by status order by 1 desc;

  COUNT(*) STATUS
---------- -------
      4208 free
         3 xcur

SQL> select count(*) from temp_tab_4;

  COUNT(*)
----------
     11417

SQL> -- NOW THE BUFFER CACHE CONTAINS USED BLOCKS, THERE WAS NO OTHER ACTIVITY ON THE DATABASE
SQL> select count(*), status from v$bh group by status order by 1 desc;

  COUNT(*) STATUS
---------- -------
      4060 free
       151 xcur

SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD HAVE BLOCK# THAT CORRESPOND TO THE TEMPORARY SEGMENT DISPLAYED
SQL> -- IN V$TEMPSEG_USAGE
SQL> select count(*), status, objd from v$bh where status != 'free' group by status, objd order by 1 desc;

  COUNT(*) STATUS        OBJD
---------- ------- ----------
       145 xcur       4220937
         2 xcur           257
         2 xcur           237
         1 xcur           239
         1 xcur     4294967295

SQL> -- THE OBJECT REFERENCED BY THE NEWLY USED BLOCKS IS NOT LISTED IN DBA_OBJECTS
SQL> select * from dba_objects where object_id = 4220937 or data_object_id = 4220937;

no rows selected

SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD ARE MARKED AS TEMP IN V$BH
SQL> select distinct temp from v$bh where objd = 4220937;

T
-
Y

SQL> 
Edited by: user4530562 on 25.10.2010 01:12

Edited by: user4530562 on 25.10.2010 04:57

The reason for putting the blocks of global temporary tables into the buffer cache is the same reason you put ordinary table blocks into the buffer cache -> you want some of them to be in memory.

If you ask why we don't somehow keep temporary tables in the PGA - well, what happens if that temporary table grows to 50 GB? 32-bit platforms cannot even address that much, and you don't want a process becoming uncontrollably large.

Moreover, GTTs allow you to roll back, to a savepoint (or to the implicit savepoint taken when an error occurs during a DML call), and this requires undo protection. Placing rows/changes in the PGA would have complicated the implementation even further... as it is now, GTTs are almost regular tables which just happen to reside in temp files.

If you really want to keep data in the PGA only, then you can create PL/SQL collections and even access them through SQL (CAST(coll AS xyz_type)) where xyz_type is a TABLE OF some object type.
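As a concrete illustration of that collection trick, here is a hedged sketch (the type names num_row and num_tab are invented for the example; SET SERVEROUTPUT ON is needed to see the output):

```sql
CREATE TYPE num_row AS OBJECT (n NUMBER);
/
CREATE TYPE num_tab AS TABLE OF num_row;
/

DECLARE
  coll num_tab := num_tab(num_row(1), num_row(2), num_row(3));
  cnt  NUMBER;
BEGIN
  -- The collection lives in the session's PGA, yet SQL can still read it:
  SELECT COUNT(*) INTO cnt FROM TABLE(CAST(coll AS num_tab));
  DBMS_OUTPUT.PUT_LINE('rows: ' || cnt);
END;
/
```

Note that the object types must be created at schema level for SQL to see them; a type declared inside a PL/SQL package would not work with CAST in a query.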

--
Tanel Poder
New online seminars!
http://tech.e2sn.com/Oracle-training-seminars

Tags: Database

Similar Questions

  • Temporary tables being created by themselves, names starting with BIN$...

    Hello

    I use Oracle 10.2 and have a problem: when performing certain operations on existing tables, some temporary tables appear whose names start with BIN$.

    For ex: BIN$HULdSlmnRZmbCXAl/pkA9w==$0

    But I no longer see these tables in the database after two or three days. I found that these temporary tables are replicas of the existing tables and contain only the structure of the table, but not the data. I couldn't even drop them.

    Can someone shed some light on this?

    Thank you and best regards,
    Siva

    These tables are in Oracle's recycle bin. When you drop a table, Oracle just renames it to BIN$xxx so that you can undrop the table if you made a mistake.
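    The undrop Justin mentions can be sketched like this (table_name is a placeholder for the original table name):

```sql
-- Restore a dropped table from the recycle bin under its original name:
FLASHBACK TABLE table_name TO BEFORE DROP;

-- Or restore it under a new name if the original name is taken again:
FLASHBACK TABLE table_name TO BEFORE DROP RENAME TO table_name_restored;
```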

    - You can ignore tables in the recycle bin if you wish. Oracle purges them automatically when it needs the space.
    - You can prevent a table from going into the recycle bin by adding the PURGE keyword to the drop command, i.e.

    DROP TABLE table_name PURGE
    

    -You can empty the trash using the command

    SQL> purge dba_recyclebin
    

    (or just PURGE RECYCLEBIN if you only wish to purge objects you own from the recycle bin).

    Justin

  • Updated data is larger than the buffer cache

    Hi Experts,

    I have a small question. I have a table called CONTENT holding 12 GB of data. Now I fire a single update statement that updates 8 GB of the CONTENT table's data, using a 1 GB database buffer cache.

    How will 1 GB of database buffer cache be used to update 8 GB of data? At the architectural level, will anything additional happen (beyond the usual) when executing an update whose data is larger than the buffer cache?

    Could someone please respond. Thank you.

    Database: 10.2.0.5

    OS: Power 5-64 bit AIX system

    Hello

    the basic mechanism is the following:

    The data blocks that need to be updated are read from the data files and cached in memory (the buffer cache); the update is made in the buffer cache, the before-image (UNDO) is stored in the undo segments, and the operation (here, the update) is recorded in the redo buffer before it goes to the redo files. If the buffer cache is small, or more space is needed in it, or there is a checkpoint, etc., Oracle writes the modified blocks back to the data files to free buffer memory for more blocks.

    While the transaction runs, other transactions reading the updated data can read the before-image from UNDO. On commit, at the end of the transaction, the change is confirmed and the commit is recorded in the redo. On rollback, the before-image is restored at the end of the transaction, and the rollback is recorded in the redo as well.
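    One way to watch this mechanism from a second session while the large update runs; a minimal sketch assuming access to V$SYSSTAT (statistic names as in 10g):

```sql
-- Sample these before, during and after the big UPDATE; 'physical writes'
-- climbing while the UPDATE runs shows DBWR flushing dirty buffers to disk
-- to make room, exactly because the working set exceeds the cache.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical reads', 'physical writes',
                'redo size', 'undo change vector size');
```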

    Regards

  • Unexpected CR copies in the buffer cache

    Hello

    While trying to understand the mechanisms of the Oracle buffer cache, I ran a little experiment and observed an unexpected result. I believe that my expectation was wrong, and I would be grateful if someone could explain to me what I misunderstood.
    From what I understand, a consistent read (CR) copy of a buffer is created in the cache when older content of the buffer has to be read, for example to ignore the changes made by a not-yet-committed transaction when you query a table. I also thought that CR copies in the buffer cache can be reused by subsequent queries requiring a rolled-back image of the corresponding block.

    Now, I ran the following experiment on an (otherwise idle) 10.2 DB.
    1. I create a table BC_TEST (in a non-ASSM tablespace).
    -> V$BH shows one buffer A with status XCUR for this table - V$BH.CLASS# is 4, which according to various sources on the internet indicates a segment header.
    2. Session 1 inserts a row into the table (and does not commit).
    -> Now V$BH shows 8 buffers attached to the BC_TEST table. I think these are the blocks of an extent being allocated to the table (I expected just one data block to be loaded into the cache in addition to the header that was already there in step 1). There is still the buffer with CLASS# = 4 from step 1, one buffer B with status XCUR and CLASS# = 1, which according to various sources on the internet indicates a data block, and 6 further blocks with status FREE and CLASS# = 14 (this value is decoded differently in different sources on the internet).
    3. Session 2 issues a 'select * from bc_test'.
    -> V$BH shows 2 extra buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2. I understand that a consistent read copy needs to be made to undo the uncommitted changes from step 2 - however, I do not understand why *2* of these copies are created.
    Note: in a slight variation of the experiment, if I do not run 'select * from bc_test' in session 2 between steps 1 and 2, then I only get 1 CR copy in step 3 (as I expected).
    4. Session 2 issues 'select * from bc_test' again.
    -> V$BH shows yet another extra buffer with status CR and the same FILE#/BLOCK# as buffer B from step 2 (i.e. 3 such buffers in total). Here I don't understand why the query cannot reuse the CR copy already created in step 3 (which already shows buffer B without the changes of the uncommitted transaction from step 2).
    5. Session 2 issues 'select * from bc_test' repeatedly.
    --> The number of buffers with status CR and the same FILE#/BLOCK# as buffer B from step 2 increases by one with each additional query, up to a total of 5. After that, the number of these buffers remains constant for additional queries. However, various statistics of session 2 ('consistent gets', 'CR blocks created', 'consistent changes', 'data blocks consistent reads - undo records applied', 'no work - consistent read gets') suggest that session 2 continues to generate consistent read copies with each 'select * from bc_test' (perhaps the buffers in the buffer cache are simply reused from there on?).



    To summarize, I have the following questions:
    (I) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?
    (II) Why does the first select statement on the table (step 3) create 2 CR copies of the (single used) data block of the table (instead of one, as I expected)?
    (III) Why do further queries create more CR copies of this single data block (instead of reusing the CR copy created by the first select statement)?
    (IV) What limits the number of CR copies created to 5 (is there some parameter controlling this value, does it depend on some sizing of the cache, or is it simply hard-coded)?
    (V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    Thank you very much for any answer
    Kind regards
    Martin

    P.S. Please find below the protocol of my experiment

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> drop table bc_test;

    Table dropped.

    SQL> create table bc_test (col number(9)) tablespace local01;

    Table created.

    SQL> SELECT bh.file#, bh.block#, bh.class#, bh.status, bh.dirty, bh.temp, bh.ping, bh.stale, bh.direct, bh.new
      2  FROM v$bh bh, dba_objects o
      3  WHERE bh.objd = o.data_object_id
      4  AND o.object_name = 'BC_TEST'
      5  ORDER BY bh.block#;

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   Y N N N N N


    --------------------------------------------------
    Session 1
    --------------------------------------------------
    SQL> insert into bc_test values (1);

    1 row created.

    --------------------------------------------------
    Control session
    --------------------------------------------------

    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   Y N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    212     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N
        5    215     14 free   N N N N N N
        5    216     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------

    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
             28  recursive calls
              0  db block gets
             13  consistent gets
              0  physical reads
            172  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   N N N N N N
        5    211     14 free   N N N N N N
        5    212     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N
        5    214     14 free   N N N N N N

    8 rows selected.

    SQL>


    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N

    8 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 xcur   Y N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    211     14 free   N N N N N N
        5    213     14 free   N N N N N N

    9 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> select * from bc_test;

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 xcur   Y N N N N N

    7 rows selected.

    --------------------------------------------------
    Session 2
    --------------------------------------------------
    SQL> /

    no rows selected


    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
            108  redo size
            272  bytes sent via SQL*Net to client
            374  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

    SQL>

    --------------------------------------------------
    Control session
    --------------------------------------------------
    SQL> /

    FILE# BLOCK# CLASS# STATUS D T P S D N
        5    209      4 xcur   N N N N N N
        5    210      1 xcur   Y N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N
        5    210      1 cr     N N N N N N

    7 rows selected.

    What version of 10.2, on what platform, and is RAC enabled? What exactly was the tablespace definition?

    (I) Why does the insertion of a single row (in step 2) load 8 blocks into the buffer cache - and what does CLASS# = 14 indicate?

    It sounds like you may have formatted all of the first extent - assuming that you are using 8 KB blocks and system-allocated extents. But that didn't happen when I checked 10.2.0.3 on a single-instance version of Oracle.

    Class 14 is reported as "unused" in all versions of Oracle that I have seen. This would be consistent with the blocks being formatted but not yet brought below the high water mark. You could 'alter system dump datafile N block min X block max Y' to get the segment header block, the used block and an unused block into the dump.

    (II) Why does the first select statement on the table (step 3) create 2 CR copies of the (single used) data block of the table (instead of one, as I expected)?

    Maybe the first copy cleans out the uncommitted transaction and the second copy clones the result to roll it back to a given SCN - but that's just a guess; I had not noticed this behaviour before, so I'd have to do some experiments to find out why it's happening.

    (III) Why do further queries create more CR copies of this single data block (instead of reusing the CR copy created by the first select statement)?

    The first CR block you create represents a moment earlier than the beginning of the second query, so your session has to start again from the current block. If you fix the session's SCN (for example with 'set transaction read only') before the first query, I believe you would see that the number of CR copies does not increase after the initial creation.
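    Jonathan's read-only suggestion could be tried like this in session 2 (bc_test as in the experiment above; a sketch, not verified on every version):

```sql
SET TRANSACTION READ ONLY;   -- fixes the session's read SCN

SELECT * FROM bc_test;       -- a CR copy may still be built here
SELECT * FROM bc_test;       -- repeats read as of the same SCN, so the
                             -- CR copy count should no longer grow

COMMIT;                      -- ends the read-only transaction
```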

    (IV) What limits the number of CR copies created to 5 (is there some parameter controlling this value, does it depend on some sizing of the cache, or is it simply hard-coded)?

    There is a hidden parameter _db_block_max_cr_dba which, by default, is 6. The code doesn't seem to obey this limit rigorously, but generally you will see only 6 copies of a block in the buffer cache.
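    For reference, a hedged way to inspect that hidden parameter (connect as SYS; the x$ structures are undocumented and can change between versions):

```sql
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_db_block_max_cr_dba';
```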

    (V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

    There are at least two options - (a) a session needs to see a copy of a block with recent changes removed, either because there are uncommitted transactions, or because the session needs to see the block as it was when it started running a particular query or transaction; (b) a session wants to change a block while another holds the current version - and in certain circumstances (possibly when the update arrives via a scan) it will make a copy of the block, mark the previous copy as CR, mark the new copy as xcur, and change the new copy.

    Incidentally, xcur is a RAC status - it is possible to have blocks in the buffer cache which are xcur but not "gotten in current mode".

    Regards
    Jonathan Lewis

  • Basic tuning question about the buffer cache

    Database: Oracle 10g
    Host: Sun Solaris, 16 CPU server



    I am looking at the behavior of some simple queries as I start tuning our data warehouse.

    Using SQL*Plus and AUTOTRACE, I ran this query twice in a row

    SELECT *
    FROM PROCEDURE_FACT
    WHERE PROC_FACT_ID BETWEEN 100000 AND 200000

    It used the index on PROC_FACT_ID and did an index range scan to access the data in the table by rowid. The first time it ran, there were about 600 physical block reads, as the table data was not in the buffer cache. The second time, there were 0 physical block reads, because the blocks were all in the cache. All this was expected behavior.

    Then I ran this query twice as well,

    SELECT DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD

    As expected, it did a full table scan, since there is no index on DATA_SOURCE_CD, and then hashed the results to find the distinct DATA_SOURCE_CD values. The first run had these results

    consistent gets 190496
    physical reads 169696

    The second run had these results

    consistent gets 190496
    physical reads 170248


    NOT what I expected. I would have thought that the second run would find many of the blocks already in the buffer cache from the first execution, so that the number of physical reads would drop significantly.

    Any help to understand this would be greatly appreciated.

    And is there something that can be done to keep the table PROCEDURE_FACT (the central table of our star schema) "pinned" in the buffer cache?

    Thanks in advance.

    - Chris Curzon

    Christopher Curzon wrote:
    Your comment about the buffer cache being of benefit mostly to smaller objects is something I wondered about a good deal. It sounds as if tuning the buffer cache will have little impact on queries that scan entire tables.

    Chris,

    If you can afford it, and you think it is a reasonable approach with regard to the remaining segments that are supposed to benefit from the buffer cache, you could consider marking your table segment with 'CACHE', which changes the behaviour of full table scans on a large segment (Oracle treats small and large segments differently when performing full table scans as far as caching of the buffers is concerned; you can override this treatment using the CACHE/NOCACHE keyword), or move your fact table to a KEEP pool by establishing one (ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = <size>), altering the segment accordingly (ALTER TABLE ... STORAGE (BUFFER_POOL KEEP)) and performing a full table scan to load the blocks into the KEEP cache.

    Note that the disadvantage of the KEEP pool approach is that you have less memory available for the default buffer cache (unless you add more memory to your system). An object marked as CACHE still competes with the other objects in the default buffer cache, so it could still be aged out (the same applies to the KEEP pool: if the segment is too large, or too many segments are assigned to it, blocks will age out as well).
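    Randolf's KEEP-pool steps could be sketched as follows (the 512M size is purely illustrative and must fit within your SGA):

```sql
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 512M;

ALTER TABLE PROCEDURE_FACT STORAGE (BUFFER_POOL KEEP);

-- Warm the KEEP pool with one full scan:
SELECT /*+ FULL(PROCEDURE_FACT) */ COUNT(*) FROM PROCEDURE_FACT;
```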

    So my question: how can I get a parallel scan for queries that do a full table scan, such as the one I posted in my previous email? Is it a matter of providing the "parallel" hint, or is there an init.ora parameter I should try?

    You can use a PARALLEL hint in your statement:

    SELECT /*+ PARALLEL(PROCEDURE_FACT) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    or you could mark an object as PARALLEL in the dictionary:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL;
    

    Note that since you have 16 processors (or 16 cores, which Oracle 10.2 sees as 32? Check the CPU_COUNT parameter), the default parallel degree would usually be 2 times 16 = 32, which means that Oracle allocates at least 32 parallel slaves for a parallel operation (it could be another set of 32 slaves if the operation includes e.g. a GROUP BY operation) if you do not use the PARALLEL_ADAPTIVE_MULTI_USER parameter (which reduces the parallelism if several parallel operations run concurrently).

    I recommend choosing a parallel degree lower than your default of 32, because you usually don't gain much beyond a certain degree, and you may get the same performance with a lower setting like this:

    SELECT /*+ PARALLEL(PROCEDURE_FACT, 4) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    The same could be applied to parallelizing at the object level:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL 4;
    

    Note that when you define the object as PARALLEL, many operations will be parallelized (even DML can be run in parallel, if you enable parallel DML, which has some special restrictions), so I recommend using it with caution and beginning with an explicit hint in those statements where you know it will be useful.
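    For completeness, a hedged sketch of the parallel DML case Randolf mentions (LOAD_DATE is a hypothetical column; parallel DML must be enabled per session, and the session must commit before it can query the modified table again):

```sql
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(PROCEDURE_FACT, 4) */ PROCEDURE_FACT
SET    LOAD_DATE = SYSDATE;

COMMIT;
```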

    Also check that your PARALLEL_MAX_SERVERS is high enough when you use parallel operations, which should be the case in your version of Oracle.

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Shared pool larger than the buffer cache

    Hi all

    My database is 10.2.0.4 running on the Linux platform.

    No. of CPUs: 2, RAM: 2 GB

    SGA_TARGET was set to 1 GB.

    Initially the memory was configured with a shared pool of around 300 MB and a buffer cache of about 600 MB.

    When I queried the v$sga_resize_ops view I found some interesting results.

    Many grow and shrink operations had happened, and the current size of the shared pool is about 600 MB while the buffer cache is 300 MB (this happened during the last 1).

    I would guess that the buffer cache should always be larger than the shared pool. Is my assumption right?

    Is it because of SQL code not using bind variables, causing the shared pool to grow? Reloads and invalidations are almost nil, so I think that should not be the case.

    Also, no latch events are listed in the top 5.

    I've also seen that 15% of the shared pool is marked as KGH: NO ACCESS, which means that part is being used for the buffer cache.

    Should I set lower limits for the shared pool and the buffer cache, or can I just ignore this?

    Thank you
    rajdhanvi

    You are changing your question now... your original question was whether a shared pool size > buffer cache is acceptable... check your own second post... your new question now is why the shared pool keeps growing and is partly used as buffer cache... the proof is given by Tanel Poder of what happens when ASMM is used... As for KGH: NO ACCESS, it means that no one else may touch that part...

    Regards
    Karan

  • What is in the buffer cache

    Hello members,

    I'm trying to understand why some batch jobs repeatedly do not do the PIO I expect.

    One thing keeps popping up: is it possible to see what is in the buffer cache? I can query v$, but not x$. Version is 10.2.0.3.

    Something along the lines of which segment has how many blocks in the buffer cache, like
    SEGMENT_NAME                     BLOCKS
    ---------------------------- ----------
    SYS_IL0006063998C00032$$             18
    SYS_C0077916                        108
    STA_BONUS_REMIT                     800
    STABONREM_PK                        216
    Regards
    Peter

    v$bh.objd = dba_objects.data_object_id
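    Using that join, the per-segment block count Peter asked for could be approximated like this (run with DBA privileges; a sketch, since one OBJD can occasionally map to more than one row in edge cases):

```sql
SELECT o.object_name AS segment_name, COUNT(*) AS blocks
FROM   v$bh bh, dba_objects o
WHERE  bh.objd   = o.data_object_id
AND    bh.status != 'free'
GROUP  BY o.object_name
ORDER  BY blocks DESC;
```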

    -----------
    Sybrand Bakker
    Senior Oracle DBA

  • Why would the DNS cache have negative entries when connecting?

    Hello

    Thanks in advance for reading this.

    I have a problem where the DNS cache contains negative entries for our Exchange server, the file/print server, and even some of the domain controllers.  I tried to completely flush the cache using the ipconfig /flushdns command, and then view the result with the ipconfig /displaydns command, which leaves only the local host entries.  After I reboot the machine, the negative entries are there once again and I cannot ping the Exchange or file/print servers.  Nslookup works fine however and resolves the names without problem.  When I flush again and then try pinging, it works for a while, but then the negative entries appear again while I'm connected.  What makes it more interesting is that it is not always the same negative records that appear.

    Things I tried that did not help:

    1. Rebuilt the client laptop using different hardware.  This leads me to believe that this isn't virus/spyware-related, since it happened immediately after setting up a completely new laptop with a fresh build of Windows XP.
    2. Added ipconfig /flushdns to the logon script.

    Info about my environment:

    1. We have all our servers in one location at the main office, and all branches are connected via site-to-site VPN.
    2. The client machine is joined to the domain and is running Windows XP SP3.
    3. This issue is not present at the main site, only in the branches.
    4. The branches use SonicWall TZ 200 units for DHCP.
    5. It may be a coincidence, but this problem seems to have happened since we went from SonicWall TZ 170s to TZ 200s,
    but I do not see how that would affect the cache.
    6. I emptied the cache, and then minutes later the negative entries appeared again even though pings had worked a few minutes earlier.  It doesn't seem to be a physical problem because my RDP session is never affected while I'm troubleshooting.

    If anyone has any idea as to why negative cache entries appear when the physical network is fine, and when nslookup has no problem finding the servers, I would be very grateful for your input.

    Hi ethorneloe,

    Since your question involves the use of Exchange Server, please join the TechNet community for assistance. They specialize in this type of Pro environment and will be better suited to help you.

    TechNet Forum

    http://social.technet.Microsoft.com/forums/en-us/category/ExchangeServer/

  • Why the police Cache Service Windows won't start?

    The Windows Font Cache Service will not start; it gets stuck starting, so I have to turn it off so that it does not use 25% CPU.

    Windows 7 Home Premium 64-bit

    I am aware that other users also have this problem.

    Hi bmxjumperc,

    Try running an online virus scan and changing the priority of the service's process in Task Manager. To do this, try the following steps.
    Step 1: Online virus scan
    a. visit http://onecare.live.com/site/en-us/center/whatsnew.htm
    b. click "Full Service Scan".
    c. scan the computer, remove any infected files, and check the result.

    Step 2: Change the priority of the service's process
    a. right-click on the task bar and click Start Task Manager.
    b. click on the Services tab, right-click the FontCache entry and click 'Go to Process'.
    c. on the Processes tab, right-click the process, click Set Priority and click Low.
    d. click OK to apply the changes and check the result.

    Visit our Microsoft answers feedback Forum and let us know what you think.

  • Why does index creation use non-temporary tables?

    Looking into R-Tree index creation, I noticed that all the staging tables are created as permanent tables instead of temporary tables.

    Why does spatial index creation do this?

    Creating the staging tables as temporary tables should significantly reduce overall creation time - what am I missing here?

    Bryan

    Temporary tables are session-private. In other words, different parallel slave sessions may not see the same temporary table data.
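
    A quick sketch of that session-privacy (hypothetical table name; run each half in a separate SQL*Plus session):

    ```sql
    -- Session 1
    CREATE GLOBAL TEMPORARY TABLE stage_gtt (id NUMBER) ON COMMIT PRESERVE ROWS;
    INSERT INTO stage_gtt VALUES (1);
    SELECT COUNT(*) FROM stage_gtt;   -- 1 row in this session

    -- Session 2 (e.g. a parallel slave): same table definition, different data
    SELECT COUNT(*) FROM stage_gtt;   -- 0 rows: session 1's data is invisible here
    ```

    This is why parallel slaves, which are separate sessions, cannot share staging data through a temporary table.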

  • Are global temporary tables a standard feature of Oracle?

    I apologize for introducing me to this community with what must seem like a very stupid question...

    I am a software developer, working on a product that uses Oracle as its database, specifically Oracle 11g Enterprise Edition. Recently, I solved a performance problem by converting an ordinary table into a global temporary table. Before my boss allows me to put this change into the product, he wants to be sure that global temporary tables are a standard part of Oracle, not something that the customer must install separately or pay extra for. (This is the first time we have ever used them in our product, so I think most of the team is not familiar with them.)

    I know that Oracle has had global temporary tables since the last millennium, so if they were ever a premium feature, they are unlikely to be one now, but the boss wants me to get independent confirmation of this.

    Thank you.

    Steve Pemberton

    Here you can see "feature availability by Edition":

    http://docs.Oracle.com/CD/E11882_01/license.112/e47877/editions.htm#DBLIC116

    GTT tables are not even mentioned there, which means that they do not belong to the functionality that is paid for separately.

    One caveat - if you have an application that uses connection pooling, it is recommended to use ON COMMIT DELETE ROWS, not ON COMMIT PRESERVE ROWS

    (or to always issue an explicit "DELETE gtt_table" at the beginning), because otherwise one application user can see GTT data that another application user previously inserted.
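
    A minimal sketch of the ON COMMIT DELETE ROWS behaviour described above (hypothetical table name):

    ```sql
    -- Rows vanish at COMMIT, so a pooled connection handed to the next
    -- application user cannot expose the previous user's data.
    CREATE GLOBAL TEMPORARY TABLE gtt_table (val NUMBER) ON COMMIT DELETE ROWS;

    INSERT INTO gtt_table VALUES (42);
    SELECT COUNT(*) FROM gtt_table;   -- 1: visible inside the transaction
    COMMIT;
    SELECT COUNT(*) FROM gtt_table;   -- 0: cleared by the commit
    ```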

    Kind regards

    Zlatko

  • global temporary table in oracle

    HI all,
    I want to know more about GTTs in Oracle: why we use them, and why we can't use a heap table (which we can truncate and reuse) instead of a GTT. Please help me.

    The data in a temporary table is private to the session that created it, and can be session-specific or transaction-specific. If the data should be deleted at the end of the transaction, the table should be defined as follows:

    CREATE GLOBAL TEMPORARY TABLE my_temp_table (column1 NUMBER, column2 NUMBER) ON COMMIT DELETE ROWS;

    If, on the other hand, the data should be kept until the session ends, it should be defined as follows:

    CREATE GLOBAL TEMPORARY TABLE my_temp_table (column1 NUMBER, column2 NUMBER) ON COMMIT PRESERVE ROWS;

    Miscellaneous characteristics
    1. If a TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated. There is no impact on the data of other sessions.
    2. Data in temporary tables is automatically deleted at the end of the database session, even if it ends abnormally.
    3. Indexes can be created on temporary tables. The content and scope of the index are the same as for the session's data.
    4. Views can be created against temporary tables and against combinations of temporary and permanent tables.
    5. Temporary tables can have triggers associated with them.
    6. Export and import utilities can be used to transfer the table definitions, but no data rows are processed.
    7. There are a number of restrictions related to temporary tables, but they are version-specific.
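
    Points 1 and 3 above can be sketched like this (assuming the my_temp_table definition shown earlier):

    ```sql
    -- point 3: indexes are allowed on temporary tables
    CREATE INDEX my_temp_idx ON my_temp_table (column1);

    -- point 1: TRUNCATE only clears the current session's rows;
    -- other sessions' data in the same GTT is untouched
    INSERT INTO my_temp_table VALUES (1, 2);
    TRUNCATE TABLE my_temp_table;
    ```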

  • Temporary tables in stored procedure

    Hello

    I am writing a stored procedure that gathers data from different sources and finally generates a spreadsheet. An initial select gets the basic data, and subsequent selects then merge in the rest of the data.

    To do this, I created a table in the database; I fill data into this table using the same procedure and finally select the data from this table to generate the worksheet.

    Now, I plan to use a TEMPORARY table instead of creating a database table. Can someone tell me where I can find examples for temporary tables?

    Which is the better option performance-wise?

    or

    can I handle the whole scenario with a cursor? Examples?

    Hello

    Why can't you use an ordinary table?

    Search for [Global Temporary | http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_7002.htm#sthref7247] in the documentation, including the SQL Language manual, as an alternative.

    'Temporary' applies only to the data. A global temporary table is created once and remains until you DROP it, the same as any other table.
    The data in the table is temporary. If you create the table saying "ON COMMIT PRESERVE ROWS" (which seems appropriate, given your description), the data will be automatically deleted when you end the database session.

    All data in global temporary tables is session-specific. If two (or more) people use the same table at the same time, each one will see only the data they inserted themselves; they'll never see rows inserted by the other session.

    Almost everything you can do with an ordinary table, you can do with a global temporary table. In particular, DML (such as MERGE) and cursors work exactly as they do on other tables.
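
    For the spreadsheet scenario, a rough sketch could look like this (all table and procedure names are hypothetical; the GTT is created once, outside the procedure):

    ```sql
    CREATE GLOBAL TEMPORARY TABLE report_stage (
      item_id  NUMBER,
      amount   NUMBER
    ) ON COMMIT PRESERVE ROWS;

    CREATE OR REPLACE PROCEDURE build_report AS
    BEGIN
      DELETE FROM report_stage;                       -- start clean for this session
      INSERT INTO report_stage
        SELECT item_id, amount FROM source_table_1;   -- initial select: basic data
      MERGE INTO report_stage s                       -- merge further data, as with any table
        USING source_table_2 t ON (s.item_id = t.item_id)
        WHEN MATCHED THEN UPDATE SET s.amount = s.amount + t.amount;
      -- ... then select from report_stage to build the worksheet
    END;
    /
    ```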

  • using temporary Tables

    Hello

    I use a temporary table in my procedure to insert and select data. I used the ON COMMIT PRESERVE ROWS option, but the data in my temporary table is not deleted, even though I explicitly close the connection from my ASP.NET page. So for now I use DELETE (on the temporary table) to remove the data before inserting the new set of records.
    I would like to know why this happens.

    Kind regards

    Hello

    ODP.NET uses connection pooling by default, so the connection is not actually closed when you call Close/Dispose in your application. This is a limitation of the OCI layer: you cannot restore an existing connection to a 'blank' state... that is to say, package state, temporary tables, etc.

    Unfortunately, this makes temporary tables a bit difficult to work with. You can either
    (1) use a transaction in your ODP.NET code together with ON COMMIT DELETE ROWS, rather than ON COMMIT PRESERVE ROWS, to keep the data available until you're done with it, or
    (2) disable connection pooling by setting Pooling=false in your connection string, or
    (3) explicitly clear the table yourself.

    Cheers,
    Greg
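
    The SQL side of options (1) and (3) above, as a sketch (hypothetical table name):

    ```sql
    -- Option (1): define the GTT so that a COMMIT clears it; the pooled
    -- connection then goes back to the pool with no leftover rows.
    CREATE GLOBAL TEMPORARY TABLE work_gtt (val NUMBER) ON COMMIT DELETE ROWS;

    -- Option (3): with ON COMMIT PRESERVE ROWS instead, clear the table
    -- yourself at the start (or end) of each unit of work:
    DELETE FROM work_gtt;
    ```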

  • Temporary tables

    Hi guys,

    Please can you advise if we can create indexes and constraints on temp tables, and also whether we can grant or revoke access to them, or whether someone with the schema passwords can access that particular schema's TEMP tables.

    Cheers,
    Shaz

    The fact is that, more often than not, the use of temporary tables in Oracle is misguided: not all databases are the same, and operations that tend toward the use of temporary tables in SQL Server and Sybase normally do not require the same approach in Oracle, where a single big SQL statement is often preferred.

    They do not create redo

    Not quite true - the use of temporary tables generates undo, and undo generates redo.

    Please can you advise if we can create indexes and constraints on Temp Tables, and also whether we can grant or revoke access to them, or whether anyone with the schema passwords can access that particular schema's TEMP tables.

    The question is far from clear.

    The content of a temporary table is private.
    Multiple sessions can use a temporary table at the same time, but each session's use of the temporary table is private to that session.
    You can create indexes and constraints.
    Privileges on a temporary table are controlled just as for any other object.

    Edited by: Dom Brooks on May 8, 2012 16:09
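
    A sketch of the privileges point (hypothetical names; grants on a GTT work as for a permanent table, while each grantee still sees only its own session's rows):

    ```sql
    CREATE GLOBAL TEMPORARY TABLE shaz_gtt (id NUMBER) ON COMMIT PRESERVE ROWS;

    -- The owner controls access exactly as for any other object:
    GRANT SELECT, INSERT ON shaz_gtt TO some_other_user;
    REVOKE INSERT ON shaz_gtt FROM some_other_user;
    ```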
