Oracle read consistency solution

Hi members of the group.

I'm struggling to understand the principles of Oracle read consistency. For example:

Suppose I run a SELECT query, and while the SELECT has read half of the table, another session runs an UPDATE that changes the last rows of the table and commits.

What will the SELECT return? The Oracle books say that when a query starts, it uses a single point in time. If the query encounters a data block with a different SCN, it reads the block's prior image from the undo segments. My question is: does each data block hold an SCN? If not, how does Oracle determine that a data block has been modified?
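To make that concrete, here is a toy Python model of the logic being asked about (purely an illustration, not Oracle's actual implementation; the class and function names are invented):

```python
# Toy model of statement-level read consistency (NOT Oracle internals).
# Each "block" carries the SCN of its last change plus a chain of prior
# images, standing in for the undo segment.

class Block:
    def __init__(self, rows, scn):
        self.rows = rows      # current contents of the block
        self.scn = scn        # SCN of the last change to this block
        self.undo = []        # (old_scn, old_rows) pairs, newest first

    def update(self, new_rows, new_scn):
        # Save the prior image before changing the block, like undo does.
        self.undo.insert(0, (self.scn, list(self.rows)))
        self.rows, self.scn = new_rows, new_scn

def consistent_read(block, query_scn):
    """Return the block image as of query_scn, rolling back via undo."""
    rows, scn = block.rows, block.scn
    for old_scn, old_rows in block.undo:
        if scn <= query_scn:       # image is old enough: use it
            break
        rows, scn = old_rows, old_scn
    if scn > query_scn:
        raise RuntimeError("ORA-01555: snapshot too old (undo exhausted)")
    return rows

# A SELECT starts at SCN 100; another session updates the block and
# commits at SCN 105 while the SELECT is still running.
b = Block(["old row"], scn=90)
b.update(["new row"], new_scn=105)
print(consistent_read(b, query_scn=100))  # -> ['old row']
```

Conceptually the answer to the question is yes: a block header records the SCN of its last change, and the query compares that with its own snapshot SCN to decide whether the current image is usable or an older image must be rebuilt from undo.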

Hello

To save time, go through this link:
ftp://ftp.software.ibm.com/software/data/pubs/papers/readconsistency.pdf

It explains read consistency and what to expect.

-Pavan Kumar N

Tags: Database

Similar Questions

  • Read consistency (data blocks and SCN)

    Hi experts,

    When a SELECT statement is issued, the SCN of the query is determined. Then blocks with a higher SCN are reconstructed from the rollback segments... that is what I know up to this point.

    Now, say I'm user01, following the read-consistency model:

    I have an employee table in my database. I issued an UPDATE at SCN 1005 and updated two blocks, so the data is now changed.
    I then issued a SELECT (its SCN is 1010), and all blocks with an SCN less than that are displayed to me - so all the blocks, including my updated data, will be displayed.

    Now user02 started a session and issued a SELECT on the same table, and its SCN was 1012.
    Once again, all the blocks with an SCN less than 1012 will appear - so will he also see my updated data? That cannot be right, because I have not committed my changes...

    So how is read consistency maintained here?
    How does this work?

    Secondly, my SELECT returns database blocks... how does the SELECT come to know whether a given block's changes are committed or uncommitted?

    Is it that when I fire the SELECT, it first checks the SCN difference, and then checks whether the block's changes are committed or not? Am I wrong?

    And when I COMMIT, is that commit SCN written to the data file and the control file, and recorded in the redo log?

    Thank you
    Philippe

    Kamy wrote:

    I found the answer I wanted. I got my exact answer in the Oracle 11.2 Concepts guide; on page 181 a paragraph says:

    *Read Consistency and Transaction Tables: the database also uses a transaction table,
    also called an interested transaction list (ITL), to determine if a transaction was
    uncommitted when the database began modifying the block. The block header of
    every segment block contains a transaction table.*

    But sir, I have a doubt: are the transaction table and the ITL two different structures?

    You have reason to doubt.
    For people who have known these things for a long time, the "transaction table" is the name given to the structure in an undo segment header block that holds a list of recent transactions that have used that undo segment to store their undo records.

    The "interested transaction list" is the name given to a structure that exists in every normal data block (including index blocks) and holds a list of recent transactions that have modified that block. It was very silly of the author of this page to decide to reuse the name of one structure as an alias for the other.

    Regards
    Jonathan Lewis
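    As a purely illustrative sketch of that distinction (toy Python, not Oracle's on-disk structures; all names are invented):

```python
# Two DIFFERENT structures that the Concepts manual conflated:
#  - the transaction table, in an UNDO SEGMENT HEADER block, listing
#    recent transactions that stored their undo in that segment;
#  - the interested transaction list (ITL), in EVERY data/index block,
#    listing recent transactions that modified THAT block.

from dataclasses import dataclass, field

@dataclass
class UndoSegmentHeader:
    # slot number -> (transaction id, committed?, commit SCN)
    transaction_table: dict = field(default_factory=dict)

@dataclass
class DataBlock:
    rows: list
    # each ITL entry points back to a transaction-table slot
    itl: list = field(default_factory=list)

undo_hdr = UndoSegmentHeader()
blk = DataBlock(rows=["row 1"])

# Transaction "7.3.1052" updates blk: one slot in the undo segment's
# transaction table, one ITL entry in the modified block.
undo_hdr.transaction_table[3] = ("7.3.1052", False, None)
blk.itl.append({"xid": "7.3.1052", "undo_slot": 3})

# A reader of blk follows the ITL entry to the transaction table and
# learns that the transaction is still uncommitted.
_, committed, _ = undo_hdr.transaction_table[blk.itl[0]["undo_slot"]]
print(committed)  # -> False
```

The point of the sketch: the ITL lives in the modified block, while commit status is resolved through the undo segment header's transaction table.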

  • Oracle read-ahead: not the same as prefetching, yes?

    Hello

    I have tried to understand Oracle's read-ahead mechanisms, and I think I understand how they work, but I would like to confirm that I'm not mistaken.


    From what I managed to gather from the Oracle documentation, the read-ahead mechanism is mostly used for sequential access to data blocks - for example, when performing full table scans.

    In this case, Oracle will use a physical operation called a scattered read.
    The scattered read, in turn, is configured by an init.ora parameter named db_file_multiblock_read_count. A multiblock read count of 60 will make the scattered read operation read 60 sequential data blocks from disk, starting with the data block that was requested by the operation.

    From what I understand, this is the basis of the read-ahead mechanism: it uses multiblock read operations rather than single-block reads.

    If what I've said so far is correct, there are a few things I'd like to know:

    1.
    These 60 blocks read by the scattered read - do they go directly into the database buffer cache? Or are they all first loaded into some lower-level cache, such as a "scattered read cache"?

    For example, if the table scan requested block 1, and the scattered read operation retrieved blocks 1 to 60 from the database, I would guess that only block 1 is placed in the buffer cache. Later, when the table scan requests block 2, the buffer cache would fetch that block from the scattered read cache. Or is that completely wrong - when the scattered read gets blocks 1 to 60, does it put all of the blocks into the buffer cache at once, at arbitrary positions in the main buffer?


    2.
    With Oracle read-ahead, do table scans and other such physical operations always incur a scattered read wait periodically? For example, every 60 blocks, does the table scan run into another scattered read wait?

    3.
    Does Oracle's read-ahead anticipate which database blocks a physical operator will need at a given point in time? I.e., the following does not occur:
    while the table scan is currently on block 50, with another 10 blocks left in the database buffer, a background process decides it is a good idea to go fetch another 50 blocks of data, so that the foreground thread processing the table scan never has to stop on a read I/O.

    That would be true prefetching, and it is not a feature of Oracle, is it?

    Thank you for your understanding.

    -------------------

    I did a few tests where the size of the source table for a data transformation stays the same (1 GB), but the time I take to read from the source table varies. The processing time for each record increased considerably from one test to the next. But I noticed that the read wait time of the data transformation remained constant, even though the CPU time increased considerably. The increase in CPU time means the time until the next buffer is requested increased, which would give the system more time to read ahead. Eventually, given enough CPU time, if the read I/O were executed in anticipation, it would theoretically be possible to drive the read wait time to 0. I know that is not going to happen; I just wanted to be certain of how read-ahead works.


    My best regards.

    sono99 wrote:
    Hello

    I have tried to understand Oracle's read-ahead mechanisms, and I think I understand how they work, but I would like to confirm that I'm not mistaken.

    From what I managed to gather from the Oracle documentation, the read-ahead mechanism is mostly used for sequential access to data blocks - for example, when performing full table scans.

    In this case, Oracle will use a physical operation called a scattered read.

    The scattered read, in turn, is configured by an init.ora parameter named db_file_multiblock_read_count. A multiblock read count of 60 will make the scattered read operation read 60 sequential data blocks from disk, starting with the data block that was requested by the operation.

    From what I understand, this is the basis of the read-ahead mechanism: it uses multiblock read operations rather than single-block reads.

    I don't know whether you mean reading several blocks at a time, or the read-ahead cache features provided by some storage devices. But it seems to me you are talking about MBRC (multiblock read count), not the read-ahead cache provided by storage vendors.

    If what I've said so far is correct. There are two things I'd like to know:

    1.
    These 60 blocks read by the scattered read - do they go directly into the database buffer cache? Or are they all first loaded into some lower-level cache, such as a "scattered read cache"?

    For example, if the table scan requested block 1, and the scattered read operation retrieved blocks 1 to 60 from the database, I would guess that only block 1 is placed in the buffer cache. Later, when the table scan requests block 2, the buffer cache would fetch that block from the scattered read cache. Or is that completely wrong - when the scattered read gets blocks 1 to 60, does it put all of the blocks into the buffer cache at once, at arbitrary positions in the main buffer?

    If my MBRC is 60, none of the blocks are already in a buffer, and the table is larger than 60 blocks, then all 60 blocks are transferred to the buffer cache. There is nothing called a "scattered read cache" in the Oracle architecture.
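    To put rough numbers on that, a back-of-envelope sketch (assuming an entirely uncached table and ignoring extent boundaries):

```python
# Rough arithmetic only: a full scan of an uncached table issues about
# ceil(blocks / MBRC) multiblock I/O calls, each one surfacing as a
# 'db file scattered read' wait.

import math

def scattered_reads(table_blocks: int, mbrc: int) -> int:
    return math.ceil(table_blocks / mbrc)

print(scattered_reads(1200, 60))  # -> 20: a 1200-block table, MBRC 60
print(scattered_reads(61, 60))    # -> 2: one full read plus one for the tail
```

In practice the count is higher when some blocks are already cached or a read hits an extent boundary, but the order of magnitude is the same.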

    2.
    With Oracle read-ahead, do table scans and other such physical operations always incur a scattered read wait periodically? For example, every 60 blocks, does the table scan run into another scattered read wait?

    Yes - a "db file scattered read" wait event means that another physical I/O was issued to fetch the next set of blocks.

    3.
    Does Oracle's read-ahead anticipate which database blocks a physical operator will need at a given point in time? I.e., the following does not occur:
    while the table scan is currently on block 50, with another 10 blocks left in the database buffer, a background process decides it is a good idea to go fetch another 50 blocks of data, so that the foreground thread processing the table scan never has to stop on a read I/O.

    That would be true prefetching, and it is not a feature of Oracle, is it?

    Right - I don't think it is a feature of Oracle; some storage vendors provide it. It is also known as read-ahead.

    Thank you for your understanding.

    -------------------

    I did a few tests where the size of the source table for a data transformation stays the same (1 GB), but the time I take to read from the source table varies. The processing time for each record increased considerably from one test to the next. But I noticed that the read wait time of the data transformation remained constant, even though the CPU time increased considerably. The increase in CPU time means the time until the next buffer is requested increased, which would give the system more time to read ahead. Eventually, given enough CPU time, if the read I/O were executed in anticipation, it would theoretically be possible to drive the read wait time to 0. I know that is not going to happen; I just wanted to be certain of how read-ahead works.

    My best regards.

    I don't understand your last paragraph.

    Regards
    Anurag

  • Poor Oracle Coherence performance when running concurrent queries

    We have 600 million records in the cache that must be retrieved by queries (filter or multi-extractor; we tried both), but we face poor performance when we increase concurrency. We use 5 job threads per node manager, for a total of 300 threads, and we are not able to get a rate higher than 400 records/s (that is, each Coherence node retrieves 6 records/s). It would take too long (nearly 20 days) to process the 600M records. This suggests that Coherence performs poorly compared to the database.

    The improvements that we tried:

    We created indexes (simple and composite), implemented POF, tuned the JVM to reduce GC, configured network communication (which doesn't make any sense since we run everything on the same machine, but we tried it), increased the number of Coherence threads, implemented best practices - and after all that, we tried running on Exalogic and still had the same problem.

    The hardware we used for our tests:

    - Dell R910 with 1 TB of RAM and 80 processors;

    - Oracle Exalogic X3.

    Each Coherence node is using 16 GB of RAM, and we use a distributed cache.

    Before giving up on Coherence, I would like to know if there is anything more we can try.

    We opened an SR, and after many tests we abandoned Coherence; we changed the technology, and now the solution works well. We concluded that Coherence is just for caching, not for query-intensive workloads.

  • "System configuration data read error" - SOLUTION

    Today I found a solution to the notorious boot error, not only on Toshiba but on some other brands as well.

    The error is: "System configuration data read error."
    It appears at startup with 2 beeps and prevents the laptop from booting.

    I tried everything I could to remove this message, but nothing helped.
    I replaced everything: the CMOS battery, updated the BIOS, boot loader, drives, keyboard, etc.

    Today I found a very simple solution that I want to share with you guys.
    This solution worked for me and I hope it will work for you too.

    Are you ready? :)

    Go into the BIOS and change the BIOS language to French.

    Post here if it works for you also.

    Thanks for sharing your experience with us. But it's really hard to believe that your solution applies to every device with this problem.

    However, it's good to know that this small change in the BIOS helped you resolve the issue.

  • SQL tuning "read-consistency" issue

    Hello

    Oracle 9.2 on Sun (Solaris) 5.10

    I have a problem with the query below.
    SELECT INDV_ID
    FROM
     INDV WHERE SSN = :B1
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute    121      0.16       0.09          0          0          0           0
    Fetch      121    327.52     320.00    1446367    1892864          0           2
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total      243    327.68     320.10    1446367    1892864          0           2
    
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 42  (CORE)   (recursive depth: 1)
    
    Rows     Execution Plan
    -------  ---------------------------------------------------
          0  SELECT STATEMENT   GOAL: CHOOSE
          0   TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'INDV'
          0    INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'INDV_IDX1' (NON-UNIQUE)
    
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                     97091        0.02          4.85
      db file scattered read                     270729        0.01         40.49
    ********************************************************************************
    The above query is part of a tkprof output with SQL trace level 8. It is a simple query with a good explain plan, and if I run it on its own it takes no time at all. But the job takes 12 hours to complete, with most of that time showing the query above as the current statement. The query should run approximately 16,000 times in the job. The output above is from a run that lasted only 15-20 minutes. If you look at the "query" and "disk" values above, they are far too high for this table, and its rows should not change very often (I won't say static, though).

    I would be very grateful if someone could explain the behavior of the query above.

    Thank you
    Ankit.

    It looks like SQL from a PL/SQL procedure.
    The "Execution Plan" is almost certainly wrong, and you are doing a full table scan or full index scan.
    It is likely that a type mismatch is the problem.

    Option 1) your SSN is a character column and the PL/SQL variable is of a different type - perhaps a number.
    Option 2) for some reason you have a character-set conversion - SSN is perhaps a varchar2 and the incoming PL/SQL variable is an nvarchar or similar, probably because of the character set used on the client.

    Since you have the tkprof output and this is 9i, you could look at the trace file to find the text of this statement and check the line that says "Parsing in cursor..."; it will include an entry hv = NNNNN.

    To see the actual plan for the offending statement while it is in memory:

    select
            operation, options, object_name
    from    v$sql_plan
    where
            hash_value = {your value for NNNNN}
    order by
            child_number, id
    ;
    

    Next step - find the PL/SQL containing this SQL and check the type of the variable.

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." Stephen Hawking.

  • Oracle reads the parameter file at startup... but what PART of Oracle?

    I have done this for years, but I didn't know the answer to this question.

    I don't know how it's done on Windows, but on a *nix install, it's the oracle executable that does almost everything; it is the binary behind most, if not all, background processes. It simply runs with different functions and identities.

    Doing this:

    /export/home/oracle> export ORACLE_SID=nothere
    /export/home/oracle> sqlplus /nolog
    
    SQL*Plus: Release 10.2.0.3.0 - Production on Thu Dec 13 10:49:23 2012
    
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    
    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> startup nomount;
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/dbs/initnothere.ora'
    

    led me to the error message file $ORACLE_HOME/oracore/mesg/lrmus.msg, which says in its header:
    / NAME
    /   lrmus.msg - error message file for the basic parameter manager (LRM)
    / DESCRIPTION
    /   List of all of the LRM error messages.

    So I think this is part of the "bootstrap" code in the oracle executable itself, which runs before it spawns itself as all the other copies of itself.

    John

  • Read consistency in Oracle 9i

    Hi all

    I have DB 9.2.0.1.0 installed on Windows XP.

    can you help me in this scenario:


    (a) user A created a table called test, inserted some records into it, and granted user B SELECT privileges on the test table

    (b) user A has not committed the records he inserted into the test table, but user B can still
    see the records in a.test?

    Why doesn't read consistency work here?


    Any idea?


    Kai

    Here's what I mean:

    In session 1:
    SQL> create table t (c number);
    
    Table created.
    
    SQL> insert into t values (42);
    
    1 row created.
    
    SQL> grant select on t to public;
    
    Grant succeeded.
    
    Notice, I didn't do a commit
    
    Now in session 2:
    
    SQL> select * from t;
    
                       C
    --------------------
                      42
    

    Because the GRANT (like any DDL) caused an implicit commit.

  • UNDO reads and global temporary tables (GTTs)

    Hello experts!

    Based on an analysis by my colleagues... he mentions a SQL statement that only extracts data from a GTT, yet shows undo reads! (ASH captured current_obj# as 0 and the wait event as db file sequential read)

    I found it hard to see why an undo read would be needed when extracting data from a GTT.

    For a rollback, undo would be needed!

    But since a GTT is session-specific, at any moment my session would read the latest copy for all operations. That read should come from the buffer cache or disk (temp).

    So why would an undo read be required?

    Can you please help me understand a scenario in which an undo read for a GTT is valid, or whether it is unrealistic?

    Or have I missed something conceptually?

    Based on an analysis by my colleagues... he mentions a SQL statement that only extracts data from a GTT, yet shows undo reads! (ASH captured current_obj# as 0 and the wait event as db file sequential read)

    I found it hard to see why an undo read would be needed when extracting data from a GTT.

    GTT activity generates UNDO as a result of DML, just as for any table. That UNDO in turn causes REDO to be generated.

    For a rollback, undo would be needed!

    OK - but that isn't the only use case.

    But since a GTT is session-specific, at any moment my session would read the latest copy for all operations. That read should come from the buffer cache or disk (temp).

    And that is where you get off track. Those statements are NOT correct.

    A session can open several cursors on that GTT data, and those cursors can each represent a different read-consistent view as far as the data is concerned.

    Tom Kyte explains it better in this AskTom thread.

    https://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:4135403700346803387

    and we said...

    Yes, temporary tables generate UNDO - and therefore generate REDO for that UNDO.

    Redo for the undo must be created because undo is treated like any other data: the undo tablespace would appear corrupted after an instance failure/media recovery event if the undo had disappeared.

    The undo must be kept in support of read consistency. For example, if you:

    (a) load a global temporary table gtt with data

    (b) open cursor_1: SELECT * FROM gtt

    (c) update the data in gtt

    (d) open cursor_2: SELECT * FROM gtt

    (e) delete most of the data in gtt

    (f) open cursor_3: SELECT * FROM gtt

    Each of these cursors should see a different result set - we have not fetched from them, only opened them. cursor_1 must see the blocks rolled back to point in time (b). That requires UNDO. Thus undo must be generated - and undo is always protected by redo.
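    The cursor behavior in steps (a)-(f) can be sketched as follows (a toy Python model of the versioning idea, not Oracle internals; all names are invented):

```python
# Toy sketch: each cursor remembers the "SCN" current at OPEN time and
# must see the data as of that moment, even though fetches happen later.
# The versions list stands in for the undo that makes this possible.

class VersionedTable:
    def __init__(self):
        self.versions = []        # list of (scn, rows) - our "undo"
        self.scn = 0

    def write(self, rows):
        self.scn += 1
        self.versions.append((self.scn, list(rows)))

    def open_cursor(self):
        open_scn = self.scn       # snapshot point fixed at OPEN
        def fetch_all():
            # find the newest version no newer than the open SCN
            for scn, rows in reversed(self.versions):
                if scn <= open_scn:
                    return rows
            return []
        return fetch_all

gtt = VersionedTable()
gtt.write(["a", "b", "c"])        # (a) load the GTT
cursor_1 = gtt.open_cursor()      # (b)
gtt.write(["A", "B", "C"])        # (c) update
cursor_2 = gtt.open_cursor()      # (d)
gtt.write(["A"])                  # (e) delete most rows
cursor_3 = gtt.open_cursor()      # (f)

# Fetching only now - each cursor still sees its own point in time:
print(cursor_1(), cursor_2(), cursor_3())
# -> ['a', 'b', 'c'] ['A', 'B', 'C'] ['A']
```

If the old versions (the undo) were thrown away at step (c) or (e), cursor_1 and cursor_2 could not produce their result sets - which is exactly why GTT DML must generate undo.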

    Changed blocks of the GTT, as for ANY table, can be physically written to disk before the transaction completes. There is not always room in the buffer cache for ALL the data that has changed.

    And, as Tom says above, each of the cursors really sees the data at a different read-consistent point in time. So some of those cursor fetches will require reading the UNDO data to build a consistent view for that particular cursor.

    The "point in time" is established when a cursor is opened - before any data has been read.

    See the "Multiversion Concurrency Control" section of the doc:

    https://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#i17881

    Multiversion Concurrency Control

    Oracle Database automatically provides read consistency to a query so that all the data the query sees comes from a single point in time (statement-level read consistency). Oracle Database can also provide read consistency to all of the queries in a transaction (transaction-level read consistency).

    Oracle Database uses the information stored in its rollback segments to provide these consistent views. Rollback segments contain the old values of data that have been changed by uncommitted or recently committed transactions. Figure 13-1 shows how Oracle Database provides statement-level read consistency using data in rollback segments.

    The reference above to "read consistency to a query" also applies when cursors are opened.

  • PL/SQL Challenge read-consistency question

    Hi Experts,

    Today I saw a good read-consistency question on plsqlchallenge.com that I want to share with you. The interesting thing is, I thought the cursor's result set might be established at fetch time; however, the answer said it is established when the cursor is opened. Doesn't that seem meaningless? Because there is also a keyword called FETCH - if opening the cursor retrieved the data, why would we use the FETCH keyword at all?

    My second question: as far as I know, a cursor FOR loop fetches 100 rows per iteration, as opposed to a simple loop. I wonder: if Oracle did not have the read-consistency feature, would the FOR loop still display the same result, given that it fetches 100 rows per iteration?
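    As an aside on that second question, the 100-row prefetch can be pictured as a batched fetch: the loop body still sees one row at a time, but rows are pulled from the server in batches of 100 behind the scenes. A rough Python analogue (not PL/SQL internals):

```python
# Batched fetching: rows are pulled prefetch-at-a-time but yielded one
# at a time, so 250 rows cost only 3 round trips (100 + 100 + 50).

def cursor_for_loop(result_set, prefetch=100):
    it = iter(result_set)
    fetch_calls = 0
    while True:
        batch = []
        for _ in range(prefetch):
            try:
                batch.append(next(it))
            except StopIteration:
                break
        if not batch:
            break
        fetch_calls += 1          # one server round trip per batch
        yield from batch
    print("fetch calls:", fetch_calls)

rows = list(cursor_for_loop(range(250)))  # prints: fetch calls: 3
```

Because the whole statement reads as of its open SCN anyway, the batch size does not change the result set; it only changes how many round trips are made.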

    I create and populate a table as follows:
    CREATE TABLE plch_employees
    (
       employee_id   INTEGER
     , last_name     VARCHAR2 (100)
     , salary        NUMBER
    )
    /

    BEGIN
       INSERT INTO plch_employees
            VALUES (100, 'Smith', 100000);

       INSERT INTO plch_employees
            VALUES (200, 'Jones', 1000000);

       COMMIT;
    END;
    /
    Which of the blocks below display the following two lines of text after they are run?
    Smith
    Jones

    BEGIN
       FOR emp_rec IN (SELECT *
                         FROM plch_employees
                        ORDER BY employee_id)
       LOOP
          DELETE FROM plch_employees;

          DBMS_OUTPUT.put_line (emp_rec.last_name);
       END LOOP;
    END;

    DECLARE
       CURSOR emps_cur
       IS
          SELECT *
            FROM plch_employees
           ORDER BY employee_id;

       emp_rec   emps_cur%ROWTYPE;
    BEGIN
       OPEN emps_cur;

       DELETE FROM plch_employees;

       LOOP
          FETCH emps_cur INTO emp_rec;

          EXIT WHEN emps_cur%NOTFOUND;

          DBMS_OUTPUT.put_line (emp_rec.last_name);
       END LOOP;

       CLOSE emps_cur;
    END;

    Regards

    Charlie

    Robert Angel wrote:

    Opening the cursor creates your consistent snapshot, which is used by any code from there on.

    A common misperception: opening the cursor does not create a snapshot. It does not! Opening the cursor simply records the current SCN, which becomes the reference point for statement-level read consistency. When a fetch is issued, Oracle reads the table and checks the SCN of each block. If the block SCN > the open-cursor SCN, Oracle realizes the block has changed since the cursor was opened, and goes to UNDO for the previous state of the block. It then checks the SCN of that previous block state, and if it is still newer than the open-cursor SCN, it repeats the process. Eventually it either finds a consistent state of the block in UNDO, or raises "snapshot too old".

    SY.

  • How Oracle Clusterware, Oracle RAC and 3rd-party clustering fit together

    I have some questions on how Oracle Clusterware, Oracle RAC and 3rd-party clustering fit together.

    Q1. My understanding is that Grid Infrastructure / Oracle Clusterware is Oracle's clustering solution that allows applications to be clustered - that is, it is a general clusterware solution to compete with those already on the market. Is this correct?

    Q2. So why is RAC necessary to cluster Oracle databases? Isn't an Oracle database just considered an application like any other? So why can't you just use GI / Clusterware to cluster it, as you would for any other application? Why do you need RAC on top of Clusterware?

    Q3. Is it possible to cluster an Oracle database using 3rd-party clustering, without using Oracle Clusterware and RAC at all? That is, can you cluster an Oracle database using, say, Sun Cluster, AIX clustering, or even native Linux clustering?

    Q4. If RAC is purely for clustering Oracle databases - what is with the title "Real Application Clusters"? What's "real" about it, one might ask?

    Q5. I have also read that RAC can use 3rd-party clustering. However, if you decide to do that, you still need to install Oracle Clusterware anyway (I think because the interconnect must be created to allow Cache Fusion between the node instances?).

    Is this the case? If yes, why ever bother with 3rd-party clusters, since you will have to install Oracle Clusterware anyway - and probably have to license it? (All I can think of is the scenario where you already have a 3rd-party cluster in place and decide you need to use the same hardware for a database cluster.)

    All wisdom greatly appreciated,

    Jim

    Yes, with 10g you had to use CRS, and with 11g you use GI. For 9i RAC, you would use Veritas Cluster, Sun Cluster, or something else.

    11g RAC requires GI. As above, you can use extra clusterware on top if needed. I gave the example above where, on a 10g RAC, we used CRS to cluster the database/listener/VIP, while the volume groups and filesystems were handled by an HACMP cluster. The filesystems were not shared, but could switch to another node on node failure, which was handled by HACMP. The volume groups containing the raw devices were shared volume groups. For 11g, I have never used extra clusterware on top of CRS and have always used ASM, which is part of GI.

    The application vendor only certified the database to run on 10g on AIX with raw devices. What is strange about that? It is unusual in that you would probably expect to use ASM instead of raw devices, but it is what it is.

    See the following Oracle note for more information: Using Oracle Clusterware with Vendor Clusterware FAQ (Doc ID 332257.1)

    It is perhaps easier to explain if you can tell us what problem you are trying to solve.

  • Oracle 9i questions

    Hi all

    OEL 5.6
    ORA 9i, 9.2.0.6.0

    I'm performing db/sql tuning.

    If I set timed_statistics = true (working with statspack), will it generate a lot of logs, such that my disk space gets exhausted? Is this a log file I have to constantly check to keep it from eating lots of disk space?
    Does it also add stress or resource load on our database?


    Thank you very much
    Jan

    Published by: yxes2013 on December 20, 2012 23:24

    Edited by: yxes2013 on 21.12.2012 17:42 - added OEL 5.6 as the OS version

    yxes2013 wrote:
    Sir, your forgiveness please

    Queries (SELECT) generate NO UNDO or REDO

    As far as I know, a long-running SELECT needs a lot of undo for read consistency against other users' changes.

    Yes, a long-running SELECT can use UNDO, but it is DML (and NOT SELECT) that generates the UNDO.

    >

    Post the SQL & results demonstrating that the I/O contention actually exists.

    How? By using statspack?

    Does STATSPACK show whether I have TEMP or ROLLBACK contention? Or a memory shortage problem?

    Thank you

    top - 16:11:37 up 51 days, 23:13,  2 users,  load average: 7.81, 11.98, 9.11
    Tasks: 576 total,   7 running, 569 sleeping,   0 stopped,   0 zombie
    Cpu(s): 45.9%us,  0.4%sy,  0.0%ni, 49.6%id,  4.0%wa,  0.0%hi,  0.1%si,  0.0%st
    Mem:  12079104k total, 12040792k used,    38312k free,    74632k buffers
    Swap:  6104692k total,   177312k used,  5927380k free, 10174656k cached
    

    The 'top' output shows the system 50% (49.6%) idle on CPU usage, so CPU is not the bottleneck.
    It shows only 177312k of swap used, so RAM is not a bottleneck.
    Which means the disk is most likely the bottleneck.

    Post vmstat results in the format below (run for about a minute):

    [oracle@localhost ~]$ vmstat 6 10
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0 118976 274692   9264 311236    0    1     6    28  101  106  1  2 97  0  0
     0  0 118976 274552   9280 311240    0    0     0    24 1042 1054  1  1 98  1  0
     0  0 118976 271344   9288 311240    0    0     1    31 1004 1069  2  3 95  0  0
     1  0 118976 271344   9304 311240    0    0     0    65 1004 1036  1  3 96  0  0
     1  0 118976 268988   9320 311240    0    0     2    33 1001 1014  2  3 95  0  0
     0  0 118976 264772   9336 311264    0    0     1    33  999  993  2  4 94  0  0
     0  0 118976 264152   9352 311264    0    0     0    25  996  952  1  2 98  0  0
     0  0 118976 264136   9360 311264    0    0     0    13 1068 1030  1  2 96  0  0
     0  0 118976 264152   9376 311264    0    0     0    27 1000  963  1  2 96  1  0
     1  0 118976 264144   9392 311264    0    0     0    33 1023  984  1  2 97  0  0
    [oracle@localhost ~]$ 
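In a capture like the one above, the column that matters for a disk bottleneck is wa (I/O wait). A small awk sketch to average it, assuming the standard 17-column vmstat layout; the here-document below just embeds an abridged copy of the sample output, and the first data row is skipped because it reports averages since boot rather than for the interval:

```shell
# Average the I/O-wait (wa) column of a saved vmstat capture.
# wa is column 16 in the layout above; NR > 3 skips the two header lines
# and the first sample, which reports averages since boot.
avg_wa=$(awk 'NR > 3 { sum += $16; n++ } END { if (n) printf "%.1f", sum / n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 118976 274692   9264 311236    0    1     6    28  101  106  1  2 97  0  0
 0  0 118976 274552   9280 311240    0    0     0    24 1042 1054  1  1 98  1  0
 0  0 118976 271344   9288 311240    0    0     1    31 1004 1069  2  3 95  0  0
 0  0 118976 264152   9376 311264    0    0     0    27 1000  963  1  2 96  1  0
EOF
)
echo "average wa: ${avg_wa}%"
```

Here the average is well under 1%, consistent with an idle test box; on the troubled production system, a sustained double-digit wa would confirm the disk as the bottleneck.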
    
  • Problems with the 'Grid Entity' Coherence configuration

    I am having difficulty getting the 'Grid Entity' Coherence configuration to work. I am on Oracle Coherence version 3.6.0.4 Build 19111. I carefully followed the instructions for the third configuration scenario documented on the "JPA on the Grid" page [url http://docs.oracle.com/cd/E14571_01/doc.1111/e16596/tlcgd003.htm#CHDGGGAJ] here. I will include excerpts from the logs, but I believe I have been following the docs. Am I missing something that the docs don't state directly? Any ideas would be greatly appreciated.

    The problem I encounter is that when this 'Grid Entity' Coherence mode is used, the entities are not read from the database when the application starts. The application behaves as if there is absolutely no data in the database, even though I know I have rows in the tables backing my entities. In my dummy test application, I also have problems persisting entities. If I temporarily disable Coherence by commenting out the {noformat}@Customizer{noformat} annotations on the entity, everything works perfectly (the starting dataset loads from the database as expected and new entities are correctly persisted). If I change to the 'Grid Cache' configuration, things seem to work fine. But that is not ideal, because we want the read/write features of Grid Entity.

    Here is my coherence-cache-config.xml file (it matches the documentation almost exactly):
    <?xml version='1.0'?>
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
       xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
      <caching-scheme-mapping>
        <!--
          Map all entity classes to the eclipselink-distributed-readwrite scheme
    -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>eclipselink-distributed-readwrite</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>eclipselink-distributed-readwrite</scheme-name>
          <service-name>EclipseLinkJPAReadWrite</service-name>
          <!--
            Configure a wrapper serializer to support serialization of relationships.
          -->
          <serializer>
              <instance>
                 <class-name>oracle.eclipselink.coherence.integrated.cache.WrapperSerializer</class-name>
              </instance>
          </serializer>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme/>
              </internal-cache-scheme>
              <!-- 
                 Define the cache scheme 
               -->
              <cachestore-scheme>
                <class-scheme>
                  <class-name>oracle.eclipselink.coherence.integrated.EclipseLinkJPACacheStore</class-name>
                  <init-params>
                    <!-- This param is the entity name -->
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{cache-name}</param-value>
                    </init-param>
                    <!-- This param should match the persistence unit name in persistence.xml -->
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>test-pu</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
    persistence.xml:
    <?xml version="1.0" encoding="windows-1252" ?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                 version="1.0">
      <persistence-unit name="test-pu">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <jta-data-source>jdbc/WineDS</jta-data-source>
        <non-jta-data-source>jdbc/WineDS</non-jta-data-source>
        <class>datagridtest.WineEntity</class>
        <properties>
          <property name="eclipselink.target-server" value="WebLogic_10"/>
          <property name="eclipselink.target-database" value="Oracle11"/>
           <!--<property name="eclipselink.ddl-generation" value="create-tables"/>-->
        </properties>
      </persistence-unit>
    </persistence>
    entity class:
    @Entity(name="Wine")
    @NamedQueries({
      @NamedQuery(name = "Wine.findAll", query = "select o from Wine o"),
      @NamedQuery(name = "Wine.findByRegion", query = "select o from Wine o where o.region = :region"),
      @NamedQuery(name = "Wine.findByVintage", query = "select o from Wine o where o.vintage = :vintage"),
      @NamedQuery(name = "Wine.findByWinery", query = "select o from Wine o where o.winery = :winery")
    })
    @SequenceGenerator(name = "Wine Seq", sequenceName = "WINE_SEQ", allocationSize = 50, initialValue = 50)
    @Customizer(CoherenceReadWriteCustomizer.class)
    public class WineEntity implements Serializable {
         @Id
         @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "Wine Seq")
         private Integer id;
         @Version
         private Integer version;
         
         private String name;
         
         private int vintage;
         
         private String region;
         
         private String winery;
         
         typical getters and setters follow...
         ....
    EJB excerpts:
    @Stateless(name = "SessionEJB", mappedName = "PortletProducerApplication-DataGridTest-SessionEJB")
    public class SessionEJBBean implements SessionEJBLocal {
         @PersistenceContext(unitName="test-pu")
         private EntityManager em;
         
         ...
         
         public List<WineEntity> findAllWines() {
              TypedQuery<WineEntity> allWinesQuery = em.createNamedQuery("Wine.findAll", WineEntity.class);
              return allWinesQuery.getResultList();     
         }
         
         ...
         
    }
    Server (WLS) logs:
    <Jun 25, 2012 3:54:41 PM EDT> <Notice> <EclipseLink> <BEA-2005000> <2012-06-25 15:54:41.686--ServerSession(1768045153)--EclipseLink, version: Eclipse Persistence Services - 2.1.3.v20110304-r9073> 
    <Jun 25, 2012 3:54:41 PM EDT> <Notice> <EclipseLink> <BEA-2005000> <2012-06-25 15:54:41.688--ServerSession(1768045153)--Server: 10.3.5.0> 
    <Jun 25, 2012 3:54:42 PM EDT> <Notice> <EclipseLink> <BEA-2005000> <2012-06-25 15:54:42.142--ServerSession(1768045153)--file:/C:/Users/EDITED/AppData/Roaming/JDeveloper/system11.1.1.5.37.60.13/o.j2ee/drs/TestApp/DataGridTestEJB.jar/_test-pu login successful> 
    2012-06-25 15:54:42.502/58.626 Oracle Coherence 3.6.0.4 <Info> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded operational configuration from "jar:file:/C:/Oracle/Middleware/oracle_common/modules/oracle.coherence_3.6/coherence.jar!/tangosol-coherence.xml"
    2012-06-25 15:54:42.513/58.637 Oracle Coherence 3.6.0.4 <Info> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded operational overrides from "jar:file:/C:/Oracle/Middleware/oracle_common/modules/oracle.coherence_3.6/coherence.jar!/tangosol-coherence-override-dev.xml"
    2012-06-25 15:54:42.516/58.640 Oracle Coherence 3.6.0.4 <D5> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2012-06-25 15:54:42.524/58.648 Oracle Coherence 3.6.0.4 <D5> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    
    Oracle Coherence Version 3.6.0.4 Build 19111
     Grid Edition: Development mode
    Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
    
    2012-06-25 15:54:42.673/58.797 Oracle Coherence GE 3.6.0.4 <Info> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "file:/C:/workspaces/spaces_branch/TestApp/DataGridTest/src/coherence-cache-config.xml"
    2012-06-25 15:54:43.360/59.484 Oracle Coherence GE 3.6.0.4 <D4> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): TCMP bound to /172.30.112.202:8088 using SystemSocketProvider
    2012-06-25 15:54:46.747/62.871 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): Created a new cluster "cluster:0xC4DB" with Member(Id=1, Timestamp=2012-06-25 15:54:43.377, Address=172.30.112.202:8088, MachineId=49866, Location=site:EDITED,machine:EDITED,process:10776, Role=WeblogicServer, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=2) UID=0xAC1E70CA000001382535DD31C2CA1F98
    2012-06-25 15:54:46.757/62.881 Oracle Coherence GE 3.6.0.4 <Info> (thread=[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Started cluster Name=cluster:0xC4DB
    
    Group{Address=224.3.6.0, Port=36000, TTL=4}
    
    MasterMemberSet
      (
      ThisMember=Member(Id=1, Timestamp=2012-06-25 15:54:43.377, Address=172.30.112.202:8088, MachineId=49866, Location=site:EDITED,machine:EDITED,process:10776, Role=WeblogicServer)
      OldestMember=Member(Id=1, Timestamp=2012-06-25 15:54:43.377, Address=172.30.112.202:8088, MachineId=49866, Location=site:EDITED,machine:EDITED,process:10776, Role=WeblogicServer)
      ActualMemberSet=MemberSet(Size=1, BitSetCount=2
        Member(Id=1, Timestamp=2012-06-25 15:54:43.377, Address=172.30.112.202:8088, MachineId=49866, Location=site:EDITED,machine:EDITED,process:10776, Role=WeblogicServer)
        )
      RecycleMillis=1200000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
        )
      )
    
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    
    2012-06-25 15:54:46.809/62.933 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-06-25 15:54:47.155/63.279 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache:EclipseLinkJPAReadWrite, member=1): Service EclipseLinkJPAReadWrite joined the cluster with senior service member 1
    INFO: Found persistence provider "org.eclipse.persistence.jpa.PersistenceProvider". OpenJPA will not be used.
    INFO: Found persistence provider "org.eclipse.persistence.jpa.PersistenceProvider". OpenJPA will not be used.
    [EL Info]: 2012-06-25 15:54:47.351--ServerSession(1901778202)--EclipseLink, version: Eclipse Persistence Services - 2.1.3.v20110304-r9073
    [EL Info]: 2012-06-25 15:54:47.351--ServerSession(1901778202)--Server: 10.3.5.0
    [EL Info]: 2012-06-25 15:54:47.551--ServerSession(1901778202)--EclipseLinkCacheLoader-test-pu login successful
    And the error when trying to persist a new entity:
    javax.ejb.TransactionRolledbackLocalException: Error committing transaction:; nested exception is: javax.persistence.OptimisticLockException: Exception [EclipseLink-5004] (Eclipse Persistence Services - 2.1.3.v20110304-r9073): org.eclipse.persistence.exceptions.OptimisticLockException
    Exception Description: An attempt was made to update the object [datagridtest.WineEntity@1c59a6cc], but it has no version number in the identity map. 
    It may not have been read before the update was attempted. 
    Class> datagridtest.WineEntity Primary Key> 2,050
         at weblogic.ejb.container.internal.EJBRuntimeUtils.throwTransactionRolledbackLocal(EJBRuntimeUtils.java:238)
         at weblogic.ejb.container.internal.EJBRuntimeUtils.throwEJBException(EJBRuntimeUtils.java:136)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvoke1(BaseLocalObject.java:650)
         at weblogic.ejb.container.internal.BaseLocalObject.__WL_postInvokeTxRetry(BaseLocalObject.java:455)
         at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:52)
         at datagridtest.SessionEJB_qxt9um_SessionEJBLocalImpl.mergeWineEntity(Unknown Source)
    So just to sum up, I currently see two problems: (1) Coherence is not loading data from the database at startup; why isn't Coherence reading from the database? And (2) what is this persistence error about OptimisticLockException? Thanks again for any help.

    JPQL queries, Criteria API queries, or both will be translated into Coherence filters when using Grid Read or Grid Entity.
    TopLink Grid has no way of knowing whether the data in the cache is sufficient to satisfy a query; for example, based on the query provided, TopLink Grid cannot tell the difference between a user with no books and a cache with insufficient data. Your application will need to determine that not enough has been stored in the cache, and either perform an initial bulk load for a particular user or redirect the query to the database programmatically. Grid queries can easily be redirected to the database by setting the IgnoreDefaultRedirector on the query using query.setHint(QueryHints.QUERY_REDIRECTOR, new IgnoreDefaultRedirector()); entities read from the database will automatically be pushed into Coherence.

  • Difference between Oracle versions!

    Hello
    I would like to know the differences between Oracle versions, starting from Oracle 7.
    I would also like a way to find out whether a particular function works in Oracle version x but not before version x.

    For example, the LISTAGG function came in Oracle 11.

    Is there a link where I can find this consolidated in one place?

    Hello

    923808 wrote:
    Hello
    I would like to know the differences between Oracle versions, starting from Oracle 7.
    I would also like a way to find out whether a particular function works in Oracle version x but not before version x.

    For example, the LISTAGG function came in Oracle 11.

    LISTAGG actually arrived in 11.2, to be exact.

    Is there a link where I can find this consolidated in one place?

    Not that I know of.

    Starting with Oracle 9, there is a 'What's New' manual with every release. For example:
    http://docs.Oracle.com/CD/B10501_01/server.920/a96531/TOC.htm
    Also starting with Oracle 9, the most commonly used manuals have a 'What's New' section at the beginning:
    http://docs.Oracle.com/CD/B10501_01/server.920/a96540/wnsql.htm#971925

    Here is a list I made for my personal use:

    Index of Features, showing when they were introduced into Oracle
    
    ADD_MONTHS Function                    6.0     or earlier
    Aggregate functions, user-defined          9
    ALTER TABLE x RENAME COLUMN a TO b;          9.2
    Analytic Functions                    8.1
         IGNORE NULLS                    10
    APPEND, SPOOL                         10
    
    BINARY_INTEGER same as PLS_INTEGER          10.1     http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/datatypes.htm#sthref690
    
    CASE Expressions                    8.1
    CLOB Datatype                         8.1     or earlier     max size=4G in versions 8.1 and 11.2
    COLUMN ... FOLD_AFTER (or FOLD_BEFORE)          8.1     or earlier
    Conditional Compilation of PL/SQL          10.1.0.4
    CONNECT BY                         2
         CONNECT_BY_ISCYCLE     pseudo-column     10
         CONNECT_BY_ISLEAF     pseudo-column     10
         CONNECT BY NONCYCLE               10
         CONNECT_BY_ROOT          operator     10
         LEVEL <= x (for Counter Table)          8.1.7.4     Re: Oracle Virtual Machine Connect by Prior Problem
         ORDER SIBLINGS BY               9
         Subquery in START WITH clause          9.2     or earlier
         SYS_CONNECT_BY_PATH     function     9
    
    DATE between 4713 and 9999               8
    DATE Literals                         10
    DBMS_Metadata                         9
    DBMS_RLS                         8.1
    DBMS_SCHEDULER                         10
    DELETE in MERGE                         11.1
    
    EXTRACT                              9
    
    FIRST Function                         9
    FOLD_AFTER (FOLD_BEFORE) in SQL*Plus COLUMN     8.1     or earlier
    Foreign Key Constraints                    7     (very limited enforcement in 6)
    Functions, User-defined                    7
         Aggregate                    9
    
    GREATEST Function                    6.0     or earlier
    
    IGNORE NULLS
         in FIRST_VALUE, LAST_VALUE          10
         in LAG                         11.2
    In-Line Views                         8.0     (documented in 8.1)
    
    KEEP (DENSE_RANK ...)                    9
    
    LAST Function                         9
    LEAST Function                         6.0     or earlier
    LISTAGG                              11.2
    LNNVL Function                         10
    
    MEDIAN Function (Aggregate or analytic)          10
    MERGE                              9
    MERGE ... DELETE                    10.1
    MODEL                              10
    MONTHS_BETWEEN Function                    6.0     or earlier
    
    NEW_TIME Function                    6.0     or earlier
    NOCYCLE, CONNECT BY                    10
    NTH_VALUE Function                    11.2
    NULLIF Function                         9
    
    ORDER BY in Sub-Queries                    9
    ORDER SIBLINGS BY                    9
    OWA_Pattern Package (for Regular expressions)     9
    
    Partitioned Outer Joins (Query Partitioning)     10
    PERCENT_RANK function                    8.1
    PIVOT                              11
    PLS_INTEGER same as BINARY_INTEGER          10.1     http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/datatypes.htm#sthref690
    Policies, Row-Level Security               8.1
    PURGE                              10     
    
    Q-Notation for string literals               10
    Query Partition                         10
    
    Read Consistency                    4
    Recursive subqueries                    11.2
    Regular Expressions                    10
         expr argument (_INSTR and _SUBSTR)     11.1
         OWA_PATTERN Package               9
    Row-Level Security Policies               8.1
    
    Scalar Sub-Queries (except in WHERE, etc.)     8.1     not documented until 9.1
    SPOOL ... APPEND                    10
    STATS_x Functions (Incl. STATS_MODE)          10
    SYS_CONNECT_BY_PATH                    9
    SYS_CONTEXT                         8.1     USERENV introduced in 8.0?
    SYSDATE as Function                    6.0 ?     (It was a pseudo-column in PL/SQL 1.0, 1989.  6.0 SQL Language manual (1990) calls it a function.)
    
    Triggers                         7
    
    User-Defined Functions                    7
         Aggregate                    9
    USERENV Function                    8.0?     Expanded to SYS_CONTEXT in 8.1
    UTL_Match Package                    11.2     In Packages and Types manual
    
    Virtual Columns     in tables               11
    VPD ("Virtual Private Database")          8.1
    
    WIDTH_BUCKET Function                    9
    WITH Clause                         9
         Recursive                    11.2
    
    XML                              9
    XMLAGG Function                         9.2      (not in 9.1 SQL Language manual)
    XMLELEMENT Function                    9.2      (not in 9.1 SQL Language manual)
    XMLQUERY                         10
    

    Published by: Frank Kulash, 28 March 2012 18:44

  • Oracle 10g DBA book

    Hello

    I am new to Oracle; if I were to rate myself out of 10, I'd give myself maybe 2 points, because I only know the basics such as tablespaces, data files, and a little about the architecture... I have cleared exam 1 on SQL, but that is more related to SQL.

    In the next 2 months I am supposed to do a disaster recovery exercise, in which we use our existing backup tapes to restore the entire OS and Oracle DB with the application on UNIX... to prepare myself beforehand for this exercise, I think I have to learn a lot about the foundations of Oracle: how RMAN works, how to restore the DB from an RMAN backup, apply the archive logs, and so on.

    I downloaded a few PDFs from tahiti.oracle.com, and now I realize it is a big enough ocean that it is not practical to cover it all in the next 2 months, and I cannot afford to read the material full time due to production support issues... so to deal with this I thought I'd go for a book. Can you guys recommend a good book for Oracle 10g DBA & RMAN?

    If I have to buy a separate book for RMAN, that is not a problem; I can do that.

    Let me know if my approach to managing this problem seems logical.

    Concerning

    Learner

    To take a first small step, I suggest starting with the "2 Day DBA" book from the documentation library:
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14196.PDF

    If you want a good understanding of Oracle, I recommend that before jumping into RMAN you take a look at Tom Kyte's latest book. I recently went through the book, and you will find my review of it here:
    http://www.Amazon.com/Expert-Oracle-database-architecture-programming/DP/1430229462

    The "RMAN Recipes for Oracle Database 11g" book is very good, and you will find a review I wrote about it here:
    http://www.Amazon.com/RMAN-recipes-Oracle-database-problem-solution/DP/1590598512/ref=pd_sim_b_14

    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.WordPress.com/
    IT Manager/Oracle DBA
    K & M-making Machine, Inc.
