Doubts about Cache Fusion - PI (Past Image) in RAC

We are running an Oracle 10.2.0.4 database configured as a 2-node RAC on Linux x86_64.

I was going through the Oracle documentation and some books that discuss Cache Fusion (read-read, read-write, write-write behaviour, etc.).

It was mentioned that row locking is similar to single instance, so whenever a block is needed, an instance finishes its operation on the block and then transfers the block to the other instance that requires exclusive access.
What I understood is that once the transaction is complete, the first instance transfers the block to the other instance so that it can have exclusive access to this block.

Also, the first instance keeps a PI (Past Image) of the block that it can use for read operations, but it cannot change the block since it has transferred the block to the other instance and now has only shared access.

Now, when a checkpoint occurs, all instances should flush their PIs. In this case, if the first instance is in the middle of a SELECT statement that uses the PI of the block, what will happen to the query? Will it raise an error (something like snapshot too old)?

I'm not able to understand the notion of a PI and what happens when data blocks are flushed to disk.

Evelyne says:

First of all, I don't understand: what is the meaning of "Image" here?

A PI (Past Image) is a copy of a block which is known to have existed as the current version (CU) of the block at some point in the past, so it can be used as a starting point for recovery if the node holding the current block goes down. Thus PI blocks can reduce database recovery times.

In passing, PI blocks can also be used by the holder of the PI for generating local CR copies without reference to the GRD, if the SCNs are appropriate. Thus PIs can also be used to reduce interconnect traffic.

Originally, there was a rule that when a node had written the current block, it would send a message to all other nodes to mark their PI copies as free buffers (in effect, forget them), although I once saw a note that Oracle may change this to convert PI blocks to CR blocks.
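
As a quick way to see these buffer states on a live system, here is a minimal sketch against GV$BH; the 'pi' status is the Past Image, and the SCOTT.EMP object is only an assumed example:

    -- Count buffer states per instance for one object in a RAC cache.
    -- 'xcur' = current (CU), 'cr' = consistent-read clone, 'pi' = past image.
    SELECT inst_id, status, COUNT(*) AS buffers
    FROM   gv$bh
    WHERE  objd = (SELECT data_object_id
                   FROM   dba_objects
                   WHERE  owner = 'SCOTT'          -- assumed demo schema
                   AND    object_name = 'EMP')     -- assumed demo table
    GROUP  BY inst_id, status
    ORDER  BY inst_id, status;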

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk

"All experts it is a equal and opposite expert."
Clarke

Tags: Database

Similar Questions

  • I have a doubt about .folio files and publications

    Hello, I'm new here.

    I want to start working with DPS, but I have a doubt about which version to buy.

    At the moment I have one customer who just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish a single .folio file. What does that mean? Does each .folio file represent a publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog I wrote comparing Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • Doubt about the Index

    Hi all

    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a question about indexes. Is it the case that an index will only be useful if we have a WHERE clause? I tried to find this out myself but could not.
    In this example I haven't used a WHERE clause, only a GROUP BY, but it does a full scan. Is it possible to get a range scan or something else using GROUP BY?
    SELECT tag_id FROM taggen.tag_master GROUP BY tag_id
    
    Explain Plan:
    Plan hash value: 1688408656
     
    ---------------------------------------------------------------------------------------
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   1 |  HASH GROUP BY        |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   2 |   INDEX FAST FULL SCAN| TAG_MASTER_PK |  4045 | 20225 |     5   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Hello

    SamFisher wrote:
    So it was doing a full scan. Is it possible to restrict the full scan without using a WHERE clause? I guess by using a HAVING clause, but I don't quite know.

    Why?
    If this query is producing the right results, then you need a full scan.
    If you somehow fool the optimizer into doing a range scan, it will be slower.
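
    For comparison, here is a minimal sketch of a query shape that allows a range scan; the range values are assumed, and the predicate is on the leading column of TAG_MASTER_PK:

        -- A predicate on the indexed column lets the optimizer consider an
        -- INDEX RANGE SCAN instead of an INDEX FAST FULL SCAN.
        SELECT tag_id
        FROM   taggen.tag_master
        WHERE  tag_id BETWEEN 1000 AND 2000   -- assumed range
        GROUP  BY tag_id;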

  • Some doubts about topology, interfaces and Knowledge Modules

    Hello

    Below are some questions about ODI:


    1. To use an LKM, does ODI always require two different DATASERVERS (one for the SOURCE and another for the TARGET)?

    2. What would be the best way to create a new IKM with GROUP BY clauses?

    3. What is the minimum PROFILE required for developer users to be able to import projects created in other ODI environments?

    4. If a particular WORK_REP is lost, is it possible to retrieve projects from the version control information stored in the MASTER_REP?

    1.) Yes. An LKM always loads data from one data server to another.
    More than once I have seen that, even with a single physical server, several servers are configured in Topology Manager. This leads to the use of an LKM because ODI considers them to be 2 different servers.
    If the physical server is defined only once, an LKM won't be necessary.

    2.) The IKM automatically adds a GROUP BY clause if it detects an aggregation function in the interface.

    3.) Try using the NG DESIGNER profile.

    4.) This is not an easy task. But all the versioned objects are compressed and stored in a BLOB field in the master repository.
    You will need to know the names and versions of the objects you need to recover.
    SNP_VERSION and SNP_DATA have this information. Retrieve the BLOB field from SNP_DATA and unpack it using a zip utility. This will give you the XML of the versioned object.
    Now, you can import this XML file and retrieve the object.

    You will need to loop through all the records in order of I_DATA, extract each to an .xml file, and then import them to rebuild the work repository, as sketched below.
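
    A hedged sketch of that lookup step follows; the column names are assumptions (they vary by ODI release), so verify them against your master repository first:

        -- List the versioned objects and the keys of their zipped XML payloads
        -- (column names assumed; check SNP_VERSION / SNP_DATA in your release).
        SELECT v.i_data, v.pe_name, v.v_version
        FROM   snp_version v
        ORDER  BY v.i_data;

        -- Fetch the BLOB for one object, then unpack it with a zip utility.
        SELECT d.blob_data                        -- assumed BLOB column name
        FROM   snp_data d
        WHERE  d.i_data = :i_data;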

  • Not clear about how to move changes to RAW images using sidecar files

    I have Lightroom 5.6 running on my Windows 7 machine and Lightroom 5.6 running on another computer that has Windows 8.1. If I copy a picture with its sidecar file to the second computer and import them into Lightroom, only the image comes across; none of the adjustments, e.g. crop, exposure, etc.

    No doubt I'm either expecting too much of the sidecar file or missing a step.

    Thank you in advance...

    Todd

    In the Develop module, go to the Photo menu and choose the option "Read Metadata from File."

  • Some doubts about navigation in Unifier

    Hi all

    I have a few questions about Unifier navigation.

    Is it possible to move features from admin mode to user mode access?

    I mean, if a particular feature such as Shell Manager can only be accessed from admin mode, is it possible to provide access to it even in user mode?

    If so, how?

    My 2nd question: currently, we can access company-level BPs under "Company Logs" or "Resource Manager" in the "Company Workspace" shell.

    Is it possible to move the "journal of the society" or "Resource Manager" in the folder? If yes how?

    I tried in "navigation user mode" to move the company BPs level at shell of the House, but I can't do it.

    To answer your questions:

    (1) User Mode Navigation can include user features. You cannot change the Admin mode view or move admin functions to user mode.

    (2) You cannot move these to the Home tab.

  • Doubt about the database writer

    Friends, according to the docs, DBWR writes as below:

    "The process of writer (DBWn) of database written periodically cold, dirty buffers to disk. DBWn writes tampons in the following circumstances:

    A server process not found own buffers to read the new blocks in the database buffer cache.

    As the pads are dirty, the number of free buffers decreases. If the number is less than an internal threshold and so clean buffers are needed, then the process server report DBWn to write.

    The database uses the LRU to determine what dirty buffers to write. When stamps Sales reach the cold end of the LRU, the database moves off the LRU to a write queue. DBWn wrote buffers in the queue on the disc, using if possible written multiblock. This mechanism prevents the end of the LRU get clogged with dirty buffers and allows own buffers available to be reused.

    The database must move the control point, which is the position in the thread of Redo from which must begin the recovery instance.

    Storage spaces are turned into read-only or offline status.
    "

    Can someone explain the 4th scenario, i.e. "The database must advance the checkpoint, which is the position in the redo thread from which instance recovery must begin" - under which DBWR writes to the data files?

    Thanks in advance.

    918868 wrote:
    Why does the database advance the checkpoint? Aman said in the last post that Oracle does not advance the checkpoint on SCN generation/updates, since those happen for other reasons. What is the relationship of this buffer writing to advancing the checkpoint?

    There are many different types of "checkpoint" (see http://jonathanlewis.wordpress.com/2007/04/12/log-file-switch/); the two best known are the log file switch checkpoint and the incremental checkpoint.

    The log file switch checkpoint (which, I think, is labelled the thread checkpoint) is the one that results in Oracle writing data buffers to the data files if their first roll-forward change vector is recorded in that log file; this checkpoint results in an update to each data file header AFTER the checkpoint is complete, so that the redo log file can then be overwritten.

    The incremental checkpoint is fired every three seconds by DBWR posting itself: it tells DBWR to recalculate a new target SCN and redo block address (based on things like the MTTR, the fast_start_io_target, the log_checkpoint_interval or the log_checkpoint_timeout) and to write any data blocks that were first changed before that SCN; the SCN is only recorded in the control file.

    The first type of checkpoint allows Oracle to choose the right log file from which to start recovery; the second tells it where to start in that file.
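
    If you want to watch this on a running instance, a minimal sketch (standard dynamic performance views, queried with DBA privileges) is to compare the controlfile checkpoint SCN with the data file headers and the incremental checkpoint targets:

        -- Controlfile checkpoint SCN.
        SELECT checkpoint_change# FROM v$database;

        -- Checkpoint SCN recorded in each data file header.
        SELECT file#, checkpoint_change# FROM v$datafile_header;

        -- Incremental (MTTR-driven) checkpoint targets.
        SELECT target_mttr, estimated_mttr, actual_redo_blks, target_redo_blks
        FROM   v$instance_recovery;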

    Regards
    Jonathan Lewis

  • doubt about the update

    Hi all

    I have three UPDATE statements that I need to run on a single table.
    Is there a way I can execute all three updates in a single statement?
        UPDATE xxops_forecast_extract b SET territory_id = (SELECT a.territory_id
             FROM fdev_hier_node_mv a
             WHERE a.shr_node_id = b.shr_node_id
              AND NVL(end_dt,SYSDATE) > SYSDATE) ;
        COMMIT;
    
        UPDATE xxops_forecast_extract b SET position_id = (SELECT a.row_id
            FROM s_postn a
            WHERE a.name = 'TD-'||UPPER(b.am_id))
            WHERE position_level = 7
            AND b.am_id IS NOT NULL;
        COMMIT;
      
        UPDATE xxops_forecast_extract b SET position_id = (SELECT a.row_id
            FROM s_postn a
            WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
            WHERE position_level = 7
            AND b.am_id IS NULL;
     Below are the sample data for the tables. 
    
     xxops_forecast_extract 
     shr_node_id am_id position_name  position_id  territory_id
     2231211     Dave     (null)        (null)       (null)
     2231211     Michele  (null)        (null)       (null)
     2231211     (null)   COMM WEST 230 (null)       (null)
     2231211     (null)   COMM ISAM 110 (null)       (null)
    
     fdev_hier_node_mv
     shr_node_id territory_id 
      2231211      5694
    
    
     s_postn
     row_id    name       desc_text
     12122   TD-Dave     (null)
     12123   TD-Michele  (null)
     89381   (null)          COMM WEST 230
     89382   (null)          COMM ISAM 110
    
     Resulting table after update
    
     xxops_forecast_extract 
     shr_node_id am_id position_name  position_id  territory_id
     2231211     Dave     (null)        12122       5694
     2231211     Michele  (null)        12123       5694
     2231211     (null)   COMM WEST 230 89381       5694
     2231211     (null)   COMM ISAM 110 89382       5694
    Thank you all.

    Hello

    You can combine the statements by combining the subqueries.
    Any logic that does not apply to all of the original updates must be taken out of the WHERE clause and put into a CASE expression.
    The CASE expression should "update" the column to itself if none of the conditions apply.

    For example, your final two UPDATE statements, which both have subqueries on s_postn, can be combined like this:

    UPDATE xxops_forecast_extract b
    SET    position_id =
           ( SELECT CASE
                      WHEN  (     b.am_id IS NOT NULL
                              AND UPPER (a.name) = 'TD-' || UPPER (b.am_id)
                            )
                      OR    (     b.am_id IS NULL
                              AND UPPER (a.desc_text) = UPPER (TRIM (b.position_name))
                            )
                      THEN  a.row_id
                      ELSE  b.position_id
                    END
             FROM   s_postn a
             WHERE  UPPER (a.name)      = 'TD-' || UPPER (b.am_id)
             OR     UPPER (a.desc_text) = UPPER (TRIM (b.position_name))
           );
    

    There seem to be some mistakes in the UPDATE statements you posted. For example, the last two refer to a column called position_level that does not appear in the sample data.
    The statement above produces the results you want with the data you've posted.

    As you can see, the code is much harder to understand, debug and maintain.
    Whether the performance gain (if any) justifies the additional complexity is debatable in this case.
    I'm not sure that combining all three queries would be worthwhile.

    Consider using MERGE: it is sometimes easier to use, even if, as in this case, you never insert anything.
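
    For illustration, here is a hedged MERGE equivalent of the combined UPDATE above, under the same table and column assumptions (note that MERGE raises ORA-30926 if a target row matches more than one source row):

        MERGE INTO xxops_forecast_extract b
        USING s_postn a
        ON (   UPPER (a.name)      = 'TD-' || UPPER (b.am_id)
            OR UPPER (a.desc_text) = UPPER (TRIM (b.position_name)) )
        WHEN MATCHED THEN
            UPDATE SET b.position_id = a.row_id;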

  • HP Pavilion dv4-1120br: doubts about the BIOS update for my model with the current operating system installed

    Hello

    I have an HP Pavilion dv4-1120br. I know this HP laptop model is old, but I installed 64-bit Windows 10 on my machine and saw that my BIOS is version F.30 while the latest available on the HP site is F.66.

    The problem is that this model is not supported for Windows 10 because, as I said, it is too old. Can I download and run the BIOS update for Windows 7 64-bit available on the website, or do I need to downgrade my OS to Windows 7, update the BIOS, and then go back to Windows 10?

    Thanks for the help!

    Cantarino

    Hello;

    Let me welcome you to the HP forums!

    Your second proposal is correct - the BIOS updates on the HP site are specific to the OS version, and if you try to run one for Win7 on Win10, it will refuse to run.

    Good luck

  • Doubt about the array or table

    I need to write some values that I read from the serial port, but I need to increment the column automatically for each new number.

    What happens is that with each new number, the previous one turns to zero and I lose it.

    How can I store the last number and receive the next number in the next column?

    Kind regards.


  • Doubts about speed

    Hello gentlemen;

    I have a few questions I would like to ask the more experienced people here. I have a program running on a computer that has an i7 processor; it is on this computer that I program in LabVIEW. Meanwhile, in another lab, we have another PC, a little older, a 2.3 GHz dual core; on that PC we run a testing platform for a couple of modems - let's not get into the details.

    My problem is that I recently discovered that my program, which I wrote on the i7 computer, runs much slower on the other machine, the dual core, so the timings are all wrong and the program does not run correctly. For example, there is a table with 166 values which, on the i7 machine, is filled quickly, almost without delay; however, on the dual core machine, it takes a few milliseconds to fill about 20 values in the table, and because of the timing it cannot fill more values, so the waveform that I use is all wrong. This, of course, throws off the whole program and I can't use it for the test I need to integrate.

    I created an .exe of the program in LabVIEW and tried it on the other PC; that's how I arrived at this question.

    Now, I want to know whether the difference in the characteristics of the computers really can be a big enough problem that the program is slow on one machine. I know that, to make the program efficient, I should use state machines, subVIs, the producer-consumer idea and other things. However, I believe this is not a speed problem generated by the program, because if that were the case, the table would eventually fill completely; on the slow computer, however, it never fills more than 20 values.

    Something else: does it help to hide unnecessary variables on the front panel? For the time being I have to keep track of lots of variables in the program, so when I create the .exe I still see them running to keep this follow-up. In the final version I won't need them, so I'll delete some and hide others from the front panel. Does that make the program less demanding?

    I would like to read your comments on this topic, if you have any ideas about state machines, subVIs, etc., or whether there is a way to force the computer to use more resources for the LabVIEW program, etc.
    I'm not attaching any VI because, in its current state, I know you will say: state machines, subVIs and so on; and I think the main problem is the difference between the computers. I'm still working on the state machine/subVI/etc. side of things.

    Thank you once again.

    Kind regards

    IRAN.

    To get started: by using something suitable like a state machine to control flow, you can ensure that your large table is always filled completely before moving on, regardless of how long that takes. And, believe it or not, adding a delay to your loops will make the whole program run faster and smoother, because while loops are greedy and can consume 100% of CPU time just looping while waiting for a button press, while every other process fights for CPU time.

  • Doubt about the persistent object

    Hi friends,

    I stored data in a persistent object. After some time, my simulator took a long time to load the application, so I ran clean.bat to make the simulator fast. But after I ran clean.bat, the values I had stored in the persistent object were gone. Can someone tell me whether the persistent object data was lost because of the simulator, because of clean.bat, or for some other reason? Please clarify my doubt, friends...

    Kind regards

    s.Kumaran.

    It is because of clean.bat. Clean.bat will remove all applications and unnecessary files, etc...

  • Doubts about parallel migration from Lync 2013 -> Skype4B 2015 on VCS-C (not clustered)

    Hello everyone!

    As I saw in Cisco documents, the "B2BUA/Microsoft Interoperability" application on the VCS can "communicate" with just one instance of a Microsoft Lync server pool, but we need to migrate the Lync servers in parallel to the Skype servers, and we need a few "maintenance windows" to migrate all users!

    Can we keep communication 'UP' from the VCS to both server pools (Lync and Skype) until the end of the migration? Can the legacy Lync Server 2013 (shared resources) that communicates with the VCS today talk to users already migrated to Skype 2015 over the existing Lync TLS trunk?

    I think we generate another certificate for TLS and add the Skype servers to the "trusted hosts" option; is that okay, or have I forgotten something? Or are there other ways to connect two Microsoft server pools to one VCS-C with the "B2BUA/Microsoft Interoperability" application?

    Thanks for helping me!

    To see some possible examples of deployment options, refer to Appendix 3 of the Microsoft Infrastructure (X8.8) Deployment Guide; overall, I suggest that you also look over the full guide, as it might answer some of your questions about what is supported.

  • doubts about the CSS class...

    I tried to load a background image into the Universal Theme of APEX 5 on the login page,

    and I used the code found in the following link and got it to work:

    Apex 5.0: Theme Roller and background image

    But I have a doubt that may be very simple for the CSS professionals.

    .t-PageBody--login .t-Body
    {
        background: url("Sports.jpg") repeat top center white scroll;
        color: #000000;
        font-family: Arial, Helvetica, sans-serif;
        font-size: 12px;
        line-height: 17px;
    }

    .t-PageBody--login .t-Body

    How do you know that .t-PageBody--login .t-Body was the main class to change?

    Let me know if my interpretation is correct:

    .t-PageBody--login is the main class

    and .t-Body is the sub class?


    pauljohny100 wrote:

    How do you know that .t-PageBody--login .t-Body was the main class to change?

    Let me know if my interpretation is correct:

    .t-PageBody--login is the main class

    and .t-Body is the sub class?

    .t-PageBody--login .t-Body is a descendant selector. It matches any element with a class attribute that contains the value t-Body and has an ancestor element whose class attribute contains the value t-PageBody--login. There is no concept of a 'main' class or 'sub' class in CSS. The required selector was probably determined by examining the login page using a web inspector.

    It is advisable to take some tutorials to get at least a basic understanding of web technologies when you work with APEX.

  • Doubt about LDAP synchronization

    Hi all

    I have LDAP sync enabled on my OIM server. I have also installed the OID connector. I installed it since I want a user to be able to see the OID User resource provisioned to him in the "Resources" tab. Now, whenever I create a new user, the user is created successfully. I also have an access policy that grants the user the OID User resource based on his role. Once the user is created, I can see him in OID. Of course, he is placed in the default cn=Users container, but I read here that this is configurable from the LDAP container rules XML file. Now, this provisioning to OID happens via LDAP synchronization, so I do not see any resource under the "Resources" tab. Then I grant the user the OID resource by attaching the role to him, and now he gets provisioned to OID as well. I can see that, based on the prepopulate mappings I put in place, this user gets provisioned to the correct container in OID. But the problem is that I now find myself with two users with the same name and details in the OID directory. I don't want that to happen. Is there some way I can somehow cut off the OID LDAP synchronization on the create user operation, so that provisioning happens only when I apply the role, and therefore into the correct container?

    Thank you
    $id

    This is where a solid knowledge of OIM is required. The connector should be re-evaluated. For example, if the user already exists, you know that you cannot use the default create user task. You will need to make it just an autocompleted task, since you know that each user will already exist. You must also remove all your form variables that are managed from the OIM user profile.

    I suggest the following:

    Change your form to include only the user ID, the common name, the orclGUID and the organization name. You can use a prepopulate adapter on all of those that come from the user profile, because they already exist. If you need to move the user to a different OU, then after execution of the autocompleted task that sets the provisioned status, you could trigger an update task on the organization name field, which then moves the user to the appropriate organizational unit.

    You really need to think through all the tasks, what is involved, and change the connector accordingly. When you implement two methods that accomplish the same thing, you need to remove a few pieces from one of them. Look at all of the tasks that will be required and the actions that they carry out. Some of them will have to be auto-completed so that you always see the correct resource status.

    -Kevin
