Doubt about the database writer (DBWR)

Friends, according to the docs, DBWR is described as below:

"The process of writer (DBWn) of database written periodically cold, dirty buffers to disk. DBWn writes tampons in the following circumstances:

A server process not found own buffers to read the new blocks in the database buffer cache.

As the pads are dirty, the number of free buffers decreases. If the number is less than an internal threshold and so clean buffers are needed, then the process server report DBWn to write.

The database uses the LRU to determine what dirty buffers to write. When stamps Sales reach the cold end of the LRU, the database moves off the LRU to a write queue. DBWn wrote buffers in the queue on the disc, using if possible written multiblock. This mechanism prevents the end of the LRU get clogged with dirty buffers and allows own buffers available to be reused.

The database must move the control point, which is the position in the thread of Redo from which must begin the recovery instance.

Storage spaces are turned into read-only or offline status.
"

Can someone explain the 4th scenario, the one that says "the database must advance the checkpoint, which is the position in the redo thread from which instance recovery must begin"? In what way does that make DBWR write to the data files?

Thanks in advance.

918868 wrote:
Why does the database advance the checkpoint? Aman said in the last post that Oracle does not advance the checkpoint because of SCN generation/updates but for other reasons. What is the relationship of this buffer writing to advancing the checkpoint?

There are many different types of "checkpoint" (see http://jonathanlewis.wordpress.com/2007/04/12/log-file-switch/); the two best known are the log file switch checkpoint and the incremental checkpoint.

The log file switch checkpoint (which I think is labelled the thread checkpoint) is the one that results in Oracle writing data buffers to the data files if their first roll-forward change vector is recorded in a given log file; this checkpoint results in an update to each data file header AFTER the checkpoint is complete, and means the redo log file can then be overwritten.

The incremental checkpoint is fired every three seconds by DBWR itself; it tells DBWR to recalculate a new target SCN and redo block address (based on things like the MTTR, the fast_start_io_target, the log_checkpoint_interval or the log_checkpoint_timeout) and write out any data blocks that were first changed before that SCN; this SCN is recorded only in the control file.

The first type of checkpoint allows Oracle to choose the right log file to start recovery from; the second tells it where to start within the file.
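
As an aside (an illustration added for clarity, not part of the original reply), both positions can be observed from the data dictionary. A minimal sketch, assuming SYSDBA access on 10g or later:

SELECT d.checkpoint_change# AS controlfile_ckpt_scn   -- advanced by the incremental checkpoint
FROM   v$database d;

SELECT h.file#,
       h.checkpoint_change# AS file_header_ckpt_scn,  -- updated when a full checkpoint completes
       h.checkpoint_time
FROM   v$datafile_header h
ORDER  BY h.file#;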

Regards
Jonathan Lewis

Tags: Database

Similar Questions

  • Basic tuning question about the buffer cache

    Database: Oracle 10g
    Host: Sun Solaris, 16 CPU server



    I am looking at the behaviour of some simple queries as I start tuning our data warehouse.

    Using SQL*Plus and AUTOTRACE, I ran this query twice in a row:

    SELECT *
    FROM PROCEDURE_FACT
    WHERE PROC_FACT_ID BETWEEN 100000 AND 200000

    It used the index on PROC_FACT_ID and performed an index range scan to access the table data by rowid. The first time it ran, there were about 600 physical block reads as the table data was not in the buffer cache. The second time, there were 0 physical block reads, because the blocks were all in the cache. All this was expected behaviour.

    Then I ran this query twice:

    SELECT DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD

    As expected, it did a full table scan, because there is no index on DATA_SOURCE_CD, and then hashed the results to find the distinct DATA_SOURCE_CD values. The first run had these results:

    consistent gets 190496
    physical reads 169696

    The second run had these results

    consistent gets 190496
    physical reads 170248


    NOT what I expected. I would have thought that the second run would find many of the blocks already in the buffer cache from the first execution, so that the number of physical reads would drop significantly.

    Any help to understand this would be greatly appreciated.

    And is there something that can be done to keep the table PROCEDURE_FACT (the central table of our star schema) "pinned" in the buffer cache?

    Thanks in advance.

    -chris Curzon

    Christopher Curzon wrote:
    Your comment about the buffer cache being used for smaller objects that benefit from it is something I have wondered about a good deal. It sounds as if tuning the buffer cache will have little impact on queries that scan entire tables.

    Chris,

    If you can afford it and you think it is a reasonable approach with regard to the remaining segments that are supposed to benefit from the buffer cache, you could consider marking your table segment with 'CACHE', which changes the caching behaviour of a full table scan on a large segment (Oracle treats small and large segments differently during full table scans with regard to caching the buffers; you can override this treatment using the CACHE/NOCACHE keywords), or move your fact table to a KEEP pool by setting one up (ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = <size>), altering the segment accordingly (ALTER TABLE ... STORAGE (BUFFER_POOL KEEP)) and performing a full table scan to load the blocks into the KEEP cache.
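
    As a minimal sketch of the two approaches just described (PROCEDURE_FACT and the 200M size are only placeholders, adjust to your system):

    -- Option 1: mark the table as CACHE so a full scan does not immediately
    -- push its blocks to the cold end of the default LRU.
    ALTER TABLE procedure_fact CACHE;

    -- Option 2: size a KEEP pool, assign the table to it, then prime it.
    ALTER SYSTEM SET db_keep_cache_size = 200M;
    ALTER TABLE procedure_fact STORAGE (BUFFER_POOL KEEP);
    SELECT /*+ FULL(procedure_fact) */ COUNT(*) FROM procedure_fact;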

    Note that the disadvantage of the KEEP pool approach is that you have less memory available for the default buffer cache (unless you add more memory to your system). An object marked as CACHE still competes with the other objects in the default buffer cache, so it could still be aged out (the same applies to the KEEP pool: if the segment is too large, or too many segments are assigned to it, blocks age out there as well).
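
    If you want to check how much of the table is actually cached at any point in time, one possible query (an illustration only, assuming access to V$BH and DBA_OBJECTS) is:

    -- Buffers currently cached for PROCEDURE_FACT, broken down by buffer status.
    SELECT b.status, COUNT(*) AS buffers
    FROM   v$bh b
    JOIN   dba_objects o ON o.data_object_id = b.objd
    WHERE  o.object_name = 'PROCEDURE_FACT'
    GROUP  BY b.status;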

    So my question: how can I get a parallel scan for queries that do a full table scan, such as the one I posted in my previous email? Is it a matter of supplying the "parallel" hint, or is there an init.ora parameter I should try?

    You can use a PARALLEL hint in your statement:

    SELECT /*+ PARALLEL(PROCEDURE_FACT) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    or you could mark an object as PARALLEL in the dictionary:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL;
    

    Note that since you have 16 processors (or 16 cores that appear to Oracle as 32? Check the CPU_COUNT setting), the default parallel degree would usually be 2 times 16 = 32, which means that Oracle starts at least 32 parallel slaves for a parallel operation (it could be another set of 32 slaves if the operation includes, for example, a GROUP BY operation) if you do not use the PARALLEL_ADAPTIVE_MULTI_USER parameter (which allows parallelism to be reduced when several parallel operations run concurrently).
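
    To check those settings (just an illustrative query, assuming you have access to V$PARAMETER):

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('cpu_count',
                    'parallel_threads_per_cpu',
                    'parallel_max_servers',
                    'parallel_adaptive_multi_user');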

    I recommend choosing a parallel degree lower than your default of 32, because you usually don't gain much from such a high degree; you can often get the same performance with a lower setting like this:

    SELECT /*+ PARALLEL(PROCEDURE_FACT, 4) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    The same can be applied when parallelizing the object:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL 4;
    

    Note that when the object is defined as PARALLEL, many operations will be parallelized (even DML can be run in parallel if you enable parallel DML, which has some special restrictions), so I recommend using this with caution and beginning with an explicit hint in those statements where you know it will be useful.

    Also check that your PARALLEL_MAX_SERVERS is high enough when you use parallel operations; the default should be sufficient in your version of Oracle.

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • I have a doubt about the .folio file and publications

    Hello, I'm new here.

    I want to start working with DPS, but I have a doubt about which version to buy.

    At the moment I have one customer who just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish a single .folio file. What does that mean? Does each folio file represent a publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog post I wrote comparing Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • Doubt about the Index

    Hi all

    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production."
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a question about indexes. Is it required to have a "WHERE" clause for an index to be used? I tried to find this out myself but could not.
    In this example I haven't used a WHERE clause, only GROUP BY, but it does a full scan. Is it possible to get an index range scan or something else by using GROUP BY?
    SELECT tag_id FROM taggen.tag_master GROUP by tag_id 
    
    Explain Plan:
    Plan hash value: 1688408656
     
    ---------------------------------------------------------------------------------------
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   1 |  HASH GROUP BY        |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   2 |   INDEX FAST FULL SCAN| TAG_MASTER_PK |  4045 | 20225 |     5   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Hello

    SamFisher wrote:
    Since it is doing a full scan, is it possible to restrict the full scan without using a WHERE clause?
    I guess maybe a HAVING clause, but I don't quite know.

    Why?
    If this query is producing the correct results, then you need a full scan.
    If you somehow fool the optimizer into doing a range scan, it will be slower.
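
    Purely as an illustration (not a recommendation), this is the kind of hinted rewrite that would force a range scan; the dummy predicate assumes TAG_ID values are positive:

    SELECT /*+ INDEX(t TAG_MASTER_PK) */ tag_id
    FROM   taggen.tag_master t
    WHERE  tag_id > 0        -- dummy predicate only to enable a range scan
    GROUP  BY tag_id;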

  • Some doubts about the topology, interfaces and security modules

    Hello

    Below are some questions about ODI:


    1. To use an LKM, does ODI always require two different DATASERVERS (one for the SOURCE and another for the TARGET)?

    2. What would be the best way to create a new IKM with GROUP BY clauses?

    3. What is the minimum PROFILE required so that developer users can import projects created in other ODI environments?

    4. If a particular WORK_REP is lost, is it possible to retrieve projects from the version control information stored in the MASTER_REP?

    1.) Yes. An LKM always loads data from one data server to another.
    More than once I have seen that even though there is a single physical server, several servers were configured in Topology Manager. This leads to the use of an LKM because ODI considers them 2 different servers.
    If the physical server is defined only once, an LKM won't be necessary.

    2.) IKM automatically adds a GROUP BY clause if it detects an aggregation function in the Interface implementation.

    3.) Try using the NG DESIGNER profile.

    4.) This is not an easy task, but all the versioned objects are compressed and stored in a BLOB field in the master repository.
    You will need to know the names and versions of the objects you need to recover.
    SNP_VERSION and SNP_DATA have this information. Retrieve the BLOB field from SNP_DATA and unpack it using a zip utility. This will give you the XML export of the object that was versioned.
    Now you can import this XML file and retrieve the object.

    You will need to loop through all the records in order of I_DATA, extract each one to an .xml file, and then import them to rebuild the work repository.
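
    A rough PL/SQL sketch of that extraction loop is below. Only I_DATA is named above; the BLOB column name (BLOB_DATA) and the directory object (DUMP_DIR) are assumptions, so check the actual SNP_DATA layout in your repository before using it.

    -- Hypothetical sketch: dump each versioned object's zipped XML to a file.
    DECLARE
      l_file   UTL_FILE.FILE_TYPE;
      l_raw    RAW(32767);
      l_amount PLS_INTEGER;
      l_pos    PLS_INTEGER;
    BEGIN
      FOR rec IN (SELECT i_data, blob_data FROM snp_data ORDER BY i_data) LOOP
        l_file   := UTL_FILE.FOPEN('DUMP_DIR', 'snp_data_' || rec.i_data || '.zip', 'wb', 32767);
        l_pos    := 1;
        l_amount := 32767;
        WHILE l_pos <= DBMS_LOB.GETLENGTH(rec.blob_data) LOOP
          DBMS_LOB.READ(rec.blob_data, l_amount, l_pos, l_raw);
          UTL_FILE.PUT_RAW(l_file, l_raw, TRUE);
          l_pos := l_pos + l_amount;
        END LOOP;
        UTL_FILE.FCLOSE(l_file);
      END LOOP;
    END;
    /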

  • Some doubts about navigation in Unifier

    Hi all

    I have a few questions about Unifier navigation.

    Is it possible to move admin mode access to the user mode access level?

    I mean, if a particular feature such as the shell manager can only be accessed from admin mode, is it possible to provide access to it in user mode as well?

    If so, how?

    My 2nd doubt is: currently, we can access company-level BPs, the "Company Journal" or the "Resource Manager", under the 'Company Workspace' shell.

    Is it possible to move the "journal of the society" or "Resource Manager" in the folder? If yes how?

    I tried in "user mode navigation" to move the company-level BPs to the home shell, but I can't do it.

    To answer your questions:

    (1) The user mode navigator can have user features included. You cannot change the admin mode view or move admin functions to user mode.

    (2) You cannot move these to the Home tab.

  • Doubts about Cache Fusion - PI (Past Image) in RAC

    We are on an Oracle 10.2.0.4 database, 2-node RAC configuration on Linux x86_64.

    I was going through the Oracle documentation and some books that discuss Cache Fusion (read-read, read-write, write-write behaviour), etc.

    It was mentioned that row locking is similar to single instance, so whenever a block is needed, an instance finishes its operation on the block and transfers the block to the other instance that requires exclusive access.
    What I understood is that once the transaction is completed, the first instance transfers the block to the other instance so that it can have exclusive access to this block.

    Also, the first instance maintains a PI (Past Image) of the block that it can use for read operations, but it can't change the block as it has transferred the block to the other instance and only has shared access.

    Now when a checkpoint occurs, all instances should flush their PIs. In this case, if the first instance is in the middle of a select statement that uses the PI of the block, what will happen to the query? Will it cause an error (something like snapshot too old)?

    I'm not able to understand the notion of a PI and what happens when the data blocks are flushed to disk.

    Evelyne says:

    What I don't understand is: what is the meaning of 'Image' here?

    A PI (past image) is a copy of the block which is known to have existed as the current version (CU) of the block at some point in the past, so it can be used as the starting point for recovery if the node that holds the current block goes down. In this way PI blocks can reduce recovery times in the database.

    In my view, PI blocks can also be used by the holder of the PI to generate local CR copies without reference to DRM, if the SCNs are appropriate. Thus PIs can also help reduce interconnect traffic.

    Initially, there was a rule that when a node had written the current block, it should send a message to all other nodes to mark their PI copies as free buffers (in effect, forget them), although I once saw a note that Oracle may change this to convert PI blocks into CR blocks.
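
    As a side note (an illustration added here, not part of the original reply), past image buffers can be seen in GV$BH, where they show up with status 'pi':

    -- Buffer states per instance; past images appear with status 'pi' in RAC.
    SELECT inst_id, status, COUNT(*) AS buffers
    FROM   gv$bh
    GROUP  BY inst_id, status
    ORDER  BY inst_id, status;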

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "All experts it is a equal and opposite expert."
    Clarke

  • Doubt about the matrix or table

    I need to write some values that I read from the serial port, but I need to increment the column automatically with each new number.

    What happens is that with each new number, the previous one turns to zero and I lose it.

    How can I store the last number and receive the next number in the next column?

    Kind regards.


  • Doubts about the speed

    Hello gentlemen;

    I have a few questions I would like to ask the more experienced people here. I have a program running on a computer with an i7 processor; this is the computer on which I programmed it in LabVIEW. Meanwhile, in another lab, we have another PC, a little older, a 2.3 GHz dual core; on that PC we run a testing platform for a couple of modems, let's not get into the details.

    My problem is that I recently discovered that my program, which I wrote on the i7 computer, runs much slower on the other machine, the dual core, so the timings are all wrong and the program does not run correctly. For example, there is a table with 166 values which, on the i7 machine, is filled quickly with almost no delay; however, on the dual core machine it takes a few milliseconds to fill about 20 values in the table, and because of the timing it cannot fill more values, so the waveform I use is all wrong. This, of course, throws off the whole program and I can't use it for the test I need to integrate.

    I created a .exe of the program in LabVIEW and tried it on the different PCs; that's how I got to this question.

    Now, I want to know whether there really is a big problem due to the characteristics of the computer that makes the program slow on one machine. I know that to make the program efficient I need to use state machines, sub-VIs, the producer-consumer pattern and other things. However, I suspect this is not a problem of speed generated by the program, because if that were the case, the table would eventually fill completely; on the slow computer, however, it never fills with more than 20 values.

    Also, does it help to hide unnecessary variables on the front panel? For the time being I have to keep track of lots of variables in the program, so when I create the .exe I still see them running to keep that follow-up. In the final version I won't need them, so I'll delete some and hide some from the front panel. Does that help it require fewer resources?

    I would like to read your comments on this topic, whether you have any ideas about state machines, sub-VIs, etc., or whether there is a way to force the computer to use more resources for the LabVIEW program, etc.
    I'm not attaching any VI because, in its current state, I know you will say "state machines, sub-VIs" and so on, and I think the main problem is the difference between the computers; I'm still working on the state machine/sub-VI/etc. side of things.

    Thank you once again.

    Kind regards

    IRAN.

    To start with, by using something suitable such as a state machine for dataflow you can ensure that your large table is always filled completely before moving on, regardless of how long it takes. Believe it or not, adding a delay to your loops will make the whole program run faster and smoother, because while loops are greedy and can consume 100% of the CPU time just looping while waiting for a button press, while all other processes are fighting for CPU time.

  • Doubt about the persistent object

    Hi friends,

    I've stored data in a persistent object. After some time, my simulator was taking a long time to load the application, so I ran clean.bat to make the simulator fast. But after I ran clean.bat, the values I had stored in the persistent object had disappeared. Can someone tell me whether the persistent object data was lost because of the simulator, because of clean.bat, or for some other reason? Please clarify my doubt, friends...

    Kind regards

    s.Kumaran.

    It is because of clean.bat. Clean.bat will remove all applications and unnecessary files, etc.

  • Doubts about parallel migration from Lync 2013 -> Skype4B 2015 on VCS-C (not clustered)

    Hello everyone!

    As I saw in the Cisco documents, the "B2BUA/Microsoft Interoperability" application on VCS can "communicate" with just one instance of a Microsoft Lync server pool, but we need to migrate the Lync servers in parallel to the Skype servers, and we need a few "maintenance windows" to migrate all users!

    Can we keep communication 'UP' from the VCS to both server pools (Lync and Skype) until the end of the migration? Can the legacy Lync Server 2013 (shared) that works with the VCS today communicate with users already migrated to Skype 2015 through the existing Lync TLS trunk?

    I think we generate another TLS certificate and add the Skype servers under the "trusted hosts" option; is that okay, or have I forgotten something? Or are there other ways to make two Microsoft server pools communicate with one VCS-C using the "B2BUA/Microsoft Interoperability" application?

    Thanks for helping me!

    To see some possible examples of deployment options, refer to Appendix 3 of the Microsoft Infrastructure (X8.8) Deployment Guide; I would also suggest that you look over the guide in full, as it might answer some of your questions about what is supported.

  • Doubts about the CSS class...

    I tried to load a background image in the APEX 5 Universal Theme on the login page,

    and I used the code found in the following link and got it working:

    Apex 5.0: Theme Roller and background image

    But I have a doubt that may be very simple for the CSS professionals.

    .t-PageBody--login .t-Body
    {
        background: url("Sports.jpg") repeat top center white scroll;
        color: #000000;
        font-family: Arial, Helvetica, sans-serif;
        font-size: 12px;
        line-height: 17px;
    }

    .t-PageBody--login .t-Body

    How do you know that .t-PageBody--login .t-Body was the main class to change?

    Let me know if my interpretation is correct

    .t-PageBody--login is the main class

    and .t-Body is the upper class?


    pauljohny100 wrote:

    But I have a doubt that may be very simple for the CSS professionals.

    How do you know that .t-PageBody--login .t-Body was the main class to change?

    Let me know if my interpretation is correct

    .t-PageBody--login is the main class

    and .t-Body is the upper class?

    .t-PageBody--login .t-Body is a descendant selector. It matches any element with a class attribute containing the value t-Body that has an ancestor element with a class attribute containing the value t-PageBody--login. There is no concept of a 'main' class or an 'upper' class in CSS. The required selector is likely to have been determined on the login page using a web inspector.

    It is advisable to take some tutorials to get at least a basic understanding of web technologies when you work with APEX.

  • Doubt about the LDAP synchronization

    Hi all

    I have LDAP sync enabled on my OIM server. I also installed the OID connector. I installed it because I want a user to be able to see the OID user resource provisioned to him in the "Resources" tab. Now, whenever I create a new user, the user is created successfully. I also have an access policy that grants the user the OID user resource based on his role. Once the user is created, I see him in OID. Of course, he is provisioned into the default cn=Users container, but I read here that this is configurable from the LDAP container rules XML file. This provisioning into OID happens through LDAP synchronization, and so I do not see any resource under the "Resources" tab. Then I grant the user the OID resource by attaching the role to him, and now he gets provisioned to OID as well. Based on the pre-populate adapters I put in place, this user gets provisioned to the correct container in OID. But now I find myself with two users with the same name and details in the OID directory. I don't want that to happen. Is there some way I can somehow cut off the OID LDAP synchronization for the create user operation, so that provisioning happens only when I apply the role, and therefore into the correct container?

    Thank you
    $id

    This is where a solid knowledge of OIM is required. The connector should be re-evaluated. For example, if the user already exists, you know that you cannot use the default Create User task. You will just need to auto-complete the task, since you know that each user will already exist. You must also remove from your form all the variables that are managed from the OIM user profile.

    I suggest the following:

    Change your form to include only the user ID, the common name, the orclGUID and the organization name. You can use a pre-populate adapter on all those that come from the user profile, because they already exist. If you need to move them to a different OU, then after execution of the auto-complete that sets the status to Provisioned, you could trigger an update task on the organization name field, which then moves the user to the appropriate organizational unit.

    You really need to think through all the tasks, what is involved, and how to change the connector. When you implement two methods that accomplish the same thing, you need to remove a few pieces from one of them. Look at all of the tasks that will be required and the actions they carry out. Some of them will have to be auto-completed so you always see the correct resource status.

    -Kevin

  • Doubt about the restoration

    Hi all

    I have a doubt.

    I took a backup of a database.

    While restoring it, I found that it is storing the control file in the $ORACLE_HOME/dbs folder instead of the actual location.

    Can someone explain to me why this happens?

    The actual controlfile location is '/orasoft/test'.

    If 'restore spfile' already fails, you have a "dummy" spfile which does not have a CONTROL_FILES entry. I guess that you did not specify the DBID, which is mandatory when a recovery catalog is not used. Here is an example from the documentation of how to restore the spfile:

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14192/recov004.htm#sthref582

    After a successful spfile restore, issue 'startup force nomount', so that the instance is restarted with a correct spfile. To restore the controlfiles and the 'rest' of the database, again follow the docs:

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14192/recov004.htm#sthref564
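
    A minimal RMAN sketch of that sequence (the DBID is a placeholder, and it assumes controlfile autobackups are available in the default location):

    SET DBID 1234567890;
    STARTUP NOMOUNT;                     # starts with a dummy parameter file if the spfile is missing
    RESTORE SPFILE FROM AUTOBACKUP;
    STARTUP FORCE NOMOUNT;               # restart using the restored spfile
    RESTORE CONTROLFILE FROM AUTOBACKUP;
    ALTER DATABASE MOUNT;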

    Werner

  • A doubt about the alert Message...

    Hello

    I am very new to Oracle Forms & I use Forms 6i.

    As I practise some basics, such as 'validation' of a text item, I went through a few examples of code for bringing up or popping up alert messages...

    Here is my code... in this I don't understand WHY we write the message('...') line TWICE...??

    Although I commented out the 2nd one... it seems one must write it twice... & what is the logic behind it...?

    Help, please...

    ---------------------------------------------------------------------------------------------------
    Begin

       If (:aijaz_emp_master.em_dob > :aijaz_emp_master.em_doj) then

          set_item_property('em_dob', current_record_attribute, 'VA_ERRORS');

          -- If (:aijaz_emp_master.em_dob is not null and (:aijaz_emp_master.em_doj > :aijaz_emp_master.em_dob)) then
          message('DOB must b NULL or Greater than DOJ');
          -- message('DOB must b NULL or Greater than DOJ');

          raise form_trigger_failure;

       else
          set_item_property('em_dob', current_record_attribute, 'VA_NOnERROR');
       end if;

    end;
    ---------------------------------------------------------------------------------------------------------

    MIRA,
    I agree that this can be a little confusing, but once you understand the difference between messages and alerts it will make sense.

    In Oracle Forms, a 'Message' is any information that is displayed in the 'Message line' of the Forms console and requires no interaction from the user. The console is displayed at the bottom of the window and includes the 'Message line' and the 'Status line'. If the message line already has a message, then Forms will promote that message to an alert (popup window) and the new/second message is displayed in the message line. In general, many Forms developers use this "double message" method to display information in an alert because it takes fewer lines of code than assigning the message text to an alert and then displaying the alert. It is advisable to always call the Clear_Message() built-in before calling the Message() built-in to prevent unintentional message promotion.

    An Alert is a physical object in an Oracle Form (note: there is an 'Alerts' node in the Forms Object Navigator). There are three types of Forms alerts: Note, Caution and Stop. Note alerts display information, Caution alerts display warnings, and Stop alerts display program errors or critical information. In order to display an alert, you must first create an Alert object in the Alerts node. You could create an Alert object for each of your messages, but this creates clutter and can be confusing. I prefer to use three alerts, one for each type (Note, Caution, Stop), and then programmatically set the title, the message text and the buttons (max 3). As you can see, there is more work involved in displaying an alert. In addition, the Show_Alert() built-in is a function, so you need a variable to receive the value returned by the call to Show_Alert.

    Here is an example of using messages in Forms:

    /* Standard message */
    BEGIN
       Clear_Message;
       Message('Display in the Message Line only');
    END;
    
    /* Message to display Alert */
    BEGIN
       Clear_Message; -- make sure you don't accidentally promote a different message
       Message('Display in the Default Alert');
       Message('Display in the Default Alert');
    END;
    
    /* Message displayed in a Note Alert */
    /* Assumes an Alert of type "Note" has already been created. */
    DECLARE
       al_id      ALERT;
       al_btn    NUMBER;
       v_msg    VARCHAR2(200); /* Max Characters that can be displayed in an Alert */
       v_title    VARCHAR2(78);  /* Max title characters */
    BEGIN
       al_id := Find_Alert('Note');
       IF NOT ID_NULL(al_id) THEN
          v_title := 'This is an informational message';
          v_msg := 'This is an informational message that requires the user to acknowledge the message';
          Set_Alert_Property( al_id, TITLE, v_title);
          Set_Alert_Property( al_id, ALERT_MESSAGE_TEXT, v_msg);
          al_btn := Show_Alert( al_id );
       END IF;
    END;
    

    Personally, we have a library package we created to simplify the display of alerts; it lets you set the properties of the alert and display it in a single call.

    /* In this sample, I use named notation to
    illustrate the meaning of the different parameters
    in our wrapper package. */
    DECLARE
       al_button  NUMBER;
    BEGIN
       IF (:aijaz_emp_master.em_dob > :aijaz_emp_master.em_doj) THEN
          set_item_property('em_dob', current_record_attribute, 'VA_ERRORS');
          al_button := message_pkg.warning( TITLE => 'Warning: DOB Required',
                                      MSG_TEXT => 'DOB must b NULL or Greater tahn DOJ',
                                            BUTTONS => 'OK');
          raise form_trigger_failure;
       ELSE
          set_item_property('em_dob', current_record_attribute, 'VA_NOnERROR');
       END IF;
    END;
    

    Hope this helps,
    Craig B-)

    If any reply was useful or appropriate, please mark it accordingly.

    Published by: Silvere December 13, 2010 10:31
