Rollback segments and log files

Hello everyone,

I have a question that has been intriguing me since I learned Oracle administration concepts.

Indeed, when a user issues an UPDATE statement (for example), the server process writes both the old and the new image of the data in the database buffer cache. It also records the same change as a redo entry in the redo log buffer.

Now, if DBWR is triggered before the COMMIT (which is of course possible), it writes the dirty blocks to the data files: the old image goes to a rollback segment and the new one to the table's segment. But just before that, it signals LGWR to write the corresponding entries to the redo log files.

What I don't understand is the purpose of the LGWR write. Indeed, if the system crashes at that moment, we still have the old image of the data (in the rollback segment) and recovery can be performed correctly! Why write the same information to disk twice at the same instant, once in the data file and once in the log file?

Thank you in advance for your answers; I hope I was clear!

I thought that when DBWR is triggered, it writes ALL the dirty blocks.

LOL, not always, and not often 'ALL'.
DBWR works from a list of dirty buffers. It may be asked to do a write by a process that cannot find a free buffer (and again, that process may not have searched the whole LRU list, only up to N buffers). DBWR then doesn't write ALL the dirty buffers.

Also, remember that even when DBWR writes all the buffers, it takes time to write them. It does NOT order the writes as "undo and table/index blocks of a single transaction as a set". It simply doesn't care whether a dirty buffer is an undo block or a table or index block.
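
To see that for yourself, you can count the dirty buffers by block class; a minimal sketch, assuming a privileged session that can query V$BH:

SELECT class#, COUNT(*) AS dirty_buffers
FROM   v$bh
WHERE  dirty = 'Y'          -- buffers DBWR still has to write
GROUP  BY class#
ORDER  BY class#;

-- class# 1 is ordinary data blocks; undo headers and undo blocks appear
-- under other class numbers in the same list, with nothing marking them
-- for special treatment.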

Hemant K Chitale
http://hemantoracledba.blogspot.com

Tags: Database

Similar Questions

  • Debug dump files and Setup log files in the Disk Cleanup window

    The two items above appear in my Disk Cleanup window, and I wonder if it is OK to remove them. The first occupies 283 KB and the other 2537 KB. The description of both in the Disk Cleanup window is "files created by Windows".

    I have a second question: I have about 5 GB of free space left on my C: drive. Is this considered sufficient, or should I free up more? I am a retired senior and only use the computer for basic tasks.

    I have a Dell Dimension 4100 desktop with Windows XP Home Edition Service Pack 3, a Pentium III at 930 MHz, a 20 GB hard drive and 512 MB RAM. The file system is FAT32.

    Thank you very much for your cooperation and your response.

    "Debug dump files" are copies of the contents of your computer's memory.

    When a program crashes, it will sometimes "dump" all or part of the
    contents of the RAM to a file on your hard drive. The file is useful to
    a technical support person who tries to understand why the program
    crashed. (Most of the tech support people have no idea what to do with a dump
    file.)

    Unless you have been asked to submit a dump file for inspection, feel free
    to get rid of them.

    Setup log files are "logs" of a program's installation. If you
    look at one (they are really just text files; open them with Notepad) you will
    see that they give a blow-by-blow description of how the program was
    installed, in glorious detail.

    If a program did not install properly, the Setup log will tell you what
    went wrong. If you haven't had a failed installation, feel free to ditch
    these files as well.

    On the question of how much free disk space is enough... there is no
    standard. 15% of your drive must be empty in order for the Defragmenter
    built into Windows to work. Beyond that, it is up to you.
     
    Your processor is another story. It's not easy to find effective security
    software that will run on a P3. Even if you only use the computer
    for basic tasks, an internet connection is all an online
    criminal needs to find you.

    Another response from the Windows XP newsgroups community

  • Problem with Oracle Net, Oracle Listener and log files

    Hi all
    I don't know if this section of the forum is appropriate for my question; I'll move this thread to the right section as soon as someone points that out.

    I am having some difficulty troubleshooting the communication between OEM and its listener. It used to work fine, until I tried to add SQL Developer to the picture. While I was trying to establish a connection in SQL Developer, I started OEM at the same time to compare its Net connection configuration. Unfortunately, today both of them (OEM and SQL Developer) failed to start. I tried to look at what is happening in the sysman/log directory, but there was nothing there except emrep_config.log, which is confusing because I expected more log files.

    The bottom line is, my Oracle Net setup is corrupted and I'm looking for advice to solve the problem. Please help.
    BTW, I have:
    SQL> select * from v$version;
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Best regards
    Val

    As you did not provide the error messages, and I'm not sitting at your system:

    http://download.Oracle.com/docs/CD/E11882_01/network.112/e10836/trouble.htm#i466349

    Which, of course, you should have gone through before posting, but then 100 percent of the people posting here refuse to use the documentation, so you are no exception to the rule.

    ------------
    Sybrand Bakker
    Senior Oracle DBA

  • Hyperion 11.1.2.3 application and server log files

    Hello

    I am using Hyperion version 11.1.2.3, and currently the Essbase application logs are configured to be generated on the Essbase server. Could you please let me know whether I can configure Essbase to write the application and Essbase log files to another server.

    Thank you

    Michel K

    You can change the location of the logs with an essbase.cfg setting:

    DEFAULTLOGLOCATION
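
    For example, a minimal essbase.cfg entry might look like this (the path is hypothetical, and the exact argument form should be checked against the Essbase Technical Reference):

    DEFAULTLOGLOCATION /shared/essbase/logs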

    If you want to change the location of the ODL log files, then you need to edit the logging.xml file in the Essbase bin directory.

    Cheers

    John

  • segments and data files

    Can someone clarify the following for me...

    If a tablespace includes a number of segments... is this true:

    - A tablespace can contain a number of files, each consisting of a number of segments?

    Also, when table partitioning is involved... let's say we partition a table across two separate drives... one segment would be placed in a data file on disk A and the other segment in a data file on disk B... but where is the information relating to the global table held (if that makes sense... that is to say, where would the information about the partitions be held)?

    Any help would be appreciated

    - A tablespace can contain a number of files, each consisting of a number of segments?

    True and half true. Segments in a tablespace may have extents in any, some, or all of that tablespace's data files. The user has no control over which data file inside a tablespace is used for a segment.
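
    You can verify this by listing where a segment's extents actually landed; a sketch (the tablespace name here is hypothetical):

    SELECT segment_name, file_id, COUNT(*) AS extents
    FROM   dba_extents
    WHERE  tablespace_name = 'USERS'
    GROUP  BY segment_name, file_id
    ORDER  BY segment_name, file_id;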

    but where is the information relating to the global table held

    The attributes of the table are stored in the data dictionary metadata, which lives in the SYSTEM tablespace. However, a partitioned table also has a default tablespace associated with it; partitions for which you do not specify a tablespace go there.

    Example: MP_OHDB is the table's default tablespace, but each subpartition sits in its own tablespace!

    CREATE TABLE MEM_NBR
    (
      BUSINESS_DATE  DATE,
      INSTANCE       NUMBER(5),
      LOGBAPI_NBR    NUMBER(20)
    )
    TABLESPACE MP_OHDB
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    PARTITION BY RANGE (BUSINESS_DATE)
    INTERVAL( NUMTODSINTERVAL(1,'DAY') )
    SUBPARTITION BY LIST (INSTANCE)
    SUBPARTITION TEMPLATE
      (SUBPARTITION INSTANCE_1 VALUES (1) TABLESPACE MP_OH_GEN_P1,
       SUBPARTITION INSTANCE_2 VALUES (2) TABLESPACE MP_OH_GEN_P2,
       SUBPARTITION INSTANCE_3 VALUES (3) TABLESPACE MP_OH_GEN_P3,
       SUBPARTITION INSTANCE_4 VALUES (4) TABLESPACE MP_OH_GEN_P4,
       SUBPARTITION OTHERS VALUES (DEFAULT) TABLESPACE MP_OHDB
      )
    (
      PARTITION P0 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        NOLOGGING
        NOCOMPRESS
        TABLESPACE MP_OHDB
        PCTFREE    10
        INITRANS   1
        MAXTRANS   255
        STORAGE    (
                    BUFFER_POOL      DEFAULT
                   )
      ( SUBPARTITION P0_INSTANCE_1 VALUES (1)    TABLESPACE MP_OH_GEN_P1,
        SUBPARTITION P0_INSTANCE_2 VALUES (2)    TABLESPACE MP_OH_GEN_P2,
        SUBPARTITION P0_INSTANCE_3 VALUES (3)    TABLESPACE MP_OH_GEN_P3,
        SUBPARTITION P0_INSTANCE_4 VALUES (4)    TABLESPACE MP_OH_GEN_P4,
        SUBPARTITION P0_OTHERS VALUES (DEFAULT)    TABLESPACE MP_OHDB ),
      PARTITION VALUES LESS THAN (TO_DATE(' 2010-11-17 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        NOLOGGING
        NOCOMPRESS
        TABLESPACE MP_OHDB
        PCTFREE    10
        INITRANS   1
        MAXTRANS   255
        STORAGE    (
                    BUFFER_POOL      DEFAULT
                   )
        SUBPARTITIONS 5 STORE IN (MP_OH_GEN_P1,MP_OH_GEN_P2,MP_OH_GEN_P3,MP_OH_GEN_P4,MP_OHDB),
      PARTITION VALUES LESS THAN (TO_DATE(' 2010-11-18 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        NOLOGGING
        NOCOMPRESS
        TABLESPACE MP_OHDB
        PCTFREE    10
        INITRANS   1
        MAXTRANS   255
        STORAGE    (
                    BUFFER_POOL      DEFAULT
                   )
        SUBPARTITIONS 5 STORE IN (MP_OH_GEN_P1,MP_OH_GEN_P2,MP_OH_GEN_P3,MP_OH_GEN_P4,MP_OHDB),
      PARTITION VALUES LESS THAN (TO_DATE(' 2010-11-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        NOLOGGING
        NOCOMPRESS
        TABLESPACE MP_OHDB
        PCTFREE    10
        INITRANS   1
        MAXTRANS   255
        STORAGE    (
                    BUFFER_POOL      DEFAULT
                   )
        SUBPARTITIONS 5 STORE IN (MP_OH_GEN_P1,MP_OH_GEN_P2,MP_OH_GEN_P3,MP_OH_GEN_P4,MP_OHDB)
    )
    NOCOMPRESS
    NOCACHE
    PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
    MONITORING;
    
  • User-managed backup and deferred rollback segments

    Hi Forum,

    Greetings of the day,

    I have a question about user-managed backups and deferred rollback segments; I don't have much experience with user-managed backups taken with BEGIN BACKUP and END BACKUP.

    1. How does Oracle keep the database usable while each tablespace is being backed up? Will it use deferred rollback segments?

    2. Is placing a tablespace in backup mode technically the same as read-only mode, so that the blocks can be read while changes made by users are moved to deferred rollback segments and applied to the tablespace later, at END BACKUP?

    3. AFAIK, deferred rollback segments are created only in the SYSTEM tablespace. So do we need to keep an eye on the growth of the SYSTEM tablespace while the database/tablespace is backed up by this method?

    4. When the tablespace is taken out of backup mode with END BACKUP, are the deferred rollback segments then applied?

    5. Where will Oracle place deferred rollback segments when the SYSTEM tablespace itself is put into BEGIN BACKUP mode?

    6. Won't we face failures during the whole BEGIN/END BACKUP cycle, such as the SYSTEM tablespace filling up, etc.?

    Your answers will be really helpful.

    Thank you

    Uday

    I have a question about user-managed backups and deferred rollback segments; I don't have much experience with user-managed backups taken with BEGIN BACKUP and END BACKUP.

    1. How does Oracle keep the database usable while each tablespace is being backed up? Will it use deferred rollback segments?

    In what context did you come across the use of deferred rollback segments with user-managed backups?

    2. Is placing a tablespace in backup mode technically the same as read-only mode, so that the blocks can be read while changes made by users are moved to deferred rollback segments and applied to the tablespace later, at END BACKUP?

    Do you mean to say that when the tablespace is being backed up, it is not accessible to DML? If so, that is a wrong concept. Oracle does not resort to rollback segments for this (and I don't see how you are linking them anyway); instead it writes the full block image into the redo log files the first time a block is changed, in addition to the normal writing of the changed buffers.
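
    As an illustration, a user-managed hot backup of a single tablespace looks roughly like this (the tablespace name and paths are hypothetical):

    ALTER TABLESPACE users BEGIN BACKUP;
    -- while in backup mode, copy the data files at the OS level, e.g.
    -- !cp /u01/oradata/users01.dbf /backup/users01.dbf
    ALTER TABLESPACE users END BACKUP;

    DML against the tablespace keeps working the whole time; the only extra cost is the larger volume of redo generated while backup mode is on.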

    3. AFAIK, deferred rollback segments are created only in the SYSTEM tablespace. So do we need to keep an eye on the growth of the SYSTEM tablespace while the database/tablespace is backed up by this method?

    As answered above: where did you read about deferred rollback segments and their relationship, as you see it, with tablespace backups?

    4. When the tablespace is taken out of backup mode with END BACKUP, are the deferred rollback segments then applied?

    5. Where will Oracle place deferred rollback segments when the SYSTEM tablespace itself is put into BEGIN BACKUP mode?

    6. Won't we face failures during the whole BEGIN/END BACKUP cycle, such as the SYSTEM tablespace filling up, etc.?

    It seems that you don't really have an idea of how user-managed backups actually work. I suggest that you first read about them in the Backup and Recovery documentation before going any further.

    HTH

    Aman...

  • URL and log file monitoring using Foglight

    Hi all

    We are using Foglight 5.6.4 and running Foglight Agent Manager 5.6.7 on it.

    We are trying to monitor a URL and log files hosted on a Windows 2012 machine...

    Do you know whether this setup will support our requirements (i.e. Windows 2012 monitoring)?

    Could you share your ideas on this pls...

    Thanks & best regards,

    Guenoun

    This is an interesting question.

    The article below doesn't spell it out, but there is an interesting platform question here.

    FglAM 5.6.7 for Foglight includes support for Windows 2012:

    http://eDOCS.quest.com/Foglight/567/doc/core/SystemRequirements/platforms.2.php

    But the legacy agents do not support Windows 2012.

    Here is what I would try (no guarantees, since we don't have a lot of Windows 2012 machines):

    I would try the Windows 2003 legacy package (because the Windows 2008 one was missing the WebMonitor agent by mistake) and see if it works. The article below suggests using the Windows 2003 package, but again, we have the unknown of the platform being Windows 2012:

    https://support.quest.com/SolutionDetail.aspx?ID=SOL69379&category=solutions&SKB=1

    If this is a test environment and you can upgrade to Foglight 5.6.10, you may have an easier time, because we have a NEW Web Monitor agent which is not part of the legacy package (it's also prettier):

    http://eDOCS.quest.com/Foglight/5610/files/CartridgeForWebMonitor_5610_ReleaseNotes.html

    And then I would complement that with the Windows 2008 legacy package to monitor the logs; it is not tested yet and I don't know if it is officially supported, but I suspect it should work (just remember to tick 'Show packages for all platforms' when you deploy the package).

    Hope this helps.

    Golan

  • Performance gain from a different drive for SQL Server data and log files (VM)?

    Hello

    It is suggested to separate SQL Server 2012 data and log files onto different drives for a VM. Is there a performance gain if the two of them reside on the same logical unit number (VMDK partition)? Or should they (as different drives) be on different LUNs (VMDK partitions)?

    Thank you

    The general mantra in the physical world goes as follows:

    For optimal performance, you should separate the database files and transaction logs onto different stacks of physical disks (i.e. spindles).


    This general concept naturally applies to virtual workloads as well. It essentially translates to:

    For performance, you should put the database files and transaction logs on separate VMDKs resident on different LUNs, or use separate RDMs. You should also use a separate vSCSI controller for each VMDK/RDM, as each controller has its own IO queue.

    There is a more important point which is often neglected, in the physical and especially the virtual world:

    Ideally, you should make sure that the physical LUNs for logs and DB do not share a common disk group on the same stacks of physical disks. That would violate the general concept above.
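
    As a sketch, the separation then comes down to pointing the database files at the different drives when you create the database; the database name, drive letters, and paths here are hypothetical:

    CREATE DATABASE Sales
    ON PRIMARY
      (NAME = Sales_data, FILENAME = 'D:\SQLData\Sales.mdf')      -- data file on one VMDK/LUN
    LOG ON
      (NAME = Sales_log, FILENAME = 'L:\SQLLogs\Sales_log.ldf');  -- transaction log on another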

  • Regarding undo and rollback segments

    Hello

    I read that undo records can be stored either in rollback segments or in an undo tablespace.

    When are they stored in undo segments and when in rollback segments,
    and what is the relationship between them?

    Thank you
    Vijay

    However, you must understand that TRANSACTIONS_PER_ROLLBACK_SEGMENT does not mean there is a limit on the number of transactions a rollback segment supports!
    This parameter is used by Oracle to determine the number of rollback segments it must bring online when an instance opens the database.
    It does not apply to the 'TYPE2 UNDO' segments that are automatically created and managed (brought online, taken offline, created, dropped as needed) when you use UNDO_MANAGEMENT = 'AUTO'.
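
    You can see both kinds side by side with a query like this (a sketch, run as a privileged user; the names will vary by database):

    SELECT segment_name, tablespace_name, status
    FROM   dba_rollback_segs;
    -- the SYSTEM rollback segment lives in the SYSTEM tablespace;
    -- the _SYSSMUn$ segments are the automatically managed TYPE2 UNDO ones.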

    Hemant K Chitale

  • Long waits on 'log file sync' while 'log file parallel write' is fine

    We have a 9.2.0.8 database that experiences long waits on 'log file sync' (average wait time = 46 ms) while the 'log file parallel write' wait is very good (average wait time is less than 1 millisecond).

    The application is middleware; it connects several other applications. A single user action in a single application drives several requests back and forth through this middleware, so it needs database response times in milliseconds.

    The database is quite simple:

    - It has a few config tables that the application reads but rarely updates.

    - It has a TRANSACTION_HISTORY table: the application inserts records into this table using single-row inserts (about 100 rows per second); each insert is followed by a commit.

    Records are kept for several months and then purged. The table has only VARCHAR2/NUMBER/DATE columns, no LOBs, LONGs, etc. The table has 4 non-unique single-column indexes.

    The average row length is 100 bytes.

    The load profile does not show anything unusual; the main figures: 110 transactions per second, average transaction size = 1.5 KB.

    The data below are for a 1-hour interval (the purge wasn't running during this interval); physical read and write rates are low:

    Load Profile
    ~~~~~~~~~~~~                           Per Second       Per Transaction
                                      ---------------       ---------------
    Redo size:                             160,164.75              1,448.42
    Logical reads:                          57,675.25                521.58
    Block changes:                             934.90                  8.45
    Physical reads:                             76.27                  0.69
    Physical writes:                            86.10                  0.78
    User calls:                                491.69                  4.45
    Parses:                                    321.24                  2.91
    Hard parses:                                 0.09                  0.00
    Sorts:                                     126.96                  1.15
    Logons:                                      0.06                  0.00
    Executes:                                1,956.91                 17.70
    Transactions:                              110.58


    The Top 5 timed events are dominated by 'log file sync':

    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                           % Total
    Event                                  Waits    Time (s)    Ela Time
    -------------------------------- ------------ ----------- ---------
    log file sync                         401,608      18,448     59.94
    db file parallel write                124,044       3,404     11.06
    CPU time                                             3,097     10.06
    enqueue                                10,476       2,916      9.48
    db file sequential read               261,947       2,435      7.91


    Wait Events section:

                                                                    Avg
                                                      Total Wait   wait    Waits
    Event                            Waits   Timeouts   Time (s)   (ms)     /txn
    ---------------------------- ---------- ---------- ---------- ------ --------
    log file sync                   401,608          0     18,448     46      1.0
    db file parallel write          124,044          0      3,404     27      0.3
    enqueue                          10,476        277      2,916    278      0.0
    db file sequential read         261,947          0      2,435      9      0.7
    buffer busy waits                11,467         67        173     15      0.0
    SQL*Net more data to client   1,565,619          0         79      0      3.9
    row cache lock                    2,800          0         52     18      0.0
    control file parallel write       1,294          0         45     35      0.0
    log file switch completion          261          0         36    138      0.0
    latch free                        2,087      1,446         24     12      0.0
    PL/SQL lock timer                     1          1         20  19531      0.0
    log file parallel write         143,739          0         17      0      0.4
    db file scattered read            1,644          0         17     10      0.0
    log file sequential read            636          0          8     13      0.0


    The log buffer is about 1.3 MB. We could increase it, but there are no 'log buffer space' waits, so I doubt that would help.


    The redo logs have their own file systems, not shared with the data files. This explains the difference between the average wait on 'log file parallel write' (less than 1 ms) and 'db file parallel write' (27 ms).

    The redo logs are 100 MB each; there are about 120 log switches per day.


    What has changed: the insert/commit rate grew. Several months ago there were 25 inserts/commits per second into the TRANSACTION_HISTORY table; now we get 110 inserts/commits per second.


    What problem it causes for the application: due to the slower database response, the (Java-based) application needs more and more threads.


    MOS documents on 'log file sync' (for example, 1376916.1 "Troubleshooting: log file sync waits") recommend comparing the average wait times of 'log file sync' and 'log file parallel write'.

    If the values are close (for example, log file sync = 20 ms and log file parallel write = 10 ms), the waits are caused by slow IO. However, that is not the case here.
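
    For reference, the two averages can be read straight from the wait interface; a sketch (on 9i, TIME_WAITED is in centiseconds, hence the *10 to get milliseconds):

    SELECT event, total_waits,
           ROUND(time_waited * 10 / total_waits, 1) AS avg_wait_ms
    FROM   v$system_event
    WHERE  event IN ('log file sync', 'log file parallel write');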


    There was a bug (2669566) in 9.2 which resulted in under-reporting of LGWR's 'log file parallel write' time. I wrote about it in September 2005, at which time the bug was present in 9.2.0.6 and reported as fixed in 10.1: log file parallel write (JL Comp). It is possible that your problem IS the log file writes.

    Regards

    Jonathan Lewis

  • "redo the write time" includes "log file parallel write".

    IO performance guru Christian Bilien said in his blog:

    http://christianbilien.WordPress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-IO/

    The 'log file sync' wait can be divided into:
    1. The 'redo write time': this is the total elapsed time of the write from the redo log buffer to the redo log files (in centiseconds).
    2. The 'log file parallel write' is actually the time for the log write I/O to complete.
    3. LGWR may have some post-processing to do, then signals the waiting foreground process that the write is complete. The foreground process is finally woken up by the system dispatcher. This completes the 'log file sync' wait.

    In his view, there is no overlap between "redo write time" and "log file parallel write".

    But MetaLink note 34592.1 says:

    The 'log file sync' wait may be broken down into the following:
    1. Waking up LGWR if idle
    2. LGWR gathering the redo to be written and issuing the I/O
    3. The time for the log write I/O to complete
    4. LGWR I/O post-processing
    ...

    Tuning notes based on the log file sync component breakdown above:
    Steps 2 and 3 are accumulated in the statistic "redo write time" (found in Statspack and AWR reports).
    Step 3 is the wait event "log file parallel write". (Note 34583.1: "log file parallel write" Reference Note)

    MetaLink says there IS overlap, since "redo write time" includes steps 2 and 3 and "log file parallel write" covers step 3, so the "log file parallel write" time is only part of the "redo write time". Is the MetaLink note wrong, or have I missed something?
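
    To put numbers on the comparison, both figures can be pulled from the instance statistics and the wait interface; a sketch ('redo write time' is in centiseconds):

    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo writes', 'redo write time');

    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  event = 'log file parallel write';

    If the MetaLink note is right, 'redo write time' should consistently exceed the 'log file parallel write' time by the cost of step 2.
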
  • Transaction table in rollback segments

    Hi guys,

    I read the following, but I'm still having a bit of difficulty conceptualizing it. Would someone be so kind as to give me a very brief example?

    Thank you

    For each rollback segment, Oracle maintains a transaction table: a list of all transactions that use the associated rollback segment, and rollback entries for each change made by these transactions.

    Whenever a transaction happens, Oracle maintains an entry to track in which segment, and in which block, the undo of that transaction is kept. The table which holds this is called the transaction table. The transaction entry is maintained to provide this undo information to queries that need it for consistent reads. The location of the corresponding undo information is also maintained in the transaction's block header as the Undo Byte Address (UBA).

    The transaction table is exposed to us as V$TRANSACTION. If you look at the XIDUSN, XIDSLOT, XIDSQN columns in this view, you can see the segment where the undo of the transaction is kept. The fixed undo arrays are X$KTUXE and X$KTCXB (hopefully I didn't get the names wrong).
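
    For instance, with a transaction left uncommitted in another session, a quick look might go like this (a sketch; the UBA columns show where the undo for the transaction lives):

    SELECT xidusn, xidslot, xidsqn,   -- transaction id: undo segment number, slot, sequence
           ubafil, ubablk, ubarec     -- Undo Byte Address: file, block, record
    FROM   v$transaction;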

    I'll see if I can find a good paper describing this.
    Update:

    Here are two links that I hope will be useful.
    http://www.evdbt.com/AutomaticUndoInternals.PDF
    http://www.juliandyke.com/presentations/RedoInternals.pp

    Do a lot of block dumping; it's a very good way to learn this stuff.
    HTH
    Aman...

    Edited by: Aman... on March 10, 2009 01:10

  • A transaction is written to the log file but not yet to the undo tablespace; during a system failure, how does Oracle roll back the transaction?

    Hi all

    My question is:

    A transaction is written to the log file but not yet written to the undo tablespace.

    During a system failure, how does Oracle roll back the transaction?

    I have already provided the answer, though you ignored it (you seem to read only the responses from people of your own country).

    The redo log is always written *first*, *before* the write to the data block (redo log writing is much more aggressive). So it DOESN'T MATTER if you lose those rollback segment writes.

    Recovery: roll forward followed by roll back, using the redo log files and/or archived redo log files.

    Sybrand Bakker

    Senior Oracle DBA

  • Data file containing a rollback segment crashed

    Hello
    On one of our production boxes (Oracle 8.1.7.4 on HP-UX), a data file that contains a rollback segment has crashed. To open the database, I commented out the rollback_segments line in the init file, started the db, and took the crashed data file offline for drop:

    ALTER DATABASE DATAFILE '/db10/rollback/rbs01.dbf' OFFLINE DROP;

    and then I tried to drop the tablespace containing that single data file, but it threw an error indicating that an active rollback segment, R0, is still in the tablespace. I do not know how to drop it and recreate the rollback segments. Any help would be greatly appreciated.


    Thank you
    Guna

    If the rollback segments are active, they are needed to complete the recovery process (the roll forward may have been OK if you did not lose any redo, but there is also a roll-back phase: read my previous post). Your database is probably not in very good shape.
    Can you post the last 50-100 lines of your alert.log?
    Do you have a valid backup of this data file?
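
    To check which rollback segments are the problem, a query like this helps (a sketch, run as SYS; 'NEEDS RECOVERY' is the status to watch for):

    SELECT segment_name, tablespace_name, status
    FROM   dba_rollback_segs
    WHERE  status NOT IN ('ONLINE', 'OFFLINE');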

  • Difference between rollback segments and undo segments

    Hello


    Oracle 10.2.0.1.0

    What is the difference between rollback and undo segments? Are they one and the same thing, just renamed across Oracle versions? Please help.

    Hello

    The two provide the same functionality, but in Oracle 9i it was simplified: with rollback segments you have to bring them online via init.ora and take care of space management yourself, etc.
    So in Oracle 9i they introduced the undo tablespace: you create it, and Oracle creates and manages the undo segments and their space automatically, which reduces the DBA's workload.
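
    For example, moving to automatic undo management looks roughly like this (the tablespace name, file name, and size are hypothetical, and UNDO_MANAGEMENT=AUTO must be set in the parameter file):

    CREATE UNDO TABLESPACE undotbs2
      DATAFILE '/u01/oradata/undotbs2_01.dbf' SIZE 500M AUTOEXTEND ON;

    ALTER SYSTEM SET undo_tablespace = undotbs2;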

    Kind regards
    Simma...
