Fast instance recovery and redo log file size

Hello

Could you please explain to me how the redo log file size affects how quickly the instance can be recovered?

Thank you
KSG

Well, I'll try to explain.

The answer lies in the relationship between redo log size and the number of dirty buffers that would be needed for recovery. The number of dirty buffers needed for recovery is limited by checkpointing, which is when DBWR is forced to write some of them to the data files. A log switch triggers a checkpoint. So if the log files are smaller, checkpoints happen more frequently, which makes DBWR write dirty buffers to the data files more aggressively and limits the duration of instance recovery. The bigger you make the log files, the longer it takes before a checkpoint occurs, and so in the case of instance recovery more time would be required. That said, using log files that are too small would also lead to "checkpoint not complete" errors, because it can happen that DBWR cannot keep up with the rate at which redo is being generated and the logs are switching.
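
To make this concrete, here is a small sketch (my own illustration, not part of the original reply) of the views you can query to relate redo log size to the amount of work an instance recovery would have to do:

-- Current redo log groups and their sizes
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
  FROM v$log;

-- Estimated instance recovery effort: more frequent checkpoints keep the
-- number of dirty buffers to recover, and the estimated MTTR, lower
SELECT recovery_estimated_ios,
       actual_redo_blks,
       target_mttr,
       estimated_mttr
  FROM v$instance_recovery;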

HTH
Aman...

Tags: Database

Similar Questions

  • Lightroom 4 presets and the file sizes

    When I apply a preset to a photo, then export it... the file I end up with is significantly smaller than my original file. These same presets produced files as big as, or in most cases bigger than, the original file sizes in previous versions of Lightroom...

    Develop Module presets have almost nothing to do with the size of the file.

    File size depends on the pixel dimensions, compression, and file format (if applicable).

    I suspect that you are not exporting with the same settings that you were using in previous versions of Lightroom.

    What file format are you exporting to? What are your file type, quality, and compression settings? What are your pixel dimensions? How do these export settings differ from your previous exports?

  • Long string text and cod file size

    In my application, I have a few static string constants that are very long. The strings are displayed on the screen when certain buttons are clicked. I find that once I added these strings, the cod file size increased by 30%, which is not desirable. I expect the buttons will rarely be clicked, but I must have this text available for display.

    Is there a better way to manage these strings so that they do not bloat the cod file, for example by compressing them?

    Thanks in advance.

    If you put the strings in a separate resource file as BBDeveloper suggests, you can also run them through gzip with maximum compression before you add them to the project. Then, after opening the file (with getResourceAsStream), wrap the input stream in a GZIPInputStream before reading.

  • redolog file size

    Hello
    In 10.2.0.4 on Win 2008, in the alert log I have:
    Thu Aug 26 10:55:28 2010
    Thread 1 advanced to log sequence 274 (LGWR switch)
      Current log# 1 seq# 274 mem# 0: D:\BASES\MYDB\LOGS\REDO01.LOG
    Thu Aug 26 10:59:13 2010
    Thread 1 advanced to log sequence 275 (LGWR switch)
      Current log# 2 seq# 275 mem# 0: D:\BASES\MYDB\LOGS\REDO02.LOG
    Thu Aug 26 11:03:43 2010
    Thread 1 advanced to log sequence 276 (LGWR switch)
      Current log# 3 seq# 276 mem# 0: D:\BASES\MYDB\LOGS\REDO03.LOG
    So it switches frequently from one log to another. The log files are 102 MB each.
    Are they too small?

    Thank you.

    We can't tell just by looking at the size of the redo logs whether they are sufficient for the database or not.

    This script will tell you how many redo logs are generated by your database per hour

    select to_char(first_time,'MM-DD') "Day",
           to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
           to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
           to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
           to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
           to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
           to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
           to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
           to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
           to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
           to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
           to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
           to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
           to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
           to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
           to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
           to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
           to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
           to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
           to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
           to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
           to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
           to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
           to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
           to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
    from   v$log_history
    group by to_char(first_time,'MM-DD');

    Check also...

    (1) V$SESS_IO. This view contains the BLOCK_CHANGES column, which indicates how many blocks have been changed by the session.
    High values indicate a session that generates a lot of redo.

    SELECT s1.sid, s1.serial#, s1.username, s1.program, s2.block_changes FROM v$session s1, v$sess_io s2
    WHERE s1.sid = s2.sid ORDER BY 5 DESC;

    You can run the above query several times and look at the delta in BLOCK_CHANGES between each run. Large deltas
    indicate high redo generation by the session.

    (2) V$TRANSACTION. This view contains information about the number of undo blocks and undo records accessed by the
    transaction (columns USED_UBLK and USED_UREC). More undo means more changes.

    SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec FROM v$session s, v$transaction t
    WHERE s.taddr = t.addr;

    You can run the above query several times and look at the delta in USED_UBLK and USED_UREC between each run.
    Large deltas indicate high redo generation by the session.

    (3) Use AWR or Statspack reports over a period of time. Look at the SQL statements with the highest gets per execution.

    Also, check whether you have DML triggers on tables and whether any tablespace is in hot backup mode. Auditing may also cause high redo
    generation.
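
    If the hourly switch counts show the logs really are too small, one common follow-up (sketched here as my own example, not something from the original reply; the group numbers, sizes and file names are hypothetical) is to add larger redo log groups and drop the small ones once they are no longer current or active:

    -- Add new, larger redo log groups (sizes and paths are examples only)
    ALTER DATABASE ADD LOGFILE GROUP 4 ('D:\BASES\MYDB\LOGS\REDO04.LOG') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('D:\BASES\MYDB\LOGS\REDO05.LOG') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('D:\BASES\MYDB\LOGS\REDO06.LOG') SIZE 512M;

    -- Switch and checkpoint until the old groups show INACTIVE, then drop them
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    SELECT group#, status FROM v$log;
    ALTER DATABASE DROP LOGFILE GROUP 1;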

  • Differences in size and HD file size

    I have two questions.

    I. How can a 1 TB disk volume with 925.67 GB...

    ... contain a Time Machine Backups.backupdb folder of 4.55 TB?

    II. With more than a dozen other disk images (.dmg) in the same folder, why does Disk Utility choose this one to treat separately?

    Time Machine uses file-system hard links, so software that does not know how to handle hard links will count the same file double, triple, quadruple, etc.

    A hard link is a 2nd, 3rd, 4th, etc. directory entry that points to the same exact physical file.  Whenever a hard link is created, the file's reference count is incremented.  Every time a directory entry for the file is removed, the hard-link reference count is decremented.  When the count goes to zero, the file's data is deleted.

    Back to Time Machine.  Time Machine makes it look like you have several complete copies of your file system, but in reality, if a file is unchanged between two adjacent backups, a hard link is put into the current backup pointing to the identical file in the previous one. Now it appears as if each backup has a copy of the file, but there is ONLY 1 copy of the data on the disk.  If the file has been changed, then the current backup gets its own copy of the new or modified file and no hard link is created for that file during the backup.

    The problem is that a storage-counting app that walks the directory tree will see a file with 2 or more hard links multiple times.  The app has to keep track of each file identifier it has seen and then check each file it is looking at against the previous files it has seen, to find out whether they have the same file identifier.  That's not bad for a small number of files, but when you start talking about tons of files (in the millions), you start to need a lot of memory to keep a record of every file identifier found, and you need fast search algorithms for quickly adding new items, otherwise the counting software will eventually slow down.

    However, to the best of my knowledge the Finder does not keep track of hard links, mainly because very few users actually create them, and few people run Get Info on a Time Machine backup.

    Now, the volume itself maintains counters of allocated and free space, so it is not confused by the hard links, and since these values are updated each time storage is allocated or freed, it is quick to return this information when Get Info asks the volume for it.

    II. With more than a dozen other disk images (.dmg) in the same folder, why does Disk Utility choose this one to treat separately?

    I do not understand the question.  Then again, maybe I would understand it if you reworded it.  I do not look that closely at Disk Utility info.

  • Access the recovery disk and deleting files

    I am running Vista Home Premium SP2 on an HP G50 laptop.  Recently, I ran a backup of my system and wanted to place it on an external drive which had been used under XP.  The system wouldn't let me, so I put it on the recovery partition drive.  This filled the disk and slowed my computer down a lot.  I want to remove the backup files but cannot access the recovery disc.  How do I access this disk to remove the backup files?  Also, can I remove the recovery files?  I have already made the recovery disc.

    Hello

    Here is the information from HP for your problem with the recovery partition:

    Drive (D:) is red in Vista

    http://h10025.www1.HP.com/ewfrf/wc/document?DocName=c01555992&cc=us&LC=en&DLC=en

    For any additional questions about this, contact HP.

  • lrcat and lrdata file size... What is too big

    People,

    I run several libraries in LR 2.7, including one very large library, and I want to know what the recommendation would be, from a performance point of view, for the maximum size of such a library or of its catalog files.

    Currently my biggest lrcat file is 2.03 GB, with the lrdata file at 17.32 GB.

    Performance is very good with this library on my workstation... I can't run it on my laptop because it is too slow, for obvious reasons, but is there a limit (either a hard limit or just best practice) I should use to keep my lrcat or lrdata files under 20 GB, 10 GB, 5 GB?  Those of you with large libraries, can you chime in here with what you do?  Can anyone from Adobe chime in on what the best practice is?

    Thanks in advance. I tried to find this information through search but could not.

    -Diesel

    At 17 GB, I wouldn't worry about the size of the lrdata file (previews); 10 times that is not uncommon when previews are set to high quality. However, 2 GB for the lrcat file suggests that you have a very large number of images (i.e. 100,000 or more) or masses of metadata associated with each image. Usually Lr2.x starts having trouble when catalogs become that large, but Lr3 should be able to handle it fairly easily.

  • How to restore the database only with redolog, ctl and dbf files

    Hi friends,

    I had an Oracle 11g database, but the server went down, so I was only able to save the DBF, CTL and REDOLOG files. I couldn't save the spfile and password file.

    How can I restore this database on another server? How can I recreate the spfile?

    I hope someone can give me a hand.

    Thank you.

    Hello

    Thanks for all your advice.

    I am trying to recreate an environment identical to the original.

    I have installed Windows Server 2003 and installed the Oracle 10.1 database with all default options.

    I created the directory D:\oracle\product\10.1.0\oradata\. Then I put the files (*.CTL, *.DBF, *.LOG) in this directory.

    After this, I created a pfile at D:\oracle\product\10.1.0\db_1\database\init.ora with only the db_name, control_files and compatible settings.

    Then I opened a CMD window, created a service, set the ORACLE_SID and tried to start the database:

    oradim -new -sid

    Set ORACLE_SID =

    sqlplus / as sysdba

    startup nomount pfile='D:\oracle\product\10.1.0\db_1\database\init.ora';

    alter database mount;

    ALTER database open;

    And the database opens.
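
    Since the original question also asked how to recreate the spfile, here is a minimal sketch of the usual approach (my assumption, not something the poster stated doing): once the instance is open on the hand-written pfile, generate an spfile from it and restart so the instance uses the spfile.

    -- Create an spfile from the minimal pfile written above, then restart on it
    CREATE SPFILE FROM PFILE='D:\oracle\product\10.1.0\db_1\database\init.ora';
    SHUTDOWN IMMEDIATE;
    STARTUP;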

    After that I installed Oracle SQL Developer, and I can connect to the instance and check the tables.

    But when I try to do an export from CMD, I get the following errors:

    Export: Release 10.1.0.2.0 - Production on Thu Apr 30 10:04:24 2015

    Copyright (c) 1982, 2004, Oracle.  All rights reserved.

    Password:

    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options

    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

    About to export the entire database ...
    . exporting tablespace definitions
    . exporting profiles
    . exporting user definitions
    . exporting roles
    . exporting resource costs
    . exporting rollback segment definitions
    . exporting database links
    . exporting sequence numbers
    . exporting directory aliases
    . exporting context namespaces
    . exporting foreign function library names
    . exporting public type synonyms
    . exporting private type synonyms
    . exporting object type definitions
    . exporting system procedural objects and actions
    . exporting pre-schema procedural objects and actions
    . exporting cluster definitions
    EXP-00056: ORACLE error 24324 encountered
    ORA-24324: service handle not initialized
    EXP-00056: ORACLE error 24324 encountered
    ORA-24324: service handle not initialized
    EXP-00056: ORACLE error 24324 encountered
    ORA-24324: service handle not initialized
    EXP-00000: Export terminated unsuccessfully

    Does anyone know this problem?

    Best regards.

  • my photos should be 100 pixels wide, 100 pixels high and the file size not more than 15 KB

    Photos must be 100 pixels wide by 100 pixels high and the file size no more than 15 KB.

    Photos must be 100 pixels wide by 100 pixels high and the file size no more than 15 KB.

    ============================
    I suggest that you make copies of your
    photos for this task... you certainly do not
    want these very small versions to overwrite
    (replace) your original files.

    Windows Live Photo Gallery-
    How to resize pictures:

    First... if the pictures must be square...
    you can crop the photo by clicking... Fix /
    Crop Photo / Square / Apply.

    Right-click one or more selected thumbnails...
    choose... "Resize" from the menu...
    choose a size... *Custom / 100*...
    browse to a folder to save them in...
    on the left, click the "Resize and Save" button...

    (I suggest that you save the resized
    photos to a new folder to prevent
    overwriting (replacing) the originals)

    Take a look at the following link:

    Resizing Photos in Windows Live Photo Gallery
    http://blogs.msdn.com/PIX/archive/2007/11/30/resizing-photos-in-Windows-Live-Photo-Gallery.aspx

    Volunteer - MS MVP - Digital Media Experience - Notice: This is not tech support, I'm a volunteer - Solutions that work for me may not work for you - *proceed at your own risk*.

  • Custom fonts: min and max size limits on the font file?

    Hello

    What are the file size limits (max and min) on the font file when loading custom fonts?

    I have searched the forum and found a comment by Peter_strange.

    "The documentation says 60 K or 90K depending on how load you, 90 K if you load a an InputStream memory."

    But I want to be sure, so please share if someone has the info.

    Thank you.

    Thanks Pradeep for your clarification.

    Now I have checked: there is no minimum file size limit, and the maximum file size limit is 90 KB in the case of a memory load and 60 KB in the case of a direct stream load.

    There is also a good sample resource by peter_strange -

    http://supportforums.BlackBerry.com/T5/Java-development/font-loader-Manager-utility/m-p/592647#M1223...

  • file size limit exceeded in cold backup

    Hi all

    I use an Oracle 10g database (10.2.0.1.0) on Red Hat Enterprise Linux 5.

    While taking a cold backup to external media, I got the error
    "File size limit exceeded."
    The actual size of the file is 9 GB, but only 4 GB gets copied; the remaining 5 GB is not saved and it gives the error that
    the file size limit is exceeded.

    How can I fix this error?

    Help me.

    user1115373 wrote:
    Thank you.

    I think that this command

    alter system set db_recovery_file_dest_size=9G scope=spfile;

    increases the size of the flash recovery area.

    But I am doing the cold backup at the OS level. Is this possible?

    guide me.

    Then it is related to the OS level, and I think the OS is not supporting this.
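
    As a side note (my own sketch, not part of the original reply): if the copy fails around 4 GB, it is worth checking from inside the database which datafiles exceed that size, since those are the ones an OS-level copy with a file size limit will choke on; the threshold below is simply the size at which this copy failed.

    -- List datafiles larger than 4 GB
    SELECT file_name,
           ROUND(bytes/1024/1024/1024, 2) AS size_gb
      FROM dba_data_files
     WHERE bytes > 4 * 1024 * 1024 * 1024
     ORDER BY bytes DESC;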

  • Captivate 3 - has anyone figured out how to reduce a CP project file size?

    Greetings,

    I think that this question has been asked before, but I was unable to find a solution to the problem and I was wondering if someone else has found one.

    Some people call this phenomenon "file bloat".

    I'm using Captivate 3. In the process of adding and deleting slides, the .cp file gets quite large. I tried doing the "Save as..." thing. I also tried removing unused resources (graphics, backgrounds, etc.) and saving the file again. Nothing seems to work.

    I tried to import the project into an empty project (the application seems to work, but nothing happens). I tried exporting to XML format, but the file will not import (corrupted).

    Has someone managed to find an answer on how to remove unused slides/resources from an "inflated" Captivate project? Apart from rebuilding the project file from scratch, is there a solution?

    Thank you

    TPK

    Hello

    The only way I've ever seen to reduce a Captivate 3 file size was by performing the following steps:

    1. Open a second instance of Captivate.
    2. Create a new empty project of the same size as the one in the other instance.
    3. Switching between instances, copy the slides from the bloated project and paste them into the fresh one.
    4. Save the new one, and the file size is reduced.
    5. Throw away the old one.

    Note that you will probably need to restore links that you had.

    Just a special note here on one of the things that you have tried. You said that you exported to XML format. I don't think that you can export as XML and then import that XML into a new project. I think the XML export works very similarly to the export to Microsoft Word. Only the project it was exported from will be able to properly re-import it. If I understand correctly, the XML export is designed for Captivate developers who need to localize (translate) their projects.

    See you soon... Rick

  • Reduction in file size of scanned documents

    ISSA. I scan income statements, bank statements, etc., several pages each, to PDF format, then want to email them with several PDF files attached to a single email. My file sizes are 3 MB, 8 MB, 12 MB, etc., even though I usually scan only about 15 pages into each PDF.

    I have received 82-page PDFs from others by e-mail where the file size is less than 1 MB.

    How can I reduce the file size for easier transmission through e-mail?

    Thank you!

    Hi Zohami,

    Once you change the settings, they will stay set, so you only have to do it once.

    Try scanning several times to see which settings work best for you.

    Once you change the settings in the desktop shortcut, the settings will apply when scanning from the all-in-one's touch panel using the same shortcut name.

    I hope this helps answer your question.

  • redo and control file strategy

    Excuse my ignorance in this area (not an Oracle person or a DBA), but from a risk perspective, I have read it is best practice to store a redundant copy of the redo logs and control files on a separate disk / server - for redundancy.

    However, if the server is a physical database server with local storage (on which the database files and the Oracle software reside), and there is no cluster configuration, and the server dies a sudden death, then what exactly do the redundant redo logs and control files save you from? Is it to minimize data loss, or something else (told you I wasn't a DBA or Oracle guru)? If you back up your redo logs daily, then I don't see what exactly the redundant copies really save you from.

    So if the server dies (and the storage is still OK), you move the storage to another server and you are in business again. Multiplexing or not does not change your ability to use another server.

    In a disaster situation, where you lose the server and the storage is also lost, you have to go to a backup, and multiplexing does not save you there, because the backup has only 1 copy of the controlfile and archivelogs.

    If you get a logical corruption (i.e. a bug in the software) with multiplexed controlfiles, then you're still stuck, because all the controlfiles are physical copies of each other, so all of them have the same corruption. In this case, you must go to a good backup of the controlfile.

    Multiplexing saves you when you get physical corruption (due to storage corruption) of your multiplexed control file or multiplexed redo logs, because you can use the good copy of the control file or redo log to continue without losing committed data.
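
    As a sketch of what multiplexing looks like in practice (my own example, not from the original reply; all file paths are hypothetical), you add a second member to each redo log group and a second control file location on separate storage:

    -- Add a second member to each redo log group on a different disk
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\MYDB\REDO01B.LOG' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\MYDB\REDO02B.LOG' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\ORADATA\MYDB\REDO03B.LOG' TO GROUP 3;

    -- Register an additional control file copy; takes effect after shutting down,
    -- copying the existing control file to the new location at the OS level,
    -- and restarting the instance
    ALTER SYSTEM SET control_files =
      'D:\ORADATA\MYDB\CONTROL01.CTL',
      'E:\ORADATA\MYDB\CONTROL02.CTL' SCOPE=SPFILE;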

  • What is the difference between the index file and the page file

    What is the difference between the .pag and .ind files?

    I know that the page file contains the real data - the data blocks and cells - and the index file contains the pointers to the data blocks held in the page file.

    but is there another difference? What about size?

    In my opinion, the page file size is always greater than the index file size. Is that correct?
    If the index file is larger than the page file, how did that happen? Is it ever correct for the index file to be larger than the page file?


    If I delete the page file, what happens to the index file?
    or
    If I delete some data blocks from the page file, how does that affect the index file?

    Published by: 949936 on August 27, 2012 06:32

    Are these exam questions? Well, in the hope that 949936 instead has a genuine sense of inquiry, here goes:

    In my opinion, the page file size is always greater than the index file size. Is that correct?
    If the index file is larger than the page file, how did that happen? Is it ever correct for the index file to be larger than the page file?

    ^ ^ ^ Generally the .IND (index) files are smaller than the .PAG (page) file(s). However, there is a minimum index file size of eight megabytes. If you completely clear a BSO database and send a single number (which equals one block), you will have a .IND quite a bit larger than the .PAG.

    For example, I cleared out the good old Sample.Basic and then sent the number "5" to "New York"->"Cola"->"Sales"->"Jan"->"Actual". This resulted in a .IND file size of 8,216,576 bytes and a .PAG file size of 149 bytes.

    If I delete the page file, what happens to the index file?

    ^ ^ ^ Are you sure this isn't some kind of certification exam? Do you mean that you went down to the OS and deleted the file? An irretrievably corrupted database is what will happen. If you mean clearing data, I guess it depends on how you do it. Sending a #Missing to the 5 increased the .PAG file size from 149 to 229 bytes. My guess is that doing so forced block overhead onto Essbase. There is no impact on the .IND file in this context.

    If I delete some data blocks from the page file, how does that affect the index file?

    ^ ^ ^ If you delete data in Essbase (whether through the above method, which completely clears a block, or with a CLEARDATA command) and then force a restructure through MaxL or EAS, and the number of blocks has been reduced enough to cross that eight-megabyte limit, you should see the .IND file size go down. I see this all the time with nightly backup processes that have a restructure component to them.

    I hope that answers your questions. You might look at the SER60, which has a fairly comprehensive review of BSO storage.

    Kind regards

    Cameron Lackpour
