Can log files be removed automatically in an HA environment?

Hi BDB experts,

I am writing an HA application based on BDB version 4.6.21. Two processes run on two machines: one as the master, which reads and writes the db, and one as a client/replica, which only reads the db. There is a thread in the master daemon that performs a checkpoint every second: dbenv->txn_checkpoint(dbenv, 1, 1, 0), and dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE) is called after each checkpoint. The env has been opened with the flags DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_REGISTER | DB_RECOVER | DB_INIT_MPOOL | DB_THREAD | DB_INIT_REP, and the autoremove flag has been set with envp->set_flags(uid_dbs.envp, DB_LOG_AUTOREMOVE, 1) before opening the env.
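
For reference, here is a minimal sketch of what the master's checkpoint thread does (error handling trimmed; everything except the DB calls is only illustrative):

    #include <db.h>
    #include <unistd.h>

    /* Master daemon thread: checkpoint roughly once a second, then ask
     * BDB to remove any log files this environment no longer needs. */
    static void *checkpoint_thread(void *arg)
    {
        DB_ENV *dbenv = arg;
        int ret;

        for (;;) {
            /* checkpoint if >= 1 KB of log or >= 1 minute since the last one */
            if ((ret = dbenv->txn_checkpoint(dbenv, 1, 1, 0)) != 0)
                dbenv->err(dbenv, ret, "txn_checkpoint");

            /* remove log files no longer needed by this environment */
            if ((ret = dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE)) != 0)
                dbenv->err(dbenv, ret, "log_archive(DB_ARCH_REMOVE)");

            sleep(1);
        }
        return NULL;
    }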

I found this thread, https://forums.Oracle.com/message/10945602#10945602, which discusses a non-HA environment, and I tested my code in an env without DB_INIT_REP (non-HA): it worked. However, in the HA env the log files are never deleted. Could you help with this? Does the client also need to run checkpoints? Could this be a bug in BDB?

Thank you

Min

There is a thread in the master daemon that performs a checkpoint every second: dbenv->txn_checkpoint(dbenv, 1, 1, 0), and dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE) is called after each checkpoint. The env has been opened with the flags DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_REGISTER | DB_RECOVER | DB_INIT_MPOOL | DB_THREAD | DB_INIT_REP, and the autoremove flag has been set with envp->set_flags(uid_dbs.envp, DB_LOG_AUTOREMOVE, 1) before opening the env.

I'm not saying this is what causes the problem, but calling DB_ENV->log_archive(DB_ARCH_REMOVE) in your thread and setting DB_ENV->set_flags(DB_LOG_AUTOREMOVE) are redundant. In your thread, you control the timing. The DB_ENV->set_flags(DB_LOG_AUTOREMOVE) option checks for and removes unneeded log files whenever a new log file is created.

Have you seen in the DB_ENV->set_flags(DB_LOG_AUTOREMOVE) documentation that we do not recommend automatic log file removal with replication? While this warning is not repeated for DB_ENV->log_archive(DB_ARCH_REMOVE), it applies to that option as well. You should review your use of these options, especially if it is possible that your client could be down for a long time.

But it is only a warning, and automatic log removal should work. My first thought is to ask whether your client has recently gone through a synchronization with the master. Internally, we block log archiving on the client during certain parts of a synchronization with the master, to improve the chances that we keep all the log records the client needs for the sync. We also block archiving for 30 seconds after the client synchronization completes.

I found this thread, https://forums.oracle.com/message/10945602#10945602, which discusses a non-HA environment, and I tested my code in an env without DB_INIT_REP (non-HA): it worked. However, in the HA env the log files are never deleted.

That thread discusses a different issue. The reason for the BDB 4.6 warning against automatic log removal with replication is that log removal does not take all of the sites in your replication group into account, so the master could remove a log file that a client still needs.

We added group-aware automatic log removal for replication manager applications in BDB 5.3, and that discussion is about a change in the behavior of this addition. With this addition, we no longer need to recommend against automatic log removal with replication in BDB 5.3 and later.
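
If you do move to BDB 5.3 or later, note that in 4.7 and later the autoremove flag is set through DB_ENV->log_set_config() rather than DB_ENV->set_flags(); a minimal sketch (not taken from your code):

    #include <db.h>

    /* Sketch for BDB 4.7+/5.3: DB_ENV->set_flags(DB_LOG_AUTOREMOVE) was
     * replaced by DB_ENV->log_set_config(DB_LOG_AUTO_REMOVE); with the
     * 5.3 replication manager this removal is group-aware. */
    int enable_auto_log_removal(DB_ENV *dbenv)
    {
        /* log_set_config() may be called before or after DB_ENV->open() */
        return dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, 1);
    }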

Could you help with this? Does the client also need to run checkpoints? Could this be a bug in BDB?

I don't think the client needs to run its own checkpoints, because it performs a checkpoint when it receives the master's checkpoint log records.

But none of these options for removing logs on the master does anything to remove the logs on the client. You will need to archive logs separately on the client and on the master.
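
For example, the client process could run its own periodic archive pass against its own environment handle; a minimal sketch (the function name is only illustrative):

    #include <db.h>

    /* Client-side sketch: removing logs on the master never touches the
     * client's log directory, so the client must archive its own logs,
     * subject to the same replication caveats described above. */
    int archive_client_logs(DB_ENV *client_env)
    {
        return client_env->log_archive(client_env, NULL, DB_ARCH_REMOVE);
    }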

Paula Bingham

Oracle

Tags: Database

Similar Questions

  • I get the following error messages at startup: Windows Sidebar Settings.ini is in use by another process. Logger warning: Logger!Initialize has not been called yet... The log file may be corrupted. The application failed to start.

    Original title: pop-ups when opening

    I have Vista Home.  When I log on, I get the following pop-ups:

    1.) Windows Sidebar Settings.ini is in use by another process.

    2.) Logger WARNING: Logger!Initialize has not been called yet... The log file may be corrupted.

    3.) The application failed to start.

    Also: while I'm on the internet, a pop-up appears from time to time saying "Internet Explorer has stopped working".  Sometimes it restarts, and sometimes I can just 'X' out of it and stay connected.

    Hello

    1. How long have you been facing this problem?
    2. Did you make any recent hardware or software changes to your computer before this problem started?

    I suggest you try the procedure below.

    Step 1: Try performing a clean boot to resolve the startup error messages.

    Start your computer using a minimal set of drivers and startup programs so that you can determine whether a background program is interfering with your game or program. This type of startup is known as a "clean boot".

    Follow the steps provided in the article below to perform the clean boot: http://support.Microsoft.com/kb/929135

    Step 2: I suggest you try the steps outlined in the article below to resolve the Internet Explorer problem:
    http://support.Microsoft.com/kb/936213

    Thanks and regards,
    Umesh P - Microsoft Technical Support

    Visit our Microsoft Answers Feedback Forum and let us know what you think.
    [If this post helps solve your problem, please click 'Mark as Answer' or 'Helpful' at the top of this message. Marking a post as an answer, or as helpful, helps others find the answer more quickly.]

  • My computer can't find my log file, can't download, and can't write to the log file

    "' my computor cant found my profile so it connects me on different everytime I turn on my computor and can't download and cant wright for the newspaper and ID 439 Source file" ESENT "he says:

    Hi patriciarangel,

    1. Do you receive an error message when you log on to other profiles?

    2. What is the exact error message you are getting?

    Your question does not contain all the information necessary for us to help you. Please rewrite your question, this time making sure you include all the necessary information, and we will try to help.

    See the link below:

    How to ask a question

    http://support.Microsoft.com/kb/555375

  • Firefox data folder on the desktop, can I remove it?

    There is an old Firefox data folder on my desktop... should I back it up, or can I delete it? My computer crashed at some point and may have created it then... I don't remember why it is there. I just want to make sure it's OK to delete.

    It sounds like you have done a Firefox refresh, which creates a new profile.

    When you refresh/reset Firefox, a new profile is created and some personal data (bookmarks, history, cookies, passwords, form data) is automatically imported.
    The current profile folder is moved to an "Old Firefox Data" folder on the desktop.
    Installed extensions and other customizations (toolbars, prefs) that you have made are lost and must be redone.

    It is possible to recover data from the "Old Firefox Data" folder on the desktop, but be careful not to copy corrupted files, to avoid carrying problems over.

    If the new profile works and you have confirmed that you have all the data you need then you can delete this folder.

    You can copy files like these into your current Firefox profile folder:

    • bookmarks/history: the backups in the bookmarkbackups folder, and perhaps places.sqlite if you really need the history
    • other SQLite files: cookies.sqlite (cookies) and formhistory.sqlite (saved form data)
    • logins.json and signons3.txt (decryption key) for the passwords saved in the password manager
    • permissions.sqlite and possibly content-prefs.sqlite for permissions and site preferences
    • cert8.db for stored intermediate certificates (Certificate Manager)
    • sessionstore.js for open and pinned tabs and tab groups
    • persdict.dat for the words you added to the spelling dictionary

    You can recover more personal data in the same way.

    You can use the button on the Firefox Troubleshooting Information page to open the current Firefox profile folder.

  • Can I remove the old update files that have already been applied?

    Good evening! Just a quick, maybe dumb, question... when I've updated my computer, can I remove the previous update... for example with service packs: I downloaded Service Pack 2, so what do I do with Service Pack 1? Delete it?

    It depends on the operating system / Windows version.

    Please note the following applies only to Windows XP, not to Vista or Windows 7.

    Folders that have "uninstall" as part of the name (for example $NtUninstallKB282010$, which reside in C:\windows as hidden files and are the Windows hotfix update folders/files) can be removed safely (provided you never wish to uninstall those updates). I recommend you leave these folders in place for at least one month to make sure the updates work correctly.

    These update folders can be removed individually or together. To learn more about a given update, go to:
    http://support.Microsoft.com/kb/xxxxxx
    NB: xxxxxx = the actual KB number, without the "Q" or "KB" prefix.

    Once you have removed the uninstall folders/files, go to Control Panel, Add/Remove Programs. Select the entry for the Windows fix corresponding to the folder/file you just deleted, and select Remove. You will get a Windows error; this is because you deleted the uninstall folder/files. Simply choose OK and the entry will be removed from the Add/Remove Programs list.

    Do NOT delete the $hf_mig$ folder.

    Cleaning up after installing SP2
    http://aumha.org/win5/a/sp2faq.php#after

    and/or

    XP SP3: Post Installation Cleanup
    http://aumha.NET/viewtopic.php?f=62&t=33827

    For Vista and Windows 7

    The update uninstall method in Windows Vista is quite different from that in Windows XP.

    In Vista there is no longer an uninstall folder for each update; the uninstall information is stored by the Volume Shadow Copy service.

    After each update, the Volume Shadow Copy service backs up only the updated files.

    So with Vista it is a differential backup, rather than the full backup of uninstall files used
    in Windows XP.

    This backup mechanism is used to save disk space.

    So basically, you cannot manually delete the uninstall data from the computer.

    TaurArian [MVP] 2005-2011 - Update Services

  • Video file transferred to the desktop and cannot remove it

    Windows Explorer

    A VOB file from a Hitachi video camera is on my desktop; it is impossible to remove, and Windows Explorer runs my CPU at 99%. How do I delete the file? It won't let me rename it, etc. Explorer can run in the background, but I cannot stop it.

    Try to remove it from a command prompt. Tip: you can put the name of the file on the
    clipboard by pressing SHIFT and right-clicking the file; "Copy as path"
    will appear in the menu.
     
    Type cmd in the Start search box and select the command prompt. Type
     
    Del
     
    Right-click in the command line to paste the path.
     
    Enclose the path in quotes,
     
    for example
     
    del "c:\some folder\filename.ext".
     
    If Explorer has a lock on the file, Ctrl + Shift + right-click on an empty spot
    on the Start menu and choose Exit Explorer. To restart Explorer, press Ctrl + Shift +
    Escape, then use the File menu - New Task (Run)... and type explorer.
     
  • Removal of log files

    I'm running out of space in my /var/log folder. After having many problems with a cluster yesterday, I have a large number of log files in this folder. Any thoughts on which ones can be deleted?

    My understanding is that any numbered log file can be removed safely (for example, vmkernel.33 or a *-7.log file).  Here are some of the many log files I have. Suggestions are appreciated!

    In /var/log, the following file names exist, each with at least 4 numbered copies (filename.1, etc.):

    boot.log; cron; ksyms; maillog; messages; rpmpkgs; secure; spooler; vmkwarning

    Then I have:

    vmkernel and vmkernel.1 through vmkernel.36

    In /var/log/vmware I have:

    esxcfg-boot.log 1-4
    hostd.log and its rotated copies, -0.log through -9.log;
    several TGZ files, "*-support.tgz" 1-4, all with yesterday's date on them. Don't know what happened there...
    and a bunch of vmware-cim-0.log through vmware-cim-9.log

    OK, so you get the idea...

    Hello

    Delete any log file that ends in a number; those are backups. Also remember to delete the numbered log files from /var/log/vmware as well.

    The logrotate configuration should be set up to do this automatically for you. Check the VMware hardening guidelines pointed to by the Top Virtualization Security Links, ESX/ESXi section. It is a good step to take.

    Best regards

    Edward L. Haletky

    VMware communities user moderator

    ====

    Author of the book "VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers", Copyright 2008 Pearson Education.

    Blue Gears and SearchVMware Pro articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

    Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

  • the log files are not purged

    Hi all

    I have a TimesTen data store with the attribute LogPurge=1. There are a lot of transactions manipulating the data store. If I'm not mistaken, log files older than the older checkpoint file are deleted automatically by TimesTen, provided no operation is holding them. In my case, the log files are not deleted, so the ls -ltr command prints:

    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 10:49 appdbtt.res1
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 10:49 appdbtt.res0
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 10:49 appdbtt.res2
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:02 appdbtt.log0
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:03 appdbtt.log1
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:03 appdbtt.log2
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:04 appdbtt.log3
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:04 appdbtt.log4
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:04 appdbtt.log5
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:05 appdbtt.log6
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:05 appdbtt.log7
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:06 appdbtt.log8
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:06 appdbtt.log9
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:07 appdbtt.log10
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:07 appdbtt.log11
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:08 appdbtt.log12
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:08 appdbtt.log13
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:09 appdbtt.log14
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:09 appdbtt.log15
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:09 appdbtt.log16
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:10 appdbtt.log17
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:10 appdbtt.log18
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:11 appdbtt.log19
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:11 appdbtt.log20
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:12 appdbtt.log21
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:12 appdbtt.log22
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:13 appdbtt.log23
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:13 appdbtt.log24
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:14 appdbtt.log25
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:14 appdbtt.log26
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:15 appdbtt.log27
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:15 appdbtt.log28
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:16 appdbtt.log29
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:16 appdbtt.log30
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:16 appdbtt.log31
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:17 appdbtt.log32
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:17 appdbtt.log33
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:18 appdbtt.log34
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:18 appdbtt.log35
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:19 appdbtt.log36
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:19 appdbtt.log37
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:20 appdbtt.log38
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:20 appdbtt.log39
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:21 appdbtt.log40
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:21 appdbtt.log41
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:22 appdbtt.log42
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:22 appdbtt.log43
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:22 appdbtt.log44
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:23 appdbtt.log45
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:23 appdbtt.log46
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:24 appdbtt.log47
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:25 appdbtt.log48
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:25 appdbtt.log49
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:25 appdbtt.log50
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:26 appdbtt.log51
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:26 appdbtt.log52
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:27 appdbtt.log53
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:27 appdbtt.log54
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:28 appdbtt.log55
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:28 appdbtt.log56
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:29 appdbtt.log57
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:29 appdbtt.log58
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:30 appdbtt.log59
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:30 appdbtt.log60
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:31 appdbtt.log61
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:31 appdbtt.log62
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:32 appdbtt.log63
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:32 appdbtt.log64
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:33 appdbtt.log65
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:33 appdbtt.log66
    -rw-rw-rw-1 timesten, timesten 487444480 dec 07 11:33 appdbtt.ds0
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:34 appdbtt.log67
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:34 appdbtt.log68
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:35 appdbtt.log69
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:35 appdbtt.log70
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:35 appdbtt.log71
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:36 appdbtt.log72
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:36 appdbtt.log73
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:37 appdbtt.log74
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:37 appdbtt.log75
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:38 appdbtt.log76
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:38 appdbtt.log77
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:39 appdbtt.log78
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:39 appdbtt.log79
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:40 appdbtt.log80
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:40 appdbtt.log81
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:41 appdbtt.log82
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:41 appdbtt.log83
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:42 appdbtt.log84
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:42 appdbtt.log85
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:43 appdbtt.log86
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:43 appdbtt.log87
    -rw-rw-rw-1 timesten, timesten 632098816 dec 07 11:43 appdbtt.ds1
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:44 appdbtt.log88
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:45 appdbtt.log89
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:45 appdbtt.log90
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:46 appdbtt.log91
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:46 appdbtt.log92
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:46 appdbtt.log93
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:47 appdbtt.log94
    -rw-rw-rw-1 timesten, timesten 67108864 dec 07 11:47 appdbtt.log95
    -rw-rw-rw-1 timesten, timesten 4767744 dec 07 11:47 appdbtt.log96

    As you can see, I have 67 log files older than the older checkpoint file. Now, if I connect to the data store in ttIsql and call ttLogHolds, I get:
    Command> call ttLogHolds();
    < 0, 38034792, Replication, APPDBTT:_ORACLE >
    < 67, 44319520, Checkpoint, appdbtt.ds0 >
    < 88, 45855168, Checkpoint, appdbtt.ds1 >
    3 rows found.

    What could be the problem?

    Thanks in advance,
    Dave

    This bookmark

    < 0, 38034792, Replication, APPDBTT:_ORACLE >

    indicates that the AWT bookmark has not moved from log file 0. Since AWT processing is performed by the replication agent, it keeps a bookmark to track how far it has read through the transaction log files looking for operations against any AWT cache groups. It seems that somehow a transaction against an AWT cache group was not committed, which means it cannot be sent to Oracle, acknowledged, and the bookmark advanced. Once the bookmark moves into a newer transaction log file, all the older log files can then be purged.

    You might be able to identify the stuck transaction by using ttXactAdmin and looking for the locks held against AWT cache groups.

  • Can I remove all the files in these "Temp" folders?

    C:\TEMP
    C:\Windows\Temp
    C:\Program Files\HP\Temp
    C:\Documents and Settings\Administrator\Local Settings\Temp
    C:\Documents and Settings\All Users Temp
    C:\Documents and Settings\Default User Settings\Temp
    C:\Documents and Settings\Tim\Local Settings\Temp
    C:\WINDOWS\pchealth\helpctr\Temp
    C:\Documents and Settings\Tim\Local Settings\Application Temp
    C:\Documents and Settings\All Users\Application Data\Bitdefender\Desktop\Temp
    C:\Documents and Settings\Tim\Application Data\Real\Update\temp
    C:\Documents and Settings\Tim\Application Data\Thornsoft Development\ClipMate6\TEMP
    C:\Documents and Settings\Tim\Local Settings\Application Data\Microsoft\BingBar\Temp

    Are there temporary files on the hard disk that are used for ongoing operations... or are all temporary files deletable?

    Thanks, TRinAZ

    All temporary files can be removed.

    Go to Start, All Programs, Accessories, System Tools, Disk Cleanup.

    Or, if you are in the folder itself, simply select all and delete the selection.

  • "redo the write time" includes "log file parallel write".

    IO performance guru Christian said in his blog:

    http://christianbilien.WordPress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-IO/

    The "log file sync" wait can be divided into:
    1. The "redo write time": the total elapsed time of the write from the redo log buffer to the redo log file (in centiseconds).
    2. The "log file parallel write": the time for the log write I/O to complete.
    3. The LGWR may have some post-processing to do, then signals the waiting foreground process that the write is complete. The foreground process is finally woken up by the system dispatcher. This completes the "log file sync" wait.

    In his view, there is no overlap between "redo write time" and "log file parallel write".

    But in MetaLink note 34592.1:

    The "log file sync" wait can be broken down into the following:
    1. Wake up LGWR if idle
    2. LGWR gathers the redo to be written and issues the I/O
    3. The time for the log write I/O to complete
    4. LGWR I/O post-processing
    ...

    Tuning notes based on the log file sync component breakdown above:
    Steps 2 and 3 are accumulated in the statistic "redo write time" (as found in Statspack and AWR reports).
    Step 3 is the wait event "log file parallel write" (see Note 34583.1: "log file parallel write" reference note).

    MetaLink says there is overlap, since "redo write time" includes steps 2 and 3 and "log file parallel write" covers only step 3, so the "log file parallel write" time is only part of the "redo write time". Is the MetaLink note wrong, or have I missed something?
  • The LOGFILE clause in the DUPLICATE DATABASE command

    I tried to duplicate a database (Oracle RAC 10g, ASM, 2-node, Windows Server 2003 R2), and in the Oracle documentation it goes something like:

    DUPLICATE TARGET DATABASE TO dupdb
      PFILE = /dup/oracle/dbs/initDUPDB.ora
      LOGFILE
        '/dup/oracle/oradata/trgt/redo01.log' SIZE 200K,
        '/dup/oracle/oradata/trgt/redo02.log' SIZE 200K,
        '/dup/oracle/oradata/trgt/redo03.log' SIZE 200K;

    I thought the source database RMAN backup already contained the redo log definitions. Are these redo log files only used during the duplication process? Is the LOGFILE clause required when using the DUPLICATE TARGET DATABASE command?

    Thank you very much in advance.

    LOGFILE can be used to specify where the redo logs for the duplicate database should be placed. This clause is not mandatory; it is really only necessary if you intend to rename the redo log files. If you don't need or want to rename the files, you can use the NOFILENAMECHECK option.

  • Can I remove automatic updates for Windows?

    Does anyone know with absolute certainty whether you can remove 3 years' worth of automatic updates without harming your computer? I was told not to delete them, but geez, they take up a lot of space, and don't newer updates supersede the oldest ones?

    Removing old Windows updates:

    Folders that have "uninstall" as part of the name (for example $NtUninstallKB282010$, which reside in C:\windows as hidden files and are the Windows hotfix update folders/files) can be removed safely (provided you never wish to uninstall those updates). I recommend you leave these folders in place for at least one month to make sure the updates work correctly.

    These update folders can be removed individually or together. To learn more about a given update, go to:
    http://support.Microsoft.com/kb/xxxxxx
    NB: xxxxxx = the actual KB number, without the "Q" or "KB" prefix.

    Once you have removed the uninstall folders/files, go to Control Panel, Add/Remove Programs. Select the entry for the Windows fix corresponding to the folder/file you just deleted, and select Remove. You will get a Windows error; this is because you deleted the uninstall folder/files. Simply choose OK and the entry will be removed from the Add/Remove Programs list.

    Do NOT delete the $hf_mig$ folder.

    Cleaning up after installing SP2
    http://aumha.org/win5/a/sp2faq.php#after

    and/or

    XP SP3: Post Installation Cleanup
    http://aumha.NET/viewtopic.php?f=62&t=33827

    Please note this applies only to Windows XP, not to Vista or Windows 7.

  • ACFS.log.0 oracleoks log file

    A trivial question about the acfs.log.N files

    (e.g. acfs.log.0, acfs.log.1, acfs.log.2, acfs.log.3, etc., 1 GB size each),

    they can be found inside the directory:

    CRS_HOME/log/<hostname>/acfs/kernel

    along with a small file, file.order,

    that lists the temporal order in which to read them.

    Is it safe to remove them (the ones I no longer need) with rm -f acfs.log.*?

    According to lsof, no process is using them at present.

    Also: is there a way to limit the number of files created?

    Sorry to bother you, but I am not able to find information on the Oracle web sites, in the docs, or by Googling.

    They look like oracleoks log files (Oracle Kernel Services, a non-open-source Linux module loaded into the kernel after the Grid Infrastructure/CRS installation).

    It's an 11.2.0.4 CRS installation on a single node. I have several acfs.log.N files, each filled with records such as:

    ofs_aio_writev: OfsFindNonContigSpaceFromGBM failed with status 0xc00000007f

    Thank you

    Oscar

    Hello

    The log files can be deleted as you describe, except on Windows, where the kernel will not let you delete the active log file. Note that under Linux/Unix, if you delete all the acfs.log.? files, you will lose subsequent log entries, because you deleted the current log file that the kernel has open and expects to be available; those bits go to never-never land. When the kernel decides the (nonexistent) current log file should be full, it will move on to the next file in the sequence and everything will proceed normally.

    There is no documented way to change the number or the maximum size of the log files.

    As for the error being logged: you're low on disk space.

    best regards / Tony

  • Analysis &amp; analyze Log files

    Afternoon,

    Once again, thank you in advance for looking over my post.

    Here's what I'm working on at the moment. I'm building an e-mail monitor that is run via a scheduled task on the hour, every hour. It is, of course, in ColdFusion.

    What I have to do is track e-mails. The only mention of the status of an e-mail message is in a daily log on the e-mail server. The log file can be anywhere between 20 KB and 120 MB. The format of the log file itself is a little variable, depending on what stage the e-mail process is at.

    The file is saved as sysMMDD.txt, and we have a process that runs every 20 minutes to check the size of the log file for the current date. If it is greater than 10 MB we rename it sysMMDD_1.txt; that's really irrelevant to my question, but I want to provide all the information.

    Going back to the actual log format, it looks like this:

    HH:MM:SS:MS TYPE (HASH) [IP ADDRESS] etc.

    TYPE = type of e-mail or service called

    HASH = a unique hash code for the e-mail, used to link its steps together

    etc. is all the text after the [IP ADDRESS]; it has no fixed structure and varies based on what stage it is.

    The monitor needs to catch all the sends in this log file within the last hour. Don't forget, the log could contain up to a day's worth of data. As it stands, I am able to count the sends by searching for the number of times "ldeliver" appears in the log.

    Does anyone have any suggestions for parsing a log like this? I fear the way I do it now, which is a hack, is not good enough, and there is probably a better way to do it.

    Basically, right now I do a cfloop with index="line" over the file. You can imagine how that performs with large log files; that's why we created the scheduled task above to rename the log files. Now if I start adding time extraction as well, I'm sure this process is going to break.

    I know this post is scattered, but it's just one of those days where everything seems to happen at once. Does anyone have other ideas on how to approach this? Someone suggested an ODBC data source over the text file, but will that only work when it is space-delimited, given that only the first "four" pieces have a reliable format?

    Any help is appreciated!

    Sorry, yes.  I didn't see that you mentioned another application generates the log.

    Looping through the file line by line doesn't really add too much resource overhead.  It does not need to load the entire file into RAM; it reads each line in turn.  I tried looping through a 1 GB file on a CF instance with only 512 MB of RAM allocated to it, and it churned away at quite a few lines per millisecond and never broke a sweat.  It took about 7 minutes to process 1 million lines and never consumed more than a marginal amount of memory.

    Do you actually know that doing it this way is causing you grief?  It doesn't look like the kind of process that must be lightning fast: it's a background process, is it not?

    I suppose if you were so inclined, you could first pump the file through grep at the filesystem level to extract the lines you want, and then process that much smaller file.  The file system should handle files of this size quite quickly and efficiently.

    I wouldn't bother trying to put this stuff into a DB and then processing it: it would probably be more work than just looping over the file as you do now.

    --

    Adam

  • Can I remove this kind of file from drive C in the Windows folder, kb2544893.log for example?

    There are many of these types of log files, and I need to make space on the C drive again. kb2544893.log, for example: or are they update files I need? They are in the Windows folder on the C drive. Thank you.

    It is recommended that you do not delete them, because some of them are used for troubleshooting. For example, kb2544893.log contains the activity log for the KB2544893 update; if something happens to your system, you can trace it using the log files.

    Instead, I suggest running Disk Cleanup and defragmentation, which help you save space. To clean up the disk, take a look at:

    http://support.Microsoft.com/kb/310312

    Also, uninstall programs you don't want via Add/Remove Programs.
