mod_wl_ohs_0202.log continues to grow

Grid Control 11.1.0.1.0 is installed on Red Hat 5.2. The repository database is Oracle 11.2.0.2 on the same Linux machine.
The file /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log continues to grow; it reached 6.5 GB after 6 months. I renamed the file and created an empty mod_wl_ohs_0202.log, but the old file still gets written to. I am not sure whether I need to delete it.

What is the best practice for managing this file so that it does not grow too large?

Thank you

Please see the following article on My Oracle Support (MOS):

11g Grid Control Performance: WebTier log mod_wl_ohs.log in the OHS home is very large in size and has no rotation (Doc ID 1271676.1)

HTH
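The MOS note above covers the supported way to enable rotation; as an OS-level stopgap, here is a minimal logrotate sketch, assuming logrotate is available on the host (the schedule and retention are illustrative). The copytruncate directive is the important part: OHS keeps the file descriptor open, which is most likely why your renamed file kept receiving writes, whereas copytruncate copies the log and truncates the original in place.

Saved as, for example, /etc/logrotate.d/ohs_mod_wl (an illustrative path):

/u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}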

Tags: Enterprise Manager

Similar Questions

  • JMS store file size continues to grow

    Hello

    In our production system, the JMS store file size continues to grow (1 GB per day; statistics on the file size show the growth is linear over time, so it is not caused by a one-time burst of messages).
    First we thought this was due to unconsumed messages or unacknowledged transactions, but the WebLogic console JMS statistics show no current or pending messages on any destination backed by the file (and the application logs show no trace of errors).
    The only exception is some messages that go to a JMS DLQ without being consumed, but there are only about 2,000 of them, together representing less than 10 MB, so it does not seem related (unless there is some very specific fragmentation problem we are not aware of).

    In addition, we compacted the JMS store files using the WebLogic store admin tool, and several GB were compacted down to 7.1 MB:
    Before:

    -rwx------ 1 webadm webgrp 1.3G 24 Aug 10:51 FILESTORE_CANOEJMSSERVER_1000000.DAT

    -rwx------ 1 webadm webgrp 1.3G 24 Aug 10:53 FILESTORE_CANOEJMSSERVER_1000001.DAT

    -rwx------ 1 webadm webgrp 761M 24 Aug 10:54 FILESTORE_CANOEJMSSERVER_1000002.DAT

    Command:

    java weblogic.store.Admin
    compact -dir .
    quit

    After:

    -rw-r--r-- 1 webadm webgrp 7.1M 25 Aug 11:33 FILESTORE_CANOEJMSSERVER_1000000.DAT

    We also dumped the contents of the file using WLST and obtained an XML file containing only the messages sitting in the JMS DLQ.

    The application processes about 10,000 JMS messages every 5 minutes across both nodes of a cluster. The application logs show that it successfully handles up to approximately 150 JMS messages per second.
    The size seems to increase only on the first node (the second node's JMS store file is 50 MB before compaction and 1 MB after being compacted):

    -rwx------ 1 webadm webgrp 54M 24 Aug 10:55 FILESTORE_CANOEJMSSERVER_2000000.DAT

    We have not seen this problem on other platforms.

    We do not know whether it is related, but the disk partition where the JMS file stores reside is not the same on the production platform.

    If you have an idea about the source of the problem, or about things we should look at to understand what is happening, it would be greatly appreciated (since the problem appears only on the production platform, we have no direct access and must request every intervention, so getting information on the problem is not as easy as we would like).

    Thank you.

    Kind regards

    Hello

    To keep this thread updated, here is some more information.
    We managed to avoid the problem by no longer putting messages into the DLQ in the same store (and simply removing them instead). A dedicated consumer on that queue would certainly have solved the problem as well.
    It seems the JMS server cannot reuse the space of successfully consumed messages when about 10,000 messages are added at once (every 5 minutes) and a few of those 10,000 messages (3, to be exact) fail and end up (after 3 unsuccessful delivery attempts) in a JMS DLQ backed by the same JMS store.

    Note: in this case, adding a limit on the maximum number of bytes on the JMS server, or modifying the synchronous write policy, does not change the behavior: the file store size keeps increasing until the file system is full.

    [edit] I forgot to mention that we use different JMS priorities for the messages in the JMS queue. [/edit]

  • Curious log file growing on an RT target

    Hello community,

    I have a PXI real-time system that automatically records our network's multicast events and creates a log file that grows every minute.

    The file is named packetsIn.log and appears under /ni-rt/system/ethernet.

    The content of the file is

    Source Mac:...

    Dest Mac:...

    PacketSize:...

    The MAC addresses that appear in the file are not those of my controller but of some switches or multicast addresses on our network.

    If I delete the file, it reappears a few seconds later. The file also grows even when no LabVIEW applications are running on the system.

    Has anyone noticed similar behaviour? What could cause the creation of this file?

    We use LabVIEW 2009 SP1, and the curious thing is that it appears on only one of our controllers.

    Hope someone could help me

    Ayoub

    Hi Ayoub,

    Please try opening ni-rt.ini and setting the following entry:

    [DEBUG]
    RTPacketParsingEnabled = FALSE

    After that, the file should stop growing.

    Best regards

    Lam

  • Oracle 11g flash recovery area continues to grow

    The Oracle 11g flash recovery area (on the E:\ partition of a Windows 2003 server) continues to grow. It has already used 200 GB of disk space and only 46 GB of free space is left on this partition. Can you tell us how often Oracle 11g deletes older backups?

    Thank you.
    Andy

    To view the restore points, you can query V$RESTORE_POINT.
    To view guaranteed restore points, you can query: SELECT * FROM v$restore_point WHERE guarantee_flashback_database = 'YES';

    Is it a single-instance database (no Data Guard / RAC)?

    How many full backups do you have?

    You can also review ML/MOS Doc ID 462978.1.

    Have you checked how many backup sets are reported as obsolete?
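    If it helps while you investigate, here is a hedged sketch of two standard 11g dictionary queries showing how full the flash recovery area is and how much of it is reclaimable; the view and column names are standard, but verify them on your release:

    SELECT name,
           space_limit/1024/1024       AS limit_mb,
           space_used/1024/1024        AS used_mb,
           space_reclaimable/1024/1024 AS reclaimable_mb,
           number_of_files
    FROM   v$recovery_file_dest;

    SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
    FROM   v$flash_recovery_area_usage;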

  • PDApp.log continually filling up

    After asking myself where my disk space had gone, I discovered that Adobe's PDApp.log (and its rollover files) are the cause. It seems they are constantly being filled with the following:

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | OPM |  |  | 15848. SQL query failed: CREATE TABLE IF NOT EXISTS opm_data (domain varchar(25), subdomain varchar(25), key varchar(100), value TEXT, PRIMARY KEY (domain, subdomain, key));

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | OPM |  |  | 15848. Failed in opm_data table creation

    24/10/15 08:01:26:749 | [ERROR] |  | USS | OPM | OPM |  |  | 15848. Failed to open data opm in opm_createLibRef session

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | OPM |  |  | 15848. Could not close opm database with sqliteCloseStatus 21

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | IMSLib_OPMWrapper |  |  | 15848. Failure of the OPM create Ref lib

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLibHelper |  |  | 15848. Allocation of OPMWrapper succeeded

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | IMSLib_OPMWrapper |  |  | 15848. The ref lib OPM is uninitialized in OPMSetValueForKey

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLibHelper |  |  | 15848. Cannot get the local db user name in getProxyCredentialsFromLocalStore proxy

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. Get failed proxy to store information while creating the instance IMSLib...

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. Fetch accessToken running...

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. InLocale invalid argument when calling IMS_fetchAccessToken

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. Using environment: https://ims-na1.adobelogin.com

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | IMSLib_OPMWrapper |  |  | 15848. The ref lib OPM is uninitialized in OPMSetValueForKey

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | IMSLibHelper |  |  | 15848. Has failed to obtain the guid encrypted in the local store while processing fetchDeviceTokenFromLocalStore

    24/10/15 08:01:26:749 | [WARNING] |  | USS | OPM | IMSLib |  |  | 15848. Has failed to extract the token device of local store during the generation of the accessToken

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. Releasing the instance IMSLib

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLibHelper |  |  | 15848. Reference OPM released

    24/10/15 08:01:26:749 | [INFO] |  | USS | OPM | IMSLib |  |  | 15848. IMSLib released instance...

    24/10/15 08:01:26:750 | [INFO] |  | USS | IMSLib | IMSLib |  |  | 15848. Build Version - 9.0.1.20

    24/10/15 08:01:26:750 | [INFO] |  | USS | IMSLib | IMSLib |  |  | 15848. Logging verbosity level set to 4

    24/10/15 08:01:26:750 | [INFO] |  | USS | IMSLib | IMSLib |  |  | 15848. Create an instance of IMSLib...

    24/10/15 08:01:26:750 | [INFO] |  | USS | OPM | OPM |  |  | 15848. Build Version - 9.0.1.20

    24/10/15 08:01:26:750 | [INFO] |  | USS | OPM | OPM |  |  | 15848. Logging verbosity level set to 4

    24/10/15 08:01:26:751 | [WARNING] |  | USS | OPM | OPM |  |  | 15848. Error: unable to executeGeneralSQLQuery. error: 14 errorMsg: could not open the database

    I can't tell if it's an auth issue or a MySQL problem. Either way, it is not good. Lightroom, at this point, is almost unusable because the CPU/disk get absolutely pegged when the application is open. I tried reinstalling a few days ago, and that bought me about one day of problem-free use.

    Any help would be appreciated!

    Rob,

    Please try to remove just the Adobe Creative Cloud Desktop App

    https://helpx.Adobe.com/creative-cloud/help/uninstall-creative-cloud-desktop-app.html

    and then re-install it from here: creative cloud desktop application

    If that does not help, please contact our support department for assistance:

    FAQ: How to contact Adobe for support?

    Kind regards

    Guinot

  • Undo tablespace continues to grow

    Hello
    My undo tablespace has grown to 14 GB (even though the actual size of my undotbs is 9 GB). I tried to resize the datafiles, but that does not work.

    So I assume the procedure for this task is:

    - Create a new undo tablespace:
    SQL> create undo tablespace UNDOTBS2 datafile '<full file path>' size <size>;

    - Change the UNDO_TABLESPACE parameter:
    SQL> alter system set UNDO_TABLESPACE = UNDOTBS2;

    - Drop UNDOTBS1:
    SQL> drop tablespace UNDOTBS1 including contents and datafiles;

    But my question is: is it safe to drop UNDOTBS1 when it holds a lot of data, and if I delete that data (the undo tablespace), will I still be able to recover completely?

    db_version:10.2.0 (Linux)

    That is the procedure for dropping an undo tablespace. But if it has already grown once, I suggest you first investigate why it is growing. Check with the developers and monitor the database; the queries sketched below are one place to start.
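    As a hedged sketch, these are standard 10.2 views and SQL*Plus commands; they show how much undo is active, unexpired, or expired, and what retention Oracle is tuning to. Interpret the results before resizing anything:

    SHOW PARAMETER undo_retention

    SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb
    FROM   dba_undo_extents
    GROUP  BY status;

    SELECT MAX(tuned_undoretention) AS tuned_retention_sec,
           MAX(maxquerylen)         AS longest_query_sec
    FROM   v$undostat;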

  • VM size continues to grow? Hard disk file

    It looks like my hard disk file keeps growing, even though I don't do a ton or save a lot.  I understand that it will grow as you save stuff and fill it up more.

    Do they also grow naturally for other reasons, or is it only the space you fill up with files and stuff?

    Each time you add or remove data, the virtual disk file will grow.

    When you defragment inside the guest, the disk will grow.

    If you empty the recycle bin, the disk will grow...

    There is no way around that - get used to it.

    If you have installed the VMware Tools, there is a feature called "shrink" which can be used to recover the lost space.

    Use it regularly

    ___________________________________

    Description of the vmx settings: http://sanbarrow.com/vmx.html

    VMware-liveCD: http://sanbarrow.com/moa.html

  • Berkeley DB database file size continues to grow

    We have three separate processes doing inserts, updates, and deletes against a single file, and we see the file size increase continuously until we have exhausted all the space on our filesystem (10 GB).

    The BDB reference guide says:
    Space released by deleting key/data pairs in a Btree or Hash database is never returned to the filesystem, although it is reused where possible. This means that Btree and Hash databases only ever grow; if enough keys are deleted from a database that shrinking the underlying file is desirable, you must create a new database and copy the records from the old one into it.

    My understanding of the statement above is that BDB should reuse the pages whose key/data pairs have been deleted. Any idea why this is not happening in our case? The delete process we have put in place runs in an infinite loop, and we have verified that the deletions are actually going through.

    Our keys are integers, generated in ascending consecutive order. We are using the C API.

    Thank you
    SB

    Published by: user1033737 on March 25, 2009 16:03

    Hey SB,

    There is no specific size the database must reach before page reuse begins. Also, you do not need to close and reopen the database handles to influence page reuse. BDB reuses pages once they are empty and does not do any kind of automatic key/page balancing (as that can lead to deadlocks). The factors you should look at are the insert, update, and delete rates, whether transactions are long-lived, the deadlock detection policy, the page fill factor, and the database page size.

    An example that might explain the behavior you see is one where the deleting process/thread(s) falls behind while trying to delete keys found on a given page. By the time it acquires the write lock on that page, the keys on it are no longer consecutive, and at best the deleting process/thread manages to remove only some of those keys (the remaining keys have higher values, because the inserting and updating processes/threads are far ahead), so the page never becomes empty and cannot be placed on the free list for reuse. If the insert and update processes are very active, this can result in frequent page splits (or new page allocations), with sets of keys moving onto the new pages. In addition, newly inserted keys can land on already populated pages, making it harder for the delete process to empty them. If the new keys being inserted can be placed, according to their values, on already populated pages where there is room for them, then the pages on the free list, if any, will not be reused. Also note that we do not do key/page balancing.

    A result of such a scenario is a low page fill factor (a small number of key/data pairs spread across many leaf pages). You can monitor the database statistics using 'db_stat -d' to get information on the number of pages, the pages on the free list, the page fill factor, and so on.
    Try compacting the database and see how it goes (a command-line sketch follows below). Also, if you have a simple unit test program showing this behavior, I may be able to track down the culprit.
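    Assuming the standard BDB utilities are on the PATH (the file name mydb.db is illustrative; with the C API, DB->compact() is the programmatic equivalent), a minimal command-line sketch of that check-and-rebuild route:

    # report page counts, free-list pages and fill factor
    db_stat -d mydb.db

    # rebuild the database into a new, compact file, then swap it in
    db_dump mydb.db | db_load new_mydb.db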

    Best regards
    Andrei

  • What to do when a support continues to grow for you?

    I own 2 Mac Pros, 2 MacBook Pros, 2 iPhones, 1 iPod touch, an iPad 2, and 2 iPods - all great.  Yep - I'm bragging a little here

    And a lemon of a 27-inch iMac.  A little over 3 years old.

    It does this thing where it will not power on unless the power cord is unplugged for a minute or two.  From day 1, brand new out of the box, it would randomly shut down in the middle of a project.

    It has been in the shop 4 times.

    display replaced

    Replaced hard drive

    Replaced logic board

    Problem still there.

    So 2 weeks ago I finally lost it and talked to Apple support, who eventually put me through to Mathew, who finally agreed that the best thing to do would be to replace the unit.  That was March 16.

    Since then I have emailed Mathew 3 or 4 times and left 3 or 4 messages on his phone, and he does not respond.

    Now when I enter my serial number in the system, it says 'not covered'.

    I have reason to suspect that what Mathew agreed to has been denied; why can't I seem to get a response from anyone?

    I'm a good supporter of the Mac, or have been so far. I probably just need to toughen up, but is it normal to ignore loyal customers?

    If it is a little more than three years old (by how much - days? weeks? months?), your extended AppleCare (if you paid for it before the end of your first year), no doubt, has ended/expired.

    As such, maybe nobody will tell you that you are no longer entitled to a replacement iMac.

    Why did you wait so long (almost until the end of your 3-year AppleCare warranty) to file a complaint about the problems with your iMac?

    You had three full years to get the issues resolved.

    (Another user posted in these forums very recently with similar problems with his iMac (he has a year of extended AppleCare left), and we advised him to act now and focus on getting a new replacement iMac.)

    You may be out of time and out of luck, and may need to repair your iMac on your own dime now.

    If your extended AppleCare ended only recently, less than a week ago, you might want to call Apple again and talk to someone else (get their name and extension number if you speak with someone directly from Apple in CA), explain the problem to them, tell them again how recently your iMac went out of warranty, and see whether the employee is sympathetic to your problem and will offer you either another free repair or the option of a replacement iMac.

    It is not Apple's fault that you had problems with your iMac and did not report them early enough, while you still had plenty of warranty coverage left.

    This may be why they are not contacting you or returning your calls.

    Try calling Apple again, talk to someone, get ALL of their contact information, and if they issue a case number, write it down.

    Good luck!

  • Continuing to grow/expand my limits: dealing with overflow

    I have a strange situation with a sensor that I hope you guys can help me with...

    I have a 16-bit binary value coming out of a chip.  I can read it quite well, but from time to time the signal wraps around (going from values near 65535 to values near 0 within one reading, and vice versa). This is due to the chip and not to BT reading problems, and I don't see any settings to adjust this on the chip side...

    I wonder whether there are strategies for dealing with this in how the data is interpreted.  Ideally, I would just shift the low values up, but I can't find a reliable way to apply that.  I thought of checking whether the signal dropped by 2^15, which would be a pretty big indicator; I have no feel for whether that is "legitimate".

    Thanks for any thoughts!

    If it is acceleration, how can your arm 'at rest' show constant accelerations on the x, y, and z axes? How could your acceleration only ever be positive and never negative?

    That does not explain your example graph, but if, as you said, you see numbers jumping from 65535 to near zero, it could be that you are reading an I16 and somehow casting or misinterpreting it as a U16.

    Then, for a noisy signal around zero,

    Casting I16 -1 to U16 gives 65535

    Casting I16 0 to U16 gives 0

  • Help me remove the 4700 duplicate music files that keep growing like rot on a

    I could not stop the download of media information and now have 4700 music files that continue to grow.

    Help, please.

    To begin with, I'd stop any application that could be downloading all these files: Windows Explorer, iTunes, Media Player, or others. Force-quit them if you have to. Then I would boot into Safe Mode and do a few things.

    Get your antivirus program updated and boot into Safe Mode. Note that some viruses can hide from your normal antivirus program, so you really need to scan in Safe Mode. To enter Safe Mode, press F8 about once a second right after you power on, until you get the menu, then select Safe Mode. Then run a complete system scan.

    Microsoft has suggestions and help at

    http://Windows.Microsoft.com/en-us/Windows7/how-do-I-remove-a-computer-virus

    Forum moderator Keith has a few suggestions along this line at

    http://answers.Microsoft.com/en-us/Windows/Forum/Windows_7-performance/Windows-Explorer-has-stopped-working/6ab02526-5071-4DCC-895F-d90202bad8b3

    If that fixes it, fine. If not, use System Restore to go back to a date before the problem started. To run System Restore, click Start -> Programs -> Accessories -> System Tools -> System Restore. Check the box that says "Show more restore points".

    You can also check for corrupted system files. If the above does not help, open an administrator command prompt and run SFC. Click Start, type sfc in the search box, right-click SFC.EXE and click "Run as administrator". Then, at the command prompt, type sfc /scannow.

    Finally, if all else fails, you can look at the rather cryptic system event log. To do that, click Start -> Control Panel -> Administrative Tools -> Event Viewer. Once in Event Viewer, click the System log and scroll through the entries looking for ones flagged "Error" to see if you can find guidance on where the problem may be.

    I hope this helps. Good luck.

  • VM log files in vSphere 5.5

    Attention, gurus:

    I'm confused about something I just read in a VMware KB: Log rotation and logging options for vmware.log.

    Here it is:

    http://KB.VMware.com/selfservice/microsites/null (the log.rotateSize parameter)

    By default, the virtual machine log file (vmware.log) is rotated when the virtual machine is powered on or off.

    To configure log rotation based on file size, include this option in the virtual machine's .vmx file:

    log.rotateSize = <maximum size in bytes the file can reach>

    Note:

    1. The default value is 0 or unlimited.
    2. This parameter is only supported on hosts prior to ESXi 5.1

    NOTE that I put #2 above in bold and underlined it.  Is that for real?  You cannot adjust this setting in 5.1?  It must be a mistake, because I created the parameter and value, but there isn't any way to confirm it works unless I wait forever and monitor it.  FALSE.

    Could someone from VMware who knows the story with this please chime in and provide some certainty on the following:

    (1) What is the default in 5.1 and 5.5 for the virtual machine log file regarding size and rotation?

    (2) Is it a mistake that the article declares the parameter above unsupported? And if it really is not supported, then how is log rotation configured for a VM?

    I recently saw VM logs as large as 1 GB at a customer site. I want to set up a quick script to change this, since many VMs are affected and there are scripts available, but when I look at the scripts that have been written, they all use this setting to configure log rotation.

    Please help me get a straight answer for those who have read this KB and started scratching their heads...

    Thank you.

    Unfortunately, it is true: log rotation has been replaced by a kind of log throttling. I ran into a similar problem with virtual XenApp servers where the log files keep growing (several GB in about a month) until the virtual machine is power-cycled or vMotioned.
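    For what it is worth, on hosts where the setting is still honored (pre-5.1, per the KB), a minimal .vmx sketch would look like the following; the 1 MB value is only illustrative, and log.keepOld is assumed here to be the companion option that controls how many rotated vmware.log files are kept:

    log.rotateSize = "1048576"
    log.keepOld = "6"

    On 5.1/5.5, per the note above, you are left with the built-in throttling, so keeping an eye on datastore usage is still advisable.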

    André

  • Oracle table continues to grow and does not reuse space

    Before I open an SR with Oracle, I thought I'd use this forum for help answering this.

    We have a large table that gets 1-2 million inserts a day while purging 1-2 million rows per day that are 30 days old. The row count remains stable at 55-60 million rows, but the table continues to grow by gigabytes daily.

    The tablespace uses ASSM, and here is the table definition:

    CREATE TABLE owner1.a
    (
      col1 VARCHAR2(20 BYTE) NOT NULL,
      col2 NUMBER(3) NOT NULL,
      col3 VARCHAR2(4000 BYTE),
      col4 VARCHAR2(24 BYTE),
      col5 VARCHAR2(4000 BYTE)
    )
    TABLESPACE one
    PCTUSED 0
    PCTFREE 3
    INITRANS 50
    MAXTRANS 255
    STORAGE (
      INITIAL 64K
      MINEXTENTS 1
      MAXEXTENTS UNLIMITED
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;


    I know that rebuilding the table will reclaim the space, but we are trying to understand why it keeps growing and does not reuse the space freed by the deleted rows. We want to address the root cause so that we never have to rebuild the table...

    Thank you

    Shawn

    Published by: 901337 on December 8, 2011 09:38

    INSERT /*+ APPEND */

    The hint above inserts new rows above the high-water mark, beyond the existing rows, so the space freed by deletes below the HWM is not reused.

    Does the application code include the APPEND hint to speed up the inserts?
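    As a hedged follow-up: if the tablespace really is ASSM and you are on 10g or later, a segment shrink is one way to release space below the high-water mark without a full rebuild (standard syntax, but it takes locks, so test on a copy first):

    ALTER TABLE owner1.a ENABLE ROW MOVEMENT;
    ALTER TABLE owner1.a SHRINK SPACE CASCADE;
    ALTER TABLE owner1.a DISABLE ROW MOVEMENT;

    Conversely, if the application can drop the /*+ APPEND */ hint, conventional-path inserts should start reusing the space freed by the daily deletes.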

  • Oracle Redo Log activity

    Hi members,

    I have an Oracle database instance where the log files continue to grow even when there is no application activity.
    What I have noticed is that there seems to be a frequency pattern to when this log activity occurs.
    My question is: are there any default Oracle processes or on-demand jobs that would result in the generation of this log data?


    Thank you
    Alan

    Hello
    You will have many background processes running, and they generate log activity. For example MMON and automatic backups (if enabled); in 11g you also have the autotask processes which tune top SQLs, collect stats, and so on. So even when nobody connects to the db, you will always see some activity there. The query sketched below is one way to see when it happens.
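    The view and column names are standard (v$log_history, first_time); the hedged sketch buckets log switches by hour, so recurring background work such as stats collection shows up as regular spikes:

    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;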

    Paul

  • Firefox Home continues to show old, closed tabs

    When Firefox Home on my iPhone syncs, it adds the tabs that are currently open on my computer, but it also continues to list old tabs that I closed on my computer long ago. It doesn't list all the old tabs, only some. I have reset and erased the data in my Sync account, signed in and out of FxHome, reset the iPhone, and even reinstalled FxHome. The tab list continues to grow. I would appreciate any suggestions and advice.

    It is possible that there is a problem with the sessionstore.js and sessionstore.bak files in the Firefox profile folder.

    Delete the sessionstore.js and sessionstore.bak files in the Firefox profile folder.

    If you see sessionstore-#.js files with a number in the name, such as sessionstore-1.js, then delete those as well.
    Deleting sessionstore.js will cause App Tabs, tab groups, and open and closed (restorable) tabs to be lost, so you will need to create them again (make a note of them or bookmark them).
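    On Linux or macOS this amounts to something like the following, assuming a default profile path (the folder name is illustrative; adjust for your OS and profile, and make sure Firefox is closed first):

    cd ~/.mozilla/firefox/xxxxxxxx.default    # illustrative profile folder
    rm -f sessionstore.js sessionstore.bak sessionstore-*.js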

    See also:
