Long waits on 'log file sync' while 'log file parallel write' is fine

We have a 9.2.0.8 database that experiences long waits on 'log file sync' (average wait time = 46 ms) while the 'log file parallel write' wait is very good (average wait time is less than 1 millisecond).

The application is a middleware application; it connects several other applications. A single user action in a single application triggers several requests back and forth through this middleware, so it needs DB response times in milliseconds.

The database is quite simple:

- It has a few config tables that the application reads but rarely updates.

- It has a TRANSACTION_HISTORY table: the application inserts records into this table using single-row inserts (about 100 rows per second); each insert is followed by a commit (there is a sketch of the pattern just after this list).

Records are kept for several months and then purged. The table has only VARCHAR2/NUMBER/DATE columns, no LOBs, LONGs, etc. The table has 4 non-unique single-column indexes.

The average row length is 100 bytes.
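
For illustration, the insert/commit pattern is essentially the following (the column names here are invented, only the shape matters): each row gets its own INSERT and its own COMMIT, so each of the ~100 inserts per second pays for its own redo flush.

INSERT INTO transaction_history (txn_id, txn_date, txn_amount, txn_status)
VALUES (:txn_id, SYSDATE, :txn_amount, :txn_status);
COMMIT;   -- one commit per row; each commit waits on 'log file sync'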

The load profile doesn't show anything unusual; the main figures: 110 transactions per second, average transaction size = 1.5 KB of redo.

The data below are from a 1-hour interval (the purge wasn't running during this interval); the physical read and physical write rates are low:

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            160,164.75              1,448.42
              Logical reads:             57,675.25                521.58
              Block changes:                934.90                  8.45
             Physical reads:                 76.27                  0.69
            Physical writes:                 86.10                  0.78
                 User calls:                491.69                  4.45
                     Parses:                321.24                  2.91
                Hard parses:                  0.09                  0.00
                      Sorts:                126.96                  1.15
                     Logons:                  0.06                  0.00
                   Executes:              1,956.91                 17.70
               Transactions:                110.58


The Top 5 timed events are dominated by 'log file sync':

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
log file sync                                     401,608      18,448    59.94
db file parallel write                            124,044       3,404    11.06
CPU time                                                        3,097    10.06
enqueue                                            10,476       2,916     9.48
db file sequential read                           261,947       2,435     7.91


Wait Events section (top waits by wait time):

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync                     401,608          0     18,448     46      1.0
db file parallel write            124,044          0      3,404     27      0.3
enqueue                            10,476        277      2,916    278      0.0
db file sequential read           261,947          0      2,435      9      0.7
buffer busy waits                  11,467         67        173     15      0.0
SQL*Net more data to client     1,565,619          0         79      0      3.9
row cache lock                      2,800          0         52     18      0.0
control file parallel write         1,294          0         45     35      0.0
log file switch completion            261          0         36    138      0.0
latch free                          2,087      1,446         24     12      0.0
PL/SQL lock timer                       1          1         20  19531      0.0
log file parallel write           143,739          0         17      0      0.4
db file scattered read              1,644          0         17     10      0.0
log file sequential read              636          0          8     13      0.0


The log buffer is about 1.3 MB. We could increase the log buffer, but there are no 'log buffer space' waits, so I doubt this would help.


The redo logs have their own file systems, not shared with the data files. This explains the difference between the average waits on 'log file parallel write' (less than 1 ms) and 'db file parallel write' (27 ms).

The redo logs are 100 MB each; there are about 120 log switches per day.
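
(As a rough cross-check: ~160 KB/s of redo would be about 13-14 GB per day if sustained, i.e. 130-140 switches of a 100 MB log, so ~120 switches per day is consistent with this interval being slightly busier than the daily average.)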


What has changed: the insert/commit rate has grown. Several months ago there were 25 inserts/commits per second into the TRANSACTION_HISTORY table; now we are at 110 inserts/commits per second.


What problem it causes for the application: because the database responds more slowly, the (Java-based) application needs more and more threads.
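
To put numbers on that: at roughly one 'log file sync' per commit and a 46 ms average, 110 commits per second means about 5 seconds of 'log file sync' time accumulating per second of wall clock (which matches the 18,448 s over the one-hour snapshot), so the application needs at least five or six threads doing nothing but waiting on commits just to sustain the rate.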


MOS documents on 'log file sync' (for example, 1376916.1, Troubleshooting 'log file sync' waits) recommend comparing the average wait time of 'log file sync' with that of 'log file parallel write'.

If the values are close (for example, log file sync = 20 ms and log file parallel write = 10 ms) then the waits are caused by slow I/O. However, that is not the case here.
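
A quick way to make that comparison outside of statspack is a query along these lines against v$system_event (time_waited is reported in centiseconds):

select event,
       total_waits,
       round(time_waited * 10 / nullif(total_waits, 0), 1) as avg_wait_ms
from   v$system_event
where  event in ('log file sync', 'log file parallel write');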


There was a bug (2669566) in 9.2 which resulted in lgwr under-reporting the log file parallel write time. I wrote about it in September 2005, at which point the bug was present in 9.2.0.6 and reported as fixed in 10.1: log file parallel write (JL Comp). So it is possible that your problem IS the log file writes.

Regards

Jonathan Lewis

Tags: Database

Similar Questions

  • log file sync event

    Hi all


    We use Oracle 9.2.0.4 on SUSE Linux 10. In the statspack report, one of the top timed events is
    log file sync
    We are in the process. We do not use any storage. Is this a bug in 9.2.0.4, or what is the solution for it?
    STATSPACK report for
    
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    ------------ ----------- ------------ -------- ----------- ------- ------------
    ai          1495142514 ai                1 9.2.0.4.0   NO      ai-oracle
    
                Snap Id     Snap Time      Sessions Curs/Sess Comment
                ------- ------------------ -------- --------- -------------------
    Begin Snap:     241 03-Sep-09 12:17:17      255      63.2
      End Snap:     242 03-Sep-09 12:48:50      257      63.4
       Elapsed:               31.55 (mins)
    
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
                   Buffer Cache:     1,280M      Std Block Size:         8K
               Shared Pool Size:       160M          Log Buffer:     1,024K
    
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                                       ---------------       ---------------
                      Redo size:              7,881.17              8,673.87
                  Logical reads:             14,016.10             15,425.86
                  Block changes:                 44.55                 49.04
                 Physical reads:              3,421.71              3,765.87
                Physical writes:                  8.97                  9.88
                     User calls:                254.50                280.10
                         Parses:                 27.08                 29.81
                    Hard parses:                  0.46                  0.50
                          Sorts:                  8.54                  9.40
                         Logons:                  0.12                  0.13
                       Executes:                139.47                153.50
                   Transactions:                  0.91
    
      % Blocks changed per Read:    0.32    Recursive Call %:    42.75
     Rollback per transaction %:   13.66       Rows per Sort:   120.84
    
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   75.59    In-memory Sort %:   99.99
                Library Hit   %:   99.55        Soft Parse %:   98.31
             Execute to Parse %:   80.58         Latch Hit %:  100.00
    Parse CPU to Parse Elapsd %:   67.17     % Non-Parse CPU:   99.10
    
     Shared Pool Statistics        Begin   End
                                   ------  ------
                 Memory Usage %:   95.32   96.78    
        % SQL with executions>1:   74.91   74.37
      % Memory for SQL w/exec>1:   68.59   69.14
    
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    -------------------------------------------- ------------ ----------- --------
    log file sync                                      11,558      10,488    67.52
    db file sequential read                           611,828       3,214    20.69
    control file parallel write                           436         541     3.48
    buffer busy waits                                     626         522     3.36
    CPU time                                                          395     2.54
              -------------------------------------------------------------
    ^LWait Events for DB: ai  Instance: ai  Snaps: 241 -242
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    ---------------------------- ------------ ---------- ---------- ------ --------                
    log file sync                      11,558      9,981     10,488    907      6.7
    db file sequential read           611,828          0      3,214      5    355.7
    control file parallel write           436          0        541   1241      0.3
    buffer busy waits                     626        518        522    834      0.4
    control file sequential read          661          0        159    241      0.4
    BFILE read                            734          0        110    151      0.4
    db file scattered read            595,462          0         81      0    346.2
    enqueue                                15          5         19   1266      0.0
    latch free                            109         22          1      8      0.1
    db file parallel read                 102          0          1      6      0.1
    log file parallel write             1,498      1,497          1      0      0.9
    BFILE get length                      166          0          0      3      0.1
    SQL*Net break/reset to clien          199          0          0      1      0.1
    SQL*Net more data to client         5,139          0          0      0      3.0
    BFILE open                             76          0          0      0      0.0
    row cache lock                          5          0          0      0      0.0
    BFILE internal seek                   734          0          0      0      0.4
    BFILE closure                          76          0          0      0      0.0
    db file parallel write                173          0          0      0      0.1
    direct path read                       18          0          0      0      0.0
    direct path write                       4          0          0      0      0.0
    SQL*Net message from client       480,888          0    284,247    591    279.6
    virtual circuit status                 64         64      1,861  29072      0.0
    wakeup time manager                    59         59      1,757  29781      0.0

    Your elapsed time is about 2,000 seconds (31.55 minutes, rounded up) - and your log file sync time is 10,000 seconds - which is 5 seconds per second for the duration. Also your session count is about 250 at the beginning and end of the snapshot - so if we assume that the number of sessions is stable for the duration, each session has suffered 40 seconds of log file sync in the interval. You have recorded roughly 1,500 transactions in the interval (0.91 per second, about 13 per cent of them rollbacks) - so your log file sync time was on average more than 6.5 seconds per commit.

    Whichever way you look at it, this suggests that either the log file sync figures are wrong, or you had a temporary outage. Given that you also had some buffer busy waits and control file write waits averaging around 900 ms each, a hardware glitch seems likely.

    Check the log file parallel write times to see if this helps to confirm the hypothesis. (Unfortunately some platforms don't report log file parallel write times properly in earlier versions of 9.2 - so this may not help.)

    You also have 15 enqueue waits averaging 1.2 seconds - check the enqueue statistics section of the report to see which enqueue it was: if it was, for example, the CF (control file) enqueue, then that also helps to confirm the hardware hypothesis.

    It is possible that you had a couple of hardware resets, or something like that, in the interval which stalled your system quite dramatically for a minute or two.

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "Science is more than a body of knowledge; It's a way of thinking. "
    Carl Sagan

  • Viewing log files in a cluster configuration (HTTPS UI)

    Hi all

    In my standalone configuration, I can view and download log files from the HTTPS web GUI under "Log Subscriptions" -> "Log Files".

    But in a cluster configuration, I can't see the "Log Files" column under "Log Subscriptions". How can I access the files?

    Thank you!

    Christoph

    Hi Christoph,

    try changing to machine mode to access the files.

    Best regards

    Enrico

  • oracle.as.install.engine.exception.LogInitializeException: not enough space to create the log files in the location specified for the inventory. Create space under the specified inventory (null) or point to a different directory

    Hello

    I have installed:

    -(Oracle Linux) OL 6.6

    121 GB HD

    5.0 GB RAM

    -JDK-7u80-EA-bin-b05-Linux-x64-20_jan_2015.tar.gz (Java)

    -Fmw_12.1.3.0.0_infrastructure.jar (infrastructure)

    -Fmw_12.1.3.0.0_ohs_linux64.bin (SST)

    I try to install OBIEE, I unzip these files:

    -bi_linux_x86_111170_64_disk1_1of2.zip

    -bi_linux_x86_111170_64_disk1_2of2.zip

    -bi_linux_x86_111170_64_disk2_1of2.zip

    -bi_linux_x86_111170_64_disk2_2of2.zip

    -bi_linux_x86_111170_64_disk3.zip

    I run /home/oracle/OBIEE/Disk1/runInstaller and the screen to select the oraInventory directory opens; when I click OK, the error message appears (see image below):

    Error_Screen.png

    [oracle@localhost Disk1]$ ./runInstaller

    Starting Oracle Universal Installer...

    Checking Temp space: must be greater than 1536 MB.   Actual 36602 MB    Passed

    Checking swap space: must be greater than 500 MB.   Actual 2553 MB    Passed

    Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-09-13_08-13-50. Please wait... [oracle@localhost Disk1]$ 13/09/2015 20:13:53 oracle.as.install.bi.util.ConsumerUIProperties getCustomPropertiesFilename

    INFO: Using a custom UI properties of the oracle/as/install/bi/config/consumer-ui.properties file

    [ERROR]: error initializing log values

    oracle.as.install.engine.exception.LogInitializeException: Espaço insuficiente para criar os arquivos de log na localização especificada para o inventário. Crie espaço sob o inventário especificado /home/oracle/oraInventory ou aponte para outro inventário

    at oracle.as.install.engine.logging.EngineLogHelper.<init>(EngineLogHelper.java:65)

    at oracle.as.install.engine.logging.EngineLogHelper.initialize(EngineLogHelper.java:192)

    at oracle.as.install.engine.InstallEngine.<init>(InstallEngine.java:135)

    at oracle.as.install.engine.InstallEngine.<clinit>(InstallEngine.java:130)

    at oracle.sysman.oio.oioc.OiocOneClickInstaller.main(OiocOneClickInstaller.java:603)

    In English:

    oracle.as.install.engine.exception.LogInitializeException: not enough space to create the log files in the location specified for the inventory. Create space under the specified inventory (null) or point to a different directory

    What could it be?

    Well, you very probably don't have a lot of space left under /home.

    You can move /home/oracle/oraInventory to another location on the / partition, where you probably have more space left if you used the default partitioning.

    For example, as a root user:

    mkdir /u01

    mv /home/oracle/oraInventory /u01

    Update /etc/oraInst.loc and replace

    inventory_loc=/home/oracle/oraInventory

    with

    inventory_loc=/u01/oraInventory

  • Trying to collect events from a log file with the agent installed on Linux and get it working - need help

    I modified liagent.ini per the documentation... if I understand it correctly... actually I have changed it so many times my eyes hurt.

    Here it is:

    ; VMware Log Insight Agent configuration. Please save in UTF-8 format if you use non-ASCII names/values!
    ; The effective configuration is this file joined with the server-side settings to form liagent-effective.ini
    ; Note: the agent does not need to be restarted after a configuration change
    ; Note: it may be more convenient to configure agents from the server's Agents page!

    [server]
    hostname = 192.168.88.89
    ; Hostname or IP of your Log Insight server / cluster load balancer. Default:
    ; hostname = LOGINSIGHT

    ; Protocol can be cfapi (Log Insight REST API) or syslog. Default:
    proto = cfapi

    ; Log Insight server port to connect to. Default ports for protocols (all TCP):
    ; syslog: 514; syslog with ssl: 6514; cfapi: 9000; cfapi with ssl: 9543. Default:
    port = 9000

    ; Use SSL. Default:
    ssl = no

    ; Example with a Certificate Authority:
    ; ssl = yes
    ; ssl_ca_path=/etc/pki/tls/certs/ca.pem

    ; Time in minutes to force a reconnection to the server.
    ; This option mitigates imbalances caused by long-lived TCP connections. Default:
    reconnect = 30

    [logging]
    ; Logging verbosity: 0 (no debug messages), 1 (essentials), 2 (verbose with more impact on performance).
    ; This option should always be 0 under normal conditions. Default:
    debug_level = 1

    [storage]
    ; Max local storage usage (data + logs) in MBs. Valid range: 100-2000 MB.
    max_disk_buffer = 2000

    ; Uncomment the appropriate section to collect log files
    ; The recommended way is to enable the Linux content pack from the LI server
    [filelog|bro]
    directory = /data/bro/logs/2015-03-04
    ; include = *.log
    parser = auto



    I'll post it here - should I create a support bundle?


    Post edited by: kevinkeeneyjr - added a screenshot of the agent status

    Post edited by: kevinkeeneyjr added liagent.ini

    Ah! Yes, the agent collects events in real time. If no new events are written then it won't collect anything. If you want to collect logs that were generated before, use the Log Insight Importer which was released with LI 3.3. Hope this helps!

  • Can you edit the vmx file while the virtual machine is running? And is it safe?

    Can you edit the vmx file while the virtual machine is running? And is it safe?

    Hello

    Moved to the Virtual Machine and Guest OS forum.

    If you edit the VMX while the virtual machine is powered on, the change does not take effect until a power off and then power on (not a reboot), and there is a good chance that the change will be discarded when you power off the system. So it is not safe to do so.

    Best regards
    Edward L. Haletky
    VMware communities user moderator
    ====
    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro articles - Top Virtualization Security Links - Virtualization Security Round Table Podcast

  • SQL*Loader - can I specify the log file in the control file?

    SQL*Loader - can I specify the log file in the control file? If so, how?

    Yes,

    Try sqlldr `echo $command_line_par`.

    Note: those are back quotes.

    Kind regards

  • log file sync much larger than log file parallel write

    Hi all

    The average log file sync wait is 30 ms while log file parallel write is only 10 ms - what does this mean? What are the main reasons for this difference?

    Sincerely yours.

    A. U.

    Hello

    The average log file sync wait is 30 ms while log file parallel write is only 10 ms - what does this mean? What are the main reasons for this difference?

    Essentially, when the log writer writes, several sessions may be waiting on it. During 10 ms of write time you can have one lgwr write but, say, 3 user sessions waiting on 'log file sync', so the instance can accumulate roughly three times as much 'log file sync' time as 'log file parallel write' time.

    Kind regards

    Franck.

  • ORA-39070: unable to open the log file, while using expdp

    Hello
    Please advise why I get ORA-39070: unable to open the log file.
    When running expdp against the default directory DATA_PUMP_DIR, the expdp completes successfully.

    - current directory on the Linux file system:
    $ pwd
    /log_files/oracle/cdl1
    - Create the directory and grant read/write on it:
    $ sqlplus sys/xxxx@cdl as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 22 16:36:01 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    
    SQL> create or replace directory EXPTEMPDIR as '/log_files/oracle/cdl1';
    Directory created.
    
    SQL> grant read,write on directory EXPTEMPDIR to system;
    Grant succeeded.
    -Verify the existence of the directory:
    SQL> select * from dba_directories;
    
    OWNER      DIRECTORY_ DIRECTORY_PATH
    ---------- ---------- --------------------------------------------------
    ...
    SYS        EXPTEMPDIR /log_files/oracle/cdl1
    ...
    Check for permission to write to the directory:
    [/log_files/oracle/cdl1]$ touch 1
    [/log_files/oracle/cdl1]$ ls -l
    total 0
    -rw-r--r-- 1 oracle oinstall 0 Feb 22 16:38 1
    -try to expdp:
    expdp system/xxxx@cdl FULL=y DIRECTORY=EXPTEMPDIR dumpfile=expfull220212.dmp
    Export: Release 11.2.0.3.0 - Production on Wed Feb 22 16:38:33 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.   <<<<<<<<=======
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation

    Pl post OS details - is it a RAC instance?

    Error ORA-39002 ORA-39070 ORA-29283 ORA-6512 when you use Export DataPump (EXPDP) [ID 1305166.1]
    Export DataPump (EXPDP) triggers errors ORA-39002 ORA-39070 ORA-29283 ORA-6512 on RAC Instance [ID 1317798.1]

    HTH
    Srini

  • Generate the log file for the dialog box

    Hi all


    I'm generating a log of the dialog box's state as a .txt file. That means that if the checkbox is checked, the log file should say "checkbox1 - 01. Check the ratio, sizes against job ticket and slug information" is checked


    If the checkbox is not checked, the log file should say "checkbox1 - 01. Check the ratio, sizes against job ticket and slug information" is not checked


    and the text entered in "myText2" also needs to go into the log file


    Can someone help with this... help would be appreciated!



    var w = new Window("dialog");
    var myGroup1 = w.add("panel", undefined, "P&&G Check List");
    myGroup1.alignChildren = "left";
    var checkbox1 = myGroup1.add("checkbox", undefined, "  01.  Check the ratio, sizes against job ticket and slug information");
    var checkbox2 = myGroup1.add("checkbox", undefined, "  02.  Check images are linked");
    var checkbox3 = myGroup1.add("checkbox", undefined, "  03.  Visually check the progression of KV/Model/CP images");
    var checkbox4 = myGroup1.add("checkbox", undefined, "  04.  Visually check the progression of other elements like Logo and Bottle");
    var checkbox5 = myGroup1.add("checkbox", undefined, "  05.  Check the placement of Language Tagging");
    var checkbox6 = myGroup1.add("checkbox", undefined, "  06.  Ensure that all measurements are calculated based on Live area");
    var checkbox7 = myGroup1.add("checkbox", undefined, "  07.  After resizing the KV image frame opened up to trim and bleed");
    var checkbox8 = myGroup1.add("checkbox", undefined, "  08.  Fill Magenta if there is inadequate image");
    var checkbox9 = myGroup1.add("checkbox", undefined, "  09.  Ensure the document has bleed, crop marks, gutter marks and slug information");
    var checkbox10 = myGroup1.add("checkbox", undefined, "  10.  Ensure the final artwork is updated in the Server");
    var checkbox11 = myGroup1.add("checkbox", undefined, "  11.  Enter time in CMD");
    var myGroup2 = w.add("panel", undefined, "Operator Name");
    var myText2 = myGroup2.add("edittext", undefined, "");
    myText2.characters = 25;
    myGroup2.orientation = "left";
    var buttons = w.add("group");
    buttons.add("button", undefined, "Export PDF", {name: "ok"});
    buttons.add("button", undefined, "Cancel");
    w.show();
    //~ group();
    //~ if (myGroup1.alignChildren.value != true) {
    //~     alert("yes")
    //~ }

    myDoc = app.activeDocument;
    w = [];

    // DESCRIPTION: Make a TXT file
    myDoc = app.activeDocument;
    log1 = makeLogFile(app.activeDocument.name.split('.')[0], myDoc, true);
    log(log1, app.activeDocument.name);
    //~ log2 = makeLogFile("test", myDoc, false);
    //~ log(log2, "Text file log base 2");
    log1.execute();
    //~ log2.execute();

    function makeLogFile(aName, aDoc, deleteIt) {
        var logLoc; // path to the folder that will contain the log file
        try {
            logLoc = aDoc.filePath;
        } catch (e) {
            logLoc = getmyDoc().parent.fsName
        }
        var aFile = File(logLoc + "/" + aName + ".txt");
        if (deleteIt) {
            aFile.remove();
            return aFile;
        }
        var n = 1;
        while (aFile.exists) {
            aFile = File(logLoc + "/" + aName + String(n) + ".txt");
            n++
        }
        return aFile
    }

    function getScriptPath() {
        try {
            return app.activeScript;
        } catch (e) {
            return File(e.fileName);
        }
    }

    function log(aFile, message) {
        var today = new Date();
        if (!aFile.exists) {
            // make the new log file
            aFile.open("w");
            aFile.write(String(today) + "\n");
            aFile.close();
        }
    }

    function log(aFile, message) {
        var text = w;
        if (!aFile.exists) {
            // make the new log file
            aFile.open("w");
            aFile.write(message + "\n" + "\n" + String(w) + "\n");
            aFile.close();
        }
        //~ aFile.open("e");
        //~ aFile.seek(0, 2);
        //~ aFile.write("\n" + message);
        //~ aFile.close();
    }

    myDoc.close(SaveOptions.no);

    Thanks in advance

    Steve

    Hi Steve,

    There are some errors in your code.

    1. function 'getmyDoc' is used, but not created.
    2. function 'getScriptPath' is created but not used. (In any case, this will not give you an error.)
    3. function 'log' is defined twice with the same parameter length.

    etc...

    Here, I have modified your code. Try this.

    var w = new Window ("dialog");
    var myGroup1 = w.add('panel', undefined, 'P&&G Check List');
    myGroup1.alignChildren = "left";
    var checkbox1 = myGroup1.add ("checkbox", undefined, "  01.  Check the ratio, sizes against job ticket and slug information");
    var checkbox2 = myGroup1.add ("checkbox", undefined, "  02.  Check images are linked");
    var checkbox3 = myGroup1.add ("checkbox", undefined, "  03.  Visually check the progression of KV/Model/CP images");
    var checkbox4 = myGroup1.add ("checkbox", undefined, "  04.  Visually check the progression of other elements like Logo and Bottle");
    var checkbox5 = myGroup1.add ("checkbox", undefined, "  05.  Check the placement of Language Tagging");
    var checkbox6 = myGroup1.add ("checkbox", undefined, "  06.  Ensure that all measurements are calculated based on Live area");
    var checkbox7 = myGroup1.add ("checkbox", undefined, "  07.  After resizing the KV image frame opened up to trim and bleed");
    var checkbox8 = myGroup1.add ("checkbox", undefined, "  08.  Fill Magenta if there is inadequate image");
    var checkbox9 = myGroup1.add ("checkbox", undefined, "  09.  Ensure the document has bleed, crop marks, gutter marks and slug information");
    var checkbox10 = myGroup1.add ("checkbox", undefined, "  10.  Ensure the final artwork is updated in the Server");
    var checkbox11 = myGroup1.add ("checkbox", undefined, "  11.  Enter time in CMD");
    var myGroup2 = w.add('panel', undefined, ' Operator Name');
    var myText2 = myGroup2.add("edittext", undefined, "");
    myText2.characters = 25;
    myGroup2.orientation = "left";
    var buttons = w.add ("group");
    buttons.add ("button", undefined, "Export PDF", {name: "ok"});
    buttons.add ("button", undefined, "Cancel");
    w.show ();
    myDoc = app.activeDocument;
    log1 = makeLogFile(app.activeDocument.name.split('.')[0], myDoc, true);
    log(log1, app.activeDocument.name);
    log1.execute();
    function makeLogFile(aName, aDoc, deleteIt)
    {
        var logLoc = "";
        try
        {
            logLoc = aDoc.filePath;
            } catch (e) {}
        var aFile = File(logLoc + "/" + aName + ".txt");
        var n = 1;
        while (aFile.exists)
        {
            aFile = File(logLoc + "/" + aName + String(n) + ".txt");
            n++;
            }
        return aFile
        }
    function log(aFile, message)
    {
        var text = w;
        var rep = "";
        if (!aFile.exists)
        {
            aFile.open("w");
            var today = new Date();
            rep += String(today) + "\n";
            rep += message + "\n" + "\n\n";
            for(var i =0;i
    

    Kind regards

    Cognet

  • Trying to get a "tail" result on all of the ESX host log files across a cluster

    Hello everyone!

    I'm trying to design a script that I can point at groups of ESX hosts to output the last 10 lines of specific log files on those systems using Get-Log. First of all, I would like to point out that the following code works for a single individual host:

    $esx = Get-VMHost "hostname"

    $esxlog = Get-Log -key 'vmkernel' -VMHost $esx

    $esxlog.Entries


    I had to leave out the expression that selects the last ten lines of $esxlog.Entries because this editor kept interpreting that section as HTML and presenting it as such.

    This shows the expected results, i.e. the last 10 lines of the log on that host.

    Now, when I try to expand this concept and create a loop based on it using the code below:

    $esx = Get-Cluster "CLUSTERNAME" | Get-VMHost

    foreach ($_ in $esx) {$esxlog = Get-Log -key 'vmkernel' -VMHost $_ }

    $esxlog.Entries


    Once more, the code specifying the last ten lines has been omitted here in order to avoid confusion.

    I just get the following error: A parameter cannot be found that matches parameter name 'System.Object[]'.

    What am I doing wrong with my script?

    The variable $_ is a predefined variable, so you should not use it as your loop variable.

    You could do it like this

    $esx = Get-Cluster  | Get-VMHost
    $esx | %{
         $esxlog = Get-Log -Key "vmkernel" -VMHost $_
         $nrEntries = $esxlog.Entries.Count
         Write-Host $_.Name -foregroundcolor green
         $esxlog.Entries[($nrEntries-11)..($nrEntries-1)]
    }
    
  • Log file for a security issue on a form?

    In 11.1.2 I created a new application which mirrors a previous app, but when non-admin users enter the app they receive this error when trying to access the form.

    "Security and/or filtering has resulted in a required dimension not being represented on this data form."

    I have at least read rights on the Scenario, the Version and the Entity (I rename the office).

    Worse, Planning has encountered an error - check the log... where is the log file for this error? I can fix it much more easily if I know which dimension the user/app is unhappy with...

    OK, scratch that, forget the log - I found that for some reason the database/filter in EAS does not reflect what is in Planning security. How do I correctly push a refresh to Essbase? The Planning administration/database refresh doesn't seem to do the trick.


    HEY, IT'S MONDAY... reading the messages above again, I found one of our custom dimensions had been checked for security when it should not have been... all fixed.

    Thank you

    JTS

    Published by: jts July 11, 2011 06:52

    Published by: jts July 11, 2011 07:07

    fixed

  • Moving an Oracle data file while the database is mounted

    I have a database in mount mode while I restore the last of my data files through RMAN, which won't be finished for a day and a half, and then there's a mountain of archivelogs to apply. I need to move some data files to another disk so that I have room to restore this file. I've looked around at a few sets of instructions, but they are all different. Many of them tell me to shut down first, which is not a good option with the restore going. More say to use the operating system to copy the file, but one site skipped that step. One even said that I can't log out of my session between telling Oracle in SQL to move the file and opening the database. All I get are lists of steps that all assume my database is up and working.

    Currently, I'm copying the file to the new destination in Linux, and I'm confused about what my options are and how to get the data file moved before Oracle runs out of space for my restore, without doing anything to make things worse. I guess I can just wait for the copy to finish and then run something like "alter database rename file '/old/my_file.dbf' to '/new/my_file.dbf';". If I have to cancel the copy for some reason, it is not a big deal, as long as I get the space in time.

    At a quick look, it seems that you should be able to move the data file and do the rename as you have suggested, as long as you are in mount mode. I hope you have a good backup (or an easier way to recover) in case it does not work.
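
    Something along these lines (the file names are just the placeholders from your post): copy the file at the OS level, then, with the database still mounted, rename it in the control file:

    $ cp /old/my_file.dbf /new/my_file.dbf
    SQL> alter database rename file '/old/my_file.dbf' to '/new/my_file.dbf';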

    MOS Doc 115424.1 - how to rename or move data files and log files

    HTH
    Srini

  • Audio and video synchronization on a file in the Windows Media Player library, Windows 7 Home Premium 64-bit

    How can I synchronize audio and video on a file in the Windows Media Player library on Windows 7 Home Premium 64-bit?
    The video plays about a second faster than the audio; the download is good, because it works on another computer from a flash drive...


    Set up a device to sync in Windows Media Player

    Sync manually in Windows Media Player

    Windows Media Player sync: frequently asked questions

  • Is there a log file for copy errors?

    Often, when I copy a lot of files, there are a few errors... which usually appear at the end of the copy operation.  These errors appear in a dialog box that is too small to read the path/file names.  And I need it only to make a decision to continue with the copy or to skip.  There is also a checkbox that prompts me to do the same with the other x similar problem files.

    My problem is that I would like to know the full path/file names of all the files in error.  Is there some log file I can inspect to review the list of files in error?  I looked in the Event log, but could not see anything related to file copy errors.

    Hello

    No, there is no log or file that lists the error files.
