SQL Log Shipping 14420, 14421 false alerts

I've set up SQL Server 2014 transaction log shipping to copy data from a primary to a secondary server. The backup, copy and restore jobs are all working properly; however, the alert job configured on the secondary server reports the last backup time as the time of the initial full backup taken before transaction log shipping started. This is also the information reported when I query the msdb.dbo.log_shipping_monitor_primary table. So when the alert job runs, it reports:

Msg 14420, Level 16, State 1, Procedure sp_check_log_shipping_monitor_alert, Line 61
The log shipping primary database XXXXXXX.xxxxxxx has backup threshold of 60 minutes and has not performed a backup log operation for 1120 minutes. Check agent log and logshipping monitor information.

SQL Server was originally installed before the server was renamed to its current name; however, I've renamed the instance, and all database queries (for example SELECT @@SERVERNAME) return the correct server name.

Any help would be appreciated.
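For anyone hitting the same symptom, the stale value the alert job reads can be inspected directly. A diagnostic sketch (the database name above is a placeholder; run this on the server that raises the alert):

```sql
-- Compare what the log shipping monitor has recorded against the
-- instance's current name and the actual last backup time.
SELECT primary_server, primary_database,
       backup_threshold, last_backup_file, last_backup_date
FROM   msdb.dbo.log_shipping_monitor_primary;

-- After a server rename, a mismatch between primary_server and the
-- current name is a common cause of 14420/14421 false alerts.
SELECT @@SERVERNAME AS current_instance_name;
```

If primary_server still shows the old machine name, the monitor record never picks up new backup history, so the "last backup" stays frozen at the initial full backup.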

This issue is beyond the scope of this site (which is for consumers). To be sure you get the best (and fastest) reply, please ask on TechNet (for IT Pros) or MSDN (for developers).

Tags: Windows

Similar Questions

  • Redo log shipping after a physical standby shutdown

    Hello,

    I just have a question about redo log shipping

    in a simple 11g primary/standby configuration.

    If a physical standby instance is shut down, is redo shipping still active? Does primary redo file transfer still go on?

    Thanks in advance

    Hello;

    I'm not going to say no. I always DEFER on the primary side before I shut down; it keeps false alarms from happening.

    Test

    Using this query

    http://www.mseberg.net/data_guard/monitor_data_guard_transport.html

    DB_NAME HOSTNAME LOG_ARCHIVED LOG_APPLIED APPLIED_TIME LOG_GAP

    ---------- -------------- ------------ ----------- -------------- -------

    TDBGIS01B PRIMARY 5558 5557 21-OCT/12:51 1

    Stop the standby

    SQL > alter database recover managed standby database cancel;

    Database altered.

    SQL > shutdown

    ORA-01109: database is not open

    The database is dismounted.

    Force a log switch on the primary side

    SQL > alter system switch logfile;

    System altered.

    (3)

    The last log before was 5558, so check for 5559 or higher

    Last log is

    o1_mf_1_5558_c2hn5vdf_.arc

    Wait a few minutes, no change

    Best regards

    mseberg
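    The deferral mseberg describes can be sketched as follows (a sketch, assuming the usual case where log_archive_dest_2 on the primary ships to the standby; check your own destination number first):

    ```sql
    -- On the primary, before shutting the standby down: stop redo
    -- shipping so the primary does not raise transport errors.
    ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;

    -- Once the standby is mounted again: resume shipping; the primary
    -- then resolves any archive log gap automatically.
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
    ```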

  • Archive logs shipped, but older logs are not applied...

    Environment:

    Oracle 11.2.0.3 on Solaris 10.5

    The Data Guard environment had been working fine for 6 months.

    I recently had problems with disk space on the standby and had to copy some of the archived log files to a different mount point; they had been shipped to the standby but not yet applied, because I had to cancel the log apply process.

    After clearing the space problem and turning log shipping and log apply back on (alter database recover managed standby...), the new logs are shipped, but my standby is a bunch of log files behind. I can see it in V$ARCHIVED_LOG on the standby, where the last seq# applied is 8083 in my case and the most current seq# is currently more than 8200.

    I still have all the archived logs on the 'other' mount point and can move them back to the correct archive directory as needed.

    Since the next file the apply process requires is seq# 8084, I moved this file into the /arch directory so the standby would 'see' it and apply it. No luck.

    Searching the alert log just after execution of 'alter database recover managed standby database disconnect;' I see the following:
    Media Recovery Waiting for thread 1 sequence 8084
    and also:
    Fetching gap sequence in thread 1, gap sequence 8084-8084
    and just below that:
    FAL[client]: Error fetching gap sequence, no FAL server specified
    I thought I read in the docs that FAL_SERVER and FAL_CLIENT are no longer needed. (?)

    I also found a script that lists information about the DG environment. The primary side looks fine, but the standby produces this output:
    NAME                           DISPLAY_VALUE
    ------------------------------ ------------------------------
    db_file_name_convert           /u01/oradata/APSMDMP1, /u01/or
                                   adata/APSMDMP1, /u02/oradata/A
                                   PSMDMP1, /u02/oradata/APSMDMP1
                                   , /u03/oradata/APSMDMP1, /u03/
                                   oradata/APSMDMP1, /u04/oradata
                                   /APSMDMP1, /u04/oradata/APSMDM
                                   P1
    
    db_name                        APSMDMP1
    db_unique_name                 APSMDMP1STBY
    dg_broker_config_file1         /u01/app/oracle/product/11203/
                                   db_2/dbs/dr1APSMDMP1STBY.dat
    
    dg_broker_config_file2         /u01/app/oracle/product/11203/
                                   db_2/dbs/dr2APSMDMP1STBY.dat
    
    dg_broker_start                FALSE
    fal_client
    fal_server
    local_listener
    log_archive_config             DG_CONFIG=(APSMDMP1,APSMDMP1ST
                                   BY)
    
    log_archive_dest_2             SERVICE=APSMDMP1 ASYNC VALID_F
                                   OR=(ONLINE_LOGFILES,PRIMARY_RO
                                   LE) DB_UNIQUE_NAME=APSMDMP1 MA
                                   X_CONNECTIONS=2
    
    log_archive_dest_state_2       ENABLE
    log_archive_max_processes      4
    log_file_name_convert          /u01/oradata/APSMDMP1, /u01/or
                                   adata/APSMDMP1
    
    remote_login_passwordfile      EXCLUSIVE
    standby_archive_dest           ?/dbs/arch
    standby_file_management        AUTO
    
    NAME       DB_UNIQUE_NAME                 PROTECTION_MODE DATABASE_R OPEN_MODE
    ---------- ------------------------------ --------------- ---------- --------------------
    APSMDMP1   APSMDMP1STBY                   MAXIMUM PERFORM PHYSICAL S READ ONLY WITH APPLY
                                              ANCE            TANDBY
    
    
       THREAD# MAX(SEQUENCE#)
    ---------- --------------
             1           8083
    
    PROCESS   STATUS          THREAD#  SEQUENCE#
    --------- ------------ ---------- ----------
    ARCH      CLOSING               1       7585
    ARCH      CLOSING               1       7588
    ARCH      CLOSING               1       6726
    ARCH      CLOSING               1       7581
    RFS       RECEIVING             1       8462
    MRP0      WAIT_FOR_GAP          1       8084
    RFS       IDLE                  0          0
    
    NAME                           VALUE      UNIT                           TIME_COMPUTED                  DATUM_TIME
    ------------------------------ ---------- ------------------------------ ------------------------------ ------------------------------
    transport lag                  +00 00:01: day(2) to second(0) interval   08/16/2012 13:24:04            08/16/2012 13:23:35
                                   59
    
    apply lag                      +01 23:08: day(2) to second(0) interval   08/16/2012 13:24:04            08/16/2012 13:23:35
                                   43
    
    apply finish time              +00 00:17: day(2) to second(3) interval   08/16/2012 13:24:04
                                   21.072
    
    estimated startup time         15         second                         08/16/2012 13:24:04
    
    NAME                                                            Size MB    Used MB
    ------------------------------------------------------------ ---------- ----------
                                                                          0          0
    I do not use a fast recovery area.

    What should I do to get the old logs to register and begin applying again?

    The alert log says something about registering the files for media recovery, but I don't know why I would need that since I have the log files themselves.

    Any help is appreciated.

    Also, I spent a significant amount of time reading through this forum and many articles on the web, but couldn't find exactly the solution I was looking for.

    Thank you very much!!

    -gary

    Try to register the log file:

    alter database register logfile '<full path of the sequence 8074 archive log>';

    and check: select message from v$dataguard_status;
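    The reply's suggestion, spelled out a little (a sketch: the file path is hypothetical, and the FAL_SERVER value assumes a TNS alias matching the SERVICE shown in log_archive_dest_2 in the output above):

    ```sql
    -- On the standby: give the gap-fetch mechanism a source, since the
    -- alert log complained "no FAL server specified".
    ALTER SYSTEM SET fal_server = 'APSMDMP1';

    -- Register the manually restored archive log so MRP can find it
    -- (hypothetical path/name for the gap sequence).
    ALTER DATABASE REGISTER LOGFILE '/arch/APSMDMP1_arch_1_8084.arc';

    -- Then watch progress:
    SELECT message FROM v$dataguard_status;
    ```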

  • Deferring log shipping to a physical standby when applying a patch

    I am applying a CPU (group of hotfixes) to a Data Guard primary and physical standby, following Doc ID 278641.1. I need to temporarily stop archivelog shipping from the primary to the standby. The documentation gives this example command:

    ALTER SYSTEM SET log_archive_dest_state_X = DEFER SCOPE = BOTH SID = '*';

    where X is the number of the destination used for shipping redo to the standby site.

    So, what is X? In online research, the answer is almost always log_archive_dest_state_2. Querying the primary, 2 is idle and shipping seems to be on 1, so I think I should use 1. Which should I use?

    On the primary:

    select status, dest_name, destination from v$archive_dest where status = 'VALID';

    VALID
    LOG_ARCHIVE_DEST_1
    prod1s

    VALID
    LOG_ARCHIVE_DEST_10
    USE_DB_RECOVERY_FILE_DEST

    show parameter log_archive_dest_1

    log_archive_dest_1 string SERVICE=prod1s LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod1s

    log_archive_dest_10 string LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY


    show parameter log_archive_dest_state

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    log_archive_dest_state_1             string      ENABLE
    log_archive_dest_state_10            string      ENABLE
    log_archive_dest_state_2             string      ENABLE
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      ENABLE
    log_archive_dest_state_5             string      ENABLE
    log_archive_dest_state_6             string      ENABLE
    log_archive_dest_state_7             string      ENABLE
    log_archive_dest_state_8             string      ENABLE
    log_archive_dest_state_9             string      ENABLE


    On the physical standby:

    select status, dest_name, destination from v$archive_dest where status = 'VALID';

    VALID
    LOG_ARCHIVE_DEST_2
    prod1

    VALID
    LOG_ARCHIVE_DEST_10
    USE_DB_RECOVERY_FILE_DEST

    VALID
    STANDBY_ARCHIVE_DEST
    USE_DB_RECOVERY_FILE_DEST

    show parameter log_archive_dest_

    log_archive_dest_10 string LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY
    log_archive_dest_2 string SERVICE=prod1 LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod1

    show parameter log_archive_dest_state

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    log_archive_dest_state_1             string      DEFER
    log_archive_dest_state_10            string      ENABLE
    log_archive_dest_state_2             string      ENABLE
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      ENABLE
    log_archive_dest_state_5             string      ENABLE
    log_archive_dest_state_6             string      ENABLE
    log_archive_dest_state_7             string      ENABLE
    log_archive_dest_state_8             string      ENABLE
    log_archive_dest_state_9             string      ENABLE

    DataGuard with physical standby
    DB: 10.2.0.4.0
    No RAC, ASM or DG broker

    Published by: JJ on May 2, 2012 10:05


    Hello;

    You want this

    ALTER SYSTEM SET log_archive_dest_state_2 = DEFER; (you need to do this for the log_archive_dest_state_n DEST_ID that applies to your system - looks like 2 for you as well)

    I'll post my full patch how-to here in a moment.

    0. Disable log shipping from the Primary
    1. Shutdown Standby
    2. Install patch on Standby software only
    3. Startup Standby in recovery mode (do NOT run any SQL at the standby)
    4. Shutdown Primary
    5. Install patch on Primary
    6. Run SQL on Primary
    7. Re-enable log shipping
    8. Monitor the redo apply from Primary to Standby --- this will also upgrade the Standby 
    
    See Oracle support article : How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration [ID 278641.1]
    

    Please consider closing some of your old questions.

    Best regards

    mseberg

    Published by: mseberg on May 2, 2012 09:16
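    To answer "what is X?" in general, the primary can be asked which destination ships to the standby; a sketch using V$ARCHIVE_DEST (DEST_ID is the X in log_archive_dest_state_X):

    ```sql
    -- On the primary: list the destination(s) whose target is the
    -- standby site.
    SELECT dest_id, dest_name, destination, status
    FROM   v$archive_dest
    WHERE  target = 'STANDBY';
    ```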

  • 11g - how do I enable sql logging

    Hi experts,
    I can't find how to enable SQL logging (in nqquery.log).

    How to configure?

    Hi user,

    You can do this with the Administration Tool. Open the Administration Tool, then open your repository in online mode. Go to Manage --> Identity. Double-click the user whose queries you want to be able to see, and increase the logging level to 2.

    I hope this helps.

    J. -.

  • Searching SQL log files for patterns like ORA-

    Version: 11.2.0.3


    In our Prod database, I ran about 15 SQL files provided by the apps team.

    After running them, the apps team asked if I had any errors. Because I had no time to browse each log file, I just did a grep for the pattern
     ORA- 
    in the execution log files.

    $ grep ORA- *.log
    <nothing returned> (which means no ORA- errors).
    Later, it was discovered that several triggers were in INVALID state and had compilation errors during execution of the script. That bounced back to me. When I checked the logs carefully, I could see errors like the one below in the log file:
    SQL > CREATE OR REPLACE TRIGGER CLS_NTFY_APT_TRG
      2  AFTER INSERT ON CLS_NTFY
      3  FOR EACH ROW
      4  DECLARE message VARCHAR2(100);
      5  BEGIN
      6    IF (:new.jstat=1) THEN
      7        message:='JOB '||:new.mcode||'/'||:new.ajbnr||'/'||:new.jobnr||' inserted';
      8        DBMS_ALERT.SIGNAL('FORKMAN',message);
      9       END IF;
     10  END;
     11  /
    
    Warning: Trigger created with compilation errors.
    The apps team is annoyed with me because they had to raise another CR to get these triggers compiled.

    Question 1.
    What error patterns do you usually grep for after running SQL files? Today I learned that I should also be looking for the 'Warning' string in the log files. So I have added the following patterns to grep for in the future:
    ORA-
    Warning
    If you guys look for any other error patterns, let me know.


    Question 2.
    Any idea why I didn't get an ORA- error for the above trigger compilation error?

    Kavanagh wrote:
    Question 2.
    Any idea why I didn't get an ORA- error for the above trigger compilation error?

    Because that is the way SQL*Plus reports that an error has occurred... it isn't a real message from the database itself. If you need to see the error, you need to do:

    SHO ERR
    

    afterwards, to show the error that actually happened.
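    In SQL*Plus terms, the reply's advice plus a belt-and-braces option (a sketch; note that "created with compilation errors" is only a warning, so WHENEVER SQLERROR by itself will not catch it):

    ```sql
    -- After each CREATE OR REPLACE ... statement in the script, dump any
    -- compilation errors into the log where a grep will find them:
    SHOW ERRORS

    -- Abort the whole run on genuine SQL errors, with a nonzero exit code:
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    ```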

  • Query regarding SQL logs

    I'm currently working on database forensics. If I make some manual changes in the database through some SQL queries, is there an entry in the logs to prove it?

    Can anyone suggest some tools that can give the desired result (i.e., pull a log of all manual queries)?

    PS I am using SQL Server 2005

    Any help would be appreciated

    Thank you.

    Hello

    Your question is beyond the scope of this community.

    Please repost your question in the SQL Server TechNet Forums.

    https://social.technet.Microsoft.com/forums/SQLServer/en-us/home?category=SQLServer

    Cheers.

  • Fake alert under XP

    I get an error message saying that my computer is in danger of being taken over by a terrible virus and that I have to click on it. I try to click the red X to get rid of it, but it won't go away, so I have to log off and log back on, and this happens several times a day. When it happens I can't even close the programs I'm in or click away from the "warning", so I have to shut down my computer by logging off without exiting the programs I have up. How can I get these to stop? And if I constantly log off without closing programs, will that harm my computer? When I log back on it tells me that I logged off without closing programs (of course, I know that), but after doing this several times a day, every day, will it hurt my computer? Moreover, when I receive these alerts, the only other thing on the alert is a yellow triangle with a black exclamation point in the middle of it. Thanks for your help.

    Well, the short answer is that your machine has been taken over by a fake AV program, and this will continue unless you remove the virus. The first thing you need to do is download Malwarebytes, install it, and update the program. Make sure that you run a full system scan. Once the scan is completed and the results appear, remove everything found and you should be fine. Also make sure you download Microsoft Security Essentials to stay protected.

    Let me know if you have any problems.

    Hope this helps,

    JB

  • False alerts in TMS after upgrading to 14.6.2

    After upgrading TMS from 14.3 to 14.6.2, we receive frequent false alarms that some of the basic infrastructure (IPGW, ISDN GW, supervisor blade) is down.

    Within 1-2 minutes we get the "up" alert again, but when you check, the device status is not really down. It is perhaps because TMS does not get SNMP answers from these devices, and there seems to be a bug. I would like to know what can be checked on TMS or the devices to avoid this problem.

    Just to note that this did not happen with the old version of TMS.

    We have also seen this frequently on TMS 14.6, especially with infrastructure devices (TCS, VCS, ISDN gateway, etc.) and sometimes also with endpoints.

    After upgrading to TMS 15.0.1 we do not see this issue any more.

    Wayne
    --
    Remember to rate responses and mark your question as answered as appropriate.

  • SQL log file grows to its limit, but it can be shrunk, and used space is listed as 1%

    Hello.

    3 ESXi 5.1.0 1463097 hosts in a cluster.

    MS SQL Server 2008 R2 (SQL Server 10.50.2500) on a standard MS Windows 2008 R2 Server VM.

    Instance name: VIM_SQLEXP

    Database name: VIM_VCDB

    Recovery mode: Simple

    DB autogrowth 1 MB, unrestricted. Current database size: 3887 MB.

    Log growth 10%, limited to 4 GB maximum size, BUT only 2 MB after this morning's shrink.

    Every morning the log file has grown nearly full, but the VMware VirtualCenter Server service still runs and I can start the vSphere Client. I know that if I do not shrink the log, the service will stop later today or tomorrow.

    I studied and followed

    Determining where growth is occurring in the VMware vCenter Server database (1028356)

    How the SQL Server recovery model affects the transaction log space required (1001046)

    Troubleshooting transaction logs on a Microsoft SQL database server (1003980)

    but I do not see any table growing by much. I see that VPX_EVENT has grown about 28 MB overnight, but nothing else seems alarming.

    Any ideas what it could be?

    Best regards

    Peter Lageri Rasch

    Denmark

    Hello

    Some time ago I had a similar situation as you.

    I recommend you check this carefully; it might be useful:

    Troubleshooting when the vCenter DB transaction log is full (MS SQL Server 2005 Express)

    Best regards

    Pablo
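    Before shrinking again, it may also help to see why the log is not truncating. A diagnostic sketch for the setup described above (database name taken from the post):

    ```sql
    -- How full is each transaction log right now?
    DBCC SQLPERF(LOGSPACE);

    -- In SIMPLE recovery the log should truncate at checkpoints; if it
    -- does not, this column names what is holding it open
    -- (e.g. ACTIVE_TRANSACTION, REPLICATION).
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = 'VIM_VCDB';
    ```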

  • Compressed log shipping

    Hello

    1. Can someone please explain what compressed log shipping is in Oracle?
    2. What are the benefits of using it over the existing functionality?
    3. Does it require a license to use? If yes, how would it be calculated?

    Thanks in advance

    See http://www.oracle.com/technetwork/articles/sql/11g-dataguard-083323.html
    where Compression is identified as a "new feature" available in 11g.

    Hemant K Collette
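    For the record, the feature the linked article describes is enabled per redo transport destination; a sketch (the service and unique name are hypothetical, this is an 11g feature, and licensing should be confirmed with Oracle):

    ```sql
    -- Hypothetical standby service: turn on redo transport compression
    -- for one archive destination.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=stby ASYNC COMPRESSION=ENABLE DB_UNIQUE_NAME=stby';
    ```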

  • SQL logging in the DBAdapter

    Hi guys,

    I want to dump all DBAdapter SQL to a file.

    Does anyone know how to turn on SQL logging for the DBAdapter?

    Thank you

    Kevin

    Using 11g or 10g?

    For 11g follow this:
    http://download.Oracle.com/docs/CD/E12839_01/integration.1111/e10231/life_cycle.htm
    Figure 2-2, the Log Configuration Page

  • vROps log shipping to Log Insight using the LI agent

    I am trying to use the solution for shipping vROps logs to Log Insight (Cloud Management Marketplace | Solution Exchange) and I'm having a few problems, as described in this post on the vROps forum: vROps configuration automatically changes Log Insight (in a bad way)

    There is someone to help these solutions togheter and had the same problem (vrops changing the configuration of li automatically)?

    I had to exclude some logging, because since my vROps installation I was getting about 20 million events per day just from vROps, almost all INFO/DEBUG.

    Does anyone have a guideline on what configuration can be done?

    There seems to be only a little documentation about log configuration... I've seen some articles that talk about the log4j files, but it seems they reset themselves to default if I change the configuration through the user interface as well.

    Regards

    Francesco

    Confirmed - sorry about the back and forth on this. My testing was not done on 6.0.1. So it seems that you can't use the content pack and the remote syslog user interface with 6.0.1. Thank you for reporting it to us - I will get it tracked with the vR Ops team. Have you opened a support request for this? If not, could you, and ensure that it is assigned to vR Ops?

  • Capacity/IO false alerts?

    Hi all

    I recently started to manage vCOPS (currently 5.6, 5.7 soon) for a client. I have a lot of experience with the install, but not so much with day-to-day maintenance and usage. I have a problem that I can't figure out, and I've not found much information on this subject. This is probably a user error, that is, me, haha. Here's the gist of it:

    Datastores are reporting BOTH storage capacity and I/O alerts; however, the datastores are not remotely close to capacity (as low as 6% of disk space used) or disk I/O limits (as low as 1%). I'm confused as to what is causing the alerts because there aren't any problems that I can find. I have attached a few screenshots to show what I see. If there are any vCOPS gurus out there, please let me know what I'm missing.

    Thank you

    ~ Brandishes

    There is a recently discovered bug in 5.7.x which may be related to your situation: "0 days remaining" on datastores with very little use.
    A fix should be available through support for affected customers, probably soon.

  • SQL logging for transport to another network

    Problem: we have an Oracle database which receives all transactions from a BMC Remedy application on the same network. We need to capture the transaction log, place the transactions in order in a flat file of some sort, transfer them to another network, and then apply them to the replica DB in sequential order.

    I went through the LogMiner docs and other things, but I don't really see a solution. Data Guard will not help, because we cannot maintain a connection to transport logs to the standby.

    Our environment is Oracle 11 g R1.

    Any suggestions would be much appreciated.

    Thank you
    Justin B.

    Taking a backup and sneakernetting it to the destination network must only be done once, during initial setup. After that, you can just sneakernet the archived log files, put the standby into recovery mode, and apply whatever logs have arrived. Any solution will have the same first step, which is to restore some sort of backup as the starting point.

    CDC requires a subscriber, but it need not be a different database. This is why I suggested it would be a separate program you have written that receives the changes and writes a flat file (XML or otherwise). You would then need an additional program on the destination network that could read your custom XML file and generate the appropriate DML after you sneakernet the XML file over. Sneakernetting an XML file is likely to be more difficult than simply sneakernetting archived logs, because the XML files will be much larger. Moreover, you must also write and maintain the programs that read and write the XML. I'm hard-pressed to see how this would be the 'best' way for any reasonable definition of "best".

    Justin
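    The archived-log loop Justin describes could look roughly like this on the destination side (a sketch: paths are hypothetical, and the replica is assumed to be mounted as a standby built from the initial backup):

    ```sql
    -- After copying a batch of archived logs across: register them so
    -- recovery can find them, then apply.
    ALTER DATABASE REGISTER LOGFILE '/incoming/arch_1_1234.arc';

    -- Apply all available registered logs, then stop until the next batch:
    RECOVER AUTOMATIC STANDBY DATABASE;
    ```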
