When to create the Data Guard standby?

We intend to migrate a database to RAC with Data Guard.

When do you recommend implementing Data Guard? Before the export/import (from the old server to the new RAC under Data Guard) or after?

You should first configure your new RAC primary and then configure the standby for that RAC (after the imp into the primary).
This way you avoid the massive redo shipping to the standby that comes with the imp, so this sequence of steps will be faster.

Disadvantage: if you take the RAC primary into production immediately after the import, there will be a short period of time without DR protection this way.
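As a sketch of that sequence (the connection names `prod` and `stby` are placeholders, and this assumes 11g RMAN active duplication rather than a backup-based restore):

```
RMAN> CONNECT TARGET sys@prod          # the new RAC primary, after the import
RMAN> CONNECT AUXILIARY sys@stby       # standby instance started NOMOUNT
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY
        FROM ACTIVE DATABASE
        NOFILENAMECHECK;
```

Once the duplicate finishes and redo transport is configured, the protection gap mentioned above closes.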

Kind regards
Uwe Hesse

"Don't believe it, try it!"
http://uhesse.com

Tags: Database

Similar Questions

  • Can't see the logical volume when creating data store - vSphere 4.1

    Hi guys, got another problem with ESXi 4.1 on a HP Proliant ML350 G6.

    ESXi 4.1 Update 1 has been installed successfully on the server on an internal USB stick. Our next step is to create a datastore through the vSphere client.

    The server has five 500 GB SAS drives configured in a RAID 5 array, on an HP Smart Array P410i RAID controller. I can see the five physical disks and a logical volume of about 2.3 TB in the RAID utility. However, when we access the hypervisor via the vSphere client and try to create a datastore on a local LUN/storage, the display then shows no logical volumes or drive bays on which to create a datastore.

    We are unsure why this is, as the ESXi 4.1 installer could see this logical volume when I was installing the hypervisor, but it does not appear in vSphere for use as a datastore.

    Can anyone help?

    Thank you very much.

    Each extent must be less than 2 TB - 512 bytes.

  • Suppress the HBA rescan when creating datastores?

    I created a script to create multiple datastores (100+), but noticed that after each one is created, it triggers a rescan of all the HBAs the LUN is presented to.

    That is: can this rescan be held off until all datastores have been created, and then just scan once at the end?

    Example line of code for the datastore creation:

    New-Datastore -Server vcenter -VMHost esx.domain.com -Name san-lun-01 -Path naa.xxxxxxxxxxxxxxxxxxxxxxxx -Vmfs -BlockSizeMB 4

    Thanks a bunch for all help!

    The rescan is actually triggered by vCenter. You could follow Duncan's post to disable it manually, or you can add a line to your script that sets the parameter disabling the rescan and then changes it back after...

    http://www.yellow-bricks.com/2009/08/04/automatic-rescan-of-your-HBAs/
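    Putting the two ideas together, a batch approach is also possible: create everything first and rescan once at the end. A sketch (the cmdlets are standard PowerCLI, but the LUN list here is made up):

    ```powershell
    # Hypothetical batch: create all datastores, then rescan the HBAs once.
    $vmhost = Get-VMHost esx.domain.com
    $luns = @{
        'san-lun-01' = 'naa.111111111111111111111111'   # placeholder LUN ids
        'san-lun-02' = 'naa.222222222222222222222222'
    }
    foreach ($name in $luns.Keys) {
        New-Datastore -Server vcenter -VMHost $vmhost -Name $name `
                      -Path $luns[$name] -Vmfs -BlockSizeMB 4
    }
    # A single rescan covering all HBAs after the whole batch:
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba
    ```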

  • Used wrong tablespace name in ALTER TABLESPACE when creating a datafile

    Hello

    While creating a datafile, I used the wrong tablespace name in ALTER TABLESPACE.
    For example,
    I used:
    ALTER TABLESPACE APPS_TS_SEED ADD DATAFILE '/d01/oracle/stkmnfun/db/apps_st/data/a_txn_data05.dbf' SIZE 1024M;

    Correct one:
    ALTER TABLESPACE APPS_TS_TX_DATA ADD DATAFILE '/d01/oracle/stkmnfun/db/apps_st/data/a_txn_data05.dbf' SIZE 1024M;

    I have since created another datafile, a_txn_data06, in the correct tablespace...
    But does anyone know how this can be corrected?


    Thanks in advance,

    As long as nothing in this datafile has been affected at all, you can drop it with
    alter tablespace ... drop datafile ...

    Kind regards
    Uwe Hesse

    http://uhesse.WordPress.com

  • Create Data Guard RAC standby

    Hi all

    I tried to create a RAC standby for a RAC primary and got somewhat confused after reading advice online.

    According to the Oracle doc, for a RAC environment:

    1. On the recovery instance, set the LOCATION attribute of the LOG_ARCHIVE_DEST_1 initialization parameter to archive locally, because cross-instance archival is not necessary.
    2. On the receiving instance, set the SERVICE attribute of the LOG_ARCHIVE_DEST_1 initialization parameter to archive to the recovery instance.

    It seems that one instance can set LOG_ARCHIVE_DEST_1 as local, and the others would point to the location of this recovery instance by setting the SERVICE attribute.

    But I read some blogs on the standby creation steps, and they did not mention this setting. I also tried adding the standby database to a dg broker configuration without setting LOG_ARCHIVE_DEST_1 and enabling the configuration, and it works fine. So I'm confused about which is correct.

    In my case, I have a RAC primary and another RAC prepared to serve as the standby. Both are version 11gR2. I want to create a DG broker configuration to manage the standby db. Should I change LOG_ARCHIVE_DEST_1 on the standby instances?

    Liz

    What the documentation (and you are referencing version 10.2) in fact says is that the recovery instance must use LOCATION to specify where the archivelogs are written. Assuming the target location is not a shared file system, the receiving instance uses SERVICE to specify the location of the target. If you have a clustered file system (not ACFS, which wasn't available with 10g ASM), you might have a shared location.

    See the updated documentation for 11.2: http://docs.oracle.com/cd/E11882_01/server.112/e41134/rac_support.htm#SBYDB4962

    Citing it (my emphasis):

    Configure standby redo log archiving on each standby database instance. The standby redo logs must be archived to a location that is accessible to all standby database instances, and each standby database instance must be configured to archive the standby redo logs to that same location. See Section 6.2.3.2 for more information about configuring standby redo log archiving.
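    On 11.2 that could translate into something like the following sketch (the ASM disk group +FRA and the DB_UNIQUE_NAME 'stby' are placeholders; a shared archive location is assumed):

    ```sql
    -- Run on the standby: every instance archives its standby redo logs
    -- to the same shared location (names here are placeholders).
    ALTER SYSTEM SET log_archive_dest_1 =
      'LOCATION=+FRA VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE) DB_UNIQUE_NAME=stby'
      SID='*' SCOPE=BOTH;
    ```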

    Hemant K Collette

  • Will Oracle ever create a Data Guard certification?

    Will Oracle ever create a Data Guard certification?

    If so, it would be more than just "HA" - high availability.

    Or would it only be about Active Data Guard?

    Thank you

    We will have a Data Guard Admin certification soon. Unfortunately, I cannot be more precise than that about the date.

    https://blogs.Oracle.com/certification/entry/0856_31

    Kind regards
    Brandye Barrington

    The Forum Moderator

  • Does anyone know the largest space supported for a VMFS datastore, when creating a new VMFS datastore from a 3 TB LUN mapped from FC storage?

    I mapped a 3 TB LUN from FC storage.

    When creating a VMFS datastore on this LUN, the available space shown by default is 1024 GB, not 3 TB, so I guess there may be some limitation.

    Does anyone know? Thank you very much.

    Hello

    The limit is 2 TB - 512 bytes.

    That 512 bytes is VERY important to remember.

    You will see only 1 TB, as the size rolls over once you exceed 2 TB.

    As Anton suggested, you can safely use two 1.5 TB LUNs. However, 500 GB LUNs or so may in fact serve you better. The LARGEST you can make is not always the most efficient allocation of resources.
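    The roll-over described above is easy to see with a little arithmetic (a sketch of the VMFS-3 numbers, not VMware code):

    ```python
    TB = 1024 ** 4                    # one binary terabyte, in bytes
    max_vmfs_extent = 2 * TB - 512    # VMFS-3 extent limit: 2 TB minus 512 bytes

    lun = 3 * TB                      # the 3 TB LUN from the question
    # Sizes past 2 TB wrap around, so a 3 TB LUN shows up as about 1 TB:
    apparent = lun % (2 * TB)
    print(apparent // TB)             # -> 1
    ```

    which matches the ~1024 GB the questioner saw for a 3 TB LUN.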

    Best regards

    Edward L. Haletky, VMware communities user moderator, VMware vExpert 2009, http://www.virtualizationpractice.com
    Now available: "VMware vSphere(TM) and Virtual Infrastructure Security: Securing the Virtual Environment" http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security
    Also available: "VMware ESX Server in the Enterprise" http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise
    SearchVMware Pro http://www.astroarch.com/wiki/index.php/Blog_Roll | Blue Gears http://www.astroarch.com/blog | Top Virtualization Security Links http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links | Virtualization Security Round Table Podcast http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast

  • Data Guard and the APEX Listener

    Hi all

    I can't seem to figure out the configuration parameters for the APEX Listener when connecting to a Data Guard database.

    We have a lot of database servers (without Data Guard) and we use the command line wizard to generate the configuration file. It's straightforward and
    easy. We enter the server, port, and service_name and it works.

    But for a Data Guard database there are two servers, and I can't seem to set it up the right way. After doing some research I came across the parameter apex.db.customURL, which should solve the problem.

    I removed the entries for server, port, and service_name and put this key in.

    The result was connection errors due to some incorrect port settings.

    SEVERE: The pool named: apex is not properly configured, error: IO error: format invalid number for the port number

    oracle.dbtools.common.jdbc.ConnectionPoolException: the pool named: apex is not properly configured, error: IO error: format invalid number for the port number

    at oracle.dbtools.common.jdbc.ConnectionPoolException.badConfiguration(ConnectionPoolException.java:65)

    What am I missing?

    Thank you
    Michael

    (Here's the rest of our configuration:)

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
    <properties>
    <comment>saved on Mon Oct 19 18:28:41 CEST 2015</comment>
    <entry key="debug.printDebugToScreen">false</entry>
    <entry key="security.disableDefaultExclusionList">false</entry>
    <entry key="db.password">@055EA3CC68C35F70CF34A203A8EE1A55D411997069F6AE9053B3D1F0B951D84E0E</entry>
    <entry key="cache.maxEntries">500</entry>
    <entry key="error.maxEntries">50</entry>
    <entry key="security.maxEntries">2000</entry>
    <entry key="cache.directory">/tmp/apex/cache</entry>
    <entry key="jdbc.DriverType">thin</entry>
    <entry key="log.maxEntries">50</entry>
    <entry key="jdbc.MaxConnectionReuseCount">1000</entry>
    <entry key="log.logging">false</entry>
    <entry key="jdbc.InitialLimit">3</entry>
    <entry key="jdbc.MaxLimit">10</entry>
    <entry key="cache.monitorInterval">60</entry>
    <entry key="cache.expiration">7</entry>
    <entry key="jdbc.statementTimeout">900</entry>
    <entry key="jdbc.MaxStatementsLimit">10</entry>
    <entry key="misc.defaultPage">apex</entry>
    <entry key="misc.compress"/>
    <entry key="jdbc.MinLimit">1</entry>
    <entry key="cache.type">lru</entry>
    <entry key="cache.caching">false</entry>
    <entry key="error.keepErrorMessages">true</entry>
    <entry key="cache.procedureNameList"/>
    <entry key="cache.duration">days</entry>
    <entry key="jdbc.InactivityTimeout">1800</entry>
    <entry key="debug.debugger">false</entry>
    <entry key="db.customURL">jdbc:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))(LOAD_BALANCE=off)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))</entry>
    </properties>



    Hi Michael Weinberger,

    Michael Weinberger wrote:

    I can't seem to figure out the configuration parameters for the APEX Listener when connecting to a Data Guard database.

    We have a lot of database servers (without Data Guard) and we use the command line wizard to generate the configuration file. It's straightforward and
    easy. We enter the server, port, and service_name and it works.

    But for a Data Guard database there are two servers, and I can't seem to set it up the right way. After doing some research I came across the parameter apex.db.customURL, which should solve the problem.

    I removed the entries for server, port, and service_name and put this key in.

    Keep the entries for server name, port and service. There is no need to delete them.

    The result was connection errors due to some incorrect port settings.

    SEVERE: The pool named: apex is not properly configured, error: IO error: format invalid number for the port number

    oracle.dbtools.common.jdbc.ConnectionPoolException: the pool named: apex is not properly configured, error: IO error: format invalid number for the port number

    at oracle.dbtools.common.jdbc.ConnectionPoolException.badConfiguration(ConnectionPoolException.java:65)

    What am I missing?

    You must create two entries in the "defaults.xml" configuration file of your ORDS (formerly APEX Listener) installation.

    One for db.connectionType and a second for db.customURL, for example:

    <entry key="db.connectionType">customurl</entry>
    <entry key="db.customURL">jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
    (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))
    (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))
    (LOAD_BALANCE=off)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))</entry>

    Reference: http://docs.oracle.com/cd/E56351_01/doc.30/e56293/config_file.htm#AELIG7204

    NOTE: After you change the configuration file, don't forget to restart standalone ORDS, or the Java EE application server if ORDS is deployed on one.
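    Since the error complains about the port's number format, it can also help to check every HOST/PORT pair in the URL before restarting. A hypothetical helper (not part of ORDS) might look like:

    ```python
    import re

    def check_ports(jdbc_url):
        """Return (host, port) pairs from a JDBC thin URL; raise ValueError
        if any PORT value is not a plain integer."""
        pairs = []
        for host, port in re.findall(
                r'\(HOST\s*=\s*([^)]+)\)\s*\(PORT\s*=\s*([^)]+)\)',
                jdbc_url, re.IGNORECASE):
            if not port.strip().isdigit():
                raise ValueError('invalid port: %r' % port)
            pairs.append((host.strip(), int(port)))
        return pairs

    url = ('jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST='
           '(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))'
           '(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))'
           '(LOAD_BALANCE=off)(FAILOVER=on))'
           '(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))')
    print(check_ports(url))   # -> [('DB-ENDUR', 1520), ('DB-ENDURK', 1521)]
    ```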

    Also check whether your JDBC connection URL works on its own; if there are any issues, you can turn on debugging for ORDS:

    Reference:

    As directed by Tony, you should post ORDS related questions to the appropriate forum. Reference: ORDS, SODA & JSON in the database

    You can also move this thread to the ORDS forum.

    Kind regards

    Kiran

  • TAF TNS entry for active data guard

    Hi all,

    I need your advice on configuring the client-side TNS entry for transparent failover to the standby DB in an Active Data Guard configuration.

    Normally, I create a service manually and configure that service to start or stop according to the database role change, and then have the VIPs of the two sites in the TNS entry (example below).

    TEST =
      (DESCRIPTION =
        (FAILOVER = ON)
        (ENABLE = BROKEN)
        (LOAD_BALANCE = TRUE)
        (ADDRESS = (PROTOCOL = TCP)(HOST = VIP1-SITE1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = VIP2-SITE1)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = VIP1-SITE2)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = VIP2-SITE2)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = TEST)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 20)
            (DELAY = 2)
          )
        )
      )

    I learned this method of-> http://uhesse.com/2009/08/19/connect-time-failover-transparent-application-failover-for-data-guard/

    Is there a better way to do this? Can we achieve it with a TNS entry only?

    I think I found the answer. According to the Oracle client failover best practices document, the service can be configured to start automatically on the standby server when the role of the db changes. This can be achieved by creating a service with srvctl with the option '-l PRIMARY' on both sites.

    For example:

    Primary cluster: srvctl add service -d austin -s oltpworkload -r ssa1,ssa2,ssa3,ssa4 -l PRIMARY -q TRUE -e SESSION -m BASIC -w 10 -z 150

    Standby cluster: srvctl add service -d houston -s oltpworkload -r ssb1,ssb2,ssb3,ssb4 -l PRIMARY -q TRUE -e SESSION -m BASIC -w 10 -z 150

    Documentary link--> http://www.oracle.com/au/products/database/maa-wp-11gr2-client-failover-173305.pdf

    Also, the TNS entry must be created with two descriptions, one for the primary and one for the standby:

    TNS_DG =
      (DESCRIPTION_LIST =
        (LOAD_BALANCE = OFF)
        (FAILOVER = ON)
        (ENABLE = BROKEN)
        (DESCRIPTION =
          (ADDRESS_LIST =
            (LOAD_BALANCE = ON)
            (ADDRESS = (PROTOCOL = TCP)(HOST = vip1-primarydb)(PORT = 1521))
            (ADDRESS = (PROTOCOL = TCP)(HOST = vip2-primarydb)(PORT = 1521))
          )
          (CONNECT_DATA =
            (SERVER = DEDICATED)
            (SERVICE_NAME = orcl)
            (FAILOVER_MODE =
              (TYPE = SELECT)
              (METHOD = BASIC)
              (RETRIES = 20)
              (DELAY = 2)
            )
          )
        )
        (DESCRIPTION =
          (ADDRESS_LIST =
            (LOAD_BALANCE = ON)
            (ADDRESS = (PROTOCOL = TCP)(HOST = vip1-standbydb)(PORT = 1521))
            (ADDRESS = (PROTOCOL = TCP)(HOST = vip2-standbydb)(PORT = 1521))
          )
          (CONNECT_DATA =
            (SERVER = DEDICATED)
            (SERVICE_NAME = orcl)
            (FAILOVER_MODE =
              (TYPE = SELECT)
              (METHOD = BASIC)
              (RETRIES = 20)
              (DELAY = 2)
            )
          )
        )
      )
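    With entries nested this deeply, it's easy to drop a parenthesis. A small, hypothetical checker (not an Oracle tool) can confirm the entry is balanced and list the hosts in the order a client will try them:

    ```python
    import re

    def tns_hosts(entry):
        """Check parenthesis balance of a tnsnames.ora entry and return its
        HOST values in order of appearance."""
        depth = 0
        for ch in entry:
            depth += (ch == '(') - (ch == ')')
            if depth < 0:
                raise ValueError('unbalanced parentheses')
        if depth != 0:
            raise ValueError('unbalanced parentheses')
        return re.findall(r'\(\s*HOST\s*=\s*([^)\s]+)\s*\)', entry, re.IGNORECASE)

    entry = """(DESCRIPTION_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)
     (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=vip1-primarydb)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=vip2-primarydb)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=orcl)))
     (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=vip1-standbydb)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=vip2-standbydb)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=orcl))))"""
    print(tns_hosts(entry))
    # -> ['vip1-primarydb', 'vip2-primarydb', 'vip1-standbydb', 'vip2-standbydb']
    ```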

  • Database is configured for Data Guard

    I'm running a UTF8 conversion on a development database that was cloned from a Data Guard standby. There is a warning in the Migration Status: "Database is configured for Data Guard". What does the DMU look at to determine this? The database is open in read-write mode and behaves like a primary database (I ran the DMU scan and ran updates to fix invalid representations). I would like to know which settings I need to update.

    Does this prevent me from converting tables using CTAS? When I try to select this option for all tables I get the message: "The DMU does not support the conversion method 'Copy data using CREATE TABLE AS SELECT' for tables that are involved in an Oracle Streams capture or apply process." Is another conversion method available for the table?

    Thank you

    Ben

    The DMU checks whether the DG_BROKER_START parameter is set to TRUE.

    The problem with CTAS is independent of Data Guard. The message applies to tables which:

    - are an asynchronous Streams capture source, or

    - have update conflict handlers, or

    - have DML handlers, or

    - have conflict resolution parameters

    Such tables are considered to be configured for Oracle Streams and are not supported by the CTAS conversion method. This is because the CTAS method creates a converted copy of the table and drops the original. The DMU is not able to move the Streams configuration information from the old table to the new one.

    Thank you

    Sergiusz

  • Multiple primary and physical standby databases in one Data Guard Broker configuration

    Hello

    Is it possible to add two or more primary databases and their physical standby databases to one Data Guard broker configuration?

    I have 1 primary database and two physical standby databases, that is:

    (1) primary, i.e. pri (primary database)

    (2) secondary, i.e. sec (physical standby)

    (3) secondary2, i.e. sec2 (physical standby)

    I am practicing for disaster recovery. My scenario: my pri and sec machines are at headquarters; if pri crashes, it switches over to sec, which works very well, and my sec2 is in another district office. Suppose both of my headquarters machines, pri and sec, crash; then I want to make my sec2 machine the primary.

    I have two separate broker machines, one at headquarters and one at the district office.

    Using fast-start failover with the Data Guard Broker, on the headquarters broker machine I have configured pri and sec, but on the district office broker I am not able to configure pri and sec2.

    Can a configuration with several primary databases and standby databases be done?

    Has anyone done this before, or performed a recovery after the loss of a site...

    Need help or suggestions

    thanx

    No.... It is not possible. When you use the DG broker, the first thing you do in the DGMGRL utility is issue CREATE CONFIGURATION. You can see in the doc for this command that this is where you define the PRIMARY DATABASE.

    The ADD DATABASE command adds a new standby database to the broker. You cannot add another primary.

    A broker configuration is explicitly for one primary and all of the standby databases it supports. If you have another primary, you create a separate DG broker configuration.
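    As a sketch in DGMGRL (using the poster's names 'pri', 'sec', 'sec2'; the connect identifiers are assumed to exist in tnsnames.ora):

    ```sql
    DGMGRL> CREATE CONFIGURATION 'dg_hq' AS
              PRIMARY DATABASE IS 'pri' CONNECT IDENTIFIER IS pri;
    DGMGRL> ADD DATABASE 'sec'  AS CONNECT IDENTIFIER IS sec  MAINTAINED AS PHYSICAL;
    DGMGRL> ADD DATABASE 'sec2' AS CONNECT IDENTIFIER IS sec2 MAINTAINED AS PHYSICAL;
    DGMGRL> ENABLE CONFIGURATION;
    ```

    Both standbys belong to the one configuration owned by the single primary; a second primary would need its own CREATE CONFIGURATION.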

    Cheers,
    Brian

  • Problem with a DB link between Active Data Guard and a reporting application database

    My database version is 11.2.0.2.0 and the OS is Oracle Solaris 10 9/10.
    I am facing a problem with my Active Data Guard database, which is used for reporting purposes. The Active Data Guard information is as below.

    SQL> select name, database_role, open_mode from v$database;

    NAME      DATABASE_ROLE    OPEN_MODE
    --------- ---------------- --------------------
    ORCL      PHYSICAL STANDBY READ ONLY WITH APPLY

    The problem detail is below
    ------------------------------
    I have created a db link (name: DATADB_LINK) between the Active Data Guard database and the reporting application database:
    SQL> create database link DATADB_LINK connect to HR identified by HR using 'DRFUNPD';
    Database link created.

    But when I run a query over the db link from my reporting application database, I get the error below.

    ORA-01555: snapshot too old: rollback segment number 10 with name '_SYSSMU10_4261549777$' too small
    ORA-02063: preceding line from DATADB_LINK

    Then I looked in the alert log of the Active Data Guard database and found the error below:

    ORA-01555 caused by the following SQL statement (SQL ID: 11yj3pucjguc8, Query Duration=1 sec, SCN: 0x0000.07c708c3): SELECT "A2"."BUSINESS_TRANSACTION_REFERENCE", "A2"."BUSINESS_TRANSACTION_CODE", MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'feature' THEN "A1"."TRANS_DATA_VALUE" END), MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'otherFeature' THEN "A1"."TRANS_DATA_VALUE" END)

    But the interesting point: if I run the report query directly on the Active Data Guard database, I never get the error.

    So is it a problem with the DB link between Active Data Guard and other databases?

    Fazlul Kabir Mahfuz wrote:

    So is it a problem with the DB link between Active Data Guard and other databases?

    Check this note, which applies to your environment:

    * ORA-01555 on Active Data Guard Standby Database [ID 1273808.1] *

    also

    http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:8908307196113

  • Warning when adding a tablespace in a Data Guard env.

    Hello

    Dear all,

    I have a Data Guard standby setup with the following parameters:

    standby_file_management = AUTO -> primary
    standby_file_management = AUTO -> standby

    When I create a new tablespace using OEM on the primary, I receive the following warning in the standby database alert log.
    Please help me: is this warning critical, and will it cause problems in the future?
    I am following the Oracle book (Oracle Data Guard Concepts and Administration), part No. B14239-05, to add the tablespace.

    #####################################
    WARNING: File being created with the same name as in Primary
    Existing file may be overwritten
    Recovery created file /d01/silprod/TEST_PRIM/db/apps_st/data/FAZI.dbf
    Successfully added datafile 42 to media recovery
    Datafile #42: '/d01/silprod/TEST_PRIM/db/apps_st/data/FAZI.dbf'
    Thu Nov 26 10:06 2009
    RFS[8]: Possible network disconnect with primary database
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[11]: Assigned to RFS process 11301
    RFS[11]: Identified database type as physical standby
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[11]: Successfully opened standby log 3: '/d01/silprod/TEST_PRIM/db/apps_st/data/stdlog03a.dbf'
    Thu Nov 26 10:07:18 2009
    RFS[10]: Successfully opened standby log 4: '/d01/silprod/TEST_PRIM/db/apps_st/data/stdlog04a.dbf'
    Thu Nov 26 10:08:06 2009
    Media Recovery Waiting for thread 1 sequence 27 (in transit)
    #####################################

    Primary:
    ----------------
    SQL> select name from v$datafile where name like '%FAZI%';

    /d01/silprod/TEST_PRIM/db/apps_st/data/FAZI.dbf

    Standby:
    -----------------
    SQL> select name from v$datafile where name like '%FAZI%';

    /d01/silprod/TEST_PRIM/db/apps_st/data/FAZI.dbf


    Thank you.

    Everything is OK, since your standby_file_management is AUTO.

    Just ignore the warning.
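    To double-check the setting on both sides (a sketch; the output shown is what you would expect when it is AUTO):

    ```sql
    SQL> show parameter standby_file_management

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- -----
    standby_file_management              string      AUTO
    ```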

  • Failover monitoring - Data Guard Broker

    Hello

    I'm working on an Oracle 10.2.0.4 database on Solaris 10. It is a 2-node RAC database that has a physical standby configured.

    I want to monitor failover (which will be triggered by the Data Guard Broker) and send an e-mail to myself. I think I can detect a failover using the alert log (which normally logs the commands when we initiate a failover ourselves), but I'm not sure whether the Data Guard Broker does the same thing (writes out the appropriate commands when a failover is triggered).

    Is there another way to learn that a failover has occurred? (We can query database_role from v$database and the Data Guard status), but I'm looking for some trigger that will fire instantly when a failover is initiated.

    Is it also possible to monitor the observer, whether it is up or not?

    Hello
    you have several possibilities to do so. The easiest is to use Grid Control's preset events. Or you can put a trigger on the event "after DB_ROLE_CHANGE on database".

    The observer monitoring can be done with dgmgrl as

    connect sys/@;
    show configuration verbose;
    

    Showing you the presence and location of the observer.

    I'll give an example of using the trigger to start a service according to the role of the database. You can customize it to send you an email.

    begin
      dbms_service.create_service('safe','safe');
    end;
    /
    
    create trigger rollenwechsel after DB_ROLE_CHANGE on database
    declare
      vrole varchar(30);
    begin
      select database_role into vrole from v$database;
      if vrole = 'PRIMARY' then
        DBMS_SERVICE.START_SERVICE('safe');
      else
        DBMS_SERVICE.STOP_SERVICE('safe');
      end if;
    end;
    /
    
  • Photos do not sort by time when the Date and time are adjusted

    I'm combining my pictures with those of my friends, using Photos on Mac OS X v10.11.4. Because the dates on our cameras vary, I used "Image > Adjust Date and Time" to change the dates of the imported pictures. When the dates and times are adjusted, the photos are sorted by date, not time. Photos whose date was adjusted are sorted at the end, regardless of their time stamp. The sort options in Photos leave much to be desired.

    Where are you seeing this? In the library, all photos are sorted by the date/time the photo was added; Photos (Years/Collections/Moments) are sorted by the date/time of the pictures.

    You can select the photos of interest, create your own album from them, and then sort by title ascending or descending, or by date.

    LN
