Primary database stopped shipping redo data to the standby site

Hi all;

The primary database suddenly stopped shipping redo data to the standby site.

There is no network problem (the two hosts can ping each other).

>> Primary site

SYS> SELECT STATUS, GAP_STATUS FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID = 2;

STATUS    GAP_STATUS
--------- ------------------------
ERROR     RESOLVABLE GAP

SYS> show parameter log_archive_dest_2

NAME               TYPE        VALUE
------------------ ----------- ------------------------------
log_archive_dest_2 string      service="stby_crmsdb", LGWR ASYNC NOAFFIRM delay=0 optional compression=enable max_failure=0 max_connections=1 reopen=300 db_unique_name="stbycrms" net_timeout=30, valid_for=(all_logfiles,primary_role)

SYS> show parameter log_archive_dest_state_2

NAME                     TYPE        VALUE
------------------------ ----------- ------
log_archive_dest_state_2 string      ENABLE

SYS> select thread#, max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          15014


>> Standby site

SYS> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          14996

SYS> select dest_name, status, type, srl, dest_id, recovery_mode from v$archive_dest_status where dest_id = 1;

DEST_ID DEST_NAME          STATUS    TYPE  SRL RECOVERY_MODE
------- ------------------ --------- ----- --- -----------------------
1       LOG_ARCHIVE_DEST_1 VALID     LOCAL NO  MANAGED REAL TIME APPLY

The primary database alert log shows these errors:

ORA-00604: error occurred at recursive SQL level

PING[ARC4]: Heartbeat failed to connect to standby 'stby_crmsdb'. Error is 604.

Thank you.

If you open the database read-only, the logon trigger still fires. But if the trigger's action requires any WRITE, it raises an error.

Don't forget the primary/standby test where you connect as SYSDBA from the primary to the standby and from the standby to the primary. If the connection fails because the logon trigger fails, then log shipping stops and you get a gap until you correct the problem.
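A guarded trigger avoids this. Here is a minimal sketch (the trigger name and audit table are illustrative, not from this thread): it skips its write unless the database is open read-write, so logons to a read-only or mounted standby no longer fail with ORA-604.

CREATE OR REPLACE TRIGGER logon_audit_trg
AFTER LOGON ON DATABASE
DECLARE
  v_open_mode VARCHAR2(20);
BEGIN
  SELECT open_mode INTO v_open_mode FROM v$database;
  -- Write only when open read-write; on a read-only standby the INSERT
  -- would fail and break every logon, including redo transport.
  IF v_open_mode = 'READ WRITE' THEN
    INSERT INTO logon_audit (username, logon_time)  -- hypothetical audit table
    VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'), SYSDATE);
  END IF;
END;
/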

Oracle notes that can help:

Creating a Logon Trigger on the Primary Database, We Get Errors ORA-604 / ORA-1552 on the Physical Standby (Doc ID 785885.1)

PHYSICAL STANDBY: ORA-604 ORA-16000 When Standby Is Open Read-Only (Doc ID 730659.1)

Interesting question; glad it came up again, because I had almost forgotten this one!

Best regards

mseberg

Tags: Database

Similar Questions

  • Switchover of roles - primary and standby

    Hi all

    We have a requirement in our environment to switch over from the primary to the standby. This is so we can shut down the primary database machine, since it has now been up for more than 350 days.

    I have a few doubts regarding the switchover.

    We have two servers:

    1. Server1-Prim - hosts the primary database with SID LPBT

    LISTENER - with SID LPBT on port 1521 with HOST = 80.0.1.187

    TNSNAMES with entries for both services, LPBT and STBYLPBT

    2. Server2-Stby - hosts the standby database with SID STBYLPBT

    LISTENER - with SID STBYLPBT on port 1521 with HOST = 80.0.0.240

    TNSNAMES with entries for both services, LPBT and STBYLPBT

    Ours is a physical standby, and the switchover procedure below is done manually using SQL commands instead of the Data Guard broker.

    A switchover allows a primary and a standby to exchange roles without data loss.

    There is no need to recreate the old primary. It is done for scheduled maintenance.

    Steps to follow:

    1. Check whether the primary can be switched to the standby role

    SQL> select switchover_status from v$database;

    If it returns the value 'TO STANDBY', it is OK to switch the primary to the standby role.

    2. Convert the primary to a standby

    SQL> alter database commit to switchover to physical standby;

    If the value in step 1 is 'SESSIONS ACTIVE', then

    SQL> alter database commit to switchover to physical standby with session shutdown;

    3. Stop and restart the old primary as a standby

    SQL> shutdown immediate;

    SQL> startup nomount;

    At this point, we have two standby databases.

    4. On the target standby database, check the switchover status. If the value is 'TO PRIMARY', then

    SQL> alter database commit to switchover to primary;

    If the value is 'SESSIONS ACTIVE', then add 'WITH SESSION SHUTDOWN' to the above command.

    5. Stop and restart the new primary database

    SQL> shutdown immediate; startup;

    6. Start redo apply on the new standby database (the former primary)

    SQL> alter database mount standby database;

    SQL> alter database recover managed standby database using current logfile disconnect;

    ----------------------------------------------------------------------------------------------------------------------------------------

    My question is: after the switchover, if we connect as 'conn sys@lpbt as sysdba' from a SQL*Plus window on another PC, how will the request reach the new primary database server (80.0.0.240), since tnsnames will still point to the old primary server (80.0.1.187)?

    Can you please explain how this works, or should we add a few more entries to the listener or tnsnames files?

    Please advise.

    We have three databases and we use db_links between these three DBs for transactions,

    so can we not use STBYLPBT as the primary, since the db_links refer to the host LPBT, which resolves via tnsnames and will fail?

    Yes, you are right. You have to re-create those db links to point to your new primary database (the old standby) using entries such as 'STBYLPBT'.

    As mentioned previously, you must redirect all your applications and users to the new primary database by making use of the 'STBYLPBT' TNS entry.
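    A minimal sketch of re-creating such a db link (the link name, credentials, and schema are illustrative):

    -- drop the link that resolves to the old primary host,
    -- then re-create it against the TNS entry for the new primary
    DROP DATABASE LINK applink;
    CREATE DATABASE LINK applink
      CONNECT TO app_user IDENTIFIED BY app_password
      USING 'STBYLPBT';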

    In this case, after a successful switchover, do we need to change the db_names (and vice versa) and the listeners back to how they were before?

    You don't have to do anything with regard to db_name or the listener. The db_name remains the same on the primary and the physical standby.

    I don't see a reason for you to change the listener. The one listener on each server is able to accept connections.

    80.0.0.240 is your standby host and there is a listener on it. After a switchover, it becomes your primary database. You would connect to the new primary database with the TNS entry 'STBYLPBT', which resolves to the host 80.0.0.240, your new primary database. Hope this clears it up.

    STBYLPBT =

    (DESCRIPTION =

    (ADDRESS_LIST =

    (ADDRESS = (PROTOCOL = TCP) (HOST = 80.0.0.240)(PORT = 1521))

    )

    (CONNECT_DATA =

    (SERVICE_NAME = stbylpbt)

    )

    )

    -Shivananda

  • ORA-16086: Redo data cannot be written to the standby redo log

    Hello

    I have enough disk space: the disks are 85% full on both servers, and v$flash_recovery_area_usage shows 56% of the recovery destination area used.

    So after checking that the online and standby redo logs have matching sizes, and checking that there is enough space allocated to the recovery area and enough disk space on the server, I don't know what else could cause a problem with archive log shipping.

    Any ideas?

    My online redo logs and standby redo logs are the same size. On the primary:

    SQL> select group#, bytes from v$log;

    GROUP#      BYTES

    ---------- ----------

    1 52428800

    2 52428800

    3 52428800

    SQL> select group#, bytes from v$standby_log;

    GROUP#      BYTES

    ---------- ----------

    4 52428800

    5 52428800

    6 52428800

    7 52428800

    On the standby:

    SQL> select group#, bytes from v$log;

    GROUP#      BYTES

    ---------- ----------

    1 52428800

    3 52428800

    2 52428800

    SQL> select group#, bytes from v$standby_log;

    GROUP#      BYTES

    ---------- ----------

    4 52428800

    5 52428800

    6 52428800

    7 52428800

    However, on the standby:

    SQL> select group#, status from v$standby_log;

    GROUP#     STATUS

    ---------- ----------

    4          UNASSIGNED
    5          UNASSIGNED
    6          UNASSIGNED
    7          UNASSIGNED

    Finally, on the primary:

    SQL> select status, error from v$archive_dest where dest_id = 2;

    STATUS    ERROR
    --------- -----------------------------------------------------------------
    ERROR     ORA-16086: Redo data cannot be written to the standby redo log

    Hello

    As you can see, your flash recovery area is 99% used: 57% consumed by archived logs and 42% consumed by flashback logs.

    FILE_TYPE      PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    -------------- ------------------ ------------------------- ---------------
    ARCHIVED LOG                57.19                         0             439
    FLASHBACK LOG               42.44                         0             178

    You have 3 options:

    1. Increase the flash recovery area size if you have space left in db_recovery_file_dest

    2. Check your archive log retention and remove the old ones

    3. Do you really use the flashback feature? If not, you can disable it. You should also check that there are no restore points (sometimes restore points you have forgotten about and no longer intend to use will cause flashback logs to be kept longer than they should be):

    select * from v$restore_point;

    Also check the flashback retention with the command below:

    show parameter flashback
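    Minimal sketches of the three options (the size, age window, and restore point name are illustrative):

    -- Option 1: grow the recovery area, if the underlying disk has room
    alter system set db_recovery_file_dest_size = 20G scope=both;

    -- Option 2: in RMAN, delete archived logs older than your retention
    -- RMAN> delete archivelog all completed before 'sysdate - 7';

    -- Option 3: drop a restore point you no longer need
    drop restore point before_upgrade;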

  • Migrating data from the old platform to the new primary database, need advice.

    I have a physical standby set up and everything works now.

    Next weekend, we will do the actual migration of the old platform to the new environment.

    I have several concerns.

    The migration will go into the primary database. I'll have to drop schemas and import the expdp dump.

    While I do all that, what about the standby database? Should I disable redo apply?

    What other concerns and precautions do I need to take before I remove all the data from the primary and do the migration?

    Thank you in advance.

    Hello;

    My main concern would be the FRA (assuming you use it).

    Doing all that generates a ton of archive logs, so you have to worry about space on both sides.

    I would consider increasing my FRA on both sides.

    I would not disable recovery, but I would watch it very closely and be willing to adjust my space as needed.

    As long as you don't run out of space you should be fine. I once had a standby more than 250 log files behind, and it took about 15 minutes to catch up.

    Have a few scripts prepared in advance so you can increase the space or delete applied archive logs, and you should be fine.

    I would also open at least two terminals on both the primary and the standby: one to watch the space and the other to execute whatever you need to adjust it.

    The rest is common sense: do the smaller schemas first so you have an idea of what to expect, and break the work into steps as much as possible.

    Best regards

    mseberg

    I have a shell script called 'quickcheck.sh' (it uses a separate .env file, but it will send vital information back).

    With a little work, you can turn this into something that makes it easy to keep an eye on things.

    #!/bin/bash
    ####################################################################
    #
    
    if [ "$1" ]
    then DBNAME=$1
    else
    echo "basename $0 : Syntax error : use . quickcheck  "
    exit 1
    fi
    
    #
    # Set the Environmental variable for the instance
    #
    . /u01/app/oracle/dba_tool/env/${DBNAME}.env
    #
    #
    
    $ORACLE_HOME/bin/sqlplus /nolog <<EOF
    connect / as sysdba
    REM heredoc body assumed; the original post was truncated at this point
    @/u01/app/oracle/dba_tool/bin/quickaudit.sql
    exit
    EOF
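
    To invoke it, pass the database name that matches the .env file, e.g. (given the layout above):

    . quickcheck.sh PRIMARY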
    

    and then a SQL file called quickaudit.sql:

    SPOOL OFF
    CLEAR SCREEN
    SPOOL /tmp/quickaudit.lst
    
    --SELECT SYSDATE FROM DUAL;
    --SHOW USER
    
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    
    SET TERMOUT ON
    SET VERIFY OFF
    SET FEEDBACK ON
    
    PROMPT
    PROMPT Checking database name and archive mode
    PROMPT
    
    column NAME format A9
    column LOG_MODE format A12
    
    SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking free space in tablespaces
    PROMPT
    
    column tablespace_name format a30
    
    SELECT tablespace_name ,sum(bytes)/1024/1024 "MB Free" FROM dba_free_space WHERE
    tablespace_name <>'TEMP' GROUP BY tablespace_name;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking freespace by tablespace
    PROMPT
    
    column dummy noprint
    column  pct_used format 999.9       heading "%|Used"
    column  name    format a16      heading "Tablespace Name"
    column  bytes   format 9,999,999,999,999    heading "Total Bytes"
    column  used    format 99,999,999,999   heading "Used"
    column  free    format 999,999,999,999  heading "Free"
    break   on report
    compute sum of bytes on report
    compute sum of free on report
    compute sum of used on report
    
    set linesize 132
    set termout off
    select a.tablespace_name                                              name,
           b.tablespace_name                                              dummy,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )      bytes,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
           sum(a.bytes)/count( distinct b.file_id )              used,
           sum(a.bytes)/count( distinct b.file_id )                       free,
           100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
                   (sum(a.bytes)/count( distinct b.file_id ) )) /
           (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
    from sys.dba_free_space a, sys.dba_data_files b
    where a.tablespace_name = b.tablespace_name
    group by a.tablespace_name, b.tablespace_name;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking Size and usage in GB of Flash Recovery Area
    PROMPT
    
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED ,
      SPACE_RECLAIMABLE ;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking free space In Flash Recovery Area
    PROMPT
    
    column FILE_TYPE format a20
    
    select * from v$flash_recovery_area_usage;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    
    clear screen
    set linesize 100
    
    column STANDBY format a20
    column applied format a10
    
    --select max(sequence#), applied from v$archived_log where applied = 'YES' group by applied;
    
    SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    
    prompt
    prompt----------------Last log on Primary--------------------------------------|
    prompt
    
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    PROMPT
    PROMPT Checking switchover status
    PROMPT
    
    select switchover_status from v$database;
    
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    
    SPOOL OFF
    
    exit
    

    The env file looks like this (this one would be PRIMARY.env):

    ORACLE_BASE=/u01/app/oracle
    
    ULIMIT=unlimited
    
    ORACLE_SID=PRIMARY
    
    ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
    
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
    
    LIBPATH=$LD_LIBRARY_PATH:/usr/lib
    
    TNS_ADMIN=$ORACLE_HOME/network/admin
    
    PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
    
    export EXP_DIR=/u01/oradata/PRIMARY_export
    
    export TERM=vt100
    
    export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
    
    export ORACLE_HOME
    
    export LIBPATH LD_LIBRARY_PATH ORA_NLS33
    
    export TNS_ADMIN
    
    export PATH
    

    Edited by: mseberg on December 9, 2011 18:19

  • How to join two tables to retrieve data from the columns of both tables. The tables have a primary/foreign key relationship

    Hello

    I want to join the two tables to retrieve data from the columns of both tables, passing parameters to the join query. The tables have a primary/foreign key relationship.

    Table details:

    1. Alert - AlertCode (FK), AlertID (PK)

    2. AlertDefinition - AlertCode (PK)

    Help, please
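
    In plain SQL, the parameterized join would look something like this minimal sketch (the definition column and bind variable are illustrative, since the full table definitions were not posted):

    select a.AlertID,
           a.AlertCode,
           d.Description        -- hypothetical column on AlertDefinition
    from   Alert a
    join   AlertDefinition d
           on d.AlertCode = a.AlertCode
    where  a.AlertID = :p_alert_id;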


    ----------

    Hi Vincent,

    I don't think you have worked with ADF 12.1.3. In ADF 12.1.3 you don't have to explicitly create the association: when you create the EO for your table, the association xxxxFkAssoc will be created by ADF 12.1.3 for you automatically. Please try this first... You can also follow the links below; I solved the problem by using the following link:

    Oracle ADF step-by-step guide - Oracle ADF tutorial: creating a master/detail relationship using Oracle ADF

    ---

  • Adding a STANDBY REDO LOG on the primary side...

    Hi all

    I set STANDBY_FILE_MANAGEMENT = AUTO on the standby side, and LOG_FILE_NAME_CONVERT points to a location that exists at the OS level...
    When I added a datafile to an existing tablespace on the primary and did a log switch right after, the additional datafile got
    reflected on the STANDBY side...

    But when I try to add a standby redo log file (a new group), it does not get added...

    Please help me on this...


    For this I need to perform the steps below (a minimal SQL sketch follows the list):
    1. Add a standby redo log on the primary side...
    2. Then cancel the managed recovery process on the standby side...
    3. Add the same standby redo log (same name and size) on the standby side
    4. Put the standby back into recovery mode...
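
    A minimal sketch of those steps (the group number, paths, and size are illustrative; note that adding a logfile on the standby requires standby_file_management = MANUAL, since AUTO raises ORA-01275):

    -- 1. On the primary:
    alter database add standby logfile group 5
      ('/u01/oradata/prim/srl05.log') size 52428800;

    -- 2. On the standby, stop redo apply:
    alter database recover managed standby database cancel;

    -- 3. On the standby, add the matching group, toggling file management:
    alter system set standby_file_management = manual;
    alter database add standby logfile group 5
      ('/u01/oradata/stby/srl05.log') size 52428800;
    alter system set standby_file_management = auto;

    -- 4. Put the standby back into recovery mode:
    alter database recover managed standby database using current logfile disconnect;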

    Thank you

    STANDBY_FILE_MANAGEMENT = AUTO is for datafiles only...

    See Doc...
    http://docs.Oracle.com/CD/B14117_01/server.101/b10755/initparams206.htm

  • Clearing online redo logs on the target physical standby

    Hello

    Version 11.2.0.2.

    Primary on computer A.
    Standby on computer B.

    From the note: 11.2 Data Guard physical standby failover best practices using SQL*Plus [Doc ID 1304939.1].
     
    Online redo logs on the target physical standby need to be cleared before the standby database can become a primary database.
    And later:
     
    On the target physical standby run the following query to determine if the online redo logs have not been cleared... 
    1) I did not understand where I have to run the query. On computer A or computer B?
    2)
    "Online redo logs on the target physical standby need to be CLEARED"
    Does that mean they must be DELETED? Could transactions be lost by running the clear command?

    Thank you

    Hello again;

    I would NOT use a script. The reason is that you cannot be sure what switchover_status will be returned by v$database.

    Oracle has the Data Guard broker available for a quick switchover, but I'm from the old school and I want to see how things work in SQL for a while first.
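    For reference, a minimal sketch of the check and the clear, both run on the target physical standby (computer B). The query is an assumption based on the note's description, and the group number is illustrative. Clearing re-initializes the online logs rather than deleting data; a physical standby's online redo logs carry no redo until it becomes primary, so no transactions are lost:

    -- find online log groups that still need clearing
    select distinct l.group#
    from   v$log l, v$logfile lf
    where  l.group# = lf.group#
    and    l.status not in ('UNUSED', 'CLEARING', 'CLEARING_CURRENT');

    -- clear each group the query returns
    alter database clear logfile group 1;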

    Best regards

    mseberg

  • Retrieving data from a merged partition

    Can I recover data from a partition that was merged with the primary or C: partition?

    Hi satpalsingh.chouhan,

    "Is there any way to recover the data merged into the partition?"

    There are many third-party programs that could help you with the task.

    You will need to use your favorite search engine to find such a third-party program.

    Important: Using third-party software, including hardware drivers, can cause serious problems that may prevent your computer from starting properly. Microsoft cannot guarantee that problems resulting from the use of third-party software can be solved. Use of third-party software is at your own risk.

  • Used computers for primary school in South Africa

    I visited the NOMPONDO PRIMARY SCHOOL in KwaZulu-Natal, South Africa. They need 40-50 computers for the computer lab they are building for the children. It seems as if many companies lease Dell computers as a service now and their old computers are returned to Dell. Does Dell have a program/person I can contact to get used computers for these children? They must be able to run Windows 7. Thank you!

    I read that these are taken back to recover and recycle the plastic and metal. You could try emailing Goodwill and checking with them. This is the US site, but it's a place to look.

    http://www.goodwill.org/

  • Cannot use a native database sequence to insert data through the DbAdapter

    Hello

    I use Oracle Fusion Middleware 12.1.3 (from the Pre-Built Developer VMs for Oracle VM VirtualBox | Oracle Technology Network page). I'm trying to insert data into the Oracle database that ships with the appliance via a DbAdapter. I want to use native Oracle sequences for the primary keys and have configured the DbAdapter appropriately. But at runtime, I get the following exception:

    Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("SCOTT"."ORDERS"."ID")

    If I configure the DbAdapter not to use sequences, I get the same exception. What am I missing here?

    Kind regards

    Nicolas

    Reading the exception more carefully in the server log file, not just in the message display of the EM console, I noticed that the ConnectionFactory configured on the outbound DbAdapter connection used to access the database was incorrect. It was using JavaDB instead of Oracle; I had made a copy-and-paste mistake. Correcting this error resolved the problem.

  • Keep the data at savepoints and purge the rest

    Hello world!

    I think either CompressWorkspace[Tree] or PurgeTable somehow does what we want, but I don't quite see how...

    Here's our situation: the admin of our customer's application can create savepoints. He does what is called a 'data freeze'; this happens a few times a year (3-4).

    The database is created using WO_OVERWRITE to keep a full edit trail. However, for older data we now want to keep just the savepoint data and delete whatever lies in between. Only for the most recent savepoints do we still want to keep the complete history. Is there a way to do this?

    To clarify things with some text art, this might look like this for 5 savepoints SP1...SP5:

    [SP1]---[SP2]---[SP3]hhh[SP4]hhh[SP5]hhh[LATER]

    where '-' means 'no historical data here' and 'hhh' means 'historical data here'.

    My first guess was the CompressWorkspace procedure, but if I give it two savepoints, the first one is deleted. What we want is to give it two savepoints, have all the history data between the two deleted, and keep both savepoints. In the example above, we would like to compress between SP1 and SP2, and between SP2 and SP3.

    Note: the database will be migrated soon to 11gR2, so all OWM features up to that version can be used for a solution.

    Any help is appreciated!

    Kind regards

    Andreas

    Hi Andreas,

    You do want to use CompressWorkspace. Specify the same savepoint for both firstSP and secondSP, with the parameter compress_view_wo_overwrite set to true. So, for example:

    SQL> exec dbms_wm.CompressWorkspace('LIVE', true, 'SP1', 'SP1');

    This should be done for each savepoint where you want to delete the history. The procedure removes all rows for each primary key value except the most recent one at the savepoint. It cannot be done across a range of savepoints at once, as that would remove the earlier savepoints, as you pointed out.
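    Since it must be repeated per savepoint, a minimal sketch of a loop over the example savepoints (the savepoint list is illustrative):

    begin
      for sp in (select column_value as name
                 from   table(sys.odcivarchar2list('SP1', 'SP2', 'SP3')))
      loop
        dbms_wm.CompressWorkspace('LIVE', true, sp.name, sp.name);
      end loop;
    end;
    /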

    Kind regards

    Ben

  • Reg: Exporting data from the physical standby database.

    Hi all

    We have an Oracle 11gR1 Standard Edition One environment, and I need to export the data from the physical standby system.

    Can anyone suggest how to do it safely (while it is up)?

    Kind regards

    Konda.

    Oracle Data Guard is available only as a feature of Oracle Database Enterprise Edition. It is not available with Oracle Database Standard Edition.

    So you must either export the data from the primary only, or use EXP instead of EXPDP on the standby database, because EXPDP creates a temporary master table for the duration of the export process.
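    A minimal sketch of the classic exp alternative (credentials, TNS alias, schema, and paths are illustrative; the standby must be open read-only so the connection can read the data):

    exp system/password@stby_tns owner=SCOTT file=/u01/exp/scott.dmp log=/u01/exp/scott.log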

    Regards

    Mr. Mahir Quluzade

  • Breaking date ranges into continuous intervals, ending each at the next row's start date

    Hello
    can someone give me an idea for the situation below?

    I have a table with records
    CREATE  TABLE test1
      (
        key VARCHAR2(11 BYTE) NOT NULL ENABLE,
       cd VARCHAR2(10 BYTE) NOT NULL ENABLE,
            start_dt DATE NOT NULL ENABLE,
        end_dt date,
        CONSTRAINT x1 PRIMARY KEY (key, cd, start_dt));
        
    Insert into test1 (key,cd,start_dt,end_dt) values ('1234','A',to_date('01-MAR-78 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('31-DEC-99 00:00:00','DD-MON-RR hh24:mi:ss'));
    Insert into test1 (key,cd,start_dt,end_dt) values ('1234','B',to_date('01-MAR-78 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('31-DEC-99 00:00:00','DD-MON-RR hh24:mi:ss'));
    Insert into test1 (key,cd,start_dt,end_dt) values ('1234','Q',to_date('01-JAN-89 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('31-JAN-97 00:00:00','DD-MON-RR hh24:mi:ss'));
    Insert into test1 (key,cd,start_dt,end_dt) values ('1234','B',to_date('01-FEB-97 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('30-APR-97 00:00:00','DD-MON-RR hh24:mi:ss'));
    insert into test1(key,cd,start_dt,end_dt) values ('1234','Q',to_date('01-FEB-97 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('30-APR-97 00:00:00','DD-MON-RR hh24:mi:ss'));
    Insert into test1 (key,cd,start_dt,end_dt) values ('1234','B',to_date('01-MAY-97 00:00:00','DD-MON-RR hh24:mi:ss'),to_date('31-MAR-99 00:00:00','DD-MON-RR hh24:mi:ss'));
    commit;
    
    key     cd     start_dt                     END_DT
    1234     B     01-MAR-78 00:00:00     31-DEC-99 00:00:00
    1234     A     01-MAR-78 00:00:00     31-DEC-99 00:00:00
    1234     Q     01-JAN-89 00:00:00     31-JAN-97 00:00:00
    1234     Q     01-FEB-97 00:00:00     30-APR-97 00:00:00
    1234     B     01-FEB-97 00:00:00     30-APR-97 00:00:00
    1234     B     01-MAY-97 00:00:00     31-MAR-99 00:00:00
    I want to merge the overlapping date-range rows, breaking them at the intervals, and set the indicators (A_IND, B_IND, Q_IND) accordingly. (I.e., if a source record has the value 'B' in the 'cd' column for a given date range, then the output should display 'Y' in the B_IND column for those date ranges.)
    I also want the end date of each record to be 1 day less than the start_dt of the next record, as shown below,
    and the output should be
    key     start_dt                          END_DT     A_IND     B_IND     Q_IND
    1234     01-MAR-78 00:00:00     31-DEC-88 00:00:00     Y     Y     N
    1234     01-JAN-89 00:00:00     30-JAN-97 00:00:00     Y     Y     Y
    1234     31-JAN-97 00:00:00     31-JAN-97 00:00:00     Y     Y     N
    1234     01-FEB-97 00:00:00     29-APR-97 00:00:00     Y     Y     Y
    1234     30-APR-97 00:00:00     30-APR-97 00:00:00     Y     Y     N
    1234     01-MAY-97 00:00:00     30-MAR-99 00:00:00     Y     Y     N
    1234     31-MAR-99 00:00:00     31-DEC-99 00:00:00     Y     Y     N

    Hello

    user12997203 wrote:
    Hello
    can someone give me an idea for situation below

    I have a table with records

    CREATE  TABLE test1 ...
    

    Thanks for posting the sample data; it is very useful.

    I want to merge the overlapping date-range rows, breaking them at the intervals, and set the indicators (A_IND, B_IND, Q_IND) accordingly. (I.e., if a source record has the value 'B' in the 'cd' column for a given date range, then the output should display 'Y' in the B_IND column for those date ranges.)

    Do not assume that everyone who wants to help you is familiar with your application and your needs. Explain what role 'key' plays in this problem. See {message:id=10865697}.
    Here's one way:

    WITH     change_points       AS
    (
         SELECT     key
         ,     start_dt     AS dt
         FROM     test1
        UNION
         SELECT     key
         ,     end_dt          AS dt
         FROM     test1
    )
    ,     ranges          AS
    (
         SELECT  key
         ,     dt                    AS start_dt
         ,     LEAD (dt)     OVER ( PARTITION BY  key
                                        ORDER BY          dt
                           )          AS next_dt
         FROM     change_points
    )
    SELECT    r.key
    ,       r.start_dt
    ,       r.next_dt - CASE
                         WHEN  r.next_dt = MAX (r.next_dt) OVER ()
                    THEN  0
                    ELSE  1
                      END                         AS end_dt
    ,       MAX (CASE WHEN t.cd = 'A' THEN 'Y' ELSE 'N' END)     AS a_ind
    ,       MAX (CASE WHEN t.cd = 'B' THEN 'Y' ELSE 'N' END)     AS b_ind
    ,       MAX (CASE WHEN t.cd = 'Q' THEN 'Y' ELSE 'N' END)     AS q_ind
    FROM       ranges     r
    JOIN       test1          t  ON   t.key          = r.key
                        AND     t.start_dt     < r.next_dt
                      AND     t.end_dt     > r.start_dt
    GROUP BY  r.key
    ,            r.start_dt
    ,       r.next_dt
    ORDER BY  r.start_dt
    ;
    

    Note that this depends on the values that you want in the columns a_ind, b_ind and q_ind. For example, if you were doing this for German-language users, you might want to show 'J' and 'N' instead of 'Y' and 'N'. In that case, you would use MIN instead of MAX to get the dominant value, since 'J' sorts before 'N' whereas 'Y' sorts after it.

  • Data recovery on server failure

    Hi all

    The rule of thumb: each 1 GB JVM can store 350 MB of real object data. Assume we have 8 GB of data in Coherence; according to the above rule:

    8192 / 350 = ~24 JVMs required => 24 * 1.2 GB = ~29 GB

    So we need at least 2 servers with 16 GB of RAM each.

    We plan to have 4 JVMs as cache servers on each of the two 16 GB servers, since the recommendation is not to have more than 4 GB of heap for each node in the cluster.
    I wonder: if one of the servers fails, the 4 cluster nodes residing on it go down. If both the primary and the backup copies of certain data are distributed within those 4 cluster nodes that go down, then that data is lost, even though the other server has sufficient capacity for full redundancy.

    Is this correct? If so, how do we solve it? Is Coherence smart enough to put primary and backup copies on different boxes, not just on different nodes, whenever possible?

    Hi Henry,

    No, Coherence inherently tries to minimize the risk of data loss and automatically ensures that the backup copy of the data is kept on a different machine when possible. No extra tuning is required for it. StatusHA is the status metric published by Coherence over JMX, and more details can be found here:

    http://docs.Oracle.com/CD/E18686_01/CoH.37/e18682/appendix_mbean.htm

    I hope this helps!

    Cheers,
    NJ

  • datafiles and redo logs on the same drive

    Hi guys,

    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo002.htm#i1306224

    >

    Data files should also be placed on different disks from redo log files, to reduce contention in writing both data blocks and redo records.
    >

    I really wonder whether there is actually any contention, when first of all Oracle only writes to 1 redo log file at a time.
    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo001.htm
    >
    Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.
    >

    The process flow I got after reading the chapters is:

    When LGWR fills a redo log file (with redo records), there is a log switch plus a checkpoint, at which the writes of data blocks occur. That seems to be a serial flow rather than a simultaneous one. So I don't really understand it when the documentation says contention will take place when data files and redo log files are on the same disks.

    Just to confirm with you guys: whenever there is a log switch, a checkpoint occurs too, right?
    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo002.htm - you can search for 'checkpoint' to reach the section of this documentation that mentions it.
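    One way to see the log-switch checkpoint for yourself, as a minimal sketch (log_checkpoints_to_alert is a documented parameter; the switch command just forces the event):

    -- record checkpoint start/end messages in the alert log
    alter system set log_checkpoints_to_alert = true;
    -- force a switch, then look for the log-switch checkpoint messages
    alter system switch logfile;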

    Think about this:

    Redo means keeping the information around in case you need to re-do it (recovery, or a standby, i.e. continuous recovery). Updates you make will thus, broadly speaking, potentially be written twice: once to the data files and once to the redo. I'm sure you understand that data is first written in memory and only later to the data files by the db writer, where many data writes may be grouped, or even long delayed. In addition, data files are read randomly, so you can't really call data access serial the way redo is. Redo is serial, archiving is serial, but data access is random, and how random depends on how your system is used.

    That means the pattern of reads and writes for redo and archive is fundamentally different from data and undo. For the former, you want to stream out as much I/O as you can; for the latter, you want to read and write randomly at different times, with Oracle being smart enough to do much of it in memory, optimistic enough to make assumptions about when to do things, and lazy enough not to do everything immediately. Redo is critical.

    A while ago, someone pointed out that with modern I/O buffered in memory, you shouldn't really worry about this, because all the work required to set up and maintain separate devices doesn't buy you much over striping and mirroring everything (you can google SAME). This is true up to a point, and we can debate endlessly about RAID types, their effects on performance, and how their buffering makes BAARF (http://www.baarf.com/) moot. But the real question is: at what point should you use separate devices for redo and data? In the real world, we are often handed a standard hardware configuration, which works very well until it doesn't. A disk or controller blowing in a RAID-5 can make 'doesn't' happen real quick.

    You should probably take away two thoughts:

    The docs are pretty general, and some old tips no longer apply; they may have turned into myth, or are perhaps too general to be meaningful.

    There is always something going on in the db, and the more things are underway, the less you can generalize about serialization.
