Data Guard LOG_ARCHIVE_CONFIG

Hi all

Is it required to specify the parameter log_archive_config in a Data Guard configuration?
I've set up a DR configuration without specifying this parameter and it works very well, without any problems whatsoever.


Regards
Sphinx

Hello;

No. (the simple and short answer)

It has a default value, like many other parameters:

http://docs.Oracle.com/CD/B28359_01/server.111/b28320/initparams112.htm

The default value works fine for Data Guard. ('SEND, RECEIVE, NODG_CONFIG')

Oracle docs say "highly recommended". See Chapter 14 of E10700-02 for more details.

But if you run in Maximum Availability mode you will probably be glad you defined it. Add the DB_UNIQUE_NAME of each database to the other databases; setting it explicitly can avoid many problems, in particular in a RAC configuration. A minimal sketch is shown below.
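For reference, a minimal sketch of setting it explicitly (the DB_UNIQUE_NAMEs PROD and DRDB are hypothetical placeholders; use your own names, and set the parameter on every database in the configuration):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PROD,DRDB)' SCOPE=BOTH;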

Remember to close some of your previous questions.

Total Questions:      69 (21 unresolved) 

Best regards

mseberg

Published by: mseberg on March 4, 2013 14:55

Tags: Database

Similar Questions

  • Data Guard installation

    Hi all, can someone please share links / blogs where I can get the steps to create a Data Guard configuration with one primary and two standby databases.

    Thanks in advance.

    There is no difference in setting up Data Guard for one standby or for more - you just have to keep in mind that the following parameters will change (see the sketch after this list):

    Fal_server

    Fal_client

    log_archive_config
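    A minimal sketch of the primary-side settings for one primary and two standbys (the names PROD, STBY1 and STBY2 are hypothetical placeholders):

    log_archive_config='DG_CONFIG=(PROD,STBY1,STBY2)'
    log_archive_dest_2='SERVICE=STBY1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY1'
    log_archive_dest_3='SERVICE=STBY2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY2'
    fal_server='STBY1','STBY2'    # used once this database runs as a standby after a role change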

  • Data Guard doubt

    Hi all

    I was asked this question in an interview.

    In a running Data Guard setup, if the password file of the primary database is deleted, what will happen?

    Kindly give me your ideas.

    My answer was: if it gets deleted we can always take a copy from the standby and replace it, because both are the same file.

    Hello

    You're right. Copy the password file from the standby to the primary.

    After the copy, run some tests, for example using 'dgmgrl', and check some properties (a sketch follows below).
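    A minimal sketch of that, assuming a hypothetical standby host name and the default password file location (orapw<ORACLE_SID>):

    $ scp oracle@standby-host:$ORACLE_HOME/dbs/orapwPROD $ORACLE_HOME/dbs/orapwPROD
    $ dgmgrl sys@PROD "show configuration"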

    Losing the password file can be quite delicate, since you will have a lot of difficulty making connections to the standby.

    Regards

    SPA09

  • Restore DB backup for Data Guard


    Hello

    For the Data Guard configuration, what if we restore from an offline (cold) DB backup (DB stopped) instead of using RMAN... I manually copy the pfile, DB files, control file, and the primary DB archivelogs from the offline backup to the standby, and then start the DB in managed recovery mode...

    Should I use a standby control file for the standby DB?

    Is it a workable solution... Please suggest...

    Thank you

    980223 wrote:

    Hey Maher,

    Thanks for the info...

    But if it's a cold backup copy of the primary DB... how is the standby DB control file different from the primary DB control file...

    When I start the standby DB I will change the pfile for the standby, which will be different from the primary DB configuration, but I do not understand why we cannot use the primary control file on the standby side, since it is a consistent backup of the primary DB.

    Can you please clarify?

    Kind regards

    Vaishali

    As you know, the primary and standby database roles are different, and so is the controlfile.

    SELECT database_role FROM v$database;

    As you know, the v$database view displays its information from the controlfile.
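    For reference, a minimal sketch of what that implies (standard commands; the path is a placeholder): the standby needs a standby controlfile created on the primary, and you can check what you have via v$database:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';
    SQL> SELECT database_role, controlfile_type FROM v$database;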

    Regards

    Mr. Mahir Quluzade

  • 10g Data Guard

    How do I install Data Guard during a 10g installation?

    Hello

    How do I install Data Guard during a 10g installation?

    I don't think there is an option to configure Data Guard while installing Oracle Database 10g.

    Kind regards
    A H E E R X

  • Error during Data Guard switchover

    Hi all

    We did a switchover from the primary to the standby database. Everything went well, database_role is as expected and the database opens. But when we try to enable redo apply on the new standby (the former primary), the following error occurs.

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

    ERROR on line 1:

    ORA-01665: control file is not a standby control file

    Thanks in advance.

    Hello

    You started the recovery process on the new standby database. Right?
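    For reference, a minimal sketch of the usual check and of the classic SQL switchover step that creates the standby controlfile on the old primary (standard commands, shown only as a sketch):

    SQL> SELECT database_role, controlfile_type FROM v$database;  -- expect PHYSICAL STANDBY / STANDBY before starting recovery
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;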

    Yasin.

  • Data Guard parameter DB_UNIQUE_NAME

    Hi all

    I have some confusion about DB_UNIQUE_NAME.

    Why are DB_NAME and DB_UNIQUE_NAME the same on the primary side?

    Can we make them different on the primary side, and likewise on the standby side?

    One last thing I want to ask:

    How would I know that a switchover has completed, using SWITCHOVER_STATUS in V$DATABASE?

    Regards

    Vikas Sharma

    Yes, you can assign any DB_UNIQUE_NAME to any database as long as it is unique in the Data Guard configuration.
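    A minimal sketch of the check asked about in the question:

    SQL> SELECT db_unique_name, switchover_status FROM v$database;
    -- 'TO STANDBY' on the primary means it is ready to switch over; the standby shows
    -- 'TO PRIMARY' (or 'SESSIONS ACTIVE') once the primary has completed its switchover.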

  • Archivelog deletion issue in a Data Guard env.

    Hello

    Need to know whether archivelogs will be removed at the primary site by RMAN (backup archivelog all delete all input) even before they have been shipped to or applied on the standby site, in a 10g env.


    Kind regards.

    You are right:

    Configure RMAN to purge archivelogs after they have been applied on the standby [ID 728053.1]

    and

    RMAN backups in a Max Performance/Max Availability Data Guard environment [ID 331924.1]

    But I would go that route if possible, as soon as possible. A sketch of the RMAN setting is below.
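    A minimal sketch of the RMAN deletion policy those notes describe (standard RMAN syntax, run on the primary):

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;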

    Best regards

    mseberg

  • Standby database keeps falling behind

    11.2.0.3 (primary and Standby)

    ARCHIVE_LAG_TARGET integer 900

    log_archive_config             string    dg_config=(prod,dr)
    log_archive_dest               string
    log_archive_dest_1             string    LOCATION=/oracle/archive/PROD VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD


    Hello


    I have a primary with a standby in a Data Guard configuration. The standby got 24 hours behind, so we stopped the standby and resynchronized it at the storage tier, but almost immediately it started lagging again, and two days later it is 20 hours behind once more. Please help.


    Archive logs are of different sizes as well:


    -rw - r - 1 oracle dba 9608466944 Aug 02:18 prod-768088995-1-137005 1. ARC - prod

    -rw - r - 1 oracle dba 9608854016 Aug 06:26 prod-768088995-1-137055 1. ARC - DR


    First it fails, then it succeeds in applying the logs:

    THREAD#  SEQUENCE#  APPLIED  NEXT_TIME  COMPLETIO
    -------  ---------  -------  ---------  ---------
          1     136558  NO       30-JUL-13  30-JUL-13
          1     136558  YES      30-JUL-13  30-JUL-13
          1     136559  YES      30-JUL-13  30-JUL-13
          1     136559  NO       30-JUL-13  30-JUL-13
          1     136560  NO       30-JUL-13  30-JUL-13
          1     136560  YES      30-JUL-13  30-JUL-13
          1     136561  NO       30-JUL-13  30-JUL-13
          1     136561  YES      30-JUL-13  30-JUL-13
          1     136562  NO       30-JUL-13  30-JUL-13
          1     136562  YES      30-JUL-13  30-JUL-13
          1     136563  NO       30-JUL-13  30-JUL-13
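    The listing above presumably comes from a query along these lines (a sketch, not necessarily the poster's exact SQL):

    SQL> SELECT thread#, sequence#, applied, next_time, completion_time
         FROM v$archived_log ORDER BY sequence#;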

    DR alert log:

    Thu Aug 01 05:54:23 2013
    FAL[server, ARC7]: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    ORACLE Instance PROD - Archival Error. Archiver continuing.
    Thu Aug 01 05:54:25 2013
    Errors in file /oracle/diag/rdbms/rsaprod/PROD/trace/PROD_nsa2_13894020.trc:
    ORA-19502: write error on file '', block number (block size=)
    LNS: Failed to archive log 3 thread 1 sequence 137134 (19502)
    Thu Aug 01 05:54:37 2013
    Archived Log entry 309536 added for thread 1 sequence 137134 ID 0xffffffff9138587b dest 1:
    Thu Aug 01 05:54:53 2013
    Thread 1 cannot allocate new log, sequence 137136
    ...
    ...
    Thread 1 advanced to log sequence 137143 (LGWR switch)
    Current log# 1 seq# 137143 mem# 0: +DATA/prod/onlinelog/group_1.530.768090283
    Current log# 1 seq# 137143 mem# 1: +ARC/prod/onlinelog/group_1.263.768090301
    Thu Aug 01 06:19:32 2013
    Archived Log entry 309547 added for thread 1 sequence 137142 ID 0xffffffff9138587b dest 1:
    Thu Aug 01 06:20:11 2013
    minact-scn: got error during useg scan e:1555 usn:143
    minact-scn: useg scan erroring out with error e:1555
    Thu Aug 01 06:20:18 2013
    LNS: Standby redo logfile selected for thread 1 sequence 137143 for destination LOG_ARCHIVE_DEST_2
    Thu Aug 01 06:23:11 2013
    minact-scn: got error during useg scan e:1555 usn:143

    Just an update,

    It was hardware related. The storage was using only 1 path to the disks instead of the 4 available.

  • Insert - Performance problem

    Hi Experts,

    I am new to Oracle. I am asking for your help to fix a performance problem with an insert query.

    I have an insert query that selects records from a partitioned table.

    Background: the user says that the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: the SGA is 9 GB (Windows host, 4 GB?), the DB block size is 8192, db_file_multiblock_read_count is 128, and the PGA aggregate target is 2457M.

    The parameters are given below


    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------
    DBFIPS_140 boolean FALSE
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
    whole active_instance_count
    aq_tm_processes integer 1
    ARCHIVE_LAG_TARGET integer 0
    asm_diskgroups chain
    asm_diskstring chain
    asm_power_limit integer 1
    asm_preferred_read_failure_groups string
    audit_file_dest string C:\APP\ADM
    audit_sys_operations Boolean TRUE

    AUDIT_TRAIL DB string
    awr_snapshot_time_offset integer 0
    background_core_dump partial string
    background_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    BACKUP_TAPE_IO_SLAVES boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
    cell_offload_compaction ADAPTIVE channel


    cell_offload_decryption Boolean TRUE
    cell_offload_parameters string
    cell_offload_plan_display string AUTO
    cell_offload_processing Boolean TRUE
    cell_offloadgroup_name string
    whole circuits
    whole big client_result_cache_lag 3000
    client_result_cache_size big integer 0
    clonedb boolean FALSE
    cluster_database boolean FALSE
    cluster_database_instances integer 1


    cluster_interconnects chain
    commit_logging string
    commit_point_strength integer 1
    commit_wait string
    string commit_write
    common_user_prefix string C#.
    compatible string 12.1.0.2.0
    connection_brokers string ((TYPE = DED
    ((TYPE = EM
    control_file_record_keep_time integer 7
    control_files string G:\ORACLE\

    TROL01. CTL
    FAST_RECOV
    NTROL02. CT
    control_management_pack_access string diagnostic
    core_dump_dest string C:\app\dia
    bal12\cdum
    cpu_count integer 4
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
    cursor_bind_capture_destination memory of the string + tell
    CURSOR_SHARING EXACT string

    cursor_space_for_time boolean FALSE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
    db_big_table_cache_percent_target string 0
    db_block_buffers integer 0
    db_block_checking FALSE string
    db_block_checksum string TYPICAL
    Whole DB_BLOCK_SIZE 8192

    db_cache_advice string WE
    db_cache_size large integer 0
    db_create_file_dest chain
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
    db_domain chain
    db_file_multiblock_read_count integer 128
    db_file_name_convert chain

    DB_FILES integer 200
    db_flash_cache_file string
    db_flash_cache_size big integer 0
    db_flashback_retention_target around 1440
    chain of db_index_compression_inheritance NONE
    DB_KEEP_CACHE_SIZE big integer 0
    chain of db_lost_write_protect NONE
    db_name string ORCL
    db_performance_profile string
    db_recovery_file_dest string G:\Oracle\
    y_Area


    whole large db_recovery_file_dest_size 12840M
    db_recycle_cache_size large integer 0
    db_securefile string PREFERRED
    channel db_ultra_safe
    db_unique_name string ORCL
    db_unrecoverable_scn_tracking Boolean TRUE
    db_writer_processes integer 1
    dbwr_io_slaves integer 0
    DDL_LOCK_TIMEOUT integer 0
    deferred_segment_creation Boolean TRUE
    dg_broker_config_file1 string C:\APP\PRO


    \DATABASE\
    dg_broker_config_file2 string C:\APP\PRO
    \DATABASE\
    dg_broker_start boolean FALSE
    diagnostic_dest channel directory
    disk_asynch_io Boolean TRUE
    dispatchers (PROTOCOL = string
    12XDB)
    distributed_lock_timeout integer 60
    dml_locks whole 2076
    whole dnfs_batch_size 4096

    dst_upgrade_insert_conv Boolean TRUE
    enable_ddl_logging boolean FALSE
    enable_goldengate_replication boolean FALSE
    enable_pluggable_database boolean FALSE
    event string
    exclude_seed_cdb_view Boolean TRUE
    fal_client chain
    fal_server chain
    FAST_START_IO_TARGET integer 0
    fast_start_mttr_target integer 0
    fast_start_parallel_rollback string LOW


    file_mapping boolean FALSE
    fileio_network_adapters string
    filesystemio_options chain
    fixed_date chain
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    global_txn_processes integer 1
    hash_area_size integer 131072
    channel heat_map
    hi_shared_memory_address integer 0

    hs_autoregister Boolean TRUE
    iFile file
    inmemory_clause_default string
    inmemory_force string by DEFAULT
    inmemory_max_populate_servers integer 0
    inmemory_query string ENABLE
    inmemory_size big integer 0
    inmemory_trickle_repopulate_servers_ integer 1
    percent
    instance_groups string
    instance_name string ORCL


    instance_number integer 0
    instance_type string RDBMS
    instant_restore boolean FALSE
    java_jit_enabled Boolean TRUE
    java_max_sessionspace_size integer 0
    JAVA_POOL_SIZE large integer 0
    java_restrict string no
    java_soft_sessionspace_limit integer 0
    JOB_QUEUE_PROCESSES around 1000
    LARGE_POOL_SIZE large integer 0
    ldap_directory_access string NONE


    ldap_directory_sysauth string no.
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    listener_networks string
    LOCAL_LISTENER (ADDRESS = string
    = i184borac
    (NET) (PORT =
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string


    log_archive_dest                     string
    log_archive_dest_1                   string
    LOG_ARCHIVE_DEST_10 string
    log_archive_dest_11 string
    log_archive_dest_12 string
    log_archive_dest_13 string
    log_archive_dest_14 string
    log_archive_dest_15 string
    log_archive_dest_16 string
    log_archive_dest_17 string
    log_archive_dest_18 string


    log_archive_dest_19 string
    LOG_ARCHIVE_DEST_2 string
    log_archive_dest_20 string
    log_archive_dest_21 string
    log_archive_dest_22 string
    log_archive_dest_23 string
    log_archive_dest_24 string
    log_archive_dest_25 string
    log_archive_dest_26 string
    log_archive_dest_27 string
    log_archive_dest_28 string


    log_archive_dest_29 string
    log_archive_dest_3 string
    log_archive_dest_30 string
    log_archive_dest_31 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1             string      enable
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_11            string      enable
    log_archive_dest_state_12            string      enable
    log_archive_dest_state_13            string      enable
    log_archive_dest_state_14            string      enable
    log_archive_dest_state_15            string      enable
    log_archive_dest_state_16            string      enable
    log_archive_dest_state_17            string      enable
    log_archive_dest_state_18            string      enable
    log_archive_dest_state_19            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_20            string      enable
    log_archive_dest_state_21            string      enable
    log_archive_dest_state_22            string      enable
    log_archive_dest_state_23            string      enable
    log_archive_dest_state_24            string      enable
    log_archive_dest_state_25            string      enable
    log_archive_dest_state_26            string      enable
    log_archive_dest_state_27            string      enable
    log_archive_dest_state_28            string      enable
    log_archive_dest_state_29            string      enable
    log_archive_dest_state_3             string      enable
    log_archive_dest_state_30            string      enable
    log_archive_dest_state_31            string      enable
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest string
    log_archive_format string ARC%S_%R.%
    log_archive_max_processes integer 4

    log_archive_min_succeed_dest integer 1
    log_archive_start Boolean TRUE
    log_archive_trace integer 0
    log_buffer                           big integer 28784K
    log_checkpoint_interval integer 0
    log_checkpoint_timeout around 1800
    log_checkpoints_to_alert boolean FALSE
    log_file_name_convert chain
    whole MAX_DISPATCHERS
    max_dump_file_size unlimited string
    max_enabled_roles integer 150


    whole max_shared_servers
    max_string_size string STANDARD
    memory_max_target big integer 0
    memory_target large integer 0
    nls_calendar                         string      GREGORIAN
    nls_comp                             string      BINARY
    nls_currency                         string      u
    nls_date_format                      string      DD-MON-RR
    nls_date_language                    string      ENGLISH
    nls_dual_currency                    string      C
    nls_iso_currency                     string      UNITED KIN
    nls_language                         string      ENGLISH
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string      .,
    nls_sort                             string      BINARY
    nls_territory                        string      UNITED KIN
    nls_time_format                      string      HH24.MI.SS
    nls_time_tz_format                   string      HH24.MI.SS
    nls_timestamp_format                 string      DD-MON-RR
    nls_timestamp_tz_format              string      DD-MON-RR
    noncdb_compatible boolean FALSE


    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 300
    Open_links integer 4
    open_links_per_instance integer 4
    optimizer_adaptive_features Boolean TRUE
    optimizer_adaptive_reporting_only boolean FALSE
    OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 12.1.0.2

    optimizer_index_caching integer 0
    OPTIMIZER_INDEX_COST_ADJ integer 100
    optimizer_inmemory_aware Boolean TRUE
    the string ALL_ROWS optimizer_mode
    optimizer_secure_view_merging Boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines Boolean TRUE
    OPS os_authent_prefix string $
    OS_ROLES boolean FALSE
    parallel_adaptive_multi_user Boolean TRUE


    parallel_automatic_tuning boolean FALSE
    parallel_degree_level integer 100
    parallel_degree_limit string CPU
    parallel_degree_policy chain MANUAL
    parallel_execution_message_size integer 16384
    parallel_force_local boolean FALSE
    parallel_instance_group string
    parallel_io_cap_enabled boolean FALSE
    PARALLEL_MAX_SERVERS integer 160
    parallel_min_percent integer 0
    parallel_min_servers integer 16

    parallel_min_time_threshold string AUTO
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_servers_target integer 64
    parallel_threads_per_cpu integer 2
    pdb_file_name_convert string
    pdb_lockdown string
    pdb_os_credential string
    permit_92_wrap_format Boolean TRUE
    pga_aggregate_limit great whole 4914M
    whole large pga_aggregate_target 2457M

    -
    Plscope_settings string IDENTIFIER
    plsql_ccflags string
    plsql_code_type chain INTERPRETER
    plsql_debug boolean FALSE
    plsql_optimize_level integer 2
    plsql_v2_compatibility boolean FALSE
    plsql_warnings DISABLE channel: AL
    PRE_PAGE_SGA Boolean TRUE
    whole process 300
    processor_group_name string
    query_rewrite_enabled string TRUE


    applied query_rewrite_integrity chain
    rdbms_server_dn chain
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
    Recyclebin string on
    redo_transport_user string
    remote_dependencies_mode string TIMESTAMP
    remote_listener chain
    Remote_login_passwordfile string EXCLUSIVE
    REMOTE_OS_AUTHENT boolean FALSE
    remote_os_roles boolean FALSE

    replication_dependency_tracking Boolean TRUE
    resource_limit Boolean TRUE
    resource_manager_cpu_allocation integer 4
    resource_manager_plan chain
    result_cache_max_result integer 5
    whole big result_cache_max_size K 46208
    result_cache_mode chain MANUAL
    result_cache_remote_expiration integer 0
    resumable_timeout integer 0
    rollback_segments chain
    SEC_CASE_SENSITIVE_LOGON Boolean TRUE

    sec_max_failed_login_attempts integer 3
    string sec_protocol_error_further_action (DROP, 3)
    sec_protocol_error_trace_action string PATH
    sec_return_server_release_banner boolean FALSE
    disable the serial_reuse chain
    service name string ORCL
    session_cached_cursors integer 50
    session_max_open_files integer 10
    entire sessions 472
    Whole large SGA_MAX_SIZE M 9024
    Whole large SGA_TARGET M 9024


    shadow_core_dump string no
    shared_memory_address integer 0
    SHARED_POOL_RESERVED_SIZE large integer 70464307
    shared_pool_size large integer 0
    whole shared_server_sessions
    SHARED_SERVERS integer 1
    skip_unusable_indexes Boolean TRUE
    smtp_out_server chain
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spatial_vector_acceleration boolean FALSE


    SPFile string C:\APP\PRO
    \DATABASE\
    sql92_security boolean FALSE
    SQL_Trace boolean FALSE
    sqltune_category string by DEFAULT
    standby_archive_dest channel % ORACLE_HO
    standby_file_management string MANUAL
    star_transformation_enabled string TRUE
    statistics_level string TYPICAL
    STREAMS_POOL_SIZE big integer 0
    tape_asynch_io Boolean TRUE

    temp_undo_enabled boolean FALSE
    entire thread 0
    threaded_execution boolean FALSE
    timed_os_statistics integer 0
    TIMED_STATISTICS Boolean TRUE
    trace_enabled Boolean TRUE
    tracefile_identifier chain
    whole of transactions 519
    transactions_per_rollback_segment integer 5
    UNDO_MANAGEMENT string AUTO
    UNDO_RETENTION integer 900

    undo_tablespace string UNDOTBS1
    unified_audit_sga_queue_size integer 1048576
    use_dedicated_broker boolean FALSE
    use_indirect_data_buffers boolean FALSE
    use_large_pages string TRUE
    user_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    UTL_FILE_DIR chain
    workarea_size_policy string AUTO
    xml_db_events string enable

    Thanks in advance

    Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.

    Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more.  All other things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Likewise, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks as if something has gone wrong in the 10g optimizer plan - maybe a bug, something I believe Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of only has a cost of 23,949 or 24K.  So the maths does not add up in 10g.  In other words, maybe it is not really an optimal plan, because the 10g optimizer may have got its sums wrong while 12c may have got them right.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the single main component of the final total cost of 95,373.

    Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses.  And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no ..." (a sketch is shown below).  However, hints are very difficult to use and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
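    A minimal sketch of where such a hint would go (the filter column and bind are hypothetical placeholders, for illustration only):

    SELECT /*+ FULL(cf) */ cf.vehicle_chass_no
    FROM   claim_facts cf
    WHERE  cf.some_filter_column = :some_value;   -- hypothetical predicate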

    Both plans are parallel ones, which means that the query is broken down into separate, independent steps, several of which are executed at the same time, i.e. several CPUs will be used and there will be several disk reads happening at the same time.  (That is a simplification of how parallel query actually works.)  If the 10g and 12c systems do not have the SAME hardware configuration, then you would naturally expect different elapsed times when running the same parallel query.  See the end of this answer for the additional information you could provide.

    But I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 processor cores or more and 100's of disks in a big disk array, and maybe the 12c system has only 4 processor cores and 4 disks.  That would explain a lot about why 12c takes hours to run what 10g does in only 30 minutes.

    Remember what I said in my last reply:

    "Without any contrary information I guess the filter conditions are very low, the optimizer believes he needs of most of the data in the table and that a table scan or even a limited index scan complete is the"best"way to run this SQL.  In other words, your query takes just time because your tables are big and your application has most of the data in these tables. "

    When dealing with very large tables and doing full parallel table scans on them, the most important factor is the amount of raw hardware you throw at the problem.  A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, more or less.  It could be that this, rather than the execution plan itself, is the main reason the 12c system is so much slower than the 10g system.

    You could also provide the following information, which would allow a better analysis:

    • Row counts for each table referenced in the query, and whether any of them are partitioned.
    • Hardware configuration of both systems - the 10g one and the 12c one.  Number of CPUs, their model and speed, physical memory, number of disks.
    • The disks are very important - do the 10g and 12c systems have similar disk subsystems?  Do you use plain old disks, or do you have a SAN, or some sort of disk array?  Are the disk arrays identical in both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
    • What is the size of the SGA in both systems?  The values of MEMORY_TARGET and SGA_TARGET.
    • Whether the CLAIM_FACTS_AK9 index exists on the 10g system.  I assume it does, but I would like that confirmed to be sure.

    John Brady

  • DB file name conversion

    Dear friends

    Here is my init .ora file

    I have an application working on a database called PRDSPRT. I cloned this database server to a new server in order to set up Data Guard on it; the db_name has to stay the same because the application needs it.

    How should I set db_file_name_convert?

    & log_file_name_convert - it is pointing to the same path now.

    * ._b_tree_bitmap_plans = FALSE # required 11i setting

    * ._fast_full_scan_enabled = FALSE

    * ._like_with_bind_as_equality = TRUE

    * ._sort_elimination_cost_ratio = 5

    * ._sqlexec_progression_cost = 2147483647

    * ._system_trig_enabled = true

    * ._trace_files_public = TRUE

    * .aq_tm_processes = 1

    *.background_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/bdump'

    *.compatible='10.2.0'

    *.control_files='/oracle/dev2/d01/db/apps_st/data/cntrl01.dbf','/oracle/dev2/d01/db/apps_st/data/cntrl02.dbf','/oracle/dev2/d01/db/apps_st/data/cntrl03.dbf'

    *.core_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/cdump'

    * .cursor_sharing = "TRUE" # required 11i settting

    * .db_block_checking = "FALSE".

    * .db_block_checksum = 'TRUE '.

    * .db_block_size = 8192

    *.db_file_multiblock_read_count=8 # required 11i setting

    *.db_files=512 # Max. database files

    * .db_name = "PRDSPRT".

    * .db_unique_name = "PROD".

    * .dml_locks = 10000

    * .job_queue_processes = 50

    *.event="10298 trace name context forever, level 32"

    *.log_archive_dest_1='LOCATION=/oracle/dev2/d01/db/archive'

    *. LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf'

    * .log_buffer = 10485760

    * .log_checkpoint_interval = 100000

    * .log_checkpoint_timeout = 1200 # control point at least every 20 minutes.

    * .log_checkpoints_to_alert = TRUE

    *.max_dump_file_size='20480' # trace file size

    * .nls_comp = "binary" 11i parameter # required

    * .nls_date_format = "DD-MON-RR.

    *.nls_language='american'

    * .nls_length_semantics = "BYTE" # required put 11i

    * .nls_numeric_characters ='.,'

    * .nls_sort = "binary" 11i parameter # required

    * .nls_territory = "america."

    * .olap_page_pool_size = 4194304

    *.open_cursors=600 # Consumes process memory, unless using MTS

    * .optimizer_secure_view_merging = false

    * .parallel_max_servers = 20

    * .parallel_min_servers = 0

    * .pga_aggregate_target = 1 G

    *.plsql_code_type='INTERPRETED' # required 11i setting

    *.plsql_native_library_dir='/oracle/dev2/d01/db/tech_st/10.2.0/plsql/nativelib'

    * .plsql_native_library_subdir_count = 149

    * .plsql_optimize_level = 2 # required put 11i

    *.processes=500 # Max. users x 2

    * .session_cached_cursors = 500

    *.sessions=400 # 2 x processes

    * .sga_target = 2G

    * .shared_pool_reserved_size = 500 M

    * .shared_pool_size = 1 G

    * .timed_statistics = true

    * .undo_management = "AUTO" # required put 11i

    * .undo_tablespace = "APPS_UNDOTS1" # required put 11i

    *.user_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/udump'

    *.utl_file_dir='/usr/tmp','/usr/tmp','/patch/ora_tmp/PROD','/Oracle/DEV2/D01/DB/tech_st/10.2.0/appsutil/outbound/PRDSPRT_srv-HQ-on01','/usr/tmp '

    * .workarea_size_policy = "AUTO" # required put 11i

    * .remote_login_passwordfile = "EXCLUSIVE."

    #standbydbsetup

    DB_UNIQUE_NAME = PROD

    SERVICE_NAME = PROD

    #Physical db ensures

    log_archive_config = "DG_CONFIG = (PROD, TEST)"

    log_archive_dest_1='LOCATION=/oracle/dev2/d01/db/archive/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD'

    log_archive_dest_2='SERVICE=TEST LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST'

    log_archive_dest_state_1 = enable

    LOG_ARCHIVE_DEST_STATE_2 = enable

    log_archive_max_processes = 15

    #failover settings

    * .fal_Server = TEST

    * .fal_client = PROD

    db_file_name_convert=('/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/apps_st/data/','/oracle/dev2/d01/db/apps_st/data/')

    log_file_name_convert=('/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/archive/')

    standby_file_management=auto

    data guard.jpg

    Hello

    If the new (standby) server has the same directory structure as the old (primary) one, then you don't have to worry about these settings (a sketch for the case where they differ follows below).

    (Edit)

    You can read about the settings in the following link:

    https://docs.Oracle.com/CD/E11882_01/server.112/e41134/create_ps.htm#i76119
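    If the directory structures did differ, a minimal sketch of the two parameters would look like this (the /oracle/dev3 paths are hypothetical placeholders for the standby-side locations):

    *.db_file_name_convert='/oracle/dev2/d01/db/apps_st/data/','/oracle/dev3/d01/db/apps_st/data/'
    *.log_file_name_convert='/oracle/dev2/d01/db/apps_st/data/','/oracle/dev3/d01/db/apps_st/data/'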

    Regards

    Juan M

  • Creation of two physical standby databases

    Hi all

    11 GR 2

    Rhel6.5

    For a single physical standby database, the init.ora parameters are:

    LOG_ARCHIVE_CONFIG = 'DG_CONFIG = (PROD, STBY)'

    LOG_ARCHIVE_DEST_2='SERVICE=STBY NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY'

    LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

    Are the following settings correct for creating two physical standby databases?

    LOG_ARCHIVE_CONFIG = 'DG_CONFIG = (PROD, STBY, STBY2)'

    LOG_ARCHIVE_DEST_2='SERVICE=STBY NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY'

    LOG_ARCHIVE_DEST_3='SERVICE=STBY2 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY2'

    LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

    LOG_ARCHIVE_DEST_STATE_3 = ENABLE;

    Thank you very much

    JC

    Hello

    Just before you create them, you may want to think about this (it can help you reduce network traffic and the performance impact on the primary a little):

    You can have two physical standbys working in two ways:

    (1) redo is shipped from the primary to both standby databases

    (2) redo is sent to the first standby, and the second standby receives its redo from the first standby (the primary is not directly in contact with the second standby)

    Going by your configuration:

    Both standby databases will receive redo from the primary, so both standbys will be in sync with the primary without delay (depending on network availability and the protection mode).

    The following settings seem good!

    LOG_ARCHIVE_CONFIG = 'DG_CONFIG = (PROD, STBY, STBY2)'

    LOG_ARCHIVE_DEST_2='SERVICE=STBY NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY'

    LOG_ARCHIVE_DEST_3='SERVICE=STBY2 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBY2'

    LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

    LOG_ARCHIVE_DEST_STATE_3 = ENABLE;

    From 11g onwards FAL_CLIENT is not mandatory, but take care with FAL_SERVER since you have two standby databases.

    FAL_SERVER --> in short, where this database fetches missing archived redo from (the redo source for which this database acts as a standby).

    In your situation you need TWO values for FAL_SERVER in each database (see the sketch after the list below):

    http://docs.Oracle.com/CD/B28359_01/server.111/b28320/initparams078.htm

    Prod

    --------

    FAL_SERVER = STBY, STBY2

    To STBY

    --------

    FAL_SERVER = PROD, STBY2

    For STBY2

    --------

    FAL_SERVER = PROD, STBY
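    A minimal sketch of setting this, here on the primary (the same pattern applies on each standby with its own list):

    SQL> ALTER SYSTEM SET FAL_SERVER='STBY','STBY2' SCOPE=BOTH;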

    With the parameters above, switchover / failover won't be a problem.

    Consider enabling Flashback Database on all three databases (at least with a minimal retention).

    Having flashback enabled avoids having to re-create the standby after an unexpected failover (as opposed to a graceful switchover).

    If you choose the second option.

    (2) redo is sent to the first standby and the second standby receives its redo from the first standby

    Take a look on

    http://docs.Oracle.com/CD/B28359_01/server.111/b28294/cascade_appx.htm

    Thank you

  • Apply real-time: delay is ignored

    Hello

    I am trying to set up a standby database on 12.1, but real-time apply is always active.

    On the primary database I keep seeing this in the alert.log:

    WARNING: Managed Standby recovery started with REAL TIME APPLY

    DELAY of 60 minutes specified by the primary is being ignored

    The standby database alert.log shows similar output:

    Managed Standby recovery started with REAL TIME APPLY

    Previously specified DELAY of 60 minutes ignored for thread 1 sequence 169

    Archived logs are copied to the standby but applied immediately. The DELAY on log_archive_dest_2 is ignored. Why?

    Primary DB:

    select dest_id id, database_mode db_mode, recovery_mode,
           protection_mode, standby_logfile_count "SRLs",
           standby_logfile_active active,
           archived_seq#
    from v$archive_dest_status;

    ID DB_MODE         RECOVERY_MODE              PROTECTION_MODE      SRLs ACTIVE ARCHIVED_SEQ#
    -- --------------- -------------------------- -------------------- ---- ------ -------------
     1 OPEN            IDLE                       MAXIMUM PERFORMANCE     0      0           169
     2 MOUNTED-STANDBY MANAGED REAL TIME APPLY    MAXIMUM PERFORMANCE     4      0           169

    Specific parameters are:

    log_archive_config DG_CONFIG = (JFDG, JFDG_SB)

    LOG_ARCHIVE_DEST_2 SERVICE = JFDG_SB ARCH ASYNC DELAY = 60 VALID_FOR = (ALL_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME = JFDG_SB

    LOG_ARCHIVE_DEST_STATE_2 ENABLE

    standby_file_management AUTO

    I restarted standby db with

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    or

    recover managed standby database disconnect parallel 2;

    = > same result

    How can I get a delayed apply of the archived logs?

    The version of the database is:

    Patch 20831110: applied on Thu Sep 24 09:55:04 CEST 2015

    Patch ID: 18977826

    Patch description: "Database Patch Set Update : 12.1.0.2.4 (20831110)"

    Operating system: RedHat 6.6 x64 (2.6.32-504.el6.x86_64)

    Hello

    1. oracle docs: https://docs.oracle.com/database/121/SBYDB/release_changes.htm#SBYDB5408

    Redo Apply specific features:

    • The USING CURRENT LOGFILE clause is no longer required to start real-time apply.
    • For physical standby databases, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement. (As of Oracle Database 12c Release 1 (12.1), the USING CURRENT LOGFILE clause is deprecated and no longer necessary to start real-time apply.) (doc: https://docs.oracle.com/database/121/SBYDB/log_apply.htm#SBYDB0050)

    2. About the DELAY attribute (real-time apply does not honour it, which conflicts with your delayed-apply requirement):

    Doc: https://docs.oracle.com/database/121/SBYDB/log_apply.htm#SBYDB4762

    Note:

    If you define a delay for a destination while real-time apply is enabled, the delay is ignored. If you define a delay as described in the following paragraph, then you must start the apply using the USING ARCHIVED LOGFILE

    clause described in Section 8.3.1.

    3 doc:https://docs.oracle.com/database/121/SQLRF/statements_1006.htm#SQLRF00802

    https://docs.Oracle.com/database/121/SQLRF/statements_1006.htm#BGECJDHB

    Note:

    Beginning with Oracle Database 12c,

    real-time apply

    is enabled by default during redo apply. With real-time apply, redo is recovered from the standby redo log files as soon as it is written there, without waiting for it to be archived first on the physical standby database. You can disable real-time apply with the USING ARCHIVED LOGFILE clause. Refer to:
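    In other words, to keep the DELAY you would start the apply without real-time apply, along these lines (standard syntax):

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE DISCONNECT FROM SESSION;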

    It may be useful

    Every bit of information is covered in Oracle docs, spare some time for learning.

    -Pavan Kumar N

  • Standby redo log file missing when restoring the primary database using an RMAN backup taken on the physical standby database

    Here is my question, after tons of research and testing without finding the right solution.

    Target:

    (1) I have a single-instance 12.1.0.2 Enterprise Edition primary database 'testdb' running on server "node1".

    (2) I created the physical standby database "stbydb" on the server "node2".

    (3) Data Guard is running in MaxAvailability (SYNC) mode, with the 12c default real-time redo apply.

    (4) The primary database has 3 single-member redo log groups. (/oraredo/testdb/redo01.log redo02.log redo03.log)

    (5) I've created 4 standby redo logfiles (/oraredo/testdb/stby01.log stby02.log stby03.log stby04.log)

    (6) I take RMAN backups (database and archivelog) on the standby site only.

    (7) I want to use this backup for a full restore of the primary database.

    It is a DR test to simulate a scenario in which both the primary and standby servers are totally lost.

    Here is how the backup is taken, on the standby database:

    (1) run 'alter database recover managed standby database cancel' to ensure consistent datafiles

    (2) RMAN > backup database;

    (3) RMAN > backup archivelog all;

    I took the backup pieces and copied them to the primary DB server, something like:

    /Home/Oracle/backupset/o1_mf_nnndf_TAG20151002T133329_c0xq099p_.BKP (data files)

    /Home/Oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.BKP (spfile & controlfile)

    /Home/Oracle/backupset/o1_mf_annnn_TAG20151002T133357_c0xq15xf_.BKP (archivelogs)

    And here is how I restore, on the primary site:

    I removed all the files (datafiles, controlfiles, redo - all gone).

    (1) restore the spfile to a pfile

    RMAN > startup nomount

    RMAN > restore spfile to pfile '/home/oracle/pfile.txt' from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';

    (2) modify the pfile to convert its contents for the primary DB. The pfile is shown below

    *.audit_file_dest='/opt/Oracle/DB/admin/testdb/adump '

    * .audit_trail = "db".

    *.compatible='12.1.0.2.0'

    *.control_files='/oradata/testdb/control01.ctl','/orafra/testdb/control02.ctl'

    * .db_block_size = 8192

    * .db_domain = "

    *.db_file_name_convert='/testdb/','/testdb /'

    * .db_name = "testdb".

    * .db_recovery_file_dest ='/ orafra'

    * .db_recovery_file_dest_size = 10737418240

    * .db_unique_name = "testdb".

    *.diagnostic_dest='/opt/Oracle/DB '

    * .fal_server = "stbydb".

    * .log_archive_config = 'dg_config = (testdb, stbydb)'

    * .log_archive_dest_2 = "service = stbydb SYNC valid_for = (ONLINE_LOGFILE, PRIMARY_ROLE) db_unique_name = stbydb'"

    * .log_archive_dest_state_2 = 'ENABLE '.

    *.log_file_name_convert='/testdb/','/testdb /'

    * .memory_target = 1800 m

    * .open_cursors = 300

    *.processes=300

    * .remote_login_passwordfile = "EXCLUSIVE."

    * .standby_file_management = "AUTO".

    * .undo_tablespace = "UNDOTBS1.

    (3) restart the DB with the updated pfile

    SQLPLUS > create spfile from pfile='/home/oracle/pfile.txt'

    SQLPLUS > shutdown

    SQLPLUS > startup nomount

    (4) restore the controlfile

    RMAN > restore primary controlfile from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';

    RMAN > alter database mount;

    (5) catalog all the backup pieces

    RMAN > catalog start with '/home/oracle/backupset/';

    (6) restore and recover the database

    RMAN > restore database;

    RMAN > recover database until scn XXXXXX; (this SCN is the maximum in the archivelog backups, which extends beyond the SCN of the datafile backup)

    (7) open resetlogs

    RMAN > alter database open resetlogs;

    Everything seems perfect, except that one of the standby redo log files is not created:

    SQL > select * from v$ standby_log;

    ERROR:

    ORA-00308: cannot open archived log ' / oraredo/testdb/stby01.log'

    ORA-27037: unable to get file status

    Linux-x86_64 error: 2: no such file or directory

    Additional information: 3

    no rows selected

    I intend to use the same backup to restore both the primary and the standby database, to save copy traffic and downtime between them in the real production world.

    So I did exactly the same steps (except RESTORE STANDBY CONTROLFILE, and no recover after the database restore) to restore the standby database.

    And I got the same missing log file.

    The problem is:

    (1) the alert.log fills up with this error; that is not the main concern here

    (2) real-time redo apply now won't work, since the apply side always shows "WAITING_FOR_LOG"

    (3) I can't delete and re-create this log file

    Then I tried several things and found:

    The missing standby logfile was still 'ACTIVE' at the time the RMAN backup was taken.

    For example, on the standby DB below, group #4 (stby01.log) would be lost after the restore:

    SQL> select group#, sequence#, used, status from v$standby_log;

    GROUP#   SEQUENCE#       USED STATUS
    ------- ---------- ---------- ----------
          4         19     133632 ACTIVE
          5          0          0 UNASSIGNED
          6          0          0 UNASSIGNED
          7          0          0 UNASSIGNED

    So, before taking the backup, I tried this on the primary database:

    SQL> alter system set log_archive_dest_state_2=defer;

    After this, standby_log group #4 on the standby side was released:

    SQL> select group#, sequence#, used, status from v$standby_log;

    GROUP#   SEQUENCE#       USED STATUS
    ------- ---------- ---------- ----------
          4          0          0 UNASSIGNED
          5          0          0 UNASSIGNED
          6          0          0 UNASSIGNED
          7          0          0 UNASSIGNED

    Then the backup restored correctly, without the missing standby logfile.

    However, changing this on the primary means breaking Data Guard protection while the backup runs. That is not acceptable in a production environment.

    Finally, my real questions come:

    (1) is there anything I did wrong, or anything I can do, other than the parameter change?

    (2) I know I can drop the standby redo logs before the restore and re-create them afterwards. Is there any simple/fast way to avoid losing the standby logfile, or to recreate the lost one?

    I understand that there are a number of ways to work around this - keeping a copy of the standby log file while the restore is in progress and copying the missing one back, etc., etc...

    And yes, I could always run without real-time apply, "using archived logfile", but that is also not an acceptable protection mode for production.

    I just want proof that the design of taking the backups on the standby (which is shown in a few Oracle documents, Doc ID 602299.1 being one of them) actually works and can be used to restore both sites, and that it can be done without spending extra time re-taking backups or putting load on the primary database to re-create the standby database.

    Your idea is very much appreciated.

    Thank you!

    Hello

    1 --> When we take a backup via RMAN, RMAN does not back up the redo log files (ORL or SRL), so we cannot expect the ORLs or SRLs to be restored.

    2 --> When we open the database, the ORLs are cleared and re-created.

    3 --> So the SRLs should not be an issue either; we should be able to get rid of the problem by clearing/dropping them.

    DR sys@cdb01 SQL> select thread#, sequence#, group#, status from v$standby_log;

    THREAD # SEQUENCE # GROUP # STATUS

    ---------- ---------- ---------- ----------

    1 233 4 ACTIVE

    1 238 5 ACTIVE

    DR sys@cdb01 SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                          IS_ CON_ID
    ------ ------- ------- ------------------------------- --- ------
         3         ONLINE  /u03/cdb01/cdb01/redo03.log     NO       0
         2         ONLINE  /u03/cdb01/cdb01/redo02.log     NO       0
         1         ONLINE  /u03/cdb01/cdb01/redo01.log     NO       0
         4         STANDBY /u03/cdb01/cdb01/stdredo01.log  NO       0
         5         STANDBY /u03/cdb01/cdb01/stdredo02.log  NO       0

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo01.log
    ls: cannot access /u03/cdb01/cdb01/stdredo01.log: No such file or directory

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo02.log
    -rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:32 /u03/cdb01/cdb01/stdredo02.log

    DR sys@cdb01 SQL> alter database clear logfile group 4;
    alter database clear logfile group 4

    *

    ERROR on line 1:

    ORA-01156: recovery or flashback in progress may need access to files

    DR sys@cdb01 SQL > alter database recover managed standby database cancel;

    Database altered.

    DR sys@cdb01 SQL> alter database clear logfile group 4;

    Database altered.

    DR sys@cdb01 SQL> !ls -ltr /u03/cdb01/cdb01/stdredo01.log
    -rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:33 /u03/cdb01/cdb01/stdredo01.log

    DR sys@cdb01 SQL >

    If you want, you can also recreate the controlfile without the standby redo log entries...

    If you still think this is not acceptable, you should open an SR with support to analyze why it does not drop the SRL when controlfile_type is "CURRENT".
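    For completeness, a minimal sketch of dropping and re-adding a standby redo log group instead (apply must be cancelled first; the path comes from the question, and the 50M size is only a placeholder - it must match the online redo log size):

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oraredo/testdb/stby01.log') SIZE 50M;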

    Thank you

  • How to change the archive destination parameter permanently? Tried with alter system, but it changes back again after a restart

    Hi all

    I have this setting on the standby database:

    SQL> show parameter archive

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    archive_lag_target                   integer     0
    log_archive_config                   string      dg_config=(erpdb,stdby)
    log_archive_dest                     string
    log_archive_dest_1                   string      location='G:\ARC', valid_for=(ALL_LOGFILES,ALL_ROLES)
    log_archive_dest_10                  string
    log_archive_dest_2                   string      service=erpdb LGWR ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=erpdb
    log_archive_dest_3                   string      location='F:\ARC', valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string      ENABLE
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest              string
    log_archive_format                   string      %t_%s_%r.ARC
    log_archive_local_first              boolean     TRUE
    log_archive_max_processes            integer     2
    log_archive_min_succeed_dest         integer     1
    log_archive_start                    boolean     FALSE
    log_archive_trace                    integer     0
    remote_archive_enable                string      true
    standby_archive_dest                 string      F:\ARC

    SQL >

    I used this to change the directory from F:\ARC to G:\ARC\STANDBY:

    SQL> alter system set standby_archive_dest='G:\ARC\STANDBY' scope=both;

    System altered.

    SQL> alter system set log_archive_dest_3='location=G:\ARC\STANDBY' scope=both;

    System altered.

    SQL >

    I checked the new parameter

    SQL> show parameter archive

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    archive_lag_target                   integer     0
    log_archive_config                   string      dg_config=(erpdb,stdby)
    log_archive_dest                     string
    log_archive_dest_1                   string      location='G:\ARC', valid_for=(ALL_LOGFILES,ALL_ROLES)
    log_archive_dest_10                  string
    log_archive_dest_2                   string      service=erpdb LGWR ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=erpdb
    log_archive_dest_3                   string      location=G:\ARC\STANDBY
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string      ENABLE
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest              string
    log_archive_format                   string      %t_%s_%r.ARC
    log_archive_local_first              boolean     TRUE
    log_archive_max_processes            integer     2
    log_archive_min_succeed_dest         integer     1
    log_archive_start                    boolean     FALSE
    log_archive_trace                    integer     0
    remote_archive_enable                string      true
    standby_archive_dest                 string      G:\ARC\STANDBY


    SQL >

    log_archive_dest_3 has changed from F to G

    standby_archive_dest changed from F to G

    I checked again after a shutdown:

    SQL > alter database recover managed standby database cancel;

    Database altered.

    SQL > shutdown

    ORA-01109: database not open

    Database dismounted.

    ORACLE instance shut down.

    SQL >

    start the DB again

    SQL > startup nomount;

    ORACLE instance started.

    Total System Global Area 9663676416 bytes

    Fixed Size                  2093360 bytes

    Variable Size            2650803920 bytes

    Database Buffers         6996099072 bytes

    Redo Buffers               14680064 bytes

    SQL> alter database mount standby database;

    Database altered.

    SQL >

    I check again the parameter

    SQL> show parameter archive

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    archive_lag_target                   integer     0
    log_archive_config                   string      dg_config=(erpdb,stdby)
    log_archive_dest                     string
    log_archive_dest_1                   string      location='G:\ARC', valid_for=(ALL_LOGFILES,ALL_ROLES)
    log_archive_dest_10                  string
    log_archive_dest_2                   string      service=erpdb LGWR ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=erpdb
    log_archive_dest_3                   string      location='F:\ARC', valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string      ENABLE
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_3             string      ENABLE
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest              string
    log_archive_format                   string      %t_%s_%r.ARC
    log_archive_local_first              boolean     TRUE
    log_archive_max_processes            integer     2
    log_archive_min_succeed_dest         integer     1
    log_archive_start                    boolean     FALSE
    log_archive_trace                    integer     0
    remote_archive_enable                string      true
    standby_archive_dest                 string      F:\ARC

    SQL >

    log_archive_dest_3 is back from G to F.

    standby_archive_dest has also returned from G to F.

    How do I make the change permanent?

    Thank you

    Eddy

    Hello again;

    Any chance you have the broker (DGMGRL) configured and are mixing SQL*Plus changes with it? The broker would cause exactly the problem you are reporting.

    Once you start using DGMGRL you must stick to it; you cannot mix the two (see the sketch below).
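    A minimal sketch of making this particular change through the broker instead (assuming the standby is registered in the broker configuration under the name 'stdby'):

    DGMGRL> EDIT DATABASE 'stdby' SET PROPERTY StandbyArchiveLocation='G:\ARC\STANDBY';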

    Best regards

    mseberg
