db_recovery_file_dest

Hi all

EBS R12.2.4

11gR2

RHEL 6.5

If I set db_recovery_file_dest = /u01/FRA


Does this mean my RMAN backups will be staged in this folder? Is it intended for RMAN backups?


What happens if I run my RMAN backup as: RMAN> backup as compressed backupset database plus archivelog format '/u01/RMAN/db_%U';


Which folder is used or honored then?


Help, please...



Thank you very much

JC

OK, let me rephrase my question > what happens if I run my RMAN backup as: RMAN> backup as compressed backupset database format '/u01/RMAN/db_%U';

Will this override the FRA? I guess my database backup will be written to the /u01/RMAN folder even if I set db_recovery_file_dest = /u01/FRA, right?

Yes
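
To put it concretely, a FORMAT clause sends those backup pieces to the path you name instead of the FRA. A minimal sketch of running the rephrased command and then checking where the pieces actually landed (paths as in the question; output will vary):

RMAN> backup as compressed backupset database format '/u01/RMAN/db_%U';

SQL> select handle, completion_time
     from v$backup_piece
     where handle like '/u01/RMAN/%'
     order by completion_time desc;

Anything backed up without an explicit FORMAT (and no other configured destination) still goes under db_recovery_file_dest.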

Tags: Oracle Applications

Similar Questions

  • Why is RMAN not storing backups in "db_recovery_file_dest"?

    Hello

    SELECT name, value FROM v$parameter
    WHERE name = 'db_recovery_file_dest';

    NAME                     VALUE
    db_recovery_file_dest    /ora_backup

    When I run the following command, RMAN stores the backups in '$ORACLE_HOME/dbs':

    RMAN> backup as compressed backupset incremental level 0 check logical database
          format '%d_%t_level0_%U';

    My understanding is that RMAN should store the backups under '/ora_backup'.

    And when I run the following command, RMAN does store the backups under '/ora_backup':

    RMAN > backup database;

    Could someone please let me know what I'm missing here?

    Here is the RMAN config:

    RMAN> show all;

    RMAN configuration parameters are:

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 10 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 5 G;
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/app/oracle/product/10.2.0/dbee/dbs/snapcf_<SID>.f'; # default

    RMAN >

    Best regards

    If you specify a FORMAT, it overrides the use of the FRA.

    Run a

    BACKUP INCREMENTAL LEVEL 0 AS COMPRESSED BACKUPSET DATABASE;

    Hemant K Collette

  • change the parameter db_recovery_file_dest

    Hello!

    We use Oracle 10g on Windows 2008 R2. The flash recovery area is d:\fra. In recent months the amount of data has increased considerably, so we want to move the FRA to another (of course larger) partition/hard disk.

    Now the question: is it possible to change the db_recovery_file_dest setting from d:\fra to e:\fra and then copy the backup sets and archive logs to the new location, OR is it enough to change the parameter, with Oracle knowing that data created before the change is still located in d:\fra? Hope this is understandable... ;-)

    Thanks in advance!

    Ciao,
    Christian

    Hello;

    Yes, you can change it and it should work fine.

    The catalog will know about the older backups, so it shouldn't be a problem.

    alter system set db_recovery_file_dest='e:\fra';
    

    No doubt you would change this too (at the same time):

    alter system set db_recovery_file_dest_size=??G scope=both;
    

    Checking it is easy:

    SQL>show parameter db_recovery
    

    Do NOT move the old files. Oracle will bark.

    If you want to move all of the files from the old FRA to the new FRA, follow this link:

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14191/TOC.htm

    Best regards

    mseberg

  • Using DB_RECOVERY_FILE_DEST and LOG_ARCHIVE_DEST - error ORA-16019

    Hello

    My database is currently configured to write archivelog files to two locations: /u01/app/oracle/archivelogs (which is set via the log_archive_dest parameter, destination 1) and USE_DB_RECOVERY_FILE_DEST (destination 10), which is
    under /u01/app/oracle/flash_recovery_area. I want to move the two locations to separate disks.

    When I attempt to move either one, I get the following errors:


    SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u03/fra/MYDB/flash_recovery_area' SCOPE=BOTH;
    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u03/fra/MYDB/flash_recovery_area' SCOPE=BOTH
    *
    ERROR on line 1:
    ORA-02097: the parameter cannot be changed because specified value is not valid
    ORA-16019: cannot use db_recovery_file_dest with LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST

    SQL> alter system set log_archive_dest='/u02/archive/MYDB';
    alter system set log_archive_dest='/u02/archive/MYDB'
    *
    ERROR on line 1:
    ORA-02097: the parameter cannot be changed because specified value is not valid
    ORA-16018: cannot use LOG_ARCHIVE_DEST with LOG_ARCHIVE_DEST_n or DB_RECOVERY_FILE_DEST


    Any ideas?

    Thanks for any help.

    Cool, finally you got it. Thanks for pasting the output.

    Barry K.
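
    For reference, the ORA-16019/ORA-16018 pair above usually means the old LOG_ARCHIVE_DEST parameter has to be retired before the FRA and the archive destination can be moved independently. A minimal sketch of that workaround, using the paths from the question (adjust and restart as appropriate):

    SQL> alter system set log_archive_dest='' scope=spfile;
    -- bounce the instance, then:
    SQL> alter system set log_archive_dest_1='LOCATION=/u02/archive/MYDB' scope=both;
    SQL> alter system set db_recovery_file_dest='/u03/fra/MYDB/flash_recovery_area' scope=both;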

  • How do I rename DB_RECOVERY_FILE_DEST

    I'm on 10.2.0.4 and trying to rename this destination from /data/abc to /data/dfg. So I'm not moving files to a new destination. If I rename the directory while the db is down, I cannot bring it back up, as it complains about the db_recovery_file_dest parameter in init.ora (we use an spfile, however). If the db is up, I can't rename the directory because it is in use. Any suggestions? Thank you!

    You can create a pfile from the spfile, then bring your db down, make the change in the pfile, start up with pfile='name of the pfile', and once it is started and the new location checked, create the spfile from the pfile.
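
    A minimal sketch of that sequence (the pfile path is illustrative):

    SQL> create pfile='/tmp/initORCL.ora' from spfile;
    SQL> shutdown immediate
    -- edit /tmp/initORCL.ora: db_recovery_file_dest='/data/dfg'
    SQL> startup pfile='/tmp/initORCL.ora'
    SQL> show parameter db_recovery_file_dest
    SQL> create spfile from pfile='/tmp/initORCL.ora';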

  • How to stop using db_recovery_file_dest

    Dear oracle users,

    I created a new database that uses the flash recovery area, defined by the db_recovery_file_dest parameter, as the archive log destination. However, I have decided to stop using this feature and go back to using the traditional log_archive_dest parameter for archive logs. How can I do that?

    Thank you in advance.

    Belinda

    Belinda,
    It is a good choice to use the FRA to store the archive logs, but you can change the destination to a normal filesystem. You must use log_archive_dest_n (where n ranges from 1 to 10) and point it to a non-FRA location.
    You can use this command to set the location:

    ALTER SYSTEM SET log_archive_dest_1='LOCATION=/some/location/on/disc';

    This would set the first non-FRA location for your archiver. You can also do the same from Enterprise Manager, under Maintenance -> Recovery Settings; it will be easier to follow from there.
    Please have a read of the following page on the same topic:
    http://68.142.116.68/docs/cd/B19306_01/server.102/b14231/archredo.htm#sthref1068
    HTH
    Aman....
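
    To confirm the archiver is no longer using the FRA afterwards, a quick check (a hedged sketch; output depends on your setup):

    SQL> archive log list
    SQL> select dest_name, destination, status
         from v$archive_dest
         where dest_name = 'LOG_ARCHIVE_DEST_1';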
    
  • What to do after starting db_recovery_file_dest is full

    We have a Data Guard setup with a primary and a physical standby, both on 11gR2. A few days ago the flash_recovery_area filled up and replication stopped. Now I have cleaned up the disk and made room for log shipping. I can see new redo log files shipped to the standby host.
    The last messages I have seen in the standby alert log are:
    Archived Log entry 6191 added for thread 3 sequence 2735 rlc 710164789 ID 0x33498a35 dest 2:
    RFS[14958]: Opened log for thread 3 sequence 2737 dbid 860456245 branch 710164789
    Archived Log entry 6192 added for thread 3 sequence 2736 rlc 710164789 ID 0x33498a35 dest 2:
    Archived Log entry 6193 added for thread 3 sequence 2737 rlc 710164789 ID 0x33498a35 dest 2:
    Mon Aug 16 14:11:30 2010
    RFS[14960]: Assigned to RFS process 29122
    RFS[14960]: Identified database type as 'physical standby': Client is Foreground pid 22317
    Checking v$archived_log:
    select thread#,applied, max(sequence#) max_sequence#,count(*) from v$archived_log 
     group by thread#, applied order by 1;
    
    THREAD#                     APPLIED                     MAX_SEQUENCE#               COUNT(*)                    
    1                           NO                          3731                        831                         
    1                           YES                         2912                        1544                        
    2                           NO                          2642                        786                         
    2                           YES                         1868                        1069                        
    3                           NO                          2739                        810                         
    3                           YES                         1941                        1157                  
    I don't see a recovery session on the standby. Is the replication in progress? If so, why are all the sessions inactive?
    What should I do to restart the replication if it has not already started?

    Dear user13148231,

    Oracle cannot see the archivelogs you copied across. You must register them all first, to inform Oracle that the missing archivelog sequences are there in the archivelog directory.

    ALTER DATABASE REGISTER LOGFILE '/PATH/ARC_LOG_FILE_NAME';
    

    After registering, you must restart the MRP (managed recovery) process, and you will see the MRP apply these archivelogs one by one. That will also correct the v$archived_log output.

    Hope that helps.

    Ogan
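
    A minimal sketch of the register-then-restart-MRP sequence described above (the archivelog path is illustrative):

    SQL> alter database register logfile '/path/to/missing_archivelog.arc';
    SQL> alter database recover managed standby database disconnect from session;
    SQL> select process, status, sequence# from v$managed_standby;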

  • Physical standby db cannot receive archivelogs from the primary

    Hello

    Yesterday, I created a new physical standby database, based on

    http://dbatricksworld.com/steps-to-configure-Oracle-11g-Data-Guard-physical-standby-Data-Guard-part-i/

    everything seems fine.

    This morning I found the standby database lagging behind the primary, so I manually copied the missing archivelogs and recovered the standby database to catch up.

    db_unique_name of the primary database: orcl2

    db_unique_name of the standby db: orcl2_stby

    It looks like the standby db cannot receive archivelogs from the primary.

    On the standby:

    SQL> SELECT dest_name, status, destination
      2  FROM v$archive_dest
      3  WHERE destination IS NOT NULL;

    DEST_NAME                 STATUS    DESTINATION
    ------------------------- --------- ------------------------------
    LOG_ARCHIVE_DEST_1 BAD PARAM E:\oracle\oradata\orcl2\recovery\ORCL2\ARCHIVELOG

    LOG_ARCHIVE_DEST_2 VALID ORCL2
    STANDBY_ARCHIVE_DEST VALID USE_DB_RECOVERY_FILE_DEST

    Why does it display "BAD PARAM"?

    How to fix?

    Thank you

    CC:

    On the standby:

    SQL> show parameter db_recovery

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest                string      E:\ORACLE\ORADATA\ORCL2\recovery
    db_recovery_file_dest_size           big integer 150G

    Hey all,

    the problem is resolved,

    The problem was caused by the password file. On Windows server, the password file is created differently than on a Linux server.

    Thank you all very much.
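
    For reference, when a destination shows BAD PARAM, the underlying error text can usually be read directly from V$ARCHIVE_DEST on the standby (a small diagnostic sketch):

    SQL> select dest_name, status, error
         from v$archive_dest
         where status not in ('VALID', 'INACTIVE');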

  • Insert - Performance problem

    Hi Experts,

    I am new to Oracle. I am asking for your help to fix a performance problem with an insert query.

    I have an insert query that fetches records from a partitioned table.

    Background: the user indicates that the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: SGA is 9 GB, Windows - 4 GB. DB block size is 8192, db_file_multiblock_read_count is 128. PGA aggregate target is 2457M.

    The parameters are given below


    VALUE OF TYPE NAME
    ------------------------------------ ----------- ----------
    DBFIPS_140 boolean FALSE
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
    whole active_instance_count
    aq_tm_processes integer 1
    ARCHIVE_LAG_TARGET integer 0
    asm_diskgroups chain
    asm_diskstring chain
    asm_power_limit integer 1
    asm_preferred_read_failure_groups string
    audit_file_dest string C:\APP\ADM
    audit_sys_operations Boolean TRUE

    AUDIT_TRAIL DB string
    awr_snapshot_time_offset integer 0
    background_core_dump partial string
    background_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    BACKUP_TAPE_IO_SLAVES boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
    cell_offload_compaction ADAPTIVE channel


    cell_offload_decryption Boolean TRUE
    cell_offload_parameters string
    cell_offload_plan_display string AUTO
    cell_offload_processing Boolean TRUE
    cell_offloadgroup_name string
    whole circuits
    whole big client_result_cache_lag 3000
    client_result_cache_size big integer 0
    clonedb boolean FALSE
    cluster_database boolean FALSE
    cluster_database_instances integer 1


    cluster_interconnects chain
    commit_logging string
    commit_point_strength integer 1
    commit_wait string
    string commit_write
    common_user_prefix string C#.
    compatible string 12.1.0.2.0
    connection_brokers string ((TYPE = DED
    ((TYPE = EM
    control_file_record_keep_time integer 7
    control_files string G:\ORACLE\

    TROL01. CTL
    FAST_RECOV
    NTROL02. CT
    control_management_pack_access string diagnostic
    core_dump_dest string C:\app\dia
    bal12\cdum
    cpu_count integer 4
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
    cursor_bind_capture_destination memory of the string + tell
    CURSOR_SHARING EXACT string

    cursor_space_for_time boolean FALSE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
    db_big_table_cache_percent_target string 0
    db_block_buffers integer 0
    db_block_checking FALSE string
    db_block_checksum string TYPICAL
    Whole DB_BLOCK_SIZE 8192

    db_cache_advice string WE
    db_cache_size large integer 0
    db_create_file_dest chain
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
    db_domain chain
    db_file_multiblock_read_count integer 128
    db_file_name_convert chain

    DB_FILES integer 200
    db_flash_cache_file string
    db_flash_cache_size big integer 0
    db_flashback_retention_target around 1440
    chain of db_index_compression_inheritance NONE
    DB_KEEP_CACHE_SIZE big integer 0
    chain of db_lost_write_protect NONE
    db_name string ORCL
    db_performance_profile string
    db_recovery_file_dest string G:\Oracle\
    y_Area


    whole large db_recovery_file_dest_size 12840M
    db_recycle_cache_size large integer 0
    db_securefile string PREFERRED
    channel db_ultra_safe
    db_unique_name string ORCL
    db_unrecoverable_scn_tracking Boolean TRUE
    db_writer_processes integer 1
    dbwr_io_slaves integer 0
    DDL_LOCK_TIMEOUT integer 0
    deferred_segment_creation Boolean TRUE
    dg_broker_config_file1 string C:\APP\PRO


    \DATABASE\
    dg_broker_config_file2 string C:\APP\PRO
    \DATABASE\
    dg_broker_start boolean FALSE
    diagnostic_dest channel directory
    disk_asynch_io Boolean TRUE
    dispatchers (PROTOCOL = string
    12XDB)
    distributed_lock_timeout integer 60
    dml_locks whole 2076
    whole dnfs_batch_size 4096

    dst_upgrade_insert_conv Boolean TRUE
    enable_ddl_logging boolean FALSE
    enable_goldengate_replication boolean FALSE
    enable_pluggable_database boolean FALSE
    event string
    exclude_seed_cdb_view Boolean TRUE
    fal_client chain
    fal_server chain
    FAST_START_IO_TARGET integer 0
    fast_start_mttr_target integer 0
    fast_start_parallel_rollback string LOW


    file_mapping boolean FALSE
    fileio_network_adapters string
    filesystemio_options chain
    fixed_date chain
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    global_txn_processes integer 1
    hash_area_size integer 131072
    channel heat_map
    hi_shared_memory_address integer 0

    hs_autoregister Boolean TRUE
    iFile file
    inmemory_clause_default string
    inmemory_force string by DEFAULT
    inmemory_max_populate_servers integer 0
    inmemory_query string ENABLE
    inmemory_size big integer 0
    inmemory_trickle_repopulate_servers_ integer 1
    percent
    instance_groups string
    instance_name string ORCL


    instance_number integer 0
    instance_type string RDBMS
    instant_restore boolean FALSE
    java_jit_enabled Boolean TRUE
    java_max_sessionspace_size integer 0
    JAVA_POOL_SIZE large integer 0
    java_restrict string no
    java_soft_sessionspace_limit integer 0
    JOB_QUEUE_PROCESSES around 1000
    LARGE_POOL_SIZE large integer 0
    ldap_directory_access string NONE


    ldap_directory_sysauth string no.
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    listener_networks string
    LOCAL_LISTENER (ADDRESS = string
    = i184borac
    (NET) (PORT =
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string


    Log_archive_dest chain
    Log_archive_dest_1 chain
    LOG_ARCHIVE_DEST_10 string
    log_archive_dest_11 string
    log_archive_dest_12 string
    log_archive_dest_13 string
    log_archive_dest_14 string
    log_archive_dest_15 string
    log_archive_dest_16 string
    log_archive_dest_17 string
    log_archive_dest_18 string


    log_archive_dest_19 string
    LOG_ARCHIVE_DEST_2 string
    log_archive_dest_20 string
    log_archive_dest_21 string
    log_archive_dest_22 string
    log_archive_dest_23 string
    log_archive_dest_24 string
    log_archive_dest_25 string
    log_archive_dest_26 string
    log_archive_dest_27 string
    log_archive_dest_28 string


    log_archive_dest_29 string
    log_archive_dest_3 string
    log_archive_dest_30 string
    log_archive_dest_31 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    allow the chain of log_archive_dest_state_1


    allow the chain of log_archive_dest_state_10
    allow the chain of log_archive_dest_state_11
    allow the chain of log_archive_dest_state_12
    allow the chain of log_archive_dest_state_13
    allow the chain of log_archive_dest_state_14
    allow the chain of log_archive_dest_state_15
    allow the chain of log_archive_dest_state_16
    allow the chain of log_archive_dest_state_17
    allow the chain of log_archive_dest_state_18
    allow the chain of log_archive_dest_state_19
    allow the chain of LOG_ARCHIVE_DEST_STATE_2

    allow the chain of log_archive_dest_state_20
    allow the chain of log_archive_dest_state_21
    allow the chain of log_archive_dest_state_22
    allow the chain of log_archive_dest_state_23
    allow the chain of log_archive_dest_state_24
    allow the chain of log_archive_dest_state_25
    allow the chain of log_archive_dest_state_26
    allow the chain of log_archive_dest_state_27
    allow the chain of log_archive_dest_state_28
    allow the chain of log_archive_dest_state_29
    allow the chain of log_archive_dest_state_3

    allow the chain of log_archive_dest_state_30
    allow the chain of log_archive_dest_state_31
    allow the chain of log_archive_dest_state_4
    allow the chain of log_archive_dest_state_5
    allow the chain of log_archive_dest_state_6
    allow the chain of log_archive_dest_state_7
    allow the chain of log_archive_dest_state_8
    allow the chain of log_archive_dest_state_9
    log_archive_duplex_dest string
    log_archive_format string ARC%S_%R.%
    log_archive_max_processes integer 4

    log_archive_min_succeed_dest integer 1
    log_archive_start Boolean TRUE
    log_archive_trace integer 0
    whole very large log_buffer 28784K
    log_checkpoint_interval integer 0
    log_checkpoint_timeout around 1800
    log_checkpoints_to_alert boolean FALSE
    log_file_name_convert chain
    whole MAX_DISPATCHERS
    max_dump_file_size unlimited string
    max_enabled_roles integer 150


    whole max_shared_servers
    max_string_size string STANDARD
    memory_max_target big integer 0
    memory_target large integer 0
    NLS_CALENDAR string GREGORIAN
    nls_comp BINARY string
    nls_currency channel u
    string of NLS_DATE_FORMAT DD-MON-RR
    nls_date_language channel ENGLISH
    string nls_dual_currency C
    nls_iso_currency string UNITED KIN

    nls_language channel ENGLISH
    nls_length_semantics string OCTET
    string nls_nchar_conv_excp FALSE
    nls_numeric_characters chain.,.
    nls_sort BINARY string
    nls_territory string UNITED KIN
    nls_time_format HH24.MI string. SS
    nls_time_tz_format HH24.MI string. SS
    chain of NLS_TIMESTAMP_FORMAT DD-MON-RR
    NLS_TIMESTAMP_TZ_FORMAT string DD-MON-RR
    noncdb_compatible boolean FALSE


    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 300
    Open_links integer 4
    open_links_per_instance integer 4
    optimizer_adaptive_features Boolean TRUE
    optimizer_adaptive_reporting_only boolean FALSE
    OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 12.1.0.2

    optimizer_index_caching integer 0
    OPTIMIZER_INDEX_COST_ADJ integer 100
    optimizer_inmemory_aware Boolean TRUE
    the string ALL_ROWS optimizer_mode
    optimizer_secure_view_merging Boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines Boolean TRUE
    OPS os_authent_prefix string $
    OS_ROLES boolean FALSE
    parallel_adaptive_multi_user Boolean TRUE


    parallel_automatic_tuning boolean FALSE
    parallel_degree_level integer 100
    parallel_degree_limit string CPU
    parallel_degree_policy chain MANUAL
    parallel_execution_message_size integer 16384
    parallel_force_local boolean FALSE
    parallel_instance_group string
    parallel_io_cap_enabled boolean FALSE
    PARALLEL_MAX_SERVERS integer 160
    parallel_min_percent integer 0
    parallel_min_servers integer 16

    parallel_min_time_threshold string AUTO
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_servers_target integer 64
    parallel_threads_per_cpu integer 2
    pdb_file_name_convert string
    pdb_lockdown string
    pdb_os_credential string
    permit_92_wrap_format Boolean TRUE
    pga_aggregate_limit great whole 4914M
    whole large pga_aggregate_target 2457M

    -
    Plscope_settings string IDENTIFIER
    plsql_ccflags string
    plsql_code_type chain INTERPRETER
    plsql_debug boolean FALSE
    plsql_optimize_level integer 2
    plsql_v2_compatibility boolean FALSE
    plsql_warnings DISABLE channel: AL
    PRE_PAGE_SGA Boolean TRUE
    whole process 300
    processor_group_name string
    query_rewrite_enabled string TRUE


    applied query_rewrite_integrity chain
    rdbms_server_dn chain
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
    Recyclebin string on
    redo_transport_user string
    remote_dependencies_mode string TIMESTAMP
    remote_listener chain
    Remote_login_passwordfile string EXCLUSIVE
    REMOTE_OS_AUTHENT boolean FALSE
    remote_os_roles boolean FALSE

    replication_dependency_tracking Boolean TRUE
    resource_limit Boolean TRUE
    resource_manager_cpu_allocation integer 4
    resource_manager_plan chain
    result_cache_max_result integer 5
    whole big result_cache_max_size K 46208
    result_cache_mode chain MANUAL
    result_cache_remote_expiration integer 0
    resumable_timeout integer 0
    rollback_segments chain
    SEC_CASE_SENSITIVE_LOGON Boolean TRUE

    sec_max_failed_login_attempts integer 3
    string sec_protocol_error_further_action (DROP, 3)
    sec_protocol_error_trace_action string PATH
    sec_return_server_release_banner boolean FALSE
    disable the serial_reuse chain
    service name string ORCL
    session_cached_cursors integer 50
    session_max_open_files integer 10
    entire sessions 472
    Whole large SGA_MAX_SIZE M 9024
    Whole large SGA_TARGET M 9024


    shadow_core_dump string no
    shared_memory_address integer 0
    SHARED_POOL_RESERVED_SIZE large integer 70464307
    shared_pool_size large integer 0
    whole shared_server_sessions
    SHARED_SERVERS integer 1
    skip_unusable_indexes Boolean TRUE
    smtp_out_server chain
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spatial_vector_acceleration boolean FALSE


    SPFile string C:\APP\PRO
    \DATABASE\
    sql92_security boolean FALSE
    SQL_Trace boolean FALSE
    sqltune_category string by DEFAULT
    standby_archive_dest channel % ORACLE_HO
    standby_file_management string MANUAL
    star_transformation_enabled string TRUE
    statistics_level string TYPICAL
    STREAMS_POOL_SIZE big integer 0
    tape_asynch_io Boolean TRUE

    temp_undo_enabled boolean FALSE
    entire thread 0
    threaded_execution boolean FALSE
    timed_os_statistics integer 0
    TIMED_STATISTICS Boolean TRUE
    trace_enabled Boolean TRUE
    tracefile_identifier chain
    whole of transactions 519
    transactions_per_rollback_segment integer 5
    UNDO_MANAGEMENT string AUTO
    UNDO_RETENTION integer 900

    undo_tablespace string UNDOTBS1
    unified_audit_sga_queue_size integer 1048576
    use_dedicated_broker boolean FALSE
    use_indirect_data_buffers boolean FALSE
    use_large_pages string TRUE
    user_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    UTL_FILE_DIR chain
    workarea_size_policy string AUTO
    xml_db_events string enable

    Thanks in advance

    Firstly, thank you for posting the 10g execution plan, which was one of the key things that we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.

    Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more.  All things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Likewise, the first three scans produce very few rows in 10g, less than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks like something has gone wrong in the 10g optimizer plan - maybe a bug, which I believe Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of only has a cost of 23,949 or 24K.  So the maths do not add up in 10g.  In other words, maybe it is not really an optimal plan, because the 10g optimizer may have got its sums wrong and 12c might be getting them right.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the single main component of the final total cost of 95,373.

    Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan that it uses.  And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no...".  However, hints are very difficult to use and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c, it may be worth trying just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.

    Both plans are parallel ones, which means that the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads happening at the same time.  (That is a simplification of how parallel query actually works.)  If the 10g and 12c systems do not have the SAME hardware configuration, then you would naturally expect different elapsed times to run the same parallel queries.  See the end of this answer for the additional information that you could provide.

    But I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 processor cores or more and hundreds of disks in a big drive array, and maybe the 12c system has only 4 processor cores and 4 disks.  That would explain a lot about why the 12c run takes hours when the 10g run takes only 30 minutes.

    Remember what I said in my last reply:

    "Without any contrary information I guess the filter conditions are very low, the optimizer believes he needs of most of the data in the table and that a table scan or even a limited index scan complete is the"best"way to run this SQL.  In other words, your query takes just time because your tables are big and your application has most of the data in these tables. "

    When dealing with very large tables and to do a full table parallel analysis on them, the most important factor is the amount of raw hardware, you throw the ball to her.  A system with twice the number of CPUS and twice the number of disks will run the same parallel query in half of the time, at least.  It could be that the main reason for the 12 c system is much slower than the system of 10g, rather than on the implementation plan itself.

    You could also provide us with the following information, which would allow a better analysis:

    • Row counts for each table referenced in the query, and whether any of them are partitioned.
    • Hardware configurations of both systems - the 10g and the 12c one.  Number of CPUs, their model and speed, physical memory, number of disks.
    • The disks are very important - do the 10g and 12c systems have similar disk subsystems?  Are you using plain old disks, or do you have a SAN, or some sort of disk array?  Are the drive arrays identical in both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
    • What is the size of the SGA in both systems?  The values of MEMORY_TARGET and SGA_TARGET.
    • Whether the CLAIM_FACTS_AK9 index exists on the 10g system.  I guess it does, but I would like that confirmed to be sure.

    John Brady
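
    Two of the bullet points above (row counts / partitioning, and whether the CLAIM_FACTS_AK9 index exists) can be checked with ordinary dictionary queries; a minimal sketch, using the table and index names mentioned in the thread:

    SQL> select table_name, num_rows, partitioned
         from dba_tables
         where table_name in ('DEALERS', 'SCARF_VEHICLE_EXCLUSIONS', 'CLAIM_FACTS');

    SQL> select index_name, table_name, status
         from dba_indexes
         where index_name = 'CLAIM_FACTS_AK9';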

  • ARCHIVELOG deletion policy vs v$flash_recovery_area_usage - tricky...

    Hi all

    Reading of

    http://docs.Oracle.com/CD/E11882_01/backup.112/e10643/rcmsynta016.htm#RCMRF121

    DELETE ARCHIVELOG ALL considers only the archived log deletion policy


    Reading about the archivelog deletion policy:

    Only logs in the fast recovery area can be deleted automatically by the database.
    When the deletion policy is set to NONE, RMAN considers archived redo log files as eligible for deletion if they meet both of the following conditions:


    - The archived redo logs, whether inside or outside the fast recovery area, have been transferred to the required remote destinations specified by LOG_ARCHIVE_DEST_n.


    - The archived redo log files in the fast recovery area have been backed up at least once to disk or SBT, or the logs are obsolete according to the backup retention policy.

    The backup retention policy considers logs obsolete only if the logs are not needed by a guaranteed restore point and the logs are not required by Flashback Database.

    =======================================================

    Q1) LOG_ARCHIVE_DEST_1 is configured to USE_DB_RECOVERY_FILE_DEST and the archivelogs are kept there -> do I meet condition #1?

    Q2) I have a backup retention policy of a 1-day recovery window and I haven't done ANY backup at all yet. I don't have Flashback Database enabled, nor have I created a guaranteed restore point - do I meet condition #2?


    Q3) I am able to issue 'delete archivelog all'; does that mean I satisfy conditions #1 and #2 and the archivelogs are eligible for deletion?

    Q4) If the above is a yes, and logs in the fast recovery area may be removed automatically -> why does PERCENT_SPACE_RECLAIMABLE in V$FLASH_RECOVERY_AREA_USAGE always show 0 for archived logs?

    I thought that reclaimable space corresponds to files that are eligible for deletion?

    Kind regards

    Noob

    So, after giving this some more thought and reading the documentation again, I conclude the following:

    The confusion might be between the context of "eligible for deletion" and the automatic space management of files inside the fast recovery area.

    If the archivelog deletion policy is set to NONE, then delete archivelog all will remove all archivelogs regardless. Otherwise, archivelogs protected by the policy are not deleted.

    According to the documentation:


    DELETE ARCHIVELOG ALL considers only the archived log deletion policy and does not take into account the configured retention policy.

    Everything under "eligible for deletion" applies to FRA space management, which considers the RMAN retention policy and, as far as I understand it, also considers the archivelog deletion policy when it is not set to NONE.

    The archived redo log deletion policy is set to NONE by default. In this case, RMAN considers archived redo log files in the recovery area as eligible for deletion if they fulfil the following two conditions:

    • The archived redo logs, whether in the flash recovery area or outside of it, have been transferred to the required remote destinations specified by LOG_ARCHIVE_DEST_n.
    • The archived redo logs have been backed up at least once to disk or SBT, or the logs are obsolete according to the backup retention policy.

    Regarding your reclaimable space issue... if you have access to MOS, I suggest looking at Doc ID 315098.1.

    What I take from this document is that archivelogs are counted in the reclaimable space column if the free space in the FRA drops below 15% and the archivelogs are not required by current backups - that is, archivelogs that have been backed up and not yet deleted.
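
    A small sketch of the two checks discussed above, the deletion policy and the reclaimable space view (BACKED UP 1 TIMES TO DISK is just one possible policy, shown for illustration):

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

    SQL> select file_type, percent_space_used, percent_space_reclaimable
         from v$flash_recovery_area_usage;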

  • Backup RMAN when FRA is configured

    I configured a flash recovery area on an 11.2.0.4 db; I want the RMAN full backup to go to a location different from the FRA.

    I ran the RMAN full backup with a format specified...

    backup as compressed backupset database plus archivelog format '/backup/BKPD15/%U';

    I see some files go to the FRA and a few other small files are there in the /backup/BKPD15 location as well.


    What is the right way to take the backup to a location other than the FRA?

    Hello

    1. To my knowledge, you must use two FORMAT clauses, since with that command the FORMAT applies only to the archivelog backup and the database backup still goes to the FRA (see the demo below).

    Demo:

    SYS@demo1 27-DEC-15> show parameter db_rec

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest                string      /u04/temp_data/demo1
    db_recovery_file_dest_size           big integer 10G
    db_recycle_cache_size                big integer 0

    [oracle@host1 test_backup]$ pwd
    /u04/test_backup
    [oracle@host1 test_backup]$ ls -lt
    total 0


    RMAN> backup as compressed backupset database plus archivelog format '/u04/test_backup/%U';

    From December 27, 15 backup

    Current archived log

    using channel ORA_DISK_1

    channel ORA_DISK_1: starting compressed archived log backup set

    channel ORA_DISK_1: specifying the newspapers archived in the backup set

    archived journal thread = 1 = 3 sequence entry RECID = STAMP 3 = 899558724

    channel ORA_DISK_1: from room 1 to 27 December 15

    channel ORA_DISK_1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/01qpsba4_1_1 tag=TAG20151227T132524 comment=NONE   <-- non-FRA location

    channel ORA_DISK_1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    From December 27, 15 backup

    using channel ORA_DISK_1

    channel ORA_DISK_1: from complete compressed datafile backup set

    channel ORA_DISK_1: specifying datafile (s) in the backup set

    Enter a number of file datafile = 00005 name=/u02/demo1/oradata/demo1/example01.dbf

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/mssm_tbs.dbf 00002

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/assm_tbs.dbf 00007

    Enter a number of file datafile = 00001 name=/u02/demo1/oradata/demo1/system01.dbf

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/sysaux01.dbf 00003

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/undotbs01.dbf 00004

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/users01.dbf 00006

    channel ORA_DISK_1: from room 1 to 27 December 15

    channel ORA_DISK_1: finished piece 1 to December 27, 15

    piece handle=/u04/temp_data/demo1/DEMO1/backupset/2015_12_27/o1_mf_nnndf_TAG20151227T132526_c7z6cgf1_.bkp tag = TAG20151227T132526 comment = NONE - fra position control file

    channel ORA_DISK_1: complete set of backups, time: 00:01:05

    channel ORA_DISK_1: from complete compressed datafile backup set

    channel ORA_DISK_1: specifying datafile (s) in the backup set

    including the current control in the backup set file

    including current SPFILE in the backup set

    channel ORA_DISK_1: from room 1 to 27 December 15

    channel ORA_DISK_1: finished piece 1 to December 27, 15

    piece handle=/u04/temp_data/demo1/DEMO1/backupset/2015_12_27/o1_mf_ncsnf_TAG20151227T132526_c7z6fjyk_.bkp tag = TAG20151227T132526 comment = NONE - fra location spfile

    channel ORA_DISK_1: complete set of backups, time: 00:00:02

    Backup over at December 27, 15

    From December 27, 15 backup

    Current archived log

    using channel ORA_DISK_1

    channel ORA_DISK_1: starting compressed archived log backup set

    channel ORA_DISK_1: specifying the newspapers archived in the backup set

    archived journal thread = 1 = 4 sequence entry RECID = STAMP 4 = 899558794

    channel ORA_DISK_1: from room 1 to 27 December 15

    channel ORA_DISK_1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/04qpsbca_1_1 tag=TAG20151227T132634 comment=NONE   <-- non-FRA location

    channel ORA_DISK_1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    channel ORA_DISK_1: complete set of backups, time: 00:00:02

    Backup over at December 27, 15

    From December 27, 15 backup

    Current archived log

    using channel ORA_DISK_1

    channel ORA_DISK_1: starting compressed archived log backup set

    channel ORA_DISK_1: specifying the newspapers archived in the backup set

    archived journal thread = 1 = 4 sequence entry RECID = STAMP 4 = 899558794

    channel ORA_DISK_1: from room 1 to 27 December 15

    channel ORA_DISK_1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/04qpsbca_1_1 tag=TAG20151227T132634 comment=NONE

    channel ORA_DISK_1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    Correct way:

    RMAN> run {
    2> allocate channel backup_disk1 type disk;
    3> backup as COMPRESSED BACKUPSET DATABASE TAG 'DB' format '/u04/test_backup/%U.db'
    4> PLUS ARCHIVELOG format '/u04/test_backup/%U.arc';
    5> }

    output channel: ORA_DISK_1

    allocated channel: backup_disk1

    channel backup_disk1: SID = 48 type device = DISK

    From December 27, 15 backup

    Current archived log

    channel backup_disk1: compressed boot archived log backup set

    channel backup_disk1: specifying the newspapers archived in the backup set

    archived journal thread = 1 = 3 sequence entry RECID = STAMP 3 = 899558724

    archived journal thread = 1 = 4 sequence entry RECID = STAMP 4 = 899558794

    archived journal entry thread = 1 sequence = RECID 5 = 5 STAMP = 899559573

    channel backup_disk1: from room 1 to 27 December 15

    channel backup_disk1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/05qpsc4l_1_1.arc tag = TAG20151227T133933 comment = NONE

    channel backup_disk1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    From December 27, 15 backup

    channel backup_disk1: from complete compressed datafile backup set

    channel backup_disk1: specifying datafile (s) in the backup set

    Enter a number of file datafile = 00005 name=/u02/demo1/oradata/demo1/example01.dbf

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/mssm_tbs.dbf 00002

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/assm_tbs.dbf 00007

    Enter a number of file datafile = 00001 name=/u02/demo1/oradata/demo1/system01.dbf

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/sysaux01.dbf 00003

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/undotbs01.dbf 00004

    Enter a number of file datafile = name=/u02/demo1/oradata/demo1/users01.dbf 00006

    channel backup_disk1: from room 1 to 27 December 15

    channel backup_disk1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/06qpsc4m_1_1.db tag = comment DB = NONE

    channel backup_disk1: complete set of backups, time: 00:00:55

    channel backup_disk1: from complete compressed datafile backup set

    channel backup_disk1: specifying datafile (s) in the backup set

    including the current control in the backup set file

    including current SPFILE in the backup set

    channel backup_disk1: from room 1 to 27 December 15

    channel backup_disk1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/07qpsc6d_1_1.db tag = comment DB = NONE

    channel backup_disk1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    From December 27, 15 backup

    Current archived log

    channel backup_disk1: compressed boot archived log backup set

    channel backup_disk1: specifying the newspapers archived in the backup set

    archived journal entry thread = 1 sequence = RECID 6 = 6 STAMP = 899559632

    channel backup_disk1: from room 1 to 27 December 15

    channel backup_disk1: finished piece 1 to December 27, 15

    piece handle=/u04/test_backup/08qpsc6g_1_1.arc tag = TAG20151227T134032 comment = NONE

    channel backup_disk1: complete set of backups, time: 00:00:01

    Backup over at December 27, 15

    output channel: backup_disk1

    RMAN >

    RMAN > list backupset;

    List of backup sets

    ===================

    BS Key  Size       Device Type Elapsed Time Completion Time
    ------- ---------- ----------- ------------ ---------------
    5       5.89M      DISK        00:00:00     27-DEC-15

    BP Key: 5   Status: AVAILABLE  Compressed: YES  Tag: TAG20151227T133933
    Piece Name: /u04/test_backup/05qpsc4l_1_1.arc

    List of Archived Logs in backup set 5
    Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
    ---- ------- ---------- --------- ---------- ---------

    1 3 2871101 DECEMBER 27, 15 2875107 27 DECEMBER 15

    1 4 2875107 27 DECEMBER 15 2875234 27 DECEMBER 15

    1 5 2875234 27 DECEMBER 15 2875749 27 DECEMBER 15

    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    ------- ---- -- ---------- ----------- ------------ ---------------
    6       Full    360.54M    DISK        00:00:55     27-DEC-15
    BP Key: 6   Status: AVAILABLE  Compressed: YES  Tag: DB
    Piece Name: /u04/test_backup/06qpsc4m_1_1.db

    List of Datafiles in backup set 6
    File LV Type Ckp SCN    Ckp Time  Name
    ---- -- ---- ---------- --------- ----

    2875757 full 1 /u02/demo1/oradata/demo1/system01.dbf December 27, 15

    2 full 2875757 /u02/demo1/oradata/demo1/mssm_tbs.dbf 27 December 15

    3 full 2875757 /u02/demo1/oradata/demo1/sysaux01.dbf 27 December 15

    4 integer 2875757 /u02/demo1/oradata/demo1/undotbs01.dbf 27 December 15

    5 integer 2875757 /u02/demo1/oradata/demo1/example01.dbf 27 December 15

    6 integer 2875757 /u02/demo1/oradata/demo1/users01.dbf 27 December 15

    7 full 2875757 /u02/demo1/oradata/demo1/assm_tbs.dbf 27 December 15

    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    ------- ---- -- ---------- ----------- ------------ ---------------
    7       Full    1.03M      DISK        00:00:02     27-DEC-15
    BP Key: 7   Status: AVAILABLE  Compressed: YES  Tag: DB
    Piece Name: /u04/test_backup/07qpsc6d_1_1.db
    SPFILE Included: Modification time: 27-DEC-15
    SPFILE db_unique_name: DEMO1
    Control File Included: Ckp SCN: 2875780      Ckp time: 27-DEC-15

    BS Key  Size       Device Type Elapsed Time Completion Time
    ------- ---------- ----------- ------------ ---------------
    8       13.50K     DISK        00:00:00     27-DEC-15
    BP Key: 8   Status: AVAILABLE  Compressed: YES  Tag: TAG20151227T134032
    Piece Name: /u04/test_backup/08qpsc6g_1_1.arc

    List of Archived Logs in backup set 8
    Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
    ---- ------- ---------- --------- ---------- ---------

    1 6 2875749 DECEMBER 27, 15 2875786 27 DECEMBER 15

    It may be useful

    -Pavan Kumar N

  • Problems with special characters with Apex5

    Hello together,

    I hope, I'm right on this forum with this problem, I have found no other best match.

    I have a single database called apex12D on 12.1.0.2; it is my Apex 5 development database. Apex 5.0.2 is installed, as well as the German language pack for Apex. The OS is Oracle Linux 6.

    On a second machine I have configured Oracle REST Data Services with tomcat (installed from the repositories) and apache (also installed from the repositories), running on Oracle Linux 7. This machine is my "http server" for connecting to my Apex environment. Everything is working very well, but when I go to my Apex admin backend, all German special characters (Ä, Ü, Ö, ...) are displayed incorrectly, which does not look nice.

    Here is what I have done so far:

    On the database (apex12D), I set all the NLS parameters to German at database installation time:

    SYS@apex12D> select * from nls_database_parameters;
    
    
    PARAMETER VALUE
    ---------------------------------------------------------
    NLS_RDBMS_VERSION 12.1.0.2.0
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_LENGTH_SEMANTICS BYTE
    NLS_COMP BINARY
    NLS_DUAL_CURRENCY ?
    NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_SORT GERMAN
    NLS_DATE_LANGUAGE GERMAN
    NLS_DATE_FORMAT DD.MM.RR
    NLS_CALENDAR GREGORIAN
    NLS_NUMERIC_CHARACTERS ,.
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_CHARACTERSET AL32UTF8
    NLS_ISO_CURRENCY GERMANY
    NLS_CURRENCY ?
    NLS_TERRITORY GERMANY
    NLS_LANGUAGE GERMAN
    
    
    20 rows selected.
    

    The spfile parameters tell a different story. I tried to set them with "alter system set <parameter>=<value> scope=spfile" and restarted the database, but nothing changes.

    SYS@apex12D> show parameter nls
    
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    nls_calendar                         string      GREGORIAN
    nls_comp                             string      BINARY
    nls_currency                         string      $
    nls_date_format                      string      DD-MON-RR
    nls_date_language                    string      AMERICAN
    nls_dual_currency                    string      $
    nls_iso_currency                     string      AMERICA
    nls_language                         string      AMERICAN
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string      .,
    nls_sort                             string      BINARY
    nls_territory                        string      AMERICA
    nls_time_format                      string      HH.MI.SSXFF AM
    nls_time_tz_format                   string      HH.MI.SSXFF AM TZR
    nls_timestamp_format                 string      DD-MON-RR HH.MI.SSXFF AM
    nls_timestamp_tz_format              string      DD-MON-RR HH.MI.SSXFF AM TZR
    SYS@apex12D>
    

    Interestingly, when I write a pfile from my spfile and open it with vi, everything looks fine. But OK.

    apex12D.__data_transfer_cache_size=0
    apex12D.__db_cache_size=2030043136
    apex12D.__java_pool_size=50331648
    apex12D.__large_pool_size=385875968
    apex12D.__oracle_base='/usr/local/oracle'#ORACLE_BASE set from environment
    apex12D.__pga_aggregate_target=536870912
    apex12D.__sga_target=3221225472
    apex12D.__shared_io_pool_size=150994944
    apex12D.__shared_pool_size=570425344
    apex12D.__streams_pool_size=16777216
    *.audit_file_dest='/usr/local/oracle/admin/apex12D/adump'
    *.audit_trail='db'
    *.compatible='12.1.0.2.0'
    *.control_files='+DATA_QUM169/APEX12D/CONTROLFILE/current.505.898513523','+FRA_QUM169/APEX12D/CONTROLFILE/current.2094.898513525'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA_QUM169'
    *.db_create_online_log_dest_1='+DATA_QUM169'
    *.db_create_online_log_dest_2='+FRA_QUM169'
    *.db_domain=''
    *.db_name='apex12D'
    *.db_recovery_file_dest='+FRA_QUM169'
    *.db_recovery_file_dest_size=10240m
    *.diagnostic_dest='/usr/local/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=apex12DXDB)'
    *.local_listener='LISTENER_APEX12D'
    *.log_archive_dest_1='LOCATION=+FRA_QUM169'
    *.log_archive_dest_2='LOCATION=+DATA_QUM169'
    *.log_archive_format='%t_%s_%r.dbf'
    *.nls_currency='$'
    *.nls_date_language='GERMAN'
    *.nls_dual_currency='$'
    *.nls_iso_currency='GERMANY'
    *.nls_language='GERMAN'
    *.nls_territory='GERMANY'
    *.open_cursors=300
    *.pga_aggregate_target=512m
    *.processes=600
    *.recyclebin='OFF'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=3072m
    *.undo_tablespace='UNDOTBS1'
    

    ORDS on the 'http server' is configured under the orards user, so I have put this in its .bash_profile:

    NLS_LANG=GERMAN_GERMANY.AL32UTF8
    
    
    export NLS_LANG
    

    I did the same for the root user, just for testing, because only root can start tomcat and apache via systemctl. The apache and tomcat users have /bin/nologin as their shell, so I think their bash_profile would not be read even if I created one.

    But nothing really worked. Can anyone help please?

    Thank you and best regards,
    David

    Hello

    so the German special characters are fine on the first server, but not on the second? I had this behaviour when I set NLS_LANG=GERMAN_GERMANY.AL32UTF8 while installing the German language extension.

    Best regards

    Thomas

    (Grussle aus Böblingen)

  • *.db_file_name_convert, *.log_file_name_convert

    Hello guys,

    I have two servers:

    SERVER01 Production

    Server02 Standby

    I have these settings in the file pfile (standby):

    *.audit_file_dest='/u01/app/oracle/admin/BD/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/data/BD/controlfile/control01.ctl','/u01/app/oracle/flash_recovery_area/BD/control02.ctl'#Restore Controlfile
    *.control_management_pack_access='DIAGNOSTIC+TUNING'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/datafile'   (changed directory)
    *.db_name='bd'
    *.db_recovery_file_dest='/media/ORACLE_BACKUP/Atual/FRA'
    *.db_recovery_file_dest_size=85899345920
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=bdXDB)'
    *.log_archive_dest_1='LOCATION=/media/ORACLE_BACKUP/Atual/fra/BD/archivelog'
    *.log_archive_format='arc_%t_%s_%r.arc'
    *.log_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/onlinelog'   (changed directory)
    *.open_cursors=300
    *.pga_aggregate_target=11884560384
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=1610612736
    *.standby_file_management='AUTO'
    *.undo_tablespace='UNDOTBS1'

    First question:

    Once the process was finished, I did a create pfile from spfile, but it is the same content as above...

    So, what is the point of having these parameters on the standby:

    *.db_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/datafile'   (changed directory)

    *.log_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/onlinelog'   (changed directory)


    ??

    Can I remove these parameters (db_file_name_convert, log_file_name_convert) on the standby server? Would that cause any problem? How can I remove them?


    Thank you

    On server03 (the new standby) I already have /data/bd/datafile and /data/bd/onlinelog... so in that case these settings are not necessary. Do you understand now? I will create /data/bd/datafile and /data/bd/onlinelog on server03, the same as on server02...

    After that, in my view, it is not necessary for me to have these settings. Got it?

    The parameters above may be omitted if the directory structure of the primary is the same on the standby, or if you configure the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters on primary and standby with the correct values.

    OK, I got it. And I gave you the answer in my previous post.

    In this case, you can reset, change or delete these parameters in the pfile.

    Kind regards

    Juan M
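
    A small sketch of the two usual ways to remove them, depending on how the standby instance is started:

    -- with an spfile:
    SQL> alter system reset db_file_name_convert scope=spfile sid='*';
    SQL> alter system reset log_file_name_convert scope=spfile sid='*';
    -- then restart the instance

    -- with a plain pfile: delete the two *_name_convert lines from the
    -- pfile and restart.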

  • RMAN backup location must change and remove obsolete

    Dear Experts,

    I'm working on the following environment,

    Operating system: Windows server 2012 R2

    Oracle version: 11.2.0.1.0 release

    RMAN backup type: Cumulative (preferred by management, and I can't change it to a differential backup)

    My current situation,

    I set up RMAN with the H: drive for backups and G: for archivelogs.

    Recently my city was covered by flood waters and the server was not running for a week, so the level 0 backup with its delete obsolete did not run.

    Currently there is not enough space to run a level 1 and a level 0, so I need to change the backup location.

    My Question is,

    (1) I need to change the backup location from drive H: to drive G: so the backup completes correctly without running out of space.

    (2) After having taken a level 1 and a level 0, based on my redundancy of 1, the RMAN mechanism should delete the obsolete backups from both the G: and H: disks without failure.

    Kindly help me with how to achieve this. Thanks in advance.

    You can

    1. change DB_RECOVERY_FILE_DEST to G:

    2. run the backups

    3. let DELETE OBSOLETE remove the obsolete backups from H:

    and then

    4. change DB_RECOVERY_FILE_DEST back to H: as it was earlier

    In this way, you don't have to use FORMAT or CONFIGURE CHANNEL commands (I was referring to CONFIGURE CHANNEL, not ALLOCATE CHANNEL) to change the backup destination.

    Hemant K Collette
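
    A minimal sketch of those four steps (the folder names are illustrative; keep db_recovery_file_dest_size large enough for both runs):

    SQL> alter system set db_recovery_file_dest='G:\FRA' scope=both;

    RMAN> backup incremental level 0 database plus archivelog;
    RMAN> delete obsolete;

    SQL> alter system set db_recovery_file_dest='H:\FRA' scope=both;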

  • Duplicate active Linux64 on Windows32

    Hello

    I am trying an active duplicate from CentOS 5.9 64-bit to 32-bit Windows XP. From what I have read this should be possible, since at least they have the same endian format. However, when I try it, it fails with the error below. Tried searching to no avail. Please give me some ideas as to what could be the reason.

    RMAN output:

    C:\Documents and Settings\Oracle> rman target 'sys@ORCL as sysdba' auxiliary 'sys@ORAKAL as sysdba'

    Recovery Manager: release 11.2.0.3.0 - Production on Mon Nov 30 16:43:30 2015

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    target database Password:

    connected to target database: ORCL (DBID=1421427435)

    auxiliary database Password:

    connected to auxiliary database: ORAKAL (not mounted)

    RMAN> duplicate target database to ORAKAL from active database;

    Starting Duplicate Db at 30-NOV-15

    using target database control file instead of recovery catalog

    allocated channel: ORA_AUX_DISK_1

    channel ORA_AUX_DISK_1: SID = 192 type device = DISK

    contents of Memory Script:

    {
       sql clone "alter system set  db_name =
     ''ORCL'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       sql clone "alter system set  db_unique_name =
     ''ORAKAL'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       shutdown clone immediate;
       startup clone force nomount
       backup as copy current controlfile auxiliary format  'C:\APP\ORACLE\FAST_RECOVERY_AREA\ORAKAL\CONTROLFILE\O1_MF_C5RR507Q_.CTL';
       alter clone database mount;
    }

    executing Memory Script

    sql statement: alter system set  db_name =  ''ORCL'' comment= ''Modified by RMAN duplicate'' scope=spfile

    sql statement: alter system set  db_unique_name =  ''ORAKAL'' comment= ''Modified by RMAN duplicate'' scope=spfile

    Oracle instance shut down

    Oracle instance started

    Total System Global Area 523108352 bytes

    Fixed Size                  1385752 bytes

    Variable Size             314575592 bytes

    Database Buffers          201326592 bytes

    Redo Buffers                5820416 bytes

    From 30 November 15 backup

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID = 147 type device = DISK

    channel ORA_DISK_1: from data file copy

    copy the current control file

    output file name=/u01/app/oracle/product/11.2.0.3/dbs/snapcf_orcl.f tag=TAG20151130T064402 RECID=5 STAMP=897115443

    channel ORA_DISK_1: datafile copy complete, duration: 00:00:01

    Backup completed on 30 November 15

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-03002: failure of Duplicate Db command at 30/11/2015 16:44

    RMAN-04006: error from auxiliary database: ORA-12518: TNS:listener could not hand off client connection

    RMAN-03015: error occurred in stored script Memory Script

    RMAN-06136: ORACLE error from auxiliary database: ORA-03113: end-of-file on communication channel

    Process ID: 2652

    Session ID: 63 serial number: 5

    spfile of the auxiliary database:

    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files=''
    *.db_block_size=8192
    *.db_file_name_convert='/u01/app/oracle/oradata/ORCL/','C:\app\oracle\oradata'
    *.db_name='ORAKAL'
    *.db_recovery_file_dest='C:\app\oracle\fast_recovery_area'
    *.db_recovery_file_dest_size=4322230272
    *.db_unique_name='ORAKAL'
    *.diagnostic_dest='C:\app\oracle'
    *.log_file_name_convert='/u01/app/oracle/oradata/ORCL','C:\app\oracle\oradata'
    *.memory_target=524288000
    *.open_cursors=300
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.standby_file_management='auto'
    *.undo_tablespace='UNDOTBS1'

    listener.ora on the auxiliary server:

    SID_LIST_LISTENER =

    (SID_LIST =

    (SID_DESC =

    (GLOBAL_DBNAME = ORAKAL)

    (SID_NAME = ORAKAL)

    (ORACLE_HOME = C:\app\oracle\product\11.2.0.3)

    )

    )

    LISTENER =

    (DESCRIPTION_LIST =

    (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP) (HOST = < ip >)(PORT = 1521))

    )

    )

    ADR_BASE_LISTENER = C:\app\Oracle

    I managed to get past this error. The problem was that I had explicitly set control_files to an empty string '' (always check the alert log!). I simply set it to a specific file location instead. But now, during recovery, I got the error:

    ORA-00600: internal error code, arguments: [ktbrcl: CDLC not in CR]

    which is described in several support notes, but the cause there is a primary database that was upgraded from v10 or lower, whereas in my case I installed 11.2.0.3 cleanly on both machines.

    ---

    EDIT:

    Note: redo apply is not supported between Linux and Windows except with a standby database. This means that the backup must be a

    cold (consistent) backup, requiring no redo apply.  If redo apply is required to recover the database on the new platform, it will fail.

    A consistent (cold) backup method should be used for cross-platform duplication.
