dbwr_io_slaves or db_writer_processes

I'm having a little trouble understanding the situations in which dbwr_io_slaves would be a better choice than db_writer_processes. It would be good if someone could explain it clearly, or point me to documentation that explains it clearly.

I have a secondary issue related to this: one of the databases I deal with has a large number of dirty buffers (i.e. > 600,000). At the moment the only way I know to clear it and reduce the number is to restart the database from time to time (this is not something I want to do, and I need a better solution).

Some basic information:

The database gets information loaded in batch on a daily basis. The server the database runs on has two dual-core CPUs and 8 GB of RAM; a few other databases run on the same system as well.

Are my DBWR-related parameters configured correctly?
db_writer_processes = 1
dbwr_io_slaves = 2

There is no "golden rule" and no magic formula here. Configure the DBWR settings only if you really have a DBWR write problem. Even then, you would first look at the storage (SAN, NAS or DASD), the I/O channels and the OS settings, and exhaust all other options before looking at these parameters.

If you use "dbwr_io_slaves", you are asking Oracle to simulate asynchronous I/O, on the assumption that the operating system either doesn't implement asynchronous I/O or doesn't do it well.
Note that 'dbwr_io_slaves' and 'db_writer_processes' should not be set together.
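As a sketch (the values below are only illustrative, not recommendations), the two mutually exclusive configurations would look like this:

```sql
-- Option A: the OS provides real asynchronous I/O.
-- Use real writer processes and no I/O slaves.
ALTER SYSTEM SET db_writer_processes = 2 SCOPE = SPFILE;
ALTER SYSTEM SET dbwr_io_slaves      = 0 SCOPE = SPFILE;

-- Option B: the OS lacks (or is poor at) asynchronous I/O.
-- Keep a single DBWR and let I/O slaves simulate async writes.
-- ALTER SYSTEM SET db_writer_processes = 1 SCOPE = SPFILE;
-- ALTER SYSTEM SET dbwr_io_slaves      = 2 SCOPE = SPFILE;
```

Both parameters are static, so either way the instance must be restarted for the change to take effect.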

See the MetaLink Notes 62172.1 and 97291.1

Your first step is to determine whether you really have a DBWR performance problem.
There is no evidence of one in what you have presented so far.

The number of dirty buffers is more a function of how (and how many) transactions are performed, i.e. the load, rather than of DBWR throughput.
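To put numbers on that, a couple of standard dictionary queries (run as a DBA) can show whether DBWR is actually falling behind; v$bh and v$system_event are the documented views, and the threshold judgement is yours:

```sql
-- Current count of dirty buffers in the cache.
SELECT COUNT(*) AS dirty_buffers FROM v$bh WHERE dirty = 'Y';

-- If DBWR were not keeping up, these waits would be significant.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('free buffer waits', 'write complete waits');
```

A large dirty-buffer count with near-zero waits on these events means DBWR is coping and there is nothing to fix.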

Tags: Database

Similar Questions

  • RMAN backup fails

    Hello
    We back up our database with RMAN via NetBackup and we always have the following error (always on the same RBS_02.DBF file which is 26 GB):

INF - RMAN-08502: set_count=69062 set_stamp=663733174 creation_time=26-AUG-08
INF - RMAN-08522: input datafile fno=00003 name=S:\ORADATA\DATABASE\RBS_02.DBF
INF - RMAN-03026: error recovery releasing channel resources
    INF - RMAN-08031: released channel: ch00
    INF - RMAN-08031: released channel: ch01
    INF - RMAN-08031: released channel: ch02
    INF - RMAN-08031: released channel: ch03
    INF - RMAN-00571: ===========================================================
INF - RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    INF - RMAN-00571: ===========================================================
    INF - RMAN-10035: exception raised in RPC: O
INF - RMAN-10031: ORA-19583 occurred during call to DBMS_BACKUP_RESTORE.BACKUPPIECECREATE
    INF - RMAN-10035: exception raised in RPC: O
INF - RMAN-10031: ORA-19583 occurred during call to DBMS_BACKUP_RESTORE.BACKUPPIECECREATE
    INF - RMAN-10035: exception raised in RPC: O
INF - RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE.BACKUPPIECECREATE
INF - RMAN-03006: non-retryable error occurred during execution of command: backup
INF - RMAN-07004: unhandled exception during command execution on channel ch01
    INF - RMAN-10035: exception raised in RPC: ORA-27192: skgfcls: sbtclose2 returns the error - failed to close the file
    INF - ORA-19511: VxBSAEndTxn: failed with the error:

    Would you be kind enough to give your idea?

    Thank you.

    PS: ORA-19583: conversation interrupted due to error
Cause: An error occurred which forced the termination of the current backup or restore conversation.
    Action: There should be other error messages to help identify the cause of the problem. Correct the error and start another conversation.

    Hi user522961

    I found this solution on metalink

The NB_ORA_CLIENT and NB_ORA_SERV MML parameters are case-sensitive, so make sure you specify these settings in the same case when allocating or configuring the RMAN channels as they are set up on the Veritas server.
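For example (the host names below are hypothetical), the case of the values in the RMAN channel allocation must match what is registered on the NetBackup master server exactly:

```sql
RUN {
  -- NB_ORA_SERV / NB_ORA_CLIENT are case-sensitive and must match
  -- the names configured on the NetBackup master server.
  ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE'
    PARMS 'ENV=(NB_ORA_SERV=nbu-master01,NB_ORA_CLIENT=dbhost01)';
  BACKUP DATABASE;
  RELEASE CHANNEL ch00;
}
```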

Fix:

Use DB_WRITER_PROCESSES instead of DBWR_IO_SLAVES in the init.ora for the target
database, as below:

1. Make the following changes to the init.ora:

    dbwr_io_slaves = 0
    db_writer_processes =

    2. stop and restart the instance to make the changes take effect.

Oracle recommends using DB_WRITER_PROCESSES if asynchronous I/O is available on the
system.

FIX: Specify the correct pool name.
In the 'settings' section, if necessary, change the value of
NSR_DATA_VOLUME_POOL so that it reads 'Default'.

NOTE: The word "Default" is capitalized.

Can you please check that reverse name resolution works for the server?

Please also check:
(1) an incompatible libobk.so.1 (and the other documented symbolic
links);
(2) backup to disk (in backup_level0.rcv, edit it to read 'allocate
channel t1 type disk;');

make -f ins_rdbms.mk ioracle LLIBOBK=-lobk

    Please also read link: [http://news.support.veritas.com/dnewsweb.exe?cmd=article&group=veritas.netbackup.datacenter.english&item=13252&utag=]

The restore works.

Can you please post the syntax of your command and the parameters of your RMAN session?

    Thank you

    Published by: Hub on August 26, 2008 02:15

  • increase the db_writer_processes

My DB is showing 'free buffer waits'...
In the parameter file:

    DB_writer_processes = 1,
    DBwr_IO_slaves = 0

I think I have to increase db_writer_processes. How do I increase it?
I use an spfile to start the DB.
    alter system set db_writer_processes=2 scope=spfile;
    

    and restart db
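After the restart you can confirm the change took effect; in SQL*Plus, v$process lists one DBWn background process per writer:

```sql
SHOW PARAMETER db_writer_processes

-- One row per writer process: DBW0, DBW1, ...
SELECT pname FROM v$process WHERE pname LIKE 'DBW%' ORDER BY pname;
```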

  • cpu_count & db_writer_processes

    Hello

I created an instance for my 11.2.0.4.1 database, deployed on a 16-core Linux server.

After that, I set cpu_count = 2 and enabled the Resource Manager using instance caging.

Looking at the db_writer_processes parameter, it is 2, not 1.

It kept the value derived from the original cpu_count/8 (16/8).

Why was it not reduced to 1?

Regards

    I found the problem.

It depends on NUMA support having been activated.

This is the reference note:

db_writer_processes was changed from xx to xx due to the NUMA requirements. (Doc ID 1625316.1)
  • DB_WRITER_PROCESSES in Oracle 10.2.0.4 HP UX

    Hi all

I noticed that asynchronous I/O is not used with a regular file system on HP-UX (only with raw devices).
As far as I know, what I can do is configure multiple DB writers with the DB_WRITER_PROCESSES parameter.

The DB has CPU_COUNT = 16, but 2 DB writers (default value = CPU_COUNT / 8).

Would I gain any advantage by increasing DB_WRITER_PROCESSES?
Or is there a bottleneck in writing dirty buffers?

    Thank you

    Diego wrote:
    Hi all

I noticed that asynchronous I/O is not used with a regular file system on HP-UX (only with raw devices).
As far as I know, what I can do is configure multiple DB writers with the DB_WRITER_PROCESSES parameter.

The DB has CPU_COUNT = 16, but 2 DB writers (default value = CPU_COUNT / 8).

Would I gain any advantage by increasing DB_WRITER_PROCESSES?

    None

Or is there a bottleneck in writing dirty buffers?

If a bottleneck exists, it would be on the disk side, not the CPU side.
Server processes are at least 1000 times faster than mechanical drives;
a single DB_WRITER could keep 10 disk controllers busy and not be a bottleneck.
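One way to test that claim on your own system is to look at write times per data file rather than at CPU; v$filestat is the standard view (phywrts = physical writes, writetim = cumulative write time, populated when timed_statistics is on):

```sql
-- Data files with the highest cumulative write time: high values here
-- point at the storage, not at the number of DBWR processes.
SELECT df.name, fs.phywrts, fs.writetim
FROM   v$filestat fs
JOIN   v$datafile df ON df.file# = fs.file#
ORDER  BY fs.writetim DESC;
```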

  • db_writer_processes data sets

    Hi all

Can you please inform me about the things below? Also, please correct me if I'm wrong.

    1.
db_writer_processes controls the number of DB writers that are started when the instance is initialized. So, once the DB is up, let's say we have 6 DB writer processes; then the buffer cache is divided into 6 working data sets. Can you please explain what these data sets are? Or correct me if I'm wrong.

    2.
If that's true, how do we find which data set is handled by which DB writer process? I mean, how do we find the mapping (say, which address range is worked by the dbwr1 process, and so on)? How do I find this?

    3.
If we have several buffer caches, such as db_16k and db_32k, then how does this mapping work?

The reason behind this: we use 12 DB writer processes because we have 12 GB of SGA and 10 GB of PGA. Some of the time we see 'buffer busy waits'. To address that, we use multiple buffer pools. So, if that is the cause, I can suggest to management that we use only the default block size and not multiple ones. I already know that multiple buffer pools carry more memory-management overhead, so I would like to study this and learn from you.

My idea is to reduce it, and then start adding processes if necessary.

    This may resolve your doubts: Re: datasets db_writer_processes

Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

"Science is more than a body of knowledge; it's a way of thinking."
    Carl Sagan

  • DB_WRITER_PROCESSES

Hi guys,

The [Oracle reference | http://www.acs.ilstu.edu/docs/oracle/server.101/b10755/initparams055.htm] says for DB_WRITER_PROCESSES:

Parameter type: Integer
Default value: 1 or CPU_COUNT / 8, whichever is greater
Modifiable: No
Range of values: 1 to 20
Basic: No

Why does it say that it is not modifiable? I thought we could change this parameter to specify the number of DBWn processes we want started at instance startup.

    Thank you

    Don't forget that:

The database writer (DBWR) process:

1. performs all writing of data file buffers;
2. is responsible for the management of the buffer cache.

    Management of the buffer Cache.

1. When a buffer in the buffer cache is modified, it is marked "dirty".
2. DBWR keeps the buffer cache "clean" by writing dirty buffers to disk.
3. As buffers are filled and dirtied by user processes, the number of free buffers decreases. If the number of free buffers drops too low, user processes that need to read disk blocks into the cache are unable to find free buffers. DBWR manages the buffer cache so that user processes can always find free buffers.
4. An LRU (least recently used) algorithm keeps the most recently used data blocks in memory.

The DBWR process writes dirty buffers to disk under these conditions:

* When a server process moves a buffer to the dirty list and discovers that the dirty list has reached a threshold length, the server process signals DBWR to write.
* When a server process searches a threshold number of buffers on the LRU list without finding a free buffer, it stops searching and signals DBWR to write (because not enough free buffers are available and DBWR must make room for more).
* When a timeout occurs (every three seconds), DBWR signals itself.
* At a checkpoint, the log writer (LGWR) signals the DBWR process.
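The write activity triggered by those conditions shows up in v$sysstat; a quick way to watch DBWR at work (statistic names from the standard statistics set):

```sql
-- Cumulative DBWR-related statistics since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'DBWR%'
   OR  name IN ('physical writes', 'free buffer requested');
```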

    Reference: Concepts Oracle http://tahiti.oracle.com

See you soon,

    Francisco Munoz Alvarez
    http://oraclenz.WordPress.com

    Published by: F.Munoz Alvarez on November 30, 2012 11:05

  • Insert - Performance problem

    Hi Experts,

I am new to Oracle. I am asking for your help in fixing the performance problem of an insert query.

I have an insert query that fetches records from a partitioned table.

Background: the user indicates that the query ran in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours, but with no result. I checked the settings: the SGA is 9 GB, Windows - 4 GB. The DB block size is 8192, db_file_multiblock_read_count is 128, and the overall PGA target is 2457M.

    The parameters are given below


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
    DBFIPS_140 boolean FALSE
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
    aq_tm_processes integer 1
    ARCHIVE_LAG_TARGET integer 0
asm_diskgroups string
asm_diskstring string
    asm_power_limit integer 1
    asm_preferred_read_failure_groups string
    audit_file_dest string C:\APP\ADM
    audit_sys_operations Boolean TRUE

    AUDIT_TRAIL DB string
    awr_snapshot_time_offset integer 0
background_core_dump string partial
    background_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    BACKUP_TAPE_IO_SLAVES boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
cell_offload_compaction string ADAPTIVE


    cell_offload_decryption Boolean TRUE
    cell_offload_parameters string
    cell_offload_plan_display string AUTO
    cell_offload_processing Boolean TRUE
    cell_offloadgroup_name string
circuits integer
client_result_cache_lag big integer 3000
client_result_cache_size big integer 0
    clonedb boolean FALSE
    cluster_database boolean FALSE
    cluster_database_instances integer 1


cluster_interconnects string
    commit_logging string
    commit_point_strength integer 1
    commit_wait string
commit_write string
common_user_prefix string C##
    compatible string 12.1.0.2.0
    connection_brokers string ((TYPE = DED
    ((TYPE = EM
    control_file_record_keep_time integer 7
    control_files string G:\ORACLE\

    TROL01. CTL
    FAST_RECOV
    NTROL02. CT
    control_management_pack_access string diagnostic
    core_dump_dest string C:\app\dia
    bal12\cdum
    cpu_count integer 4
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
cursor_bind_capture_destination string memory+disk
    CURSOR_SHARING EXACT string

    cursor_space_for_time boolean FALSE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
db_big_table_cache_percent_target string 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TYPICAL
db_block_size integer 8192

db_cache_advice string ON
db_cache_size big integer 0
db_create_file_dest string
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
db_domain string
db_file_multiblock_read_count integer 128
db_file_name_convert string

    DB_FILES integer 200
    db_flash_cache_file string
    db_flash_cache_size big integer 0
db_flashback_retention_target integer 1440
db_index_compression_inheritance string NONE
db_keep_cache_size big integer 0
db_lost_write_protect string NONE
    db_name string ORCL
    db_performance_profile string
    db_recovery_file_dest string G:\Oracle\
    y_Area


db_recovery_file_dest_size big integer 12840M
db_recycle_cache_size big integer 0
db_securefile string PREFERRED
db_ultra_safe string
    db_unique_name string ORCL
    db_unrecoverable_scn_tracking Boolean TRUE
    db_writer_processes integer 1
    dbwr_io_slaves integer 0
    DDL_LOCK_TIMEOUT integer 0
    deferred_segment_creation Boolean TRUE
    dg_broker_config_file1 string C:\APP\PRO


    \DATABASE\
    dg_broker_config_file2 string C:\APP\PRO
    \DATABASE\
    dg_broker_start boolean FALSE
    diagnostic_dest channel directory
    disk_asynch_io Boolean TRUE
    dispatchers (PROTOCOL = string
    12XDB)
    distributed_lock_timeout integer 60
dml_locks integer 2076
dnfs_batch_size integer 4096

    dst_upgrade_insert_conv Boolean TRUE
    enable_ddl_logging boolean FALSE
    enable_goldengate_replication boolean FALSE
    enable_pluggable_database boolean FALSE
    event string
    exclude_seed_cdb_view Boolean TRUE
fal_client string
fal_server string
    FAST_START_IO_TARGET integer 0
    fast_start_mttr_target integer 0
    fast_start_parallel_rollback string LOW


    file_mapping boolean FALSE
    fileio_network_adapters string
filesystemio_options string
fixed_date string
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    global_txn_processes integer 1
    hash_area_size integer 131072
heat_map string
    hi_shared_memory_address integer 0

    hs_autoregister Boolean TRUE
    iFile file
    inmemory_clause_default string
inmemory_force string DEFAULT
    inmemory_max_populate_servers integer 0
    inmemory_query string ENABLE
    inmemory_size big integer 0
    inmemory_trickle_repopulate_servers_ integer 1
    percent
    instance_groups string
    instance_name string ORCL


    instance_number integer 0
    instance_type string RDBMS
    instant_restore boolean FALSE
    java_jit_enabled Boolean TRUE
    java_max_sessionspace_size integer 0
    JAVA_POOL_SIZE large integer 0
    java_restrict string no
    java_soft_sessionspace_limit integer 0
job_queue_processes integer 1000
large_pool_size big integer 0
    ldap_directory_access string NONE


    ldap_directory_sysauth string no.
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    listener_networks string
    LOCAL_LISTENER (ADDRESS = string
    = i184borac
    (NET) (PORT =
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string


log_archive_dest string
log_archive_dest_1 string
    LOG_ARCHIVE_DEST_10 string
    log_archive_dest_11 string
    log_archive_dest_12 string
    log_archive_dest_13 string
    log_archive_dest_14 string
    log_archive_dest_15 string
    log_archive_dest_16 string
    log_archive_dest_17 string
    log_archive_dest_18 string


    log_archive_dest_19 string
    LOG_ARCHIVE_DEST_2 string
    log_archive_dest_20 string
    log_archive_dest_21 string
    log_archive_dest_22 string
    log_archive_dest_23 string
    log_archive_dest_24 string
    log_archive_dest_25 string
    log_archive_dest_26 string
    log_archive_dest_27 string
    log_archive_dest_28 string


    log_archive_dest_29 string
    log_archive_dest_3 string
    log_archive_dest_30 string
    log_archive_dest_31 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
log_archive_dest_state_1 string enable


log_archive_dest_state_10 string enable
log_archive_dest_state_11 string enable
log_archive_dest_state_12 string enable
log_archive_dest_state_13 string enable
log_archive_dest_state_14 string enable
log_archive_dest_state_15 string enable
log_archive_dest_state_16 string enable
log_archive_dest_state_17 string enable
log_archive_dest_state_18 string enable
log_archive_dest_state_19 string enable
log_archive_dest_state_2 string enable

log_archive_dest_state_20 string enable
log_archive_dest_state_21 string enable
log_archive_dest_state_22 string enable
log_archive_dest_state_23 string enable
log_archive_dest_state_24 string enable
log_archive_dest_state_25 string enable
log_archive_dest_state_26 string enable
log_archive_dest_state_27 string enable
log_archive_dest_state_28 string enable
log_archive_dest_state_29 string enable
log_archive_dest_state_3 string enable

log_archive_dest_state_30 string enable
log_archive_dest_state_31 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string ARC%S_%R.%
    log_archive_max_processes integer 4

    log_archive_min_succeed_dest integer 1
    log_archive_start Boolean TRUE
    log_archive_trace integer 0
log_buffer big integer 28784K
    log_checkpoint_interval integer 0
    log_checkpoint_timeout around 1800
    log_checkpoints_to_alert boolean FALSE
    log_file_name_convert chain
max_dispatchers integer
max_dump_file_size string unlimited
max_enabled_roles integer 150


max_shared_servers integer
    max_string_size string STANDARD
    memory_max_target big integer 0
    memory_target large integer 0
    NLS_CALENDAR string GREGORIAN
nls_comp string BINARY
nls_currency string u
nls_date_format string DD-MON-RR
nls_date_language string ENGLISH
nls_dual_currency string C
nls_iso_currency string UNITED KIN

nls_language string ENGLISH
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string .,
nls_sort string BINARY
nls_territory string UNITED KIN
nls_time_format string HH24.MI.SS
nls_time_tz_format string HH24.MI.SS
nls_timestamp_format string DD-MON-RR
nls_timestamp_tz_format string DD-MON-RR
    noncdb_compatible boolean FALSE


    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 300
    Open_links integer 4
    open_links_per_instance integer 4
    optimizer_adaptive_features Boolean TRUE
    optimizer_adaptive_reporting_only boolean FALSE
    OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 12.1.0.2

    optimizer_index_caching integer 0
    OPTIMIZER_INDEX_COST_ADJ integer 100
    optimizer_inmemory_aware Boolean TRUE
optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging Boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines Boolean TRUE
os_authent_prefix string OPS$
    OS_ROLES boolean FALSE
    parallel_adaptive_multi_user Boolean TRUE


    parallel_automatic_tuning boolean FALSE
    parallel_degree_level integer 100
    parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
    parallel_execution_message_size integer 16384
    parallel_force_local boolean FALSE
    parallel_instance_group string
    parallel_io_cap_enabled boolean FALSE
    PARALLEL_MAX_SERVERS integer 160
    parallel_min_percent integer 0
    parallel_min_servers integer 16

    parallel_min_time_threshold string AUTO
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_servers_target integer 64
    parallel_threads_per_cpu integer 2
    pdb_file_name_convert string
    pdb_lockdown string
    pdb_os_credential string
    permit_92_wrap_format Boolean TRUE
pga_aggregate_limit big integer 4914M
pga_aggregate_target big integer 2457M

plscope_settings string IDENTIFIER
plsql_ccflags string
plsql_code_type string INTERPRETED
plsql_debug boolean FALSE
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings string DISABLE:ALL
pre_page_sga boolean TRUE
processes integer 300
    processor_group_name string
    query_rewrite_enabled string TRUE


query_rewrite_integrity string enforced
rdbms_server_dn string
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
recyclebin string on
redo_transport_user string
remote_dependencies_mode string TIMESTAMP
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE

    replication_dependency_tracking Boolean TRUE
    resource_limit Boolean TRUE
    resource_manager_cpu_allocation integer 4
resource_manager_plan string
result_cache_max_result integer 5
result_cache_max_size big integer 46208K
result_cache_mode string MANUAL
result_cache_remote_expiration integer 0
resumable_timeout integer 0
rollback_segments string
    SEC_CASE_SENSITIVE_LOGON Boolean TRUE

    sec_max_failed_login_attempts integer 3
sec_protocol_error_further_action string (DROP,3)
    sec_protocol_error_trace_action string PATH
    sec_return_server_release_banner boolean FALSE
serial_reuse string disable
service_names string ORCL
    session_cached_cursors integer 50
    session_max_open_files integer 10
sessions integer 472
sga_max_size big integer 9024M
sga_target big integer 9024M


    shadow_core_dump string no
    shared_memory_address integer 0
    SHARED_POOL_RESERVED_SIZE large integer 70464307
    shared_pool_size large integer 0
shared_server_sessions integer
    SHARED_SERVERS integer 1
    skip_unusable_indexes Boolean TRUE
    smtp_out_server chain
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spatial_vector_acceleration boolean FALSE


    SPFile string C:\APP\PRO
    \DATABASE\
    sql92_security boolean FALSE
    SQL_Trace boolean FALSE
sqltune_category string DEFAULT
standby_archive_dest string %ORACLE_HO
    standby_file_management string MANUAL
    star_transformation_enabled string TRUE
    statistics_level string TYPICAL
    STREAMS_POOL_SIZE big integer 0
    tape_asynch_io Boolean TRUE

    temp_undo_enabled boolean FALSE
thread integer 0
    threaded_execution boolean FALSE
    timed_os_statistics integer 0
    TIMED_STATISTICS Boolean TRUE
    trace_enabled Boolean TRUE
tracefile_identifier string
transactions integer 519
transactions_per_rollback_segment integer 5
    UNDO_MANAGEMENT string AUTO
    UNDO_RETENTION integer 900

    undo_tablespace string UNDOTBS1
    unified_audit_sga_queue_size integer 1048576
    use_dedicated_broker boolean FALSE
    use_indirect_data_buffers boolean FALSE
    use_large_pages string TRUE
    user_dump_dest string C:\APP\PRO
    \RDBMS\TRA
utl_file_dir string
    workarea_size_policy string AUTO
    xml_db_events string enable

    Thanks in advance

Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.

Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more.  All things being equal, cost is supposed to relate directly to the time spent, so I would expect the 12c plan to take much more time to run.

From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Again, the first three scans produce very few rows in 10g, less than 1,000 rows each, while the last table scan produces 454K rows.

It also looks as if something has gone wrong in the 10g optimizer plan - maybe a bug, which I gather Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of only has a cost of 23,949, or 24K.  So the maths does not add up in 10g.  In other words, maybe it's not really an optimal plan, because the 10g optimizer may have got its sums wrong, and 12c might be getting its sums right.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

The 12c plan starts with similar table scans, but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the main component of the final total cost of 95,373.

Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses.  And there is other information that you have not provided - see later.

You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no...".  However, hints are very difficult to use and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c, it may be worth trying just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to see if you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.

Both plans are parallel ones, which means that the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads happening at the same time.  (This is a simplification of how parallel query works.)  If the 10g and 12c systems do not have the same hardware configuration, then you would naturally expect different elapsed times to run the same parallel queries.  See the end of this answer for the additional information you could provide.

And I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 processor cores or more and 100's of disks in a big drive array, and maybe the 12c system has only 4 processor cores and 4 disks.  That would explain a lot about why 12c takes hours to run when 10g takes only 30 minutes.

    Remember what I said in my last reply:

"Without any information to the contrary, I guess the filter conditions have very low selectivity, the optimizer believes it needs most of the data in the table, and a full table scan or even a limited index scan is the 'best' way to run this SQL.  In other words, your query simply takes time because your tables are big and your query needs most of the data in these tables."

When dealing with very large tables and doing a parallel full table scan on them, the most important factor is the amount of raw hardware you throw at it.  A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, at least.  It could be that this is the main reason the 12c system is much slower than the 10g system, rather than the execution plan itself.

You could also provide us with the following information, which would allow a better analysis:

• Row counts for each table referenced in the query, and whether any of them are partitioned.
• Hardware configurations for both systems - the 10g and the 12c.  Number of processors, their model and speed, physical memory, number of disks.
• The disks are very important - do 10g and 12c have similar disk subsystems?  Are you using plain old disks, or do you have a SAN, or some sort of disk array?  Are the drive arrays identical in both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
• What is the size of the SGA in both systems?  Values for MEMORY_TARGET and SGA_TARGET.
• Does the CLAIM_FACTS_AK9 index exist on the 10g system?  I assume it does, but I would like that confirmed to be sure.

    John Brady

• free buffer waits

We are seeing a few 'free buffer waits' and I wanted to get comments from the forum participants, if you have some time, on how to resolve them.

This is on an HP-UX 11.31 Itanium system running Oracle 11.2.0.3. It is a data warehouse staging database, and there are about 60 merge statements running at the same time (also doing parallel DML), making millions of updates against 60 different tables with a total row count of about 2 billion.

    The dev team is put in a more efficient update statement so se, which can solve these problems, but still I would like to see if the number of db writer process, we did feel. As far as I know, we do not have asych that e/s configured on our BONES because of some bugs, we have seen in the past. The server has 14 processors and 95 gig of memory. Here are the top 5 events of our AWR report:
    Top 5 Timed Foreground Events
    
    Event                   Waits      Time(s)  Avg wait (ms) % DB time Wait Class 
    free buffer waits       319,324    261,188            818     46.08 Configuration 
    db file parallel read   134,710    62,404             463     11.01 User I/O 
    DB CPU                             60,818                     10.73   
    db file sequential read 11,783,603 26,032               2      4.59 User I/O 
    write complete waits    4,015      13,828            3444      2.44 Configuration 
    Does it make sense that I should increase the number of DB writers?

    db_writer_processes = 4 on our system. With 14 CPUs that is supposed to be sufficient, as I understand it. But with 60 dedicated server processes doing updates and only four DB writer processes doing the writes, I wonder whether it makes sense to increase the number of writers.
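    As a cross-check outside of AWR, the buffer-related waits and DBWR activity can be pulled straight from the dynamic performance views. This is only a sketch; v$system_event and v$sysstat are the standard views, and the event/statistic names below are the stock ones:

    ```sql
    -- Sketch: check buffer-related waits since instance startup.
    SELECT event, total_waits, time_waited_micro / 1e6 AS time_waited_sec
    FROM   v$system_event
    WHERE  event IN ('free buffer waits', 'write complete waits');

    -- DBWR workload counters since instance startup:
    SELECT name, value
    FROM   v$sysstat
    WHERE  name LIKE 'DBWR%';
    ```

    Comparing two snapshots of these counters taken an hour apart gives the same picture as the AWR interval above.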

    I am researching this on my own, but I would appreciate any input you have on this issue.

    Thank you
    Bobby

    Bobby, it's hard to argue against what works, but you might want to try db_writer_processes = 1 with dbwr_io_slaves = 16 and 24, to verify that 32 is the optimal choice for your environment.

    HTH - Mark D Powell.

  • Data Guard error

    Hi guys,

    Can someone help me with this error? ORA-16057: DGCID server not in the Data Guard configuration.

    These are the primary and standby configs. I just want to know what I'm missing.

    Primary config:
    icts001.__db_cache_size=20250099712
    icts001.__java_pool_size=16777216
    icts001.__large_pool_size=16777216
    icts001.__shared_pool_size=1056964608
    icts001.__streams_pool_size=117440512
    *.aq_tm_processes=6
    *.archive_lag_target=0
    *.audit_file_dest='/data/oradata/Admin/icts001/adump'
    *.audit_trail='DB'
    *.background_dump_dest='/data/oradata/Admin/icts001/bbdump'
    *.compatible='10.2.0.1.0'
    *.control_file_record_keep_time=30
    *.control_files='/data/oradata/icts001/control01.ctl','/dbworkspc01/multiplex/control02.ctl','/dbworkspc02/multiplex/control03.ctl'
    *.core_dump_dest='/data/oradata/Admin/icts001/cdump'
    *.cursor_sharing='SIMILAR'
    *.db_block_size=8192
    *.db_cache_size=4194304000
    *.db_domain=''
    *.db_file_multiblock_read_count=8
    *.db_name='icts001'
    *.db_recovery_file_dest='/dbworkspc02/flash_recovery_area'
    *.db_recovery_file_dest_size=16106127360
    *.db_unique_name='ICTS001'
    *.db_writer_processes=4
    *.dbwr_io_slaves=4
    *.dg_broker_start=FALSE
    *.dispatchers=''
    *.fal_client='icts001'
    *.fal_server='drs001','SMS'
    *.fast_start_mttr_target=30
    *.global_names=TRUE
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(ICTS001,SMS,drcs001)'
    icts001.log_archive_dest_1='location="/EMC_HD/oradata/archlog"','valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
    *.log_archive_dest_1='location=/oradata/EMC_HD/archlog valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
    *.log_archive_dest_2='SERVICE=drcs001 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=drcs001'
    *.log_archive_dest_3='SERVICE=ASM LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SMS'
    *.log_archive_dest_state_10='DEFER'
    icts001.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_3='ENABLE'
    *.log_archive_dest_state_4='DEFER'
    *.log_archive_dest_state_5='DEFER'
    *.log_archive_dest_state_6='DEFER'
    *.log_archive_dest_state_7='DEFER'
    *.log_archive_dest_state_8='DEFER'
    *.log_archive_dest_state_9='DEFER'
    *.log_archive_format='arch%t_%s_%r.arc'
    icts001.log_archive_format='arch%t_%s_%r.arc'
    *.log_archive_max_processes=15
    *.log_archive_min_succeed_dest=1
    icts001.log_archive_trace=0
    *.log_checkpoint_timeout=0
    *.log_checkpoints_to_alert=TRUE
    *.nls_date_format='YYYY-MM-DD HH24:MI:SS'
    *.open_cursors=8000
    *.parallel_max_servers=13
    *.parallel_min_servers=10
    *.parallel_threads_per_cpu=6
    *.pga_aggregate_target=15032385536
    *.processes=1500
    *.recovery_parallelism=6
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=FALSE
    *.service_names='icts001'
    *.session_cached_cursors=200
    *.sessions=1500
    *.sga_max_size=25769803776
    *.sga_target=21474836480
    *.shared_pool_size=1048576000
    *.shared_servers=0
    icts001.standby_archive_dest=''
    *.standby_file_management='AUTO'
    *.streams_pool_size=117440512

    Standby config:
    icts001.__db_cache_size=754974720
    icts001.__java_pool_size=16777216
    icts001.__large_pool_size=16777216
    icts001.__shared_pool_size=436207616
    icts001.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/product/10.2.0/db_1/admin/adump'
    *.background_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='/u02/oradata/controlfile/control01.ctl','/u02/flash_recovery_area/controlfile/control02.ctl','/u02/flash_recovery_area/controlfile/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/cdump'
    *.db_block_size=8192
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='/data/oradata/icts001','/u02/oradata','/data3/data2c/oradata/icts001','/u02/oradata','/data1/oradata/icts001','/u02/oradata'
    *.db_name='icts001'
    *.db_recovery_file_dest='/u02/flash_recovery_area'
    *.db_recovery_file_dest_size=47185920000
    *.db_unique_name='SMS'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=smsXDB)'
    *.fal_client='SMS'
    *.fal_server='PROD'
    *.instance_name='icts001'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(prod,sms)'
    *.log_archive_dest_1='LOCATION=use_db_recovery_file_dest VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SMS'
    *.log_archive_dest_2='service=PROD valid_for=(online_logfiles,primary_role) db_unique_name=icts001'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='ENABLE'
    *.log_file_name_convert='/data/oradata/icts001','/u02/flash_recovery_area/onlinelog','/dbworkspc01/multiplex','/u02/flash_recovery_area/onlinelog','/data3/data2c/oradata/icts001','/u02/flash_recovery_area/standbylog'
    *.open_cursors=300
    *.pga_aggregate_target=409993216
    *.processes=5000
    *.remote_login_passwordfile='exclusive'
    *.service_names='SMS'
    *.sessions=5505
    *.sga_target=1231028224
    *.standby_file_management='auto'
    *.thread=1
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/udump'

    Kind regards
    cmadiam

    You have *1* primary & *2* standby databases.

    In your primary you have configured:

    log_archive_config='DG_CONFIG=(ICTS001,SMS,drcs001)'

    The syntax is log_archive_config='DG_CONFIG=(primary_db_unique_name,standby1_db_unique_name,standby2_db_unique_name)' and so on.

    What have you configured on the standby side?

    On the standby SMS it should be:

    log_archive_config='DG_CONFIG=(ICTS001,SMS)'

    On the standby drcs001 it should be:

    log_archive_config='DG_CONFIG=(ICTS001,drcs001)'

    The names listed should be the DB_UNIQUE_NAMEs.
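    A quick way to confirm the unique names actually in use, and to see why an archive destination is failing, is a query like this (a sketch; v$database and v$archive_dest are standard views):

    ```sql
    -- On each database, confirm the DB_UNIQUE_NAME actually in use:
    SELECT db_unique_name FROM v$database;

    -- On the primary, check the status and error text of each destination:
    SELECT dest_id, status, error
    FROM   v$archive_dest
    WHERE  status <> 'INACTIVE';
    ```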

  • what setting to change to avoid "unable to allocate new log"

    Hello everyone.
    I'm on 9i R2 on Windows Server 2003 Standard Edition with iSCSI-attached storage. Disk 1 holds the datafiles as well as one redo log group in the data directory. The other 2 redo groups are on 2 separate local disks, along with the archive logs (in another folder).

    I'm getting "cannot allocate new log" errors every two days, and in Event Viewer: "Archive process error: ORACLE Instance prdps - Cannot allocate log, archival required".


    I do not know what setting I should change.

    Current configuration:
    db_writer_processes 1
    dbwr_io_slaves 0



    Here is the output from v$sysstat:
    STATISTIC#  NAME                                 CLASS      VALUE
            49  DBWR checkpoint buffers written          8    7410575
            50  DBWR transaction table writes            8       7748
            51  DBWR undo block writes                   8    4600265
            52  DBWR revisited being-written buffer      8       5313
            53  DBWR make free requests                  8      26383
            54  DBWR free buffers found                  8   19838373
            55  DBWR lru scans                           8      21831
            56  DBWR summed scan depth                   8   21265425
            57  DBWR buffers scanned                     8   21265425
            58  DBWR checkpoints                         8       1719
            59  DBWR cross instance writes              40          0
            60  DBWR fusion writes                      40          0


    Here is the alert.log:
    Fri Mar 06 00:25:52 2009
    ARC0: Completed archiving log 1 thread 1 sequence 7004
    Fri Mar 06 00:25:54 2009
    Thread 1 advanced to log sequence 7006
      Current log# 3 seq# 7006 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO03A.LOG
      Current log# 3 seq# 7006 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO03B.LOG
      Current log# 3 seq# 7006 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO03C.LOG
    Fri Mar 06 00:25:54 2009
    ARC1: Evaluating archive   log 2 thread 1 sequence 7005
    ARC1: Beginning to archive log 2 thread 1 sequence 7005
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07005.ARC'
    Fri Mar 06 00:26:03 2009
    Thread 1 advanced to log sequence 7007
      Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
      Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
      Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Fri Mar 06 00:26:03 2009
    ARC0: Evaluating archive   log 2 thread 1 sequence 7005
    ARC0: Unable to archive log 2 thread 1 sequence 7005
          Log actually archived by another process
    ARC0: Evaluating archive   log 3 thread 1 sequence 7006
    ARC0: Beginning to archive log 3 thread 1 sequence 7006
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07006.ARC'
    Fri Mar 06 00:26:15 2009
    ARC1: Completed archiving log 2 thread 1 sequence 7005
    ARC1: Evaluating archive   log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
          Log actually archived by another process
    Fri Mar 06 00:26:16 2009
    Thread 1 cannot allocate new log, sequence 7008
    All online logs needed archiving
      Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
      Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
      Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Thread 1 advanced to log sequence 7008
      Current log# 2 seq# 7008 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO02A.LOG
      Current log# 2 seq# 7008 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO02B.LOG
      Current log# 2 seq# 7008 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO02C.LOG
    Fri Mar 06 00:26:16 2009
    ARC1: Evaluating archive   log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
          Log actually archived by another process
    ARC1: Evaluating archive   log 1 thread 1 sequence 7007
    ARC1: Beginning to archive log 1 thread 1 sequence 7007


    Should I just change
    db_writer_processes 1
    dbwr_io_slaves 2

    Thank you
    Any help appreciated.

    This message indicates that Oracle wants to reuse a redo log file, but the
    associated checkpoint is not complete. In this case, Oracle waits until
    the checkpoint has completed entirely. This situation is typically
    encountered when transactional activity is heavy.

    Look for these two statistics:
    - background checkpoints started
    - background checkpoints completed

    These two statistics should not differ by more than one. If that is

    not the case, your database is waiting on checkpoints: LGWR cannot
    continue with the next write operations until the checkpoint
    completes.
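    Those two statistics can be read straight from v$sysstat (a sketch; the names below are the standard statistic names):

    ```sql
    -- Compare checkpoints started vs. completed; they should differ by at most 1.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('background checkpoints started',
                    'background checkpoints completed');
    ```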

    Three reasons may explain this difference:

    - a checkpoint frequency that is too high
    - checkpoints that start but do not complete
    - a DBWR that writes too slowly

    The way to resolve incomplete checkpoints is by tuning checkpoints and
    redo logs:

    (1) give the checkpoint process more time to cycle through the logs
    - add more redo log groups
    - increase the size of the redo logs
    (2) reduce the frequency of checkpoints
    - increase LOG_CHECKPOINT_INTERVAL
    - increase the size of the online redo logs
    (3) improve the efficiency of checkpoints by enabling the CKPT process:
    CHECKPOINT_PROCESS = TRUE
    (4) set LOG_CHECKPOINT_TIMEOUT = 0. This disables checkpointing based on
    a time interval.
    (5) another way to resolve this error is to get DBWR to write the dirty
    buffers to disk more quickly. The parameter associated with this task is:

    DB_BLOCK_CHECKPOINT_BATCH.

    DB_BLOCK_CHECKPOINT_BATCH specifies the number of blocks dedicated
    within the write batch size to checkpoint writes. When you want to
    speed up checkpoints, increase this value.
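    Remedy (1) can be sketched in SQL; the member paths and the size here are only illustrative guesses modelled on the layout shown in the alert.log above, not values from the post:

    ```sql
    -- Hypothetical example: add a fourth redo log group, larger than the existing ones,
    -- mirrored across the same three drives as groups 1-3.
    ALTER DATABASE ADD LOGFILE GROUP 4 (
      'E:\ORACLE\ORADATA\PRDPS\REDO04A.LOG',
      'F:\ORACLE\ORADATA\PRDPS\REDO04B.LOG',
      'G:\ORACLE\ORADATA\PRDPS\REDO04C.LOG'
    ) SIZE 200M;
    ```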

  • RAC init parameters

    Hello

    We have a 2-node RAC database, version 11.2.0.4, on the Linux platform.

    There is a difference of opinion between our system DBA and application DBA about a RAC init parameter.

    The system DBA defined *.processes = 2000 in the RAC DB init file.

    Does this mean that 2000 processes can connect to instance 1 and 2000 processes can connect to instance 2 at the same time?

    Or does it mean that 2000 processes can connect to the whole DB?

    We (the application DBAs) say 2000 processes can connect to instance 1 and 2000 processes can connect to instance 2 at the same time.

    The SYSDBA says 2000 processes can connect to the whole DB, i.e. 1500 to instance 1 and 500 (max) to instance 2, but the total will not be more than 2000.

    Help, please.

    Thank you

    Ashish

    We (the application DBAs) say 2000 processes can connect to instance 1 and 2000 processes can connect to instance 2 at the same time.

    The SYSDBA says 2000 processes can connect to the whole DB, i.e. 1500 to instance 1 and 500 (max) to instance 2, but the total will not be more than 2000.

    The application DBA is correct. 2000 processes can connect to instance 1 and 2000 processes can connect to instance 2 at the same time. Also, here is the link to the documentation on this parameter: http://docs.oracle.com/database/121/REFRN/refrn10175.htm#REFRN10175

    Note that it says "multiple instances can have different values".

    Finally, ask your SYSDBA to check GV$RESOURCE_LIMIT in an Oracle RAC database. Note that each instance has its own limit. Here is the PROCESSES setting of my 3-node RAC database:

    SQL> show parameter processes

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    aq_tm_processes                      integer     1
    db_writer_processes                  integer     1
    gcs_server_processes                 integer     2
    global_txn_processes                 integer     1
    job_queue_processes                  integer     20
    log_archive_max_processes            integer     5
    processes                            integer     1500

    So, I can have up to 1500 processes. Thankfully, that is per instance, because here is what my current usage looks like:

    SQL> select inst_id, resource_name, current_utilization, limit_value
      2  from gv$resource_limit where resource_name = 'processes';

       INST_ID RESOURCE_NAME                  CURRENT_UTILIZATION LIMIT_VALU
    ---------- ------------------------------ ------------------- ----------
             1 processes                                      724 1500
             2 processes                                      709 1500
             3 processes                                      720 1500

    That is more than 2000 processes in total across all instances, higher than my parameter value. This should prove that the application DBA's statement is correct.

    HTH,
    Brian

  • ORA-00304: requested INSTANCE_NUMBER is busy

    Hello

    I still have a problem with 12c RAC.  After fixing a problem with Oracle Support (spfile in ASM, could not be read, an 'old problem'), I now have the spfile outside ASM. I fixed it on the first node, but could not fix it on the second node. The second node is a mystery to me. What is going on?  The second node does not find the spfile, and I do not have an spfile outside ASM, only the one on the first node. I created a pfile from the spfile on the first node and copied it to the second node, but now I get:

    ORA-00304: requested INSTANCE_NUMBER is busy

    Yes, there is 'help' on the net... but not a solution that fits

    ORA-00304: requested INSTANCE_NUMBER is busy
    Action: Either

    (a) specify another INSTANCE_NUMBER,
    (b) shut down the running instance with this number, or
    (c) wait for instance recovery to complete on the instance with this number.

    But I can't change it.


    Does the spfile behave like a pfile for a single node's instance?

    I copied it from one node:


    bsr_1.__data_transfer_cache_size=0
    bsr_1.__db_cache_size=46170898432
    bsr_1.__java_pool_size=1879048192
    bsr_1.__large_pool_size=2147483648
    bsr_1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    bsr_1.__pga_aggregate_target=37849399296
    bsr_1.__sga_target=56639881216
    bsr_1.__shared_io_pool_size=268435456
    bsr_1.__shared_pool_size=5637144576
    bsr_1.__streams_pool_size=0
    *._disk_sector_size_override=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/BSR/adump'
    *.audit_trail='db'
    true.cluster_database=false
    *.cluster_database=FALSE
    bsr_1.cluster_database=TRUE
    *.compatible='12.1.0.0.0'
    *.control_files='+data/BSR/CONTROLFILE/current.278.840122161','+data/BSR/CONTROLFILE/current.279.840122161'
    *.db_block_size=16384
    *.db_create_file_dest='+DATA'
    *.db_domain=''
    *.db_name='SSB'
    *.db_recovery_file_dest='+DATA'
    *.db_recovery_file_dest_size=1000g
    *.db_writer_processes=8
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=bsrXDB)'
    *.enable_pluggable_database=true
    *.log_archive_format='%t_%s_%r.dbf'
    *.max_string_size='EXTENDED'
    *.memory_max_target=94371840000
    *.memory_target=94371840000
    *.nls_language='GERMAN'
    *.nls_length_semantics='CHAR'
    *.nls_territory='GERMANY'
    *.open_cursors=3000
    *.open_links=100
    *.open_links_per_instance=100
    *.processes=3000
    *.remote_login_passwordfile='exclusive'
    *.sessions=3305


    The nodes have exactly the same hardware. I do not understand why

    *.cluster_database=FALSE

    bsr_1.cluster_database=TRUE



    So, what is the best way to start the second node and create an spfile?

    Thanks for help.



    I corrected it: I changed bsr_1.cluster_database = TRUE to bsr_2.cluster_database = TRUE and started the database.
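    For reference, the working per-instance entries can be sketched as a pfile fragment. The instance_number and thread values below are assumptions for a typical two-node layout, not values taken from the post:

    ```
    *.cluster_database=TRUE
    bsr_1.instance_number=1    # assumption
    bsr_2.instance_number=2    # assumption
    bsr_1.thread=1             # assumption
    bsr_2.thread=2             # assumption
    ```

    With SID-prefixed entries like these, each node can start from the same shared pfile/spfile without colliding on INSTANCE_NUMBER.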

  • RMAN confusion

    Hi all

    I have some confusion here related to archive log backups!

    My archive logs are quite big in size, and I take an RMAN backup with the following command:

    RMAN > backup archivelog all;

    But it takes a long time to back up the archive logs. Is there a way to improve this performance so that the big archive logs can also be backed up quickly?

    Please advise.

    Kind regards

    Michel

    Hi Michel,

    Are you backing up the files to tape or to disk? How many channels do you have configured? What is the amount of RAM and CPU on your server?

    Check whether your operating system supports asynchronous I/O; if it does not, you can configure DBWR_IO_SLAVES.

    Take a look at the link below, which will give you more details on RMAN tuning.

    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmtunin.htm#BRADV99981
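    Beyond I/O slaves, one standard way to shorten archivelog backups is to spread them across several channels (a hedged sketch using stock RMAN syntax; the channel count and the FILESPERSET value are arbitrary examples, not tuned recommendations):

    ```
    RMAN> RUN {
            ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
            ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
            BACKUP ARCHIVELOG ALL FILESPERSET 8;
          }
    ```

    With two channels, RMAN backs up two backup sets in parallel, which helps when the bottleneck is a single output stream rather than the source disks.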

    Kind regards

    Suntrupth

  • ORA-04031: unable to allocate 4120 bytes of shared memory

    Guys,

    OS = RHEL 3, 64-bit
    10.2.0.5
    2-node RAC DB

    We continuously receive the error below in the alert log file.

    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00604: error occurred at recursive SQL level 1
    ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool", "select name,online$,contents...", "Typecheck", "kgghteInit")
    ORA-00604: error occurred at recursive SQL level 1
    ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool", "select count(*) from sys.job...", "sql area", "tmp")
    ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool", "select /*+ index(idl_ub1$ i_...", "Typecheck", "kgghteInit")
    -etc...

    shared_pool_size = 608M
    SGA = 9 GB

    Parameter file:

    processes=2000
    sessions=2205
    sga_max_size=26843545600
    shared_pool_size=637534208
    large_pool_size=0
    java_pool_size=117440512
    streams_pool_size=536870912
    filesystemio_options=SETALL
    sga_target=0
    control_files=STGRACDB01/STGDB/control/control01.ctl, STGRACDB02/STGDB/control/control02.ctl, /STGRACDB03/STGDB/control/control03.ctl
    control_file_record_keep_time=14
    db_block_size=8192
    __db_cache_size=7516192768
    db_cache_size=7516192768
    db_writer_processes=2
    compatible=10.2.0.5
    log_archive_dest_1=LOCATION=/STGRACDB04/STGDB/archives
    log_archive_format=%t_%s_%r.dbf
    log_buffer=30455808
    db_files=800
    db_file_multiblock_read_count=16
    cluster_database=TRUE
    cluster_database_instances=2
    thread=2
    instance_number=2
    undo_management=AUTO
    undo_tablespace=UNDOTBS2
    undo_retention=900
    remote_login_passwordfile=EXCLUSIVE
    instance_name=STGDB2
    service_names=STGDB, SYS$STRMADMIN.CAPTURE_DEVTRAVST1_QUEUE.STGDB.SAPIAS.COM, STGDB.SAPIAS.COM
    dispatchers=(PROTOCOL=TCP) (SERVICE=STGDBXDB)
    session_cached_cursors=70
    job_queue_processes=10
    usequeue_interval=1
    cursor_sharing=SIMILAR
    hash_area_size=524288
    background_dump_dest=/app/oracle/admin/STGDB/bdump
    user_dump_dest=/app/oracle/admin/STGDB/udump
    core_dump_dest=/app/oracle/admin/STGDB/cdump
    audit_file_dest=/app/oracle/admin/STGDB/adump
    session_max_open_files=20
    open_links=150
    open_links_per_instance=150
    optimizer_features_enable=10.2.0.4
    sort_area_size=0
    db_name=STGDB
    db_unique_name=STGDB
    open_cursors=300
    optimizer_mode=CHOOSE
    _optimizer_cost_based_transformation=OFF
    pga_aggregate_target=996147200
    _pga_max_size=1288490188
    _optimizer_join_elimination_enabled=TRUE
    optimizer_secure_view_merging=FALSE


    Can someone help me with this issue? I think this is not a simple matter to ignore, and it keeps getting thrown into the alert log file.

    Thank you
    Hari

    Why not just increase your shared_pool_size parameter? It is not very big at the moment; maybe make it 1G.
    And while you're at it, you need to sort out the PGA. You have the overall target set at less than 1G, and at the same time you have set the per-session limit _pga_max_size to more than 1G, which is impossible.
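    The two suggestions could be sketched as init parameter lines. The exact values here are only illustrations of the advice above (1G shared pool, a PGA target comfortably above the per-session cap); note that _pga_max_size is a hidden parameter and is normally changed only under Oracle Support's guidance:

    ```
    shared_pool_size     = 1073741824   # 1G, as suggested above
    pga_aggregate_target = 2147483648   # assumption: raise the target above the per-session cap
    ```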
