pga_aggregate_target

Hi gems...

In the Oracle 11gR2 Administrator's Guide documentation, Part I, Chapter 6, "Managing Memory", I read:

"You
can omit the statements that set these parameter (PGA_AGGREGATE_TARGET) values to zero and
leave either or both of the values as positive numbers. In this case, the
values act as minimum values for the sizes of the SGA or instance
PGA."


But at the end of the chapter, I read:

"You can control this amount by setting the
initialization parameter PGA_AGGREGATE_TARGET. Oracle Database then tries to ensure
that the total amount of PGA memory allocated across all database server processes
and background processes never exceeds this target."



These two passages contradict each other... please make it clear to me... thanks in advance...

Yes, the documentation is correct, but the first passage and the second passage apply to different memory-management modes. If you manage memory with AMM (Automatic Memory Management, i.e. MEMORY_TARGET is set), the first statement applies: a nonzero PGA_AGGREGATE_TARGET acts only as a minimum for the instance PGA. If instead you manage the SGA with ASMM or manually (the SGA_TARGET and SGA_MAX_SIZE parameters) and size the PGA yourself, the second statement applies: PGA_AGGREGATE_TARGET is the target Oracle tries not to exceed.
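A quick way to see which mode an instance is in is to query the relevant parameters. This is only a sketch, assuming access to v$parameter:

```sql
-- If memory_target > 0, AMM is active and a nonzero pga_aggregate_target
-- acts only as a minimum for the instance PGA.
-- If memory_target = 0, pga_aggregate_target is the target Oracle tries
-- not to exceed for total PGA across server and background processes.
SELECT name, display_value
FROM   v$parameter
WHERE  name IN ('memory_target', 'memory_max_target',
                'sga_target', 'sga_max_size', 'pga_aggregate_target');
```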

Tags: Database

Similar Questions

  • WARNING: pga_aggregate_target (96) cannot exceed memory_target

    Hi all

    11.2.0.1
    OEL 6.4

    I am monitoring our PROD alert log and I keep finding occurrences of this warning:
    WARNING: pga_aggregate_target (96) cannot exceed memory_target (101) - sga_target (0) or min SGA (6)
    Can I just ignore this warning? Does it affect performance?


    Thank you

    zxy

    Here your memory_target is 6464M, which is about 6.4 G; your pga_aggregate_target = 6174015488, which is about 6.1 G; and your sga_max_size is 6.4 GB.

    According to oracle documentation

    Make sure MEMORY_TARGET/MEMORY_MAX_TARGET is set to at least the sum of SGA_MAX_SIZE/SGA_TARGET and the PGA_AGGREGATE_TARGET parameter.

    Your memory_target must therefore be more than sga_max_size + pga_aggregate_target.

    That is why your memory_target should be more than 6.4 + 6.1 = 12.5 G.

    For example:

    ALTER SYSTEM SET memory_max_target = 13G SCOPE = SPFILE;
    ALTER SYSTEM SET memory_target = 13G SCOPE = SPFILE;

    and restart the DB.
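    After the restart you can verify the new settings with a quick query (a sketch, assuming the usual v$parameter view):

```sql
-- memory_target should now be at least sga_max_size + pga_aggregate_target
SELECT name, display_value
FROM   v$parameter
WHERE  name IN ('memory_target', 'memory_max_target',
                'sga_max_size', 'pga_aggregate_target');
```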

    Hope this can help you :)

    Edited: sga_max_target to sga_max_size

  • ORA-04032: pga_aggregate_target must be set before moving on to auto mode

    Hello

    I use Windows Server 2008 and Oracle 10g R2.
    I set PGA_AGGREGATE_TARGET to 0 and WORKAREA_SIZE_POLICY to MANUAL and bounced the database.
    But while trying to start the database, it shows me the error below...

    ORA-04032: pga_aggregate_target must be set before moving on to auto mode

    Can someone tell me how can I start the database or change the setting, thank you.

    Forget EM for a moment. Create a pfile from the spfile with the command create pfile from spfile . In that file, change the parameter to the value MANUAL, and start the instance using startup pfile=<path>. Post the results of the steps here, copy-pasted from the SQL*Plus terminal.
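    A sketch of those steps in SQL*Plus (the pfile path is only an example):

```sql
-- 1. Dump the spfile to an editable pfile (example path)
CREATE PFILE='C:\temp\initorcl.ora' FROM SPFILE;

-- 2. Edit the pfile in a text editor: either set
--    workarea_size_policy=MANUAL or give pga_aggregate_target
--    a nonzero value, so the two settings are consistent.

-- 3. Start the instance from the edited pfile
STARTUP PFILE='C:\temp\initorcl.ora';

-- 4. Once it starts cleanly, rebuild the spfile from it
CREATE SPFILE FROM PFILE='C:\temp\initorcl.ora';
```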

    Aman...

  • Relevance of the parameter PGA_AGGREGATE_TARGET

    Hello

    The PGA_AGGREGATE_TARGET parameter is defined as below in the Oracle documentation.

    + "PGA_AGGREGATE_TARGET specifies the overall target PGA memory available for all server processes associated with the instance. +

    Whatever value we set for this parameter - for example, I set it to 200M - Oracle still increases it according to its needs and the available physical RAM.

    On one of my databases, I see PGA utilization of 800M in AWR reports.

    If Oracle internally increases the PGA beyond the value we set, what is the relevance of the PGA_AGGREGATE_TARGET parameter?

    Maybe I am missing something in the parameter's definition (meaning); please correct me if I am wrong.

    My Oracle version is 10.2.0.3.

    Thanks in advance!


    Best regards
    oratest

    There is an explanation in the 10g Performance Tuning Guide, Chapter 7, under PGA Memory Management.

    Query V$PGA_TARGET_ADVICE. If ESTD_OVERALLOC_COUNT is nonzero for the PGA_TARGET_FOR_ESTIMATE row that corresponds to your setting of PGA_AGGREGATE_TARGET, then Oracle will ignore your setting and allocate more if necessary.
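    For example, a sketch of such a query (column list abbreviated):

```sql
-- Rows with estd_overalloc_count > 0 are targets Oracle would have to
-- exceed; choose a pga_aggregate_target where the count drops to 0.
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       estd_pga_cache_hit_percentage               AS est_hit_pct,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```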

    Kind regards
    Bob

  • How to increase SGA_MAX_SIZE and PGA_AGGREGATE_TARGET in 10g RAC

    All,

    We built the new RAC environment with the following configurations.

    Linux - RHEL5
    Oracle - 10.2.0.4.0
    2 node RAC environment.

    The database is now live.

    1. Now I need to increase SGA_MAX_SIZE and PGA_AGGREGATE_TARGET in this RAC environment.
    Can anyone tell me the steps? Do I need to stop the database?
    Note: OEM is not set up for this one yet.

    2. Performance testers are complaining about high response times. Any pointers on this one?

    1. Now I need to increase SGA_MAX_SIZE and PGA_AGGREGATE_TARGET in this RAC environment.
    Can anyone tell me the steps? Do I need to stop the database?

    You have to stop the database when you change the sga_max_size initialization parameter.
    But you need to check kernel.shmmax (on all nodes) before:
    # sysctl -a | grep shmmax
    kernel.shmmax = xxxxx

    If you need more... and you have a lot of physical memory ;)
    You can modify /etc/sysctl.conf and then run "sysctl -p" to reload kernel.shmmax, as Mufalani said.

    SQL> alter system set sga_max_size = new_size sid='*' scope=spfile;
    or
    SQL> alter system set sga_max_size = new_size sid='DB1' scope=spfile;

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams192.htm#REFRN10198

    Furthermore, if you want the SGA itself resized, you must also change the SGA_TARGET initialization parameter.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams193.htm#REFRN10256

    You can change PGA_AGGREGATE_TARGET without stopping the database:

    SQL> alter system set PGA_AGGREGATE_TARGET = new_size sid='*';
    or
    SQL> alter system set PGA_AGGREGATE_TARGET = new_size sid='DB1';
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams157.htm#REFRN10165

    When you change "sga_max_size" in the spfile, you need to stop/start the database, I think.
    You should stop/start one node first (and make sure there is no error), then stop/start the other node.

    Example: 2 nodes (racdb1, racdb2)
    Check shmmax 2 nodes
    # sysctl - a | grep shmmax
    kernel.shmmax = 8G

    SQL> create pfile='/tmp/pfile-backup' from spfile;
    SQL> alter system set sga_max_size = 8G sid='*' scope=spfile;

    $ srvctl stop instance -d racdb -i racdb1 -o immediate
    $ srvctl start instance -d racdb -i racdb1

    If there is an error... check the alert log on racdb1 and resolve it (remember to also change "SGA_TARGET"); if there is no error, stop/start racdb2:

    $ srvctl stop instance -d racdb -i racdb2 -o immediate
    $ srvctl start instance -d racdb -i racdb2

    Then check:
    select * from v$sgainfo;
    select * from v$sga;

    PGA_AGGREGATE_TARGET you can change online (no DB bounce):
    SQL> alter system set PGA_AGGREGATE_TARGET = 2G sid='*';

    2. Performance testers are complaining about high response times. Any pointers on this one?

    Use ADDM + AWR or Statspack...
    take AWR snapshots on each node during the test and generate reports.

    Also, "high response times" can come down to:
    - indexes
    - partitioned tables
    - interconnect throughput (NIC)
    - disk RAID...
    - bind variables in SQL (if the same SQL statement is reused, as in OLTP)
    and...

    Good luck

    Edited by: Surachart Opun (HunterX) on August 14, 2009 10:57

  • Increase in value of pga_aggregate_target during migration?

    Is there a particular reason why pga_aggregate_target must be increased during migration from 9i to 10g?

    I was reading MetaLink note 316889.1


    Or is it generic, like any other memory parameter that must be increased because of added features?

    PGA_AGGREGATE_TARGET must be set correctly when you migrate from 9i to 10g. 10g is a more powerful version; it has many new features, some of which have to do with how SQL statements are processed, and all these new features must be paid for with memory.

    It certainly requires more memory than 9i. You can take a look at these articles, where I documented some problems related to ORA-04030 errors that showed up after migrating a production environment from 9iR2 to 10gR2, and what to do about the memory allocated to the Oracle process:

    [ORA-04030 after upgrade from 9iR2 to 10gR2 on Windows 2003 | http://hrivera99.blogspot.com/2008/01/ora-04030-after-10gr2-upgrade-on.html]

    ORASTACK

    * ~ Madrid *.
    http://hrivera99.blogspot.com

  • Insert - Performance problem

    Hi Experts,

    I am new to Oracle. I would like your help in fixing the performance of an insert query.

    I have an insert query that fetches records from a partitioned table.

    Background: the user says the query ran in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: the SGA is 9 GB, Windows has 4 GB. The DB block size is 8192, db_file_multiblock_read_count is 128, and the overall PGA target is 2457M.

    The parameters are given below


    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------
    DBFIPS_140 boolean FALSE
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
    active_instance_count                integer
    aq_tm_processes integer 1
    ARCHIVE_LAG_TARGET integer 0
    asm_diskgroups chain
    asm_diskstring chain
    asm_power_limit integer 1
    asm_preferred_read_failure_groups string
    audit_file_dest string C:\APP\ADM
    audit_sys_operations Boolean TRUE

    AUDIT_TRAIL DB string
    awr_snapshot_time_offset integer 0
    background_core_dump partial string
    background_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    BACKUP_TAPE_IO_SLAVES boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
    cell_offload_compaction ADAPTIVE channel


    cell_offload_decryption Boolean TRUE
    cell_offload_parameters string
    cell_offload_plan_display string AUTO
    cell_offload_processing Boolean TRUE
    cell_offloadgroup_name string
    circuits                             integer
    client_result_cache_lag              big integer 3000
    client_result_cache_size             big integer 0
    clonedb boolean FALSE
    cluster_database boolean FALSE
    cluster_database_instances integer 1


    cluster_interconnects chain
    commit_logging string
    commit_point_strength integer 1
    commit_wait string
    string commit_write
    common_user_prefix string C#.
    compatible string 12.1.0.2.0
    connection_brokers string ((TYPE = DED
    ((TYPE = EM
    control_file_record_keep_time integer 7
    control_files string G:\ORACLE\

    TROL01. CTL
    FAST_RECOV
    NTROL02. CT
    control_management_pack_access string diagnostic
    core_dump_dest string C:\app\dia
    bal12\cdum
    cpu_count integer 4
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
    cursor_bind_capture_destination memory of the string + tell
    CURSOR_SHARING EXACT string

    cursor_space_for_time boolean FALSE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
    db_big_table_cache_percent_target string 0
    db_block_buffers integer 0
    db_block_checking FALSE string
    db_block_checksum string TYPICAL
    db_block_size                        integer     8192

    db_cache_advice                      string      ON
    db_cache_size                        big integer 0
    db_create_file_dest chain
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
    db_domain chain
    db_file_multiblock_read_count integer 128
    db_file_name_convert chain

    DB_FILES integer 200
    db_flash_cache_file string
    db_flash_cache_size big integer 0
    db_flashback_retention_target around 1440
    chain of db_index_compression_inheritance NONE
    DB_KEEP_CACHE_SIZE big integer 0
    chain of db_lost_write_protect NONE
    db_name string ORCL
    db_performance_profile string
    db_recovery_file_dest string G:\Oracle\
    y_Area


    whole large db_recovery_file_dest_size 12840M
    db_recycle_cache_size large integer 0
    db_securefile string PREFERRED
    channel db_ultra_safe
    db_unique_name string ORCL
    db_unrecoverable_scn_tracking Boolean TRUE
    db_writer_processes integer 1
    dbwr_io_slaves integer 0
    DDL_LOCK_TIMEOUT integer 0
    deferred_segment_creation Boolean TRUE
    dg_broker_config_file1 string C:\APP\PRO


    \DATABASE\
    dg_broker_config_file2 string C:\APP\PRO
    \DATABASE\
    dg_broker_start boolean FALSE
    diagnostic_dest channel directory
    disk_asynch_io Boolean TRUE
    dispatchers (PROTOCOL = string
    12XDB)
    distributed_lock_timeout integer 60
    dml_locks whole 2076
    whole dnfs_batch_size 4096

    dst_upgrade_insert_conv Boolean TRUE
    enable_ddl_logging boolean FALSE
    enable_goldengate_replication boolean FALSE
    enable_pluggable_database boolean FALSE
    event string
    exclude_seed_cdb_view Boolean TRUE
    fal_client chain
    fal_server chain
    FAST_START_IO_TARGET integer 0
    fast_start_mttr_target integer 0
    fast_start_parallel_rollback string LOW


    file_mapping boolean FALSE
    fileio_network_adapters string
    filesystemio_options chain
    fixed_date chain
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    global_txn_processes integer 1
    hash_area_size integer 131072
    channel heat_map
    hi_shared_memory_address integer 0

    hs_autoregister Boolean TRUE
    iFile file
    inmemory_clause_default string
    inmemory_force string by DEFAULT
    inmemory_max_populate_servers integer 0
    inmemory_query string ENABLE
    inmemory_size big integer 0
    inmemory_trickle_repopulate_servers_ integer 1
    percent
    instance_groups string
    instance_name string ORCL


    instance_number integer 0
    instance_type string RDBMS
    instant_restore boolean FALSE
    java_jit_enabled Boolean TRUE
    java_max_sessionspace_size integer 0
    JAVA_POOL_SIZE large integer 0
    java_restrict string no
    java_soft_sessionspace_limit integer 0
    JOB_QUEUE_PROCESSES around 1000
    LARGE_POOL_SIZE large integer 0
    ldap_directory_access string NONE


    ldap_directory_sysauth string no.
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    listener_networks string
    LOCAL_LISTENER (ADDRESS = string
    = i184borac
    (NET) (PORT =
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string


    Log_archive_dest chain
    Log_archive_dest_1 chain
    LOG_ARCHIVE_DEST_10 string
    log_archive_dest_11 string
    log_archive_dest_12 string
    log_archive_dest_13 string
    log_archive_dest_14 string
    log_archive_dest_15 string
    log_archive_dest_16 string
    log_archive_dest_17 string
    log_archive_dest_18 string


    log_archive_dest_19 string
    LOG_ARCHIVE_DEST_2 string
    log_archive_dest_20 string
    log_archive_dest_21 string
    log_archive_dest_22 string
    log_archive_dest_23 string
    log_archive_dest_24 string
    log_archive_dest_25 string
    log_archive_dest_26 string
    log_archive_dest_27 string
    log_archive_dest_28 string


    log_archive_dest_29 string
    log_archive_dest_3 string
    log_archive_dest_30 string
    log_archive_dest_31 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1             string      enable


    log_archive_dest_state_10            string      enable
    log_archive_dest_state_11            string      enable
    log_archive_dest_state_12            string      enable
    log_archive_dest_state_13            string      enable
    log_archive_dest_state_14            string      enable
    log_archive_dest_state_15            string      enable
    log_archive_dest_state_16            string      enable
    log_archive_dest_state_17            string      enable
    log_archive_dest_state_18            string      enable
    log_archive_dest_state_19            string      enable
    log_archive_dest_state_2             string      enable

    log_archive_dest_state_20            string      enable
    log_archive_dest_state_21            string      enable
    log_archive_dest_state_22            string      enable
    log_archive_dest_state_23            string      enable
    log_archive_dest_state_24            string      enable
    log_archive_dest_state_25            string      enable
    log_archive_dest_state_26            string      enable
    log_archive_dest_state_27            string      enable
    log_archive_dest_state_28            string      enable
    log_archive_dest_state_29            string      enable
    log_archive_dest_state_3             string      enable

    log_archive_dest_state_30            string      enable
    log_archive_dest_state_31            string      enable
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest string
    log_archive_format string ARC%S_%R.%
    log_archive_max_processes integer 4

    log_archive_min_succeed_dest integer 1
    log_archive_start Boolean TRUE
    log_archive_trace integer 0
    log_buffer                           big integer 28784K
    log_checkpoint_interval integer 0
    log_checkpoint_timeout around 1800
    log_checkpoints_to_alert boolean FALSE
    log_file_name_convert chain
    whole MAX_DISPATCHERS
    max_dump_file_size unlimited string
    max_enabled_roles integer 150


    whole max_shared_servers
    max_string_size string STANDARD
    memory_max_target big integer 0
    memory_target large integer 0
    NLS_CALENDAR string GREGORIAN
    nls_comp BINARY string
    nls_currency channel u
    string of NLS_DATE_FORMAT DD-MON-RR
    nls_date_language channel ENGLISH
    string nls_dual_currency C
    nls_iso_currency string UNITED KIN

    nls_language channel ENGLISH
    nls_length_semantics string OCTET
    string nls_nchar_conv_excp FALSE
    nls_numeric_characters chain.,.
    nls_sort BINARY string
    nls_territory string UNITED KIN
    nls_time_format HH24.MI string. SS
    nls_time_tz_format HH24.MI string. SS
    chain of NLS_TIMESTAMP_FORMAT DD-MON-RR
    NLS_TIMESTAMP_TZ_FORMAT string DD-MON-RR
    noncdb_compatible boolean FALSE


    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 300
    Open_links integer 4
    open_links_per_instance integer 4
    optimizer_adaptive_features Boolean TRUE
    optimizer_adaptive_reporting_only boolean FALSE
    OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 12.1.0.2

    optimizer_index_caching integer 0
    OPTIMIZER_INDEX_COST_ADJ integer 100
    optimizer_inmemory_aware Boolean TRUE
    the string ALL_ROWS optimizer_mode
    optimizer_secure_view_merging Boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines Boolean TRUE
    OPS os_authent_prefix string $
    OS_ROLES boolean FALSE
    parallel_adaptive_multi_user Boolean TRUE


    parallel_automatic_tuning boolean FALSE
    parallel_degree_level integer 100
    parallel_degree_limit string CPU
    parallel_degree_policy chain MANUAL
    parallel_execution_message_size integer 16384
    parallel_force_local boolean FALSE
    parallel_instance_group string
    parallel_io_cap_enabled boolean FALSE
    PARALLEL_MAX_SERVERS integer 160
    parallel_min_percent integer 0
    parallel_min_servers integer 16

    parallel_min_time_threshold string AUTO
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_servers_target integer 64
    parallel_threads_per_cpu integer 2
    pdb_file_name_convert string
    pdb_lockdown string
    pdb_os_credential string
    permit_92_wrap_format Boolean TRUE
    pga_aggregate_limit                  big integer 4914M
    pga_aggregate_target                 big integer 2457M

    -
    Plscope_settings string IDENTIFIER
    plsql_ccflags string
    plsql_code_type chain INTERPRETER
    plsql_debug boolean FALSE
    plsql_optimize_level integer 2
    plsql_v2_compatibility boolean FALSE
    plsql_warnings DISABLE channel: AL
    PRE_PAGE_SGA Boolean TRUE
    whole process 300
    processor_group_name string
    query_rewrite_enabled string TRUE


    applied query_rewrite_integrity chain
    rdbms_server_dn chain
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
    Recyclebin string on
    redo_transport_user string
    remote_dependencies_mode string TIMESTAMP
    remote_listener chain
    Remote_login_passwordfile string EXCLUSIVE
    REMOTE_OS_AUTHENT boolean FALSE
    remote_os_roles boolean FALSE

    replication_dependency_tracking Boolean TRUE
    resource_limit Boolean TRUE
    resource_manager_cpu_allocation integer 4
    resource_manager_plan chain
    result_cache_max_result integer 5
    whole big result_cache_max_size K 46208
    result_cache_mode chain MANUAL
    result_cache_remote_expiration integer 0
    resumable_timeout integer 0
    rollback_segments chain
    SEC_CASE_SENSITIVE_LOGON Boolean TRUE

    sec_max_failed_login_attempts integer 3
    string sec_protocol_error_further_action (DROP, 3)
    sec_protocol_error_trace_action string PATH
    sec_return_server_release_banner boolean FALSE
    disable the serial_reuse chain
    service name string ORCL
    session_cached_cursors integer 50
    session_max_open_files integer 10
    entire sessions 472
    sga_max_size                         big integer 9024M
    sga_target                           big integer 9024M


    shadow_core_dump string no
    shared_memory_address integer 0
    SHARED_POOL_RESERVED_SIZE large integer 70464307
    shared_pool_size large integer 0
    whole shared_server_sessions
    SHARED_SERVERS integer 1
    skip_unusable_indexes Boolean TRUE
    smtp_out_server chain
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spatial_vector_acceleration boolean FALSE


    SPFile string C:\APP\PRO
    \DATABASE\
    sql92_security boolean FALSE
    SQL_Trace boolean FALSE
    sqltune_category string by DEFAULT
    standby_archive_dest channel % ORACLE_HO
    standby_file_management string MANUAL
    star_transformation_enabled string TRUE
    statistics_level string TYPICAL
    STREAMS_POOL_SIZE big integer 0
    tape_asynch_io Boolean TRUE

    temp_undo_enabled boolean FALSE
    entire thread 0
    threaded_execution boolean FALSE
    timed_os_statistics integer 0
    TIMED_STATISTICS Boolean TRUE
    trace_enabled Boolean TRUE
    tracefile_identifier chain
    whole of transactions 519
    transactions_per_rollback_segment integer 5
    UNDO_MANAGEMENT string AUTO
    UNDO_RETENTION integer 900

    undo_tablespace string UNDOTBS1
    unified_audit_sga_queue_size integer 1048576
    use_dedicated_broker boolean FALSE
    use_indirect_data_buffers boolean FALSE
    use_large_pages string TRUE
    user_dump_dest string C:\APP\PRO
    \RDBMS\TRA
    UTL_FILE_DIR chain
    workarea_size_policy string AUTO
    xml_db_events string enable

    Thanks in advance

    Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.

    Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more.  All things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Likewise, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks like something has gone wrong in the 10g optimizer plan - maybe a bug, which I believe Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of has a cost of only 23,949, or 24K.  The math does not add up in 10g.  In other words, maybe it is not really an optimal plan: the 10g optimizer may have got its sums wrong, and 12c might be getting them right.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the main component of the final total cost of 95,373.

    Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses.  And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no...".  However, hints are very difficult to use and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
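    As a sketch of such a hinted query (only CLAIM_FACTS, the alias CF, and vehicle_chass_no come from the original; the rest of the statement is hypothetical):

```sql
-- Asks the optimizer for a full table scan on CLAIM_FACTS (alias cf);
-- hints are directives to the optimizer, not guarantees.
SELECT /*+ FULL(cf) */ cf.vehicle_chass_no
FROM   claim_facts cf;
```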

    Both plans are parallel ones, which means the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads at the same time.  (That is a simplification of how parallel query works.)  If the 10g and 12c systems do not have the same hardware configuration, then you would naturally expect different elapsed times to run the same parallel queries.  See the end of this answer for the additional information you could provide.

    But I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 processor cores or more and 100's of disks in a big drive array, and maybe the 12c system has only 4 processor cores and 4 disks.  That would explain a lot about why 12c takes hours to run when 10g takes only 30 minutes.

    Remember what I said in my last reply:

    "Without any contrary information I guess the filter conditions are very low, the optimizer believes he needs of most of the data in the table and that a table scan or even a limited index scan complete is the"best"way to run this SQL.  In other words, your query takes just time because your tables are big and your application has most of the data in these tables. "

    When dealing with very large tables and doing parallel full table scans on them, the most important factor is the amount of raw hardware you throw at the problem.  A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, at least.  That, rather than the execution plan itself, could be the main reason the 12c system is much slower than the 10g system.

    You may also provide us with the following information which would allow a better analysis:

    • Row counts for each table referenced in the query, and whether any of them are partitioned.
    • Hardware configurations for both systems - the 10g and the 12c.  Number of processors, their model and speed, physical memory, number of disks.
    • The disks are very important - do the 10g and 12c systems have similar disk subsystems?  Are you using plain old disks, or do you have a SAN, or some sort of disk array?  Are the drive arrays identical in both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
    • What is the size of the SGA in both systems?  The values of MEMORY_TARGET and SGA_TARGET.
    • Does the CLAIM_FACTS_AK9 index exist on the 10g system?  I assume it does, but I would like that confirmed to be safe.

    John Brady

  • Hash join spilling to temp space

    I am using Oracle version 11.2.0.4.0. I have the parameters below in GV$PARAMETER:
    pga_aggregate_target - 8GB
    hash_area_size - 128 KB
    sort_area_size - 64 KB

    Now the query with the plan below runs for ~1 hr and then fails on temp space: unable to extend temp segment by 128 in tablespace TEMP.
    We currently have ~200GB allocated to temp. This query runs fine for the daily run with a nested loop and the required index, but for the monthly run the plan changes, due to the data volume I think, and it goes for a HASH join, which I believe is a good decision by the optimizer.

    AFAIK, a hash join spilling to temp will slow query response time, so I need expert advice: should we increase pga_aggregate_target so that the hash area is large enough to accommodate the driving table? What size should we set - should it be the same as the size of the driving table? Or is there another workaround? Note: the size of driving table B is ~400GB.
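    Before resizing anything, it may help to see how work areas are actually completing; a sketch using the standard dynamic views:

```sql
-- Overall PGA picture
SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');

-- Work areas by size band: onepass/multipass executions are the
-- ones spilling to temp
SELECT low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea_histogram
ORDER  BY low_optimal_size;
```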

    -----------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    -----------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT               |                          |       |       |       |    10M(100)|          |       |       |
    |   1 |  LOAD TABLE CONVENTIONAL       |                          |       |       |       |            |          |       |       |
    |   2 |   FILTER                       |                          |       |       |       |            |          |       |       |
    |   3 |    HASH JOIN                   |                          |  8223K|  1811M|       |    10M  (1)| 35:30:55 |       |       |
    |   4 |     TABLE ACCESS STORAGE FULL  | A_GT                     |    82 |   492 |       |     2   (0)| 00:00:01 |       |       |
    |   5 |     HASH JOIN                  |                          |  8223K|  1764M|   737M|    10M  (1)| 35:30:55 |       |       |
    |   6 |      PARTITION RANGE ITERATOR  |                          |  8223K|   643M|       |    10M  (1)| 34:18:55 |   KEY |   KEY |
    |   7 |       TABLE ACCESS STORAGE FULL| B                        |  8223K|   643M|       |    10M  (1)| 34:18:55 |   KEY |   KEY |
    |   8 |      TABLE ACCESS STORAGE FULL | C_GT                     |    27M|  3801M|       |   118K  (1)| 00:23:48 |       |       |
    -----------------------------------------------------------------------------------------------------------------------------------
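While deciding, the extent of any spill can be measured directly rather than inferred; the v$ views and column names below are as documented in the Oracle reference:

```sql
-- How often work areas completed optimal / one-pass / multi-pass since startup
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'workarea executions%';

-- Watch active hash-join work areas and their temp usage while the query runs
SELECT sql_id, operation_type, actual_mem_used, tempseg_size
FROM   v$sql_workarea_active;
```

Note that with workarea_size_policy = AUTO, hash_area_size is ignored; the per-operation limit is derived from pga_aggregate_target, so raising hash_area_size alone would change nothing.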
    
    
    

    Finding plans by trial and error is not an efficient use of time - and if it were a good idea to avoid hash joins, Oracle wouldn't have implemented them in the first place. I can understand your DBA having a yen to avoid them, though, because any spill of a hash join to disc often has a much bigger (relative) effect than you might expect.  In this case, however, you have a nested loop into A_GT which operates 39M times to do an indexed access into an 82-row table - clearly (a) the CPU work to achieve that would be reduced if you included the table columns in the index definition, but more significantly the CPU cost of the A_GT/C_GT join would drop if an in-memory hash table were built from A_GT rather than doing the join as a nested loop.

    What you are asking for is a description of how to optimise a data warehouse on an Exadata machine - a forum is not the right place for that discussion; all I can say is that you and your DBAs need to do some testing to find out the best way to match queries to the Exadata, then keep an eye on the queries the application produces in case usage patterns change.  There are a few trivial generalities that anyone could offer:

    (a) partitioning by day is good, provided you can ensure that your queries are able to do partition elimination to visit only the days they want; even better is if there is a limited set of partitions that you can

    (b) the I/O for large hash joins spilling to disc can be catastrophic compared to the underlying I/O for the tablescans that first access the data, which means that simple queries can give the impression that Exadata is incredibly fast (especially when the flash cache and storage indexes are effective), but slightly more complex queries are surprisingly slow in comparison.

    (c) once you have got past the cell server flash cache, single block reads are big and slow - queries that do a lot of single-block I/O (viz: big reports using nested loop joins against randomly scattered data) can result in very slow I/O.

    You must know the data types, know the general structure of your queries, be ready to generate of materialized views for derived complex data and understand the strengths and weaknesses of the Exadata.

    Regards

    Jonathan Lewis

  • db_file_name_convert for the DB

    Dear friends

    Here is my init .ora file

    I have an application running on a database called PRDSPRT. I cloned this database server to a new server to set up Data Guard on it; the db_name must stay the same because the application needs it.

    How can I set up db_file_name_convert?

    And log_file_name_convert; it's pointing to the same path now.

    *._b_tree_bitmap_plans=FALSE # Required 11i setting

    *._fast_full_scan_enabled=FALSE

    *._like_with_bind_as_equality=TRUE

    *._sort_elimination_cost_ratio=5

    *._sqlexec_progression_cost=2147483647

    *._system_trig_enabled=true

    *._trace_files_public=TRUE

    *.aq_tm_processes=1

    *.background_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/bdump'

    *.compatible='10.2.0'

    *.control_files='/oracle/dev2/d01/db/apps_st/data/cntrl01.dbf','/oracle/dev2/d01/db/apps_st/data/cntrl02.dbf','/oracle/dev2/d01/db/apps_st/data/cntrl03.dbf'

    *.core_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/cdump'

    *.cursor_sharing='TRUE' # Required 11i setting

    *.db_block_checking='FALSE'

    *.db_block_checksum='TRUE'

    *.db_block_size=8192

    *.db_file_multiblock_read_count=8 # Required 11i setting

    *.db_files=512 # Max database files

    *.db_name='PRDSPRT'

    *.db_unique_name='PROD'

    *.dml_locks=10000

    *.job_queue_processes=50

    *.event='10298 trace name context forever, level 32'

    *.log_archive_dest_1='LOCATION=/oracle/dev2/d01/db/archive'

    *.log_archive_format='%t_%s_%r.dbf'

    *.log_buffer=10485760

    *.log_checkpoint_interval=100000

    *.log_checkpoint_timeout=1200 # Checkpoint at least every 20 minutes

    *.log_checkpoints_to_alert=TRUE

    *.max_dump_file_size='20480' # trace file size

    *.nls_comp='binary' # Required 11i setting

    *.nls_date_format='DD-MON-RR'

    *.nls_language='american'

    *.nls_length_semantics='BYTE' # Required 11i setting

    *.nls_numeric_characters='.,'

    *.nls_sort='binary' # Required 11i setting

    *.nls_territory='america'

    * .olap_page_pool_size = 4194304

    *.open_cursors=600 # Consumes process memory, unless using MTS

    *.optimizer_secure_view_merging=false

    *.parallel_max_servers=20

    *.parallel_min_servers=0

    *.pga_aggregate_target=1G

    *.plsql_code_type='INTERPRETED' # 11i default

    *.plsql_native_library_dir='/oracle/dev2/d01/db/tech_st/10.2.0/plsql/nativelib'

    *.plsql_native_library_subdir_count=149

    *.plsql_optimize_level=2 # Required 11i setting

    *.processes=500 # Max. users x 2

    *.session_cached_cursors=500

    *.sessions=400 # 2 X processes

    *.sga_target=2G

    *.shared_pool_reserved_size=500M

    *.shared_pool_size=1G

    *.timed_statistics=true

    *.undo_management='AUTO' # Required 11i setting

    *.undo_tablespace='APPS_UNDOTS1' # Required 11i setting

    *.user_dump_dest='/oracle/dev2/d01/db/tech_st/10.2.0/admin/PRDSPRT_srv-hq-on01/udump'

    *.utl_file_dir='/usr/tmp','/usr/tmp','/patch/ora_tmp/PROD','/oracle/dev2/d01/db/tech_st/10.2.0/appsutil/outbound/PRDSPRT_srv-hq-on01','/usr/tmp'

    *.workarea_size_policy='AUTO' # Required 11i setting

    *.remote_login_passwordfile='EXCLUSIVE'

    #standbydbsetup

    DB_UNIQUE_NAME = PROD

    SERVICE_NAME = PROD

    #Physical standby db

    log_archive_config='DG_CONFIG=(PROD,TEST)'

    log_archive_dest_1='LOCATION=/oracle/dev2/d01/db/archive/
     VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
     DB_UNIQUE_NAME=PROD'

    log_archive_dest_2='SERVICE=TEST LGWR ASYNC
     VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
     DB_UNIQUE_NAME=TEST'

    log_archive_dest_state_1=enable

    log_archive_dest_state_2=enable

    log_archive_max_processes=15

    #failover settings

    *.fal_server=TEST

    *.fal_client=PROD

    db_file_name_convert=('/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/apps_st/data/','/oracle/dev2/d01/db/apps_st/data/')

    log_file_name_convert=('/oracle/dev2/d01/db/archive/','/oracle/dev2/d01/db/archive/')

    standby_file_management=auto


    Hello

    If the new (standby) server has the same directory structure as the primary (old) one, then you don't have to worry about these settings.
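If the directory structures do differ, a standby pfile fragment would look something like this - the standby-side directories here are illustrative, not taken from the post:

```ini
# Pairs are matched left-to-right: primary pattern, then standby replacement
db_file_name_convert=('/oracle/dev2/d01/db/apps_st/data/','/u02/oradata/PRDSPRT/')
log_file_name_convert=('/oracle/dev2/d01/db/apps_st/data/','/u02/oradata/PRDSPRT/')
```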


    You can read about the settings in the following link:

    https://docs.Oracle.com/CD/E11882_01/server.112/e41134/create_ps.htm#i76119

    Regards

    Juan M

  • Oracle database (11.2.0.4) partial cloning on the same solaris 5.10 Server

    Requirement:

    Object-level recovery from a database copy on the same server. (I don't want to use automatic RMAN PITR; I'll do export/import manually.) (I have done this activity on another server, but am facing the issue while doing it on the same server.)

    To recover multiple corrupted tables, I want to create a clone database (with a different instance name and db_unique_name) from the backup of the affected tablespace plus SYSTEM, UNDO and SYSAUX.

    Steps followed.

    MAIN DB

    DB_NAME = PROD

    instance_name = prod

    CLONE OF DB:

    DB_NAME = PROD

    instance_name = test

    db_unique_name = test

    1) created the inittest.ora file

    *.compatible='11.2.0'

    *.control_files='/app/AE9/dyn/ora_backup/test/control01.ctl'

    *.db_block_size=8192

    *.db_name='PROD'

    *.instance_name=test

    *.db_unique_name=test

    *.diagnostic_dest='/opt/oracle'

    *.log_archive_dest_1='location=/app/AE9/dyn/ora_backup/test/'

    *.log_archive_format='IEQPEQ1X_%r_%t_%s.arc'

    *.pga_aggregate_target=838860800

    *.sga_target=1258291200

    *.undo_management='AUTO'

    *.undo_retention=0

    *.undo_tablespace='UNDOTBS1'

    (2) started the DB in nomount

    (3) created a binary backup controlfile on the main database

    (4) connected to RMAN for clone database

    RMAN> connect auxiliary

    connected to the auxiliary database: PROD (not mounted)

    RMAN> restore clone controlfile to '/app/AE9/dyn/ora_backup/test/ctlbackup.ctl';

    (5) mounted the clone database

    RMAN> sql clone 'alter database mount clone database';

    sql statement: alter database mount clone database

    (6) restored the clone DB with the commands below

    RMAN> connect auxiliary

    connected to the auxiliary database: PROD (DBID = 775073000, is not open)

    RMAN >

    RMAN> connect target /

    connected to target database: PROD (DBID = 775073000, is not open)

    run

    {

    set newname for datafile 1 to '/app/AE9/dyn/ora_backup/test/ac/system01.dbf';

    set newname for datafile 2 to '/app/AE9/dyn/ora_backup/test/ac/sysaux01.dbf';

    set newname for datafile 3 to '/app/AE9/dyn/ora_backup/test/ac/undotbs01.dbf';

    set newname for datafile 39 to '/app/AE9/dyn/ora_backup/test/ac/userss01.dbf';

    set newname for tempfile 1 to '/app/AE9/dyn/ora_backup/test/ac/temp01.dbf';

    switch clone tempfile all;

    restore clone tablespace system, sysaux, undotbs1, userss;

    switch clone datafile all;

    }

    (7) But the recover step threw the error below.

    run

    {

    sql clone 'alter database datafile 1 online';

    sql clone 'alter database datafile 3 online';

    sql clone 'alter database datafile 2 online';

    sql clone 'alter database datafile 39 online';

    recover clone database tablespace system, sysaux, undotbs1, userss;

    }

    Starting recover at 30-DEC-15

    using target database control file instead of recovery catalog

    allocated channel: ORA_AUX_SBT_TAPE_1

    channel ORA_AUX_SBT_TAPE_1: SID = 801 device type = SBT_TAPE

    channel ORA_AUX_SBT_TAPE_1: Veritas NetBackup for Oracle - version 7.5 (2012020801)

    allocated channel: ORA_AUX_SBT_TAPE_2

    channel ORA_AUX_SBT_TAPE_2: SID = 833 device type = SBT_TAPE

    channel ORA_AUX_SBT_TAPE_2: Veritas NetBackup for Oracle - version 7.5 (2012020801)

    allocated channel: ORA_AUX_DISK_1

    channel ORA_AUX_DISK_1: SID = 865 device type = DISK

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-03002: failure of recover command at 30/12/2015 12:55:47

    RMAN-06173: SET NEWNAME command has not been issued for datafile /app/AE9/dyn/ora_backup/test/ac/system01.dbf when restoring auxiliary

    RMAN >

    Need help partially cloning the DB on the same server. Please can anyone share the appropriate steps.

    Thank you for the help
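One common cause of RMAN-06173 in this situation: SET NEWNAME is scoped to the RMAN session, so if the recover ran in a later session than the restore, the newnames were no longer in effect. A sketch keeping the restore and recover together, using the paths from the post:

```
run {
  set newname for datafile 1  to '/app/AE9/dyn/ora_backup/test/ac/system01.dbf';
  set newname for datafile 2  to '/app/AE9/dyn/ora_backup/test/ac/sysaux01.dbf';
  set newname for datafile 3  to '/app/AE9/dyn/ora_backup/test/ac/undotbs01.dbf';
  set newname for datafile 39 to '/app/AE9/dyn/ora_backup/test/ac/userss01.dbf';
  restore clone tablespace system, sysaux, undotbs1, userss;
  switch clone datafile all;
  recover clone database tablespace system, sysaux, undotbs1, userss;
}
```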

    You could turn off block change tracking, given the krccacp_badfile - maybe that would allow you to complete the task

    -Pounet

  • SGA + PGA greater than memory and Oracle does not start

    Hello

    By mistake, I set the following:

    ALTER SYSTEM SET sga_max_size=1500M SCOPE=spfile;

    ALTER SYSTEM SET sga_target=1400M SCOPE=spfile;

    ALTER SYSTEM SET pga_aggregate_target=9000M SCOPE=spfile;

    My total memory is 16 GB, on Win8.

    When I restarted Oracle, it could not start, and I'm 100% sure these memory sizes are the problem.

    Now I cannot connect to the DB to reset these memory sizes.

    How can I change the SGA/PGA while the whole of Oracle is down? It's not only the DB instance that isn't working; it's all of Oracle.

    Is it possible to change the spfile with a text editor?

    I'd appreciate your quick responses.

    Kind regards

    Hussien Sharaf

    Post edited by: 3008910

    Vidar, great recommendation.  There are cases where editing the spfile directly can cause problems. If you happen to run into that and the spfile is not usable, you can also create a new pfile from the contents of the alert log, start the instance using the new pfile, then make a copy of the pfile to the spfile.  Here are the basic steps if the spfile is corrupted and you need to create a new one:

    (1) Find the alert log, copy the lines below the comment "System parameters with non-default values:" into a new file called init<SID>.ora and save the file in the default parameter file directory (dbs or database).

    (2) Make sure the bad spfile is not in the startup/parameter file directory, then start the Oracle service; the instance should now be available.  If you are not able to connect as "/", try using sys / as sysdba.

    (3) Create the spfile from the init<SID>.ora file: SQL> create spfile from pfile;     -- you can specify the directories, or leave the default.

    (4) Review & validate the parameters per the guidance in our earlier discussion.
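Putting those steps together, the SQL*Plus end of it would look something like this (SID and file locations are illustrative, not from the post):

```sql
-- after moving the bad spfile aside and saving the alert-log parameters
-- as $ORACLE_HOME/dbs/init<SID>.ora with corrected sga/pga values
SQL> startup                        -- instance comes up on the pfile
SQL> create spfile from pfile;      -- rebuild the spfile
SQL> shutdown immediate
SQL> startup                        -- now running on the corrected spfile
```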

    Hussien, I hope this helps.

    CP

  • Problems with special characters with Apex5

    Hello everyone,

    I hope I'm right on this forum with this problem; I found no better match elsewhere.

    I have a single database called apex12D on 12.1.0.2; it's my Apex 5 development database. Apex 5.0.2 is installed, as is the German language pack for Apex. The OS is Oracle Linux 6.

    On a second machine I have configured Oracle REST Data Services with tomcat (installed from the repositories) and apache (also installed from the repositories), running on Oracle Linux 7. This machine is my "http server" to connect to my Apex environment. Everything is working fine, but when I go to my Apex admin backend all German special characters (ä, ü, ö, ...) are displayed incorrectly. That doesn't look good.

    What I have done so far:

    On the database (apex12D) I set all the nls parameters for German at database installation:

    SYS@apex12D> select * from nls_database_parameters;
    
    
    PARAMETER VALUE
    ---------------------------------------------------------
    NLS_RDBMS_VERSION 12.1.0.2.0
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_LENGTH_SEMANTICS BYTE
    NLS_COMP BINARY
    NLS_DUAL_CURRENCY ?
    NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_SORT GERMAN
    NLS_DATE_LANGUAGE GERMAN
    NLS_DATE_FORMAT DD.MM.RR
    NLS_CALENDAR GREGORIAN
    NLS_NUMERIC_CHARACTERS ,.
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_CHARACTERSET AL32UTF8
    NLS_ISO_CURRENCY GERMANY
    NLS_CURRENCY ?
    NLS_TERRITORY GERMANY
    NLS_LANGUAGE GERMAN
    
    
    20 rows selected.
    

    The SPFILE parameters tell a different story. I tried to set them with "alter system set <parameter>=<value> scope=spfile" and restarted the database, but nothing changed.

    SYS@apex12D> show parameter nls
    
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    nls_calendar                         string      GREGORIAN
    nls_comp                             string      BINARY
    nls_currency                         string      $
    nls_date_format                      string      DD-MON-RR
    nls_date_language                    string      AMERICAN
    nls_dual_currency                    string      $
    nls_iso_currency                     string      AMERICA
    nls_language                         string      AMERICAN
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string      .,
    nls_sort                             string      BINARY
    nls_territory                        string      AMERICA
    nls_time_format                      string      HH.MI.SSXFF AM
    nls_time_tz_format                   string      HH.MI.SSXFF AM TZR
    nls_timestamp_format                 string      DD-MON-RR HH.MI.SSXFF AM
    nls_timestamp_tz_format              string      DD-MON-RR HH.MI.SSXFF AM TZR
    SYS@apex12D>
    

    Interestingly, when I write a pfile out of my spfile and open it with vi, everything looks right. But OK.

    apex12D.__data_transfer_cache_size=0
    apex12D.__db_cache_size=2030043136
    apex12D.__java_pool_size=50331648
    apex12D.__large_pool_size=385875968
    apex12D.__oracle_base='/usr/local/oracle'#ORACLE_BASE set from environment
    apex12D.__pga_aggregate_target=536870912
    apex12D.__sga_target=3221225472
    apex12D.__shared_io_pool_size=150994944
    apex12D.__shared_pool_size=570425344
    apex12D.__streams_pool_size=16777216
    *.audit_file_dest='/usr/local/oracle/admin/apex12D/adump'
    *.audit_trail='db'
    *.compatible='12.1.0.2.0'
    *.control_files='+DATA_QUM169/APEX12D/CONTROLFILE/current.505.898513523','+FRA_QUM169/APEX12D/CONTROLFILE/current.2094.898513525'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA_QUM169'
    *.db_create_online_log_dest_1='+DATA_QUM169'
    *.db_create_online_log_dest_2='+FRA_QUM169'
    *.db_domain=''
    *.db_name='apex12D'
    *.db_recovery_file_dest='+FRA_QUM169'
    *.db_recovery_file_dest_size=10240m
    *.diagnostic_dest='/usr/local/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=apex12DXDB)'
    *.local_listener='LISTENER_APEX12D'
    *.log_archive_dest_1='LOCATION=+FRA_QUM169'
    *.log_archive_dest_2='LOCATION=+DATA_QUM169'
    *.log_archive_format='%t_%s_%r.dbf'
    *.nls_currency='$'
    *.nls_date_language='GERMAN'
    *.nls_dual_currency='$'
    *.nls_iso_currency='GERMANY'
    *.nls_language='GERMAN'
    *.nls_territory='GERMANY'
    *.open_cursors=300
    *.pga_aggregate_target=512m
    *.processes=600
    *.recyclebin='OFF'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=3072m
    *.undo_tablespace='UNDOTBS1'
    

    ORDS on the "http" server is configured under the orards user, so I've put this in its .bash_profile:

    NLS_LANG=GERMAN_GERMANY.AL32UTF8
    
    
    export NLS_LANG
    

    I did the same for the root user, just for testing, because only root can start tomcat and apache via systemctl. The apache and tomcat users have /bin/nologin as their shell, so I think their bash_profile won't apply even if I create one.

    But nothing really worked. Can anyone help, please?

    Thank you and best regards,
    David

    Hello

    so the German special characters are OK on the first server but not on the second? I had this behavior when I set NLS_LANG=GERMAN_GERMANY.AL32UTF8 while installing the German language extension.
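One more thing worth checking: since the tomcat user has a nologin shell, .bash_profile is never read, but Tomcat's standard bin/setenv.sh hook is sourced by catalina.sh at startup regardless of the service user's shell. A sketch (file location per standard Tomcat layout; whether this fixes the Apex display is untested):

```shell
# $CATALINA_BASE/bin/setenv.sh - create it if absent; sourced by catalina.sh
export NLS_LANG=GERMAN_GERMANY.AL32UTF8
```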

    Best regards

    Thomas

    (Greetings from Böblingen)

  • *.db_file_name_convert, *.log_file_name_convert

    Hello guys,

    I have two servers:

    SERVER01 Production

    Server02 Standby

    I have these settings in the file pfile (standby):

    *.audit_file_dest='/u01/app/oracle/admin/BD/adump'

    *.audit_trail='db'

    *.compatible='11.2.0.0.0'

    *.control_files='/data/BD/controlfile/control01.ctl','/u01/app/oracle/flash_recovery_area/BD/control02.ctl' # Restore Controlfile

    *.control_management_pack_access='DIAGNOSTIC+TUNING'

    *.db_block_size=8192

    *.db_domain=''

    *.db_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/datafile' (changed directory)

    *.db_name='bd'

    *.db_recovery_file_dest='/media/ORACLE_BACKUP/Atual/FRA'

    *.db_recovery_file_dest_size=85899345920

    *.diagnostic_dest='/u01/app/oracle'

    *.dispatchers='(PROTOCOL=TCP) (SERVICE=bdXDB)'

    *.log_archive_dest_1='LOCATION=/media/ORACLE_BACKUP/atual/fra/BD/archivelog'

    *.log_archive_format='arc_%t_%s_%r.arc'

    *.log_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/onlinelog' (changed directory)

    *.open_cursors=300

    *.pga_aggregate_target=11884560384

    *.processes=150

    *.remote_login_passwordfile='EXCLUSIVE'

    *.sga_target=1610612736

    *.standby_file_management='AUTO'

    *.undo_tablespace='UNDOTBS1'

    First question:

    Once the process finished, I did "create pfile from spfile", but it's the same content as above...

    So, what is the use of these commands on the standby:

    *.db_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/datafile' (changed directory)

    *.log_file_name_convert='/u01/app/oracle/oradata/bd','/data/BD/onlinelog' (changed directory)


    ??

    Can I remove these parameters (db_file_name_convert, log_file_name_convert) on the standby server? Any problem with that? How can I remove them?


    Thank you

    On server03 (the new standby), I already have /data/bd/datafile and /data/bd/onlinelog... so in that case these settings aren't necessary. Do you understand now? I'll create /data/bd/datafile and /data/bd/onlinelog on server03, the same as on server02...

    After that, in my view, I don't need these settings. Got it?

    The parameters above may be omitted if the primary directory structure is the same on the standby, or if you configure the DB_CREATE_FILE_DEST & DB_CREATE_ONLINE_LOG_DEST_n parameters on both primary and standby with the correct values.
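As a sketch of the DB_CREATE_FILE_DEST alternative mentioned above (directories illustrative, modelled on the thread's standby layout):

```sql
-- With OMF destinations set on the standby, newly created files land
-- under these directories and the *_name_convert parameters are not needed
ALTER SYSTEM SET db_create_file_dest='/data/bd' SCOPE=spfile;
ALTER SYSTEM SET db_create_online_log_dest_1='/data/bd' SCOPE=spfile;
```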

    OK, I got it. And I give you the answer in my previous post.

    In this case, you can reset, change or delete these parameters from the pfile.

    Kind regards

    Juan M

  • Getting ORA-00942 with the WITH clause, but not as the sys user

    Hello

    About 3 weeks ago we increased our Oracle instance's memory to PGA_aggregate_target=60GB, SGA_target=58GB. About 1 week ago our cognos user started getting ORA-00942 errors for these generated WITH-clause queries, with the same authorization, i.e.

    with "aBmtQuerySubject4" as
    (select "BANK_NOTE_ADI_INFO_T"."PRINT_BATCH_ID" "PRINT_BATCH_ID",
    "BANK_NOTE_ADI_INFO_T"."PROCESS_RUN_DT" "PROCESS_RUN_DT",
    "BANK_NOTE_ADI_INFO_T"."RDP_ID" "RDP_ID",
    "BANK_NOTE_ADI_INFO_T"."FI_ID" "FI_ID",
    "BANK_NOTE_ADI_INFO_T"."DEPOSIT_NB" "DEPOSIT_NB",
    "BANK_NOTE_ADI_INFO_T"."PROCESS_MACHINE_ID" "PROCESS_MACHINE_ID",
    "BANK_NOTE_ADI_INFO_T"."OUTPUT_STACKER_TYPE_CE" "OUTPUT_STACKER_TYPE_CE",
    "BANK_NOTE_ADI_INFO_T"."PARTITION_KEY" "PARTITION_KEY",
    "BANK_NOTE_ADI_INFO_T"."LOAD_ID" "LOAD_ID",
    "BANK_NOTE_ADI_INFO_T"."SERIAL_NUMBER_ID" "SERIAL_NUMBER_ID",
    "BANK_NOTE_ADI_INFO_T"."SHIFT_NB" "SHIFT_NB",
    "BANK_NOTE_ADI_INFO_T"."BANK_NOTE_COUNT_NB" "BANK_NOTE_COUNT_NB"
    from "BOISI"."BANK_NOTE_ADI_INFO_T" "BANK_NOTE_ADI_INFO_T"
    ),
    "CountResultQuery5" as
    (select count("aBmtQuerySubject4"."BANK_NOTE_COUNT_NB") "C_1"
    , count(1) "C_2" from "aBmtQuerySubject4"
    having count(*) > 0)
    select "CountResultQuery5"."C_2" "Count1"
    from "CountResultQuery5"
    ;


    with "aBmtQuerySubject4" as
    (select "BANK_NOTE_ADI_INFO_T"."LOAD_ID" "LOAD_ID"
    from "BOISI"."BANK_NOTE_ADI_INFO_T" "BANK_NOTE_ADI_INFO_T"
    ),
    "CountResultQuery5" as
    (select count("aBmtQuerySubject4"."LOAD_ID") "C_1"
    , count(1) "C_2"
    from "aBmtQuerySubject4" having count(*) > 0
    )
    select "CountResultQuery5"."C_2" "Count1" from "CountResultQuery5"
    ;

    -- output like:

    "BANK_NOTE_ADI_INFO_T"."PROCESS_RUN_DT" "PROCESS_RUN_DT",
    *
    ERROR at line 3:
    ORA-00942: table or view does not exist


    from "BOISI"."BANK_NOTE_ADI_INFO_T" "BANK_NOTE_ADI_INFO_T"
    *
    ERROR at line 3:
    ORA-00942: table or view does not exist

    For the past 2 days, we have been getting ORA-0403.

    One thing I noticed: the coguser can run the above queries correctly after they have been run by a sys user...

    Could you please help me on how I can resolve ORA-00942 error?

    Thank you very much, much in advance for all your help and your advice! :-)

    Jihong.

    "One thing I've noticed: the coguser can run the above queries correctly after they are run by a sys user..."

    Jihong,

    Do you mean that the queries can be run successfully as the sys user, or that the cognos user can run them once a sys user has run the query at least once?

    Gerard
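Alongside Gerard's question, it may help to separate a privilege issue from the memory change by checking the grant directly (owner and table name taken from the post):

```sql
-- run as a DBA: does the cognos user hold SELECT on the base table,
-- directly or via a role? (WITH-clause aliases don't change privilege checks)
SELECT grantee, privilege, grantor
FROM   dba_tab_privs
WHERE  owner = 'BOISI'
AND    table_name = 'BANK_NOTE_ADI_INFO_T';
```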

  • Database startup time

    Hello

    We have a database running Oracle 11.1.0.7 on Solaris 11 which takes 10 minutes to start... The database is very small (a little less than 3 GB) and hosts b2b software schemas.


    We stopped the database with "shutdown immediate", then started it with "startup", and noticed that it takes a while. We had a look at the alert log and it doesn't show much:

    MMNL started with pid = 62 OS id = 25708

    Mon Aug 24 11:59:44 2015

    DISM started, OS id = 25721

    Mon Aug 24 12:09:03 2015

    Here is an excerpt of the complete startup from the instance's alert log:

    oracle@b2bbd02:/home/oracle > tail -f /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/b2bd/b2bd/trace/alert_b2bd.log

    Mon Aug 24 11:58:35 2015

    Starting ORACLE instance (normal)

    LICENSE_MAX_SESSION = 0

    LICENSE_SESSIONS_WARNING = 0

    Picked latch-free SCN scheme 3

    Mon Aug 24 11:58:47 2015

    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST

    Autotune of undo retention is turned on.

    IMODE = BR

    ILAT = 73

    LICENSE_MAX_USERS = 0

    SYS audit is disabled

    Mon Aug 24 11:59:15 2015

    Starting up ORACLE RDBMS Version: 11.1.0.7.0.

    Using parameter settings in server-side pfile /u01/app/oracle/product/11.1.0/db_1/dbs/initb2bd.ora

    System parameters with non-default values:

    processes = 600

    sessions = 665

    spfile = "/u01/app/oracle/admin/b2bd/spfileb2bd.ora"

    memory_target = 4G

    control_files = "/oradata/b2bd/control01.ctl"

    control_files = "/oradata/b2bd/control02.ctl"

    control_files = "/oralog/b2bd/control03.ctl"

    DB_BLOCK_SIZE = 8192

    compatible = "11.1.0"

    db_file_multiblock_read_count = 16

    db_recovery_file_dest = "/orabkup/flash_recovery_area"

    db_recovery_file_dest_size = 10G

    undo_management = "AUTO"

    undo_tablespace = "UNDOTBS1"

    remote_login_passwordfile = "EXCLUSIVE"

    db_domain = "reg.gov.ab.ca"

    JOB_QUEUE_PROCESSES = 40

    PARALLEL_MAX_SERVERS = 10

    audit_file_dest = "/u01/app/oracle/admin/b2bd/adump"

    db_name = "b2bd".

    open_cursors = 1000

    pga_aggregate_target = 256M

    aq_tm_processes = 2

    Mon Aug 24 11:59:16 2015

    PMON started with pid = 2, OS id = 25070

    Mon Aug 24 11:59:18 2015

    VKTM started with pid = 6, OS id = 25086

    VKTM clocked in accuracy (100ms)

    Mon Aug 24 11:59:18 2015

    DIAG started with pid = 10, OS id = 25135

    Mon Aug 24 11:59:19 2015

    DBRM started with pid = 14, OS id = 25153

    Mon Aug 24 11:59:20 2015

    PSP0 started with pid = 18, OS id = 25160

    Mon Aug 24 11:59:21 2015

    DIA0 started with pid = 22, OS id = 25171

    Mon Aug 24 11:59:22 2015

    MMAN started with pid = 26, OS id = 25211

    Mon Aug 24 11:59:23 2015

    DBW0 started with pid = 30, OS id = 25219

    Mon Aug 24 11:59:25 2015

    DBW1 started with pid = 3, OS id = 25228

    Mon Aug 24 11:59:27 2015

    DBW2 started with pid = 4, OS id = 25261

    Mon Aug 24 11:59:28 2015

    DBW3 started with pid = 5, OS id = 25280

    Mon Aug 24 11:59:29 2015

    DBW4 started with pid = 34, OS id = 25323

    Mon Aug 24 11:59:31 2015

    DBW5 started with pid = 7, OS id = 25358

    Mon Aug 24 11:59:32 2015

    DBW6 started with pid = 8, OS id = 25387

    Mon Aug 24 11:59:33 2015

    DBW7 started with pid = 9, OS id = 25406

    Mon Aug 24 11:59:35 2015

    DBW8 started with pid = 38, OS id = 25462

    Mon Aug 24 11:59:36 2015

    DBW9 started with pid = 11, OS id = 25545

    Mon Aug 24 11:59:38 2015

    DBWa started with pid = 12, OS id = 25554

    Mon Aug 24 11:59:39 2015

    DBWb started with pid = 13, OS id = 25561

    Mon Aug 24 11:59:39 2015

    DBWc started with pid = 42, OS id = 25567

    Mon Aug 24 11:59:39 2015

    DBWd started with pid = 15, OS id = 25570

    Mon Aug 24 11:59:39 2015

    DBWe started with pid = 16, OS id = 25574

    Mon Aug 24 11:59:39 2015

    DBWf started with pid = 17, OS id = 25586

    Mon Aug 24 11:59:40 2015

    LGWR started with pid = 19, OS id = 25598

    Mon Aug 24 11:59:42 2015

    CKPT started with pid = 46, OS id = 25608

    Mon Aug 24 11:59:42 2015

    SMON started with pid = 50, OS id = 25638

    Mon Aug 24 11:59:43 2015

    RECO started with pid = 54, OS id = 25663

    Mon Aug 24 11:59:43 2015

    MMON started with pid = 58, OS id = 25691

    Mon Aug 24 11:59:44 2015

    MMNL started with pid = 62 OS id = 25708

    Mon Aug 24 11:59:44 2015

    DISM started, OS id = 25721

    Mon Aug 24 12:09:03 2015

    ORACLE_BASE is not set in the environment. It is recommended

    that ORACLE_BASE be set in the environment

    Mon Aug 24 12:09:03 2015

    ALTER DATABASE MOUNT

    Setting recovery target incarnation to 1

    Mon Aug 24 12:09:07 2015

    Successful mount of redo thread 1, with mount id 462930143

    Database mounted in exclusive Mode

    Lost write protection disabled

    Completed: ALTER DATABASE MOUNT

    Mon Aug 24 12:09:08 2015

    ALTER DATABASE OPEN

    LGWR: STARTING ARCH PROCESSES

    Mon Aug 24 12:09:08 2015

    ARC0 started with pid = 66, OS id = 11683

    Mon Aug 24 12:09:08 2015

    ARC1 started with pid = 27, OS id = 11686

    Mon Aug 24 12:09:09 2015

    ARC2 started with pid = 20, OS id = 11688

    ARC0: Archival started

    Mon Aug 24 12:09:09 2015

    ARC3 started with pid = 21, OS id = 11690

    ARC1: Archival started

    ARC2: Archival started

    ARC3: Archival started

    LGWR: STARTING ARCH PROCESSES COMPLETE

    Thread 1 opened at log sequence 7869

    Current log# 4 seq# 7869 mem# 0: /oralog/b2bd/redo01a.log

    Current log# 4 seq# 7869 mem# 1: /oralog/b2bd/redo01b.log

    Successful open of redo thread 1

    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set

    ARC3: Becoming the 'no FAL' ARCH

    ARC3: Becoming the 'no SRL' ARCH

    ARC0: Becoming the heartbeat ARCH

    Mon Aug 24 12:09:09 2015

    SMON: enabling cache recovery

    Successfully onlined Undo Tablespace 2.

    Verifying file header compatibility for 11g tablespace encryption..

    Verifying 11g file header compatibility for tablespace encryption completed

    SMON: enabling tx recovery

    Database character set is AL32UTF8

    Opening with internal Resource Manager plan on a 4 X 32 NUMA system

    Starting background process FBDA

    Mon Aug 24 12:09:11 2015

    FBDA started with pid = 31, OS id = 11711

    replication_dependency_tracking turned off (no async multimaster replication found)

    Starting background process QMNC

    Mon Aug 24 12:09:11 2015

    QMNC started with pid = 70, OS id = 11726

    Completed: ALTER DATABASE OPEN

    Mon Aug 24 12:09:14 2015

    db_recovery_file_dest_size of 10240 MB is 60.90% used. This is a

    user-specified limit on the amount of space that will be used by this

    database for recovery-related files, and does not reflect the amount of

    space available in the underlying filesystem or ASM diskgroup.

    Mon Aug 24 12:14:19 2015

    Starting background process SMCO

    Mon Aug 24 12:14:20 2015

    SMCO started with pid = 78, OS id = 20505

    memory_target/memory_max_target is set to 4 GB.

    Any idea what is happening, please?

    Thank you

    Problems starting the database with DISM. Consult the DISM tuning notes.
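For background: on Solaris, the instance uses DISM whenever the SGA can be resized dynamically - i.e. when memory_target is set, or sga_max_size exceeds sga_target - and DISM startup can be slow; with a fixed SGA it uses ISM instead. A sketch of moving from AMM to a fixed ASMM configuration (the 3G figure is an assumption carved out of the 4G memory_target, since PGA is sized separately; validate before applying):

```sql
-- Disable AMM so the SGA size is fixed and Solaris uses ISM, not DISM
ALTER SYSTEM RESET memory_target SCOPE=spfile SID='*';
ALTER SYSTEM RESET memory_max_target SCOPE=spfile SID='*';
ALTER SYSTEM SET sga_target=3G SCOPE=spfile;
ALTER SYSTEM SET sga_max_size=3G SCOPE=spfile;
-- restart the instance for these to take effect
```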
