memory_target and sga_target

Our database is 11g Release 2 on Red Hat 5.2. I have a question about the new 11g parameter memory_target and the older sga_target. I understand that if memory_target is defined, the SGA and PGA are managed and optimized automatically. So should I still adjust sga_target? What restrictions and recommendations apply to sga_target when memory_target is defined?

Thank you.

In Oracle 11g, MEMORY_TARGET controls the total amount of memory available to Oracle for the SGA and the PGA. Oracle allocates PGA memory to sessions based on demand.
You do not need to define SGA_TARGET if you have already set MEMORY_TARGET.

You can optionally set SGA_TARGET, PGA_AGGREGATE_TARGET and the other memory configuration parameters (for example, SHARED_POOL_SIZE) to 0. If any of these parameters has a nonzero value while AMM is in effect, that value defines the minimum size for the corresponding memory area.
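As a minimal sketch of switching an instance to AMM (the 1536M budget below is an assumed example value, not a recommendation; adjust it to your system):

    ALTER SYSTEM SET memory_max_target = 1536M SCOPE = SPFILE;
    ALTER SYSTEM SET memory_target     = 1536M SCOPE = SPFILE;
    -- optional: set these to 0 so they no longer act as minimum floors
    ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;
    -- memory_max_target is static, so restart the instance
    SHUTDOWN IMMEDIATE
    STARTUP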

ZlT

Tags: Database

Similar Questions

  • Question about MEMORY_TARGET and MEMORY_MAX_TARGET and EM

    Good evening

    After a lot of reading and research on Google and Ask Tom (and Jerry too!) and visits to a multitude of websites, I still do not understand the exact scope of MEMORY_TARGET and MEMORY_MAX_TARGET. I would be very grateful for a clear, sensible, verifiable and technically correct explanation of what these two parameters are, how they interact with each other, and why two different parameters are needed instead of one.

    The clearest statement that I have found so far is:

    From the [Oracle Frequently Asked Questions Wiki | http://www.orafaq.com/wiki/Memory_target]:

    >
    MEMORY_TARGET stipulates the following:

      • A single parameter for the total size of the SGA and PGA
      • Automatically resizes the SGA and PGA components
      • Memory is transferred to where it is most needed
      • Uses workload information
      • Uses internal advisory predictions
      • Can be set by DBCA at the time of database creation
    >

    This seems quite reasonable, but given that MEMORY_TARGET covers both the SGA and PGA size, what would MEMORY_MAX_TARGET be needed for? How is the memory represented by MEMORY_MAX_TARGET - MEMORY_TARGET used by Oracle? Is it simply wasted? (Note: the value of MEMORY_TARGET should, sensibly, be lower than or equal to MEMORY_MAX_TARGET.)

    From a different point of view, when a database is created we can take a number X of GB away from the OS and hand it to Oracle by setting the MEMORY_MAX_TARGET parameter. Why would Oracle not use all of it (by defining a MEMORY_TARGET lower than MEMORY_MAX_TARGET)? What does MEMORY_TARGET tell Oracle that MEMORY_MAX_TARGET doesn't (or vice versa)?

    Why would anyone want to define MEMORY_TARGET as lower than MEMORY_MAX_TARGET?

    I would appreciate any help in clarifying the issues above.

    Thank you very much

    John.

    I could not find the document that says that. Where did you find this statement? I ask because it would be nice if the document gave a few details about how it is used on Unix.

    Here's the link:

    http://download.Oracle.com/docs/CD/B28359_01/server.111/b32009/tuning.htm#BABBJHAC
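    As a rough illustration of the difference (generic statements, not taken from the linked document): MEMORY_MAX_TARGET is the static upper limit fixed at instance startup, while MEMORY_TARGET can be changed online anywhere below that limit.

      -- memory_max_target is fixed for the life of the instance (needs a restart to change)
      ALTER SYSTEM SET memory_max_target = 4G SCOPE = SPFILE;
      -- memory_target can be raised or lowered while the instance runs,
      -- as long as it stays <= memory_max_target
      ALTER SYSTEM SET memory_target = 2G SCOPE = BOTH;
      ALTER SYSTEM SET memory_target = 3G SCOPE = BOTH;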

    HTH
    Girish Sharma

  • Memory_target and sga_max_size parameter

    Using Oracle 11.2, I had SGA_TARGET and SGA_MAX_SIZE set to 800M and 500M.

    Now I have decided to let Oracle take over management completely (AMM, right?) and have set MEMORY_TARGET to 1000M and MEMORY_MAX_TARGET to 1200M.

    SGA_TARGET is now the minimum amount of memory allocated to the SGA, as far as I have figured out.

    But what about SGA_MAX_SIZE? Does this value still serve any purpose, and does it make sense to set it?

    Regards
    Christian

    If SGA_MAX_SIZE is set, it acts as an upper limit on how far your SGA can grow, and you can encounter ORA-04031 errors if you need a larger SGA than what is set.
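    A quick way to see how AMM is actually dividing memory between the SGA and PGA targets (a generic query, not specific to this configuration):

      SELECT component,
             current_size/1024/1024 AS current_mb,
             min_size/1024/1024     AS min_mb,
             max_size/1024/1024     AS max_mb
      FROM   v$memory_dynamic_components
      WHERE  component IN ('SGA Target', 'PGA Target');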

    Kind regards

    Suntrupth

  • Setting Memory_target and Memory_max_target

    Hello

    I have the SGA configured at 2 GB.

    How do I calculate/set the memory_target and memory_max_target parameters?

    Thank you
    KSG

    Hi;

    Oracle handles this internally. Please check:
    Automatic Memory Management (AMM) on 11g [ID 443746.1]

    Regards
    HELIOS

  • Insert - Performance problem

    Hi Experts,

    I am new to Oracle. I am asking for your help to fix the performance problem of an insert query.

    I have an insert query that fetches records from a partitioned table.

    Background: the user says that the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. Checking the settings: the SGA is 9 GB, Windows - 4 GB. The DB block size is 8192 and db_file_multiblock_read_count is 128. The PGA aggregate target is 2457M.

    The parameters are given below


    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------
    DBFIPS_140                           boolean     FALSE
    O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE
    active_instance_count                integer
    aq_tm_processes                      integer     1
    archive_lag_target                   integer     0
    asm_diskgroups                       string
    asm_diskstring                       string
    asm_power_limit                      integer     1
    asm_preferred_read_failure_groups    string
    audit_file_dest                      string      C:\APP\ADM
    audit_sys_operations                 boolean     TRUE
    audit_trail                          string      DB
    awr_snapshot_time_offset             integer     0
    background_core_dump                 string      partial
    background_dump_dest                 string      C:\APP\PRO
                                                     \RDBMS\TRA
    backup_tape_io_slaves                boolean     FALSE
    bitmap_merge_area_size               integer     1048576
    blank_trimming                       boolean     FALSE
    buffer_pool_keep                     string
    buffer_pool_recycle                  string
    cell_offload_compaction              string      ADAPTIVE
    cell_offload_decryption              boolean     TRUE
    cell_offload_parameters              string
    cell_offload_plan_display            string      AUTO
    cell_offload_processing              boolean     TRUE
    cell_offloadgroup_name               string
    circuits                             integer
    client_result_cache_lag              big integer 3000
    client_result_cache_size             big integer 0
    clonedb                              boolean     FALSE
    cluster_database                     boolean     FALSE
    cluster_database_instances           integer     1
    cluster_interconnects                string
    commit_logging                       string
    commit_point_strength                integer     1
    commit_wait                          string
    commit_write                         string
    common_user_prefix                   string      C##
    compatible                           string      12.1.0.2.0
    connection_brokers                   string      ((TYPE=DED
                                                     ((TYPE=EM
    control_file_record_keep_time        integer     7
    control_files                        string      G:\ORACLE\
                                                     TROL01.CTL
                                                     FAST_RECOV
                                                     NTROL02.CT
    control_management_pack_access       string      diagnostic
    core_dump_dest                       string      C:\app\dia
                                                     bal12\cdum
    cpu_count                            integer     4
    create_bitmap_area_size              integer     8388608
    create_stored_outlines               string
    cursor_bind_capture_destination      string      memory+disk
    cursor_sharing                       string      EXACT
    cursor_space_for_time                boolean     FALSE
    db_16k_cache_size                    big integer 0
    db_2k_cache_size                     big integer 0
    db_32k_cache_size                    big integer 0
    db_4k_cache_size                     big integer 0
    db_8k_cache_size                     big integer 0
    db_big_table_cache_percent_target    string      0
    db_block_buffers                     integer     0
    db_block_checking                    string      FALSE
    db_block_checksum                    string      TYPICAL
    db_block_size                        integer     8192
    db_cache_advice                      string      ON
    db_cache_size                        big integer 0
    db_create_file_dest                  string
    db_create_online_log_dest_1          string
    db_create_online_log_dest_2          string
    db_create_online_log_dest_3          string
    db_create_online_log_dest_4          string
    db_create_online_log_dest_5          string
    db_domain                            string
    db_file_multiblock_read_count        integer     128
    db_file_name_convert                 string
    db_files                             integer     200
    db_flash_cache_file                  string
    db_flash_cache_size                  big integer 0
    db_flashback_retention_target        integer     1440
    db_index_compression_inheritance     string      NONE
    db_keep_cache_size                   big integer 0
    db_lost_write_protect                string      NONE
    db_name                              string      ORCL
    db_performance_profile               string
    db_recovery_file_dest                string      G:\Oracle\
                                                     y_Area
    db_recovery_file_dest_size           big integer 12840M
    db_recycle_cache_size                big integer 0
    db_securefile                        string      PREFERRED
    db_ultra_safe                        string
    db_unique_name                       string      ORCL
    db_unrecoverable_scn_tracking        boolean     TRUE
    db_writer_processes                  integer     1
    dbwr_io_slaves                       integer     0
    ddl_lock_timeout                     integer     0
    deferred_segment_creation            boolean     TRUE
    dg_broker_config_file1               string      C:\APP\PRO
                                                     \DATABASE\
    dg_broker_config_file2               string      C:\APP\PRO
                                                     \DATABASE\
    dg_broker_start                      boolean     FALSE
    diagnostic_dest                      string
    disk_asynch_io                       boolean     TRUE
    dispatchers                          string      (PROTOCOL=
                                                     12XDB)
    distributed_lock_timeout             integer     60
    dml_locks                            integer     2076
    dnfs_batch_size                      integer     4096

    dst_upgrade_insert_conv              boolean     TRUE
    enable_ddl_logging                   boolean     FALSE
    enable_goldengate_replication        boolean     FALSE
    enable_pluggable_database            boolean     FALSE
    event                                string
    exclude_seed_cdb_view                boolean     TRUE
    fal_client                           string
    fal_server                           string
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    fast_start_parallel_rollback         string      LOW
    file_mapping                         boolean     FALSE
    fileio_network_adapters              string
    filesystemio_options                 string
    fixed_date                           string
    gcs_server_processes                 integer     0
    global_context_pool_size             string
    global_names                         boolean     FALSE
    global_txn_processes                 integer     1
    hash_area_size                       integer     131072
    heat_map                             string
    hi_shared_memory_address             integer     0
    hs_autoregister                      boolean     TRUE
    ifile                                file
    inmemory_clause_default              string
    inmemory_force                       string      DEFAULT
    inmemory_max_populate_servers        integer     0
    inmemory_query                       string      ENABLE
    inmemory_size                        big integer 0
    inmemory_trickle_repopulate_servers_ integer     1
    percent
    instance_groups                      string
    instance_name                        string      ORCL
    instance_number                      integer     0
    instance_type                        string      RDBMS
    instant_restore                      boolean     FALSE
    java_jit_enabled                     boolean     TRUE
    java_max_sessionspace_size           integer     0
    java_pool_size                       big integer 0
    java_restrict                        string      none
    java_soft_sessionspace_limit         integer     0
    job_queue_processes                  integer     1000
    large_pool_size                      big integer 0
    ldap_directory_access                string      NONE
    ldap_directory_sysauth               string      no
    license_max_sessions                 integer     0
    license_max_users                    integer     0
    license_sessions_warning             integer     0
    listener_networks                    string
    local_listener                       string      (ADDRESS =
                                                     = i184borac
                                                     (NET) (PORT =
    lock_name_space                      string
    lock_sga                             boolean     FALSE
    log_archive_config                   string
    log_archive_dest                     string
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    log_archive_dest_2                   string
    log_archive_dest_20                  string
    log_archive_dest_21                  string
    log_archive_dest_22                  string
    log_archive_dest_23                  string
    log_archive_dest_24                  string
    log_archive_dest_25                  string
    log_archive_dest_26                  string
    log_archive_dest_27                  string
    log_archive_dest_28                  string
    log_archive_dest_29                  string
    log_archive_dest_3                   string
    log_archive_dest_30                  string
    log_archive_dest_31                  string
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string      enable
    log_archive_dest_state_10            string      enable
    log_archive_dest_state_11            string      enable
    log_archive_dest_state_12            string      enable
    log_archive_dest_state_13            string      enable
    log_archive_dest_state_14            string      enable
    log_archive_dest_state_15            string      enable
    log_archive_dest_state_16            string      enable
    log_archive_dest_state_17            string      enable
    log_archive_dest_state_18            string      enable
    log_archive_dest_state_19            string      enable
    log_archive_dest_state_2             string      enable
    log_archive_dest_state_20            string      enable
    log_archive_dest_state_21            string      enable
    log_archive_dest_state_22            string      enable
    log_archive_dest_state_23            string      enable
    log_archive_dest_state_24            string      enable
    log_archive_dest_state_25            string      enable
    log_archive_dest_state_26            string      enable
    log_archive_dest_state_27            string      enable
    log_archive_dest_state_28            string      enable
    log_archive_dest_state_29            string      enable
    log_archive_dest_state_3             string      enable
    log_archive_dest_state_30            string      enable
    log_archive_dest_state_31            string      enable
    log_archive_dest_state_4             string      enable
    log_archive_dest_state_5             string      enable
    log_archive_dest_state_6             string      enable
    log_archive_dest_state_7             string      enable
    log_archive_dest_state_8             string      enable
    log_archive_dest_state_9             string      enable
    log_archive_duplex_dest              string
    log_archive_format                   string      ARC%S_%R.%
    log_archive_max_processes            integer     4
    log_archive_min_succeed_dest         integer     1
    log_archive_start                    boolean     TRUE
    log_archive_trace                    integer     0
    log_buffer                           big integer 28784K
    log_checkpoint_interval              integer     0
    log_checkpoint_timeout               integer     1800
    log_checkpoints_to_alert             boolean     FALSE
    log_file_name_convert                string
    max_dispatchers                      integer
    max_dump_file_size                   string      unlimited
    max_enabled_roles                    integer     150
    max_shared_servers                   integer
    max_string_size                      string      STANDARD
    memory_max_target                    big integer 0
    memory_target                        big integer 0
    nls_calendar                         string      GREGORIAN
    nls_comp                             string      BINARY
    nls_currency                         string      u
    nls_date_format                      string      DD-MON-RR
    nls_date_language                    string      ENGLISH
    nls_dual_currency                    string      C
    nls_iso_currency                     string      UNITED KIN
    nls_language                         string      ENGLISH
    nls_length_semantics                 string      BYTE
    nls_nchar_conv_excp                  string      FALSE
    nls_numeric_characters               string      .,
    nls_sort                             string      BINARY
    nls_territory                        string      UNITED KIN
    nls_time_format                      string      HH24.MI.SS
    nls_time_tz_format                   string      HH24.MI.SS
    nls_timestamp_format                 string      DD-MON-RR
    nls_timestamp_tz_format              string      DD-MON-RR
    noncdb_compatible                    boolean     FALSE


    object_cache_max_size_percent        integer     10
    object_cache_optimal_size            integer     102400
    olap_page_pool_size                  big integer 0
    open_cursors                         integer     300
    open_links                           integer     4
    open_links_per_instance              integer     4
    optimizer_adaptive_features          boolean     TRUE
    optimizer_adaptive_reporting_only    boolean     FALSE
    optimizer_capture_sql_plan_baselines boolean     FALSE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      12.1.0.2
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_inmemory_aware             boolean     TRUE
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    optimizer_use_invisible_indexes      boolean     FALSE
    optimizer_use_pending_statistics     boolean     FALSE
    optimizer_use_sql_plan_baselines     boolean     TRUE
    os_authent_prefix                    string      OPS$
    os_roles                             boolean     FALSE
    parallel_adaptive_multi_user         boolean     TRUE
    parallel_automatic_tuning            boolean     FALSE
    parallel_degree_level                integer     100
    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      MANUAL
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     FALSE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     160
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     16
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     FALSE
    parallel_server_instances            integer     1
    parallel_servers_target              integer     64
    parallel_threads_per_cpu             integer     2
    pdb_file_name_convert                string
    pdb_lockdown                         string
    pdb_os_credential                    string
    permit_92_wrap_format                boolean     TRUE
    pga_aggregate_limit                  big integer 4914M
    pga_aggregate_target                 big integer 2457M
    plscope_settings                     string      IDENTIFIER
    plsql_ccflags                        string
    plsql_code_type                      string      INTERPRETED
    plsql_debug                          boolean     FALSE
    plsql_optimize_level                 integer     2
    plsql_v2_compatibility               boolean     FALSE
    plsql_warnings                       string      DISABLE:AL
    pre_page_sga                         boolean     TRUE
    processes                            integer     300
    processor_group_name                 string
    query_rewrite_enabled                string      TRUE
    query_rewrite_integrity              string      enforced
    rdbms_server_dn                      string
    read_only_open_delayed               boolean     FALSE
    recovery_parallelism                 integer     0
    recyclebin                           string      on
    redo_transport_user                  string
    remote_dependencies_mode             string      TIMESTAMP
    remote_listener                      string
    remote_login_passwordfile            string      EXCLUSIVE
    remote_os_authent                    boolean     FALSE
    remote_os_roles                      boolean     FALSE
    replication_dependency_tracking      boolean     TRUE
    resource_limit                       boolean     TRUE
    resource_manager_cpu_allocation      integer     4
    resource_manager_plan                string
    result_cache_max_result              integer     5
    result_cache_max_size                big integer 46208K
    result_cache_mode                    string      MANUAL
    result_cache_remote_expiration       integer     0
    resumable_timeout                    integer     0
    rollback_segments                    string
    sec_case_sensitive_logon             boolean     TRUE
    sec_max_failed_login_attempts        integer     3
    sec_protocol_error_further_action    string      (DROP, 3)
    sec_protocol_error_trace_action      string      PATH
    sec_return_server_release_banner     boolean     FALSE
    serial_reuse                         string      disable
    service_names                        string      ORCL
    session_cached_cursors               integer     50
    session_max_open_files               integer     10
    sessions                             integer     472
    sga_max_size                         big integer 9024M
    sga_target                           big integer 9024M
    shadow_core_dump                     string      none
    shared_memory_address                integer     0
    shared_pool_reserved_size            big integer 70464307
    shared_pool_size                     big integer 0
    shared_server_sessions               integer
    shared_servers                       integer     1
    skip_unusable_indexes                boolean     TRUE
    smtp_out_server                      string
    sort_area_retained_size              integer     0
    sort_area_size                       integer     65536
    spatial_vector_acceleration          boolean     FALSE
    spfile                               string      C:\APP\PRO
                                                     \DATABASE\
    sql92_security                       boolean     FALSE
    sql_trace                            boolean     FALSE
    sqltune_category                     string      DEFAULT
    standby_archive_dest                 string      %ORACLE_HO
    standby_file_management              string      MANUAL
    star_transformation_enabled          string      TRUE
    statistics_level                     string      TYPICAL
    streams_pool_size                    big integer 0
    tape_asynch_io                       boolean     TRUE
    temp_undo_enabled                    boolean     FALSE
    thread                               integer     0
    threaded_execution                   boolean     FALSE
    timed_os_statistics                  integer     0
    timed_statistics                     boolean     TRUE
    trace_enabled                        boolean     TRUE
    tracefile_identifier                 string
    transactions                         integer     519
    transactions_per_rollback_segment    integer     5
    undo_management                      string      AUTO
    undo_retention                       integer     900
    undo_tablespace                      string      UNDOTBS1
    unified_audit_sga_queue_size         integer     1048576
    use_dedicated_broker                 boolean     FALSE
    use_indirect_data_buffers            boolean     FALSE
    use_large_pages                      string      TRUE
    user_dump_dest                       string      C:\APP\PRO
                                                     \RDBMS\TRA
    utl_file_dir                         string
    workarea_size_policy                 string      AUTO
    xml_db_events                        string      enable

    Thanks in advance

    First, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behaviour on each system.

    Your 10g plan has a total cost of 23,959 while your 12c plan has a cost of 95,373, which is almost 4 times more.  All other things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Again, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks as if something has gone wrong in the 10g optimizer plan - maybe a bug, which I believe Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of has a cost of only 23,949 or 24K.  So the maths does not add up in 10g.  In other words, maybe it is not really an optimal plan, because the 10g optimizer may have got its sums wrong and 12c might have got them right.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the single main component of the final total cost of 95,373.

    Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses.  And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no ...".  However, hints are very difficult to use properly and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
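    A minimal sketch of what such a hinted test might look like (the alias CF matches the fragment above; the column list and WHERE clause here are placeholders, not the poster's full statement):

      SELECT /*+ FULL(CF) */ CF.vehicle_chass_no
      FROM   claim_facts CF
      WHERE  ROWNUM <= 100;   -- placeholder predicate for a quick plan check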

    Both plans are parallel ones, which means that the query is broken down into separate, independent steps and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads happening at the same time.  (That is a simplification of how parallel query really works.)  If the 10g and 12c systems do not have the SAME hardware configuration, then you would naturally expect different elapsed times to run the same parallel queries.  See the end of this answer for the additional information you could provide.

    But I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 or more CPU cores and hundreds of disks in a big drive array, and maybe the 12c system has only 4 CPU cores and 4 disks.  That would explain a lot about why 12c takes hours to run when 10g takes only 30 minutes.

    Remember what I said in my last reply:

    "Without any contrary information I guess the filter conditions are very low, the optimizer believes he needs of most of the data in the table and that a table scan or even a limited index scan complete is the"best"way to run this SQL.  In other words, your query takes just time because your tables are big and your application has most of the data in these tables. "

    When dealing with very large tables and doing a parallel full table scan on them, the most important factor is the amount of raw hardware you throw at it.  A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, at least.  That could be the main reason why the 12c system is much slower than the 10g system, rather than the execution plan itself.

    You could also provide us with the following information, which would allow a better analysis:

    • Row counts in each table referenced in the query, and whether any of them are partitioned.
    • Hardware configurations for both systems - the 10g and the 12c.  Number of processors, their model and speed, physical memory, number of disks.
    • The disks are very important - do the 10g and 12c systems have similar disk subsystems?  Are you using plain old disks, or do you have a SAN, or some sort of disk array?  Are the drive arrays identical on both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
    • What is the size of the SGA on both systems?  The values of MEMORY_TARGET and SGA_TARGET (see the example after this list).
    • Does the CLAIM_FACTS_AK9 index exist on the 10g system?  I guess it does, but I would like that confirmed to be sure.
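    For the memory figures, something like this on each system would be enough (standard SQL*Plus commands, nothing system-specific):

      SHOW PARAMETER memory_target
      SHOW PARAMETER sga_target
      SHOW PARAMETER pga_aggregate_target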

    John Brady

  • Do I understand SGA_TARGET and SGA_MAX_SIZE correctly?

    Let's say that I set SGA_TARGET to 512 MB and SGA_MAX_SIZE to 1024 MB.

    As I understand it, this means that the total size of the SGA will never exceed 1024 MB (that is the "hard cap"). Oracle will try to keep things around 512 MB, but can grow up to 1024 MB, depending on need.

    Is this correct? Otherwise, I don't see the point of SGA_MAX_SIZE vs. SGA_TARGET.

    BTW... I know that MEMORY_TARGET and MEMORY_MAX_TARGET are the new 11g params, but Oracle XE comes with those turned off and the 10g SGA_* params and PGA_AGGREGATE_TARGET set instead...?

    That is not correct.

    It means that you can increase SGA_TARGET up to SGA_MAX_SIZE without having to bounce the database.
    Also, Oracle won't increase SGA_TARGET on its own; it will just make sure that the sizes of the different pools do not exceed SGA_TARGET.
    You are still responsible for managing SGA_TARGET.
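    A small illustration with the values above (assumed figures):

      -- sga_max_size = 1024M was fixed at startup; sga_target can move below it online
      ALTER SYSTEM SET sga_target = 768M;    -- succeeds, no restart needed
      ALTER SYSTEM SET sga_target = 1200M;   -- fails: cannot exceed sga_max_size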

    ------------
    Sybrand Bakker
    Senior Oracle DBA

  • Increase SGA_TARGET and SGA_MAX_SIZE

    Oracle Version: 10.2.0.4
    OS: Windows 64-bit

    Hi all,
    I want to perform the task below:
    increase SGA_TARGET to 3 GB, where 1 GB is allocated to the shared pool and 2 GB to the buffer cache. (Our SGA_MAX_SIZE = SGA_TARGET.)
    Kindly share your thoughts on the points below:
    Is there any limit on the SGA_TARGET size on 64-bit, like the 4 GB limit on a 32-bit system?
    Is there any negative impact on the database when we increase SGA_TARGET and SGA_MAX_SIZE?
    Is there a relationship between RAM and SGA_TARGET / SGA_MAX_SIZE?
    Thank you...

    Hi,

    Is there any limit on the SGA_TARGET size on 64-bit, like the 4 GB limit on a 32-bit system?

    See
    Windows memory configuration: 32-bit and 64-bit [ID 873752.1]

    Is there any negative impact on the database when we increase SGA_TARGET and SGA_MAX_SIZE?

    The SGA_MAX_SIZE parameter is not dynamic in Oracle 10g.

    To increase (or decrease) the SGA size for your instance:
    1. Make the entry in the pfile as sga_max_size = value. (Usually the pfile resides in ORACLE_HOME\database.) *
    2. Shut down the DB.
    3. Start the DB with the pfile: "SQL> STARTUP PFILE='path of the edited pfile'"
    4. Your DB is now running on the pfile; to carry the changes into the spfile: "SQL> CREATE SPFILE='path of the spfile' FROM PFILE"
    5. SHUTDOWN IMMEDIATE
    6. Start the DB with the newly created spfile: "SQL> STARTUP SPFILE='path of the spfile'"
    7. "SQL> SHOW PARAMETER SGA" to verify your changes have taken effect.
    8. You may need to increase the value of the SGA_TARGET parameter if it has been set. It is a dynamic parameter and can be changed with the ALTER SYSTEM command. (A consolidated example follows below.)

    * If you do not have a pfile, create one with "SQL> CREATE PFILE='pfile path' FROM SPFILE".
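    Put together, the whole sequence looks roughly like this (the file paths and the 3G figure are placeholders):

      SQL> CREATE PFILE='C:\init_edit.ora' FROM SPFILE;
      -- edit C:\init_edit.ora and set sga_max_size=3G (example value)
      SQL> SHUTDOWN IMMEDIATE
      SQL> STARTUP PFILE='C:\init_edit.ora'
      SQL> CREATE SPFILE FROM PFILE='C:\init_edit.ora';
      SQL> SHUTDOWN IMMEDIATE
      SQL> STARTUP
      SQL> SHOW PARAMETER sga
      SQL> ALTER SYSTEM SET sga_target=3G;   -- dynamic, up to sga_max_size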

    Details:
    http://forums.Oracle.com/forums/thread.jspa?MessageID=9514566#9514566

    Regards
    HELIOS

  • MEMORY_TARGET questions

    Hello Sir,

    When using MEMORY_TARGET and MEMORY_MAX_TARGET, I know you are supposed to set PGA_AGGREGATE_TARGET and SGA_TARGET to zero. But what do you guys usually do with your SGA_MAX_SIZE setting?

    Thinking about this, I guess that if there have been no performance problems I would leave it the same and set my MEMORY_MAX_TARGET to the sum of SGA_MAX_SIZE and max(pga_aggregate_target, maximum PGA ever allocated). Do you agree with that?

    I have seen on some customer sites that SGA_MAX_SIZE and MEMORY_MAX_SIZE were set to the same value. I guess that is not wise because it would allow the SGA to use all of the memory (although Oracle should not allow this). Would you avoid this configuration?

    Thank you!

    Personally, unless you have reason to believe that Oracle will allow the SGA to get too big and the PGA to get too small, which seems unlikely, I would let SGA_MAX_SIZE = MEMORY_MAX_TARGET. If SGA_MAX_SIZE < MEMORY_MAX_TARGET, you're creating a floor on the size of the PGA and a limit on the size of the SGA for no apparent reason (again, assuming you don't have a particular reason to believe that Oracle's memory management is having a problem with this particular database workload). If you're going to let Oracle manage the memory, I'd just let Oracle manage the memory.

    Justin

  • Query runs slow with a large SGA and fast with a small SGA

    Hello

    We have a situation where one insert runs slow in PROD and fast in QA. Both are the same database version - Oracle 10.2.0.4 on HP-UX 11.31. To rule out the databases running on different servers, we copied our Production database to the same server where the QA database runs and started it with the PROD init.ora, which has SGA_TARGET 6 GB and SGA_MAX_SIZE 7 GB. For the QA database, SGA_MAX_SIZE is 700 MB and SGA_TARGET is 600 MB. Both are running on the same server and with the same data; we refreshed QA with data from PROD. If we start the QA database with the PROD init.ora, QA also behaves the same way as PROD.

    This problem happens only with this specific insert. Here is the tkprof output for this specific statement. Can someone please interpret it for me? I am poor at SQL tuning :-( Why does the statement behave oddly with the PROD SGA size? Generally, we would think that a larger SGA should give better performance.

    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1  56710.39   56067.75       7343  311186373          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2  56710.39   56067.76       7343  311186373          0           0

    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 27 (TEST)

    Rows        Row Source Operation
    ----------  ---------------------------------------------------
    0           SEQUENCE CRDETAIL (cr=0 pr=0 pw=0 time=29 us)
    0           VIEW (cr=0 pr=0 pw=0 time=21 us)
    0           SORT GROUP BY (cr=0 pr=0 pw=0 time=20 us)
    401         HASH RIGHT SEMI JOIN (cr=23299915 pr=7343 pw=0 time=93982966 us)
    237         TABLE ACCESS BY INDEX ROWID CR_STRUCTURE_VALUES2 (cr=96 pr=0 pw=0 time=504 us)
    253         INDEX RANGE SCAN CR_STRUCTURE_VALUES2_PK (cr=4 pr=0 pw=0 time=278 us)(object id 1467582)
    841         TABLE ACCESS BY INDEX ROWID CR_COST_REPOSITORY (cr=23306003 pr=7343 pw=0 time=94546465 us)
    1317368058  NESTED LOOPS (cr=79721182 pr=7343 pw=0 time=18565176955 us)
    26912       VIEW (cr=9874 pr=7343 pw=0 time=5269231 us)
    26912       MINUS (cr=9874 pr=7343 pw=0 time=5242317 us)
    27462       SORT UNIQUE (cr=9627 pr=7329 pw=0 time=5040815 us)
    271564      TABLE ACCESS FULL CR_STRUCTURE_VALUES2 (cr=9627 pr=7329 pw=0 time=1357961 us)
    568         SORT UNIQUE (cr=247 pr=14 pw=0 time=43467 us)
    2357        TABLE ACCESS BY INDEX ROWID CR_STRUCTURE_VALUES2 (cr=247 pr=14 pw=0 time=14751 us)
    2357        INDEX RANGE SCAN CR_STRUCTURE_VALUES2_PK (cr=11 pr=14 pw=0 time=10028 us)(object id 1467582)
    1317341146  INDEX RANGE SCAN CRCR_MN_IX (cr=79711308 pr=0 pw=0 time=50420511 us)(object id 1469401)


    Rows        Execution Plan
    ----------  ---------------------------------------------------
    0           INSERT STATEMENT   MODE: CHOOSE
    0            SEQUENCE OF "CRDETAIL" (SEQUENCE)
    0             VIEW
    0              SORT (GROUP BY)
    401             HASH JOIN (RIGHT SEMI)
    237              TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF "CR_STRUCTURE_VALUES2" (TABLE)
    253               INDEX   MODE: ANALYZED (RANGE SCAN) OF "CR_STRUCTURE_VALUES2_PK" (INDEX (UNIQUE))
    841              TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF "CR_COST_REPOSITORY" (TABLE)
    1317368058        NESTED LOOPS
    26912              VIEW
    26912               MINUS
    27462                SORT (UNIQUE)
    271564                TABLE ACCESS   MODE: ANALYZED (FULL) OF "CR_STRUCTURE_VALUES2" (TABLE)
    568                  SORT (UNIQUE)
    2357                  TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF "CR_STRUCTURE_VALUES2" (TABLE)
    2357                   INDEX   MODE: ANALYZED (RANGE SCAN) OF "CR_STRUCTURE_VALUES2_PK" (INDEX (UNIQUE))
    1317341146         INDEX   MODE: ANALYZED (RANGE SCAN) OF "CRCR_MN_IX" (INDEX)

    ********************************************************************************

    And here is the statement in question:

    INSERT
    INTO cr_allocations_stg
      (
        "ID",
        "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "COST_ELEMENT",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "ORDER_NUMBER",
        " FUNDING_PROJECT",
        "POSTING_ORDER",
        "POSTING_COST_CENTER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        "DR_CR_ID",
        "LEDGER_SIGN",
        "QUANTITY",
        "AMOUNT",
        "MONTH_NUMBER",
        "MONTH_PERIOD",
        "GL_JOURNAL_CATEGORY",
        "AMOUNT_TYPE",
        "ALLOCATION_ID",
        "TARGET_CREDIT",
        "CROSS_CHARGE_COMPANY"
      )
    SELECT crdetail.nextval,
      "COMPANY",
      "GL_ACCOUNT",
      "COST_CENTER",
      '5253000',
      "PROFIT_CENTER" ,
      "MASTER_ORDER",
      "ORDER_NUMBER",
      "FUNDING_PROJECT",
      ' ',
      "POSTING_COST_CENTER",
      "ORIG_COST_ELEMENT",
      "ORIG_COST_CENTER",
      "ORIG_PROFIT_CENTER",
      " TRADING_PARTNER",
      "WORK_ORDER_NUMBER",
      CASE
        WHEN amount > 0
        THEN 1
        ELSE -1
      END,
      1,0,
      ROUND(amount * 0.0574000000, 2),
      month_number,
      0,
      '593',
      1 ,
      7,
      'TARGET',
      ' '
    FROM
      (SELECT "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "FUNDING_PROJECT",
        "POSTING_COST_CENTER",
        "ORDER_NUMBER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        month_number,
        0,
        SUM(amount) amount,
        SUM(quantity) quantity
      FROM CR_COST_REPOSITORY
      WHERE (amount_type    = 1 )
      AND (month_number     = 201404)
      AND ( "MASTER_ORDER" IN MASTER_ORDER
      AND EXISTS
        (SELECT 1
        FROM
          (SELECT SUBSTR(ELEMENT_VALUE, 1, DECODE(INSTR(ELEMENT_VALUE, ':'), 0, LENGTH(ELEMENT_VALUE) + 1, INSTR(ELEMENT_VALUE, ':')) - 1) AS ELEMENT
          FROM CR_STRUCTURE_VALUES2
          WHERE STRUCTURE_ID       = 2
          AND DETAIL_BUDGET        = 1
          AND STATUS               = 1
          AND UPPER(PARENT_VALUE) IN ('ELECTRIC ALL OTHER','ELECTRIC COR')
          MINUS
          SELECT SUBSTR(ELEMENT_VALUE, 1, DECODE(INSTR(ELEMENT_VALUE, ':'), 0, LENGTH(ELEMENT_VALUE) + 1, INSTR(ELEMENT_VALUE, ':')) - 1) AS ELEMENT
          FROM CR_STRUCTURE_VALUES2
          WHERE STRUCTURE_ID      = 9
          AND DETAIL_BUDGET       = 1
          AND STATUS              = 1
          AND UPPER(PARENT_VALUE) = 'A&G OH ORDER EXCLUSION'
          ) Z
        WHERE Z.ELEMENT = MASTER_ORDER
        )
      AND "GL_ACCOUNT"   <> '91081001'
      AND "COST_ELEMENT" IN COST_ELEMENT
      AND EXISTS
        (SELECT 1
        FROM CR_STRUCTURE_VALUES2 A
        WHERE A.STRUCTURE_ID = 5
        AND A.DETAIL_BUDGET  =1
        AND A.STATUS         = 1
        AND COST_ELEMENT     = A.ELEMENT_VALUE
        )
      AND "GL_ACCOUNT" NOT IN ('5100000','5325000','5327000')
      AND "SOURCE_ID"      <> '7' )
      GROUP BY "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "FUNDING_PROJECT",
        "POSTING_COST_CENTER",
        "ORDER_NUMBER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        month_number
      )
    
    

    I would appreciate your input on this.

    Thank you and best regards,

    Murali

    Possibility 1:

    Are you running with two different ORACLE_HOMEs - that is, potentially two different copies of the Oracle executable with different patch sets applied?

    Possibility 2:

    A change in SGA size is unlikely to directly affect the execution plan, but did it also change the size of the PGA at the same time? A change in pga_aggregate_target could affect the optimizer's choice of join mechanism. (Not a change in join order, but a switch from hash semi-join to nested loop is possible.)

    Possibility 3:

    If you leave db_file_multiblock_read_count unset, then the size that Oracle uses by default depends on the db_cache_size divided by processes; so if you have reduced the sga_target you have (implicitly or explicitly, no doubt) reduced the db_cache_size, and if you did not reduce processes in the same way, then the default db_file_multiblock_read_count has been reduced. Depending on what you have done with your system stats, this could change the cost of (for example) full tablescans, which could lead to a change in execution plan.
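    As a rough worked example of that arithmetic (figures invented, using the formula exactly as stated above):

      with db_cache_size = 2 GB, block size = 8 KB and processes = 300:
        2G / (300 * 8K) is roughly 873 blocks, so the 1MB/blocksize cap of 128 applies
      with db_cache_size = 200 MB, same block size and processes:
        200M / (300 * 8K) is roughly 85 blocks, so the default MBRC drops to about 85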

    It would be useful to see the output of a call to explain plan / dbms_xplan.display in both cases, so that we can see the variation in Oracle's estimates.
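    For example (generic usage; substitute the real statement for the placeholder query):

      EXPLAIN PLAN FOR
        SELECT 1 FROM dual;   -- placeholder: put the problem INSERT ... SELECT here

      SELECT * FROM TABLE(dbms_xplan.display);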

    Regards

    Jonathan Lewis

  • Automatic memory management on 11gR2

    Hi all

    I am confused by some weird behaviour of AMM.

    I'm using Oracle 11.2.0.3 64-bit SE1 on Windows 2008 R2 64-bit.

    Configured

    Memory_target = 60 gigs

    SGA_target = 0

    PGA_aggregate_target = 0

    Users are complaining about slowness, and when I check Task Manager I see Oracle using

    Oracle.EXE 33487400 KB (32 GB) with 30% CPU

    I want to know, if the system is slow, why does the memory not increase automatically from 32 GB to 50 or 60 GB?

    Thank you

    Cedric

    Hi Cedric,

    OK, this is what I guessed: your Oracle process allocates 28.25 GB of SGA (as shown in the V$MEMORY_DYNAMIC_COMPONENTS view) and V$PGASTAT shows that your process allocates 4.66 GB of PGA memory. The total is 33 GB, which is very close to what the Windows system tools report.

    The reason is that PGA is allocated on demand.

    As for the other messages, you clearly do not have a problem with your PGA. But (even if I don't trust the advisors very much), it would be prudent to increase SGA_TARGET to 42 GB.

    I recommend you disable Automatic Memory Management (AMM) by setting memory_target and memory_max_target to 0, and set SGA_TARGET to 42 GB and a PGA target of 15 GB.
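    A sketch of that change (the values come from the recommendation above; run it as SYSDBA and restart afterwards):

      ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
      ALTER SYSTEM SET memory_max_target = 0 SCOPE = SPFILE;
      ALTER SYSTEM SET sga_target = 42G SCOPE = SPFILE;
      ALTER SYSTEM SET pga_aggregate_target = 15G SCOPE = SPFILE;
      SHUTDOWN IMMEDIATE
      STARTUP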

    And please take a look at note 422844.1 about implementing large pages on Windows. Your system is certainly handling a lot of pages and this technique can reduce the memory pressure.

    HTH

    Laurent

  • Why is there a duplicate parameter when I create a pfile from the spfile on a RAC DB?

    Hello world

    I ran ORACHK on an 11g (11.2.0.4) RAC database and one of the findings is the check for duplicate parameter entries in the database init.ora (spfile). So I generated a pfile from the spfile for two of the databases, and I saw the following repeated entries:

    (BD1)

    amagua2.__streams_pool_size = 536870912

    amagua1.__streams_pool_size = 536870912

    amagua2.__streams_pool_size = 536870912

    amagua1.__streams_pool_size = 536870912

    (BD2)

    * .db_16k_cache_size = 1610612736

    METRO1.db_16k_cache_size = 2147483648

    METRO2.db_16k_cache_size = 2147483648

    I don't know why this happens. In BD2, when I select the parameter from v$parameter I see the larger value, which was the last one I assigned to the parameter, so I don't know why the lower value appears in my pfile.

    In BD1 the two lines appear to be just copies with the same value.

    I would like to fix this; any help will be appreciated.

    Thanks in advance.

    Giancarlo Giammaria

    (BD1)

    amagua2.__streams_pool_size = 536870912

    amagua1.__streams_pool_size = 536870912

    amagua2.__streams_pool_size = 536870912

    amagua1.__streams_pool_size = 536870912

    This is because Oracle manages the size of the SGA on its own. You have either MEMORY_TARGET or SGA_TARGET set. Oracle writes its Streams Pool size to the SPFILE so that when the instance is restarted it can resume with that SGA component sized at its last known value, based on the workload of your application. It is curious to see the entry twice, and I have no idea why. But you can always delete the duplicate entries and re-create the SPFILE.
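    A minimal sketch of that cleanup (file locations are placeholders; do it in a maintenance window and keep a backup of the original spfile):

      SQL> CREATE PFILE='/tmp/initbd1.ora' FROM SPFILE;
      -- edit /tmp/initbd1.ora and remove the duplicated *__streams_pool_size lines
      SQL> CREATE SPFILE='<original spfile location>' FROM PFILE='/tmp/initbd1.ora';
      -- restart the instances so they pick up the rebuilt spfile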

    (BD2)

    * .db_16k_cache_size = 1610612736

    METRO1.db_16k_cache_size = 2147483648

    METRO2.db_16k_cache_size = 2147483648

    In the case above, you have one setting specifically for the METRO1 instance and another setting specifically for the METRO2 instance. The global ('*') setting applies to all instances, but an instance-specific setting overrides the global one. If you have only two instances, then the global one does not matter much until you add a third instance.

    Now for Oracle RAC... it is normally a good idea to have all instances with the same cache size settings. It is rare that there is a good reason for instance 1 to have a different value than instance 2. In addition, if you use TAF and/or want a second instance to support connections after failover from the first instance, then you want your SGA components sized so that they can handle not only the current workload, but also the additional workload from the failed instance.

    Cheers,
    Brian

  • WARNING: pga_aggregate_target granules 96 may not exceed memory_tar

    Hi all

    11.2.0.1
    OEL 6.4

    I am monitoring our PROD alert log and I keep getting occurrences of this warning:
    WARNING: pga_aggregate_target granules 96 may not exceed memory_target (101) - sga_target (0) or min_sga (6)
    Can I just ignore this warning? Does it affect performance?


    Thank you

    zxy

    So your memory_target is 6464M, i.e. 6.1G, your pga_aggregate_target = 6174015488, which is about 6.1G, and your sga_max_size is 6.4 GB.

    According to the Oracle documentation:

    Make sure the MEMORY_TARGET/MEMORY_MAX_TARGET settings are at least the sum of SGA_MAX_SIZE/SGA_TARGET and PGA_AGGREGATE_TARGET.

    Your memory_target must therefore be more than sga_max_size + pga_aggregate_target.

    That is why your memory target should be more than 6.4 + 6.1 = 12.5 G.

    For example:

    ALTER SYSTEM SET memory_max_target = 13G SCOPE = SPFILE;
    ALTER SYSTEM SET memory_target = 13G SCOPE = SPFILE;

    and restart the DB.

    Hope this can help you :)

    Edited: sga_max_target changed to sga_max_size

  • HugePages - Instance ASM

    Hi all, I have a doubt about HugePages. We have enabled huge pages on our 64-bit Red Hat Linux 5 server. My doubt is that the documentation says I cannot use memory_target, so my databases are configured with pga_aggregate_target and sga_target, but my ASM instance is using memory_target and it works normally. I have GI + DB version 11.2.0.2.
    So I want to ask whether I can have the ASM instance on memory_target with huge pages configured under Linux without problems?

    Regards

    Huge pages and AMM are mutually exclusive, but this does not mean that you cannot run one database using AMM and another using huge pages or standard shared memory on the same machine. If you set the pga_aggregate_target and sga_target parameters in an instance that is configured for AMM, these values set the minimum values. The ASM instance requires AMM.

  • I use the same query on two different DBs (exp/imp) but it runs in very different times!

    Hello

    I have 2 databases: one on a remote server and one on my local machine. I used exp/imp to copy the data from the remote database to my local database. The structures, tables and indexes are the same.
    My local machine is more powerful than the remote computer (RAM, CPU count etc.).

    But when I connect to the remote database from my local computer, this query runs in 4 seconds (through a view). When I connect to my local db, the same query runs for 5 minutes.
    The number of rows returned is 18,500.

    What's wrong? Why are the Bytes values so different?

    Local explain plan is:
    SELECT STATEMENT ALL_ROWS Cost: 203 Bytes: 19,062,160 Cardinality: 18,329                                         
    Remote explain plan is:
    SELECT STATEMENT ALL_ROWS Cost: 226 Bytes: 3,855,214 Cardinality: 18,446                                         
    My explain plans (remote and local) and other data (autotrace, Oracle params) are in an Excel file (because this forum does not accept more than 30,000 words);

    http://www.eksicevap.com/tech/explainplan.xls
    for winrar:
    http://www.eksicevap.com/tech/explainplan.rar

    Thanks for help

    Oracle version information:
    Local:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    Remote:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi

    Edited by: Melike on 15.Mar.2011 02:42

    Edited by: Melike on 15.Mar.2011 03:39

    Melike says:
    Thanks, Johan, I'll try.
    The local Oracle params in my first post were from 3 days ago, or I mixed up my Oracle params files. (This problem has been going on since last week.)
    Now I have the current local Oracle params, and these values are worse:

    Local:
    pga_aggregate_target 0
    SGA_MAX_SIZE 536.870.912
    SGA_TARGET 0

    I have updated the new local Oracle params in my previous message.

    I'll try updating these values, but I hope my local Oracle does not fall over :)

    It can't be worse...
    Do you have memory_target (and memory_max_target) set? If so, you could add memory to those settings as well and only set pga_aggregate_target (no SGA or sga_max settings).

    My settings (on a test/dev/lab-box):
    System@oracle11 SQL> show parameter memory

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    hi_shared_memory_address             integer     0
    memory_max_target                    big integer 3G
    memory_target                        big integer 3G
    shared_memory_address                integer     0
    System@oracle11 SQL > show parameter sga

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 0
    sga_target                           big integer 0
    System@oracle11 SQL > show parameter pga

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    pga_aggregate_target                 big integer 0

    This way I let Oracle decide for itself, allocating memory wherever Oracle needs it most.

    Brgds
    Johan

  • session_cached_cursor


    Hello

    I am using the following queries (from the Oracle documentation) to check session_cached_cursors.

    Do I need to increase session_cached_cursors?  Would the cached cursors be part of the SGA or the PGA? Does the amount of memory needed increase in order to hold the cursors? I am working on 11gR2.

    Is it independent of multiple sessions created for the same user?

    The 'percent found in cache' figure is too high, and I don't know why...

    Please provide suggestions.

    SELECT a.value curr_cached, p.value max_cached, s.username, s.sid, s.serial#
    FROM v$sesstat a, v$statname b, v$session s, v$parameter2 p
    WHERE a.statistic# = b.statistic# AND s.sid = a.sid AND a.sid = &sid
    AND p.name = 'session_cached_cursors'
    AND b.name = 'session cursor cache count';

    CURR_CACHED MAX_CACHED USERNAME SID SERIAL#
    299         300        PMS      211 28771


    SELECT cach.value cache_hits, prs.value all_parses,
           round((cach.value / prs.value) * 100, 2) "% found in cache"
    FROM v$sesstat cach, v$sesstat prs, v$statname nm1, v$statname nm2
    WHERE cach.statistic# = nm1.statistic#
    AND nm1.name = 'session cursor cache hits'
    AND prs.statistic# = nm2.statistic#
    AND nm2.name = 'parse count (total)'
    AND cach.sid = &sid AND prs.sid = cach.sid;

    cache_hits all_parses % found in the cache
    36012 683 5272.62

    Hi Don,

    > My open cursors for the session are about 330.

    So if you want to keep more than 330 cursors open/cached, you need to increase open_cursors and session_cached_cursors. However, how your application handles cursors is up to your application, and you should not focus on ratios but rather on the response time that is spent on soft parsing (latches, locks, etc.).

    > I have just activated automatic memory management and configured the memory_max_target and memory_target settings. Can I ignore the sga and pga components now?

    How should we know? If you set memory_target big enough and Oracle does its automatic memory management job right, then yes.

    > cursor_sharing is set to EXACT; do I need to change that to FORCE to get adaptive cursor sharing?

    First of all, to answer the question: is your application using the cursor cache at all, i.e. PL/SQL cursor caching or the session cursor cache? Then you must answer the question whether ACS applies in your case. However, you should also consult the documentation to get the idea behind ACS, because the parameter you mention has nothing (directly) to do with ACS. It is only relevant in the case of literals, and if you are setting it in the context of your session_cached_cursors question, you probably have an application implementation problem rather than a database configuration problem.

    Regards

    Stefan
