Insertion performance woes

Hello

I have a simple application (actually a Perl script using the BerkeleyDB module) which inserts records into a Btree database. There are about 9 million records in total to be inserted. Keys are 4-byte longs (packed with Perl's pack("L", $id)), and the values are binary strings (various numeric fields packed together with pack("L*", @vals) and pack("f*", @vals)), with a minimum value length of 16 bytes, a maximum length of about 500 bytes, and a median length of approximately 25 bytes.

My script chugs along happily until it has inserted approximately 6 million records, then it gets really slow. Here's a graph of time taken versus the number of insertions:

http://limnus.com/~Ken/insertions.PNG

During the slow period, the disk has a lot of activity and the script's CPU usage drops to around 5 percent or less, so I conclude that it is I/O bound. When it starts to slow down, the script's memory size (VSZ) is about 2 GB, and the size of the database file is around 500 MB. The eventual size of the database, if I let it run to the end, is about 700 MB.

There is only 1 process opening the DB (this insert script), and it does not use environments or transactions; it opens the DB as follows:

$db = BerkeleyDB::Btree->new(-Filename => 'foo.db', -Cachesize => 10 * 1024 * 1024, -Flags => DB_CREATE);

Any suggestions on how to investigate this, or ideas on what's going on?

I ran it under a profiler (Devel::NYTProf), which indicated that the vast majority of the time is spent in BerkeleyDB::Common::db_put().

Thank you

-Ken

Quick question: what is your BDB cache size? Given that the final size of the database is 700 MB, could you run an experiment with a BDB cache greater than 700 MB? Does that give you better performance?
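
For example, a minimal sketch of that experiment in the poster's setup (the 1 GB figure is illustrative, chosen simply to exceed the ~700 MB final file size, not a recommendation):

    use BerkeleyDB;

    # Open with a cache larger than the final database, so Btree pages
    # are not evicted and re-read from disk once the file outgrows the
    # default 10 MB cache.
    my $db = BerkeleyDB::Btree->new(
        -Filename  => 'foo.db',
        -Cachesize => 1024 * 1024 * 1024,   # 1 GB, example value
        -Flags     => DB_CREATE,
    ) or die "cannot open foo.db: $BerkeleyDB::Error";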

Best regards.
Ashok

Tags: Database

Similar Questions

  • Insert - Performance problem

    Hi Experts,

I am new to Oracle. I ask for your help in fixing a performance problem with an insert query.

I have an insert query that fetches records from a partitioned table.

Background: the user indicates that the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: SGA is 9 GB, Windows - 4 GB; DB block size is 8192, db_file_multiblock_read_count is 128, and the overall PGA target is 2457M.

    The parameters are given below


    NAME                                 TYPE         VALUE
    ------------------------------------ ------------ ----------
    dbfips_140                           boolean      FALSE
    o7_dictionary_accessibility          boolean      FALSE
    active_instance_count                integer
    aq_tm_processes                      integer      1
    archive_lag_target                   integer      0
    asm_diskgroups                       string
    asm_diskstring                       string
    asm_power_limit                      integer      1
    asm_preferred_read_failure_groups    string
    audit_file_dest                      string       C:\APP\ADM
    audit_sys_operations                 boolean      TRUE
    audit_trail                          string       DB
    awr_snapshot_time_offset             integer      0
    background_core_dump                 string       partial
    background_dump_dest                 string       C:\APP\PRO
                                                      \RDBMS\TRA
    backup_tape_io_slaves                boolean      FALSE
    bitmap_merge_area_size               integer      1048576
    blank_trimming                       boolean      FALSE
    buffer_pool_keep                     string
    buffer_pool_recycle                  string
    cell_offload_compaction              string       ADAPTIVE
    cell_offload_decryption              boolean      TRUE
    cell_offload_parameters              string
    cell_offload_plan_display            string       AUTO
    cell_offload_processing              boolean      TRUE
    cell_offloadgroup_name               string
    circuits                             integer
    client_result_cache_lag              big integer  3000
    client_result_cache_size             big integer  0
    clonedb                              boolean      FALSE
    cluster_database                     boolean      FALSE
    cluster_database_instances           integer      1
    cluster_interconnects                string
    commit_logging                       string
    commit_point_strength                integer      1
    commit_wait                          string
    commit_write                         string
    common_user_prefix                   string       C##
    compatible                           string       12.1.0.2.0
    connection_brokers                   string       ((TYPE=DED
                                                      ((TYPE=EM
    control_file_record_keep_time        integer      7
    control_files                        string       G:\ORACLE\
                                                      TROL01.CTL
                                                      FAST_RECOV
                                                      NTROL02.CT
    control_management_pack_access       string       diagnostic
    core_dump_dest                       string       C:\app\dia
                                                      bal12\cdum
    cpu_count                            integer      4
    create_bitmap_area_size              integer      8388608
    create_stored_outlines               string
    cursor_bind_capture_destination      string       memory+statement
    cursor_sharing                       string       EXACT
    cursor_space_for_time                boolean      FALSE
    db_16k_cache_size                    big integer  0
    db_2k_cache_size                     big integer  0
    db_32k_cache_size                    big integer  0
    db_4k_cache_size                     big integer  0
    db_8k_cache_size                     big integer  0
    db_big_table_cache_percent_target    string       0
    db_block_buffers                     integer      0
    db_block_checking                    string       FALSE
    db_block_checksum                    string       TYPICAL
    db_block_size                        integer      8192
    db_cache_advice                      string       ON
    db_cache_size                        big integer  0
    db_create_file_dest                  string
    db_create_online_log_dest_1          string
    db_create_online_log_dest_2          string
    db_create_online_log_dest_3          string
    db_create_online_log_dest_4          string
    db_create_online_log_dest_5          string
    db_domain                            string
    db_file_multiblock_read_count        integer      128
    db_file_name_convert                 string
    db_files                             integer      200
    db_flash_cache_file                  string
    db_flash_cache_size                  big integer  0
    db_flashback_retention_target        integer      1440
    db_index_compression_inheritance     string       NONE
    db_keep_cache_size                   big integer  0
    db_lost_write_protect                string       NONE
    db_name                              string       ORCL
    db_performance_profile               string
    db_recovery_file_dest                string       G:\Oracle\
                                                      y_Area
    db_recovery_file_dest_size           big integer  12840M
    db_recycle_cache_size                big integer  0
    db_securefile                        string       PREFERRED
    db_ultra_safe                        string
    db_unique_name                       string       ORCL
    db_unrecoverable_scn_tracking        boolean      TRUE
    db_writer_processes                  integer      1
    dbwr_io_slaves                       integer      0
    ddl_lock_timeout                     integer      0
    deferred_segment_creation            boolean      TRUE
    dg_broker_config_file1               string       C:\APP\PRO
                                                      \DATABASE\
    dg_broker_config_file2               string       C:\APP\PRO
                                                      \DATABASE\
    dg_broker_start                      boolean      FALSE
    diagnostic_dest                      string
    disk_asynch_io                       boolean      TRUE
    dispatchers                          string       (PROTOCOL=
                                                      12XDB)
    distributed_lock_timeout             integer      60
    dml_locks                            integer      2076
    dnfs_batch_size                      integer      4096
    dst_upgrade_insert_conv              boolean      TRUE
    enable_ddl_logging                   boolean      FALSE
    enable_goldengate_replication        boolean      FALSE
    enable_pluggable_database            boolean      FALSE
    event                                string
    exclude_seed_cdb_view                boolean      TRUE
    fal_client                           string
    fal_server                           string
    fast_start_io_target                 integer      0
    fast_start_mttr_target               integer      0
    fast_start_parallel_rollback         string       LOW
    file_mapping                         boolean      FALSE
    fileio_network_adapters              string
    filesystemio_options                 string
    fixed_date                           string
    gcs_server_processes                 integer      0
    global_context_pool_size             string
    global_names                         boolean      FALSE
    global_txn_processes                 integer      1
    hash_area_size                       integer      131072
    heat_map                             string
    hi_shared_memory_address             integer      0
    hs_autoregister                      boolean      TRUE
    ifile                                file
    inmemory_clause_default              string
    inmemory_force                       string       DEFAULT
    inmemory_max_populate_servers        integer      0
    inmemory_query                       string       ENABLE
    inmemory_size                        big integer  0
    inmemory_trickle_repopulate_servers_ integer      1
    percent
    instance_groups                      string
    instance_name                        string       ORCL
    instance_number                      integer      0
    instance_type                        string       RDBMS
    instant_restore                      boolean      FALSE
    java_jit_enabled                     boolean      TRUE
    java_max_sessionspace_size           integer      0
    java_pool_size                       big integer  0
    java_restrict                        string       none
    java_soft_sessionspace_limit         integer      0
    job_queue_processes                  integer      1000
    large_pool_size                      big integer  0
    ldap_directory_access                string       NONE
    ldap_directory_sysauth               string       no
    license_max_sessions                 integer      0
    license_max_users                    integer      0
    license_sessions_warning             integer      0
    listener_networks                    string
    local_listener                       string       (ADDRESS=
                                                      = i184borac
                                                      (NET)(PORT=
    lock_name_space                      string
    lock_sga                             boolean      FALSE
    log_archive_config                   string
    log_archive_dest                     string
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    log_archive_dest_2                   string
    log_archive_dest_20                  string
    log_archive_dest_21                  string
    log_archive_dest_22                  string
    log_archive_dest_23                  string
    log_archive_dest_24                  string
    log_archive_dest_25                  string
    log_archive_dest_26                  string
    log_archive_dest_27                  string
    log_archive_dest_28                  string
    log_archive_dest_29                  string
    log_archive_dest_3                   string
    log_archive_dest_30                  string
    log_archive_dest_31                  string
    log_archive_dest_4                   string
    log_archive_dest_5                   string
    log_archive_dest_6                   string
    log_archive_dest_7                   string
    log_archive_dest_8                   string
    log_archive_dest_9                   string
    log_archive_dest_state_1             string       enable
    log_archive_dest_state_10            string       enable
    log_archive_dest_state_11            string       enable
    log_archive_dest_state_12            string       enable
    log_archive_dest_state_13            string       enable
    log_archive_dest_state_14            string       enable
    log_archive_dest_state_15            string       enable
    log_archive_dest_state_16            string       enable
    log_archive_dest_state_17            string       enable
    log_archive_dest_state_18            string       enable
    log_archive_dest_state_19            string       enable
    log_archive_dest_state_2             string       enable
    log_archive_dest_state_20            string       enable
    log_archive_dest_state_21            string       enable
    log_archive_dest_state_22            string       enable
    log_archive_dest_state_23            string       enable
    log_archive_dest_state_24            string       enable
    log_archive_dest_state_25            string       enable
    log_archive_dest_state_26            string       enable
    log_archive_dest_state_27            string       enable
    log_archive_dest_state_28            string       enable
    log_archive_dest_state_29            string       enable
    log_archive_dest_state_3             string       enable
    log_archive_dest_state_30            string       enable
    log_archive_dest_state_31            string       enable
    log_archive_dest_state_4             string       enable
    log_archive_dest_state_5             string       enable
    log_archive_dest_state_6             string       enable
    log_archive_dest_state_7             string       enable
    log_archive_dest_state_8             string       enable
    log_archive_dest_state_9             string       enable
    log_archive_duplex_dest              string
    log_archive_format                   string       ARC%S_%R.%
    log_archive_max_processes            integer      4
    log_archive_min_succeed_dest         integer      1
    log_archive_start                    boolean      TRUE
    log_archive_trace                    integer      0
    log_buffer                           big integer  28784K
    log_checkpoint_interval              integer      0
    log_checkpoint_timeout               integer      1800
    log_checkpoints_to_alert             boolean      FALSE
    log_file_name_convert                string
    max_dispatchers                      integer
    max_dump_file_size                   string       unlimited
    max_enabled_roles                    integer      150
    max_shared_servers                   integer
    max_string_size                      string       STANDARD
    memory_max_target                    big integer  0
    memory_target                        big integer  0
    nls_calendar                         string       GREGORIAN
    nls_comp                             string       BINARY
    nls_currency                         string       £
    nls_date_format                      string       DD-MON-RR
    nls_date_language                    string       ENGLISH
    nls_dual_currency                    string       C
    nls_iso_currency                     string       UNITED KIN
    nls_language                         string       ENGLISH
    nls_length_semantics                 string       BYTE
    nls_nchar_conv_excp                  string       FALSE
    nls_numeric_characters               string       .,
    nls_sort                             string       BINARY
    nls_territory                        string       UNITED KIN
    nls_time_format                      string       HH24.MI.SS
    nls_time_tz_format                   string       HH24.MI.SS
    nls_timestamp_format                 string       DD-MON-RR
    nls_timestamp_tz_format              string       DD-MON-RR
    noncdb_compatible                    boolean      FALSE
    object_cache_max_size_percent        integer      10
    object_cache_optimal_size            integer      102400
    olap_page_pool_size                  big integer  0
    open_cursors                         integer      300
    open_links                           integer      4
    open_links_per_instance              integer      4
    optimizer_adaptive_features          boolean      TRUE
    optimizer_adaptive_reporting_only    boolean      FALSE
    optimizer_capture_sql_plan_baselines boolean      FALSE
    optimizer_dynamic_sampling           integer      2
    optimizer_features_enable            string       12.1.0.2
    optimizer_index_caching              integer      0
    optimizer_index_cost_adj             integer      100
    optimizer_inmemory_aware             boolean      TRUE
    optimizer_mode                       string       ALL_ROWS
    optimizer_secure_view_merging        boolean      TRUE
    optimizer_use_invisible_indexes      boolean      FALSE
    optimizer_use_pending_statistics     boolean      FALSE
    optimizer_use_sql_plan_baselines     boolean      TRUE
    os_authent_prefix                    string       OPS$
    os_roles                             boolean      FALSE
    parallel_adaptive_multi_user         boolean      TRUE
    parallel_automatic_tuning            boolean      FALSE
    parallel_degree_level                integer      100
    parallel_degree_limit                string       CPU
    parallel_degree_policy               string       MANUAL
    parallel_execution_message_size      integer      16384
    parallel_force_local                 boolean      FALSE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean      FALSE
    parallel_max_servers                 integer      160
    parallel_min_percent                 integer      0
    parallel_min_servers                 integer      16
    parallel_min_time_threshold          string       AUTO
    parallel_server                      boolean      FALSE
    parallel_server_instances            integer      1
    parallel_servers_target              integer      64
    parallel_threads_per_cpu             integer      2
    pdb_file_name_convert                string
    pdb_lockdown                         string
    pdb_os_credential                    string
    permit_92_wrap_format                boolean      TRUE
    pga_aggregate_limit                  big integer  4914M
    pga_aggregate_target                 big integer  2457M
    plscope_settings                     string       IDENTIFIER
    plsql_ccflags                        string
    plsql_code_type                      string       INTERPRETED
    plsql_debug                          boolean      FALSE
    plsql_optimize_level                 integer      2
    plsql_v2_compatibility               boolean      FALSE
    plsql_warnings                       string       DISABLE:AL
    pre_page_sga                         boolean      TRUE
    processes                            integer      300
    processor_group_name                 string
    query_rewrite_enabled                string       TRUE
    query_rewrite_integrity              string       enforced
    rdbms_server_dn                      string
    read_only_open_delayed               boolean      FALSE
    recovery_parallelism                 integer      0
    recyclebin                           string       on
    redo_transport_user                  string
    remote_dependencies_mode             string       TIMESTAMP
    remote_listener                      string
    remote_login_passwordfile            string       EXCLUSIVE
    remote_os_authent                    boolean      FALSE
    remote_os_roles                      boolean      FALSE
    replication_dependency_tracking      boolean      TRUE
    resource_limit                       boolean      TRUE
    resource_manager_cpu_allocation      integer      4
    resource_manager_plan                string
    result_cache_max_result              integer      5
    result_cache_max_size                big integer  46208K
    result_cache_mode                    string       MANUAL
    result_cache_remote_expiration       integer      0
    resumable_timeout                    integer      0
    rollback_segments                    string
    sec_case_sensitive_logon             boolean      TRUE
    sec_max_failed_login_attempts        integer      3
    sec_protocol_error_further_action    string       (DROP,3)
    sec_protocol_error_trace_action      string       TRACE
    sec_return_server_release_banner     boolean      FALSE
    serial_reuse                         string       disable
    service_names                        string       ORCL
    session_cached_cursors               integer      50
    session_max_open_files               integer      10
    sessions                             integer      472
    sga_max_size                         big integer  9024M
    sga_target                           big integer  9024M
    shadow_core_dump                     string       none
    shared_memory_address                integer      0
    shared_pool_reserved_size            big integer  70464307
    shared_pool_size                     big integer  0
    shared_server_sessions               integer
    shared_servers                       integer      1
    skip_unusable_indexes                boolean      TRUE
    smtp_out_server                      string
    sort_area_retained_size              integer      0
    sort_area_size                       integer      65536
    spatial_vector_acceleration          boolean      FALSE
    spfile                               string       C:\APP\PRO
                                                      \DATABASE\
    sql92_security                       boolean      FALSE
    sql_trace                            boolean      FALSE
    sqltune_category                     string       DEFAULT
    standby_archive_dest                 string       %ORACLE_HO
    standby_file_management              string       MANUAL
    star_transformation_enabled          string       TRUE
    statistics_level                     string       TYPICAL
    streams_pool_size                    big integer  0
    tape_asynch_io                       boolean      TRUE
    temp_undo_enabled                    boolean      FALSE
    thread                               integer      0
    threaded_execution                   boolean      FALSE
    timed_os_statistics                  integer      0
    timed_statistics                     boolean      TRUE
    trace_enabled                        boolean      TRUE
    tracefile_identifier                 string
    transactions                         integer      519
    transactions_per_rollback_segment    integer      5
    undo_management                      string       AUTO
    undo_retention                       integer      900
    undo_tablespace                      string       UNDOTBS1
    unified_audit_sga_queue_size         integer      1048576
    use_dedicated_broker                 boolean      FALSE
    use_indirect_data_buffers            boolean      FALSE
    use_large_pages                      string       TRUE
    user_dump_dest                       string       C:\APP\PRO
                                                      \RDBMS\TRA
    utl_file_dir                         string
    workarea_size_policy                 string       AUTO
    xml_db_events                        string       enable

    Thanks in advance

    Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.

    Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.

    Your 10g plan has a total cost of 23,959, while your 12c plan has a cost of 95,373, which is almost 4 times more.  All other things being equal, cost is supposed to relate directly to elapsed time, so I would expect the 12c plan to take much longer to run.

    From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table.  The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K.  Likewise, the first three scans produce very few rows in 10g, less than 1,000 rows each, while the last table scan produces 454K rows.

    It also looks as if something has gone wrong in the 10g optimizer - maybe a bug, which I believe Jonathan Lewis has commented on.  Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of has a cost of only 23,949, or 24K.  So the maths does not add up in 10g.  In other words, maybe it is not really the optimal plan, because the 10g optimizer may have got its sums wrong, and 12c might have got them right on the money.  But luckily this 'imperfect' 10g plan happens to run fairly fast for one reason or another.

    The 12c plan starts with similar table scans, but in a different order.  The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366.  That is the main component of the final total cost of 95,373.

    Suggestions for what to do?  It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses.  And there is other information that you have not provided - see later.

    You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no ...".  However, hints are very tricky to use and do not guarantee that you will get the desired end result.  So be careful.  For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like.  But I would not use such a simple, single hint in a production system, for a variety of reasons.  For testing only, it might help to show whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
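
    As a toy illustration of the hint syntax (a sketch only - the real query's column list and joins are not shown in this thread), note that the hint argument must match the table alias used in the query:

    -- Hypothetical shape of the hinted query; "cf" must be the alias
    -- that CLAIM_FACTS has in the actual FROM clause.
    SELECT /*+ FULL(cf) */ cf.vehicle_chass_no
    FROM   claim_facts cf
    WHERE  cf.vehicle_chass_no IS NOT NULL;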

    The two plans are parallel ones, which means that the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads going on at the same time.  (That is a simplification of how parallel query works.)  If the 10g and 12c systems do not have the same hardware configuration, then you would naturally expect different elapsed times when running the same parallel queries.  See the end of this answer for the additional information that you could provide.

    But I would be very suspicious of the hardware configuration of the two systems.  Maybe the 10g system has 16 CPU cores or more and 100's of disks in a big drive array, and maybe the 12c system has only 4 CPU cores and 4 disks.  That would explain a lot about why 12c takes hours to run when 10g takes only 30 minutes.

    Remember what I said in my last reply:

    "Without any contrary information, I guess the filter conditions are very weak, the optimizer believes it needs most of the data in the table, and a full table scan, or even a limited index scan, is the 'best' way to run this SQL.  In other words, your query takes a long time because your tables are big and your query needs most of the data in those tables."

    When dealing with very large tables and doing parallel full table scans on them, the most important factor is the amount of raw hardware you throw at them.  A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, or better.  That could be the main reason the 12c system is much slower than the 10g system, rather than the execution plan itself.

    You could also provide us with the following information, which would allow a better analysis:

    • Row counts of each table referenced in the query, and whether any of them are partitioned.
    • Hardware configurations of both systems - the 10g and the 12c: number of CPUs, their model and speed, physical memory, and number of disks.
    • The disks are very important - do the 10g and 12c systems have similar disk subsystems?  Are you using plain old disks, or do you have a SAN, or some sort of disk array?  Are the drive arrays identical in both systems?  How are they connected?  Fast Fibre Channel, or something else?  Maybe even network storage?
    • What is the size of the SGA in both systems?  Also the values of MEMORY_TARGET and SGA_TARGET.
    • Does the CLAIM_FACTS_AK9 index exist on the 10g system?  I assume it does, but I would like that confirmed to be sure.

    John Brady

  • Insert performance

    How does the performance compare when inserting a record into a normalized versus a denormalized structure?

    A normalized structure:
    table customer (cust_id, cust_name, cust_address);
    table orders (order_id, cust_id);


    A denormalized structure:
    table orders (order_id, cust_id, cust_name, cust_address);


    I understand that an insert performs better in a normalized schema than in a denormalized schema. Can someone explain why?
    When I insert a record into orders in the normalized schema, I need to join to the customer table. However, in the denormalized schema, I have to insert the cust_name and cust_address values too, but I don't have to join to any other table and can simply insert into orders. Can someone describe the logic of why the normalized structure can perform better?

    Thank you.

    I'm not sure what you're asking here. I don't see where you would have a join with the customer table on insert into orders. When the user creates the order, however it happens, they would need to specify the customer who is placing the order. In typical applications, this would be done with something like a drop-down list, probably with an ability to specify all or part of a customer's name before generating the list. One of the things contained in the drop-down list (possibly not seen by the user) would be the customer code. That would be inserted into the orders table once the user has completed the entire transaction.

    One way or another, you'll need to transform a customer name into a customer code. One application that I worked with a number of years ago allowed customers to place their own orders. One of the fields in the user table was the ID of the customer associated with that user. So when they logged into the system, the application also got the customer ID, and when they placed an order it was added to the order, along with the user ID, so that the client could tell which of their employees actually placed the order.
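
    For concreteness, a minimal sketch of that name-to-code lookup at insert time, reusing the tables from the question (the literal values are placeholders):

    -- Resolve the customer id from a (presumed unique) customer name
    -- and insert the order in one statement.
    INSERT INTO orders (order_id, cust_id)
    SELECT 1001, c.cust_id
    FROM   customer c
    WHERE  c.cust_name = 'ACME Ltd';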

    John

  • Bulk INSERT from SELECT - performance clarification

    I have 2 tables - emp_new & emp_old. I need to load all the data from emp_old into emp_new. There is a transaction_id column in emp_new whose value should be fetched from a main_transaction table that includes a region code column. Something like -

    TRANSACTION_ID REGION_CODE
    -------------- -----------
    100            US
    101            AMER
    102            APAC

    My bulk insert query looks like this -

    INSERT INTO emp_new
      (col1,
       col2,
       ...,
       transaction_id)
    SELECT
       col1,
       col2,
       ...,
       (SELECT transaction_id FROM main_transaction WHERE region_code = 'US')
    FROM emp_old

    There are millions of rows that need to be loaded this way. I would like to know whether the subselect that fetches the transaction_id will be re-executed for each row, which would be very expensive, and I'm actually looking for a way to avoid that. The main_transaction table is pre-loaded and its values will not change. Is there a way (via some HINT) to indicate that the subselect should not be re-run for each row?
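
    For what it's worth, one common way to sidestep the question entirely (a sketch, assuming the region filter returns exactly one row; the remaining columns are elided as in the original) is to join against the lookup table once rather than using a scalar subselect:

    INSERT INTO emp_new (col1, col2, transaction_id)
    SELECT e.col1, e.col2, t.transaction_id
    FROM   emp_old e
           CROSS JOIN (SELECT transaction_id
                       FROM   main_transaction
                       WHERE  region_code = 'US') t;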

    On a different note, the execution plan of the whole INSERT above looks like -

    ---------------------------------------------------------------------------
    | Id | Operation             | Name         | Rows | Bytes | Cost (%CPU) |
    ---------------------------------------------------------------------------
    |  0 | INSERT STATEMENT      |              |  11M |   54M |   6124  (4) |
    |  1 |  INDEX FAST FULL SCAN | EMPO_IE2_IDX |  11M |   54M |   6124  (4) |
    ---------------------------------------------------------------------------
    EMPO_IE2_IDX -> index on emp_old

    I'm surprised to see that the main_transaction table is not in the execution plan at all. Does this mean that the subselect is not executed for each row? Even so, at least for the first read, I would expect the table to appear in the plan.

    Can someone help me understand this?

    Why does the explain plan include no information about the main_transaction table?
    Can someone please clarify?

    As I said originally (and repeated in a later post) - probably because your PLAN_TABLE is an older version.
    More recent versions of PLAN_TABLE are required to correctly report execution plans for "more recent" features.

  • XMLType insert performance too slow

    Hello

    Goal: insert performance of 0.1 seconds.
    The server is in AMRS; the 11g client is in Asia. Network = intranet.
    Object size = 15000 bytes. Max size = 30000 bytes.

    Problem: executeUpdate takes about 0.33 sec. However, getTemporaryClob is very slow, around 1.33 sec.

    In getTemporaryClob():
    CLOB.createTemporary -> 0.2 s
    CLOB.getCharacterOutputStream -> 0.2 s
    Writer.close -> 0.8 s

    Alternative:
    XMLType.createXML(conn, InputStream) is far too slow.

    My approach:
    SQL = "INSERT INTO mytable VALUES (XMLType(?))"
    CLOB c = getTemporaryClob();
    preparedStatement.setObject(1, c);
    preparedStatement.executeUpdate();

    1. Is there another solution? (For example, could I send the bytes directly rather than creating a CLOB?)
    2. In the CLOB creation, for each object I create a temporary CLOB, get the stream & flush the writer, and close the writer and CLOB before I run the query. Is there a way to create the temporary CLOB only once and insert a number of records with the same temporary CLOB (I should then only need Writer.write(String))?

    Thanks in advance.

    The following may help: Re: performance problems loading XML into a remote table into a local table (also follow the AskTom URL).
    Try to understand the differences in XMLType storage, which are mainly the cause of such problems.
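
    One variation worth testing (a sketch; whether it helps depends on the storage model discussed in that link) is to bind the serialized XML as an ordinary string parameter and let the database construct the XMLType, avoiding the explicit temporary CLOB on the client:

    -- :xml_text is a bind variable carrying the serialized document
    -- (~15 KB here), bound with a plain setString/setClob on the client.
    INSERT INTO mytable VALUES (XMLType(:xml_text));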

  • XMLAGG query performance degrading exponentially

    There is a serious performance problem with my query using XMLAGG

    CREATE TABLE tmp_test_xml
    (
      acc_id   NUMBER(12),
      cus_dtls CLOB
    )

    INSERT INTO tmp_test_xml
    SELECT tab.acc_id acc_id,
           XMLSERIALIZE(DOCUMENT
             XMLELEMENT("holders",
               XMLAGG(
                 XMLELEMENT("holder",
                   XMLELEMENT("Gender", tab.sex_cde),
                   XMLELEMENT("Name", tab.name),
                   XMLFOREST(tab.drivers_licence AS "DL"),
                   XMLFOREST(tab.empr_name AS "emp_name"),
                   XMLELEMENT("Address", tab.addr),
                   ...
                 )))) AS cus_dtls
    FROM "TABLE" tab
    GROUP BY tab.acc_id

    The table "TABLE" has 3 million records.

    The insert performance degrades as follows:

    10K recs  - 1 sec
    30K recs  - 45 secs
    50K recs  - 3 mins
    100K recs - 16 mins

    Please let me know if I can improve the performance in some way. I can't imagine how I am going to insert 3 million records at this rate...

    There is no tablespace problem. I tried 1 million records without XMLAGG - 2 minutes.

    Is there another way to aggregate my XML data? In fact, I am trying to aggregate the data of all customers for a single account.

    Version information:

    ------------------------------------------------------------------------------------------------------------

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Cheers!

    Sofiane

    Why do you think the problem is with XMLAgg?

    Try with the following table definition

    CREATE TABLE tmp_test_xml
    (
      acc_ID     NUMBER(12),
      CUS_DTLS  XMLTYPE -- changed storage. Defaults to SECUREFILE BINARY XML in your version
    )
    

    and also remove the XMLSERIALIZE from your SQL statement.
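
    A sketch of the corresponding INSERT with the serialization removed (column list abbreviated as in the original post):

    INSERT INTO tmp_test_xml
    SELECT tab.acc_id,
           XMLELEMENT("holders",
             XMLAGG(
               XMLELEMENT("holder",
                 XMLELEMENT("Gender", tab.sex_cde),
                 XMLELEMENT("Name", tab.name)))) AS cus_dtls
    FROM "TABLE" tab
    GROUP BY tab.acc_id;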

    The performance degradation that you show reads like a memory leak; this change is just a test to see whether the leak is in the conversion of the XMLType to a CLOB.  You could also open an SR with Oracle Support on the issue, as they would be better placed to dig into it.

  • Insert slowdown induced by partition growth

    Hello gurus,

    Could you please explain what the possible causes are of an insert slowing down as the partition becomes fuller?

    On a table with weekly range partitions, an insert takes 8 secs for 200K rows when the partition is empty, and it climbs to 6000 secs for 200K rows when there are 90M rows in the partition. The data is in a tablespace with:

    extent_management = local
    allocation_type = uniform
    segment_space_management = auto

    Many of the inserts will have the same date, and all are done with /*+ APPEND */.
    I see no theoretical reason for insert performance to degrade in relation to the volume of the destination segment.

    10.2.0.4
    This means that the 200K-row inserts may have had to update a total of (say) 1,000 leaf blocks when the partition is empty, but have to update 200,000 different leaf blocks when the partition is full - and that might mean 200,000 physical disk reads.
    

    Sir,

    Please explain this.

    I think I understand:
    if it inserts 200K rows, then it updates a constant number of leaf blocks (whether the partition is full or empty). But since the index is small when the partition is empty, updating an index block consumes less time.
    When the partition is nearly full, the index is bigger, and updating the same number of leaf blocks means traversing more of it. So it consumes a little more time.

    But how can you say that the inserts will update 1,000 leaf blocks if the partition is empty, yet must update 200,000 leaf blocks if the partition is full? The number of leaf blocks modified should be the same.

    Kind regards
    S.K.

  • Dell XPS 15 L502X - restoring Windows 10 to 8.1 without the disks

    A few months ago, I installed Windows 10 on my XPS 15 L502X. It seemed OK, but a bit clunky. Now I'm sick of it. It is so much slower, it freezes things up, the Start button does not work, it takes forever to log in, the webcam does not always work, etc., etc.

    So, I want to go back to Windows 8.1. In fact, I grew to love that OS. The problem is that I don't have the installation disks. I tried to get some via the Dell website, but it came to a bit of a dead end, and I'm not sure if they will be provided after the expiry of the warranty.

    Also, I don't seem to have an activation code on the case - is that a problem?

    Any help appreciated.

    Thank you

    Chris

    You can get Windows 8.1 by following the instructions here:

    https://www.YouTube.com/watch?v=Is6I9B9kjFM&list=PL1RkaknDn7v9wEeh0YXFqOFsrmws-9fYn&index=1

    http://dellwindowsreinstallationguide.com/download-Windows-8-1-retail-and-OEM-ISO/

    But the performance woes are likely due to the fact that you did an in-place upgrade installation. Those always perform significantly worse. A clean install of Windows 10 TH2 will work much better:

    https://www.YouTube.com/playlist?list=PL1RkaknDn7v-Ucth4gt0U3BHVSY7oNkWr

    http://dellwindowsreinstallationguide.com/download-Windows-10-OEM-and-retail-ISO/

  • Cascade delete does not work

    Hello
    I am trying to achieve the following requirement:
    when a piece of equipment is deleted, its equipment instances in T_EQUIPMENT_INSTANCE should also be deleted.

    I have two tables in the DB, namely T_special_equipment (equipment_id, equipment_name, equipment_desc, ..., availability_count), where equipment_id is generated through a sequence and does not get its value from the portal.
    There is another table t_equipment_instance (equipment_id, instance_id); in this table instance_id is generated through a sequence.

    The requirement says that if a row is removed from T_special_equipment, then all related rows corresponding to the deleted equipment_id should also be deleted from the T_equipment_instance table.

    I tried enabling cascade delete on the association EquipmentEOtoInstanceEO, by selecting the association -> Behavior -> Composition Association -> Optimize for Database Cascade Delete.

    I also tried writing the following in doDML() of EquipmentEOImpl.java:

    if (operation == DML_DELETE)
    {
        int instanceCount = Integer.parseInt(getAvalibilityCount().toString());

        while (instanceCount > 0)
        {
            this.getEquipmentInstanceEO().removeCurrentRow();
            instanceCount--;
        }
    }


    Overall, my doDML method is as follows; it also has code for insert (but that is not my requirement, as the insert performs its task successfully):

    protected void doDML(int operation, TransactionEvent e) {
        if (operation == DML_INSERT) {
            SequenceImpl s =
                new SequenceImpl("SEQ_SPECIAL_EQUIPMENT", getDBTransaction());
            setAttribute("EquipmentId", s.getSequenceNumber());

            int instanceCount = Integer.parseInt(getAvalibilityCount().toString());
            SequenceImpl seqInstance = null;
            while (instanceCount > 0) {
                Row rr = this.getEquipmentInstanceEO().createRow();

                seqInstance = new SequenceImpl("SEQ_EQUIPMENT_INSTANCE_ID", getDBTransaction());
                rr.setAttribute("EquipmentInstanceId", seqInstance.getSequenceNumber());
                rr.setAttribute("EquipmentInstanceId",
                    new BigDecimal(Integer.parseInt(seqInstance.getSequenceNumber().toString())));
                rr.setAttribute("EquipmentId", this.getEquipmentId());

                getEquipmentInstanceEO().insertRow(rr);

                instanceCount--;
            }
        }

        if (operation == DML_DELETE)
        {
            int instanceCount = Integer.parseInt(getAvalibilityCount().toString());

            while (instanceCount > 0)
            {
                this.getEquipmentInstanceEO().removeCurrentRow();
                instanceCount--;
            }
        }
        super.doDML(operation, e);
    }


    but nothing has worked.

    Can someone give me some suggestions for this?

    You shouldn't need to write code for this. The docs say this about the cascade delete option:

    When selected, this option allows the composition to cascade the entity object's
    removal unconditionally to all of its composed child entities. If the related
    Optimize for Database Cascade Delete option is deselected, then the composed
    entity objects perform their normal DELETE statement at transaction commit
    time to make the changes permanent. If the option is selected, then the composed
    entities do not execute the DELETE statement, on the assumption that the
    database's ON DELETE CASCADE constraint will handle the deletion of the
    corresponding rows.

    So, if you have selected "Optimize for Database Cascade Delete", you have told ADF: "I have a constraint in the database with ON DELETE CASCADE set, so I expect the database to remove the child records for me." If you do not have such a constraint, turn "Optimize for Database Cascade Delete" off.
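
    For reference, a sketch of the kind of constraint that option assumes exists, using the table names from the question (the constraint name is hypothetical):

    ALTER TABLE t_equipment_instance
      ADD CONSTRAINT fk_equip_inst_equip   -- hypothetical name
      FOREIGN KEY (equipment_id)
      REFERENCES t_special_equipment (equipment_id)
      ON DELETE CASCADE;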

  • Export data from one schema to another in SQL

    Hello.

    I have 2 schemas. One is called MICC_ADMIN and the other is called MICC_PROD. What I want is to export from MICC_ADMIN and import into MICC_PROD. I tried to do it with the Data Workshop tool; one of the tables has approximately 19,000 records, so it freezes trying to export the data. So I was wondering if it is possible to do this via a SQL command. Thank you.

    Best regards, Bernardo.

    Hello

    Grant the select privilege on MICC_APEX_ADMIN.SRDB_MAIN to MICC_APEX_PROD:
    sign in as MICC_APEX_ADMIN and run

    GRANT SELECT ON MICC_APEX_ADMIN.SRDB_MAIN TO MICC_APEX_PROD;
    

    Then log in as MICC_APEX_PROD and perform the insert.
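
    A minimal sketch of that insert (assuming a table of the same name and structure already exists in the MICC_APEX_PROD schema):

    INSERT INTO srdb_main
    SELECT * FROM micc_apex_admin.srdb_main;

    COMMIT;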

    Kind regards
    Jari

  • Read the value of a page item after a popup window fills in its value

    I cannot read, insert, or run a dynamic action on a page item's data after the data is filled in from a popup window (passed back to the page item). But I can read, insert, and run a dynamic action on the data when it is filled in using AJAX and PL/SQL code...


    How do I read the value after the popup window fills in the page item?


    SKUD

    Setting a value using JavaScript changes it in your browser page, but it is not yet available in the session. This is why the report does not return the expected results.

    Before you refresh the report, you need to set the session state for that item.

    You can set it after assigning the new value, or before the report is refreshed.
    For interactive reports this is available built in:
    you can go to report attributes > Advanced Attributes -> Page Items to Submit and specify P2_X1 in the field (and any other page item that the report depends on).

    For standard reports, you can create a dynamic action that is triggered "before refresh" of the report region, select PL/SQL as the action type, give a dummy block such as BEGIN NULL; END; as the code, and specify the name of the page item in the Page Items to Submit field, so that it sets the value of the item in the session.

  • Timeline control question:

    Hello

    Total newbie Flash here,

    I have an animated text movie in my project that goes to frame 120 and stops. The contents of the project go to frame 300. Is there a simple way to make the entire project restart after frame 300?

    Sorry if this is a stupid question, but either I don't know the Flash lingo, or I am completely desperate and can't think of the answer. Thanks!

    I am using Textosterone text inserts in my project. What happens is that when I put them in the project, they end up far beyond frame 300. To solve the problem, I inserted frames at 300 for each movie and then put the script "gotoAndPlay(1)" on the last movie object played in the timeline. Is this what you had advised?

  • Difference in performance between CTAS and INSERT /*+ APPEND */ INTO

    Hi all

    I have a question about the CTAS and "INSERT /*+ APPEND */ INTO" statements.

    In the following case, I do not understand the difference in execution times on Exadata.

    The two source tables of the select (g02_f01 and g02_f02) do not have any partitions, but I could partition them by hash on the "ip_id" column, and I tried to run the same query with partitioned tables. It changed nothing in the execution times.

    I gathered statistics for all tables. Both tables have 13,176,888 records, and both have the same unique "ip_id" column. I want to combine these tables into a single table.

    First query:

    INSERT /*+ APPEND PARALLEL(a, 16) */ INTO dg.tiz_irdm_g02_cc a
      (ip_id, process_date, ...)
    SELECT /*+ PARALLEL(a, 16) PARALLEL(b, 16) */ *
    FROM tgarstg.tst_irdm_g02_f01 a,
         tgarstg.tst_irdm_g02_f02 b
    WHERE a.ip_id = b.ip_id


    Elapsed => 45:00 minutes


    Second query:

    CREATE TABLE dg.tiz_irdm_g02_cc NOLOGGING PARALLEL 16 COMPRESS FOR QUERY AS
    SELECT /*+ PARALLEL(a, 16) PARALLEL(b, 16) */ *
    FROM tgarstg.tst_irdm_g02_f01 a,
         tgarstg.tst_irdm_g02_f02 b
    WHERE a.ip_id = b.ip_id

    Elapsed => 04:00 minutes


    The execution plans are:


    1. INSERT statement execution plan:

    Plan hash value: 3814019933

    ------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT                  |                  |    13M|    36G|         |  127K  (1)| 00:00:05 |        |      |            |
    |   1 |  LOAD AS SELECT                   | TIZ_IRDM_G02_CC  |       |       |         |           |          |        |      |            |
    |   2 |   PX COORDINATOR                  |                  |       |       |         |           |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)            | :TQ10002         |    13M|    36G|         |  127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
    |*  4 |     HASH JOIN BUFFERED            |                  |    13M|    36G|    921M |  127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |    13M|    14G|         |  5732  (5)| 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |    13M|    14G|         |  5732  (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |    13M|    14G|         |  5732  (5)| 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |    13M|    14G|         |  5732  (5)| 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |    13M|    21G|         | 18353  (3)| 00:00:01 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |    13M|    21G|         | 18353  (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |    13M|    21G|         | 18353  (3)| 00:00:01 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |    13M|    21G|         | 18353  (3)| 00:00:01 |  Q1,01 | PCWP |            |
    ------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       4 - access("A"."IP_ID"="B"."IP_ID")

    2. CTAS execution plan:

    Plan hash value: 3613570869

    ------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT            |                  |    13M|    36G|         |  397K  (1)| 00:00:14 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |         |           |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10002         |    13M|    36G|         |  255K  (1)| 00:00:09 |  Q1,02 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                 | TIZ_IRDM_G02_CC  |       |       |         |           |          |  Q1,02 | PCWP |            |
    |*  4 |     HASH JOIN                     |                  |    13M|    36G|   1842M |  255K  (1)| 00:00:09 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |    13M|    14G|         | 11465  (5)| 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |    13M|    14G|         | 11465  (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |    13M|    14G|         | 11465  (5)| 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |    13M|    14G|         | 11465  (5)| 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |    13M|    21G|         | 36706  (3)| 00:00:02 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |    13M|    21G|         | 36706  (3)| 00:00:02 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |    13M|    21G|         | 36706  (3)| 00:00:02 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |    13M|    21G|         | 36706  (3)| 00:00:02 |  Q1,01 | PCWP |            |
    ------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       4 - access("A"."IP_ID"="B"."IP_ID")

    Oracle version:

    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    PL/SQL Release 11.2.0.4.0 - Production
    CORE 11.2.0.4.0 Production
    TNS for Linux: Version 11.2.0.4.0 - Production
    NLSRTL Version 11.2.0.4.0 - Production

    Notice how this additional distribution has disappeared from the non-partitioned table.

    I think that with the partitioned table Oracle tried to balance the number of slaves against the number of partitions it expected to use, and decided to distribute the data to get a "fair share" of the workload, but had not allowed for the side effect of the buffered hash join that then appeared, and the extra messaging for the distribution.

    You could try the pq_distribute() hint on the insert to tell Oracle that it should not distribute like that, for example, based on your original code:

    Insert /*+ append parallel(a, 16) pq_distribute(a none) */ into dg.tiz_irdm_g02_cc a ...
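
    Written out in full, that would look something like the following sketch (check the PQ_DISTRIBUTE load syntax against your version's documentation):

    INSERT /*+ APPEND PARALLEL(a, 16) PQ_DISTRIBUTE(a NONE) */
    INTO dg.tiz_irdm_g02_cc a
    SELECT /*+ PARALLEL(a, 16) PARALLEL(b, 16) */ *
    FROM tgarstg.tst_irdm_g02_f01 a,
         tgarstg.tst_irdm_g02_f02 b
    WHERE a.ip_id = b.ip_id;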

    This may give you the performance you want with the partitioned table, but check what it does to the space allocation: it can introduce a large number (16) of extents per segment that are not completely filled, and can therefore be rather wasteful of space.

    Regards

    Jonathan Lewis

  • INSERT /*+ parallel */ performance

    Hello

    I ran the procedure below:

    CREATE OR REPLACE PROCEDURE bulk_collect
    IS
        TYPE sid           IS TABLE OF NUMBER;
        TYPE screated_date IS TABLE OF DATE;
        TYPE slookup_id    IS TABLE OF NUMBER;
        TYPE sdata         IS TABLE OF VARCHAR2(50);

        l_sid           sid;
        l_screated_date screated_date;
        l_slookup_id    slookup_id;
        l_sdata         sdata;

        l_start NUMBER;
    BEGIN
        l_start := DBMS_UTILITY.get_time;

        SELECT id, created_date, lookup_id, data
        BULK COLLECT INTO l_sid, l_screated_date, l_slookup_id, l_sdata
        FROM big_table;

        -- dbms_output.put_line('after bulk collect: ' || systimestamp);

        FORALL indx IN l_sid.FIRST .. l_sid.LAST
            INSERT /*+ parallel(big_table2, 2) */ INTO big_table2
            VALUES (l_sid(indx), l_screated_date(indx), l_slookup_id(indx), l_sdata(indx));

        -- dbms_output.put_line('after FORALL: ' || systimestamp);

        COMMIT;

        dbms_output.put_line('Total elapsed: ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');
    END;
    /

    SHOW ERRORS;

    I want to confirm whether the insert is running in parallel. I checked the views below to see whether the insert statement ran in parallel, but none of them returns any rows.

    select * from v$px_session where sid = 768;

    select * from v$px_sesstat;

    select * from v$px_process;

    select * from v$px_process_sysstat;

    select * from v$pq_sesstat;

    Please may I know how to find out whether the /*+ parallel(table_name, 2) */ hint leads to parallel execution?

    Thank you

    I'd go for the SQL insert/selection option as suggested.

    Bulk insert with the APPEND_VALUES hint is an 11gR2 trick that leads to a direct-path load. Parallel also implies a direct-path load, but if you are benchmarking you may include this as an additional test.
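
    A sketch of the suggested insert/select variant (assuming big_table2 has the same column structure as big_table; note that parallel DML must be enabled in the session for the insert itself to run in parallel):

    ALTER SESSION ENABLE PARALLEL DML;

    -- Direct-path, parallel insert/select:
    INSERT /*+ APPEND PARALLEL(big_table2, 2) */ INTO big_table2
    SELECT id, created_date, lookup_id, data
    FROM big_table;

    COMMIT;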

  • Can deleting all rows and re-inserting them make Oracle performance worse?

    I'm working on a project where I need to do a regular batch update (every 4 months) from Excel files. These files don't have a good key in their rows.

    Developing code that deletes all the rows and re-inserts the whole base again is easier than code that looks every row up by its primary key and updates it if necessary (sometimes the key can span 5 columns).

    My question is: if I delete all the rows in the tables and then insert them again, will it cause tablespace fragmentation and, in the future, a loss of performance?

    Is there a way to avoid this?

    Thanks in advance

    Alexander

    This response helped me a lot.

    Thank you all

    Can deleting all rows and re-inserting them make Oracle performance worse? - Stack Overflow
