Thoughts on the CURSOR_SHARING parameter: FORCE / SIMILAR

Hello

We are using 11.2.0.3 for a data warehouse with the Business Objects reporting tool.

To address intermittent poor database performance, the DBA suggested using bind variables; but since the tool uses literals, we would have to set CURSOR_SHARING to FORCE (or SIMILAR).

At times we can have 20 or more users running similar reports where only the parameters differ.

There are different opinions as to whether this would be a big performance win or would make things worse.

Does anyone have any ideas?

Generally it is likely to be a bad idea for a data warehouse; but if the DBAs see a problem because the same report is run repeatedly with changed parameter values, you may be in the niche case of the problem I described in this blog post: https://jonathanlewis.wordpress.com/2014/12/09/parse-time-2/

The type of query I used against v$active_session_history in that blog note should give you an idea of whether you have the same problem - you would need to refine the query to group by time, sql_id, event and the in_parse / in_hard_parse flags to see how big a problem you might have and from which SQL statements.
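As a sketch of that kind of query (standard columns of v$active_session_history; adjust the time window to match your problem period), it might look like:

```sql
-- Group ASH samples by statement and parse state to see where
-- parse time is going; each sample represents roughly one second.
select sql_id, event, in_parse, in_hard_parse,
       count(*) sample_seconds
from   v$active_session_history
where  sample_time > sysdate - 1/24
group by
       sql_id, event, in_parse, in_hard_parse
order by
       count(*) desc;
```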

Regards

Jonathan Lewis

Tags: Database

Similar Questions

  • Force CURSOR_SHARING and similar

    Hi all
    In the case of a session mainly running queries against the PK of a table (different values each time),
    are there any disadvantages to using ALTER SESSION SET CURSOR_SHARING = FORCE?
    And in the case of a session mainly running queries against a column with a histogram (different values each time),
    are there any disadvantages to using ALTER SESSION SET CURSOR_SHARING = SIMILAR?
    Does it depend on the Oracle version (9i / 10g..)?

    As far as I understand there are none, but I tend to hear a lot of DBAs say that setting this parameter to anything other than EXACT will cause a lot of trouble.

    I am currently working on 11gR2.

    Thank you! :)

    Hello

    There is nothing wrong with using FORCE or SIMILAR at the session level (or, even better, at the statement level), if you know what you're doing.
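    For the statement-level option, one common pattern (a sketch; the table and predicate here are invented) is to set FORCE at session level and opt individual statements back out with the CURSOR_SHARING_EXACT hint:

```sql
ALTER SESSION SET cursor_sharing = FORCE;

-- Hypothetical query: the hint keeps this one statement's
-- literals intact despite the session-level FORCE setting.
SELECT /*+ cursor_sharing_exact */ *
FROM   orders
WHERE  status = 'SHIPPED';
```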

    At the instance level, anything other than EXACT is likely to be a disaster. I understand very well how DBAs feel about this topic... it is hard to feel otherwise
    when you see something like this:

    SELECT * FROM (SELECT .... FROM ... ORDER BY :SYS_B_0) WHERE ROWNUM=1
    

    (seen in production in a large-scale application)

    or something like this:

    SELECT sql_id, count(*) num_versions
    FROM v$sql
    GROUP BY sql_id
    ORDER BY num_versions DESC
    

    sql_id          num_versions
    0q76895qvaz           56,053
    9zfav935aafb          39,899
    ...

    etc.

    Best regards
    Nikolai


  • Need help to synchronize the pre-production and production box init parameters

    Hello

    I have a production database on HP-UX 11i. The database is RAC with 3 instances. The storage is on ASM. Recently we created a pre-production database for testing on Red Hat Linux ver. 4. Here are the details of the 2 environments:

    Production: HP-UX 11i, 24 GB RAM, 8 CPUs, Oracle 10g R2

    Pre-production: Red Hat ver. 4, 16 GB RAM, 4 CPUs, Oracle 10g R2

    Basically, the two environments were not created with identical configurations. Now I need to bring the pre-production environment in line with the Production environment from the point of view of Oracle init parameters. I have listed below the parameters that differ. Please tell me how many of these parameters can be brought in sync. The kernel shmmax setting is 4 GB on both boxes.

    Parameter (V$PARAMETER name)        Production        Pre-production
    ---------------------------------------------------------------------
    cpu_count                           8                 4
    cursor_sharing                      EXACT             FORCE
    db_recovery_file_dest_size          32212254720       15032385536
    dml_locks                           1472              748
    fast_start_parallel_rollback        LOW               FALSE
    filesystemio_options                asynch            none
    large_pool_size                     33554432          0
    log_buffer                          14493696          7012352
    log_checkpoints_to_alert            TRUE              FALSE
    open_cursors                        1000              300
    parallel_execution_message_size     2152              2148
    parallel_max_servers                160               80
    pga_aggregate_target                2147483648        1692401664
    processes                           300               150
    session_cached_cursors              100               20
    sessions                            335               170
    sga_max_size                        5368709120        675282944
    sga_target                          3221225472        675282944
    sql92_security                      FALSE             TRUE
    standby_file_management             MANUAL            AUTO
    transactions                        368               187
    undo_retention                      10800             1800

    Thank you

    NOTE: You should query V$PARAMETER with ISDEFAULT = 'FALSE' to identify only the parameters which are, or have been, explicitly defined.
    You need to identify those parameters which were explicitly set by the DBA, rather than automatically derived by Oracle.
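    A minimal sketch of that check against the standard V$PARAMETER view:

```sql
-- Show only parameters that have been explicitly set,
-- excluding values Oracle derived automatically.
select name, value
from   v$parameter
where  isdefault = 'FALSE'
order by name;
```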

    1. Do not explicitly set or change cpu_count. Let Oracle set it automatically based on the number of processors/cores it sees as available. (If you have multiple database instances on the same server, each instance will 'see' the same number of processors/cores.)

    2. cursor_sharing should always be the same across development, pre-production and Production. However, I would first ask you to determine why it has been set differently from Production. Changing it must be done only after careful consideration and testing!

    3. db_recovery_file_dest_size is relevant for RMAN backups going to the disk space identified by db_recovery_file_dest. It is not relevant to this discussion.

    4. dml_locks, sessions, transactions, etc. are all automatically derived by Oracle based on PROCESSES.

    5. fast_start_parallel_rollback is irrelevant here. I wonder why it has been explicitly set?

    6. filesystemio_options really depends on your platform. HP-UX 11i and Red Hat Linux are very different.

    7. Are large_pool_size and log_buffer explicitly defined? Or are they left to be derived automatically by Oracle based on other parameters (SGA_TARGET, for example)?

    8. open_cursors generally need not be set unless you have an application that really opens very many cursors per session. Why is it set differently? It is a per-session setting.

    9. I presume that parallel_max_servers is not explicitly set, but is left to be derived by Oracle based on other parameters (e.g. cpu_count).

    10. I wonder whether parallel_execution_message_size has been explicitly set, or is set automatically by Oracle and is OS-dependent.

    11. pga_aggregate_target should be explicitly set; the appropriate value must be determined from your hardware and its usage.

    12. session_cached_cursors generally does not need to be explicitly set. If it is set, why is it set to such different values?

    13. Why is SQL92_SECURITY explicitly different?

    14. standby_file_management is not relevant here.

    15. undo_retention could well be about the same.

    Your REAL concern should be the values defined for CURSOR_SHARING and SQL92_SECURITY. These are the only ones that really need to be identical in both environments. The others are 'tuning knobs', while these two 'fix' behaviour.

    Hemant K Collette

  • Disconnect-VIServer -Server $Server -Force does not work as documented

    Hello

    Disconnect-VIServer -Server $Server -Force

    Can someone explain why this command still causes a Confirm prompt to appear?

    Disconnect-VIServer -Server $Server -Force

    Confirm
    Are you sure you want to perform this action?
    Performing operation "Disconnect VIServer" on target "User: domain\ME, Server: us-vc02a, Port: 443".
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):

    Thank you

    The Force parameter of the Disconnect-VIServer cmdlet does not suppress the prompt; to do that, you use the Confirm switch.

    Disconnect-VIServer -Server $server -Confirm:$false

  • Pass parameter to web.show_document

    Hello

    I want to pass a parameter to web.show_document instead of using the parameter form. I know it's possible if I use run_report_object, but is there a way to pass a parameter similarly with web.show_document?

    r_url := '/reports/rwservlet?'
             || '&desformat=' || rpt_format
             || '&destype='   || r_dest
             || '&paramform=' || r_param_form
             || '&report='    || r_name;

    web.show_document(r_url, '_blank');

    Thank you
    faoilean.

    Hello

    Yes, just add your parameter to the URL, like destype, paramform, ...

    Ex:

    r_url := '/reports/rwservlet?'
             || '&desformat=' || rpt_format
             || '&destype='   || r_dest
             || '&paramform=' || r_param_form
             || '&report='    || r_name
             || '&paramname=' || ParamValue;

    web.show_document(r_url, '_blank');

    Regards

  • Dynamic dispatch and building a command for various devices

    I have code that writes to a device. The device will change, but it performs the same basic commands (frequency setting, power, etc.). So I wrote a dynamic dispatch VI that takes a predefined command string and writes it to the appropriate device, based on whatever communication mechanism is used (SNMP, VISA, etc.). My question is related to the predefined command string that feeds this VI. I think the VI that creates the command string should also be dynamic dispatch, and should build the command appropriate for the given device. But some devices might need several parameters to build this string (for example, SNMP), while something like VISA may need only one parameter. Dynamic dispatch VIs must have identical connector panes, so what is the best way to handle this? I thought about having a 'settings' class on the connector pane that holds all the device-specific parameters. But it seems overkill to make a class for something like this, when a variant would do the trick as an input. However, having a variant input tends to make me think dynamic dispatch is not necessarily the right choice in the first place, because you are forcing a common connector pane.

    I am also considering whether some of the necessary inputs are, in fact, state of the object and could be found in the object itself. That would reduce the connector pane entries and might mitigate some of these problems altogether.

    Thoughts?

    for (imstuck) wrote:

    I am also considering whether some of the necessary inputs are, in fact, state of the object and could be found in the object itself. That would reduce the connector pane entries and might mitigate some of these problems altogether.

    That's what I'd do. Make the settings part of the object's data, so that when you then call the dynamic dispatch VI that generates the command string, it has all the right data available to it.

  • Hard parse or soft parse?

    Hi Experts,

    If we execute a SELECT statement in SQL*Plus, passing the value using the & substitution operator, will a hard parse or a soft parse take place?

    SQL> select last_name from employees where department_id = &dept_no;
    Enter value for dept_no: 100
    old   1: select last_name from employees where department_id = &dept_no
    new   1: select last_name from employees where department_id = 100
    
    SQL> select last_name from employees where department_id = &dept_no;
    Enter value for dept_no: 110
    old   1: select last_name from employees where department_id = &dept_no
    new   1: select last_name from employees where department_id = 110
    
    

    When I query V$SQL, I see a different SQL_ID for each different value. Please explain this.


    Thanks a lot in advance for your help.

    Cheers,

    Suri

    Parsing is an action of the database server. SQL*Plus is a client-side tool, and substitution variables are applied on the client side. Therefore, whenever you supply a different &dept_no, the database receives a new statement (one that is not in the shared pool) and so, in general, it will be a hard parse. Why in general? It depends on the setting of the database parameter cursor_sharing. If it is set to FORCE or SIMILAR, Oracle can substitute

    select last_name from employees where department_id = some_literal

    with

    select last_name from employees where department_id = :bind_variable

    and then it will be a soft parse (except the first time the statement is issued, or the first time after it ages out of the shared pool).
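    One way to see this for yourself (a sketch using standard V$SQL columns) is to run the statement with a couple of different literals and then look for the matching cursors:

```sql
-- With cursor_sharing = EXACT you should see one sql_id per literal;
-- with FORCE, a single sql_id whose text contains a :SYS_B_0 bind.
select sql_id, executions, sql_text
from   v$sql
where  sql_text like 'select last_name from employees%';
```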

    SY.

  • Performance problem in production; Please help me out

    Hi all,

    I'd really appreciate if someone can help me with this.

    Every night the server swaps; the sysadmin has been adding more swap disk space every night for the last 4 days.
    I ran an ADDM report from 22:00 to 04:00 (when the server runs out of memory).
    It flagged this query as a performance problem:
    RECOMMENDATION 4: SQL Tuning, 4.9% benefit (1329 seconds)
          ACTION: Investigate the SQL statement with SQL_ID "b7f61g3831mkx" for 
             possible performance improvements.
             RELEVANT OBJECT: SQL statement with SQL_ID b7f61g3831mkx and 
             PLAN_HASH 881601692
    I can't find what the problem is or why it is a source of performance problems; could you help me please?
    *WORKLOAD REPOSITORY SQL Report*
    
    Snapshot Period Summary
    
    DB Name         DB Id      Instance     Inst Num Release     RAC Host        
    ------------ ----------- ------------ -------- ----------- --- ------------
    ****       1490223503 ****             1 10.2.0.1.0  NO  ****
    
                  Snap Id      Snap Time      Sessions Curs/Sess
                --------- ------------------- -------- ---------
    Begin Snap:      9972 21-Apr-10 23:00:39       106       3.6
      End Snap:      9978 22-Apr-10 05:01:04       102       3.4
       Elapsed:              360.41 (mins)
       DB Time:              451.44 (mins)
    
    SQL Summary                         DB/Inst: ****/****  Snaps: 9972-9978
    
                    Elapsed 
       SQL Id      Time (ms)
    ------------- ----------
    b7f61g3831mkx  1,329,143
    Module: DBMS_SCHEDULER
     GATHER_STATS_JOB
    select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact u
    se_weak_name_resl dynamic_sampling(0) no_monitoring */ count(*),count("P_PRODUCT
    _ID"),count(distinct "P_PRODUCT_ID"),count("NAME"),count(distinct "NAME"),count(
    "DESCRIPTION"),count(distinct "DESCRIPTION"),count("UPC"),count(distinct "UPC"),
    
              -------------------------------------------------------------       
    
    SQL ID: b7f61g3831mkx               DB/Inst: ***/***  Snaps: 9972-9978
    -> 1st Capture and Last Capture Snap IDs
       refer to Snapshot IDs within the snapshot range
    -> select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_shari...
    
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    --- ---------------- ---------------- ------------- ------------- --------------
    1   881601692               1,329,143             1          9973           9974
              -------------------------------------------------------------       
    
    
    Plan 1(PHV: 881601692)
    ---------------------- 
    
    Plan Statistics                     DB/Inst: ***/***  Snaps: 9972-9978
    -> % Total DB Time is the Elapsed Time of the SQL statement divided 
       into the Total Database Time multiplied by 100
    
    Stat Name                                Statement   Per Execution % Snap 
    ---------------------------------------- ---------- -------------- -------
    Elapsed Time (ms)                         1,329,143    1,329,142.7     4.9
    CPU Time (ms)                                26,521       26,521.3     0.7
    Executions                                        1            N/A     N/A
    Buffer Gets                                 551,644      551,644.0     1.3
    Disk Reads                                  235,239      235,239.0     1.5
    Parse Calls                                       1            1.0     0.0
    Rows                                              1            1.0     N/A
    User I/O Wait Time (ms)                     233,212            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                 71            N/A     N/A
              -------------------------------------------------------------       
    
    Execution Plan
    ---------------------------------------------------------------------------------------------------
    | Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    ---------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT        |         |       |       | 24350 (100)|          |       |       |
    |   1 |  SORT GROUP BY          |         |     1 |   731 |            |          |       |       |
    |   2 |   PARTITION RANGE SINGLE|         |  8892 |  6347K| 24350   (1)| 00:04:53 |   KEY |   KEY |
    |   3 |    PARTITION LIST ALL   |         |  8892 |  6347K| 24350   (1)| 00:04:53 |     1 |     5 |
    |   4 |     TABLE ACCESS SAMPLE | PRODUCT |  8892 |  6347K| 24350   (1)| 00:04:53 |   KEY |   KEY |
    ---------------------------------------------------------------------------------------------------
     
    
    
    Full SQL Text
    
    SQL ID       SQL Text                                                         
    ------------ -----------------------------------------------------------------
    b7f61g3831mk select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_
                 _sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitori
                 ng */ count(*), count("P_PRODUCT_ID"), count(distinct "P_PRODUCT_
                 ID"), count("NAME"), count(distinct "NAME"), count("DESCRIPTION")
                 , count(distinct "DESCRIPTION"), count("UPC"), count(distinct "UP
                 C"), count("ADV_PRODUCT_URL"), count(distinct "ADV_PRODUCT_URL"),
                  count("IMAGE_URL"), count(distinct "IMAGE_URL"), count("SHIPPING
                 _COST"), count(distinct "SHIPPING_COST"), sum(sys_op_opnsize("SHI
                 PPING_COST")), substrb(dump(min("SHIPPING_COST"), 16, 0, 32), 1, 
                 120), substrb(dump(max("SHIPPING_COST"), 16, 0, 32), 1, 120), cou
                 nt("SHIPPING_INFO"), count(distinct "SHIPPING_INFO"), sum(sys_op_
                 opnsize("SHIPPING_INFO")), substrb(dump(min(substrb("SHIPPING_INF
                 O", 1, 32)), 16, 0, 32), 1, 120), substrb(dump(max(substrb("SHIPP
                 ING_INFO", 1, 32)), 16, 0, 32), 1, 120), count("P_STATUS"), count
                 (distinct "P_STATUS"), sum(sys_op_opnsize("P_STATUS")), substrb(d
                 ump(min(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), substrb
                 (dump(max(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), count
                 ("EXTRA_INFO1"), count(distinct "EXTRA_INFO1"), sum(sys_op_opnsiz
                 e("EXTRA_INFO1")), substrb(dump(min(substrb("EXTRA_INFO1", 1, 32)
                 ), 16, 0, 32), 1, 120), substrb(dump(max(substrb("EXTRA_INFO1", 1
                 , 32)), 16, 0, 32), 1, 120), count("EXTRA_INFO2"), count(distinct
                  "EXTRA_INFO2"), sum(sys_op_opnsize("EXTRA_INFO2")), substrb(dump
                 (min(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), substrb
                 (dump(max(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), co
                 unt("ANALISIS_DATE"), count(distinct "ANALISIS_DATE"), substrb(du
                 mp(min("ANALISIS_DATE"), 16, 0, 32), 1, 120), substrb(dump(max("A
                 NALISIS_DATE"), 16, 0, 32), 1, 120), count("OLD_STATUS"), count(d
                 istinct "OLD_STATUS"), sum(sys_op_opnsize("OLD_STATUS")), substrb
                 (dump(min("OLD_STATUS"), 16, 0, 32), 1, 120), substrb(dump(max("O
                 LD_STATUS"), 16, 0, 32), 1, 120) from "PARTNER_PRODUCTS"."PRODUCT
                 " sample ( 12.5975349658) t where TBL$OR$IDX$PART$NUM("PARTNER_PR
                 ODUCTS"."PRODUCT", 0, 4, 0, "ROWID") = :objn
                 

    Dear friend,

    Why do you think you have problems with the shared pool? In the ASH report you provided there were just 2.5 average active sessions and 170 requests during the period, which is very low; there are no shared pool problems.
    You have some queries that use literals; it would be better to replace the literals with bind variables if possible, or you can set the cursor_sharing init parameter to FORCE or SIMILAR (it is a dynamic parameter).
    But it is not such a dramatic problem in your case!

    From your AWR report we can see that the top wait events are "CPU + Wait for CPU", "RMAN backup & recovery I/O" and "log file sync", and together they account for 65% of your database time. The same holds for the background wait events.
    If I read the report correctly, you have two members in each redo log group and you have problems with log writer IO speed; check the distribution of files across the disks. A slow log writer makes the other sessions wait. The high
    CPU usage can be related to RMAN compression. As best we can see, GATHER_STATS_JOB consumes 16% of the activity, RMAN consumes 33%, and only 21% is your application; there is also something running under
    SQL*Plus as the sys (?) account. Then there is the top SQL section: for the SQL in your application, if I understand correctly, the "db file scattered read" event indicates that full scans are taking place; is that normal for your application? If yes, then try
    running them in parallel; as the "Top Sessions running PQs" section of your report shows, there is no parallel execution at the moment, but as I understand it you have 8 processors, so try parallel execution or avoid the full scans. But consider that
    when you do full scans in parallel, PGA memory is used rather than the SGA, so decrease the SGA and increase pga_aggregate_target accordingly.

    Is there another application or program running on the server apart from Oracle?
    Was the performance degradation sudden; I mean, was everything ok yesterday and everything bad today, or has it been gradual?

    Check the reasons for the slow log writer; it can greatly affect performance.
    Also, 90% of performance problems are generally due to poor SQL and poor execution plans.
    Also, if you use automatic memory management, check the memory settings; but be aware that in that case the individual settings act as minimum values, which is why you can set them to lower values and let Oracle manage memory
    entirely.
    Don't increase your SGA at this stage; get an AWR report using @$ORACLE_HOME/rdbms/admin/awrrpt.sql and review it.

    BUT first you must change your backup strategy (look at my first post) and then check performance again; until you do that it will be very difficult to help you.

    Good luck

  • Query runs slowly (30 sec) at first, then quickly - how to find out why?

    Hi all

    First of all, a disclaimer: we are a rapidly growing software company which provides a SaaS-like solution, hosting an Oracle database for our own software running at client locations. I'm not a DBA yet, but I've been learning for a while now and will take the appropriate courses next year. This problem is becoming quite urgent, however.

    We have Oracle 12c (12.1.0.1.7) running on pretty beefy hardware (8 cores, 72 GB, SSD storage) and in general it works fine. One of our applications very often raises queries that run slowly, but when I run the same query through Toad it is fast again. Sometimes I can make the same query slow again by changing settings, adding values to an in(), etc., but not always. Not being in a position to know WHY a query is slow, I cannot work towards a solution. We have been struggling with this problem for some time (more than a year, I think) and had the same problem when we were still running Oracle 11.

    Some more bits of information that might be relevant:

    - This customer sets cursor_sharing to FORCE, per my request (the software itself does not use bind variables)

    - I set optimizer_index_caching to 25, because I know the DB will be entirely in memory.

    - I set optimizer_index_cost_adj to 50, because I saw too many full table scans for my taste. Both parameters were set at a time when the queries were already performing poorly.

    - Memory: memory_target = 40 GB, db_cache_size = 5.5 GB, pga_aggregate_target = 8 GB, sga_target = 30 GB

    - optimizer_dynamic_sampling is at the default, 2.

    - I saw a few query plans with a note, something like "Explain plan was made with the statistics from previous executions". Not all of them have this, however.

    - I saw some explain plans in which the actual rows differ from the estimated rows by a large amount; estimated was sometimes 1 while actual was 4 billion...

    - We actually had to add /*+ ordered */ to some queries to speed up execution. For a few other queries this gave better results (from 20 sec every time! down to 200 ms), but worse than when Oracle eventually found the right execution plan (40 ms).

    - Many queries seemed to start with the wrong tables for our taste.

    - A copy of the database, running locally, also had some problems with queries - the explain plan took 2 seconds, the query itself only a few ms. That copy of the database of course does not receive any insert/update statements.

    It may be important to note that this schema receives a lot of updates and inserts, so I thought that could mess up the statistics?

    As I wrote, we have been struggling with this problem for a long, long time. We have made changes to the hardware, the Oracle parameters, the client software, everywhere, but are no closer to a permanent solution. Any help would be greatly appreciated!

    Kind regards

    Jelmer

    Post edited by 1449188: optimizer_cursor_caching -> optimizer_index_caching

    1449188 wrote:

    Time for a summary:

    - We switched off forced cursor sharing. This reduced overall Oracle CPU usage a bit.

    - We removed the /*+ ORDERED */ hints from the queries, with dramatic effect on performance

    - optimizer_dynamic_sampling was set to 11, but _optimizer_use_feedback is still enabled

    - All the tables and indexes were analyzed; I created extended statistics on task, address, person

    Many queries work MUCH better; another big thank you to all who have helped me so far.

    ...

    In particular, line 25 confuses me. Still that BITMAP AND, and still that crazy E-Rows / A-Rows difference. I'm sure that's the only reason for the poor performance of this query...

    I think you have a mixture of different effects here, which doesn't make it really simple to understand exactly what is happening. To some extent this seems to be caused by the new features added in 11g and 12c.

    1. Looking at the trace files provided above, there seems to be some problem with dynamic sampling (DS) overhead. This seems to be one of the reasons why you have this "first exec slow, second exec fast" behaviour, in this case without a change of plan. In that trace file it takes 5 seconds or more to run all the recursive dynamic sampling queries as part of plan generation, and there are two DS queries that contribute significantly to that time (more than 3 seconds for those two DS queries alone).

    Having OPTIMIZER_DYNAMIC_SAMPLING set to 11 causes these additional queries. The advantage of the setting is that you get pretty good (join) estimates. Of course, it is difficult to judge how much you gain from the setting and how much you lose in plan-generation overhead. The new DS queries make use of the RESULT_CACHE, so in principle re-evaluation of the same DS queries should be served from the RESULT_CACHE. However, if your RESULT_CACHE is too small to hold all of the query results (although these result sets are usually very small), you can sometimes get slowdowns in plan generation when these queries need to be run instead of pulling their results from the cache.
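    As a rough check of that last point, the result cache statistics can be inspected (a sketch against the standard dynamic performance view; interpretation is a judgment call):

```sql
-- A high "Delete Count Valid" relative to "Create Count Success"
-- suggests valid results are being aged out for lack of space.
select name, value
from   v$result_cache_statistics;
```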

    2. The statistics feedback feature is of course involved and sometimes leads to re-optimization. Again, this could then lead to the problem mentioned in point 1 if the DS queries add a lot of time to plan generation.

    So these two features together could lead to some hard-to-predict behaviour (first exec slow => possibly caused by DS; second exec slow => caused by statistics feedback, leading again to DS; third exec quick; etc.). It may also explain why you see "first exec slow, second exec fast" when the re-optimization does not take too much time and leads to a better execution plan.

    3. The particular point that stands out is the bad estimate in the inner row source of a nested loop join - and in this case the estimates are *per iteration*, so be careful when comparing E-Rows (per iteration) with A-Rows (cumulative across all iterations). The index combine used to merge the two indexes TASK_IDX1 and TASK_IDX2 suggests that it would be beneficial to add PERSON_ID to TASK_IDX1 and maybe get rid of TASK_IDX2 - whether that suits your other queries I can't really judge. You could then check whether adding further columns to TASK_IDX1 would make it even more selective at the index level: since most of the rows for this particular query are filtered at the TASK table level, adding TASK.FIXED, TASK.CLOSED and/or TASK.PLAN_START / PLAN_END would make it an even more efficient operation. But adding PERSON_ID alone should address this particular problem.
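    A sketch of that index change (the real column list of TASK_IDX1 is not shown in this thread, so the leading column here is a placeholder):

```sql
-- Hypothetical: extend TASK_IDX1 so the nested-loop probe can
-- filter on PERSON_ID (and optionally more columns) in the index.
create index task_idx1_new
  on task (existing_leading_col, person_id, fixed, closed);
-- Validate the plans first, then consider dropping TASK_IDX2.
```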

    4. The "quick" variation of this query (mentioned somewhere at the beginning) has the inline view (SELECT AID ... GROUP BY ...) left unmerged, so you could try adding a NO_MERGE hint to the inline query (SELECT /*+ NO_MERGE */ AID ...), just to see whether that gives you the "quick" plan more often. I still think the index change described might be the better choice.

    5. SQL Plan Directives have been mentioned a couple of times, and I think it is not quite clear to you what they are. They are not "hints": they are a new feature added in 12c where Oracle persists some information about bad estimates and then evaluates these SQL Plan Directives during plan generation, triggering additional dynamic sampling queries and flagging the creation of extended statistics at the next statistics gathering. I don't have enough experience with this new feature yet and therefore cannot judge what role it plays here. You can query DBA_SQL_PLAN_DIR_OBJECTS and DBA_SQL_PLAN_DIRECTIVES to see if there are entries for your particular objects.
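    A minimal sketch of that lookup, joining the two documented dictionary views (the schema filter is a made-up example):

```sql
-- List SQL plan directives recorded for objects in one schema.
select o.owner, o.object_name, d.type, d.state, d.reason
from   dba_sql_plan_directives  d
       join dba_sql_plan_dir_objects o
         on o.directive_id = d.directive_id
where  o.owner = 'APP_OWNER';   -- hypothetical schema name
```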

    Randolf

  • Presentation by Jonathan Lewis on smarter statistics in 11g

    After viewing the excellent presentation by Jonathan Lewis ( http://www.speak-tech.com/lewis-20130610 ), the essential
    points I took away are to set "approximate_ndv" to TRUE and to avoid using histograms.

    So I checked my global preferences and confirmed that they were set to the default values, specifically:

    SNAME SPARE4

    ------------------- ----------------------------

    APPROXIMATE_NDV FALSE

    METHOD_OPT FOR ALL COLUMNS SIZE AUTO


    Thus, per Jonathan's presentation, I'll set APPROXIMATE_NDV to TRUE.

    But my question concerns METHOD_OPT: should I (or would it be safe to)
    set the global prefs for this to "FOR ALL COLUMNS SIZE 1"?


    Basically, this would eliminate all histograms (eventually, as statistics are regathered), including those that Oracle uses on its own internal tables.
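    A minimal sketch of the preference changes being discussed, assuming the standard DBMS_STATS API (try this on a test system first, since SIZE 1 removes histograms as statistics are regathered, including on Oracle's internal tables):

    ```sql
    -- Set the two global preferences discussed above; histograms only
    -- disappear as each table's statistics are re-gathered, not immediately.
    BEGIN
      DBMS_STATS.SET_GLOBAL_PREFS('APPROXIMATE_NDV', 'TRUE');
      DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
    END;
    /
    ```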


    977635 wrote:

    As a follow-up to the question: in our case, our application almost exclusively uses literals instead of bind variables.

    But because of bind peeking (against the histograms generated by our default method_opt setting), we found that we seem to get better performance with cursor_sharing set to EXACT (instead of FORCE).

    But, if I understand correctly, if we removed the histograms and set cursor_sharing to FORCE, then the optimizer wouldn't do bind peeking (since we have no histograms) and our performance would improve because we would then use the estimated number of distinct values instead.  Is that right, or am I confused?

    It's a small question that needs a long answer. A starting point, however, would be one of my "Philosophy" notes: http://jonathanlewis.wordpress.com/2009/05/06/philosophy-1/

    Your first paragraph says that histograms worked better with literals than with the bind variables constructed by cursor_sharing = FORCE - which is typical; if your data is sufficiently skewed that you need histograms, then you need to use literals so that the optimizer can see the skew when it optimizes.

    If you delete the histograms (so that Oracle gets an "average" view of your data) and set cursor_sharing to FORCE, then Oracle will still peek, but it will not be able to tell whether the peeked value is a special case for which it would SOMETIMES be better to produce an extreme plan.

    Oracle 9i's answer was to allow cursor_sharing to be set to "similar" - but this meant that Oracle could still rewrite a query to use binds yet might re-optimize some queries very frequently, because it would peek at the new bind variable every time - resulting in excessive optimization and a large number of child cursors; the presence of histograms on columns in the predicates was one trigger for re-optimization.

    11g introduced adaptive cursor sharing and suggested using cursor_sharing = force if you have excessive use of literals - you can get several child cursors for a statement involving histograms, but ideally it should be only a very small number per statement.  The feature is still a little fragile.

    A useful hint is /*+ cursor_sharing_exact */ - whatever you do with cursor sharing, you can put this in any statement where you don't want Oracle to perform the conversion to binds. It may be that judicious use of this hint is enough to give you the best compromise between performance and stability.
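    As a hypothetical illustration of that hint (the table and column names here are made up, not from the thread): even with cursor_sharing = FORCE in effect, this one statement keeps its literal visible to the optimizer:

    ```sql
    -- cursor_sharing_exact suppresses literal replacement for this
    -- statement only, so the optimizer can still see the skewed value.
    SELECT /*+ cursor_sharing_exact */ order_id, status
    FROM   orders
    WHERE  status = 'CANCELLED';
    ```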

    Bottom line - histograms are difficult; you probably need some, and you may need to engineer them carefully, but your front-end code needs to know about them if you want the best performance.

    Regards

    Jonathan Lewis

  • VM Power Off Date

    I created annotations for my virtual machines called LastPowerOn and LastPowerOff; I had planned to copy the CreatedTime from the event log entries for "VmPoweredOnEvent" and "VmPoweredOffEvent" respectively.  Unfortunately, it seems that not all power events in the logs are identified by those event names.  In vCenter, I see power-on events that do not appear as such.

    After trying my hand at this and coming up short, I borrowed a script written by RvdNieuwendijk that was designed to parse the VM event logs for exactly this info.  That script also returned empty info for several servers.  Looking at the event log, this event does not appear the same as a normal power-on event:

    DRS powered on the virtual machine
    Esx1.Jeff.com
    Info
    28/10/2011 13:57:25
    Power on virtual machine
    server100

    This is the code I use to test:

    Get-VIEvent -Entity $vm -MaxSamples 500 | Sort-Object -Property CreatedTime -Descending | Where-Object { $_.Gettype().Name -eq "VmPoweredOnEvent" } | Select-Object -First 1

    I bumped MaxSamples up to 500 to see if that was the problem.

    I can find the exact event I'm looking for:

    Template             : False
    Key                  : 11657
    ChainId              : 11651
    CreatedTime          : 10/28/2011 1:57:25 PM
    UserName             : 
    Datacenter           : VMware.Vim.DatacenterEventArgument
    ComputeResource      : VMware.Vim.ComputeResourceEventArgument
    Host                 : VMware.Vim.HostEventArgument
    Vm                   : VMware.Vim.VmEventArgument
    Ds                   : 
    Net                  : 
    Dvs                  : 
    FullFormattedMessage : DRS powered On server100 on esx1.jeff.com in
                           DataCenter1
    ChangeTag            : 
    DynamicType          : 
    DynamicProperty      :

    Any suggestions on where to go from here would be very helpful.

    I thought about going to the datastore and pulling the last-modified date of the vmdk to establish a "rough" power-off time.

    Thank you

    Jeff

    Hi Jeff,

    Thanks for your question. You found a serious flaw in my script from the thread about getting a VM's last power-off date based on virtual machine events.

    And thanks to your suggestion, it was not difficult for me to change the line:

    Where-Object { $_.GetType().Name -eq "VmPoweredOnEvent" } | `

    in:

    Where-Object { $_.GetType().Name -eq "VmPoweredOnEvent" -or $_.GetType().Name -eq "DrsVmPoweredOnEvent" } | `

    to solve your problem.

    Looking at my previous script again, I found two other holes myself. You could say: "age comes with insight".

    The more serious one is that the Get-VIEvent cmdlet returns only 100 events by default. That may be too few for some VMs. So I added the MaxSamples parameter with a value of 10000. While this is necessary for the script to work correctly, it will make the script much slower.

    The other, minor, problem is that the previous script ran the Sort-Object cmdlet in the pipeline before the Where-Object cmdlet. It is better to do the opposite: it is much faster because the Sort-Object cmdlet then has fewer records to sort.

    Here's the new script with all the changes:

    function Get-VMLastPoweredOffDate {
      param([Parameter(Mandatory=$true,ValueFromPipeline=$true)]
            [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl] $vm)
      process {
        $Report = "" | Select-Object -Property Name,LastPoweredOffDate,UserName
        $Report.Name = $vm.Name
        $Event = Get-VIEvent -Entity $vm -MaxSamples 10000 | `
          Where-Object { $_.GetType().Name -eq "VmPoweredOffEvent" } | `
          Sort-Object -Property CreatedTime -Descending | `
          Select-Object -First 1
        $Report.LastPoweredOffDate = $Event.CreatedTime
        $Report.UserName = $Event.UserName
        $Report
      }
    }
    
    function Get-VMLastPoweredOnDate {
      param([Parameter(Mandatory=$true,ValueFromPipeline=$true)]
            [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl] $vm)
    
      process {
        $Report = "" | Select-Object -Property Name,LastPoweredOnDate,UserName
        $Report.Name = $vm.Name
        $Event = Get-VIEvent -Entity $vm -MaxSamples 10000 | `
          Where-Object { $_.GetType().Name -eq "VmPoweredOnEvent" -or $_.GetType().Name -eq "DrsVmPoweredOnEvent"} | `
          Sort-Object -Property CreatedTime -Descending |`
          Select-Object -First 1
        $Report.LastPoweredOnDate = $Event.CreatedTime
        $Report.UserName = $Event.UserName
        $Report
      }
    }
    
    New-VIProperty -Name LastPoweredOffDate -ObjectType VirtualMachine -Value {(Get-VMLastPoweredOffDate -vm $Args[0]).LastPoweredOffDate} -Force
    New-VIProperty -Name LastPoweredOffUserName -ObjectType VirtualMachine -Value {(Get-VMLastPoweredOffDate -vm $Args[0]).UserName} -Force
    New-VIProperty -Name LastPoweredOnDate -ObjectType VirtualMachine -Value {(Get-VMLastPoweredOnDate -vm $Args[0]).LastPoweredOnDate} -Force
    New-VIProperty -Name LastPoweredOnUserName -ObjectType VirtualMachine -Value {(Get-VMLastPoweredOnDate -vm $Args[0]).UserName} -Force
    
    Get-VM | Select-Object -property Name,LastPoweredOnDate,LastPoweredOnUserName,LastPoweredOffDate,LastPoweredOffUserName
    

    Best regards, Robert

    Post edited by: RvdNieuwendijk - added the -Force parameter to the New-VIProperty commands.

  • Bulk Opt-In

    We have close to 2000 registered members who expect an email from us every day.

    The form they sign up through automatically opts them out of newsletters etc.

    This is why the members in our database are not receiving their emails.

    Can you please do a bulk opt-in on our database urgently?

    Business Catalyst - bulk opt-in contacts

    Do you have an Adobe Business Catalyst website with many customers that you want to opt in?

    You can ask Business Catalyst to perform the master opt-in on your website, so that all existing customers are opted in to your marketing emails. [Forum thread here]

    Send a support ticket to Business Catalyst to turn on the master opt-in option for your website and ask them to opt in your entire database at the same time.

    What is double opt-in?

    When a customer joins a newsletter list on your website, Business Catalyst automatically sends the autoresponder to the customer. The {tag_verificationurl} tag in the autoresponder email is replaced by a link the customer clicks to confirm that they want to receive your newsletters. This is known as double opt-in.

    What happens if the customer does not click on the verification link?

    If the customer does not click the link, they will not receive your newsletters unless you force opt-in their account in the admin panel, have used the double opt-in bypass function in your web forms, or have had BC support implement the master opt-in.

    What happens if I turned off the autoresponder for a form?

    If you have disabled the autoresponder on your form, the verification link will not be sent to your customers. In this case you will need to either:

    • Use the "opt-in" feature to force on your Web Forms
    • or ask BC to opt-in all your customers
    • or your customers involved individually

    Why does Business Catalyst use double opt-in?

    Business Catalyst uses double opt-in to comply with legislation and anti-spam measures, but many websites have a newsletter signup form or similar where customers are expressly asking to join your marketing list.

    By default, all newsletter forms are double opt-in, and there are a few things you can do to bypass this process.

    How to: force opt-in on newsletter subscription forms

    To ensure that your customers will receive your newsletters as soon as they subscribe, you must add the force opt-in parameter to your form.

    1. Edit your newsletter forms and add &OPTIN=true to the end of the form action ( < form action = "..." > )

    All new subscribers will then be opted in as soon as they subscribe to the newsletter using this form.
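    A minimal sketch of such a form (the action URL and field names are placeholders, not taken from a real site; only the &OPTIN=true suffix on the action is the point here):

    ```html
    <!-- Hypothetical newsletter form: note the &OPTIN=true suffix -->
    <form action="/CampaignProcess.aspx?ListID=12345&amp;OPTIN=true" method="post">
      <input type="text" name="EmailAddress" placeholder="Your email" />
      <input type="submit" value="Subscribe" />
    </form>
    ```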

    Hello

    Thanks for posting the URL of the site this is about.

    We can then perform a "master opt-in" for all the contacts who have not yet asked to unsubscribe.

    Kind regards

    -Sidney

  • I can't comment on or like anything on Facebook when using Firefox. Other browsers work, but not Facebook.

    I tried clearing my history, cookies and everything else in Firefox, but it does not solve the problem.

    Whenever I try to comment on a post on Facebook it says "Unable to display the comment." And it won't let me like any posts either.

    Google Chrome works perfectly with Facebook. I don't want to quit using Firefox, but unless I get a solution soon, that may be what I have to do.

    I recently got Facebook working again when I disabled all my extensions at once. That was a mistake, because at that point I didn't know what could be causing the problem. After enabling them one by one, doing the needed restarts, etc., I noticed that the only time Facebook will not load my posts or let me like/comment is when the Effective Measure Community Plugin 3.1.0 (by Effective Measure Pty Ltd) is enabled. I suppose similar plugins, or products from the same manufacturer, may not be compatible with Facebook and Firefox 17. IDK.

    Bottom line: there is an extension/plugin that went rogue. There may be more.

  • HP Envy x360: wireless headphones: trouble prioritizing the sound output

    I bought a pair of Skullcandy Uproar wireless headphones on sale today, but have had trouble getting sound output for more than a few seconds. Here are the details of the situation:

    > The wireless headphones pair successfully with my system via the Bluetooth menu.

    > When I select them in the Volume Mixer and click its adjustment bar so the "Bwoooom" test sound plays, it plays through the headphones.

    > Immediately after closing the Volume Mixer, the system drops the headphone selection and defaults back to the built-in speaker system. The "Bwoom" and every other sound then plays through the built-in speakers.

    > Keeping the Volume Mixer open does not preserve the connection: in fact, only the system sounds are heard through the headphones. Including the "Bwooom".

    > I contacted Skullcandy and we concluded that the product works, but for some reason my computer is prioritizing its own speaker system over the Uproar headset. My computer has Beats Audio as its audio system.

    If someone could lend a hand it would be very helpful, because I think it may come down to the Bluetooth drivers, or something with my audio settings.

    I just solved my problem, but thought that others having similar problems with their audio devices might find it useful to know how I solved it:

    I went to...

    > the volume mixer icon > right click > Playback devices

    > set the headphones as the default playback device.

    It works fine now! It just kept defaulting to the built-in speakers over the headset for some reason. But now I know, and so do you.

  • How to clear the power-on password on a Toshiba Tecra M3

    Hello

    I have a *Toshiba Tecra M3*; my wife set a *power-on password* and she forgot what the password was. Every time I turn on my laptop it shows the Toshiba boot logo and then a black screen asking for the password, like this: *PASSWORD =*

    I sent it to a PC technician but he failed to fix it. He said it would be best to clear the CMOS settings, but he couldn't find them; maybe he was thinking of older notebooks like his that have a separate CMOS battery. I told him my laptop is a recent model where the CMOS chip may be integrated with its battery.

    *Any help please, I have valuable information on my laptop.*

    Thanks

    Allytz

    Hello

    I think you're talking about the BIOS password...
    Bad news, buddy... the BIOS password has to be removed by a Toshiba ASP.

    You cannot remove it yourself if you forgot it!

    So contact your local ASP and ask for further handling.

    Bye

Maybe you are looking for

  • TLS Cert fail on Exchange server

    Hello. We enabled TLS on the Exchange server, and when we checked it on this link (http://www.checktls.com/perl/TestReceiver.pl?FULL), it shows TLSCert fail. We have a local Exchange 2010 server and use a third-party e-mail filtering service. Please find

  • reporting threats by e-mail

    Good afternoon. I would like to report threats at my workplace. This email *e-mail address is removed for privacy*. My name is ROBERTO CORTES RAMIREZ, my e-mail is the following *e-mail address is removed for privacy*, and they are sending me threats to my

  • Windows XP cannot reformat my Seagate Expansion external drive to exFAT

    I'm trying to reformat my Seagate Expansion external drive to exFAT with Windows XP, but I get "Windows could not complete the format". So I reformatted it on another computer, and now my usual XP computer can't even open the drive or reformat it.

  • Printer banding in portrait mode only and only with small font sizes

    Grayscale printing leaves horizontal white stripes in portrait, not landscape. Bands are equidistant with fonts of 48 pt or less; it does not occur with larger fonts. Band spacing varies from 1.5 to 2 cm depending on the font size. Banding occurred using Word or P

  • Keyboard fine as one user, but acting up as another.

    I have a laptop running Windows 7.  I was using one of the two admin accounts, and when I typed any of the letters in the top row of the keyboard, it typed numbers instead. When I switched to the other account, the keyboard worked fine.  Any ideas?