Do running composites create performance problems?

Hi all

I have a scenario where my SOA composites sit in a working state until another transaction completes. For example, if a purchase transaction is blocked for days, the composites keep waiting for the response until it arrives. Does having these composites running impact performance, assuming that 100 instances are started daily and all of them are waiting for a response?

Regards,
Knani

As long as your instances are dehydrated while they wait for the response, you're fine: a dehydrated instance is persisted to the dehydration store and does not hold engine threads or memory until the callback arrives.

Tags: Fusion Middleware

Similar Questions

  • Running XP System Restore created performance problems

    I ran XP System Restore after installing new software for my DSL modem. Now many programs do not run, my computer is constantly updating (and some updates fail), and my computer is slower than ever. My Zune is no longer recognized. What can I do?

    Hello

    · Why did you run System Restore after installing the DSL modem?

    · What is the make and model of the computer?

    · Which service pack is installed?

    · Do you receive an error message or error code when you use the programs?

    Let's narrow down the issue by following the steps below:

    1. Check whether error codes and error messages appear in Event Viewer: How to view and manage the event logs in Event Viewer in Windows XP: http://support.microsoft.com/kb/308427

    2. Search for the installation error code that your computer saved when the installation failed. To do this, follow these steps:

    a. Click Start, click All Programs, and then click Windows Update or Microsoft Update.

    b. On the Windows Update or Microsoft Update Web site, click View update history. A window opens that displays the updates that have been installed or that have failed to install on the computer.

    c. In the Status column of this window, find the update that failed to install, and then click the red X.

    d. A new window opens that displays the installation error code.

    3. You can follow the procedure at the link below to improve performance: How to make a computer faster: 6 ways to speed up your PC: http://www.microsoft.com/atwork/maintenance/speed.aspx

    You can also check: your Zune player is not detected by your computer or the Zune software: http://support.microsoft.com/kb/944909

  • SEM_MATCH query performance problems

    Hello

    We run into performance problems when using SEM_MATCH queries.

    We have a data model (ABox) containing 12,000,000 triples.
    We have a schema model (TBox) containing 800 triples.
    We ran OWLPrime entailment and built the entailment model and index with the SEM_APIS commands.

    The number of triples after running the entailment was 35,000,000.

    We use the following hardware configuration:
    OS: Windows Server 2008
    CPU: Intel Xeon X5460 CPU @ 3.15 GHz (2 CPUs).
    64-bit operating system.
    Memory: 32 GB

    From the results below, it seems that whenever we execute a query against the entailment (inference) index, execution time increases significantly.

    Here are the results:


    1. Single-pattern query using the entailment index:

    SELECT x
    FROM TABLE(SEM_MATCH(
      '(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)',
      SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
      SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL));

    Execution time: 0.02 seconds

    2.
    a. Two-pattern query using the entailment index:

    SELECT x, y
    FROM TABLE(SEM_MATCH(
      '(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
       (?x rdf:type <http://www.we.com/weo.owl#patient>)
       (?x <http://www.we.com/weo.owl#id> ?y)',
      SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
      SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL));

    Execution time: 127 seconds

    b. Two-pattern query without the entailment index:

    SELECT x, y
    FROM TABLE(SEM_MATCH(
      '(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
       (?x rdf:type <http://www.we.com/weo.owl#patient>)
       (?x <http://www.we.com/weo.owl#id> ?y)',
      SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
      NULL, NULL, NULL, NULL));

    Execution time: 2.5 seconds


    3.
    a. Three-pattern query using the entailment index:

    SELECT x, y, z
    FROM TABLE(SEM_MATCH(
      '(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
       (?x rdf:type <http://www.we.com/weo.owl#patient>)
       (?x <http://www.we.com/weo.owl#id> ?y)
       (?x <http://www.we.com/weo.owl#gender> ?z)',
      SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
      SDO_RDF_RULEBASES('OWLPrime'), NULL, NULL, NULL));

    Execution time: 146 seconds
    b. Three-pattern query without the entailment index:

    SELECT x, y, z
    FROM TABLE(SEM_MATCH(
      '(?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA>)
       (?x rdf:type <http://www.we.com/weo.owl#patient>)
       (?x <http://www.we.com/weo.owl#id> ?y)
       (?x <http://www.we.com/weo.owl#gender> ?z)',
      SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
      NULL, NULL, NULL, NULL));

    Execution time: 9 seconds

    Thank you

    Doron

    If you are on 11.1.0.7.0 and have installed fix 7600122, please use:
    - the curly-brace syntax for the SEM_MATCH query pattern
    - a virtual model (covering SEM_Models('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA') and
    SDO_RDF_RULEBASES('OWLPrime'))
    - ALLOW_DUP=T (as part of the options parameter of SEM_MATCH)

    For more details, please see
    http://download.Oracle.com/docs/CD/B28359_01/AppDev.111/b28397/sdo_rdf_newfeat.htm
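    Putting those suggestions together, a hedged sketch of what the rewritten two-pattern query might look like (this assumes the 11.1.0.7 SEM_MATCH signature with a trailing options parameter; verify the exact argument list and option names against your release's Semantic Technologies documentation):

    ```sql
    -- Sketch only: curly-brace graph pattern plus ALLOW_DUP=T in the options
    -- parameter; model/rulebase names are taken from the post above.
    SELECT x, y
      FROM TABLE(SEM_MATCH(
        '{ ?x <http://www.we.com/weo.owl#has_symptom> <http://www.we.com/weo.owl#allergyA> .
           ?x rdf:type <http://www.we.com/weo.owl#patient> .
           ?x <http://www.we.com/weo.owl#id> ?y }',     -- curly-brace pattern syntax
        SEM_MODELS('TOM_ONTOLOGY', 'TOM_ONTOLOGY_TA'),
        SDO_RDF_RULEBASES('OWLPrime'),
        NULL, NULL, NULL,
        'ALLOW_DUP=T'));                                 -- options parameter
    ```

    ALLOW_DUP=T trades duplicate-row elimination for speed when querying multiple models plus an entailment, which matches the slowdown pattern reported above.
    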

  • Question about a view that I created to solve performance problems

    Dear all,

    I have an interesting problem. I created a view to help solve some performance problems I've been having with my query.

    See below
    create or replace view view_test as 
    
    Select trunc(c.close_date, 'YYYY-MM-DD') as close_date, t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
    and c.type = 'C'
    group by trunc(c.close_date, 'YYYY-MM-DD'), t.names
    ;
    and I tried to test the view using the following syntax:
    select k.close_date, k.names from view_test k
    where k.names = 'Kay'
    and k.close_date between to_date('2010-01-01', 'YYYY-MM-DD') and to_date('2010-12-31', 'YYYY-MM-DD')
    However, I get the error message below:
    ORA-01898: too many precision specifiers
    I Googled it and tried a lot of suggestions online, but unfortunately I can't solve the problem, and I don't know why.




    What are you trying to accomplish with TRUNC?

    SQL> select trunc(sysdate, 'YYYY-MM-DD') from dual;
    select trunc(sysdate, 'YYYY-MM-DD') from dual
                          *
    ERROR at line 1:
    ORA-01898: too many precision specifiers
    

    I think you meant simply TRUNC(c.close_date)
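    For reference, the second argument of TRUNC(date, fmt) is a single format element such as 'YYYY', 'MM', or 'DD', not a full date mask like 'YYYY-MM-DD'. A minimal corrected sketch of the view from the post (plain TRUNC truncates to midnight, which is what a day-level GROUP BY needs):

    ```sql
    create or replace view view_test as
    select trunc(c.close_date) as close_date,   -- day-level truncation; no format mask needed
           t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
      and c.type = 'C'
    group by trunc(c.close_date), t.names;
    ```

    The filtering query from the post should then run unchanged, since TO_DATE('2010-01-01', 'YYYY-MM-DD') is a valid conversion mask even though it is not a valid TRUNC element.
    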

  • New partition creates performance problems

    Hello

    Let me give you my current scenario.

    My TXN table is huge and used very often. That's why it was designed earlier as an IOT with range partitioning on a quarterly basis, so 3 months of data reside in each partition, and we use MAXVALUE as the high value. Since it holds daily transaction data, we won't receive data with a later date.

    We also have a statistics job that runs every day to update the statistics on this TXN table.

    So, to summarize:

    *. The TXN table is an IOT
    *. All indexes are local indexes
    *. Range-partitioned by quarter
    *. Statistics updated daily

    Queries on this table return, for any given input, in less than a second...


    During the 2nd quarter, we added a new tablespace and partition and modified the indexes to include the new partition (say it was created by the end of March). So, when April 1 came and data was loaded into this partition (only around 20 rows)... my query took a lot of time... In fact it started to hang... When we checked in OEM (Oracle Enterprise Manager), it seemed that Oracle had created 2-3 query plans and started using the BAD query plan...

    As a result, CPU usage was very high... and the same query took ages to return results.

    When we asked the DBA to check on this... they did something and it began to work fine... When I checked OEM, my query no longer used the BAD query plan... it used a different one.

    Time passed...

    The same problem happened when we tried to add the partition for Q3.

    I really don't understand... why adding a partition causes this performance issue. Should I analyze the table after creating the partition, or rebuild the indexes? How does this work?


    Regards,
    Balaji Tr.

    We used to have a similar problem. We have monthly partitions, and we had performance problems on the 1st because some queries accessed the new (empty) partition while other queries were checking yesterday's data (a full partition).

    A classic problem. We fixed it by copying stats onto the new partition before it was used.

    David Marcos.
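    One hedged way to do that copy on 10.2.0.4 and later is DBMS_STATS.COPY_TABLE_STATS, which copies a source partition's statistics onto the new partition so the optimizer never sees it as empty. The owner and partition names below are placeholders, not taken from the original post:

    ```sql
    BEGIN
      DBMS_STATS.COPY_TABLE_STATS(
        ownname     => 'APP_OWNER',    -- hypothetical schema
        tabname     => 'TXN',
        srcpartname => 'TXN_2011_Q1',  -- full, representative previous partition (hypothetical name)
        dstpartname => 'TXN_2011_Q2'   -- new, still-empty partition (hypothetical name)
      );
    END;
    /
    ```

    Run it right after adding the partition and before the first load, then let the regular daily statistics job take over once the partition has real data.
    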

  • APEX 4.2 performance problem

    Hi all

    We are facing a performance problem when selecting one of the areas of responsibility within the organization. If we select APEX_01 Reporting ALL, we can see the whole organization, such as ETL, BRN, CHICKEN, etc., on the home page, but I am facing a performance problem when the form that selects the area of responsibility returns to the home page. Please indicate how to check this page's performance in APEX, and also advise how to solve this problem.

    apex_performance_issue_4.2.jpg

    LnTInfotech wrote:

    so you want to say the statement below takes more time and we need to address it?

    15.34901 224.01663 ... Run the statement:

    SELECT DISTINCT papf.full_name a, papf.full_name b
    FROM po_agents pa,
         per_all_people_f papf,
         org_organization_definitions org
    WHERE 1 = 1
    AND papf.person_id = pa.agent_id
    AND org.organization_code = NVL(:P1_WARE_HOUSE, org.organization_code)
    AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
    AND EXISTS (SELECT DISTINCT 1
                FROM po_headers_all poh
                WHERE poh.agent_id = pa.agent_id
                AND poh.org_id = org.operating_unit)

    This query seems to be missing a join. Having to use DISTINCT in application code usually indicates that there is something seriously wrong with the data model or the query...

  • Oracle 9i Java component performance problems

    Hello

    I am reworking an inherited Java component because of performance problems.


    A Java stored procedure is used to trigger a shell script that runs a Java component. The component connects to a series of remote directories via FTP, gets the files, and loads them using SQL*Loader.

    Is there a better way to do this? I saw a few articles about using an FTP interface directly from PL/SQL, skipping the Java component entirely. Would that be preferable to the current solution?

    Thanks in advance,
    Pedro

    I am reworking an inherited Java component because of performance problems.

    The first step is to identify what the performance problems are, where they occur, and what causes them.

    Provide details on:

    1. WHAT you do

    2. HOW you do it

    3. WHAT results you get

    4. WHAT results you expect to get.

  • ViewObject range Paging performance problem

    Hi all

    I am facing a performance problem implementing a requirement to programmatically add a number of extra WHERE-clause parameters (using bind variables) in combination with range paging.

    My code looks like this

    ...
    
    ApplicationModule am = Configuration.createRootApplicationModule("services.DossierAM", "DossierAMLocal");
    ViewObject vo = am.findViewObject("DossierListView");
    
    // apply programmatic view criteria
    ViewCriteria vc = vo.createViewCriteria();
    ViewCriteriaRow vcr = vc.createViewCriteriaRow();
    vcr.setAttribute("Reference", "15/%");
    vc.addElement(vcr);
    vo.applyViewCriteria(vc, true);
    
    
    // enable range paging
    vo.setAccessMode(RowSet.RANGE_PAGING);
    vo.setIterMode(RowIterator.ITER_MODE_LAST_PAGE_PARTIAL);
    vo.setRangeSize(50);
    vo.scrollToRangePage(5); // Cause a java.sql.SQLException: Parameter IN or OUT missing for index.....debugging learned that the :vc_temp_1 bind variable is not filled
    // vo.scrollToRange(250); // Cause a java.sql.SQLException: Parameter IN or OUT missing for index.....debugging learned that the :vc_temp_1 bind variable is not filled
    
    ... 
      ...
    

    I found 2 workarounds, but they both require an additional database query, which, performance-wise, is not acceptable.

    The first workaround is to slip in an additional call to executeQuery() before the call to scrollToRangePage(int) or scrollToRange(int).

    The second workaround is to use the method setRangeStart(int) instead of the scrollToRange(Page) variants. This method also performs 2 database calls.

    My question to you:

    Is there another way to satisfy the requirement of programmatically adding a number of additional WHERE-clause parameters (using bind variables) in combination with range paging, without needing to perform 2 database queries?

    The code was tested with JDeveloper 11.1.2.4.0 and 12.1.3.0.0 and behaves the same on both versions.

    Kind regards

    Steven.

    Have you tried creating the view criteria with a real bind variable (rather than using the implicit bind variable created by the framework)?

    Something like: http://www.jobinesh.com/2010/10/creating-view-criteria-having-bind.html

    Dario

  • HP Pavilion 15-n005sg: performance problems, and worsening

    Hello

    I bought this PC a year ago. It was a very good PC with good performance, but it has been getting slower and slower by the month. For example, I used to run Diablo 3 on medium-high settings at a constant 60 fps with no performance problems, but now I can't even run it at ultra-low settings without screen freezes and fps drops to 1-2; the same goes for League of Legends and other games.

    It also takes more and more time for my PC to turn on: it used to take 30-40 seconds, now it takes about 3 minutes.

    The performance has gotten really bad, and that should not happen on a 1-year-old PC.

    What can I do?

    A refresh of the system may be your best course of action.

    Your data, files, etc. will be retained. Any Windows Store apps are kept.

    Any program that came pre-installed is preserved.

    BUT all programs that are not in the above categories will have to be reinstalled: your games, printer, etc.

    Windows 8 system Refresh

    If you found my answer helpful, please say thank you by clicking the Thumbs Up icon. Thank you!

  • New performance problem

    Hi all

    We are encountering a performance problem once again.

    The batch job deletes 1M rows every night, which regularly took 30 minutes.

    But last night (midnight) it took more than 2 hours and crashed.

    Does it help if I run gather_schema_stats regularly when there are constant DELETEs on the table?

    Please help me check our ASH, AWR, and ADDM reports to solve the problem.

    ADDM

    https://app.box.com/s/7o734e70aa2m2zg087hf

    ASH

    https://app.box.com/s/xadlxfk0r5y7jvtxfsz7

    AWR

    https://app.box.com/s/x8ordka2gcc6ibxatvld

    Thank you...

    zxy

    yxes2013 wrote:

    Are you & SB twins?

    Good for you to say that, because you have a minimum of chores/tasks and your company pays you too well.

    As for me, I'm so overwhelmed with a lot of assignments, including no fewer than the following:

    GG, DBV, TDE, Ebiz, MySql, Sqlserver, Db2, Sybase, Foxpro, RMAN, Dataguard, database security audit, database administration, etc., & etc.

    And I get paid only USD 1,500 per month. Is this reasonable?

    Why do all the DBAs here seem not to complain about their jobs?

    Do you mean most of you have minimal assignments with your respective companies and are paid at the top?

    Thank you

    Fair enough. This has gone off-topic (again) and is going nowhere.

    Mod action: locking the thread.

    Nicolas.

  • Isolate storage performance problems

    I'm looking for the most effective strategy to isolate a possible performance problem and to determine whether storage is the cause.  VMware says that physical device read latency and physical device write latency should be below 20 ms and 10 ms on average.  I see average physical device read latency of 11 ms, and similar average write latency, on some HBAs.  I see 120 ms peak read latency and 240 ms peak write latency on some HBAs.

    Suppose I want to isolate this issue - how do I efficiently determine whether these measurements point to storage as the cause of the poor application performance?  (For example: Storage VMotion the virtual machine to its own LUN with no other VMs and see if there is a difference in performance - something that does not require a lot of time.)

    Migrate the virtual machine to local storage on one of your hosts to see if the latency goes away. Then you know almost for certain it's a problem on the ESX server or on the fabric. My bet is that it is on the fabric. Run perfstats on the array and you will most likely find that you are maxing out the spindles in the diskgroup backing the LUN that you write to.

    CD.

  • Accrual Reconciliation Load Run report performance problem

    We have significant performance problems when running the Accrual Reconciliation Load Run report. We had to cancel it after it had run for a day. Any idea how to solve this?

    We had a similar issue. How long this report runs depends on the input parameters. Remember, the first run of this report will take a long time, and subsequent executions will be much shorter.

    But we had to apply the fixes mentioned in the MOS article below to solve the performance issue:
    Accrual Reconciliation Load Run has slow performance [ID 1490578.1]

    Thank you
    MIA...

  • AE CS6 performance problem

    I have an odd performance issue that I am looking for help on. I found this in AE CS5, and it's still in AE CS6, so I thought I should pursue it here. It is on a Win7 platform. For the record, everything is fully updated and patched, including display drivers.

    First of all, some context. Most of my work is interviews and classes, for the most part two-camera. Most of the interviews are one-man-band, with me handling everything. A few times I have had to film in low-light conditions, resulting in a noisy capture on one or both cameras. Both cameras capture AVCHD 1080p30. When we are lucky, we get an interviewee who is articulate and has much to say; these clips can run long, for many minutes. If we were talking about a seven-second clip here, I wouldn't be wasting your precious time. I'm talking about as much as 25 minutes of footage.

    What I've done so far is send the noisy footage from Premiere to AE via Dynamic Link and run the grain-reduction ("Remove Grain") effect. This isn't an editing problem - I just disable the effect in AE so it doesn't interfere. But when it comes to export, a few seconds of these sequences is painful and a few minutes is ridiculous. I had one 3.5-minute piece take 16 hours to export, or right at 10.25 seconds per frame.

    Naturally, I ran the Win7 Task Manager / Resource Monitor to see what was going on. What I found is that the machine was not limited - I couldn't find a bottleneck. It's an i7 930 machine, so four cores, eight hyper-threads. Everything runs at about 20-30% in this case. I've got 12 GB of memory and had as much as 6 GB free. I have three hard drives and none of them were busy - the target drive was getting a continuous stream of image writes, but its average queue length was zero. The system was writing only about 2 MB per second. Like I said, no bottlenecks. So what's the problem?

    I have since tried to isolate the problem. Using this same sequence, if I have AE render internally through its render queue, it lights up the machine - all hyper-threads running hard (85-100%), all of the memory used, and the target disk hammered. From 10.25 sec/frame to 1 fps, it goes 10 times faster. But if I render the same footage through Media Encoder, I'm back to a very slow render.

    It seems that when AE is called through Dynamic Link, it runs on only a single hyper-thread. And when AE must touch each frame (as it does when using a grain-reduction effect), it effectively turns an eight-hyper-thread machine into a one-hyper-thread machine. Why is this? What can I do about it?

    I can't find settings for this anywhere. I played with the memory and multiprocessing preferences without success (my best settings to help AE's internal render queue turned out to be reserving 4 GB of memory for other processes and turning on simultaneous rendering of multiple frames with a RAM allocation of 1.5 GB per CPU, resulting in 4 hyper-threads used for the background render). Are there settings for this?

    In summary: if I turn off the grain-reduction effect in AE and export from Premiere, the machine blazes away and finishes my 3.5-minute export in about 35 minutes. If I turn on the grain-reduction effect, it takes more than 16 hours to do the exact same job. If I render just the 3.5 minutes from the camera that needs grain reduction directly through ME (taking Premiere out of the equation), it takes the same 16+ hours.

    So I think it is a performance problem with AE and how it works with Dynamic Link.

    Surely I'm not the first person to see this, so I hope there are good workarounds. I don't want to render an intermediate AE file if there are good alternatives; this is the workflow Dynamic Link was supposed to eliminate. In addition, I want to avoid re-compressing these files, and I don't have the disk space to render a lossless intermediate (even a moderately compressed codec like Cineform generates more than 10x).

    Suggestions? Please?

    Nothing to correct, because nothing is broken. Read this, down near the end.

    Mylenium

  • EBS 11i performance problem

    Hi hussein, helios, & all,


    Happy New Year to you :)
    I hope we have an even better year in 2011, the year of the rabbit.

    I just want to learn something from each of you through the year 2010.

    Can you share how you tackled performance problems in EBS?
    I know there are common actions you have taken to improve performance,
    presumably after statistical performance analyses such as Statspack, sar,
    and other performance-monitoring report output.

    What actions / solutions did you resolve performance issues in EBS?

    Did you upgrade the server itself to a more powerful one?

    For example, our users complain that Oracle Apps runs very slowly at various times of the day. This happens about 20% of the day (performance problem), while the other 80% Oracle Applications runs fine (no user complaints).
    The appearance of the performance problem is not fixed in time: sometimes in the morning, sometimes in the afternoon. Sometimes it happens even when only a few users are logged in, and sometimes there is no performance problem even when many users are connected. If you run a Statspack report, you see programs eating a lot of resources, but they are Oracle programs, so you cannot touch or tune them, right? So, to keep the users calm, they agreed to have the server rebooted. Temporarily, this solves the performance problem, so we just keep doing it this way. My suspicion is that many concurrent background programs (on request) are running, which is why users experience slowness even when only a few are connected. I want to know which concurrent programs are running at the times users hit the performance issue.
    Is there a SQL*Plus query to list all running concurrent programs?


    Thank you very much

    Mrs. Mina

    What actions / solutions did you resolve performance issues in EBS?

    Depends on what kind of performance issues you have.

    Did you upgrade the server itself to a more powerful one?

    Not necessarily.

    For example, our users complain that Oracle Apps runs very slowly at various times of the day. This happens about 20% of the day (performance problem), while the other 80% Oracle Applications runs fine (no user complaints).

    Which part of the application? What activities are going on when there are performance issues?

    The appearance of the performance problem is not fixed in time: sometimes in the morning, sometimes in the afternoon. Sometimes it happens even when only a few users are logged in, and sometimes there is no performance problem even when many users are connected. If you run a Statspack report, you see programs eating a lot of resources, but they are Oracle programs, so you cannot touch or tune them, right? So, to keep the users calm, they agreed to have the server rebooted. Temporarily, this solves the performance problem, so we just keep doing it this way.

    Please see old similar threads.

    Performance optimization
    http://forums.Oracle.com/forums/search.jspa?threadID=&q=performance+tuning&objid=C3&DateRange=all&userid=&NumResults=15&rankBy=10001

    My suspicion is that many concurrent background programs (on request) are running, which is why users experience slowness even when only a few are connected. I want to know which concurrent programs are running at the times users hit the performance issue.
    Is there a SQL*Plus query to list the currently running jobs?

    Please see these documents.

    How to find the database session & process associated with a concurrent program which is currently running [ID 735119.1]
    How to retrieve SID information for a running request [ID 280391.1]
    How to find the UNIX PID for a concurrent program [ID 154368.1]
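    As a starting point for the "list all running concurrent programs" question, here is a hedged sketch of a SQL*Plus query against the standard FND tables (column names should be verified against your EBS release; phase_code 'R' marks running requests):

    ```sql
    -- Sketch only: currently running concurrent requests, oldest first
    SELECT r.request_id,
           p.concurrent_program_name,
           t.user_concurrent_program_name,
           r.actual_start_date
      FROM fnd_concurrent_requests    r,
           fnd_concurrent_programs    p,
           fnd_concurrent_programs_tl t
     WHERE r.concurrent_program_id  = p.concurrent_program_id
       AND r.program_application_id = p.application_id
       AND p.concurrent_program_id  = t.concurrent_program_id
       AND p.application_id         = t.application_id
       AND t.language               = USERENV('LANG')
       AND r.phase_code             = 'R'   -- 'R' = Running
     ORDER BY r.actual_start_date;
    ```

    Running this at the moment users report slowness, and again when things are fine, lets you diff the two lists of concurrent programs.
    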

    Thank you
    Hussein

  • Performance problem in production; Please help me out

    Hi all,

    I'd really appreciate if someone can help me with this.

    Every night the server runs out of swap; the sysadmin has added more swap disk space every night for the last 4 days.
    I ran an ADDM report from 22:00 to 04:00 (when the server runs out of memory),
    and it flagged a performance problem with this query:
    RECOMMENDATION 4: SQL Tuning, 4.9% benefit (1329 seconds)
          ACTION: Investigate the SQL statement with SQL_ID "b7f61g3831mkx" for 
             possible performance improvements.
             RELEVANT OBJECT: SQL statement with SQL_ID b7f61g3831mkx and 
             PLAN_HASH 881601692
    I can't figure out what the problem is or why it is a source of performance problems. Could you help me, please?
    *WORKLOAD REPOSITORY SQL Report*
    
    Snapshot Period Summary
    
    DB Name         DB Id      Instance     Inst Num Release     RAC Host        
    ------------ ----------- ------------ -------- ----------- --- ------------
    ****       1490223503 ****             1 10.2.0.1.0  NO  ****
    
                  Snap Id      Snap Time      Sessions Curs/Sess
                --------- ------------------- -------- ---------
    Begin Snap:      9972 21-Apr-10 23:00:39       106       3.6
      End Snap:      9978 22-Apr-10 05:01:04       102       3.4
       Elapsed:              360.41 (mins)
       DB Time:              451.44 (mins)
    
    SQL Summary                         DB/Inst: ****/****  Snaps: 9972-9978
    
                    Elapsed 
       SQL Id      Time (ms)
    ------------- ----------
    b7f61g3831mkx  1,329,143
    Module: DBMS_SCHEDULER
     GATHER_STATS_JOB
    select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact u
    se_weak_name_resl dynamic_sampling(0) no_monitoring */ count(*),count("P_PRODUCT
    _ID"),count(distinct "P_PRODUCT_ID"),count("NAME"),count(distinct "NAME"),count(
    "DESCRIPTION"),count(distinct "DESCRIPTION"),count("UPC"),count(distinct "UPC"),
    
              -------------------------------------------------------------       
    
    SQL ID: b7f61g3831mkx               DB/Inst: ***/***  Snaps: 9972-9978
    -> 1st Capture and Last Capture Snap IDs
       refer to Snapshot IDs witin the snapshot range
    -> select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_shari...
    
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    --- ---------------- ---------------- ------------- ------------- --------------
    1   881601692               1,329,143             1          9973           9974
              -------------------------------------------------------------       
    
    
    Plan 1(PHV: 881601692)
    ---------------------- 
    
    Plan Statistics                     DB/Inst: ***/***  Snaps: 9972-9978
    -> % Total DB Time is the Elapsed Time of the SQL statement divided 
       into the Total Database Time multiplied by 100
    
    Stat Name                                Statement   Per Execution % Snap 
    ---------------------------------------- ---------- -------------- -------
    Elapsed Time (ms)                         1,329,143    1,329,142.7     4.9
    CPU Time (ms)                                26,521       26,521.3     0.7
    Executions                                        1            N/A     N/A
    Buffer Gets                                 551,644      551,644.0     1.3
    Disk Reads                                  235,239      235,239.0     1.5
    Parse Calls                                       1            1.0     0.0
    Rows                                              1            1.0     N/A
    User I/O Wait Time (ms)                     233,212            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                 71            N/A     N/A
              -------------------------------------------------------------       
    
    Execution Plan
    ---------------------------------------------------------------------------------------------------
    | Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    ---------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT        |         |       |       | 24350 (100)|          |       |       |
    |   1 |  SORT GROUP BY          |         |     1 |   731 |            |          |       |       |
    |   2 |   PARTITION RANGE SINGLE|         |  8892 |  6347K| 24350   (1)| 00:04:53 |   KEY |   KEY |
    |   3 |    PARTITION LIST ALL   |         |  8892 |  6347K| 24350   (1)| 00:04:53 |     1 |     5 |
    |   4 |     TABLE ACCESS SAMPLE | PRODUCT |  8892 |  6347K| 24350   (1)| 00:04:53 |   KEY |   KEY |
    ---------------------------------------------------------------------------------------------------
     
    
    
    Full SQL Text
    
    SQL ID       SQL Text                                                         
    ------------ -----------------------------------------------------------------
    b7f61g3831mk select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_
                 _sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitori
                 ng */ count(*), count("P_PRODUCT_ID"), count(distinct "P_PRODUCT_
                 ID"), count("NAME"), count(distinct "NAME"), count("DESCRIPTION")
                 , count(distinct "DESCRIPTION"), count("UPC"), count(distinct "UP
                 C"), count("ADV_PRODUCT_URL"), count(distinct "ADV_PRODUCT_URL"),
                  count("IMAGE_URL"), count(distinct "IMAGE_URL"), count("SHIPPING
                 _COST"), count(distinct "SHIPPING_COST"), sum(sys_op_opnsize("SHI
                 PPING_COST")), substrb(dump(min("SHIPPING_COST"), 16, 0, 32), 1, 
                 120), substrb(dump(max("SHIPPING_COST"), 16, 0, 32), 1, 120), cou
                 nt("SHIPPING_INFO"), count(distinct "SHIPPING_INFO"), sum(sys_op_
                 opnsize("SHIPPING_INFO")), substrb(dump(min(substrb("SHIPPING_INF
                 O", 1, 32)), 16, 0, 32), 1, 120), substrb(dump(max(substrb("SHIPP
                 ING_INFO", 1, 32)), 16, 0, 32), 1, 120), count("P_STATUS"), count
                 (distinct "P_STATUS"), sum(sys_op_opnsize("P_STATUS")), substrb(d
                 ump(min(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), substrb
                 (dump(max(substrb("P_STATUS", 1, 32)), 16, 0, 32), 1, 120), count
                 ("EXTRA_INFO1"), count(distinct "EXTRA_INFO1"), sum(sys_op_opnsiz
                 e("EXTRA_INFO1")), substrb(dump(min(substrb("EXTRA_INFO1", 1, 32)
                 ), 16, 0, 32), 1, 120), substrb(dump(max(substrb("EXTRA_INFO1", 1
                 , 32)), 16, 0, 32), 1, 120), count("EXTRA_INFO2"), count(distinct
                  "EXTRA_INFO2"), sum(sys_op_opnsize("EXTRA_INFO2")), substrb(dump
                 (min(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), substrb
                 (dump(max(substrb("EXTRA_INFO2", 1, 32)), 16, 0, 32), 1, 120), co
                 unt("ANALISIS_DATE"), count(distinct "ANALISIS_DATE"), substrb(du
                 mp(min("ANALISIS_DATE"), 16, 0, 32), 1, 120), substrb(dump(max("A
                 NALISIS_DATE"), 16, 0, 32), 1, 120), count("OLD_STATUS"), count(d
                 istinct "OLD_STATUS"), sum(sys_op_opnsize("OLD_STATUS")), substrb
                 (dump(min("OLD_STATUS"), 16, 0, 32), 1, 120), substrb(dump(max("O
                 LD_STATUS"), 16, 0, 32), 1, 120) from "PARTNER_PRODUCTS"."PRODUCT
                 " sample ( 12.5975349658) t where TBL$OR$IDX$PART$NUM("PARTNER_PR
                 ODUCTS"."PRODUCT", 0, 4, 0, "ROWID") = :objn
                 

    Dear friend,

    Why do you think you have problems with the shared pool? In the ASH report you provided there were just 2.5 average active sessions and about 170 queries during the period, which is very low; you do not have a shared pool problem. You do have some queries that use literals; it would be better to replace the literals with bind variables if possible, or you can set the CURSOR_SHARING initialization parameter to FORCE (or SIMILAR) — it is a dynamic parameter.
    But this is not a dramatic problem in your case!
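    To illustrate the two options above, here is a minimal SQL*Plus sketch. The parameter change is standard Oracle syntax; the query against the PRODUCT table and the literal value 12345 are hypothetical examples, not taken from your application.

    ```sql
    -- CURSOR_SHARING is a dynamic parameter; no restart is needed.
    ALTER SYSTEM SET cursor_sharing = FORCE;

    -- The better fix: use bind variables instead of literals.
    -- Instead of:  SELECT * FROM product WHERE p_product_id = 12345;
    VARIABLE id NUMBER
    EXEC :id := 12345
    SELECT * FROM product WHERE p_product_id = :id;
    ```

    With bind variables, repeated executions share one cursor in the shared pool instead of hard-parsing a new one for each literal value.
    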

    From your ASH report we can see that the top wait events are "CPU + Wait for CPU", "RMAN backup & recovery I/O" and "log file sync", which together account for 65% of your database wait time; the same pattern appears in the background wait events.
    If I read the report correctly, you have two members in each redo log group and a problem with log writer I/O speed: check the distribution of files across the disks, because a slow log writer makes the other sessions wait. The high
    CPU can be related to RMAN compression. From the Top Service/Module section we can see that GATHER_STATS_JOB consumes 16% of the activity, RMAN consumes 33%, and only 21% is your application; there is also something running from
    SQL*Plus under the SYS (?) account. From the Top SQL page — if this is SQL from your application, the "db file scattered read" event indicates that full table scans are taking place. Is that normal for your application? If yes, then try
    running them in parallel: as we can see in the "Top Sessions running PQs" section of your report, nothing is running in parallel, yet as I understand it you have 8 CPUs, so try parallel execution or avoid the full scans. But consider that
    parallel full scans use PGA memory rather than the SGA, so decrease SGA_TARGET and increase PGA_AGGREGATE_TARGET accordingly.
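    As a sketch of the parallel-execution suggestion: the table name comes from the report above, but the degree of 8 (based on the 8 CPUs mentioned) and the 2G memory values are placeholders you must size for your own system.

    ```sql
    -- Force a parallel full scan with hints (degree 8 is an assumption).
    SELECT /*+ FULL(t) PARALLEL(t 8) */ COUNT(*)
    FROM   partner_products.product t;

    -- Parallel full scans use direct path reads into the PGA, bypassing the
    -- buffer cache, so shift memory from the SGA toward the PGA
    -- (2G values below are illustrative only):
    ALTER SYSTEM SET sga_target = 2G SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;
    ```
    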

    Is there any other application or program running on the server besides Oracle?
    Was the performance degradation sudden — everything was fine yesterday but bad today — or has it never been good?

    Check the reasons for the slow log writer; it can greatly affect performance.
    Also, 90% of performance problems are generally caused by poor SQL and poor execution plans.
    Also, if you use automatic memory management you can still set the individual memory parameters, but be aware that in that case they act as minimum values; therefore set them to low values so that Oracle can manage memory
    fully.
    Don't increase your SGA at this stage. Get an AWR report using @$ORACLE_HOME/rdbms/admin/awrrpt.sql and check your cache hit ratios.
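    One way to check log writer speed is a query like the following against V$SYSTEM_EVENT (the column names are standard; the 'log file parallel write' event measures LGWR's own I/O, while 'log file sync' measures what foreground sessions wait):

    ```sql
    -- Average wait per event in microseconds; a high average for
    -- 'log file parallel write' points at slow redo log disk I/O.
    SELECT event,
           total_waits,
           time_waited_micro / NULLIF(total_waits, 0) AS avg_wait_us
    FROM   v$system_event
    WHERE  event IN ('log file parallel write', 'log file sync');
    ```
    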
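    For reference, generating the AWR report from SQL*Plus looks like this (the `?` is SQL*Plus shorthand for $ORACLE_HOME; the script prompts interactively for the report type, snapshot range, and output file name):

    ```sql
    CONNECT / AS SYSDBA
    @?/rdbms/admin/awrrpt.sql
    ```
    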

    BUT first you must change your backup strategy — look at my first post — and then check performance again; until you do that it will be very difficult to help you.

    Good luck
