Query consuming too much time

Hello
I am using Oracle Release 10.2.0.4.0. I have a query that takes too long (about 7 minutes) for an indexed read. Please help me understand the reason and a workaround for it.
  SELECT *
    FROM a, b
   WHERE a.xdt_docownerpaypk = b.paypk
     AND a.xdt_doctype = 'PURCHASEORDER'
     AND b.companypk = 1202829117
     AND a.xdt_createdt BETWEEN TO_DATE ('07/01/2009', 'MM/DD/YYYY')
                            AND TO_DATE ('01/01/2010', 'MM/DD/YYYY')
ORDER BY a.xdt_createdt DESC;


--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   1 |  SORT ORDER BY                 |                         |      1 |      1 |    907 |00:06:45.83 |   66716 |  60047 |   478K|   448K|  424K (0)|
|*  2 |   TABLE ACCESS BY INDEX ROWID  | a                       |      1 |      1 |    907 |00:06:45.82 |   66716 |  60047 |       |       |          |
|   3 |    NESTED LOOPS                |                         |      1 |      1 |   6977 |00:06:45.64 |   60045 |  60030 |       |       |          |
|   4 |     TABLE ACCESS BY INDEX ROWID| b                       |      1 |      1 |      1 |00:00:00.01 |       4 |      0 |       |       |          |
|*  5 |      INDEX RANGE SCAN          | IDX_PAYIDENTITYCOMPANY  |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |       |       |          |
|*  6 |     INDEX RANGE SCAN           | IDX_XDT_N7              |      1 |   3438 |   6975 |00:06:45.64 |   60041 |  60030 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
   5 - access("b"."COMPANYPK"=1202829117)
   6 - access("XDT_DOCTYPE"='PURCHASEORDER' AND "a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")
       filter("a"."XDT_DOCOWNERPAYPK"="b"."PAYPK")


32 rows selected.


Index IDX_XDT_N7 is on (xdt_doctype, action_date, xdt_docownerpaypk). Its details are as below.

blevel   distinct_keys   avg_leaf_blocks_per_key   avg_data_blocks_per_key   clustering_factor   num_rows
3        868840          1                         47                        24020933            69871000
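For reference, figures like these can be pulled from the data dictionary (a sketch; DBA_INDEXES is the standard view, though USER_INDEXES works without DBA privileges):

```sql
-- Optimizer statistics for the index, as reported above.
SELECT blevel,
       distinct_keys,
       avg_leaf_blocks_per_key,
       avg_data_blocks_per_key,
       clustering_factor,
       num_rows
  FROM dba_indexes
 WHERE index_name = 'IDX_XDT_N7';
```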


But when I derive the exact value of paypk from table b and supply it to the query as a literal, it uses another index (IDX_XDT_N4), which is on (month, year, xdt_docownerpaypk, xdt_doctype, action_date), and completes within ~17 seconds. Below are the query and plan details.


  SELECT *
    FROM a
   WHERE a.xdt_docownerpaypk = 1202829132
     AND xdt_doctype = 'PURCHASEORDER'
     AND a.xdt_createdt BETWEEN TO_DATE ('07/01/2009', 'MM/DD/YYYY')
                            AND TO_DATE ('01/01/2010', 'MM/DD/YYYY')
ORDER BY xdt_createdt DESC;

 ------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   1 |  SORT ORDER BY               |                         |      1 |   3224 |    907 |00:00:02.19 |    7001 |    339 |   337K|   337K|  299K (0)|
|*  2 |   TABLE ACCESS BY INDEX ROWID| a                       |      1 |   3224 |    907 |00:00:02.19 |    7001 |    339 |       |       |          |
|*  3 |    INDEX SKIP SCAN           | IDX_XDT_N4              |      1 |  38329 |   6975 |00:00:02.08 |     330 |    321 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("a"."XDT_CREATEDT"<=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "a"."XDT_CREATEDT">=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
   3 - access("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER')
       filter(("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER'))


 
Index IDX_XDT_N4 details are as below.

blevel   distinct_keys   avg_leaf_blocks_per_key   avg_data_blocks_per_key   clustering_factor   num_rows
3        868840          1                         47                        23942833            70224133
Published by: 930254 on April 26, 2013 05:04

The first query uses the predicate "XDT_DOCTYPE"='PURCHASEORDER' to determine the portion of the index IDX_XDT_N7 that has to be scanned, and uses the other predicates only to filter rows, so most of the index blocks still have to be read. The second query uses an INDEX SKIP SCAN, which ignores the first column of the index IDX_XDT_N4 and uses the predicates on the following columns ("a"."XDT_DOCOWNERPAYPK"=1202829132 AND "XDT_DOCTYPE"='PURCHASEORDER') to get much more selective access (reading only 330 blocks instead of > 60K).

I think that there are two possible options to improve performance:

1. If creating a new index is an option, you can define an index on table A on (xdt_doctype, xdt_docownerpaypk, xdt_createdt).
2. If creating a new index is not an option, you can use an INDEX SKIP SCAN hint (INDEX_SS(a IDX_XDT_N4)) to direct the CBO to use the second index (without a hint, the CBO tends to ignore the possibility of a SKIP SCAN in an NL join). But hints in production code are rarely a good idea... In 11g you could use SQL plan baselines to avoid such hints in the code.
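Both options can be sketched in SQL (hypothetical statements based on the table and index names above; verify the column order against your real workload before creating anything):

```sql
-- Option 1: a new composite index covering the equality predicates
-- (xdt_doctype, xdt_docownerpaypk) plus the range column (xdt_createdt).
CREATE INDEX idx_xdt_new ON a (xdt_doctype, xdt_docownerpaypk, xdt_createdt);

-- Option 2: direct the CBO to the skip scan on the existing index
-- (the name "a" in the hint must match the table/alias in the FROM clause).
SELECT /*+ INDEX_SS(a idx_xdt_n4) */ *
  FROM a, b
 WHERE a.xdt_docownerpaypk = b.paypk
   AND a.xdt_doctype = 'PURCHASEORDER'
   AND b.companypk = 1202829117
   AND a.xdt_createdt BETWEEN TO_DATE ('07/01/2009', 'MM/DD/YYYY')
                          AND TO_DATE ('01/01/2010', 'MM/DD/YYYY')
 ORDER BY a.xdt_createdt DESC;
```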

Regards

Martin

Tags: Database

Similar Questions

  • A query that takes too much time with the dates?

    Hello people,
    I'm trying to pull some data using the status date, and for some reason it's taking too long to return the data:
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- If I use this it takes too much time
    
    
      and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- If I use this it returns the data in a second. Why is that?
    
    How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?

    It seems that al.activity_date is indexed while TRUNC(al.activity_date) is not. Your problem is the TRUNC on the column, not the TRUNC(SYSDATE,'DD')-1. So use:

    and al.activity_date >= TRUNC(SYSDATE)-1
    and al.activity_date < TRUNC(SYSDATE)
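    If the query really must filter on TRUNC of the column, a function-based index is a possible alternative (a hedged sketch; the table name below is assumed from the alias, and the range predicates above are usually the better fix):

    ```sql
    -- Function-based index so TRUNC(activity_date) itself becomes indexable;
    -- gather fresh statistics afterwards so the CBO will consider it.
    CREATE INDEX idx_al_activity_trunc ON al (TRUNC(activity_date));
    ```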
    
  • Firefox consumes too much RAM, caused by js

    I have a problem with Firefox consuming too much RAM. Many people are complaining about this, but I did some additional research before posting here using about:memory, and I discovered that js-non-window and js-main-runtime are the cause.

    When I start an empty Firefox session it immediately allocates 260 MB of RAM. About 75-80% of this is taken up by these two js (JavaScript?) entries.

    After opening about 10 tabs with different web pages, RAM use went up to about 750 MB. While the tabs are open, most of the extra MB are consumed by the window objects (40 MB per page, which seems like a lot for a news site with one article...), but after closing them the memory use does not go back down; it gets transferred to the two js entries already mentioned. This transfer to the js entries does not always happen; it seems to occur if I open a lot of tabs, let's say 10+.

    I tried disabling JavaScript using the SettingSanity add-on, but that of course makes navigation difficult and, worst of all, it does not solve the problem. The entries mentioned continue to use the same amount of RAM.

    I have this same problem on all my computers and laptops, so it can't be a coincidence. Is there anything I can do about it? Is it being investigated? How can a browser with 10 open web pages need almost 800 MB of RAM?

    And please don't reply with "install more RAM". The PC I have the most problems with has 2 GB of physical RAM, and I really don't think that is too little for 10 web pages. Doing the same thing in Internet Explorer consumes less than half the amount of RAM.

    PS: I sent this request using IE because I couldn't access this page in Firefox. The 'Ask the question' button did not work...

    I have 2 GB of RAM too and can open 20-30 tabs before slowing down. Maybe it could be an add-on like Adblock Plus causing the memory leak.

    Safe Mode: troubleshoot extensions, themes and hardware acceleration problems to resolve common Firefox problems

    That article also covers disabling hardware acceleration, which can reduce CPU and RAM use on some systems (if it doesn't help, turn it back on).

  • I removed SP3 in a continuing effort to 'fix' iexplore.exe taking too much processor time for everything it does. Now I can't reinstall because of "access denied."

    original title: cannot re-install SP 3

    I removed SP3 in a continuing effort to 'fix' iexplore.exe taking too much processor time for everything it does.  Now I can't reinstall because of "access denied."  I have REMOVED all antivirus and anti-malware programs and still get "access denied".  Need help!

    Don B

    Click HERE. Scroll down the page and click on the automated FixIt. Follow the prompts to run it. After that, try to install the service pack that you downloaded.

  • I have a Dell PC N series (Intel Core Duo); it takes too much time starting. The last time I started it was at 3:00.

    Original title: problem in multidisciplinary

    I have a Dell PC N series (Intel Core Duo); it takes too much time starting. The last time I started it was at 3 a.m. What should I do?

    Have you eliminated virus/malware problems?

    Have you recently installed anything on the computer, or updated drivers, etc.?

    See: http://windows.microsoft.com/en-au/windows/optimize-windows-better-performance#optimize-windows-better-performance=windows-7

  • I signed up for a year of 10 photos per month.  I am signed in to Adobe. I can't reach a real human being. I can't download a picture to my computer. Mac 10.5.8 desktop computer. It takes too much time.

    I signed up for a year of 10 photos per month.  I am signed in to Adobe. I can't reach a real human being. I can't download a picture to my computer. Mac 10.5.8 desktop computer. It takes too much time.

    Since this is an open forum, not Adobe support... you must contact Adobe staff for help.

    Chat/phone: Mon - Fri 05:00 - 19:00 (US Pacific Time); note the days and times.

    Don't forget to be signed in with your Adobe ID before accessing the link below.

    Creative Cloud support (all Creative Cloud customer service problems)

    http://helpx.Adobe.com/x-productkb/global/service-CCM.html

  • What do I do when I get an error message indicating that my Adobe ID session has timed out?

    What do I do when I get an error message indicating that my Adobe ID session has timed out?

    Hi Kjsoden,

    I found a thread that might help you: http://forums.adobe.com/thread/821424

    Let me know if this helped.

    Thank you

    Preran

  • Breakpoints taking too much time for each step, every time, in JDeveloper 11.1.1.7

    Hi all

    I use JDeveloper 11.1.1.7 for ADF application development and I use a 64-bit JDK. When I run my application in debug mode and control reaches a breakpoint, each step (F8) takes too much time, and a few times because of this I have had to restart the WL server. While this happens, a progress/loading symbol shows on the screen. I don't know why this is happening...? Please, someone help me with this.

    Vinoth-

    Try closing the ADF Structure panel. I have noticed JDev struggles with a lot of requests to keep the ADF Structure in sync. It helps in JSF page design and with breakpoints in backing beans and other JSF-related classes.

    I also had problems with the 64-bit JDK on Windows and went back to 32-bit. (I don't remember whether I had WLS crashes, but I know I had a lot of problems.)

    Try increasing the JDev memory and the WLS heap.

  • Query took too much time after adding a new column to the table and creating an index on it

    I added a new column to a table that contains thousands of records, and created a composite index with three columns (the newly added one + two existing columns).

    To be specific: in table TBL there are two columns, col1 and col2.

    I added the new column col3 to TBL and created the composite index (col1, col2, col3).

    Right now col3 is NULL for all records. When I select from this table, it takes too long...

    Any idea what I am doing wrong? I have checked the query plan; it is using the index.

    It was solved by gathering statistics using

    DBMS_STATS.GATHER_TABLE_STATS

    @Top.Gun thanks for your review...
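    The fix amounts to regathering optimizer statistics so the new column and composite index are accounted for (a sketch; the owner and table name are placeholders from the question):

    ```sql
    BEGIN
       -- Refresh table, column and index statistics after the DDL change.
       DBMS_STATS.GATHER_TABLE_STATS (
          ownname => USER,
          tabname => 'TBL',
          cascade => TRUE);  -- also gathers stats on the new composite index
    END;
    /
    ```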

  • Open with options: I want to add a new application to the right-click file -> open with list and rearrange the list so my preferred app is first. I know that I can use open with -> other and select any application, but it takes too much time since I u

    I want to add a new application to the right-click file -> open with list

    and also reorder the list so my preferred app is first / at the top of the list.

    I know that I can use open with -> other and select any application, but that takes too long since I use my intended application this way several times a day.

    Basically, I'm looking for a configurable way to set up open with, something more like Send To in Windows.

    Thank you very much for all the advice offered.

    The operating system, not you, controls that Open With menu command.

  • Too much time to empty the queue in the consumer loop

    I'm working on a VI used to screen analog converters over temperature, from room temperature to 175 C.  I have it set up with a producer and a consumer loop.  The producer loop takes all the measurements, while the consumer loop writes all those measurements to a spreadsheet file. There are 23 measurements taken and recorded every second.  The test runs for about 60 minutes.

    Initially, I had trouble with data in the queue being lost when I stopped the VI.  I fixed that by ensuring that the queue was completely empty before stopping the VI.  Now my VI continues to run long after I press the stop button.  I let it run for up to an hour, finally aborted the operation, and still lost a large number of measurements at the end of the queue.  From my reading on the forums, this is because of the overhead of writing to a measurement file.  Opening and closing the file every second is what slows everything down.

    I've seen examples of building an array of measured data and only writing every 100 samples or so.  Can someone help me see how I could implement something like this in my code?  Is there another reason it takes so long to write all my test data?

    Thank you

    Matt

    You should use the real file I/O functions.  The Write To Measurement File Express VI opens the file, writes the data and then closes it.  When you do this many times, you get really slow performance.  Instead, open the file before your consumer loop and close it after the loop.  You will then be able to write as much as you want inside the loop.  But now you need to format the data yourself.

  • Report takes too much time on PROD and finishes in a few seconds on UAT

    Hello

    11.2.0.3 Oracle database version

    OS: RHEL 6

    EBS Version 12.1.3

    A strange thing is happening between the PROD and CLONED instances.

    The user triggers a request (CRI) and it runs for hours on the PROD instance, but finishes in a few seconds on UAT.

    > I have checked and found no blocking locks.

    > Other programs are running on PROD, but the difference is far too big (several hours versus seconds).

    > I cannot tune the request using SQL Tuning Advisor, as the SQL ID changes dynamically.

    > Despite this I tried tuning, but it errors out with a timeout (increased from 1800 to 3000) and still gets the timeout error.

    Please suggest what else I can check for this report.

    Regards

    Karan


    The problem has been resolved.

    In one of the SRs, Oracle had asked us to delete the stats on one of the GT (global temporary) tables.

    I had done so two weeks ago. The UAT clone is a month old, so the old stats were still reflected there.

    Today I tried to remember all the changes that had been made, and I found that this was the one.

    I gathered stats at 100% for this table. The report that was running for almost 5-6 hours on PROD now completes in just 35 seconds.

    Lesson learned: track the changes you make.

    Regards

    Karan

  • vCenter takes too much time to respond after removal of a DC?

    This is a weird one and might or might not be related, but I need help.

    Yesterday I finally had the time to decommission one of my old DCs (leaving 3 others, all global catalogs).   When I tried to remove it I hit the problem where it says

    The operation failed because

    Active Directory Domain Services could not transfer the remaining data in the directory partition DC=ForestDnsZones, etc...


    Researching this and following a guide, I was able to change it to a correct domain controller and then remove it cleanly.  Apart from a brief replication error, which I think was just the changes propagating, it all seems good.  dcdiag and repadmin checks show everything is OK, so I thought all was well.

    But then, a few hours later, I went to connect to vCenter to do something else and I get

    "The server xxxxx took too long to respond. (The operation has timed out because the remote server is taking too long to respond.)"


    Now, I had not been in vCenter that day, but it was fine the day before, so I'm not 100% sure if it's related or not.  In any case, last night my Veeam backups, while working, took twice as long, though everything eventually completed OK, so something seems off.   My first thought was that SSO was pointing to the deleted domain controller, but it was not; I checked, and it shouldn't have been anyway, as the one I removed was only a temp DC.  So this morning I restarted the vCenter VM and it restarts OK, services start, but I still cannot connect; it says it's taking too long to respond.  Any ideas?  Is it related to the removal of the DC?  I don't see why it would be, except for SSO, but the timing lines up.

    What's weird is that if I try the web client a few times, it sits at authentication for a long time and then finally lets me in, and everything seems fine.  BUT if, say, I go to the permissions section and try to add one of my domain accounts, it sits there for a solid 3-5 minutes before completing any of my usernames, so it is getting them out of AD...   I really don't know why it would do this, because when I installed vCenter 4 years ago there were just 2 x DCs; the one I removed yesterday was only installed 6-8 months ago when I was going to upgrade the domain controllers, but I never got round to it, so it was just sitting there waiting. It was no longer needed, so I decided to remove it.

    For reference, my domain has 2 x 2003 DCs (DNS, GC etc., which are preferred and secondary), 1 x new 2008 R2 DC, which went in OK a few weeks ago, and the deleted 2008 DC.  I also still have Exchange 2003 (going away), but touch wood it seems OK as well; all DCs and GCs report correctly.

    vCenter is 5.1, with 3 x 5.1 hosts also running.  vCenter runs on a 2003 R2 x64 VM with SQL running on the same virtual machine.  The DB is about 8 GB and the log is 1.5 GB.

    FWIW, I think I found the problem.

    SSO was pointed at the base DN of my domain, as it always had been, but recently our parent office put in place a program called Quest address book synchronization, which meant our small directory of 200 users suddenly had a few extra thousand entries. I don't know why it decided that now was the time to break, but changing SSO to look at a smaller organizational unit rather than the base DN for the domain solved my access problem.

  • page loading takes too much time

    Hi, I run the stored procedure in SQL Management Studio. It runs in less than a second. In my page I call the stored procedure 5 times.  It takes about 15 seconds for the page to load.  Can't this be sped up?  Thank you!

    I'm a beginner working to become an intermediate.  The stored procedure call is listed here. I use the same code for each region I query.

    <cfstoredproc procedure="dbo.StatewideFatalCollisionsByRegion" datasource="cfpMeasure">
        <cfprocresult name="fatal">
    </cfstoredproc>

    <cfquery name="fatal" dbtype="query">
        SELECT collisionYear, total AS fatal
        FROM fatal
        WHERE region = 'NW'
    </cfquery>

    I have six regions and this code is repeated on the page for each region. I use the result sets for charts.

    <cfchart font="arial"
             fontbold="yes"
             xaxistitle="year"
             yaxistitle="## fatal"
             title="NW region Fatal"
             scalefrom="0"
             scaleto="60"
             format="jpg"
             fontsize="14"
             chartwidth="270"
             chartheight="275"
             showborder="yes"
             showlegend="no">
        <cfchartseries type="line"
                       seriescolor="##000099"
                       query="fatal"
                       itemcolumn="collisionYear"
                       valuecolumn="fatal">
    </cfchart>

    Your mistake is that your query-of-queries name is the same as your cfprocresult name.  That essentially overwrites the fatal result variable.

  • iOS 10 Messages app consumes too much battery

    I upgraded my iPhone SE to iOS 10 today with the battery percentage at 90%.

    A few hours after the update, my phone battery was at 53%; when I checked the battery usage I found the Messages app had consumed more than 20% of the battery.

    My iPhone has been in use like this for the last 2 days.

    Update after a few days: the battery issue seems to have disappeared; it may have been due to the upgrade itself.

    No problems now, and I'm loving it.
