Does BLASTP_ALIGN query performance decrease as the size of the reference table increases?

Newbie here.

I'm using Oracle 11.2.0.3.

I am currently running a loop over the following cursor, which uses Oracle's BLASTP_ALIGN function:

FOR MyALIGN_TAB IN
(
Select a.query_string, H.AA_SEQUENCE target_string, t_seq_id, pct_identity, alignment_length, mismatches, positives, gap_openings, gap_list, q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame, score, expect
from (select t_seq_id, pct_identity, alignment_length, mismatches, positives, gap_openings, gap_list, q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame, score, expect
      from table (BLASTP_ALIGN ((SELECT p_INPUT_SEQUENCE FROM DUAL),
                                CURSOR (Select GB_ACCESSION, AA_SEQUENCE from HUMAN_DB1.HUMAN_PROTEINS),
                                1, -1, 0, 0, 'PAM30', 0.1, 10, 1, 2, 0, 0))
     ),
     (SELECT p_INPUT_SEQUENCE query_string FROM DUAL) a,
     HUMAN_DB1.HUMAN_PROTEINS H
WHERE UPPER (t_seq_id) = UPPER (H.gb_accession) and gap_openings = 0
)
LOOP


This initial query performs relatively well (about 2 seconds) against a target table of approximately 20,000 records (shown above as the HUMAN_DB1.HUMAN_PROTEINS table). However, if I instead select a target table that contains approximately 170,000 records, query performance degrades significantly, to about 45 seconds. The two tables have identical structures.


I was wondering if there are ways to improve the performance of BLASTP_ALIGN on large tables? There doesn't seem to be much documentation on BLASTP_ALIGN. I could only find this (http://docs.oracle.com/cd/B19306_01/datamine.102/b14340/blast.htm), and it wasn't that useful.


Any ideas would be greatly appreciated.



In case anyone is interested... it turned out that the AA_SEQUENCE column in the following cursor: CURSOR (Select GB_ACCESSION, AA_SEQUENCE from HUMAN_DB1.HUMAN_PROTEINS) was a CLOB column. In my second target table, the corresponding column was a VARCHAR2. One hypothesis is that BLASTP_ALIGN was doing a VARCHAR2-to-CLOB conversion internally. I changed the table to have a CLOB column, and BLASTP_ALIGN ran against the 170,000 records in about 8 seconds (not great, but better than 45).
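A minimal sketch of that column conversion, in case anyone wants to replicate it (MY_TARGET_PROTEINS is an illustrative name for the second target table):

  -- Add a CLOB column, copy the VARCHAR2 data into it, then swap the names.
  ALTER TABLE MY_TARGET_PROTEINS ADD (aa_sequence_clob CLOB);
  UPDATE MY_TARGET_PROTEINS SET aa_sequence_clob = aa_sequence;
  COMMIT;
  ALTER TABLE MY_TARGET_PROTEINS DROP COLUMN aa_sequence;
  ALTER TABLE MY_TARGET_PROTEINS RENAME COLUMN aa_sequence_clob TO aa_sequence;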

I will mark it as answered.

Tags: Database

Similar Questions

  • Question about cube build / query performance (11.2.0.3)

    Hi, I have a quick question about cube build performance. When choosing the precompute percent, is build time linear (or nearly linear) with respect to it? For example, if you select 10%, will the build be 3 times faster than selecting 30%? Also, is it fair to assume that if only 10% of the values are precomputed, an average end-user query has to hit 3 times more data and will therefore be about 3 times slower?

    Sorry, this is on a virtual machine on my laptop, so testing different build configs takes forever (I still don't have a really complete cube load). I guess I shouldn't be trying a 15-dimension cube on a VM on a laptop, but I'm trying to sell a DBA on the fact that it could improve the performance of our mini-DW.

    Thank you
    Scott

    Cost-based aggregation (aka "precompute percent") was introduced in 11.1 as a simpler alternative to level-based aggregation. Product management's dream was a linear parameter, but the complexity quickly became apparent. What would you measure linearity against? Build time? Query time? Total disk size? The result balances all of these factors, but is not linear against any one of them. Fortunately, the behavior of the precompute percentage has been fairly consistent across cubes and schemas in our experience, so I can give you a rough characterization. But keep in mind that this is a guide only; you have to experiment on your own schema and system to see what works for you. In particular, you must balance your own requirements for build time, query time, and disk size.

    * 0% * - this means no precomputation at all, so all data access will be dynamic. This is the recommended setting for the top partition of a cube. If, for some reason, you want to use it for the leaf partitions as well, then I would advise switching to an uncompressed cube.

    * 1% * - this precomputes the smallest portion of the cube that the algorithm allows, and it will certainly take more than 1% of the time taken by a 100% build. For leaf partitions it is usually best to increase the amount, because you get much better query response for not much more cost in disk size and build time. It can be a good level for the top partition of a cube, but it should be used with caution because top partitions are often too big to precompute.

    * 2%-19% * - these levels do not seem to offer much benefit, since the build time and total disk size are almost identical to those of a 20% build, but queries are slower.

    * 20%-50% * - this range is probably the best compromise between build time and query time. The AWM default is 35%, which is a good starting point. Lower it toward 20% if you want a faster build, and raise it toward 50% if you want faster queries. The setting is closer to linear within this interval than outside it.

    * 51%-99% * - you should probably avoid these levels, although I have seen 60% used in practice. The reason is that while the cube size and the build time increase rapidly, queries do not get proportionally faster. Indeed, you may find that queries get slower because the code spends more time paging in disk pages.

    * 100% * - this precomputes all (non-NULL) cells in the cube. It may be a surprise after my advice about 51%-99%, but 100% is a reasonable level to choose. This is because the code can take a much simpler path when it knows that everything is precomputed and stored in disk pages.

  • Partitioning strategy for OBIEE query performance

    I am using partitioning for the first time, and I am having trouble determining whether I have partitioned my fact table in a way that will allow partition pruning to work with the queries OBIEE generates.  I've set up a simple example using a query I wrote to illustrate my problem.  In this example, I have a star schema with a fact table, and I join to two dimensions.  My fact table is LIST partitioned on JOB_ID with RANGE subpartitions on TIME_ID, and those are the keys that link in the two dimensions I am using in this example.


    Select sum (boxbase)
    from TEST_RESPONSE_COE_JOB_QTR A
    Join DIM_STUDY C on A.job_id = C.job_id
    Join DIM_TIME B on A.response_time_id = B.time_id
    where C.job_name = 'FY14 CSAT'
    and B.fiscal_quarter_name = 'Quarter 1';


    From what I can tell, because the query is actually filtering on columns in the dimensions instead of the partition key columns in the fact table, the pruning isn't actually happening.  I actually see slightly better performance from a non-partitioned table, even though I wrote this query specifically for the partitioning strategy that is now in place.


    If I run the next statement, it runs much faster, and the explain plan is very simple; it looks to me like it prunes down to a single subpartition, as I hoped.  However, this isn't what any query generated by OBIEE will look like.


    Select sum (boxbase)

    from TEST_RESPONSE_COE_JOB_QTR

    where job_id = 101123480

    and response_time_id < 20000000;


    Any suggestions?  I get some benefit from partition exchange using this configuration, but if I'm going to sacrifice report performance then maybe it isn't worthwhile; at the very least, I would need to get rid of my subpartitions if they are not providing any benefit.


    Here are the explain plans I got for the two queries in my original post:

    Operation | Object Name | Rows | Bytes | Cost | PStart | PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS | | 1 | | 20960 | |
    SORT AGGREGATE | | 1 | 13 | | |
    VIEW | SYS.VW_ST_5BC3A99F | 101 K | 1 M | 20960 | |
    NESTED LOOPS | | 101 K | 3 M | 20950 | |
    PARTITION LIST SUBQUERY | | 101 K | 2 M | 1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
    PARTITION RANGE SUBQUERY | | 101 K | 2 M | 1281 | KEY(SUBQUERY) | KEY(SUBQUERY)
    BITMAP CONVERSION TO ROWIDS | | 101 K | 2 M | 1281 | |
    BITMAP AND | | | | | |
    BITMAP MERGE | | | | | |
    BITMAP KEY ITERATION | | | | | |
    BUFFER SORT | | | | | |
    INDEX SKIP SCAN | CISCO_SYSTEMS.DIM_STUDY_UK | 1 | 17 | 1 | |
    BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12 | | | | KEY | KEY
    BITMAP MERGE | | | | | |
    BITMAP KEY ITERATION | | | | | |
    BUFFER SORT | | | | | |
    VIEW | CISCO_SYSTEMS.index$_join$_052 | 546 | 8 K | 9 | |
    HASH JOIN | | | | | |
    INDEX RANGE SCAN | CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX | 546 | 8 K | 2 | |
    INDEX FULL SCAN | CISCO_SYSTEMS.TIME_ID_PK | 546 | 8 K | 8 | |
    BITMAP INDEX RANGE SCAN | CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11 | | | | KEY | KEY
    TABLE ACCESS BY USER ROWID | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR | 1 | 15 | 19679 | ROWID | ROW L

    Operation | Object Name | Rows | Bytes | Cost | PStart | PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS | | 1 | | 1641 | |
    SORT AGGREGATE | | 1 | 13 | | |
    PARTITION LIST SINGLE | | 198 K | 2 M | 1641 | KEY | KEY
    PARTITION RANGE SINGLE | | 198 K | 2 M | 1641 | 1 | 1
    TABLE ACCESS FULL | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR | 198 K | 2 M | 1641 | 36 | 36


    Does it seem unreasonable to think that relying on our indexes on a non-partitioned table (or one partitioned in a way aimed only at helping ETL) could actually work better than partitioning in a way where we get some dynamic pruning, but never static pruning?

    Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and query predicates that are typically used and the number of rows generally returned.

    Partition pruning eliminates partitions ENTIRELY - regardless of the number of rows in the partition or table. An index, on the other hand, is ignored if the query predicate needs a significant number of rows, since Oracle can determine that it is cheaper to simply use multiblock reads and do a full scan.

    A table with 1 million rows and a query predicate that wants 100K of them probably will not use an index at all. But the same table with two partitions could easily have one of the partitions pruned, bringing the "effective number of rows" down to 500K or less.

    If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.
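    For example, a quick way to verify pruning for a given query (DBMS_XPLAN is standard in 11g): literal partition numbers in the Pstart/Pstop columns indicate static pruning, KEY variants indicate run-time pruning, and a full partition range indicates no pruning.

      EXPLAIN PLAN FOR
      Select sum (boxbase)
      from TEST_RESPONSE_COE_JOB_QTR A
      Join DIM_STUDY C on A.job_id = C.job_id
      Join DIM_TIME B on A.response_time_id = B.time_id
      where C.job_name = 'FY14 CSAT'
      and B.fiscal_quarter_name = 'Quarter 1';

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);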

    Select sum (boxbase)
    from TEST_RESPONSE_COE_JOB_QTR A
    Join DIM_STUDY C on A.job_id = C.job_id
    Join DIM_TIME B on A.response_time_id = B.time_id
    where C.job_name = 'FY14 CSAT'
    and B.fiscal_quarter_name = 'Quarter 1';

    So, what is a typical value for 'A.response_time_id'? What does a 'B.time_id' represent?

    Because one way of providing explicit partition keys might be to use a range of 'response_time_id' from the FACT table rather than a value of 'fiscal_quarter_name' from the DIMENSION table.

    As if 'Quarter 1' could correspond to a range of dates from '01/01/yyyy' to '03/31/yyyy'.

    Also, you said this about the partitioning: JOB_ID and TIME_ID.

    But if your queries relate mainly to DATES / TIMES, you might be better off using TIME_ID for the PARTITIONS and JOB_ID, if necessary, for the subpartitioning (see the sketch below).

    Date range partitioning is one of the most common schemes around, and it serves both performance and ease of maintenance (deleting/archiving old data).
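    A minimal sketch of that arrangement (the table and partition names are illustrative; the bounds echo values from the post above):

      CREATE TABLE FACT_RESPONSE_EXAMPLE (
        job_id           NUMBER,
        response_time_id NUMBER,
        boxbase          NUMBER
      )
      -- TIME_ID (response_time_id) drives the range partitions;
      -- JOB_ID drives the list subpartitions.
      PARTITION BY RANGE (response_time_id)
      SUBPARTITION BY LIST (job_id)
      SUBPARTITION TEMPLATE (
        SUBPARTITION sp_fy14_csat VALUES (101123480),
        SUBPARTITION sp_other VALUES (DEFAULT)
      )
      (
        PARTITION p_q1 VALUES LESS THAN (20000000),
        PARTITION p_max VALUES LESS THAN (MAXVALUE)
      );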

  • To improve performance, increase the size of the aggregate storage cache

    I have an ASO cube in 11.1.2.1 on a Unix box.

    I am getting this message in my logs:

    "To improve performance, increase the size of the aggregate storage cache."

    What is the best practice for sizing the ASO cache to get the best retrieval times?

    Please advise.

    Right-click on the database in EAS and select Edit -> Properties. It is on the first tab; remember to restart the application so that the change takes effect.

  • Does DIMINFO affect query performance?

    Hi all

    Can a well-defined USER_SDO_GEOM_METADATA.DIMINFO improve query performance?


    For all the tables in my system, the DIMINFO in the USER_SDO_GEOM_METADATA view looks like this:
    DIMINFO
    X; -2147483648; 2147483648; 5E-5
    Y; -2147483648; 2147483648; 5E-5
    Z; -2147483648; 2147483648; 5E-5




    Thanks to you all

    The simple answer is Yes - it provides an alternative and faster I/O path.

    The real question is whether it suits the data model and its use.

    So your question is similar to asking whether indexing a varchar2 column is good or not. The answer is "+it depends+".
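    As an illustration, tightening DIMINFO to the actual data extent looks roughly like this (the table name, column name, and bounds are made up; use your real extents, and remember that the spatial index has to be rebuilt after DIMINFO changes):

      UPDATE USER_SDO_GEOM_METADATA
         SET DIMINFO = SDO_DIM_ARRAY(
               SDO_DIM_ELEMENT('X', 400000, 600000, 5E-5),
               SDO_DIM_ELEMENT('Y', 4000000, 6000000, 5E-5),
               SDO_DIM_ELEMENT('Z', -1000, 10000, 5E-5))
       WHERE TABLE_NAME = 'MY_GEOM_TABLE'
         AND COLUMN_NAME = 'GEOM';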

  • Not able to set a customized shortcut to increase/decrease brush size in Photoshop CC 2014 on Win 8

    I'm using Windows 8 and PS CC 2014, and the problem is that I can't assign a custom shortcut to the period key. I've been using , and . for decreasing and increasing brush size, and it worked well in every Photoshop, even CC 2014 on Win7, but apparently not on Win8. It lets me change the , and just tells me it is in use somewhere else, but I can still change it. I can't do the same with . as it complains that it is used by an application; even though I reset the original assignment and left it empty, Photoshop still recognizes the key if I press it, yet it shows as empty and I can't assign the key to anything else either.

    It's a small thing, but it drives me crazy. I had to assign it to another key, and since I use shortcuts mostly from muscle memory, I end up changing the brush when I intended to increase its size. I tried googling but could not find a similar issue or another solution.

    Please check that, Flakesie. Looks like a problem with the Finnish version of PS CC 2014.

    I have notified the respective team. They are working on it.

    ~ David

  • How to increase the size of a .png image

    Hi friends,

    How can I increase or decrease the size of a .png image using the scaleImage32() method of the EncodedImage class?
    Code samples would be very helpful...

    Thank you
    Elizabeth

    Check this thread.

    http://supportforums.BlackBerry.com/Rim/Board/message?board.ID=java_dev&message.ID=378&query.ID=8274...

  • Increase the aggregate storage cache size of our ASO cube to fit the whole database in memory?

    Happy new year everyone,

    We want to take full advantage of the 54 GB of free memory on our Exalytics X4 server.

    From this site, I have read that increasing the ASO cache size will reduce disk I/O.

    The default/.dat data file size of our ASO database is 9.9 GB, with more than 100 million input-level cells.

    Should we increase the aggregate storage cache size of our ASO cube to 9.9 GB so that the entire .dat file will be cached?

    Thank you.

    ===============

    PS. Here's what I found in the ASO tuning white paper, which seems to recommend 512 MB or 1 GB:

    ASO cache size has an effect on data load performance. For small databases with 2 million input cells or fewer, the default ASO cache size of 32 MB is sufficient. For a larger database with 20 million cells, 64 or 128 MB of cache is more appropriate. For a database with 1 billion cells or more, the cache size can be set as high as 512 MB or 1 GB if memory permits.

    Putting the whole database in memory sounds really cool, but my experience is that in most cases it is not necessary. In my experience, the benefit of the ASO cache diminishes rapidly as it grows. Here's an example I wrote up about running aggregations:

    http://Essbase-day.blogspot.com/2015/09/calcparallel-with-ASO.html

  • Workstation 8 - decrease the size of the disk

    How does one decrease the size of a single-file, non-preallocated (growable) disk?

    vmware-vdiskmanager has an option to increase the disk size, but not to decrease it.

    How do I decrease the maximum size from 150 GB to 75 GB?

    How do I decrease the maximum size from 150 GB to 75 GB?

    You have more than a few options; to name a few...

    Make an image, like a Ghost image, and restore it to a smaller drive.

    Add a smaller disk to the virtual machine and image directly from one disk to the other, then swap the smaller one in for the larger.  Ghost can do that too.

    Use any other third-party utility to perform the foregoing.

    Zero the free space on the existing drive and then shrink it (if VMware Tools is installed, the automatic shrink function zeros the free space first), then resize the virtual partition to a smaller size, leaving half of the disk unpartitioned.  This prevents the disk from growing larger than the new virtual partition size.  The dynamic (non-destructive) resizing can be done with software like GParted Live.

    Use VMware vCenter Converter to create a new virtual machine, resizing the hard drive in the process.

  • How to increase the size of metadata to 2000 characters?

    Hi all

    Can we increase the size of metadata in UCM to about 2000 characters? Currently the maximum size for custom metadata is 255 characters (for a field of type Memo). Can we increase the Memo size to 2000 characters? Or is it possible to create a different kind of field, for example a long memo of size 2000 characters?

    Please suggest.

    Thank you
    Madhur.

    Published by: madhur prajapati, January 7, 2011 05:49

    Many have asked for this. It WILL CAUSE PERFORMANCE DEGRADATION, which is why development didn't want to do it, but too many have asked, so...

    Plan on a lot of additional architectural planning, hardware, and performance optimization... Get a good DBA, and someone who understands the middleware, to help you look at hardware upgrades, the network, and all the other performance bottlenecks; you may need multiple nodes, RAC for the DB, etc., to deal with the slowdowns. It's not trivial.

    Enabling huge metadata fields, especially when, as is normal, users want 20+ metadata fields (all of them that big too), will be the easy part. The performance tuning will take someone with a high skill level.

    You will need to contact support (or wait until it is generally available) for the latest 10g patch. There are 2 variables you will need to set after patching.

    .
    Added a configuration variable 'DoPrepStmtForMetaValueMod' to allow using metadata
    fields of more than 4000 characters. If this variable is set, we use prepared query
    statements, which allow us to use metadata fields of more than 4000 characters.
    .
    By default MemoFieldSize is set to 2000; in order to have longer Memo fields,
    set MemoFieldSize in the config.cfg.

    So a MemoFieldSize under 4K should be allowed without the patch and the new config variable, but again, it will slow things down.

    When you look at it, don't forget multi-byte characters.
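    For reference, a sketch of the config.cfg entries described above (the variable names come from the quoted patch notes and the values are illustrative, so verify them against your patch level):

      MemoFieldSize=2000
      DoPrepStmtForMetaValueMod=true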

    Published by: Kent on January 7, 2011 06:32

    Published by: Kent on January 7, 2011 06:34

  • HP-50g RPN - how to increase the size of the variable icons in the command screen in RPN

    Greetings.

    How can I increase the size of the 'F'-key icons for variables in the RPN command-line window?

    My reason for this is to be able to see the full name of the variable.

    Thank you.

    Edit: Perhaps the size of the 'F' function keys is what determines the size of the variable icons. Does this mean that it cannot be increased?

    If you press RIGHT SHIFT and then press the DOWN ARROW key, you can see more than the 5 characters that are currently displayed for the softkey text.

    A second solution to think about... using the CSE function and custom-designed GROBs.

    (However, take into account the current font size... how small it can really get and stay legible.

    You must take into account the screen resolution of the 50g... it's not a hi-res 'tablet'-like display.)

    See the example on page 20-4 of the 50g user's guide.

    A copy of the user guide can be downloaded here:

    http://h20000.www2.HP.com/bizsupport/TechSupport/DocumentIndex.jsp?ContentType=SupportManual&lang=en&cc=us&docIndexId=64179&TaskID=120&prodTypeId=215348&prodSeriesId=3235173

    Here is the example used on page 20-4:

    %%HP: T(3)A(R)F(.);
    { {
    GROB 21 8 00000EF908FFF900FFF9B3FFF9A2FFF9A3FFF9A0FFF388FF
    "hp" } }

    With the list above on the stack... run

    MENU

    If you are trying to increase the number of characters visible when running programs, you can use INFORM or an application like GUIPLUS, located here:

    http://www.HPCalc.org/search.php?query=guiplus

  • Increasing size of the "HDD Recovery" folder on the "D" partition

    I own a Toshiba Satellite laptop. Recently I noticed that my (unused) D: drive is full, and after careful analysis I realized that the "HDDRecovery" folder size is > 200 GB.

    I don't want to remove the folder or delete the image files inside, as I need the recovery functionality, but I have no clue how to manage the recovery process to decrease the volume.

    Do you have any idea how to handle this issue?

    Wojtek

    Hello

    It seems strange to me that the size of the HDD recovery folder would keep growing to 200 GB. Presumably the laptop stores backup files automatically in this folder, but I'm not quite sure why.

    However, I recommend you create recovery media (a recovery disc) using the Toshiba Recovery Media Creator (this software comes pre-installed on the Toshiba laptop).

    Using the recovery disc (or recovery USB key) would set the laptop back to factory settings, and even if you were to remove the HDD recovery folder or the entire partition, this partition and these files would be created again.

    Hope this information is useful for you.

  • How can I increase the size of the text? I could do it easily by zooming the page when I was using Internet Explorer. Thank you

    I would like to be able to increase the size of the text on the home page. I don't know how to do this. Thank you

    Hold down the CTRL key and use your mouse wheel. Scrolling up will increase the font size, scrolling down will decrease it. Alternatively, you can use the main menu and click View -> Zoom to set the zoom level. If you do not have the main menu enabled in Firefox 4, you can turn it on by clicking the Firefox menu at the top left of your Firefox window, then Options, and checking "Menu Bar".

  • Increase the size of the font when composing emails in Mail

    I saw a number of posts about this, but so far no solution that doesn't involve alternative e-mail software download...

    When I compose an email on my 27-inch iMac with OS X El Capitan version 10.11.2, using the Mail application, the text is too small!  I could obviously increase the font size in each email (or zoom with Command +), but then the recipient of the email would get an unusually large font (unless I remember to reduce it again before I hit Send)!  It takes a lot of time to do this for every email.

    This isn't a problem when reading received emails in the Mail application.  I can use Cmd + to increase the *apparent* size without increasing the actual font size.

    The question: when composing emails in Mac Mail, how can I increase the displayed size without increasing the actual font size that I send?

    Thank you very much

    You answered your own question.  Use "Command +" to zoom in as you type so you can see clearly, and then use "Command -" to zoom back out after you click Send.  This way the person on the other end doesn't end up with an email set in massive type.

  • Keyboard shortcut to increase the size of the text

    On my W7 machine running LV 2010 SP1, the keyboard shortcut Ctrl + = does not work. However, Ctrl + - decreases the text size as expected.

    No problem on Windows XP Home Edition with LV 2010 SP1.

    Is this a known W7 bug/behaviour? Is this key combination reserved under W7?

    To reproduce this behavior:

    1. Type some text on the FP or the diagram
    2. Select the text
    3. Press Ctrl + = on the keyboard (Ctrl Shift =)
    4. The text size is expected to increase

    Hello.

    I created a CAR with the number: 292879.

    You will see this bug fixed in a future version of LabVIEW.

    Best wishes

    I'm migrating a Centos VM XenServer 6.5 6.6 to vSphere 5.5 but the virtual machine after that migration hangs when starting to "EDD survey."I export the VM of XenServer to OVF (Citrix OVF files seem incompatible with vSphere, Workstation and VitualBo