Performance issues with large number of nodes

I am creating an application to display large graphs, for example:

graph.png

But I ran into some performance issues, even for a relatively small number of nodes in the scene graph (~2000 in the picture above). The graph is built step by step by adding circles and paths to a StackPane. Circles and paths can be semi-transparent. For a small number of nodes I get a solid 60 FPS, but this decreases over time to about 5 frames per second. As soon as I stop adding new nodes, the framerate shoots back up to 60 frames per second. The framerate drops even when all the nodes are outside the viewport.

My questions are:

* Is calling Platform.runLater() 2000 times a minute too much?

* Could this just be a problem with my graphics card? (I have an Intel HD Graphics 3000)
* The JavaFX pulse logger reports the following; is there meaningful information in it that I'm missing?

PULSE: 1287 [163ms:321ms]
T14 (0 +0ms): CSS Pass
T14 (0 +5ms): Layout Pass
T14 (6 +152ms): Waiting for previous rendering
T14 (158 +0ms): Copy state to render graph
T12 (159 +0ms): Dirty Opts Computed
T12: Slow shape path for null
T12 (159 +160ms): Painted
T12 (319 +2ms): Presentable.present
T12 (321 +0ms): Finished Presenting Painter
Counters:
Region background image used cached: 14
NGRegion renderBackgroundShape slow path: 1
Nodes rendered: 1839
Nodes visited during render: 1840

Kind regards

Yuri

Basically, try some performance optimizations, ranging from the simple to the complex.   Each of the changes below may provide you with an increase in performance.  Some will probably increase performance much more than others (depending on where the real bottleneck lies).  I would probably start by replacing the paths with lines and reducing the number of Platform.runLater calls (as well as only adding nodes which fall within the viewport).

> The framerate drops even when all the nodes are outside the viewport.

Only place nodes in the graph which fall within the viewport.
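
As a sketch of that idea (a hypothetical example, assuming each node carries plain x/y scene coordinates; the real app would test the bounds of its circles and paths), a pre-filter applied before attaching nodes might look like:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ViewportFilter {

    /** Hypothetical node with plain scene coordinates. */
    public record Node(double x, double y) {}

    /**
     * Returns only the nodes inside the given viewport rectangle.
     * Nodes outside the viewport still cost time in the rendering
     * pass if they are attached to the scene graph, so we simply
     * never add them.
     */
    public static List<Node> visible(List<Node> all,
                                     double minX, double minY,
                                     double maxX, double maxY) {
        return all.stream()
                  .filter(n -> n.x() >= minX && n.x() <= maxX
                            && n.y() >= minY && n.y() <= maxY)
                  .collect(Collectors.toList());
    }
}
```

On scrolling or zooming you would re-run the filter and swap the container's children accordingly.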

> Is calling Platform.runLater() 2000 times a minute too much?

Yes, there is no reason to call it more than 60 times a second when the JavaFX framerate is capped at 60 frames per second by default.
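
One common way to cut the call count is to coalesce submissions: producer threads queue nodes, and at most one runnable is pending at a time, so many additions are applied per UI-thread callback. The sketch below is a hypothetical illustration, not code from the question; `uiThread` stands in for `Platform::runLater` and `addToScene` for whatever attaches the batch to the container:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

public class NodeBatcher<T> {
    private final List<T> pending = new ArrayList<>();
    private final AtomicBoolean scheduled = new AtomicBoolean(false);
    private final Consumer<Runnable> uiThread;   // e.g. Platform::runLater
    private final Consumer<List<T>> addToScene;  // attaches a batch of nodes

    public NodeBatcher(Consumer<Runnable> uiThread, Consumer<List<T>> addToScene) {
        this.uiThread = uiThread;
        this.addToScene = addToScene;
    }

    /** Called from any producer thread, once per new node. */
    public void submit(T node) {
        synchronized (pending) { pending.add(node); }
        // Schedule at most one UI-thread callback for all queued nodes.
        if (scheduled.compareAndSet(false, true)) {
            uiThread.accept(this::drain);
        }
    }

    /** Runs on the UI thread: empties the queue in one go. */
    private void drain() {
        List<T> batch;
        synchronized (pending) {
            batch = new ArrayList<>(pending);
            pending.clear();
        }
        scheduled.set(false);
        addToScene.accept(batch);
    }
}
```

With this in place, 2000 node additions per minute collapse into at most 60 scene-graph updates per second, however fast the producers run.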

> Could this just be a problem with my graphics card? (I have an Intel HD Graphics 3000)

Yes, it's a relatively low-end graphics system.  But see the developer comment below - your CPU and choice of graphics primitives can also affect rendering speed.

> adding circles and paths to a StackPane

You don't need a StackPane for this; a Group is a simpler container and probably better.

> Circles and paths can be semi-transparent

Removing the transparency *may* speed things up.

----

You may be running into:

https://javafx-jira.kenai.com/browse/RT-20405 : Improve path rendering performance

Maybe if you use lines rather than paths, performance might improve.

A comment by a developer on that performance tweak request is:

"It is quite expected for applications that use arbitrary paths (whether via Path, SVGPath, Polyline, or Polygon nodes), because these paths are rendered in software. Whereas shapes such as Circle, Ellipse, Line, and Rectangle map very readily to primitive operations that can be performed entirely on the GPU, which makes them essentially cheap. There is no comparing the rendering of a complicated path to the rendering of simple primitives for this reason."

----

Setting cache hints may help, but probably only if you are animating the nodes.

----

Provide a level-of-detail functionality for your graph, so that a zoomed-out chart does not render as many nodes as a zoomed-in portion.
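
A minimal level-of-detail sketch (an assumed approach, not from the original post): derive a sampling stride from the zoom factor so that the number of rendered points stays near a fixed budget, and only build nodes for the sampled points.

```java
import java.util.ArrayList;
import java.util.List;

public class LevelOfDetail {

    /**
     * Returns a stride >= 1: render every k-th point. 'budget' is the
     * number of points we can afford at zoom 1.0; zooming out (smaller
     * zoom) increases the stride, zooming in decreases it toward 1.
     */
    public static int strideForZoom(double zoom, int totalPoints, int budget) {
        int needed = (int) Math.ceil(totalPoints / Math.max(1.0, budget * zoom));
        return Math.max(1, needed);
    }

    /** Keeps every stride-th element of the point list. */
    public static <T> List<T> sample(List<T> points, int stride) {
        List<T> out = new ArrayList<>();
        for (int i = 0; i < points.size(); i += stride) {
            out.add(points.get(i));
        }
        return out;
    }
}
```

The scene graph then holds roughly `budget` nodes at any zoom level, instead of growing without bound as the graph grows.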

----

Are you running any computationally expensive code on the JavaFX application thread that could stall it?

Can you make a small runnable example available so that others can reproduce your problem?

Tags: Java

Similar Questions

  • Performance problems with large Photo library size

    Has anyone seen performance issues when the library becomes too large? I used to keep a few different libraries, since some of my libraries are already 1 TB. So if I merge them into one, the file will be 2-3 TB when all is said and done. Does anyone know if there are recommended size limits, or whether it matters with El Capitan? Obviously the response time will be affected by the hard drive and the speed of the laptop, but before I begin merging the libraries into one file, I wanted to see if anyone had any suggestions/recommendations or other related advice. Thank you!

    It is not documented how many photos a Photos library can contain, but you can migrate iPhoto libraries and Aperture libraries to Photos, so you should be able to store at least 1,000,000 photos in a Photos library.

    I haven't noticed any performance issues with a large Photos library.  My largest library has 40000 photos.

    For regular work, the size of the library does not matter at all, as long as you keep enough free storage on your system drive. Don't let the Photos library fill up your system drive. Keep at least 10 GB free at all times - more is better.

    However, there are rare occasions when the size will matter:

    • When you need to repair the library, it will take more time if the library is large.
    • When you first migrate your photo libraries, the initial processing will take longer - scanning for faces and places, creating thumbnails.
    • If you use iCloud Photo Library, syncing can take a long time for a large library.
    • Too many smart albums can make Photos a little slower if the library is large.  I noticed a drop in performance after I created 200 smart albums with a text search across the entire library.
  • Performance issue with subtemplates

    Hello
    I'm working on subtemplates, and my requirement needs one main template calling several location-specific subtemplates (a location may have several of its own subtemplates, e.g. US_subtemp1, US_subtemp2, US_subtemp3..., and similarly for other locations).
    I would like to know whether there is any performance issue associated with calling subtemplates, and how any decrease in performance can be minimized.

    Kind regards
    Arvind

    Using subtemplates with imported components inside repeating sections of the main template layout is not recommended when the number of repetitions is high and the document is large. BI Publisher optimizes the XSL transformation internally, but that optimization is not applied to the XSL inside subtemplates.
    One extreme scenario is a main template that defines only a repeating section which imports a subtemplate for the header and the footer.
    The body of the repeating section is defined in the subtemplate. In this scenario there is no XSL optimization done, and you can easily see some performance differences.

    As long as you don't repeat subtemplate layouts, you don't need to worry about this performance issue.

    Thank you
    Shinji

  • Performance issues with a large number of NSVs in RT applications

    Hello

    The end of http://www.ni.com/white-paper/12176/en says: "... misuse of Shared Variables in a LabVIEW Real-Time application can cause poor machine performance ... Typical misuses include using too many Shared Variables ... In applications deployed to small real-time targets such as CompactRIO or Compact FieldPoint, no more than 25 variables should be used."

    Does that apply to network-published I/O aliases as well? I intend to have ~100 I/O aliases (cRIO-9082 + 2 x NI-9144), exposed to a Windows PC for DSC logging/alarming - or would that be a problem?

    Hi JKSH,

    Using the cRIO-9082, you should be able to accommodate a large number of I/O aliases. Depending on the controller, you can run into problems where there are memory limitations on the controller. However, this controller should have more than enough memory to support that large a number of variables. The best way to get an idea of how the variables affect your system is to monitor the CPU usage and memory in MAX. I also recommend installing just the necessary software on the cRIO rather than everything possible. I hope this helps, and if you have any other questions please let me know.

  • CRUD for tables with large number of columns

    Hello

    I work for a government agency in Colombia, and we run a survey with about 1200 variables. Our database model has few tables, but they have a large number of columns. Our database is Oracle 11g, and we do not currently use Oracle APEX. I've read about APEX and it seems to be useful for generating quick forms and reports. However, I would like to know if it is possible to generate CRUD pages for tables with many columns, and what would be the best way to do it.

    Thanks in advance.

    Carlos.

    Using 250 items on a single page is actually really bad for the user experience. I would probably cut it into pieces and use several pages to guide the customer through it as a workflow.

    You could also add some pop-up windows which include some elements of your table, saved on other pages. But I probably wouldn't like that.

    Tobias

  • How to work with a large number of files in Adobe's cloud Export PDF service

    I have about 1000 PDF files to be converted to Excel.

    What is the optimal means (i.e. with the least manual intervention) by which I can do these tasks:

    * Upload

    * Convert

    * Download

    Pointers to a link would be useful. The FAQ page doesn't have this info: FAQ | Adobe ExportPDF

    I found a way to upload files via a web browser and convert them via a web browser (by pointing and clicking in the GUI). These methods REQUIRE the browser to be open so that the upload and conversion can happen - and this leads to frequent crashes/freezes of the browser windows while uploading/converting several files at once.

    Is there background sync (i.e. as in Dropbox or Box etc., where I save files to a folder and it automatically syncs in the background) for uploading/downloading files?

    It would be great if someone could point out an optimal/recommended workflow for handling (converting) a large number of PDF files using Adobe Document Cloud Export PDF.

    There are no best or recommended workflows. The $20 product is not designed for this. Even the $500 Acrobat product would not handle 1000 PDFs without pain, but it would be a little better.

  • Performance issues with Photoshop CS6 64-Bit

    Hello-

    Issue at hand: in recent weeks I have noticed significant performance problems since the last PS CS6 update via the Adobe Application Manager, from unexpected shutdowns to my computer being brought to a crawl (literally, my cursor seems to crawl across my screens). I'm curious to know if anyone else is aware of these issues, or if there is a solution I have not yet tried. Here is a list of actions that give rise to these performance issues - there are probably more that I have not experienced due to my frustration, or that were not documented as occurring several times:

    • Opening files - results in hanging processes, takes 3-10 seconds to resolve
    • Pasting from the clipboard - results in hanging processes, takes 3-10 seconds to resolve
    • Saving files - takes 3-10 seconds to open the dialog box, another 3 to 10 seconds to return to the normal window (saving a compressed PNG image)
    • The eyedropper tool - crashes Photoshop to the desktop, or takes 5 to 15 seconds to load
    • Trying to navigate any menu - crashes Photoshop to the desktop, or takes 5 to 15 seconds to load

    Attempts I have made to solve this issue, all of which failed:

    • Uninstalled all fonts I have added since the last update (a pain in the *, thanks to a Windows Explorer glitch)
    • Uninstalled and reinstalled the application
    • Used the 32-bit edition
    • Changed the process priority to above normal
    • Set the process affinity to all available processor cores
    • Changed the performance configuration in Photoshop's options
      • 61% of memory available for Photoshop to use (8969 MB)
      • History States: 20; Cache Levels: 6; Cache tile size: 1024K
      • Scratch disks: on the production SSD, ~10 GB of available space
      • Dedicated graphics processor is selected (2 x nVidia cards in SLI)

    System information:

    • Intel i7 2600 K @ 3.40 GHz
    • 16 GB of RAM Dual Channel DDR3 memory
    • 2 x nVidia GeForce GTS 450 cards, 1 GB each
    • Windows 7 Professional 64 bit
    • Adobe Creative Cloud

    This problem is costing me work time every day, and I'm about to start looking for alternatives and cancel my subscription if I can't get this resolved.

    Have you tried restoring your preferences to set the performance parameters back to their default values and restarting Photoshop? http://blogs.adobe.com/crawlspace/2012/07/photoshop-basic-troubleshooting-steps-to-fix-most-issues.html#Preferences

  • Performance issue with an external HARD drive

    Hi all

    I wish you all a happy new year and good luck for 2008

    I have a performance problem with an external HARD drive: a 3.5-inch USB 2.0 HDD with an aluminium case (320 GB capacity).
    The average file-copy speed is about 470 KB/sec.

    When I connect the external HARD drive to my laptop (to any USB port), I get a ToolTip message: "This device can perform faster - this USB device can perform faster if you connect it to a USB 2.0 port." I can assure you, my laptop's USB ports are USB 2.0.

    I connected my external HARD drive to two other PCs and I get the same ToolTip message: "This device can perform faster - this USB device can perform faster if you connect it to a USB 2.0 port."

    Is this problem known to you?
    I think the problem is in the firmware of the external HARD drive.

    Other information:

    OS: Windows XP Pro SP2
    All USB ports tested are USB 2.0
    HARD drive divided into 4 logical drives
    Antivirus: Symantec. I tested with and without my antivirus.
    I have had this problem for 3 weeks, and before that I had no problem.

    Thanks in advance & best regards,
    Marouane

    Hello

    I got the same info after connecting a digital camera.
    The same message appears in the taskbar. I simply disabled this message:
    In Device Manager, click the USB host controller, click the Advanced tab, and then check the box at the bottom of the window that says "Don't tell me about USB errors".

    You must do this for each USB controller listed under USB in Device Manager, so that the error message does not appear no matter which controller is used.

    I googled for "this device can perform faster" and got many hits. It seems that this is not a hardware problem, but simply an operating system software problem.

    That's all I can suggest.

  • Performance issue with a selectOneChoice component

    Hello

    We have performance problems with a select-one-choice component that we use on our page. As soon as I select an item in the drop-down list, it displays a "BUSY" icon. Here is the code of the component from my jsff.
    <af:selectOneChoice id="socInterval" label="Interval"
                                                  value="#{pageFlowScope.usageMB.selectUsageTrend}"
                                                  autoSubmit="true">   
                                <f:selectItem id="si023" itemLabel="Hourly"
                                              itemValue="Hourly"/>
                                <f:selectItem id="si123" itemLabel="Daily"
                                              itemValue="Daily"/>
                                <f:selectItem id="si112" itemLabel="Weekly"
                                              itemValue="Weekly"/>
                                <f:selectItem id="si113" itemLabel="Monthly"
                                              itemValue="Monthly"/>
                                <f:selectItem id="si1123" itemLabel="Yearly"
                                              itemValue="Yearly"/>
                              </af:selectOneChoice>
    Please let me know if you have any suggestions.

    Thank you and best regards,
    Kiran

    Why set autoSubmit="true" on the component?
    Are you doing any processing/PPR on it? That can cause the problem.

    Thank you
    Nini

  • Performance issues with dynamic action (PL/SQL)

    Hello!


    I have performance problems with a dynamic action that is triggered on click of a button.

    I have 5 drop-down lists to select the columns that users want to filter on, 5 drop-down lists to select an operation, and 5 input boxes for values.

    After that, there is a filter button that just submits the page based on the selected filters.

    This part works fine; the data is filtered almost instantly.

    After that, I have 3 column selectors and 3 boxes where users put the values with which they wish to update the filtered rows.

    There is an update button that calls the dynamic action (the procedure written below).

    It should be straightforward; the performance issue might be the decode section, because I need to cover the case where the user wants to set a null ('@') and the case where he won't update all 3 columns but fewer (leaving the others empty).

    That's why: || P99_X_UC1 || ' = decode(''' || P99_X_UV1 || ''','''',' || P99_X_UC1 || ',''@'',null,''' || P99_X_UV1 || ''')'

    However, when I finally click the update button, my browser freezes and nothing happens to the table.

    Can anyone help me solve this problem and improve the speed of the update?

    Kind regards

    Ivan

    PS The procedure code is below:

    create or replace PROCEDURE DWP.PROC_UPD
    (P99_X_UC1 in VARCHAR2,
    P99_X_UV1 in VARCHAR2,
    P99_X_UC2 in VARCHAR2,
    P99_X_UV2 in VARCHAR2,
    P99_X_UC3 in VARCHAR2,
    P99_X_UV3 in VARCHAR2,
    P99_X_COL in VARCHAR2,
    P99_X_O in VARCHAR2,
    P99_X_V in VARCHAR2,
    P99_X_COL2 in VARCHAR2,
    P99_X_O2 in VARCHAR2,
    P99_X_V2 in VARCHAR2,
    P99_X_COL3 in VARCHAR2,
    P99_X_O3 in VARCHAR2,
    P99_X_V3 in VARCHAR2,
    P99_X_COL4 in VARCHAR2,
    P99_X_O4 in VARCHAR2,
    P99_X_V4 in VARCHAR2,
    P99_X_COL5 in VARCHAR2,
    P99_X_O5 in VARCHAR2,
    P99_X_V5 in VARCHAR2,
    P99_X_CD in VARCHAR2,
    P99_X_VD in VARCHAR2
    ) IS
    l_sql_stmt varchar2(32600);
    nom_table_p varchar2(30) := 'DWP.IZV_SLOG_DET';
    BEGIN
    l_sql_stmt := 'update ' || nom_table_p || ' set '
    || P99_X_UC1 || ' = decode(''' || P99_X_UV1 || ''','''',' || P99_X_UC1 || ',''@'',null,''' || P99_X_UV1 || '''),'
    || P99_X_UC2 || ' = decode(''' || P99_X_UV2 || ''','''',' || P99_X_UC2 || ',''@'',null,''' || P99_X_UV2 || '''),'
    || P99_X_UC3 || ' = decode(''' || P99_X_UV3 || ''','''',' || P99_X_UC3 || ',''@'',null,''' || P99_X_UV3 || ''') where '
    || P99_X_COL  || ' ' || P99_X_O  || ' ' || P99_X_V  || ' and '
    || P99_X_COL2 || ' ' || P99_X_O2 || ' ' || P99_X_V2 || ' and '
    || P99_X_COL3 || ' ' || P99_X_O3 || ' ' || P99_X_V3 || ' and '
    || P99_X_COL4 || ' ' || P99_X_O4 || ' ' || P99_X_V4 || ' and '
    || P99_X_COL5 || ' ' || P99_X_O5 || ' ' || P99_X_V5 || ' and '
    || P99_X_CD || ' = ' || P99_X_VD;
    -- dbms_output.put_line(l_sql_stmt);
    EXECUTE IMMEDIATE l_sql_stmt;
    END;

    Hello Ivan,

    I don't think the decode is relevant to performance. Perhaps the update is hanging because another transaction has uncommitted changes to some of the affected rows, or the where clause is not very selective and there is a huge number of records to update.

    In addition - and I may be wrong, because I only have a portion of your app - the code looks like you've got a huge SQL injection vulnerability here. Perhaps you should consider rewriting your logic in static SQL. If that is not possible, you must make sure the user input contains only allowed values, for example by white-listing the P99_X_On parameters (i.e. ensuring they contain only known values such as '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.

    Kind regards

    Christian

  • Performance issues with adapter Jena

    Hello

    I run the following query using the Jena Adapter 11.2.0.3:
    SELECT * WHERE {
       :g :hasNode   ?node .
       OPTIONAL {
           ?node    :location   ?loc .
       } 
    }
    I expected this to result in a single sdo_rdf_match query, but the log messages show that there is one sdo_rdf_match query for the results of the :g :hasNode ?node part, and for each fetched result another sdo_rdf_match query is issued.

    The master query:
    Final clause = SELECT ... FROM table(sdo_rdf_match('(<..#g> <..#hasNode> ?node)', sdo_rdf_models('model'), ...,'  ALLOW_DUP=T '))
    The internal queries issued for the OPTIONAL part look like:
    Final clause = SELECT ... FROM table(sdo_rdf_match('(<..#MyNode1> <..#location> ?loc)', sdo_rdf_models('model'), ...,'  ALLOW_DUP=T '))
    Final clause = SELECT ... FROM table(sdo_rdf_match('(<..#MyNode2> <..#location> ?loc)', sdo_rdf_models('model'), ...,'  ALLOW_DUP=T '))
    (hundreds more...)
    It takes about 10 seconds to run; however, with a single SEM_MATCH call for the SPARQL query above, it takes less than a second.

    Am I missing any special setup to make it work as expected?

    Hello

    You need to use the Attachment API and set a flag to 'true' to indicate use of the virtual model.

    String[] modelNames = {"model_B"};
    String[] rulebaseNames = {};

    Attachment attachment = Attachment.createInstance(modelNames, rulebaseNames,
        InferenceMaintenanceMode.NO_UPDATE, QueryOptions.ALLOW_QUERY_VALID_AND_DUP);
    GraphOracleSem graph = new GraphOracleSem(oracle, "model_A", attachment, true);

    Thank you

    Zhe Wu

  • RAC 11.2.0.1 to 11.2.0.2 upgrade to a new cluster with a different number of nodes

    I'd like suggestions on migrating an Oracle RAC 11.2.0.1 on Linux x86_64 RHEL 5.3 with 6 instances to a new 4-node cluster on Linux x86_64 RHEL 5.5, upgrading to the 11.2.0.2 patchset.
    New SAN storage will be used. Both systems use ASM.

    Current system... New system
    11.2.0.1... 11.2.0.2
    6 instances... 4 instances
    Linux RHEL 5.3 x86_64... Linux RHEL 5.5 x86_64

    Method:
    (1) run the pre-upgrade checks and scripts
    (2) back up the current database with RMAN and make the backup files available to the new host
    (3) install Grid Infrastructure 11.2.0.2 on the new cluster
    (4) set up ASM on the new cluster
    (5) install the Oracle RAC 11.2.0.2 database software on the new cluster and create a database with the same name as the original database
    (6) startup nomount using a parameter file from the source, edited with the DBID and without the RAC parameters (cluster_database=false, etc.)
    (7) restore the controlfile from the RMAN backup
    (8) restore/recover the database (incomplete recovery)
    (9) OPEN RESETLOGS UPGRADE
    (10) proceed with the manual upgrade steps (catupgrd.sql script, etc.)
    (11) remove the additional undo tablespaces and redo log threads
    (12) convert to RAC using DBCA or RCONFIG

    Is this method correct? Do you suggest other methods or corrections?

    I also want to know if there is anything else to delete, besides redo threads and undo tablespaces, due to the smaller number of instances.

    I also considered using RMAN DUPLICATE to the same version (11.2.0.1) and then upgrading to 11.2.0.2, but DUPLICATE would change the DBID.

    Thank you

    Hello

    your approach seems valid, but it's awfully complicated.

    Why not simply create a standby on 11.2.0.1 and then upgrade?
    If you use "duplicate for standby", it should not change the dbid.

    Then you can just switch over to the standby and continue the upgrade.

    Concerning
    Sebastian

  • Performance issue with the Update statement

    Oracle 10204

    I have a performance problem related to updating one table.
    Only 5000 rows should be updated.
    Here are some details on the tables/materialized views concerned:
    TABLE_NAME     LAST_ANALYZED          NUM_ROWS     SAMPLE_SIZE
    PS_RF_INST_PROD     1/20/2010 1:14:22 AM     7194402          7194402
    BL_TMP_INTRNT     1/27/2010 9:08:54 AM     885445          885445
    NAP_INTERNET     1/25/2010 11:47:02 AM     7053990          560777
    I tried to run the update with two options:
    1. with the plan that Oracle chose.
    2. with hints that I added.
    In both cases it collapsed after more than an hour.

    Can any one suggest how to speed it up?

    Below are the tkprof outputs for the two options.
    Please note that besides the default statistics on those tables, I also gathered column-level statistics on two columns as follows:
    BEGIN
      SYS.DBMS_STATS.GATHER_TABLE_STATS (
          OwnName        => 'B'
         ,TabName        => 'BL_TMP_INTRNT'
        ,Estimate_Percent  => 100
        ,Degree            => 8
        ,Cascade           => TRUE
        ,No_Invalidate     => FALSE);
    END;
    /
    
    exec dbms_stats.gather_table_stats('B' , 'BL_TMP_INTRNT', cascade=>TRUE, method_opt=>'for columns SERVICE_UID size 254');
    exec dbms_stats.gather_table_stats('B' , 'BL_TMP_INTRNT', cascade=>TRUE, method_opt=>'for columns FIX_IP_USER size 254');
    Plan 1
    UPDATE BL_TMP_INTRNT A
       SET A.FIX_IP_USER =
              (SELECT C.PRODUCT_ID
                 FROM NAP_INTERNET B, PS_RF_INST_PROD C
                WHERE B.INST_PROD_ID = A.SERVICE_UID
                  AND B.SETID = 'SHARE'
                  AND C.INST_PROD_ID = B.NAP_SURF_UID)
     WHERE A.TERM_DESC LIKE '%ip%'
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.03       0.02          0          0          0           0
    Execute      1   1166.64    4803.78   17989002   18792167        117           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2   1166.67    4803.81   17989002   18792167        117           0
    
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 13
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  UPDATE  BL_TMP_INTRNT (cr=0 pr=0 pw=0 time=2 us)
         46   TABLE ACCESS FULL BL_TMP_INTRNT (cr=586400 pr=22228 pw=0 time=15333652 us)
         15   HASH JOIN  (cr=18170453 pr=17931639 pw=0 time=3991064192 us)
         46    MAT_VIEW ACCESS FULL NAP_INTERNET (cr=5659886 pr=5655436 pw=0 time=988162557 us)
    329499624    MAT_VIEW ACCESS FULL PS_RF_INST_PROD (cr=12545734 pr=12311281 pw=0 time=2636471644 us)
    plan 2
    UPDATE BL_TMP_INTRNT A
       SET A.FIX_IP_USER =
              (SELECT /*+ index(b NAP_INTERNET_PK) index(c PS_RF_INST_PROD_PK)*/ C.PRODUCT_ID
                 FROM NAP_INTERNET B, PS_RF_INST_PROD C
                WHERE B.INST_PROD_ID = A.SERVICE_UID
                  AND B.SETID = 'SHARE'
                  AND C.INST_PROD_ID = B.NAP_SURF_UID)
     WHERE A.TERM_DESC LIKE '%ip%'
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.02       0.02          0          0          0           0
    Execute      1   4645.25    4613.70      95783   39798095        735           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2   4645.27    4613.73      95783   39798095        735           0
    
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 13
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  UPDATE  BL_TMP_INTRNT (cr=0 pr=0 pw=0 time=1 us)
        473   TABLE ACCESS FULL BL_TMP_INTRNT (cr=10461 pr=10399 pw=0 time=4629385 us)
        408   MAT_VIEW ACCESS BY INDEX ROWID PS_RF_INST_PROD (cr=39776109 pr=85381 pw=0 time=4605125045 us)
       1350    NESTED LOOPS  (cr=39784584 pr=84974 pw=0 time=4601874262 us)
        470     MAT_VIEW ACCESS BY INDEX ROWID NAP_INTERNET (cr=23569112 pr=50472 pw=0 time=2544364336 us)
        470      INDEX FULL SCAN NAP_INTERNET_PK (cr=23568642 pr=50005 pw=0 time=2540300981 us)(object id 11027362)
        408     INDEX FULL SCAN PS_RF_INST_PROD_PK (cr=16215472 pr=34502 pw=0 time=2057500175 us)(object id 10980137)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1300        0.50          4.27
      db file sequential read                     85707        0.51         29.88
      latch: cache buffers chains                     1        0.00          0.00
      log file sync                                   1        0.00          0.00
      SQL*Net break/reset to client                   1        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1       14.73         14.73
    ********************************************************************************

    The problem in your update statement is that the query in your set clause is executed as many times as there are rows in BL_TMP_INTRNT with 'ip' in their term_desc column. You mentioned there are about 5000, so the query joining NAP_INTERNET with PS_RF_INST_PROD is started 5000 times.
    The trick is to join only once, either by updating a join view - provided that it is key-preserved - or by using the merge statement.

    Here is an example:

    SQL> create table bl_tmp_intrnt (fix_ip_user,service_uid,term_desc)
      2  as
      3   select level
      4        , level
      5        , 'aipa'
      6     from dual
      7  connect by level <= 5000
      8  /
    
    Tabel is aangemaakt.
    
    SQL> create table nap_internet (inst_prod_id,setid,nap_surf_uid)
      2  as
      3   select level
      4        , 'SHARE'
      5        , level
      6     from dual
      7  connect by level <= 10
      8  /
    
    Tabel is aangemaakt.
    
    SQL> create table ps_rf_inst_prod (product_id,inst_prod_id)
      2  as
      3   select level
      4        , level
      5     from dual
      6  connect by level <= 10
      7  /
    
    Tabel is aangemaakt.
    
    SQL> exec dbms_stats.gather_table_stats(user,'bl_tmp_intrnt')
    
    PL/SQL-procedure is geslaagd.
    
    SQL> exec dbms_stats.gather_table_stats(user,'nap_internet')
    
    PL/SQL-procedure is geslaagd.
    
    SQL> exec dbms_stats.gather_table_stats(user,'ps_rf_inst_prod')
    
    PL/SQL-procedure is geslaagd.
    
    SQL> set serveroutput off
    SQL> update ( select a.fix_ip_user
      2                , c.product_id
      3             from bl_tmp_intrnt a
      4                , nap_internet b
      5                , ps_rf_inst_prod c
      6            where a.term_desc like '%ip%'
      7              and a.service_uid = b.inst_prod_id
      8              and b.setid = 'SHARE'
      9              and b.nap_surf_uid = c.inst_prod_id
     10         )
     11     set fix_ip_user = product_id
     12  /
       set fix_ip_user = product_id
           *
    FOUT in regel 11:
    ORA-01779: cannot modify a column which maps to a non key-preserved table
    

    The join is key-preserved in case b.inst_prod_id and c.inst_prod_id are unique. Please refer to the documentation for more information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/views.htm#sthref3055

    SQL> alter table nap_internet add primary key (inst_prod_id)
      2  /
    
    Table altered.
    
    SQL> alter table ps_rf_inst_prod add primary key (inst_prod_id)
      2  /
    
    Table altered.
    
    SQL> update /*+ gather_plan_statistics */
      2         ( select a.fix_ip_user
      3                , c.product_id
      4             from bl_tmp_intrnt a
      5                , nap_internet b
      6                , ps_rf_inst_prod c
      7            where a.term_desc like '%ip%'
      8              and a.service_uid = b.inst_prod_id
      9              and b.setid = 'SHARE'
     10              and b.nap_surf_uid = c.inst_prod_id
     11         )
     12     set fix_ip_user = product_id
     13  /
    
    10 rows updated.
    
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
      2  /
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  c7nqbxwzpyq5p, child number 0
    -------------------------------------
    update /*+ gather_plan_statistics */        ( select a.fix_ip_user               , c.product_id            from bl_tmp_intrnt
    a               , nap_internet b               , ps_rf_inst_prod c           where a.term_desc like '%ip%'             and
    a.service_uid = b.inst_prod_id             and b.setid = 'SHARE'             and b.nap_surf_uid = c.inst_prod_id        )
    set fix_ip_user = product_id
    
    Plan hash value: 1745632149
    
    ---------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    ---------------------------------------------------------------------------------------------------------------------------------------
    |   1 |  UPDATE                        | BL_TMP_INTRNT   |      1 |        |      0 |00:00:00.01 |      32 |       |       |          |
    |   2 |   NESTED LOOPS                 |                 |      1 |     10 |     10 |00:00:00.01 |      21 |       |       |          |
    |   3 |    MERGE JOIN                  |                 |      1 |     10 |     10 |00:00:00.01 |       9 |       |       |          |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| NAP_INTERNET    |      1 |     10 |     10 |00:00:00.01 |       2 |       |       |          |
    |   5 |      INDEX FULL SCAN           | SYS_C00132995   |      1 |     10 |     10 |00:00:00.01 |       1 |       |       |          |
    |*  6 |     SORT JOIN                  |                 |     10 |    250 |     10 |00:00:00.01 |       7 |   267K|   256K|  237K (0)|
    |*  7 |      TABLE ACCESS FULL         | BL_TMP_INTRNT   |      1 |    250 |   5000 |00:00:00.01 |       7 |       |       |          |
    |   8 |    TABLE ACCESS BY INDEX ROWID | PS_RF_INST_PROD |     10 |      1 |     10 |00:00:00.01 |      12 |       |       |          |
    |*  9 |     INDEX UNIQUE SCAN          | SYS_C00132996   |     10 |      1 |     10 |00:00:00.01 |       2 |       |       |          |
    ---------------------------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       4 - filter("B"."SETID"='SHARE')
       6 - access("A"."SERVICE_UID"="B"."INST_PROD_ID")
           filter("A"."SERVICE_UID"="B"."INST_PROD_ID")
       7 - filter("A"."TERM_DESC" LIKE '%ip%')
       9 - access("B"."NAP_SURF_UID"="C"."INST_PROD_ID")
    
    32 rows selected.
    
    SQL> rollback
      2  /
    
    Rollback complete.
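    If you cannot add the primary keys, a MERGE statement is a common alternative to the updatable join view. This is only a sketch against the tables above, not part of the original transcript; note that MERGE raises ORA-30926 if the source returns more than one row per target row, which the same uniqueness requirement prevents:

    ```sql
    merge into bl_tmp_intrnt a
    using ( select b.inst_prod_id
                 , c.product_id
              from nap_internet b
                 , ps_rf_inst_prod c
             where b.setid = 'SHARE'
               and b.nap_surf_uid = c.inst_prod_id ) src
       on (    a.service_uid = src.inst_prod_id
           and a.term_desc like '%ip%' )
     when matched then
        update set a.fix_ip_user = src.product_id
    ```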
    

    And this is your current statement. Note the 5000 in the Starts column:

    SQL> UPDATE /*+ gather_plan_statistics */ BL_TMP_INTRNT A
      2     SET A.FIX_IP_USER =
      3            (SELECT C.PRODUCT_ID
      4               FROM NAP_INTERNET B, PS_RF_INST_PROD C
      5              WHERE B.INST_PROD_ID = A.SERVICE_UID
      6                AND B.SETID = 'SHARE'
      7                AND C.INST_PROD_ID = B.NAP_SURF_UID)
      8   WHERE A.TERM_DESC LIKE '%ip%'
      9  /
    
    5000 rows updated.
    
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
      2  /
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  f1qtnpa0nmbh8, child number 0
    -------------------------------------
    UPDATE /*+ gather_plan_statistics */ BL_TMP_INTRNT A    SET A.FIX_IP_USER =           (SELECT
    C.PRODUCT_ID              FROM NAP_INTERNET B, PS_RF_INST_PROD C             WHERE B.INST_PROD_ID
    = A.SERVICE_UID               AND B.SETID = 'SHARE'               AND C.INST_PROD_ID =
    B.NAP_SURF_UID)  WHERE A.TERM_DESC LIKE '%ip%'
    
    Plan hash value: 3190675455
    
    -----------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -----------------------------------------------------------------------------------------------------------
    |   1 |  UPDATE                       | BL_TMP_INTRNT   |      1 |        |      0 |00:00:00.10 |    5076 |
    |*  2 |   TABLE ACCESS FULL           | BL_TMP_INTRNT   |      1 |    250 |   5000 |00:00:00.01 |       7 |
    |   3 |   NESTED LOOPS                |                 |   5000 |      1 |     10 |00:00:00.02 |      24 |
    |*  4 |    TABLE ACCESS BY INDEX ROWID| NAP_INTERNET    |   5000 |      1 |     10 |00:00:00.01 |      12 |
    |*  5 |     INDEX UNIQUE SCAN         | SYS_C00132995   |   5000 |      1 |     10 |00:00:00.01 |       2 |
    |   6 |    TABLE ACCESS BY INDEX ROWID| PS_RF_INST_PROD |     10 |     10 |     10 |00:00:00.01 |      12 |
    |*  7 |     INDEX UNIQUE SCAN         | SYS_C00132996   |     10 |      1 |     10 |00:00:00.01 |       2 |
    -----------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - filter("A"."TERM_DESC" LIKE '%ip%')
       4 - filter("B"."SETID"='SHARE')
       5 - access("B"."INST_PROD_ID"=:B1)
       7 - access("C"."INST_PROD_ID"="B"."NAP_SURF_UID")
    
    29 rows selected.
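    A caveat with this correlated form, beyond the extra subquery starts: it updates every row matching the outer WHERE, setting fix_ip_user to NULL when the subquery finds no match. The usual guard, sketched here as an assumption about the intended behaviour rather than a statement from the thread, is an EXISTS clause mirroring the subquery:

    ```sql
    UPDATE bl_tmp_intrnt a
       SET a.fix_ip_user =
              (SELECT c.product_id
                 FROM nap_internet b, ps_rf_inst_prod c
                WHERE b.inst_prod_id = a.service_uid
                  AND b.setid = 'SHARE'
                  AND c.inst_prod_id = b.nap_surf_uid)
     WHERE a.term_desc LIKE '%ip%'
       AND EXISTS (SELECT NULL
                     FROM nap_internet b
                    WHERE b.inst_prod_id = a.service_uid
                      AND b.setid = 'SHARE')
    ```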
    

    Kind regards
    Rob.

  • Performance issue with SHUTTLE item with 2000 records

    Hello

    I have a form with a SHUTTLE item based on an LOV that has about 2000 records.
    When I submit this page (CREATE/APPLY CHANGES), it takes about 3-4 seconds to submit.

    When I remove the SHUTTLE item from the form, the submit is very fast... say 1 second...
    so it takes more time when there is a SHUTTLE item with 2000 records.

    So, how do I fix this performance issue?... any suggestions?


    Thank you
    Deepak

    Deepak,

    When there are more records in the shuttle's multi-select lists, the page will take more time to render. In most cases, you cannot avoid this.

    Check the performance of the underlying LOV query. If it is a complex query, look at its explain plan and tune it.

    Alternatively, you can move this item to a popup page. That is, create a custom popup page and put the shuttle's multi-select items on that page. Let the user pick the shuttle options in the popup page and return the selected values to the main form page :)

    Cheers,
    Hari

  • Performance issue with 2.5" HDD + NAND MQ01ABD100H

    Hello everyone.

    I have been using this drive for several days.
    The problem is that the drive scores lower in the Windows Experience Index than my old drive (WD500BEVT).

    I ran a benchmark in HDTune Pro and get a very bad result for the speed at the beginning of the test. Overall performance is also much lower than what I read in reviews.

    Can you help me with this problem?
    I am running Windows 7 64-bit Professional, all drivers are installed, and my laptop is an Acer 5820TG (i5-450M, 4 GB of DDR3 memory, ATI HD5650). The installation is only a few days old, original with SP1.

    Thanks for any help.

    I am attaching a link to a screenshot
    http://imagizer.imageshack.us/v2/640x480q90/585/j5nl.PNG

    Hello

    According to the [MQ01ABDxxxH specifications page|http://storage.toshiba.eu/cms/en/hdd/solid_state_hybrid_drives/product_detail.jsp?productid=525], this 2.5-inch SATA hybrid drive supports SATA III.

    For best performance, your laptop's disk controller should also support SATA III.
    If your laptop does not support SATA III, performance may be lower due to the limitation of the disk controller.
