How to improve the performance of queries

Hello everyone, this is the first time I have been responsible for investigating query performance.

I don't have permission to run autotrace, so I have just used EXPLAIN PLAN (via the plan table) on the development site.

Can someone here tell me if there are obvious problems with the query below?

This query is the basis for a single report. The amount of data on the production site is about 25 times the amount of data on the development site.

Users say that they must wait about 10 minutes for the report to load.

Thank you very much in advance.

Databases:

Oracle 11g Release 1 (RAC)

The application running the report is Oracle APEX version 4.0.2

explain plan for
select vw.NPP, vw.PROFILE_NAME, vw.ADDRESS, vw.COMPETENCE, vw.ASSIGNED_CPO_ID,
       vw.STATUS_NAME, vw.DOMICILE_ID, vw.DOMICILE_NAME, vw.OVERPAYMENT_GROUP_ID, vw.CPO_NAME, vw.OCCURRENCES, act.STATUS_CHANGE_DATE,
       (select name from opay_status where id = ACTION_ID) action_name,
       ACTIONED_DATE, CLOSED_BY,
       (select name from OPAY_PAYMENT_SOURCE_TYPES where id = PAY_TYPE_ID) payment_source,
       vw.REVIEW_DATE
from   ftrx.opay_overpayments_summary_vw vw, OPAY_OVERPAYMENT_ACTIONS act
where  vw.OVERPAYMENT_GROUP_ID = act.ID (+)
and    upper(vw.STATUS_NAME) <> 'TRANSFORMED'
and    OPAY_PROCESSED_OVERPAYMENT(vw.OVERPAYMENT_GROUP_ID) = 'N'
union all
select cpp.NPP, cpp.PROFILE_NAME, cpp.ADDRESS, cpp.COMPETENCE, cpp.ASSIGNED_CPO_ID,
       cpp.STATUS_NAME, cpp.DOMICILE_ID, cpp.DOMICILE_NAME, cpp.OVERPAYMENT_GROUP_ID, cpp.ASSIGNED_CPO_NAME CPO_NAME, cpp.cycle_count OCCURRENCES, cpp.STATUS_CHANGE_DATE,
       cpp.action_name,
       cpp.ACTIONED_DATE, (select name from CPO where id = CPO_ID) CLOSED_BY,
       cpp.payment_source,
       cpp.STATUS_CHANGE_DATE DUMMY_DATE
from   OPAY_CPO_PROFILE_PROCESSED cpp
/

select * from table(dbms_xplan.display);

Plan hash value: 3275359346

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name                         | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                              |   260K|    65M|         | 44949   (4)| 00:09:00 |
|   1 |  UNION-ALL                              |                              |       |       |         |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID           | OPAY_STATUS                  |     1 |    16 |         |     1   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                    | OPAY_PK_STATUS               |     1 |       |         |     0   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID           | OPAY_PAYMENT_SOURCE_TYPES    |     1 |    12 |         |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN                    | OPAY_PK_SOURCE_TYPES         |     1 |       |         |     0   (0)| 00:00:01 |
|*  6 |   HASH JOIN RIGHT OUTER                 |                              |   197K|    86M|         | 43459   (1)| 00:08:42 |
|   7 |    TABLE ACCESS FULL                    | OPAY_OVERPAYMENT_ACTIONS     |  1224 | 29376 |         |   209   (0)| 00:00:03 |
|   8 |    VIEW                                 | OPAY_OVERPAYMENTS_SUMMARY_VW |   197K|    51M|         | 43249   (1)| 00:08:39 |
|   9 |     UNION-ALL                           |                              |       |       |         |            |          |
|* 10 |      HASH JOIN                          |                              |     2 |   570 |         | 17032   (1)| 00:03:25 |
|* 11 |       VIEW                              |                              |     2 |   560 |         |  7456   (1)| 00:01:30 |
|  12 |        WINDOW SORT                      |                              |     2 |   278 |         |  7456   (1)| 00:01:30 |
|  13 |         NESTED LOOPS OUTER              |                              |     2 |   278 |         |  7455   (1)| 00:01:30 |
|  14 |          NESTED LOOPS                   |                              |     2 |   242 |         |  7453   (1)| 00:01:30 |
|* 15 |           HASH JOIN OUTER               |                              |     2 |   214 |         |  7451   (1)| 00:01:30 |
|  16 |            NESTED LOOPS OUTER           |                              |     2 |   194 |         |  7448   (1)| 00:01:30 |
|* 17 |             TABLE ACCESS FULL           | OPAY_OVERPAYMENTS            |     2 |   168 |         |  7447   (1)| 00:01:30 |
|  18 |             TABLE ACCESS BY INDEX ROWID | OPAY_OVERPAYMENT_ACTIONS     |     1 |    13 |         |     1   (0)| 00:00:01 |
|* 19 |              INDEX UNIQUE SCAN          | OPAY_PK_OVERPAY_ACTIONS      |     1 |       |         |     0   (0)| 00:00:01 |
|  20 |            TABLE ACCESS FULL            | STATUS                       |     3 |    30 |         |     3   (0)| 00:00:01 |
|  21 |           TABLE ACCESS BY INDEX ROWID   | DOMICILE                     |     1 |    14 |         |     1   (0)| 00:00:01 |
|* 22 |            INDEX UNIQUE SCAN            | PK_DOMICILE                  |     1 |       |         |     0   (0)| 00:00:01 |
|  23 |          TABLE ACCESS BY INDEX ROWID    | CPO                          |     1 |    18 |         |     1   (0)| 00:00:01 |
|* 24 |           INDEX UNIQUE SCAN             | PK_CPO                       |     1 |       |         |     0   (0)| 00:00:01 |
|  25 |       VIEW                              |                              |   197K|   963K|         |  9574   (1)| 00:01:55 |
|  26 |        HASH GROUP BY                    |                              |   197K|  7127K|   8544K |  9574   (1)| 00:01:55 |
|* 27 |         HASH JOIN RIGHT OUTER           |                              |   197K|  7127K|         |  7664   (1)| 00:01:32 |
|  28 |          TABLE ACCESS FULL              | STATUS                       |     3 |     9 |         |     3   (0)| 00:00:01 |
|* 29 |          HASH JOIN RIGHT OUTER          |                              |   197K|  6549K|         |  7660   (1)| 00:01:32 |
|  30 |           TABLE ACCESS FULL             | OPAY_OVERPAYMENT_ACTIONS     |  1224 | 11016 |         |   209   (0)| 00:00:03 |
|* 31 |           HASH JOIN                     |                              |   197K|  4816K|         |  7449   (1)| 00:01:30 |
|  32 |            INDEX FULL SCAN              | PK_DOMICILE                  |    73 |   292 |         |     1   (0)| 00:00:01 |
|  33 |            TABLE ACCESS FULL            | OPAY_OVERPAYMENTS            |   197K|  4045K|         |  7446   (1)| 00:01:30 |
|* 34 |      HASH JOIN                          |                              |   197K|    53M|   3280K | 26217   (1)| 00:05:15 |
|  35 |       VIEW                              |                              |   197K|   963K|         |  9574   (1)| 00:01:55 |
|  36 |        HASH GROUP BY                    |                              |   197K|  7127K|   8544K |  9574   (1)| 00:01:55 |
|* 37 |         HASH JOIN RIGHT OUTER           |                              |   197K|  7127K|         |  7664   (1)| 00:01:32 |
|  38 |          TABLE ACCESS FULL              | STATUS                       |     3 |     9 |         |     3   (0)| 00:00:01 |
|* 39 |          HASH JOIN RIGHT OUTER          |                              |   197K|  6549K|         |  7660   (1)| 00:01:32 |
|  40 |           TABLE ACCESS FULL             | OPAY_OVERPAYMENT_ACTIONS     |  1224 | 11016 |         |   209   (0)| 00:00:03 |
|* 41 |           HASH JOIN                     |                              |   197K|  4816K|         |  7449   (1)| 00:01:30 |
|  42 |            INDEX FULL SCAN              | PK_DOMICILE                  |    73 |   292 |         |     1   (0)| 00:00:01 |
|  43 |            TABLE ACCESS FULL            | OPAY_OVERPAYMENTS            |   197K|  4045K|         |  7446   (1)| 00:01:30 |
|* 44 |       VIEW                              |                              |   197K|    52M|         | 13757   (1)| 00:02:46 |
|  45 |        WINDOW SORT                      |                              |   197K|    26M|     28M | 13757   (1)| 00:02:46 |
|* 46 |         HASH JOIN RIGHT OUTER           |                              |   197K|    26M|         |  7673   (1)| 00:01:33 |
|  47 |          TABLE ACCESS FULL              | CPO                          |    91 |  1638 |         |     4   (0)| 00:00:01 |
|* 48 |          HASH JOIN RIGHT OUTER          |                              |   197K|    22M|         |  7667   (1)| 00:01:33 |
|  49 |           TABLE ACCESS FULL             | STATUS                       |     3 |    30 |         |     3   (0)| 00:00:01 |
|* 50 |           HASH JOIN RIGHT OUTER         |                              |   197K|    20M|         |  7663   (1)| 00:01:32 |
|  51 |            TABLE ACCESS FULL            | OPAY_OVERPAYMENT_ACTIONS     |  1224 | 15912 |         |   209   (0)| 00:00:03 |
|* 52 |            HASH JOIN                    |                              |   197K|    18M|         |  7452   (1)| 00:01:30 |
|  53 |             TABLE ACCESS FULL           | DOMICILE                     |    73 |  1022 |         |     3   (0)| 00:00:01 |
|* 54 |             TABLE ACCESS FULL           | OPAY_OVERPAYMENTS            |   197K|    15M|         |  7448   (1)| 00:01:30 |
|  55 |   TABLE ACCESS BY INDEX ROWID           | CPO                          |     1 |    18 |         |     1   (0)| 00:00:01 |
|* 56 |    INDEX UNIQUE SCAN                    | PK_CPO                       |     1 |       |         |     0   (0)| 00:00:01 |
|  57 |   TABLE ACCESS FULL                     | OPAY_CPO_PROFILE_PROCESSED   | 63108 |  9429K|         |  1490   (1)| 00:00:18 |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ID"=:B1)
   5 - access("ID"=:B1)
   6 - access("VW"."OVERPAYMENT_GROUP_ID"="ACT"."ID"(+))
  10 - access("A"."ID"="B"."ID")
  11 - filter(UPPER("A"."STATUS_NAME")<>'TRANSFORMED' AND "OPAY_PROCESSED_OVERPAYMENT"("A"."OVERPAYMENT_GROUP_ID")='N')
  15 - access("STS"."ID"(+)=TO_NUMBER("ACT"."STATUS"))
  17 - filter("OPAY"."OVERPAYMENT_GROUP_ID" IS NULL)
  19 - access("OPAY"."OVERPAYMENT_GROUP_ID"="ACT"."ID"(+))
  22 - access("OPAY"."DOMICILE_ID"="DOM"."ID")
  24 - access("ACT"."ASSIGNED_CPO_ID"="CPO"."ID"(+))
  27 - access("STS"."ID"(+)=TO_NUMBER("ACT"."STATUS"))
  29 - access("OPAY"."OVERPAYMENT_GROUP_ID"="ACT"."ID"(+))
  31 - access("OPAY"."DOMICILE_ID"="DOM"."ID")
  34 - access("A"."ID"="B"."ID")
  37 - access("STS"."ID"(+)=TO_NUMBER("ACT"."STATUS"))
  39 - access("OPAY"."OVERPAYMENT_GROUP_ID"="ACT"."ID"(+))
  41 - access("OPAY"."DOMICILE_ID"="DOM"."ID")
  44 - filter(UPPER("A"."STATUS_NAME")<>'TRANSFORMED' AND "OPAY_PROCESSED_OVERPAYMENT"("A"."OVERPAYMENT_GROUP_ID")='N')
  46 - access("ACT"."ASSIGNED_CPO_ID"="CPO"."ID"(+))
  48 - access("STS"."ID"(+)=TO_NUMBER("ACT"."STATUS"))
  50 - access("OPAY"."OVERPAYMENT_GROUP_ID"="ACT"."ID"(+))
  52 - access("OPAY"."DOMICILE_ID"="DOM"."ID")
  54 - filter("OPAY"."OVERPAYMENT_GROUP_ID" IS NOT NULL)
  56 - access("ID"=:B1)

And this is the content of the function used in the query:

create or replace
FUNCTION OPAY_PROCESSED_OVERPAYMENT (p_opay_group_id NUMBER) RETURN varchar AS
  dummy  number;
  result varchar(1);
BEGIN
  if p_opay_group_id is null then
    return 'N';
  else
    select count(distinct is_processed_yn) into dummy from opay_overpayments where overpayment_group_id = p_opay_group_id;
    if dummy <> 1 then
      return 'N';
    else
      select distinct is_processed_yn into result from opay_overpayments where overpayment_group_id = p_opay_group_id;
      return result;
    end if;
  end if;
END;

And here are the details of the view:

CREATE OR REPLACE VIEW OPAY_OVERPAYMENTS_SUMMARY_VW
AS
select a.REVIEW_DATE, a.ID, a.NPP, a.PROFILE_NAME, a.ADDRESS, a.COMPETENCE, a.ASSIGNED_CPO_ID,
       a.STATUS_NAME, a.DOMICILE_ID, a.DOMICILE_NAME, a.OVERPAYMENT_GROUP_ID, a.CPO_NAME, a.OCCURRENCES
from (select review_date, id, npp, profile_name, address,
             domicile_id, domicile_name, status_name, assigned_cpo_id, competence,
             overpayment_group_id, cpo_name,
             count(npp) over (partition by npp) as OCCURRENCES
      from (select review_date, id, npp, profile_name, address, assigned_cpo_id,
                   competence, status_name, domicile_id, domicile_name,
                   overpayment_group_id, cpo_name
            from ftrx.opay_overpayments_vw
            where overpayment_group_id is null)) a
join (select max(id) id, overpayment_group_id, npp
      from opay_overpayments_vw
      group by overpayment_group_id, npp) b
on a.id = b.id
union all
select a.REVIEW_DATE, a.ID, a.NPP, a.PROFILE_NAME, a.ADDRESS, a.COMPETENCE, a.ASSIGNED_CPO_ID,
       a.STATUS_NAME, a.DOMICILE_ID, a.DOMICILE_NAME, a.OVERPAYMENT_GROUP_ID, a.CPO_NAME, a.OCCURRENCES
from (select review_date, id, npp,
             profile_name,
             address,
             competence,
             assigned_cpo_id,
             status_name,
             domicile_id,
             domicile_name,
             overpayment_group_id,
             cpo_name,
             count(overpayment_group_id) over (partition by overpayment_group_id) as OCCURRENCES
      from (select review_date, id, npp,
                   profile_name,
                   address,
                   competence,
                   assigned_cpo_id,
                   status_name,
                   domicile_id,
                   domicile_name,
                   overpayment_group_id,
                   cpo_name
            from ftrx.opay_overpayments_vw
            where overpayment_group_id is not null)) a
join (select max(id) id, overpayment_group_id, npp
      from opay_overpayments_vw
      group by overpayment_group_id, npp) b
on a.id = b.id

Let me know if my explanation is not clear enough.

Thanks for reading.

I would think that the function is part of the problem. It is called a lot and runs several queries per row. Calls to PL/SQL from SQL (a) require a context switch during query execution (from SQL to PL/SQL) and (b) are completely opaque to the optimizer - it cannot rewrite them into something faster than one procedural call at a time. Instead of trying to rewrite the function to be faster, try to write it out of the query completely by turning it into a piece of SQL in the containing query.
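As a stop-gap, if many rows share the same OVERPAYMENT_GROUP_ID, wrapping the call in a scalar subquery lets Oracle cache the function result per distinct input value and call the function far less often. This is only a sketch of that idea, and removing the function from the query entirely (as described next) is still the better fix:

-- Scalar subquery caching: the function is now called roughly once per distinct
-- OVERPAYMENT_GROUP_ID rather than once per row.
and (select OPAY_PROCESSED_OVERPAYMENT(vw.OVERPAYMENT_GROUP_ID) from dual) = 'N'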

Assuming that opay_overpayments.is_processed_yn is always either 'N' or 'Y', change

OPAY_PROCESSED_OVERPAYMENT(vw.OVERPAYMENT_GROUP_ID) = 'N'

to

AND (vw.OVERPAYMENT_GROUP_ID is null or vw.OVERPAYMENT_GROUP_ID in

(select OVERPAYMENT_GROUP_ID from opay_overpayments where is_processed_yn = 'N' and OVERPAYMENT_GROUP_ID is not null)

)

Having done this, you can probably also turn the query into an outer join between ftrx.opay_overpayments_summary_vw vw, OPAY_OVERPAYMENT_ACTIONS act and (select OVERPAYMENT_GROUP_ID from opay_overpayments where is_processed_yn = 'N' and OVERPAYMENT_GROUP_ID is not null), if the optimizer has not already done that for you. That would minimize the amount of querying.
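For what it is worth, here is a rough sketch of what the first branch of the UNION ALL could look like with just that predicate swap applied. Everything is copied from the original query except the last predicate; the scalar subqueries and the second branch are left exactly as they are, so verify the results against the original before relying on it:

select vw.NPP, vw.PROFILE_NAME, vw.ADDRESS, vw.COMPETENCE, vw.ASSIGNED_CPO_ID,
       vw.STATUS_NAME, vw.DOMICILE_ID, vw.DOMICILE_NAME, vw.OVERPAYMENT_GROUP_ID, vw.CPO_NAME, vw.OCCURRENCES, act.STATUS_CHANGE_DATE,
       (select name from opay_status where id = ACTION_ID) action_name,
       ACTIONED_DATE, CLOSED_BY,
       (select name from OPAY_PAYMENT_SOURCE_TYPES where id = PAY_TYPE_ID) payment_source,
       vw.REVIEW_DATE
from   ftrx.opay_overpayments_summary_vw vw, OPAY_OVERPAYMENT_ACTIONS act
where  vw.OVERPAYMENT_GROUP_ID = act.ID (+)
and    upper(vw.STATUS_NAME) <> 'TRANSFORMED'
and    (   vw.OVERPAYMENT_GROUP_ID is null
        or vw.OVERPAYMENT_GROUP_ID in
           (select OVERPAYMENT_GROUP_ID
            from   opay_overpayments
            where  is_processed_yn = 'N'
            and    OVERPAYMENT_GROUP_ID is not null))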

Tags: Database

Similar Questions

  • How to improve the performance of the import

    I'm converting a database from its current WE8ISO8859P1 character set to AL32UTF8.
    I am using NLS_LENGTH_SEMANTICS = CHAR so I don't have to increase the column lengths.

    I followed the instructions in the Oracle notes:
    144808.1
    313175.1

    Except that the import takes too long.

    I took a full database export from the WE8ISO8859P1 database and am now importing it into the AL32UTF8 one.

    It is always difficult to prove the source of the slowness, but I think it has something to do with NLS_LENGTH_SEMANTICS. On the same
    server, if the new database was in WE8ISO8859P1, importing a 5 million row table took 2 hours; but in AL32UTF8 with NLS_LENGTH_SEMANTICS = CHAR it takes 1 day!

    I have no idea how to improve the performance of the import.

    DUPLICATE THREAD!

    Please, don't post duplicate discussions.

    Mark this thread ANSWERED and continue to use your original thread, where you are already being helped.

    You have NOT given the info that was asked for in your other thread.
    Re: NLS_LENGTH_SEMANTICS = CHAR import is too slow

  • How to improve the performance of your computer and free up space.

    Original title: unwanted Windows temporary files are causing performance problems

    According to a problem check, removing unwanted temporary Windows files could improve the performance of your computer and free up space.

    Can anyone help with this simple problem.

    Angelo

    Hi Angelo,

    1. Which edition of Windows are you running?
    Example: Windows 7 Professional 32-bit.
    Please follow the link below to clean up unwanted temporary files:
    Delete files using Disk Cleanup
     
    Microsoft at Home:
    Slow PC? Optimize your computer for peak performance
    Clean slate: how to remove unwanted files and programs
     
    I hope this helps.
  • How to improve the performance of the Intel X 3100 on Satellite L300

    Hello everyone.

    First of all, I don't know whether this is an appropriate place to post this thread or not, but I would like to share with you my experience improving the Intel X3100 (GMA965).

    As we know, there is no good driver for this Intel chipset at the moment, especially on Vista.
    This improvement is made by editing the registry entries of the Intel driver.

    Steps:
    1. Open Run (Windows + R)
    2. Enter regedit
    3. Open the subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\
    4. From there you'll see lots of subkeys under Video; open each one until you see the folder 0000 containing _3DMark03.exe.
    (For example, mine is HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{B5990899-FBCB-45E1-B0B0...} and the DWORD _3Dmark03.exe lies in the two subkeys 0000 and 0001)
    5. In 0000 and 0001, create two new DWORD values, _software.exe and ~software.exe, with value = 1 (hexadecimal). Here, "software" represents the name of the program you want to accelerate, such as _photoshop.exe and ~photoshop.exe
    6. In these keys, search for GFX_Options and change its value to 1.
    7. Restart your computer to activate GFX
    8. Repeat step 4 to add more programs you want

    Further driver updates are under development. However, this method will help you speed up your computer's performance with demanding graphics software. I got double the FPS in some software on my Toshiba L300 using this method, such as Corel, Warcraft III and Titan Quest (24-32 fps measured with Fraps).

    Hope it's useful for everyone.
    Best wishes

    Hi Luong Phan

    Thanks a lot for the details and this great instruction.

    I also found this thread. It's yours ;)

    Improve the performance of Satellite L300 for graphics software and games - Vista
    http://forums.computers.Toshiba-Europe.com/forums//message.jspa?MessageID=119890#119890

    It's the same instructions. I am happy. Thank you.

  • How to improve the performance of AE

    I have a Panasonic HMC 151 and it generates MTS (that is, AVCHD) files. I have used Sony Vegas as my NLE video editor for more than a year now and acquired After Effects to work on some scenes where an NLE editor falls short. I'm a bit disappointed by the performance of AE. Don't get me wrong: I love AE! It can work miracles, and it will certainly become a very valuable tool in my toolbox. But at the same time, I thought it would employ every possible processor resource in order to speed things up.

    I have a 10-second MTS file and put it on the timeline in AE with no effects at all. I then render it out to AVI at the best settings. AE takes 4 minutes for this render. I load the same MTS file in Sony Vegas and render out to AVI there too. Vegas takes 23 seconds! And the resulting files are almost exactly the same size. Using the Windows Task Manager, I can see that when AE renders, it occupies about 23% of the resources. In Vegas, on the other hand, it is nearly complete: 93 to 97%. I have a quad core, and clearly After Effects uses only a fraction of the power available in my PC.

    As it happens, rendering speed is not my main problem. I'm more interested in increasing overall performance when working in AE: previewing, applying effects and so on. And when it comes to this, things are surprisingly slow and my CPU is essentially idle, at about 25% occupancy.

    Is there a performance setting in AE that I have missed? Something like "use n number of cores"?

    ingvarai

    ingvarai:

    First and foremost, it should be noted that comparing After Effects to any editing application purely in terms of file I/O is not a good idea. Obviously, editing software will be more responsive at that. AE's performance should be evaluated in the context of complex layering and image processing, which is what it was designed to do. In other words, measuring how long it takes AE to render a bare video clip is not representative of what you can do in After Effects.

    That said, After Effects has a very powerful multiprocessing system in which it spawns a separate instance (not a thread but a whole headless copy of the application) for each core in your computer. Thus, for a quad core CPU, for example, AE could transparently use 4 render instances. However, this technique requires that you have at least 2 GB of RAM per core, or it will be counterproductive: the background render processes would be starved and slow things down. To enable this feature, go to Preferences > Memory & Multiprocessing and enable "Render Multiple Frames Simultaneously". If you do not have at least 2 GB per processor/core, it is important that you limit the number of instances so that they each receive no less than 2 GB of RAM.  You do this by defining a number of cores/processors to leave free for other applications in that section. In addition, it is not a good idea to spawn render instances for virtual cores (processors in the i7 family may report twice as many cores as are physically present because of hyperthreading).

    For more information, see the help page for After Effects on the memory and multiprocessing preferences.

    Note that for formats such as AVCHD, if your project is more I/O-intensive than processor-intensive (several pieces of footage with little processing), you may see little improvement, because the interframe/temporal compression is very expensive to decode. If possible, it is not a bad idea to convert such clips to a lossless or uncompressed format/codec combination for better performance. For typical AE projects, that is to say lots of layers and compositing operations, the "Render Multiple Frames Simultaneously" feature can provide drastic improvements in render throughput. Still, this depends on the nature of the project.

  • How to improve the performance of XControl?

    Hi all

    LabVIEW 2009 + DSC

    Following the example project, there is one XControl and a tester.vi. My goal was to build a flexible display object (one XControl) because my user interface varies a lot depending on the current client configuration. The number of these XControls is generally 100-200 pieces per display. The problem is that this code structure generates a lot of CPU load (even though it's very simple). This can easily be seen when tester.vi is set to run. In general, is there a way to improve XControl performance?

    Some ways to improve performance:

    1. Never use a Value property when you can use a local variable.  The difference in speed is two or three orders of magnitude.  You have several instances of this.
    2. Consider changing your data structure so that the object number and the object position are parallel arrays.  This lets you easily search for the item number using the Search 1D Array primitive, then fetch the corresponding object position using Index Array.
    3. You should really not have anything in the XControl that stops or delays execution, because this will block your user interface thread.  Your popup VI does this.  If you need to pop up a dialog, run it with the Run VI method with Wait Until Done set to FALSE and Auto Dispose Ref set to TRUE. To communicate the result back to the XControl, drop a hidden control on the panel containing the XControl, pass a reference to this control into your dialog, then use the Value (Signaling) property to notify the XControl from the dialog.  Use a Value Change event on the hidden control to handle receiving the data in the XControl.
  • How to improve the performance of my T61?

    Hello

    I have a Lenovo T61 with a T7300, which means it has an Intel Core 2 Duo at 2.00 GHz, 2 GB of RAM and an nVidia Quadro NVS 140M.

    And I want to improve its performance, so what hardware can I add to it... (e.g. can I add another 2 GB of RAM, or replace it with 4 GB)?

    Also, I have a T61 and am very happy with it.  An SSD would certainly speed things up, but it would not perform to its potential because the SATA interface on the T61 is limited to SATA 150 speeds.

    I have a nice Seagate 7200 RPM 320 GB regular spinning drive and am happy with the performance.

    Regarding memory, your T61 can support up to 8 GB of RAM.  Four should be plenty, but if you want to get the max out of the machine and don't mind spending a little extra money, I'd recommend the following memory: http://www.newegg.com/Product/Product.aspx?Item=N82E16820231210

    Also, be sure to disable unnecessary software on your computer.  It can easily be slowed down by security suites and background processes.

    HTH

  • How to improve the performance of Flash Builder 4.5

    Hello guys!

    I use Flash Builder 4.5 for my apps, but my apps have a lot of classes, and whenever I compile or change anything in my code, Flash Builder takes a long time to build. How can I improve this?

    I solved this problem; I changed FlashBuilder.ini with these lines:

    -vmargs
    -Xms256m
    -Xmx1024m
    -XX:MaxPermSize=256m
    -XX:PermSize=64m

    Thank you

  • How to improve the performance of this loop?

    Hello

    I have the code below that loops over two folders and compares the files in them. The performance is bad. Can someone tell me how to accomplish this task efficiently?
    Please note that both folders can be on the network and most often contain 5000+ files.
    for (int i = 0; i < sourceList.length; i++) {
        for (int j = 0; j < targetList.length; j++) {
            if (sourceList[i].getName().equalsIgnoreCase(targetList[j].getName())) {
                if (sourceList[i].lastModified() > targetList[j].lastModified()) {
                    newInSourceList.add(sourceList[i]); // ArrayList 1
                } else if (sourceList[i].lastModified() < targetList[j].lastModified()) {
                    newInTargetList.add(targetList[j]); // ArrayList 2
                }
            }
        }
    }


    Thanks in advance.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                

    Most of this minute is spent between the two points I mentioned below (Point A & B).

    I would guess the time mostly goes into the isFile() calls. Each one has to make a stat() system call, which goes out to the network file server.

    If you can target Java 7 or greater, you can use the new java.nio.file package and BasicFileAttributes. This lets you fetch the stat() data for each file in a single call, so that lastModified() and isFile() don't need separate calls to the file server. It still won't be lightning fast, but maybe almost 2x faster; I think all the other processing is overshadowed by the calls over the network.

        // Read file attributes into an array (or use a Map if you prefer)
        String files[] = new File(directory).list();
        BasicFileAttributes attributes[] = new BasicFileAttributes[files.length];
        for (int n = 0; n < files.length; n++)
            attributes[n] = Files.readAttributes(Paths.get(directory, files[n]), BasicFileAttributes.class);
    
        // Now you can use
        //   attributes[n].lastModifiedTime()
        //   attributes[n].isRegularFile()
        // etc. without each call hitting the file server.
    
  • How to improve the performance of the Gather Schema Statistics CP?

    Hello

    We have implemented only a few modules, such as Oracle Financials, SCM, OPM and HR, but the 'Gather Schema Statistics' concurrent program runs for 10:30 hours. The parameters are the following:
    Schema Name: ALL
    Estimate Percent: 10
    Degree: Null
    Backup Flag: NOBACKUP
    Restart Request ID: Null
    History Mode: LASTRUN
    Gather Options: GATHER
    Modifications Threshold: Null
    Invalidate Dependent Cursors: Y

    How can I tune this program for better performance? How can I include only the modules that are actually implemented, rather than all schemas?

    Concerning
    Ariz

    How can I tune this program for better performance? How can I include only the modules that are actually implemented, rather than all schemas?

    There is no precise answer to your question and you will need to test different settings until you are satisfied with the performance - see the note on defining the parameters used by the Gather Schema Statistics program [ID 556466.1].

    Try with "Estimate percent" between 10% and 40%
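    If it helps, here is a minimal sketch of gathering statistics for a single schema at a chosen sample size using the generic DBMS_STATS package (the schema name 'AR' is only a placeholder; in EBS the supported route remains the Gather Schema Statistics concurrent program, so treat this purely as an illustration of the estimate percent / scope trade-off):

    BEGIN
      -- Example only: one schema, 10% sample, moderate parallelism.
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'AR',   -- hypothetical schema name, replace as needed
        estimate_percent => 10,
        degree           => 4,
        cascade          => TRUE);  -- also gather index statistics
    END;
    /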

    Thank you
    Hussein

  • How to improve the performance of compact

    Hello

    I have a question about DB->compact:

    My setup is pretty simple: I have 32 environments with a 512 MB memory cache and 5 databases each. The largest database is about 8 GB and is 66% full. My application runs nonstop, and I would like to recover my database disk space.

    So I added code that manually compacts the large database step by step (about 100 pages of 64K at each step) using a 90% fill percent, but I found that compact does a lot of I/O and strongly affects the performance of my application...

    The only solution I've found so far is to increase the cache to 3 GB for the database being compacted; then my application works normally, but as I cannot afford this amount of memory, I have the following questions:

    - Why does compact need so much cache? Why does it do that many I/Os?
    - Is there another way that does not involve that much caching?
    - I am running Berkeley DB 4.7.25 on FreeBSD. Would upgrading to a newer version help?

    Any comments that could help me understand why things behave this way would be much appreciated.
    OK then, thanks in advance!

    Hello

    Compacting the database can be a very I/O-intensive operation, as data must be moved between database pages, dirty pages in the cache must be flushed to disk in order to have enough room for all of the pages required by the compaction to stay in the buffer pool, and so on.
    There are strategies to reduce the I/O impact. Below is a series of suggestions you might try:

    1. If you can upgrade to the latest version of Berkeley DB, then you should do it. There have been a lot of improvements and bug fixes in the compaction code. Download the latest production version of BDB here.

    2. When compacting a database opened in a transactional environment, call the DB->compact() method without an explicit transaction, that is, using a NULL txnid pointer. In this case the operation will implicitly be transaction protected using multiple transactions. These transactions will be committed periodically to avoid locking large sections of the tree. Deadlocks that are encountered cause the compaction operation to be retried from the point of the last committed transaction. Compact does not keep the pages it scans locked; any page that does not have to be updated is unlocked immediately. BDB locks only pages that are below a single parent page at the leaf level, and when removing free pages from the file only the metadata page will be locked.

    3. When running compaction in order to free up space and return empty database pages to the file system, it is generally recommended to repeat the compact with a lower compact_fillpercent. The output statistics fields compact_pages_truncated and compact_pages_free in the DB_COMPACT structure should be reviewed to determine whether there is any point in continuing to run the compaction with the same compact_fillpercent, or whether a lower compact_fillpercent should be used. If the values are strictly positive, call DB->compact again with the same compact_fillpercent (and specify the DB_FREE_SPACE flag).
    The compact algorithm makes a single pass over the database pages, so non-empty pages at the end of the file will prevent the free pages (which are placed on the free list) from being returned to the file system.

    4. If you can logically split your database into sections of key/data pairs based on the key values, you can perform compaction on only certain parts of the database. Use the start and stop keys when making the call to DB->compact().

    5. You can check the memory pool statistics to see how your cache is performing. Refer to the documentation section on selecting a cache size. It generally helps, especially for an optimal compaction, to increase the cache size.

    6. Perform the compact operation when there is no write-intensive activity against the database, in other words, when the other threads of control are not writing to the database (or when that activity is moderate to low).

    7. Use memory pool trickle flushing by calling DB_ENV->memp_trickle() to try to keep a certain percentage of the pages in the cache clean.

    Kind regards
    Andrei

  • How to improve the performance of EBS 11i

    Hello

    Which Metalink documents should I check for performance problems with 11i EBS RAC on Linux, for example for Forms, PCP and JServ?

    Hi again;

    In addition to my previous post, please also consult:

    performance-ebs

    Oracle EBS R12 performance is very slow...

    Note: 744143.1 - Performance Tuning on E-Business Suite
    Note: 864226.1 - How can I diagnose poor E-Business Suite performance?

    EBS, performance problem
    Re: EBS, performance problem

    Oracle Apps Tuning
    Re: Oracle Apps Tuning

    Note: 69565.1 - a comprehensive approach to Oracle Applications systems performance

    You can click on http://blogs.oracle.com/stevenChan/2007/12/performance_tuning_for_the_ebu.html and http://blogs.oracle.com/mt/mt-search.cgi?blog_id=101&tag=performance&limit=20

    Hopefully these will be useful.

    Regards,
    HELIOS

  • How to improve the performance of a computer that is very slow at startup in Vista?

    What programs do I need to run at startup? My computer came up very slow; can someone help? Remember - this is a public forum, so never post private information such as email addresses or phone numbers!

    What programs do I need to run at startup? Because it takes 7 to 10 minutes to get running.

    Ideas:

    • Programs you are having problems with
    • Error messages
    • Recent changes to your computer
    • What you have already tried to solve the problem
    Saturday, May 29, 2010, 11:46:15 +0000, dave wilson332010 wrote:
     
     
    > What programs do I need to run at startup, because my computer came up very slow, someone help
     
     
     
    Everyone is different and has different needs and different desires.
    There are *no* programs that everyone should run. See below for
    tips on what you should perhaps stop from starting automatically,
    but first of all consider the possibility that your problem with
    slowness may have nothing to do with what starts automatically.
     
    Perhaps the most common performance issue today is malware
    infection, and the first thing you should do is make sure that you are not
    infected. So, what anti-virus and anti-spyware programs do you run, and
    are they kept updated?
     
    Once you have dealt with that question and either gotten rid of all
    malware or assured yourself that you are not infected, you can address
    the issue of automatic startup programs:
     
    First of all, note that you should be looking at *all* programs that
    start automatically, not only those that appear in the system tray.
    Not all auto-starting programs manifest themselves with an icon in the
    notification area.
     
    For each program you don't want to start automatically, check its
    options to see if it has a choice not to start (make sure you
    actually choose not to run it, not just a "don't show icon"
    option). Many can easily and better be stopped that way. If that doesn't
    work, run MSCONFIG from the Start | Run line, then click the Startup tab.
    Uncheck the programs that you do not want to start automatically.
     
    However, if I were you, I wouldn't do this just to achieve
    the minimum number of running programs. Despite what many people tell
    you, you should be concerned not with how *many* of these programs
    you run, but with *which ones*. Some of them can degrade performance
    severely, but others have no effect on performance.
     
    Don't stop programs from running willy-nilly. What you need to do
    is determine what each program is, what its value is to you, and what
    its performance cost is when running all the time. You can try
    searching the Internet and asking about the details here.
     
    Once you have this information, you can make a smart informed
    decision on what you want to keep and what you want to get rid of.
     
     
    Ken Blake, Microsoft MVP (Windows desktop experience) since 2003
     

    Ken Blake

  • How to improve the desktop performance subscore for Windows Aero?

    My graphics subscore is 4.8, but gaming graphics is 5.3. I wonder if I can improve the former to match the latter.

    Not really, not without changing your video card, assuming that it is a desktop computer.

  • How to improve the performance of Windows 7 64-bit with 2 TB of data?

    How can I improve my computer, which has 2 TB disks full of stuff?

    I have 8 GB of RAM and an i7 2600 CPU; searching or an AVG scan always takes a long time.

    Yes - well - you said you have 2 TB of stuff, and searches or virus scans may touch every file you have.  Defragmenting and CHKDSK would take time as well - because you have 2 TB of stuff.  However, everyday use of applications, etc. should not be affected by how full your disks are.

    It's like having a nice, well-made filing cabinet... Opening the drawers is not really any harder as long as the drawers are not so full that something can catch on a mechanism - but when you start to look for a certain file, or have some other reason to finger through each page in a certain drawer (or the entire cabinet) - it takes time.

    Your processor speed can help, the amount of memory can help, and having the whole disk indexed will help with searching (as will applications other than those built into Windows - there are better ones out there, IMHO) - but virus-scanning all your files is not something you can speed up much; it will be determined by the speed of the drive, the size and location/organization of the files, and the like.

    If you want the best performance from a drive - get an SSD, or three or more fast SATA drives and RAID them together.  That will help speed these things up.
