optimization techniques: cacheAsBitmap

I read that cacheAsBitmap is beneficial when used on display objects. In my game I have a bitmap, not a vector, that I convert to a clip - the hero's ship, for example. Can I use cacheAsBitmap on it too, even though it's already a bitmap (a PNG)? I have also read that scaling and rotation are OK when using cacheAsBitmap.

Edit:

http://forums.Adobe.com/thread/758774

Just read this post. The information I gleaned was:

a. cacheAsBitmapMatrix is necessary or preferred if you want to rotate and scale the MCs

b. you DO use cacheAsBitmapMatrix even if the MC is a bitmap (PNG)

c. even static images of circles should be cached


However, at the end of the post, it says that if you use a big background, you should just add it from the library as a Bitmap:

var myLibraryBitmap:Bitmap = new Bitmap(new LibraryBitmapSymbol());

Nothing to cacheAsBitmap, and no MovieClip memory overhead either.

It would be incredibly useful if an expert could confirm the above, so that all games can be optimized properly.


All of this is true, but I would like to add a few notes.

cacheAsBitmapMatrix is good when you're not literally rotating/scaling/etc. constantly. It is for an object that mostly only changes its x/y properties but can occasionally rotate or scale. If it's a ship that is constantly turning, the cache will just be regenerated constantly anyway, so I wouldn't even bother with cacheAsBitmapMatrix.
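
For example, a minimal sketch (heroShip is a hypothetical display object; cacheAsBitmapMatrix only takes effect in AIR with GPU rendering):

import flash.geom.Matrix;

// Mostly-static object: cache its rendered surface once.
heroShip.cacheAsBitmap = true;
// Supplying a matrix lets the cached surface be reused under rotation/scaling (AIR/mobile).
heroShip.cacheAsBitmapMatrix = new Matrix();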

Static backgrounds/bitmaps (buttons, graphics, etc.) should always be cached to reduce redrawing.

Huge scrolling backgrounds should use a blitting technique to keep the display list simple; using bitmap images for the background does remove some additional processing. Equally important, backgrounds and all other non-interactive objects should also have mouseChildren = false set so mouse events don't filter through them. Every single object that has no interactive purpose should get this, to drastically reduce event processing.
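
For instance, a minimal sketch (backgroundLayer and decorLayer are hypothetical non-interactive containers; setting mouseEnabled alongside mouseChildren is a common pairing):

backgroundLayer.mouseEnabled = false;   // the container itself ignores mouse events
backgroundLayer.mouseChildren = false;  // its children are skipped during mouse hit-testing
decorLayer.mouseEnabled = false;
decorLayer.mouseChildren = false;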

Finally, keep in mind that cacheAsBitmap is a toggle and works best when you have multiple objects inside a single container. Caching a single object inside a clip isn't really a big win unless it is a vector. But as you've guessed, if you know that a complex object will not change for a period of time, you can turn cacheAsBitmap on. Then, when the object needs to rotate, simply turn it off until you are finished and then turn it back on - it's a toggle.
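
Something like this minimal sketch, where complexClip, startTurning and finishTurning are hypothetical names:

// Cache while the clip sits still.
complexClip.cacheAsBitmap = true;

function startTurning():void
{
    // Switch the cache off before rotating, so the bitmap isn't regenerated every frame.
    complexClip.cacheAsBitmap = false;
}

function finishTurning():void
{
    // Switch it back on once the clip is static again.
    complexClip.cacheAsBitmap = true;
}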

Tags: Adobe Animate

Similar Questions

  • How to optimize the performance of this code?

    I have two movie clips in a Flash project. One is fixed and the other can be moved with the arrow keys of the keyboard. The two clips have irregular shapes, so hitTestObject and hitTestPoint do not work very well; instead I have a function that detects the collision of the two clips using bitmaps. I wanted to update the position of the mobile movie clip, so I put the collision-detection function inside the ENTER_FRAME event listener code. It works fine, but when I add many fixed movie clips (about 10 fixed clips in a frame), the game (.swf file) becomes slower and drags down the performance of the PC. I thought my collision-detection function had a negative effect on performance, so I used the class on this page: https://forums.adobe.com/thread/873737
    but the same thing happens.

    Can you tell me how to speed up the execution of my code?

    Here's the part of my code:

    stage.addEventListener(Event.ENTER_FRAME, myOnEnterFrame);

    function myOnEnterFrame(event:Event):void
    {
        if (doThisFn) // doThisFn allows or prevents the mobile movie clip being moved with the arrow keys
        {
            if (left && !right)
            {
                player.x -= speed;
                player.rotation -= speed;
            }
            if (right && !left)
            {
                player.x += speed;
                player.rotation += speed;
            }
            if (up && !down)
            {
                player.y -= speed;
            }
            if (down && !up)
            {
                player.y += speed;
            }

    The fixed movie clips are wall1, wall2, wall3, wall4, and so on.
    The following code checks how many walls exist on each frame and pushes them into the wallA array:

            for (var i:int = 0; i < 1000; i++) // up to 1000 wall objects can be put in the wallA array
            {
                if (this['wall' + i]) // if the wall object exists, push it into the wallA array
                {
                    wallA.push(this['wall' + i]);
                }
            }

            for (i = 0; i < wallA.length; i++)
            {
                if (h.hitF(player, wallA[i]) || gameOverTest) // check whether the player (the mobile clip) hit a wall
                {
                    trace("second try");
                    gameOver.visible = true;
                    doThisFn = false;
                }
            }

    I think the following code is simple and runs fine; I think the performance problem is due to the code above.


            if (player.hitTestObject(door))
            {
                win.visible = true;
                doThisFn = false;
            }

            if (key) // if there is a key on this frame
            {
                if (player.hitTestObject(key))
                {
                    key.visible = false;
                    switch (currentFrame)
                    {
                        case 4:
                            wallA[0].visible = false;
                            wallA[0].x = 50000;
                            break;
                        case 5:
                            wall14.play();
                            wall8.x = 430;
                            break;
                    }
                }
            }
        }
    }

    It's a simple question that doesn't usually have a simple answer.

    Here is an excerpt from a book I wrote (Flash Game Development: In a Social, Mobile and 3D World).

    Optimization techniques

    Unfortunately, I don't know of any completely satisfactory way to organize this information. In what follows, I discuss memory management first, with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management, with sub-topics listed in alphabetical order.

    That may sound logical, but there are at least two problems with this organization:

    1. I don't think it's the most useful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the memory-management section could also be listed in the CPU/GPU section.

    In any case, I'll also list the information in two other ways: from easiest to most difficult to implement, and from greatest benefit to least.

    Both of these latter lists are subjective and depend on developer experience and capability, as well as the test environment and test situation. I very much doubt there would be a consensus on the order of these lists. However, I think they are still valid.

    Easiest to most difficult to implement

    1. Do not use filters.
    2. Always use reverse for loops and avoid do loops and while loops.
    3. Explicitly stop Timers so they are ready for gc (garbage collection).
    4. Use weak event listeners and remove listeners.
    5. Strictly type variables when possible.
    6. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    7. Replace dispatchEvents with callback functions whenever possible.
    8. Stop Sounds so the Sounds and SoundChannels are ready for gc.
    9. Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR (i.e., mobile) apps.
    11. Reuse objects whenever possible.
    12. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    13. Pool objects instead of creating and gc'ing objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D.

    Greatest benefit to least

    1. Use stage blitting (if there is enough system memory).
    2. Use Stage3D.
    3. Use partial blitting.
    4. Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    5. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    6. Do not use filters.
    7. Use the most basic DisplayObject needed.
    8. Reuse objects whenever possible.
    9. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    10. Use reverse for loops and avoid do loops and while loops.
    11. Pool objects instead of creating and gc'ing objects.
    12. Strictly type variables when possible.
    13. Use weak event listeners and remove listeners.
    14. Replace dispatchEvents with callback functions whenever possible.
    15. Explicitly stop Timers so they are ready for gc.
    16. Stop Sounds so the Sounds and SoundChannels are ready for gc.
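
    To make a few of the items above concrete (weak listeners, reverse for loops and object pooling), here is a minimal sketch; onTick, bullets, bulletPool and the helper functions are hypothetical names:

    import flash.display.Sprite;
    import flash.events.Event;

    // Weak ENTER_FRAME listener, also removed explicitly when no longer needed.
    stage.addEventListener(Event.ENTER_FRAME, onTick, false, 0, true);

    var bullets:Vector.<Sprite> = new Vector.<Sprite>();
    var bulletPool:Vector.<Sprite> = new Vector.<Sprite>();

    function onTick(e:Event):void
    {
        // Reverse for loop: safe while removing items, and avoids do/while loops.
        for (var i:int = bullets.length - 1; i >= 0; i--)
        {
            if (bullets[i].y < 0)
            {
                recycleBullet(bullets[i]);
                bullets.splice(i, 1);
            }
        }
    }

    // Object pooling: reuse instances instead of creating new ones and letting gc collect them.
    function getBullet():Sprite
    {
        return bulletPool.length > 0 ? bulletPool.pop() : new Sprite();
    }

    function recycleBullet(b:Sprite):void
    {
        if (b.parent) b.parent.removeChild(b);
        bulletPool.push(b);
    }

    function stopGame():void
    {
        stage.removeEventListener(Event.ENTER_FRAME, onTick);
    }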

  • Optimization of the rules

    Hello

    I want to know ways to optimize rule modules. Can someone list some optimization techniques? Which ones can really speed up determinations substantially? Thank you

    ...and to close the loop on what this has to do with optimizing performance: this is my opinion.

    I think that when Jasmine says a rule is a 'BAD' rule (careful, I don't want to put words in her mouth), she means it not only from a rule-maintenance point of view.

    First, the OPA engine optimizes the execution path; it doesn't just use every bit of 'seed' data and create a static rule execution plan.

    Consider that when you don't constrain a SQL statement enough in the where clause, a query may return several answers and perform poorly. OPA, however, can still make a unique determination given incomplete sets of base attributes (unknowns and uncertains). OPA can provide explanations that include which base attributes must still be provided (OPA has both forward and backward chaining). In short, OPA may use only a small set of base data to determine a result, even if OPA receives a lot of seed data. In this respect, Oracle's internal OPA dev team provides performance optimizations to the engine itself so that it finds an optimal execution path. This path can and will change dynamically.

    Traditionally, tuning the execution path is what .NET/Java developers worry about, and most often that is the perspective developers bring when they ask questions about optimizing OPA performance. So my reply may seem unsatisfactory to anyone who is not involved in the organization's actual policies.

    As a second consideration, OPA's strengths include modeling and optimizing the policy itself, which rarely concerns the developer. Did the business require intermediate determinations, and did OPA say those determinations were needed when they were not (perhaps due to a bad mix of procedural rules / background)? Can what is needed for a new or improved policy determination be reduced, or can base dependencies be removed?

    So, if the policy is already optimized, then OPA will not be slower than any other execution method that requires the same information, and will probably be much, much faster. I think that addresses Brad's question above...

    A developer usually should not change the policy, but policy changes can have the most impact on performance for the end user and the business. Jasmine's guidelines provide something of a foundation for writing policy using proven methods of policy organization and readability. This gives non-developers visibility into policy changes. Thus, when creating policy documents, we (or at least I) follow the guidelines, devote extra effort to exploiting OPA's isomorphism, and make policy work visible to policy people.

    This is observation, and I am speaking in generalizations. (Once the policy is well written, by the way, if I have a problem that is not immediately obvious, I just query the engine for the intermediate attributes to find out where my problem lies - something I should have test cases for in Excel and/or SoapUI. Be careful, though, as a perceived need for performance optimization may really indicate a need for something else entirely...) If the answer is still not satisfactory, though I hope not, is there an example policy and source documents that can be provided to get advice on tuning? I would recommend starting a new thread in the forum for that.

  • The HTML output is slow when rendering in the browser

    The HTML output renders quite slowly in the browser. The actual duration in Animate is 12 seconds; when the HTML is displayed, it takes up to 26 seconds.

    Performance optimization is a complex subject. This is an excerpt from Flash Game Development: In a Social, Mobile and 3D World; it deals with swf performance, not HTML5, but it may still be useful, especially where you are managing vector shapes.

    Optimization techniques

    Unfortunately, I don't know of any completely satisfactory way to organize this information. In what follows, I discuss memory management first, with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management, with sub-topics listed in alphabetical order.

    That may sound logical, but there are at least two problems with this organization:

    1. I don't think it's the most useful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the memory-management section could also be listed in the CPU/GPU section.

    In any case, I'll also list the information in two other ways: from easiest to most difficult to implement, and from greatest benefit to least.

    Both of these latter lists are subjective and depend on developer experience and capability, as well as the test environment and test situation. I very much doubt there would be a consensus on the order of these lists. However, I think they are still valid.

    Easiest to most difficult to implement

    1. Do not use filters.
    2. Always use reverse for loops and avoid do loops and while loops.
    3. Explicitly stop Timers so they are ready for gc (garbage collection).
    4. Use weak event listeners and remove listeners.
    5. Strictly type variables when possible.
    6. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    7. Replace dispatchEvents with callback functions whenever possible.
    8. Stop Sounds so the Sounds and SoundChannels are ready for gc.
    9. Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR (i.e., mobile) apps.
    11. Reuse objects whenever possible.
    12. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    13. Pool objects instead of creating and gc'ing objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D.

    Greatest benefit to least

    1. Use stage blitting (if there is enough system memory).
    2. Use Stage3D.
    3. Use partial blitting.
    4. Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    5. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    6. Do not use filters.
    7. Use the most basic DisplayObject needed.
    8. Reuse objects whenever possible.
    9. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    10. Use reverse for loops and avoid do loops and while loops.
    11. Pool objects instead of creating and gc'ing objects.
    12. Strictly type variables when possible.
    13. Use weak event listeners and remove listeners.
    14. Replace dispatchEvents with callback functions whenever possible.
    15. Explicitly stop Timers so they are ready for gc.
    16. Stop Sounds so the Sounds and SoundChannels are ready for gc.

  • Animate animation jerky in chrome 51 CC

    I created an ad that works smoothly on chrome 50 but on chrome 51, it is very choppy.

    I used a mix of vector components as well as a few PNG images. I can't figure out what the problem is.

    Can someone please help?

    Performance optimization is a complex subject. This is an excerpt from Flash Game Development: In a Social, Mobile and 3D World.

    Optimization techniques

    Unfortunately, I don't know of any completely satisfactory way to organize this information. In what follows, I discuss memory management first, with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management, with sub-topics listed in alphabetical order.

    That may sound logical, but there are at least two problems with this organization:

    1. I don't think it's the most useful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the memory-management section could also be listed in the CPU/GPU section.

    In any case, I'll also list the information in two other ways: from easiest to most difficult to implement, and from greatest benefit to least.

    Both of these latter lists are subjective and depend on developer experience and capability, as well as the test environment and test situation. I very much doubt there would be a consensus on the order of these lists. However, I think they are still valid.

    Easiest to most difficult to implement

    1. Do not use filters.
    2. Always use reverse for loops and avoid do loops and while loops.
    3. Explicitly stop Timers so they are ready for gc (garbage collection).
    4. Use weak event listeners and remove listeners.
    5. Strictly type variables when possible.
    6. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    7. Replace dispatchEvents with callback functions whenever possible.
    8. Stop Sounds so the Sounds and SoundChannels are ready for gc.
    9. Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR (i.e., mobile) apps.
    11. Reuse objects whenever possible.
    12. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    13. Pool objects instead of creating and gc'ing objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D.

    Greatest benefit to least

    1. Use stage blitting (if there is enough system memory).
    2. Use Stage3D.
    3. Use partial blitting.
    4. Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    5. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    6. Do not use filters.
    7. Use the most basic DisplayObject needed.
    8. Reuse objects whenever possible.
    9. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    10. Use reverse for loops and avoid do loops and while loops.
    11. Pool objects instead of creating and gc'ing objects.
    12. Strictly type variables when possible.
    13. Use weak event listeners and remove listeners.
    14. Replace dispatchEvents with callback functions whenever possible.
    15. Explicitly stop Timers so they are ready for gc.
    16. Stop Sounds so the Sounds and SoundChannels are ready for gc.
  • Why does the animation stutter even though CPU usage oscillates between 60 and 90%?

    I made a simple animation.

    In the standalone player it is seamless, but in the browser it stops and jumps every few seconds, even though CPU usage never exceeds 100% and ranges between 60 and 90%.

    I tested it on two machines, both dual-core above 4 GHz, and the result is the same.

    Any ideas why, and what it would take to fix it?

    Link: lucidwork.com/emma2/index2.php

    Regards

    S.J.

    This is an excerpt from a chapter (on game performance optimization) of a book I wrote (Flash Game Development: In a Social, Mobile and 3D World); it is intended to show that this is not necessarily a simple subject that lends itself to a forum fix.

    Optimization techniques

    Unfortunately, I don't know of any completely satisfactory way to organize this information. In what follows, I discuss memory management first, with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management, with sub-topics listed in alphabetical order.

    That may sound logical, but there are at least two problems with this organization:

    1. I don't think it's the most useful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the memory-management section could also be listed in the CPU/GPU section.

    In any case, I'll also list the information in two other ways: from easiest to most difficult to implement, and from greatest benefit to least.

    Both of these latter lists are subjective and depend on developer experience and capability, as well as the test environment and test situation. I very much doubt there would be a consensus on the order of these lists. However, I think they are still valid.

    Easiest to most difficult to implement

    1. Do not use filters.
    2. Always use reverse for loops and avoid do loops and while loops.
    3. Explicitly stop Timers so they are ready for gc (garbage collection).
    4. Use weak event listeners and remove listeners.
    5. Strictly type variables when possible.
    6. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    7. Replace dispatchEvents with callback functions whenever possible.
    8. Stop Sounds so the Sounds and SoundChannels are ready for gc.
    9. Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR (i.e., mobile) apps.
    11. Reuse objects whenever possible.
    12. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    13. Pool objects instead of creating and gc'ing objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D.

    Greatest benefit to least

    1. Use stage blitting (if there is enough system memory).
    2. Use Stage3D.
    3. Use partial blitting.
    4. Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    5. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    6. Do not use filters.
    7. Use the most basic DisplayObject needed.
    8. Reuse objects whenever possible.
    9. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    10. Use reverse for loops and avoid do loops and while loops.
    11. Pool objects instead of creating and gc'ing objects.
    12. Strictly type variables when possible.
    13. Use weak event listeners and remove listeners.
    14. Replace dispatchEvents with callback functions whenever possible.
    15. Explicitly stop Timers so they are ready for gc.
    16. Stop Sounds so the Sounds and SoundChannels are ready for gc.

  • Help me optimize my game

    I'm making a game, but the animation lags a lot. How do I optimize my game?
    I'll accept code without packages.

    I devote an entire chapter (56 pages) to game optimization in http://www.amazon.com/Flash-Game-Development-Social-Mobile/dp/1435460200/ref=sr_1_1?ie=UTF8&qid=1389454383&sr=8-1&keywords=gladstien

    It won't be possible to paste it all into this forum,

    but here's an excerpt from this chapter:

    Optimization techniques

    Unfortunately, I don't know of any completely satisfactory way to organize this information. In what follows, I discuss memory management first, with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management, with sub-topics listed in alphabetical order.

    That may sound logical, but there are at least two problems with this organization:

    1. I don't think it's the most useful way to organize this information.
    2. Memory management affects CPU/GPU usage, so everything in the memory-management section could also be listed in the CPU/GPU section.

    In any case, I'll also list the information in two other ways: from easiest to most difficult to implement, and from greatest benefit to least.

    Both of these latter lists are subjective and depend on developer experience and capability, as well as the test environment and test situation. I very much doubt there would be a consensus on the order of these lists. However, I think they are still valid.

    Easiest to most difficult to implement

    1. Do not use filters.
    2. Always use reverse for loops and avoid do loops and while loops.
    3. Explicitly stop Timers so they are ready for gc (garbage collection).
    4. Use weak event listeners and remove listeners.
    5. Strictly type variables when possible.
    6. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    7. Replace dispatchEvents with callback functions whenever possible.
    8. Stop Sounds so the Sounds and SoundChannels are ready for gc.
    9. Use the most basic DisplayObject needed.
    10. Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR (i.e., mobile) apps.
    11. Reuse objects whenever possible.
    12. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    13. Pool objects instead of creating and gc'ing objects.
    14. Use partial blitting.
    15. Use stage blitting.
    16. Use Stage3D.

    Greatest benefit to least

    1. Use stage blitting (if there is enough system memory).
    2. Use Stage3D.
    3. Use partial blitting.
    4. Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    5. Explicitly disable mouse interactivity when mouse interactivity is not needed.
    6. Do not use filters.
    7. Use the most basic DisplayObject needed.
    8. Reuse objects whenever possible.
    9. Event.ENTER_FRAME loops: use as few different listeners and listener functions applied to DisplayObjects as possible.
    10. Use reverse for loops and avoid do loops and while loops.
    11. Pool objects instead of creating and gc'ing objects.
    12. Strictly type variables when possible.
    13. Use weak event listeners and remove listeners.
    14. Replace dispatchEvents with callback functions whenever possible.
    15. Explicitly stop Timers so they are ready for gc.
    16. Stop Sounds so the Sounds and SoundChannels are ready for gc.
  • Database Link Performance

    Hey Geeks,

    My project needs to extract information from a remote database with a huge amount of data through the local database (most of it fixed data). I've implemented it with the following approach:

    1 > Created the local user account.

    2 > Created views for all tables with fixed data in the remote database using database links,

    for example: CREATE OR REPLACE VIEW REMOTE_TABLE_NAME AS SELECT * FROM REMOTE_TABLE_NAME@DATABASELINK;

    3 > This way, I am able to implement abstraction: the developer cannot tell whether the REMOTE_TABLE_NAME object is a view in the local user account or the table in the remote database. The application behaves as if all the data were present locally. So far, so good.

    The real problem begins when performance comes into the picture. When running data-mining jobs that use tables on the remote DB (now accessed through the dblink views in the local database), I found that some of the jobs take 4 to 5 times as long as when the same test is run directly on the remote database.

    I tried several things, like:

    1 > Using optimizer hints when creating the views over the remote tables in the local account, for example

    CREATE VIEW "REMOTE_TABLE_NAME" AS SELECT /*+ DRIVING_SITE(REMOTE_TABLE_NAME) */ * FROM REMOTE_TABLE_NAME@DBLINK;

    2 > Using hints when querying the remote tables through the views in the local account, for example

    SELECT /*+ DRIVING_SITE(REMOTE_TABLE_NAME) */ * FROM REMOTE_TABLE_NAME; -- here "REMOTE_TABLE_NAME" is the view in the local account

    3 > Using hints when querying the remote tables directly over the dblink, hard-coding the table name, for example

    SELECT /*+ DRIVING_SITE(REMOTE_TABLE_NAME) */ * FROM REMOTE_TABLE_NAME@DBLINK; -- here "REMOTE_TABLE_NAME" is the table in the remote account

    I tried many other things, but could not see improved performance.

    Any suggestions?

    Thank you

    Amrit Pandey

    In the original scenario, where the remote table was in a different database, I tried several possible optimization techniques but could not see a substantial improvement in performance. Later, as an experiment (as I mentioned in my previous post), I tried fetching a local table from another schema over a database link, to cut out the network aspects and focus on the effect of the database link itself.

    To be precise, the queries above are just to observe the overhead of the database link as part of an experiment. In the real scenario the tables will be in the remote database, where network latency will also play its part in hurting performance.

  • Block prefetching confusion

    Hello Experts,

    I have been reading Troubleshooting Oracle Performance by Christian Antognini for a while. In the context of join optimization, I came across a term called "block prefetching". The book says the following about it:

    "In order to improve the effectiveness of nested loops joins, the database engine is able to take advantage of block prefetching. This optimization technique is intended to replace several single-block physical reads performed on adjacent blocks with a single multiblock physical read. This is true for both tables and indexes."

    In addition, he also mentions that looking at an execution plan cannot tell you whether the database engine will use prefetching.

    As far as I know, multiblock I/O (also known as the db file scattered read event) is only used for FULL TABLE SCAN and FAST FULL INDEX SCAN. Other types of reads are supposed to be single-block I/O (the db file sequential read event). So, for example, how can an ordinary INDEX RANGE SCAN use multiblock I/O?

    I'm just trying to understand the logic behind it. Can anyone shed some light on this?

    Regards

    Charlie

    A nested loops join is like a loop: for each row in the outer/driving rowsource, probe the inner/probed rowsource.

    So, in general, we expect nested loops processing to work along these lines:

    for each row in the outer/driving rowsource                            (1)
        do an indexed lookup on the inner/probed rowsource                 (2)
        then use the rowid(s) from the index to do a lookup on the table   (3)
    end loop;

    Prefetching can help optimize the physical I/O needed in steps 2 and 3.

    For example, for the index lookup: in older versions, as you point out, any physical I/O request for such an INDEX RANGE SCAN would be a db file sequential read of a single index block.

    The index prefetching optimization allows Oracle to anticipate the other physical index I/O calls it might have to make in future iterations of the loop and therefore, instead of reading only the block it needs now, to read several blocks so that they are already in the buffer cache when required.

  • Photoshop CC hangs annoyingly often

    It happens especially when I run a specific action (strong noise reduction). I tried a few optimization techniques, but nothing seems to work. Any ideas? Thank you.

    This means that your graphics card is not communicating well with Photoshop.

    Please make sure your graphics card is in the supported-cards list linked below; if it is listed, then you need to update your graphics card driver in order to use it with Photoshop.

    Photoshop GPU Troubleshooting FAQ

    Regards

    Sarika

  • Can Generator layers with .png24 have transparency in CC 2015?

    I can get the layer to save out with transparency if I use png8 and png32, but not when I use png24.

    Are there settings I need to change in the preferences for this?

    Thank you.

    The reason PNG32 carries that number: an additional 8 bits are required for transparency on top of the 24 bits of color: 24 + 8 = 32.

    PNG24 cannot contain transparency information - only color.

    PNG8 supports both indexed transparency (only 1-bit transparency: on or off) and alpha transparency (up to 256 levels of transparency).

    If minimum file size and maximum quality are important to you, you will need to look elsewhere: Color Quantizer gives much better results, with advanced yet simple control, compared to Photoshop in this case.

    http://x128.HO.UA/color-quantizer.html

    If you want to squeeze the last bit of file size out of Photoshop's PNG options, read these articles:

    http://www.smashingmagazine.com/2009/07/15/clever-PNG-optimization-techniques/

    http://www.smashingmagazine.com/2009/07/25/PNG-Optimization-Guide-more-clever-techniques/

    These were written a while ago, but it is possible to export transparent 8-bit PNG images from Photoshop with Generator.

    Yet, CQ is much more convenient and will still result in smaller files and better quality.

  • TimesTen runs more slowly than Oracle RDBMS

    Hello

    I installed TimesTen, and I just wanted to compare the performance of the following PL/SQL block on TimesTen with the same block on Oracle.

    declare
        temp_date   date;
        temp_date1  date;
        my_id       number;
        my_data     varchar2(200);

        cursor c1 is
            select MASTER_ID, MDATA
            from AKS_TAB_MASTER;

        cursor c2 (p_id number) is
            select DETAIL_ID, DDATA
            from AKS_TAB_DETAIL
            where master_id = p_id;
    begin
        for t in c1 loop
            open c2(t.master_id);
            fetch c2 into my_id, my_data;
            insert into aks_temp values (t.master_id, my_id, t.MDATA, my_data);
            close c2;
        end loop;
    end;

    I created a cache group in TimesTen for caching the AKS_TAB_MASTER & AKS_TAB_DETAIL tables.

    I created the AKS_TAB_DETAIL table in Oracle and TimesTen separately to avoid transmission.

    In some cases, TimesTen takes 4 times longer than Oracle.

    I went through the TimesTen Database Performance Tuning link and my database settings are as follows:

    Permanent Data Size: 640
    Temporary Data Size: 300
    Replication Parallel Buffer (MB): 480
    Log File Size (MB): NULL
    Log Buffer Size (MB): 320
    Cache AWT Method: 1 - PLSQL
    Cache AWT Parallelism: NULL
    PL/SQL Connection Memory Limit (MB): 320
    PL/SQL Optimization Level: 2
    PL/SQL Memory Size (MB): 240
    PL/SQL Timeout (seconds): 600

    I still get poor performance from TimesTen.

    Any idea what could be wrong with my instance?

    Please suggest.

    Thank you

    Amit

    I was just looking at the info you posted and I was about to point out the missing foreign key. In TimesTen, defining a primary or foreign key results in an index being created. You would have seen an improvement even if you simply created an index on MTAX.AKS_TAB_DETAIL (MASTER_ID). Without this index, each execution of the cursor c2 query was a full table scan of AKS_TAB_DETAIL (which is obviously much slower than indexed access).

    TimesTen is an in-memory database; you still need to apply the usual database optimization techniques, of which correct indexing is very important.

    Chris

  • Question for the experts: SVG vs GIF

    Hello everyone

    Let's say I have a website that is filled with a variety of simple and complex, smooth white icons. These icons will be resized in Muse where necessary; sometimes they will have reduced opacity. The site is heavy on parallax scrolling and fading (opacity), and most - if not all - of the icons will use one or two scroll effects.

    Exporting from Illustrator, the GIFs are sometimes smaller in size than the SVGs.

    The target audience is assumed to have decent, up-to-date computers, so e.g. IE 8 and below are not relevant. Assume all users have a browser able to handle SVG and parallax.

    What is better or easier on the browser, SVGs or GIFs? Even if the GIFs are smaller?

    Does quite complex SVG (many anchor points) render 'harder' than GIF?

    Any reason not to prefer SVG, other than backwards compatibility with older browsers/machines?

    What would be the best practice here?

    Any discussion/advice greatly appreciated.

    Jorge Vallejo

    First, GIF should be avoided in favour of PNG: a well-optimized PNG is always smaller in file size and also offers more quality options (for example, more than the 256 colors GIF allows). The problem is that Adobe products cannot achieve this without jumping through a LOT of hoops. More information below.

    Here is a bitmap vs. SVG performance comparison:

    (1) SVG files must be parsed & rendered in the browser. This means the XML data has to be read, interpreted and displayed - which is much slower than a bitmap. A PNG only has to be decompressed, which does not take much processing power. Rendering an SVG is much slower (incomparably slower, depending on the complexity).

    (2) Bitmaps are usually hardware accelerated (via the GPU); a number of SVG operations are not. For example, I believe CSS transforms for SVG are not hardware accelerated, so animating with that method is very slow.

    (3) Some older browsers have trouble rendering a lot of individual SVG objects. For example, this problem was only solved in Firefox in March 2014.

    (4) SVG performance depends heavily on the specific browser as well. Take this test:

    http://jsperf.com/D3-SVG-opacity

    It runs fastest in Opera and IE 11 on my system! Firefox is terribly slow.

    (5) SVG rendering is much slower on mobile platforms. Unless they are very simple, convert them to PNG files.

    Now, regarding your files: a 500 KB SVG is a LOT - and you mention that you will have more SVG files from less than 100 KB up to more than 200 KB. My question would be what kind of graphics they will be on your page: background graphics, icons, ...?

    In addition, SVG images produced by Illustrator can (and should) be optimized. A good tool for this is SVGO: svg/svgo · GitHub. A visual version can be downloaded at svgo/svg-gui · GitHub.

    A few must-read articles:

    http://CSS-tricks.com/using-SVG/

    http://jaydenseric.com/blog/how-to-optimize-SVG

    http://kylefoster.me/SVG-slides/#/

    If you do a lot of animation, and there is a lot else happening on a page, then PNG is the way to go. SVG is preferable for icons and simple logo graphics. Game developers using Flash always convert all their vector assets into bitmaps for much better performance in Flash!
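
    As a rough sketch of that vector-to-bitmap conversion in ActionScript 3 (rasterize and vectorClip are hypothetical names):

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.DisplayObject;
    import flash.geom.Matrix;
    import flash.geom.Rectangle;

    // Draw a vector clip once into a BitmapData so the player no longer has to
    // re-rasterize the vectors every frame.
    function rasterize(vectorClip:DisplayObject):Bitmap
    {
        var bounds:Rectangle = vectorClip.getBounds(vectorClip);
        var data:BitmapData = new BitmapData(int(Math.ceil(bounds.width)), int(Math.ceil(bounds.height)), true, 0x00000000);
        var offset:Matrix = new Matrix(1, 0, 0, 1, -bounds.x, -bounds.y); // shift the art into the bitmap's area
        data.draw(vectorClip, offset, null, null, null, true);            // smoothing enabled
        var bmp:Bitmap = new Bitmap(data, "auto", true);
        bmp.x = bounds.x;  // keep the bitmap where the vector art used to sit
        bmp.y = bounds.y;
        return bmp;
    }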

    Now to my second point: Adobe's Save for Web function is catastrophic for PNG optimization. Yes, there are Photoshop techniques to improve the optimization of PNG images, but it takes a lot of effort:

    http://www.smashingmagazine.com/2009/07/15/clever-PNG-optimization-techniques/

    PNG Optimization Guide: more intelligent Techniques - Smashing Magazine

    And even after all that work, you still cannot export a PNG image with a limited color palette and full alpha transparency from Photoshop or Illustrator. This matters for file size: Muse websites in particular tend to be so bloated that they take a long time to load - and there's really no reason for that if you optimize your graphics properly.

    To optimize PNG files, use Color Quantizer:

    Color quantization

    I can't live without this small freeware tool - it is worth booting your Mac into Windows mode just to use CQ to optimize your PNG files. It can automatically optimize your PNG files, reducing them to a minimum - compared with Photoshop's standard Save for Web disaster, files come out at least 50% smaller or more. And it includes a very easy-to-use quality-mask brush that lets you control/fix areas that can get ugly when the number of colors is reduced.

    That said, it really depends on the type of graphics you intend to use. Could you post some examples?

  • Performance tests: bind variables

    When writing SQL for the application, I want to do some performance testing of the SQL before handing it to the Java developers. I understand that I need to test using bind variables; can someone guide me on how to do that? What are the best tools out there for this? Please provide some guidance. Thank you!

    Rinne says:
    I have read more about bind variables and I can test using bind variables. I understand that testing SQL using bind variables is a closer representation of the real world. Even when queries involve large tables, would the performance be similar if I ran the query using literals twice (to avoid the hard parse) compared to testing with bind variables? I am trying to understand more thoroughly the need for testing with bind variables. Why would one take more time than the other? Thank you!

    The main thing is that the optimizer will/can use different optimization techniques depending on whether bind variables are used or not.
    There are two contradictory effects regarding bind parameters.

    Effect 1: Cursor reuse
    If the same statement is executed again and again and again, maybe from different sessions, then using bind parameters is essential. Why? Because the same cursor can be reused, which saves a lot of parse time. This matters mainly in OLTP systems. The idea behind this is that doing the same action, just for a different ID (an order ID, for example), will result in an identical execution plan.

    The result: faster parse times and less memory consumption, because the same cursor can be reused between different sessions.

    Effect 2: Correct estimates

    Based on a filter expression, the CBO will make an assumption, using statistics and the supplied value, about how many rows will be returned for that expression and value.
    The estimate for a literal value and for a bind value may differ. In many cases optimization techniques exist (bind peeking etc.) to make the bind-value estimate match the literal one.

    But there are exceptions; for example, a condition such as the following will result in different estimates:

    column between 10 and 20
    
    column between :P1 and :P2
    

    There are other effects as well.

    Result: the CBO can make better assumptions with literals, but in most cases the estimates are identical.

    Conclusion: literal values are useful if you run large queries where the result size depends strongly on the parameters provided, and where you run only a few of these queries (OLAP).
    Bind parameters are useful when a large number of queries need the same execution plan (OLTP).

  • Tuning queries

    (1) When using Toad for SQL tuning, it adds + 0 to the where-clause column.
    What is the point of adding zero?
    (2) One last thing: I have observed that it rewrites joins.
    For example:
    SELECT *
    FROM TableA a,
         TableB b,
         TableC c,
         deposited d
    WHERE a.CREATED = b.USER
      AND a.UPDATED = c.USER
      AND a.BOOK_ID = d.BOOK_ID (+)

    Toad rewrites the query as follows:


    SELECT *
    FROM TableA a1,
         TableB b1,
         TableC c1,
         deposited d1
    WHERE a1.CREATED = b1.USER
      AND a1.UPDATED = c1.USER
      AND a1.BOOK_ID = d1.BOOK_ID
    UNION ALL
    SELECT *
    FROM TableA a3,
         TableB b3,
         TableC c3
    WHERE a3.CREATED = b3.USER
      AND a3.UPDATED = c3.USER
      AND NOT EXISTS (SELECT 'X'
                      FROM deposited d3
                      WHERE a3.BOOK_ID = d3.BOOK_ID)


    (3) Can I join the main view with another view to generate reports, or join the main view directly with the other view's tables? In my requirement, using the main view is mandatory and the use of the other view is optional.

    Thank you
    Kiran

    1 & 2 appear to be associated with rules of thumb and optimization techniques that are at least 10 years old.
    I wouldn't trust any such tool to provide 'reliable' advice.

    Adding + 0 to a numeric column, or concatenating a null value to a varchar2 column, prevents an index on that column from being considered.

    Rewriting the query to use a UNION ALL operation is done to avoid an OUTER JOIN, which people used to advise avoiding at all costs.

    Heuristics developed that long ago are rarely effective, and there is a reason the optimizer is no longer rule-based.

    (3) Can I join the main view with another view to generate reports, or join the main view directly with the other view's tables? In my requirement, using the main view is mandatory and the use of the other view is optional.

    You can join the main view with another view,
    or directly with the other views' tables.
    Whether you need to do one or the other depends on the specific circumstances.
