HP 15-af114au: GPU memory

How much GPU memory does the HP 15-af114au have? I've heard it has an AMD A8-7410 quad-core APU with Radeon R5 graphics. Is this a 2 GB graphics card? Don't the specs of the HP 15-af114au match the HP 15-af008ax 100%? If not, what is the difference?

It is not fixed video memory as with a dedicated video card. The processor is an APU, so the graphics are actually rendered inside the processor. The memory is shared with the system and is variable. It seems that the two laptops you mentioned have the same processor, which means they have the same graphics.

The graphics are not particularly powerful:

http://cpuboss.com/CPUs/Intel-Core-i5-5200U-vs-AMD-A8-7410
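
If you want to see what Windows currently reports for the adapter, one option is to query WMI from the command line. Below is a minimal sketch, assuming a Windows system where the legacy wmic tool is available; keep in mind that for shared-memory APUs the reported AdapterRAM is usually only the statically reserved portion, while the driver can borrow more system RAM on demand.

    # Minimal sketch: list video adapters and the memory Windows reports for them.
    # Assumes the legacy "wmic" tool is present; on shared-memory APUs AdapterRAM
    # is only the reserved portion, not the dynamic maximum.
    import subprocess

    output = subprocess.check_output(
        ["wmic", "path", "Win32_VideoController",
         "get", "Name,AdapterRAM", "/format:list"],
        text=True,
    )

    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Name="):
            print("Adapter:", line.split("=", 1)[1])
        elif line.startswith("AdapterRAM=") and line.split("=", 1)[1].isdigit():
            print("Reported memory: %.0f MB" % (int(line.split("=", 1)[1]) / 1024 ** 2))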

Tags: Notebooks

Similar Questions

  • Thermal pad thickness, brand, and type for the GPU memory (surrounding the GPU chip) on a late 2009 iMac?

    Hello. I'll be repairing (and cleaning, after 6 years) a late 2009 iMac, because the screen shows five 2 cm wide, uniformly distributed stripes of vertical lines (it is still usable in safe mode). This suggests the GPU chip itself is good (it still passes all the GPU and GPU memory stress tests), but that a subcomponent surrounding it has a fractured solder joint, because it may have overheated in the summer (which had record highs of 36 to 42 degrees Celsius) and OS X may not have revved up the fans enough, which could have contributed to the solder joint breaking.

    When I open the iMac on Monday, the thermal pads for the GPU memory chips (which surround the GPU chip, on which I use thermal paste) may no longer be in good condition. I was wondering what pad thickness is suitable for the GPU memory of this iMac (an ATI Radeon HD 4850 512 MB), and what brand and type (though the thickness is perhaps the most important)?

    Thank you. God bless you. Revelation 21:4

    I would contact https://www.ifixit.com/

    If anyone should know...

  • Does the Satellite Pro A100-828 have a 64-bit memory interface on the X1600 GPU?

    Hello

    Does the Satellite Pro A100-828 have a 64-bit memory interface on the X1600 GPU? Please help: I only get 2500 points in 3DMark05 on a Satellite Pro A100-828. The GPU is an X1600 with 128 MB of dedicated memory plus 384 MB of HyperMemory. This card should score approximately 3500 marks.

    Thank you

    Shahid

    I searched the web, and it seems that the X1600 supports a 128-bit interface.
    I have not found any X1600 with a 64-bit interface, so I think it is 128-bit.

  • Satellite A110-195: what does it mean that the GPU memory can be up to 128 MB?

    What does it mean that the memory of my graphics card can be up to 128 MB? How do I upgrade the graphics memory? My laptop is a Toshiba Satellite A110-195 with the Mobile Intel® 945GM Express chipset.

    Hello

    The graphics specification for your laptop is described as follows: 8 MB - 128 MB shared memory. This means the graphics chip uses Intel's shared memory technology.

    In short: the video chipset on the motherboard 'shares' video memory with your laptop's RAM, which reduces the RAM available to the rest of the system and its processes.

    If you want more information on shared memory technology, Google around and you'll find many interesting pages.

  • Upgrading an old Dell Dimension 2400: CPU, GPU, memory, etc.

    Mainly interested in a CPU upgrade at this time. I downloaded CPU-Z to determine my configuration, and I'm providing this stitched-together set of screenshots from the program:

    I just upgraded the CPU of my friend's Dimension 5150 successfully. I would like to do the same on my 2400, a much more modest machine. This computer is mainly used for casual internet browsing, and sometimes I watch a YouTube video. The only programs that really tax the CPU are my disk defragmenter and CCleaner. When I have a big internet browsing session with 100+ tabs, the CPU also maxes out.

    I want to max out this computer using cheap upgrade parts that I can buy on eBay. For my friend's 5150, I bought a processor upgrade for $5 delivered and thermal paste for 80 cents on eBay. I wouldn't mind buying a cheap video card and processor to upgrade this old workhorse.

    sva50233

    The Dimension 2400 can use a 400 MHz or 533 MHz FSB, Socket 478, Pentium 4 Northwood or Celeron processor with a 512 KB or smaller cache.

    You cannot use any Prescott processors [1 MB cache].

    The maximum supported processor is a Pentium 4 Northwood, 533 MHz FSB, Socket 478, at 3.06 GHz.

    Before upgrading the processor, make sure you have the latest BIOS version installed.

    As for the video card, only PCI cards are supported; no PCI Express cards are supported.

    The Dimension 2400 supports a maximum of 2 GB (2 x 1 GB modules) of DDR memory.

    Bev.

  • Total memory vs. GPU memory

    Scout splits memory usage between GPU memory and total memory, but the iPhone/iPod touch/iPad all have a unified memory architecture, which means the CPU and GPU share system memory, i.e. there is no dedicated GPU memory on these devices.

    So in this context, how should I interpret total memory vs. GPU memory when profiling on an iOS device? For example, if I want to determine the total memory used by my game, do I have to manually add total memory + GPU memory?

    The "total memory" reported in Scout is not the same as the memory usage of processes reported by the operating system - it is the total memory that Flash Player knows is attributed to him. It will be a little smaller than the process memory, because it does not include some memory that is allocated by the operating system on behalf of Flash Player (for example, the resources of the BONE). It includes no memory GPU, even if behind the scenes, they use the same memory system, because it follows that benefits memory CPU by Flash Player. So, to get the total memory, you must add CPU and GPU memory.

  • Adjustable GPU memory allocation for the NB550D

    So after upgrading the memory in my NB550D to a 4 GB module (3 GB usable), the netbook flies, but I noticed the HD 6250 graphics takes 1 GB of memory for itself. Obviously such an entry-level budget GPU doesn't need even half that amount, especially since the screen is only 1024 x 600. So if an admin is reading this, I was wondering whether an option could be added in a future BIOS to adjust the amount of GPU memory?

    I'd rather have the GPU use approximately 256 MB and the rest go toward system memory; it does not require that much memory.

    Hey,

    Who knows whether Toshiba will release a BIOS where you can adjust the graphics memory, but such a function isn't known to me and I doubt it will happen.
    The graphics memory will always be controlled automatically and there is no value you can change.

    I mean, even 3 GB of RAM is really enough for netbook performance and I doubt you need more; and even if some programs need more RAM, you can use the virtual memory on the hard drive.

  • How to extend the display card memory on a Satellite Pro A120?

    I would like to expand the memory used by the graphics card because at this stage it's only 8 MB; I want to make it 64 MB. The total memory in the laptop is 512 MB. Normally you can do this in the BIOS, but I can't find any option to do it.

    Does anyone have an idea how to proceed?

    What graphics card do you have?
    Is it an ATI or nVidia GPU?

    To my knowledge, the graphics memory depends on the size of the main memory.
    The larger the main memory, the larger the GPU memory.

    However, if your graphics card supports shared memory, then the graphics driver controls the amount of GPU memory automatically.
    In other words, if the GPU memory is not required, the shared GPU memory is set to a lower value.

  • Memory and graphics upgrade - Satellite A30

    Hi all!

    As I'm sure you all know, my Satellite A30 has an integrated graphics card. On Intel's processor site it says that with approximately 197 MB of RAM (I can't remember the exact amount, but no matter), the GPU will run with about 64 MB. It does not mention any further increases in the main text. But there is a caption under one of the photos on the site that shows the maximum possible GPU memory is 128 MB.

    I intend to upgrade the memory in my computer anyway, but I was wondering how this would affect the graphics card, and if it could improve the GPU, what would I need to do to maximize its potential?
    Thank you! Ben

    Hi Ben,

    Currently, I run my A30-141 (ATI Radeon GPU) with 1 GB of RAM, of which 128 MB is reserved for graphics use. I also use the latest Catalyst Omega graphics drivers, and I find that this combination gives excellent performance in games (e.g. FarCry), and also when I do video editing with Nero or Ulead.

    I think the Intel GPU changes the dynamic video RAM allocation based on the amount of RAM installed, whereas my ATI setup lets me pre-select the amount of RAM, from 16 MB to 128 MB (in the video properties).

    Adding extra RAM to your A30 should certainly improve things.

    Kind regards

  • XPS 15 9550 GPU and CPU reaching maximum temperature?

    Hello

    I recently got the XPS 15 9550 with an i5-6300HQ and GTX 960M. It's beautiful, but when playing games it becomes warm to the touch, though not excessively. I monitored the hardware over time and the GPU reaches up to 85 degrees (then the fan goes crazy until it comes down again) and the CPU 80 degrees. It can sometimes stay at 80 degrees on both for the entire session. I appreciate that this is when they are working hard, but are these temperatures too high for my laptop? I can't find Dell's recommended limits. I have not noticed any throttling. Do other users see comparable temperatures?

    For additional info, at these temperatures the GPU is running at 100 percent and the CPU on all 4 cores at about 70 percent.

    Thank you!

    A little more follow-up, in case it is useful.  I found a review of the XPS 15 9550 on notebookcheck.net, dated 12/23/2015.  They said "the GPU temperature does not fluctuate as much as the CPU: it can reach 90 °C, where throttling sets in and the GPU temperature stabilizes at about 70 °C."

    They include an image from a video stress test, which shows a screenshot of the HWInfo monitoring utility.  It shows the GPU reaching 90 °C, then being throttled down to 70 °C by reducing its clock speed.  In their stress test, the GPU clock is reduced to about 600 MHz and the GPU memory to 400 MHz.

    When I run stress tests, with HWInfo monitoring the sensors, my GPU clock remains above 1100 MHz and the GPU memory remains at 1252.8 MHz.  It never decreases, and the temperature is around 90 °C.

    I have no idea whether the throttling is controlled by software or by the GPU itself.  Neither Dell nor NVIDIA could give me useful information on this.  If it's a software setting somewhere in the NVIDIA Control Panel, I don't know where to find it.
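
    If you want to log the same readings without HWInfo, nvidia-smi can report temperature and clocks from the command line. Below is a minimal polling sketch, assuming nvidia-smi is on the PATH and the notebook driver exposes these standard query fields for the GTX 960M:

        # Minimal sketch: poll GPU temperature and clocks via nvidia-smi to spot throttling.
        # Assumes nvidia-smi is on the PATH and the driver supports these query fields.
        import subprocess
        import time

        FIELDS = "temperature.gpu,clocks.sm,clocks.mem,utilization.gpu"

        while True:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=" + FIELDS,
                 "--format=csv,noheader,nounits"],
                text=True,
            ).strip()
            temp_c, sm_mhz, mem_mhz, util = [v.strip() for v in out.split(",")]
            print("temp=%sC core=%sMHz mem=%sMHz load=%s%%"
                  % (temp_c, sm_mhz, mem_mhz, util))
            time.sleep(2)  # sample every two seconds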

  • problems with dedicated and shared graphics memory

    So basically I want to run a game for which I meet all the requirements (and then some, actually) except the dedicated graphics card memory. But what confuses the life out of me is that I have about double the required memory in shared system memory, which, as I understand it, is used on an as-needed basis. However, clearly this is not the case, because as soon as I try to run it, it crashes immediately.

    Okay, so I have the graphics card with the following specifications:

    chip type: Mobile Intel(R) 4 Series Express Chipset
    card name: Mobile Intel GMA 4500M
    total available memory: 797 MB
    dedicated memory: 64 MB
    shared system memory: 733 MB

    In addition to what BossDweebe wrote, I would like to comment on your statement:

    "I more than double the necessary memory required in system memory shared which as I understand, it is used on an as-needed-basis." However, clearly This is not the case, as soon as I try and run it crashes immediately. »

    Shared memory (part of your system RAM) is provided by Windows on a dynamic basis; it is added if required, and the amount is limited by what the system RAM itself needs to run applications (i.e. your shared graphics memory size depends on the total available RAM). It is a help for some graphics features, but not necessarily a guarantee of running games that explicitly require a certain amount of dedicated video RAM. Since dedicated RAM (the GPU's own memory) is much faster than the contribution from shared RAM, two cases can arise:

    (1) The game does not start. Many games check the video hardware before starting, and if they see 64 MB of VRAM and require more, it is often game over. The dynamically allocated shared memory cannot be controlled and is of no interest here.

    (2) Other games may be more forgiving. But you are going to face a vicious circle: having to rely on shared memory is a clear indicator of an already slow and weak graphics solution (i.e. an integrated graphics chip). Unfortunately, the slow system RAM does not speed up your graphics card, and it takes part of the RAM away for graphics features: your game runs more slowly and can still end in malfunctions.
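
    As an illustration of case (1) above, here is a hedged sketch of the kind of startup check many games perform; the required amount is a hypothetical value chosen for illustration, not any specific game's requirement:

        # Hypothetical sketch of a startup VRAM check, illustrating case (1) above.
        REQUIRED_DEDICATED_VRAM_MB = 128   # assumed requirement, for illustration only

        dedicated_vram_mb = 64    # what the GMA 4500M reports as dedicated memory
        shared_system_mb = 733    # allocated dynamically; typically ignored by the check

        if dedicated_vram_mb < REQUIRED_DEDICATED_VRAM_MB:
            # The game refuses to launch even though shared memory could cover the total.
            raise SystemExit(
                "Insufficient dedicated video memory: %d MB reported, %d MB required"
                % (dedicated_vram_mb, REQUIRED_DEDICATED_VRAM_MB)
            )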

  • 2 GRID K1 cards but only 4 GPUs appear with nvidia-smi, and no "Shared PCI Device" option when adding virtual machine hardware

    We run Dell R720 servers with 2 GRID K1 cards and ESXi 6.0 Update 1b, and we have installed the NVIDIA vGPU Kepler driver VIB: NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver 352.70-1OEM.600.0.0.2494585 (NVIDIA, VMwareAccepted, 2016-01-29).

    Why do only 4 GPUs appear when I run the nvidia-smi command?

    Why don't I see "Shared PCI Device" when I change the settings of the virtual machine in vSphere?

    Screenshots below. Any help would be greatly appreciated.

    NVIDIA-smi

    Thu Jan 28 22:40:50 2016

    +------------------------------------------------------+
    | NVIDIA-SMI 352.70     Driver Version: 352.70         |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GRID K1             On   | 0000:06:00.0     Off |                  N/A |
    | N/A   36C    P8    10W /  31W |      8MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GRID K1             On   | 0000:07:00.0     Off |                  N/A |
    | N/A   37C    P8    10W /  31W |      8MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  GRID K1             On   | 0000:08:00.0     Off |                  N/A |
    | N/A   31C    P8    10W /  31W |      8MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  GRID K1             On   | 0000:09:00.0     Off |                  N/A |
    | N/A   33C    P8    10W /  31W |      8MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+

    After removing and reinstalling the VIB a few times, it now displays all 8 GPUs - there are 2 cards installed with 4 GPUs each.

    Also, when I upgraded the virtual machine hardware to version 11, I could choose the "Shared PCI Device" and add the NVIDIA GRID K1.

    Thanks for the reply.
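
    If you want a quick check that all 8 GPUs are visible after reinstalling the VIB, one option is to count the devices that nvidia-smi -L enumerates. The sketch below assumes nvidia-smi is reachable from wherever you run it and that 8 is the expected total for two K1 boards:

        # Minimal sketch: count GPUs listed by `nvidia-smi -L` against the expected total
        # (2 GRID K1 boards x 4 GPUs each = 8). Assumes nvidia-smi is on the PATH.
        import subprocess

        EXPECTED_GPUS = 8

        listing = subprocess.check_output(["nvidia-smi", "-L"], text=True)
        gpus = [line for line in listing.splitlines() if line.startswith("GPU ")]

        print("Detected %d of %d expected GPUs" % (len(gpus), EXPECTED_GPUS))
        for line in gpus:
            print("  " + line)
        if len(gpus) != EXPECTED_GPUS:
            print("Some GPUs are missing; recheck the driver VIB installation.")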

  • vGPU/ESXi 6.0 "hardware GPU is needed but is not available."

    Hello

    I have problems powering on a virtual machine configured for hardware 3D rendering. It throws the error "hardware GPU is required but not available. The virtual machine will not start until the GPU resources are available or the virtual machine is configured to allow software rendering". I bumped the VM hardware version from 9 to 11, just to see if it made any difference.

    I looked around, and different people have had different problems, but the helpful people here tend to ask for the same info, so hopefully providing it below will help.

    [root@localhost:~] nvidia-smi

    Sat Apr 25 15:45:04 2015

    +------------------------------------------------------+
    | NVIDIA-SMI 346.42     Driver Version: 346.42         |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GRID K1             On   | 0000:04:00.0     Off |                  N/A |
    | N/A   39C    P8    10W /  31W |     10MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GRID K1             On   | 0000:05:00.0     Off |                  N/A |
    | N/A   40C    P8    10W /  31W |     10MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  GRID K1             On   | 0000:06:00.0     Off |                  N/A |
    | N/A   33C    P8    10W /  31W |     10MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  GRID K1             On   | 0000:07:00.0     Off |                  N/A |
    | N/A   34C    P8    10W /  31W |     10MiB / 4095MiB  |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+

    [root@localhost:~] gpuvm

    Xserver unix:0, PCI ID 0:4:0:0, vGPU: none defined, GPU maximum memory 4183620 KB
    GPU memory left 4183620 KB.
    Xserver unix:1, PCI ID 0:5:0:0, vGPU: none defined, GPU maximum memory 4183620 KB
    GPU memory left 4183620 KB.
    Xserver unix:2, PCI ID 0:6:0:0, vGPU: none defined, GPU maximum memory 4183620 KB
    GPU memory left 4183620 KB.
    Xserver unix:3, PCI ID 0:7:0:0, vGPU: none defined, GPU maximum memory 4183620 KB
    GPU memory left 4183620 KB.

    [root@localhost:~] cat /var/log/vmkernel.log | grep nvidia

    2015-04-25T15:15:14.942Z cpu1:33338) Loading module nvidia ...
    2015-04-25T15:15:14.947Z cpu1:33338) Elf: 1865: module nvidia has license NVIDIA
    2015-04-25T15:15:15.017Z cpu1:33338) CpuSched: 592: user latency of 33339 tq:nvidia_timer_queue 0 changed by 33338 vmkeventd-6
    2015-04-25T15:15:15.269Z cpu1:33338) Device: 191: Registered driver 'nvidia' from 19
    2015-04-25T15:15:15.269Z cpu1:33338) Mod: 4942: Initialization of nvidia succeeded with module ID 19.
    2015-04-25T15:15:15.269Z cpu1:33338) nvidia loaded successfully.
    2015-04-25T15:15:15.503Z cpu15:33337) Device: 326: Found driver nvidia for device 0x281c4302a6589365
    2015-04-25T15:15:15.541Z cpu15:33337) Device: 326: Found driver nvidia for device 0x4eaa4302a65895fc
    2015-04-25T15:15:15.579Z cpu15:33337) Device: 326: Found driver nvidia for device 0x7c044302a65898cd
    2015-04-25T15:15:15.616Z cpu15:33337) Device: 326: Found driver nvidia for device 0x2dfe4302a6589b17
    NVRM: nvidia_associate vmgfx0
    NVRM: nvidia_associate vmgfx1
    NVRM: nvidia_associate vmgfx2
    NVRM: nvidia_associate vmgfx3
    2015-04-25T15:15:50.363Z cpu11:35362) IntrCookie: 1852: cookie 0x3a moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:51.041Z cpu11:35362) IntrCookie: 1852: cookie 0x3b moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:51.693Z cpu11:35362) IntrCookie: 1852: cookie 0x3c moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:52.385Z cpu11:35362) IntrCookie: 1852: cookie 0x3d moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:54.655Z cpu7:35676) IntrCookie: 1852: cookie 0x3e moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:54.871Z cpu7:35676) IntrCookie: 1852: cookie 0x3f moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:55.084Z cpu7:35676) IntrCookie: 1852: cookie 0x40 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:55.302Z cpu7:35676) IntrCookie: 1852: cookie 0x41 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:56.431Z cpu24:35841) IntrCookie: 1852: cookie 0x42 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:56.667Z cpu24:35841) IntrCookie: 1852: cookie 0x43 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:56.897Z cpu24:35841) IntrCookie: 1852: cookie 0x44 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:57.125Z cpu24:35841) IntrCookie: 1852: cookie 0x45 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:58.255Z cpu6:36015) IntrCookie: 1852: cookie 0x46 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:58.478Z cpu6:36015) IntrCookie: 1852: cookie 0x47 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:58.695Z cpu6:36015) IntrCookie: 1852: cookie 0x48 moduleID 19 <nvidia> exclusive, flags 0x1d
    2015-04-25T15:15:58.914Z cpu6:36015) IntrCookie: 1852: cookie 0x49 moduleID 19 <nvidia> exclusive, flags 0x1d

    [root@localhost:~] esxcli software vib list | grep -i nvidia

    NVIDIA-vgx-VMware_ESXi_6.0_Host_Driver  346.42-1OEM.600.0.0.2159203  NVIDIA  VMwareAccepted  2015-04-25

    Any thoughts anyone may have are appreciated!

    Thank you!

    Kim

    Ah, ok.

    'Hardware' in the VM configuration means vSGA, which might not be obvious, but that's how it is.

    To use vGPU, you must add a new device to the virtual machine and select the vGPU profile you want to assign; the 3D graphics setting must be set to 'software'.

    Linjo

  • Performance is worse with GPU acceleration than without

    I have a fast Windows 7 system, an i7-980X processor, with a monitor at 2560 x 1440 pixels. As a photographer, I'm processing thousands of photos and my PC cannot be fast enough, so I thought I would invest a little in a new graphics card. OK, I did not buy a Quadro, but with an Nvidia GTX 970 I still have a modern, fast card worth more than $400.

    My first experiences after installing the new graphics card are more than disappointing - a lot is worse than running without any GPU acceleration. Before, I could scroll from image to image with almost no perceptible lag. With GPU acceleration turned on, each image must be "rendered" first: you first get a blurry/completely pixelated picture that briefly flickers in color, and only after a while does it become sharp. So it seems that with GPU acceleration turned on, Lightroom 6 completely ignores the pre-rendered preview images - DOES THAT MAKE ANY SENSE? Yes, all the Develop-module sliders work faster, but the preview shown instantly is a fake low-resolution one, and it always takes time for a high-resolution image to display - as much time as it would take to render it with the CPU.

    For me, it would make sense for the next image to be loaded into GPU memory ahead of time, and for the rendering of the previous image to be kept in memory as well. I also miss some finer-grained settings for GPU acceleration - if it stays at the current, rudimentary level, the best compromise (for me) would be to disable GPU acceleration for scrolling through photos and use it only for the Develop sliders.

    I would like the guys at Adobe to actually improve the product, not just the marketing slogans about improved performance!

    kirillxxx wrote:

    My suggestion is simple enough - if the guys at Adobe cannot program this properly, the easiest thing would be to add an option to disable GPU acceleration for image switching and just take advantage of it for the Develop sliders.

    You cannot simply switch it off for changing images. The image *has* to be loaded into the GPU, which is where the delay comes from. Even so, think about what would happen: of course the image would change quickly, but then you would still have to wait before you could work on it.

    For now, GPU acceleration in LR is very much in its infancy: first-generation stuff, and we are all pretty much beta testers.

    Some people are lucky: they love it and have no problems.

    Some people are out of luck: it causes nothing but problems.

    Some people, like me, are sitting on the fence: we have mixed experiences.

    I don't think you threw away $400; at some point (I live in optimism) the issues will be fixed and LR will work well on your system. You just have to wait. It isn't funny, but you only need to take a casual look at any photography forum to see that there are problems with GPU acceleration, and now is not the time to buy the latest and greatest card.

    I've personally held off buying a new card to replace my aging one until I see that these problems have been ironed out.

  • How to reduce stuttering in AIR for iOS GPU mode?

    Currently, my game runs at 60 fps on the iPad 2 and has no problem with rendering speed, but I'm rather annoyed by a constant stutter that causes the game to stop for 0.1 to 0.5 seconds from time to time. The stuttering behavior is similar to when the garbage collector runs, and I assume it is caused by swapping in GPU memory, as my game uses a lot of BitmapData.

    The problem is that my game transitions from one scene to another seamlessly, without stopping the animations on the game screen, with the old scene sliding off the screen and the new scene sliding in. There is therefore no time to preload/precache the graphics assets used in the new scene. After transitioning through various scenes a few times, the game starts stuttering when trying to show new images.

    My game works fine without stuttering on an old PC, but on the iPad 2 it is quite obvious. Could someone give me some tips to reduce stuttering when using AIR for iOS? In the game, all vector graphics are pre-drawn to BitmapData (so no vector graphics are displayed), and the graphics assets for each scene are approximately 2048 x 1024 pixels. There are about 10 scenes. In addition, there are common interface graphics used in every scene, about 30 of them at 400 x 400 pixels each.

    I know the game uses a lot of graphics resources. Making the game preload the assets before transitioning to a new scene would eliminate the stutter, but I want to see if I can keep the scene transitions seamless on iOS.

    I am currently using AIR 3.5 + Flash CS6.

    * By precached/preloaded I mean actually displaying it (addChild and visible = true) on the stage and giving it time to be cached on the GPU. All the graphics data is already loaded.

    Have you tried playing with the StageQuality class? Setting it to 'low' solved my problem with bumpy screen rotation and orientation changes...
