Best practices for configuring or detecting the screen size?

Hi all

I'm trying to determine a best practice for setting or detecting the screen size. For PlayBook and iOS I can hard-code the dimensions, but for Android the number of devices is too big, so I would rather detect. My first choice is to use stage.stageWidth and stage.stageHeight. That works fine if I set up my stage with the standard SWF metadata properties:

[SWF(height="320", width="480", frameRate="64", backgroundColor="#010101")]

However, if I use the application descriptor file to set the stage dimensions (like the one Christian Cantrell proposes here: http://www.adobe.com/devnet/flash/articles/authoring_for_multiple_screen_sizes.html):

<initialWindow>
    <aspectRatio>landscape</aspectRatio>
    <autoOrients>false</autoOrients>
    <width>320</width>
    <height>480</height>
    <fullScreen>true</fullScreen>
</initialWindow>

then stage.stageWidth and stage.stageHeight are not the correct numbers at the moment my main class is added to the stage. Shortly after it is added, the values are fine. Is there an event I can wait for to know that stage.stageWidth and stage.stageHeight are correct?

Thanks in advance!

I'm struggling to think what the problem might be with stageWidth/stageHeight not being set correctly. When testing in IE (that was a Flex project, though) I noticed strange behavior before: width/height were not correct until the preloader had run.

It has me intrigued, so I'll grab the Google Code project and test, but it may be some days as I'm a bit busy at the mo.

WAG - erg is nice idea btw
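For reference (not from this thread, just a common pattern): in an AIR document class you can listen for Event.RESIZE, which fires once the runtime applies the descriptor's dimensions, and treat already-valid values as the immediate case. A minimal ActionScript 3 sketch; the class and handler names are made up:

package {
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.Event;

    public class Main extends Sprite {
        public function Main() {
            // Without these, stageWidth/stageHeight report the scaled SWF size.
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;

            // The descriptor-driven size arrives asynchronously, so listen for RESIZE.
            stage.addEventListener(Event.RESIZE, onStageResize);

            // If the values happen to be valid already, lay out immediately.
            if (stage.stageWidth > 0 && stage.stageHeight > 0) {
                layout();
            }
        }

        private function onStageResize(e:Event):void {
            layout();
        }

        private function layout():void {
            trace("stage is now " + stage.stageWidth + " x " + stage.stageHeight);
        }
    }
}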

Tags: Adobe AIR

Similar Questions

  • Best practices for configuring NLB for Secure Gateway and Web Access

    Hi team,

    I'm setting up vWorkspace and looking for guidance on best practices for NLB with Web Access and Secure Gateway. My host environment is Hyper-V 2012 R2.

    My first question: should NLB be configured first and the roles set up afterwards, or vice versa?

    Do we have any best-practice document for configuring NLB with a two-node Web Access server?

    Hello

    This video series was created for 7.5 and 2008 R2 but should still be valid for what you are doing today:

    https://support.software.Dell.com/vWorkspace/KB/87780

    Thank you, Andrew.
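    For reference, a minimal PowerShell sketch of standing up a two-node NLB cluster on Server 2012 R2. The interface name, node name, and virtual IP below are made-up placeholders, and the videos above remain the authoritative walkthrough for the role-install ordering:

    # Run on the first Web Access node; installs the NLB feature and module.
    Install-WindowsFeature NLB -IncludeManagementTools
    Import-Module NetworkLoadBalancingClusters

    # Create the cluster on this node's NIC with a virtual IP for Web Access.
    New-NlbCluster -InterfaceName "Ethernet" -ClusterName "WebAccessNLB" -ClusterPrimaryIP 192.168.1.50

    # Join the second node to the cluster.
    Add-NlbClusterNode -NewNodeName "WEBACCESS02" -NewNodeInterface "Ethernet"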

  • How to detect the screen power-off event

    Is it possible to detect the screen power-off event?  I am looking to detect when the screen turns off due to a time-out or a press of the lock button.  Similarly, I would need to detect when the screen turns back on.  I searched through the WebWorks API and the forums but have not found anything relevant; maybe I'm just searching for the wrong thing. Any help would be appreciated.

    Hi deedubbu,

    I'm certainly not an expert in BlackBerry Java, so it might be a good idea to post on those forums, but from a quick Google search it seems you should be able to capture that event.

    In exchange for my Google search, I humbly request that you post the result on our community API GitHub repo.
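    For anyone landing here from search: the BlackBerry Java side of that idea is, from memory, a SystemListener2 whose backlightStateChange callback fires when the screen turns off or back on. Treat the exact callback set below as an assumption to verify against the BlackBerry Java API docs:

    import net.rim.device.api.system.Application;
    import net.rim.device.api.system.SystemListener2;

    // Sketch: react when the backlight (screen) turns off or back on.
    public class ScreenStateListener implements SystemListener2 {

        public static void register() {
            // Attach to the running application so callbacks are delivered.
            Application.getApplication().addSystemListener(new ScreenStateListener());
        }

        public void backlightStateChange(boolean on) {
            if (on) {
                // screen turned back on
            } else {
                // screen turned off (time-out or lock button)
            }
        }

        // Remaining SystemListener/SystemListener2 callbacks, stubbed out.
        public void powerOff() {}
        public void powerUp() {}
        public void batteryLow() {}
        public void batteryGood() {}
        public void batteryStatusChange(int status) {}
        public void cradleMismatch(boolean mismatch) {}
        public void fastReset() {}
        public void powerOffRequested(int reason) {}
        public void usbConnectionStateChange(int state) {}
    }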

  • What is the best practice for block sizes across several layers: hardware, hypervisor, and VM OS?

    The example below is not a real setup I work with, but it should get the message across. Here's my layered reference example:

    (Layer 1) Hardware: the hardware RAID controller

    • 1 TB volume configured with a 4 K block size (RAW?)


    (Layer 2) Hypervisor: ESXi datastore

    • 1 TB from the RAID controller, formatted with VMFS5 @ 1 MB block size.


    (Layer 3) VM OS: Server 2008 R2 w/ SQL

    • 100 GB virtual HD using NTFS @ 4 K block size for the OS.
    • 900 GB virtual HD using NTFS @ 64 K block size to store the SQL database.

    It seems that VMFS5 is limited to a single 1 MB block size. Would it be preferable for all or some of the block sizes to match across the different layers, and why or why not? What effect do differing block sizes on the various layers have on performance? Could you suggest a better alternative, or best practices, for the sample configuration above?

    If a SAN were involved instead of a hardware RAID controller in the host, would it be better to store the OS vmdk on the VMFS5 datastore and create a separate iSCSI LUN formatted with a 64 K block size, then attach it with the iSCSI initiator inside the operating system, also at 64 K? Does matching block sizes across the layers increase performance, and is it advisable? Any answer and/or explanation of best practices is greatly appreciated.

    itsolution,

    Thanks for awarding the helpful-response points.  I wrote a blog post about this which I hope will help:

    Partition alignment and block sizes in VMware 5 | blog.jgriffiths.org

    To answer your questions, here goes:

    I have (around) 1 TB of space and create two virtual drives.

    Virtual Drive 1 - 10 GB - used for the hypervisor OS files

    Virtual Drive 2 - 990 GB - used for the VMFS datastore / VM data storage

    The default allocation element size on the PERC 6/i is 64 KB, but it can be 8, 16, 32, 64, 128, 256, 512, or 1024 KB.

    What block size would you use for virtual drive 1, where the hypervisor itself will be installed?

    -> If you have two drives I would set the block size on the hypervisor drive to 8 KB.

    What block size would you use for virtual drive 2, which will be used as the VM datastore in ESXi?

    -> I'd go with 1024 KB, matching the VMFS 5 size.

    - Do you want 1024 KB to match the VMFS block size that will eventually be formatted on top of it?

    -> Yes

    * Consider that this datastore would eventually contain several virtual hard drives for each OS, SQL database, and SQL logs, formatted to NTFS at the recommended block sizes: 4 K, 8 K, 64 K.

    -> The problem here is that VMFS will use 1 MB no matter what you do, so carving smaller block sizes lower down in the RAID causes no problems, but it doesn't help either.  You have 4 K sectors on the disk, a 1 MB RAID stripe, 1 MB VMFS blocks, then 4 K, 8 K, or 64 K in the guest.  Really, the 64 K gains are somewhat lost when the back-end storage is at 1 MB.

    If the RAID stripe element size is set to 1024 KB so that it matches the 1 MB VMFS block size, would that be better practice, or does it make no difference?

    -> Whether it's 1024 KB or 4 KB chunks, it doesn't really matter.

    What effect does this have on the OS virtual HDs, with their respective block sizes, installed on top of the stripe and the VMFS element/block size?

    -> The effect on performance is minimal, but it exists.  It would be a lie to say it didn't.

    I could be completely off in my overall thinking, but it seems to me there must be some kind of correlation between the three different "layers", as I call them, and a best practice to use.

    Hope that helps.  I'll tell you that I've run virtualized SQL and Exchange for a long time without any block-size problems and without changing anything in the operating system; I just stuck with the standard Microsoft sizes.  I'd be much more concerned about the performance of the RAID controller in your server.  They keep making these things cheaper and cheaper with less and less cache.  If performance is the primary concern, then I would look at the array layout or a RAID 5/6 solution, or at least at the amount of cache on your RAID controller (reads are normally critical for a database).

    Just my two cents.

    Let me know if you have any additional questions.

    Thank you

    J

  • Best practices check - configuring the communication service

    So, we have the following use case here...

    Background:

    We have an FMS instance with several teams using multiple applications, each with their own specific communication needs. Infrastructure teams, such as the database or server teams, are also included on this FMS. We have services configured around applications and their dependencies, so a single object will exist in several services.

    To work around the horrible lack of granularity in the service-based e-mail configuration form, I created an event rule that queries the FSMServices affected by a given event and iterates over all the unique services, pulling the notification settings our own way and triggering command-line Actions or EmailActions as appropriate... we use -d style options in the service's shortDesc field.

    Example:

    (You can ask for a detailed explanation of what the settings do, or of what the levels are called, if you wish - but they are just an additive representation of Foglight's severity levels.)

    Here's our new problem:

    We have teams who want notifications for certain rules at certain severities (such as Critical), but not others. This has created the need for a 'white list' or 'black list' of originating rule names per event, to determine whether an event should be communicated to our NOC or paged out to our teams.

    My proposed solution:


    We would create a new cartridge (FoglightCommunication) containing a custom dashboard and the topology definition for an FSMServiceConfig object. This object would contain, per service, a white list or black list of the rules that should be reported. This TopologyObject would also take over the functionality my current shortDesc variables provide... essentially, it would contain all the configuration information relevant to its corresponding FSMService object. We have experience creating advanced modules, including building Foglight cartridges/agents/topology definitions.

    The dashboard would exist to facilitate the configuration of this new object and to help visualize the current communication configuration. It would also let teams less versed in Foglight configure their own communication settings more easily and completely. Empowering the owning team is always a good thing.

    My Question:

    Does anyone at Quest (it's such a cooler name than Dell) see a problem with this? My only concern is that we could lose all our configuration information by uninstalling the cartridge during a problematic upgrade. We might work around that with an export/import option... but that's a messy solution and not foolproof. Is there a way to specify data to be persisted even if the cartridge that defined the topology definition is uninstalled?

    I'd appreciate any comments or thoughts. Thank you!

    Hi Adam.

    This looks like a very useful customization. I don't see why your team should not move forward with that.

    I also like the idea of building an import/export feature into your cartridge in order to preserve the configurations in case you need to uninstall it. Note that even in that case, custom topology data that was written to the Foglight data repository will still be there (i.e. it will not be purged unless you specifically request that) - so you may be able to get Foglight to retain the configuration information that is important to you.

    I encourage you to update the community on your progress on this and send questions, screencaps, etc., as needed.

    Thank you!

    Robert Statsinger

  • Best practices for using clusters with queue/notifier/bundle creation?

    I have a block diagram with a queue, a notifier, and several bundle-cluster instances that all use the same data structure.  There is a cluster typedef for the data structure.

    Of course, each of these objects (queue creation, notifier creation, bundle) needs to know how the cluster is defined.

    What is considered best practice?

    (1) Create a dummy instance of the cluster everywhere the data structure definition is needed (and hide them all on the front panel).

    (2) Create only one instance and wire it to every place it is needed. But no data flows over that wire: it is only the cluster *definition* that is used, so this seems to clutter the block diagram.

    (3) Create only one instance of the cluster control and use local variables everywhere else the cluster definition is required.  Its _value_ is never assigned or read, so there is no problem with race conditions.

    (4) Another way?

    If you were cleaning up someone else's code, how would you expect to see this handled?

    It occurred to me while writing this that where I "unbundle ... code ... bundle" I could wire the original cluster to both the "unbundle" and the "bundle" - but wouldn't that overcomplicate and enlarge the block diagram with useless wires?

    Thanks and best regards,

    -- J.

    Hi Jeff,

    I think this question is about "sharing" the typedef, not about how to share data(?).  If the cluster control is saved as a typedef (or a strict typedef) and NOT SIMPLY as a CONTROL, then when a diagram constant of the typedef is created, it will be updated whenever you update the .ctl typedef!  (And there is no FP control to hide.)  Of course, if the typedef is already available "nearby" when needed, you can use it instead - saving a bit of diagram space.

    Cheers.

  • Best practices for configuring ESX networking

    Suppose I have a small, resource-limited ESX server with only two physical network cards, running only a few virtual machines.  There are only two physical network adapters and none can be added.  Would best practice still dictate that the service console be on its own dedicated physical NIC?  Or, in this scenario, is putting the service console and all the VMs on a teamed pair of NICs better, because if one NIC fails both the service console and all virtual machines are still available?  In this case the bandwidth usage is very low and contention for network bandwidth is not a problem. Thank you.

    Hello.

    Check out "Blue Gears - 2 with VMware ESX physical NIC" of Edward Haletky for some good info on it.

    Good luck!

  • Best practices for declaring and initializing strings?

    What is the best practice for declaring strings in a class?

    Should it be

    private String chBonjour = "";

    or should I do the initialization in the constructor?

    A servlet's constructor is usually called only once, when the servlet is first accessed. But then again, other things can happen; Google "servlet life cycle" if you must know.

    But let's take a step back here. It seems you are trying to put fields in servlets. Don't. Just don't. When two users hit the servlet's URL at the same time, the fields are shared between the two hits. If you store something like HTTP parameters in fields, the two hits' parameters will get mixed up, and each hit may end up seeing the other's parameter values.

    The best approach is to have no fields in servlets at all (except maybe "static final" constants, and rarely something else). Then the concurrency pain goes away, servlet life-cycle concerns go away, servlet constructors go away, and init() usually disappears too.
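    A minimal sketch of the point above (a hypothetical servlet with made-up names, classic javax.servlet API):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetingServlet extends HttpServlet {

        // Fine: an immutable constant shared safely by all requests.
        private static final String GREETING = "Bonjour";

        // Dangerous: one servlet instance serves all requests concurrently,
        // so an instance field like this would be shared (and clobbered) across hits:
        // private String name;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Correct: keep per-request state in local variables.
            String name = req.getParameter("name");
            resp.getWriter().println(GREETING + ", " + name);
        }
    }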

  • Best practices for adding text to a Flex container?

    Hello
    I'm having problems displaying a TextFlow correctly inside a Flex container. What is the best practice for achieving this, for example when adding a lot of text to a small panel?
    Is it possible to use anything other than a static width and height in the DisplayObjectContainerController constructor, or is that not the place to implement this? I guess what I'm looking for is the layout logic I would normally wrap in a custom Flex component and implement in measure() and so on.

    My use case: a chat application that adds several TextFlow elements to a Flex container, for example a Panel. Or using TextFlow as a substitute for UITextField.

    Examples of code would help me greatly.

    I'm using Flex 3.2.

    Kind regards

    Stefan

    You are right, the examples we have provided are specific to TLF as an ActionScript component, rather than a Flex component. Flex Gumbo is implementing what I think you are looking for (a UIComponent wrapping TLF). Since Gumbo is still under active development, we chose to stick with examples that apply to Flex 3.2 as well as to Gumbo.

    Check out mx.components.FxTextArea; I'd be curious to know whether it's in the right ballpark in terms of using TLF in Flex.
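    For reference, a minimal ActionScript 3 sketch using the released TLF API (ContainerController, the successor of the beta-era DisplayObjectContainerController mentioned above); the markup, the 200 x 100 size, and the UIComponent wrapper are illustrative assumptions:

    import flash.display.Sprite;
    import flashx.textLayout.container.ContainerController;
    import flashx.textLayout.conversion.TextConverter;
    import flashx.textLayout.elements.TextFlow;
    import mx.core.UIComponent;

    // Wrap the TLF container sprite in a UIComponent so a Panel can hold it.
    var holder:UIComponent = new UIComponent();
    var container:Sprite = new Sprite();
    holder.addChild(container);

    // Build a TextFlow from TLF markup and compose it into the container.
    var flow:TextFlow = TextConverter.importToFlow(
        "<TextFlow xmlns='http://ns.adobe.com/textLayout/2008'><p>Hello chat</p></TextFlow>",
        TextConverter.TEXT_LAYOUT_FORMAT);
    flow.flowComposer.addController(new ContainerController(container, 200, 100));
    flow.flowComposer.updateAllControllers();

    // The composition size is fixed here; a custom component would recompute
    // it in measure()/updateDisplayList() when the panel resizes.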

  • Detect whether the screen reader is enabled or disabled

    Hi all

    Is it possible to detect whether the accessibility screen reader mode is enabled or disabled using a native plugin?

    I have not found anything relevant to detection here:

    http://developer.BlackBerry.com/native/reference/Cascades/user_interface_accessibility.html

    Thank you.

    No, there isn't an API to check whether the screen reader is enabled or disabled.  If your application implements the correct accessibility tags, it shouldn't really need to know whether or not it is enabled.

  • Intermittent failure to detect the screen on startup on the Satellite X200

    For a while now, my X200 has suffered intermittent problems with the screen. On power-up it gives one long and two short beeps, then proceeds to boot, but with no display.

    I normally power-cycle it several times and find that it sorts itself out, but tonight it would not cooperate, so I dug out a cable for my TV and used that as a monitor. Of course the machine then booted fine, and after POST it immediately switched back to the built-in display again. Does anyone have an idea of what is happening and why?

    A little more background.
    It has been coming and going for about a month now.
    I have used a variety of drivers, both the latest WHQL and LV2G releases, but they all show the same problem.

    Right now I'm on mostly default drivers, and I'm going through the process of staggered updates, having just rebuilt the machine clean because of other problems (in a separate thread). But immediately before the rebuild the problem was all too obvious and all the drivers were the latest, so I don't think this is a driver-related issue, or at least not one that seems to be resolved by any of the releases.

    Seems rather strange. Did you check your BIOS options? Maybe you should set this back to the default settings.
    Or try and install the latest version of the BIOS.
    If that's not enough, and I read that you have already done a clean reinstall with the latest drivers, I would suggest contacting your local ASP, as those beeps are not normal.

    They can tell you what those beep codes mean.

  • Best practices for exporting images for the web

    Do you have a video that shows how to save a group of photos for a website? I don't know what resolution/photo size to use. My concern is to have an image that looks decent on the site, but not at so high a resolution that it could be 'borrowed' and then enlarged.

    Thank you.

    Select the pictures in the library, and then click Export.

    Try the settings in the screenshot below; if you are not satisfied with the result, you can always export again and overwrite the originals.

    Quality 60 is usually a good compromise between quality and file size.

    Using sRGB as the color space is essential.

  • Beginner looking for best-practice configuration of AE CS4 (Mac Pro 8-core)

    Hello

    I am a video editor familiar with FCP and Motion, moving to AE CS4 because of its greater power in 3D space. I installed the program and started my basic Lynda training, but I want to make sure I get the setup right so as not to cause problems down the road.

    For example, in FCP I use a second drive for renders etc., and a RAID array for FCP files and various media (both imported and exported), organized by project and by type. I intend to do the same for AE, but looking at the AE prefs I see I can assign memory per CPU etc., and I just want to be sure I use the processors and RAM effectively. The machine is an 8-core Mac Pro with 12 GB of RAM. The program sees 16 processor cores, so should I split the roughly 10 GB available to AE across those cores (using the Multiprocessing prefs tab), or simply let AE figure it out?

    In addition, I am confused about the Overflow Volumes option. What gets put there, when, and why?

    Finally, if there is a newsletter along the lines of Larry Jordan's FCP site (larryjordan.biz), I'd like a reference.

    Thanks for your help.

    -C

    Concerning RAM and processor usage, see the following page: "Performance tip: don't starve your software of RAM". Following the guidelines there for your 12 GB computer: leave 2 GB for other applications and perhaps a couple of GB for the foreground process to hold RAM preview frames, leaving you with 2 GB each for four background rendering processes (2 + 2 + 4 x 2 = 12 GB).

    Also, don't be fooled: an 8-core computer is just an 8-core in After Effects. The so-called doubling of processor cores created by hyperthreading is not relevant to multiprocessing's simultaneous rendering of multiple frames.

    Regarding overflow volumes, see the following page: "Overflow volume and segment settings".

    If you are new, please start here. Pretty please?

  • Oracle best practices: discussion of advantages/disadvantages of using synonyms

    Share your experience with the advantages/disadvantages of developing enterprise application databases using public and private synonyms.

    My recommendation to the developers on my team is to avoid using public synonyms in their code and instead fully qualify the database object with the schema owner.

    Benefit: when you drop a schema, the public synonyms it created are not dropped with it. So if you use synonyms, make them private, not public.

    Please, share your experience!

    In my experience, public synonyms are the number 1 cause of name collisions. They completely block the possibility of database and instance consolidation, and can end up costing the customer huge money in unnecessary Oracle licenses.

    A much better approach at the development level is to use ALTER SESSION SET CURRENT_SCHEMA where possible and to minimize the use of synonyms altogether. When synonyms are necessary, they should be private synonyms and should be managed by the application.

    IMO :-)

    Published by: Hans Forbrich, November 4, 2009 12:59
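    To illustrate the two approaches above with a minimal SQL sketch (the schema and object names are made up):

    -- Resolve unqualified names against the application schema for this
    -- session instead of relying on public synonyms:
    ALTER SESSION SET CURRENT_SCHEMA = app_owner;
    SELECT * FROM customers;   -- now resolves to app_owner.customers

    -- Where a synonym is genuinely needed, keep it private to the consumer:
    CREATE SYNONYM customers FOR app_owner.customers;

    -- Avoided: visible to every schema and an invitation to name collisions
    -- across consolidated databases.
    -- CREATE PUBLIC SYNONYM customers FOR app_owner.customers;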

  • One listener per server or one listener per instance?  What is the best practice?

    I joined a company where the incumbent Oracle DBA uses one listener and one (default) port per server.  We have 7 Oracle database instances on one server using the same listener.  I have always created a new listener per database instance, either with ./netca or by making the entries manually for each instance created with ./dbca.

    What is the best practice?  My argument for creating a separate listener is being able to restrict and throttle connections per database using listener parameters.  With one listener, it seems impossible to use different listener settings per database, since all the DBs use that single listener.  Also, if that listener goes down, there are no new connections for any of the DBs using it on the server.

    What is the best practice?

    The best practice is whatever works best for you in your particular environment.

    Personally, I have not found much need to adjust the listener configuration for each separate instance, so in my environment each server has a single listener that is shared by several instances. I can see your points about the benefits of having separate listeners, but there is also the additional administration they require, so the best answer is the one that is right for you. Some of the servers I maintain have up to 20 instances (development), so having 20 listeners is probably a little more work than I want to keep up with.
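    For what it's worth, a per-instance listener is just an extra pair of entries in listener.ora, each on its own port; a sketch with made-up host, port, path, and SID:

    # listener.ora sketch: a dedicated listener for one instance
    LISTENER_SALES =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1522)))

    SID_LIST_LISTENER_SALES =
      (SID_LIST =
        (SID_DESC =
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
          (SID_NAME = SALES)))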
