vCO best practices - combining workflows

I have taken a modular approach to a task I am trying to accomplish in vCO. I want to take an action against ANY NAT-routed Org network in a specified Org. Here is where I am now; I would welcome comments on what is possible or recommended before I start down the wrong path. I have two separate workflows that individually do what I need, they work well separately, and I am now ready to combine them into a single workflow.

1 Both Workflows.jpg

(1) With the help of Joerg on this forum, I was able to return all Org networks in a particular Org, then filter the returned array by fence mode down to ONLY the NAT-routed Org networks. The name of the workflow is 'Return Routed ORG Nets'. The input parameter is an organization, and the output parameter is an array of NAT-routed Org networks.

1 RETURN ORG Net.jpg
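
For reference, the filter step inside that workflow is just a scriptable task along these lines (a rough sketch of what I ended up with; the attribute path for the fence mode is an assumption and may differ by plug-in version):

// Scriptable task: keep only the NAT-routed Org networks.
// Input:  orgNetworks   - Array of OrgNetwork objects for the Org
// Output: natRoutedNets - Array of OrgNetwork (NAT-routed only)
var natRoutedNets = [];
for (var i = 0; i < orgNetworks.length; i++) {
    var net = orgNetworks[i];
    // "natRouted" is the vCD fence mode value for NAT-routed Org networks;
    // net.configuration.fenceMode is an assumed attribute path, not verified
    if (net.configuration != null && net.configuration.fenceMode == "natRouted") {
        natRoutedNets.push(net);
    }
}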

(2) There is an existing workflow (it comes with the vCD 1.5 plug-in) that configures a NAT-routed Org network. The input parameter is a NAT-routed Org network. The name of the workflow is 'Secure Routed ORG Net', and I would like to use this (slightly modified) workflow, as it is perfect for the task I need to fulfill.

1 Secure ORG Nets.jpg

I think there are two options.

(1) Include the JavaScript code and logic of the 'Secure Routed ORG Net' workflow (which has several workflow elements) in the 'Return Routed ORG Nets' workflow, inside the loop of the Return flow.

(2) The second option is to add the 'Secure Routed ORG Net' workflow as a workflow element within the existing 'Return Routed ORG Nets' workflow. I like this approach better because it allows me to keep them separate and reuse the workflow elements together or individually.

My related questions:

What is recommended?

Are there restrictions on what I can pass as an INPUT parameter to the second (embedded) workflow? Can I pass only JavaScript object references, or could I pass, say, an array of the '.name' properties of the NAT-routed Org networks?

I assume that the input parameters for the second (embedded) workflow can be gathered in the first workflow and passed as its OUTPUT, where they will be mapped as the INPUT of the second?

I read through the developer's guide, but I wanted to get your comments here also.

Hello!

A good principle (from software engineering) is DRY: don't repeat yourself!

So calling a library workflow indeed seems to be the best approach for the majority of use cases. (That way you don't have to worry about maintaining the logic if something changes in the next version, ...) That's exactly what the workflow library is for.

To pass objects to a workflow element, you can use all the inventory types, the basic data types (such as boolean, string, number), and the generic ones (Any, Properties). Each of them can also be an array.

However, the type must match the input parameter of the called workflow (or be "Any").

So in your case, I guess that the called Secure...-workflow expects just a single network (perhaps by its name, probably also as an OrgNetwork).

You must create the loop logic yourself (in your "outer" workflow) to go through all the networks you want to reconfigure.

For an example, see http://www.vcoteam.info/learn-vco/creating-workflow-loops.html
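
To sketch the idea (a minimal example; the variable names here are mine, not from your workflows): the loop is built from a counter scriptable task, the called workflow, and a decision element. The counter task could look like this:

// Scriptable task at the top of the loop in the "outer" workflow.
// Attributes: natRoutedNets (Array of OrgNetwork), index (number)
// Outputs:    currentNetwork (OrgNetwork), done (boolean)
if (index == null) {
    index = 0;                              // first pass through the loop
}
currentNetwork = natRoutedNets[index];      // bind this to the called workflow's input
System.log("Securing Org network: " + currentNetwork.name);
index++;
done = (index >= natRoutedNets.length);     // true once all networks are processed

A decision element after the called "Secure..." workflow then checks done and either loops back to this task or ends the workflow.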

And as a tip: the Developer's Guide is unfortunately not really useful for these "methodology"-related questions. See the examples on http://www.vcoteam.info , the nice videos on YouTube ( http://www.vcoportal.de/2011/11/getting-started-with-workflow-development/ ), and watch the recording of our session at VMworld: http://www.vcoportal.de/2011/10/workflow-development-best-practices/ (the audio recording is in the post in mp3 format)

See you soon,

Joerg

Tags: VMware

Similar Questions

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow. I guess I didn't realize it, but there are several places to sharpen. Which ones are best? Are they additive?

    My typical workflow involves capturing an image with a professional DSLR in RAW or JPEG, importing it into Lightroom, and exporting to a JPEG file for screen or for printing, both at a lab and locally.

    There are three places in this workflow to add sharpening: in the DSLR, manually in Lightroom, and when exporting a JPEG file or printing directly from Lightroom.

    It is my understanding that no sharpening is applied to RAW images even if you set sharpening in your DSLR; sharpening will, however, be added to JPEGs from the camera.

    Back to my question: is it preferable to set sharpening manually in the DSLR or in Lightroom, or to wait until you export your final JPEG file or output to your printer? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?

    You have to treat the two formats differently. RAW data never has any sharpening applied by the camera; only JPEG files do. Sharpening is often thought of as a workflow with three steps (see here for a seminal paper on this idea).

    I. A capture sharpening step, which compensates for the loss of sharpness of detail due to the Bayer matrix and the anti-aliasing filter, and sometimes the lens or diffraction.

    II. A creative sharpening step, where some details in the image are 'highlighted' by sharpening (think eyelashes on a model's face), and

    III. Output sharpening, where you correct the loss of sharpness due to scaling/resampling and to the properties of the output medium (such as blur from how the printing process works, or blur from the way an LCD screen lays out its pixels).

    All three are implemented in Lightroom. I. and III. are essential and basically must always be performed; II. is up to your creative judgment. I. is the sharpening you see in the Develop panel. You need to zoom to 1:1 and optimize the settings. The defaults are OK but quite conservative. Usually you can increase the mask value a little so that you are not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding the optimal settings here. It is written for ACR, but the principle remains the same. Most photos will benefit from a bit of optimization. Don't go overboard; just aim for good crispness at 1:1.

    Stage II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go over to Photoshop for it. Stage III, however, is very much essential. This is done in the Export dialog, the Print panel, or the Web panel. You can't really preview these (especially print-oriented sharpening), and it will take a little experimentation to see what you like.

    For JPEGs, sharpening has already been done in the camera. You could add a little extra capture sharpening in some cases, or simply lower the in-camera sharpening and keep more control in post, but generally it is best left alone. Stages II and III, however, are still necessary.

  • Looking for a best practices guide: complete cluster replacement

    Hello

    I have been put in charge of the complete replacement of our current ESXi 5.0 U2 hardware environment with a new cluster of servers running 5.1.

    Here are the basics:

    Currently: an HP blade server chassis with 6 hypervisors running ESXi 5.0 U2, Enterprise licensing, about 100 or so VMs running different operating systems (mainly MS 2003 R2 to 2008 R2), with data stored on a SAN connected through 1 Gb Ethernet connections.

    Planned: 7 standalone servers running as a cluster with ESXi 5.1, Enterprise licensing, SAN connections upgraded to 10 Gb Ethernet or fibre. The virtual machines range in importance from 'can be restarted after hours' to 'must not be restarted, or service interruptions will cost us money'. (Looking at live-migration options if possible, although I have my doubts it will be an option given the cluster plans.)

    I'm looking for a best practices guide (or a combination of guides) that will help me determine how best to plan the VM migration, especially in light of the fact that the new cluster will not be part of the existing one. Also given the fact that we are unable to upgrade to 5.1 beforehand (due to problems with the chassis firmware)...

    Any pointers in the right direction would be great - not looking for a handout, just signposts.

    See you soon.

    Welcome to the community - since vCenter 5.1 can manage ESXi 5.0 hosts, just bring the 5.0 hosts into the 5.1 environment one at a time and vMotion the VMs to the new hosts - and as both environments will see the same SAN, no Storage vMotion will be necessary.

  • Best practices for migrating workflows, actions, and configurations between environments

    Hello

    Is there a best practices document for migrating workflows, actions, and configurations between vCO environments (development -> production)? We want to lock all the workflows/actions/configurations in prod. Migration can be done manually through a package. Is there anything more to it than that?

    Thank you

    Create a package using the Orchestrator client (change the mode drop-down in the client to 'Design' to get access to package creation).

    Add the desired workflows, actions, resources, and configurations to the package. As they are added, dependent actions, workflows, resources, and configuration elements will be detected and added to the package.

    Right-click on the package and either export or synchronize.

    Export: the package is saved to the file system as a backup/archive. Then import it on the target (test or production) server.

    Synchronize to test or prod server when you are ready.

    In both cases, you will be presented with a comparison window that indicates which workflows/actions/configurations/resources will be updated on the target system.

    Doing an element-by-element synchronize would be tedious and could miss dependencies; packages are the preferred and recommended best practice.

    For additional info on the workflow development lifecycle, see Christophe's blog here: http://bit.ly/vroWdlc

  • Is there a best practice workflow?

    Hello

    I'm new to this; a first-time Premiere Pro user.

    Is there a best practice for workflows?

    One of my challenges, or lack of understanding, is editing files on an external hard drive. Is this OK to do, or does it cause problems? Is it better to have the files on the iMac and then, when the project is completed, batch-transfer them to the external hard drive?

    Thanks for your help!

    PrPro is a little hard on the equipment... it grabs so many bits & pieces at a time, here & there, while reading and/or rendering and so on, that it can create congestion points in the computer's processing. So... a typical default setup to date has been to have 4 to 6 'internal' discs in the computer, individually or in a striped hardware RAID 0, plus a programs/system/OS drive. There are of course many who work from NAS (Network Attached Storage) or other such means, with rather... nice... external hardware.

    For external drives, the connections (in order of preference by throughput) have been Thunderbolt, eSATA II, and USB3... with, until recently, only the first two really capable of good use in a read/write situation, with one exception: for 1080-and-below clips and exports, USB3 was enough for one of the two one-way uses: media OR exports.

    Bill Gehrke ( http://ppbm7.com/index.php/tweakers-page ), who is also frequently in the Hardware forum ( https://forums.adobe.com/community/premiere/hardware_forum ), has tested a few Samsung T1 SSDs lately and found them capable, even over USB3, of sustained throughput such that several parts of the PrPro workload for a project could sit on one and still edit very well. In addition, there are some internal drives he uses that have the speed to hold whole projects... when connected via proper means inside the computer. So I would check the Tweakers Page and the Hardware forum for his latest hardware configurations & drive recommendations for working on PrPro projects.

    Neil

  • Connecting a Dell MD3620i to VMware - best practices

    Hello community,

    I bought a Dell MD3620i with 2 x 10GBase-T Ethernet ports on each controller (2 controllers).
    My VMware environment consists of 2 ESXi hosts (each with 2 x 1GBase-T ports) and HP LeftHand storage (also 1GBase-T). The switches I have are Cisco 3750s, which have only 1GBase-T Ethernet.
    I'll replace this HP storage with the Dell storage.
    As I have never worked with Dell storage, I need your help in answering my questions:

    1. What is the best practice for connecting the VMware hosts to the Dell MD3620i?
    2. What is the process to create a LUN?
    3. Can I create multiple LUNs on a single disk group, or is the best practice one LUN per disk group?
    4. How do I configure 10GBase-T iSCSI to work on the 1 Gbit/s switch ports?
    5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts, without a switch?
    6. The old iSCSI on the HP storage is in another network; can I use vMotion to move all the VMs from one iSCSI network to the other, and then change the iSCSI IP addresses on the VMware hosts without interrupting the virtual machines?
    7. Can I combine the two iSCSI ports into one 2 Gbps interface to connect to the switch? I use two switches, so I want to connect each controller to each switch, limiting their interfaces to 2 Gbps. My question is: would the controller fail over to the other controller if its Ethernet link goes down on the switch (for example, if a single switch reboots)?

    Thanks in advance!

    TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly attached cables between the server and the SAN's iSCSI ports) that share the same subnet.

    Data corruption is not very likely if you share the same VLAN for iSCSI; however, performance and overall reliability would be affected.

    With an MD3620i, here are some configuration scenarios using the factory default subnets (and for DAS configurations I have added 4 additional subnets):

    Single switch (not recommended because the switch becomes your single point of failure):

    Controller 0:

    iSCSI port 0: 192.168.130.101

    iSCSI port 1: 192.168.131.101

    iSCSI port 2: 192.168.132.101

    iSCSI port 3: 192.168.133.101

    Controller 1:

    iSCSI port 0: 192.168.130.102

    iSCSI port 1: 192.168.131.102

    iSCSI port 2: 192.168.132.102

    iSCSI port 3: 192.168.133.102

    Server 1:

    iSCSI NIC 0: 192.168.130.110

    iSCSI NIC 1: 192.168.131.110

    iSCSI NIC 2: 192.168.132.110

    iSCSI NIC 3: 192.168.133.110

    Server 2:

    All ports plug into the single switch (obviously).

    If you only want to use 2 NICs for iSCSI, have server 1 use subnets 130 and 131, server 2 use 132 and 133, and server 3 use 130 and 131 again. This distributes the I/O load between the iSCSI ports on the SAN.

    Two switches (one VLAN for all the iSCSI ports on each switch, if VLANs are used):

    NOTE: Do NOT link the switches together. This ensures that problems occurring on one switch do not affect the other switch.

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 1

    iSCSI port 3: 192.168.133.101 -> switch 2

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 1

    iSCSI port 3: 192.168.133.102 -> switch 2

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 1

    iSCSI NIC 3: 192.168.133.110 -> switch 2

    Server 2:

    Same note on using only 2 NICs per server for iSCSI. In this configuration each server always uses both switches, so a switch failure should not take down your server's iSCSI connectivity.

    Quad switches (or 2 VLANs on each of the 2 switches above):

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 3

    iSCSI port 3: 192.168.133.101 -> switch 4

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 3

    iSCSI port 3: 192.168.133.102 -> switch 4

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 3

    iSCSI NIC 3: 192.168.133.110 -> switch 4

    Server 2:

    In this case, using 2 NICs per server, the first server uses the first 2 switches and the second server uses the second set of switches.

    Direct attach:

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> server iSCSI NIC 1 (on an example IP of 192.168.130.110)

    iSCSI port 1: 192.168.131.101 -> server iSCSI NIC 2 (on an example IP of 192.168.131.110)

    iSCSI port 2: 192.168.132.101 -> server iSCSI NIC 3 (on an example IP of 192.168.132.110)

    iSCSI port 3: 192.168.133.101 -> server iSCSI NIC 4 (on an example IP of 192.168.133.110)

    Controller 1:

    iSCSI port 0: 192.168.134.102 -> server iSCSI NIC 5 (on an example IP of 192.168.134.110)

    iSCSI port 1: 192.168.135.102 -> server iSCSI NIC 6 (on an example IP of 192.168.135.110)

    iSCSI port 2: 192.168.136.102 -> server iSCSI NIC 7 (on an example IP of 192.168.136.110)

    iSCSI port 3: 192.168.137.102 -> server iSCSI NIC 8 (on an example IP of 192.168.137.110)

    I simply left controller 1's 4 subnets on the '.102' IPs so that future changes are easier.

  • Best practices for handling conditional dialog/popup logic in ADF?

    Here's the scenario:

    I have a page that shows a popup programmatically in a backing bean method. The popup asks the user a yes/no question, and the subsequent logical path is determined by their response. However, whether the popup is shown at all in the first place is conditional. In addition, there is additional logic in the original method, apart from the popup logic, that must be executed.

    The problem with this is that ADF seems to spin the popup off on another thread and does not halt execution of the logic in the original method while waiting for the user's response. However, the desired effect is that the program pauses until the user has answered the question in the popup.

    I have not been able to figure out an elegant way to make this happen. Ideally, I think the solution is to encapsulate the logic that occurs after the popup is displayed (or not displayed) in the original method, and call it from the popup's action listener if the popup is displayed (and otherwise call it from the original method). However, the logic to be encapsulated requires some local variables that were set before the popup appeared. There is no way to get these values into the popup action listener method to pass them on to the encapsulated logic (aside from creating global static variables in the bean, which seems like a bad solution).

    Another idea I had was to put the 'show/don't show the popup' logic into a task flow. However, it seems that doing that for every single popup would make the task flow really complicated.

    Is there a recommended 'best practice' for handling this situation? It must be a common problem, and it seems I am going about it all wrong.

    However, the desired effect is that the program pauses until the user has answered the question in the popup.

    This will not happen in any web environment, including ADF.

    You will have different events for each stage of the lifecycle:

    1 - Opening the popup: popupFetchListener event

    2 - Clicking the OK or Cancel button: dialogListener event

    3 - Pressing the Esc key: popupCanceledListener event

    You can share data between these events via pageFlowScope or viewScope.

    But if you use ADF BC, you might be better off using transient attributes on the view objects.

  • What are the best practices for creating time-only data types, without the date?

    Hi gurus,

    We use a 12c DB, and we have a requirement to create a column with a time-only data type. Could someone please describe the best practices for creating this?

    I would greatly appreciate ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic operations on the time, for example, increase the time by 20%, or take an average?   If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE, or VARCHAR2 would work.

    As Blushadow said, it depends.

  • Can I work in After Effects while a file is being encoded by Media Encoder - best practices

    I just found out that After Effects files can be rendered outside the program using Media Encoder.  I had been rendering a .mov in After Effects, then opening the file in Photoshop to render the .mov to an mp4.  Not the fastest process, but it worked.  If I use Media Encoder to render my comp, would I still be able to work in After Effects and edit while the file renders in Media Encoder, or is it locked?  I also hear that Media Encoder is slower than the After Effects encoder.  What are the best practices for an After Effects comp that is rendered to an mp4 (H.264)?  Thank you...

    Depending on what you are doing, AME can be slower than using the render queue, but it's the only thing you should use to make H.264 files.

    When you send a comp to AME, a virtual copy of that composition is made and becomes the source for the render. You can continue to work on the same comp and make additional changes, but if you want those changes to appear you will need to send the comp to AME again after making them.

    Almost without exception, I work on shots, not sequences, and certainly never whole movies in After Effects. My average comp is probably seven seconds, my average film is probably 30 minutes, so I use AE to work on shots for effects that cannot be handled in my NLE. I almost always send a comp to AME to render an H.264 or a suitable production master, or both, and then I continue working in AE because I can't afford to sit and wait for a render. For almost everything, this is the more efficient workflow.

  • AWM 11g best practices

    I am building a cube in AWM 11g. It has 25 dimensions, and some of the dimension tables have over 30 million records. I partitioned the cube on the time dimension, with the lowest level being months. One dimension took about 20 hours to build. I am now loading the cube; it has been running for more than 30 hours and is still building. I don't know what needs to be done to improve performance, and I am looking for best practices in general. What is the recommended number of dimensions for a cube, and are there recommendations on how attributes or hierarchies affect the dimensions, or on how many measures to include in a cube? These cubes use OBIEE as the reporting tool.

    Thank you in advance.

    Hello

    25 dimensions is really a lot if each cube is dimensioned by every one of them. With Oracle OLAP there is no hard limit on the number of dimensions in a cube, but a typical good, efficient cube has 5 to 8 dimensions. If you analyze your business needs, you might be able to create much smaller cubes, keeping each cube to 5 to 8 dimensions, so that the loading process is much faster.

    How many members are you loading in the dimension that takes 20 hours? Are you loading from a view or a table? Also, if you do a complete refresh of the dimension with synchronization while the cube is also loaded, it takes a while to load, but 20 hours still indicates that something is wrong here.

    Cube load performance depends on several factors:

    1. The percentage you precompute.

    2. The cube's dimensionality: how many members are in each dimension.

    3. How deep the precomputation goes.

    4. How many measures you have in each cube.

    5. Whether the cube is partitioned. If yes, at what level? You may need to analyze this and arrive at the correct level for the partition, and at how many partitions.

    6. If your cubes are not partitioned, you get a serial load, which is eating all your build time.

    The more you precompute, the longer the cube load takes and the more disk space the AW takes. A well-performing cube has a good precompute balance, so that the load finishes within your load window and queries do not suffer either.

    You can get help here

    http://oracleolap.blogspot.de/

    Oracle OLAP: Best practices

    Thank you

    Brijesh

  • Separate management / VMotion Best Practice?

    We're moving from ESX 4.0 to ESXi 4.1.  Our servers have 4 physical Gigabit NICs.

    On ESX 4.0, we run 2 vSwitches:

    vSwitch0

    Service Console - vmnic0 active / vmnic3 standby

    VMkernel - vmnic3 active / vmnic0 standby

    (Separate NICs/IPs per function)

    vSwitch1

    VM port groups - vmnic1 and vmnic2 active

    (Several VLANs on shared resources)

    With the changes in ESXi, is it recommended to separate Management from VMotion as we did with ESX?  Note that we use the same subnet for these two functions.

    Personally, I would prefer combining Management and VMotion.  Wouldn't VMotion only benefit from the use of an additional NIC, especially with multiple simultaneous VMotions?  At the same time, it doesn't seem that management traffic would be impeded to the point of needing separation.  In addition, security should not be a concern since, as now, we would use the same subnet for Management and VMotion.

    Your configuration is consistent with "best practices". I prefer to separate management traffic from the VMkernel myself, even if it costs me some vMotion performance.

    ---

    MCITP: SA + EA, VMware vExpert, VCP 3/4

    http://blog.vadmin.ru

  • Working with several sequences - best practices

    Hello.

    I've just started using Adobe Premiere CS6. My goal is to create a long movie, around 2 hours, based on 30 hours of raw GoPro footage recorded on a recent trip.

    Now my question is, what is the best practice for working with many sequences/clips?

    Do you have a single heavy project file, with all the clips?

    Or do you make small chapters containing x sequences / x minutes each, and ultimately combine all of these?

    Or how would you best do it, so it's easier to work with?

    Thanks a lot for your help.

    Kind regards

    Lars

    I would make a primary sequence in your project, edit the individual scenes as separate sequences, and then nest them in the primary sequence.

    That way, you would have a single project file with all the media files (raw video, music, etc.).

    I find that easier to work with. Using several project files would get too complex and fragmented across your computer.

    I work with a lot of GoPro video, so I wonder what your setup looks like. What device do you have, and how are your computer and Premiere Pro configured?

  • NIC Teaming best practices

    Hello

    I have 1 server that has 8 gigabit ports. There will be 6 VLANs (including VLAN ID 4095) on this ESX server. Is it best practice to combine all 8 gigabit ports, link them to vSwitch0 (default), and create one port group per VLAN?

    Kind regards

    T.S.

    That is absolutely not a stupid question.

    It is only a matter of security and performance. You can use a teaming configuration in both cases (2 NICs for SC and VMotion, and 4 NICs for VLAN-tagged VMs).

    Keep in mind that this is not an absolute answer, only a best practice, and that your design is OK as it is.

    Thanks, Alberto

  • Sideloading content to the Kindle Fire - best practices

    We are looking at our options for a QA workflow for checking that our magazines work well on the Fire. We are clear on how to get our customized sideloaded app onto the Fire, but we need comments and suggestions on how best to implement a workflow for our art/production teams to readily and easily get their magazine folios onto development Fires to test in their custom applications. They will have to run through these tests repeatedly in the design environment before we release an issue to the store for sale, and setting up a complete Eclipse-based development environment for these creatives is not suitable. Does anyone have suggestions and best practices for sideloading content for review in our custom Kindle applications? The current recommendation is to upload our development content to the CD and then simply download it on the Kindle, but since there is no provisioning for the Kindle app, we don't know how we can keep our test folios private, for internal testing only, once they are on the CD. Am I wrong to worry about this? Wouldn't our test folios be visible to users of other Android devices once we upload them, even before our personalized Kindle app is approved? What we really need is an app or an inline function, like iTunes sideloading, to develop and test content directly on the Kindle Fire as we currently do with iPads.

    Clarification, suggestions, or best practices others have found useful would be most gratefully appreciated.

    The Adobe Content Viewer app exists for exactly this reason: to test your folios on your devices before release, so that you do not have to create an application each time you test your folios.

    If the Content Viewer application is not an option, then, as Alec recommended above, go ahead and create a test application ID (the e-mail under which the folios are published) and create a custom viewer test application, then distribute that application to those who want to test the app on the Kindle.

    • Go ahead and create a custom viewer application in Viewer Builder, with the above application ID as the title ID.
    • Download the .apk file
    • Share the .apk file with anyone who wants to test the app

    If your application was never live on the store, you don't have to create a test application ID. Instead, use the same application ID you want to use when the app goes live to publish your folios. Because your application has never been live on the store, only those with whom you shared the custom application (the .apk downloaded in step 2 above) will be able to view the content/folios that you are testing, in a real custom viewer application.

  • 1 RoboHelp project, 2 users - best practices

    Hello

    I have been the only writer at our company for more than 6 years. I finally have someone to help me; however, this introduces a new challenge. The *two* of us are going to work on the same RoboHelp project, and we will use VSS for source control. I don't know how we "share" this single RoboHelp project.

    Does anyone have best practices for working in this kind of situation?

    I have questions such as:

    • When the other person creates index keywords, what happens if I have removed files - how will this affect the added keywords?
    • When the other person creates new snippets or new user-defined variables, should she check them in immediately and let me know, so that I can do a 'get latest' and have the new snippets/variables in my project?
    • How do we manage the two of us working on the same project with respect to checking files out and in, creating new topics, etc. - what should our "workflow" be?

    Thanks in advance for ANY help/advice any of you can provide!

    I like the cautious author's rule: keep things simple and robust. This topic covers the three basic methods of sharing help authoring tasks, in order of complexity:

    1. Serial authoring. If you do not need both authors in the project at the same time, you can just take turns working on the project. Just hand the files back and forth as necessary. It is the simplest and most robust approach.

    2. Merged projects. If you need simultaneous authoring, then yes, this is a simpler and more robust approach than source control. However, it works only if you can partition your material and your work assignments into two or more clearly demarcated parts. Merged projects can be a great solution, but they do not fit all cases.

    3. Source control. If several authors need simultaneous access to the same material, then source control is the simplest answer.

    Here are a few tips and observations, based on my experience with RoboSource Control, in no particular order:

    1. Source control works best on small to medium-size projects. Larger ones may be unstable.

    2. Set it up to restrict file checkouts to one author at a time. Allowing two authors to work simultaneously on a single topic is bad news.

    3. If possible, try to work in different areas of the project that do not overlap. Remember that a single change in a topic can ripple across many related topics. (For example, if you change the file name of a topic, all the links to that topic must be changed.) If someone else has one of those topics checked out, you will not be able to complete your initial change.

    4. Back up your projects regularly, even if they are in source control.

    5. Create an administrator account to use just for that purpose. Do not use this account for ordinary authoring. Do not give everyone administrator privileges.

    6. Appoint one person as administrator. Have at least one backup administrator. These will be the people who set up user accounts, override checkouts ("I need this file, and Joe's on vacation!"), resurrect old files, resolve source control conflicts, etc.

    7. Check files in as soon as you are finished with them. Don't leave them checked out any longer than necessary.

    8. If you have large projects, your virus scanning utility can really degrade performance during certain operations, such as the initial "get" of the project files. If this is the case, you may be able to configure your antivirus program to be more forgiving of these source control activities.

    9. The help authors must stay in close communication. Let the other know what you are doing, especially if you do something drastic like moving folders. Be prepared to check something in if someone else needs it.

    10. Give a lot of thought to the structure of your project. Think through the file structure, naming conventions, etc.

    11. Some actions are more source-control-intensive than others. (Moving, deleting, or renaming folders are biggies.) Your project is vulnerable while these changes are underway. If something goes wrong before the process is completed, you can end up with a mess on your hands. For example, let's say there is a network problem while you move a folder, interrupting your connection to source control. You may find yourself with RoboHelp thinking the folder is in one place, while source control thinks it is in another. The result is broken links and missing files. Time for the administrator to step in and fix things. This is almost never a problem for small projects. It becomes a real problem for large ones.

    12. If you are getting near a deadline, DO NOT choose that time to reorganize and rename files and folders.

    13. Follow the appropriate procedure for adding a project to source control. Doing it badly can really mess you up. It is easy to add a project to RoboSource Control; I can't speak for other source control solutions.

    14. It may be necessary to rebuild your .cpd file more often than with non-source-controlled projects.

    15. Did I mention lately that you must back up your source files?

    HTH,

    G
