Storage best practices or advice

I'm developing a Java BlackBerry application and am currently looking at local storage. The app will be for public use.

Does anyone have advice on when it's best to use the SD card versus the persistent store? Is there a good best-practices document or advice out there somewhere?

This application will have two types of data: preferences, and what would be data files on a desktop system.

I've read up on the persistent store, and it seems like a good option because of the level of control it gives over the data for synchronization and such. But I've noticed that some open-source BlackBerry applications use the SD card, not the persistent store.

Since I'll be deploying the application to the general public, I know I'll be dealing with many device configurations as well as limits set by corporate IT policies (assuming those users can even install the app). Any advice on navigating these storage issues would be greatly appreciated.

Thank you!

The persistent store is fine for most cases.

If the data is very large, or must be copied to the device over the USB cable, then the SD card may be worth considering.

However, many (if not most) users do not have an SD card.
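
For the preferences side of an app like this, the persistent store API is straightforward. Below is a minimal sketch, assuming the preferences fit in a Hashtable of simple values; the store key and class name are illustrative, not from any particular app.

    import java.util.Hashtable;
    import net.rim.device.api.system.PersistentObject;
    import net.rim.device.api.system.PersistentStore;

    public class PreferencesStore {
        // Arbitrary 64-bit key; in practice use a value unique to your app.
        private static final long STORE_KEY = 0x8a6f2c3d5e4b1a90L;

        private static PersistentObject store =
                PersistentStore.getPersistentObject(STORE_KEY);

        /** Load the saved preferences table, creating an empty one if absent. */
        public static Hashtable load() {
            synchronized (store) {
                Hashtable prefs = (Hashtable) store.getContents();
                if (prefs == null) {
                    prefs = new Hashtable();
                    store.setContents(prefs);
                    store.commit();
                }
                return prefs;
            }
        }

        /** Persist a single preference value. */
        public static void put(String key, String value) {
            synchronized (store) {
                Hashtable prefs = load();
                prefs.put(key, value);
                store.setContents(prefs);
                store.commit(); // write through to flash
            }
        }
    }

Data stored this way survives reboots and is removed automatically when the app is uninstalled, which is one reason it is usually preferred over the SD card for settings.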

Tags: BlackBerry Developers

Similar Questions

  • Best practice? Storage of large data sets.

    I'm programming a client to access customer address information. The data is delivered from an MSSQL Server via a web service.

    What is the best practice for binding this data to ListFields? String arrays? Or an XML file that gets parsed?

    Any ideas?

    Thank you, hhessel

    These debates come up from time to time. The question normally gets raised after someone asks why BlackBerry does not support databases. There is no magic here: the right answer depends on what you do with the data. For general considerations, see the J2ME material on sun.com, or JVM issues more generally. We should all put together a reference for the BB-specific material too, LOL...

    If you really have a lot of data, there are zip libraries available, and I often use my own "compression" schemes.

    I personally go with simple types in the persistent store, and I built my own B-tree indexing system, which is persistable and, being mostly plain Java, even testable on the desktop. For strings, I store repeated prefixes only once, even though I eventually gave up storing them as single strings. So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that prefix every time. Before you worry about the overhead of reassembling the pieces: the prefix is picked up by the indexes you use to find the string anyway (so yes, you spend a little time concatenating the pieces back together, but the extra space the index needs is low).
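
    To make the prefix idea concrete, here is a minimal sketch; the class and method names are mine, not the poster's, and a real version would sit behind the B-tree index described above.

        import java.util.Hashtable;
        import java.util.Vector;

        // Stores strings as (prefixId, suffix) pairs so a long shared prefix
        // such as "http://www.pinkcat-REC" is kept only once. Illustrative only.
        public class PrefixPool {
            private Vector prefixes = new Vector();        // prefixId -> prefix
            private Hashtable prefixIds = new Hashtable(); // prefix -> Integer id

            /** Intern a prefix, returning its small integer id. */
            public int internPrefix(String prefix) {
                Integer id = (Integer) prefixIds.get(prefix);
                if (id == null) {
                    id = new Integer(prefixes.size());
                    prefixes.addElement(prefix);
                    prefixIds.put(prefix, id);
                }
                return id.intValue();
            }

            /** Rebuild the full string from its compressed form. */
            public String rebuild(int prefixId, String suffix) {
                return ((String) prefixes.elementAt(prefixId)) + suffix;
            }
        }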

  • Best practices for the storage of VMs and VHDs

    No doubt this question has been answered more than once... sorry.

    I would like to know the best practice for storing a VM and its virtual hard disk on a SAN.

    Is there any advantage to keeping them on separate LUNs? Does it make sense?

    Thank you.

    It will really depend on the virtual machine's application, but for most applications there is no problem with storing everything on the same datastore.

  • Adding secondary storage to a guest VM - best practices

    Greetings-

    I have a scenario involving adding storage to an existing guest VM, and any input would be greatly appreciated.

    My implementation: four ESXi 4.1 Update 1 hosts and vCenter 4.1 Update 1, Enterprise Edition.

    Currently, I have a Windows 2008 R2 guest VM installed on a 256 GB datastore (DS01) with a 1 MB block size. The C: drive is 60 GB. Recently, we added approximately 2 TB (1.86 TB) of storage to an existing HP P2000 G3 MSA; I should add that this storage is for this particular virtual machine. The additional storage will house user data files (doc, PDF, etc.) only; no other operating systems or applications will be installed.

    My thought was to create a datastore (e.g. DS02) with a block size of 8 MB and set its size to the 1.86 TB maximum. From there, I would add a second virtual disk to the guest VM (via the virtual machine settings), specifying the "DS02" datastore, and then initialize the disk in the Windows 2008 R2 Disk Manager.

    Is my reasoning correct? Would this be 'best practice', or is there a better approach?

    Any input would be greatly appreciated.

    THX in advance

    Joe

    None of the image-based backup solutions back up an RDM in physical compatibility mode. To include an RDM in the backup, you must configure virtual compatibility mode for it.

    Please keep in mind that the datastore containing the virtual machine's base configuration must have an appropriate block size, given that snapshot files are created on the default datastore.

    André

  • Request for advice: generally speaking, what is the best practice for managing a paid and a free version of an application?

    Hi all

    I recently finished my first Cascades app, and now I aspire to build a more feature-rich application that I can sell for a reasonable price. My question is how to manage the code base for both applications. If anyone has any "best practices", I would like to hear your opinion.

    Do you use a revision control system? That should be a prerequisite...

    How different will the versions of the application be?

    Generally, if you have two versions that differ only in having a handful of features disabled in the free version, you should use exactly the same code base. You could even make the packaging (build command) the only difference, for example by setting an environment variable in one of them that is checked at startup to turn on the paid options, as in the sketch below.
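
    Cascades itself is C++/QML, but the build-flag idea is language-neutral. Here is a minimal Java-flavored sketch of the pattern, where PAID_VERSION is a hypothetical variable that only the paid build's packaging step would set:

        // Sketch: gate paid features on a flag set by the build, not by code forks.
        public class Edition {
            private static final boolean PAID =
                    "1".equals(System.getenv("PAID_VERSION"));

            public static boolean isPaid() {
                return PAID;
            }
        }

        // At startup: if (Edition.isPaid()) enablePaidFeatures(); else showUpgradeHint();

    The important property is that both packages come from one source tree, so fixes never need to be ported between editions.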

  • Best practices for dealing with exceptions on storage members

    We recently encountered a problem where one of our DistributedCaches was shutting down and restarting due to a RuntimeException thrown from our code (see below). As it's our own code, we have updated it to not throw a RuntimeException under any circumstances.

    I would like to know if there are any best practices for exception handling, other than catching exceptions and logging them. Should we always catch exceptions and ensure they do not propagate back into code running from the Coherence jar? Is it possible to configure Coherence so that our DistributedCaches are not terminated even when custom filters and the like throw RuntimeExceptions?


    Thank you, Aidan


    Exception below:

    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): DistributedCache terminating due to an unhandled exception: java.lang.RuntimeException

    We have reproduced your problem in-house, and it looks like aggregate filtering does not do the correct thing (i.e. catch and handle it) when faced with a runtime exception. In general, runtime exceptions should simply be caught and returned to the application without bringing down the cache server, so we will be fixing this.
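
    In the meantime, a defensive pattern is to make sure no custom filter can let a RuntimeException escape to the service thread. A sketch only; SafeFilter is a hypothetical wrapper, not part of the Coherence API:

        import com.tangosol.util.Filter;

        // Delegates to a custom filter and swallows RuntimeExceptions so they
        // cannot propagate into the Coherence service thread; a failing entry
        // is simply treated as a non-match.
        public class SafeFilter implements Filter, java.io.Serializable {
            private final Filter delegate;

            public SafeFilter(Filter delegate) {
                this.delegate = delegate;
            }

            public boolean evaluate(Object o) {
                try {
                    return delegate.evaluate(o);
                } catch (RuntimeException e) {
                    System.err.println("Filter failed: " + e); // log and continue
                    return false;
                }
            }
        }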

  • Best practices for a 10 GbE installation

    I have a new HP DL380 G7 that has four onboard 1 GbE NICs and two 10 GbE network interface cards.

    This will run vSphere 4.1 (ESXi).

    I'm looking for advice on what would be a good network configuration for this server.

    The storage will be NFS, going to a NAS server.

    I tried creating two vSwitches: one with one of the onboard NICs for the service console, and the other with a team of the 10 GbE NICs.

    VMware Support told me that in this configuration I had to make sure the service console was on a different subnet from the NFS VMkernel, or NFS traffic would try to go out the service console port.

    I tried this config, but as soon as I add a VMkernel port to the second switch, I can't ping out.

    So I thought I'd post here to ask what a best-practice configuration for this server would be.

    In the first configuration, it is difficult to say which interface will be used for NFS traffic.  Therefore, it is not a recommended configuration.

    The second configuration should be correct.

  • Best practices for managing path policies

    Hello

    I'm getting conflicting advice on best practices for path policies.

    We are on ESXi 4.0 connecting to an HP EVA8000. The HP best practices guide recommends setting the path management policy to Round Robin.

    This seems to give two active paths to the optimized controller. See: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf

    We've used some consultants, and they say that VMware best practice for this solution is to use the MRU policy, which results in a single path to the optimized controller.

    So, any idea which practice is actually best? Does it make a difference?

    TIA

    Rob.

    Always go with the storage vendor's recommendation. VMware's recommendation is based on generic array characteristics (controller, ALUA capability, failover methods, etc.). The storage vendor's recommendation is based on their performance and compatibility testing. You may want to review their recommendations carefully, however, to ensure each point is what you want.

    With the 8000, I ran with Round Robin. This is the most robust of the pathing options available to you from a failover and performance point of view, and it can provide more even performance across the ports on the storage controller.

    While I haven't done specific testing/validation, the last time I looked at the docs, HP's recommended configuration has you switching ports on every IO. This adds load to the ESX host for switching to the other ports, but HP claims their testing showed it to be the optimal configuration. It was the only setting in their recommendation I wondered about.

    If you haven't done so already, be sure to download the HP doc on configuring ESX with EVA arrays. There are several parameters you must configure besides the path policy, as well as a few scripts to help make the changes.

    Happy virtualizing!

    JP

    Please consider awarding points to useful or appropriate responses.

  • Best practices MD3000i

    Hi group.

    I'm hoping to get an idea of best practices for an MD3000i. Here is a little background on our purchase.

    MD3000i with two controllers, 15 x 600 GB 15k RPM SAS disks.

    Three Dell R710 servers with four built-in network cards, all iSCSI-offload capable. Each server is dual quad-core with 24 GB of RAM.

    We will only be attaching vSphere ESX 4.0 to the SAN.

    We will host a general mix of Windows virtual machines. One of these will be SQL 2008, which should see light to medium load. Two virtual servers will be Citrix XenApp servers. The rest will be your garden-variety domain controllers, SharePoint, web servers, and print servers.

    Basically, I think I know what I want to do, but I'm looking for validation or advice on what I might not have thought of. I've never used the MD3000i before, nor iSCSI for that matter. We have always been an EMC Fibre Channel shop.

    So, I figured one disk as a hot spare, leaving 14. This is where I need a little advice.

    Is it wise to stripe the other disks in RAID 10 for better overall performance? Should this be one RAID group, or should I think about maybe two RAID groups (6 disks and 8 disks)? RAID 10 will leave us with 4.2 TB of usable space, which is more than enough for our needs.

    Then the next question: does LUN size matter? I see no valid reason to carve up more LUNs than are really necessary. On our EMC CLARiiONs we have LUNs all over the place, but there it is useful, as we migrate and resize them all the time. That was before provisioning, and we also have non-VMware servers attached to the EMC for general SQL clustering.

    The only other thing I think would be good is to have a chunk of storage dedicated to backups. We are investigating Vizioncore for VM backups, and I'd like to store backups there as well.

    Any comments or advice appreciated!

    Thanks in advance.

    For LUN sizing, see:

    Best practices in designing LUN

    As for the number of virtual disks on the storage, I suggest creating at least two different RAID groups from the remaining disks.

    This is because I assume you have two controllers, and on the MD3000i each virtual disk can be 'active' on only a single controller.

    So with at least two different virtual disks, you can improve performance by simply setting different controllers as the preferred owner.

    For more information on the MD3000i, see also:

    http://www.delltechcenter.com/page/VMware ESX4.0andPowerVault MD3000i

    André

  • Server 2003 Best Practice

    Looking for best-practice advice, or links to it, regarding CPU and RAM settings.

    I'm creating several OS images from scratch. I was about done with my DC when I started second-guessing my config.

    I have a Dell 2900 with a Xeon E5405 (2.0 GHz quad-core) processor and 16 GB of RAM, using 8 x 74 GB SAS disks in a RAID 5 configuration for DataStore1, with Datastore2 set aside for snapshot storage.

    I'm going to run 3 or 4 virtual machines on the server to begin with (1 x 2003 domain controller, 1 x Terminal Services on Server 2003, 1 or 2 x XP), with possibly 2 or 3 additional VMs implemented thereafter, which would be Linux-based. It's possible (but unlikely) that I could experiment with Server 2008 later down the road.

    How many virtual processors should I assign to the 2003 Server? 1? 2? 4?

    Should I configure processor affinity so the DC uses CPUs 1/3 and the TS server uses 2/4?

    I was counting on 2 GB of RAM for the domain controller and 3 GB for the TS and XP boxes. Is this overkill?

    I understand that a 20 GB drive for my domain controller will be adequate. It will house NO data, IIS, or printers; essentially just an AD box for 50-ish users. I'll probably do 30 GB for the TS and XP VMs.

    Any help appreciated. I'm looking for someone with the experience to tell me 'don't do this' because 'this could happen later'.

    -Chris

    Start simple, then tweak as necessary. For the DC, 2 GB of memory is fine. In fact, given the load it will see from 50 users, it could probably do okay on 512 MB, but memory is inexpensive, so there's no reason not to give it the 2 GB. Single processor.

    The TS server should be where your load will be, so start with 4 GB of RAM and increase as needed. Create two VMDK files for this VM: a 20 GB one for the operating system, and a disk for apps and TS profiles (start with 20 GB; you can grow the latter). Start with only one CPU, and only add processors if the CPU becomes a bottleneck.

    2 GB of RAM for the XP systems is fine, 1 CPU each, and 30 GB for the disks is good.

    Multiple processors on a virtual machine can hurt performance as often as help it, so don't use them unless you need to.

    Don't bother with processor affinity; it is an unnecessary complication on an ESX implementation that has more than enough power to handle the virtual machines you've specified.

    What not to do: place an Exchange or SQL VM on this ESX host's local datastore. You're running one large RAID 5 LUN, which gives a lot of space but poorer performance; that's no problem for the virtual machines you have, but Exchange and SQL need high I/O, so those VMs are best placed on RAID 1+0 LUNs, preferably on SAN storage.

  • Data warehousing question / best practices

    I've been given the task of copying a few tables from our production database to a data warehousing database on a nightly (once per day) basis. The number of tables will grow over time; currently it is 10. I am interested not only in getting the task done, but in doing it according to best practices. Here's what I came up with:

    (1) Drop the table in the destination database.
    (2) Re-create the destination table from the script SQL Developer provides when you click the "SQL" tab while viewing the table.
    (3) INSERT INTO the destination table from the source table over a database link. Note: I'm not aware of any columns in the tables themselves that could be used to filter for only the added/deleted/modified rows.
    (4) After importing the data, create the indexes and primary keys.

    My questions:
    (1) SQL Developer included the following lines when generating the table creation script:

    <table creation DDL commands>
    followed by
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"

    It generated this snippet for the table, the primary key, and each index.
    Do I need to include these in my code if they are all default values? For example, one of the indexes gets scripted as follows:

    CREATING INDEXES "XYZ". "' PATIENT_INDEX ' ON 'XYZ '. "' PATIENT ' (the ' Patient')
    -the following four lines do I need?
    PCTFREE, INITRANS 10 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE (INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645)
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 DEFAULT USER_TABLES)
    TABLESPACE "TBLSPC_IGROW".

    (2) If anyone has advice on best practices for warehousing data like this, I'm very eager to learn from your experience.

    Thanks in advance,

    Carl

    I strongly suggest not dropping and re-creating the tables every day.

    The simplest option would be to create a materialized view on the destination database that queries the source database, and to do a complete refresh of that materialized view every night. If you wanted to get more sophisticated, you could create a materialized view log on the source table and then do an incremental (fast) refresh of the materialized view.

    You can schedule the refresh either in the definition of the materialized view itself, as a separate job, or by creating a refresh group and adding one or more materialized views to it.
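
    As a concrete sketch of the complete-refresh variant (the JDBC URL, credentials, PATIENT table, and SRC_LINK database link are all hypothetical placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // One-time setup: a materialized view that fully refreshes itself
        // every night at 2am, pulling from the source over a database link.
        public class WarehouseSetup {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@warehouse-host:1521:DWH",
                        "dw_user", "secret");
                Statement stmt = conn.createStatement();
                stmt.executeUpdate(
                    "CREATE MATERIALIZED VIEW patient_mv " +
                    "REFRESH COMPLETE START WITH SYSDATE " +
                    "NEXT TRUNC(SYSDATE) + 1 + 2/24 " +
                    "AS SELECT * FROM patient@SRC_LINK");
                stmt.close();
                conn.close();
            }
        }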

    Justin

  • TDMS &amp; Diadem best practices: what happens if my mark has breaks/cuts?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    t0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending values to the channels. And if the TDMS file size goes beyond 1 GB, I create a new file and continue there. The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well. But now I need to change my system to allow data acquisition to pause/resume. In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes). I had originally considered logging two values for each data point, as XY chart data (value & timestamp). But I'm opposed to this in principle because, in my view, it fills the hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure my data can be easily opened and analyzed with it.

    My question: are there best practices for storing signals that pause/break like this? Ideally, I would just start a new record with a new start time (t0), and DIAdem would somehow "link" these signals, i.e. know that each is a continuation of the same signal.

    Of course, I should install DIAdem and play with it. But I thought I would ask the experts about best practices first, as I have no DIAdem knowledge.

    Hi josborne;

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or were you wanting to store multiple power-up sessions in the same TDMS file? The best way to manage the date/time shift is to store one waveform per channel per power-up session and use the "wf_start_time" channel property that comes along with waveform data in TDMS, whether you are wiring an orange floating-point array or a brown waveform wire to the TDMS Write.vi. DIAdem 2011 has the ability to easily access the time offset when it is stored in this channel property (assuming it is stored as a date/time and not as a DBL or a string). If you have only one power-up session per TDMS file, I would certainly also add a 'DateTime' property at the file level. If you want to store several power-up sessions in a single TDMS file, I would recommend using a separate group for each session. Make sure you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    "wf_xname"
    "wf_xunit_string"
    "wf_start_time"
    "wf_start_offset"
    "wf_increment"

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Coding "best practices" question

    I'm about to add several controls to my panel, and the source code will grow accordingly. I would be interested in advice on good ways to break the code into multiple files.

    I was thinking of having a source file (and a header file) for each control. Does this cause problems when editing and saving the .uir file? When I run the Code -> Set Target File... command, it seems to change the target file for all objects, not just the one I'm currently working on.

    At the very least, I would like to have all my callback routines in a file other than the one containing main(). Is this a good/bad idea, or does it not matter? Is there anything special I need to know here?

    I guess what I'm asking is: how much freedom do I have to put code in locations other than where the .uir editor seems to impose? Before I head down this path, I want to be sure I'm not opening a can of worms.

    Thank you.

    I'm not so comfortable talking about "best practices", maybe because I am partly a self-taught programmer.

    Nevertheless, some concepts are clear to me: you are not limited in any way in how you divide your code into separate files. Personally, I'm in the habit of grouping panels used for a consistent set of functions (e.g. all the panels for test setup, all the panels for test execution/monitoring...) into a single UIR file, with the related callbacks in a single source file, but this is not a rigid rule.

    I have a few common callback functions in a separate source file; some of them, used in virtually all of my programs, are included in my own instrument driver and attached to controls either in code or in the UIR editor.

    When you use the UIR editor, you can use the Code > Set Target File... menu command to set the source file where generated code will go. This option can be changed at any time while developing, so you could, say, place a button on a panel, set a callback routine for it, set the target file, and then generate the code for that control only (Ctrl+G, or the Code > Generate > Control Callbacks menu item). Until you change the target file, all generated code will go to the original target file, but you can move it to another source file after that.

  • (Best practices) How to store curve-fit values?

    I have two sets of data, Xreal and Xobserved, abbreviated Xr and Xo. Xreal is a data set containing sensor values from a reliable source (it's a pain to collect data for), and Xobserved is a data set containing values from a less reliable, but much lower-maintenance, sensor. I'll create a VI that takes these two data sources as input, stores them in a database (text file or CSV), and derives some estimators from that database. The output of the VI will be the best linear-fit approximation (using regression against Xreal) of the input Xobserved value.

    What are the best practices for storing Xreal and Xobserved? Also, I'm not too familiar with the best way to do this in a VI; should it take CSV files as input? How would I best format them?


    Keep things simple. Convert the array to a CSV string and write it to a text file. See the attached example.
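
    The same idea outside LabVIEW, as a rough sketch (the file layout and names are assumptions): append each (Xo, Xr) pair as one CSV line, then compute the least-squares line Xr ~ a*Xo + b from the logged pairs.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        // Sketch: log (Xobserved, Xreal) pairs to CSV and fit a line by
        // ordinary least squares.
        public class CalibrationLog {
            public static void append(String path, double xo, double xr)
                    throws IOException {
                try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
                    out.println(xo + "," + xr); // one pair per line
                }
            }

            /** Returns {a, b} such that xr is approximately a*xo + b. */
            public static double[] fit(double[] xo, double[] xr) {
                int n = xo.length;
                double sx = 0, sy = 0, sxx = 0, sxy = 0;
                for (int i = 0; i < n; i++) {
                    sx += xo[i];
                    sy += xr[i];
                    sxx += xo[i] * xo[i];
                    sxy += xo[i] * xr[i];
                }
                double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
                double b = (sy - a * sx) / n;
                return new double[] { a, b };
            }
        }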

  • Just upgraded; tips on best practices for sharing files on a Server 2008 Std.

    The domain contains about 15 machines with two domain controllers, one of which holds the data: app files, print, etc... I just upgraded from 2003 to 2008 and want advice on best practices for setting up file sharing. Basically, I want each user to have their own folder, plus a shared one. Since I'm mostly used to doing this through Windows Explorer, I'd like to know whether these shares can be set up in a better way. Also, I noticed 2008 has a Contacts feature; how can it be used? I would like to message or email users their file locations. Also, I want to set up a lower-level admin to handle the shares without digging too far into the server; not sure.

    I've read a fair amount, but I no longer like testing directly because it can cause problems. So basically: a short, clean way to manage shares using the MMC, plus how I should go about emailing users their shares from the server. Maybe also what kind of access control or permissions are suitable for documents. Also, how can I have users work from Office templates without changing the template's format?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that will support what you're asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps!

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think
