Best practice? Storage of large data sets.

I'm programming a client to access customer address information. The data live on an MSSQL Server and are delivered by a web service.

What is the best practice for holding and binding this data on the device: ListFields? String arrays? Or an XML file that is parsed?

Any ideas?

Thank you, hhessel

These debates come up from time to time. The big question usually gets raised right after someone asks why BB does not support databases. There is no magic here: it depends on what you do with the data. For general considerations, see the J2ME material on sun.com, or JVM issues more generally. We should all put together a reference of BB material too LOL...

If you really have a lot of data, there are zip libraries, and I often use my own "compression" schemes. I personally go with simple types in the persistent store and have built my own b-tree indexing system, which is also persistable and even testable under J2SE. For strings, I store repeated prefixes only once. So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that prefix every time. Before you worry about the overhead of concatenating these pieces back together, note that the prefix gets picked up by the indexes you use to find the string anyway (so yes, you need a little time to concatenate the pieces back together, but the extra space the index needs is low).
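A minimal J2SE-style sketch of the prefix idea described above; the class and member names are illustrative, not from the original post, and the persistence wiring (Persistable, b-tree index) is omitted:

    import java.util.Vector;

    // Store a shared prefix once and keep only suffixes per entry.
    public class PrefixStringTable {
        private final Vector prefixes = new Vector();   // element type: String

        // Returns the index of the prefix, adding it if it is not stored yet.
        private int internPrefix(String prefix) {
            int idx = prefixes.indexOf(prefix);
            if (idx < 0) {
                prefixes.addElement(prefix);
                idx = prefixes.size() - 1;
            }
            return idx;
        }

        // Split a full string into a (prefix index, suffix) pair for storage.
        public Object[] compress(String full, String knownPrefix) {
            if (full.startsWith(knownPrefix)) {
                int idx = internPrefix(knownPrefix);
                return new Object[] { new Integer(idx), full.substring(knownPrefix.length()) };
            }
            return new Object[] { new Integer(-1), full };
        }

        // Reassemble the full string when it is read back out of the store.
        public String expand(Object[] stored) {
            int idx = ((Integer) stored[0]).intValue();
            String suffix = (String) stored[1];
            return idx < 0 ? suffix : ((String) prefixes.elementAt(idx)) + suffix;
        }
    }

With hundreds of URLs that share "http://www.pinkcat-REC", only the suffix is persisted per entry and the prefix lives once in the table.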

Tags: BlackBerry Developers

Similar Questions

  • How to calculate daily and hourly averages for a very large data set?

    Some_Timestamp                             Parameter1    Parameter2
    01-JAN-2015 02:00:00.000000 AM -07:00      ...           ...
    01-JAN-2015 02:00:01.000000 AM -07:00      ...           ...
    01-JAN-2015 03:04:01.000000 PM -07:00      ...           ...
    01-JAN-2015 03:00:01.000046 PM -07:00      ...           ...
    02-JAN-2015 05:06:01.000236 AM -07:00      ...           ...
    02-JAN-2015 09:00:01.000026 AM -07:00      ...           ...
    03-FEB-2015 10:00:01.000026 PM -07:00      ...           ...
    04-FEB-2015 11:08:01.000026 AM -07:00      ...           ...

    We have data in a table as above, and we want to calculate daily and hourly averages over a large data set (almost 100 million rows). Is it possible to use SQL analytic functions for better performance instead of an ordinary AVG with GROUP BY? Or is there any other way to get better performance?

    Don't know if that works better, but instead of using to_char, you could use trunc. Try something like this:

    select
        trunc(some_timestamp,'DD'),
        avg(parameter1)
    from
       bigtable
    where
       some_timestamp between systimestamp-180 and systimestamp
    group by
       trunc(some_timestamp,'DD');
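    -- for the hourly average, trunc(some_timestamp,'HH24') in the select and group by should work the same way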
    

    Hope that helps,

    dhalek

  • Storage best practices or advice

    I'm trying to develop a BB Java application and am currently looking at local storage. The app will be for public use.

    Does anyone have advice on when it is best to use the SD card vs. the persistent store? Is there a good best-practices or advice document out there somewhere?

    This application will have two types of data: preferences, and what would be data files on a desktop system.

    I have read about using the persistent store, and it seems to be a good option because of the level of control over the data for synchronization and such. But I noticed that some OSS BB applications use the SD card, not the persistent store.

    Since I'm going to deploy the application to the general public, I know that I'm dealing with many configurations as well as with the limits set by company policy (assuming those users can even install the app). So any advice on navigating these storage issues would be greatly appreciated.

    Thank you!

    The persistent store is fine for most cases.

    If the transient data is very large, or must be copied to the device via the USB cable, then maybe the SD card should be considered.

    However, many or most people do not have an SD card.
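    For the preferences side, a minimal sketch of the persistent store pattern using the standard net.rim.device.api.system classes; the 64-bit key and the class name here are illustrative, not prescriptive:

        import net.rim.device.api.system.PersistentObject;
        import net.rim.device.api.system.PersistentStore;
        import java.util.Hashtable;

        public class AppPrefs {
            // Arbitrary application-specific key (hypothetical value).
            private static final long PREFS_KEY = 0x5d459971bb15ae7aL;

            public static void save(Hashtable prefs) {
                PersistentObject po = PersistentStore.getPersistentObject(PREFS_KEY);
                synchronized (po) {
                    po.setContents(prefs);
                    po.commit();
                }
            }

            public static Hashtable load() {
                PersistentObject po = PersistentStore.getPersistentObject(PREFS_KEY);
                Hashtable prefs;
                synchronized (po) {
                    prefs = (Hashtable) po.getContents();
                }
                return prefs != null ? prefs : new Hashtable();
            }
        }

    Larger file-like data is where the SD card question matters; the persistent store is a better fit for small structured state such as these preferences.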

  • Best practices for an NFS data store

    I need to create a data store on a NAS and connect it to some ESXi 5.0 servers as an NFS datastore.

    It will be used to host less frequently used virtual machines.

    What are the best practices for creating and connecting an NFS datastore, from a networking and storage point of view, in order to get the best possible performance without degrading overall network performance?

    Regards,

    Marius

    Create a new layer 2 subnet for your NFS datastores and set it up on its own vSwitch with two uplinks in an active/standby failover configuration. The uplinks should be patched into two distinct physical switches, and the subnet should have routing disabled so that NFS traffic cannot reach other parts of your network. The NFS export can be restricted to the IP address of the VMkernel port you created for NFS in the first step, or to any address on that subnet. This configuration isolates NFS traffic for performance, and provides security and redundancy. You should also consult your storage vendor's whitepapers for any vendor-specific recommendations.

    The datastores can then be made available to whichever hosts you wish, and you can use Iometer to compare IOPS and throughput to see whether they meet your expectations and requirements.

  • The graph refresh is very slow with large data sets

    When graphing large data sets in DIAdem, building the graph is slow (3 M points takes 30 sec). Fair enough; the problem, however, is that if you later make some little change to the curve, it refreshes all over again, and during this time you can't do anything else with DIAdem.

    Any way to alleviate this?

    The problem seems to be solved: restarting DIAdem brought the update time back to an acceptable level, and as far as I can tell restarting is the only thing that changed.

    I later tried the two loading parameters mentioned by AndreasK, and both performed equally well.

    I also tried remote desktop access and it works fine as well, including running DIAdem remotely (to see whether it was a graphics driver issue).

    I feel kind of silly not being able to identify what was wrong and I thank you for your help.

  • Best practices for implementing a data manipulation package

    Hello
    I would like to ask what the best approach is for data manipulation (insert, update, delete) on a single table in stored procedures:
    create one procedure with an input parameter for the action, such as 1 for insert, 2 for update and so on,
    or
    create separate procedures for each operation, e.g. pInsData for insert, pUpdData for update...

    Hello

    I propose creating a single procedure that manages all DML on the table. As you said, you can pass in a flag indicating the intended operation.
    It can be a single IN parameter that takes different values, say 'I' for insert, 'U' for update and 'D' for delete. A sketch of how a caller would pass that flag is shown below.
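    For illustration, here is how such a single procedure might be invoked over JDBC with the operation flag as the first parameter; the procedure name pManageData and its parameters are hypothetical, not from the thread:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.SQLException;

        public class DmlCaller {
            // Insert a row through the single DML procedure by passing 'I' as the action flag.
            public static void insertRow(Connection con, int id, String name) throws SQLException {
                try (CallableStatement cs = con.prepareCall("{call pManageData(?, ?, ?)}")) {
                    cs.setString(1, "I");   // 'I' = insert, 'U' = update, 'D' = delete
                    cs.setInt(2, id);
                    cs.setString(3, name);
                    cs.execute();
                }
            }
        }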

    Thank you
    Ankur

  • [Beginner] Best practices for Backing Bean, data binding

    Hello world


    I took an Oracle course in November, 'Oracle Fusion Middleware 11g: Creating Applications with ADF I'.

    Awesome, now I know what the different parts of the framework are.

    But now I would like to go a little more in depth.

    Here's a simple example

    Login page
    Backing Bean
    Session Bean
    Read-only View object (the selection view)
    Application module

    We have a user name and a password in a table, but the password column is encrypted; in fact, it stores the checksum of the password.

    Here's what should be done.

    Login page (username, password) -> proceed button -> get the data from the VO -> compare usernames -> transform the password from the login page (MD5) -> compare it with the password from the database ->
    assign params to the session bean -> redirect to the Connexion2 page.

    Here's what I currently have:

    I have an AM impl class (Java) with a doLogin(String username, String password) method that returns a String.

    This method is exposed to the UI via a client interface.

    Here are my questions.

    Where should I check and transform all these params: in the read-only VO via a custom Java class that extends ViewRowImpl, or in the AM impl Java class?

    And where do I have to instantiate the session bean? In the backing bean, I guess.

    Wouldn't it be better to call the client interface from the backing bean, and then instantiate the session bean, with the params returned from the AM, in that backing bean?

    I have so many questions that I don't know where to start. :-(

    Thanks in advance for your help.

    Senecaux Davis

    Hello

    If you want to keep the information for the duration of the session, create a managed bean and configure it with session scope. If you need to access it, you can use EL directly, or from Java in another managed bean (or backing bean), whereby the bean gets instantiated. You can also use managed properties to pass the reference of the session-scoped bean into a backing bean.

    As for the location of the method, you should use a View object to query the user table by just the user name. You read the encrypted password (assuming a single user entry is found) and compare it with the provided password. You return a true/false value and use an OperationBinding in a managed bean (action method) to access the client interface method through the ADF binding layer:

    OperationBinding operation = (OperationBinding) bindings.get("login");
    operation.getParamsMap().put("username", username);
    operation.getParamsMap().put("passwd", pw);

    Boolean result = (Boolean) operation.execute();

    ...
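    For the MD5 step mentioned in the question, a minimal sketch of hashing the submitted password before comparing it with the stored checksum; the class name and the hex encoding are illustrative assumptions, not part of the ADF API:

        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class PasswordHasher {
            // Hash the submitted password so it can be compared with the MD5 checksum in the user table.
            public static String md5Hex(String password) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] digest = md.digest(password.getBytes());
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) {
                    sb.append(String.format("%02x", b));
                }
                return sb.toString();
            }
        }

    Inside doLogin you would then compare PasswordHasher.md5Hex(enteredPassword) with the value read from the VO row.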

    Frank

  • JDBC - fetch a very big data set (35 GB table)

    Hello

    I'm trying to fetch a very large data set, and the desired behaviour is that it keeps running without an out-of-memory exception (that is, it's fine if it takes days, but it should not blow up with an out-of-memory error).
    The task runs on a 64-bit computer with 16 processors.

    For testing purposes, I used a table 35 GB in size (of course this would not be our typical fetch, but it is a good way to stress JDBC).

    ??? Java exception occurred:
    java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Unknown Source)
        at java.util.Vector.ensureCapacityHelper(Unknown Source)
        at java.util.Vector.addElement(Unknown Source)
        at com.mathworks.toolbox.database.fetchTheData.dataFetch(fetchTheData.java:737)

    Error in ==> cursor.fetch at 114
    dataFetched = ...
        dataFetch(fet, resultSetMetaData, p.NullStringRead, tmpNullNumberRead);

    In light of this problem, I added jdbc-fetch-size=999999999 to my JDBC connection string:
    'com.microsoft.sqlserver.jdbc.SQLServerDriver', 'jdbc:sqlserver://SomeDatabaseServer:1433;databaseName=SomeDatabase;integratedSecurity=true;jdbc-fetch-size=999999999;'

    A slightly different error is reported:

    ??? Java exception occurred:
    java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.Integer.toString(Unknown Source)
        at java.sql.Timestamp.toString(Unknown Source)
        at com.mathworks.toolbox.database.fetchTheData.dataFetch(fetchTheData.java:721)

    Error in ==> cursor.fetch at 114
    dataFetched = ...
        dataFetch(fet, resultSetMetaData, p.NullStringRead, tmpNullNumberRead);

    Any suggestion?

    REF:
    JDBC http://msdn.microsoft.com/en-us/library/ms378526.aspx
    32 bit vs 64 bit: http://support.microsoft.com/kb/294418

    Published by: devvvy may 6, 2011 01:10

    There is such a thing as going too far. You are never going to push that much data into memory at one time, which is what seems to be happening because of the Vector in between. The sheer number of objects that would have to be created is already staggering.

    Best is to use a result set and iterate over the results instead, processing the rows one at a time, or maybe in batches, as in the sketch below. That uses only as much memory as is needed to keep one row / one batch of rows in memory, unless the driver does some magic preloading.
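    A minimal sketch of that row-at-a-time approach in plain JDBC; the query, table and column names are placeholders, and with the SQL Server driver you may additionally need a cursor-based or adaptive-buffering connection setting for the fetch size to be honoured:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class StreamingFetch {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:sqlserver://SomeDatabaseServer:1433;databaseName=SomeDatabase;integratedSecurity=true";
                try (Connection con = DriverManager.getConnection(url);
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT id, payload FROM big_table",   // placeholder query
                             ResultSet.TYPE_FORWARD_ONLY,
                             ResultSet.CONCUR_READ_ONLY)) {
                    ps.setFetchSize(1000);                           // stream in batches instead of buffering all rows
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            processRow(rs.getLong("id"), rs.getString("payload"));
                        }
                    }
                }
            }

            // Handle one row at a time; nothing is accumulated in memory.
            private static void processRow(long id, String payload) {
            }
        }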

  • Best practices for the handling of data for a large number of indicators

    I'm looking for suggestions or recommendations on how to best manage a user interface with a 'large' number of indicators. By large I mean enough to make the block diagram big and ugly once the data processing for each indicator is added. Data must be 'unpacked' and then decoded, for example Booleans, bit-shifted binary fields, etc. The indicators are updated once per second. I'm leaning towards a method that has worked well for me before, that is, binding a network shared variable to each indicator, then using several subVIs to process the particular piece of data and write to the appropriate variables.

    I was curious what others have done in similar circumstances.

    Bill

    I highly recommend that you avoid references.  They are useful if you need to update the properties of an indicator (color, font, visibility, etc.) or when you need to decide at run time which indicator to update, but they are not a good general solution for writing values to indicators.  Do the processing in a subVI, but aggregate the data into an output cluster and then unbundle it for display.  It is more efficient (writing through references is slow), and while that won't matter at a 1 Hz refresh rate, it is still not a good practice.  It takes about the same amount of block-diagram space to build an array of references as it does to unbundle the data, so you're not saving space.  I know I sound very categorical about this; earlier in my career I took over maintenance of an application that made excessive use of references, and it made it very difficult to follow where data came from and how it got there.  (By the way, that application also maintained both a pile of references and a cluster of data, the idea being that you would update the front-panel indicator through the reference any time you changed the associated value in the data set; unfortunately someone often updated only one or the other, leading to unexpected behavior.)

  • Addition of secondary storage to a guest VM - best practices

    Greetings-

    I have a scenario involving adding storage to an existing guest VM, and any input would be greatly appreciated.

    My implementation: four ESXi 4.1 Update 1 hosts and vCenter 4.1 Update 1, Enterprise Edition.

    Currently, I have a Windows 2008 R2 guest VM installed on a 256 GB data store (DS01) with a 1 MB block size. The C: drive is 60 GB in size. Recently, we added approximately 2 TB (1.86 TB) of storage to an existing HP P2000 G3 MSA, which I am supposed to add to this particular virtual machine. This additional storage will house user data files (doc, PDF, etc.) only; no other operating systems or applications will be installed.

    My thought was to create a data store (e.g. DS02) with a block size of 8 MB and set its size to 1.86 TB (the maximum). From there I would add a second virtual disk to the guest VM (via the virtual machine settings), specifying the "DS02" data store, and then import/initialize the disk in the Windows 2008 R2 Disk Manager.

    Is my reasoning correct? Would this be 'best practice', or is there a better approach?

    Any input would be greatly appreciated.

    THX in advance

    Joe

    None of the image-based backup solutions back up an RDM in physical compatibility mode. To include an RDM in the backup, you must configure virtual compatibility mode for it.

    Please keep in mind that the data store that contains the virtual machine's base configuration must have an appropriate block size, given that the snapshot files are created on the virtual machine's default data store.

    André

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally we would like that integration to be practically real-time, but my findings so far suggest that no such option exists: any integration will end up on some kind of schedule.

    Here are the options we have identified:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and access the queue from outside Eloqua via the SOAP/REST API.
    2. Data export: set up a Data Export that is scheduled to run on demand and poll it externally via the SOAP/REST/Bulk API.
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this differs from the previous option).

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to have Eloqua push data into our MDM.

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but callback/event-based)? (something like outbound messaging in Salesforce)
    3. What limits should be considered for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get posted to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent from the cloud connector step.

    Unfortunately there isn't really anything like outbound messaging. You can have form submits send data to a server immediately (it would be a bit like integration rule collections running from form processing steps).

    See you soon,

    Ben

  • Best practices for moving 1 of 2 VMDKs to a different data store

    I have several virtual machines that commit a good amount of data on a daily basis. These virtual machines have two VMDKs: one where the operating system lives and one where the data is committed. The virtual machines are currently configured to keep both in the same data store. There is a growing need to increase the size of the VMDK where the data is stored, so I would like to put these data disks in a separate data store. What is the best practice for taking an existing virtual machine and moving just one VMDK to another data store?

    If you want to split the VMDKs (hard disks) across separate datastores, just use Storage vMotion and the "Advanced" option.

  • Best practices for large virtual disk

    I would like to add a large virtual disk (16 TB) to use as backup storage, ideally thin-provisioned so that, at least initially, a lot of the space remains free and available.  The headache is the 2 TB limit for virtual disks in ESXi 5.

    I know that you can span 2 TB virtual disks together in the operating system (Win 2008 R2, in this case).  But is it a good idea?  The underlying array is RAID 6, so I guess it's safe, but spanning across 8 disks sounds at least potentially worrisome to me.  Not a concern?

    Performance?

    RDM is less flexible and might be impossible (the RAID array is local).

    Anyone have any suggestions on the best practices here?  What would you do?

    I do not recommend using a spanned volume across 2 TB VMDK files; I have seen it go bad in several ways (snapshot problems, corruption, etc.).

    Because they are local drives anyway and you lose that flexibility either way, I recommend using RDM.

    One huge RDM for this virtual machine.

    See http://blog.davidwarburton.NET/2010/10/25/RDM-mapping-of-local-SATA-storage-for-ESXi/ as a starting point to configure it.

    As for what I would do: I've done this before and I used RDM, only for 8 TB, but still.

  • Best practices for storage of the VM and VHD

    No doubt this question has been answered more than once... sorry.

    I would like to know the best practice for storing the VM and its virtual hard disk on a SAN.

    Does it make sense, or show any advantage, to keep them on separate LUNs?

    Thank you.

    It will really depend on the virtual machine's application, but for most applications there is no problem with storing everything on the same data store.

  • Best practices for installing RAC across two data center areas?

    The data center has two distinct areas.
    In each area, we have a storage system and a RAC node.

    We will install RAC 11gR2 on ASM.

    For data, we want to use diskgroup +DATA, normal redundancy, mirrored across the two storage systems.

    For CRS and the voting files, we want to use diskgroup +CRS, normal redundancy.
    But for the +CRS diskgroup with voting files and normal redundancy, 3 LUNs are needed and we have only 2 storage systems.
    In my view, the third LUN is necessary to avoid split-brain situations.

    If we put two LUNs on storage #1 and the other one on storage #2, what will happen when storage #1 fails, meaning that two of the three disks for diskgroup +CRS are inaccessible?
    What will happen when all the hardware in area #1 fails?
    Is human intervention required at the time of the failure, and again when area #1 comes back up?

    Is there a best practice for a two-area, two-storage RAC configuration?

    Joachim

    Hello

    As far as the voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). In order to tolerate the failure of n voting files, at least 2n + 1 must be configured for the cluster (for example, 3 voting files tolerate the loss of 1).
    The problem in a stretched cluster configuration is that most installations use only two storage systems (one on each site), which means that the site hosting the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where the n + 1 voting files are configured fails, the entire cluster will go down, because Oracle Clusterware will lose the majority of the voting files.
    To avoid a complete cluster failure, Oracle supports a third voting file on a cheap, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server which belongs to the production environment.

    The white paper below shows you how to accomplish this:
    http://www.Oracle.com/technetwork/database/Clusterware/overview/grid-infra-thirdvoteonnfs-131158.PDF

    Also, with regard to the configuration of the voting files and OCR (11.2) when you use ASM: how should they be stored?
    I recommend that you read:
    {message: id = 10028550}

    Kind regards
    Levi Pereira
