Looking for ideas on how to get large amounts of line-based data in via APEX

Hi all

I am building a form that will be used to enter large amounts of data, row by row. Only 1 or 2 columns per row, but potentially dozens or hundreds of rows.

I was initially looking at using a tabular form, but this feels like a heavyweight method for anything more than an insignificant number of rows.

So now I'm wondering: what solutions have others used?

Theoretically, I could just provide a text area, get the user to paste in a newline-delimited list, and parse it in code behind the scenes on submit.

Another method I've been thinking of is to get the user to save a CSV file and upload it, so that it gets automatically imported by the form.

Is there something else? If not, can someone give me any indication of which of the above would be easier to implement?

Thank you very much

PT

Hi PT,

I would say that you need the data load wizard to transfer your data from a CSV file. See the documentation: 17.13 Creating Applications with Data Loading Capability.

It is available for APEX 4.0 and later releases.

Kind regards

Vincent

http://vincentdeelen.blogspot.com
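As an aside, the paste-and-parse option from the question (a text area holding newline-delimited rows, interpreted on submit) boils down to a few lines of parsing logic. A minimal sketch, in Python rather than the PL/SQL an APEX page process would actually use; the comma delimiter and function name are assumptions for illustration:

```python
def parse_pasted_rows(text, delimiter=","):
    """Parse a newline-delimited paste into rows of columns, skipping blank lines."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # ignore empty lines in the pasted block
        rows.append([col.strip() for col in line.split(delimiter)])
    return rows
```

The same split-on-newlines, split-on-delimiter loop translates directly into a PL/SQL page process that inserts each parsed row.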

Tags: Database

Similar Questions

  • Smart way to save large amounts of data using a circular buffer

    Hello everyone,

    I am fairly new to LabVIEW and am currently developing a five-channel measurement system. Each "channel" will provide up to two digital inputs, up to three analog inputs (sampling frequency around 4 k to 10 kS/s per channel) and up to five analog thermocouple inputs (sampling frequency lower than 100 S/s). On user-defined events (such as a sudden speed drop) the system should save a TDMS file that contains one row per data channel, storing values from n seconds before the event with a user-specified length (for example starting 10 seconds before the drop in rotation speed, with a total length of 10 minutes).

    My question is how to manage these rather huge amounts of data in an intelligent way: how to get the event data onto the hard disk without losing samples, while not dumping huge amounts of data to disk when recording signals with no impact event. I thought about the following:

    - use a single producer loop that only acquires the constant, high-speed data and writes it into queues

    - use a consumer loop to process packets of signals as they become available, identify impacts, and save data when an impact is triggered

    - use a third loop with an event structure to make it possible to control the VI without having to poll the front panel controls each time

    - use some kind of circular memory buffer in the consumer loop to store a certain number of data points that can then be written to the hard disk.

    I hope this is the right way to do it so far.

    Now, I have thought of three ways to design the circular data buffer:

    - use RAM as a buffer (queues or arrays with a limited number of entries) that is written to disk in one step when finished, while the rest of the program and the DAQ stay active

    - stream directly to the hard disk using the advanced TDMS functions, using TDMS Set Next Write Position to go back to the first entry once a specific amount of data has been written

    - stream all data to the hard disk using TDMS streaming, splitting the file at certain intervals and later deleting the TDMS files that contain no anomalies, while still running.

    Regarding the first possibility, I fear there will be problems with rapidly growing arrays/queues, and especially when it comes to backing up data from RAM to disk: my program would be stuck while it writes data to disk, losing samples in the DAQ loop, which I want to continue without interruption.

    Regarding the second, I have had a lot of trouble with TDMS; data gets corrupted easily, and I am not sure whether TDMS Set Next Write Position is suited to my needs (I would need to adjust the positions for (3 analog + 2 counter + 5 thermocouple) * 5 channels = 50 data rows plus a timestamp in the worst case!). I am also afraid the hard drive will not be able to write fast enough to stream all the data at the same time in the worst case...?

    Regarding the third option, I fear that closing a TDMS file and opening a new one to continue recording will not be fast enough to avoid losing data packets.

    What are your thoughts here? Has anyone already dealt with similar tasks? Does anyone know a rough figure for how much data one can attempt to stream to an average disk at once?

    Thank you very much

    OK, I'm reaching back four years to when I implemented this system, so be patient with me.

    Let's say we have a trigger and want to capture N samples before the trigger and M samples after it.  The scheme is somewhat complicated, because the goal is not to "miss" samples.  We came up with this several years ago and it seems to work; there may be an easier way to do it, but never mind.

    We created two queues: a fixed-length (N) "pre-event" queue and an event queue of unlimited size.  We use a producer/consumer design, with state machines running each loop.  Without worrying about naming the states, let me describe how each of them works.

    The producer begins in its "pre-trigger" state, using Lossy Enqueue to place data in the pre-event queue.  If the trigger does not occur during this state, we stay there for the next sample.  There are a few details I forget about how we ensure that the pre-event queue is full, but skip that for now.  At some point, the trigger tips us into the event state.  Here we enqueue into the event queue, counting the number of elements we enqueue.  When we reach M, we switch back to the pre-trigger state.

    On the consumer side, we start in a "waiting" state, where we simply ignore both queues.  At some point the trigger occurs, and we move the consumer into a pre-event state.  It is responsible for dequeuing (and dealing with) the N elements in the pre-event queue, then handling the M that follow in the event queue.  [Hmm, I don't remember how we knew the event queue was finished: did we count to M, or did we wait until the queue was empty and the producer was back in its pre-trigger state?]

    There are a few "holes" in this simple explanation, some of which I think we filled.  For example, what happens when triggers occur too close together?  One way to handle this is to not allow a trigger to be processed until the pre-event queue is full again.

    Bob Schor
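The pre-trigger capture scheme Bob describes (a lossy fixed-length pre-event buffer, then an event phase that collects M more samples) can be sketched outside LabVIEW. A minimal illustration in Python, where `deque(maxlen=n)` plays the role of the lossy queue; all names are illustrative:

```python
from collections import deque

def capture(stream, n_pre, m_post, is_trigger):
    """Return n_pre samples before the first trigger, the trigger sample,
    and m_post samples after it, or None if no trigger occurs."""
    stream = iter(stream)
    pre = deque(maxlen=n_pre)          # lossy pre-event buffer: oldest samples fall off
    for sample in stream:
        if is_trigger(sample):
            event = list(pre) + [sample]   # snapshot the pre-trigger history
            for _ in range(m_post):        # event phase: collect M post-trigger samples
                event.append(next(stream))
            return event
        pre.append(sample)             # pre-trigger phase: keep only the last n_pre
    return None
```

In the real system the producer and consumer run concurrently, of course; this single-loop version only shows the buffering logic.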

  • In FF 7.0.1 the new tab button on the tab strip disappeared after clicking the tab groups button; I'm looking for advice on how to get the new tab button to reappear on the tab strip.

    In Windows Vista, FF 7.0.1, I selected the tab groups button to try it out. However, when I chose the tab groups button to close that view, the new tab button on the tab strip no longer appeared. I'm requesting assistance to restore the new tab button (the '+' sign) to the right of the open tabs on the tab strip.

    You can find the new tab button showing as a '+' on the tab bar.

    You can open the Customize window and drag the new tab button (the one showing a plus sign) from the tab bar to another toolbar, and it will become a regular toolbar button like the New Tab toolbar button you had in Firefox 3 versions.

    Open the Customize window via "View > Toolbars > Customize" or "Firefox > Options > Toolbars".

    If you cannot find the new tab button, then click the 'Restore Default Set' button in the Customize window.

    If you would like the new tab button at the right end of the tab bar, then place a flexible space to the left of it.

  • Backup and restore of large amounts of data during an operating system update (persistent store)

    We all have to back up our persistent store and restore it after updating the OS using Desktop Manager.

    We store the data using IntHashtable; how can we implement this?

    While searching, I found the following example, but it only shows how to implement it for custom objects:

    http://www.BlackBerry.com/knowledgecenterpublic/livelink.exe/fetch/2000/8067/645045/8655/8656/110625...

    My doubt is how to manage Hashtables, Vectors, IntHashtables...

    You'll need to serialize your data on backup and deserialize it on restore.  It will be up to you how to convert these objects into a stream of bytes.  Take a look at the otabackuprestore demo that comes with the BlackBerry Java SDK and BlackBerry JDE for an example.
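The serialize/deserialize round trip the answer describes can be illustrated generically. A sketch in Python (the real BlackBerry code would use DataOutputStream-style writes; the length-prefixed layout here is just one assumed encoding for an int-keyed table of strings):

```python
import struct

def serialize(table):
    """Encode an int -> str mapping as big-endian, length-prefixed bytes."""
    out = struct.pack(">i", len(table))            # entry count
    for key, value in table.items():
        data = value.encode("utf-8")
        out += struct.pack(">ii", key, len(data)) + data
    return out

def deserialize(blob):
    """Rebuild the mapping from the byte stream produced by serialize()."""
    count = struct.unpack_from(">i", blob)[0]
    offset, table = 4, {}
    for _ in range(count):
        key, size = struct.unpack_from(">ii", blob, offset)
        offset += 8
        table[key] = blob[offset:offset + size].decode("utf-8")
        offset += size
    return table
```

Vectors and plain Hashtables follow the same pattern, with whatever per-element encoding suits their value types.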

  • Memory management when displaying large amounts of data

    Hello

    I have a requirement to display a large amount of data on the front panel, in a table and on graphs, over a 100-hour period for 3 channels. Data read from serial strings must be written to a binary file, then converted and displayed on the front panel for the 3 channels respectively.

    If I get 36 samples per hour after conversion, then up to 83 h (2388 samples) the data displayed in the table and graphs are fine and the samples correspond exactly.

    After 90 hours, a 45-minute lag is observed relative to the theoretical sample calculation. What could be the problem?

    I have a dual-core PXI-8108 controller with 1 GB of RAM.

    As DFGray says, there is no problem with the RAM or the display; the problem is with the conversion (a timing issue). Converting a large amount of data takes longer than converting a smaller amount, so I modified it so that each 1-second block of data points is converted at once, and the problem was solved.

    Thanks for your replies

  • Advice needed on how to store large amounts of data

    Hi guys,

    I'm not sure what the best way is to make large amounts of data available locally to my Android application.

    For example, records of food ingredients, in the hundreds?

    I have read up and managed to create a .db using this tutorial:

    http://help.adobe.com/en_US/air/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html

    However, to populate the database, do I use Flash? That kind of defeats the purpose of it. There's no point in me moving a massive array of data from Flash into an SQL database when I could just access the AS3 array data directly.

    Or maybe I could create the .db with an external program? But then how do I include this .db in the APK file and deploy it to Android users' devices?

    Or maybe I create an AS3 class with an XML object initialization and use that as a way to store the data?

    Any advice would be appreciated

    You can use any method you want to fill your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing SQL statements from AS3 code, etc.

    Once you have filled your db, deploy it with your project:

    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile

    Cheers, - Jon-
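The "external program" route Jon mentions can be as simple as a small script that builds the .db offline, after which the finished file is bundled with the app. A sketch using Python's sqlite3 module; the ingredients schema and values are made up for illustration:

```python
import sqlite3

# Build the database offline; the finished .db file then ships inside the app package
# instead of being populated at runtime.
conn = sqlite3.connect("ingredients.db")
conn.execute("CREATE TABLE IF NOT EXISTS ingredients "
             "(id INTEGER PRIMARY KEY, name TEXT, kcal REAL)")
rows = [(1, "flour", 364.0), (2, "sugar", 387.0), (3, "butter", 717.0)]
conn.executemany("INSERT OR REPLACE INTO ingredients VALUES (?, ?, ?)", rows)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM ingredients").fetchone()[0]
conn.close()
```

The bundled-database article linked above then covers copying the shipped file to a writable location on first launch.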

  • Someone created a website for my business before I joined 5 weeks ago. We do not have much information about how they did it! I know it was built with Muse. Any ideas on how to get into the back office to do some editing?


    You would need Muse and the original Muse file. There is no 'back office' in Muse like WordPress or Joomla sites have, if that's what you're looking for.

  • How to select an ID when the same ID has several rows, but I'm looking for IDs lacking a particular value

    I have this table my_table_c with the values below:

    SELECT * FROM my_table_c
    ID  GROUP_ID  GROUP_VALUE
    1   2         1
    3   3         2
    3   4         1
    5   4         1
    5   2         1
    2   2         2
    2   3         2
    2   4         1
    I'm looking for output that returns only the IDs that do not have a row with group_id 2. In other words, I want to get the IDs where group_id 2 is missing but the other group IDs are present.

    If group_id 2 is absent, that is my target ID.

    So with the values shown in the table above, I expect just ID = 3 to be returned, as the other IDs (1, 2 and 5) each have a row where group_id = 2.

    Can someone please help with a query to retrieve this result?

    select *
      from (
            select id, group_id, group_value,
                   count(case when group_id = 2 then 1 end) over (partition by id) as cnt
              from my_table_c
           )
     where cnt = 0
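The window-function approach in the answer can be checked against the question's sample data. A quick sanity check using SQLite (3.25+ for window functions) instead of Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table_c (id INT, group_id INT, group_value INT)")
conn.executemany(
    "INSERT INTO my_table_c VALUES (?, ?, ?)",
    [(1, 2, 1), (3, 3, 2), (3, 4, 1), (5, 4, 1),
     (5, 2, 1), (2, 2, 2), (2, 3, 2), (2, 4, 1)],
)
# Count rows with group_id = 2 per id; keep only ids where that count is zero.
rows = conn.execute("""
    SELECT * FROM (
        SELECT id, group_id, group_value,
               COUNT(CASE WHEN group_id = 2 THEN 1 END)
                   OVER (PARTITION BY id) AS cnt
          FROM my_table_c
    ) WHERE cnt = 0
""").fetchall()
ids = sorted({r[0] for r in rows})   # only id 3 lacks a group_id = 2 row
conn.close()
```

As expected, only ID 3 survives the `cnt = 0` filter.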

  • I have an AM5200-E5521A desktop. Looking for advice on whether its memory capacity can go up to 4 GB.

    I have an AM5200-E5521A desktop with Windows Vista. Looking for advice on memory capacity and whether I can go up to 4 GB. I am not able to find any info on the internet. I want to speed things up. Also, would the latest version of Windows speed it up?

    Yes, exactly. As I wrote, 4 x 2 GB means 4 slots of 2 GB. Each slot supports up to 2 GB.

    I suggest you buy the same memory modules. You will find a sticker on the original module. The first ten characters in the bar code are the part number: KN.2G* *. *.

  • Bought the Creative Cloud Photography plan Student and Teacher Edition (one year) and have no idea how to get and start using the programs?

    So I just bought the Creative Cloud Photography plan Student and Teacher Edition (one year) and have no idea how to get and start using the programs.

    Hello

    Go to creative.adobe.com and sign in with your Adobe ID.

    From there, you can download the applications you purchased.

    Normally, you are taken there immediately after the purchase.

  • I'm looking for a script that can list all virtual machines with NIC type E1000, with output to a CSV file.

    Hi gurus and LucD,


    The script should search multiple vCenter servers and multiple clusters, and list all VM names and power status (powered on or off) for VMs with NIC type E1000 only, no others.

    Regards,

    Nauman

    Try it like this:

    $report = @()

    foreach ($cluster in Get-Cluster) {

        foreach ($rp in Get-ResourcePool -Location $cluster) {

            foreach ($vm in (Get-VM -Location $rp |
                     Where {Get-NetworkAdapter -VM $_ | Where {$_.Type -eq "e1000"}})) {

                $report += $vm | Select @{N="VM"; E={$_.Name}},
                    @{N="vCenter"; E={$_.Uid.Split('@')[1].Split(':')[0]}},
                    @{N="Cluster"; E={$cluster.Name}},
                    @{N="ResourcePool"; E={$rp.Name}}
            }
        }
    }

    $report | Export-Csv C:\temp\report.csv -NoTypeInformation -UseCulture

  • How can I return a large amount of data from a stored procedure?

    How can I return a large amount of data from a stored procedure in an efficient way?

    For example, without using a cursor to go through all the rows and then assigning values to variables.

    Thanks in advance!

    >
    How can I return a large amount of data from a stored procedure in an efficient way?

    For example, without using a cursor to go through all the rows and then assigning values to variables.
    >
    Let the query create the return object for you.

    Declare a cursor in a package specification that gives you the desired result set. And declare a TYPE in the package specification which is a table of %ROWTYPE for that cursor.

    Then use this type as the function's return type. Here is example code that shows how easy it is.

    create or replace
        package pkg4
          as
            CURSOR emp_cur is (SELECT empno, ename, job, mgr, deptno FROM emp);
            type pkg_emp_table_type is table of emp_cur%rowtype;
            function get_emp(
                             p_deptno number
                            )
              return pkg_emp_table_type
              pipelined;
      end;
      / 
    
     create or replace
        package body pkg4
          as
            function get_emp(
                             p_deptno number
                            )
              return pkg_emp_table_type
              pipelined
              is
                v_emp_rec emp_cur%rowtype;
              begin
                  open emp_cur;
                  loop
                    fetch emp_cur into v_emp_rec;
                    exit when emp_cur%notfound;
                    pipe row(v_emp_rec);
                  end loop;
              end;
      end;
      / 
    
    select * from table(pkg4.get_emp(20));
    
         EMPNO ENAME      JOB              MGR     DEPTNO
    ---------- ---------- --------- ---------- ----------
          7369 DALLAS     CLERK2          7902         20
          7566 DALLAS     MANAGER         7839         20
          7788 DALLAS     ANALYST         7566         20
          7876 DALLAS     CLERK           7788         20
          7902 DALLAS     ANALYST         7566         20
    

    If you are returning rows of an actual table (all columns of the table), then you don't need to create a cursor with a copy of the query; you can just declare the type as a table of that table's %ROWTYPE.

     create or replace
        package pkg3
          as
            type emp_table_type
              is
                table of emp%rowtype;
            function get_emp(
                             p_deptno number
                            )
              return emp_table_type
              pipelined;
      end;
      / 
    
     create or replace
        package body pkg3
          as
            function get_emp(
                             p_deptno number
                            )
              return emp_table_type
              pipelined
              is
              begin
                  for v_rec in (select * from emp where deptno = p_deptno) loop
                    pipe row(v_rec);
                  end loop;
              end;
      end;
      / 
    
  • Re: Font Book. I chose "Restore standard fonts" by mistake. How do I undo my choice? The Cancel button is grayed out so I can't cancel it. Any suggestions? Thank you!

    Re: Font book.

    I chose "Restore standard fonts" by mistake. How do I undo my selection? The Cancel button is grayed out so I can't cancel it. Any suggestions? Thanks in advance!

    If you click 'Restore standard fonts', fonts that are not included with the OS X system are placed in a "Fonts (Removed)" folder next to the Fonts folder.

    / Library/Fonts

  • I just formatted my laptop, and every time I try to install Service Pack 1 for Windows Vista it gets installed, but when I turn the laptop on again it asks me to update again?

    I just formatted my laptop, and every time I try to install Service Pack 1 for Windows Vista it gets installed, but when I turn the laptop on again it asks me to update again. What should I do?

    Hello

    Click Start > right-click Computer > left-click Properties > and see whether you have Vista 32-bit or Vista 64-bit installed.

    It will also tell you there which Service Packs, if any, are already installed.

    Then choose the matching 'bit' download to install the Service Packs, starting with SP1.

    Vista SP1 32-bit (x86): http://www.microsoft.com/en-us/download/details.aspx?id=30

    Vista SP1 64-bit: http://www.microsoft.com/en-us/download/details.aspx?id=21299

    Vista SP2 32-bit (x86): http://www.microsoft.com/en-us/download/details.aspx?id=16468

    Vista SP2 64-bit: http://www.microsoft.com/en-us/download/details.aspx?id=17669

    And if you have any problems:

    There is a forum that Microsoft has set up for problems with Vista Service Packs. If you repost in that forum they will certainly try to help you there:

    http://social.technet.Microsoft.com/forums/en/itprovistasp/threads

    Cheers.

  • In the date picker, how can I default to selecting all dates if the user has...

    In the date picker, how can I default to selecting all dates if the user does not select a date?
    Thank you
    Doug

    Doug,

    Now let's say I want everything.
    

    Could you post some sample data and the output you want to get? It would make the requirements much easier to understand.

    When you say everything, I guess you want all possible dates between date1 and date2.

    You can use something like this (from asktom.oracle.com):

    select to_date('12-jan-2009','DD-MON-YYYY') + rownum - 1
      from ALL_OBJECTS
     where rownum <= (to_date('20-jan-2009','dd-mon-yyyy') -
                      to_date('12-jan-2009','DD-MON-YYYY') + 1)
    /
    
    TO_DATE('
    ---------
    12-JAN-09
    13-JAN-09
    14-JAN-09
    15-JAN-09
    16-JAN-09
    17-JAN-09
    18-JAN-09
    19-JAN-09
    20-JAN-09
    
    9 rows selected.
    
    For your case, since you have date1 and date2...
    
    select to_date(:p12_date1,'DD-MON-YYYY') + rownum -1
      from ALL_OBJECTS
      where rownum <= (to_date(:p12_date2,'dd-mon-yyyy') -
                        to_date(:p12_date1,'DD-MON-YYYY') +1 )
    

    It should work, in my opinion... I have not tested it, though.

    Is that what you're looking for? If not, please give details.

    Thank you
    Rajesh.
