Assembling different tables with large amounts of data

Hello

I need to combine data from several different tables:

Example "table1", with columns: DATE, IP, TYPEN, X 1, X 2, X 3.
Target "table0", with columns DATE, IP, REFERENCE.

TYPEN from table1 is to be inserted into REFERENCE in table0, but via a function that transforms it to another value.


There are several other tables like 'table1', but with slightly different columns, which must be inserted into the same table ("table0").

The amount of data in each table is pretty huge, so the procedure must work in small pieces and efficiently.

Could I use Data Pump for this?


Thank you!

user13036557 wrote:
How can I continue with this then?

Should I delete the columns I don't need and transform the data in the first table, and then use Data Pump,

or should I just write a procedure that traverses all the rows of "table1" (in small pieces) and then inserts into "table0"?

You have two options... Please test both of them, work out the time each takes to complete, and implement the better one. A sketch of the procedural option is below.
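
For what it's worth, here is a minimal sketch of the chunked-procedure option. The names are illustrative: map_typen stands for a hypothetical function implementing the TYPEN-to-REFERENCE transformation, and the column lists are assumptions based on your description.

    create or replace procedure load_table0 as
      -- copy table1 into table0 in small, bulk-bound pieces
      cursor c is
        select "DATE" dt, ip, map_typen(typen) ref   -- transform TYPEN here
        from   table1;
      type t_tab is table of c%rowtype;
      l_rows t_tab;
    begin
      open c;
      loop
        fetch c bulk collect into l_rows limit 10000;   -- one chunk per pass
        exit when l_rows.count = 0;
        forall i in 1 .. l_rows.count
          insert into table0 ("DATE", ip, reference)
          values (l_rows(i).dt, l_rows(i).ip, l_rows(i).ref);
        commit;   -- keeps undo small; beware ORA-01555 on the open cursor
      end loop;
      close c;
    end;
    /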

Regards,
Rajesh

Tags: Database

Similar Questions

  • Advice needed on the way to store large amounts of data

    Hi guys,

    I'm not sure of the best way to make large amounts of data available to my Android application on the local device.

    For example, records of food ingredients, in the 100s?

    I have read up and managed to create a .db using this tutorial.

    http://help.adobe.com/en_US/air/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html

    However, to populate the database, do I use Flash? That kind of defeats the purpose of it. There's no point in me moving a massive set of data from Flash into a SQL database when I could just access the AS3 data directly.

    Then maybe I could create the .db with an external program? But then how do I include this .db in the APK file and deploy it to Android users' devices?

    Or maybe I could create an AS3 class initialized with an XML object and use that as a way to store the data?

    Any advice would be appreciated

    You can use any method you want to fill your SQLite database, including using external programs, (temporarily) embedding a text file with SQL, executing SQL statements from AS3 code, etc.
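
    For instance, the "text file with SQL" could be as simple as the following (table and columns are invented for illustration); execute its statements once from AS3 to populate the bundled .db:

    CREATE TABLE IF NOT EXISTS ingredients (
      id   INTEGER PRIMARY KEY,
      name TEXT NOT NULL,
      kcal REAL
    );
    INSERT INTO ingredients (name, kcal) VALUES ('Flour', 364);
    INSERT INTO ingredients (name, kcal) VALUES ('Sugar', 387);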

    Once you have filled your db, deploy it with your project:

    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile

    Cheers, - Jon-

  • JS error with large amount of data

    I'm working with a database in JS that auto-fills customer information; I have about 50,000 records. When I put this code in the PDF file it throws an error: "JavaScript error on line 1: internal error: script stack space quota is exhausted."

    I want to explore some options I could use to solve this problem. An external database is not an option for how this form will be used. Any ideas?

    Thank you

    Zach

    It's been a while since I had this problem. IIRC, the problem was that I tried to put too much code into the file at once. The workaround is to split it into several document-level scripts, which is something of a hassle, but it can work.

    Another thing you can do is go into the Acrobat prefs and change the JavaScript editor to something else, like Dreamweaver or a text editor.

    Good luck! I hope this helps.

  • Using a network location and a drive mapped to an FTP server, the login information is always lost during transfers of very large amounts of data.

    Using a network location and a drive mapped to an off-site FTP server: during the transfer of very large amounts of data the login information is always lost. The computer's power settings are configured to not do anything; I'm assuming the FTP server may enforce a timeout scenario, but is there a way for my computer and Windows to restart the file transfer?

    Hello

    Thanks for posting your question in the Microsoft Community forums.

    I see from the problem description that you have a networking problem with the FTP server.

    The question you posted would be better suited to the TechNet forums. I would post the query at the link below.

    http://social.technet.microsoft.com/forums/en/w7itpronetworking/threads

    Hope this information helps you. If you need additional help or information on Windows, I'll be happy to help you. We at Microsoft strive towards excellence.

  • Memory management when displaying large amounts of data

    Hello

    I have a requirement to display a large amount of data on the front panel in 2 tables & 7 graphs over a 100-hour period for 3 channels. Data read from the channels must be written to a binary file, then converted and displayed on the front panel for the 3 channels respectively.

    If I get 36 samples per hour after conversion, then up to 83 h (2388 samples) the data displayed in the table and graphs are fine and the samples correspond exactly.

    After 90 hours, a 45-minute lag is observed against the theoretical sample count; what could be the problem?

    I have a dual-core PXI-8108 controller with 1 GB of RAM.

    As DFGray says, there is no problem with the RAM or the display; the problem is with the conversion (a timing issue). Converting a large amount of data takes longer than converting a smaller amount, so I modified it so that each 1-second data point is converted at once. Problem solved.

    Thanks for your replies

  • Looking for ideas on how to get large amounts of row data in via APEX

    Hi all

    I am building a form that will be used to enter large amounts of row data. Only 1 or 2 columns per row, but potentially dozens or hundreds of rows.

    I was initially looking at using a tabular form, but this feels like a heavyweight method for anything more than a trivial number of rows.

    So now I'm wondering what solutions others have used?

    Theoretically, I could just provide a text box, get the user to paste in a line-delimited list, and use code to interpret it on submit (see the sketch at the end of this thread).

    Another method I've been thinking of is to get the user to save and upload a CSV file that gets automatically imported by the form.

    Is there something else? If not, can someone give me an indication of which of the above would be easier to implement?

    Thank you very much

    PT

    Hi PT,

    I would say that you need the Data Load wizard to transfer your data via a CSV file. See "17.13 Creating Applications with Data Loading Capability".

    It is available for APEX 4.0 and later releases.

    Kind regards

    Vincent

    http://vincentdeelen.blogspot.com
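
    As a footnote to the text-box idea from the question: interpreting a pasted, line-delimited list on submit only takes a few lines of PL/SQL. A minimal sketch, assuming a page item P1_DATA holding "value1,value2" rows and a hypothetical target table my_target (plain INSTR/SUBSTR keeps it compatible with APEX 4.0):

    declare
      l_clob clob := :P1_DATA;
      l_line varchar2(4000);
      l_pos  pls_integer := 1;
      l_nl   pls_integer;
    begin
      while l_pos <= length(l_clob) loop
        l_nl := instr(l_clob, chr(10), l_pos);              -- find end of line
        if l_nl = 0 then
          l_nl := length(l_clob) + 1;
        end if;
        l_line := trim(substr(l_clob, l_pos, l_nl - l_pos));
        if l_line is not null then
          insert into my_target (col1, col2)                -- hypothetical table
          values (substr(l_line, 1, instr(l_line, ',') - 1),
                  substr(l_line, instr(l_line, ',') + 1));
        end if;
        l_pos := l_nl + 1;
      end loop;
    end;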

  • OWB: I need to update the target table matching and updating on the same field

    In OWB, I am trying to update the target table with match and update on the same field; is this possible? I get a Match Merge error indicating that you cannot update and match on the same field. But in SQL my statement is:

    update table
    set RFID = 0
    where RFID = 1
    and ID_processus = 'TEST';

    How can I do this in OWB?

    I have checked, and in the latter case (the last one) it raises no warning or error, so you can go with it; I tested it and it works.

    You can check the first case (where we started) to see whether it raises a warning, and then try to run it.

  • How can I return a large amount of data from a stored procedure?

    How can I return a large amount of data from a stored procedure in an efficient way?

    For example, not using a cursor to go through all the rows and then assigning values to variables.

    Thanks in advance!

    >
    How can I return a large amount of data from a stored procedure in an efficient way?

    For example, not using a cursor to go through all the rows and then assigning values to variables.
    >
    Let the query create the returned object for you.

    Declare a cursor in a package specification that gives you the result set you want, and declare a TYPE in the package specification which is a table of that cursor's %ROWTYPE.

    Then use this type as the function's return type. Here is sample code that shows how easy it is.

    create or replace
        package pkg4
          as
            CURSOR emp_cur is (SELECT empno, ename, job, mgr, deptno FROM emp);
            type pkg_emp_table_type is table of emp_cur%rowtype;
            function get_emp(
                             p_deptno number
                            )
              return pkg_emp_table_type
              pipelined;
      end;
      / 
    
     create or replace
        package body pkg4
          as
            function get_emp(
                         p_deptno number   -- note: not referenced by emp_cur as posted
                            )
              return pkg_emp_table_type
              pipelined
              is
                v_emp_rec emp_cur%rowtype;
              begin
                  open emp_cur;
                  loop
                    fetch emp_cur into v_emp_rec;
                    exit when emp_cur%notfound;
                    pipe row(v_emp_rec);
              end loop;
              close emp_cur;   -- close the cursor so a second call does not raise ORA-06511
              return;          -- end the pipelined result set
          end;
      end;
      / 
    
    select * from table(pkg4.get_emp(20));
    
         EMPNO ENAME      JOB              MGR     DEPTNO
    ---------- ---------- --------- ---------- ----------
          7369 DALLAS     CLERK2          7902         20
          7566 DALLAS     MANAGER         7839         20
          7788 DALLAS     ANALYST         7566         20
          7876 DALLAS     CLERK           7788         20
          7902 DALLAS     ANALYST         7566         20
    

    If you are returning rows of an actual table (all columns of the table), then you don't need to create a cursor with a copy of the query; you can just declare the type as a table of the table's %ROWTYPE, like this:

     create or replace
        package pkg3
          as
            type emp_table_type
              is
                table of emp%rowtype;
            function get_emp(
                             p_deptno number
                            )
              return emp_table_type
              pipelined;
      end;
      / 
    
     create or replace
        package body pkg3
          as
            function get_emp(
                             p_deptno number
                            )
              return emp_table_type
              pipelined
              is
              begin
                  for v_rec in (select * from emp where deptno = p_deptno) loop
                    pipe row(v_rec);
              end loop;
              return;   -- end the pipelined result set
          end;
      end;
      / 
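
    The second version is queried in exactly the same way:

    select * from table(pkg3.get_emp(20));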
    
  • Is it better to transfer small amounts or large amounts of data via USB2, eSATA (or USB3)? Corrupted data?

    Not sure if this should be in performance/maintenance or hardware/drivers.

    Hello. I was wondering about USB2, eSATA and a bit about USB3. I have USB2 and eSATA on my systems.

    Someone I work with told me that there may be corrupted data if you transfer large amounts of data via USB2; it is best to break your files up to move, copy, etc., he said. My colleague told me that anything more than 30 or 40 GB starts to not transfer correctly, due to external factors or some other reason.

    Do these issues apply to eSATA or USB3? I guess not, since those methods are designed to transfer large amounts of data.

    Is this true? Is it due to hardware limitations? What is the recommended transfer size? Is it a Windows XP, Vista or 7 limit?

    Any info or links are appreciated.

    Thank you.

    I have never heard of anything like this before and have done some fairly large data moves in the past. I would recommend using the Robocopy program in Windows Vista/Windows 7 (available for Windows XP as an add-on download) to do the move instead of drag-and-drop copy/move, given that Robocopy includes a number of features and safeguards that are not present otherwise.

  • Unable to transfer amounts of data larger than 10 GB

    Original title: the maximum data transfer size?

    I recently installed an eSATA controller card in the spare PCIe x1 slot of my computer to benefit from faster data transfer to and from my Fantom GreenDrive, a 2 TB external hard drive.  When I started to copy files from that drive to a newly installed internal hard drive, I could not transfer amounts of data larger than 10 GB.  The pop-up message indicated the files were preparing to copy, and then nothing would happen.  When I copy or cut smaller amounts of data everything works fine.  Perplexed...

    I think I got the answer.  It seems that some of the files I was transferring were problematic.  When I transferred in small amounts, I was then prompted for what I wanted to do about those files.  Thanks for the reply!

  • Transporting large amounts of data from a schema in one database to another

    Hello

    We have a large amount of data to move from a schema in a 10.2.0.4 database to another database at 11.2.0.3.

    Am currently using Data Pump but it's quite slow; it has to be done in chunks.

    Also, the Data Pump files are big enough that they have to be compressed and moved over the network.

    Is there a better/quicker way?

    Have heard of transportable tablespaces but have never used them and do not know about their speed - whether they are faster than Data Pump.

    Tablespace names are the same in the two databases.

    Also, the source database is on Solaris on a Sun box, and the target database is on AIX on an IBM Power series box.

    Any ideas would be great.

    Thank you

    Edited by: user5716448 on 08-Sep-2012 03:30

    Edited by: user5716448 on 08-Sep-2012 03:31

    >
    Have heard of transportable tablespaces but have never used them and do not know about their speed - whether they are faster than Data Pump.
    >
    Speed? Just copy the data files themselves at the OS level. Of course, you still use EXPDP to export the metadata for the tablespace, but that takes just seconds.

    See and try the example from Oracle-Base
    http://www.oracle-base.com/articles/misc/transportable-tablespaces.php

    You can also run the first two steps listed there against your actual DB to see if it is eligible for transport and to see what violations there might be.
    >
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA', incl_constraints => TRUE);

    PL/SQL procedure successfully completed.

    SQL> -- The TRANSPORT_SET_VIOLATIONS view is used to check for violations.

    SELECT * FROM transport_set_violations;

    no rows selected

    SQL >
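
    For reference, the metadata export/import pair looks something like this; a minimal sketch with placeholder directory, dump-file and datafile names. (Solaris/SPARC and AIX are both big-endian, so no RMAN datafile conversion should be needed, but verify against V$TRANSPORTABLE_PLATFORM.)

    -- source: make the tablespace read only, then export only its metadata
    ALTER TABLESPACE test_data READ ONLY;

    expdp system directory=dp_dir dumpfile=tts_test_data.dmp \
      transport_tablespaces=TEST_DATA transport_full_check=y

    -- copy the dump file and the tablespace datafiles to the target, then:
    impdp system directory=dp_dir dumpfile=tts_test_data.dmp \
      transport_datafiles='/u01/oradata/TARGET/test_data01.dbf'

    -- target: make the tablespace writable again
    ALTER TABLESPACE test_data READ WRITE;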

  • Insert running slow with a large SGA and fast with a small SGA

    Hello

    We have a situation where an insert is running slow in PROD and fast in QA. Both are the same database version: Oracle 10.2.0.4 on HP-UX 11.31. To rule out the databases running on different servers as the cause, we copied our Production database onto the same server where the QA database runs and started it with the PROD init.ora, which has SGA_MAX_SIZE of 7 GB and SGA_TARGET of 6 GB. For the QA database, SGA_MAX_SIZE is 700 MB and SGA_TARGET is 600 MB. Both run on the same server and with the same data; we refreshed QA with data from PROD. If we start the QA database with the PROD init.ora, QA also behaves the same way as PROD.

    This problem occurs only with this specific insert. Here is the tkprof output for the statement. Can someone please interpret this for me? I am poor at SQL tuning :-( Why does the statement behave oddly with the PROD SGA size? Generally we would think that a larger SGA should give better performance.

    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1  56710.39   56067.75       7343  311186373          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2  56710.39   56067.76       7343  311186373          0           0

    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 27 (TEST)

    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  SEQUENCE CRDETAIL (cr=0 pr=0 pw=0 time=29 us)
          0   VIEW (cr=0 pr=0 pw=0 time=21 us)
          0    SORT GROUP BY (cr=0 pr=0 pw=0 time=20 us)
        401     HASH JOIN RIGHT SEMI (cr=23299915 pr=7343 pw=0 time=93982966 us)
        237      TABLE ACCESS BY INDEX ROWID CR_STRUCTURE_VALUES2 (cr=96 pr=0 pw=0 time=504 us)
        253       INDEX RANGE SCAN CR_STRUCTURE_VALUES2_PK (cr=4 pr=0 pw=0 time=278 us)(object id 1467582)
        841      TABLE ACCESS BY INDEX ROWID CR_COST_REPOSITORY (cr=23306003 pr=7343 pw=0 time=94546465 us)
    1317368058    NESTED LOOPS (cr=79721182 pr=7343 pw=0 time=18565176955 us)
      26912        VIEW (cr=9874 pr=7343 pw=0 time=5269231 us)
      26912         MINUS (cr=9874 pr=7343 pw=0 time=5242317 us)
      27462          SORT UNIQUE (cr=9627 pr=7329 pw=0 time=5040815 us)
     271564           TABLE ACCESS FULL CR_STRUCTURE_VALUES2 (cr=9627 pr=7329 pw=0 time=1357961 us)
        568          SORT UNIQUE (cr=247 pr=14 pw=0 time=43467 us)
       2357           TABLE ACCESS BY INDEX ROWID CR_STRUCTURE_VALUES2 (cr=247 pr=14 pw=0 time=14751 us)
       2357            INDEX RANGE SCAN CR_STRUCTURE_VALUES2_PK (cr=11 pr=14 pw=0 time=10028 us)(object id 1467582)
    1317341146       INDEX RANGE SCAN CRCR_MN_IX (cr=79711308 pr=0 pw=0 time=50420511 us)(object id 1469401)


    Rows     Execution Plan
    -------  ---------------------------------------------------
          0  INSERT STATEMENT   MODE: CHOOSE
          0   SEQUENCE OF 'CRDETAIL' (SEQUENCE)
          0    VIEW
          0     SORT (GROUP BY)
        401      HASH JOIN (RIGHT SEMI)
        237       TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'CR_STRUCTURE_VALUES2' (TABLE)
        253        INDEX   MODE: ANALYZED (RANGE SCAN) OF 'CR_STRUCTURE_VALUES2_PK' (INDEX (UNIQUE))
        841       TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'CR_COST_REPOSITORY' (TABLE)
    1317368058     NESTED LOOPS
      26912         VIEW
      26912          MINUS
      27462           SORT (UNIQUE)
     271564            TABLE ACCESS   MODE: ANALYZED (FULL) OF 'CR_STRUCTURE_VALUES2' (TABLE)
        568           SORT (UNIQUE)
       2357            TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'CR_STRUCTURE_VALUES2' (TABLE)
       2357             INDEX   MODE: ANALYZED (RANGE SCAN) OF 'CR_STRUCTURE_VALUES2_PK' (INDEX (UNIQUE))
    1317341146        INDEX   MODE: ANALYZED (RANGE SCAN) OF 'CRCR_MN_IX' (INDEX)

    ********************************************************************************

    And here is the statement in question:

    INSERT
    INTO cr_allocations_stg
      (
        "ID",
        "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "COST_ELEMENT",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "ORDER_NUMBER",
        " FUNDING_PROJECT",
        "POSTING_ORDER",
        "POSTING_COST_CENTER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        "DR_CR_ID",
        "LEDGER_SIGN",
        "QUANTITY",
        "AMOUNT",
        "MONTH_NUMBER",
        "MONTH_PERIOD",
        "GL_JOURNAL_CATEGORY",
        "AMOUNT_TYPE",
        "ALLOCATION_ID",
        "TARGET_CREDIT",
        "CROSS_CHARGE_COMPANY"
      )
    SELECT crdetail.nextval,
      "COMPANY",
      "GL_ACCOUNT",
      "COST_CENTER",
      '5253000',
      "PROFIT_CENTER" ,
      "MASTER_ORDER",
      "ORDER_NUMBER",
      "FUNDING_PROJECT",
      ' ',
      "POSTING_COST_CENTER",
      "ORIG_COST_ELEMENT",
      "ORIG_COST_CENTER",
      "ORIG_PROFIT_CENTER",
      " TRADING_PARTNER",
      "WORK_ORDER_NUMBER",
      CASE
        WHEN amount > 0
        THEN 1
        ELSE -1
      END,
      1,0,
      ROUND(amount * 0.0574000000, 2),
      month_number,
      0,
      '593',
      1 ,
      7,
      'TARGET',
      ' '
    FROM
      (SELECT "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "FUNDING_PROJECT",
        "POSTING_COST_CENTER",
        "ORDER_NUMBER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        month_number,
        0,
        SUM(amount) amount,
        SUM(quantity) quantity
      FROM CR_COST_REPOSITORY
      WHERE (amount_type    = 1 )
      AND (month_number     = 201404)
      AND ( "MASTER_ORDER" IN MASTER_ORDER
      AND EXISTS
        (SELECT 1
        FROM
          (SELECT SUBSTR(ELEMENT_VALUE, 1, DECODE(INSTR(ELEMENT_VALUE, ':'), 0, LENGTH(ELEMENT_VALUE) + 1, INSTR(ELEMENT_VALUE, ':')) - 1) AS ELEMENT
          FROM CR_STRUCTURE_VALUES2
          WHERE STRUCTURE_ID       = 2
          AND DETAIL_BUDGET        = 1
          AND STATUS               = 1
          AND UPPER(PARENT_VALUE) IN ('ELECTRIC ALL OTHER','ELECTRIC COR')
          MINUS
          SELECT SUBSTR(ELEMENT_VALUE, 1, DECODE(INSTR(ELEMENT_VALUE, ':'), 0, LENGTH(ELEMENT_VALUE) + 1, INSTR(ELEMENT_VALUE, ':')) - 1) AS ELEMENT
          FROM CR_STRUCTURE_VALUES2
          WHERE STRUCTURE_ID      = 9
          AND DETAIL_BUDGET       = 1
          AND STATUS              = 1
          AND UPPER(PARENT_VALUE) = 'A&G OH ORDER EXCLUSION'
          ) Z
        WHERE Z.ELEMENT = MASTER_ORDER
        )
      AND "GL_ACCOUNT"   <> '91081001'
      AND "COST_ELEMENT" IN COST_ELEMENT
      AND EXISTS
        (SELECT 1
        FROM CR_STRUCTURE_VALUES2 A
        WHERE A.STRUCTURE_ID = 5
        AND A.DETAIL_BUDGET  =1
        AND A.STATUS         = 1
        AND COST_ELEMENT     = A.ELEMENT_VALUE
        )
      AND "GL_ACCOUNT" NOT IN ('5100000','5325000','5327000')
      AND "SOURCE_ID"      <> '7' )
      GROUP BY "COMPANY",
        "GL_ACCOUNT",
        "COST_CENTER",
        "PROFIT_CENTER",
        "MASTER_ORDER",
        "FUNDING_PROJECT",
        "POSTING_COST_CENTER",
        "ORDER_NUMBER",
        "ORIG_COST_ELEMENT",
        "ORIG_COST_CENTER",
        "ORIG_PROFIT_CENTER",
        "TRADING_PARTNER",
        "WORK_ORDER_NUMBER",
        month_number
      )
    
    

    Would appreciate your answers on this.

    Thank you and best regards,

    Murali

    Option 1:

    You run with two different ORACLE_HOMEs, that is, potentially two different copies of the Oracle executable with different patch sets?

    Option 2:

    A change of SGA size is unlikely to directly affect the execution plan, but did it also change the size of the PGA at the same time? A change in pga_aggregate_target could affect the optimizer's choice of join mechanism (a switch from a hash semi-join to a nested loop is possible).

    Possibility 3:

    If db_file_multiblock_read_count is left unset, the default size that Oracle derives depends on db_cache_size divided by processes; so if you reduced sga_target you (implicitly or explicitly, no doubt) reduced db_cache_size, and if you did not reduce processes in the same way, then the default db_file_multiblock_read_count has been reduced. Depending on what you have done about your system stats, this could change the cost of (for example) full tablescans, which could lead to a change in execution plan.

    It would be useful to see the results of a call to EXPLAIN PLAN / dbms_xplan.display in both cases so that we can see the variation in Oracle's estimates.

    Regards

    Jonathan Lewis
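
    For instance, running something like the following in both environments and comparing the output would show where the estimates diverge (the query is just an illustrative fragment of the insert's select):

    EXPLAIN PLAN FOR
    SELECT company, gl_account, SUM(amount)
    FROM   cr_cost_repository
    WHERE  amount_type  = 1
    AND    month_number = 201404
    GROUP  BY company, gl_account;

    SELECT * FROM TABLE(dbms_xplan.display);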

  • Smart way to save large amounts of data using a circular buffer

    Hello everyone,

    I am currently getting into LabVIEW while developing a five-channel measurement system. Each "channel" will provide up to two digital inputs, up to three analog CSR inputs (sampling frequency around 4 k to 10 k per channel) and up to five analog thermocouple inputs (sampling frequency below 100 S/s). On user-determined events (such as a sudden speed drop) the system should save a TDMS file that contains one row per data channel, storing the values from n seconds before the event and with a user-specified length (for example 10 seconds before the drop in rotation speed, then with a length of 10 minutes).

    My question is how to manage these rather huge amounts of data in an intelligent way: how to get the impact case onto the hard disk without losing samples, and how to avoid dumping huge amounts of data to disk when recording the signals while there is no impact. I thought about the following:

    - use a single producer loop to do nothing but acquire the constant, high-speed data and write it into queues

    - use a consumer loop to process packets of signals when they become available, to identify impacts, and to save the data when an impact trigger fires

    - use a third loop with an event structure to provide the ability to control the VI without having to poll the front panel controls each time

    - use some kind of circular memory buffer in the consumer loop to store a certain amount of data that can be written to the hard disk.

    I hope this is the right way to do it so far.

    Now, I thought about three ways to design the circular data buffer:

    - use RAM as a buffer (queues or arrays with a limited number of entries), which is written to disk in one step when finished, while the rest of the program and the DAQ remain active

    - stream directly to the hard disk using the advanced TDMS functions, resetting the TDMS write position to go back to the first entry once a specific amount of data has been written

    - stream all data to the hard disk using TDMS streaming, splitting the file at certain times and deleting the TDMS files that contain no anomalies later, while still running.

    Regarding the first possibility, I fear there will be problems with quickly growing arrays/queues, and especially that, when backing up data from RAM to disk, my program would be stuck while it writes to disk, thereby losing samples in the DAQ loop, which I want to continue without interruption.

    Regarding the second, I have not worked much with TDMS; data gets easily damaged and I am really not sure whether the TDMS Set Next Write Position function is suited to my needs (I would need to adjust the positions for (3 analog + 2 ctr + 5 thermo) * 5 channels = 50 data rows plus timestamp in the worst case!). I'm also afraid the hard drive won't be able to write fast enough to stream all the data at once in the worst case...?

    Regarding the third option, I fear that closing one TDMS file and opening a new one to continue recording won't be fast enough to avoid losing data packets.

    What are your thoughts here? Has anyone already dealt with similar tasks? Does anyone know rough criteria for how much data can be streamed to an average-speed disk at once?

    Thank you very much

    OK, I'm reaching back four years to when I implemented this system, so be patient with me.

    We will look at having a trigger and wanting to capture N samples before the trigger and M samples after it.  The scheme is somewhat complicated, because the goal is not to "miss" samples.  We came up with this several years ago and it seems to work; there may be an easier way to do it, but never mind.

    We created two queues: one "pre-event" queue of fixed length N and an event queue of unlimited size.  We use a producer/consumer design, with state machines running each loop.  Without worrying about naming the states, let me describe how each works.

    The producer begins in its "pre-trigger" state, using Lossy Enqueue to place data in the pre-event queue.  If the trigger does not occur during this state, we stay there for the next sample.  There are a few details I forget about how we ensure that the pre-event queue is full, but skip that for now.  At some point, the trigger tips us into the post-event state.  Here we enqueue into the event queue, counting the number of items we enqueue.  When we get to M, we switch back to the pre-event state.

    On the consumer side we start in a "waiting" state, where we just ignore the two queues.  At some point, the trigger occurs, and we switch the consumer to pre-event.  It is responsible for dequeuing (and dealing with) N elements from the pre-event queue, then handling the following M from the event queue.  [Hmm, I don't remember how we knew the event queue was finished; did we count M, or did we wait until the queue was empty and the producer was back in the pre-event state?]

    There are a few "holes" in this simple explanation, some of which, I think, we filled.  For example, what happens when triggers are too close together?  One way to handle this is to not allow a trigger to be processed until the pre-event queue is full.

    Bob Schor

  • CRUD for tables with a large number of columns

    Hello

    I work for a government agency in Colombia, and we are running a survey with about 1200 variables. Our database model has few tables, but they have a large number of columns. Our database is Oracle 11g, and we do not use Oracle APEX yet. I've read about APEX and it seems useful for generating quick forms and reports. However, I would like to know if it is possible to generate CRUD pages for tables with many columns, and what the best way to do that would be.

    Thanks in advance.

    Carlos.

    Having 250 items on a single page is actually really bad for the user experience. I would probably cut it into pieces and use several pages to guide the customer through it as a workflow.

    You could also add some pop-up windows containing some elements of your table, saved on other pages. But I probably wouldn't like that either.

    Tobias

  • Updating incorrect column values in a table from a copy of the same table with the correct values

    Database is 10gR2. Had a situation last night where someone inadvertently changed column values on a couple of hundred thousand records to an incorrect value first thing in the morning and didn't let me know until later in the day. My undo retention was not large enough to create a copy of the table as it was 7 hours back with an "insert into table_2 select * from table_1 as of timestamp ..." query, so I restored the previous night's backup to another machine, recovered it to 07:00 (just before the hour he made the change), created a dblink from the production database and created a copy of the table from the restored database.

    My first thought was to simply update the production table with the correct values from the copy, using something like this:


    update mnt.workorders
    set approvalstat = (select b.approvalstat
                        from mnt.workorders a, mnt.workorders_copy b
                        where a.workordersoi = b.workordersoi)
    where exists (select *
                  from mnt.workorders a, mnt.workorders_copy b
                  where a.workordersoi = b.workordersoi)

    It wasn't the exact syntax, but you get the idea: I wanted to set the incorrect values in x columns of the production table to the correct values from the copy of the table from the restored backup. Anyway, it was (or seemed to be) working, but watching the process through OEM it was estimated at 100+ hours with full table scans, so I killed it. I ended up just inserting (copying) the rows added to production since 07:00 into the copy table with a select from the production table where <col_with_datestamp> >= 07:00, truncating the production table, then re-inserting the rows from the now-correct copy.

    Doing a post-mortem today, I replayed the scenario on the copy that I restored, trying to figure out a cleaner, quicker way to do it should the need arise again. I went and randomly changed some values in a number column (called "comappstat") in a copy of the production table, and then thought I would try the following to reset the values from the correct table:

    update (select a.comappstat, b.comappstat
            from mnt.workorders a, mnt.workorders_copy b
            where a.workordersoi = b.workordersoi -- this is a PK column
            and a.comappstat != b.comappstat)
    set b.comappstat = a.comappstat

    Although I thought the syntax was correct, I get an "ORA-00904: 'A'.'COMAPPSTAT': invalid identifier" when running this. I was trying to see where the syntax was wrong, then thought that having the subquery return a single row would be cleaner and faster anyway, so I gave up on that and instead tried this:

    update mnt.workorders_copy
    set comappstat = (select distinct
                      a.comappstat
                      from mnt.workorders a, mnt.workorders_copy b
                      where a.workordersoi = b.workordersoi
                      and a.comappstat != b.comappstat)
    where a.comappstat != b.comappstat
    and a.workordersoi = b.workordersoi

    The subquery executed on its own returns the single value 9, which is the correct value of the column in the prod table, and I want it to replace the incorrect "12" (I had updated the copy to change the comappstat value to 12 everywhere it was 9). However, when I run the whole query I get this error:

    ERROR at line 8:
    ORA-00904: "B"."WORKORDERSOI": invalid identifier

    First of all, I don't see why the update statement does not work (it's probably obvious, but I'm not seeing it).

    Secondly, is this the best approach for updating a column (or columns) that are incorrect with the columns in the same table that are correct, or is there a better way?

    I would sooner update the table than delete or truncate and re-insert, as there is an insert/update trigger I had to disable for the re-insert, and truncating the table makes the application unusable while I re-insert.

    Thank you

    Hello

    First of all, after 79 posts, you should know how to format your code.

    Your last query reads as follows:

    UPDATE
      mnt.workorders_copy
    SET
      comappstat =
      (
        SELECT DISTINCT
          a.comappstat
        FROM
          mnt.workorders a
        , mnt.workorders_copy b
        WHERE
          a.workordersoi    = b.workordersoi
          AND a.comappstat != b.comappstat
      )
    WHERE
      a.comappstat      != b.comappstat
      AND a.workordersoi = b.workordersoi
    

    This will not work for several reasons:
    The subquery defines a and b, and outside the brackets you cannot refer to a or b.
    There is no link between mnt.workorders_copy in the UPDATE and the subquery.

    If you do it this way, you should have something like this:

    UPDATE
      mnt.workorders     A      -- THIS IS THE TABLE YOU WANT TO UPDATE
    SET
      A.comappstat =
      (
        SELECT
          B.comappstat
        FROM
          mnt.workorders_copy B   -- THIS IS THE TABLE WITH THE CORRECT (OLD) VALUES
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      )
    WHERE
      EXISTS
      (
        SELECT
          B.comappstat
        FROM
          mnt.workorders_copy B
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      )
    

    The speed will not be great, since you run the subquery for each row in mnt.workorders.
    Note the EXISTS condition in the WHERE clause. You need it; otherwise you will update the unchanged values to NULL.

    I would do it like this:

    UPDATE
      (
        SELECT
          A.workordersoi
          ,A.comappstat
          ,B.comappstat           comappstat_OLD
    
        FROM
          mnt.workorders        A      -- THIS IS THE TABLE YOU WANT TO UPDATE
          ,mnt.workorders_copy  B      -- THIS IS THE TABLE WITH THE CORRECT (OLD) VALUES
    
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      ) C
    
    SET
      C.comappstat = comappstat_OLD
    ;
    

    This way you can test the subquery first and know exactly what will be updated.
    As there is no subquery executed for each line, performance should be better.

    Kind regards

    Peter
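
    Another option worth knowing here is MERGE, which reads the copy once and avoids a per-row subquery entirely; a minimal sketch against the same tables:

    merge into mnt.workorders w
    using mnt.workorders_copy c
    on (w.workordersoi = c.workordersoi)
    when matched then
      update set w.comappstat = c.comappstat
      where w.comappstat != c.comappstat;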
