Date conversion in the external table or in the real table?

Hi all

We have to load a .csv file via an external table, and a few of the columns are of the DATE data type.
Should I load them into the external table as CHAR and convert them to DATE during the actual table load?
In general, which is faster: doing the date conversion when loading through the external table, or when loading the real table?

Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit

Thank you
Rambeau

Here it is - very basic, no indexes or constraints, a single column. The input file has 99000 rows with 3 distinct values.
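The input file itself isn't shown; here is a minimal sketch of one way to generate it with UTL_FILE, assuming a writable directory object GIC and three arbitrary date values:

declare
   f utl_file.file_type;
begin
   -- write 99000 rows cycling through 3 distinct dates in YYYY-MM-DD format
   f := utl_file.fopen('GIC', 'xtest.data', 'w');
   for i in 1 .. 99000 loop
      utl_file.put_line(f, case mod(i, 3)
                              when 0 then '2010-01-01'
                              when 1 then '2010-06-15'
                              else '2010-12-31'
                           end);
   end loop;
   utl_file.fclose(f);
end;
/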

create table xtest1 (dt date);

create table xtest2 (dt date)
organization external (
   type oracle_loader
   default directory gic
   access parameters (
      records delimited by newline
      fields
      (
         DT CHAR(10) DATE_FORMAT DATE MASK "YYYY-MM-DD"
      )
   )
   location ('xtest.data')
)
;

create table xtest3 (dt varchar2(10))
organization external (
   type oracle_loader
   default directory gic
   access parameters (
      records delimited by newline
      fields
      (
         DT CHAR(10)
      )
   )
   location ('xtest.data')
)
;

create table xtest4 (dt date)
organization external (
   type oracle_loader
   default directory gic
   access parameters (
      records delimited by newline
      date_cache 0
      fields
      (
         DT CHAR(10) DATE_FORMAT DATE MASK "YYYY-MM-DD"
      )
   )
   location ('xtest.data')
)
;

declare
  v_timer number;
begin
  v_timer := DBMS_UTILITY.GET_TIME();
  INSERT INTO xtest1 SELECT dt from xtest2;
  DBMS_OUTPUT.PUT_LINE('External with date cache: Time Taken: '||to_char((DBMS_UTILITY.GET_TIME()-v_timer)/100,'999G999D99'));
  EXECUTE IMMEDIATE 'TRUNCATE TABLE xtest1';
  v_timer := DBMS_UTILITY.GET_TIME();
  INSERT INTO xtest1 SELECT to_date(dt,'YYYY-MM-DD') from xtest3;
  DBMS_OUTPUT.PUT_LINE('In SQL: Time Taken: '||to_char((DBMS_UTILITY.GET_TIME()-v_timer)/100,'999G999D99'));
  EXECUTE IMMEDIATE 'TRUNCATE TABLE xtest1';
  v_timer := DBMS_UTILITY.GET_TIME();
  INSERT INTO xtest1 SELECT dt from xtest4;
  DBMS_OUTPUT.PUT_LINE('External no date cache: Time Taken: '||to_char((DBMS_UTILITY.GET_TIME()-v_timer)/100,'999G999D99'));
  EXECUTE IMMEDIATE 'TRUNCATE TABLE xtest1';
end;
/ 

External with date cache: Time Taken:         .34
In SQL: Time Taken:         .47
External no date cache: Time Taken:         .41

External with date cache: Time Taken:         .34
In SQL: Time Taken:         .47
External no date cache: Time Taken:         .41

External with date cache: Time Taken:         .35
In SQL: Time Taken:         .47
External no date cache: Time Taken:         .42
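So in this test, letting the external table do the conversion (with its default date cache) was consistently fastest, and even with the date cache disabled the external table conversion still beat the TO_DATE conversion in SQL.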

Kind regards
Bob

Tags: Database

Similar Questions

  • How to accumulate acquired data in an array for real-time XY graph plotting

    I have current and voltage values which I would like to plot on an XY graph.

    The problem is that these are 1D signals.

    1. Please tell me how I can accumulate these values in an array, so I can have my XY trace.

    2. Is there another way to plot the acquired values on a real-time XY graph?

    (I use an NI 9205 analog module. I can see the real-time signals, i.e. time vs. current and time vs. voltage, on my front panel.

    I need voltage vs. current values in real time.)

    Can you help?

    Thank you.

    Try something like this...

    Notes:

    • If you want to simulate "as fast as possible", put a wait inside the loop.
    • As the array grows forever, you will eventually run out of memory.
    • Since the simulated x and y run at the same frequency, all points will fall on the same line.
  • External table access slow from compiled PL/SQL, quick from SQL*Plus

    I'm on Oracle Standard Edition One 12.1.0.1.0, Windows x64.  Small, simple external table queries run very slowly from compiled PL/SQL, with an 18-second delay, but the same queries run very fast from SQL*Plus, both on the same instance.  I ran DBMS Profiler and debugged the PL/SQL to confirm that it takes 18 seconds to query the file header record of an external table from PL/SQL, while the exact same query from SQL*Plus runs in 0.07 seconds.

    This seems very odd.  I searched online and OTN, but I can find no example of why this would happen between the two access methods in the same instance.  Something suspends execution of compiled PL/SQL against external tables badly enough to take 18 seconds vs 0.07 seconds from SQL*Plus.  Before buying the Oracle license I tried external table access on a trial Enterprise Edition on a Windows x64 laptop, where both the PL/SQL and SQL approaches executed just as fast (0.07 seconds in that case).  The main differences now are Standard Edition vs Enterprise Edition, and production running on a Windows x64 server.  I have no parallelism enabled in the environment.

    The external table's log file displays this informational message:

    KUP-05004: Warning: Intra source concurrency disabled because parallel select was not requested.

    I think that is just because I'm not using parallel access on the external table.  The message is the same whether querying from PL/SQL or from SQL*Plus.

    It seems to be consistent across all external tables in this instance: I tested 3 external tables and they all take almost exactly 18 seconds longer from PL/SQL than from SQL*Plus, even though the row counts of the external tables and the files they access are different.

    How can I find out what slows down the PL/SQL access method and fix it for my production programs?  I created a test case and ran it to share results:

    I create a test external table:

    -- Create table
    create table TEMP_EXT
    (
      field1 VARCHAR2(10),
      field2 VARCHAR2(10),
      field3 VARCHAR2(10)
    )
    organization external
    (
      type ORACLE_LOADER
      default directory STRATESIS_DATA_DIR
      access parameters
      (
        RECORDS DELIMITED BY '\n'
        BADFILE STRATESIS_LOG_DIR:'temp.bad'
        LOGFILE STRATESIS_LOG_DIR:'temp.log'
        NODISCARDFILE
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
        (field1, field2, field3)
      )
      location (STRATESIS_DATA_DIR:'temp.txt')
    )
    reject limit 0;

    I already have the above directories set up in the database.

    I create a file temp.txt in the above data directory. It has two rows:

    field1,field2,field3

    2field1,2field2,2field3

    I create a standalone PL/SQL procedure (not in a package, but I get the same result if I put it in a package):

    create or replace procedure tryplsql is
      l_field1 temp_ext.field1%TYPE;
      l_field2 temp_ext.field2%TYPE;
      l_field3 temp_ext.field3%TYPE;
    BEGIN
      SELECT field1, field2, field3
      INTO   l_field1, l_field2, l_field3
      FROM   temp_ext
      WHERE  field1 = 'field1';

      dbms_output.put_line(l_field1 || ',' || l_field2 || ',' || l_field3);
    end tryplsql;

    I run the PL/SQL procedure from SQL*Plus:

    SQL> exec tryplsql
    field1,field2,field3

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:17.68

    SQL> spool off;

    It takes almost 18 seconds!

    I ran the simple query from SQL*Plus and it's quick:

    SQL> select field1, field2, field3 from temp_ext where field1 = 'field1';

    FIELD1     FIELD2     FIELD3
    ---------- ---------- ----------
    field1     field2     field3

    Elapsed: 00:00:00.01

    SQL> spool off;

    Very fast: 0.01 seconds.

    I run the following block from SQL*Plus:

    SQL> DECLARE
           l_field1 temp_ext.field1%TYPE;
           l_field2 temp_ext.field2%TYPE;
           l_field3 temp_ext.field3%TYPE;
         BEGIN
           SELECT field1, field2, field3
           INTO   l_field1, l_field2, l_field3
           FROM   temp_ext
           WHERE  field1 = 'field1';

           DBMS_OUTPUT.put_line(l_field1 || ',' || l_field2 || ',' || l_field3);
         END;
         /
    field1,field2,field3

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01

    SQL> spool off

    It is also very fast.  So plain SQL, and even an anonymous PL/SQL block in SQL*Plus, are fast, but a compiled procedure is slow?

    I have a lot of packages, procedures, functions, etc. running very fast in this DB as long as there is no external table access (no 18-second delay).  I ran DBMS Profiler on several sections of code - in all cases, the call to the external tables takes 18 seconds.  I tried the PL/SQL debugger, and again the query against the external tables takes 18 seconds every time.

    Probably something obvious I'm missing, but I am confused.  Any help is appreciated.

    Oracle Support has identified the issue as a known bug in 12.1.0.1: Bug #18824125.

    The workaround until it is patched is to run the PL/SQL as SYS (not a good option), or to grant ANY DIRECTORY to the user that executes the PL/SQL.  That worked.

  • Building an array and plotting the data in real time

    Hello

    I have a project in which I am waiting for a string message arriving on my serial port that contains two parameters from a voltage-versus-position sensor.

    I then want to plot the two parameters on an XY chart as they arrive at my port, building a chart that is continuously updated with all the points that have come in on the COM port (all the readings plotted against the position readings).

    I know that to plot the two parameters against each other I must use the XY graph, and for this I have to insert my data into arrays first and then feed them to the chart.

    The problem is that the serial messages do not arrive at fixed intervals (for example one message comes now, the next may come after 1 minute, then another after half a minute, and so on), and the chart should be updated with the points as soon as they arrive (in addition to displaying the previous points too, of course).

    I don't know where to start! Can someone put me on the right track?

    Note: I have no problem with interpreting the serial data; in the end I will have two numeric values which I'll then plot against each other.

    Thank you

    One thing that I had not noticed before in your VI is that you use the wrong function when building up your array.  You should use Build Array, not Insert Into Array, which is more intended for stuff that goes into the middle of an array.  And the way you use it, you actually insert data at the beginning and not at the end.

    I don't know what you have tried and why you think that a circular buffer is not what you want.

    Take a look at a function called Data Queue PtByPt; it effectively does what you want.

    I will attach a SubVI I used.  I adapted it from something that I found.  I think I found it somewhere in LabVIEW itself, or in an example, maybe on the forums, but I can't find the original source.  And I don't see it in the comments of the VI. (If anyone knows, please comment.)

  • Moving a table within the same tablespace does not reorganize the data

    Hello.

    I am facing a problem that I did not use to have.  First of all, a description of our environment:

    We have a few large partitioned tables, and for performance our ETLs use bulk operations, append hints, parallelism and so on.  This creates several holes of unused space in the tablespaces/datafiles, as well as a kind of space leak on our drives.

    A complete fix would be to re-create the tablespaces, moving everything from one to another.  That would be impractical, because there are about 15 of them above 100 GB; the time and effort to recreate everything is not affordable for the business.

    Instead, we have a single proc that calculates the actual amount of used space (converted to blocks) and moves every object above that block_id.  Just after this operation, a dynamic shrink based on the new HWM (given that the objects have been moved) is run on the datafile, freeing disk space.  As we have one datafile per tablespace and one tablespace per schema, and we would like to keep this arrangement, we issue a single move per object, like 'ALTER TABLE ' || owner || '.' || nom_segment || ' MOVE;' (the complete query works with all segment types, such as table partitions, index partitions and subpartitions).  This moves the object, within the same tablespace, into the first free space in the file and frees space at the end of the file for the shrink.  In theory.

    This proc used to work properly.  In a 650 GB tablespace with 530 GB in use, moving only the roughly 20 GB that sit beyond the 530 GB HWM is simpler than creating a new file/TBS, and moving 20 GB is faster than moving 530 GB.

    But suddenly things changed, when some TBS refused to be shrunk.  What I found out: the MOVE command doesn't fail, it works fine and Oracle really moves the object.  But for reasons I don't know, it does not move it to the beginning of the file; it keeps the object at the end.  So the proc calculates the new HWM, but because some objects remain at the tail of the file, the shrink is done with a very high HWM, and no real space is reclaimed.

    So, the main question: how does ALTER TABLE FOO MOVE really work?  I thought it would always move the object toward the beginning of the file, thus reorganizing it, but I analyzed the last objects that gave me this problem (block_id before and after the move, compared with the empty block_ids and so on) and I actually see that they were moved to the end of the file, although there was enough free space to hold them at the beginning.

    Okay, I think I found the problem.  At first I just ran the script as posted, but then I had the good idea to improve its performance with parallelism, so I added:

    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 16;
    ALTER SESSION FORCE PARALLEL DDL PARALLEL 16;
    ALTER SESSION FORCE PARALLEL DML PARALLEL 16;

    Reverting to non-parallel execution, I could again reuse the free space at the beginning of the file, and then shrink it.
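    For reference, a minimal sketch of the revert (assuming the three FORCE settings above were the only change):

    ALTER SESSION DISABLE PARALLEL QUERY;
    ALTER SESSION DISABLE PARALLEL DDL;
    ALTER SESSION DISABLE PARALLEL DML;

    -- a serial move can now reuse free space toward the start of the file
    ALTER TABLE foo MOVE;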

    Obviously, writing data in parallel mode does not reuse free space; I just forgot that an ALTER TABLE MOVE is also a data write operation.  I feel a bit ridiculous, caught in the very trap I was trying to fix.

    Thank you all for the comments and advice.

  • FDMEE data import error: No periods have been identified for loading the data into table 'AIF_EBS_GL_BALANCES_STG'

    Hi experts,

    I tried to load data from EBS into HFM via FDMEE.

    While running the import step of the data load rule, I encountered an error.

    2014-11-21 06:09:18,601 INFO  [AIF]: FDMEE process start, process ID: 268
    2014-11-21 06:09:18,601 INFO  [AIF]: FDMEE logging level: 4
    2014-11-21 06:09:18,601 INFO  [AIF]: FDMEE log file: D:\fdmee\outbox\logs\TESTING_268.log
    2014-11-21 06:09:18,601 INFO  [AIF]: User: admin
    2014-11-21 06:09:18,601 INFO  [AIF]: Location: Testing_loc (Partitionkey:3)
    2014-11-21 06:09:18,601 INFO  [AIF]: Period name: OCT (period key: 31/10/14 12:00 AM)
    2014-11-21 06:09:18,601 INFO  [AIF]: Category name: Actual (category key: 1)
    2014-11-21 06:09:18,601 INFO  [AIF]: Rule name: Testing_dlr (rule ID: 8)
    2014-11-21 06:09:19,877 INFO  [AIF]: Jython version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2014-11-21 06:09:19,877 INFO  [AIF]: Java platform: java1.6.0_37
    2014-11-21 06:09:19,877 INFO  [AIF]: Log file encoding: UTF-8
    2014-11-21 06:09:21,368 INFO  [AIF]: - START IMPORT STEP -
    2014-11-21 06:09:24,544 FATAL [AIF]: Error in CommData.insertImportProcessDetail
    Traceback (most recent call last): File "<string>", line 2672, in insertImportProcessDetail
    RuntimeError: No periods have been identified for loading the data into table 'AIF_EBS_GL_BALANCES_STG'.
    2014-11-21 06:09:24,748 FATAL [AIF]: Error launching the GL balances data load
    2014-11-21 06:09:24,752 INFO  [AIF]: FDMEE process end, process ID: 268

    I found a post related to this error, but it had no answer.

    I know I'm missing something; gurus, please help me overcome this error.

    ~ Thank you

    I managed to overcome this problem.

    It was caused by an error in the period mapping.

    In the source mapping, the period name should be defined exactly as it is displayed in EBS.

    for example: {EBS --> OCT-14}  {FDMEE source mapping --> OCT-14}

    The period names must be identical.

  • Using a procedure to display data from multiple tables

    Hi, I need help with a procedure in Oracle.

    I want to create a procedure that displays data from multiple tables in the sample schema,

    with an employee_id parameter, to display employee_id, name, job, start_date, end_date.

    I am using this query to select from more than one table:

    SELECT e.employee_id, e.first_name, j.job_title, h.start_date, h.end_date
    FROM employees e
    JOIN jobs j
      ON j.job_id = e.job_id
    JOIN job_history h
      ON h.employee_id = e.employee_id
    WHERE e.employee_id = 200;

    Thanks for the help

    Blu and Billy showed you the 'real' solution. You can display the data returned by a ref cursor in SQL Developer, too:

    http://www.thatjeffsmith.com/archive/2011/12/SQL-Developer-tip-viewing-refcursor-output/
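    For reference, a minimal sketch of that ref-cursor approach (procedure name assumed; columns taken from the question):

    CREATE OR REPLACE PROCEDURE get_emp_history (
       p_employee_id IN  employees.employee_id%TYPE,
       p_result      OUT SYS_REFCURSOR
    ) IS
    BEGIN
       OPEN p_result FOR
          SELECT e.employee_id, e.first_name, j.job_title, h.start_date, h.end_date
          FROM   employees e
          JOIN   jobs j        ON j.job_id = e.job_id
          JOIN   job_history h ON h.employee_id = e.employee_id
          WHERE  e.employee_id = p_employee_id;
    END get_emp_history;
    /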

    Yet such homework for a beginner generally doesn't expect that solution. Usually, teachers want to see you using a LOOP and dbms_output, something like:

    DECLARE
       CURSOR xy IS
          SELECT whatever
          FROM wherever;
    BEGIN
       FOR r IN xy LOOP
          dbms_output.put_line(r.col1 || '#' || r.col2);
       END LOOP;
    END;

    Of course this suggestion will spark a discussion about abusing DBMS_OUTPUT, but I stand by my position that it is fine to use it for learning the basics.

  • Oracle external tables via the NoSQL Tables API

    Hello world

    I have been experimenting with the Oracle NoSQL database recently and I have become a bit stuck with the new Tables API. I have so far successfully created external tables over data entered using 'vanilla' KV storage techniques and avro schemas (using generic and specific bindings), but data created with the Tables API seems to be another matter entirely.

    My question arises from the Formatter interface, which takes a KeyValueVersion and a KVStore. I can't translate a KeyValueVersion created with the Tables API into a primary key for retrieval (since I don't know what the key generated by the API actually looks like!) or map it onto an avro schema. The problem seems to be that the Tables API writes data in some format that can't easily be translated into a string or an integer (the external table drops lines due to unknown characters if I try to retrieve all the values in the database to see what it looks like), and trying to map it to an avro map results in the error message 'the data is not avro'.

    Scenario:

    I created a very simple table in the KV admin tool, consisting of a personId column (integer) that is the PK, plus firstName, lastName and emailAddr (all strings), and entered 5 rows successfully. What I want to do is create an external table called person that returns just those 5 rows (and any new ones I add to the table, of course). This means I first have to understand what the parentKey value must be set to in the .dat file, and how to take that key and turn it into a primary key for retrieving the row.

    Faithful old Google could not find information on how to do this (there was only one thread similar to this one, with the answer "we'll add it soon"), so I hope that someone here has managed to do it!

    Thank you

    Richard

    Hi Richard

    I understand the issue you are facing. In the current version (12.1.3.0.9) the external tables feature only works with K/V records, not with the Table model. However, in the next version (which will be GA very soon) we will support integration of external tables with Table model data as well. Please make sure that you have signed up for the release announcements so that we can inform you of the release. I apologize for the inconvenience this has caused you.

    Best

    Anuj

    Sign up for NoSQL database announcements, so we can notify you about future releases and other updates to the NoSQL database product.

  • Appending new table data to a log file

    Hi all. I am new to Oracle and I need to write new table data to a logfile on Linux as well, in order to view it live on a display screen. My first thought was to write a trigger, and after some research and googling around, I finally came to this:

    create or replace trigger foo_insert
    after insert on foo
    for each row
    declare
      f utl_file.file_type;
      s varchar2(255);
    begin
      s := :new.udate || '-' || :new.time || ' ' || :new.foo || ' ' || :new.bar || ' ' ||
           :new.xyzzy || ' ' || :new.frobozz || ' ' || :new.quux || ' ' || :new.wombat;
      f := utl_file.fopen('BLAH_BLAH', 'current.log', 'a');
      utl_file.put_line(f, s);
      utl_file.fclose(f);
    end foo_insert;

    It seems to properly append new data to the log file as new inserts occur, but opening and closing the file each time is of course not optimal.
    In the real app, new rows could be inserted every second or two. How can I optimize this? In addition, the log file will be archived and rotated every day, so there must be a way to effectively tell the Oracle trigger to reopen the file.


    Thank you!

    >
    I would like to pursue the optimization of the trigger
    >
    As Ed suggested, you need to think this through a bit further and refine the requirements.

    You said 'I am new to Oracle', so you may not realize that anything a trigger does hasn't REALLY HAPPENED yet! The transaction can still be rolled back by Oracle or by the caller. Do you want all those 'hiccups' logged too? If not, then you cannot use a trigger to do this. You need the process that causes the trigger to fire to do the logging after the data is committed.

    That requirement needs to be nailed down before we can offer solutions to the problem.

    Assuming you want the trigger to record all attempts to change the data, the best way I know to do that is to minimize the work the trigger does.
    Another fundamental principle is to follow Ed's advice and keep a clear separation between 'what' should be done and 'how' to do it.

    To minimize the trigger work, I would adapt the approach Nicosa proposed: create an AUTONOMOUS_TRANSACTION stored procedure that handles the 'how', and just have the trigger pass the data values to the stored procedure. The trigger provides data; it doesn't know, or care, what is done with the data.

    The stored procedure is then free to use queues, a table, a file, or whatever method proves to be best. You can change the method without affecting the trigger.

    A queue or a table can hold the data, but again you need to think about the requirement. Do you only need to access each piece of data once? Then you want a 'queue'. But what happens if that requirement changes tomorrow? You don't want to have to redesign the architecture.

    With a queue, once you dequeue the data it isn't there later if you want to get it again. With a table you can keep it as long as you want.

    I would start by using a table to store the data. If you use a sequence number or an 'insert_date' value, you can always query the data of interest. The table just collects data. It does not care how the data is used.

    So, using proven design principles, and knowing that the requirements are mostly unknown and may change unexpectedly, I would:

    1. Create an AUTONOMOUS_TRANSACTION stored procedure that accepts the data as parameters and writes it into a simple logging table (sketched below).
    2. Change your trigger to call the procedure from step #1.
    3. Create another procedure that performs the 'tail' query for you based on the 'insert_date' or sequence number. This query can write the data to a file or return a ref cursor that your script can use to supply the data for display.

    The approach described above keeps each step in the process relatively independent of the other steps.

    Until you put the finishing touches on the requirements, you do not want to lock in your initial design.
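    A minimal sketch of steps #1 and #2, with assumed names for the logging table and procedure:

    CREATE OR REPLACE PROCEDURE log_foo_change (p_msg IN VARCHAR2)
    IS
       PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
       INSERT INTO foo_log (log_date, msg)   -- assumed logging table
       VALUES (SYSDATE, p_msg);
       COMMIT;   -- an autonomous transaction must commit before returning
    END log_foo_change;
    /

    CREATE OR REPLACE TRIGGER foo_insert
    AFTER INSERT ON foo
    FOR EACH ROW
    BEGIN
       -- the trigger only passes the data along; the procedure decides what to do with it
       log_foo_change(:new.udate || '-' || :new.time || ' ' || :new.foo || ' ' || :new.bar);
    END foo_insert;
    /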

  • How to tell whether a table is a normal table or an external table?

    Hello


    Apart from the fact that normal tables accept DML operations while external tables are read-only, is there a way to find out whether a specified table is a regular table or an external table?

    Thanks in advance.

    Suresh.

    One possibility would be to query the ALL_EXTERNAL_TABLES data dictionary view. If there is a row in ALL_EXTERNAL_TABLES for the table you are asking about (I assume you know whether you are querying a table, a view, or a synonym), it is an external table. Otherwise, it's an ordinary table.
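    For example, a minimal sketch (owner and table name assumed):

    SELECT owner, table_name, type_name
    FROM   all_external_tables
    WHERE  owner      = 'SCOTT'
    AND    table_name = 'MY_TABLE';
    -- one row back => external table; no rows => ordinary table (or not visible to you)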

    Justin

  • Export data from the table

    Hello. Is it possible to export data from a table in Oracle using SQL*Loader? If yes, can you point me to some good examples?

    Hello

    Hello. Is it possible to export data from a table in Oracle using SQL*Loader?

    No. With SQL*Loader you can load data from external files into tables, not export it.

    spool c:\temp\empdata.txt
    sqlplus abc.sql (assumes that abc.sql runs select * from emp)
    spool off

    That cannot work, because the SPOOL statement is not recognized outside the SQL*Plus environment.

    But you can include the SPOOL statement in abc.sql like this:

    spool c:\temp\empdata.txt
    select * from emp;
    spool off
    

    Then, you just have to run the SQL script as follows:

    sqlplus  @abc.sql 
    

    However, I advise you to use Oracle SQL Developer: it is a free tool, and with it you can export a table in several formats (html, xml, csv, xls, ...).

    Here is a link to this tool:

    http://www.Oracle.com/technetwork/developer-tools/SQL-Developer/Overview/index.html

    Hope this helps.
    Best regards
    Jean Valentine

  • Adding a computed record-number column to an external table

    Hello

    I have a csv file that is loaded via an external table. My need is to give a number to each record in the file and save it in an extra column of the table. Can anyone suggest how this is possible?

    The structure of the file is:
    $cat emp.txt
    7369,SMITH,CLERK,7902,12/17/1980,800,,20
    7499,ALLEN,SALESMAN,7698,2/20/1981,1600,300,30
    7521,WARD,SALESMAN,7698,2/22/1981,1250,500,30
    7566,JONES,MANAGER,7839,4/2/1981,2975,,20
    7654,MARTIN,SALESMAN,7698,9/28/1981,1250,1400,30
    7698,BLAKE,MANAGER,7839,5/1/1981,2850,,30
    7782,CLARK,MANAGER,7839,6/9/1981,2450,,10
    7788,SCOTT,ANALYST,7566,12/9/1982,3000,,20
    7839,KING,PRESIDENT,,11/17/1981,5000,,10
    7844,TURNER,SALESMAN,7698,9/8/1981,1500,0,30
    7876,ADAMS,CLERK,7788,1/12/1983,1100,,20
    7900,JAMES,CLERK,7698,12/3/1981,950,,30
    7902,FORD,ANALYST,7566,12/3/1981,3000,,20
    7934,MILLER,CLERK,7782,1/23/1982,1300,,10
    
    --and the table structure is:
    
        CREATE TABLE TMP_emp_ext
        (
        EMPNO                                      NUMBER(4),
        ENAME                                              VARCHAR2(10),
        JOB                                                VARCHAR2(9),
        MGR                                                NUMBER(4),
        HIREDATE                                           DATE,
        SAL                                                NUMBER(7,2),
        COMM                                               NUMBER(7,2),
        DEPTNO                                             NUMBER(2)
        )
        ORGANIZATION EXTERNAL
          (  TYPE ORACLE_LOADER
             DEFAULT DIRECTORY DIR_N1
             ACCESS PARAMETERS
               ( records delimited  by newline
            fields  terminated by ','
            missing field values are null
           )
             LOCATION (DIR_N1:'emp.txt')
          )
        REJECT LIMIT UNLIMITED
        NOPARALLEL
        NOMONITORING
     /
    Now, my need is to give a number to each record... e.g. the record for 7369, SMITH should be given record No. 1; 7499, ALLEN should be record No. 2; etc. Can anyone suggest how this is possible?

    Thank you
    orausern

    T. Kyte writes that RECNUM should work in http://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:52733181746448#52977916329285 because it is SQL*Loader syntax. However, I wasn't able to make it work with Oracle 10.2.0.4. But in case of load errors, you should find the line numbers of rejected records in _XXXXX.log (in my example the French 'la ligne 2' means 'line 2'):

    erreur lors du traitement de la colonne EMPNO, la ligne 2, pour le fichier de données /tmp/emp.txt
    ORA-01722: invalid number
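    If RECNUM can't be made to work on this version, a hedged alternative is to number the records in SQL when reading the external table. Note that ROWNUM is assigned in retrieval order: for a serial read of an external table this generally follows file order, but it is not guaranteed. A minimal sketch:

    SELECT ROWNUM AS rec_no, e.*
    FROM   tmp_emp_ext e;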
    
  • Insert and update data in a table from a batch file

    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE Production 11.1.0.7.0
    AMT for 32-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production


    My input file looks like this:

    A0397990002000001
    A0459380377000075
    A1291115796000002
    C0483110026000080
    D0491114923000004
    A0348400660000000
    G0209111373-

    The separate columns look like this:

    A0397 990002 000001

    account  IDN  quantity


    I'm new to PL/SQL and having a problem updating or adding a record in a table. I don't know how to check whether a record exists in the table: update it if it does, otherwise
    insert the record.

    If the quantity is 000000 or '-', the record should be deleted. I have code in place to do this; however, I don't know how to handle the update-or-add part.

    Here is the code I have so far, and thanks for looking:

    Set serveroutput on
    create or replace directory user_dir as 'c:\dataformats\incoming\';

    DECLARE
      v_filename varchar2(100);   -- the name of the data file
      v_file_exists boolean;
      v_file_length number;
      v_block_size number;
      f utl_file.file_type;
      s varchar2(200);
      lineString varchar2(200);

      -- not used: c_*
      c_account ID_REQ_STG.account%TYPE;
      c_IDN ID_REQ_STG.IDN%TYPE;
      c_quantity ID_REQ_STG.quantity%TYPE;

      ID_REQ_TUPLE ID_REQ_STG%ROWTYPE;

      v_account varchar2(5);
      v_IDN varchar2(6);
      v_quantity varchar2(6);
    BEGIN
      v_filename := 'PTCLICK.MANUAL.12SERIES.TXT';
      dbms_output.put_line(v_filename);   -- the name of the file

      utl_file.fgetattr('USER_DIR', v_filename, v_file_exists, v_file_length, v_block_size);

      IF v_file_exists THEN
        dbms_output.put_line('File Exists');
        f := utl_file.fopen('USER_DIR', v_filename, 'R');

        IF utl_file.is_open(f) THEN
          LOOP
            BEGIN
              utl_file.get_line(f, s);
              lineString := s;
              dbms_output.put_line(lineString);

              v_account  := substr(lineString, 1, 5);
              v_IDN      := substr(lineString, 6, 6);
              v_quantity := substr(lineString, 12, 6);

              dbms_output.put_line(v_account);
              dbms_output.put_line(v_IDN);
              dbms_output.put_line(v_quantity);

              -- DELETE
              IF v_quantity = '000000' OR v_quantity = '-' THEN
                DELETE FROM ID_REQ_STG
                WHERE account = v_account
                AND   IDN = v_IDN;
                commit;
                dbms_output.put_line('Deleted the record ' || v_account || ' and ' || v_IDN);
              END IF;

              -- CHANGE

              -- ADD

            EXCEPTION
              WHEN NO_DATA_FOUND THEN
                dbms_output.put_line('no data found');
                EXIT;
            END;
          END LOOP;
        END IF;   -- is open

        utl_file.fclose(f);
      ELSE
        dbms_output.put_line('file does not exist');
      END IF;   -- file exists

    EXCEPTION
      WHEN utl_file.access_denied THEN
        dbms_output.put_line('no access!');
      WHEN utl_file.invalid_path THEN
        dbms_output.put_line('PATH DOES NOT EXIST');
      WHEN OTHERS THEN
        dbms_output.put_line('SQLERRM: ' || SQLERRM);
    END;
    /

    Hello

    Looks like a good candidate for a MERGE with an external table.

    The external table:

    create table ext_table (
     account varchar2(5),
     idn number(6),
     quantity varchar2(6)
    )
    organization external (
      type oracle_loader
      default directory user_dir
      access parameters (
        records delimited by newline
        fields (
          account position(1:5) char(5),
          idn position(6:11) char(6),
          quantity position(12:17) char(6)
        )
      )
      location ('test.txt')
    )
    reject limit unlimited;
    

    Then a simple MERGE should cover all your needs:

    MERGE INTO id_req_stg t
    USING (
     SELECT account,
            idn,
            decode(quantity, '-', 0, to_number(quantity)) as quantity
     FROM ext_table
    ) v
    ON ( t.account = v.account AND t.idn = v.idn )
    WHEN MATCHED THEN
      UPDATE SET t.quantity = v.quantity
      DELETE WHERE t.quantity = 0
    WHEN NOT MATCHED THEN
      INSERT (account, idn, quantity)
      VALUES (v.account, v.idn, v.quantity);
    

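    Note that the DELETE WHERE clause of a MERGE only evaluates rows that were just updated by the WHEN MATCHED branch, which is exactly what is needed here: incoming '-' or zero quantities are first updated to 0 and then deleted.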
    Documentation related to MERGE: http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9016.htm#SQLRF01606
    and to external tables: http://download.oracle.com/docs/cd/E11882_01/server.112/e10595/tables013.htm#ADMIN12896

    Published by: odie_63 on June 10, 2010 14:26 (added docs)

  • "shuffle" the data in a table...

    Hello world...

    Please help me solve this problem.

    Is it possible to shuffle the data in a table?
    Let's say I have a table with the following data:
    
    PERSON_ID    PERSON_LAST_NAME        PERSON_FIRST_NAME        DOB
    
       1             Test1                   Test1              01/01/1970
       2             Test2                   Test2              01/01/1971
       3             Test3                   Test3              01/01/1972
       4             Test4                   Test4              01/01/1973
       5             Test5                   Test5              01/01/1974
    
    I am trying to shuffle the above data so that no person will have their
    "real last,first and dob match".
    
    I need the output like this or in any combinations so that no one can
    identify a person actual "last name,first name and dob"
    
    
    PERSON_ID    PERSON_LAST_NAME        PERSON_FIRST_NAME        DOB
    
       1             Test1                   Test2              01/01/1974
       2             Test2                   Test4              01/01/1975
       3             Test3                   Test5              01/01/1971
       4             Test4                   Test3              01/01/1972
       5             Test5                   Test1              01/01/1973
    
    please help me to solve this issue. Thanks in advance

    Hello

    You can do something like this:

    WITH     got_nums     AS
    (
         SELECT     person_id
         ,     ROW_NUMBER () OVER (ORDER BY dbms_random.value)          AS person_id_num
         ,     person_last_name
         ,     ROW_NUMBER () OVER (ORDER BY dbms_random.value)          AS person_last_name_num
         ,     person_first_name
         ,     ROW_NUMBER () OVER (ORDER BY dbms_random.value)          AS person_first_name_num
         ,     dob
         ,     ROW_NUMBER () OVER (ORDER BY dbms_random.value)          AS dob_num
         FROM     table_x
    --     WHERE     ...     -- Any filtering goes here
    )
    SELECT     id.person_id
    ,     ln.person_last_name
    ,     fn.person_first_name
    ,     db.dob
    FROM     got_nums     id
    JOIN     got_nums     ln     ON id.person_id_num     = ln.person_last_name_num
    JOIN     got_nums     fn     ON id.person_id_num     = fn.person_first_name_num
    JOIN     got_nums     db     ON id.person_id_num     = db.dob_num
    ;
    

    This does not guarantee that no column from the same original row will end up on the same output row, but I don't think you really need that.

  • Reading files from an FTP server and importing the data into a table

    Hi experts,

    Basically, text files with different layouts have been uploaded to an FTP server. Now I must write a procedure to retrieve these files, read them, and insert the data into a table. How should I do that?

    Your help would be greatly appreciated.

    Thank you

    user9004152 wrote:
    http://it.Toolbox.com/wiki/index.php/Load_data_from_a_flat_file_into_an_Oracle_table

    See the link, hope it will work.

    It is an old method, using the utl_file_dir parameter, which is now obsolete and which is frankly a waste of space when external tables can do exactly the same thing much more easily.
