Data from a CSV into a TABLE

Hello world...

I have the csv file at c:\city.csv (this file has 2 fields, 'id' and 'city').
I have a form... There is a button on it.
And I have a table CITY in my database whose fields are (id number, city varchar2(50)).

I want a procedure so that when I press the button on the form, the fields of the CSV file are copied into the table.

I have seen a number of threads about this, but I did not understand them...
Please make me a procedure...
please don't just give me reference links...
Thanks in advance

Regards
Sani...

Hoek posted the right answer :) so... give it a try next time

Here is your code:

-- Test-Table
CREATE TABLE city_table (
  city_id    VARCHAR2(100),
  city_name  VARCHAR2(100)
);

-- Test-File
1000,Cologne
1001,Amsterdam
1002,KFC :)
1003,Blabla

-- Forms procedure to call
PROCEDURE get_city_data
IS
  file_handle   text_io.file_type;
  separator     VARCHAR2(1) := ',';
  city_row      VARCHAR2(32767);
  city_id       VARCHAR2(100);
  city_name     VARCHAR2(100);
BEGIN
  file_handle := text_io.fopen('c:\city.csv', 'R');

  BEGIN
    LOOP
      -- text_io.get_line raises NO_DATA_FOUND at end of file,
      -- which is what terminates this loop
      text_io.get_line(file_handle, city_row);
      city_id   := SUBSTR(city_row, 1, INSTR(city_row, separator) - 1);
      city_name := SUBSTR(city_row, INSTR(city_row, separator) + 1);
      INSERT INTO city_table
      VALUES (city_id, city_name);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      NULL;  -- end of file reached
  END;

  text_io.fclose(file_handle);

  COMMIT;

EXCEPTION
  WHEN OTHERS THEN
    message('Error!!');
    RAISE FORM_TRIGGER_FAILURE;
END;
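
If the file lives on the database server rather than on the client PC, an external table is a common alternative to reading it in Forms; a minimal sketch, assuming a directory object CSV_DIR that points at the folder holding city.csv:

-- assumes CREATE DIRECTORY csv_dir AS 'c:\data'; plus a READ grant
CREATE TABLE city_ext (
  city_id    VARCHAR2(100),
  city_name  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('city.csv')
);

-- the load then becomes a plain insert
INSERT INTO city_table
SELECT city_id, city_name FROM city_ext;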

Tags: Database

Similar Questions

  • Read data from the file into ArrayList

    Hello
    I have a class called Product describing the characteristics of products, such as name, id, price etc. I have all the getters and setters in my Product class. I store the information about my products in an ArrayList, and I can write the product information to a file.
    so far, so good...

    I now want to re-read the data from the file and store it in my ArrayList again. I know I can parse the data and read it into an ArrayList as String, int, double etc., but is there any way to read it back in as 'Product' objects (so it would map back to my Product class)?

    Thank you in advance.

    1. Are you aware of serialization?
    2. Get a name - I'd rather not deal with numbers.

    DB

  • Display data from the database in a table

    Hello

    How do I display data from my database in an ADF table using a backing bean? I created an ArrayList in the bean, but only the last row of my query is displayed in the table...

    Thank you...

    Hello

    Create a simple Java class that implements Serializable. Create attributes that represent each column of your table; this class represents one row of your table. Build a list of these objects and you can fill your af:table with it.

    Visit this link below for an example.
    Re: Is it possible to create a static array of ADF and the tree?

    Kind regards
    Amélie Chan

  • Read data from the text column in report

    I have a SQL report with an editable column created using apex_item.text(5);

    I tried to read the data from that column and insert it into a table, using this process:

    begin
      for i in 1 .. apex_application.g_f05.count
      loop
        insert into my_table (col_a, col_b)
        values (apex_application.g_f05(i), apex_application.g_f04(i));
        commit;
      end loop;
    end;

    but without success.

    The message when I execute it is ORA-01403: no data found.

    What is the error?

    Hi lkefur,

    Try to debug it.

    The most likely reason is that your report does not actually render f04 and f05 as text fields at all.

    Try changing the f04 column to display as normal text and check; do the same for f05. I think that is most probably the reason.

    You can also view the page source and check which f-names the text fields of your report actually have.

    Kind regards
    Nandini thakur.

  • Received an error when transferring data from a file to the database

    Hello...

    I am using SOA 10.1.3.1.0. I am trying to transfer data from a CSV file into an Oracle database.

    The connection settings in the oc4j DB adapter xml file, at the path C:\product\10.1.3.1\OracleAS_1\j2ee\oc4j_soa\application-deployments\default\DbAdapter, are shown below.

    <connector-factory location="eis/DB/DBConnection1" connector-name="Database Adapter">
      <config-property name="xADataSourceName" value="jdbc/DBConnection1DataSource"/>
      <config-property name="dataSourceName" value="loc/DBConnection1DataSource"/>
      <config-property name="platformClassName" value="oracle.toplink.platform.database.Oracle9Platform"/>
      <config-property name="usesNativeSequencing" value="true"/>
      <config-property name="sequencePreallocationSize" value="50"/>
      <config-property name="defaultNChar" value="false"/>
      <config-property name="usesBatchWriting" value="true"/>
      <connection-pooling use="none">
      </connection-pooling>
      <security-config use="none">
      </security-config>
    </connector-factory>


    I get the following error:

    <part name="summary">
    <summary>
    file:/C:/product/10.1.3.1/OracleAS_1/BPEL/domains/default/tmp/.bpel_FileToDb_v1.0_188277739ed1e0b720c1fefd0275d1c0.tmp/FileToAdapterService.wsdl [FileToAdapterService_ptt::insert(FiletodbCollection)] - JCA Execute of operation "insert" failed due to: Could not create/access the TopLink Session.
    This session is used to connect to the datastore. Caused by: loc/DBConnection1DataSource not found
    ; nested exception is:
    ORABPEL-11622
    Could not create/access the TopLink Session.
    This session is used to connect to the datastore. [Caused by: loc/DBConnection1DataSource not found]
    See the root exception for the specific exception. You may need to configure the connection settings in the deployment descriptor (i.e. $J2EE_HOME/application-deployments/default/DbAdapter/oc4j-ra.xml) and then restart the server. Caused by Exception [TOPLINK-7060] (Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)): oracle.toplink.exceptions.ValidationException

    Exception description: Cannot acquire data source loc/DBConnection1DataSource

    Internal exception: javax.naming.NameNotFoundException: loc/DBConnection1DataSource not found.
    </summary>
    </part>


    What is wrong with the loc/DBConnection1DataSource line? Thank you in advance, guys...


    Thank you...


    See this similar thread: oc4j-ra.xml and data-sources.xml
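
    The error says that the JNDI name loc/DBConnection1DataSource cannot be found, so the dataSourceName property in oc4j-ra.xml has to match a data source that actually exists in $J2EE_HOME/config/data-sources.xml. A minimal sketch of such an entry (pool name, user, password and URL are placeholders, not taken from the post):

    <managed-data-source name="DBConnection1DataSource"
                         connection-pool-name="DBConnection1Pool"
                         jndi-name="jdbc/DBConnection1DataSource"/>
    <connection-pool name="DBConnection1Pool">
      <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
                          user="scott" password="tiger"
                          url="jdbc:oracle:thin:@//dbhost:1521/orcl"/>
    </connection-pool>

    With that in place, dataSourceName should point at jdbc/DBConnection1DataSource (or whatever jndi-name was chosen), not at loc/DBConnection1DataSource.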

  • Export data from the table into xml files

    Hello

    This thread is to get your opinion on how to export table data into one xml file containing the data and another file (xsd) containing the structure of the table.
    For example, I have a datamart with 3 dimensions and a fact table. The idea is to have an xml file with the data of the fact table, an xsd file with the structure of the fact table, an xml file that contains the data of the 3 dimensions, and an xsd file that contains the definition of all 3 dimensions. So: one xml file for the fact table, a single xml file combining all the dimensions, one xsd file for the fact table and one xsd file combining all the dimensions.

    I don't really have an idea yet of how to do it, so I would like your advice on how you would approach it.

    Thank you in advance.

    You are more or less in the same situation as me, I guess, regarding the ORA-01426 "numeric overflow". I tried to export, through UTL_FILE, the content of a relational table with 998 columns. In this case you run into these ORA- errors very quickly, even if you work with CLOB solutions, while trying to concatenate the column data into a CSV string. Oracle has the nasty habit, in some of its packages / code, of "assuming" intelligent solutions and implicitly converting data types on the fly while concatenating the column data into one string.

    The second part, in the realm of PL/SQL, is that it tries to put everything into a buffer, which has a maximum of 64K or 32K, so you have to break things up. In the end I solved it by treating everything as a BLOB and writing it to file as such. I am guessing that the ORA- error is related to these buffer / implicit datatype conversion problems in Oracle's own packages.

    The fun part is that this 998-column table came from an XML source (aka "how SOA can make things very complicated and non-performing"). I now have 2 different 'write data to CSV' solutions in my packages, and I use this one for the 998-column situation (though I have no idea if I will ever get decent performance out of it; using table collections in this scenario, for example, would explode the PGA). The only real solution in my case would be a better physical design of the environment, but I am currently not engaged as an architect, so I am not in a position to impose that.

    -- ---------------------------------------------------------------------------
    -- PROCEDURE CREATE_LARGE_CSV
    -- ---------------------------------------------------------------------------
    PROCEDURE create_large_csv(
        p_sql         IN VARCHAR2 ,
        p_dir         IN VARCHAR2 ,
        p_header_file IN VARCHAR2 ,
        p_gen_header  IN BOOLEAN := FALSE,
        p_prefix      IN VARCHAR2 := NULL,
        p_delimiter   IN VARCHAR2 DEFAULT '|',
        p_dateformat  IN VARCHAR2 DEFAULT 'YYYYMMDD',
        p_data_file   IN VARCHAR2 := NULL,
        p_utl_wra     IN VARCHAR2 := 'wb')
    IS
      v_finaltxt CLOB;
      v_v_val VARCHAR2(4000);
      v_n_val NUMBER;
      v_d_val DATE;
      v_ret   NUMBER;
      c       NUMBER;
      d       NUMBER;
      col_cnt INTEGER;
      f       BOOLEAN;
      rec_tab DBMS_SQL.DESC_TAB;
      col_num NUMBER;
      v_filehandle UTL_FILE.FILE_TYPE;
      v_samefile BOOLEAN      := (NVL(p_data_file,p_header_file) = p_header_file);
      v_CRLF raw(2)           := HEXTORAW('0D0A');
      v_chunksize pls_integer := 8191 - UTL_RAW.LENGTH( v_CRLF );
    BEGIN
      c := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
      --
      FOR j IN 1..col_cnt
      LOOP
        CASE rec_tab(j).col_type
        WHEN 1 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,4000);
        WHEN 2 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
        WHEN 12 THEN
          DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
        ELSE
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,4000);
        END CASE;
      END LOOP;
      -- --------------------------------------
      -- This part outputs the HEADER if needed
      -- --------------------------------------
      v_filehandle := UTL_FILE.FOPEN(upper(p_dir),p_header_file,p_utl_wra,32767);
      --
      IF p_gen_header = TRUE THEN
        FOR j        IN 1..col_cnt
        LOOP
          v_finaltxt := ltrim(v_finaltxt||p_delimiter||lower(rec_tab(j).col_name),p_delimiter);
        END LOOP;
        --
        -- Adding prefix if needed
        IF p_prefix IS NULL THEN
          UTL_FILE.PUT_LINE(v_filehandle, v_finaltxt);
        ELSE
          v_finaltxt := p_prefix||p_delimiter||v_finaltxt;
          UTL_FILE.PUT_LINE(v_filehandle, v_finaltxt);
        END IF;
        --
        -- Closing the separate header file if one was requested
        IF NOT v_samefile THEN
          UTL_FILE.FCLOSE(v_filehandle);
        END IF;
      END IF;
      -- --------------------------------------
      -- This part outputs the DATA to file
      -- --------------------------------------
      IF NOT v_samefile THEN
        v_filehandle := UTL_FILE.FOPEN(upper(p_dir),p_data_file,p_utl_wra,32767);
      END IF;
      --
      d := DBMS_SQL.EXECUTE(c);
      LOOP
        v_ret := DBMS_SQL.FETCH_ROWS(c);
        EXIT WHEN v_ret = 0;
        v_finaltxt := NULL;
        FOR j      IN 1..col_cnt
        LOOP
          CASE rec_tab(j).col_type
          WHEN 1 THEN
            -- VARCHAR2
            DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
            v_finaltxt := v_finaltxt || p_delimiter || v_v_val;
          WHEN 2 THEN
            -- NUMBER
            DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
            v_finaltxt := v_finaltxt || p_delimiter || TO_CHAR(v_n_val);
          WHEN 12 THEN
            -- DATE
            DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
            v_finaltxt := v_finaltxt || p_delimiter || TO_CHAR(v_d_val,p_dateformat);
          ELSE
            v_finaltxt := v_finaltxt || p_delimiter || v_v_val;
          END CASE;
        END LOOP;
        --
        v_finaltxt               := p_prefix || v_finaltxt;
        IF SUBSTR(v_finaltxt,1,1) = p_delimiter THEN
          v_finaltxt             := SUBSTR(v_finaltxt,2);
        END IF;
        --
        FOR i IN 1 .. ceil( LENGTH( v_finaltxt ) / v_chunksize )
        LOOP
          UTL_FILE.PUT_RAW( v_filehandle, utl_raw.cast_to_raw( SUBSTR( v_finaltxt, ( i - 1 ) * v_chunksize + 1, v_chunksize ) ), TRUE );
        END LOOP;
        UTL_FILE.PUT_RAW( v_filehandle, v_CRLF );
        --
      END LOOP;
      UTL_FILE.FCLOSE(v_filehandle);
      DBMS_SQL.CLOSE_CURSOR(c);
    END create_large_csv;
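
    For what it's worth, a hypothetical invocation could look like this (directory object name and query are placeholders, not taken from the post):

    BEGIN
      create_large_csv(p_sql         => 'SELECT * FROM emp',
                       p_dir         => 'EXP_DIR',
                       p_header_file => 'emp.csv',
                       p_gen_header  => TRUE);
    END;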
    
  • Export data from the table

    Hello. Is it possible to export data from a table in Oracle using SQL*Loader? If yes, can you show a good example?

    Hello

    Hello. Is it possible to export data from a table in Oracle using SQL*Loader?

    No. With SQL*Loader you can load data from external files into tables, not export it.

    spool c:\temp\empdata.txt
    sqlplus abc.sql (assumes that abc.sql runs select * from emp)
    spool off

    It cannot work like this, because the SPOOL statement is not recognized outside of the SQL*Plus terminal.

    But you can include the SPOOL statement in abc.sql like this:

    spool c:\temp\empdata.txt
    select * from emp;
    spool off
    

    Then, you just have to run the SQL script as follows:

    sqlplus  @abc.sql 
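
    As a side note, if the spooled file should look like real CSV, abc.sql can also set some SQL*Plus formatting before spooling; a minimal sketch (same example table and path as above):

    set heading off
    set feedback off
    set pagesize 0
    set trimspool on
    set colsep ','
    spool c:\temp\empdata.txt
    select * from emp;
    spool off
    exit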
    

    However, I advise you to use Oracle SQL Developer; it is a free tool and with it you can export a table in several formats (html, xml, csv, xls, ...).

    Please find below a link to this tool:

    http://www.Oracle.com/technetwork/developer-tools/SQL-Developer/Overview/index.html

    Hope this helps.
    Best regards
    Jean Valentine

  • Write data from a CSV to table

    Hi all

    I have a CSV (Comma Separated Values) file and I want to write its data into a table.

    How can I get the data from the CSV file?

    As Knani suggested, you can use external tables or SQL*Loader.
    You can check this link for example on Sql Loader http://surachartopun.com/2007/10/example-sql-loader-some-data-into.html
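
    For reference, a minimal SQL*Loader sketch (file, table and column names are assumptions, not taken from the post):

    -- city.ctl
    LOAD DATA
    INFILE 'c:\city.csv'
    APPEND
    INTO TABLE city
    FIELDS TERMINATED BY ','
    (id, city)

    -- then run:
    -- sqlldr userid=scott/tiger control=city.ctl log=city.log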

  • Generic procedure to load data from the source table to the target table

    Hi all

    I want to create a generic procedure to load data from X number of source tables into X number of target tables.

    such as:

    Source1-> Target1

    Source2-> Target2

    Source3 -> Target3

    Each target table has the same structure as the source table.

    The indexes are the same as well. Constraints are not predefined on the source or target tables. There is no business logic involved in loading the data.

    It would simply append.

    This procedure will be scheduled during off hours and probably only once in a month.

    I created a procedure that does this, along these lines:

    (1) Take the source and the target table as inputs to the procedure.

    (2) Find the indexes on the target table.

    (3) Get the metadata of the target table's indexes and keep it.

    (4) Drop the above indexes.

    (5) Load the data from the source into the target (append).

    (6) Re-create the indexes on the target table using the collected metadata.

    (7) Delete the records from the source table.

    sample proc (error logging is missing):

    CREATE OR REPLACE PROCEDURE pp_load_source_target (p_source_table IN VARCHAR2,
                                                       p_target_table IN VARCHAR2)
    IS
      TYPE v_varchar_tbl IS TABLE OF VARCHAR2(32);
      l_varchar_tbl v_varchar_tbl;

      TYPE v_clob_tbl_ind IS TABLE OF VARCHAR2(32767) INDEX BY PLS_INTEGER;
      l_clob_tbl_ind v_clob_tbl_ind;

      g_owner  CONSTANT VARCHAR2(10) := 'STG';
      g_object CONSTANT VARCHAR2(6)  := 'INDEX';
    BEGIN
      -- (2) find the indexes on the target table
      SELECT DISTINCT index_name
        BULK COLLECT INTO l_varchar_tbl
        FROM all_indexes
       WHERE table_name = p_target_table
         AND owner = g_owner;

      -- (3) collect the DDL of those indexes
      FOR k IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
        SELECT DBMS_METADATA.GET_DDL(g_object, l_varchar_tbl(k), g_owner)
          INTO l_clob_tbl_ind(k)
          FROM DUAL;
      END LOOP;

      -- (4) drop the indexes
      FOR i IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
        EXECUTE IMMEDIATE 'DROP INDEX ' || l_varchar_tbl(i);
        DBMS_OUTPUT.PUT_LINE('INDEX DROPPED: ' || l_varchar_tbl(i));
      END LOOP;

      -- (5) load the data (append)
      EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || p_target_table ||
                        ' SELECT * FROM ' || p_source_table;
      COMMIT;

      -- (6) re-create the indexes from the collected metadata
      FOR s IN l_clob_tbl_ind.FIRST .. l_clob_tbl_ind.LAST LOOP
        EXECUTE IMMEDIATE l_clob_tbl_ind(s);
      END LOOP;

      -- (7) empty the source table
      EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_source_table;
    END pp_load_source_target;
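
    It would be invoked like this (table names as in the example above):

    BEGIN
      pp_load_source_target('SOURCE1', 'TARGET1');
    END;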

    I want to know:

    1. Has anyone built a similar solution? If yes, what kind of challenges did you face?

    2. Is this a good approach?

    3. How can I minimize failures during the data load?

    Why not just

    create table checkin as
    select 'SOURCE1' source, 'TARGET1' target, 'Y' flag from dual union all
    select 'SOURCE2', 'TARGET2', 'Y' from dual union all
    select 'SOURCE3', 'TARGET3', 'Y' from dual union all
    select 'SOURCE4', 'TARGET4', 'Y' from dual union all
    select 'SOURCE5', 'TARGET5', 'Y' from dual;

    SOURCE    TARGET    FLAG
    SOURCE1   TARGET1   Y
    SOURCE2   TARGET2   Y
    SOURCE3   TARGET3   Y
    SOURCE4   TARGET4   Y
    SOURCE5   TARGET5   Y

    declare
      the_command varchar2(1000);
    begin
      for r in (select source, target from checkin where flag = 'Y')
      loop
        the_command := 'insert /*+ append */ into ' || r.target || ' select * from ' || r.source;
        dbms_output.put_line(the_command);
        -- execute immediate the_command;
        the_command := 'truncate table ' || r.source || ' drop storage';
        dbms_output.put_line(the_command);
        -- execute immediate the_command;
        dbms_output.put_line(r.source || ' table processed');
      end loop;
    end;

    insert /*+ append */ into TARGET1 select * from SOURCE1
    truncate table SOURCE1 drop storage
    SOURCE1 table processed

    insert /*+ append */ into TARGET2 select * from SOURCE2
    truncate table SOURCE2 drop storage
    SOURCE2 table processed

    insert /*+ append */ into TARGET3 select * from SOURCE3
    truncate table SOURCE3 drop storage
    SOURCE3 table processed

    insert /*+ append */ into TARGET4 select * from SOURCE4
    truncate table SOURCE4 drop storage
    SOURCE4 table processed

    insert /*+ append */ into TARGET5 select * from SOURCE5
    truncate table SOURCE5 drop storage
    SOURCE5 table processed

    Regards

    Etbin

  • Extract data from the table on hourly basis

    Hello

    I have a table that has two columns: a date column holding each hour of the day, and the response time. I want to extract the previous day's data on an hourly basis, together with the corresponding response time. The data is loaded into the table every midnight.

    for example: today's date is 23/10/2012
    I want to extract the data from 22/10/12 00 to 22/10/12 23

    The query below pulls the dates as required, but I am not able to get the response time along with them.

    with t as
      (select min(trunc(lhour)) as mindate, max(trunc(lhour)) as maxdate from avg_hr)
    select to_char(maxdate + (level/25), 'dd/mm/yyyy hh24') as dates
      from t
    connect by level <= 24;

    Please help me on this.

    Try this

    SELECT * FROM table_nm
     WHERE to_char(hour,'DD') = to_char(SYSDATE-1,'DD')
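
    One caveat with the query above: comparing only the 'DD' part will also match the same day of other months. A range predicate is safer (a sketch using the same table and column names):

    SELECT *
      FROM table_nm
     WHERE hour >= TRUNC(SYSDATE) - 1
       AND hour <  TRUNC(SYSDATE);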
    
  • How to join two tables to retrieve the data from the columns of both tables. The tables have primary and foreign key relationships

    Hello

    I want to join two tables to retrieve data from the columns of both tables, passing parameters to the join query. The tables have a primary and foreign key relationship.

    Details of the table

    Table 1 - ALERT: AlertID (PK), AlertCode (FK)

    Table 2 - ALERTDEFINITION: AlertCode (PK)

    Help, please


    ----------

    Hi Vincent,

    I think that you have not worked on ADF 12.1.3. In ADF 12.1.3 you don't have to create the association explicitly. When you create the EO for your table, the association xxxxFkAssoc will be created by ADF 12.1.3 for you automatically. Please try this and revert if anything doesn't work... You can also follow the link below; I solved the problem by using it:

    Oracle ADF Guide step by step - Oracle ADF tutorial: creating a relationship of the master / detail using Oracle ADF
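
    If all you need is the query itself, a plain join along the FK would be (a sketch; table and column names are assumed from the description above):

    SELECT a.alert_id, a.alert_code, d.*
      FROM alert a
      JOIN alert_definition d ON d.alert_code = a.alert_code
     WHERE a.alert_id = :p_alert_id;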

    ---

  • How to export data from the table with the colouring of cells according to value.

    Hi all

    I use jdeveloper 11.1.1.6

    I want to export data from the table with a lot of formatting, such as colouring cells according to value and so on. How can I do this?

    You can use Apache POI - http://poi.apache.org/

    See this http://www.techartifact.com/blogs/2013/08/generate-excel-file-in-oracle-adf-using-apache-poi.html

  • Is it possible to see/get the data from the table to a dump file

    I have dmp files generated using expdp on Oracle 11g...

    expdp_schemas_18MAY2013_1.dmp

    expdp_schemas_18MAY2013_2.dmp

    expdp_schemas_18MAY2013_3.dmp

    Can I use the parameter file given below to get the table data into a sql file, or is impdp the only option to load the table data into the database?

    vi test1.par

    userid="/ as sysdba"

    directory=DATA

    dumpfile=expdp_schemas_18MAY2013%U.dmp

    schemas=USER1,USER2

    logfile=user_dump_data.log

    sqlfile=user_dump_data.sql

    and then run: impdp parfile=test1.par

    No,

    DataPump cannot retrieve the data in a dumpfile into a flat file. A SQLFILE contains only the DDL statements that impdp would execute, not the row data.

    Dean

  • Error loading data from the .csv file

    Hello

    I get the date error below when loading data into OLAP tables through a .csv file.

    The data stored in the .csv is 20071113121100.

    "
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TE_7007 Transformation Evaluation Error [<<Expression Error>> [TO_DATE]: an invalid string for conversion to date]
    [... t:TO_DATE(u:'2.00711E+13',u:'YYYYMMDDHH24MISS')]
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11132 Transformation [Exp_FILE_CHNL_TYPE] had an error evaluating the output column [CREATED_ON_DT_OUT]. Error message is [<<Expression Error>> [TO_DATE]: an invalid string for conversion to date]
    [.. t:TO_DATE(u:'2.00711E+13',u:'YYYYMMDDHH24MISS')].

    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11019 There is an error in the port [CREATED_ON_DT_OUT]: the default value for the port is set to: ERROR(<<Expression Error>> [ERROR]: transformation error
    ... nl:ERROR(u:'transformation_error')).
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> TT_11021 An error occurred moving data from the transformation Exp_FILE_CHNL_TYPE to the transformation W_CHNL_TYPE_DS.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Mon Mar 29 15:06:17 2010]
    TRANSF_1_1_1> CMN_1086 Exp_FILE_CHNL_TYPE: Number of errors exceeded threshold [1].
    "

    Any help is greatly appreciated.

    Thank you
    Poojak

    What tool did you use to spool the file? Did it go anywhere near a GUI tool? I bet it was the precision of the data type, or an incorrect data type altogether.

    If I paste 20071113121100 into a new Excel workbook, it displays as 2.00711E+13 - when I set the column data type to Number, I see all the digits.
    OK, it's not great, but you get what I'm saying:
    Can you run the SQL from SQL*Plus and spool directly to the file?

  • Importing data from the old system to the new system, raw(16) key column

    Hi, experts,

    now I have a new system; the design of the system is to use a raw(16) column as the key column in all tables of the database.
    Of course, when the new system goes live (in production), new transaction records are written to the new system's database.
    When the new system inserts new records, it manages the raw(16) key column values itself so as to avoid any conflict.

    Now I'm dealing with this problem:

    I need to import data from the old system into the new system, and I use sys_guid() to fill the raw(16) column in the new system's database.
    How can I avoid conflicts in the raw(16) column values between the old system's data and the new system's data?

    The SQL code I write is very simple:

    insert into new_sys_table_a (key_column_raw_16, ..., ...)
    select sys_guid(), old_sys_col_a, old_sys_col_b
      from old_sys_table_a;

    insert into new_sys_table_a (key_column_raw_16, col1, col2)
    select key_column_raw_16, col1, col2
      from old_sys_table_a;
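
    For what it's worth, SYS_GUID() is designed to generate globally unique values, so clashes with keys the new system generates later are practically impossible, and the primary key constraint will reject any that do occur. A defensive check after the load might look like this (a sketch; names are taken from the insert above):

    select key_column_raw_16
      from new_sys_table_a
     group by key_column_raw_16
    having count(*) > 1;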
    
