Insert nvarchar2 on dblink

Hi all.

I have a source DB, 10.2.0.5, using EL8ISO8859P7/UTF8 and a target DB, 10.2.0.4, using EE8ISO8859P2/AL16UTF16 (CHARSET/NCHARSET).

I want to transfer (frequently) VARCHAR2 data from the source to the target over a dblink. The local end of the dblink is the target database (EE8ISO8859P2).

Direct conversion/storage is obviously not possible.

Since the target character set (EE8ISO8859P2) is not a superset of EL8ISO8859P7, I was hoping to store the data in NVARCHAR2 columns, since AL16UTF16 supports the source DB's code points.

However, the dblink performs the conversion, and by the time the data reaches the target it has already been 'flattened' into question marks, so storing it in NVARCHAR2 just stores question marks encoded as UTF16.

Is there a trick I can use (while still using the dblink, no export/import)?

Can I somehow "force" the dblink to use UTF8 (or something like it) on the 'client' end, instead of the source DB character set?

Thank you

Florin


Have you tried creating a view on the source side that applies TO_NCHAR to the relevant columns, and then querying that view instead of the original table?

Thank you

Sergiusz
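A minimal sketch of that idea (the table, column, and link names here are made up for illustration):

```sql
-- On the SOURCE database (character set EL8ISO8859P7, NCHAR set UTF8):
-- expose the column as NVARCHAR2, so the dblink ships national-character-set
-- data instead of converting through the incompatible target character set.
CREATE OR REPLACE VIEW v_src_table_nchar AS
SELECT id,
       TO_NCHAR(text_col) AS text_col
FROM   src_table;

-- On the TARGET database (NCHAR set AL16UTF16): pull through the dblink
-- into an NVARCHAR2 column.
INSERT INTO tgt_table (id, text_col)
SELECT id, text_col
FROM   v_src_table_nchar@src_link;
```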

Tags: Database

Similar Questions

  • Increase the performance of an INSERT using a DBLink

    Dear friends,

    I am trying to insert 4 lakh (400,000) records into a table using a DB link. It is taking a long time (1 hour) to insert the records; is there a way to improve performance?
    It's my insert statement.

    Insert /*+ APPEND PARALLEL NOLOGGING */ into target_table
    Select * from source_table@dblink_name;

    Nowhere in this thread I see the following:

    1. the hardware (servers) and the operating system.
    2. the network (what is its speed and bandwidth?).
    3. oracle version number.
    4. any real time
    5. the time inserts take when it is not pushed through the db link but run locally.
    6. explain the plan.
    7. a DML statement.
    8. information about the indexes on the target table.
    9. information about constraints on the target table.
    10. information on the triggers on the target table.
    11. the evidence that the target is, in fact, a table and not a view or another object type.

    I don't see how anyone can do more than ask the OP for more information, given that what was provided is totally inadequate for making a recommendation.

  • dblink and serialization

    Thank you in advance. I run a task in one Oracle instance that executes a procedure in another Oracle instance (a different machine) via dblink, followed by a local procedure, like this:

    use1.MyProcedure@machineSTG(v_code_id, v_date, v_outfile, v_outcacode, v_outcatype);  -- procedure over dblink

    user2.my_package.do_something(v_outcacode, v_outcatype, v_outfile); -- local procedure, which reads the remote rows inserted by the dblink procedure

    commit;

    Is it guaranteed that the procedure on the remote machine runs and finishes before the local procedure begins to run? I need this serialization because the local procedure also reads remote tables populated by the first (dblink) procedure. Thank you.

    Yes.

    SY.

  • Insert into a table based on the row difference (using MINUS)

    Hello

    Oracle Version: 11g

    Operating system: Solaris 10.

    I was wondering if it is possible to insert data into a table based on the MINUS operator, please?

    We have a very large table in one database that we moved to a different database. The copy differs by one row for a certain range of dates, and we wondered if it is possible to insert this row of data into the remote database using the row difference between the two tables.

    Here's the query that we are running:

    SELECT ID , TO_CHAR (creation_datetime, 'yyyy-mm-dd')
    from TABB10 
    where TO_CHAR (creation_datetime, 'yyyy-mm-dd')='2014-03-18' 
    minus 
    SELECT ID , TO_CHAR (creation_datetime, 'yyyy-mm-dd')
    from TABB10@TABB_LINK.APDB00 
    where TO_CHAR (creation_datetime, 'yyyy-mm-dd')='2014-03-18'
    

    ID                     TO_CHAR(CR

    ---------------------- ----------

    2.4111E+17             2014-03-18

    Any ideas please?

    Thank you

    If I understand you correctly, you can insert as below

    INSERT INTO REMOTE_TABLE@DBLINK

    SELECT ID, TO_CHAR (creation_datetime, 'yyyy-mm-dd')

    FROM TABB10

    WHERE TO_CHAR (creation_datetime, 'yyyy-mm-dd') = '2014-03-18'

    MINUS

    SELECT ID, TO_CHAR (creation_datetime, 'yyyy-mm-dd')

    FROM TABB10@TABB_LINK.APDB00

    WHERE TO_CHAR (creation_datetime, 'yyyy-mm-dd') = '2014-03-18'

    Regards

  • Insert via DB-LINK

    Hi, I am trying to insert rows selected from a local table into a remote table via dblink. Here is the code:
    create or replace type T_DATA_IDS as table of number(19);
    export_process_ids t_data_ids;
    And here's the query:
        insert into process@archive
          (select * from process where process_id in
            (select * from table(export_process_ids)));
    And it throws
    ORA-22804: remote operations not permitted on object tables or user-defined type columns

    When I replace '(select * from table(export_process_ids))' with literal IDs from that collection, the data is inserted without errors.

    How can I work around this error?

    Edited by: 843706 2011-03-11 05:58

    Welcome to the forums!

    What about creating a temporary table to hold the process IDs, so you can do something like this?

        insert into process@archive
          (select * from process where process_id in
            (select * from temp_process_ids));
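    A minimal sketch of that workaround (the DDL names are made up): a global temporary table avoids the user-defined collection type that ORA-22804 rejects over a dblink:

    ```sql
    -- One-time DDL: session-private scratch table for the IDs
    CREATE GLOBAL TEMPORARY TABLE temp_process_ids (
        process_id NUMBER(19)
    ) ON COMMIT DELETE ROWS;

    -- Per run (inside the PL/SQL block): load the IDs locally, where using
    -- the collection is still allowed, so the remote insert is plain SQL only
    INSERT INTO temp_process_ids
    SELECT COLUMN_VALUE FROM TABLE(export_process_ids);

    INSERT INTO process@archive
    SELECT * FROM process
    WHERE process_id IN (SELECT process_id FROM temp_process_ids);
    ```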
    
  • SYSDATE changed on the database

    Hi all

    Need your advice on an issue. The application team runs a procedure that inserts data via dblink from the UKprod database to USprod.

    It used to work well; they began to get errors about 3 months ago, but did not bother to solve it at the time and used other workarounds instead.

    Now they are back to solving it. When we studied the procedure, we found issues with the sysdate format (August 28, 2015 16:43:26):

    the procedure expects sysdate in the format '28-AUG-15', but when run
    on UKprod it gets 'August 28, 2015 16:43:26', which is where it fails.

    We worked around it by setting nls_date_format = 'DD-MON-YY' at session level, which is only temporary,
    but we cannot find why it changed. At the OS level, too, I see no date-related command having been issued (needed for the RCA).

    I checked that the nls_date_format parameter is blank on both databases. Please shed some light on where I should dig

    for evidence.

    DB version: 10.2.0.4 (both)
    Version of the OS: AIX (both)

    (1) How can I change it back (does it require a bounce)?

    UKprod - sysdate
    ------------------
    SQL> select sysdate from dual;

    SYSDATE
    --------------------
    August 28, 2015 16:43:26


    USprod - sysdate
    -------------------
    SQL> select sysdate from dual;

    SYSDATE
    ---------
    28-AUG-15

    Thank you

    A date is not a varchar2 and has no format. Somewhere in your code your dates are converted to varchar2, and you do not specify an explicit date format.

    You should NOT rely on a default date format, because it can change with the operating system, or even with a new Oracle release.

    This has nothing to do with sysdate but with flawed code, caused by the developers' lack of knowledge.

    You must have the code corrected.

    Either stop converting to varchar2, or specify an explicit date format.

    ------------

    Sybrand Bakker

    Senior Oracle DBA
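    To illustrate the recommended fix (a sketch; the column aliases are made up): always pass an explicit format mask instead of relying on the session's NLS_DATE_FORMAT:

    ```sql
    -- Fragile: the output depends on whatever NLS_DATE_FORMAT the session inherits
    SELECT SYSDATE FROM dual;

    -- Robust: the format is explicit, so the result is identical on every database
    SELECT TO_CHAR(SYSDATE, 'DD-MON-YY')              AS short_form,
           TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')  AS full_form
    FROM   dual;

    -- The same rule applies when parsing strings back into dates:
    -- TO_DATE('28-AUG-15', 'DD-MON-YY')
    ```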

  • Trying to understand NVARCHAR with UTF8

    Hey everybody,

    Hoping to understand the behavior of NVARCHAR and friends; I've not used it much in the past.

    I'm on: 11.2.0.3.0

    Background:

    -existing database:

    WE8ISO8859P1 NLS_CHARACTERSET

    NLS_NCHAR_CHARACTERSET UTF8

    Most of the columns use 'normal' VARCHAR2 data type, and I'm generally okay with that.

    However, they used the NVARCHAR2 data type for the "french" column data (even though the WE8ISO8859P1 character set they use supports French. *sigh*)

    In any case... the offending character is not even a French one. I think it is the Windows 'smart' quote.

    Question:

    Running a query to get a picture of the 'bad' production data shows the character (a quote of some kind: "not sure if it will transfer here correctly") as character code 49810 or 49809 (we see both).

    In an attempt to set up a 'test' case in development to reproduce it, I can't seem to do so.

    create table junk (id number, vv nvarchar2(100));

    insert into junk values (1, chr(49809));

    insert into junk values (2, chr(49810));

    insert into junk values (3, chr(96));

    commit;

    select id, vv, dump(vv) from junk;

    SQL Developer (via Windows) and SQL*Plus (via Unix) show the same thing:

            ID VV                                       DUMP(VV)

    ---------- ---------------------------------------- --------------------------------------------------

             1 ?                                        Typ=1 Len=2: 0,145

             2 ?                                        Typ=1 Len=2: 0,146

             3 `                                        Typ=1 Len=2: 0,96

    3 rows selected.

    In both cases, the 'smart' quotes do not store correctly and appear to be "converted".

    I'm not really sure what's going on; I was trying to figure out how to set my NLS_LANG, but don't know what to put in it.

    On Unix, I tried:

    export NLS_LANG=American_America.UTF8

    However, that changes the '?' to 'Â' (I re-ran the entire script; the same codes 145/146 are stored, so it is still lost during the insert, I guess).

    So I suspect it's a case of this character not actually being supported by UTF8, right?

    For SQL Developer, no such luck... I set a Windows NLS_LANG environment variable to the same value as above, but it still shows in SQL Developer as an empty box.

    Questions:

    (1) Just to check: I'm not even sure these characters (i.e. 49810 and 49809) are in fact valid UTF8? (did some research, but could not find anything to confirm it for sure..)

    (2) How do I (properly) set NLS_LANG for SQL Developer, and to what value, so I can read/write characters in those pesky NVARCHAR fields?

    (3) How do I enter (even by force) characters 49809 or 49810? (for testing purposes only!)

    (FYI: this is mostly for my own learning. The 'solution' to our actual problem is to convert these bad characters to 'normal' quotes, i.e. character code 39. Of course, being able to properly test the update would be really nice, which is why I need to 'force' some bad data into dev.)

    Thank you!

    Answers:

    (1) Just to check: I'm not even sure these characters (i.e. 49810 and 49809) are in fact valid UTF8? (did some research, but could not find anything to confirm it for sure..)

    Yes, these are valid UTF-8 character codes. However, their meaning is not what you expect. 49809 = UTF-8 0xC2 0x91 = U+0091 = control character PU1 (PRIVATE USE ONE). 49810 = UTF-8 0xC2 0x92 = U+0092 = control character PU2 (PRIVATE USE TWO).

    (2) How do I (properly) set NLS_LANG for SQL Developer, and to what value, so I can read/write characters in those pesky NVARCHAR fields?

    SQL Developer does not read NLS_LANG at all. You do not need to do anything to read and write NVARCHAR2 content using a table's Data editor tab, or to read the content with a worksheet query. Additional configuration is required only to support NVARCHAR2 literals in SQL statements. However, you can always use the UNISTR function to encode Unicode characters for NVARCHAR2 columns.

    (3) How do I enter (even by force) characters 49809 or 49810? (for testing purposes only!)

    Not really possible from the keyboard, because they are control codes. You can insert them only with SQL, using CHR or UNISTR.
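    For example, against the `junk` test table above (a sketch; the escape values are the UTF-16 code units for U+0091 and U+0092):

    ```sql
    -- Force the PU1/PU2 control characters into an NVARCHAR2 column for testing
    INSERT INTO junk VALUES (4, UNISTR('\0091'));  -- U+0091, PRIVATE USE ONE
    INSERT INTO junk VALUES (5, UNISTR('\0092'));  -- U+0092, PRIVATE USE TWO
    COMMIT;
    ```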

    (4) what is the real problem?

    The real problem is the incorrect configuration of the application that inserts these characters, combined with the fact that the application inserts NVARCHAR2 data without marking it as such in the corresponding Oracle API (OCI, Pro*C, JDBC). NLS_LANG is set to .WE8ISO8859P1 for the application, and the database character set is WE8ISO8859P1 as well. When a Windows client application written for the Win32 ANSI API passes these quotes to Oracle, the quotes are encoded as 0x91/0x92. However, this encoding for the quotation marks is correct in Windows Code Page 1252 (Oracle name: WE8MSWIN1252), not in ISO-8859-1 (Oracle name: WE8ISO8859P1). As the NLS_LANG and database character sets are the same, no conversion is applied to the codes. On the database side, Oracle sees that the target column is of type NVARCHAR2, so it converts from WE8ISO8859P1 to UTF8. However, no reinterpretation of the codes happens at this point, and the resulting UTF8 codes are 0xC2 0x91 and 0xC2 0x92 (the ISO-8859-1 encoding of the control codes PU1 and PU2) instead of the correct 0xE2 0x80 0x98 and 0xE2 0x80 0x99 (the encoding of the Cp1252 characters LEFT SINGLE QUOTATION MARK and RIGHT SINGLE QUOTATION MARK).

    Solutions:

    1. The best solution is to migrate the database to AL32UTF8 and get rid of the NVARCHAR2 data type. You will be able to store any language in any VARCHAR2 column.

    2. A less future-proof but simpler solution is to migrate the database character set to WE8MSWIN1252. If the only additional characters are French, get rid of the NVARCHAR2 data type, because it is just extra cost.

    3. The minimal solution is to migrate the database character set to WE8MSWIN1252 and keep the NVARCHAR2 columns.

    If the data to be inserted is more than French plus the quotes, you should definitely go with the first option. The third solution would work only after appropriate changes to the application's use of the Oracle client API.

    With any of these solutions, the application's NLS_LANG should be changed to .WE8MSWIN1252 (but this alone will not help).

    Thank you

    Sergiusz

  • A selection in a table in a table in a different database?

    Is it possible to run a SELECT on a table such that the retrieved data is inserted into another table in another database, even on another server?

    Maybe something like this...

    Select... FROM TABLE1
    in server.database.TaBLE2

    Or
    Insert into table1 server.database.TaBLE2?

    Which would you recommend?

    Thanks in advance

    Hello

    You need to create a DBLink to the other database, and then you can insert values into its table.

    INSERT INTO OTHER_TABLE@DBLINK(COL1, COL2, COL3) SELECT COL1,COL2,COL3 FROM YOUR_TABLE ;
    

    see you soon

    VT

  • Dealing with errors due to newly added/removed columns

    DB version: 11 g


    I don't know if I have created an unnecessarily long post to explain a simple problem. Anyway, here it is.

    I have been asked to code a package for archiving.

    We will have two schemas; The original schema and a schema of the Archive (connected via a DB link)
    ORIGINAL Schema -------------------------> ARCHIVE Schema
                   via DB Link
    When the records of some tables in the ORIGINAL schema meet the archiving criteria (based on the number of days old, Status Code etc.), it will be moved ("archived") to the schema of the ARCHIVE using the INSERT syntax
    insert into arch_original@dblink
    (
    col1,
    col2,
    col3,
    .
    .
    .
    .
    )
    select col1,
    col2,
    col3,
    .
    .
    .
    .
    from original_table
    The original table and the archive table have the same structure, except that the archive table has an additional column called archived_date that records when a row was archived.
    create table original
    (
    col1 varchar2(33),
    col2 varchar2(35),
    empid number
    );
    
    
    create table arch_original
    (
    col1 varchar2(33),
    col2 varchar2(35),
    empid number,
    archived_date date default sysdate not null
    );
    We have tables with many columns (a lot of tables with more than 100 columns), and when all the column names are explicitly listed as in the above syntax, the code becomes huge.

    Alternative syntax:

    So I thought of using the syntax
    insert into arch_original select original.*,sysdate from original;  -- sysdate will populate archived_date column
    Even if the code looks simple and short, I noticed a downside to this approach.

    Disadvantage:
    For the next release, if the developers decide to add/drop a column in the ORIGINAL table in the original schema, this change must be reflected in the DDL script of the archive table in the ARCHIVE schema as well. It is virtually impossible to keep track of all these changes during the development phase.


    If I use
    insert into arch_original select original.*,sysdate from original;  
    syntax, you will only realize the table structure has changed when you encounter an error (because of the missing/new column) at runtime. But if all the column names are listed explicitly, as in
    insert into arch_original@dblink
    (col1,
    col2,
    col3,
    .
    .
    .
    .
    )
    select col1,
    col2,
    col3,
    .
    .
    .
    .
    from original_table
    then you will encounter the error at compile time itself. I would rather get the error due to a missing/new column at compile time than at run time.

    What do you guys think? I shouldn't go with the
    insert into arch_original select original.*,sysdate from original; 
    syntax because of the disadvantage above. Right?

    Hi VitaminD,

    Yes, something like that. Only, I meant to use it as a code generator to produce static SQL for your archiving package. I did not mean to use dynamic SQL.

    Start with something simple, to generate the column lists for the insert and select. Decide whether you want to generate complete statements, procedures or even the whole package.

    For starters, do something like you suggested

    select ',' || column_name from cols where table_name like :your_table order by column_id;
    

    The output lets you easily build your INSERT INTO ... SELECT statement. Once you are comfortable with that, extend it as much as you like.

    But my point still stands. Use it as static SQL.

    Regards
    Peter
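    For instance, a generator along those lines might look like this (a sketch only; it assumes the 11g LISTAGG function, a bind variable :tab holding the table name, and that the archive table adds only the archived_date column):

    ```sql
    -- Generates a complete static INSERT ... SELECT for one table,
    -- ready to paste into the archiving package
    SELECT 'insert into arch_' || LOWER(:tab) || '@dblink ('
           || LISTAGG(LOWER(column_name), ', ') WITHIN GROUP (ORDER BY column_id)
           || ', archived_date) select '
           || LISTAGG(LOWER(column_name), ', ') WITHIN GROUP (ORDER BY column_id)
           || ', sysdate from ' || LOWER(:tab) || ';' AS generated_sql
    FROM   user_tab_columns
    WHERE  table_name = UPPER(:tab);
    ```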

  • SQL formatting question

    I have a PL/SQL package that essentially does archiving with INSERT INTO ... SELECT statements. It looks like
    insert into archive_table@dblink
    (col1,
    col2,
    col3,
    .
    .
    .
    .
    )
    select from original_table
    (col1,
    col2,
    col3,
    .
    .
    .
    .
    )
    For huge tables, I just grab the column list from the CREATE TABLE statement (View SQL in PL/SQL Developer). But the INSERT statement ends up very long because there is a single column (and its comma) per line.

    The code looks very long (vertically)
    insert into archive_table@dblink
    (col1,
    col2,
    col3,
    col4,
    col5,
    col6,
    col7,
    col8,
    col9,
    col10,
    col11,
    col12,
    col13,
    col14,
    col15,
    .
    .
    .
    .
    col150
    )
    select from original_table
    (
    col2,
    col3,
    col4,
    col5,
    col6,
    col7,
    col8,
    col9,
    col10,
    col11,
    col12,
    col13,
    col14,
    col15,
    .
    .
    .
    col150
    )
    I want several columns on a single line so that the code looks shorter
    insert into archive_table@dblink
    (col1,col2,col3,col4,col5,col6,
    col7,col8,.......
    )
    select from original_table
    (
    col1,col2,col3,col4,col5,col6,
    col7,col8,.......
    )
    I used the SQL beautifier in PL/SQL Developer, but it did not work. Is there any tool you are aware of that can do this?

    Go to the Tools menu and then "Preferences...".
    In the UI section, click on "PL/SQL Beautifier" and then click the Edit button to modify your beautifier settings.

    In the DML tab, change your settings to 'Fit' for insert and select, and maybe update, according to your need.
    You can also adjust the line width in the General tab, under the "Right margin" option.

    I would recommend saving your settings to a file so you can reuse them in case you need to change computers or something.

    Edit: Oops, I replied to the wrong post out of habit, Alex.

    As a pointer, VitaminD, your insert statement has incorrect syntax: the SELECT part should read like the below, instead of having the column list after the table name.

    insert into archive_table@dblink
    (col1,
    col2,
    col3,
    .
    .
    .
    .
    )
    select col1,
    col2,
    col3,
    .
    .
    .
    .
    from original_table
    

    Published by: fsitja on January 5, 2010 12:27

  • Bulk collect / FORALL - error when inserting via DBLink

    Hi everybody,

    I have two 9i databases on two servers: DB1 and DB2.

    DB1 has a DBLink to DB2.

    I insert values into a DB2 table using values from DB1 tables.

    In a procedure on DB1, I have this code:

    DECLARE
    TYPE TEExtFinanceiro IS TABLE OF EExtFinanceiro@sigrhext%ROWTYPE INDEX BY PLS_INTEGER;
    eExtFinanceiroTab TEExtFinanceiro;
    BEGIN

    ....

    IF eExtFinanceiroTab.COUNT > 0 THEN

    FORALL vIndice IN eExtFinanceiroTab.FIRST .. eExtFinanceiroTab.LAST
    INSERT INTO eExtFinanceiro@sigrhExt VALUES eExtFinanceiroTab(vIndice);

    COMMIT;

    END IF;

    ...
    END;

    The fields in the eExtFinanceiro table are nullable.

    This command inserts the rows into the eExtFinanceiro table, but all the inserted values are null.

    What could happen?
    Can someone help me, please?

    Thank you!

    Hello

    FORALL has a limitation: it does not work across a DBLink.
    Bulk operations (such as FORALL) are there to minimize the context switches between the PL/SQL and SQL engines.
    For remote database operations there is no context switch to save, since every DB link operation must go over a network connection anyway.
    In your scenario, it will be much better to use a PULL approach rather than PUSH. That way you also get to use FORALL.

    As in:
    have the procedure in DB2 (instead of in DB1). That way the DML used inside the FORALL will not involve a DBLink.

    I hope that helps!

    Cheers,
    AA
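    A rough sketch of that pull approach (the link name db1_link from DB2 back to DB1 and the source table name are assumptions):

    ```sql
    -- Runs on DB2: pulls the rows from DB1 over a dblink, so the FORALL
    -- insert itself targets the LOCAL eExtFinanceiro table.
    CREATE OR REPLACE PROCEDURE pull_ext_financeiro IS
      TYPE TEExtFinanceiro IS TABLE OF eExtFinanceiro%ROWTYPE INDEX BY PLS_INTEGER;
      eExtFinanceiroTab TEExtFinanceiro;
    BEGIN
      SELECT * BULK COLLECT INTO eExtFinanceiroTab
      FROM   source_table@db1_link;       -- hypothetical link from DB2 back to DB1

      IF eExtFinanceiroTab.COUNT > 0 THEN
        FORALL vIndice IN eExtFinanceiroTab.FIRST .. eExtFinanceiroTab.LAST
          INSERT INTO eExtFinanceiro VALUES eExtFinanceiroTab(vIndice);
        COMMIT;
      END IF;
    END;
    /
    ```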

  • Remote index not used with INSERT in the local table on dblink

    Hi all

    I don't know if anyone has come across this problem before, but for some reason the remote index remains unused *ONLY* during the insert into the local database. Let me explain with this pseudo code

    insert into LOCAL_TABLE
    select /*+ index_combine(alias_remote_tab IDX_LOG_DATE) */
           trunc(log_datetime),
           count(*)
    from REMOTE_TABLE@DBLINK alias_remote_tab
    where trunc(log_datetime) = trunc(sysdate-1)
    group by trunc(log_datetime);

    where:
    REMOTE_TABLE is a table partitioned on log_datetime (monthly)
    IDX_LOG_DATE is a valid function-based bitmap index on trunc(log_datetime)
    local database: 10gR2
    remote database: 11gR1
    OS: Windows (both)

    The funnier thing is that when I run just the select query independently, both locally and remotely, the index is used. I checked by printing the explain plan for the select query. But when I prefix the query with the insert, all hell breaks loose and the local database plays ignorant about the index. The explain plan for the insert query has no mention of the index, even when I explicitly place the index hint in the select part of the query.

    Shouldn't this be simple enough for Oracle? Am I missing something here?

    Jonathan Lewis describes the details of, and the reasons for, the behavior you see in the following blog post: http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/
    Your SELECT is executed remotely (filtering and grouping), and only the (relatively) small result set is sent over the dblink to the local database; with the INSERT, only the filtering happens at the remote site, and the (relatively) large data set is sent over the dblink to the local database, where the grouping takes place.
    You can give the approach proposed by michaels2 a try. With the view approach, the grouping and filtering will take place in the remote database, and you should see improved performance.

    PS BTW, if the SQL I suggested for checking the plan in my previous post uses the index, then the cause of your performance issue is certainly not the unused index but the amount of data transferred over the dblink.
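    A sketch of that view-based workaround (the view name is made up): by defining the aggregation remotely, only the grouped rows cross the link:

    ```sql
    -- On the REMOTE database: the filtering column and GROUP BY live where the data is
    CREATE OR REPLACE VIEW v_remote_daily_counts AS
    SELECT trunc(log_datetime) AS log_day,
           COUNT(*)            AS cnt
    FROM   REMOTE_TABLE
    GROUP  BY trunc(log_datetime);

    -- On the LOCAL database: only the small aggregated result travels over the dblink
    INSERT INTO LOCAL_TABLE
    SELECT log_day, cnt
    FROM   v_remote_daily_counts@DBLINK
    WHERE  log_day = trunc(sysdate-1);
    ```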

  • Long time, buffer with an insert and select through a dblink

    I am doing a fairly simple 'insert into ... select from' statement through a dblink, but something is seriously wrong across the link. I see a huge BUFFER SORT in the explain plan (line 9) and I don't know why. When I try to run SQL tuning on it across the dblink, I get an ORA-600 error: "ORA-24327: need explicit attach before authenticating a user".
    Here's the original sql:

    INSERT INTO PACE_IR_MOISTURE@PRODDMT00 (SCHEDULE_SEQ, LAB_SAMPLE_ID, HSN, SAMPLE_TYPE, MATRIX, SYSTEM_ID)
    SELECT DISTINCT S.SCHEDULE_SEQ, PI.LAB_SAMPLE_ID, PI.HSN, SAM.SAMPLE_TYPE, SAM.MATRIX, :B1
    FROM SCHEDULES S
    JOIN PERMANENT_IDS PI ON PI.HSN = S.SCHEDULE_ID
    JOIN SAMPLES SAM ON PI.HSN = SAM.HSN
    JOIN PROJECT_SAMPLES PS ON PS.HSN = SAM.HSN
    JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ
    WHERE S.PROC_CODE = 'DRY WEIGHT' AND S.ACTIVE_FLAG = 'C' AND S.COND_CODE = 'CH' AND P.WIP_STATUS IN ('WP', 'HO')
    AND SAM.WIP_STATUS = 'WP';

    Here's the sql code, as it appears on proddmt00:

    INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
    SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
    FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
    WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID"

    Here is the plan of the command explain on proddmt00:

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  cvgpfkhdhn835, child number 0
    -------------------------------------
    INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
    SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
    FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
    WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND
    ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND
    "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND
    "A5"."HSN"="A6"."SCHEDULE_ID"

    Plan hash value: 3310593411

    --------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                | Name            | Rows  | Bytes | TempSpc | Cost (%CPU)| Time      | Inst | IN-OUT |
    --------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT         |                 |       |       |         | 5426M (100)|           |      |        |
    |   1 |  HASH UNIQUE             |                 | 1210K |  118M |    262M | 5426M  (3) | 999:59:59 |      |        |
    |*  2 |   HASH JOIN              |                 |  763G |   54T |   8152K | 4300M  (1) | 999:59:59 |      |        |
    |   3 |    REMOTE                |                 |  231K | 5429K |         |  3389  (2) | 00:00:41  |  !   | R->S   |
    |   4 |    MERGE JOIN CARTESIAN  |                 | 1254G |   61T |         | 1361M (74) | 999:59:59 |      |        |
    |   5 |     MERGE JOIN CARTESIAN |                 | 3297K |  128M |         | 22869  (5) | 00:04:35  |      |        |
    |   6 |      REMOTE              | SCHEDULES       |    79 |  3002 |         |    75  (0) | 00:00:01  |  !   | R->S   |
    |   7 |      BUFFER SORT         |                 | 41830 |  122K |         | 22794  (5) | 00:04:34  |      |        |
    |   8 |       REMOTE             | PROJECTS        | 41830 |  122K |         |   281  (2) | 00:00:04  |  !   | R->S   |
    |   9 |     BUFFER SORT          |                 |  380K | 4828K |         | 1361M (74) | 999:59:59 |      |        |
    |  10 |      REMOTE              | PROJECT_SAMPLES |  380K | 4828K |         |   111  (0) | 00:00:02  |  !   | R->S   |
    --------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID")

    Hello

    Looking at your SQL, it looks like you're not joining to the PROJECTS table (hence the dreaded CARTESIAN?):

    Your code:

    JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ 
    

    ... I think this should probably be...

    JOIN PROJECTS P ON P.PROJECT_SEQ = PS.PROJECT_SEQ 
    

    ....??

  • Insert an xml string into a clob column?

    https://drive.Google.com/file/d/0BwAVQqYmX0-zMHZiS1F0SVdOMmc/view?USP=sharing

    DECLARE
            v_clob CLOB := to_clob('xml string from local file'); --pls. download xml file from above url;
            stmt NVARCHAR2(500) := 'INSERT INTO TEST(XMLDATA) VALUES(:x)';
    BEGIN
            EXECUTE IMMEDIATE stmt USING v_clob;
    END;
    

    If I use XMLType then I can use XMLType(bfilename('test_dir', 'Data.xml'), nls_charset_id('AL32UTF8'))

    but I have to use a clob for storing the xml string, because Oracle XMLType cannot correctly parse this xml data.

    A Google search for 'oracle insert clob from file' would find you this answer fairly quickly.

    You don't need to use dynamic sql statements.

  • Insert and update records to MySQL from Oracle

    Hello

    Our application will be insert/update new records in the Oracle database, but I need to insert/update new records in the MySQL database. Oracle and MySQL tables have the same columns.

    Right now, we intend to create DBLink between Oracle and MySQL. The question is to know how to write code to insert/update new records to MySQL from Oracle. I'm a new guy in this area.

    Please give us some examples of code, thanks!

    With Oracle, you should be able to just write the same SQL as usual, except for including the database link when you reference the table, for example:

    insert into tablename@mysqldblink ...

    update tablename@mysqldblink set ...

    The gateway services behind the database link (ODBC drivers or whatever else is used) should take care of most of the necessary conversions for you.
