HPCM: importing the staging tables

Hello

I'm trying to import records into the HPCM application through the staging table HPCM_STG_ASGN_RULE_SEL, but they aren't getting imported into the HPCM application. When I look at the "import exception" field in the staging table HPCM_STG_ASGN_RULE_SEL, it shows ERROR_POV_GROUP_NOT_FOUND. What could be the reason? Help, please.

Kind regards
Vishal

The problem was solved by supplying the appropriate POV.
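
For anyone hitting the same error: the failing rows can be listed straight from the staging table before correcting the POV. A minimal sketch (the exception column name is assumed from the description above):

    SELECT *
    FROM   hpcm_stg_asgn_rule_sel
    WHERE  import_exception IS NOT NULL;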

Kind regards

Vishal

Tags: Business Intelligence

Similar Questions

  • How to skip a row when inserting into the staging table

    Hello everyone

    I'm transforming data from a source table into a staging table, and then into the final table. I generate a primary key using a sequence. The insert method of the staging table is set to truncate/insert, so whenever the mapping is run, the staging table is truncated and new data is inserted; but because the staging table uses a sequence, the old rows from the source table get new key values, and that produces duplicate data in the target table. For this reason I use a key lookup on some of the input attributes, and an expression to try to avoid the duplication. For each output attribute in the expression, I'm trying the CASE statement

    "BOLD" CASE WHEN INGRP1. ROW_ID IS NULL
    THEN
    INGRP1.ID
    END * bold *.

    Because of this condition, I get the error message

    "BOLD"
    Warning
    ORA-01400: cannot insert NULL into ('SCOTT'. "" "" STG_TARGET_TABLE '. "" ROW_ID")
    "BOLD"

    But I'm stuck on which condition or statement to write to skip the insertion when the value for ROW_ID would be null. I want to insert data only when the looked-up ROW_ID IS NULL.

    Kindly help me.

    Thank you

    Regards,
    Suhail Dayer

    You do not need identical tables to use MINUS; only the select lists must match. Assuming you have the business key (one or more columns that uniquely identify a row of your source data) in both the source and the final table, you can do the following:

    - Use a set operation whose result is the business keys of the staging table MINUS the business keys of the final table
    - Join the output of the set operation back to the staging table to get the rest of the attributes for those rows
    - Insert the output of the join into the final table

    This ensures that only the rows with new business keys are loaded; a sketch follows.
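
    In SQL terms, the approach might look like this (a minimal sketch; table and column names are hypothetical):

        INSERT INTO final_table (business_key, attr1, attr2)
        SELECT s.business_key, s.attr1, s.attr2
        FROM   staging_table s
        JOIN   (SELECT business_key FROM staging_table
                MINUS
                SELECT business_key FROM final_table) new_rows
          ON   s.business_key = new_rows.business_key;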

    Hope this helps,
    Roald

  • Suppose we have sal and comm in the flat file, but in the staging table we have totsal

    Hello

    Suppose we have sal and comm in the flat file, but in the staging table we have a totsal column. I need to load sal + comm into totsal via the ctl file. How do I write that?

    Can you please help with this?

    Kind regards
    Ramanantsoa.

    Have you tried the [BOUNDFILLER|http://www.orafaq.com/wiki/SQL*Loader_FAQ#Can_one_skip_certain_columns_while_loading_data.3F] option?
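
    For illustration, a control file along these lines should work (a sketch only; the file name, table name, and delimiter are assumptions):

        LOAD DATA
        INFILE 'emp.dat'
        APPEND
        INTO TABLE emp_staging
        FIELDS TERMINATED BY ','
        TRAILING NULLCOLS
        (
          sal    BOUNDFILLER,                        -- read from the file but not loaded directly
          comm   BOUNDFILLER,                        -- read from the file but not loaded directly
          totsal EXPRESSION ":sal + NVL(:comm, 0)"   -- computed value loaded into the table
        )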

  • How to import an Access table into XE

    Hi all
    I'm a newbie with XE. I have a kind of database in Access, and I would like to import one of its tables into XE. There are about 200-300 rows and 7 columns; is it possible to import this table? If yes, how can I do it? Also, I'll add a few columns during or after the import.

    Thank you and best regards,
    Rustam

    Published by: user10897725 on March 9, 2010 13:51

    Hello. I'm still a beginner and still regularly seek help from the experts,
    but I do have an alternative for importing and exporting Access tables with Oracle 10g XE.
    PS - I am running Windows XP.

    It is done by setting up Oracle as an ODBC source.

    The notes are in 2 parts.

    Reminder: start the Oracle database first.

    1 - Set up the ODBC connection.

    2 - Export an Access table to Oracle / import an Oracle table into Access.

    1 - setting up the ODBC:

    Open:

    Control Panel
    Administrative Tools
    Data sources (ODBC)

    You will see then 7 tabs.
    I used the first "User DSN" tab

    In this tab, click 'Add'.

    In the drop-down list, select 'Oracle XE', which is at the bottom of the list. Click 'Finish'.

    You will then go into the Oracle ODBC driver configuration.

    You will be asked for a 'Data Source Name'.
    Type in whatever you like, for example 'G EX', as long as you don't forget what you called the Data Source.
    The name 'G EX' will appear later when you start using the ODBC connection.

    Description is optional.

    The TNS Service Name is a drop-down list. Select XE.

    User ID: enter the user name of the Oracle user that will make the connection.
    Don't forget that the user must have been granted the CONNECT and RESOURCE roles.
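
    If that user doesn't exist yet, it can be created in a couple of statements (a sketch; the user name and password are placeholders):

        CREATE USER access_user IDENTIFIED BY access_pwd;
        GRANT CONNECT, RESOURCE TO access_user;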

    Next step is to "test connection".

    You should get a box that says "successful" etc...

    There are other tabs and drop-down menus at the bottom of the Oracle driver configuration, but leave these at their defaults; they can be changed at a later date.

    You should see your newly created ODBC 'G EX' connection on the user DSN tab.

    Close the Admin Tools screen and Control Panel.

    2 - Moving the Access table.

    You can use ODBC to export Access tables into Oracle and to import Oracle tables into Access.

    Both are done from within Access, using import/link tables or export.

    Imports and exports follow the same path.

    For example, to export an Access table to Oracle:

    In Access, go to the tables view and right-click the table that you want to export.

    Select Export, and under 'Save as Type' select 'ODBC Databases' from the drop-down list (at the bottom).

    Click Ok.

    You will then get a 'Select Data Source' screen.

    Choose the 'Machine Data Source' tab, and you should see your named data source ('G EX').

    Select it and click OK.

    The service name should say XE.

    Enter the Oracle user name and password of the user with CONNECT and RESOURCE.

    It should then be exported.
    The import process is the reverse.

    Tips for Access tables.

    Oracle likes table and field names in capital letters. It doesn't like spaces in field names.
    You may need to export the tables to another, empty Access db to rename the table and/or fields before exporting to Oracle.

    I hope this helps...

  • Read data from an E$ table and insert into the staging table

    Hi all

    I'm new to ODI. I need your help to understand how to read data from an E$ table and insert it into a staging table.

    Scenario:

    Two columns in a flat file, the employee name and the employee id, must be loaded into a datastore EMP. A check constraint is added so that only rows whose employee names are in capital letters load into the datastore. The check control is set to static. Right-click the datastore, select Control, then Check. The rows that violated the check constraint are kept in the E$_EMP table.

    Problem:

    The problem: I want to read the data in the E$_EMP table, transform the employee names to capital letters, and move the corrected data from E$_EMP to EMP. Please advise me on how to automatically manage these 'soft' exceptions in ODI.

    Thank you

    If I understand correctly, you want to transform the columns in the E$ tables and then load them into the target.

    Now, if you look at how ODI recycles errors, there is an incremental update to the target using the E$ table after it has filled the I$ table.

    I think you can do the same thing by creating an interface that uses the E$ table as the source, and implement the business logic in that interface to fill the target.
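
    The correction itself is plain SQL; in ODI it would live in that interface's mappings, but for illustration (hypothetical table and column names):

        INSERT INTO emp (emp_id, emp_name)
        SELECT emp_id, UPPER(emp_name)   -- fix the soft error: force names to capital letters
        FROM   e$_emp;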

  • Reading an XML file from a CLOB column in the staging table

    Hello

    I am trying to query, with the database adapter, a staging table that has a CLOB column containing an XML file. How do I extract the XML from the CLOB and map its fields to a variable of the other, final schema?

    Thank you

    Published by: chaitu123 on Sep 20, 2009 08:16

    (1) When you create a DBAdapter on a table that has a CLOB column, look closely at the XSD created for the DBAdapter: the element for the CLOB column must be of a string data type.

    (2) Create an XSD for the XML files and create a variable of that XSD element.

    (3) Use ora:parseEscapedXML("yourDBAdapterclobElement") for XmlFileVariable.

    Krishna
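
    If the extraction can instead be pushed into the database, the same CLOB can be cracked open with XMLTYPE() and XMLTABLE. A sketch with hypothetical names and XML structure:

        SELECT x.order_id, x.customer
        FROM   xml_staging t,
               XMLTABLE ('/Order'
                         PASSING XMLTYPE(t.xml_clob)   -- parse the CLOB into an XMLType
                         COLUMNS order_id VARCHAR2(20) PATH 'OrderId',
                                 customer VARCHAR2(40) PATH 'Customer') x;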

  • How to import selected tables from MySQL to Oracle

    Hello everyone:

    How can I import tables from MySQL to Oracle? I have more than 180 tables in my MySQL DB, but I copy only 12 of them to Oracle... right now I do it manually, using SQL Developer (right-clicking on the table icon and choosing Copy to Oracle), but now I need to do this automatically every 4 hours...

    I read about the dblink option, but I don't know if it is the best fit for this case...

    I appreciate any help.

    Best regards
    Jack

    Hi Jack,

    I think that with a db-link you can define an automated task (PL/SQL + scheduler).
    We use a db-link from the Oracle RDBMS to MySQL ourselves, though that is the extent of our experience.
    We created the db-link as described in http://www.pythian.com/news/1554/how-to-access-mysql-from-oracle-with-odbc-and-sql/

    Cordially, Heike
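
    To make it run every 4 hours, a DBMS_SCHEDULER job can wrap the copy. A minimal sketch (the db-link name, table names, and schedule are assumptions to adapt):

        BEGIN
          DBMS_SCHEDULER.create_job(
            job_name        => 'COPY_MYSQL_TABLES',
            job_type        => 'PLSQL_BLOCK',
            job_action      => q'[BEGIN
                                    DELETE FROM emp_local;
                                    -- MySQL identifiers often need quoting through the ODBC gateway
                                    INSERT INTO emp_local SELECT * FROM "emp"@mysql_link;
                                    COMMIT;
                                  END;]',
            repeat_interval => 'FREQ=HOURLY;INTERVAL=4',
            enabled         => TRUE,
            comments        => 'Refresh local copies of the MySQL tables every 4 hours');
        END;
        /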

  • Question on loading data using SQL*Loader into staging tables and then into the main tables!

    Hello

    I'm trying to load data into our main database tables using SQL*Loader. The data is provided in pipe-separated CSV files.

    I have developed a shell script to load the data, and it works fine except for one thing.

    Here are the details needed to re-create the problem.

    Structure of the staging tables into which the data is loaded using SQL*Loader:

    create table stg_cmts_data (cmts_token varchar2 (30), CMTS_IP varchar2 (20));

    create table stg_link_data (dhcp_token varchar2 (30), cmts_to_add varchar2 (200));

    create table stg_dhcp_data (dhcp_token varchar2 (30), DHCP_IP varchar2 (20));

    Data in the CSV files:

    For stg_cmts_data - cmts_map_03092015_1.csv:

    WNLB-CMTS-01-1|10.15.0.1
    WNLB-CMTS-02-2|10.15.16.1
    WNLB-CMTS-03-3|10.15.48.1
    WNLB-CMTS-04-4|10.15.80.1
    WNLB-CMTS-05-5|10.15.96.1

    For stg_dhcp_data - dhcp_map_03092015_1.csv:

    DHCP-1-1-1|10.25.23.10,25.26.14.01
    DHCP-1-1-2|56.25.111.25,100.25.2.01
    DHCP-1-1-3|25.255.3.01,89.20.147.258
    DHCP-1-1-4|10.25.26.36,200.32.58.69
    DHCP-1-1-5|80.25.47.369,60.258.14.10

    For stg_link_data - cmts_dhcp_link_map_0309151623_1.csv:

    DHCP-1-1-1|WNLB-CMTS-01-1,WNLB-CMTS-02-2
    DHCP-1-1-2|WNLB-CMTS-03-3,WNLB-CMTS-04-4,WNLB-CMTS-05-5
    DHCP-1-1-3|WNLB-CMTS-01-1
    DHCP-1-1-4|WNLB-CMTS-05-8,WNLB-CMTS-05-6,WNLB-CMTS-05-0,WNLB-CMTS-03-3
    DHCP-1-1-5|WNLB-CMTS-02-2,WNLB-CMTS-04-4,WNLB-CMTS-05-7
    WNLB-DHCP-1-13|WNLB-CMTS-02-2

    Now, after loading these data into the staging tables, I have to fill the main database tables:

    create table subntwk (subntwk_nm varchar2 (20), subntwk_ip varchar2 (30));

    create table link (link_nm varchar2 (50));

    The SQL scripts that I created to load the data are as follows.

    spool load_cmts.log

    set serveroutput on

    DECLARE

        CURSOR c_stg_cmts IS
            SELECT *
            FROM   stg_cmts_data;

        TYPE t_stg_cmts IS TABLE OF stg_cmts_data%ROWTYPE INDEX BY PLS_INTEGER;

        l_stg_cmts t_stg_cmts;
        l_cmts_cnt NUMBER;
        l_cnt      NUMBER;
        l_cnt_1    NUMBER;

    BEGIN

        OPEN c_stg_cmts;
        FETCH c_stg_cmts BULK COLLECT INTO l_stg_cmts;

        FOR i IN l_stg_cmts.FIRST .. l_stg_cmts.LAST
        LOOP
            SELECT COUNT(1)
            INTO   l_cmts_cnt
            FROM   subntwk
            WHERE  subntwk_nm = l_stg_cmts(i).cmts_token;

            IF l_cmts_cnt < 1 THEN
                INSERT INTO subntwk (subntwk_nm)
                VALUES (l_stg_cmts(i).cmts_token);

                DBMS_OUTPUT.put_line('token has been added: ' || l_stg_cmts(i).cmts_token);
            ELSE
                DBMS_OUTPUT.put_line('token is already present');
            END IF;

            EXIT WHEN l_stg_cmts.COUNT = 0;
        END LOOP;

        COMMIT;

    EXCEPTION
        WHEN OTHERS THEN
            DBMS_OUTPUT.put_line('ERROR ' || SQLERRM);
    END;
    /

    output

    For dhcp:


    spool load_dhcp.log

    set serveroutput on

    DECLARE

        CURSOR c_stg_dhcp IS
            SELECT *
            FROM   stg_dhcp_data;

        TYPE t_stg_dhcp IS TABLE OF stg_dhcp_data%ROWTYPE INDEX BY PLS_INTEGER;

        l_stg_dhcp t_stg_dhcp;
        l_dhcp_cnt NUMBER;
        l_cnt      NUMBER;
        l_cnt_1    NUMBER;

    BEGIN

        OPEN c_stg_dhcp;
        FETCH c_stg_dhcp BULK COLLECT INTO l_stg_dhcp;

        FOR i IN l_stg_dhcp.FIRST .. l_stg_dhcp.LAST
        LOOP
            SELECT COUNT(1)
            INTO   l_dhcp_cnt
            FROM   subntwk
            WHERE  subntwk_nm = l_stg_dhcp(i).dhcp_token;

            IF l_dhcp_cnt < 1 THEN
                INSERT INTO subntwk (subntwk_nm)
                VALUES (l_stg_dhcp(i).dhcp_token);

                DBMS_OUTPUT.put_line('token has been added: ' || l_stg_dhcp(i).dhcp_token);
            ELSE
                DBMS_OUTPUT.put_line('token is already present');
            END IF;

            EXIT WHEN l_stg_dhcp.COUNT = 0;
        END LOOP;

        COMMIT;

    EXCEPTION
        WHEN OTHERS THEN
            DBMS_OUTPUT.put_line('ERROR ' || SQLERRM);
    END;
    /

    output

    For link:

    spool load_link.log

    set serveroutput on

    DECLARE

        l_cmts_1      VARCHAR2(4000 CHAR);
        l_cmts_add    VARCHAR2(200 CHAR);
        l_dhcp_cnt    NUMBER;
        l_cmts_cnt    NUMBER;
        l_link_cnt    NUMBER;
        l_add_link_nm VARCHAR2(200 CHAR);

    BEGIN

        FOR r IN (
            SELECT dhcp_token, cmts_to_add || ',' AS cmts_add
            FROM   stg_link_data
        )
        LOOP
            l_cmts_1   := r.cmts_add;
            l_cmts_add := TRIM(SUBSTR(l_cmts_1, 1, INSTR(l_cmts_1, ',') - 1));

            SELECT COUNT(1)
            INTO   l_dhcp_cnt
            FROM   subntwk
            WHERE  subntwk_nm = r.dhcp_token;

            IF l_dhcp_cnt = 0 THEN
                DBMS_OUTPUT.put_line('Device not found: ' || r.dhcp_token);
            ELSE
                WHILE l_cmts_add IS NOT NULL
                LOOP
                    l_add_link_nm := r.dhcp_token || '_TO_' || l_cmts_add;

                    SELECT COUNT(1)
                    INTO   l_cmts_cnt
                    FROM   subntwk
                    WHERE  subntwk_nm = TRIM(l_cmts_add);

                    SELECT COUNT(1)
                    INTO   l_link_cnt
                    FROM   link
                    WHERE  link_nm = l_add_link_nm;

                    IF l_cmts_cnt > 0 AND l_link_cnt = 0 THEN
                        INSERT INTO link (link_nm)
                        VALUES (l_add_link_nm);

                        DBMS_OUTPUT.put_line(l_add_link_nm || ' has been added.');
                    ELSIF l_link_cnt > 0 THEN
                        DBMS_OUTPUT.put_line('link is already present: ' || l_add_link_nm);
                    ELSIF l_cmts_cnt = 0 THEN
                        DBMS_OUTPUT.put_line('NO CMTS FOUND for device to create the link: ' || l_cmts_add);
                    END IF;

                    l_cmts_1   := TRIM(SUBSTR(l_cmts_1, INSTR(l_cmts_1, ',') + 1));
                    l_cmts_add := TRIM(SUBSTR(l_cmts_1, 1, INSTR(l_cmts_1, ',') - 1));
                END LOOP;
            END IF;
        END LOOP;

        COMMIT;

    EXCEPTION
        WHEN OTHERS THEN
            DBMS_OUTPUT.put_line('ERROR ' || SQLERRM);
    END;
    /

    output

    Control files:

    LOAD DATA
    INFILE 'cmts_data.csv'
    APPEND
    INTO TABLE stg_cmts_data
    WHEN (cmts_token != '') AND (cmts_token != 'NULL') AND (cmts_token != 'null')
     AND (cmts_ip != '') AND (cmts_ip != 'NULL') AND (cmts_ip != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (cmts_token "RTRIM(LTRIM(:cmts_token))",
     cmts_ip    "RTRIM(LTRIM(:cmts_ip))")

    For dhcp:


    LOAD DATA
    INFILE 'dhcp_data.csv'
    APPEND
    INTO TABLE stg_dhcp_data
    WHEN (dhcp_token != '') AND (dhcp_token != 'NULL') AND (dhcp_token != 'null')
     AND (dhcp_ip != '') AND (dhcp_ip != 'NULL') AND (dhcp_ip != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (dhcp_token "RTRIM(LTRIM(:dhcp_token))",
     dhcp_ip    "RTRIM(LTRIM(:dhcp_ip))")

    For link:

    LOAD DATA
    INFILE 'link_data.csv'
    APPEND
    INTO TABLE stg_link_data
    WHEN (dhcp_token != '') AND (dhcp_token != 'NULL') AND (dhcp_token != 'null')
     AND (cmts_to_add != '') AND (cmts_to_add != 'NULL') AND (cmts_to_add != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (dhcp_token  "RTRIM(LTRIM(:dhcp_token))",
     cmts_to_add CHAR(4000) "RTRIM(LTRIM(:cmts_to_add))")

    Shell script:

    if [ ! -d log ]
    then
        mkdir log
    fi

    if [ ! -d done ]
    then
        mkdir done
    fi

    if [ ! -d bad ]
    then
        mkdir bad
    fi

    nohup time sqlldr username/password@SID CONTROL=load_cmts_data.ctl LOG=log/ldr_cmts_data.log BAD=log/ldr_cmts_data.bad DISCARD=log/ldr_cmts_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_cmts.sql

    nohup time sqlldr username/password@SID CONTROL=load_dhcp_data.ctl LOG=log/ldr_dhcp_data.log BAD=log/ldr_dhcp_data.bad DISCARD=log/ldr_dhcp_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_dhcp.sql

    nohup time sqlldr username/password@SID CONTROL=load_link_data.ctl LOG=log/ldr_link_data.log BAD=log/ldr_link_data.bad DISCARD=log/ldr_link_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_link.sql

    mv *.log ./log

    The problem I encounter is in loading the link table. I check whether the DHCP is present in the subntwk table; if not, I log an error and continue. Likewise, if the CMTS is not found, I skip creating the link and log an error.

    Note that multiple CMTSs can be associated with a single DHCP.

    Links do get created in the link table, but for the last comma-separated CMTS of each stg_link_data row, the log reports that the CMTS was not found.

    For example:

    DHCP-1-1-1|WNLB-CMTS-01-1,WNLB-CMTS-02-2

    Here I expect dhcp-1-1-1 to be linked with wnlb-CMTS-01-1 and wnlb-CMTS-02-2.

    All of this data is present in the subntwk table, but the log still says wnlb-CMTS-02-2 could not be FOUND, even though it has already been loaded into the subntwk table.

    The same thing happens with every CMTS that comes last in its stg_link_data row (I think you see what I'm trying to explain).

    But when I run the SQL scripts separately in SQL Developer, all valid links are inserted into the link table.

    It should create 9 rows in the link table, whereas now it creates only 5.

    I use COMMIT in my script as well, but that alone does not help.

    Run these scripts on your machine and let me know if you get the same behavior. And please give me a solution; I have tried many things since yesterday, but it's always the same.

    This is the link table log:

    link is already present: dhcp-1-1-1_TO_wnlb-cmts-01-1
    NO CMTS FOUND for device to create the link: wnlb-CMTS-02-2
    link is already present: dhcp-1-1-2_TO_wnlb-cmts-03-3
    link is already present: dhcp-1-1-2_TO_wnlb-cmts-04-4
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-5
    NO CMTS FOUND for device to create the link: wnlb-CMTS-01-1
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-8
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-6
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-0
    NO CMTS FOUND for device to create the link: wnlb-CMTS-03-3
    link is already present: dhcp-1-1-5_TO_wnlb-cmts-02-2
    link is already present: dhcp-1-1-5_TO_wnlb-cmts-04-4
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-7
    Device not found: wnlb-dhcp-1-13

    If you need more information, please let me know.

    Thank you

    I realized later that night that when the staging tables were loaded on the UNIX machine, a DOS line ending was kept at the end of each line. That is why the last CMTS was never found. I ran a DOS-to-UNIX conversion on the files, and everything started to work perfectly.

    It was the dos2unix error!

    Thank you all for your interest; I'm always learning new things, as I have only about 10 months of experience in PL/SQL and SQL.
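
    For reference, the stray carriage returns could also have been stripped in the control file itself, making the load immune to DOS-format files. A sketch of just the column list (same names as above):

        (dhcp_token  "RTRIM(LTRIM(:dhcp_token))",
         cmts_to_add CHAR(4000) "RTRIM(LTRIM(REPLACE(:cmts_to_add, CHR(13))))")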

  • How to compare the length of the data in a staging table with the definition of the base table

    Hello
    I have two tables: a staging table and a base table.
    I get flat-file data into the staging table. Per the requirement, the structures of the staging table and the base table are different (the length of each column in the staging table is 25% larger, so the data dumps without errors). For example, if the city column is varchar(40) in the staging table, it is 25 in the base table. Once the data is dumped into the staging table, I want to compare the actual length of the data in each column of the staging table with the base table definition (data_length for each column from all_tab_columns), and for any column whose data is too long I need to update the corresponding row in the staging table, which also has a flag column called err_length.

    So for that I use:

    cursor c1 is select length(a.id), length(b.sid) from staging_table;
    cursor c2 (name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;

    But the first query returns its data all at once, while with the second cursor I need to fetch each column and then compare against the first. Can someone tell me how to get the desired results?

    Thank you
    Manoi.

    Hi, Manoi.

    Sure: you can compute src.err_length in the USING clause (where you can reference all_tab_columns) and use that value in the SET clause.
    That is:

    MERGE INTO  staging_table   dst
    USING  (
           WITH     got_lengths     AS
                     (
              SELECT  MAX (CASE WHEN column_name = 'ENAME' THEN data_length END)     AS ename_len
              ,     MAX (CASE WHEN column_name = 'JOB'   THEN data_length END)     AS job_len
              FROM     all_tab_columns
              WHERE     owner          = 'SCOTT'
              AND     table_name     = 'EMP'
              )
         SELECT     s.ename
         ,     s.job
         ,     CASE WHEN LENGTH (s.ename) > l.ename_len THEN 'ENAME ' END     ||
              CASE WHEN LENGTH (s.job)   > l.job_len   THEN 'JOB '   END     AS err_length
         FROM     staging_table     s
         JOIN     got_lengths     l     ON     LENGTH (s.ename)     > l.ename_len
                             OR     LENGTH (s.job)          > l.job_len
         )     src
    ON     (src.ename     = dst.ename)
    WHEN MATCHED THEN UPDATE
         SET     dst.err_length     = src.err_length
    ;
    

    As you can see, you have to hardcode the common column names in several places. I tried to simplify that, and found an interesting (at least to me) alternative involving the user-defined aggregate function STRAGG.
    As you can see, only the USING subquery is changed.

    MERGE INTO  staging_table   dst
    USING  (
           SELECT       s.ename
           ,       s.job
           ,       STRAGG (l.column_name)     AS err_length
           FROM       staging_table          s
           JOIN       all_tab_columns     l
          ON       l.data_length  < LENGTH ( CASE  l.column_name
                                              WHEN  'ENAME'
                                    THEN      ename
                                    WHEN  'JOB'
                                    THEN      job
                                       END
                               )
           WHERE     l.owner      = 'SCOTT'
           AND      l.table_name     = 'EMP'
           AND      l.data_type     = 'VARCHAR2'
           GROUP BY      s.ename
           ,           s.job
           )     src
    ON     (src.ename     = dst.ename)
    WHEN MATCHED THEN UPDATE
         SET     dst.err_length     = src.err_length
    ;
    

    Instead of the user-defined STRAGG (which you can copy from AskTom), you can also use the undocumented WM_CONCAT function or, from Oracle 11.2 on, the built-in LISTAGG function.

  • Approach to parsing a large number of XML files into relational tables

    We are studying the feasibility of XML DB for processing a large number of files received in a single day.
    The goal is to parse each XML file and store it in several relational tables; once it is in the relational tables, we don't care about the XML file itself.
    The files cannot be stored on the file server and must be stored in a table before parsing, because of security concerns. A third-party system will send the files and store them in the XML database.
    File sizes can be between 1 MB and 50 MB, and high performance is very much expected; otherwise the solution will be discarded.
    Although we have no XSD, the XML files are well structured. We are on 11g Release 2.

    Based on my reading, this is my approach:

    1. Create the table:

    CREATE TABLE donnees_xml
        (xml_col XMLTYPE)
        XMLTYPE xml_col STORE AS BINARY XML;

    2. The third party will store the data in the donnees_xml table.

    3. Create an XMLINDEX on the unique XML element.

    4. Create XMLType views:

    CREATE OR REPLACE FORCE VIEW v_xml_data
    (
        stype,
        mtype,
        mname,
        rot
    )
    AS
    SELECT x."Stype",
           x."Mtype",
           x."Mname",
           x."ROT"
    FROM   donnees_xml t,
           XMLTABLE ('/SectionMain'
                     PASSING t.xml_col
                     COLUMNS Stype VARCHAR2(30) PATH 'Stype',
                             Mtype VARCHAR2(3)  PATH 'Mtype',
                             Mname VARCHAR2(30) PATH 'Mname',
                             ROT   VARCHAR2(30) PATH 'OID') x;

    5. Bulk load the parsed data into the staging tables, based on the indexed column.

    Please comment on the above process and suggest anything that could improve the performance.

    Thank you
    AnuragT

    PASSING t.xml_col   <-- what will you be passing here, since there is no xml_col column?

    If you are using an XMLType table, instead of an XMLType column reference, you refer to the contents of the XMLType using the pseudocolumn OBJECT_VALUE:

    i.e. --> t.object_value
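
    For illustration, this is what the view's query might look like if donnees_xml were an XMLType table rather than a table with an XMLType column (a sketch; only the PASSING clause differs):

        SELECT x."Stype"
        FROM   donnees_xml t,
               XMLTABLE ('/SectionMain'
                         PASSING t.object_value   -- pseudocolumn exposing the row's XMLType content
                         COLUMNS Stype VARCHAR2(30) PATH 'Stype') x;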

  • Nested table columns and ADF BC 11.1.2

    I'm working toward a new design for an application, including a redesign of the database. In this application there are users who cannot change the production tables directly; their amendments must be approved (and possibly modified) before being applied to the production tables. The production tables are part of an existing system and are fairly well normalized, with a main table and several detail tables.

    So for the new design, I want a staging table, mirroring the main table, where a user's changes are stored until they are approved and applied to the production tables. The staging table contains some additional columns: whether the user wants to add, change, or delete; who requested the change; and the date the change was requested. After a change is applied, the staging record must be copied into a change history table and deleted from the staging table. This way, the staging table never has much data in it.

    Here's the question:
    I also need to handle the detail tables. I could have staging versions of each detail table, but I thought it might be easier to manage if the detail tables were included as nested table columns of the main staging table. Most of the detail tables contain only a few rows per master row. But can ADF BC 11.1.2 handle nested table columns? Is that easy to use in an application?

    Hello,

    does ADF Faces support nested tables? lol - so even if ADF BC did, where would you go with this approach? Would polymorphic views be an option (think hard)?

    Frank

  • Deleting data from a large table

    Hello

    I have a very large non-partitioned table, about 50 GB. I need to remove old data from the table, around 25-30 GB.

    What I have in mind is:

    (1) export a dump of the table using expdp
    (2) drop the table
    (3) create a partitioned table
    (4) import the table data
    (5) drop the old partitions

    Please let me know if there is a better way to do this.

    Uhm... do a conditional export?

    It should be a bit faster, because you do not have to import the whole of the original table.

    Bye,
    Antonio
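
    A conditional export might look like this (a sketch; the directory, table, and predicate are placeholders, and a parameter file sidesteps shell quoting):

        expdp system/password PARFILE=exp_keep.par

    where exp_keep.par contains:

        DIRECTORY=dump_dir
        DUMPFILE=big_table_keep.dmp
        TABLES=scott.big_table
        QUERY=scott.big_table:"WHERE created_dt >= DATE '2015-01-01'"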

  • Use of the staging table name parameter of the prepareBulk / completeBulk functions

    I am trying to load 1.6 billion triples into an RDF semantic graph instance. I am using the prepareBulk / completeBulk approach described in '7.12 Bulk Loading Using RDF Semantic Graph Support for Apache Jena'. I loaded the triples from .ttl.gz files into a staging table with prepareBulk, following 'Example 7-10 Loading Data into the Staging Table (prepareBulk)'.

    As in Example 7-10, I used null for the 'staging table name' parameter of prepareBulk. I then ran a separate program to execute completeBulk, following 'Example 7-11 Loading Data from the Staging Table into Semantic Networks (completeBulk)'. Example 7-11 also shows null as the value for the 'staging table name' parameter. The prepareBulk operations seem to have executed successfully with a null staging table name; however, null does not seem to be a valid entry for completeBulk's staging table name parameter. Running completeBulk(null, null); produces the following error:

    Hit the exception ORA-00942: table or view does not exist

    What is the relationship between the 'staging table name' parameters of prepareBulk and completeBulk? Is null a valid value for this parameter in prepareBulk, and if so, what is the corresponding value that should be passed to completeBulk?

    Hello

    This seems odd. We have a test for this case; we will try it. By default, the staging table is created under the same user's schema, and the table name is 'RDFB_' followed by the model name.

    Can you please verify the existence of such a table in your schema? It should have 1B+ rows. If so, you can proceed using that table name directly.

    Since you are dealing with a good amount of data, the following should help performance:

    (1) remove the indexes on the application table before you run the completeBulk call;

    (2) enable parallel DML before the call: oracle.executeSQL("alter session enable parallel dml");

    (3) use the parallel load options. An example is the following; the degree of parallelism is set to 4, and you will need to tune it to your own configuration.

    "PARSE PARALLEL=4 PARALLEL_CREATE_INDEX MBV_METHOD=SHADOW"

    Thank you

    Zhe Wu

  • Difference between a Dimension table and a Dimension staging table?

    Hi all

    I would like to learn OBI Apps; I'm an OBIEE consultant. I have some questions as I get going. Please guide me along this path.

    Thank you

    Dimension table names end with _D and dimension staging table names end with _DS. It's simply the nomenclature used throughout warehouse projects.

    A _DS table connects directly to the source and carries most of the details.
    A _D table has the primary key (called ROW_WID, which becomes the foreign key in every fact table that refers to it) as well as the details from the _DS table.

    Mark as correct or helpful if this helps,

    Kind regards
    Rayan Vieira

    Published by: Rayan Vieira on June 5, 2013 16:36

  • 11gR2 Data Pump: importing tables from one schema into another schema

    I am studying Oracle. I exported the HR schema to the hrexport.dmp file. When I import tables from this file, I run into trouble. I used Enterprise Manager:
    1. Connected as the SYSTEM user, connecting as Normal
    2. Chose the import type: tables
    3. Specified the file with the data to import
    4. Selected the tables to import
    5. At the next step, I tried to insert a row into the schema remapping table and change the Destination Schema cell, but the only schema name in the list is HR! Why?

    Published by: alvahtin on 10.03.2013 06:11

    ORA-39166: Object SYSTEM.EMPLOYEES was not found.
    ORA-39166: Object SYSTEM.DEPARTMENTS was not found.
    ORA-39166: Object SYSTEM.LOCATIONS was not found.

    The tables are not owned by SYSTEM. Try:

    impdp system/oracle remap_schema=hr:inventory tables=hr.employees,hr.departments,hr.locations ........
    
