EAS data load and outline question

Dear all,

I am currently using Hyperion Planning 11.1.2.2, and I created an application with the appropriate outline (see below):

HSP_Rates/account/period/year/scenario/Version/currency/entity/brand/product/customer/measure

and created the corresponding Essbase databases. I want to load the data via EAS, so I created a data load file (a table) with exactly the same column names as my Planning dimension names, EXCEPT HSP_Rates, which is absent from my data file. I'm sure all the dimension members are correct and exist, and there are no empty fields.

When loading the file, the following error message appears:
Data Value [2056] Encountered Before All Dimensions Selected, [2] Records Completed
Unexpected Essbase error 1003007

Question: do I need to add a column named HSP_Rates to my data file? If so, what value should I put in this column?


PS: the data load file is an Excel file.

Thanks in advance for your help.
Best regards.

The problem is resolved:

Load data into Essbase using the rules file

The answer is yes: create a column named HSP_Rates in the data file, with 'HSP_InputValue' as the value.
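
For illustration, a minimal sketch of what such a load file could look like once the column is added. The dimension order follows the outline above, but the member names and the final Data column are hypothetical placeholders, not values from the original post:

HSP_Rates,Account,Period,Year,Scenario,Version,Currency,Entity,Brand,Product,Customer,Measure,Data
HSP_InputValue,Sales,Jan,FY13,Actual,Final,USD,E01,B01,P01,C01,Amount,100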

Thank you

Tags: Business Intelligence

Similar Questions

  • Loading date and text data in Workforce Planning?

    Hello world

    I need to do an annual load of HR (workforce) planning data. I don't know how to load columns that contain text (function, for example) or a date (start_date, for example) into Planning, or even whether I can do it.

    We are on Planning 11.1.2 & FDM.

    Kind regards
    Robb Salzmann

    Yes, you can certainly load text and date information using the Outline Load utility. The same logic applies if you use ODI, since it uses the same engine as the Outline Load utility.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Loading data to Essbase using EAS versus back-end scripts

    Good afternoon

    We have noticed recently that loading our ASO cube with back-end scripts (esscmd etc.) seems to add much more overhead than using EAS and loading the data files individually.  When loading using scripts, the total size of the cube has been 1.2 GB.  When loading the files individually in EAS, the size of the cube has been 800 MB.  Why the difference?  Is there anything we can do in the scripts to reduce this overhead?

    Thank you

    Are you really using ESSCMD to load the ASO cubes?  You should use MaxL with buffers to load. By default, EAS uses a load buffer when you load multiple files; ESSCMD (and MaxL without the buffer clause) does not. That means longer loads and larger files. When loading an ASO cube, Essbase takes the existing .dat file and adds the new data. When you are not using a buffer load, it takes the .dat file and the .tmp file and merges them together; then, when you load the second file, it takes the .dat file (which now includes the first load's data) and repeats the process. Every time it does that, it has to merge the .dat file (twice), and the .dat file grows. For lack of anything else I'll call it fragmentation, but I don't think it's as simple as that; I think it's just the way the data is stored. When you use a buffer and no slices, it only needs to do this once.
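
    For reference, a minimal MaxL sketch of the buffer-based load described above (application, database, and file names are placeholders):

    alter database MyApp.MyDb initialize load_buffer with buffer_id 1;
    import database MyApp.MyDb data from data_file '/data/file1.txt' to load_buffer with buffer_id 1 on error abort;
    import database MyApp.MyDb data from data_file '/data/file2.txt' to load_buffer with buffer_id 1 on error abort;
    /* a single commit merges all buffered data into the .dat file once */
    import database MyApp.MyDb data from load_buffer with buffer_id 1;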

  • Failed to load data using the Outline Load utility

    I am unable to load data using the Outline Load utility.
    Data load dimension set to: Account
    Driver dimension set to: Period
    Driver member: Jan

    Load file:

    Account,Jan,Point-of-View,Data Load Cube Name
    Investment,1234,"India, current, heck, FY13",Plan1


    Outline Load utility command:
    OutlineLoad /A:pract /U:admin /I:C:\test1.csv /D:Account /L:C:\lg.log /X:C:\ex.exc


    Exception file

    [Thu Mar 28 18:05:39 GMT+05:30 2013] Error loading data record 1: Investment,1234,"""India, common, project, FY14""",Plan1
    [Thu Mar 28 18:05:39 GMT+05:30 2013] com.hyperion.planning.InvalidMemberException: The member India, common, rough, FY14 does not exist or you do not have access to it.
    [Thu Mar 28 18:05:40 GMT+05:30 2013] Planning Outline data store load process finished with exceptions: exceptions occurred, examine the exception file for more information. 1 data record was read, 1 data record was processed, 0 were successfully loaded, 1 was rejected.


    Log file:


    Successfully connected to application "Rlap", release 11.113, adapter Interface Version 5, Workforce supported and not enabled, CapEx not supported and not enabled, CSS Version 3
    [Thu Mar 28 18:05:38 GMT+05:30 2013] Input file located and opened successfully "C:\load.csv".
    [Thu Mar 28 18:05:38 GMT+05:30 2013] Record header fields: Account, Jan, Point-of-View, Data Load Cube Name
    [Thu Mar 28 18:05:38 GMT+05:30 2013] Located and using "Account" dimension for loading data into application "Rlap".
    [Thu Mar 28 18:05:40 GMT+05:30 2013] Load dimension "Account" has been successfully opened.
    [Thu Mar 28 18:05:40 GMT+05:30 2013] A cube refresh operation will not be performed.
    [Thu Mar 28 18:05:40 GMT+05:30 2013] Create security filters operation will not be performed.
    [Thu Mar 28 18:05:40 GMT+05:30 2013] Examine the Essbase log files for status if Essbase data was loaded.
    [Thu Mar 28 18:05:40 GMT+05:30 2013] Planning Outline data store load process finished with exceptions: exceptions occurred, examine the exception file for more information. 1 data record was read, 1 data record was processed, 0 were successfully loaded, 1 was rejected.



    In fact, the members do exist in the outline.
    Any help would be appreciated.

    Can you double-check your csv file? Open it in a text editor, because the error is showing additional quotation marks:

    [Thu Mar 28 18:05:39 GMT+05:30 2013] Error loading data record 1: Investment,1234,"""India, common, project, FY14""",Plan1
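
    For comparison, a record with the POV wrapped in a single pair of quotes would look like this (member names kept as they appear in the post):

    Account,Jan,Point-of-View,Data Load Cube Name
    Investment,1234,"India, common, project, FY14",Plan1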

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • How to load date and time from a text file into an Oracle table using SQL*Loader

    Hi friends

    I need you to show me what I'm missing when loading date and time from a text file into an Oracle table using SQL*Loader.

    This is my data, laid out this way (c:\external\my_data.txt):
    7369,SMITH,17-NOV-81,09:14:04,CLERK,20
    7499,ALLEN,01-MAY-81,17:06:08,SALESMAN,30
    7521,WARD,09-JUN-81,17:06:30,SALESMAN,30
    7566,JONES,02-APR-81,09:24:10,MANAGER,20
    7654,MARTIN,28-SEP-81,17:24:10,SALESMAN,30
    my table in the database is emp2:
    create table emp2 (empno number,
                      ename varchar2(20),
                      hiredate date,
                      etime date,
                      ejob varchar2(20),
                      deptno number);
    the control file is at this path (c:\external\ctrl.ctl):
    load data
     infile 'C:\external\my_data.txt'
     into table emp2
     fields terminated by ','
     (empno, ename, hiredate, etime, ejob, deptno)
    This is the error:
    C:\>sqlldr scott/tiger control=C:\external\ctrl.ctl
    
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon May 31 09:45:10 2010
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Commit point reached - logical record count 5
    
    C:\>
    any help would be appreciated

    Thank you

    Edited by: user10947262 on May 31, 2010 09:47

    load data
    infile 'C:\external\my_data.txt'
    into table emp2
    fields terminated by ','
    (empno, ename, hiredate, etime, ejob, deptno)

    Try

    load data
     infile 'C:\external\my_data.txt'
     into table emp2
     fields terminated by ','
     (empno, ename, hiredate, etime "to_date(:etime,'hh24:mi:ss')", ejob, deptno)
    

    This is the error:

    C:\>sqlldr scott/tiger control=C:\external\ctrl.ctl
    
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon May 31 09:45:10 2010
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Commit point reached - logical record count 5
    
    C:\>
    

    That isn't an error. You can see the rejected rows in the log file and the bad file.
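
    For instance, you can name the log and bad files explicitly on the command line and inspect them after the run (a sketch; the paths are placeholders):

    C:\>sqlldr scott/tiger control=C:\external\ctrl.ctl log=C:\external\ctrl.log bad=C:\external\ctrl.bad

    The bad file will contain the rejected rows, and the log file the ORA- error for each rejection.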

  • Sorry, newbie data-load question: datafile / uniform extent size

    Hi guys

    Sorry to disturb you - but I've done a lot of reading and am still confused.

    I was asked to create a new tablespace:

    create tablespace xyz datafile '/oradata/corpdata/xyz.dbf' size 2048M extent management local uniform size 1023M;

    alter tablespace xyz add datafile '/oradata/corpdata/xyz.dbf' size 2048M;

    Despite being worried at not being given any information about the data to load or why the tablespace had to be sized that way, I was told to just 'do it'.

    Someone tried to load data - and there was a message in the alerts log.

    ORA-1652: unable to extend temp segment by 65472 in tablespace xyz

    We do not use autoextend on data files, even though the person loading the data would like to (they are new to the environment).

    The database is on a nightly cold backup routine - we are between a rock and a hard place - we have no space on the server to use RMAN and only 10 GB left on the tape for the (Veritas) backup routine, so we control space by not using autoextend.

    As far as I know, the above error message means the tablespace is not large enough to hold the loaded data - but I was told by the person importing the data that they had sized it correctly and that it was something I did in the create command (although I cut and pasted from their instructions and adapted it to our environment - Windows 2003 SP2, but 32-bit).

    The person called to say I had messed up their data load and was about to report me to their manager for failing to do my job - and they did, and my line manager said that I had failed to correctly create the tablespace.

    When this person asked me to create the tablespace, I asked why they thought the extents should be 1023M, and they said it was a large data load that had to be inserted into a single extent.

    That sounds good... but I'm confused.

    1023M is a lot - it means you have only four extents in the tablespace before it reaches capacity.

    It is a GIS data load - I have not been involved in the previous GIS data loads, other than monitoring and changing the tablespaces that support them - and previous people have sized them right, and I've never had any comeback. Guess I'm a bit lazy - I just did as they asked.

    However, they only ever used 128K as an extent size before, never 1023M.

    Can I ask: is 1023M normal for large data loads - or am I right to question it? It seems excessive unless you really have just one table and one index of 1023M each.

    Thanks for any insight or pointers for further research.

    Assuming a block size of 8 KB, 65472 blocks would be 511 MB. However, as it is a GIS database, my guess is that the database block size itself has been set to 16K, in which case 65472 blocks is 1023 MB.

    What data load is being done? An Oracle Export dump? Does it include a CREATE INDEX statement?
    Export-Import does a CREATE TABLE and INSERT, so you can get an ORA-1652 once the table has been created.
    In particular, you will get an ORA-1652 on a CREATE INDEX, because the target segment (i.e. the index) for this operation is initially created as a 'temporary' segment until the index build is complete, when it switches from being a 'temporary' segment to being an 'index' segment.

    Also, if parallelism is used, each parallel slave would attempt to allocate extents of 1023 MB. Therefore, even if the final index should have been only, say, 512 MB, a CREATE INDEX with a DEGREE of 4 would begin with 4 extents of 1023 MB each, and would not shrink to less than that!

    An extent size of 1023 MB is, in my opinion, very bad. My guess is that they came up with an estimate of the size of the table, thought that the table should fit in 1 extent, and therefore specified 1023 MB in the script they provided to you. And that is wrong.

    Even Oracle's AUTOALLOCATE goes up only to 64 MB extents once a segment passes the 1 GB mark.
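
    By way of contrast, a sketch of a system-managed definition that lets Oracle pick extent sizes instead of forcing 1023M uniform extents (the file name is a placeholder):

    create tablespace xyz
      datafile '/oradata/corpdata/xyz.dbf' size 2048M
      extent management local autoallocate;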

  • Loading data into a DWH environment

    Hello

    We have a requirement to delete and reload 20 million records once a week. There are some bitmap indexes on the table.

    Please suggest which option I should consider to load the data efficiently.

    (1) drop the indexes dynamically, load the data, and re-create the indexes, gathering table/index stats after the data load.

    (2) set the indexes unusable and rebuild them, gathering table/index statistics after loading the data (reference https://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:2680568300346966968 ).

    If I choose option 2, is it required to run the command below during the loading process?

    alter session set skip_unusable_indexes=true;

    Thank you very much in advance for your help.

    -Suri

    A small question - since we are transferring the data into the main table, is there a need to build the indexes on the work table?

    The indexes on the partitioned table and the non-partitioned table must be compatible.

    I've never found it useful to mark bitmap indexes unusable; they need to be rebuilt anyway. Each bitmap index entry has bits for MANY rows, and setting a bit in one entry means that the bit has to be UNSET in all the other entries.

    So it does not make much sense to take an unusable bitmap index and try to update it to make it current.

    We have no partitioning on the main table. Can I still use the partition exchange technique to load the data? (Sorry for asking questions without trying - I'll try tomorrow.)

    Only one of the tables needs to be partitioned. Given that the main table is NOT partitioned, you partition the work table. The work table then has a single partition segment, and this segment is exchanged with the segment of the non-partitioned table.
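
    A minimal sketch of that exchange, assuming a work table with a single partition (all object names here are hypothetical):

    -- work_stage is partitioned with one partition p_all; main_tab is the
    -- non-partitioned target: the two segments swap in a dictionary operation
    ALTER TABLE work_stage
      EXCHANGE PARTITION p_all WITH TABLE main_tab
      INCLUDING INDEXES WITHOUT VALIDATION;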

    On a side note, which option is recommended to load the data when we have no license to use partitioning?

    Drop the indexes and re-create them. As I said above, bitmap indexes need to be rebuilt anyway.
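
    A sketch of that approach (table, index, and column names are placeholders):

    drop index fact_sales_bix;
    -- ... delete and load the 20 million rows here ...
    create bitmap index fact_sales_bix on fact_sales (region_id);
    exec dbms_stats.gather_table_stats(user, 'FACT_SALES', cascade => TRUE)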

  • Essbase data loading and dimension building

    Hi all

    Can I build dimensions and load data at the same time from a single file?

    If yes, how can I do it?

    Thank you

    Sunny

    To expand on what 955124 said, the load rule editor in EAS only lets you view either the "data load fields" or the "dimension build fields" at any one time, but it still remembers both sets and saves them in the load rule.  You should be able to display the dimension build fields and validate the rule for dimension build, then switch to showing the data load fields and validate the rule for data load.

    If you are loading with the rule through EAS, you can choose whether to use the rule to load data, build dimensions, or both.

    If you are getting a validation error on the load rule, I do not think it has anything to do with the fact that you are trying to use the same rule for both data load and dimension build.  That is a separate issue.

  • Loading Planning data into an EPMA interface table

    Hi all
    I have worked on Essbase and Planning applications and have done the task of loading data into Planning using the EPMA Interface Table option.

    However, I have a requirement the other way around, that is to say, I want to load Planning data into the interface table...
    Is this possible? Please help me...

    Thanks in advance

    So you want to fetch data from the Planning Essbase database and load it into an interface table?
    The interface table is essentially just a relational table, and there are a number of ways to get the data out: the DATAEXPORT calc command, a report script, a CDF, Essbase MDX.
    You could automate it with scripts or ODI, or, if you are willing to pay, Star Analytics.
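
    For the DATAEXPORT route, a minimal calc-script sketch (the DSN, table name, credentials, and FIX members are placeholders):

    SET DATAEXPORTOPTIONS
    {
      DataExportLevel "LEVEL0";
    };
    FIX ("Actual", "Working", "FY13")
      /* writes level-0 data straight into the relational table via ODBC */
      DATAEXPORT "DSN" "InterfaceDSN" "PLAN_INTERFACE" "dbuser" "dbpassword";
    ENDFIX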

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Load (without a rules file) hangs in EAS when loading data from an exported file

    I'm doing a data load into a database from a file I exported previously, and the data load hangs just after reading in the text file. It hangs where it says "press ESC or the cancel button to terminate the current operation". It is a very small text file that I am loading data from. I have to kill the ESSSVR process for this particular application to get EAS to perform other tasks; apart from that I am able to do all other tasks fine. Also, on our Prod environment we do not have this problem and can load the exported text files just fine. Has anyone seen this behavior before?

    Thank you
    Ted.

    Edited by: Teddd on March 4, 2013 08:52

    Posting this so it can benefit others on 11.1.1.4 with Essbase on an RHEL 5 server. The answer to the question is in Oracle knowledge base Doc ID 1531236.1. Apparently, it shows up only on RHEL 5 when glibc is updated from glibc-2.5-81 to glibc-2.5-107. The solution is to downgrade glibc to 2.5-81. Please refer to Doc ID 1531236.1 if any of you have a similar problem.
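
    For reference, that kind of downgrade is typically done with rpm's --oldpackage switch (a sketch; the exact package file name depends on your architecture and repository):

    rpm -Uvh --oldpackage glibc-2.5-81.x86_64.rpm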

    Edited by: Teddd on March 14, 2013 11:38

  • ORA-31693: Table data object "AWSTEMPUSER"."TEMPMANUALMAPRPT_273" failed to load/unload and is being skipped due to error:

    Dear all,

    OS - Windows server 2012 R2

    version - 11.2.0.1.0

    Server: production server

    ORA-31693: Table data object "AWSTEMPUSER"."TEMPMANUALMAPRPT_273" failed to load/unload and is being skipped due to error:

    ORA-02354: error in exporting/importing data

    ORA-00942: table or view does not exist

    I hit the error mentioned above while taking an expdp, but the expdp completed successfully with a warning, as below.

    Work "AWSCOMMONMASTER". "" FULLEXPJOB26SEP15_053001 "finished with 6 errors at 09:30:54

    (1) What does the error mean?

    (2) Is there any problem in the dump file because of the above error? If yes, then I'll rerun the expdp.

    Please advise. Thanks in advance

    Hello

    I suspect what has happened is that the application dropped a temporary table during the time you were running the export - consider this series of events:

    (1) temp table created by application

    (2) expdp job starts - including this table

    (3) the table metadata is extracted

    (4) the application drops the table

    (5) expdp tries to retrieve data from the table - and gets the above error.

    Just confirm with the application team that the table is just a temporary thing - it certainly looks like it from the name.
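
    If the application team confirms it, one way to keep such tables out of future exports is an EXCLUDE filter (a sketch; the name pattern is an assumption based on the table name above). With a parameter file exp.par containing:

    FULL=Y
    DUMPFILE=full.dmp
    LOGFILE=full.log
    EXCLUDE=TABLE:"LIKE 'TEMP%'"

    run:

    expdp system/password parfile=exp.par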

    Cheers,

    Rich

  • Question on loading data using SQL*Loader into staging tables, and then into the main tables!

    Hello

    I'm trying to load data into our main database tables using SQL*Loader. The data will be provided in pipe-separated csv files.

    I have developed a shell script to load the data, and it works fine except for one thing.

    Here are the details of the data to re-create the problem.

    Structure of the staging tables that will be populated using SQL*Loader:

    create table stg_cmts_data (cmts_token varchar2 (30), CMTS_IP varchar2 (20));

    create table stg_link_data (dhcp_token varchar2 (30), cmts_to_add varchar2 (200));

    create table stg_dhcp_data (dhcp_token varchar2 (30), DHCP_IP varchar2 (20));

    Data in the csv files -

    for stg_cmts_data-

    cmts_map_03092015_1.csv

    WNLB-CMTS-01-1 | 10.15.0.1

    WNLB-CMTS-02-2 | 10.15.16.1

    WNLB-CMTS-03-3 | 10.15.48.1

    WNLB-CMTS-04-4 | 10.15.80.1

    WNLB-CMTS-05-5 | 10.15.96.1

    for stg_dhcp_data-

    dhcp_map_03092015_1.csv

    DHCP-1-1-1 | 10.25.23.10, 25.26.14.01

    DHCP-1-1-2 | 56.25.111.25, 100.25.2.01

    DHCP-1-1-3 | 25.255.3.01, 89.20.147.258

    DHCP-1-1-4 | 10.25.26.36, 200.32.58.69

    DHCP-1-1-5 | 80.25.47.369, 60.258.14.10

    for stg_link_data

    cmts_dhcp_link_map_0309151623_1.csv

    DHCP-1-1-1 | WNLB-CMTS-01-1,WNLB-CMTS-02-2

    DHCP-1-1-2 | WNLB-CMTS-03-3,WNLB-CMTS-04-4,WNLB-CMTS-05-5

    DHCP-1-1-3 | WNLB-CMTS-01-1

    DHCP-1-1-4 | WNLB-CMTS-05-8,WNLB-CMTS-05-6,WNLB-CMTS-05-0,WNLB-CMTS-03-3

    DHCP-1-1-5 | WNLB-CMTS-02-2,WNLB-CMTS-04-4,WNLB-CMTS-05-7

    WNLB-DHCP-1-13 | WNLB-CMTS-02-2

    Now, after loading these data into the staging tables, I have to populate the main database tables:

    create table subntwk (subntwk_nm varchar2 (20), subntwk_ip varchar2 (30));

    create table link (link_nm varchar2 (50));

    The SQL scripts that I created to load the data are like this:

    spool load_cmts.log
    set serveroutput on

    DECLARE
       CURSOR c_stg_cmts IS SELECT * FROM stg_cmts_data;
       TYPE t_stg_cmts IS TABLE OF stg_cmts_data%ROWTYPE INDEX BY pls_integer;
       l_stg_cmts t_stg_cmts;
       l_cmts_cnt NUMBER;
       l_cnt NUMBER;
       l_cnt_1 NUMBER;
    BEGIN
       OPEN c_stg_cmts;
       FETCH c_stg_cmts BULK COLLECT INTO l_stg_cmts;
       FOR i IN l_stg_cmts.FIRST .. l_stg_cmts.LAST
       LOOP
          SELECT COUNT(1)
            INTO l_cmts_cnt
            FROM subntwk
           WHERE subntwk_nm = l_stg_cmts(i).cmts_token;
          IF l_cmts_cnt < 1 THEN
             INSERT INTO subntwk (subntwk_nm)
             VALUES (l_stg_cmts(i).cmts_token);
             DBMS_OUTPUT.put_line('token has been added: ' || l_stg_cmts(i).cmts_token);
          ELSE
             DBMS_OUTPUT.put_line('token is already present');
          END IF;
          EXIT WHEN l_stg_cmts.COUNT = 0;
       END LOOP;
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          dbms_output.put_line('ERROR ' || SQLERRM);
    END;
    /
    spool off

    for dhcp


    spool load_dhcp.log
    set serveroutput on

    DECLARE
       CURSOR c_stg_dhcp IS SELECT * FROM stg_dhcp_data;
       TYPE t_stg_dhcp IS TABLE OF stg_dhcp_data%ROWTYPE INDEX BY pls_integer;
       l_stg_dhcp t_stg_dhcp;
       l_dhcp_cnt NUMBER;
       l_cnt NUMBER;
       l_cnt_1 NUMBER;
    BEGIN
       OPEN c_stg_dhcp;
       FETCH c_stg_dhcp BULK COLLECT INTO l_stg_dhcp;
       FOR i IN l_stg_dhcp.FIRST .. l_stg_dhcp.LAST
       LOOP
          SELECT COUNT(1)
            INTO l_dhcp_cnt
            FROM subntwk
           WHERE subntwk_nm = l_stg_dhcp(i).dhcp_token;
          IF l_dhcp_cnt < 1 THEN
             INSERT INTO subntwk (subntwk_nm)
             VALUES (l_stg_dhcp(i).dhcp_token);
             DBMS_OUTPUT.put_line('token has been added: ' || l_stg_dhcp(i).dhcp_token);
          ELSE
             DBMS_OUTPUT.put_line('token is already present');
          END IF;
          EXIT WHEN l_stg_dhcp.COUNT = 0;
       END LOOP;
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          dbms_output.put_line('ERROR ' || SQLERRM);
    END;
    /
    spool off

    for link -

    spool load_link.log
    set serveroutput on

    DECLARE
       l_cmts_1 VARCHAR2(4000 CHAR);
       l_cmts_add VARCHAR2(200 CHAR);
       l_dhcp_cnt NUMBER;
       l_cmts_cnt NUMBER;
       l_link_cnt NUMBER;
       l_add_link_nm VARCHAR2(200 CHAR);
    BEGIN
       FOR r IN (
          SELECT dhcp_token, cmts_to_add || ',' AS cmts_add
            FROM stg_link_data
       )
       LOOP
          l_cmts_1 := r.cmts_add;
          l_cmts_add := TRIM(SUBSTR(l_cmts_1, 1, INSTR(l_cmts_1, ',') - 1));
          SELECT COUNT(1)
            INTO l_dhcp_cnt
            FROM subntwk
           WHERE subntwk_nm = r.dhcp_token;
          IF l_dhcp_cnt = 0 THEN
             DBMS_OUTPUT.put_line('Device not found: ' || r.dhcp_token);
          ELSE
             WHILE l_cmts_add IS NOT NULL
             LOOP
                l_add_link_nm := r.dhcp_token || '_TO_' || l_cmts_add;
                SELECT COUNT(1)
                  INTO l_cmts_cnt
                  FROM subntwk
                 WHERE subntwk_nm = TRIM(l_cmts_add);
                SELECT COUNT(1)
                  INTO l_link_cnt
                  FROM link
                 WHERE link_nm = l_add_link_nm;
                IF l_cmts_cnt > 0 AND l_link_cnt = 0 THEN
                   INSERT INTO link (link_nm)
                   VALUES (l_add_link_nm);
                   DBMS_OUTPUT.put_line(l_add_link_nm || ' has been added.');
                ELSIF l_link_cnt > 0 THEN
                   DBMS_OUTPUT.put_line('link is already present: ' || l_add_link_nm);
                ELSIF l_cmts_cnt = 0 THEN
                   DBMS_OUTPUT.put_line('NO CMTS FOUND for device to create the link: ' || l_cmts_add);
                END IF;
                l_cmts_1 := TRIM(SUBSTR(l_cmts_1, INSTR(l_cmts_1, ',') + 1));
                l_cmts_add := TRIM(SUBSTR(l_cmts_1, 1, INSTR(l_cmts_1, ',') - 1));
             END LOOP;
          END IF;
       END LOOP;
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          dbms_output.put_line('ERROR ' || SQLERRM);
    END;
    /
    spool off

    control files -

    LOAD DATA
    INFILE 'cmts_data.csv'
    APPEND
    INTO TABLE stg_cmts_data
    WHEN (cmts_token != '') AND (cmts_token != 'NULL') AND (cmts_token != 'null')
     AND (cmts_ip != '') AND (cmts_ip != 'NULL') AND (cmts_ip != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (cmts_token "RTRIM(LTRIM(:cmts_token))",
     cmts_ip "RTRIM(LTRIM(:cmts_ip))")

    for dhcp -


    LOAD DATA
    INFILE 'dhcp_data.csv'
    APPEND
    INTO TABLE stg_dhcp_data
    WHEN (dhcp_token != '') AND (dhcp_token != 'NULL') AND (dhcp_token != 'null')
     AND (dhcp_ip != '') AND (dhcp_ip != 'NULL') AND (dhcp_ip != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (dhcp_token "RTRIM(LTRIM(:dhcp_token))",
     dhcp_ip "RTRIM(LTRIM(:dhcp_ip))")

    for link -

    LOAD DATA
    INFILE 'link_data.csv'
    APPEND
    INTO TABLE stg_link_data
    WHEN (dhcp_token != '') AND (dhcp_token != 'NULL') AND (dhcp_token != 'null')
     AND (cmts_to_add != '') AND (cmts_to_add != 'NULL') AND (cmts_to_add != 'null')
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (dhcp_token "RTRIM(LTRIM(:dhcp_token))",
     cmts_to_add CHAR(4000) "RTRIM(LTRIM(:cmts_to_add))")

    SHELL SCRIPT-

    if [ ! -d log ]
    then
      mkdir log
    fi

    if [ ! -d done ]
    then
      mkdir done
    fi

    if [ ! -d bad ]
    then
      mkdir bad
    fi

    nohup time sqlldr username/password@SID CONTROL=load_cmts_data.ctl LOG=log/ldr_cmts_data.log BAD=log/ldr_cmts_data.bad DISCARD=log/ldr_cmts_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_cmts.sql

    nohup time sqlldr username/password@SID CONTROL=load_dhcp_data.ctl LOG=log/ldr_dhcp_data.log BAD=log/ldr_dhcp_data.bad DISCARD=log/ldr_dhcp_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_dhcp.sql

    nohup time sqlldr username/password@SID CONTROL=load_link_data.ctl LOG=log/ldr_link_data.log BAD=log/ldr_link_data.bad DISCARD=log/ldr_link_data.reject ERRORS=100000 DIRECT=TRUE PARALLEL=TRUE &

    nohup time sqlplus username/password@SID @load_link.sql

    mv *.log ./log

    The problem I encounter is here, when loading data into the link table: I check whether the DHCP is present in the subntwk table, otherwise log an error; then, if the CMTS exists, I create the link, otherwise log another error.

    Now, as you can see here, multiple CMTS are associated with a single DHCP.

    So it should create the links, but for the last iteration of the loop, where I take the comma-separated CMTS values from the stg_link_data table, it logs "CMTS not found".

    for example

    DHCP-1-1-1 | WNLB-CMTS-01-1,WNLB-CMTS-02-2

    Here, I am supposed to link dhcp-1-1-1 with wnlb-CMTS-01-1 and wnlb-CMTS-02-2.

    All these data are present in the subntwk table, but it still logs that wnlb-CMTS-02-2 could not be FOUND, even though we have already loaded it into the subntwk table.

    The same thing happens with every CMTS that comes last in its list in the stg_link_data table (I think now you get what I'm trying to explain).

    But when I run the SQL scripts in SQL Developer separately, they insert all the valid links into the link table.

    It should create 9 rows in the link table, whereas now it creates only 5 rows.

    I use COMMIT in my script too, but that does not help me.

    Run these scripts on your machine and let me know if you also get the same behavior I get.

    And please give me a solution; I have tried many things since yesterday, but it's always the same.

    This is the link table log:

    link is already present: dhcp-1-1-1_TO_wnlb-cmts-01-1

    NO CMTS FOUND for device to create the link: wnlb-CMTS-02-2

    link is already present: dhcp-1-1-2_TO_wnlb-cmts-03-3
    link is already present: dhcp-1-1-2_TO_wnlb-cmts-04-4

    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-5

    NO CMTS FOUND for device to create the link: wnlb-CMTS-01-1

    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-8
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-6
    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-0

    NO CMTS FOUND for device to create the link: wnlb-CMTS-03-3

    link is already present: dhcp-1-1-5_TO_wnlb-cmts-02-2
    link is already present: dhcp-1-1-5_TO_wnlb-cmts-04-4

    NO CMTS FOUND for device to create the link: wnlb-CMTS-05-7

    Device not found: wnlb-dhcp-1-13

    IF YOU NEED MORE INFORMATION, PLEASE LET ME KNOW

    Thank you

    I realized later that night that when the files were loaded into the staging tables, each line carried an extra carriage return (DOS line endings) from the UNIX machine's point of view, which is why the last CMTS on each line was never found. I ran the DOS-to-UNIX conversion on the files and it started to work perfectly.

    It was a DOS line-endings problem, fixed with dos2unix!
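
    For anyone hitting the same thing, the conversion is a one-liner (file names as used above):

    dos2unix cmts_data.csv dhcp_data.csv link_data.csv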

    Thank you all for your interest - I'm glad to learn new things, as I have only about 10 months of experience in PL/SQL and SQL.

  • Invalid username and password when executing the data load plan

    I get the following error when I try to execute the load plan to load data from my EBS server.  I don't know which username and password it claims is invalid, or where to change the value.  Any ideas where I can find it?

    ODI-1519: Serial step "Start Load Plan
    (InternalID:4923500)" failed because child step "Global Variable Refresh
    (InternalID:4924500)" is in error.

    ODI-1529: Refresh of variable 'BIAPPS.13P_CALENDAR_ID' failed:

    select CASE WHEN 'Global Variable Refresh' in (select distinct
    group_code from C_PARAMETER_VALUE_FORMATTER_V where PARAM_CODE =
    '13P_CALENDAR_ID')
    THEN (select param_value
    from C_PARAMETER_VALUE_FORMATTER_V
    where PARAM_CODE = '13P_CALENDAR_ID'
    and group_code = 'Global Variable Refresh'
    and datasource_num_id = '#BIAPPS.WH_DATASOURCE_NUM_ID')
    ELSE
    (select param_value from C_GL_PARAM_VALUE_FORMATTER_V where PARAM_CODE =
    '13P_CALENDAR_ID' and datasource_num_id =
    '#BIAPPS.WH_DATASOURCE_NUM_ID')
    END
    from dual

    0:72000:java.sql.SQLException: ORA-01017: invalid username/password; logon denied

    java.sql.SQLException: ORA-01017: invalid username/password; logon denied
    at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:133)
    at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:62)
    at oracle.odi.core.datasource.dwgobject.support.OnConnectOnDisconnectDataSourceAdapter.getConnection(OnConnectOnDisconnectDataSourceAdapter.java:74)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.executeVariableStep(LoadPlanProcessor.java:3050)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.refreshVariables(LoadPlanProcessor.java:4287)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.AddRunnableScenarios(LoadPlanProcessor.java:2284)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.AddRunnableScenarios(LoadPlanProcessor.java:2307)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.SelectNextRunnableScenarios(LoadPlanProcessor.java:2029)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.StartAllScenariosFromStep(LoadPlanProcessor.java:1976)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.startLPExecution(LoadPlanProcessor.java:491)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.initLPInstance(LoadPlanProcessor.java:384)
    at oracle.odi.runtime.agent.loadplan.LoadPlanProcessor.startLPInstance(LoadPlanProcessor.java:147)
    at oracle.odi.runtime.agent.processor.impl.StartLoadPlanRequestProcessor.doProcessRequest(StartLoadPlanRequestProcessor.java:87)
    at oracle.odi.runtime.agent.processor.SimpleAgentRequestProcessor.process(SimpleAgentRequestProcessor.java:49)
    at oracle.odi.runtime.agent.support.DefaultRuntimeAgent.execute(DefaultRuntimeAgent.java:68)
    at oracle.odi.runtime.agent.servlet.AgentServlet.processRequest(AgentServlet.java:564)
    at oracle.odi.runtime.agent.servlet.AgentServlet.doPost(AgentServlet.java:518)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:301)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:460)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:163)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3730)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused by: java.sql.SQLException: ORA-01017: invalid username/password; logon denied
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:397)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:389)
    at oracle.jdbc.driver.T4CTTIfun.processError(T4CTTIfun.java:689)
    at oracle.jdbc.driver.T4CTTIoauthenticate.processError(T4CTTIoauthenticate.java:455)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
    at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:387)
    at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:814)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:418)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:678)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:234)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:34)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:567)
    at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:410)
    at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:386)
    at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:353)
    at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:332)
    at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter$ConnectionProcessor.run(LoginTimeoutDatasourceAdapter.java:217)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

    Found the answer after 4 days of research.  Open ODI Studio and go to Topology; expand Technologies -> Oracle.  Open BIAPPS_BIACOMP.  On my system it had "NULLBIACM_IO"; I changed it to my correct prefix and it worked.

    Now my data load gives me an error that the table W_LANGUAGES_G does not exist.   At least I got further along.

  • Can't get Mahjong Solitaire to load on Mindjolt

    Can't get Mahjong Solitaire to load on Mindjolt - online reviews say people also have the problem on Microsoft games and Pogo.  Wondering if this is a Flash thing, since my Flash updated in Feb.  Everything else on the games site seems to work fine.  My Flash is 20.0.0.306.  Using Firefox 39.0 on a Sony Vaio VPCEC4SOE laptop, Windows 7.  Just trying to really get to the bottom of it.

    Hi, thank you very much - updating Flash to 21.0.0.213 fixed it for me on Firefox.

  • Cannot delete the source system and data load rules in ERPI 11.1.1.3

    Hello

    I am trying to remove a source system from the ERPI screen in Workspace (one which, incidentally, we use in our UAT application), but it throws an error message: "Could not delete the Source System. There are target Applications associated with this Source System. Please remove all associated target Applications."

    I tried to delete the data load rules, but they are all invalid and the delete icon (trash) is disabled.

    Can someone tell me how to clean up these invalid instances?

    Version: 11.1.1.3

    Thank you

    Jehanne

    Figured it out: remove the application under "Target Application Registration", and your metadata and data load rules are deleted with it.
