Loading multi-period data into HFM
Hi all
Does anyone know the best way to load a source Excel file containing several periods, including adjustment periods, in HFM 11.1.2.4? I can't load period adjustments into <Entity Curr Adjs> in the Value dimension when the Excel file contains several periods. I know one solution is to separate the periods and the adjustments into two separate files, but is there any way to do it in a single file?
Thank you!
Can you attach your template or a screenshot?
I don't think that is possible, but I want to check something.
See you soon
Tags: Business Intelligence
Similar Questions
-
ASO Essbase - loading data to overwrite the existing values
I'm working with ASO but don't know how to make "overwrite existing values" work correctly. I want it to work like a BSO overwrite. Need help please, I am a newbie to ASO :(
alter database ASOAPP.ASODB initialize load_buffer with override values resource_usage 0.15 buffer_id 1 create slice;
Thank you
That only initializes the buffer; it doesn't load anything (as I imagine you know, KKT). Just to offer the opposite view, I'd say MaxL may be harder but gives you a better idea of how loading to ASO works than using EAS does.
To the OP: you are mixing the syntax of two different commands. The "override values" and "create slice" clauses are part of the import statement, not the alter statement.
I suggest you read this technical reference article: Loading data using buffers
Using buffers slightly complicates reproducing the "overwrite" behavior. You can have multiple values loaded at the same intersection overwrite each other in the buffer, or when the buffer is committed to the cube (or both). BSO doesn't have this distinction. The "override values" clause controls what happens when the buffer is committed to the cube, but by default multiple values at the same intersection in the load buffer are still summed. You can control what happens to several values hitting the same intersection in the buffer with the aggregate_use_last / aggregate_sum buffer options in the alter statement.
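To make that concrete, here is a hedged MaxL sketch (the ASOAPP.ASODB names and the file name are placeholders from the thread, not a tested script). Note that aggregate_use_last belongs to the alter statement that initializes the buffer, while override values belongs to the import that commits it:

```
/* in the buffer: keep the last value when duplicates hit one intersection */
alter database ASOAPP.ASODB initialize load_buffer
  with buffer_id 1 resource_usage 0.15
  property aggregate_use_last;

/* stage the data into the buffer */
import database ASOAPP.ASODB data
  from data_file 'data1.txt'
  to load_buffer with buffer_id 1
  on error abort;

/* on commit: replace existing cells instead of summing into them */
import database ASOAPP.ASODB data
  from load_buffer with buffer_id 1
  override values;
```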
-
Changing the date format in a dashboard prompt drop-down list
Hi friends, my dashboard prompt displays a drop-down list of dates, which are in the format "2008-07-01 12:00:00 AM".
The column is defined with the datetime data type in SQL Server. Is it possible to change the date format in the dashboard prompt to just '01/07/2008' without making any change in the database?
Appreciate your help.
Thanks and regards,
Toony

Hello Toony,
In the dashboard prompt, you have the option to choose, for example, "Show SQL Results".
There you write logical SQL or advanced logical SQL: select cast("timestampCOLUMN" as date) from "PresentationLayerName"
This is a simple way to do it without changing anything in the RPD...
And if you want the timestamp column somewhere in some reports, you can use that column directly.
If you don't want the time portion, you must typecast it as DATE. If your question has been answered, then mark it as answered and mark the correct response... ;)
Published by: Kishore Guggilla on October 20, 2008 23:44
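As an aside, the display-side conversion can be illustrated outside OBIEE. This Python sketch (an illustration only; the column and layer names above are unchanged) parses the prompt's "2008-07-01 12:00:00 AM" string and renders it as "01/07/2008":

```python
from datetime import datetime

raw = "2008-07-01 12:00:00 AM"
# %I = 12-hour clock, %p = AM/PM marker, matching the prompt's display format
parsed = datetime.strptime(raw, "%Y-%m-%d %I:%M:%S %p")
formatted = parsed.strftime("%d/%m/%Y")  # day/month/year, as requested
print(formatted)  # 01/07/2008
```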
-
ERPi exchange rate load from EBS to ARM - "no periods identified" error
Hi all
I'm running a data load from EBS into ARM using ERPi 11.1.2.2. The data load completes and shows success, but in the process detail log I see this error:
No periods have been identified for the loading of the data in the table "AIF_EXCHANGE_RATES_STG".
Where do I set up the period mapping for EBS exchange rates?
-Mike
Hello
There are some support documents related to this issue.
I suggest that you take a look at them.
Regards
-
loading data by passing parameters to the ctl file
Hello
I have 3 files and 3 staging tables. Is it possible to load the data from the files into their respective staging tables using only a single control file, by passing parameters?
For details of the scenario: there are 3 files, namely A.dat, B.dat and c.dat, whose data must be loaded into the staging tables A_Stg, B_Stg and C_Stg. No doubt this can be done using separate ctl files. But the requirement is to do it using a single loader file.
Indications in this direction would be great.
~ Astr0
Published by: 976696 on December 13, 2012 17:15; redirected [url https://forums.oracle.com/forums/thread.jspa?messageID=10744651 & #10744651] here
We cannot do this in SQL*Loader, but we can do it ourselves with an external table for any number of files...
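To illustrate the external table suggestion, here is a hedged SQL sketch. The directory object, the columns, and the staging table names are invented for the example; the point is that a single external table definition can be repointed at each file in turn, so one definition replaces the three control files:

```
-- sketch only: data_dir, the columns, and the staging tables are assumptions
CREATE TABLE stg_ext (
  col1 VARCHAR2(20),
  col2 NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('A.dat')
);

INSERT INTO A_Stg SELECT * FROM stg_ext;

-- repoint the same definition at the next file; no new control file needed
ALTER TABLE stg_ext LOCATION ('B.dat');
INSERT INTO B_Stg SELECT * FROM stg_ext;
```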
-
Inserting page items when loading data
Hi all
I am using APEX 5.0 and its data load wizard to upload data.
My table has 7 fields, and I need to populate 3 of them myself while loading the data.
I have created a process after "Parse Uploaded Data" as below.
The result is that the 5th field gets the First Row column heading "Y", and there are no 6th and 7th columns; only the 5th field has a value.
When I click Next to go to the data validation page, it shows the 5th field with a column name and the 6th without one.
Please help me find where I must change my code.
BEGIN
  APEX_COLLECTION.ADD_MEMBER(
    p_collection_name => 'PARSE_COL_HEAD',
    p_c001 => 'HUBCODE',
    p_c002 => 'UPLOAD_FILENAME',
    p_c003 => 'UPLOAD_DATE');

  FOR UPLOAD_ROW IN (SELECT SEQ_ID
                       FROM APEX_COLLECTIONS
                      WHERE COLLECTION_NAME = 'SPREADSHEET_CONTENT')
  LOOP
    APEX_COLLECTION.UPDATE_MEMBER_ATTRIBUTE(
      p_collection_name => 'SPREADSHEET_CONTENT',
      p_seq => UPLOAD_ROW.SEQ_ID,
      p_attr_number => '5',
      p_attr_value => :P2_HUB_CODE);

    APEX_COLLECTION.UPDATE_MEMBER_ATTRIBUTE(
      p_collection_name => 'SPREADSHEET_CONTENT',
      p_seq => UPLOAD_ROW.SEQ_ID,
      p_attr_number => '6',
      p_attr_value => :P2_FILE_NAME);

    APEX_COLLECTION.UPDATE_MEMBER_ATTRIBUTE(
      p_collection_name => 'SPREADSHEET_CONTENT',
      p_seq => UPLOAD_ROW.SEQ_ID,
      p_attr_number => '7',
      p_attr_value => SYSDATE);
  END LOOP;
END;

I want to close this loop and hope it will be useful to others.
In the end, I managed to handle the upload using the latest excel2collections plugin for APEX 5.0.
As MK pointed out, I can control my data using the plugin.
Another advantage is that users can upload xls or xlsx files directly; they don't need to convert them to csv format before uploading.
I thank all of you for the help.
-
Loading data into a table from the operating system
Hi all;
My DB version is 10.2.0.5.0 on LINUX.
I have the .sql files under the path X11R6.
I am trying to load data by running the script load.sql, but I get the error message below.
>> CONTENT
$ cat vi.sample_tab.sql
create table tab1 (id number,
name varchar2(15),
qual varchar2(15),
city varchar2(15),
mobile number);
$ vi load.sql;
begin
  for i in 1..100000 loop
    insert into tabl values (i, '*', 'MS', '*', 1234554321);
    commit;
  end loop;
end;
/
SQL> @sample.sql
Table created.
SQL> @load.sql
insert into tabl values (i, '*', 'MS', '*', 1234554321);
*
ERROR at line 3:
ORA-06550: line 3, column 19:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 3, column 1:
PL/SQL: SQL statement ignored
SQL> select * from tab;

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
TAB1                           TABLE
SQL> desc tab1;
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NUMBER
NAME VARCHAR2 (15)
QUAL VARCHAR2 (15)
CITY VARCHAR2 (15)
MOBILE NUMBER
Thanks in advance.
Hello
GTS (DBA) wrote:
Hi all;
My DB version is 10.2.0.5.0
I have the .sql file in path X11R6.
I am trying to load data by running the script load.sql, but I get the error message below.
> CONTENT
$ cat vi.sample_tab.sql
create table tab1 (id number,
name varchar2(15),
qual varchar2(15),
city varchar2(15),
mobile number);
...
Well, that creates a table called TAB1. The last character of the table name is the numeral "1".
$ vi load.sql;
begin
  for i in 1..100000 loop
    insert into tabl values (i, '*', 'MS', '*', 1234554321);
    commit;
  end loop;
end;
/
SQL> @sample.sql
Table created.
SQL> @load.sql
insert into tabl values (i, '*', 'MS', '*', 1234554321);
*
ERROR at line 3:
ORA-06550: line 3, column 19:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 3, column 1:
PL/SQL: SQL statement ignored
This refers to a different table. The last character of this table's name is the letter "L".
SQL> desc tab1;
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NUMBER
NAME VARCHAR2 (15)
QUAL VARCHAR2 (15)
CITY VARCHAR2 (15)
MOBILE NUMBER
This is the table that you created (its name ends with the digit "1"), not the one used in the INSERT statement (its name ends with the letter "L").
-
sqlldr question: field in the data file exceeds the maximum length
Hello friends,
I am struggling with a simple data load using sqlldr and hoping someone can guide me.
FYI: I am using Oracle 11.2 on Linux 5.7.
===========================
Here is my table:
When I try to load a text file of data using sqlldr, I get the following errors on some records that do not load.

SQL> desc ntwkrep.CARD
Name                            Null?    Type
------------------------------- -------- ------------------
CIM_DESCRIPTION                          VARCHAR2(255)
CIM_NAME                        NOT NULL VARCHAR2(255)
COMPOSEDOF                               VARCHAR2(4000)
DESCRIPTION                              VARCHAR2(4000)
DISPLAYNAME                     NOT NULL VARCHAR2(255)
LOCATION                                 VARCHAR2(4000)
PARTOF                                   VARCHAR2(255)
REALIZES                                 VARCHAR2(4000)
SERIALNUMBER                             VARCHAR2(255)
SYSTEMNAME                      NOT NULL VARCHAR2(255)
TYPE                                     VARCHAR2(255)
STATUS                                   VARCHAR2(255)
LASTMODIFIED                             DATE
Example:
=======
Record 1: Rejected - Error on table NTWKREP.CARD, column REALIZES.
Field in data file exceeds maximum length
Looking at the actual data and counting the characters in the REALIZES column's data, I see that it is only a little more than 1000 characters.
While trying various ideas to solve the problem, I changed nls_length_semantics to "CHAR" and re-created the table, but this didn't help at all; I still got the same data load errors on the same records.
Then I changed nls_length_semantics back to BYTE and recreated the table again.
This time, I altered the table manually like this:
Once again, the data load failed with the same error on the same records.

SQL> ALTER TABLE ntwkrep.CARD MODIFY (REALIZES VARCHAR2(4000 char));

Table altered.

SQL> desc ntwkrep.card
Name                            Null?    Type
------------------------------- -------- --------------------
CIM_DESCRIPTION                          VARCHAR2(255)
CIM_NAME                        NOT NULL VARCHAR2(255)
COMPOSEDOF                               VARCHAR2(4000)
DESCRIPTION                              VARCHAR2(4000)
DISPLAYNAME                     NOT NULL VARCHAR2(255)
LOCATION                                 VARCHAR2(4000)
PARTOF                                   VARCHAR2(255)
REALIZES                                 VARCHAR2(4000 CHAR)
SERIALNUMBER                             VARCHAR2(255)
SYSTEMNAME                      NOT NULL VARCHAR2(255)
TYPE                                     VARCHAR2(255)
STATUS                                   VARCHAR2(255)
LASTMODIFIED                             DATE
So this time I thought I would try changing the column's data type to a CLOB (below), and again, the same records still fail to load.
Any ideas?

SQL> desc ntwkrep.CARD
Name                            Null?    Type
------------------------------- -------- -----------------------
CIM_DESCRIPTION                          VARCHAR2(255)
CIM_NAME                        NOT NULL VARCHAR2(255)
COMPOSEDOF                               VARCHAR2(4000)
DESCRIPTION                              VARCHAR2(4000)
DISPLAYNAME                     NOT NULL VARCHAR2(255)
LOCATION                                 VARCHAR2(4000)
PARTOF                                   VARCHAR2(255)
REALIZES                                 CLOB
SERIALNUMBER                             VARCHAR2(255)
SYSTEMNAME                      NOT NULL VARCHAR2(255)
TYPE                                     VARCHAR2(255)
STATUS                                   VARCHAR2(255)
LASTMODIFIED                             DATE
Here's a copy of the first record of data that fails to load every time, no matter how I change the REALIZES column in the table.

other(1)`CARD-mes-fhnb-bldg-137/1` `other(1)`CARD-mes-fhnb-bldg-137/1 [other(1)]`HwVersion:C0|SwVersion:12.2(40)SE|Serial#:FOC1302U2S6|` Chassis::CHASSIS-mes-fhnb-bldg-137, Switch::mes-fhnb-bldg-137 ` Port::PORT-mes-fhnb-bldg-137/1.23, Port::PORT-mes-fhnb-bldg-137/1.21, Port::PORT-mes-fhnb-bldg-137/1.5, Port::PORT-mes-fhnb-bldg-137/1.7, Port::PORT-mes-fhnb-bldg-137/1.14, Port::PORT-mes-fhnb-bldg-137/1.12, Port::PORT-mes-fhnb-bldg-137/1.6, Port::PORT-mes-fhnb-bldg-137/1.4, Port::PORT-mes-fhnb-bldg-137/1.20, Port::PORT-mes-fhnb-bldg-137/1.22, Port::PORT-mes-fhnb-bldg-137/1.15, Port::PORT-mes-fhnb-bldg-137/1.13, Port::PORT-mes-fhnb-bldg-137/1.18, Port::PORT-mes-fhnb-bldg-137/1.24, Port::PORT-mes-fhnb-bldg-137/1.26, Port::PORT-mes-fhnb-bldg-137/1.17, Port::PORT-mes-fhnb-bldg-137/1.11, Port::PORT-mes-fhnb-bldg-137/1.2, Port::PORT-mes-fhnb-bldg-137/1.8, Port::PORT-mes-fhnb-bldg-137/1.10, Port::PORT-mes-fhnb-bldg-137/1.16, Port::PORT-mes-fhnb-bldg-137/1.9, Port::PORT-mes-fhnb-bldg-137/1.3, Port::PORT-mes-fhnb-bldg-137/1.1, Port::PORT-mes-fhnb-bldg-137/1.19, Port::PORT-mes-fhnb-bldg-137/1.25 `Serial#:FOC1302U2S6`mes-fhnb-bldg-137`other(1)

Finally, for reference, here's the control file I use.
load data
infile '/opt/EMC/data/out/Card.txt'
badfile '/dbadmin/data_loads/logs/Card.bad'
append
into table ntwkrep.CARD
fields terminated by "`" trailing nullcols
( CIM_DESCRIPTION,
  CIM_NAME,
  COMPOSEDOF,
  DESCRIPTION,
  DISPLAYNAME,
  LOCATION,
  PARTOF,
  REALIZES,
  SERIALNUMBER,
  SYSTEMNAME,
  TYPE,
  STATUS,
  LASTMODIFIED "sysdate"
)
The default datatype in sqlldr is char(255).
Modify your control file as follows, which I think should work with REALIZES VARCHAR2(4000):
COMPOSEDOF char(4000),
DESCRIPTION char(4000),
LOCATION char(4000),
REALIZES char(4000),
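Putting the suggestion together with the original control file, the corrected version might look like the sketch below (same paths and columns as above; only the char(4000) overrides are new, and it has not been tested against the poster's data):

```
load data
infile '/opt/EMC/data/out/Card.txt'
badfile '/dbadmin/data_loads/logs/Card.bad'
append
into table ntwkrep.CARD
fields terminated by "`" trailing nullcols
( CIM_DESCRIPTION,
  CIM_NAME,
  COMPOSEDOF char(4000),
  DESCRIPTION char(4000),
  DISPLAYNAME,
  LOCATION char(4000),
  PARTOF,
  REALIZES char(4000),
  SERIALNUMBER,
  SYSTEMNAME,
  TYPE,
  STATUS,
  LASTMODIFIED "sysdate"
)
```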
-
my computer's date changes automatically
Hello. My computer's date automatically changes back to the manufacturing date, which is in 2007.
This occurs after every 2nd shutdown, when I start my PC. Please help... :(
A CMOS battery problem would not result in only an hour or so of difference; the whole date would have changed.
Differences of an hour or two are the result of time zone settings or a problem with the internet time synchronization settings.
If the time / date changes in the BIOS setup pages, there's a CMOS problem; if it changes from time to time after a restart, first try cleaning the CMOS battery and contacts (scrape them with a sharp knife).
-
Hi friends,
I'm trying to load records into the rules table from the product table with the following...
create table product (
prod_id varchar2(20),
prod_grp varchar2(20),
from_amt number(10),
to_amt number(10),
share_amt number(10)
);
Insert into product (prod_id, prod_grp, from_amt, share_amt) Values ('10037', 'STK', 1, 18);
Insert into product (prod_id, prod_grp, from_amt, share_amt) Values ('10037', 'NSTK', 1, 16.2);
Insert into product (prod_id, prod_grp, from_amt, to_amt, share_amt) Values ('10038', 'NSTK', 1, 5000, 12);
Insert into product (prod_id, prod_grp, from_amt, to_amt, share_amt) Values ('10038', 'STK', 5001, 10000, 16);
Insert into product (prod_id, prod_grp, from_amt, share_amt) Values ('10038', 'STK', 10001, 20);
Insert into product (prod_id, prod_grp, from_amt, to_amt, share_amt) Values ('10039', 'NSTK', 1, 8000, 10);
Insert into product (prod_id, prod_grp, from_amt, share_amt) Values ('10039', 'STK', 8001, 12);
create table rules (
rule_id varchar2 (30),
rule_grp varchar2 (10),
rate_1 number (10),
point_1 number (10),
rate_2 number (10),
point_2 number (10),
rate_3 number (10),
point_3 number (10)
);
Criteria for loading into the rules table:
rule_id  - 'RL' || product.prod_id
rule_grp - product.prod_grp
rate_1   - product.share_amt where from_amt = 1
point_1  - product.to_amt
rate_2   - if product.to_amt in point_1 is not NULL, then find product.share_amt of the next record with the same rule_id/prod_id where from_amt (of the next record) = to_amt (current record's point_1) + 1
point_2  - if product.to_amt in point_1 is not NULL, then find product.to_amt of the next record with the same rule_id/prod_id where from_amt (of the next record) = to_amt (current record's point_1) + 1
rate_3   - if product.to_amt in point_2 is not NULL, then find product.share_amt of the next record with the same rule_id/prod_id where from_amt (of the next record) = to_amt (current record's point_2) + 1
point_3  - if product.to_amt in point_2 is not NULL, then find product.to_amt of the next record with the same rule_id/prod_id where from_amt (of the next record) = to_amt (current record's point_2) + 1
I tried to load the first columns (rule_id, rule_grp, rate_1, point_1, rate_2, point_2) via SQL*Loader.
SQL> select * from product;

PROD_ID              PROD_GRP               FROM_AMT     TO_AMT  SHARE_AMT
-------------------- -------------------- ---------- ---------- ----------
10037                STK                           1                    18
10037                NSTK                          1                    16
10038                NSTK                          1       5000         12
10038                STK                        5001      10000         16
10038                STK                       10001                    20
10039                NSTK                          1       8000         10
10039                STK                        8001                    12
produit.dat
PROD_ID|PROD_GRP|FROM_AMT|TO_AMT|SHARE_AMT
'10037'|'STK'|1|18
'10037'|'NSTK'|1|16.2
'10038'|'NSTK'|1|5000|12
'10038'|'STK'|5001|10000|16
'10038'|'STK'|10001|20
'10039'|'NSTK'|1|8000|10
'10039'|'STK'|8001|12
Product.CTL
options (skip=1)
load data
into table rules
fields terminated by '|'
optionally enclosed by "'"
trailing nullcols
( rule_id POSITION(1) "'RL' || :rule_id"
, rule_grp
, from_amt BOUNDFILLER
, point_1
, share_amt BOUNDFILLER
, rate_1 "CASE WHEN :from_amt = 1 THEN :share_amt END"
, rate_2 EXPRESSION "(select pr.share_amt from product pr where :point_1 is not null and pr.prod_id = :rule_id and :point_1 = :from_amt + 1)"
, point_2 EXPRESSION "(select pr.to_amt from product pr where :point_1 is not null and pr.prod_id = :rule_id and :point_1 = :from_amt + 1)"
)
It doesn't load any values into rate_2 and point_2... no error either... Not sure if there is another method to do this...
Please give your suggestions... Thank you very much for your time.
Hello
Thanks for posting the CREATE TABLE and INSERT statements for the sample data; that's very useful!
Don't forget to post the exact results you want from that sample data, i.e. what you want the rules table to contain once the task is completed.
As was said earlier, there is no point in using SQL*Loader to copy data from one table to another in the same database. Use INSERT, or perhaps MERGE.
2817195 wrote:
Thank you for your answers... I thought it would be easier to manipulate the data using SQL*Loader... I tried to use INSERT but don't know how to populate the rate_2, point_2, rate_3, point_3 columns... For example, when point_1 is not null, I need to find the next record with the same rule_id, and if that record's pr.from_amt = point_1 + 1, then RATE_2 should be populated with that record's pr.share_amt...
SQL> insert into rules (
  2  rule_id,
  3  rule_grp,
  4  rate_1,
  5  point_1,
  6  rate_2,
  7  point_2,
  8  rate_3,
  9  point_3)
 10  select
 11  'RL' || pr.prod_id RULE_ID,
 12  pr.prod_grp RULE_GRP,
 13  CASE WHEN pr.from_amt = 1 THEN pr.share_amt END RATE_1,
 14  pr.to_amt POINT_1,
 15  (select pr.share_amt from product pr where point_1 is not null and rules.rule_id = pr.prod_id and point_1 = pr.from_amt + 1) RATE_2,
 16  (select pr.to_amt from product pr where point_1 is not null and rules.rule_id = pr.prod_id and point_1 = pr.from_amt + 1) POINT_2,
 17  (select pr.share_amt from product pr where point_2 is not null and rules.rule_id = pr.prod_id and point_2 = pr.from_amt + 1) RATE_3,
 18  (select pr.to_amt from product pr where point_2 is not null and rules.rule_id = pr.prod_id and point_2 = pr.from_amt + 1) POINT_3
 19  from product pr;
(select pr.share_amt from product pr where point_1 is not null and point_1 = pr.from_amt + 1) RATE_2,
*
ERROR at line 15:
ORA-00904: "POINT_1": invalid identifier
Help, please... Thank you very much
This is what causes the error:
The subquery on line 15 references only 1 table in its FROM clause, and that table is product. There is no point_1 column in product.
A scalar subquery like this can be correlated to a table in the outer query, but the only table in the outer FROM clause (line 19) is also product. Since the only table you are reading is product, the only columns you can reference are the columns of the product table.
You use the same table alias (pr) to mean 5 different things. That's very confusing. Use a unique alias for each table in any SQL statement. (Whatever you're trying to do, I bet you can do it without all these subqueries, in any case.)
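As a sketch of the "without all these subqueries" remark: the next record's share_amt and to_amt for the same prod_id can be fetched with the analytic LEAD function instead of correlated scalar subqueries. The snippet below uses SQLite from Python purely as a stand-in (Oracle supports the same LEAD(...) OVER (...) syntax) with a subset of the sample data above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (
        prod_id TEXT, prod_grp TEXT,
        from_amt INTEGER, to_amt INTEGER, share_amt INTEGER
    );
    INSERT INTO product VALUES
        ('10038', 'NSTK', 1, 5000, 12),
        ('10038', 'STK', 5001, 10000, 16),
        ('10038', 'STK', 10001, NULL, 20);
""")

# LEAD() reads share_amt / to_amt from the next tier of the same prod_id,
# ordered by from_amt: no correlated subquery, no reused alias.
rows = conn.execute("""
    SELECT 'RL' || prod_id AS rule_id,
           prod_grp        AS rule_grp,
           CASE WHEN from_amt = 1 THEN share_amt END AS rate_1,
           to_amt          AS point_1,
           LEAD(share_amt) OVER (PARTITION BY prod_id ORDER BY from_amt) AS rate_2,
           LEAD(to_amt)    OVER (PARTITION BY prod_id ORDER BY from_amt) AS point_2
    FROM product
    ORDER BY from_amt
""").fetchall()

for r in rows:
    print(r)
```

The first printed row is ('RL10038', 'NSTK', 12, 5000, 16, 10000): rate_2/point_2 come from the next tier without any self-join aliasing issues.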
-
SQL Loader loading data into two Tables using a single CSV file
Dear all,
I have a requirement where I need to load data into 2 tables using a single csv file.
So I wrote the following control file. But it loads only the first table, and there is also nothing in the debug log file.
Please suggest how to achieve this.
Examples of data
Source_system_code,Record_type,Source_System_Vendor_number,$vendor_name,Vendor_site_code,Address_line1,Address_line2,Address_line3
Victor, New, Ven001, Vinay, Vin001, abc, def, xyz
Control file script
================
OPTIONS (errors = 0, skip = 1)
load data
replace
into table table1
fields terminated by ',' optionally enclosed by '"'
(
Source_system_code POSITION(1) char "ltrim(rtrim(:Source_system_code))",
Record_type char "ltrim(rtrim(:Record_type))",
Source_System_Vendor_number char "ltrim(rtrim(:Source_System_Vendor_number))",
$vendor_name char "ltrim(rtrim(:Vendor_name))"
)
into table Table2
when 1 = 1
fields terminated by ',' optionally enclosed by '"'
(
$vendor_name char "ltrim(rtrim(:Vendor_name))",
Vendor_site_code char "ltrim(rtrim(:Vendor_site_code))",
Address_line1 char "ltrim(rtrim(:Address_line1))",
Address_line2 char "ltrim(rtrim(:Address_line2))",
Address_line3 char "ltrim(rtrim(:Address_line3))"
)
The problem here is that it loads into only one table, the first (Table1).
Please guide me.
Thank you
Kumar
When you do not provide a starting position for the first field in table2, it starts with the field after the last one referenced in table1, so it starts with vendor_site_code instead of $vendor_name. What you need to do instead is specify POSITION(1) for the first field in table2 and use FILLER fields. Also, it doesn't like WHEN 1 = 1, and you don't need it anyway. See the example below, including the corrected control file.
Scott@orcl12c> HOST TYPE test.dat
Source_system_code, Record_type, Source_System_Vendor_number, $vendor_name, Vendor_site_code, Address_line1, Address_line2, Address_line3
Victor, New, Ven001, Vinay, Vin001, abc, def, xyz
Scott@orcl12c> HOST TYPE test.ctl
OPTIONS (errors = 0, skip = 1)
load data
replace
into table table1
fields terminated by ',' optionally enclosed by '"'
(
Source_system_code POSITION(1) char "ltrim(rtrim(:Source_system_code))",
Record_type char "ltrim(rtrim(:Record_type))",
Source_System_Vendor_number char "ltrim(rtrim(:Source_System_Vendor_number))",
$vendor_name char "ltrim(rtrim(:Vendor_name))"
)
into table Table2
fields terminated by ',' optionally enclosed by '"'
(
source_system_code FILLER POSITION(1),
record_type FILLER,
source_system_vendor_number FILLER,
$vendor_name char "ltrim(rtrim(:Vendor_name))",
Vendor_site_code char "ltrim(rtrim(:Vendor_site_code))",
Address_line1 char "ltrim(rtrim(:Address_line1))",
Address_line2 char "ltrim(rtrim(:Address_line2))",
Address_line3 char "ltrim(rtrim(:Address_line3))"
)
Scott@orcl12c> CREATE TABLE table1
  2  (Source_system_code VARCHAR2(13),
  3  Record_type VARCHAR2(11),
  4  Source_System_Vendor_number VARCHAR2(27),
  5  $vendor_name VARCHAR2(11))
  6  /
Table created.
Scott@orcl12c> CREATE TABLE table2
  2  ($vendor_name VARCHAR2(11),
  3  Vendor_site_code VARCHAR2(16),
  4  Address_line1 VARCHAR2(13),
  5  Address_line2 VARCHAR2(13),
  6  Address_line3 VARCHAR2(13))
  7  /
Table created.
Scott@orcl12c> HOST SQLLDR scott/tiger CONTROL=test.ctl DATA=test.dat LOG=test.log
SQL*Loader: Release 12.1.0.1.0 - Production on Thu Mar 26 01:43:30 2015
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 1
Table TABLE1:
  1 Row successfully loaded.
Table TABLE2:
  1 Row successfully loaded.
Check the log file:
  test.log
for more information about the load.
Scott@orcl12c> SELECT * FROM table1
  2  /

SOURCE_SYSTEM RECORD_TYPE SOURCE_SYSTEM_VENDOR_NUMBER $VENDOR_NAME
------------- ----------- --------------------------- ------------
Victor        New         Ven001                      Vinay

1 row selected.
Scott@orcl12c> SELECT * FROM table2
  2  /

$VENDOR_NAME VENDOR_SITE_CODE ADDRESS_LINE1 ADDRESS_LINE2 ADDRESS_LINE3
------------ ---------------- ------------- ------------- -------------
Vinay        Vin001           abc           def           xyz

1 row selected.

Scott@orcl12c>
-
Can I load data into an imported Essbase cube from OBIEE?
Hi all
I use OBIEE v11.1.1.7. I imported an Essbase cube into the RPD, and I load data into Essbase from an Excel sheet using Smart View.
My question is: after importing the cube into OBIEE, do I still need to load new data into the Essbase cube separately, or is it possible to load data from Excel directly through OBIEE into the Essbase cube?
Thank you
GP
Hi GP.
Essbase substitution variables can be imported and can indeed be referenced in OBIEE. So I think you have a solution here :-).
Thank you
GEO
-
How to remove future-dated changes in Person Info via API
Hello
We use EBS 11i. I want to know if there is any API call that can remove specific date-tracked changes in the person information for any worker.
For example: a record was created on January 1, 2012;
a change to the person information was made on November 1, 2013 through the API;
now, for various reasons, we want to remove the changes we made on November 1, 2013.
Kind regards
Ali
Hi Ali,
You can use the same API, say HR_PERSON_API.update_person, and pass the datetrack mode as UPDATE_OVERRIDE.
See "Updating and Correcting Datetracked Information" (Oracle HRMS help).
In your case, you can set the effective date to the start_date of the first record and use UPDATE_OVERRIDE.
Cheers,
Vignesh
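As a rough, untested sketch only: the person_id, the dates, and the variable initializations below are assumptions, and HR_PERSON_API.update_person takes many more optional parameters that vary by release, so check the package signature in your own instance before using anything like this.

```
DECLARE
  l_ovn          NUMBER := 2;   -- current object_version_number of the row (assumed)
  l_emp_num      VARCHAR2(30);  -- should hold the worker's current employee number
  l_start_date   DATE;
  l_end_date     DATE;
  l_full_name    VARCHAR2(240);
  l_comment_id   NUMBER;
  l_name_warn    BOOLEAN;
  l_payroll_warn BOOLEAN;
  l_orig_warn    BOOLEAN;
BEGIN
  hr_person_api.update_person(
    p_effective_date           => DATE '2013-11-01',  -- effective date of the fix
    p_datetrack_update_mode    => 'UPDATE_OVERRIDE',  -- replace future-dated rows
    p_person_id                => 12345,              -- assumed person_id
    p_object_version_number    => l_ovn,
    p_employee_number          => l_emp_num,
    p_effective_start_date     => l_start_date,
    p_effective_end_date       => l_end_date,
    p_full_name                => l_full_name,
    p_comment_id               => l_comment_id,
    p_name_combination_warning => l_name_warn,
    p_assign_payroll_warning   => l_payroll_warn,
    p_orig_hire_warning        => l_orig_warn);
  COMMIT;
END;
/
```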
-
loading data to Essbase using EAS versus back-end scripts
Good afternoon
We have noticed recently that loading our ASO cube with back-end scripts (esscmd etc.) seems to carry much more overhead than using EAS and loading the data files individually. When loading using scripts, the total size of the cube was 1.2 GB. When loading the files individually in EAS, the size of the cube was 800 MB. Why the difference? Is there anything we can do in the scripts to reduce this overhead?
Thank you
Are you really using EssCmd to load ASO cubes? You should use MaxL with load buffers. By default, EAS uses a load buffer when you load multiple files; EssCmd (and MaxL without the buffer commands) won't. That means longer loads and larger files. An ASO load takes the existing .dat file and adds the new data. When you are not using a load buffer, it takes the .dat file and the .tmp file and merges them together; then when you do the second file, it takes the .dat file (which now includes the first load's data) and repeats the process. Every time it does that, it has to rewrite the whole .dat file (twice), and the .dat file grows. For lack of a better word I'll call it fragmentation, but I don't think it's quite that simple; I think it's just the way the data is stored. When you use a buffer and no slices, it only needs to do this once.
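For reference, the buffered pattern described above, staging every file into one buffer and committing once so the .dat file is rewritten a single time, might look like this in MaxL (application, database, and file names are placeholders):

```
alter database ASOApp.ASODb initialize load_buffer with buffer_id 1;

/* stage each file; nothing is merged into the .dat file yet */
import database ASOApp.ASODb data from data_file 'file1.txt'
  to load_buffer with buffer_id 1 on error abort;
import database ASOApp.ASODb data from data_file 'file2.txt'
  to load_buffer with buffer_id 1 on error abort;

/* single commit: one merge into the .dat file instead of one per file */
import database ASOApp.ASODb data from load_buffer with buffer_id 1;
```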
-
Cannot open HFM (grayed-out menu items) in Workspace (WebLogic\IIS)
Hello
Here is the configuration:
HFM on IIS on Server 1, port 80.
Planning on WebLogic\IIS on Server 2.
Workspace on WebLogic\IIS on Server 3, port 80.
With this combination, I can see the Planning menu items in Workspace and can create an application. But the HFM menu items such as "Create app" are grayed out, and clicking one raises an error that the module is not configured.
If I rerun the configuration utility on the Workspace server (Server 3) and deploy Workspace on WebLogic\Apache (changed from IIS to Apache), both Planning and HFM become available to me and I can create HFM applications.
My client wants us to use IIS, and we have also heard that support for Apache will be removed over time.
Any ideas on how I can use IIS on the Workspace server and get HFM to work with it?
Another question: can I change the default HFM port from 80 to something else while running the configuration tool for HFM?
Thank you.
Published by: user1658817 on June 10, 2009 17:24
Published by: user1658817 on June 10, 2009 17:25

I'll recycle my answer on this topic from a previous post.
It is useful to know which versions of the software you are working with; I'll assume you're on the latest version.
If you use a separate web server and application server and you are not using Apache as the web server, you need to manually deploy Financial Management. Please read page 52 and following of the manual deployment guide: http://download.oracle.com/docs/cd/E12825_01/epm.111/epm_manual_deployment.pdf
Alternatively, you can deploy the HFM web components directly to the Workspace server, which can be a little easier to install.
Once you have chosen a direction and made the changes to HFM, you will then need to redeploy the Workspace app server and run the web configuration again.
Kind regards
-John