Load rule with data in two columns

I have a SQL interface as the data source for my load rule, but I have two columns of data. Is it possible to load both of them with the same load rule?

Edited by: user5170363 on June 4, 2010 18:51

Edited by: user5170363 on June 4, 2010 19:04

This is in addition to what Glenn said.
Generally, we see two types of data files. Suppose we have five dimensions.

(1) The first five columns of the data file contain member names and the last column contains the data.
The mapping is like this:
for each column that contains member names, set the field property to the corresponding dimension name;
for the column that contains numbers, set the field property to the data field.

(2) The first four columns contain member names and you have one or more columns of data.
The mapping is like this:
for each column that contains member names, set the field property to the corresponding dimension name;
for the columns that contain numbers, set the field property to the level-0 member names of the remaining dimension.

Note: To supply the missing dimension information, you can use the header definition in the rules file.
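
For illustration, a SQL interface source query for layout (1) might look like the sketch below; the table and column names are hypothetical and only show the shape of the feed (member-name fields first, a single data field last):

-- Hypothetical source query for layout (1): five member-name columns
-- followed by one data column. In the rules file, map each of the
-- first five fields to its dimension and set the last field's
-- property to the data field.
SELECT account, period, entity, scenario, version, amount
FROM   stg_essbase_facts;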

Tags: Business Intelligence

Similar Questions

  • AWM dimension mapping - is it possible to load dimension data from two input tables?

    Hi all

    I have two tables: ProductFamily (parent level) and Products (child level).

    I want to load a dimension from those tables so that the parent-child relationships are maintained (I use AWM).

    I created a mapping with these two input tables, but the loaded data has no relationships.
    So how do I do that? Is it possible to load dimensions where different levels get data from multiple tables?
    Is any kind of joiner available in AWM?

    Thank you

    ------------------------

    A few Notes:

    - I don't want to use OWB here, as my data is clean.
    - In AWM, when I loaded the data from a single view that combines the two input tables, it worked fine. But that is my worst-case option.

    You must use the snowflake style dimension option in the dimension mapping screen for the Product dimension (as opposed to the default mapping style - star schema dimension).

    This will modify the mapping entries to include a separate parent for each hierarchy/level; that is, for each hierarchy/level (other than the topmost level of the hierarchy), you must specify the parent level's key in addition to the current level's key, code/name/description/other attributes etc.

    You can do the mapping either way... using the icons at the top of the mapping screen:
    drag/drop mode, by dragging the relational column onto the dimension level/hierarchy/attribute model
    -or-
    the table expression mapping mode, which gives the same effect... by typing the column against an attribute in the <owner>.<table>.<column> format.

    HTH
    Shankar

    Note 1: Complete the mapping in one go... switching between mapping modes causes the mappings to reset.
    Note 2: This assumes your data is correct with respect to foreign keys - i.e., every parent-level key from the table ProductFamily that is referenced in the child-level table Products actually exists.
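
    Since Note 2 hinges on clean parent keys, a quick SQL sanity check may help before loading; the key column names here are assumptions, so substitute your own:

    -- List child rows whose parent key is missing from ProductFamily;
    -- family_id is a hypothetical join column.
    SELECT p.product_id, p.family_id
    FROM   products p
    LEFT JOIN productfamily f ON f.family_id = p.family_id
    WHERE  f.family_id IS NULL;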

  • Get data from two columns in one column

    Hello

    I need to display the data from two columns in a single column, and I would also like to insert a space between the two columns' data in the output.

    Can I use something like this?

    (IMPORTER_NAME + ' ' + IMPORTER_ADDRESS) as 'place'

    Thanks.

    To concatenate in Oracle, use pipes:

    (IMPORTER_NAME||' '||IMPORTER_ADDRESS) as "Location"
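
    For instance, assuming a hypothetical IMPORTERS table holding those two columns:

    -- '||' is Oracle's string concatenation operator; '+' only performs
    -- numeric addition, which is why the first attempt fails.
    SELECT importer_name || ' ' || importer_address AS "Location"
    FROM   importers;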
    
  • Kindly help me with a query to find the data in two tables

    Hello Guru

    Kindly help me to retrieve the data from two tables -

    BASEBALL
    LEGAL_ENT_ID (PK)
    GAME_ID (FK)
    LEGAL_ENT_NM
    INACTIVE_DT
    INS_TS DATE
    INS_LOGIN
    UPD_TS DATE
    UPD_LOGIN


    FOOTBALL
    GAME_ID (PK)
    BRKR_NM,
    BRKR_ISR_ID
    BROKER_SYMBOL
    INACTIVE_DT
    BRKR_SWIFT_FLG
    BRKR_INTERNAL_FLG
    BRKR_CATEGORY
    UPD_TS
    MINORITY_FLG
    BROKER_TYP
    STATUS
    INS_TS
    INS_LOGIN
    UPD_LOGIN
    APP_USER
    ACTIVE_FLG

    If I want to fetch data from these two tables according to the following conditions, it works fine with the following queries.

    1. Select distinct values from the BASEBALL table only, by using the following query:

    SELECT DISTINCT B.GAME_ID AS "CLEARING GAME ID", B.BRKR_NM "CLEARING GAME NAME" FROM BASEBALL A, FOOTBALL B WHERE A.BROKER_RELATION_CD IN ('FUTBRKR1', 'FUTBRKR2') AND A.GAME_ID = B.GAME_ID

    2. Select all BRKR_NM from the FOOTBALL table as well, by using the following query:

    SELECT GAME_ID "EXECUTING GAME ID", BRKR_NM "EXECUTING GAME NAME" FROM FOOTBALL

    Now, my question is this -

    I want a query that gives me a combination of the above queries... and when I tried UNION or UNION ALL, it did not give me the expected result.

    I would like the result to satisfy a few conditions, such as -
    1 - records from the FOOTBALL table are listed against the BASEBALL table, because there is no condition to filter the FOOTBALL records.
    2 - FOOTBALL is a superset of the records and BASEBALL is a subset.
    3 - CLEARING GAME NAME and EXECUTING GAME NAME may return the same values as well.

    I want the result to be in the following form -

    EXECUTING GAME ID | EXECUTING GAME NAME | CLEARING GAME ID | CLEARING GAME NAME
    2123 test1 2345 test5
    2456 test10 2456 test10


    Thanks in advance. Kindly help me.

    Edited by: user555994 on January 4, 2011 23:48

    In the output you want:
    all the values from BASEBALL;
    the values from FOOTBALL that are matched.
    But on what condition do you want to match them?
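
    If the intended match is on GAME_ID, an outer join along these lines may be what you are after; treat it as a sketch, since the join key and the choice of name columns are assumptions:

    -- Keep every FOOTBALL row; attach the clearing (BASEBALL) columns
    -- only where a matching GAME_ID exists, NULL otherwise.
    SELECT f.game_id      AS "EXECUTING GAME ID",
           f.brkr_nm      AS "EXECUTING GAME NAME",
           b.game_id      AS "CLEARING GAME ID",
           b.legal_ent_nm AS "CLEARING GAME NAME"
    FROM   football f
    LEFT JOIN baseball b
           ON b.game_id = f.game_id;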

  • Create a script to import, deploy and load data with version 9.3.1

    Hello

    I would like to know if it is possible to run a script that will:

    - import dimension profiles (via EPMA)
    - deploy an application (via EPMA)
    - run a load rule to load the data (via Essbase)
    - run a calculation script (via Essbase)

    We use version 9.3.1

    Thanks in advance.

    Roby

    Hello

    I don't think there is any kind of batch command functionality for EPMA until version 11.
    It is possible, though: you could try creating a scheduled taskflow in EPMA to import a dimension and then redeploy, and finally you could run a batch script containing a MaxL script to load and calculate the Essbase data.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Automation of data loads from a single FDM application to different target applications

    Friends, my query is somewhat complex. I have 6 applications (two in Hyperion Planning, two in HPCM and two in Hyperion Essbase). I copied the adapters in the Workbench and renamed them accordingly to load the data for each of them. Now the problem is that I want to automate the data load for each of these applications, but I don't know how it's done. I've been through many forums to get a better understanding, but no luck!

    A humble request to all the FDQM experts for their valuable advice on how to achieve this automation from a single FDM application.

    Thanks in advance!

    You would automate this process via the Batch Loader integrated with FDM. The process is exactly the same whether you have one or many target applications. The ultimate target application is determined by the location name embedded in the batch file naming convention. Each of your different target adapters will be associated with one or more locations in your FDM location metadata configuration.

  • How do I load calculated data into HFM?

    Hi gurus

    1. How do I load calculated data into HFM?

    I extracted the calculated data, and when I tried to load it back into HFM it only partially loaded, showing errors that you cannot load data for parent members of the Account and Custom1-Custom4 dimensions...
    Then I ran the consolidation to get the parent values.
    Is there an alternative way to load calculated data into HFM?

    Regards
    Hubin

    Hi Hubin,

    Calculated data cannot be loaded into HFM manually; accounts flagged as calculated should get their data through the calculation logic.

    Parent members also don't take data; they are the sum of their base-level accounts.

    So just load the data for the base-level accounts, make sure it loads without errors, and the calculated accounts' data is generated automatically through the logic written in the rules file.

    After loading the data, just run the consolidation; the rules file must also be loaded for the calculation logic to do its work.

    Kind regards
    Srikanth

  • I want to loop through the data from two different tables using a for loop, where the query is chosen at runtime; please help me

    I have data in two tables with a similar column structure, and I want to loop through the data in these two tables
    based on some condition decided at runtime, choosing which query the loop uses. An example is given below; please help me.

    create table ab (a number, b varchar2(20));

    insert into ab
    select rownum, rownum || ' sample'
    from dual
    connect by level <= 10;

    create table bc (a number, b varchar2(20));

    insert into bc
    select rownum + 1, rownum + 1 || ' sample'
    from dual
    connect by level <= 10;

    declare
      l_statement varchar2(2000);
      bool boolean;
    begin
      bool := true;
      if bool = true then
        l_statement := 'select * from ab';
      else
        l_statement := 'select * from bc';
      end if;
      -- for i in execute immediate l_statement ... something like that, but I don't know
      loop
        dbms_output.put_line(i.a);
      end loop;
    end;

    Something like that; but this isn't a working piece of code.

    Try this and adapt according to your needs:

    declare
      l_statement varchar2(2000);
      c sys_refcursor;
      l_a number;
      l_b varchar2(20);
      bool boolean;
    begin
      bool := true;
      if bool = true then
        l_statement := 'select a, b from ab';
      else
        l_statement := 'select a, b from bc';
      end if;
      --
      open c for l_statement;  -- open the cursor on whichever query was chosen
      --
      loop
        fetch c into l_a, l_b;
        exit when c%notfound;
        dbms_output.put_line(l_a || ' - ' || l_b);
      end loop;
      close c;
    end;
    /

  • Load the data from a text file into a table using pl/sql

    Hi Experts,

    I want to load the data from a text file (sample1.txt) into a table using PL/SQL.

    I used the pl/sql code below

    ***********************************
    declare
    f utl_file.file_type;
    s varchar2(200);
    c number := 0;
    begin
    f := utl_file.fopen('TRY','sample1.txt','R');
    loop
    utl_file.get_line(f,s);
    insert into sampletable (a,b,c) values (s,s,s);
    c := c + 1;
    end loop;
    exception
    when NO_DATA_FOUND then
    utl_file.fclose(f);
    dbms_output.put_line('No. of rows inserted: ' || c);
    end;

    ***************************************

    and my sample1.txt file looks like

    ***************************************
    1
    2
    3
    ***************************************

    The data gets inserted in the way shown below:

    Select * from sampletable;

    A  B  C

    1  1  1
    2  2  2
    3  3  3

    I want the data to get inserted as:

    A  B  C

    1  2  3

    The text file I have has three lines, and the first value of each line should go into each column.

    Help, please...

    Thank you
    declare
    f utl_file.file_type;
    s1 varchar2(200);
    s2 varchar2(200);
    s3 varchar2(200);
    c number := 0;
    begin
    f := utl_file.fopen('TRY','sample1.txt','R');
    utl_file.get_line(f,s1);
    utl_file.get_line(f,s2);
    utl_file.get_line(f,s3);
    insert into sampletable (a,b,c) values (s1,s2,s3);
    c := c + 1;
    utl_file.fclose(f);
    exception
    when NO_DATA_FOUND then
    if utl_file.is_open(f) then utl_file.fclose(f); end if;
    dbms_output.put_line('No. of rows inserted : ' || c);
    end;
    

    SY.

  • Load the data from txt

    Hello

    I just exported the data in my cube; it generated two text files because the export is more than 2 GB in size: one is xxxxx.txt, the other is xxxxx_1.txt.

    My question is: if I load the above data files back into the same cube, should I just specify the first file xxxxx.txt? It seems that I can't specify both text files?

    Thank you

    You only need to load the second file
    or
    clear the data and load both.

    Your choice; you will get the same results.

  • Loading the data into Planning

    I'm a little confused about how loading data into a Planning application works. I created a new Planning application with the Application Wizard, created the database, etc. Then, following the documentation (the Planning Administrator's Guide), I loaded the metadata using the Outline Load utility (OutlineLoad). Now I want to load the data. Following the steps from the documentation, I added the DIRECT_DATA_LOAD=False parameter (it was not there before) and the DATA_LOAD_FILE_PATH parameter pointing to a directory on the server (I also tried creating an empty file and pointing DATA_LOAD_FILE_PATH at that file). After saving the configuration and restarting Planning, I used Administration > Data Load Settings to choose the data load dimension and the driver dimension, saved it, and then... no new file was created in the DATA_LOAD_FILE_PATH location. What am I doing wrong?

    EPM 11.1.1.1.0 on Windows 2003 Server EE

    Hello

    Let me give you an example of loading data into the sample Planning application.

    Dimensions:
    Account, Currency, Entity, Period, Scenario, Version, Year, Segments

    Data load dimension = Account

    Driver dimension = Scenario

    Selected member = Actual

    The format of the csv file is:

    Account, Point of View, Data Load Cube Name, Actual
    330000, "USD, Jan, E05, NoSegment, Working, FY08", Consol, 1000

    I hope that gives you an idea?

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • MaxL Script to clear the data between two dates

    Hi all

    I need advice on clearing the data between two dates. I have three dimensions in my outline: 'eno', 'hiredate' and 'actualamount'.

    Now I need to clear the data within a date range. Up to now, I have this script:

    Fix ("HireDate", "Eno")

    Difficulty (@relative ("00:00:00",0),@relative("eno",0)) 2015-07-15 "))

    CLEARDATA "sal."

    endfix

    ENDFIX

    This script only clears data on a specific day. I tried to write the script to clear between two dates and surfed a few sites, but found no clear answers, so I finally came here; kindly help in this regard.

    Thanks in advance.

    Have you tried the range format "startdate":"enddate", for example "August":"September"?

    Cheers

    John

  • Generic procedure to load the data from source tables to target tables

    Hi all

    I want to create a generic procedure to load data from X number of source tables into X number of target tables.

    such as:

    Source1 -> Target1

    Source2 -> Target2

    Source3 -> Target3

    Each target table has the same structure as its source table.

    The indexes are the same as well. Constraints are not predefined on the source or target tables. There is no business logic involved in loading the data.

    It is a simple append.

    This procedure will be scheduled during off hours, and probably only once a month.

    I created a procedure that does this, like:

    (1) take the source and target table names as input to the procedure.

    (2) find the indexes on the target table.

    (3) get the metadata of the target table's indexes and store it.

    (4) drop the above indexes.

    (5) load the data from the source into the target (append).

    (6) re-create the indexes on the target table by using the collected metadata.

    (7) delete the records in the source table.

    sample proc (error logging is missing):

    CREATE OR REPLACE PROCEDURE pp_load_source_target (p_source_table IN VARCHAR2,
                                                       p_target_table IN VARCHAR2)
    IS
      TYPE v_varchar_tbl IS TABLE OF VARCHAR2(32);
      l_varchar_tbl v_varchar_tbl;

      TYPE v_clob_tbl_ind IS TABLE OF VARCHAR2(32767) INDEX BY PLS_INTEGER;
      l_clob_tbl_ind v_clob_tbl_ind;

      g_owner  CONSTANT VARCHAR2(10) := 'STG';
      g_object CONSTANT VARCHAR2(6)  := 'INDEX';
    BEGIN
      -- collect the names of the target table's indexes
      SELECT DISTINCT index_name
      BULK COLLECT INTO l_varchar_tbl
      FROM all_indexes
      WHERE table_name = p_target_table
      AND owner = g_owner;

      -- save each index's DDL so it can be re-created later
      FOR k IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
        SELECT DBMS_METADATA.GET_DDL(g_object,
                                     l_varchar_tbl(k),
                                     g_owner)
        INTO l_clob_tbl_ind(k)
        FROM dual;
      END LOOP;

      FOR i IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
        EXECUTE IMMEDIATE 'DROP INDEX ' || l_varchar_tbl(i);
        DBMS_OUTPUT.PUT_LINE('INDEX DROPPED: ' || l_varchar_tbl(i));
      END LOOP;

      EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || p_target_table ||
                        ' SELECT * FROM ' || p_source_table;
      COMMIT;

      -- re-create the saved indexes
      FOR s IN l_clob_tbl_ind.FIRST .. l_clob_tbl_ind.LAST LOOP
        EXECUTE IMMEDIATE l_clob_tbl_ind(s);
      END LOOP;

      EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_source_table;
    END pp_load_source_target;

    I want to know:

    1. Has anyone built a similar solution? If yes, what kind of challenges did you face?

    2. Is this a good approach?

    3. How can I minimize failures of the data load? (See the sketch after this list.)
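
    Regarding point 3, one fragile spot in the sample proc is worth flagging: if the target table has no indexes, the bulk collect leaves the collection empty, l_varchar_tbl.FIRST and .LAST are NULL, and the FIRST .. LAST loops raise ORA-06502. A minimal sketch of a COUNT guard (names are illustrative only):

    declare
      type t_names is table of varchar2(32);
      l_names t_names := t_names();  -- empty collection: FIRST/LAST are NULL
    begin
      -- looping FIRST .. LAST over an empty collection raises an error,
      -- so test COUNT first
      if l_names.count > 0 then
        for i in l_names.first .. l_names.last loop
          dbms_output.put_line(l_names(i));
        end loop;
      else
        dbms_output.put_line('no indexes found - nothing to drop');
      end if;
    end;
    /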

    Why not just:

    create table registration as
    select 'SOURCE1' source, 'TARGET1' target, 'Y' flag from dual union all
    select 'SOURCE2', 'TARGET2', 'Y' from dual union all
    select 'SOURCE3', 'TARGET3', 'Y' from dual union all
    select 'SOURCE4', 'TARGET4', 'Y' from dual union all
    select 'SOURCE5', 'TARGET5', 'Y' from dual;

    SOURCE   TARGET   FLAG
    SOURCE1  TARGET1  Y
    SOURCE2  TARGET2  Y
    SOURCE3  TARGET3  Y
    SOURCE4  TARGET4  Y
    SOURCE5  TARGET5  Y

    declare
      the_command varchar2(1000);
    begin
      for r in (select source, target from registration where flag = 'Y')
      loop
        the_command := 'insert /*+ append */ into ' || r.target || ' select * from ' || r.source;
        dbms_output.put_line(the_command);
        -- execute immediate the_command;
        the_command := 'truncate table ' || r.source || ' drop storage';
        dbms_output.put_line(the_command);
        -- execute immediate the_command;
        dbms_output.put_line(r.source || ' table processed');
      end loop;
    end;

    insert /*+ append */ into TARGET1 select * from SOURCE1
    truncate table SOURCE1 drop storage
    SOURCE1 table processed

    insert /*+ append */ into TARGET2 select * from SOURCE2
    truncate table SOURCE2 drop storage
    SOURCE2 table processed

    insert /*+ append */ into TARGET3 select * from SOURCE3
    truncate table SOURCE3 drop storage
    SOURCE3 table processed

    insert /*+ append */ into TARGET4 select * from SOURCE4
    truncate table SOURCE4 drop storage
    SOURCE4 table processed

    insert /*+ append */ into TARGET5 select * from SOURCE5
    truncate table SOURCE5 drop storage
    SOURCE5 table processed

    Regards

    Etbin

  • Problem loading the data into a table

    Hi friends,

    I'm using ODI 11g.
    I'm doing a flat file to table mapping. I have 10 records in the flat file, but when loading the data into an Oracle table, I can see that only 1 record is loaded.
    I am using IKM SQL Control Append with the DISTINCT option.

    Can you please let me know where exactly the problem is?

    Thank you
    Lony

    Hi Lony,

    Please let us know the other KMs used in your ODI interface.
    Please check whether the PK column in the flat file has the same value in every record or different values.
    Please check if a header is present in your flat file.
    When you load the file into the model table, right-click on the table (the flat file datastore added to that model), click View Data, and check whether you are able to see all 10 records at the ODI level.

    Kind regards
    Phanikanth

  • API to load People Group data

    Hello

    Can someone give me the API for loading People Group data in Oracle Payroll?

    Thank you
    George

    Hello

    You can update the People Group data via the HR_ASSIGNMENT_API.update_emp_asg_criteria API.
