Performance of Slowly Changing Dimension IKM

Hello

I am facing big trouble with the implementation of an interface that uses the Slowly Changing Dimension IKM.

The problem is that the 'Flag rows for update' step takes more than 10 hours to complete.

I have a source table with 1 million records and a target table with about 2 million rows. Ten columns must be historized (a new row added on change) and the other 20 columns should simply be updated when the source data changes. I have already set the SCD behavior of each column of my target table (surrogate key, overwrite on change, add row on change, ...) and set the OLAP type of the target table, which in this case is Slowly Changing Dimension.
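For readers unfamiliar with the step that is slow here: conceptually, 'Flag rows for update' joins the staging (I$) table to the current target rows on the natural key and flags the rows whose overwrite-on-change columns differ. A minimal Python sketch of that idea (table, column, and function names are made up for illustration, not ODI's generated code):

```python
# Hypothetical sketch of what an SCD IKM's "flag rows for update" step does:
# for each staging row, find the current target row with the same natural key
# and mark the staging row 'U' when an overwrite-on-change column differs.

def flag_rows_for_update(staging, target, natural_key, overwrite_cols):
    # Index the current target rows by natural key. This lookup is what a
    # database index on the natural key provides; without it, the join in the
    # generated SQL can degenerate into a full scan per staging row.
    current = {tuple(row[k] for k in natural_key): row for row in target}
    for row in staging:
        match = current.get(tuple(row[k] for k in natural_key))
        if match and any(row[c] != match[c] for c in overwrite_cols):
            row["IND_UPDATE"] = "U"
    return staging

staging = [{"CUST_ID": 1, "CITY": "Paris", "IND_UPDATE": "I"},
           {"CUST_ID": 2, "CITY": "Lyon", "IND_UPDATE": "I"}]
target = [{"CUST_ID": 1, "CITY": "Nice"}]
flagged = flag_rows_for_update(staging, target, ["CUST_ID"], ["CITY"])
```

With 1 million source rows joined against 2 million target rows, this comparison is usually the bottleneck; a common first step is to check in the Operator log which SQL the step generates and whether the target's natural-key columns are indexed.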

I have tried all kinds of changes but nothing works well, and the interface still performs badly.

I hope an expert can help me, because this problem is driving me crazy.

Thanks in advance,


Daniel

I sent it to you. Take a look at your mail.

Tags: Business Intelligence

Similar Questions

  • Slowly Changing Dimension error: too many values

    Hello
    I am trying to implement the Slowly Changing Dimension IKM, with SCD behaviors set on all columns,

    but when I execute the interface, it fails at step 8, 'Historize old rows'.

    Error message:

    913: 42000: java.sql.SQLException: ORA-00913: too many values

    java.sql.SQLException: ORA-00913: too many values

    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)

    I have only 93 records.

    can someone help me on this?

    Thank you
    Saichand.v

    Published by: André Varanasi on October 4, 2010 16:26

    SAI,

    From the query, I have deduced the following:
    1. natural key is DATASOURCE_ID
    2. you have 2 columns marked as SCD End Timestamp - T.AUX_DATE2, T.AUX_DATE4
    3. you have 3 columns marked as SCD Start Timestamp - S.INS_EXP_DT, S.AUX_DATE1, S.AUX_DATE3

    Apart from the error you are getting, there is another problem here - the column counts do not match.
    >
    Set (
    T.FLAG,
    T.AUX_DATE2,
    T.AUX_DATE4
    ) = (
    Select 0,
    S.INS_EXP_DT,
    S.AUX_DATE1,
    S.AUX_DATE3
    from ... "I$_WC_CAR_Test" S
    >
    You try to define 3 columns, while the select statement has 4 columns.
    And this is due to the fact that you have more than one column marked as SCD Start Date and SCD End Date.
    This is an interesting scenario; I think the KMs are built to handle only a single column marked as SCD Start Date and a single column marked as SCD End Date.
    And that is consistent with the definition of a Type 2 SCD.

    Now, for the error you are currently getting: as you mentioned earlier, your DATASOURCE_ID is constant = 1. So you have a natural key which is always 1 for all 93 records!
    Do you really think that is a good natural key?
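    To see why a constant natural key is a problem: the SCD steps match staging to target rows on the natural key, so if DATASOURCE_ID is 1 for all 93 rows, every staging row matches every target row. A toy illustration (names are hypothetical, not the poster's actual schema):

```python
# Toy illustration: with a constant natural key, the SCD key join produces
# a cross product instead of a one-to-one match.
staging = [{"DATASOURCE_ID": 1, "ROW": i} for i in range(93)]
target = [{"DATASOURCE_ID": 1, "ROW": i} for i in range(93)]

matches = [(s["ROW"], t["ROW"]) for s in staging for t in target
           if s["DATASOURCE_ID"] == t["DATASOURCE_ID"]]
# 93 x 93 = 8649 pairs instead of 93 -- the key does not identify a row.
```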

  • Slowly Changing Dimension

    Hello

    I heard that Slowly Changing Dimensions are incorporated in version 11. Is this true, and how do I do it in the console?

    Mrithyu

    Hi Mrithyu,

    1. Yes, SCD (Slowly Changing Dimensions) is now possible in an Essbase outline.

    2. It is of Type 2.

    3. Go to the Administration Services console, open the outline, click 'Properties', and set 'varying attributes enabled' to true if it is false.

    4. For the theory and additional information on this feature, please refer to SER60 (version 11 SER60 only, please).

    Hope this helps

    Sandeep Reddy, Enti
    HCC
    http://hyperionconsultancy.com/

  • Setting Slowly Changing Dimension behavior on an ODI column using Groovy

    Hello,
    I want to set the Slowly Changing Dimension behavior of a column in ODI via a Groovy script.
    In a loop, I do:

    OdiColumn col = new OdiColumn(ds, "ETL_ID");
    col.setDataTypeCode("NUMBER");
    col.setMandatory(false);
    col.setLength(20);
    col.setScale(0);
    col.ScdType("ADD_ROW_ON_CHANGE");


    This doesn't work correctly. Do I need to specify other parameters? (for the line col.ScdType("ADD_ROW_ON_CHANGE"); )

    The manual states:
    setScdType
    public void setScdType(OdiColumn.ScdType pScdType)

    Sets the OdiColumn.ScdType of this OdiColumn instance.
    Parameters:
    pScdType - the SCD type


    Can you help me?

    Oops, I forgot the name of the enum...
    and you must use the setScdType method.

    Try this:

    col.setScdType(ScdType.ADD_ROW_ON_CHANGE);
    
  • IKM Slowly Changing Dimension problem

    Hello

    I am trying to load my target table using the Slowly Changing Dimension IKM. I set some attributes as surrogate key and natural key. I also set an 'Eff_from_date' column as 'Start Timestamp' and 'Eff_to_date' as 'End Timestamp' in the data store. 'Eff_from_date' is mapped to sysdate on the target. However, when I run the interface I get an 'invalid relational operator' error during the 'Flag rows for update' step. The code is as follows:

    update ABC.I$_Emp S
    set S.IND_UPDATE = 'U'
    where (S.D_NUM_ID, S.INT_ID, S.CHANGED_ON_DT, S.T_ID)
    in (
    select T.CHANGED_ON_DT, T.D_ID, T.INT_ID, T.T_ID
    from ABC.EMP T
    where

    and T.EFFECTIVE_TO_DT = to_date('01-01-2400', 'dd-mm-yyyy')
    )


    For some reason the start-timestamp condition is not included in the query. Can you please let me know what I am doing wrong and how I can fix it?
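    The dangling `where ... and` in the generated code is the symptom: the KM found no condition to emit before the `and`, which typically happens when the start-timestamp predicate is missing. A small sketch using SQLite in place of Oracle to show the failure mode; the table and column names are illustrative:

```python
# Sketch: a WHERE clause followed directly by AND is a syntax error.
# SQLite stands in for Oracle here; Oracle reports it as an
# "invalid relational operator" / missing expression error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (id integer, eff_to_dt text)")

bad_sql = "select * from emp where and eff_to_dt = '2400-01-01'"
try:
    conn.execute(bad_sql)
    failed = False
except sqlite3.OperationalError:
    failed = True  # the parser rejects the empty slot before AND

# Once a predicate fills the slot before AND (in ODI, once the
# start-timestamp column is set up so the KM can generate its condition),
# the statement parses:
good_sql = ("select * from emp "
            "where eff_to_dt is not null and eff_to_dt = '2400-01-01'")
rows = conn.execute(good_sql).fetchall()
```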

    Thank you

    Follow this link

    http://odiexperts.com/SCD-type-2

    -app

  • ODI Slowly Changing Dimension never overwrites, with journalized source

    Hello

    I have a journalized source table and my target is a Slowly Changing Dimension. The problem is that data updates in the source always cause the addition of a new record (and the expiry of the old one) in the target table, and never an overwrite.

    The dimension table has OLAP type 'Slowly Changing Dimension'.

    The IKM Oracle Slowly Changing Dimension is used.

    The columns are described as follows

    #COLUMN_NAME #BEHAVIOR
    DIMENSION_KEY -> surrogate key
    NAME -> overwrite on change
    BIRTH_DATE -> overwrite on change
    CURRENT_RECORD_FLAG -> current record flag
    START_TIMESTAMP -> start timestamp
    END_TIMESTAMP -> end timestamp
    HOSP_ID -> natural key
    LOC_ID -> natural key
    MOT_ID -> add row on change

    Despite succeeding, the 'Update existing rows' step of the IKM contains no source or target code. I can't understand why.

    To repeat: rows with different data in the overwrite-on-change columns trigger the add-row-on-change behavior instead.

    Hello
    What ODI version do you use? Check in the Operator log, at the 'Flag rows for update' step, that it moves the IND_UPDATE column of the records in your I$ table from 'I' to 'U'; that flag is then filtered on in the target-update step. Make sure rows flagged as updates ('U') are actually coming out of this step.

  • IKM Oracle slowly changing Dimension - Type 0

    I have a data store configured as a Slowly Changing Dimension, with the correct settings on the columns for natural key / start / end / surrogate key / current flag.
    Everything works fine, but I would like a field that stores the number of the session in which the row was created, for traceability. This must be a Type 0 field (insert only, never updated).

    Is this possible in ODI 11g? I have tried several things, but the field keeps being updated whenever the record is changed through an overwrite-on-change column.

    My configuration:
    Version: ODI_11.1.1.3.0_GENERIC_100623.1635
    KM: IKM Oracle Slowly Changing Dimension, version 11.1.2.11

    Thank you very much!

    Michael,

    Did you have to hardcode the fields in the KM? I hope not.

    We had the same problem, and the way we approached it was to use the UD1 marker for the column we wanted to insert only, without adding a new row.
    The column was otherwise marked with the Overwrite on Change SCD behavior.

    We changed lines 172 and 182 of the KM and added !UD1 (read: NOT UD1) in the comparison.
    This makes the SCD steps ignore your session-number column.

    HTH

  • Cannot create indexes on the flow table

    Hello

    I'm new to ODI.

    The problem is that during the execution of an interface, I get an error from the 'IKM Oracle Slowly Changing Dimension'.

    The command in the step 'Create unique index on the flow table':

    create index <%=odiRef.getTable("L", "INT_NAME", "A")%>_idx

    on <%=odiRef.getTable("L", "INT_NAME", "A")%> (<%=odiRef.getColList("", "[column]", ",", "", "SCD_NK")%>)

    <%=odiRef.getUserExit("FLOW_TABLE_OPTIONS")%>

    generates the following statement, which lacks the column names between the parentheses:

    create index I$_MYTABLE_idx

    on I$_MYTABLE ( )

    NOLOGGING

    The result is that the interface fails with error 936: 42000: java.sql.SQLException: ORA-00936: missing expression, caused by the malformed statement above.
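    A plausible reading of this failure, sketched as a Python mock (this is not ODI's implementation; the function and column names are made up): getColList("", ..., "SCD_NK") gathers the columns whose SCD behavior is Natural Key, and when no column is marked as natural key, the list comes back empty, producing the empty parentheses seen above.

```python
# Hypothetical mock of a getColList-style substitution: filter the target
# columns on a behavior code and join their names. If no column carries the
# SCD_NK (natural key) behavior, the result is empty and the generated DDL
# ends with empty parentheses -> ORA-00936 missing expression.

def get_col_list(columns, behavior, sep=", "):
    return sep.join(c["name"] for c in columns if c["scd"] == behavior)

columns = [{"name": "DIM_KEY", "scd": "SCD_SK"},
           {"name": "NAME", "scd": "SCD_UPD"}]       # no SCD_NK column!
ddl = f"create index I$_MYTABLE_idx on I$_MYTABLE ({get_col_list(columns, 'SCD_NK')})"

# Marking a column as natural key fills in the index column list:
columns.append({"name": "CUST_CODE", "scd": "SCD_NK"})
fixed = f"create index I$_MYTABLE_idx on I$_MYTABLE ({get_col_list(columns, 'SCD_NK')})"
```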

    Please, can you help me?

    Thank you very much

    Angelo

    Hello

    I'm really, really sorry! I just realized that you are working on the SCD. Basically, that expression looks up all the columns mapped as SCD_NK (Slowly Changing Dimension natural key), as shown here:

    Substitution QAnywhere

    If you need, read this:

    SCD Type 2 - ODIExperts.com

    Let me know.

  • How to customize a LKM, IKM

    Hi Experts,

    Can someone explain how to customize LKM and IKM knowledge modules?
    Please give me a brief explanation on this...


    THX,
    Sara.

    A very common improvement made to an IKM is to add a step to collect statistics after loading.
    I have also replaced the code of the truncate step with a call to a stored procedure instead of the TRUNCATE DDL (for privilege reasons).

    Some of my IKMs have additional steps to disable the index before the load and rebuild afterwards.

    You can also build entirely new KMs, to use partition exchange instead of a standard insert, for example, or if you have a complex custom implementation of Slowly Changing Dimensions.

    Basically, you can customize or remove any step. The KM is where you choose HOW to extract/integrate the data, while the interface defines WHAT (which source, which target, filters, expressions, joins, ...). So if you have a technical requirement, chances are it belongs in the KM. If it is a business requirement, it probably belongs in the interface mapping.

    Hope it helps.

    Kind regards
    JeromeFr

  • Problem importing a Knowledge Module

    Hello
    When exporting data from one relational table to another (where the primary key of the target table is populated by a sequence), I need to import 'IKM SQL to SQL Append' for the target. But I can't find such an option in the list of knowledge modules to import. What can I do? When I select 'IKM SQL to SQL Control Append' it shows a fatal error. Please help.
    NB: these are my options:
    IKM Access Incremental Update
    IKM DB2 400 Incremental Update
    IKM DB2 400 Incremental Update (CPYF)
    IKM DB2 400 Slowly Changing Dimension
    IKM DB2 UDB Incremental Update
    IKM DB2 UDB Slowly Changing Dimension
    IKM File to Teradata (TTU)
    IKM Informix Incremental Update
    IKM JDE World Control Append
    IKM MSSQL Incremental Update
    IKM MSSQL Slowly Changing Dimension
    IKM Netezza Control Append
    IKM Netezza Incremental Update
    IKM Netezza To File (EXTERNAL TABLE)
    IKM Oracle AW Incremental Update
    IKM Oracle BI to SQL Append
    IKM Oracle Incremental Update
    IKM Oracle Incremental Update (MERGE)
    IKM Oracle Incremental Update (PL SQL)
    IKM Oracle Multi Table Insert
    IKM Oracle Slowly Changing Dimension
    IKM Oracle Spatial Incremental Update
    IKM Oracle to Oracle Control Append (DBLINK)
    IKM SQL Control Append
    IKM SQL Control Append (ESB XREF)
    IKM SQL Control Append (SOA XREF)
    IKM SQL Incremental Update
    IKM SQL Incremental Update (row by row)
    IKM SQL to File Append
    IKM SQL to Hyperion Essbase (DATA)
    IKM SQL to Hyperion Essbase (METADATA)
    IKM SQL to Hyperion Financial Management Data
    IKM SQL to Hyperion Financial Management Dimension
    IKM SQL to Hyperion Planning
    IKM SQL to JMS Append
    IKM SQL to JMS XML Append
    IKM SQL to SQL Control Append
    IKM SQL to SQL Incremental Update
    IKM SQL to Teradata (TTU)
    IKM SQL to Teradata Control Append
    IKM Sybase ASE Incremental Update
    IKM Sybase ASE Slowly Changing Dimension
    IKM Sybase IQ Incremental Update
    IKM Sybase IQ Slowly Changing Dimension
    IKM Teradata Control Append
    IKM Teradata Incremental Update
    IKM Teradata Multi Statement
    IKM Teradata Slowly Changing Dimension
    IKM Teradata to File (TTU)
    IKM TimesTen Incremental Update (MERGE)
    IKM XML Control Append

    Instead of doing all the transformations in the staging area, do a transformation on the target. In the expression editor you can choose to execute on the source, staging, or target. Select 'on target' for at least one column and see if it works.

    Example: sysdate must be executed on the target.
    Thank you.

  • Can we change the default 1 and 0 SCD flag values in ODI to 'Y' and 'N'?

    Hello

    As we all know, with the Slowly Changing Dimension IKMs the SCD flag is inserted as 1 and 0,

    but can we change it to 'Y' and 'N'? I have a requirement that this column accept only 'Y' and 'N' values.

    For example, using IKM Oracle Slowly Changing Dimension:

    You would be interested in the steps

    Historize old rows
    and
    Insert changing and new dimensions

    Look for references like:

    <%=odiRef.getColList("", "0", ",\n\t\t\t", "", "(SCD_FLAG and REW)")%>
    for the value 0

    and

    <%=odiRef.getColList(",", "1", ",\n\t", "", "(SCD_FLAG and REW)")%>
    for 1
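    The second argument of getColList is the expression emitted for each matching column, so the customization amounts to replacing the literal "0" with "'N'" (and "1" with "'Y'") in the copied KM steps. A Python mock of the substitution (illustrative only, not ODI's real implementation; remember the target column's datatype must also accept 'Y'/'N'):

```python
# Hypothetical mock of a getColList-style call: the expr argument is emitted
# once per column matching the selector. Swapping the "0" literal for "'N'"
# changes the value written to the SCD flag column in the generated SQL.

def get_col_list(prefix, expr, sep, suffix, cols):
    return prefix + sep.join(expr for _ in cols) + suffix

flag_cols = ["CURRENT_FLAG"]
default = get_col_list("", "0", ",\n\t\t\t", "", flag_cols)   # stock KM
custom = get_col_list("", "'N'", ",\n\t\t\t", "", flag_cols)  # customized KM
```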

    Hope this helps,
    Regards,
    Alastair

  • Warning on a Type 2 SCD

    Hi all

    I have an interface that loads an SCD Type 2 dimension. It uses IKM Oracle Slowly Changing Dimension.

    The surrogate key (dimension key) column displays the following warning: "Not-null check was disabled in this interface for the mandatory column."

    What is it?

    When I click 'fix error' on the warning screen, I can run the interface, but I do not see the changes made to the interface.

    -Henry.

    Hi Henry,

    If it is filled by a sequence located on your target database, where the mapping should be executed depends on where your I$ (staging) table is created: either it is created in the same schema as your target, or you have temporary objects created elsewhere.

    If they are identical, you can use "Execute on staging"; if they are different, you must use "Execute on target".

    I think the problem here is that the column is Not Null: if "Flow control" is set to true and the mapping is set to "Execute on target", all records will be moved from I$ to E$ because the column will be null, since it does not get populated until it reaches the target. If it is set to "Execute on staging", then the column in I$ will be filled, so no problem.

    To be honest, if you use a sequence value to fill the column, there is no need to check whether it is null.

    So, you have two options:
    1. If your staging schema is the same as the target, set "Execute on staging".
    Or
    2. Leave the "Not null" checkbox disabled and execute on the target if your staging schema is different from your target.

    BOS

  • Materialized views or Streams

    I'm working on 11gR2 on Red Hat Linux.

    I have a TBL_ADDRESS table on the DB1 database, which is an OLTP database:
    SQL> desc TBL_ADDRESS
    Name                                      Null?    Type
    ----------------------------------------- -------- --------------------

    CODE_ID                                   NOT NULL NUMBER(19)
    ADDRESS_LINE_1                            NOT NULL VARCHAR2(100)
    ADDRESS_LINE_2                            NOT NULL VARCHAR2(100)
    CITY                                      NOT NULL VARCHAR2(40)
    COUNTRY                                   NOT NULL VARCHAR2(40)
    I'm working on a data warehouse that needs these details and must also capture changes (Slowly Changing Dimension).
    So I'm looking at a table design similar to this on the DB2 (data warehouse) database:
    SQL> desc TBL_ADDRESS_HISTORY
    Name                                      Null?    Type
    ----------------------------------------- -------- --------------------
    CODE_ID                                   NOT NULL NUMBER(19)
    ADDRESS_LINE_1                            NOT NULL VARCHAR2(100)
    ADDRESS_LINE_2                            NOT NULL VARCHAR2(100)
    CITY                                      NOT NULL VARCHAR2(40)
    COUNTRY                                   NOT NULL VARCHAR2(40)
    FROM_DATE                                 NOT NULL TIMESTAMP
    TO_DATE                                            TIMESTAMP
    The FROM_DATE and TO_DATE fields will help me determine whether the data is current, and will also help me identify the changes that have occurred in the past.

    To get the data from the TBL_ADDRESS table in DB1, I have two options: setting up Streams or using materialized views.

    (1) In option 1, I have the same table (TBL_ADDRESS) on the DB2 database, populated using Streams replication from DB1 to DB2. I can then create triggers (on TBL_ADDRESS in DB2) to identify changes and fill the TBL_ADDRESS_HISTORY table accordingly.

    (2) In option 2, I have a materialized view implemented over database links that is refreshed every few hours. I will also have the TBL_ADDRESS table in DB2 populated initially when the DW is built. After each refresh of the MV, I run a procedure that compares the data in TBL_ADDRESS with the materialized view; if there is any change, it moves the old data into TBL_ADDRESS_HISTORY, so that TBL_ADDRESS holds the current data and TBL_ADDRESS_HISTORY the old data.
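    The comparison procedure in option 2 amounts to a Type 2 history update: expire the open history row, insert a new one, and overwrite the current table. A rough sketch with SQLite standing in for the warehouse (table names follow the post; the data and the logic are illustrative, not the poster's procedure):

```python
# Type 2 history maintenance sketch: on change, close the open history row
# (set TO_DATE), insert a new open row, and overwrite the current table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table tbl_address (code_id integer primary key, city text);
create table tbl_address_history (code_id integer, city text,
                                  from_date text, to_date text);
""")
conn.execute("insert into tbl_address values (1, 'Pune')")
conn.execute("insert into tbl_address_history values (1, 'Pune', '2010-01-01', null)")

def apply_change(conn, code_id, new_city, now):
    # Expire the current history row (TO_DATE was null) ...
    conn.execute("update tbl_address_history set to_date = ? "
                 "where code_id = ? and to_date is null", (now, code_id))
    # ... open a new history row ...
    conn.execute("insert into tbl_address_history values (?, ?, ?, null)",
                 (code_id, new_city, now))
    # ... and overwrite the current table.
    conn.execute("update tbl_address set city = ? where code_id = ?",
                 (new_city, code_id))

apply_change(conn, 1, "Mumbai", "2010-06-01")
current = conn.execute("select city from tbl_address_history "
                       "where code_id = 1 and to_date is null").fetchone()
history = conn.execute("select count(*) from tbl_address_history "
                       "where code_id = 1").fetchone()
```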

    Option 1 seems better, but I have about 8 tables (which have to be joined to obtain the relevant data), so I thought having 8 Streams processes would be a little complex to maintain.
    With the MV approach, I have 1 MV that joins all 8 tables and is refreshed twice a day.

    Any suggestions/advice?

    What do you mean by "8 Streams processes"? You would have one capture process, one propagation process, and one apply process that together handle the LCRs of the 8 different tables. Once you have configured Streams, there is no increase in complexity from adding additional tables to the replication.

    If you use Streams, why would you use triggers to copy changes from the ADDRESS table into the ADDRESS_HISTORY table? If that is the process you are considering, you would be better off with a custom apply handler that applies the ADDRESS LCRs from the source system directly to the ADDRESS_HISTORY table in the data warehouse. Of course, in a data warehouse I would usually refrain from replicating the ADDRESS table in place, and would have a separate ETL process (not a trigger) transform the data.

    Justin

  • What knowledge modules to use

    Hi all
    I have two integration scenarios to implement and I need to know which knowledge modules to use.

    Scenario 1: Extract data from an Oracle database and generate a flat file; send the file to MS SQL Server.
    Scenario 2: Extract data from Oracle and MSSQL databases and load into a Sybase database.

    Which knowledge modules should be used for the above scenarios?

    Thanks in advance.

    Scenario 1:
    Part 1: From Oracle to file
    (1) LKM SQL to SQL
    (2) choose a staging area different from the target, probably Oracle itself
    (3) IKM SQL to File Append
    Part 2: Use an FTP component to transfer the file to the MS SQL Server machine
    Part 3: From file to MS SQL Server
    (1) LKM File to MSSQL (BULK)
    (2) IKM MSSQL Incremental Update

    But for scenario 1, why do you want an intermediate file when you can do the conversion directly from Oracle to MS SQL?

    Scenario 2:
    (1) LKM SQL to Sybase ASE (for extraction from both Oracle and MSSQL)
    (2) IKM Sybase ASE Incremental Update

    For scenario 2, you have multiple other options (you may want to explore what best suits your requirements):
    LKM SQL to Sybase ASE (BCP)
    LKM SQL to Sybase ASE
    LKM SQL to Sybase IQ (LOAD TABLE)
    IKM Sybase ASE Incremental Update
    IKM Sybase ASE Slowly Changing Dimension
    IKM Sybase IQ Incremental Update
    IKM Sybase IQ Slowly Changing Dimension

    Hope it helps!

    Regards,
    Amit

  • What is the change capture process like in DAC?

    Hello guys

    I am fairly new to DAC and I would like to know how to implement change capture using the DAC process...

    I've been using incremental loading in Informatica and now have to bring it into the DAC process. I couldn't find any detailed step-by-step guide describing how the change capture process works in DAC.

    Could someone give me some advice?

    Thank you very much

    Yes, that's right: for the out-of-the-box BIA ETL connectors (for EBS, Siebel, PSFT, JDE, etc.), DAC handles all aspects of the ETL.
    Now, if you want to change how Oracle implements the Slowly Changing Dimensions, then yes, you will begin to change
    some of the flags, such as "always truncate the table", etc.

    In general, you can think of DAC as an "Informatica console for dummies"; for the most part you really do not need to go under the covers of Informatica.

    Simply test it in your environment: run a full ETL, make some changes to source rows, run another ETL (which will be incremental), and then look for the
    changes in the DW.
