ODI performance

Hello

We are using ODI 11.1.1.6.0 and trying to insert 300K records using the IKM 'IKM SQL Control Append'; it takes 13 hours to complete the load.

It is a simple straight insert with an IKM that has only 3 steps: 1) Truncate target table, 2) Insert, 3) Commit.

However, my main interface uses 4 temporary (yellow) interfaces. I had to use temporary interfaces because I had about 10 tables to join and needed sub-selects on some of them.

The temp interfaces load very fast without any issues; however, the main interface that includes the temporary interfaces takes 13 hours to load 300K records.

My question is: does using temporary interfaces degrade ODI loading performance? I tried tuning the query and reducing its cost, but to no avail.

Please suggest some steps I can take to improve performance.

Thank you.

Chantal, ARVs, Bertram, Michael Rainey, thank you very much for your answers; sincerely appreciated.

I managed to solve the problem: it was the lookup in the main interface that created the performance problem.

I had to remove the lookup and replace it with a standard join, add a few indexes, and tune the load of the other queries; it now completes in 2 minutes.
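The rewrite described above, replacing a per-row lookup with a set-based join plus an index on the join column, can be sketched like this (a minimal illustration with made-up tables, using SQLite from Python; not the actual ODI-generated code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY, cust_id INTEGER);
    CREATE TABLE customers (cust_id INTEGER, cust_name TEXT);
    INSERT INTO orders    VALUES (1, 10), (2, 20);
    INSERT INTO customers VALUES (10, 'A'), (20, 'B');
    -- the 'few indexes' mentioned above: index the join column
    CREATE INDEX ix_customers ON customers (cust_id);
""")

# Lookup style: one correlated subquery evaluated per output row.
lookup_sql = """
    SELECT o.order_id,
           (SELECT c.cust_name FROM customers c WHERE c.cust_id = o.cust_id)
    FROM orders o
    ORDER BY o.order_id
"""

# Join style: a single indexed join over the whole set.
join_sql = """
    SELECT o.order_id, c.cust_name
    FROM orders o
    JOIN customers c ON c.cust_id = o.cust_id
    ORDER BY o.order_id
"""

# Both return the same rows; the join form lets the optimizer work set-based.
print(conn.execute(join_sql).fetchall())  # [(1, 'A'), (2, 'B')]
```

On real volumes the per-row subquery is where a lookup-style mapping spends its 13 hours; the join form is what the standard-join rewrite generates.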

Thank you once again.

Tags: Business Intelligence

Similar Questions

  • While loading, no data appears in the target table; does ODI performance degrade?

    Hello

    I am trying to load data from a source table (Oracle DB1) to a target table (Oracle DB2) on the same server.

    The source table has 4.5 million (45 lakh) rows, which must be loaded into the target table.

    I created an interface using LKM SQL to SQL and IKM SQL to SQL Control Append.

    Yesterday I executed the interface and checked the status in the Operator navigator; it shows a loading status.

    However, the interface stays in the loading state.

    Sometimes ODI becomes unresponsive. Please help me.

    Do I need to adjust the fetch sizes?

    Please solve this problem.

    Thank you and best regards,

    A.Kavya.

    Hello

    As you suggested, I tried different knowledge modules. For the DB-to-DB load of the 4.5 million rows I used the IKM Oracle Incremental Update.

    It really increased performance: within 10 minutes the data loaded correctly into the target table. Previously I used LKM SQL to SQL with IKM SQL to SQL Control Append; it took a long time. Even after two days the data was not loaded into the target table. Really bad performance.

    For better performance I used the DB load with LKM SQL to Oracle and IKM Oracle Incremental Update.

  • ODI performance problem

    Hello everyone

    I'm experiencing a performance problem in ODI 12c. One of my mappings runs for more than 500 seconds, while the response time for the same query is only 14 seconds in Toad. The LOAD step of the ODI process takes too long. The generated query is:

    SELECT /*+ FULL(MUH_BUF) */
        ORG.ORGANIZATION_ID ORGANIZATION_ID,
        ORG.ATTRIBUTE10 MUH_KODU,
        MUH_BUF.FATURA_TARIHI FATURA_TARIHI,
        (SUM(DECODE(MUH_BUF.MIKTAR, 0, 0, MUH_BUF.NAKIT_CIRO))
         - SUM(DECODE(MUH_BUF.MIKTAR, 0, MUH_BUF.NAKIT_CIRO * -1, 0))) NAKIT_CIRO,
        CAT.SEGMENT1 ANA_GRUP
    FROM APPS.TSA_MAGAZA_SATIS_MUH_TABLE_BUF MUH_BUF,
         APPS.HR_ALL_ORGANIZATION_UNITS ORG,
         APPS.MTL_ITEM_CATEGORIES_V CAT,
         APPS.MTL_SYSTEM_ITEMS_B S_ITEM
    WHERE MUH_BUF.INVENTORY_ITEM_ID = CAT.INVENTORY_ITEM_ID
      AND MUH_BUF.ORGANIZATION_ID = ORG.ORGANIZATION_ID
      AND MUH_BUF.FATURA_TARIHI >= ADD_MONTHS(SYSDATE, -3) - :GLOBAL.gv_MonthAgo
      AND CAT.INVENTORY_ITEM_ID = S_ITEM.INVENTORY_ITEM_ID
      AND CAT.CATEGORY_SET_ID = 1100000003
      AND CAT.ORGANIZATION_ID = 23
      AND S_ITEM.ORGANIZATION_ID = 23
    GROUP BY ORG.ORGANIZATION_ID,
             ORG.ATTRIBUTE10,
             MUH_BUF.FATURA_TARIHI,
             CAT.SEGMENT1

    What can the problem be?

    Source tables: Oracle

    Target table: MSSQL

    Please help me.

    Thank you

    Hello

    Use the LKM 'LKM SQL to MSSQL (BULK)'.

    Hope it will work, because it inserts the data in bulk.

    Thanks & best regards

    Shiv Kumar

  • Loop in ODI performance

    Hello

    I have a question regarding performance.

    Two tables: 1) Bank_info 2) Customer_info

    I want to run a TABLE-to-FILE scenario.

    In the file, the 1st row comes from Bank_info, then two rows from Customer_info, and it must be written this way for every row of Bank_info. (Note: every single row in the Bank_info table corresponds to two rows in the Customer_info table.)

    In Bank_info I have 1 million records and in Customer_info I have 2 million records, so in total 3 million lines should be written to the file.

    So I built a scenario using ODI's row-by-row looping concept. On every pass of the loop, I hit the database twice.

    So please suggest: is writing 3 million records to a file using ODI this way the right approach?

    I timed a successful run of the same scenario for 12,000 records,
    and it took nearly 53 minutes.

    Thank you for your kind reply

    As you can see from the execution statistics, this is a very inefficient way to write data to a file, and I would stay away from it.
    Basically, you hit the 3 million rows over and over again, hence the performance degradation.

    The way I would do it is to create a procedure (perhaps a PL/SQL block) to write the data, in the required format, into a temporary table, and then use the OdiSqlUnload tool to unload the data to the file.

    Alternatively, you can use OdiSqlUnload directly: in its SQL query parameter (from the tool's description, the SQL query to execute on the database server must be a SELECT statement or a call to a stored procedure returning a valid record set), you can write a query or stored procedure that returns the records in the order you want.
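The single-pass alternative suggested here can be sketched as follows (hypothetical schema; a 0/1 record-type column makes each bank row sort before its two customer rows, so one ordered query replaces millions of round trips):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bank_info     (bank_id INTEGER, line TEXT);
    CREATE TABLE customer_info (bank_id INTEGER, seq INTEGER, line TEXT);
    INSERT INTO bank_info     VALUES (1, 'BANK1'), (2, 'BANK2');
    INSERT INTO customer_info VALUES (1, 1, 'CUST1A'), (1, 2, 'CUST1B'),
                                     (2, 1, 'CUST2A'), (2, 2, 'CUST2B');
""")

# Bank rows get rec_type 0, customer rows rec_type 1, so each bank header
# sorts immediately before its own customer detail rows.
sql = """
    SELECT bank_id, 0 AS rec_type, 0 AS seq, line FROM bank_info
    UNION ALL
    SELECT bank_id, 1, seq, line FROM customer_info
    ORDER BY bank_id, rec_type, seq
"""
lines = [row[3] for row in conn.execute(sql)]
print(lines)  # ['BANK1', 'CUST1A', 'CUST1B', 'BANK2', 'CUST2A', 'CUST2B']
```

A query shaped like this is exactly what could be handed to OdiSqlUnload, writing the whole interleaved file in one database pass instead of two hits per bank row.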

  • ODI Performance Tuning

    Hi all

    Our interface is very slow.

    The source is a flat file and the target is Oracle.

    The source file has only 10 records, but the interface takes more than 15 minutes to complete.


    Please let me know where I can look for tuning.



    Cheers,
    Andy

    Hi Andy,

    Don't worry, all things are difficult before they are easy, and ODI is just a tool that you can learn in some time. :)

    If you meant the performance of the ODI components (Designer, Topology Manager, agent...),

    then increasing the ODI_MAX_HEAP parameter in odiparams.bat will help.

    Otherwise, keep in mind the points I mentioned about development.

    In addition, there are tons of documents on Metalink discussing performance.

    Thank you
    G

  • JKM Oracle Consistent (Update Date) CDC

    Hi all, I'm trying to implement CDC using the JKM Oracle Consistent (Update Date) for a fairly simple interface (one source table and one target table). After I did a normal load to the target table (a single record), I implemented journalizing as shown below:


    (1) Set the model's journalizing mode to Consistent Set. Chose the JKM Oracle Consistent (Update Date) and specified the source table's column name (LAST_UPDATE_DT) for the UPDATE_DATE_COL_NAME option.
    (2) Enabled the source datastore for CDC (Change Data Capture -> Add to CDC)
    (3) Started the journal for the datastore (Change Data Capture -> Start Journal)
    (4) Added a subscriber to the journal (Change Data Capture -> Subscriber -> Subscribe)
    (5) Inserted a new record into the source table with an appropriate timestamp in LAST_UPDATE_DT
    (6) Checked the journal data and made sure the inserted record is there (right-click the datastore -> Change Data Capture -> Journal Data). I can see the record in the journal data window.
    (7) Created a copy of the interface above and checked "Journalized data only" so it uses the journal for the CDC load.

    I am now running this interface in simulation mode to see if the new record is picked up to be loaded, and that's the issue: it is not. Weird, considering it appears in the journal data. So I checked the query that is executed to select new records; below is what I get, with Column1 as the PK of the source table:
    insert /*+ APPEND */ into SCHEMA.I$_TARGET
         (
         COLUMN1,
         COLUMN2,
         IND_UPDATE
         )
    select
         SOURCE_CDC.COLUMN1,
         SOURCE_CDC.COLUMN2,
         JRN_FLAG IND_UPDATE
    from     SCHEMA.JV$SOURCE_CDC SOURCE_CDC
    where    (1=1)
    and      JRN_SUBSCRIBER = 'SUNOPSIS' /* AND JRN_DATE < sysdate */
    Executing the select statement does not show the new record. Going back and checking the definition of the J$ view, JV$SOURCE_CDC:
    CREATE OR REPLACE FORCE VIEW ETL_ASTG.JV$SOURCE_CDC (JRN_FLAG, JRN_DATE, JRN_SUBSCRIBER, COLUMN1, COLUMN2) AS
      select
         decode(TARG.ROWID, null, 'D', 'I') JRN_FLAG,
         sysdate JRN_DATE,
         JRN.CDC_SUBSCRIBER JRN_SUBSCRIBER,
         JRN.COLUMN1,
         JRN.COLUMN2
    from
    (select JRN.COLUMN1, SUB.CDC_SUBSCRIBER, SUB.MAX_WINDOW_ID_INS, max(JRN.WINDOW_ID) WINDOW_ID
         from SCHEMA.J$SOURCE_CDC JRN,
              SCHEMA.SNP_CDC_SUBS SUB
         where SUB.CDC_SET_NAME = 'MODEL_NAME'
         and   JRN.WINDOW_ID  > SUB.MIN_WINDOW_ID
         and   JRN.WINDOW_ID <= SUB.MAX_WINDOW_ID_DEL
         group by JRN.COLUMN1, SUB.CDC_SUBSCRIBER, SUB.MAX_WINDOW_ID_INS) JRN,
         SCHEMA.SOURCE_CDC TARG
    where JRN.COLUMN1 = TARG.COLUMN1(+)
    and not (
             TARG.ROWID is not null
             and JRN.WINDOW_ID > JRN.MAX_WINDOW_ID_INS
            );
    I can tell the record is being filtered out because of the condition JRN.WINDOW_ID <= SUB.MAX_WINDOW_ID_DEL. I don't know what this condition does, but JRN.WINDOW_ID = 28, SUB.MIN_WINDOW_ID = 26 and SUB.MAX_WINDOW_ID_DEL = 27.

    Any ideas on how to get this working?

    Hello
    After starting your journal you must implement a package (or manually perform these steps on the model in ODI) that performs the following operations using the ODI tools:

    Extend Window (this resets the window numbers in the subscriber table you found) ---> Lock Subscribers ---> (<run interfaces>) ---> Unlock Subscribers ---> Purge Journal

    It's hidden in the docs here:
    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/data_capture.htm#CFHIHJEE

    Here's an excellent guide that I always refer people to, which shows exactly how:

    http://soainfrastructure.blogspot.co.uk/2009/02/setting-up-oracle-data-integrator-odi_15.html

    The guide also explains how to configure ODI to loop around and wait for more CDC changes to occur (using OdiWaitForLogData).
    Hope this helps
    Alastair

  • Is it possible to update a column that has been selected as a KEY?

    Hello.

    I'm trying to run an interface that takes a distinct value of the source column and populates the target dimension table. The target dimension table has another column (a surrogate key) which is populated via an Oracle sequence. Source and target are both Oracle. LKM used: LKM Oracle to Oracle (DBLINK).

    The problem is with the key column that must be defined for updates: when I run the interface with the update check enabled in the mapping window and Update set to 'YES' in the IKM (Oracle Incremental Update), I get an error where ODI performs a step updating the current rows (although the table is empty) and says the table.column name is not valid. I agree that it makes sense, but how do I solve this problem?

    If I select the surrogate key column (the sequence number) as the key, I always get the error. So, to be precise, how do I deal with such a situation? Is it possible to create another column that is populated with a distinct number by ODI and use that as the key?

    Any help will be much appreciated.



    update Table_name1 T
    set (
    ...
    ) =
    (
    select
    ...
    from PFTW.I$_Table_name1 S
    where T.column_name1 = S.column_name1
    )
    where (column_name1)
    in (
    select column_name1
    from PFTW.I$_Table_name1
    where IND_UPDATE = 'U'
    )
    I get the distinct rows by checking the Distinct Rows checkbox in the target area properties window.

    As I understand it, you have two columns in the target: one extracted from the source and a different surrogate key (that is, the sequence). In this case you would never update, so go with IKM Oracle Incremental Update with the UPDATE option set to No and INSERT set to Yes.

  • ODI procedure with slow performance (SOURCE and TARGET are different Oracle databases)

    Hi experts,

    I have an ODI procedure, but it runs with slow performance (SOURCE and TARGET are different Oracle databases), as you can see below.

    My question is:

    Is it possible to write an Oracle BULK COLLECT in the 'Command on Target' (below)? Or

    Is there an ODI KM that performs the task below in a faster way? If so, which KM can you suggest?

    I found 'IKM Oracle Control Append (DBLINK)', but I am trying to avoid creating a database link.

    ===============================================================================

    * COMMAND ON SOURCE (technology: ORACLE, logical schema: ORACLE_DB_SOURCE):

    SELECT NUM_AGENCIA, NUM_CPF_CNPJ, NOM_PESSOA

    FROM <%=odiRef.getSchemaName("D")%>.<%=odiRef.getOption("P_TABELA")%>

    ===============================================================================

    * COMMAND ON TARGET (technology: ORACLE, logical schema: ORACLE_DB_TARGET):

    BEGIN

    INSERT INTO DISTSOB_OWNER.DISTSOB_PESSOA (NOM_PESSOA, NUM_CPF_CNPJ, FLG_ATIVO)

    VALUES ('#NOM_PESSOA', '#NUM_CPF_CNPJ', ...);

    EXCEPTION WHEN DUP_VAL_ON_INDEX THEN

    NULL;

    END;

    ===============================================================================


    Thank you guys!

    Please use the 'IKM SQL to SQL Control Append' KM... You can delete the unnecessary steps in the KM, e.g. if you won't create the I$ table, use flow control etc., then you can remove the related steps.

    Please try with that.
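The difference between the row-by-row procedure and a set-based KM can be sketched like this (made-up rows; SQLite via Python stands in for the two Oracle databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE distsob_pessoa (nom TEXT, cpf TEXT, flg TEXT)")

source_rows = [("ANA", "111", "S"), ("JOAO", "222", "S")]

# Row-by-row, as the BEGIN...END block above does: one INSERT per source row,
# so on real volumes each row pays a full statement round trip.
for row in source_rows:
    conn.execute("INSERT INTO distsob_pessoa VALUES (?, ?, ?)", row)
conn.execute("DELETE FROM distsob_pessoa")  # reset for the second variant

# Set-based, as an append-style IKM effectively does: one batched statement
# covering the whole source set.
conn.executemany("INSERT INTO distsob_pessoa VALUES (?, ?, ?)", source_rows)

loaded = conn.execute("SELECT COUNT(*) FROM distsob_pessoa").fetchone()[0]
print(loaded)  # 2
```

The end state is identical; the batched form is what makes the KM-based load faster than the per-row PL/SQL insert.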

  • What ODI performance tweaks can we do?

    Hi gurus,

    What performance improvements can we make?


    For example, if a single interface takes a lot of time, which parameters should we check and what can we do to get better performance?


    And overall, what can we do to improve the performance of ODI?

    Hello

    There is no ODI setting that you can really change to increase overall performance (otherwise it would be enabled by default). So it really depends on your needs.

    Most of the work is done not by ODI itself but by your underlying DB; ODI is an ELT, not an ETL.
    As Chantal said, the best thing to do is to check the generated code and change your interface, or your KM if necessary.

    For example, if you think you need to gather stats, you can do something like this: http://www.business-intelligence-quotient.com/?p=1754
    You can also disable the indexes before the integration steps and rebuild them afterwards.

    Hope this helps.

    Kind regards
    JeromeFr
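The disable-indexes-then-rebuild suggestion can be sketched like this (a minimal sketch: SQLite's drop/recreate stands in for Oracle's ALTER INDEX ... UNUSABLE / REBUILD, and the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (k INTEGER, v TEXT)")
conn.execute("CREATE INDEX ix_fact_k ON fact (k)")

rows = [(i, "v%d" % i) for i in range(1000)]

# Drop the index before the bulk integration step, so each insert does not
# pay for index maintenance...
conn.execute("DROP INDEX ix_fact_k")
conn.executemany("INSERT INTO fact VALUES (?, ?)", rows)

# ...then rebuild it afterwards in one pass over the loaded data.
conn.execute("CREATE INDEX ix_fact_k ON fact (k)")

count = conn.execute("SELECT COUNT(*) FROM fact").fetchone()[0]
print(count)  # 1000
```

In a KM these two statements would typically go into a pre-integration and a post-integration step around the insert.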

  • ODI adapter for Hyperion Performance Scorecard

    Hello

    I want to install the latest version of Hyperion Performance Scorecard and I was wondering if ODI has an adapter for direct integration?

    I have used HAL 9.2 with HPS.

    Thank you

    Hello

    Please refer to the post below:

    ODI adapter for Hyperion Performance Scorecard

    There isn't any direct way to integrate with HPS. You must use the Scorecard Import/Export utility.

    I hope this helps.

    Kind regards

    Manmohan Sharma

  • Performance problem in ODI 11g

    Hello everyone

    I created an interface where the source has about 2 lakh records, and I used the IKM Oracle Incremental Update.

    When I ran a full load, it loaded the data into the target in a few minutes.

    When I ran the delta load the next day, it took 1 hour to load the data into the target.

    How do I solve this problem? Please help me.

    Hello.

    There are a few things you can play with in the incremental IKM to tweak it to run faster...

    * The first thing is to manually identify what SQL code ODI generates. If you don't like it or don't understand it, then the next step is to look at a few options that you can change at the IKM level.

    Below are my findings... in order of preference.

    Solution 1: The best approach is CDC; personally I use GoldenGate for CDC plus ODI for the normal load (but you must have additional licenses for GoldenGate).

    Solution 2: Always check the value of the detection strategy option. If you see a NOT EXISTS clause in the SQL generated by ODI, then change the detection strategy to MINUS; it will be faster than NOT EXISTS.

    Workaround 3: Understand what incremental update actually does: it checks all the rows from source against target based on the PK defined on the target datastore; some IKMs do it row by row, some set-based.

    So try setting the detection strategy option in the incremental update IKM to NONE and then try... This means ODI will not try to compare source and target data again for rows that are already in the target...

    Workaround 4: Try the IKM Oracle Incremental Update (MERGE) instead of the plain incremental update IKM, as the MERGE IKM combines all update/insert operations into a single step, which is not the case with the incremental update IKM.
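The single-step merge idea can be sketched outside ODI like this (an analogue only: SQLite's INSERT ... ON CONFLICT, available in SQLite 3.24+, plays the role of Oracle's MERGE here, and the table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'old')")

rows = [(1, 'new'), (2, 'fresh')]  # one existing PK (update), one new PK (insert)

# One statement handles both the update and the insert, instead of a
# separate update step and insert step as in the plain incremental IKM.
conn.executemany("""
    INSERT INTO target (pk, val) VALUES (?, ?)
    ON CONFLICT (pk) DO UPDATE SET val = excluded.val
""", rows)

print(sorted(conn.execute("SELECT * FROM target")))  # [(1, 'new'), (2, 'fresh')]
```

This is why the MERGE variant of the IKM tends to win: one pass over the data covers both row fates.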

    Hope this helps

    Regards

    ASP.

  • ODI - how to clear a region before an interface performs its data load

    Hello everyone

    I use ODI 10.1.3.6 to load data into an ASO cube (version 11.1.2.1) every day. Before loading data for a particular date, I want the region of the ASO cube defined by 'this date' to be cleared.

    I guess I need to run a PRE_LOAD_MAXL_SCRIPT that clears the region defined by an MDX expression. But I don't know how I can set the region automatically by looking at several columns in the data source.

    Thank you very much.

    There are a number of ways to do this.
    Define substitution variables; they could be populated from ODI by passing parameters into a MaxL script, or by using the Java API.
    Or you could generate the clear-region MDX in ODI and then pass it through to a MaxL script as a parameter.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • ODI-1228: Task Load Data - LKM File to SQL - fails on the target connection: table or view does not exist

    While executing a mapping (present in a package) that loads file data into a table, the mapping fails at the LKM File to SQL step with the above-mentioned SQL error.

    This task runs for about 30 minutes, loading roughly 30 to 40 million rows into the ODI C$ temporary table.

    Before the task completes it fails, and the C$ table also gets deleted.

    Any possible resolution for the above-mentioned issue?

    The problem has been solved.

    In our case, the prefix of every datastore name was SRC_, so the alias of every datastore became SRC, and the C$ table name depends on the datastore alias.

    So when two mappings executed, each one's C$ table was getting dropped by the other mapping due to the identical C$ table name.

    Changing the alias to give each datastore a unique name solved the problem.

  • How to achieve a scenario in ODI 11.1.1.7.0

    Hi all

    I use ODI 11.1.1.7.0 in my project, and I have the scenario below to implement; please help me with your thoughts.

    I have a table called A with 30 columns, one of which is named 'Comments', and I have another table B with only the column 'Comments'. I need to extract 10 columns plus the 'Comments' column from table A into the target. But while loading the 'Comments' column, I need to check whether the comment exists in table B: if it exists, pass table A's Comments value; otherwise pass NULL to the target. Please tell us how I can achieve this.

    Thank you

    Dany

    Solution 1) You can write a view (in SQL) with your logic and then create a source datastore on it (example: src_view_a) from which you can load the data.

    Solution 2) You can handle this condition in the mapping by writing the logic inline in an expression.
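The view logic of Solution 1 could look like the following sketch (hypothetical tables and data; SQLite via Python is used just to illustrate the CASE WHEN EXISTS shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, comments TEXT);
    CREATE TABLE b (comments TEXT);
    INSERT INTO a VALUES (1, 'ok'), (2, 'missing');
    INSERT INTO b VALUES ('ok');
""")

# Keep A.comments only when the same value exists in B, else NULL,
# exactly the conditional lookup described in the question.
sql = """
    SELECT a.id,
           CASE WHEN EXISTS (SELECT 1 FROM b WHERE b.comments = a.comments)
                THEN a.comments ELSE NULL END AS comments
    FROM a
    ORDER BY a.id
"""
result = conn.execute(sql).fetchall()
print(result)  # [(1, 'ok'), (2, None)]
```

The same CASE WHEN EXISTS expression is what would go into the view (Solution 1) or into the inline mapping expression (Solution 2).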

  • SMTPSendFailedException: 530 authentication required error when I try to send ODI logs by mail in ODI 11g?

    Hello

    I used jython code below, to send mail from odi. because I created the procedure in odi called as emailodi

    I have put this code in the command on the target. Technology of jython code.

    import smtplib
    import string

    BODY = string.join((
        "From: %s" % "[email protected]",
        "To: %s" % "[email protected]",
        "Subject: %s" % "ODI Mail",
        "",
        "This is a mail from ODI Studio. Thank you. Keep visiting www.DwTeam.in."
        ), "\r\n")

    sender = smtplib.SMTP('smtp.gmail.com', 587)
    sender.set_debuglevel(1)
    sender.ehlo()
    sender.starttls()
    sender.ehlo()
    sender.login('GMAIL_USER_NAME', 'PASSWORD')
    sender.sendmail('[email protected]', ['[email protected]'], BODY)
    sender.close()


    Using this code I am able to send mail from ODI with whatever I wrote in the subject and body.

    My requirement is that I want to send the ODI logs by mail...

    For this, I tried the following:

    I placed the OdiExportLog tool and the procedure above in a package.


    After executing this package,

    I get an error like...


    ODI-1226: OdiSendMail step 3 fails after 1 attempt.

    ODI-1241: Oracle Data Integrator tool execution fails.

    Caused by: com.sun.mail.smtp.SMTPSendFailedException: 530 authentication required.



    Thank you and best regards,

    A.Kavya.



    Hello

    To send the ODI log file by mail,

    I did it like this:

    1. I created a procedure (named log_data_success) to unload data from the Oracle database (the log tables such as snp_session) to a text file;

    for this I used the OdiSqlUnload tool in the command panel, with the technology set to ODI Tools.

    OdiSqlUnload "-FILE=D:\TEXT\sucess_log.txt" "-DRIVER=oracle.jdbc.OracleDriver" "-URL=jdbc:oracle:thin:@192.0.0.0:1521:odi" "-USER=odi" "-PASS=hpfHiT7Ql0Hd79KUseSWYAVIA" "-FILE_FORMAT=VARIABLE" "-FIELD_SEP=|" "-ROW_SEP=\r\n" "-DATE_FORMAT=YYYY/MM/DD hh:mm:" "-CHARSET_ENCODING=ISO8859_1" "-XML_CHARSET_ENCODING=ISO-8859-1"

    SELECT * FROM snp_session

    2. After executing the procedure above, I got the log data in the file.



    3. Then I created another procedure (named success_mail) to attach the log file above to a mail.

    For this I used the Jython code below, giving the path of the log file; in it we must set the SMTP address, the port, and the mail credentials:

    import smtplib, os
    from email.mime.multipart import MIMEMultipart
    from email.mime.base import MIMEBase
    from email.mime.text import MIMEText
    from email.utils import COMMASPACE, formatdate
    from email import encoders

    FROM = '[email protected]'
    TO = ['[email protected]']  # must be a list
    SUBJECT = "today_odi_sucess_log_status"
    TEXT = "This is an automated message; please do not respond to this message"
    SERVER = 'smtp.xxx.xxxx'
    PORT = 25
    USERNAME = "abd_cheuan"
    PASSWORD = "xxxxxxx"

    message = MIMEMultipart()
    message['Subject'] = SUBJECT
    message['From'] = FROM
    message['To'] = COMMASPACE.join(TO)
    message.attach(MIMEText(TEXT))

    part = MIMEBase("application", "octet-stream")
    part.set_payload(open('D:\TEXT\sucess_log.txt', 'rb').read())
    encoders.encode_base64(part)
    part.add_header('Content-Disposition', 'attachment; filename=' + 'sucess_log.txt')
    message.attach(part)

    s = smtplib.SMTP(SERVER, PORT)
    s.login(USERNAME, PASSWORD)
    s.sendmail(FROM, TO, message.as_string())
    s.quit()

    4. After that, I created a single package and added the two procedures above to the package diagram.

    5. After executing this package, I received the email from ODI.

    It's working fine.

    To resolve this problem, I used this link:

    Send mail from ODI using Gmail Credentials - DW Team

    Thanks & regards,

    A.Kavya
