Target table is not truncated when the interface runs from a schedule in ODI

Hi Experts,

I generated a scenario from an interface. At first I kept the Truncate option set to false, and the data went to the target without any problems. But when I changed Truncate to TRUE, the interface still runs, yet the target table does not get truncated; instead the data is loaded into the target again on top of the old rows. (When I run the interface manually, the target table is truncated perfectly - just not from the schedule.)

Can someone help me with this problem?

Thank you

Shakur.

Have you regenerated the scenario in question since you changed the Truncate option? If you have not, the scheduled job must still be running the old code.

Tags: Business Intelligence

Similar Questions

  • FAILED TO CREATE THE INTERFACE ORACLE DATA INTEGRATOR PROJECT

    Hello

    I'm new to ODI and started to learn by following these links:

    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_master_work_repos/odi_master_work_repos.htm

    I created the master repository and the work repository successfully by following the scenario in the link above. Later, I followed the link below:

    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_project_ff-to-ff/odi_project_flatfile-to-flatfile.htm

    In this scenario, when creating the interface, I get two errors:

    Severity   Object            Message
    Fatal      Target datastore  The temporary target datastore has no name.
    Critical   Target datastore  No IKM is selected for this interface.

    But I had chosen "IKM SQL to File Append" in the Knowledge Modules tree, under Integration. After selecting it I clicked Save; a new pop-up window then opened prompting for the name of the file. Clicking the search icon, I chose a path, gave the file a name, and saved. Even after doing that, I still get the same errors. So could someone please help me overcome this problem?

    Thank you
    Gunuru

    Hi Gunuru,

    Problem 1:
    In your interface, go to the mapping tab.
    Click "Target data store" at the top right of your screen and then enter a name for the datastore (at the bottom).

    Problem 2:
    You probably forgot step 4 and step 5 of this article: http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_project_ff-to-ff/odi_project_flatfile-to-flatfile.htm#t6

    Just click the Flow tab, click your staging area and choose your IKM.
    You must also choose an LKM on the source side.

    Hope this helps.

    Kind regards
    JeromeFr

  • Table-level lock while truncating a partition?

    I am using the Oracle version below.

    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE Production 11.2.0.2.0
    AMT for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a script to truncate a partition, shown below. Is there a table-level lock while truncating the partition? Any input is appreciated.

    ALTER TABLE TEMP_RESPONSE_TIME TRUNCATE PARTITION part1

    >
    Is there a table-level lock while truncating the partition?
    >
    No - it will lock the partition being truncated.

    Is there a global index on the table? If so, it will be marked UNUSABLE and must be rebuilt.

    See the VLDB and Partitioning Guide:
    http://Oracle.Su/docs/11g/server.112/e10837/part_oltp.htm
    >
    Impact of a Partition Maintenance Operation on a Partitioned Table with Local Indexes
    Whenever a partition maintenance operation takes place, Oracle locks the affected table partitions for any DML operation. Data in the affected partitions, except for a DROP or TRUNCATE operation, remains fully available for any SELECT operation. Because local indexes are logically coupled with the table (data) partitions, only the local index partitions of the affected table partitions have to be maintained as part of a partition maintenance operation, which enables optimal processing of the index maintenance.

    For example, when you move an older partition from a high-end storage tier to a low-cost storage tier, the data and the index remain available for SELECT operations; the necessary index maintenance either updates the existing index partition to reflect the new physical location of the data or, more commonly, moves and rebuilds the index partition on a low-cost storage tier as well. If you drop an older partition after you have archived it, its local index partitions get dropped as well, enabling a split-second partition maintenance operation that affects only the data dictionary.

    Impact of a Partition Maintenance Operation on Global Indexes

    Whenever a global index is defined on a partitioned or nonpartitioned table, there is no correlation between an individual table partition and the index. Therefore, any partition maintenance operation affects all global indexes or index partitions. As with tables containing local indexes, the affected partitions are locked to prevent DML operations against the affected table partitions. However, unlike the local index maintenance, any global index remains fully available for DML operations and does not affect the online availability of the OLTP system. Conceptually and technically, the index maintenance for global indexes for a partition maintenance operation is comparable to the index maintenance that would become necessary for a semantically identical DML operation.

    For example, dropping an old partition is semantically equivalent to deleting its records with the SQL DELETE statement. In both cases, all index entries of the deleted data set have to be removed from any global index as a standard index maintenance operation, which does not affect the availability of the index for SELECT and DML operations. In this scenario, a drop operation represents the optimal approach: the data is removed without the overhead of a conventional DELETE operation, and the indexes are maintained in a nonintrusive manner.
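    As a sketch of the behavior described above (the table name follows the question; the global index name is hypothetical), a partition truncate marks global indexes UNUSABLE unless you ask Oracle to maintain them:

    ```sql
    -- Truncate one partition and let Oracle maintain global indexes.
    ALTER TABLE temp_response_time TRUNCATE PARTITION part1 UPDATE INDEXES;

    -- Without UPDATE INDEXES, a global index is marked UNUSABLE
    -- and has to be rebuilt afterwards:
    ALTER TABLE temp_response_time TRUNCATE PARTITION part1;
    ALTER INDEX temp_response_time_gix REBUILD;
    ```

    UPDATE INDEXES makes the truncate itself slower, but keeps the global index available throughout.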

  • Array Subset truncates the end of the 2D array

    The intention was to make a program that would asynchronously generate several different signals into a buffer.  Then something would consume the buffer - a DAQ output and signal processing.  I created a dummy consumer which takes only 1% off the beginning of the buffer.  Whenever the buffer is smaller than the specified size, more signal is added at the end.

    I ran into a problem where the Array Subset function sometimes truncates the end of the subset, so I stripped the program down until a minimal portion of code exists that still causes the problem.  It seems to be memory-usage or allocation related.  Maybe I'm doing something that I shouldn't be, but it looks like a bug in LabVIEW.  In the block diagram, I have a note that points out a waveform wire going to a case structure.  Just removing this wire makes it work properly, as seen by the consistency of the waveform on the front panel.

    I'm using LabVIEW 2014 (without SP1).

    I would be grateful for any ideas.

    To work around the problem, use Always Copy for the moment. I'll try to engage someone from LabVIEW R&D to get the last word.

    In any case, it seems unnecessary to carry around all these t0 values (which are always zero!) and dt values (which are always the same). Constantly converting from waveforms to arrays and back just clutters the code. If dt differed between the waveforms, you would have a much bigger problem.

    I understand that your actual code is much more complicated and what you show is just the tip of the iceberg.

    Here is a general overview of ideas for the posted code:

    • Use Build Array (concatenate mode) instead of Insert Into Array. It's cleaner.
    • Use simpler, easier-to-read code to find the smaller array size.
    • Use plain arrays only. You can define t0 and dt once for all graphs.
    • Use the correct representation for the buffer-size controls.
    • Don't place unnecessary sequence structures.
    • I don't think you really need those local variables; the terminal is written often enough (this saves you an extra copy of the huge arrays in memory!).
    • Not sure what the case structure is for, but I left it in for now.
    • Don't conditionally add empty arrays; just wire the array through unchanged instead.
    • ...

  • ODI 12c: IKM for incremental insert and update with a sequence in the target table

    Hello

    I have a mapping where I populate a column of my target table using a database sequence. The mapping is supposed to load the target table incrementally, so I need an IKM for incremental update and insert. This incremental IKM compares all the columns, column by column, to work out whether a row should be an insert or an update. The following shows the kind of code it generates when ROW_WID is loaded from a database sequence:

    IF NOT EXISTS (
      select 1 from W_LOV_D T
      where T.ROW_WID = S.ROW_WID
      and ((T.CREATED_BY = S.CREATED_BY) or (T.CREATED_BY IS NULL and S.CREATED_BY IS NULL))
      and ...
      < the rest of the column comparisons >
    )

    When running, ODI returns the following error:

    Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "S"."ROW_WID": invalid identifier

    Please suggest another IKM I should use, or another way around this without changing the IKM code...

    Hi Marc,

    Thanks for your reply.

    I solved it. The incremental update process inserts into the I$ table all rows from the source table that do not already match rows in the target table. It does so with a SQL WHERE clause such as the one mentioned in my question:

    WHERE NOT EXISTS ( <target>.COLUMNS = <source>.COLUMNS )

    In the Oracle incremental update IKM, the substitution parameter used to retrieve all the columns is the TARGET TABLE. Because of this, the sequence column ends up in the comparison and the query fails. The IKM SQL Incremental Update uses the INTEGRATION TABLE as the parameter table to pick up the columns; since, as I mentioned, the sequence is executed on the target, it does not pick up the sequence column.

    Simple answer: to solve this, use the IKM SQL Incremental Update.

    Thank you

    SM
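    To illustrate the fix (the staging table I$_W_LOV_D and the sequence name are hypothetical; the exact SQL an IKM generates differs), the working pattern leaves the sequence-loaded ROW_WID out of the column comparison and only assigns it at insert time:

    ```sql
    -- Sketch of an incremental insert where ROW_WID comes from a sequence.
    -- The existence check compares only the non-sequence columns.
    INSERT INTO W_LOV_D (ROW_WID, CREATED_BY /* , ... */)
    SELECT W_LOV_D_SEQ.NEXTVAL, S.CREATED_BY /* , ... */
    FROM I$_W_LOV_D S
    WHERE NOT EXISTS (
      SELECT 1 FROM W_LOV_D T
      WHERE (T.CREATED_BY = S.CREATED_BY
             OR (T.CREATED_BY IS NULL AND S.CREATED_BY IS NULL))
      -- ... rest of the column comparisons, without ROW_WID ...
    );
    ```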

  • Failed to populate the tables using StartSQLRepository.

    Hi Experts,

    I am using ATG 10.0.3 with WebLogic 10.3.6 as the application server and Oracle 10g as the DB.

    While trying to populate the data with the command startSQLRepository -m MyModule -import \config\dynamusic\mymoduledata.xml -repository /atg/commerce/catalog/ProductCatalog, I get the errors below.

    Path: C:\oraclexe\app\oracle\product\10.2.0\server\bin;C:\Program Files\Java\jdk1.6.0_20\bin;C:\apache-ant-1.9.2\bin;C:\ATG\ATG10.0.3\MySQL\win32\bin;..\..\DAS\os_specific_files\i486-unknown-win32;..\..\DAS\os_specific_files\i486-unknown-win32\ice

    Error Sat Sep 21 13:19:10 IST 2013 1379749750141 /atg/dynamo/service/jdbc/JTDataSource --- javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
        at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:645)
        at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
        at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:325)
        at javax.naming.InitialContext.lookup(InitialContext.java:392)
        at atg.nucleus.JNDIReference.getReference(JNDIReference.java:140)
        at atg.nucleus.JNDIReference.doStartService(JNDIReference.java:116)
        at atg.nucleus.GenericService.startService(GenericService.java:496)
        at atg.nucleus.NucleusNameResolver.startService(NucleusNameResolver.java:1458)
        at atg.nucleus.NucleusNameResolver.configureAndStartService(NucleusNameResolver.java:1206)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:826)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:590)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:571)
        at atg.nucleus.NucleusNameResolver.resolveName(NucleusNameResolver.java:416)
        at atg.nucleus.NucleusNameResolver.resolveName(NucleusNameResolver.java:1120)
        at atg.nucleus.ConfigurationRef.getValue(ConfigurationRef.java:81)
        at atg.nucleus.SimpleComponentState.setBeanProperty(SimpleComponentState.java:379)
        at atg.nucleus.SimpleConfigurationState.saveToBean(SimpleConfigurationState.java:218)
        at atg.nucleus.SimpleConfigurationState.configureBean(SimpleConfigurationState.java:241)
        at atg.nucleus.BeanConfigurator.configureBean(BeanConfigurator.java:275)
        at atg.nucleus.PropertyConfiguration.configureService(PropertyConfiguration.java:763)
        at atg.nucleus.SingleNucleusConfigurator.configureService(SingleNucleusConfigurator.java:62)
        at atg.nucleus.NucleusNameResolver.configureService(NucleusNameResolver.java:1392)
        at atg.nucleus.NucleusNameResolver.configureAndStartService(NucleusNameResolver.java:1192)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:826)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:590)
        at atg.nucleus.NucleusNameResolver.createFromName(NucleusNameResolver.java:571)
        at atg.nucleus.NucleusNameResolver.resolveName(NucleusNameResolver.java:416)
        at atg.nucleus.Nucleus.resolveName(Nucleus.java:2536)
        at atg.nucleus.GenericService.resolveName(GenericService.java:315)
        at atg.nucleus.GenericService.resolveName(GenericService.java:367)
        at atg.nucleus.Nucleus.<init>(Nucleus.java:932)
        at atg.nucleus.Nucleus.<init>(Nucleus.java:717)
        at atg.nucleus.Nucleus.startNucleusCheckLicense(Nucleus.java:4144)
        at atg.nucleus.Nucleus.startNucleus(Nucleus.java:4021)
        at atg.adapter.gsa.xml.TemplateParser.runParser(TemplateParser.java:5625)
        at atg.adapter.gsa.xml.TemplateParser.main(TemplateParser.java:5241)

    Error Sat Sep 21 13:19:10 IST 2013 1379749750221 Failed to start service "/atg/dynamo/service/jdbc/JTDataSource": atg.nucleus.ServiceException: could not resolve reference to the JNDI component: ATGProductionDS

    Error Sat Sep 21 13:19:10 IST 2013 1379749750229 /atg/dynamo/service/jdbc/JTDataSource --- javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
        at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:645)
        at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
        at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:325)
        at javax.naming.InitialContext.lookup(InitialContext.java:392)

    I created JTDataSource.properties, connectionPoolName.properties and connectionPoolNameFakeXA.properties under the following path: C:\ATG\ATG10.0.3\home\localconfig\atg\dynamo\service\jdbc.

    JTDataSource.properties has the following key-value pairs:

    $class = atg.nucleus.JNDIReference

    JNDIName = ATGProductionDS


    connectionPoolNameFakeXA.properties:

    $class = atg.service.jdbc.FakeXADataSource

    Server = localhost:1313

    user = system

    needsSeparateUserInfo = false

    URL = jdbc:oracle:thin:@localhost:1521:XE

    readOnly = false

    password = admin

    database =

    driver = oracle.jdbc.xa.client.OracleXADataSource

    I have tried many things but am unable to resolve these errors.

    Please help me fix them!

    Kind regards

    Meera



    When you run the startSQLRepository.bat file, you do not have access to the application server context, so the JNDI name, which only the application server knows about, cannot be resolved. In your case, JTDataSource is pointing to the data source that is set up on your application server.  You need to point your JTDataSource to the FakeXADataSource instead of giving it the JNDI name.

    Your JTDataSource will look like:

    $class = atg.service.jdbc.MonitoredDataSource

    dataSource = connectionPoolNameFakeXA (with its full path)

    and the FakeXADataSource will look like:

    $class = atg.service.jdbc.FakeXADataSource

    driver = oracle.jdbc.OracleDriver

    URL = jdbc:oracle:thin:@localhost:1521:xe

    user = <user>

    password = <password>

    Cheers

    R

  • Index rebuild required after truncating the table and loading data

    Hello


    I have a situation where we truncate a few tables and then load data [content only] into these tables. Is it necessary to rebuild the indexes online afterwards or not?


    And another question: if we drop a few indexes, is the full amount of space released or not? And will re-creating the indexes use the same amount of space? Since I don't have much disk space left, would an online index rebuild be the better idea in this situation?

    Can you please advise on this...


    Which is better: truncating the few tables, loading them, and rebuilding the indexes online - or dropping the few tables, loading them, and re-creating the indexes?

    Can you suggest the best way... We have a time constraint, and currently we don't have enough space on the disk... [The option chosen should not affect space.]

    user13095767 wrote:
    Ok. I have it...

    You mean that if we disable the indexes while loading, then we need to spend the time to rebuild them afterwards.

    And if the indexes stay enabled, then rebuilding them again is not necessary after loading the tables...

    Please reply whether my understanding is correct...

    The above is correct.

    >

    If so, what about the differences in the space occupied in the storage tablespaces during an index rebuild versus a re-create... Does it acquire more space if we re-create [drop and create] an index, or is an online rebuild preferable...?

    The space used is the same for all options.
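    The two approaches discussed above can be sketched as follows (table, index and column names are hypothetical):

    ```sql
    -- Option 1: make the index unusable during the load, then rebuild online.
    ALTER INDEX my_tab_ix UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    TRUNCATE TABLE my_tab;
    -- ... load the data ...
    ALTER INDEX my_tab_ix REBUILD ONLINE;

    -- Option 2: drop the index, load, then re-create it.
    DROP INDEX my_tab_ix;
    -- ... load the data ...
    CREATE INDEX my_tab_ix ON my_tab (col1);
    ```

    Either way the index is written from scratch after the load, which is why the space used ends up the same for all options.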

  • Delete all, then insert into the target table - two steps in one transaction?

    Hello

    We have the following two steps in one ODI package; will they be managed in a single transaction?

    procedure step 1): delete all rows from the target table.
    interface step 2): insert data from the source table into the target table.

    Our problem is that step 2 can take a few minutes, during which the target table can be temporarily unusable for end users who try to access it. Am I right?

    Kind regards
    Marijo

    Hello

    It can be managed in a single transaction by selecting an Append IKM and the TRUNCATE/DELETE_ALL option in the Flow tab of the interface.

    Thanks
    Malezieux
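    One caveat (not stated in the reply above): in Oracle, TRUNCATE is DDL and commits implicitly, so only the DELETE_ALL variant keeps both steps in a single transaction. A sketch with hypothetical table names:

    ```sql
    -- Delete and reload in one transaction: other sessions keep
    -- seeing the old rows until the COMMIT.
    DELETE FROM target_table;

    INSERT INTO target_table (col1, col2)
    SELECT col1, col2 FROM source_table;

    COMMIT;
    ```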

  • Cannot install programs, error 1628: failed to complete installation, after installing Service Pack 3

    Cannot install programs; error 1628: failed to complete installation, after installing Service Pack 3. What do I have to do?

    Assuming you meant WinXP SP3...

    Why was SP3 not installed for over a year?

    What antivirus application or security suite is installed, and is your subscription current?  What anti-spyware applications (other than Defender)?  What third-party firewall (if applicable)?  Were any of these applications running in the background when you tried to install WinXP SP3?

    Has a (another) Norton or McAfee application ever been installed on this machine (for example, a free trial version that was preinstalled when you bought it)?

    ~ Robear Dyer (PA Bear) ~ MS MVP (that is to say, mail, security, Windows & Update Services) since 2002 ~ WARNING: MS MVPs neither represent nor work for Microsoft

  • Failed to initialize the installation program. Error 110. On Windows XP.

    Whenever I try to install LV6.1 or the LV6.1 DSC Run-Time System on my Windows XP machine, I always get the following error message:

    Failed to initialize the installation program. Error 110.

    I think it has something to do with the MSI or "Windows Installer". I tried to uninstall all of the National Instruments products, but it did not help. Any ideas?

    Hello. Thank you. I tried that, but unfortunately it did not help. However, after doing a little research, I found a solution by following these steps:

    1. reboot Windows XP in Safe Mode
    2. double-click the Setup program to start the installation process
    3. cancel the installation process after it starts
    4. restart Windows in normal mode

    Now all my MSI applications with setup.exe work correctly! Quite strange. Just for the record: this problem was not due to a virus protection program (as many people first assumed), and it kept me from running any (MSI?) installer from a CD.

  • ODI 12c: do I have to reimport the target tables after I created some indexes on them?

    Hello

    I created a mapping after importing the target tables into my ODI 12c Studio. The mapping is complete and works without any errors. Now I intend to create some indexes that I am required to add. So, do I have to reimport these tables after creating the indexes?

    Thank you

    SM

    Hello

    Here's the thing. For the indexes that you created: if you plan to use one as a unique index, or as your update key in ODI, you can go ahead and add it manually to the model, or reverse-engineer the table again. However, if these indexes are purely there to make your reporting faster and will have no impact on your data loading, then they are irrelevant to ODI. In the future, when you make changes to the table and refresh it in ODI, the indexes will be added automatically.

    Thank you

    Ajay

  • The journalized data is not loaded into the target table with CDC-consistent?

    Hello

    I tried the CDC-simple concept on a single table.

    I loaded the journalized table (only the changed records) and inserted into the target with CDC-simple. It's working fine.

    With CDC-consistent, however, the journalized data is not loaded into the target table.

    For this, I used a data model that has 3 datastores. Loading without the journalized-data option works very well.

    When I try to load the journalized tables into the target table in an interface, it runs fine.

    To do this, I chose "Journalized data only".

    But the changed records have not reached the target table; the target table is empty after the interface is executed.

    err1.png

    err4.png

    err2.png

    err3.png

    I chose the insert option in the IKM. But the journalized data is not inserted into the target table.

    Please help me.

    Thanks in advance,

    A.Kavya.

    Hello

    You must EXTEND WINDOW and LOCK SUBSCRIBERS before consuming the CDC data:

    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/data_capture.htm#ODIDG283

    Afterwards, you UNLOCK SUBSCRIBERS and PURGE JOURNAL.

    It is better to build a package to automate the whole thing.

  • Data export from EDQ to a staging table should not truncate the table

    Hello friends,

    Please find below my requirement and do the needful.

    We export cleansed data to staging tables. Whenever we export data, EDQ truncates the table and inserts the new data set into it.

    My requirement is that instead of truncating the table before inserting the data, EDQ must append these records; it should not truncate the table.

    Please let me know how to configure this in OEDQ; your help is appreciated.

    Thank you, Prasad

    It could not be easier. Double-click the export task in your job and change the mode to append.

  • A target table is loaded from two different sources with the same columns, but one source is a database and the other a flat file

    Hope you are all doing well.

    I have a business problem to implement in ODI 11g. Here it is: I'm trying to load a target table from two sources that have the same column names, but one source is in file format and the other is in an Oracle database.

    Here is what I am thinking: I'll create two mappings in the same interface using a Union between the sources. But I don't know how the interface would connect to the different logical architectures to reach the two different sources.

    Thank you

    SM

    You are on the right track: do it all in a single interface. Follow these steps:

    (1) Drag your file data model into the source designer pane and your target table data model into the target pane.

    (2) Map all relevant columns.

    (3) In the source designer, create a new dataset and choose UNION as the set operator (this will create a separate tab in the source designer pane).

    (4) Select the new dataset tab in the source designer pane and drag your Oracle source table data model into the source designer. Map all columns that are relevant to the target.

    (5) Make sure that your staging location is set to a relational technology; in this case the target would be an ideal candidate, because that is where ODI will stage the data from the two sources (file and Oracle) and perform the UNION before loading into the target.

    If you want to see some screenshots showing the steps above, take a look at http://odiexperts.com/11g-oracle-data-integrator-part-611g-union-minus-intersect/
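    Conceptually, the interface built in the steps above ends up generating something like the following on the staging/target server (names are hypothetical; the LKM first loads the flat file into a C$ staging table):

    ```sql
    -- C$_SRC_FILE holds the flat-file rows loaded by the LKM;
    -- SRC_ORACLE_TAB is the Oracle source table.
    INSERT INTO target_table (col1, col2)
    SELECT col1, col2 FROM C$_SRC_FILE
    UNION
    SELECT col1, col2 FROM SRC_ORACLE_TAB;
    ```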

  • How to MERGE when the target table contains invisible columns?

    Oracle Database 12.1.0.2.0 running on Oracle Linux 6.4:

    While studying MERGE with invisible columns, I discovered that invisible columns in the target table cannot be read. The workaround seems to be:

    MERGE INTO (SELECT <column list> FROM <target table>) <alias>
    

    However, the documentation does not seem to allow this. Here are the details.

    Test data

    > CREATE TABLE t_target(
      k1 NUMBER PRIMARY KEY,
      c1 NUMBER,
      i1 NUMBER invisible
    )
    
    table T_TARGET created.
    
    > INSERT INTO t_target (k1,c1,i1)
    SELECT 2, 2, 2 FROM dual
    UNION ALL
    SELECT 3, 3, 3 FROM dual
    UNION ALL
    SELECT 4, 4, 4 FROM dual
    
    3 rows inserted.
    
    > CREATE TABLE t_source(
      k1 NUMBER PRIMARY KEY,
      c1 NUMBER,
      i1 NUMBER invisible
    )
    table T_SOURCE created.
    
    > INSERT INTO t_source (k1,c1,i1)
    SELECT 1, 1, 1 FROM dual
    UNION ALL
    SELECT 2, 2, 9999 FROM dual
    UNION ALL
    SELECT 3, 3, 3 FROM dual
    
    3 rows inserted.
    

    First try

    Please note that I have a WHERE clause in the WHEN MATCHED clause. Its purpose is to avoid updating a row when the data is already correct. The WHERE clause tries to read the invisible column of the target table.

    > MERGE INTO t_target o
    USING (
      SELECT k1, c1, i1 FROM t_source
    ) n
    ON (o.k1 = n.k1)
    WHEN MATCHED THEN UPDATE SET
      c1=n.c1, i1=n.i1
      WHERE 1 IN (
        decode(o.c1,n.c1,0,1),
        decode(o.i1,n.i1,0,1)
      )
    WHEN NOT MATCHED THEN INSERT
      (k1, c1, i1)
      VALUES(n.k1, n.c1, n.i1)
    ...
    Error at Command Line : 10 Column : 12
    Error report -
    SQL Error: ORA-00904: "O"."I1": invalid identifier
    

    As you can see, I put a subquery after the USING clause so that 'n.i1' would be 'visible', but this is not enough, since the 'I1' column in the target table is still invisible.

    Second test

    > MERGE INTO (
      SELECT k1, c1, i1 FROM t_target
    ) o
    USING (
      SELECT k1, c1, i1 FROM t_source
    ) n
    ON (o.k1 = n.k1)
    WHEN MATCHED THEN UPDATE SET
      c1=n.c1, i1=n.i1
      WHERE 1 IN (
        decode(o.c1,n.c1,0,1),
        decode(o.i1,n.i1,0,1)
      )
    WHEN NOT MATCHED THEN INSERT
      (k1, c1, i1)
      VALUES(n.k1, n.c1, n.i1)
    
    2 rows merged.
    

    Here I used a subquery in the INTO clause as well, and it worked.

    Unfortunately, this does not seem to be permitted by the documentation: it says INTO refers to a table or a view as schema objects.


    My question is:

    How can I refer to invisible columns in the target table without creating a new object? My workaround using a subquery seems to work very well, but can I recommend it if it is not documented?

    Can I substitute an inline view for a view and still be supported?

    >
    While studying MERGE with invisible columns, I discovered that invisible columns in the target table cannot be read. The workaround seems to be ... However, the documentation does not seem to allow this.

    Here I used a subquery in the INTO clause as well, and it worked. Unfortunately, this does not seem to be permitted by the documentation: it says INTO refers to a table or a view as schema objects.

    My question is: how can I refer to invisible columns in the target table without creating a new object? My workaround using a subquery seems to work very well, but can I recommend it if it is not documented? Can I substitute an inline view for a view and still be supported?
    >

    But the documentation DOES allow it! You are using a view - an inline view - and those can be modified in a MERGE statement.

    All versions of the doc for MERGE since 9i specifically say this:

    INTO clause

    Use the INTO clause to specify the target table or view you are updating or inserting into. In order to merge data into a view, the view must be updatable. Refer to "Notes on Updatable Views" for more information.

    Here are the links to the docs for 9i, 10g, 11g and 12c; ALL OF THEM except 9i (that is, the last three) have the EXACT clause above.

    SQL Statements: DROP SEQUENCE to ROLLBACK, 15 of 19

    http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9016.htm

    http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm

    https://docs.oracle.com/database/121/SQLRF/statements_9016.htm

    The 9i doc does not have this specific quote in the INTO clause section, but it does have this quote a little later:

    Restrictions on Updating a View
    • You cannot specify DEFAULT when updating a view.
    • You cannot update a column that is referenced in the ON condition clause.
    merge_insert_clause

    The merge_insert_clause specifies the values to insert into the columns of the target table if the condition of the ON clause is false. If the insert clause is executed, then all insert triggers defined on the target table are activated.

    Restrictions on Merging into a View

    You cannot specify DEFAULT when updating a view.

    So your "workaround" isn't really just a workaround. You SHOULD use an inline view if you need to reference an "invisible" column in the target table, since otherwise those columns are INVISIBLE!

    My workaround using a subquery solution seems to work very well, but can I recommend if it is not documented?

    You can recommend it because IT IS documented.

Maybe you are looking for

  • Portege A600-12T - freezes all the time, 10 times per day

    My laptop freezes all the time, maybe 10 times a day - does anyone have a solution for this? Even watching a movie or surfing the net is very difficult. I need the computer for my work.

  • Edit 3d point clouds

    Hello. Does anyone know if it is possible to edit a 3D point cloud? I want to be able to move the projection palette somewhere else and could not find how. Thanks in advance, Gabriel

  • microSD 8 GB not detected

    Formatted a new 8 GB microSD card in my phone, but the laptop doesn't see it in the adapter in the slot. 1 GB and 2 GB cards work fine.

  • Auto-complete textfield input

    Can someone guide me on how to use an auto-complete textfield for filtering values from a database table (SQLite)? The link http://docs.blackberry.com/en/developers/deliverables/18125/Autocomplete_text_field_1200231_11.jsp shows it with predefined tables. I wa

  • How to get the decimal value of an international currency string

    Hi all, how do I get the decimal values from an international currency string?