Limit the data in the source.

Hi all

I want to use the Oracle SAMPLE clause on my source, which is an Oracle DB. I want to limit the amount of data that I bring over to my target, which is another Oracle database.

I don't know where I have to specify the condition "SAMPLE (10)" so that only 10% of the source data is extracted. Can I do this using ODI?

Kindly let me know.

Kind regards
John

Ok...

It's as simple as:

(1) duplicate the LKM

(2) rename the copy (I named mine "LKM SQL to SQL - sample")

(3) right-click the LKM and choose "Insert Option"

(4) use the following values in the new option window:

Name: USE_SAMPLE
Type: Value
Description: write just the percentage value (for example: 35)

Leave the "Default value" text box empty. The other fields are informational.

(5) edit the "Load Data" step of the LKM and go to the "Command on Source" tab.

(6) add the following code as the last line of the existing command (an example of the resulting SAMPLE clause is shown after these steps):

<% if (!odiRef.getOption("USE_SAMPLE").equals("")) {%> SAMPLE (<%=odiRef.getOption("USE_SAMPLE")%>) <%}%>

(7) in the interface, change the LKM from the original one to the new one

(8) go to the USE_SAMPLE option and set the value you want.
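
For reference, this is what the Oracle SAMPLE clause does in a plain query; with USE_SAMPLE set to 35, the generated source command should end up carrying a clause like the one below (EMP is just a placeholder table name):

-- SAMPLE (35) asks Oracle to read roughly 35% of the table's rows
SELECT *
  FROM emp SAMPLE (35);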

Does it work for you?

Cezar Santos
http://odiexperts.com

Tags: Business Intelligence

Similar Questions

  • OutOfMemoryError: GC overhead limit exceeded when loading directly from the source using IKM SQL to SQL. Increasing ODI_MAX_HEAP does not solve the problem.

    OutOfMemoryError: GC overhead limit exceeded when executing an interface that loads directly SQL to SQL with no work table.

    I get the error "exception OutOfMemoryError: GC overhead limit exceeded" when executing an interface that does a direct load using IKM SQL to SQL Append, with a 150-million-row source table.

    I have increased ODI_MAX_HEAP and the interface ran longer but still failed. I am already at ODI_MAX_HEAP=12560m; I tested with ODI_MAX_HEAP=52560m and still get the error.

    I am monitoring the memory of the server and I still have memory available...

    Apart from the memory problem, I know that this type of load should be possible, because the data load step of LKM SQL to Oracle is able to load the C$ work table. Ideally, I want to emulate this behavior using IKM SQL to SQL.

    1 - What is the right path to follow here? (change the memory parameters or modify the IKM?)

    2 - Any ideas on how to solve the OutOfMemoryError: GC overhead limit exceeded error? (GC means Garbage Collector)

    Running the interface in simulation mode generates this code:

    Load (Source) command:

    select
        source-tbl.col1 COL1,
        source-tbl.col2 COL2,
        source-tbl."COL3" COL3
    from public.source-tbl AS source-tbl
    where
        (1=1)

    Default command (Destination):

    insert into source-tbl
    (
        col1,
        col2,
        COL3
    )
    values
    (
        :COL1,
        :COL2,
        :COL3
    )

    My experience with ODI is very limited, so I don't know much about changing the code of the KMs.

    Thanks in advance.

    Found a workaround for the GC overhead limit exceeded error:

    - in my case I was running from the IDE, so the changes made to odiparams.sh had no effect.

    - This means that I needed to change the JVM settings in:

    $ODI_HOME/oracledi/client/odi/bin/odi.conf

    AddVMOption -XX:MaxPermSize=NNNNM

    $ODI_HOME/oracledi/client/ide/bin/ide.conf

    AddVMOption -XmxNNNNM
    AddVMOption -XmsNNNNM

    Where NNNN is a higher value.

  • How can I use notifiers to send data from different sources to the same chart?

    Hello

    I am using the "Continuous Measurement and Logging" project template that comes with LabVIEW 2013.

    It is extremely helpful in understanding the messaging between the acquisition, display and logging loops. (Thank you NI!)

    I ran into a snag though.

    I want to change it so that my display loop receives data notifications from two acquisition sources via notifiers.

    I have trouble getting the data from the two sources to display on one graph.

    I've isolated the problem in the attached vi.

    Here's what happens:

    1. I create two parallel data loops and send the data to a third parallel loop with notifiers.

    2. The third loop only receives data from one of the loops, because one of the notification receivers just times out instead of receiving data.

    Can anyone suggest how I can fix this?

    Thank you.

    -Matt

    Here's my modification of your VI. I put notes on the block diagram to explain the changes. It uses a queue for the data transfer to avoid data loss. It uses a notifier to stop the loops. All local variables and Value property nodes have been eliminated.

    The way the loops are stopped probably leaves some data in the queue - no more than one or two iterations of each of the data acquisition loops. If you need to ensure that all data has been displayed (or recorded in a real application), then you must stop the acquisition loops first and read the queue until you know it is empty and both other loops have stopped. Then stop the display loop and release the queue and the notifier.

    Lynn

  • Limit data usage on a home network

    Hello

    I currently have three computers connected to a wifi modem: two computers running Vista and a Mac running Snow Leopard. I was wondering if it is possible to limit the daily data usage per computer? I have experimented with the router/modem settings, but there is no built-in feature for this.

    Modem: 2701 HGV - W (bigpond)

    Any help would be fantastic.

    I have no idea about the Mac running SL (you will need to post that question in a Mac forum or contact Apple support), but as far as Vista goes, certain limits can be set partly by using Parental Controls and partly by using permissions (it depends on exactly what you mean, which I address below).  Neither will actually limit network data usage, although they can help you set other boundaries that may help.

    In Vista, here is the procedure for setting up restrictions: http://windows.microsoft.com/en-US/windows-vista/Set-up-Parental-Controls. Once you have access to Parental Controls, go to the time limits section and follow the instructions there (or the other sections).  Technically it is not a way to limit data usage as such, if by that you mean the amount of data accessed over a given period, but you can limit which data is used, which programs are available, which websites are allowed, when the computer can be used, and several other things.  To limit the available data you would use permissions rather than Parental Controls, but that is largely handled already, since the system enforces security rights that prevent users from accessing other users' data, and you can do the same for any folder you create outside the profiles (the profile folders are already protected - even from you).  If you mean data over the Internet, then there is no way I know of in Vista or IE to limit it explicitly per user.  Keep in mind that this only works with standard user accounts, since an administrator account can simply override the limitations or remove them altogether.  I know that's not really what you asked, but I thought I should bring it up since it is closely related.

    Now for the good news: what you want can be done with free 3rd-party software.  See the following: http://www.ampercent.com/control-internet-data-usage/4572/.  This seems to be exactly what you want and should do the trick.

    I hope this helps.

    Good luck!

    Lorien - MCSA/MCSE/Network+/A+ - if this post solves your problem, please click the "Mark as Answer" or "Helpful" button at the top of this message. By marking a post as Answered or Helpful, you help others find the answer faster.

  • How to change the data source of an application with a deployment plan

    Hello

    JDev Version: 12.1.3

    I can't change the bc4j data source with the deployment plan.

    Any example on this requirement?

    Thank you

    Anil

    Well, if you take a look at:

    http://Biemond.blogspot.com/2009/04/using-WebLogic-deployment-plan-to.html

    (one of the links in the link you got), you will see Edwin Biemond's answer:

    "you must change the configuration of module of the Application so that it uses a data source. and in the deployment of applications disable deployment of jdbc connection.

    Now you is enough to make the good source of data on each wls and deploy ears. »

    The simplest (and probably the best) way is, as I told you and as you also mentioned: open up JDeveloper, change it declaratively and re-deploy the ears.

  • Is a database table required for temporary interfaces with a flat file data source?

    Folks, this is my situation in ODI 11.1.1.7:

    1. I have a temporary (yellow) interface, called MJ_TEMP_INT, which loads data from TWO source datasets into a temporary target (TEMP_TARG). One dataset is pulled from a table while the other dataset is extracted from a flat file.  A union is made on the datasets.
    2. I then create another interface, called MJ_INT, which uses MJ_TEMP_INT as the source and whose target is a real database table called "REAL_TARGET".

    Two questions:

    1. When I run my second interface (MJ_INT), I get the message "ORA-00942: table or view does not exist" because it is looking for a real TEMP_TARG db table. Why do I have to have one? Is it because I am pulling from a flat file?
    2. On my second interface (MJ_INT), when I look at the property sheet of my source MJ_TEMP_INT (yellow) interface, the checkbox "Use temporary interface as a derived table" is DISABLED.  Why? Is it also because my temporary interface is pulling from a flat file?

    I am attaching a file that shows a screenshot of my ODI Studio.

    Furthermore, IF my temporary source interface has only a single dataset, pulling from a database table into a temporary target table called MJ_TEMP2_TARG, and I then use this temporary interface as a source for another real db target table (REAL2_TARGET), THEN everything works.  ODI does not require me to have a real MJ_TEMP2_TARG database table, the checkbox for "Use temporary interface as a derived table" is NOT disabled, and my REAL2_TARGET table gets populated.

    Thank you in advance.

    Mr. Jamal.

    You quite rightly assume that the reason you are having issues is because you are trying to join in a file. A file always has to be materialized in the staging area as a temporary table and then have the data loaded into it.

  • Generic procedure to load data from source tables to target tables

    Hi all

    I want to create a generic procedure to load data from X number of source tables into X number of target tables.

    such as:

    Source1-> Target1

    Source2-> Target2

    Source3 -> Target3

    Each target table has the same structure as the source table.

    The indexes are the same as well. Constraints are not predefined on the source or target tables. There is no business logic involved in loading the data.

    It is simply an append.

    This procedure will be scheduled during off hours, probably only once a month.

    I created a procedure that does this, along these lines:

    (1) take the source and target table as inputs to the procedure.

    (2) find the indexes on the target table.

    (3) get the metadata (DDL) of the target table indexes and save it.

    (4) drop the above indexes.

    (5) load the data from the source into the target (append).

    (6) re-create the indexes on the target table using the collected metadata.

    (7) delete the records in the source table.

    sample proc (error logging is missing):

    CREATE OR REPLACE PROCEDURE pp_load_source_target (p_source_table IN VARCHAR2,
                                                        p_target_table IN VARCHAR2)
    IS
       TYPE v_varchar_tbl IS TABLE OF VARCHAR2 (32);
       l_varchar_tbl    v_varchar_tbl;

       TYPE v_clob_tbl_ind IS TABLE OF VARCHAR2 (32767) INDEX BY PLS_INTEGER;
       l_clob_tbl_ind   v_clob_tbl_ind;

       g_owner  CONSTANT VARCHAR2 (10) := 'STG';
       g_object CONSTANT VARCHAR2 (6)  := 'INDEX';
    BEGIN
       -- collect the names of the indexes on the target table
       SELECT DISTINCT index_name
         BULK COLLECT INTO l_varchar_tbl
         FROM all_indexes
        WHERE table_name = p_target_table
          AND owner = g_owner;

       -- save the DDL of each index so it can be recreated later
       FOR k IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
          SELECT DBMS_METADATA.GET_DDL (g_object, l_varchar_tbl (k), g_owner)
            INTO l_clob_tbl_ind (k)
            FROM DUAL;
       END LOOP;

       -- drop the indexes before the load
       FOR i IN l_varchar_tbl.FIRST .. l_varchar_tbl.LAST LOOP
          EXECUTE IMMEDIATE 'DROP INDEX ' || l_varchar_tbl (i);
          DBMS_OUTPUT.PUT_LINE ('INDEX DROPPED: ' || l_varchar_tbl (i));
       END LOOP;

       -- direct-path load from source to target
       EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || p_target_table ||
                         ' SELECT * FROM ' || p_source_table;
       COMMIT;

       -- recreate the indexes from the saved DDL
       FOR s IN l_clob_tbl_ind.FIRST .. l_clob_tbl_ind.LAST LOOP
          EXECUTE IMMEDIATE l_clob_tbl_ind (s);
       END LOOP;

       -- empty the source table
       EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_source_table;
    END pp_load_source_target;
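
    A call for one source/target pair would then look like this (SOURCE1/TARGET1 are just the example names from above, and the STG owner is assumed as in the procedure):

    BEGIN
       pp_load_source_target (p_source_table => 'SOURCE1',
                              p_target_table => 'TARGET1');
    END;
    /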

    I want to know:

    1. Has anyone built a similar solution, and if so, what kind of challenges did you face?

    2. Is this a good approach?

    3. How can I minimize the risk of the data load failing?

    Why not just

    create table to_check as
    select 'SOURCE1' source, 'TARGET1' target, 'Y' flag from dual union all
    select 'SOURCE2', 'TARGET2', 'Y' from dual union all
    select 'SOURCE3', 'TARGET3', 'Y' from dual union all
    select 'SOURCE4', 'TARGET4', 'Y' from dual union all
    select 'SOURCE5', 'TARGET5', 'Y' from dual

    SOURCE   TARGET   FLAG
    SOURCE1  TARGET1  Y
    SOURCE2  TARGET2  Y
    SOURCE3  TARGET3  Y
    SOURCE4  TARGET4  Y
    SOURCE5  TARGET5  Y

    declare
       the_command varchar2 (1000);
    begin
       for r in (select source, target from to_check where flag = 'Y')
       loop
          -- build and show the append insert for this source/target pair
          the_command := 'insert /*+ append */ into ' || r.target || ' select * from ' || r.source;
          dbms_output.put_line (the_command);
          -- execute immediate the_command;

          the_command := 'truncate table ' || r.source || ' drop storage';
          dbms_output.put_line (the_command);
          -- execute immediate the_command;

          dbms_output.put_line (r.source || ' table processed');
       end loop;
    end;

    insert /*+ append */ into TARGET1 select * from SOURCE1
    truncate table SOURCE1 drop storage
    SOURCE1 table processed

    insert /*+ append */ into TARGET2 select * from SOURCE2
    truncate table SOURCE2 drop storage
    SOURCE2 table processed

    insert /*+ append */ into TARGET3 select * from SOURCE3
    truncate table SOURCE3 drop storage
    SOURCE3 table processed

    insert /*+ append */ into TARGET4 select * from SOURCE4
    truncate table SOURCE4 drop storage
    SOURCE4 table processed

    insert /*+ append */ into TARGET5 select * from SOURCE5
    truncate table SOURCE5 drop storage
    SOURCE5 table processed

    Regards

    Etbin

  • Shrinking a VM HD - does the destination datastore need enough space for the source VM or the destination VM?

    Hi all

    I have an existing file server with its HDs configured as:

    HD1: 60 GB

    HD2: 20 GB

    HD3: 650 GB

    Total: 730 GB

    I intend to shrink HD3 to 250 GB, which will give me a new total of 330 GB.

    My question is: when I run the VMware Converter Standalone process and get to the step where I select the destination, obviously I need to select a datastore that can fit the destination virtual machine.

    My concern is that it shows the size of the source disks (730 GB) (see image below), and that for some reason, during part of the conversion process, the destination datastore might require storage for the 730 GB source VM as opposed to the "new" 330 GB size.

    source disk size.PNG

    Can anyone confirm?

    Thank you

    There is no need for 730 GB free on the datastore in order to run the conversion. 330 GB free (the size after reduction) would be sufficient.

    If you also select thin provisioning, you could even start the conversion with less free space on the datastore, but it may fail at some point if the actual data does not fit.

  • How to limit the I/O on a datastore?

    Hello

    Not sure if Storage I/O Control can accomplish what I want, so here's the scope: we have an EMC e 3300 connected via iSCSI to our ESXi 5.0 U2 hosts. We have a virtual machine that generates a lot of IO.

    Is there any way to limit the IOPS of a specific virtual machine on the datastore, so that it does not saturate the whole iSCSI pipe?

    Thank you

    There is an IOPS limit option on the virtual disk... you can set it there.

    http://KB.VMware.com/kb/1038241

  • How to prevent duplicate data uploads via WebADI

    Hello Experts,

    The issue concerns data upload through WebADI. Currently the system allows the user to upload the same set of records repeatedly, even if they have already been pushed to the project.

    This behavior leads to users uploading documents twice. A valid case could be when the user uploads the data sheet for the first time, sees the sheet in a non-responsive state, while the upload has actually completed
    in the background.

    Now when the user tries to upload again, the system does not validate whether the records have already been processed and sent to Oracle Projects.

    Is it possible to flag the rows during processing, in order to avoid duplicate uploads?

    Are there any provisions for custom extensions?

    Kind regards

    Shan

    Hi Shan,

    Here are the details on the setup of your Transaction Source. Try and use a unique transaction ID...

    WebADI imports data into Oracle Projects as information keyed in by the user, coming from outside of Projects.

    But if the business case does not want duplicates to come in, you can write a Transaction Import Client Extension (pre-import) where you can put your custom logic to check whether the record already exists in the system. If it already exists, you can stop the upload.

    Allow Duplicate Reference

    Enable this option to allow multiple transactions with this transaction source to use the same original system reference. If you enable this option, you cannot uniquely identify the item by transaction source and original system reference.

    Regards

    Christopher K

  • How to validate the data in the source

    Hi all


    I am new to ODI. My process can be called in 2 modes: Validate and Load.

    a. Validate - V - should perform validation on the source data according to the filter criteria
    b. Load - L - should perform the mandatory checks and load the data into the target table

    My concern is how to do only the validation on the source data. Please share any ideas on this...


    Thanks in advance...

    By dragging the datastore into the package and applying a static check, you can do your validation without loading.
    This validation step will copy the invalid data into the error table (E$) and you can also decide whether you want to remove it from the source (and keep the data only in the error table).
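
    To give an idea of what such a static check amounts to, here is a rough sketch in plain SQL: assuming a hypothetical condition AMOUNT > 0 declared on a datastore ORDERS, the check essentially copies the violating rows into the error table (the table and column names are illustrative, not the exact SQL ODI generates):

    -- rows that violate the condition end up in the E$ error table
    INSERT INTO e$_orders (err_mess, order_id, amount)
    SELECT 'Amount must be positive', order_id, amount
      FROM orders
     WHERE NOT (amount > 0);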

  • Schedule a job that runs based on the max date in the source table

    Hi all

    I am trying to automate something in my SQL environment. Basically, I have a table I do reporting and analysis on, which is a subset of records from a source table. For the purposes of this example, I'll call the source table "src_table" and my target table "tgt_table". Essentially the source table gets updated with a new month's worth of data (based on order_dt), and I have a procedure I call when the source table is updated to load the new month of records into my target table. So let's say June 2012 data is loaded into the source table; I call my procedure, which has a date parameter, and enter June 2012 as the starting date so that it looks for new records in the source table from June 1, 2012 onwards and loads them into my target table. For various reasons I won't get into, my target cannot be a view; it has to be a standalone table.

    The problem is that the source table never gets updated at the same interval... sometimes they will load a new month of data at the beginning of the following month, sometimes in the middle of the month... sometimes I have to wait a few months to get another month of data. So, whenever I get an email notification telling me another month of data has been loaded into the source table, I run a Windows batch file that calls my procedure... I just have to change the start date (based on the order_dt column) and run it.

    What I want to do is automate the process so that I no longer have to do this. What I'm thinking of is creating a job that runs every day, or even at the end of each month; this job basically gets the max order date from the source table, and if that date is greater than the max order date in my target table, it runs my procedure, which loads the new month's worth of data into my target table. So, for example, the max order date in my target table is May 31, 2012. The job runs at a certain time and checks the max date on the source table; if that max date is May 31, nothing happens... new data has not been loaded, so the procedure is not run. If the max date on the source table is June 30, 2012, the max date is greater than the max date in my target table, so the procedure runs. I also need the procedure to insert into the target table the records from the source table where the order date in the source table is >= the max order date in the target table.

    In a word, this is what I need, but I'm not sure how to set it up. The procedure works fine... all I need is a wrapper procedure that runs as a scheduled task every X days, checks the max order date on the source table and, if it exceeds the max order date on my target table, runs my procedure. My procedure is called "insert_new_orders" and when I run it, I run it like:

    execute insert_new_orders('01-jun-2012');

    If anyone can help me with this, that would be great :-)

    Thank you
    Ed

    To start:
    You must remove the EXECUTE keyword.
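
    A minimal sketch of what such a wrapper could look like, scheduled with DBMS_SCHEDULER (src_table, tgt_table, order_dt and insert_new_orders come from the question; the wrapper name check_and_load_orders and the job definition are assumptions):

    CREATE OR REPLACE PROCEDURE check_and_load_orders
    IS
       v_src_max_dt   DATE;
       v_tgt_max_dt   DATE;
    BEGIN
       -- compare the latest order date in source and target
       SELECT MAX (order_dt) INTO v_src_max_dt FROM src_table;
       SELECT MAX (order_dt) INTO v_tgt_max_dt FROM tgt_table;

       -- only call the existing load procedure when new data has arrived;
       -- inside PL/SQL the procedure is called directly, without the EXECUTE keyword
       IF v_src_max_dt > v_tgt_max_dt THEN
          insert_new_orders (v_tgt_max_dt);
       END IF;
    END check_and_load_orders;
    /

    -- run the check once a day
    BEGIN
       DBMS_SCHEDULER.create_job (
          job_name        => 'CHECK_AND_LOAD_ORDERS_JOB',
          job_type        => 'STORED_PROCEDURE',
          job_action      => 'CHECK_AND_LOAD_ORDERS',
          repeat_interval => 'FREQ=DAILY; BYHOUR=1',
          enabled         => TRUE);
    END;
    /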

  • Write data to the Source folder

    I'm trying to change part of a script that I use for work so that it writes to the source folder of the file it is run on, rather than to a specific folder (in this case, the desktop).  Can someone help me understand how to change and/or add to the code below?

    // create the reference to the csv file
    var file = new File(Folder.desktop + "/Data.csv");

    // open the csv file in append mode
    file.open("e", "TEXT", "?");

    // set the line feed depending on the operating system in use
    $.os.search(/windows/i) != -1 ? file.lineFeed = "windows" : file.lineFeed = "macintosh";

    // jump to the end of the file
    file.seek(0, 2);

    // write all the required information to the csv file:
    // document name, data1, data2, data3, etc.
    file.writeln(decodeURI(activeDocument.name) + "," + data1 + "," + data2 + "," + data3 + "," + data4.toString() + "," + data5.toString());

    // close the csv file
    file.close();

    Now, I'm assuming that the part to change here is Folder.desktop.  And I think I will need to change it to something like Folder.getFolderName.  getFolderName is a variable used in other code that checks the source directory of the current file.  This source-directory code is what I don't know how to write.  The script would have to do the check every time it runs, as I want it to be able to work with subfolders during batch runs, creating a new file in the subfolder when working with those files.  Anyone have any ideas on how to proceed?  Any help is very much appreciated!

    dgolberg

    Edit: I forgot to mention; it would be nice if it could also store the name of the folder the source file is in, in a variable, so it can be written to the .csv file as well.  This is not as important as the code mentioned above, but it would be helpful nonetheless, if possible.

    No problem, I was able to solve it on my own after all.  I thought it would be much harder than it actually was, but I was able to do it by simply changing

    var foldLoc = new File("C:\\Temp\\");

    var file = new File(foldLoc + "/Data.csv");

    TO

    var foldLoc = app.activeDocument.path;

    var file = new File(foldLoc + "/Data.csv");

    and now it saves the data and the .csv file in the same folder as the currently open project.  Thanks for trying to help; it is much appreciated.

    dgolberg

  • Data synchronization after restoring the source or target database

    Hi DBAs,

    I've implemented unidirectional Streams replication at the schema level between transactional databases (excluding some tables). It works very well without any problems after tuning Streams with the help of the health check scripts. My question is: if for some reason I have to do a database-level recovery on the source database, do I then need to rebuild Streams from scratch? The database size is close to 2 TB, and after the recovery, even using Data Pump with parallelism, it could take hours, and the source won't be available even after the recovery (while the Data Pump job is running - if the source is online, I was getting ORA-01555 errors and the job would not finish).

    Please advise, given the above circumstances, what is the best way to re-synchronize the data between source and target.

    Thank you
    -Samar-

    I would export the data dictionary (build) once again, just to avoid any MVDD issue, something to do right after the restore of the DB,
    because I'm not sure how that restored DB is perceived in terms of DBID. Have you checked in RMAN whether the source db is considered to be the current incarnation?

    Then look at what messages are in the target error queues. I think you will have problems moving the first_scn
    due to differences on some SCNs in the source's system.logmnr_restart_ckpt$; the system will think it has holes.
    You may need to manually remove a bunch of rows to allow Streams to jump over the range of SCNs
    that have been sent and are no longer present in the source's system.logmnr_restart_ckpt$.
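
    For reference, the dictionary build mentioned above is done with the DBMS_CAPTURE_ADM.BUILD procedure; a minimal sketch, run on the source database:

    SET SERVEROUTPUT ON
    DECLARE
       v_first_scn  NUMBER;
    BEGIN
       -- re-extract the data dictionary to the redo log and report the resulting first SCN
       DBMS_CAPTURE_ADM.BUILD (first_scn => v_first_scn);
       DBMS_OUTPUT.PUT_LINE ('First SCN: ' || v_first_scn);
    END;
    /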

  • Filter on the Source data store

    Hello

    I am new to ODI. My source and target are both Oracle. I have a table in the source that has 10000 rows. I reverse-engineered the source table and was able to view the data on the source datastore. However, I want to filter the data on the source so that only a few records are sent, not all 10000. I wrote a filter on a particular column of the source datastore, but I still see all the records when I click View Data. Any suggestions?

    Thank you
    Arun


    Arun,

    I don't think that's possible. You want to look at the filtered source data before loading, to make sure that only correct data is loaded into the target.

    A slightly more involved option would be to create a temporary interface (yellow interface), selecting the Sunopsis Memory Engine for the temporary target table.
    You can then right-click that in-memory temporary table to view its data.
