Informatica logic for the Balance_Global1_Amt column in the W_GL_BALANCE_F table

Can someone help me with the following scenario?

I am getting no data in the Balance_Global1_Amt column of the W_GL_BALANCE_F table.
Can someone help me understand the logic in the transformations of the SDE_PSFT_ExchangeRateDimension Informatica mapping? I have not been able to follow it.

Thank you
Nikki.

Hi Nikki,

Have you configured global currencies in accordance with the documentation?

http://docs.Oracle.com/CD/E20490_01/BIA.7963/e19039/anyimp_oracle_apps.htm#BABJJJHF

The SIL mapping SIL_GLBalanceFact populates the fact table and the global currency columns. Exchange rates are looked up in the W_EXCH_RATE_G table; the global rate types and currency codes are populated from the DAC settings into the W_GLOBAL_CURR_G table. For an E-Business Suite warehouse, the exchange rate table is populated by the standard ETL. I'm not sure about PeopleSoft, so you will need to check the SDE mappings. If the table is not populated, you will need to create a mapping to fill it with the exchange rates required to create global amounts, and set the DAC parameters (in accordance with the section above) accordingly. If the DAC parameters have not been set, or the W_EXCH_RATE_G table is not populated, that would explain why you get NULL values in your global amount columns.

I recommend that you look at the SIL_GLBalanceFact mapping, especially the mapplet MPLT_CURCY_CONVERSION_RATES1. It is important that you understand what is happening in these objects to see how the global exchange columns are being populated. Since W_EXCH_RATE_G is used to look up the exchange rate for a given currency and date, you must also make sure that exchange rates exist in this table for all the dates in the given transactions.
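As a quick sanity check, a query along these lines confirms that rates exist for your global currency and rate type across the transaction dates (the column names follow the usual W_EXCH_RATE_G layout, and the currency and rate type values are illustrative - verify them against your DAC settings):

select from_curcy_cd, to_curcy_cd, rate_type, start_dt, end_dt, exch_rate
from w_exch_rate_g
where to_curcy_cd = 'USD'        -- your GLOBAL1 currency code (illustrative)
and rate_type = 'Corporate'      -- your GLOBAL1 rate type (illustrative)
order by start_dt;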

The following section of the BI Applications manual may also be of interest:

http://docs.Oracle.com/CD/E20490_01/BIA.7963/e19039/anyimp_configfinance.htm#CHDJDBEB

Please note that you posted a very similar thread:

Re: Insert records in the W_EXCH_RATE_G table.

in which you have not marked any comments as helpful or correct. If you want people to help you, then you should acknowledge the effort that people put in.

Please mark replies as helpful or correct accordingly.
Andy

www.Project.eu.com

Tags: Business Intelligence

Similar Questions

  • Logic required for the following data


    Hi all

    I have a procedure in which I call a cursor to retrieve records. This query returns the following data:

    DI      SZ      DII     CWT    top    down
    9 1/2   9.625   8.921   36     18     1602
    13 1/2  13.375  12.515  61     19     1962
    18 1/2  18.625  17.755  87.5   20     503
    26      26      24.75   105    20     103
    9 1/2   9.625   8.835   40     1602   3858
    7       7       6.276   26     1683   6352

    I want to print only those values...

    9 1/2   9.625   8.921   36     18     1602
    9 1/2   9.625   8.835   40     1602   3858
    7       7       6.276   26     1683   6352

    As you can see, the top and down values in these rows overlap.

    I tried several ways of sorting the query on these fields and applying some logic, but I always get an extra row whose values do not overlap.

    Can someone please give me the logic to get the desired result, using conditions in a procedure/function/formula?

    Thank you

    929107 wrote:

    Hi Mahir,

    Thanks for the reply, but I'm looking for some generic logic/algorithm; I don't want to work with a fixed set of values. The values can change as other structures evolve.

    Kind regards

    I only showed fixed values for explanation.

    You can use my select statement on your table

    select *
    from (select di, dii, cwt, sz, top, down,
                 lag(down) over (order by top) ldown
          from <your_table>)
    where down >= nvl(ldown, down);

    Regards,

    Mr. Mahir Quluzade

  • Can we create separate logical model diagrams for each relational model?

    Hi-

    I have created several relational model diagrams. I want to create a logical data model diagram for each relational model.
    Is it possible to do this?

    Currently, there is always a single associated logical data model diagram. When I engineer the logical design for the 2nd relational model, it always updates the same diagram.
    Please let me know.

    Hello

    Yes, you can have a diagram per relational model in the logical model. Look for the option titled "Subview" at the top right of the engineering dialog - select it and a subview will be created in the logical model corresponding to the main diagram of your relational model.

    Philippe

  • af:table columns required to occupy the entire table width

    Hello

    I am using tables in some quick query controls. Some tables have only one or two columns displaying data.
    In such cases it is necessary to avoid any white space after the columns.

    I tried setting each column's width to 50%, but it did not give the required result in Firefox 4.0.

    I use JDeveloper 11.1.1.4.0.

    The only option that seems to work is to specify the column widths in pixels. That is the last thing I would like to do.

    Is there a styleClass or any other better alternative?

    Regards,

    For the table, set the styleClass to "AFStretchWidth" - this ensures that it occupies the entire width - and use columnStretching as required.

    Sample:


    <af:table fetchSize="#{bindings.Departments.rangeSize}"
              emptyText="#{bindings.Departments.viewable ? 'No data to display.' : 'Access Denied.'}"
              var="row" value="#{bindings.Departments.collectionModel}" rowBandingInterval="0"
              selectionListener="#{bindings.Departments.collectionModel.makeCurrent}" columnStretching="last"
              rowSelection="single" styleClass="AFStretchWidth">

    Thank you
    Nini

  • How do I display pop-up values in the IR filter for joined table columns?

    Hello

    I have a problem with an IR whose query is based on a table joined with other tables. I would like to give users the ability to use the IR filter search bar on the joined tables' columns. The problem I'm facing: in the filter's Expression field, pressing the arrow button displays values for the fields of the primary table, but not for fields that come from the joined tables. Have you seen this behavior in your reports? Is this normal?

    TIA

    Hello

    Correlated subqueries can affect performance - but it depends on the tables involved, the number of columns and the existence of indexes. As far as I know, the optimizer has problems with them. You could try Explain Plan on the two statements to verify that.
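    To compare the two forms, you can run Explain Plan on each statement and display the result (shown here for the outer-join version below):

    explain plan for
    SELECT E.EMPNO, E.ENAME, D.DEPTNO, D.DNAME, E2.EMPNO "EMPNO2", E2.ENAME "ENAME2"
    FROM EMP E, EMP2 E2, DEPT D
    WHERE E.EMPNO = E2.EMPNO(+)
    AND E.DEPTNO = D.DEPTNO(+)
    AND E2.PRIMARY_EMPLOYEE(+) = 'Y';

    select * from table(dbms_xplan.display);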

    In any case, I created a new test page with the SQL for IR:

    SELECT E.EMPNO,
    E.ENAME,
    D.DEPTNO,
    D.DNAME,
    E2.EMPNO "EMPNO2",
    E2.ENAME "ENAME2"
    FROM EMP E, EMP2 E2, DEPT D
    WHERE E.EMPNO = E2.EMPNO(+)
    AND E.DEPTNO = D.DEPTNO(+)
    AND E2.PRIMARY_EMPLOYEE(+) = 'Y'
    

    http://Apex.Oracle.com/pls/OTN/f?p=267:226

    As far as I can see, it works properly - except that if I create a filter on the ENAME column and then try to create a second filter, the drop-down for ENAME lists all the values, while the other columns list only the values available after the first filter has been applied. That seems strange, given that the filters are applied as ANDs. But it does the same thing for other fields - i.e., the field used in a filter is not itself filtered for the second filter - so I guess this is normal, but only someone at Apex could probably explain why it is so.

    Otherwise, everything seems to work as I expect, and the above page behaves the same as my test page, which uses outer joins: http://apex.oracle.com/pls/otn/f?p=267:224

    Andy

  • Using an after-insert trigger for an audit table

    Hi all

    I am using an after-insert trigger to re-write each record inserted into the base table into the corresponding audit table. The structure of the two tables is the same, except that the audit table has one extra column - audit_time. Here is how I defined the trigger:

    create or replace trigger dup_rec
    after insert on table_A
    for each row
    begin
      PK_comm.create_audit_rec('table_A', 'table_B', :new.rowid);
    end;


    In the PK_comm package, I defined the create_audit_rec procedure with PRAGMA AUTONOMOUS_TRANSACTION, as follows:


    procedure create_audit_rec (baseTable IN VARCHAR2, auditTable IN VARCHAR2, cRowId IN VARCHAR2)
    is
      pragma autonomous_transaction;
    begin
      insert into auditTable (audit_time, col1, col2, col3)
      select systimestamp, col1, col2, col3
      from baseTable
      where rowid = chartorowid(cRowId);
    end;

    The record does NOT seem to be added to the audit table when I insert a record into the base table. Is it possible that the insert into baseTable has not completed when the trigger fires? If so, what is the workaround?

    Thanks for your help,

    Mike

    Packages have session scope. Thus, each session will have a different instance of the collection.

    If you have multiple tables, you will in general need several collections. If you can guarantee that only a single table will ever change during the scope of a single statement, you could get away with a single collection. But as soon as a trigger on table A changes data in table B, you would need different collections. For most applications, it is much easier to simply create separate collections. I suppose you could also create a collection of objects or records that hold a table name and a ROWID, and add logic to process only the ROWIDs associated with the table you are interested in.
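    To make that concrete, here is a minimal sketch of the single-collection variant Justin describes (all names are illustrative, not from the original thread): a row-level trigger captures ROWIDs into a package collection, and a statement-level trigger drains it once the insert has completed.

    -- Hypothetical sketch: capture ROWIDs per statement, process them afterwards
    create or replace package pk_audit_state as
      type t_rowid_tab is table of rowid index by pls_integer;
      g_rowids t_rowid_tab;   -- session-scoped, as noted above
    end pk_audit_state;
    /
    create or replace trigger table_a_air
    after insert on table_A for each row
    begin
      pk_audit_state.g_rowids(pk_audit_state.g_rowids.count + 1) := :new.rowid;
    end;
    /
    create or replace trigger table_a_ais
    after insert on table_A
    begin
      for i in 1 .. pk_audit_state.g_rowids.count loop
        insert into table_A_audit (audit_time, col1, col2, col3)
        select systimestamp, col1, col2, col3
        from table_A
        where rowid = pk_audit_state.g_rowids(i);
      end loop;
      pk_audit_state.g_rowids.delete;   -- reset for the next statement
    end;
    /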

    Justin

  • Size discrepancy for an imported table

    Version: 11.2
    OS: RHEL 5.6

    I imported a large table (450 columns). I'm a little confused about the size of this table:
    Import: Release 11.2.0.1.0 - Production on Wed Aug 1 14:08:42 2012
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ;;;
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYS"."SYS_IMPORT_FULL_01":  userid="/******** AS SYSDBA" DIRECTORY=dpump2 DUMPFILE=sku_dtl_%U.dmp LOGFILE=impdp_TST.log REMAP_SCHEMA=WMTRX:testusr REMAP_TABLESPACE=WMTRX_TS:TESTUSR_DATA1 PARALLEL=4
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TESTUSR"."SKU_DTL"                         7.311 GB 9502189 rows         <-------------------------------------------
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    .
    .
    Although it says 7.311 GB during the import, that is not reflected in DBA_SEGMENTS:

    SQL> select sum(bytes/1024/1024/1024) from dba_segments where owner = 'TESTUSR' AND SEGMENT_NAME = 'SKU_DTL' AND SEGMENT_TYPE = 'TABLE';
    
    SUM(BYTES/1024/1024/1024)
    -------------------------
                   2.28027344
    
    
    -- Verifying the row count shown in the import log
    
    SQL> SELECT COUNT(*) FROM TESTUSR.SKU_DTL;
    
      COUNT(*)
    ----------
       9502189
    Why is there a difference of 5 GB?

    - Info on the indexes created for this table (in case it is of any use):
    The total combined size of the indexes on this table is 12 GB (I hope the query below is correct):
    SQL> select sum (bytes/1024/1024) from dba_segments
    where segment_name in
    (
    select index_name from dba_indexes
    where owner = 'TESTUSR' and table_name = 'SKU_DTL'
    ) and segment_type = 'INDEX';
    
    SUM(BYTES/1024/1024)
    --------------------
                12670.75

    The size reported in the import log is the amount of space the data took up in the dump file. That is not necessarily how much space it will take up once imported.

    One reason for this is whether the tablespace it is written into is compressed or not. If the target tablespace is compressed, then once the import is complete the table will be much smaller than what was written to the dump file.
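    If you want to verify that, a quick check along these lines shows whether the target tablespace applies table compression by default (the tablespace name is taken from the import command above):

    select tablespace_name, def_tab_compression
    from dba_tablespaces
    where tablespace_name = 'TESTUSR_DATA1';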

    I hope this helps.

    Dean

  • How to create indexes for a large telecom table

    Hello

    I'm working on a 10g database on RHEL 5 for a telecom company, with more than 1 million records saved per day, and we need to speed up query results.
    I know there are several types of indexes, and I need professional advice to create a suitable one.

    Many of our queries depend on the MSID column (the MAC address of the modem):
       
    Name           Null Type         
    -------------- ---- ------------ 
    STREAMNUMBER        NUMBER(9)    
    MSID                VARCHAR2(20) 
    USERNAME            VARCHAR2(20) 
    DOMAIN              VARCHAR2(20) 
    USERIP              VARCHAR2(16) 
    CORRELATION_ID      VARCHAR2(64) 
    ACCOUNTREASON       NUMBER(3)    
    STARTTIME           VARCHAR2(14) 
    PRIORTIME           VARCHAR2(14) 
    CURTIME             VARCHAR2(14) 
    SESSIONTIME         NUMBER(9)    
    SESSIONVOLUME       NUMBER(9)    
    .
    .
    .
    Please help,

    Hello

    First of all, think about rewriting all the SQL that uses subqueries on MAX(date) with analytic functions; see the rewriting examples given on AskTom: http://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:9843506698920

    Then I would start with a normal index on MSID (which I suspect you already have), but use compression on that column, since the same MSID value probably exists many times in the table. If performance is still not satisfactory, or the plan is not the best, then including more columns in the index may help.
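    As a rough sketch of that suggestion (the table and index names are illustrative, and CURTIME stands in for whichever column your queries also filter on):

    -- compressed single-column index on the repetitive MSID values
    create index sess_msid_ix on session_log (msid) compress 1;

    -- if needed later: a composite index covering more of the query
    create index sess_msid_time_ix on session_log (msid, curtime) compress 1;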

    I think the first part of the answer will bring the most gain.

    Herald tiomela
    http://htendam.WordPress.com

  • How to get the metadata for selected tables of a schema

    Hello

    I need the metadata for the tables selected for an activity. The list of tables keeps changing; I get the list of tables just before the activity.

    What I need to know is how to put the list of tables into the subquery dynamically.

    +++++++++++
    exec dbms_metadata.set_transform_param (DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', false);
    exec dbms_metadata.set_transform_param (DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
    exec dbms_metadata.set_transform_param (DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', true);

    select dbms_metadata.get_ddl ('TABLE', '&') from user_tables where rownum < 2;
    +++++++++++

    What I need is something of the form where table_name in (<table1>, <table2>, <table3>, ...) in the query above, so that I can access all the metadata.
    I can't see how to write this as a single query. Can someone help here?

    Regds,
    Malika

    Hello

    Try using the table_name column from user_tables (and the owner of the table) in DBMS_METADATA.GET_DDL, spool the result to a file, and then run that file in turn, spooling its output too.

    spool some_file.sql
    select 'select dbms_metadata.get_ddl (''TABLE'', ''' || table_name || ''') from dual;' from user_tables
    where <your condition>;
    spool off;

    @some_file.sql   -- spool this output too
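    Alternatively, if the list of tables is known when you run the query, it can go straight into an IN clause without the spool step (a sketch; the table names are illustrative):

    select dbms_metadata.get_ddl ('TABLE', table_name)
    from user_tables
    where table_name in ('TABLE1', 'TABLE2', 'TABLE3');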

    -Pavan Kumar N

  • Altering or dropping/recreating a synonym to point at a new table

    Hello

    We are using Oracle 11.2.0.3 and have the following requirement.

    We build a new summary table to replace an older one.

    Currently, reports access a synonym which in turn accesses the old_summary table.

    Once built, we would like to change the synonym to point to the new table, ideally with no interruption of service.

    The only things that could potentially be accessing the synonym when we change which table it points to are SELECT statements.

    Two questions:

    (1) Can you change the table a synonym points to without dropping and recreating it?


    (2) What happens if a SELECT statement is running when we drop and recreate the synonym? Does it run to completion, given that the synonym existed at the start of the run?

    Thank you

    1)  Can you change the table a synonym points to without dropping and recreating it?
    

    There is no ALTER statement for synonyms. As the other responder said, use CREATE OR REPLACE.
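    For example (a minimal sketch; the names are illustrative):

    -- repoints the synonym in one step, without dropping it first
    create or replace synonym summary_syn for new_summary;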

    2)  What happens if a SELECT statement is running when we drop and recreate the synonym? Does it run to completion, given that the synonym existed at the start of the run?
    

    All object references are resolved BEFORE the execution phase. Once a statement begins executing, it is immaterial what happens to the synonym.

  • Best app to make a custom template for versatile tables?

    I'm new to Creative Suite and I am wondering what the best app is to make a custom template to view and edit tables, such as budgets and project phases, for my clients in InDesign. Most of the templates I find online are dull spreadsheets.

    I want to:

    • create my own templates with my own branding
    • edit and adjust the information in the tables for phases and budget proposals
    • import data from spreadsheets or directly enter custom data
    • export to PDF or JPG for quick sharing

    Is it better to make spreadsheets and import them into my templates? If so, which Adobe software is most compatible for this? I take courses at Lynda.com and would like to know where to focus my learning hours.

    I appreciate any help!

    Thank you

    Gabe

    If you want to make a spreadsheet, you should use spreadsheet software such as MS Excel. Adobe does not make spreadsheet software.

    You could google for free alternatives or other options.

  • How to take a partial dump using EXP/IMP in Oracle, only for the master tables

    Hi all

    select * from v$version;
    
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE    10.2.0.1.0    Production"
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    

    I have about 500 huge master data tables in my pre-production database. I have a test environment with the same structure as the old masters; this test environment already has an old copy of the production master tables. I take the dump file from the pre-production environment with data from last week. The old data from the master tables is not needed, as that data is already available in my test environment. Also, I don't need to take all the pre-production tables - only the master tables, with last week's data.

    How can I take partial data of the master tables from the pre-production database? And how do I import only the new records into the test database?

    I am using the EXP and IMP commands, but I don't see an option to take partial data. Please advise.

    Hello

    For the first part - just the master tables you want - use Data Pump with a query to extract just those rows - see the example below (you're on v10, so it is possible):

    Oracle DBA Blog 2.0: expdp dynamic list of tables

    However, you should be able to get a list of the master tables in a single select statement - is that possible?

    For the second part - are you able to write a query against each master table that shows you the changed rows? If you cannot write a query to do this, then you won't be able to use Data Pump to extract only the changed rows.
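    For illustration, a Data Pump export along these lines takes just the master tables with only last week's rows (the schema, table and last_update column names are assumptions; substitute whatever identifies changed rows in your tables, with one QUERY clause per table that needs filtering). A parameter file avoids shell-quoting issues:

    # masters.par
    DIRECTORY=dp_dir
    DUMPFILE=masters.dmp
    LOGFILE=masters.log
    TABLES=MYSCHEMA.MASTER1,MYSCHEMA.MASTER2
    QUERY=MYSCHEMA.MASTER1:"WHERE last_update >= SYSDATE - 7"

    expdp system parfile=masters.par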

    Normally I would just extract all the master tables completely and refresh everything...

    Cheers,

    Rich

  • What is advised for gathering statistics on huge tables?

    We have a staging database where some tables are huge - hundreds of GB in size. The auto stats tasks run, but sometimes they do not finish within the maintenance window.

    We would like to know the best practices or tips.

    Thank you.

    Improved efficiency of statistics gathering can be achieved with:

    1. Using parallelism
    2. Incremental statistics

    Using parallelism

    Parallelism can be used in several ways for gathering statistics:

    1. Intra-object parallelism
    2. Inter-object parallelism
    3. Inter- and intra-object parallelism combined

    Intra-object parallelism

    The DBMS_STATS package has a DEGREE parameter. This parameter controls intra-object parallelism: the number of parallel processes used to gather statistics on a single object. By default this parameter is equal to 1. You can increase it using the DBMS_STATS.SET_PARAM procedure, or you can let Oracle determine the optimal number of parallel processes by setting DEGREE to the DBMS_STATS.AUTO_DEGREE value.
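    A minimal sketch (the schema and table names are illustrative):

    BEGIN
      -- let Oracle pick the degree of parallelism for this gather
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'STG',
        tabname => 'BIG_TABLE',
        degree  => DBMS_STATS.AUTO_DEGREE);
    END;
    /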

    Inter-object parallelism

    If you have Oracle Database version 11.2.0.2, you can set the CONCURRENT statistics preference. When CONCURRENT is set to TRUE, Oracle uses the Scheduler and Advanced Queuing to manage several statistics jobs simultaneously. The number of concurrent jobs is controlled by the JOB_QUEUE_PROCESSES parameter. This parameter should be equal to twice the number of your CPU cores (if you have two CPUs with 8 cores each, then JOB_QUEUE_PROCESSES should be 2 (CPUs) x 8 (cores) x 2 = 32). You must set this parameter at the system level (ALTER SYSTEM SET ...).
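    For example, with the 16-core configuration described above:

    -- enable concurrent statistics gathering (11.2.0.2+)
    BEGIN
      DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');
    END;
    /
    ALTER SYSTEM SET job_queue_processes = 32;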

    Incremental statistics

    This option best suits partitioned tables. If the INCREMENTAL preference for a partitioned table is set to TRUE, the GRANULARITY parameter of DBMS_STATS.GATHER_TABLE_STATS is set to GLOBAL, and the ESTIMATE_PERCENT parameter of DBMS_STATS.GATHER_TABLE_STATS is set to AUTO_SAMPLE_SIZE, then Oracle will scan only the partitions that have changed.
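    A sketch of those three settings together (the schema and table names are illustrative):

    BEGIN
      DBMS_STATS.SET_TABLE_PREFS('STG', 'BIG_PART_TABLE', 'INCREMENTAL', 'TRUE');
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'STG',
        tabname          => 'BIG_PART_TABLE',
        granularity      => 'GLOBAL',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /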

    For more information, read the DBMS_STATS documentation.

  • Purge job for the internal APEX table wwv_flow_file_objects$

    Hi all

    Can anyone tell me whether there is an internal APEX job that deletes the data in the wwv_flow_file_objects$ table? I see the size of this table keeps increasing.

    I'm planning to create a job manually if there isn't one.

    I use Oracle Apex 4.2.

    ZAPEX wrote:

    Thanks for sharing the information.

    I checked the instance settings. The job is enabled and is set to delete files every 14 days, but when I checked wwv_flow_files it contains files that were created in 2013.

    Can you let me know what the reason could be? Is it possible that the job is broken?

    It is more likely that it does not purge all types of files. It would only be broken if the retained files are of the types listed in the documentation. You need to determine what those files are, and how and why they were uploaded.

    Do you have applications that upload files and do not remove them or move them to an application table? If so, modifying those applications to store and manage files in user tables rather than in system tables is a better solution than blindly trying to purge everything.
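    As a starting point for that investigation, a query along these lines summarizes what is being retained (assuming access to the wwv_flow_files view and its usual mime_type, doc_size and created_on columns):

    -- what kinds of files are accumulating, and how much space do they use?
    select mime_type, count(*) files, sum(doc_size) total_bytes, min(created_on) oldest
    from wwv_flow_files
    group by mime_type
    order by total_bytes desc;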

  • Single select query on different schemas for the same table (same structure)

    Scenario:

    Table XYZ is created in schema A.
    After a year, the old data of the previous year may be moved to another schema; however, in the other schema the same table name is used.

    For example

    Schema A contains table XYZ with data for the year 2012
    Schema B contains table XYZ with data for the year 2011
    Table XYZ has an identical structure in the two schemas.

    So can we write a single select query that reads the data from both tables in an efficient way?
    For example: select * from XYZ where the date is between October 15, 2011 and March 15, 2012.
    However, the data resides in 2 different schemas altogether.


    Creating a view is an option.
    But my problem is that there is an ORM layer (Hibernate or Eclipse TopLink) between the application and the database,
    so the queries are constructed by the ORM layer and are not written by hand.
    So I can't use the view.
    So is there any option that would allow me to run a single query across the different schemas?

    970773 wrote:
    Scenario:

    Table XYZ is created in schema A.
    After a year, the old data of the previous year may be moved to another schema; however, in the other schema the same table name is used.

    For example

    Schema A contains table XYZ with data for the year 2012
    Schema B contains table XYZ with data for the year 2011
    Table XYZ has an identical structure in the two schemas.

    So can we write a single select query that reads the data from both tables in an efficient way?
    For example: select * from XYZ where the date is between October 15, 2011 and March 15, 2012.
    However, the data resides in 2 different schemas altogether.

    Creating a view is an option.
    But my problem is that there is an ORM layer (Hibernate or Eclipse TopLink) between the application and the database,
    so the queries are constructed by the ORM layer and are not written by hand.
    So I can't use the view.

    Why not have the ORM query a view, as below?

    SELECT * FROM VIEW_BOTH;
    -- VIEW_BOTH is a real Oracle VIEW
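    A sketch of such a view (schema and table names as in the example above):

    CREATE OR REPLACE VIEW view_both AS
      SELECT * FROM a.xyz
      UNION ALL
      SELECT * FROM b.xyz;

    The ORM then maps its entity to VIEW_BOTH as if it were a table, and the cross-schema UNION stays invisible to the application.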
