HFM consolidation of balance sheets from the EBS GL

Hello

I'm consolidating an HFM application. I built my Account dimension from the balance sheet of GL accounts; that is, the account type comes from GL.GL_CODE_COMBINATIONS.ACCOUNT_TYPE.

I generated a text file from a SQL query. The first time, I loaded the balances without any validation on the accounts, and I noticed that HFM flipped the sign for expense accounts. So when generating the text file I now flip the sign with DECODE(GL.GL_CODE_COMBINATIONS.ACCOUNT_TYPE, 'A', -1, 'E', -1, 'L', 1, 'O', 1, 'R', 1), but after consolidation I noticed that the expense accounts had not changed sign. My questions are: what is the right way to load into HFM? Does HFM change the sign, and for which account types?
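For reference, the sign flip described in the question can be sketched outside SQL as a small mapping on GL.GL_CODE_COMBINATIONS.ACCOUNT_TYPE. This is a minimal sketch of the DECODE used in the post, not HFM's own behaviour; whether these multipliers are correct depends on the account types defined in the HFM application.

```python
# Sign multipliers per EBS GL account type, mirroring the DECODE in the
# question: Assets and Expenses are flipped, Liability / Owner's equity /
# Revenue balances are loaded as-is.
SIGN_BY_ACCOUNT_TYPE = {
    "A": -1,  # Asset
    "E": -1,  # Expense
    "L": 1,   # Liability
    "O": 1,   # Owner's equity
    "R": 1,   # Revenue
}

def adjusted_balance(account_type: str, balance: float) -> float:
    """Apply the sign convention before writing the HFM load file."""
    return SIGN_BY_ACCOUNT_TYPE[account_type] * balance

print(adjusted_balance("E", 500.0))  # expense balances are flipped: -500.0
```

Whether a sign flip is needed at all depends on how the HFM accounts are typed (Revenue, Expense, Asset, Liability), which is exactly the question the thread raises.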

Regards

Hello

The answers to your questions are quite complex.

First of all, about the right way to load into HFM: you should find the answer by asking your customers. HFM should be aligned with the business processes, which means that the right way is defined by the company.

Secondly, regarding when HFM changes the sign and for which kinds of accounts, I suggest reading pages 75 to 80 of the PDF documentation.

Then post any specific questions.

Kind regards

Thanos


Tags: Business Intelligence

Similar Questions

  • HFM consolidation during data load

    Hi Experts,

    I have a small question on consolidation at FDM data-load time into HFM.

    I know that FDM passes parameters to HFM to perform the consolidation for all the entities available in the VALIDATION ENTITIES list of the respective location and validation group.

    My question is: which scenario does it use when it performs the consolidation in HFM? (Where can I check it?)

    1. Is it based on the SCENARIO mentioned in the validation logic of the VALIDATION RULE? (I ask because I compare two scenarios for the same point of view.)

    If Abs(|ACTUAL_1, Assets, ICP, Custom1, Custom2, Custom3, Custom4| - |ACTUAL, Assets, ICP, Custom1, Custom2, Custom3, Custom4|) > 10 Then

    RESULT = False

    RES.PstrCheckMessage1 = "Difference between the two scenarios, please check."

    Else

    RESULT = True

    End If

    2. Given the logic of the rule above, which scenario will be consolidated?

    Regards

    Kath

    That's right - the FDM target category maps to your HFM scenario.

    The active FDM category is the one selected in the FDM POV bar. Categories are used to "categorize" the data, much like scenarios. So you can see the FDM category as the source of the scenario.

  • Balance sheet at the average rate?

    Hi all

    I have an application where:
    - the default currency is EUR;
    - the default rate for balance sheet accounts = ClosingRate;
    - the default rate for flow accounts = AverageRate;
    - "Use PVA for flow accounts" is checked;
    - "Org by Period" is checked.

    Now for the problem - we have a child entity in GBP contributing 100% to a parent entity in EUR. Looking at an individual account - the account (A100) is marked as an ASSET (i.e. the consolidation account type = 'Asset').

    I loaded the data (rates & amounts) and set '[Active]' for all the necessary entities. I ran a default consolidation (no custom calculations) and the local values were translated and aggregated upwards.
    HOWEVER - the account marked as an asset (A100) was translated from GBP to EUR at the AverageRate.

    Why is this not the ClosingRate (which I expected)?

    This is on 11.1.1.3 and it is an EPMA application.

    Thank you very much
    George

    Does this asset account have an associated custom member that is marked as a switch type for flow?

    JTF

  • How to build finance balance sheet as user interface using the ADF Faces component

    Hello

    We know that a financial balance sheet will have calculated fields; imagine that the last row holds the sum, for each column, of all the rows above it (see the very rough example below). Is ADF Table the appropriate Faces component for this? Any quick ideas? Thank you

    point 1 | 10 | 10 | 10 | 10 | 10

    point 2 | 20 | 20 | 20 | 20 | 20

    sum     | 30 | 30 | 30 | 30 | 30

    -Liang Yi

    It is possible in ADF. See below for how you can add a calculated sum row.

    http://rohanwalia.blogspot.in/2012/11/ADF-Groovy-for-total-sum-of-column-in.html

    You can extend the same approach to your use case.

    Thank you

  • How to print a balance sheet in EUR currency

    Hello

    Can someone help me with this please

    I have a balance sheet report defined under the General Ledger responsibility.
    I want to change the currency from USD to EUR. The report always prints in USD. I edited the currency and changed it to EUR (Other Options item in the Define Report form) and saved it, but when I print, the values are not converted to EUR. Can someone help me and tell me if I need to check/modify any other field?

    Thank you

    Aali

    Hello Aali.

    OK. Try performing the following operations:
    - Go to Reports/Set/Row Set and query the row set for the FSG report. Press the Define Rows button and make sure that no currency is defined in the balance control area.

    - Go to Reports/Set/Column Set and check that the above condition is also true.

    - Run the GL concurrent request called "Program - Run Financial Statement Generator", selecting the FSG balance sheet report and the desired currency.

    I hope this helps.

    Octavio

  • How to consolidate snapshots when the datastore is out of disk space...

    Hello

    I have a virtual machine that has been left with many snapshot files, and I now cannot consolidate them using the tool (it always reports insufficient disk space to perform the consolidation).

    The datastore now has 60 GB of free space, out of a maximum of 850 GB, and the VM occupies almost 700 GB of disk space...

    Can I attach a 3 TB USB drive to the ESXi server and move all the files over, or is there another way to do this...

    HELP ~ ~

    Given the disk space required on drive E, it may be useful to consider an alternative:

    • create another (much smaller) virtual disk
    • stop all the services writing to drive E
    • copy the data from drive E to the new virtual disk (for example using robocopy)
    • switch the drive letters
    • once everything is working as expected, get rid of the old virtual disk

    This will not only reduce the disk space used on the datastore, but also ensure that the virtual disk cannot grow any further.

    André

  • Cannot access the HFM applications through the workspace

    Hello

    I am using HFM version 11.1.1.3 and Oracle Database 10g. We migrated the Oracle database to a new server and reconfigured the HFM applications against this new database server. When I try to access the HFM application through Workspace, I get the error below. Could you please help me with this? The error message says that a table or view does not exist, but I'm not able to find which table is missing.

    Trace: Error reference number: {464E7E1C-A859-4CB8-B168-99105BB1B756};
    User name: Native Directory

    Num: 0x80004005; Type: 1; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvDSSQL.cpp; Line: 2593; Ver: 11.1.1.3.500.3124;

    DStr: ORA-00942: table or view does not exist;

    Num: 0x80004005; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: Events.cpp; Line: 57; Ver: 11.1.1.3.500.3124;
    Num: 0x80004005; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvDSSystemChange.cpp; Line: 179; Ver: 11.1.1.3.500.3124;
    Num: 0x80004005; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvSystemChange.cpp; Line: 258; Ver: 11.1.1.3.500.3124;
    Num: 0x80004005; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvSystemChange.cpp; Line: 96; Ver: 11.1.1.3.500.3124;
    Num: 0x80040236; Type: 1; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvSession.cpp; Line: 270; Ver: 11.1.1.3.500.3124;
    Num: 0x80040236; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsvDataSource.cpp; Line: 531; Ver: 11.1.1.3.500.3124;
    Num: 0x80040236; Type: 0; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsxServerImpl.cpp; Line: 2189; Ver: 11.1.1.3.500.3124;
    Num: 0x80040236; Type: 1; DTime: 14/05/2013 01:42:25; Svr: Localhost; File: CHsxServer.cpp; Line: 1467; Ver: 11.1.1.3.500.3124;
    DStr: OpenApplication: DEV;
    Num: 0x80040236; Type: 0; DTime: 14/05/2013 01:42:26; Svr: Localhost; File: CHsxClient.cpp; Line: 2373; Ver: 11.1.1.3.500.3124;
    Num: 0x80040236; Type: 0; DTime: 14/05/2013 01:42:26; Svr: Localhost; File: CHFMwManageApplications.cpp; Line: 206; Ver: 11.1.1.3.500.3124;

    Thank you
    Michel K

    I solved this problem. I had specified the wrong database schema in the UDL file.
    I changed the UDL file to the correct schema, and that solved the problem.

  • R12 upgrade and Instance Consolidation

    Hello

    We have HRMS 11.5.10 on IBM AIX and Financials 11.5.10 on Sun Solaris servers. The plan is to upgrade the applications to R12 and consolidate the instances into a single one.
    Please share your experiences: should we first upgrade the applications to R12 and then do the instance consolidation, or do the instance consolidation on 11.5.10 and then perform the upgrade?

    What are the advantages and disadvantages of the two approaches?

    Please share your experiences if you have worked on similar projects.

    Kind regards
    VN

    I would say do the consolidation first and then upgrade the instance; this way you can avoid upgrading twice - please log an SR to confirm this with Oracle Support.

    Thank you
    Hussein

  • Extended Analytics - reason not to use the HFM database as the destination

    Why can't we use the HFM database as the destination database for creating star schemas with Extended Analytics? How can we get data together with metadata in the tables exported using EA?

    As far as I know, an EA export is a star schema format that includes both metadata and data.
    You wouldn't export it into the database of the HFM application itself, as you would ideally keep it separate for analysis by services such as Essbase OLAP.
    Alternatively, you can also use Oracle Hyperion Essbase Analytics Link, which creates a permanent real-time bridge between your HFM and Essbase applications.

  • How to load data from the EBS to HFM

    I'm new to HFM. Now I need to load data from EBS into HFM. How can I do this? The EBS version is R12.1 and HFM is 11.1.1.3.

    Hello

    Have you looked in the FDM 11.1.x help? It has adapters for EBS, and you can also map the HFM entities.

    Thank you

    Partnered Bhatti

  • Identify whether an account is a balance sheet account

    Hello Experts,

    I need to know how to identify whether an account is a balance sheet account or a P&L (profit and loss) account.
    Is there a way we can find this in the tables? That is, which column in which table provides a flag, such as P for P&L and B for balance sheet?
    Any help is appreciated.

    Thank you
    Bob

    Hi Bob,

    A - Asset - so BS
    E - Equity - PL
    L - Liability
    O - Owner's Equity
    R - Revenue

    A small correction:

    A - Asset - so BS
    E - Expense - so PL
    L - Liability - so BS
    O - Owner's Equity - so BS
    R - Revenue - so PL

    Octavio
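The corrected list above can be expressed as a small lookup on GL_CODE_COMBINATIONS.ACCOUNT_TYPE. This is an illustrative sketch of the classification logic, not an official EBS API:

```python
# Classify an EBS GL account as balance sheet (BS) or profit & loss (PL)
# from GL_CODE_COMBINATIONS.ACCOUNT_TYPE, per the corrected list above.
STATEMENT_BY_ACCOUNT_TYPE = {
    "A": "BS",  # Asset
    "L": "BS",  # Liability
    "O": "BS",  # Owner's equity
    "E": "PL",  # Expense
    "R": "PL",  # Revenue
}

def statement_for(account_type: str) -> str:
    """Return 'BS' or 'PL' for a GL account type code."""
    return STATEMENT_BY_ACCOUNT_TYPE[account_type]

print(statement_for("E"))  # PL
```

The same mapping can of course be done directly in SQL with DECODE or CASE on ACCOUNT_TYPE.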

  • No results in GL balance sheet and cash flow reports

    Hello gurus,

    I installed and configured OBIA 7.9.6 Financials for Oracle EBS 12.1 following the configuration document.
    I updated the GL group account names and code mappings according to the business requirements.
    A full ETL load was run and it succeeded.
    Now, on the reporting side, we have no data in the balance sheet and cash flow reports.

    My understanding is that this is because the group account numbers were changed, and I need to make some adjustments in the RPD file as well.
    Could you please correct me if I'm wrong, and suggest best practices for customizing Financial Analytics?

    Thank you very much. !

    Published by: obiuser on October 7, 2011 05:58

    If the financials functional team says it has no account ranges... then they can map the accounts individually in the .csv file. Normally, accounting teams try to group accounts using ranges, but it seems that in your business they don't do that. Ranges are not required; they are just an option. You can enumerate each natural account of your chart of accounts individually as needed.
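As a sketch of what that individual mapping can look like in OBIA 7.9.6 for Oracle EBS (the file is typically file_group_acct_codes_ora.csv; the chart-of-accounts ID, account values and group codes below are made-up examples - verify the exact file name and columns against your configuration guide):

```csv
CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
101,1110,1110,CASH
101,4100,4100,REVENUE
101,5200,5200,OTHER
```

With FROM ACCT equal to TO ACCT, each row maps a single natural account instead of a range.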

    In addition, since you said the W_AP_BALANCE_F and W_AR_BALANCE_F tables are empty... this looks like an ETL configuration issue, not an RPD issue. The relevant section of the 7.9.6.3 guide for EBS is:

    3.2.1.4 About Mapping Oracle GL Natural Accounts to Group Account Numbers

    If this helps, please mark the answer as helpful.

  • HFM consolidation rules file to dynamically consolidate the Entity dimension

    I can't get the HFM rules file coded right to consolidate the Entity dimension dynamically.
    Basically, I want to be able to load the data and, without right-clicking on the grid and selecting 'Consolidate', have the consolidation already occur (i.e. automate it).
    Any ideas?

    HFM consolidations must be called intentionally, either by a person (using the right-click menu) or by a software process. For the latter, you can configure a workflow using the Task Automation module that loads a file and then performs the consolidation. As mentioned in this thread, you can also use FDM to load the file and then call a consolidation, but you must configure FDM for this. Another option, certainly, is to write your own batch process that would do this.

    In any case, there is no "out of the box" feature (apart from my comments above) within HFM that allows a given user to load a data file and also consolidate the associated hierarchies without user intervention.

    -Chris

  • Cannot load the metadata of the EBS

    Hi guys!

    I'm on 11.1.2.3.500 and trying to pull metadata from EBS 12.

    Everything is configured correctly (so far).

    I am trying to load 3 dimensions (Account, Entity and Product, in that order) from EBS, and I got a green OK on the Entity dim and a yellow triangle for the others (Account and Product).

    It seems it loses the connection to Planning. But then why is it OK for the Entity dim?

    [Wed Jul 02 19:46:05 EST 2014] Performing cube refresh...

    [Wed Jul 02 19:46:05 EST 2014] An error occurred during the cube refresh operation: com.hyperion.planning.HspRuntimeException: Unable to connect to Hyperion Essbase.

    Check that Hyperion Essbase is running and check your network connections.

    Thank you

    RMarques

    Full log:

    ACCOUNT

    2014-07-02 19:45:20,399 [AIF] INFO: Beginning FDMEE process, process ID: 30

    2014-07-02 19:45:20,399 [AIF] INFO: FDMEE logging level: 4

    2014-07-02 19:45:20,399 [AIF] INFO: FDMEE log file: /u01/app/oracle/product/epm/EPMSystem11R1/HPL/EPRI/outbox/logs/HPL_DEV_30.log

    2014-07-02 19:45:20,400 [AIF] INFO: User: admin

    2014-07-02 19:45:20,400 [AIF] INFO: Location: EBS_ACT_DL_AUD (Partitionkey:2)

    2014-07-02 19:45:20,400 [AIF] INFO: Period Name: NA (Period Key: null)

    2014-07-02 19:45:20,400 [AIF] INFO: Category Name: NA (Category Key: null)

    2014-07-02 19:45:20,400 [AIF] INFO: Rule Name: 1 (Rule ID: 1)

    2014-07-02 19:45:23,681 [AIF] INFO: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)

    [JRockit (R) (Oracle Corporation)]

    2014-07-02 19:45:23,682 [AIF] INFO: Java Platform: java1.6.0_37

    2014-07-02 19:45:33,283 [AIF] INFO: COMM Pre Process Dimension - Multi Process Validation - START

    2014-07-02 19:45:33,293 [AIF] INFO: COMM Pre Process Dimension - Multi Process Validation - END

    2014-07-02 19:45:33,378 [AIF] INFO: LKM EBS/FS Extract Members Table VS - Process Value Sets - START

    2014-07-02 19:45:33,396 [AIF] INFO: LKM EBS/FS Extract Members Table VS - Process Value Sets - END

    2014-07-02 19:45:33,681 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - START

    2014-07-02 19:45:33,745 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - END

    2014-07-02 19:45:34,079 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - START

    2014-07-02 19:45:34,186 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - END

    2014-07-02 19:45:34,470 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - START

    2014-07-02 19:45:34,532 [AIF] INFO: EBS Extract Members Ind VS - Populate Members Dim Step - END

    2014-07-02 19:45:36,383 [AIF] INFO: EBS/FS Load Concat Dim - Load Concat Dim Members - START

    2014-07-02 19:45:36,386 [AIF] INFO: EBS/FS Load Concat Dim - Load Concat Dim Members - END

    2014-07-02 19:45:41,815 [AIF] INFO: EBS Process Hierarchies Ind VS - Process Insert Details Ind VS - START

    2014-07-02 19:45:41,824 [AIF] INFO: EBS Process Hierarchies Ind VS - Process Insert Details Ind VS - END

    2014-07-02 19:45:42,270 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - START

    2014-07-02 19:45:42,331 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - END

    2014-07-02 19:45:42,897 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - START

    2014-07-02 19:45:43,044 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - END

    2014-07-02 19:45:43,531 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - START

    2014-07-02 19:45:43,657 [AIF] INFO: EBS Extract Hierarchies Ind VS - Populate Hierarchy Dim Step - END

    2014-07-02 19:45:43,803 [AIF] INFO: EBS Extract Hierarchies Table VS - Populate Hierarchy Dim Step - START

    2014-07-02 19:45:43,805 [AIF] INFO: EBS Extract Hierarchies Table VS - Populate Hierarchy Dim Step - END

    2014-07-02 19:45:43,891 [AIF] INFO: COMM Load Single Dim Hierarchies - Load Single Dim Hierarchies - START

    2014-07-02 19:45:45,883 [AIF] INFO: COMM Load Single Dim Hierarchies - Load Single Dim Hierarchies - END

    2014-07-02 19:45:45,989 [AIF] INFO: COMM Load Concat Dim Hierarchies - Load Concat Dim Hierarchies - START

    2014-07-02 19:45:45,993 [AIF] INFO: COMM Load Concat Dim Hierarchies - Load Concat Dim Hierarchies - END

    2014-07-02 19:45:48,177 [AIF] INFO: COMM Dimension Member Attributes - Process ICP Attribute - START

    2014-07-02 19:45:48,179 [AIF] INFO: COMM Dimension Member Attributes - Process ICP Attribute - END

    2014-07-02 19:45:56,815 [AIF] INFO: HPL Metadata - Supporting application: HPL_DEV

    2014-07-02 19:45:56,818 [AIF] INFO: Number of dimensions to load into HPL_DEV: 3

    2014-07-02 19:45:56,818 [AIF] INFO: Building SQL query for dimension Account...

    2014-07-02 19:45:56,819 [AIF] INFO: Number of alias tables in HPL_DEV: 1

    2014-07-02 19:45:56,825 [AIF] INFO: Loading dimension members: Account

    2014-07-02 19:45:56,825 [AIF] INFO: [HPLService] Info: OLU log file name is: /u01/app/oracle/product/epm/user_projects/epmdevexa1/tmp/aif_30_Account.log

    2014-07-02 19:46,571 [AIF] INFO: Property file arguments: -C /RIC:* /D:Account /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    No arguments came from the command line. Final (merged) command line:

    -C /RIC:* /D:Account /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    [Wed Jul 02 19:45:56 EST 2014] The query specified with the /RIQ switch did not correspond to a key in the "null" command properties file; it is run as a SQL query directly:

    SELECT m.NAME "Account"
    , m.PARENT "Parent"
    , m.DataStorage "Data Storage"
    , m.VALUE1 "Alias: Default"
    , m.MemberValidForPlan1 "Plan Type (FinPlan)"
    , m.MemberValidForPlan2 "Plan Type (RevPlan)"
    , m.MemberValidForPlan3 "Plan Type (CapProj)"
    , m.AccountType "Account Type"
    , m.TimeBalance "Time Balance"
    , m.VarianceReporting "Variance Reporting"
    , m.SourcePlanType "Source Plan Type"
    FROM (
      SELECT mem.NAME
      , CASE hier.PARENT WHEN '#root' THEN NULL ELSE hier.PARENT END PARENT
      , hier.CHILD_DEPTH_NUM
      , CASE mem.AccountType WHEN 'SavedAssumption' THEN 'Saved Assumption' ELSE mem.AccountType END AccountType
      , mem.TimeBalance
      , CASE mem.VarianceReporting WHEN 'NonExpense' THEN 'Non-Expense' ELSE mem.VarianceReporting END VarianceReporting
      , hier.MemberValidForPlan1
      , hier.MemberValidForPlan2
      , hier.MemberValidForPlan3
      , hier.MemberValidForCapex
      , hier.MemberValidForWorkforce
      , hier.SourcePlanType
      , CASE hier.DataStorage
          WHEN 'NeverShare' THEN 'Never Share'
          WHEN 'StoreData' THEN 'Store'
          WHEN 'ShareData' THEN 'Shared'
          ELSE hier.DataStorage
        END DataStorage
      , hier.HIERARCHY_ID
      , hd.BASE_HIERARCHY_FLAG
      , (SELECT prop.VALUE
         FROM AIF_HS_DIM_PROPERTYARRAY prop
         WHERE prop.LOADID = mem.LOADID
         AND prop.DIMENSION = mem.DIMENSION
         AND prop.PROPERTY = 'Alias'
         AND prop.NAME = mem.NAME
         AND prop.KEY = 'Default'
        ) VALUE1
      FROM AIF_HS_DIM_MEMBER mem
      INNER JOIN AIF_HS_DIM_HIERARCHY hier
      ON hier.LOADID = mem.LOADID
      AND hier.DIMENSION = mem.DIMENSION
      AND hier.CHILD = mem.NAME
      LEFT OUTER JOIN AIF_MAP_HIERARCHIES hd
      ON hd.HIERARCHY_ID = hier.HIERARCHY_ID
      WHERE mem.LOADID = 30
      AND mem.DIMENSION = 'ACCOUNT1'
    ) m
    ORDER BY m.BASE_HIERARCHY_FLAG DESC
    , m.HIERARCHY_ID
    , m.CHILD_DEPTH_NUM
    , m.PARENT
    , m.NAME

    [Wed Jul 02 19:45:56 EST 2014] Attempting to establish input rdb connection.

    B.I. source "EPM_REPO" on jdbc:oracle:thin:@server:1521/instace logged in successfully.

    [Wed Jul 02 19:45:56 EST 2014] Input RDB sign-on successfully completed.

    [Wed Jul 02 19:45:56 EST 2014] Record header fields: Account, Parent, Data Storage, Alias: Default, Plan Type (FinPlan), Plan Type (RevPlan), Plan Type (CapProj), Account Type, Time Balance, Variance Reporting, Source Plan Type

    [Wed Jul 02 19:45:56 EST 2014] Located and using dimension "Account" for loading data into application "HPL_DEV".

    [Wed Jul 02 19:45:56 EST 2014] HspOutlineLoad::dateFormatSpecified is set to false, SessionHalDateFormat stored on session: null, sessionId: 886045942

    [Wed Jul 02 19:45:59 EST 2014] com.hyperion.planning.HspRuntimeException: Exchange Rate Type must be None if the Data Type is Non-Currency, Percentage, Smart List, Date or Text.  Member: ATRDAY

    [Wed Jul 02 19:45:59 EST 2014] com.hyperion.planning.HspRuntimeException: Exchange Rate Type must be None if the Data Type is Non-Currency, Percentage, Smart List, Date or Text.  Member: ATRDAY

    [Wed Jul 02 19:46:00 EST 2014] com.hyperion.planning.HspRuntimeException: An alias with the name Trading already exists for an Account member.

    [Wed Jul 02 19:46:00 EST 2014] com.hyperion.planning.HspRuntimeException: An alias with the name Trading already exists for an Account member.

    [Wed Jul 02 19:46:00 EST 2014] Loading of dimension "Account" completed successfully.

    [Wed Jul 02 19:46:00 EST 2014] A cube refresh operation will not be performed.

    [Wed Jul 02 19:46:00 EST 2014] A security filter creation operation will not be performed.

    Planning Outline data store load process finished. 1099 data records were read, 1099 data records were processed, 1085 were accepted for loading (verify actual load with Essbase log files), 14 were rejected.

    [Wed Jul 02 19:46:00 EST 2014] Planning Outline data store load process finished. 1099 data records were read, 1099 data records were processed, 1085 were accepted for loading (verify actual load with Essbase log files), 14 were rejected.

    2014-07-02 19:46,571 [AIF] INFO: Account dimension load completed - records read: 1099, records rejected: 14, records processed: 1099.

    ENTITY

    2014-07-02 19:46,574 [AIF] INFO: Building SQL query for dimension Entity...

    2014-07-02 19:46,579 [AIF] INFO: Loading dimension members: Entity

    2014-07-02 19:46,579 [AIF] INFO: [HPLService] Info: OLU log file name is: /u01/app/oracle/product/epm/user_projects/epmdevexa1/tmp/aif_30_Entity.log

    2014-07-02 19:46:04,247 [AIF] INFO: Property file arguments: -C /RIC:* /D:Entity /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    No arguments came from the command line. Final (merged) command line:

    -C /RIC:* /D:Entity /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    [Wed Jul 02 19:46:00 EST 2014] The query specified with the /RIQ switch did not correspond to a key in the "null" command properties file; it is run as a SQL query directly:

    SELECT m.NAME "Entity"
    , m.PARENT "Parent"
    , m.DataStorage "Data Storage"
    , m.VALUE1 "Alias: Default"
    , m.MemberValidForPlan1 "Plan Type (FinPlan)"
    , m.MemberValidForPlan2 "Plan Type (RevPlan)"
    , m.MemberValidForPlan3 "Plan Type (CapProj)"
    FROM (
      SELECT mem.NAME
      , CASE hier.PARENT WHEN '#root' THEN NULL ELSE hier.PARENT END PARENT
      , hier.CHILD_DEPTH_NUM
      , CASE mem.AccountType WHEN 'SavedAssumption' THEN 'Saved Assumption' ELSE mem.AccountType END AccountType
      , mem.TimeBalance
      , CASE mem.VarianceReporting WHEN 'NonExpense' THEN 'Non-Expense' ELSE mem.VarianceReporting END VarianceReporting
      , hier.MemberValidForPlan1
      , hier.MemberValidForPlan2
      , hier.MemberValidForPlan3
      , hier.MemberValidForCapex
      , hier.MemberValidForWorkforce
      , hier.SourcePlanType
      , CASE hier.DataStorage
          WHEN 'NeverShare' THEN 'Never Share'
          WHEN 'StoreData' THEN 'Store'
          WHEN 'ShareData' THEN 'Shared'
          ELSE hier.DataStorage
        END DataStorage
      , hier.HIERARCHY_ID
      , hd.BASE_HIERARCHY_FLAG
      , (SELECT prop.VALUE
         FROM AIF_HS_DIM_PROPERTYARRAY prop
         WHERE prop.LOADID = mem.LOADID
         AND prop.DIMENSION = mem.DIMENSION
         AND prop.PROPERTY = 'Alias'
         AND prop.NAME = mem.NAME
         AND prop.KEY = 'Default'
        ) VALUE1
      FROM AIF_HS_DIM_MEMBER mem
      INNER JOIN AIF_HS_DIM_HIERARCHY hier
      ON hier.LOADID = mem.LOADID
      AND hier.DIMENSION = mem.DIMENSION
      AND hier.CHILD = mem.NAME
      LEFT OUTER JOIN AIF_MAP_HIERARCHIES hd
      ON hd.HIERARCHY_ID = hier.HIERARCHY_ID
      WHERE mem.LOADID = 30
      AND mem.DIMENSION = 'ENTITY1'
    ) m
    ORDER BY m.BASE_HIERARCHY_FLAG DESC
    , m.HIERARCHY_ID
    , m.CHILD_DEPTH_NUM
    , m.PARENT
    , m.NAME

    [Wed Jul 02 19:46:00 EST 2014] Attempting to establish input rdb connection.

    B.I. source "EPM_REPO" on jdbc:oracle:thin:@server:1521/instace logged in successfully.

    [Wed Jul 02 19:46:00 EST 2014] Input RDB sign-on successfully completed.

    [Wed Jul 02 19:46:00 EST 2014] Record header fields: Entity, Parent, Data Storage, Alias: Default, Plan Type (FinPlan), Plan Type (RevPlan), Plan Type (CapProj)

    [Wed Jul 02 19:46:00 EST 2014] Located and using dimension "Entity" for loading data into application "HPL_DEV".

    [Wed Jul 02 19:46:00 EST 2014] HspOutlineLoad::dateFormatSpecified is set to false, SessionHalDateFormat stored on session: null, sessionId: 872481248

    [Wed Jul 02 19:46:04 EST 2014] Loading of dimension "Entity" completed successfully.

    [Wed Jul 02 19:46:04 EST 2014] A cube refresh operation will not be performed.

    [Wed Jul 02 19:46:04 EST 2014] A security filter creation operation will not be performed.

    Planning Outline data store load process finished. 881 data records were read, 881 data records were processed, 881 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

    [Wed Jul 02 19:46:04 EST 2014] Planning Outline data store load process finished. 881 data records were read, 881 data records were processed, 881 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

    2014-07-02 19:46:04,247 [AIF] INFO: Entity dimension load completed - records read: 881, records rejected: 0, records processed: 881.

    PRODUCT

    2014-07-02 19:46:04,249 [AIF] INFO: Building SQL query for dimension Product...

    2014-07-02 19:46:04,253 [AIF] INFO: Loading dimension members: Product

    2014-07-02 19:46:04,253 [AIF] INFO: [HPLService] Info: OLU log file name is: /u01/app/oracle/product/epm/user_projects/epmdevexa1/tmp/aif_30_Product.log

    2014-07-02 19:46:05,556 [AIF] INFO: Property file arguments: /C /RIC:* /D:Product /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    No arguments came from the command line. Final (merged) command line:

    /C /RIC:* /D:Product /RIU:* /RIR:* /RID:* /RIP:* /RIQ:* /IR

    [Wed Jul 02 19:46:04 EST 2014] The query specified with the /RIQ switch did not correspond to a key in the "null" command properties file; it is run as a SQL query directly:

    SELECT m.NAME "Product"
    , m.PARENT "Parent"
    , m.DataStorage "Data Storage"
    , m.VALUE1 "Alias: Default"
    FROM (
      SELECT mem.NAME
      , CASE hier.PARENT WHEN '#root' THEN NULL ELSE hier.PARENT END PARENT
      , hier.CHILD_DEPTH_NUM
      , CASE mem.AccountType WHEN 'SavedAssumption' THEN 'Saved Assumption' ELSE mem.AccountType END AccountType
      , mem.TimeBalance
      , CASE mem.VarianceReporting WHEN 'NonExpense' THEN 'Non-Expense' ELSE mem.VarianceReporting END VarianceReporting
      , hier.MemberValidForPlan1
      , hier.MemberValidForPlan2
      , hier.MemberValidForPlan3
      , hier.MemberValidForCapex
      , hier.MemberValidForWorkforce
      , hier.SourcePlanType
      , CASE hier.DataStorage
          WHEN 'NeverShare' THEN 'Never Share'
          WHEN 'StoreData' THEN 'Store'
          WHEN 'ShareData' THEN 'Shared'
          ELSE hier.DataStorage
        END DataStorage
      , hier.HIERARCHY_ID
      , hd.BASE_HIERARCHY_FLAG
      , (SELECT prop.VALUE
         FROM AIF_HS_DIM_PROPERTYARRAY prop
         WHERE prop.LOADID = mem.LOADID
         AND prop.DIMENSION = mem.DIMENSION
         AND prop.PROPERTY = 'Alias'
         AND prop.NAME = mem.NAME
         AND prop.KEY = 'Default'
        ) VALUE1
      FROM AIF_HS_DIM_MEMBER mem
      INNER JOIN AIF_HS_DIM_HIERARCHY hier
      ON hier.LOADID = mem.LOADID
      AND hier.DIMENSION = mem.DIMENSION
      AND hier.CHILD = mem.NAME
      LEFT OUTER JOIN AIF_MAP_HIERARCHIES hd
      ON hd.HIERARCHY_ID = hier.HIERARCHY_ID
      WHERE mem.LOADID = 30
      AND mem.DIMENSION = 'DIM1'
    ) m
    ORDER BY m.BASE_HIERARCHY_FLAG DESC
    , m.HIERARCHY_ID
    , m.CHILD_DEPTH_NUM
    , m.PARENT
    , m.NAME

    [Wed Jul 02 19:46:04 IS 2014] Trying to establish the connection of rdb entry.

    Source B.I. "EPM_REPO" on jdbc:oracle:thin:@server:1521/instace logged in successfully.

    [Wed Jul 02 19:46:04 IS 2014] Sign at the entrance to RDB successfully completed.

    [Wed Jul 02 19:46:04 IS 2014] Record header fields: Parent, product, Alias, data storage: default

    [Wed Jul 02 19:46:04 IS 2014] Find and use the dimension 'Product' for the loading of the data in the application 'HPL_DEV '.

    [Wed Jul 02 19:46:04 IS 2014] HspOutlineLoad::dateFormatSpecified is set to false, SessionHalDateFormat stored on session: null, sessionId: 677666057

    [Wed Jul 02 19:46:04 IS 2014] java.lang.RuntimeException: com.hyperion.planning.DuplicateObjectException: an object with the name of WULC1 already exist.

    [Wed Jul 02 19:46:04 IS 2014] java.lang.RuntimeException: com.hyperion.planning.DuplicateObjectException: an object with the name of WULC1 already exist.

    [Wed Jul 02 19:46:05 EST 2014] com.hyperion.planning.InvalidDimensionMemberNameException: Dimension member name 'ORDER' is a Report Script command.

    [Wed Jul 02 19:46:05 EST 2014] Dimension "Product" load opened successfully.

    [Wed Jul 02 19:46:05 EST 2014] Performing cube refresh...

    [Wed Jul 02 19:46:05 EST 2014] An error occurred during the cube refresh operation: com.hyperion.planning.HspRuntimeException: Unable to connect to Hyperion Essbase.

    Check that Hyperion Essbase is running and check your network connections.

    [Wed Jul 02 19:46:05 EST 2014] Unable to obtain analytic information and/or perform a data load: An error occurred during the cube refresh operation: com.hyperion.planning.HspRuntimeException: Unable to connect to Hyperion Essbase.

    Check that Hyperion Essbase is running and check your network connections.

    [Wed Jul 02 19:46:05 EST 2014] Trace information: com.hyperion.planning.utils.HspOutlineLoad::parseAndLoadInputFile:1912, com.hyperion.planning.utils.HspOutlineLoad::halAdapterInfoAndLoad:304, com.hyperion.planning.utils.HspOutlineLoad::loadAndPrintStatus:4667, com.hyperion.planning.utils.HspOutlineLoad::outlineLoadAsyncImpl:3752, com.hyperion.planning.utils.HspOutlineLoad::outlineLoad:3692, com.hyperion.planning.utils.HspOutlineLoad::outlineLoad:3810, com.hyperion.aif.webservices.HPLService::loadMetadata:500, sun.reflect.NativeMethodAccessorImpl::invoke0:-2, sun.reflect.NativeMethodAccessorImpl::invoke:39, sun.reflect.DelegatingMethodAccessorImpl::invoke:25, java.lang.reflect.Method::invoke:597, com.hyperion.aif.servlet.ODIServlet::doPost:97, javax.servlet.http.HttpServlet::service:727, javax.servlet.http.HttpServlet::service:820, weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction::run:227, weblogic.servlet.internal.StubSecurityHelper::invokeServlet:125, weblogic.servlet.internal.ServletStubImpl::execute:301, weblogic.servlet.internal.TailFilter::doFilter:26, weblogic.servlet.internal.FilterChainImpl::doFilter:56, oracle.security.jps.ee.http.JpsAbsFilter$1::run:119, oracle.security.jps.util.JpsSubject::doAsPrivileged:324, oracle.security.jps.ee.util.JpsPlatformUtil::runJaasMode:460, oracle.security.jps.ee.http.JpsAbsFilter::runJaasMode:103, oracle.security.jps.ee.http.JpsAbsFilter::doFilter:171, oracle.security.jps.ee.http.JpsFilter::doFilter:71, weblogic.servlet.internal.FilterChainImpl::doFilter:56, oracle.dms.servlet.DMSServletFilter::doFilter:163, weblogic.servlet.internal.FilterChainImpl::doFilter:56, weblogic.servlet.internal.RequestEventsFilter::doFilter:27, weblogic.servlet.internal.FilterChainImpl::doFilter:56, weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction::wrapRun:3730, weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction::run:3696, weblogic.security.acl.internal.AuthenticatedSubject::doAs:321, weblogic.security.service.SecurityManager::runAs:120, weblogic.servlet.internal.WebAppServletContext::securedExecute:2273, weblogic.servlet.internal.WebAppServletContext::execute:2179, weblogic.servlet.internal.ServletRequestImpl::run:1490, weblogic.work.ExecuteThread::execute:256, weblogic.work.ExecuteThread::run:221

    [Wed Jul 02 19:46:05 EST 2014] Planning outline data store load process finished. 707 data records were read, 707 data records were processed, 674 were accepted for loading (verify actual load with Essbase logs), 33 were rejected.

    2014-07-02 19:46:05,556 INFO [AIF]: Product dimension load finished - records read: 707, records processed: 707, records rejected: 33.

    2014-07-02 19:46:05,558 INFO [AIF]: HPL metadata load completed with return status: true

    2014-07-02 19:46:05,653 INFO [AIF]: End of FDMEE process, process ID: 30
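The 33 rejected records in the log above stem from two causes: duplicate member names (the DuplicateObjectException for WULC1) and member names that collide with Essbase report script commands (the exception for 'ORDER'). A minimal pre-load check could be sketched in shell; the member list and the keyword subset here are illustrative, not the full Essbase reserved-word list:

```shell
# Illustrative member list; in practice this comes from the extracted metadata file.
printf '%s\n' WULC1 WULC1 ORDER Widget1 > /tmp/members.txt

# 1. Duplicate member names (the cause of the DuplicateObjectException above)
sort /tmp/members.txt | uniq -d | sed 's/^/duplicate: /'

# 2. Names that collide with Essbase report script commands (illustrative subset)
for kw in ORDER ROW COLUMN PAGE; do
  grep -ixq "$kw" /tmp/members.txt && echo "reserved: $kw" || true
done
```

Running this on the sample list prints `duplicate: WULC1` and `reserved: ORDER`; members flagged this way should be renamed or filtered before the outline load.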

    Hi all

    It works now.

    The Oracle guy suggested changing some variables in setCustomParamErpIntegrator.sh, and it worked.

    As I got some time, I tried to track down the actual change, and it turned out to be LD_LIBRARY_PATH, which must include Planning's LD_LIBRARY_PATH.
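The fix described above can be sketched as a shell fragment; the EPM_ORACLE_HOME default and the Planning library subdirectory below are assumptions, and the real paths depend on the install:

```shell
# Hypothetical paths -- adjust EPM_ORACLE_HOME and the Planning lib directory
# to your environment before adding this to setCustomParamErpIntegrator.sh.
EPM_ORACLE_HOME=${EPM_ORACLE_HOME:-/u01/Oracle/Middleware/EPMSystem11R1}
PLANNING_LIB=$EPM_ORACLE_HOME/products/Planning/lib64

# Prepend the Planning libraries, keeping any existing LD_LIBRARY_PATH entries.
export LD_LIBRARY_PATH=$PLANNING_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

The `${LD_LIBRARY_PATH:+:...}` expansion only appends the separator when the variable is already set, so the path never ends up with a stray leading or trailing colon.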

    Francisco Amores

    Cheers,
    Safiya

  • Convert an EBS single instance to RAC

    Hi all

    EBS R12.2.4

    11gR2

    OL 6.5

    I want to convert our EBS instance to RAC.

    Can you share documentation on how to convert a single-instance EBS R12.2.4 database to RAC?

    Do I need to convert both the apps tier and the DB tier, or the DB tier only?

    Kind regards

    JC

    Hi Vishnu,

    I ran the 2nd step but received this error:

    $ rman target /

    Recovery Manager: release 11.2.0.3.0 - Production on Tue Feb 23 11:42:11 2016

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-00554: initialization of internal recovery manager package failed

    RMAN-04005: error from target database:

    ORA-21561: OID generation failed
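ORA-21561 at `rman target /` connect time is commonly an environment problem rather than an RMAN problem, for example the server's hostname not resolving via /etc/hosts or DNS, or ORACLE_SID not being set for the target instance. A quick sanity check might look like the following sketch, which uses a hypothetical host name and a sample hosts file rather than touching the real /etc/hosts:

```shell
# Simulated hostname-resolution check (hypothetical host name and file content).
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
192.168.1.10 ebsdb01 ebsdb01.example.com
EOF

host=ebsdb01   # on the real server, use: host=$(hostname)
if grep -qw "$host" /tmp/hosts.sample; then
  echo "resolves"
else
  echo "missing: add $host to /etc/hosts"
fi

# The target instance must also be identified in the environment.
echo "ORACLE_SID=${ORACLE_SID:-<unset>}"
```

On the real server, substitute `/etc/hosts` for the sample file and the output of `hostname` for the literal host; if the name is missing, adding it usually clears the ORA-21561.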

    Kind regards
