Populating parent entities - calculation vs. aggregation

I'm having a problem tying out data between two hierarchies that contain exactly the same base members.

At the top parent of our fiscal hierarchy, for accounts that use rules to calculate an opening balance, the opening balances are different from those in our main management hierarchy.

This rule populates the opening balances (based on the prior year's ending balances):
HS. EXP "C3 #FA_PY_END IS C3 #FIXASST. "P #Last.Y #Prior.
HS. EXP "C3 #DE_PY_END IS C3 #DEPREC. "P #Last.Y #Prior.
HS. EXP "C3 #TO_PY_END IS C3 #TOOL. "P #Last.Y #Prior.
HS. EXP "C3 #PPD_TO_PY_END = PPD_TOOL # C3." "P #Last.Y #Prior.

A very simple rule. I checked the rules file again and I do not see any conditions that would exclude specific entities from it.

The rule does NOT specify that it should run only at the base (child) level.

To help solve this, here is my question:

How does the parent-level data get populated: are the parents an aggregation of the values calculated at the base entities, or does the rule itself populate the balances at the parent entity by retrieving the parent's prior year ending balances?

Mark,
In the absence of any Entity or Value dimension conditions of the kind that would normally surround rules like these, the rules will execute in every entity - base and parent alike - and in every Value dimension member where Sub Calculate runs (the <Entity Currency> and <Parent Currency> members, [Proportion] and [Elimination], plus any of the four adjustment members where journals have been posted).

The vast majority of rules like these should be written to run in (base entities) AND (<Entity Currency> OR <Entity Curr Adjs>, or HS.Value.IsTransCur or HS.Value.IsTransCurAdj). The effect is that parent entities do not recalculate these amounts; they simply receive the consolidated amounts from the calculations that occurred in the layers below. That is better for performance, because you calculate only where you need to.
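
For illustration, here is a minimal sketch of the kind of guard described above, wrapped around the opening-balance rule from the question. The wrapper itself is an assumption about the usual pattern, not code quoted from this thread; HS.Entity.IsBase, HS.Value.IsTransCur and HS.Value.IsTransCurAdj are standard HFM rule functions.

Sub Calculate()
    ' Skip parent entities entirely: they should receive the consolidated
    ' result of the base-level calculations instead of recalculating it.
    If Not HS.Entity.IsBase("", "") Then Exit Sub

    ' A full rules file would typically also restrict which Value members
    ' this runs in, e.g. <Entity Currency>/<Entity Curr Adjs> plus the
    ' translated currency members (HS.Value.IsTransCur / IsTransCurAdj).
    HS.Exp "C3#FA_PY_END = C3#FIXASST.P#Last.Y#Prior"
    HS.Exp "C3#DE_PY_END = C3#DEPREC.P#Last.Y#Prior"
    HS.Exp "C3#TO_PY_END = C3#TOOL.P#Last.Y#Prior"
    HS.Exp "C3#PPD_TO_PY_END = C3#PPD_TOOL.P#Last.Y#Prior"
End Sub

With that guard in place, the parent-level numbers come only from consolidation of the base entities.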

A few rules also need to run in [Proportion] and [Elimination], but let's not complicate things more than we need to in order to answer your question.

-Chris

Tags: Business Intelligence

Similar Questions

  • Translation of an HFM Entity Currency Adjustment and roll-up to parent entities

    Hello

    I hope someone can help me understand why an Entity Currency Adjustment amount does not translate and roll up to the parent entities for the two accounts used in a journal entry. See below for more details:

    Journal entry

    After running a "Consolidate All With Data", I see the following results in Smart View:

    I was expecting to see the translated journal amount rolling up to the parent entities as indicated below in red (I manually added the red amounts for demonstration):

    In addition, the translation done by the system uses the EOMRate instead of the PREVEOMRate exchange rate. Account 0115 is a P&L-type account and should be translated using PREVEOMRate.

    I really appreciate any help you can provide.

    Thank you

    Carmen G.

    Hi Carmen,

    I'm glad that you found the solution.

    However, I wouldn't consider the SwitchTypeForFlow flag to be a problem as such, but now you have a better understanding of the issue.

    If you have no further questions, could you please close the thread, as it is getting quite long?

    Cheers,

    Thanos

  • Input data to the parent entity

    Hello

    We are planning to create an "input" rule for the parent entities, for one scenario and one account, and we have two questions:

    1. My assumption is that when we write the rule, by default it allows input for all parent entities and we do not need to put any condition on the POV. Correct?

    2. The account should NOT be marked as "IsCalculated" and should be "IsConsolidated". Correct?

    Thanks in advance.

    Hello

    First question: I don't think it targets parent entities by default, or any default POV for that matter. We specify each POV we want the input rule to execute on - entities, accounts, custom dimensions, periods, years, etc. Just be aware that once you run an input rule on a parent entity, it no longer uses the numbers loaded or posted in the base entities. This is a case where the sum of the children's [Contribution Total] does NOT equal the <Entity Currency> of the parent.
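
    For illustration only, here is a minimal sketch of such a fenced rule, assuming the "input" rule is an HS.Exp inside Sub Calculate; the entity, scenario and account names below are invented, not taken from this thread:

    Sub Calculate()
        ' Hypothetical example: write a value directly at one parent entity,
        ' for one scenario and one account only; nothing else is touched.
        If HS.Entity.Member = "EuropeParent" And HS.Scenario.Member = "Budget" Then
            HS.Exp "A#HeadcountPlan = 100"
        End If
    End Sub

    Because the HS.Exp only runs where those conditions are true, no other entity or scenario is affected by the rule.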

    Second question: if the account is flagged IsConsolidated, this should still work when writing only at the parent entity, since it does not take into account what is in the child entities. In addition, we do have accounts with input rules on them that are not flagged as IsCalculated, and it works very well.

    Dany

  • IR: aggregation results in the report footer

    Hello!

    This is a quote from 'Beginning Oracle Application Express 4.2', page 174 (on aggregations in an IR):

    "The results are displayed at the end of the report."

    Is there a simple method to print the aggregation results in the report footer on each page?

    I asked - and I answered it myself:

    To do this I created on-demand processes that return the aggregation results (via APEX_IR_PKG) and call them with an AJAX request in the IR's "After Refresh" event.

    There is a detailed how-to with a sample at http://devsonia.ru/2013/11/14/oracle-apex-aggregation-in-interactive-report-on-each-page-en/.

  • Locking entities in HFM

    Hi all

    I intend to start locking entities in an HFM application with 8 scenarios, 9 years of data and 600 base entities.

    I am trying to understand what the best approach is. Should we lock only base entities, should we lock both base entities and parents, or should we not lock anything at all? For the actual task, we will use an API to promote and lock everything for all years up to 2012.

    Any ideas, experiences or issues I should think about?

    Kind regards

    Thanos

    Post edited by: thanasis_agr

    I don't lock the parent entities; that way, if there is an organization change where a base entity moves from one parent to another, a consolidation will pick up the change after the fact without having to unlock the parents. As our system controls prevent HFM parent-currency entries or journals being posted to the parents directly, we don't have to worry about data changing, unless it is an expected change made through a metadata adjustment.

    Regards

    JTF

  • FDM drill-through reflected in HFM at the parent level

    We are developing a new HFM application (11.1.2.1) and migrating historical data to the new application with a new FDM instance. In the new application, examining the various intersections in Smart View shows that drill-through is enabled. This raises several questions for us:

    1.) We did not intend to enable the drill region

    After scouring the forums, it seems that we will need to delete and reload the FDM data to remove this drill-through reference from HFM. That would force us to re-validate 6 years of historical data... Is there a way to remove the drill-through reference directly from the database?

    2.) Smart View indicates that drill-through is available at NON-BASE LEVEL intersections.

    For example, when we look at one of our top-level parent entities, at the top of C1 and the other top members, we see pink drill-through cells on certain accounts. This is obviously incorrect; it seems that some kind of index is "stuck", hence the error.

    Our FDM application is currently configured the same as the production environment:
    Drill region support is enabled
    SSO is disabled

    In Smart View against the production environment, there are no drill-through intersections available. That is the behavior we want in the development environment as well.

    Any thoughts on what could be going on here?

    Thanks in advance!
    S

    1.) We did not intend to enable the drill region

    After scouring the forums, it seems that we will need to delete and reload the FDM data to remove this drill-through reference from HFM. That would force us to re-validate 6 years of historical data... Is there a way to remove the drill-through reference directly from the database?

    Yes, the drill-through data is stored in the HFM application database, in the ERPI and ERPI_URL tables.

    If you simply don't want any drill-through on the HFM data, you could truncate the data in these two tables.

    Remember that if this is done, drill-through for every year of data will be removed from the HFM application.

    2.) Smart View indicates that drill-through is available at NON-BASE LEVEL intersections.

    For example, when we look at one of our top-level parent entities, at the top of C1 and the other top members, we see pink drill-through cells on certain accounts. This is obviously incorrect; it seems that some kind of index is "stuck", hence the error.

    Our FDM application is currently configured the same as the production environment:
    Drill region support is enabled
    SSO is disabled

    In Smart View against the production environment, there are no drill-through intersections available. That is the behavior we want in the development environment as well.

    Any thoughts on what could be going on here?

    This is normal. FDM will load the drill region to MEMBERALL in HFM, which results in parent cells showing drill-through to FDM as available, but the drill will only work at base level, since FDM only loads data to base-level intersections.

    Published by: user735469 on May 20, 2013 13:25

  • CALCTASKDIMS / CALCPARALLEL / Multi-CPU calculation / Performance

    Hello

    I have been working with Essbase for a long time now, and I am facing a new question about multi-CPU calculation.

    I used to work on 4/8-core CPUs, but now we are building a new Planning application on a 32-core server with 32 GB of RAM.

    I would like to optimize the aggregation calculation.

    Here is my hierarchy:

    Dim           Type     Members   Stored
    -             Dense    158       72
    Scenarios     Dense    6         5
    Period        Dense    4         3
    Market        Sparse   389       113   (Agg)
    Distributor   Sparse   360       359   (Agg)
    Category      Sparse   315       230   (Agg)
    Product       Sparse   2241      2228  (Agg)
    Scenario1     Sparse   2         -
    Version       Sparse   8         7
    Year          Sparse   12        12
    Currency      Sparse   65        63

    My Essbase settings are like this:
    CALCPARALLEL 24

    CALCCACHEHIGH 199000000
    CALCCACHEDEFAULT 10000000
    CALCCACHELOW 5000000

    CALCLOCKBLOCKHIGH 1000000
    CALCLOCKBLOCKDEFAULT 100000
    CALCLOCKBLOCKLOW 2500

    Block size (B): 8640
    Block density: 0.12%

    I have an aggregation calculation that takes 46.2 seconds and uses only 4% of the CPU.

    At the top of the script, I added:
    SET CALCTASKDIMS 2; -> 49.178 seconds, 8% CPU
    SET CALCTASKDIMS 3; -> 59.613 seconds, 12% CPU
    SET CALCTASKDIMS 4; -> 53.156 seconds, 12% CPU
    SET CALCTASKDIMS 5; -> 64.159 seconds, 12% CPU

    I must admit that I do not understand why performance decreases when I use more CPUs...

    Any suggestion is welcome,

    Thank you

    David L

    Published by: Spicer on 6 may 2013 04:07

    Published by: Spicer on 6 may 2013 07:06

    Look - the performance problem for this cube will be almost entirely related to the I/O on existing blocks. Other than the proposed script improvements, your only hope is to speed up I/O.

    I assume that your index cache is large enough to hold the entire index, and that your data cache is at least as big (it doesn't actually have to be, which is good).

    Try creating a RAM disk and moving the cube onto it to see if it gets faster.

    Finally - are you really getting the hardware you think you are? Is it a virtual machine? What else is running on it?

    Finally, remember that it takes one woman 9 months to give birth to a baby; 9 women sharing the work will not reduce it to 1 month. That is basically why cranking up the calc parallelism does not help and performance decreases.

    Can you turn informational messages on and see how many calculations are carried out? It does not look like a large number of blocks are being created (check the sparse calculations in the info messages), which makes me more and more suspicious about the index cache.

  • New entity tree

    Hi guys,

    I need to add an additional entity tree to an existing application (there have been a lot of changes in the company's structure since the app was built). This is the first time I am doing this, so I wanted to know if you have any suggestions to share. My main concern is the rules...
    I really appreciate your comments.

    Thank you!

    Read

    The rules will certainly be something to watch. Search for references to parent entities (whether through hard-coded references, member lists, or generic parent references).

    Consider whether it is necessary to keep the existing hierarchy. The number of entities in an application is one of the biggest drivers of consolidation time (second only to the rules in most cases).

    If you get rid of the original hierarchy, you will probably run into a problem with Parent Currency Adjustments when you move base entities under the new parents. PCAs are specific to the parent.child combination they are booked against, so changing parents causes integrity problems (the metadata scan will prevent you from loading. NEVER get around that by disabling the integrity check; it will only get worse down the road).

    If you don't have PCAs and you don't need two hierarchies, I suggest getting rid of the original to avoid confusion and to improve performance.

    If that isn't an option, or if you want to take the easy route, simply create the second hierarchy. I suggest you create a new parent entity above both hierarchies so that you can consolidate both at once. You will also need to know whether the old hierarchy should be maintained going forward as you create new entities, or whether it will remain the hierarchy as of date X and never be updated again.

  • Parent-child hierarchy scenario

    Hi friends,

    I'm working on a parent-child hierarchy using the link below with the sample data.

    http://www.Oracle.com/WebFolder/technetwork/tutorials/OBE/FMW/bi/bi11115/biadmin11g_02/biadmin11g.htm_

    I tried to implement the same hierarchy using my local data instead of referring to the sample data.

    I have a query that returns the manager as well as the position for each employee, shown below:
    select distinct papf.person_id,  papf.full_name "Employee Name", supf.person_id "Manager Id", supf.full_name "Manager Name", pj.name "Position Name"
    from per_all_people_f papf, per_all_assignments_f paaf, per_all_people_f supf, per_jobs pj
    where papf.person_id = paaf.person_id and supf.person_id = paaf.supervisor_id and paaf.job_id = pj.job_id
    and trunc(sysdate) between paaf.effective_start_date and paaf.effective_end_date and 
    trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
    I'm looking to implement the same result in my BI repository with a parent-child hierarchy.

    So far, I have imported three tables into my physical layer:
    per_all_people_f------------Dimension
    per_all_assignments_f-----Fact
    per_jobs---------------------Dimension
    To create a parent-child hierarchy in BI, we need a separate parent-child relationship table which consists of four columns: ancestorkey, memberkey, distance, and leaf.

    Of these columns, I understand the meaning of:
    Ancestorkey --> Manager id
    Memberkey --> Employee id
    But I could not work out the meaning of the distance column (the distance between the two key columns?) or of the leaf column (which leaf member does it refer to?).

    I also looked at the link below, but could not get a feel for it either:

    http://www.rittmanmead.com/2010/08/Oracle-BI-EE-11g-Parent-Child-Hierarchies-Differing-Aggregations/+.

    How do I build the parent-child table for BI from my three HRMS tables above?

    Thank you

    Kind regards
    Saro

    Hi friends,

    I think I have found a link for this:

    http://prasadmadhasi.com/2011/11/15/hierarchies-parent-child-hierarchy-in-OBIEE-11g/

    Let me try this and will update accordingly.

    Thanks for your point of view.

    Kind regards
    Saro

  • Why do we use [Parent], [Parent Adjs] and [Parent Total] in the Value dimension in HFM?

    Hi Experts

    Can someone please give me an explanation of why we use [Parent], [Parent Adjs] and [Parent Total] in the Value dimension in HFM?

    Regards
    Smilee

    Hello
    As a quick answer: when you post a journal to <Entity Curr Adjs>, the adjustment affects all parent entities. By contrast, if you post a journal adjustment to [Parent Adjs], then you must also select which parent entity should be affected by the adjustment (all the other parents will not be affected). For this discussion to be relevant, your entity must be shared under several parents; if it has a single parent, this does not apply. Note that, to make this work, you must have AllowAdjFromChildren enabled.

  • Aggregation at the lowest level in the cube

    Hi guys

    I designed a simple test cube with a single dimension (both deployed as MOLAP).

    The PRODUCT dimension consists of three levels:
    -Group
    -Category
    -Product_detail

    Table PRODUCT_SRC to load the PRODUCT dimension:

    PR_GROUP_NAME  PR_GROUP_ID  PR_CATEGORY_NAME  PR_CATEGORY_ID  PR_DETAIL_NAME   PRODUCT_DETAIL_ID

    dairy          1000         yogurts           1000000         yoghurt_1        1000000000
    dairy          1000         yogurts           1000000         yoghurt_2        1000000001
    dairy          1000         yogurts           1000000         yoghurt_3        1000000002
    candy          1001         cookies           1000001         cookies_1        1000000003
    candy          1001         cookies           1000001         cookies_2        1000000004
    candy          1001         cookies           1000001         cookies_3        1000000005
    drinks         1002         juice             1000002         juice_1          1000000006
    drinks         1002         mineral water     1000003         mineral_water_1  1000000007
    drinks         1002         energy drink      1000004         energy_drink_1   1000000008


    The SALES cube has a measure:
    -Value_of_sales (sum aggr)

    Table SALES_SRC to load the SALES cube:

    ID  PROD_ID     VALUE

    2   1000000002  1236
    3   1000000006  115
    4   1000000005  1697
    5   1000000004  12
    6   1000000008  168
    7   1000000005  1984
    8   1000000004  9684
    9   1000000002  84
    10  1000000007  8
    11  1000000006  498
    12  1000000008  4894
    13  1000000004  4984
    14  1000000003  448
    15  1000000004  4489
    16  1000000001  13
    17  1000000004  879
    18  1000000006  896
    20  1000000007  4646

    I created the PRODUCT dimension and a mapping which loads the data in the dimension. It worked perfectly. The hierarchy has been created as I expected.

    Then I created a mapping that should load the data into the SALES cube. It is a very, very simple mapping - there are only two objects on the canvas:

    - table SALES_SRC
    and
    - cube SALES

    and two connections:

    - from SALES_SRC.VALUE to SALES.VALUE_OF_SALES
    - from SALES_SRC.PROD_ID to SALES.PRODUCT_NAME

    Then I deployed everything and ran the mapping, which loaded the cube. But in my opinion the cube was not populated properly, because no aggregation was performed at the lowest level of the product hierarchy - there was only the value from the first occurrence of a given product. I mean:

    In SALES_SRC we have, for example:

    ID  PROD_ID     VALUE

    2   1000000002  1236
    9   1000000002  84

    For me, the value in the cube at the PRODUCT_DETAIL level for yoghurt_3 should be 1236 + 84 = 1320, but it is only 1236 - the first occurrence of this product in SALES_SRC.


    Why was the data not aggregated at the lowest level of the PRODUCT dimension hierarchy - is this just the way OWB does these things?

    Should I aggregate the data manually before loading it into the cube (i.e. use an Aggregator operator to aggregate the data at the lowest level)? If so, what about incremental loading of cube data (would the old value simply be replaced by the new one rather than added to it in the cube)?

    In other vendors' data warehouse solutions, the cube in such a situation is loaded the way I expected here.

    I don't really know what to do. I would really appreciate your help.

    Thanks in advance

    Peter

    If you have several fact rows for identical dimension member keys, then you need to handle this by aggregating, or by selecting the first/last value, etc.

    See you soon
    David

  • Updating metadata in HFM 11.1.2.3

    Hello

    I am updating the metadata in an HFM application (version 11.1.2.3, Classic application). I extracted the metadata from the HFM workspace and am updating dimension members using the HFM desktop client.

    What will the impact on the data be if I change the hierarchy in the application metadata file (I am not removing any members, just creating new members and changing the hierarchy of existing members)? Will it erase data from the HFM application after I load the updated metadata file through the workspace?

    Thank you

    Michel K

    Hello. Your base-level data will not be affected if you do not remove base-level members. For parent members, if you change their descendants, it depends. For account or custom dimensions, everything is recalculated in memory on demand, so the change is retroactive. For entities, it depends on whether you use Org by Period and/or lock parent entities. If one or both of those apply, entity hierarchy changes generally will not affect the data. If you use neither of them, the hierarchy change takes effect, but you will need to reconsolidate the data in order for the changes to be aggregated.

    Eric

  • Cannot lock the entity

    Hi all

    I'm having a problem locking data.

    (a) Received error message: "Cannot complete this action because calculations, translations or consolidations need to be run."

    (b) I was able to lock the entity in the previous years except for January 2011.

    (c) I consolidated with data and ran a forced calculation on the entity, but that did not help.


    Has anyone else run into similar issues?

    Thank you
    T.T.

    We encounter this problem from time to time. I don't know what causes it, but to resolve it we go to the [None] member of the Value dimension and, for some reason, we find several parent entities that still have a status of CN. The only way we can get the lock to work is to do a force calculate on each of these individual parents and then lock them as we work our way upwards. Not sure if this is your problem, but I would maybe start by looking at the different levels of the Value dimension. If anyone has suggestions for a better way to solve this, we would be happy to try them!
    Thank you.

  • Limits of process management

    Hi guys,

    I have just activated and tested process management in HFM 11.1.1.3. Is it true that once base users have calculated their data and a level-1 reviewer promotes the data, these users can no longer see the data in the web grids?

    I would like all users (RU) to be able to view the data at any time, but not change it once it is above their access level. Is this possible?

    Thanks a lot for your help or thoughts!


    (btw, the hierarchy is like this: RU-> BU-> CORP)

    That shouldn't be the case. Users should still have access to data at review levels above their role - it simply becomes read-only for them.

    Please keep in mind that process management is by entity. So if a user promotes RU from review level 1 to level 2, this has no bearing at all on BU or Corp. Parent entities cannot be at a higher review level than their children, but they can be at a lower one. I guess that to help you, I would need to understand exactly which data they cannot see.

    Finally, this is independent of phased submissions, which take these concepts and add more complexity.

    What message do they get in the grids? NoAccess?

    -Chris

  • Nested entity data collection on the user interface

    Hi team,

    We are facing challenges with nested entities when collecting data on the interview screens.

    The scenario is like this: we need to collect information for several owners; within each owner's information we collect information for several principal owners; and within each principal owner's information we collect several additional licenses.

    I created an entity (owner), created a child entity under it (principal owner), and created one more child entity under that (license).

    When I try to collect the information in the above manner, OPM throws an error saying: you cannot nest entities.

    Please provide your suggestions or solution to this problem.

    Thank you

    Viv

    Parent entities and their children cannot be collected on the same screen. "Collected" here means creating instances of the entity (using Add/Remove).

    Information about parents and children can both be collected on the same screen. "Information" here means attributes of entity instances that you have already created.

    Davin.
