Explanation of dimension value HFM

Hello

Is there any specific article/document explaining the HFM Value dimension? I am looking for information explaining how data is transformed from loading/input through to the translation of the data, with respect to the Value dimension...

Thanks, Thanos!

For more info, the Administrator's Guide actually covers this quite well now.

http://docs.oracle.com/cd/E40248_01/epm.1112/hfm_admin.pdf

See pages 213 and 214 for the full consolidation process. The bottom of page 211 lists the sequence of events that occur during a consolidation, which is relevant to how the Value dimension members are populated.

Tags: Business Intelligence

Similar Questions

  • Why we use [Parent], [Parent Adjs], and [Parent Total] in the Value dimension in HFM

    Hi Experts

    Can someone please give me an explanation of why we use [Parent], [Parent Adjs], and [Parent Total] in the Value dimension in HFM?

    Regards,
    Smilee

    Hello
    As a quick response: when you post a journal to [Entity Curr Adjs], the adjustment affects all parent entities. By contrast, if you post a journal to [Parent Adjs], then you must also select which parent entity should be affected by the adjustment (all the other parents will not be affected). For this distinction to be relevant, your entity must be shared under several parents; if it has a single parent, this does not apply. Note that, to make this work, you must have AllowAdjFromChildren enabled.

  • List of members of generic dimension in HFM 11.1.2.2.300

    What is the syntax for a member list on a generic dimension in HFM? Writing it for the generic dimension the 'old' way, which works for all the other dimensions, does not work.
    The dimension is called C3Network; dimension alias: AllNetworks; short name: NTW.

    This is what I tried:
    Sub EnumMemberLists()
        Dim aC3NetworkLists(1)

        Select Case HS.Dimension
            Case "C3Network"
                aC3NetworkLists(1) = "Test"
                HS.SetMemberLists aC3NetworkLists
        End Select ' Select Case HS.Dimension
    End Sub ' EnumMemberLists()

    Sub EnumMembersInList()
        Select Case HS.Dimension
            Case "C3Network"
                Select Case HS.MemberListID
                    Case 1 ' Test
                        HS.AddMemberToList "R16"
                        HS.AddMemberToList "W16"
                        HS.AddMemberToList "D1"
                        HS.AddMemberToList "R15"
                End Select ' Select Case HS.MemberListID
        End Select ' Select Case HS.Dimension
    End Sub ' EnumMembersInList()

    You'll want to use the alias for this: in both subroutines, change Case "C3Network" to Case "AllNetworks", since the rules reference a generic dimension by its alias.

  • All dimension values must be single line values

    Hi all

    I have a dimension long_description attribute mapped to a text column that contains newline ("\n") characters. When I try to load the dimension, I get the following error.


    An error occurred on the server
    Error class: Express failure
    Server error descriptions:
    INI: error creating a generic Manager definition at <BuildProcess> TxsOqConnection::generic
    INI: XOQ-01600: OLAP DML error "ORA-34052: all the dimension values must be single line values." while executing DML 'SYS.AWXML!R11_COMPILE_ATTRIBUTES(''TEST.DIMENSION'')', generic for TxsOqStdFormCommand::execute

    If I remove the mapping between my text column and the long description attribute, the dimension loads fine.

    Is this happening because my text column contains several lines? The text seems valid for reporting purposes (I mean, having several lines is intentional).

    Thank you
    Dietsch.

    Analytic workspace dimensions do not support dimension members that contain newlines. This assumption is so deeply built into the OLAP DML language that it is difficult to see how it could ever be changed. Therefore, you cannot map a level (or hierarchy) key to a column whose values contain newlines. But in your case you are mapping an attribute, not a level key, so the error message is confusing. The problem is that your long description attribute is 'indexed', which means it is implemented using a DIMENSION and a RELATION rather than a VARIABLE. To illustrate, I created a dimension named TEST with two levels, A and B, and one attribute, LONG_DESCRIPTION. The attribute's page in AWM has two check boxes, 'Create level attribute columns in views' and 'Index', that control how the attribute is implemented.

    This is what is created in the AW if both are false.

    ->listnames like '%TEST%LONG%'
       1 VARIABLE
       ---------------------
       TEST_LONG_DESCRIPTION
    

    This is what is created if "Index" is checked.

    ->listnames like '%TEST%LONG%'
       1 DIMENSION                    1 VARIABLE
       ----------------------------   ----------------------------
       TEST_LONG_DESCRIPTION_INDEX    TEST_LONG_DESCRIPTION_STORED
    
       1 RELATION
       ----------------------------
       TEST_LONG_DESCRIPTION
    

    And this is what is created if 'Create level attribute columns in views' is checked.

    ->listnames like 'TEST%LONG%'
       2 DIMENSIONs                     3 VARIABLEs
       ------------------------------   ------------------------------
       TEST_A_LONG_DESCRIPTION_INDEX    TEST_A_LONG_DESCRIPTION_STORED
       TEST_B_LONG_DESCRIPTION_INDEX    TEST_B_LONG_DESCRIPTION_STORED
                                        TEST_LONG_DESCRIPTION
    
       6 RELATIONs
       ------------------------------
       TEST_A_LONG_DESCRIPTION
       TEST_A_LONG_DESCRIPTION_HIER_U
       TEST_A_LONG_DESCRIPTION_UNIQUE
       TEST_B_LONG_DESCRIPTION
       TEST_B_LONG_DESCRIPTION_HIER_U
       TEST_B_LONG_DESCRIPTION_UNIQUE
    

    The thing to note is that if you check either of these boxes, your attribute is implemented using an AW DIMENSION and RELATION. This gives good performance, but imposes the limitation that your attribute values cannot contain newlines. The obvious solution is to uncheck both boxes so that your attribute is implemented as a VARIABLE. If you absolutely must have indexed attributes, then I suppose you could use the SQL REPLACE function to convert the newlines into an escape sequence in the mapping layer:

    GLOBAL > select REPLACE('a
      2  b', '
      3  ',
      4  '\n')
      5* from dual
    /
    
    REPL
    ----
    a\nb
    
    GLOBAL > select REPLACE('a\nb', '\n','
      2  ')
      3* from dual
    /
    
    REP
    ---
    a
    b
    

    You would then have to convert the escape sequence back into a newline somewhere downstream.
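
    If you do escape the newlines at load time, that reverse conversion can live in a downstream reporting view. A minimal sketch, assuming a hypothetical TEST_DIM_VIEW with a DIM_KEY column (the names are illustrative, not from this thread):

    -- Re-expand the escaped '\n' sequences into real newlines for reporting.
    CREATE OR REPLACE VIEW test_dim_report AS
    SELECT dim_key,
           REPLACE(long_description, '\n', CHR(10)) AS long_description
    FROM   test_dim_view;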

  • All the dimension values must be single line values

    I created a simple hierarchy with the following levels:

    Category
    Subcategory
    Item

    The hierarchy above is mapped to a table whose columns are as follows:

    ITEM_KEY
    ITEM_NAME
    BRAND_KEY
    BRAND_NAME
    CATEGORY_KEY
    CATEGORY_NAME
    SUBCATEGORY_KEY
    SUBCATEGORY_NAME


    ITEM_KEY is the primary key for this table, and ITEM_NAME is also unique.

    When I maintain this dimension, the following error occurs:


    An error occurred on the server
    Error class: Express failure
    Server error descriptions:
    INI: Error creating a generic Manager definition at <BuildProcess> TxsOqConnection::generic
    INI: XOQ-01600: OLAP DML error "all dimension values must be single line values." while executing DML 'SYS.AWXML!R11_COMPILE_ATTRIBUTES(''ITEM.DIMENSION'')', generic for TxsOqStdFormCommand::execute

    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.executeBuild(Unknown Source)
    at oracle.olap.awm.wizard.awbuild.UBuildWizardHelper$1.construct(Unknown Source)
    at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:595)

    The essential error is "all dimension values must be single line values", which means the server is trying to create an AW dimension member that contains a newline character. The error occurs inside the SYS.AWXML!R11_COMPILE_ATTRIBUTES procedure, which is where attributes are indexed (i.e. turned into dimension members). So my guess is that one of your attributes (probably one mapped to a _NAME column) contains a newline. The solution is to disable indexing for that attribute. In AWM terms, you must make sure the following boxes are not checked in the 'General' pane (a query for spotting the offending rows follows the list):

  • Create level attribute columns in views
  • Index
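
    If you first want to confirm which source rows actually contain embedded newlines, a quick check along these lines may help (a minimal sketch; ITEM_SRC is a placeholder for your mapping table, and you would repeat the test for each _NAME column):

    -- Find attribute values containing a linefeed (CHR(10)) or carriage return (CHR(13)).
    SELECT item_key, item_name
    FROM   item_src
    WHERE  INSTR(item_name, CHR(10)) > 0
        OR INSTR(item_name, CHR(13)) > 0;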

  • LIMIT the dimension values to a certain level of the hierarchy in OLAP_CONDITION

    Hello

    I am using OLAP 10g. I am trying to limit the dimension values to a certain level of the hierarchy (not just a certain value).

    I have only one dimension, CHANNEL, with two levels: CHANNEL_TOTAL and CHANNEL_NAME.

    To limit to a certain value, here is an example of code:

    OLAP_CONDITION(r2c, 'LIMIT channel TO ''ALL CHANNELS''', 1)

    But what about limiting to the set of values at the CHANNEL_NAME level?

    Something like

    OLAP_CONDITION(r2c, 'LIMIT channel TO CHANNEL_NAME', 1) does not work.

    Thanks in advance
    Peter

    channel_levelrel is the level-relation metadata object that holds the level of each member.

    You can try:
    OLAP_CONDITION(r2c, 'LIMIT channel TO channel_levelrel EQ ''CHANNEL_NAME''', 1)
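
    For context, OLAP_CONDITION is typically applied in the WHERE clause of a query against the AW view, where r2c is the ROW2CELL column of the limit map. A minimal sketch, with assumed view and measure names:

    -- Restrict the CHANNEL dimension to leaf (CHANNEL_NAME) members only.
    SELECT channel, sales
    FROM   channel_sales_view
    WHERE  OLAP_CONDITION(r2c,
             'LIMIT channel TO channel_levelrel EQ ''CHANNEL_NAME''', 1) = 1;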

  • HFM Value dimension

    Hi Experts,

    I'm new to HFM technology and want to understand the Value dimension. I have gone through the documents available on the Oracle site, but was unable to grasp it.

    Ramapra

    Hi Ramapra,

    1. First of all, this is the wrong place to have filed your request; it belongs in:

    Financial consolidation

    2. In my understanding, the Value dimension is a system-defined dimension. Its purpose is auditability.

    3. HFM is multidimensional, so each data entry must participate in all dimensions (i.e. specify a member of each). So we must choose the appropriate Value dimension member too.

    4. If you have data forms, journals, or entity-level input, then <Entity Currency> would be your Value dimension member. Likewise:

    [Contribution Total] - the value that aggregates to the parent

    The following are the intermediate members that build up to [Contribution Total]:

    [Contribution Adjs]
    [Contribution]
    [Elimination]
    [Proportion]
    [Parent Total]
    [Parent Adjs]
    [Parent]

    <Entity Currency> - for data input or load

    Hope this helps; see the HFM forum for more information.

    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Can we have multi-select dimension values / refinements in Endeca?

    Hello

    I have a requirement for multi-select of refinement values. For example, say the 'SelectMe' refinement has three values:

    1. selectMe1

    2. selectMe2

    3. selectMe3

    On the front end, I need to show checkboxes for these refinement values. If we use checkboxes, then the user can check several boxes at a time.

    But to my knowledge, we can only select one value as a refinement at a time, sending its dimension value as a selection ID.

    Can we achieve this directly in Endeca? Please advise.

    Thank you

    Vijay

    Hi Vijay,

    Create the SelectMe dimension as multi-select in Developer Studio. That makes the SelectMe dimension available for multiple selections.

    Thank you

    Sunil N

  • Partial update and dimension values

    Hello

    A question about partial updates, where the documentation is not clear (perhaps because of my Dutch interpretation :-)).

    We have a dimension called 'platform' that is auto-generated. Two new platforms (Wii U and Xbox 720) will soon be available. Will these values be displayed when we use partial updates?

    Thank you
    Maarten

    Maarten, if you introduce these two values in the files passed to the partial update, they will become available once the update succeeds.

  • Decode the values without ETL (Group Dimension values)

    Hello guys, I have a question that is partly triggered by me not wanting to change the default ETLs.

    I have values coming into a dimension as follows:

    Region A
    Region B
    Region C
    Region D
    Region E

    However, I'm hoping to re-org the hierarchy as below:

    New Region A
    New Region D

    Essentially, there is a new org structure where we group (consolidate) the old regions into new ones and rename the values.

    I know it can be done in ETL, but is there anywhere else this is possible? Perhaps in the business layer? Is there a place in the business layer where we can decode the values and combine them?

    Regards, and thanks in advance for your help.

    Hello

    You can do it at the RPD layer with a CASE statement on a logical column. However, I really wouldn't advise that, because it means that if the grouping ever changes you need to release a new RPD to pick up the change.
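
    For reference, that CASE expression would look something like the sketch below (the old-to-new grouping shown is a guess, since the question does not spell it out, and W_ORG_D/REGION_NAME are placeholder names):

    -- Hypothetical CASE-based regrouping of old regions into the new ones.
    SELECT region_name,
           CASE
               WHEN region_name IN ('Region A', 'Region B', 'Region C') THEN 'New Region A'
               WHEN region_name IN ('Region D', 'Region E')             THEN 'New Region D'
               ELSE region_name
           END AS new_region
    FROM   w_org_d;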

    Why not build a custom ETL task that you can set to run after the vanilla load, which takes just these values, consolidates them as necessary (perhaps using a lookup table to hold the old -> new mappings), and then loads the new value into an extension column on the dimension or the related dimension extension, i.e. W_ORG_DX? Then you can simply expose this column in the presentation layer for users. Unless the table in question has millions of rows, just let it do this mapping for every row in the table on each ETL run; a sketch follows.
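
    Something like the following (REGION_MAP and the X_NEW_REGION extension column are hypothetical names, not vanilla objects):

    -- Populate an extension column from a hand-maintained old -> new lookup table.
    UPDATE w_org_dx d
    SET    d.x_new_region = (SELECT m.new_region
                             FROM   region_map m
                             WHERE  m.old_region = d.region_name)
    WHERE  EXISTS (SELECT 1
                   FROM   region_map m
                   WHERE  m.old_region = d.region_name);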

    I think it would be a very simple task, and it would mean you can change the mappings easily through the table if necessary. It also means you don't need to touch the vanilla ETL mappings and you are not changing the values in the vanilla columns, which you mentioned you didn't want to do.

    Kind regards

    Matt

  • Member of dimension values does not (empty cells just)

    Hello

    I have an Organization dimension (Company > Business Unit > Department > Account) and one fact (in dollars) in my test repository. When I add my fact and any level of the dimension to an Answers report, I see, for example, two empty cells for each member at the 'Business Unit' level, plus the result of my calculation. I have tried several ways to address this, from aggregating my data to compressing the whole dimension into a single table, and still have had no success. I also turned on logging and discovered that the generated physical SQL returns the correct values when I run it in my query tool.

    Has anyone seen similar behavior? I don't know if I skipped a step in my modeling or whether it is a bug in my OBIEE installation.

    Thank you very much!!
    -Ignacio

    Hello Ignacio,

    Did you check the length of your columns in the physical layer?

    Good luck

    Daan Bakboord
    Scamander Solutions

  • Get the corresponding 2nd-dimension value based on a 1st-dimension value of a 2D array

    Hello

    I have a 2D array of figures.

    I need to automatically extract the Y value for an X value chosen via a numeric control. How can I do this?

    BP

    Use the interpolate VI.

  • remove totals for some dimension values

    Hello

    I have a pivot table in which I need to display totals for certain categories, while other categories must have no totals. Could you please let me know if this is possible?

    Here's an example: for product C, the measure should be totaled.

    Thank you.

    Kind regards

    Oana

    Hello

    Not possible out of the box; totals appear everywhere or nowhere.

  • ODI: Error loading the data of HFM: invalid dimension name

    Hello

    I am fairly new to ODI, and I was wondering if any guru could help me overcome an issue I'm facing. I am trying to load data from a CSV file into HFM. I chose the appropriate KMs (LKM File to SQL, and IKM SQL to Hyperion Financial Management Data), with the Sunopsis Memory Engine as the staging area.

    To keep things simple, the CSV file has exactly the same structure as the HFM application's dimensions, and it has been mapped in the interface as shown below:

    Source file column - target HFM column

    - Scenario
    Year - Year
    View - View
    Entity - Entity
    Value - Value
    Account - Account
    ICP - ICP
    CUSTOM1 - Custom1
    CUSTOM2 - Custom2
    Custom3 - Custom3
    Custom4 - Custom4
    - Period
    DataValue - DataValue
    - Description (no source column, mapped as '')

    The CSV file contains base members only. I set the error log file path, and when running the interface I get an error. When I open the error log, I see the following messages:

    Line: 1, Error: Invalid dimension name
    !Column_Order = C1_SCENARIO, C2_YEAR, C3_VIEW, C4_ENTITY, C5_VALUE, C6_ACCOUNT, C7_ICP, C8_CUSTOM1, C9_CUSTOM2, C10_CUSTOM3, C11_CUSTOM4, C12_PERIOD, C13_DATAVALUE
    C1_SCENARIO

    Line: 3, Error: A valid column order is not specified.
    Actual;2007;YTD;20043;<Entity Currency>;13040;[ICP None];[None];1000;[None];[None];Jan;512000;""
    >>>>>>



    I'm not sure how to solve this, as the interface mapping matches dimensions on a 1:1 basis. In addition, the target column names correspond to the dimension names of the HFM application (from which the datastore was reverse-engineered).

    Help, please!

    Thank you very much
    Jorge

    Edited by: 993020 on March 11, 2013 05:06

    Dear Experts,

    ODI: 11.1.1.6.0
    HFM: 9.3.3

    I also encountered a similar error to the OP's.

    In my case, the error occurs when I use SUNOPSIS_MEMORY_ENGINE as the staging area. If I simply change the staging to an Oracle schema, the interface loads the data into HFM successfully. So I'm curious what causes SUNOPSIS to fail as the staging area for HFM loads.

    This shows up in the IKM SQL to Hyperion Financial Management Data log file:

    Load data started: 3/14/2013 13:41:11.
    Line: 1, Error: Invalid dimension name
    !Column_Order = C1_SCENARIO, C2_YEAR, C3_VIEW, C4_ENTITY, C5_VALUE, C6_ACCOUNT, C7_ICP, C8_PRODUCT, C9_CUSTOMERS, C10_CHANNEL, C11_UNITSFLOWS, C12_PERIOD, C13_DESCRIPTION
    
    
    C1_SCENARIO

    
    Line: 3, Error: A valid column order is not specified.
    Actual;2007;YTD;EastSales;;Sales;[ICP None];Comma_PDAs;Electronic_City;National_Accts;[None];February;;555
    >>>>>>
    
    Load data completed: 3/14/2013 13:41:11.
    

    It seems like the generated query is not picking up the column alias names, but this only happens if I use SUNOPSIS_MEMORY_ENGINE as the staging area. With an Oracle schema as staging, the data load finishes successfully.

    This is the generated code from the KM

    Prepare for Loading (Using Oracle as Staging)

    from java.util import HashMap
    from java.lang import Boolean
    from java.lang import Integer
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.hfm import ODIHFMConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    
    # Target HFM connection properties
    clusterName   = "demo92"
    userName      = "admin"
    password      =  "<@=snpRef.getInfo("DEST_PASS") @>"
    application   = "COMMA"
    
    targetProps = HashMap()
    targetProps.put(ODIConstants.SERVER,clusterName)
    targetProps.put(ODIConstants.USER,userName)
    targetProps.put(ODIConstants.PASSWORD,password)
    targetProps.put(ODIConstants.APPLICATION_NAME,application)
    
    # Load options
    consolidateOnly    = 0
    importMode            = "Merge"
    accumulateWithinFile  = 0
    fileContainsShareData = 0
    consolidateAfterLoad  = 0
    consolidateParameters = ""
    logEnabled             = 1
    logFileName           = r"C:\Temp\ODI_HFM_Load.log"
    tableName             = r"HFMData"
    columnMap            = 'SCENARIO=Scenario , YEAR=Year , VIEW=View , ENTITY=Entity , VALUE=Value , ACCOUNT=Account , ICP=ICP , PRODUCT=Product , CUSTOMERS=Customers , CHANNEL=Channel , UNITSFLOWS=UnitsFlows , PERIOD=Period , DATAVALUE=DataValue , DESCRIPTION=Description '
    srcQuery= """select   C1_SCENARIO    "Scenario",C2_YEAR    "Year",C3_VIEW    "View",C4_ENTITY    "Entity",C5_VALUE    "Value",C6_ACCOUNT    "Account",C7_ICP    "ICP",C8_PRODUCT    "Product",C9_CUSTOMERS    "Customers",C10_CHANNEL    "Channel",C11_UNITSFLOWS    "UnitsFlows",C12_PERIOD    "Period",555    "DataValue",C13_DESCRIPTION    "Description" from ODI_TMP."C$_0HFMData"  where      (1=1)     """
    srcCx                    = odiRef.getJDBCConnection("SRC")
    srcQueryFetchSize=30
    
    loadOptions = HashMap()
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEONLY, Boolean(consolidateOnly))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_IMPORTMODE, importMode)
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_ACCUMULATEWITHINFILE, Boolean(accumulateWithinFile))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_FILECONTAINSSHAREDATA, Boolean(fileContainsShareData))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEAFTERLOAD, Boolean(consolidateAfterLoad))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEPARAMS, consolidateParameters)
    loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
    loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_TABLENAME, tableName);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_COLUMNMAP, columnMap);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCECONNECTION, srcCx);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCEQUERY, srcQuery);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCEQUERYFETCHSIZE, Integer(srcQueryFetchSize));
    
    # Get the writer
    hfmWriter = HypAppConnectionFactory.getAppWriter(HypAppConnectionFactory.APP_HFM, targetProps);
    
    # Begin load
    hfmWriter.beginLoad(loadOptions)
    

    Prepare for loading (using SUNOPSIS as staging)

    from java.util import HashMap
    from java.lang import Boolean
    from java.lang import Integer
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.hfm import ODIHFMConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    
    # Target HFM connection properties
    clusterName   = "demo92"
    userName      = "admin"
    password      =  "<@=snpRef.getInfo("DEST_PASS") @>"
    application   = "COMMA"
    
    targetProps = HashMap()
    targetProps.put(ODIConstants.SERVER,clusterName)
    targetProps.put(ODIConstants.USER,userName)
    targetProps.put(ODIConstants.PASSWORD,password)
    targetProps.put(ODIConstants.APPLICATION_NAME,application)
    
    # Load options
    consolidateOnly    = 0
    importMode            = "Merge"
    accumulateWithinFile  = 0
    fileContainsShareData = 0
    consolidateAfterLoad  = 0
    consolidateParameters = ""
    logEnabled             = 1
    logFileName           = r"C:\Temp\ODI_HFM_Load.log"
    tableName             = r"HFMData"
    columnMap            = 'SCENARIO=Scenario , YEAR=Year , VIEW=View , ENTITY=Entity , VALUE=Value , ACCOUNT=Account , ICP=ICP , PRODUCT=Product , CUSTOMERS=Customers , CHANNEL=Channel , UNITSFLOWS=UnitsFlows , PERIOD=Period , DATAVALUE=DataValue , DESCRIPTION=Description '
    srcQuery= """select   C1_SCENARIO    "Scenario",C2_YEAR    "Year",C3_VIEW    "View",C4_ENTITY    "Entity",C5_VALUE    "Value",C6_ACCOUNT    "Account",C7_ICP    "ICP",C8_PRODUCT    "Product",C9_CUSTOMERS    "Customers",C10_CHANNEL    "Channel",C11_UNITSFLOWS    "UnitsFlows",C12_PERIOD    "Period",555    "DataValue",C13_DESCRIPTION    "Description" from "C$_0HFMData"  where      (1=1)     """
    srcCx                    = odiRef.getJDBCConnection("SRC")
    srcQueryFetchSize=30
    
    loadOptions = HashMap()
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEONLY, Boolean(consolidateOnly))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_IMPORTMODE, importMode)
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_ACCUMULATEWITHINFILE, Boolean(accumulateWithinFile))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_FILECONTAINSSHAREDATA, Boolean(fileContainsShareData))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEAFTERLOAD, Boolean(consolidateAfterLoad))
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_IKMDATA_CONSOLIDATEPARAMS, consolidateParameters)
    loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
    loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_TABLENAME, tableName);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_COLUMNMAP, columnMap);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCECONNECTION, srcCx);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCEQUERY, srcQuery);
    loadOptions.put(ODIHFMConstants.OPTIONS_NAME_SOURCEQUERYFETCHSIZE, Integer(srcQueryFetchSize));
    
    # Get the writer
    hfmWriter = HypAppConnectionFactory.getAppWriter(HypAppConnectionFactory.APP_HFM, targetProps);
    
    # Begin load
    hfmWriter.beginLoad(loadOptions)
    

    Can anyone help with how to solve this?

    Thank you

    Edited by: user10620897 on March 14, 2013 14:28

  • ICP dimension conflict between FDM and HFM

    Hi guys,

    I need your help with the following. I am using FDM 11.1.1.3 with HFM 11.1.1.3.50. In HFM, the ICP dimension has been implemented as follows (which I was told is not recommended):

    Parent: [ICP Top]
    Child 1: [ICP None]
    Child 2: [ICP Entities]
    Children: 001 to 940

    Apparently, FDM assumes [ICP None] should be the parent. At least, I cannot export the ICP values for the entities as they should be exported. Currently, there is a workaround in place with logic accounts and a script:

    - The script adds "ICP." in front of an account when it carries ICP values.
    - The ICP mapping says to export everything to [ICP None], except when the account starts with L-.
    - Logic accounts were created to duplicate each "ICP" account.
    - The values of these L- logic accounts are then exported to the children of [ICP Entities].

    Needless to say, this requires some maintenance, since for each account that carries ICP values in a flat file, a new logic account must be created and mapped.

    Any solution or suggestion for how I can tweak FDM to handle this setup of the HFM ICP dimension? Or how can I minimize the maintenance? You'd be a great help!

    Kind regards
    JDeM

    The ICP mapping would be a wildcard map * --> [ICP None], and then an explicit map for each of the L-xxx members that the logic group creates.

    If your source ICP members correspond to your target members, you could also create a wildcard map for L-* --> *, but you would need to name that rule something that starts with an alphabetic character that comes before the first alpha character in your wildcard map for [ICP None].

    Make sense?
