OLAP cube loading

Hi, I am new to OLAP. I use Analytic Workspace Manager. I created a cube and now I'm trying to load it. I get this error and I have no idea what it means. Any help would be greatly appreciated.

Thank you

Caused by: oracle.AWXML.AWException: ***Error Occurred: Validation of measure mappings failed. Measure key expression PVPERIOD. Dimension DIMENSION, CHECK_REGISTER mapping group MAPGROUP1. CUBEMAPGROUP does not exist.

This error usually occurs when the mappings for the dimension/cube have not been done correctly. After creating a dimension or cube, you must map it to the source table/view in order to load data into the AW.
Make sure that all your dimensions and cubes are correctly mapped.

Tags: Business Intelligence

Similar Questions

  • Windows 7 - Excel 2010 - unable to connect to SQL 2000 OLAP Cubes

    Running Excel 2010 on 32-bit Windows 7 - unable to connect to the GreatPlains OLAP cubes on SQL 2000 - getting a "connection actively refused" message.

    It works with XP, even with Excel 2010 loaded.  I can't find what I need to change in Win 7.  I guessed it was the firewall or driver settings, but I completely disabled the firewall and still no luck.  I also took the connection file from the XP machine and tried to use it as an existing connection, and still got the error.  I ran Excel on the Win 7 machine in XP SP2 and XP SP3 compatibility modes and still no luck.  I have spent hours reading and researching the issue and cannot find an answer.
    I tried changing the OLAP service account and opening all ports on the server.  Still nothing.  The error message appears on the client computer and the server does not log the request, so I don't think I am even reaching the server.  I also do not understand why the message says the connection is actively refused.
    I can run the cubes fine on the server itself and on the XP SP3 machine with Excel 2010.
    I have exhausted my troubleshooting.  There have been a few problems with Microsoft servers that I have not been able to solve, and this is one of them.  I know there is an answer; I'm just not seeing it.
    Thank you!!

    Hi Litiasheldon,

    The question you posted would be better suited to the TechNet Forums, so I would recommend posting your query there.

    SQL Server (TechNet Forums)

    http://social.technet.Microsoft.com/forums/en-us/category/SQLServer

     

    I hope this helps.

  • Creating an OLAP Cube for the first time - aggregation problem

    I'm new to OLAP technology and I want to build my first OLAP cube. I am using the SCOTT schema:

    1. Build a dimension on the DEPT table with a DEPTNO level.

    2. Build a cube with a SAL measure on the EMP table.


    Emp table

    DEPTNO SAL

    20 800
    30 1600
    30 1250
    20 2975
    30 1250
    30 2850
    10 2450
    20 3000
    10 5000
    30 1500
    20 1100
    30 950
    20 3000
    10 1300


    DEPT table

    DEPTNO DNAME

    10 ACCOUNTING
    20 RESEARCH
    30 SALES
    40 OPERATIONS



    When I use the Maintain wizard and then display the data, the sum of salaries is not accurate; it looks like

    sum

    all departments 5550
    accounting 1300
    research 3000
    sales 1250
    operations 0


    The values should be

    sum

    all departments 29025
    accounting 8750
    research 10875
    sales 9400
    operations 0

    Why are the aggregated values for the departments not accurate?

    The problem is visible in your table.

    Emp table

    DEPTNO SAL

    20 800
    30 1600
    30 1250
    20 2975
    30 1250
    30 2850
    10 2450
    20 3000
    10 5000
    30 1500
    20 1100
    30 950
    20 3000
    10 1300

    There are several rows per DEPTNO, each with a different SAL. When OLAP loads such records, the last record simply wins. That is why the value you see is the last value that was loaded.

    To solve this, you should GROUP BY deptno and SUM(sal) over emp. Load those records and you will see the correct result.
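    A minimal sketch of that source query (table and column names as in the SCOTT schema above; the view name is made up). Loading the cube from a view like this gives one pre-summed row per department:

    ```sql
    -- One row per department, with salaries summed, so the cube load
    -- no longer depends on "last record wins" behavior.
    CREATE OR REPLACE VIEW emp_sal_by_dept AS
    SELECT deptno,
           SUM(sal) AS sal
    FROM   emp
    GROUP  BY deptno;
    -- Expected rows: 10 -> 8750, 20 -> 10875, 30 -> 9400
    ```

    You would then map the cube's SAL measure to this view instead of directly to EMP.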

    I hope this helps.

    Thank you
    Brijesh

  • How to build BI Publisher reports from an OLAP Cube

    I have a requirement to build a report, but it seems I cannot meet the customer's requirement using OBIEE Answers, so I want to do it with BI Publisher. I have a few questions:

    1. Is it possible to convert an OBIEE Answers report created using an OLAP cube to BI Publisher?

    2. How do I create a data model using the OLAP cube?

    Thank you

    You can take the logical query from the BI Answers report and use it to create a new BI Publisher report.

  • OLAP cubes

    Can I get information on OLAP cubes from any link? Please forward it...

    Hello

    A good starting point is the OLAP product page on OTN: http://www.oracle.com/technetwork/database/options/olap/index.html

    You should watch the "Oracle OLAP overview video" there.

    There is also an overview white paper: http://www.oracle.com/technetwork/database/options/olap/oracle-olap-11gr2-twp-132055.pdf

    There are also links to the wiki and the blog below.

    Thank you

    Stuart Bunby

    OLAP blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW OTN: http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Can an OLAP cube load from a remote database?

    Sorry if this is basic - I have always worked with Express / Oracle OLAP. If I want to build a cube in one database, can AWM load data from a different database? Or do I have to build the cube in the same database as the source?

    Thank you!
    Scott

    You can only load from a local table or view. The simplest approach is to define a view in your AWM database that selects from the table in the second database over a database link. You then map your cube to the local view.
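    For example, a sketch of such a view (the table name SALES_FACT and the database link REMOTE_DW are hypothetical):

    ```sql
    -- The view lives in the AWM database and pulls rows across the link,
    -- so AWM sees it as an ordinary local source.
    CREATE OR REPLACE VIEW sales_fact_local AS
    SELECT *
    FROM   sales_fact@remote_dw;
    ```

    You then map the cube to SALES_FACT_LOCAL as if it were a local table.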

  • QDR expressions in OLAP cube calculated measures

    Hello everyone,

    I'm going crazy with a calculated measure in Analytic Workspace Manager, which is defined as an OLAP Expression with the following syntax:

    NVL(CUBE.MEASURE1[DIM1 = 'A'], 0) + NVL(CUBE.MEASURE2[DIM2 = 'B'], 0)

    Where:

    CUBE.MEASURE1 and CUBE.MEASURE2 are stored measures, not calculated measures.

    DIM1 and DIM2 are dimensions of the CUBE, and the values A and B both exist in their dimensions.

    For most of my queries the calculated measure retrieves the correct results, when both members of the sum have data. But in other cases the calculated measure returns null!

    In these cases the calculated measure returns null when CUBE.MEASURE2[DIM2 = 'B'] retrieves NO result. But I expected that if either of the two QDR expressions retrieved no result, the NVL function would replace it with 0.

    I have read that in situations where a QDR expression gets no result, by default it throws an error rather than returning a null or NA value. I found two Oracle DML options that can manage this type of situation:

    LIMITSTRICT = NO (http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_options043.htm#OLADM384)

    OKNULLSTATUS = YES (http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_options077.htm#OLADM418)

    I tried creating an OLAP DML function in the AW that sets the two options (the first to NO, the second to YES) and returns 0, and calling this function with OLAP_DML_EXPRESSION('MyFunc', NUMBER), but it does not work.

    The calculated measure then looks like this: OLAP_DML_EXPRESSION('MyFunc', NUMBER) + NVL(CUBE.MEASURE1[DIM1 = 'A'], 0) + NVL(CUBE.MEASURE2[DIM2 = 'B'], 0)

    Please, I need a way to work around this: how can I catch these situations? Should I write an OLAP DML program to solve it? Where do I set these options (LIMITSTRICT, OKNULLSTATUS) - do I have to set them in each measure calculation?

    Thanks in advance for the answer.

    Can you not use a formula directly to handle the missing data, as observed earlier?

    For example the formula =

    IF (CUBE_MEASURE1(DIM1 'A')) NAFLAG EQ 0 THEN CUBE_MEASURE1(DIM1 'A') ELSE 0 + IF (CUBE_MEASURE2(DIM2 'B')) NAFLAG EQ 0 THEN CUBE_MEASURE2(DIM2 'B') ELSE 0

    If this expression does not work in a single formula, define three measures Meas1/2/3 instead...

    (I generally prefer formulas over an OLAP DML program in the mix... a program looping over several other dimension values is sometimes tricky to control/understand.)

    Meas1 = IF (CUBE_MEASURE1(DIM1 'A')) NAFLAG EQ 0 THEN CUBE_MEASURE1(DIM1 'A') ELSE 0

    and

    Meas2 = IF (CUBE_MEASURE2(DIM2 'B')) NAFLAG EQ 0 THEN CUBE_MEASURE2(DIM2 'B') ELSE 0

    Then set

    Meas3 = Meas1 + Meas2

  • How can I create a logical reporting OLAP Cube in AWM

    Hello

    I have created several cubes, but now the client wants a reporting cube so that we can report from a single cube. Can someone please point me to some guidance on how to create a logical cube?  Thanks in advance

    It's a simple idea... a bit like a SQL view layer over data stored in SQL tables.

    I'll just type up a few points quickly.  If you do not understand them, feel free to post more questions.

    Let's say you have the following stored cubes, where the data has been loaded and aggregated:

    CUBE1 (with stored measures meas1x, meas1y, meas1z) dimensioned by DIM1, DIM2, DIM3, DIM4, TIME

    CUBE2 (with stored measures meas2x, meas2y, meas2z) dimensioned by DIM1, DIM3, DIM5, TIME

    CUBE3 (with stored measures meas3x, meas3y, meas3z) dimensioned by DIM1, DIM2, DIM3, TIME

    Now you create a reporting cube; let's call it RPT_CUBE.

    RPT_CUBE must be dimensioned by all of the dimensions. This is the important point.

    It will contain only calculated measures, NO stored measures.

    Create a base (calculated) measure in RPT_CUBE that points to each stored measure.

    So, for the 9 stored measures above, you will have 9 calculated measures in RPT_CUBE.

    Other measures (such as time calculations) are then created using these base measures in RPT_CUBE.

    The LOOP_DENSE and LOOP_VAR property settings are critical for each measure in RPT_CUBE.  Often they come out correct, but I have seen cases where they were not set correctly, so I had to manually fix these two properties for the measures in my RPT_CUBE.

    When you query RPT_CUBE measures, make sure that WHERE conditions are present for all dimensions.

    Set a dimension to the top ancestor of a hierarchy if it is not needed in the query.
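    As a sketch, a query against the reporting cube's relational view might look like this (RPT_CUBE_VIEW, the column names, and the member keys are all hypothetical). The point is that every dimension is constrained, with unused dimensions pinned to their top ancestor:

    ```sql
    -- Placeholder names throughout; shows the "constrain every dimension" rule.
    SELECT time_key,
           dim1_key,
           meas1x_rpt
    FROM   rpt_cube_view
    WHERE  dim1_key = 'D1_MEMBER'
    AND    dim2_key = 'ALL_DIM2'   -- top ancestor: DIM2 not needed in this report
    AND    dim3_key = 'ALL_DIM3'   -- top ancestor: DIM3 not needed either
    AND    dim4_key = 'ALL_DIM4'
    AND    dim5_key = 'ALL_DIM5'
    AND    time_key IN ('2011.01', '2011.02');
    ```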

  • Removing cells from an OLAP CUBE

    Hello
    I want to remove old cells from the CUBE (for housekeeping).

    Is there any process for that?

    Thank you very much
    Alexandre Martins

    If your build script contains a CLEAR step, then the cube will automatically synchronize with the fact table. Consider the following sequence of actions.

    (1) Insert a row into the fact table with (prod = 'P1', time = 'T1').
    (2) Load the cube. The cell (prod = 'P1', time = 'T1') will now be in the cube.
    (3) Delete the row from the fact table.
    (4) Load the cube again.

    Will the cell (prod = 'P1', time = 'T1') still be in the cube at the end of all this? The answer is YES if the script is (LOAD, SOLVE) and NO if the script is (CLEAR, LOAD, SOLVE). The AWM default script is (LOAD, SOLVE).
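    A sketch of running the (CLEAR, LOAD, SOLVE) script via DBMS_CUBE.BUILD, as described above (the cube name UNITS_CUBE is a placeholder):

    ```sql
    -- CLEAR first so deleted fact rows also disappear from the cube,
    -- then reload and re-aggregate.
    BEGIN
      DBMS_CUBE.BUILD('"UNITS_CUBE" USING (CLEAR, LOAD, SOLVE)');
    END;
    /
    ```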

  • How to achieve drill-through from an 11g OLAP CUBE to a relational source in OBIEE

    I have several sources to work with in OBIEE. Can anyone provide info on how to configure drill-down in the RPD (metadata)? My highest level is based on an OLAP 11g cube, while the detail level is a relational source.

    Thanks in advance!

    You must drag and drop ALL the columns of the physical ROLAP table onto the OLAP logical layer (fact as well as dimension).
    Then in the logical fact table source you will find two tables (the ROLAP one and the MOLAP one), and on the content tab you specify how to use them.

    OBIEE will then issue only one query, against either the ROLAP or the MOLAP server. You must therefore have ALL your logical dimension columns mapped onto two physical columns (one for the ROLAP source and one for the MOLAP source).

  • Cube loading

    Hi,
    We are new to data warehousing. We created the dimensions and their mappings and everything was OK. Then we created the cube and related the dimensions to it. Now we are creating a mapping to fill the cube, which has us creating joins between the dimensions and the other DB tables needed to relate the dimensions and look up data for the measures. All the joins end up in a deduplicator, and we want to link that to the cube, but we do not know how. Any idea?
    Thank you

    Hello

    The mapping to load a cube (using a cube operator, not a table) should query the data source and produce the list of measures and business identifiers. The business identifiers are then mapped to the equivalent columns of the cube operator. The cube operator is a pluggable mapping that performs the lookup of the appropriate dimension surrogate keys.

  • OLAP cube - complex join condition

    Hi Experts,

    I need a complex join between a dimension and a cube.

    The requirement goes like this:

    We have a dimension at the date level:
    January 1, 2011
    January 2, 2011
    ...
    ...and so on.

    The fact table does not have a matching surrogate key, but it has valid_from_date and valid_to_date columns, which are timestamps.

    I need to join the two to achieve this condition: where dim.date between trunc(valid_from_date) and trunc(valid_to_date).

    Is it possible/feasible? I would appreciate your responses.

    Best regards, Marion

    You should be able to set a condition like this in the "Join Condition" field of the cube mapping pane in AWM. You would need to qualify the table names, so it would be something like this:

    dim_table.date between trunc(fact_table.valid_from_date) and trunc(fact_table.valid_to_date).
    

    The only restriction with this approach is that you will not be able to partition the cube on the time dimension, due to the complexity of the condition.

    As an alternative, you can define a SQL view that joins your fact table to your time dimension table using the same condition. You then map the cube to this view, in which case the partitioning restriction goes away.
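    A sketch of that alternative view (fact_table, dim_table, and the valid_from/valid_to columns follow the thread; the view name and the date_key column are made up):

    ```sql
    -- Pre-join the fact to the date dimension so AWM sees a simple source
    -- and the complex join condition stays out of the cube mapping.
    CREATE OR REPLACE VIEW fact_with_date AS
    SELECT d.date_key,
           f.*
    FROM   fact_table f
    JOIN   dim_table d
      ON   d.date_key BETWEEN TRUNC(f.valid_from_date)
                          AND TRUNC(f.valid_to_date);
    ```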

  • Reclaiming space in OLAP 11.1.0.7

    Hello
    I have been trying to understand how OLAP cube loads consume space and how we can free up space used by OLAP AW objects in DB version 11.1.0.7. I have the following questions.

    1. During a load of a large amount of data, the tablespace is apparently used like TEMP: I notice that space in the tablespace is consumed while a cube build is underway, but the free space increases again once the load completes.
    2. It is very difficult to reclaim the space used by an AW. I tried removing the cubes, but it does not work. The only way I have found is to drop and re-create the full AW from the XML. Is there a better way?
    3. Incremental data loads sometimes take more space than full loads. Does this mean that old data is not deleted?

    Thanks in advance.

    One of the oddities of analytic workspaces is that many versions, or generations, can exist at the same time. This is very powerful, especially for "what if?" analysis, but these generations take up space. A given AW generation will persist as long as a client has it attached, or while it is one of the last three generations. The following code can be used to free some space by forcing the creation of three new (small) generations. The AW in this case is GLOBAL, so you would need to adjust accordingly.

    set serverout on size unlimited
    GLOBAL > select round(sum(DBMS_LOB.GETLENGTH(AWLOB))/1024,0) kb from aw$global;
    
         KB
    ----------
        167699
    
    GLOBAL > declare
    2  aw_name varchar2(30);
    3  begin
    4   aw_name := 'GLOBAL';
    5   dbms_aw.execute('aw attach ' || aw_name || ' rwx;define junkvar int;update;commit');
    6   for i in 1..3 loop
    7     dbms_aw.execute('aw reattach ' || aw_name || ' rwx;junkvar=junkvar+1;update;commit');
    8   end loop;
    9   dbms_aw.execute('delete junkvar;update;commit;aw detach ' || aw_name || '');
    10 end;
    11 /
    
    PL/SQL procedure successfully completed.
    
    GLOBAL > select round(sum(DBMS_LOB.GETLENGTH(AWLOB))/1024,0) kb from aw$global;
    
         KB
    ----------
        147019
    

    Another useful trick is to rebuild the freepools. This is done automatically in 11.2, but you need to do it yourself in 11.1.0.7.

    alter table aw$global modify lob (awlob) (rebuild freepools);
    

    Here's a quote from the Oracle documentation. Note that you should not run this if you use SecureFiles.

    The REBUILD FREEPOOLS clause deletes all the old data from the LOB column. This clause is only useful if you are using PCTVERSION for LOB data management. You can do this to manage older data blocks.

    Even after all this, there is always some unavoidable growth of the AW$ table. As a general rule, you can expect the table to grow to three times its size after the original load. This is mainly a side effect of using LOBs to store the AW, since the LOB subsystem does not give up space once it has acquired it.

    2. It is very difficult to reclaim the space used by an AW. I tried removing the cubes, but it does not work. The only way I have found is to drop and re-create the full AW from the XML. Is there a better way?

    This is a bug, and it has been discussed (and indeed discovered) in a recent thread: releasing storage occupied by Cube

    3. Incremental data loads sometimes take more space than full loads. Does this mean that old data is not deleted?

    Incremental data loads, or more precisely incremental aggregations, take the existing lists of child nodes and extend them. This can lead to fragmentation, similar to a fragmented hard disk. After a full load, the child lists are stored contiguously again. This is, incidentally, why incremental performance can degrade over time, and why a periodic complete aggregation can improve performance.

  • How do we tell OLAP to skip unnecessary aggregations when loading data?

    Hello

    I am trying to create a relatively simple two-dimensional OLAP cube (you could call it an "OLAP square"). My current environment is 11.2 EE with AWM for workspace management.

    One dimension is date: year -> month -> day. The other is production unit, implemented as a hierarchy with individual machines at the lowest level. A fact is identified by a pair of low-level values of these dimensions; for example, a measurement is taken once per day for each machine. I want to store these detailed facts in the cube as well as the aggregates, so that they can easily be drilled down to without querying the original fact table.

    The aggregation rules are left at "aggregate from level = default" (which is day and machine, respectively) for both of my dimensions, the cube is mapped to the fact table together with the dimension tables, the data loads, and everything works as expected.

    The problem is the load itself: I noticed that it is too slow for my amount of sample data. After some research into the issue, I discovered in the cube_build_log table the query with which the data is actually loaded:

    SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
      T4_ID_DAY ALIAS_37,
      T1_ID_POT ALIAS_38,
      MAX(T7_TEMPERATURE) ALIAS_39,
      MAX(T7_TEMPERATURE) ALIAS_40,
      MAX(T7_METAL_HEIGHT) ALIAS_41
    FROM
      (SELECT /*+ no_rewrite */
         T1."DATE_TRUNC"    T7_DATE_TRUNC,
         T1."METAL_HEIGHT"  T7_METAL_HEIGHT,
         T1."TEMPERATURE"   T7_TEMPERATURE,
         T1."POT_GLOBAL_ID" T7_POT_GLOBAL_ID
       FROM "POTS"."POT_BATH" T1) T7,
      (SELECT /*+ no_rewrite */
         T1."ID_DIM" T4_ID_DIM,
         T1."ID_DAY" T4_ID_DAY
       FROM "RI"."DIM_DATES" T1) T4,
      (SELECT /*+ no_rewrite */
         T1."ID_DIM" T1_ID_DIM,
         T1."ID_POT" T1_ID_POT
       FROM "RI"."DIM_POTS" T1) T1
    WHERE
      ((T4_ID_DIM = T7_DATE_TRUNC)
       AND (T1_ID_DIM = T7_POT_GLOBAL_ID)
       AND ((T7_DATE_TRUNC) IN (/* a long list of dates for the currently processed cube partition, clipped */)))
    GROUP BY (T1_ID_POT, T4_ID_DAY)
    ORDER BY
      T1_ID_POT ASC NULLS LAST,
      T4_ID_DAY ASC NULLS LAST


    Note T4_ID_DAY, T1_ID_POT at the top of the column list - these are the low-level identifiers of my dimensions, which means that the query does not really aggregate anything here, because there is only one fact per (ID_DAY, ID_POT) pair.

    What I want is to somehow load the data without this (in my case totally useless) intermediate aggregation. Basically, I want it to be something like:

    SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
      T4_ID_DAY ALIAS_37,
      T1_ID_POT ALIAS_38,
      T7_TEMPERATURE ALIAS_39,
      T7_TEMPERATURE ALIAS_40,
      T7_METAL_HEIGHT ALIAS_41
    ...etc.


    without aggregations. In fact, I can even live with the current load query, since the amount of data isn't that great, but I want things to work the right way (more or less ).

    Any chance of doing this?

    Thank you.

    I figured it out. There was a mistake in my mapping: I had specified the dim_table.dimension_id column in a source column field of my mapping section, rather than the fact_table.dimension_id column, so the build (presumably) tried to group by the keys of the dimension tables. After fixing that, defining a primary key did the trick (a unique index alone, however, was not enough).
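    A sketch of that final fix: a declared primary key (not just a unique index) on the fact table's dimension key columns. The constraint name is made up; the table and columns follow the build query shown earlier:

    ```sql
    -- A unique index alone was not enough; declare the primary key explicitly
    -- so the build no longer injects a GROUP BY over the fact rows.
    ALTER TABLE pots.pot_bath
      ADD CONSTRAINT pot_bath_pk PRIMARY KEY (date_trunc, pot_global_id);
    ```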

  • Maintaining cube data using DBMS_CUBE.BUILD

    Hello

    I've got a cube partitioned by date, with a one-year retention period - every day a new partition is created and another is removed.

    No problem with creating the data, but I am struggling to find a way to drop the old partition.

    What I'm looking for is:

    1) More information on DBMS_CUBE.BUILD to help me understand how to use it to reach my goal.

    The official docs give syntax examples, but I cannot find anything in the syntax there that allows me to do this: http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_cube.htm#ARPLS73464

    (I would have thought that cycling cube data would be a common requirement needed by everyone, so I am surprised that it is not explicitly covered with an example.)

    Is there another reference source for DBMS_CUBE.BUILD that I can use to help me better understand the syntax myself?

    2) Or direct help with my problem.

    I found another post that addresses this specific point and gives syntax to achieve it: https://community.oracle.com/thread/2154852

    I tried the technique that David recommends, but it takes many times longer to run than a full build of the cube, and it does not actually achieve the goal of removing the partition (there is no change to the data in the cube; the values in the target partition remain).

    A complete cube build takes too long, and I think there must be a quicker way of deleting a partition of data, but I have not been able to find it.

    When I run the command based on David's advice (see below), in CUBE_BUILD_LOG I see the CLEAR VALUES step with the correct partition identified (that is the step almost all of the time is spent on), but nothing actually changes when the cube is queried;

    for example, I can query the cube for values in that partition and they are still there.

    No error is returned.

    So I am very confused and do not know how to move forward - any ideas?

    For example, in the script below I am trying to delete the partition "20131230", which is a member of the "STD" hierarchy of the "TIME_1YEAR" dimension of the "CUBE_MNE_E_RATED" cube.

    I expect this to delete all the data for that date from the cube, but as described above, it takes a long time and does nothing.

    By the time this command is executed, the data for that day no longer exists in the fact table, so it cannot be rebuilding that date from data it finds in the fact table.

    I'm on 11.2.0.3 Enterprise Edition.

    BEGIN
        DBMS_CUBE.BUILD('"CUBE_MNE_E_RATED" USING
                           (FOR "TIME_1YEAR"
                            WHERE "TIME_1YEAR"."DIM_KEY" IS DESCENDANT OF ''20131230'' WITHIN "TIME_1YEAR"."STD"
                            BUILD(CLEAR))');
    END;
    /
    
    

    Thank you in anticipation

    Jim

    Regarding the moving 365-day input window:

    You have a moving one-day window on the day-level relational input table containing the last 365 days' rows... It seems this would trigger large-scale aggregations along the time dimension hierarchies, assuming regular month/quarter/year definitions. In that case a full re-aggregation of the cube may be inevitable. The cube may fall back to a full build/refresh (C) when the other load types (? for Fast then Complete, F for Fast Refresh, etc.) are not possible.

    If today is April 4, 2014, your input table covers April 4, 2013 through April 3, 2014. The difference between the cube's input table yesterday and today is that the day April 3, 2013 dropped out of the table. Month Apr 2013 has children April 1, 2013 through April 30, 2013, so this should trigger a re-aggregation of the month-level member Apr 2013, and likewise a re-aggregation of the month-level member Apr 2014 (since April 3, 2014 has now come into the picture).

    OLAP load via DBMS_CUBE.BUILD

    Your best-case scenario is for the OLAP build to do the following:

    * reload data only for the months affected by the moving window - Apr 2013 and Apr 2014... that is, reload the leaf-level data for April 4, 2013 through April 30, 2013, as well as for April 1, 2014 through April 3, 2014.

    * re-aggregate only the affected higher-level members: Apr 2013, Q2 2013, year 2013, Apr 2014, Q2 2014 and year 2014.

    From the perspective of OLAP load or dbms_cube.build behavior, there is no exact match, because the input window moves every day. FAST_SOLVE is the closest, in my opinion.

    A Complete (C) load does the following:

    * clears all data - leaves as well as aggregates

    * reloads all leaf-level data from the input source object (table/view)

    * aggregates the cube according to the aggregation settings/design

    A FAST_SOLVE (S) load does this:

    * reloads all leaf-level data from the input source object (table/view)... NOTE: this reloads all the relational data, not only the day added at the front of the window - independent of the source being lopped or appended daily.

    * re-aggregates the affected/relevant part of the cube according to the aggregation settings/design

    A FAST (F) refresh does the following:

    * loads data for the new dates only... that is, it will load data for April 3, 2014, but it does not delete the data for the date lopped off the relational source: April 3, 2013

    * re-aggregates the affected/relevant part of the cube according to the aggregation settings/design... in this case: Apr 2014, Q2 2014 and year 2014 only. All 2013 aggregates remain as they were, because they have not been cleared.

    The DBMS_CUBE.BUILD documentation explains that if the cube cannot be built using a partition load (it can handle newer partitions as they appear, but I suspect it cannot handle the trickier complexity of partitions being cut off at the back/start of the moving time window) or via a FAST refresh load (this usually needs a fast-refreshable MV to act as the cube source - again without the complexity of data dropping off every day at the start of the input time window), then it falls back to the Complete load (reload the data, aggregate the cube). There is one special load type - FAST_SOLVE - which reloads all the data but re-aggregates only the affected/higher levels of the cube.

    Currently, the cube seems to be running as a COMPLETE (C) load.

    FAST_SOLVE may be your best bet... reload every day, re-solving only the affected months/quarters/years in the cube.

    ******************

    DBMS_CUBE.BUILD has many options for controlling the build/refresh.

    Maybe you should try the procedure's other parameters to refine the build process.

    I hope this helps.

    -- CLEAR the partition corresponding to the day being retired, and also attempt re-aggregation of the affected members

    BEGIN
      DBMS_CUBE.BUILD(
        script => '"CUBE_MNE_E_RATED" USING
                     (FOR "TIME_1YEAR"
                      WHERE "TIME_1YEAR"."DIM_KEY" IS DESCENDANT OF ''20131230'' WITHIN "TIME_1YEAR"."STD"
                      BUILD(CLEAR VALUES, SOLVE))',
        method               => 'S',    -- attempt FAST_SOLVE refresh method
        refresh_after_errors => true,   -- refresh after errors
        parallelism          => 1,      -- parallelism
        atomic_refresh       => false,  -- atomic refresh
        automatic_order      => false,  -- automatic order
        add_dimensions       => true    -- add dimensions
      );
    END;
    /

    -- Reload all data, including the newly added date, and aggregate/solve the cube incrementally

    BEGIN
      DBMS_CUBE.BUILD(
        script => '"CUBE_MNE_E_RATED" USING
                     (LOAD SYNCH PRUNE, SOLVE)',
        method               => 'S',    -- attempt FAST_SOLVE refresh method
        refresh_after_errors => true,   -- refresh after errors
        parallelism          => 1,      -- parallelism
        atomic_refresh       => false,  -- atomic refresh
        automatic_order      => false,  -- automatic order
        add_dimensions       => true    -- add dimensions
      );
    END;
    /

    Rgds
    Shankar
