BSO level 0 block analysis tips

Hi all. Looking for advice on analyzing the contents of a BSO cube.

Recently we took a huge performance hit on our aggregations, going from about 5 minutes to over 40 minutes. We initially suspected system resources, but I realized this morning that it is probably due to the size of the cube.

From March to September, the cube usually had about 5,000 to 10,000 level 0 blocks. In the last month the production cube has grown to 72,000 level 0 blocks.

A new Department structure was added, going from 200 departments to 1,800, which could very well be the driver.

Any suggestions on how to identify where all of the new level 0 blocks came from, compared to previous versions of the cube? Is it possible we somehow populated blocks with zeros, and if so, can we find them?

Thank you!

Bill S.

It will be slow, but it will work. Slowly. :)

Seriously, because it is a sparse calc it's just never going to be quick. See Edward's comments on how to make it work, and going forward get people to stop sending zeros, no matter what pain and agony you have to promise them if they do.

Kind regards

Cameron Lackpour
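The dimension-growth theory can be sanity-checked with simple arithmetic. A minimal sketch (member counts and block counts are from the original post; the assumption that level 0 block count scales roughly in proportion to the sparse member count is mine):

```python
# Back-of-envelope check: does the Department restructure (200 -> 1,800
# members) account for the block growth? Assumes level 0 block count
# scales roughly with the number of members in the new sparse structure,
# i.e. data coverage per department stays similar (an assumption).
old_departments = 200
new_departments = 1800
historical_blocks = 8_000  # midpoint of the usual 5,000-10,000 range

growth_factor = new_departments / old_departments
projected_blocks = historical_blocks * growth_factor

print(growth_factor)     # 9.0
print(projected_blocks)  # 72000.0 -- right at the observed 72,000
```

The projected count lands on the observed 72,000, so the new Department structure alone is a plausible driver, without needing a flood of zero-filled blocks to explain it.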

Tags: Business Intelligence

Similar Questions

  • If I run a calc script to aggregate a BSO cube, does it lock and release each block within a few seconds?

    Hello

    If I run a calc script to aggregate a BSO cube, does it lock and release each block within a few seconds? Or does it keep blocks locked even after the aggregation for that block is over?

    For example, if I fix on sparse dimensions Forecast, FY15, Dec, and my Accounts dimension is the only dense one, then after the calc has aggregated my upper-level members in Accounts, does it release the block right after updating it (i.e. in fractions of a second), or does the agg keep it locked?

    I ask because I want to run aggregation scripts while users are updating our cube.  I have never had a problem running an agg while users are updating, but maybe I am lucky.  If a user updates a locked block, they will receive an error message, I think.  They can try to update again after a few seconds, I hope.

    Thank you.

    Locking behavior for BSO Essbase is described in the Database Administrator's Guide: http://docs.oracle.com/cd/E57185_01/epm.1112/essbase_db/dstinteg.html

    It is certainly theoretically possible for a user to hit a lock because of a calc, although I can't say I have seen it be a problem in real-world applications (perhaps because uncommitted access is the default).

  • "Control terminals on connector pane not on top-level block diagram" comment in the VI Analyzer report

    Hi all

    Could someone please enlighten me as to what this VI Analyzer comment means:

    "Control terminals on the connector pane are not on the top-level block diagram."

    Does this mean that some terminals are hidden inside case structures and do not show on the diagram without going into those case structures? Or, by "top-level block diagram", does it mean

    that the controls of main.vi must also be connected to the connector pane?

    Thank you

    K.Waris

    For one thing, it means you are running VI Analyzer on your VIs, since that is a verbatim warning it produces.  It simply means that a terminal which is wired to the connector pane is not on the top-level diagram, i.e. it is inside a case structure.

    As to why it is often not a good idea to do that, read this classic thread:

    http://forums.NI.com/T5/LabVIEW/case-structure-parameter-efficiency/m-p/382516#M191622

  • Field-level vs page-level locking on a block

    In my application, I have a page with fields and a button.

    1. Per the requirements, I need to lock every field present on this page. However, the button should not get locked, because it has script on its click event to unlock the fields on the same page.

    To lock all fields: Form.Page.access = 'protected' - this works fine.

    Now, to release the button: Form.Page.OverRide.access = ""; - this does not work; the button stays locked. I printed the status of the button to the console and it shows as open.

    2. If I try to lock each field individually, some fields do not get locked and can still be modified. I checked that the code is correct.

    My second question is how to lock all the fields without locking a button in the same footer?

    Thank you, Jono and Melrose.

    I was able to solve this problem by placing all the other fields on one subform and the OverRide button alone on another subform, then locking only the first one.

  • BSO cube block count difference

    Hello
    I did a complete data export from a BSO cube in an old environment (Essbase 7) and imported it into a new environment (EPM 11.1.2.2), along with the outline. After importing into the new environment I expected the block counts to be the same, since it is a complete data export/import, but I see some differences. Can you please let me know why 'Number of existing blocks' and 'Existing top-level blocks' are different? I expected them to be the same.


    Old Env:
    =====
    Number of existing blocks: 928879
    Block size: 14864
    Potential number of blocks: 41897071152
    Existing level 0 blocks: 407528
    Existing top-level blocks: 521351


    New Env:
    =====
    Number of existing blocks: 928722
    Block size: 14864
    Potential number of blocks: 41897071152
    Existing level 0 blocks: 407528
    Existing top-level blocks: 521194

    Thanks for your help.

    The cube in your old environment can contain completely empty blocks (i.e. all values #Missing). Export/import will eliminate those blocks.

    I suggest you do some basic reconciliation between the two cubes to validate — if the numbers look good, I'd be pretty confident that this explains the small decrease in block count.
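A quick arithmetic check on the statistics quoted above (numbers copied straight from the post) shows the difference sits entirely in the upper-level blocks, which is consistent with the empty-block explanation:

```python
# Block statistics as quoted in the post, old vs new environment.
old_env = {"existing": 928879, "level0": 407528, "top": 521351}
new_env = {"existing": 928722, "level0": 407528, "top": 521194}

# Sanity check: existing blocks = level 0 blocks + top-level blocks.
assert old_env["existing"] == old_env["level0"] + old_env["top"]
assert new_env["existing"] == new_env["level0"] + new_env["top"]

# The drop in total blocks exactly equals the drop in top-level blocks;
# the level 0 counts are identical.
print(old_env["existing"] - new_env["existing"])  # 157
print(old_env["top"] - new_env["top"])            # 157
print(old_env["level0"] - new_env["level0"])      # 0
```

Since the 157 missing blocks are all upper-level, they are exactly the kind of all-#Missing aggregate blocks that a data export (which exports only real values) would not recreate.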

  • VM Block level access

    Hi friends,

    I have a small request. Does a VM have block-level access to the disk if the disk is assigned to it as VMFS or as an RDM? Or do both datastore types provide block-level access?

    Thanks in advance.

    ZoOmbie

    (1) Guest OS block access - vmdk > a block inside the vmdk file

    (2) Guest OS block access - RDM > a block on the LUN

    ---

    MCITP: SA + WILL, VMware vExpert, VCP 3/4

    http://blog.vadmin.ru

  • Essbase block count difference

    Hello

    I have EPM 11.1.2.2 and I am exporting/importing a BSO cube from the dev to the UAT environment. After importing and running calculations, I see a small difference in 'Number of existing blocks' and 'Existing upper-level blocks' between the environments.

    Other statistics like 'Existing level 0 blocks', 'Block size' and 'Potential number of blocks' are the same.

    Can you please let me know what could be the reason for the difference, even though the data is the same and the outline is the same?

    You asked a very similar question earlier (my memory isn't that good, but I searched for that thread so I could link it in this one!): https://forums.oracle.com/message/10877200#10877200

    My first guess would be all-#Missing upper-level blocks in one or the other cube.  You didn't say which one has more, by the way.   But if they reconcile, and a forced restructure in both environments brings the counts into alignment, I doubt you have a problem.

  • What are the consequences of blocking the tracking sites that I find in the add-on?

    I'm not a programmer, but I love Firefox. I just installed an add-on and would like to block certain sites I see, but when I get the warning I back out. How will I know that I can safely block a site without interfering with the sites I actually visit? Thank you!

    There are many levels of blocking. If you mean stopping certain servers from setting cookies, or stopping Firefox from sending back cookies they have previously set, that may not be a problem for unrelated servers, such as ad servers, because the main site doesn't depend on whether those servers can recognize you.

    It's a little more complicated for social networking sites that have sharing buttons everywhere. Because those servers are not unrelated to the sites where their sharing buttons appear, you may want to block their cookies only while you're on those other sites, so that not everything you browse ends up in your social network's memory banks. The crude distinction between 'third-party' and 'first-party' cookies doesn't always do a good job in these situations. Add-ons may provide a smarter solution.

    I suggest more research, and a little experimentation, since it is usually easy to remove a block on a specific site's cookies.

  • Need to block the double-click feature on a particular item

    Hi, I have a multi-record data block containing an item that displays the total number of records using an aggregate function. On this data block I wrote a block-level WHEN-MOUSE-DOUBLECLICK trigger that calls up a small canvas. So when I double-click on items in this multi-record data block, the canvas opens fine. But my problem is that the canvas also opens when I double-click on the total-number-of-records item. I don't want to open the canvas when I double-click on the total-records item.

    Hello

    Create a WHEN-MOUSE-DOUBLECLICK trigger on the total-number-of-records item, at the item level. In the code, just do nothing: a single NULL; statement. This item-level trigger will override the block-level trigger, so the block-level trigger will not fire when you double-click on the total-records item.

    Kind regards

    Harsha

  • RMAN INCREMENTAL LEVEL 0 BACKUP AND CUMULATIVE LEVEL 1

    Dear friends

    This is Eliza. I need some clarification; your suggestions will help me a lot...

    Currently I am working on RMAN backup and restore/recovery, and I need to take INCREMENTAL LEVEL 0 and LEVEL 1 CUMULATIVE backups.

    What I know about RMAN incremental level 0 and cumulative level 1 is that first I have to take the level 0 backup, and then the level 1, respectively.

    As follows

    SUNDAY: LEVEL 0 CUMULATIVE

    MONDAY - SATURDAY: LEVEL 1 CUMULATIVE

    SUNDAY: LEVEL 0 CUMULATIVE (repetition of this cycle)

    MY QUESTION IS

    Can I use the following command to take an INCREMENTAL LEVEL 1 CUMULATIVE backup:

    BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0

    But if I want to take a CUMULATIVE INCREMENTAL LEVEL 0 backup, should I use the following:

    BACKUP INCREMENTAL LEVEL 0 CUMULATIVE DATABASE;

    (OR)

    BACKUP INCREMENTAL LEVEL 0 DATABASE;

    (OR)

    Please suggest your valuable ideas. Thanks in advance.


    Hello

    I think the CUMULATIVE specification makes no sense on an incremental level 0 backup, so BACKUP INCREMENTAL LEVEL 0 DATABASE; = BACKUP INCREMENTAL LEVEL 0 CUMULATIVE DATABASE;

  • What are the different ways to initialize data block items?

    Hello

    I have an already-developed form that works correctly. It has a data block whose items are not database items. There are also database data blocks in this form. Users enter data into some of the database data blocks, and, on the other hand, the non-database data block also gets filled with data.

    I can't find any code that initializes the values of the non-database block, yet its values do get initialized. I wonder how the values of the data block are initialized without explicitly initializing them.

    Can someone let me know how its values are initialized?

    Check the LOVs, or block-level triggers such as WHEN-NEW-RECORD-INSTANCE and WHEN-NEW-BLOCK-INSTANCE.

  • SBS2008 file-level conversion of the C: drive

    Hi guys,

    I'm looking to perform a file-level conversion of the C: drive on a Windows Small Business Server 2008 installation, in order to reduce the size of the disk. Is this likely to work without problems, or should I look at doing the C: drive at the block level instead?

    On another note, I'm also curious about estimates of how much slower file-level is than block-level, and whether, if I go block-level, the generated VM disk would be defragmented?

    Thank you very much

    Hi,

    Please use file level, which is better than block level.  From a performance point of view, use higher-RPM disks so that slowness does not come into play.

    "If at first you don't succeed, call it version 1.0."

  • Block density calculation

    My application has the following dimension configuration:
    3 dense dimensions: 151, 21, 5 total members and 57, 13, 5 stored members respectively
    11 sparse dimensions: 8, 6, 4, 4, 10, 15, 13, 7, 3, 26, 3 total members and 8, 6, 4, 4, 10, 12, 11, 0, 0, 0, 0 stored members respectively

    Number of existing blocks: 368
    Block size: 29640 B (57*13*5*8)
    Potential number of blocks: 1013760 (8*6*4*4*10*12*11)
    Existing level 0 blocks: 9
    Existing top-level blocks: 359

    Block density (%): 0.45

    Can someone help me understand how this block density is calculated from the above statistics?

    Thanks in advance...

    I don't think it's possible to see directly which data drives the block density calculation. I don't think it's even documented exactly which blocks, or what percentage of blocks, Essbase samples.

    You can try to estimate it yourself if you know what your data will look like. For example, if you have 12 stored months in your database, and in January you load only January data, you can predict that your block density will be, at most, 8.3%.

    But it is impossible to know what the block density will be without reference to the data itself.
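The arithmetic behind the quoted statistics can be verified directly (member counts are from the post; the 8 bytes per stored cell and the one-of-twelve-months example follow the answer above):

```python
from math import prod

dense_stored = [57, 13, 5]                # stored members of the dense dims
sparse_stored = [8, 6, 4, 4, 10, 12, 11]  # nonzero stored counts, sparse dims

cells_per_block = prod(dense_stored)      # 3705 cells per block
block_size_bytes = cells_per_block * 8    # 29640 B (8 bytes per stored cell)
potential_blocks = prod(sparse_stored)    # 1013760

print(cells_per_block, block_size_bytes, potential_blocks)
# 3705 29640 1013760

# Block density is NOT derivable from these stats: it is the share of
# non-#Missing cells inside the existing blocks, which depends on the
# data itself. Only an upper bound can be estimated, e.g. when just one
# of 12 stored months is loaded:
print(round(1 / 12 * 100, 1))  # 8.3 (% at most)
```

Note that block size and potential block count fall out of the stored member counts alone, while the 0.45% density figure needs the actual data.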

  • KEY-COMMIT trigger in two blocks of a form module

    Hello

    I would like to get a clear idea of how the KEY-COMMIT trigger works at the data block level.

    I have a module that contains two data blocks, Block1 and Block2, and on each block I've defined a KEY-COMMIT trigger, as follows:

    KEY-COMMIT trigger (Block1):

    BEGIN
      IF condition1 THEN
        RAISE Form_Trigger_Failure;
      END IF;

      Commit_Form;
    END;

    KEY-COMMIT trigger (Block2):

    BEGIN
      IF condition2 THEN
        RAISE Form_Trigger_Failure;
      END IF;

      Commit_Form;
    END;

    My question is:

    When I make changes in both blocks (Block1 and Block2) and press the Save button, only one of the block-level triggers fires, so one of the validations (condition1 or condition2) is skipped, depending on where the cursor is when I save.

    Isn't it better to use a single KEY-COMMIT trigger at the module level instead? And if so, why does Oracle Forms let us customize KEY-COMMIT at the data block level?

    Thanks for your advice.

    When I make changes in both blocks (Block1 and Block2) and press the Save button, only one of the block-level triggers fires, so one of the validations (condition1 or condition2) is skipped, depending on where the cursor is when I save.

    The module-level trigger is the best option in this case, and you will need to test both of your conditions before calling Commit_Form().

    Isn't it better to use a single KEY-COMMIT trigger at the module level instead? And if so, why does Oracle Forms let us customize KEY-COMMIT at the data block level?

    Generally, you will find a KEY-COMMIT trigger only at the module level, but there are times when it makes sense to put the trigger at the block level. Also, depending on the trigger's execution-hierarchy property, Oracle Forms will run the block-level trigger first and then execute the module-level trigger of the same name. It really depends on the developer and how they want to organize their code. The tool simply allows you to put the code where it best suits your needs.

    Hope this helps,
    Craig B-)

    If this was helpful or correct, please mark it accordingly.

  • When I "manually export level zero data", I get a few upper-level members

    I have found that the level-zero export behaves differently over time. In our 11.1.1.3 BSO cube, on February 4 EAS exported only level-zero data when I "manually export level zero data". On February 25, EAS exported stored members with formulas, as well as upper-level members such as totals (Total Compensation, Salary), when I manually "export level zero data". Why is this?

    In BSO, a level-zero export gives you level-zero blocks — i.e. level zero in all sparse dimensions. The export may still contain upper-level dense dimension members.

    Why it seems to change over time, I'm not sure. Maybe it has to do with whether data had been calculated when the export was run?
