Model aggregation tables to speed up queries

Hello,
Can someone help me with these two topics?
"Model aggregate tables to accelerate query processing."
"Model partitions and fragments to improve application usability and performance."

I am new to these concepts and have not worked with them before.

Kind regards
Arun

Hi Arun,

There are some good articles on aggregate awareness and aggregate persistence in OBIEE; check here:

http://obiee101.blogspot.com/2008/11/OBIEE-making-it-aggregate-aware.html
and here
http://obiee101.blogspot.com/2008/11/OBIEE-aggregate-persistence-Wizard.html

Also check Oracle documentation (obviously).

For fragmentation, once again see this blog for a how-to:
http://108obiee.blogspot.com/2009/01/fragmentation-in-OBIEE.html
and
http://gerardnico.com/wiki/dat/OBIEE/fragmentation

Let us know if you have specific questions after reading the links above.
See you soon
Alastair

Tags: Business Intelligence

Similar Questions

  • Create aggregation tables using Job Manager (Windows client, Linux BI server)

    Hi people,

    I have OBIEE 10.1.3.4.1 running on a Linux server.
    I'm trying to run the aggregate table script created via the wizard, using the Job Manager.

    Job Manager requires a DSN to connect to. But the job scheduler runs on the Linux machine, so there is no DSN to point it to?

    My question is:

    Do I need to install and configure a scheduler in a Windows environment, so that Job Manager can run the nqcmd script that builds the aggregation tables?
    or
    Can the Linux box have its own DSN, or somehow see the Windows one?

    Any information or advice would really help untangle this for me.

    Thank you.

    You don't need to change anything, just use AnalyticsWeb in your nqcmd call, and it should work.
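    For reference, an nqcmd invocation on the Linux box would look roughly like this (the install path, user, password, and script name below are assumptions, not from the thread):

```sh
# Run the wizard-generated aggregate script against the BI Server's
# built-in "AnalyticsWeb" DSN (hypothetical OBIEE 10g layout):
$ORACLE_BI/server/Bin/nqcmd -d AnalyticsWeb \
    -u Administrator -p <password> \
    -s /scripts/create_aggregates.sql
```

    Because AnalyticsWeb is defined inside the BI Server itself, no OS-level ODBC DSN is needed on the Linux machine.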

  • What could cause aggregation tables not to be used by my query?

    Hello

    I created a few aggregation tables for the 10-12 fact tables and 2 dimensions in my OBIEE environment.

    I have a report that shows the dimension members and all the facts used in the aggregates. I also have some other columns (from the same physical tables) that I did not include when creating the aggregates.

    When I check my query log, I see that the OBIEE server does not use the aggregation tables for the facts I created them for. What could be causing this? Is it because of the additional columns for which no aggregation tables exist?

    Thank you
    Kevin

    Yes, those additional columns would be a problem.
    On the same note, logical fact tables must not contain attributes, only measures. (You can see this in any sample RPD provided by Oracle.)

  • Rebuilding aggregation tables

    Hello

    In the Oracle tutorial for the Aggregate Persistence Wizard, it tells you to rebuild aggregation tables by putting a "delete aggregates;" command at the beginning of the script. I wonder whether this is common practice for production environments as well. I have started working on a project where my predecessor set up the aggregate table script but did not put the "delete aggregates;" command at the start.

    Any ideas?

    Thank you
    Kevin

    Yes,

    You should always clean up first! If there has been any copy/paste action in the repository, you run the risk that the aggregation tables end up pointing at new internal IDs.

    http://obiee101.blogspot.com/2008/11/OBIEE-aggregate-persistence-Wizard.html

    Regards,

    John
    http://obiee101.blogspot.com
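    As a sketch, a script with the cleanup step prepended looks something like this; the business model, measure, and table names below are hypothetical (the real body comes from the Aggregate Persistence Wizard):

```sql
-- Drop any aggregates previously created by aggregate persistence,
-- so stale internal IDs from repository copy/paste cannot linger:
delete aggregates;

-- Then recreate them (wizard-generated in practice; names made up here):
create aggregates "ag_sales_month"
for "Sales"."Fact Sales"("Revenue", "Units")
at levels ("Sales"."Time Dim"."Month")
using connection pool "DW"."Connection Pool"
in "DW".."AGG_USER";
```

    The script is BI Server logical SQL, run through nqcmd, not regular database SQL.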

  • Single date dimension when creating aggregation tables

    Hi guys,

    I have a single date dimension (D1-D) with date_id as the key, and its granularity is at the day level. I have a fact table (F1-D) that holds daily transactions. I have now created three aggregate fact tables: F2-M (aggregated to monthly), F3-Q (aggregated to quarterly), and F4-Y (aggregated to yearly). As I said, I have a single date dimension table with date_id as the key. I also have month, quarter, and year columns in the date dimension.


    My question is: is this single dimension table sufficient to create the joins and maintain the BMM layer? I have joined date_id to all the fact tables in the physical layer. In the BMM layer, I have one logical fact table with 4 sources. I have created the date dimension hierarchy, created the logical levels (year, quarter, month, and day), and set their respective level keys. After doing this, I also set the logical levels for the 4 sources of the logical fact table.

    Here, I get an error saying:



    WARNINGS:


    BUSINESS MODEL Financial Model:
    [39059] Logical dimension table D04_DIM_DATE has a source D04_DIM_DATE at detail level D04_DIM_DATE that joins to a higher-level fact source F02_FACT_GL_DLY_TRAN_BAL. F03_FACT_GL_PERIOD_TRAN_BAL




    Can someone tell me why I get this error?

    Taking your questions in reverse: your monthly aggregate table must also carry the year information.

    That is so it can be rolled up to the parent levels of the hierarchy.

    In general, this is so you don't have to create an aggregation table for every situation - your monthly table can also serve yearly aggregates. That is still quite effective (12 times more data than needed, but better than 365 times).

    In your particular situation, where you have both a yearly AND a monthly aggregate, you might get away without the parent-level information - but I have not tested that scenario.

    On the second part: say you have a month description field and a month key field. When you select month description and revenue, OBIEE needs to know where to fetch the month description from. It cannot take it from the detail date dimension table, for the reasons mentioned previously, so you tell it to take it from the aggregate table. It is as simple as dragging the respective physical column from the aggregate table onto the existing logical column for the month description.

    Kind regards

    Robert

  • Which MacBook models support Gigabit speed BOTH wired (a Thunderbolt adapter is fine) AND over Wi-Fi? (The specs are unclear and inconsistent.)

    I have just upgraded to 1 Gb fiber Internet - YAY - only to learn that my current laptop cannot go above 100 Mb - BOO. It is not old and was not cheap, so: goodbye Windows, hello Mac! (This just moves up a purchase I was going to make anyway, since I use Macs for work.)

    Comparing the MacBook models, I can't figure out which ones support BOTH Gigabit speed when wired (a Thunderbolt adapter is fine if the machine supports Gigabit) AND Gigabit Wi-Fi. I saw one spec sheet that said 802.11n was Gigabit and 802.11ac was not, which I thought was the opposite of reality, so I don't trust the spec sheets, and I really don't trust salespeople to know this information.

    Models to consider:

    - Only current/new models, so no need to worry about older ones

    - The MacBook Air models only, so no need to worry about the MacBook Pro

    Help!

    Thank you

    All of the current models. However, the laptops will need an adapter to provide an Ethernet connection. The Apple USB Ethernet adapter is not adequate - it isn't Gigabit Ethernet.

  • 3 indexes on 3 different columns, but explain plan shows a full table scan for select queries

    I have a table, employee, with 3 indexes on 3 different columns: a function-based index ind1 on (upper(f_name)), ind2 on (emp_id), and a function-based index ind3 on (upper(l_name)) - for the columns f_name, emp_id, and l_name respectively. When I check the explain plans for the queries below, they all show a full table scan of the employee table. FYI - the employee table is non-partitioned.

    Can someone tell me why the 3 indexes are not used here?

    (1) select emp_id, upper(f_name), upper(l_name) from employee

    (2) select upper(f_name), emp_id, upper(l_name) from employee

    where upper(f_name) = upper(f_name)

    and emp_id = emp_id

    and upper(l_name) = upper(l_name)

    Can I push Oracle (version 11) to use these indexes somehow - maybe with hints? Any help is appreciated.

    
    Observations:
    
    SQL> desc emp1;
     Name                                      Null?    Type
     ----------------------------------------- -------- -----------------
     EMPID                                      NOT NULL NUMBER
     F_NAME                                    NOT NULL VARCHAR2(3)
     L_NAME                                    NOT NULL VARCHAR2(3)
     SALARY                                    NUMBER
     JOB_ROLE                                 VARCHAR2(5)
     DEPTID                                     NUMBER
    
    create index idx2 on emp1(empid);
    create index idx1 on emp1(upper(f_name) );
    create index idx3 on emp1(f_name,empid, l_name);
    exec dbms_stats.gather_table_stats(user,'EMP1', cascade=>true);
    
    8 rows selected.
    
    SQL> explain plan for
      2  select /*+ index_join(e idx1 idx2 idx3)*/   upper(l_name),empid, upper(f_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3449967945
    
    -------------------------------------------------------------------------
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FULL SCAN | IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
    -------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> explain plan for
      2  select    upper(f_name),empid,upper(l_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3449967945
    
    -------------------------------------------------------------------------
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FULL SCAN | IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
    -------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> explain plan for
      2  select /*+ index_ffs(e idx3)*/   upper(l_name),empid, upper(f_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 2496145112
    
    -----------------------------------------------------------------------------
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
    -----------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> explain plan for
      2  select /*+ index(e idx3)*/   upper(l_name),empid, upper(f_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3449967945
    
    -------------------------------------------------------------------------
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FULL SCAN | IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
    -------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> explain plan for
      2  select    upper(f_name),empid,upper(l_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3449967945
    
    -------------------------------------------------------------------------
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FULL SCAN | IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
    -------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> drop index idx3;
    
    Index dropped.
    
    SQL> explain plan for
      2     select   upper(l_name),empid, upper(f_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3330885630
    
    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      | 20000 |   175K|    18   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP1 | 20000 |   175K|    18   (0)| 00:00:01 |
    --------------------------------------------------------------------------
    
    8 rows selected.
    
    SQL> create index idx3 on emp1(f_name,empid, l_name );
    
    Index created.
    
    SQL>  explain plan for
      2     select   upper(l_name),empid, upper(f_name) from emp1 e;
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------
    Plan hash value: 3449967945
    
    -------------------------------------------------------------------------
    | Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |      | 20000 |   175K|    14   (0)| 00:00:01 |
    |   1 |  INDEX FULL SCAN | IDX3 | 20000 |   175K|    14   (0)| 00:00:01 |
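    A side note on the session above: IDX3 on the plain columns can stand in for the table only because all three columns are NOT NULL, with upper() applied on the fly during the scan. If the goal is an index that matches the query expressions directly, a function-based composite index is one option (a sketch, not tested against this schema):

```sql
-- Index the exact expressions the query projects, so Oracle can satisfy
-- the select from the index alone without computing upper() per row:
create index idx4 on emp1(upper(f_name), empid, upper(l_name));
exec dbms_stats.gather_table_stats(user, 'EMP1', cascade=>true);
```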
    
  • Physical model - propagating the table property 'Tablespace' doesn't seem to work in 4.0EA2

    I'm moving from v3.3.0.747 to 4.0EA2, but have hit some problems.

    In the physical model, when I try to propagate the table property 'Tablespace' to the remaining tables, a silent error happens and as a result the value is not propagated.

    The error appears in the log file:

    2013-09-24 16:31:31,776 [AWT-EventQueue-0] WARN PropertyWrapper - oracle.dbtools.crest.model.design.storage.oracle.v11g.TableProxyOraclev11g.setTableSpace(oracle.dbtools.crest.model.design.storage.oracle.TableSpaceOracle)

    java.lang.NoSuchMethodException: oracle.dbtools.crest.model.design.storage.oracle.v11g.TableProxyOraclev11g.setTableSpace(oracle.dbtools.crest.model.design.storage.oracle.TableSpaceOracle)

    at java.lang.Class.getMethod(Class.java:1624)

    at oracle.dbtools.crest.util.propertymap.PropertyWrapper.setValue(PropertyWrapper.java:65)

    at oracle.dbtools.crest.swingui.editor.storage.PropertiesPropagationDialog.applyTo(PropertiesPropagationDialog.java:353)

    at oracle.dbtools.crest.swingui.editor.storage.PropertiesPropagationDialog.setProperties(PropertiesPropagationDialog.java:337)

    at oracle.dbtools.crest.swingui.editor.storage.PropertiesPropagationDialog.access$400(PropertiesPropagationDialog.java:46)

    at oracle.dbtools.crest.swingui.editor.storage.PropertiesPropagationDialog$11.actionPerformed(PropertiesPropagationDialog.java:300)

    at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2018)

    at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2341)

    at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)

    at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)

    at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:252)

    at java.awt.Component.processMouseEvent(Component.java:6505)

    ...



    However, if I try to propagate the tablespace value for indexes, that works fine.


    Any ideas?

    Thank you.

    Hello

    This problem is fixed in Data Modeler 4.0 EA3.

    Thank you

    David

  • Totals or subtotals for an OBIEE report without an aggregation table

    Hello people

    I created a table report in which the aggregation rule for all the fact columns is None, with Default in the column formula, and for this report I need subtotals at the bottom. I enabled the sum at the top left of the report (where it is required), but I still don't see the overall totals. When I set the aggregation rule to Sum, or anything other than None, I can see the totals, but then my numbers are aggregated, which I don't want. Do you think this is just how the tool works, or is there an alternative way you can help me get the totals?

    Please reply as soon as possible, as this is an important requirement.

    but I don't have to aggregate numbers

    Do you mean you are trying to display char columns...?

    If so, try "Server Complex Aggregate", available in the column formula's aggregation rule.

    Kind regards
    Rambeau

  • HP keyboard model KU-0841: typing speed issues

    I have the same problem as a number of Envy X2 users, in that I type so fast that the characters come out jumbled.  I recall I used to be able to adjust the response speed so that the keys could keep up with my fingers and I could type faster - or maybe I dreamed it.  This hardware does not seem to offer that choice - cursor speed and repeat delay are the only options I have in my Control Panel.

    Is it possible to speed up the response time between my fingers and the keyboard?

    KGY30, welcome to the forum.

    Go to Control Panel / Keyboard.  Move the "Repeat delay" slider to the left to increase the delay, or to the right to decrease it.

    Please click on "Kudos" if I helped you, and click "Accept as Solution" if your problem is resolved.

  • After deleting all the records in a table, the insertion speed does not change

    I have an empty table, and inserting one record takes 100 ms.
    Once this table holds 400,000 records, inserting one record takes 1 s. That is OK, because I need to do a comparison against an index before inserting each record, so the more records, the more time.
    The problem is that when I delete all the records in this table, the insertion time stays at 1 s; it does not drop back to 100 ms. Why?

    Hello

    Read this part of the oracle documentation

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14220/logical.htm#CNCPT004

    The reason it is still 1 s is that when you inserted the 400,000 records, the HWM (the high-water mark is the boundary between used and unused space in a segment) moved up. When you delete all the records, your HWM stays at the same position; it is not reset to 0. So when you insert one record, Oracle still searches the free space below the HWM before placing the row (a regular insert goes through several steps before the data is inserted). If you truncate your table and try again, it will be faster, because truncate resets the HWM to 0.
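    In SQL terms, the difference being described is the following (an illustrative sketch; the table name is made up):

```sql
-- DELETE removes the rows but leaves the segment's high-water mark (HWM)
-- where it was, so inserts and scans still walk the space below it:
delete from my_table;
commit;

-- TRUNCATE deallocates the extents and resets the HWM, so the table
-- behaves like a freshly created one again:
truncate table my_table;
```

    Note that TRUNCATE is DDL: it commits implicitly and cannot be rolled back, unlike DELETE.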

    Regards

  • Multiple aggregation tables in OBIEE

    We are hitting a funky issue with the use of aggregation tables. Unfortunately, I don't see any obvious way in the documentation or articles to address this problem.

    Using the Aggregate Persistence Wizard, we have built 2 aggregates over all of our data.

    The first is a monthly aggregate with a rolled-up data set, and the second is a daily aggregate with a more limited data set.

    For a set of monthly queries and dashboards, both aggregates produce correct results, but the monthly agg is 10 times faster (< 5 seconds). However, for some reason the BI server chooses the daily aggregate instead.
    Any ideas on how we can force / trick the BI server into using the monthly aggregate first whenever it can?
    Thank you
    Michael

    Can you explain, with an example, how you expect to use the monthly aggregate when you have a daily column in your report?
    That is, of course, not possible as far as I can see.

    I think I do not really understand your question.
    The best way is to give us the OBIEE logical SQL (you will find it in the Advanced tab in Answers) where you expect OBIEE to use the monthly aggregate and not the daily one.
    And also give us the corresponding query log, so we can see the logical plan.

    Here is an article on retrieving the log:
    http://gerardnico.com/wiki/dat/OBIEE/bi_server/log/obiee_query_performed

  • Create table triggers, sequence, insert and update with a "template"?

    This must be an RTFM / trial-and-error thing, but I wanted to ask: is it possible to have templates or similar automation for the following scenario?

    1.) I add a table to the logical model

    2.) Using the glossary, I transform it into the relational model, which was reverse-engineered from / synchronized with the data dictionary

    3.) Then I add the new table

    - but -

    I would then like the auto-generated DDL that is synchronized to the database to:

    - create a sequence for the id column

    - create the table

    - create the PK index for the id column

    - create triggers for insert and update

    - (the idea is to have db_created_dt and db_modified_dt defined in every table, as audit fields etc.)

    - enable the triggers

    Each of them following the same naming convention.

    This is similar to what the APEX SQL Workshop "create table from Excel copy/paste" utility does, which creates an 'id' column plus a sequence and an insert trigger.

    rgrds Paavo

    Hi Paavo,

    most of these steps can be done in one way or another:

    -create sequence for the id column

    -create table

    -create indexes for the id column pk

    If you want to start in the logical model and you don't want the ID column there, select the 'Create Surrogate Key' checkbox in the entity dialog - you will get an identity column in the relational model and, depending on the database version and the settings in 'Preferences > Data Modeler > Model > Physical > Oracle', the sequence and the trigger can be generated to take care of it.

    audit fields defined in the table, so that each table has them

    You can add the same set of columns to all tables with a "Table template" transformation script.

    You can also look here: Oracle SQL Developer Data Modeler 4.1 - user-defined DDL generation using transformation scripts -

    to see how to tune your DDL generation with transformation scripts. Data Modeler comes with an example that generates separate journaling tables and triggers. You can create your own script that builds the triggers maintaining these common columns.

    Philippe

  • ADF - assigning a collection model to a RichTable

    In a managed bean, I create a RichTable, but then I want to assign it the collection model of an existing view object.

    I tried the following, without success:

    RichTable itemDtlTable = new RichTable();
    itemDtlTable.setValue("#{bindings.GisWebLayerHdrView1.collectionModel}");
    

    Any ideas?

    -Andreas

    So, finally, are you able to assign the collectionModel to the table?

    BTW, what is the code in getElExpression?

    Also, when you have created the table and column manually, just use an outputText to display the data within the column - why are you using a dynamicComponent inside the column?

    RichOutputText rot = new RichOutputText();

    rot.setValue(getValueExpression("#{row.bindings[mycolumn.name].inputValue}"));

    and add this column to the table. Since you use row, I see that you have set it as the table's var.

    You will have to set it like this in order to make use of the table's collection model:

    itemDtlTable.setVar("row");


    Andreas, it's complicated


    Ashish
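    For what it's worth, the usual reason the original setValue() attempt fails is that it stores the literal "#{...}" string instead of evaluating the EL. Binding the expression explicitly would look roughly like this (a sketch against the standard JSF EL API; not compiled or tested here, and it assumes an active FacesContext):

```java
import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.faces.context.FacesContext;
import oracle.adf.view.rich.component.rich.data.RichTable;

// Build the table and bind its "value" attribute to the collection model:
RichTable itemDtlTable = new RichTable();
FacesContext fc = FacesContext.getCurrentInstance();
ELContext elc = fc.getELContext();
ExpressionFactory ef = fc.getApplication().getExpressionFactory();
ValueExpression ve = ef.createValueExpression(
    elc, "#{bindings.GisWebLayerHdrView1.collectionModel}", Object.class);
itemDtlTable.setValueExpression("value", ve);
itemDtlTable.setVar("row");  // columns can then reference #{row...}
```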

  • Join query across 4 tables

    Hi all,

    I am trying to find the asset categories.

    The tables used are:

    fa_categories_b;

    fa_category_books;

    fa_category_book_defaults;

    gl_code_combinations;

    I have joined the tables using the following query:

    select 'Main category', a.segment2 as "Subcategory", b.asset_cost_acct, b.asset_clearing_acct, b.deprn_expense_acct, a.category_id, a.category_type, a.segment1,

    b.deprn_reserve_acct, b.bonus_deprn_expense_acct, b.bonus_deprn_reserve_acct, c.life_in_months/12 as life, c.deprn_method

    from fa_categories_b a inner join fa_category_books b on a.category_id = b.category_id

    inner join fa_category_book_defaults c on b.category_id = c.category_id

    order by stuff;

    However, that only uses

    fa_categories_b;

    fa_category_books;

    fa_category_book_defaults;

    I want to join gl_code_combinations on its code_combination_id to each of the CCID columns in fa_category_books: asset_cost_acct_ccid, asset_clearing_acct_ccid, deprn_expense_acct_ccid,
    deprn_reserve_acct_ccid, bonus_deprn_expense_acct_ccid, and bonus_deprn_reserve_acct_ccid.

    Grateful if someone can advise.

    Thank you

    I have put them in as left joins, but if you can be sure there will always be a matching value in gl_code_combinations for the three main tables, then you can use inner joins.

    select 'Main category', a.segment2 as "Subcategory", b.asset_cost_acct, b.asset_clearing_acct, b.deprn_expense_acct, a.category_id, a.category_type, a.segment1,

    b.deprn_reserve_acct, b.bonus_deprn_expense_acct, b.bonus_deprn_reserve_acct, c.life_in_months/12 as life, c.deprn_method,

    g1.cost_acct_something,

    g2.clearing_acct_something,

    ... other columns of g1, g2, g3 etc.

    from fa_categories_b a

    inner join fa_category_books b on a.category_id = b.category_id

    inner join fa_category_book_defaults c on b.category_id = c.category_id

    left join gl_code_combinations g1 on g1.code_combination_id = b.asset_cost_account_ccid

    left join gl_code_combinations g2 on g2.code_combination_id = b.asset_clearing_account_ccid

    ... 4 joins more

    order by stuff;
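    The inner vs. left join behavior itself is generic SQL, and can be illustrated with a tiny self-contained example (SQLite via Python; the tables and data here are made up for illustration, not the FA tables):

```python
import sqlite3

# Toy schema: every category row exists, but one ccid has no match in the
# code-combinations table (like a missing gl_code_combinations row).
con = sqlite3.connect(":memory:")
con.executescript("""
    create table cat (id integer primary key, name text, ccid integer);
    create table gcc (ccid integer primary key, account text);
    insert into cat values (1, 'Buildings', 100), (2, 'Vehicles', 999);
    insert into gcc values (100, '01-1500');  -- ccid 999 is missing on purpose
""")

# An inner join silently drops the category whose ccid has no match:
inner = con.execute(
    "select c.name, g.account from cat c "
    "join gcc g on g.ccid = c.ccid order by c.id"
).fetchall()

# A left join keeps every category, with NULL where there is no match:
left = con.execute(
    "select c.name, g.account from cat c "
    "left join gcc g on g.ccid = c.ccid order by c.id"
).fetchall()

print(inner)  # [('Buildings', '01-1500')]
print(left)   # [('Buildings', '01-1500'), ('Vehicles', None)]
```

    This is exactly the trade-off in the answer above: use the left joins unless the CCID columns are guaranteed to be populated and valid.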
