Rebuilding aggregate tables

Hello

In the tutorial for the Aggregate Persistence Wizard, it tells you to rebuild the aggregate tables by putting a 'Delete aggregates;' command at the beginning of the script. I wonder whether that is common practice for production environments as well. I have started working on a project where the person here before me put the aggregate table script in place but did not include the 'Delete aggregates;' command at the start.

Any ideas?

Thank you
Kevin

Yes,

You should always clean up first! If there have been copy/paste actions in the repository, you run the risk that the aggregate tables have been given new IDs behind the scenes.
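
For reference, a minimal sketch of such a script with the cleanup first (the business model, level, connection pool, and schema names here are all hypothetical):

    Delete aggregates;

    Create aggregates "ag_revenue_month"
    for "Sales"."Fact Revenue" at levels ("Sales"."Time"."Month")
    using connection pool "OrclDB"."Aggr Pool"
    in "OrclDB".."AGG_SCHEMA";

Running "Delete aggregates;" first drops every aggregate previously created by the wizard, so the rebuild never leaves orphaned tables or stale repository IDs behind.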

http://obiee101.blogspot.com/2008/11/OBIEE-aggregate-persistence-Wizard.html

Regards

John
http://obiee101.blogspot.com

Tags: Business Intelligence

Similar Questions

  • Moving a table to the same tablespace

    Hello!

    I have a table of 5.7 TB and it is fragmented, so I need to reclaim space in the data files.

    Since the table is stored in an LMT tablespace, I am unable to do a shrink space, so I am just doing a rebuild of this table by issuing the command to move it to the same tablespace.

    I don't have enough space to create a new tablespace of the same size as the affected tablespace in order to do a reorganization.

    I dropped the old partitions to free space in the tablespace, and now I have 2.3 TB free.

    It is a partitioned table, and my concern is that I don't have enough space to create a different tablespace. If I run the move in the same tablespace, will it use the space released in the tablespace, or should I increase the space in my temp tablespace?

    Another doubt: if I run this command, will it lock the table?

    Thanks in advance!

    Regards

    Yes, you can rebuild the partitions in the same tablespace. Unfortunately, nobody guarantees that Oracle will allocate from the beginning of the tablespace, so there is a good chance that after the table rebuild finishes you will not be able to shrink the tablespace. One trick you can use: run a query to find the objects at the end of the tablespace, move those objects, shrink the tablespace as much as you can, then try again. It is a long project, but if you want to do it anyway, go ahead.

    select de.*
      from  dba_extents de,
           (select file_id, max(block_id) block_id
              from dba_extents
             where tablespace_name= 'LMT'
             group by file_id) t
     where tablespace_name= 'LMT'
       and de.file_id = t.file_id
       and de.block_id = t.block_id
    /
    
    ALTER TABLE <owner>.<table_name> MOVE PARTITION <partition_name> TABLESPACE LMT PARALLEL;
    

    Also, remember to rebuild the unusable indexes after moving the partitions.

    select 'alter INDEX ' || index_OWNER || '.' || INDEX_NAME
    || ' REBUILD PARTITION ' || PARTITION_NAME || ' TABLESPACE ' || TABLESPACE_NAME || ' NOLOGGING PARALLEL;'
    from all_ind_partitions
    where status='UNUSABLE'
    order by  partition_name DESC, index_name
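
    Once the objects at the tail end of the tablespace have been moved, you can try shrinking the datafile. A hedged sketch (the file name and target size are placeholders; check DBA_DATA_FILES for the real values first):

    -- Resize the datafile down toward the new high-water mark
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/lmt01.dbf' RESIZE 3400G;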
    
  • Creating aggregate tables using Job Manager - Windows client, Linux BI server

    Hi people,

    I have OBIEE 10.1.3.4.1 running on a Linux server.
    I'm trying to run the aggregate table script created via the wizard by using the Job Manager.

    Job Manager requires a DSN to run against. But since the job scheduler runs on the Linux machine, there is no DSN to point to?

    My question is:

    Do I need to install and configure a scheduler on a Windows environment to launch Job Manager to run the nqcmd script for the aggregate tables?
    or
    Can I get the Linux box to have its own DSN, or see the Windows one somehow?

    Any information or advice would really help untangle me.

    Thank you.

    You don't need to change anything; just use AnalyticsWeb in your nqcmd call, and it should work.
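
    For reference, a hedged sketch of that nqcmd call on the Linux server (the paths, credentials, and file names are hypothetical; AnalyticsWeb is the default repository DSN entry in odbc.ini on the BI server):

    nqcmd -d AnalyticsWeb -u Administrator -p password \
          -s /home/oracle/scripts/create_aggregates.sql \
          -o /home/oracle/logs/create_aggregates.log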

  • Should a table be rebuilt after deleting most of its data?

    Hi all
    We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production DB that has 872,944 rows. Most of its data is now useless; based on a date column in the table, we must keep just the last month of data and delete the rest. After the delete, the table will have just 3,000 rows.

    However, since the table was huge before (872k rows before the delete), will deleting the data release its Oracle blocks and reduce the size of the table? If not, will rebuilding the table (online redefinition) help so that the query doing a full scan on this table goes faster?

    I checked using an example table: simply deleting data does not release the Oracle blocks - they remain allocated to the table (per user_tables), and the full table scan cost stays the same. I think that after this delete I have to do an online table redefinition; is that the right decision, given that we have a query that does a full table scan?

    Thank you

    If you read up on those commands, you will find that they require a DDL lock. Your users should not notice this.
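
    As a hedged sketch, assuming the table sits in an ASSM tablespace and has no restricting datatypes, one in-place option after the delete is a segment shrink (the table name is hypothetical):

    -- Allow Oracle to relocate rows, then release space below the high-water mark
    ALTER TABLE big_table ENABLE ROW MOVEMENT;
    ALTER TABLE big_table SHRINK SPACE CASCADE;

    Online redefinition (DBMS_REDEFINITION) remains the alternative if shrink is not available in your setup.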

  • Need an aggregated dimension to use with the summary tables

    Hello

    I have two fact tables, workdetail and worksummary. Worksummary is aggregated at the month level of the time dim, and workdetail is at the day level of the time dim.

    Now, I have set up my business model with Timedim and the workdetail table (creating a year-month-day hierarchy for the time dim). Now I want to use the worksummary table; how can I include this in my business model? I know I need to create a new logical table source and set the levels.

    My main question is: do I have to create another physical table at the month level, or can I use the same physical calendar dim with the summary fact tables?

    The answer to your question is YES: to use aggregated summary tables, you must have aggregated dimensions. Otherwise the data will be redundant and return inconsistent values.

    In your case, if you use the same day-level Timedim table with the summaries, the data will be multiplied by 30, because the month key of the time dim is repeated on several day rows.

    The simplest solution is to create a view over the Time_Dim table: select distinct year, month, monthkey. This view returns only unique year-month combinations, so each month has only one row.
    -> Import the view into your physical layer and create a join with the summary fact table.
    -> Add the months table (which is the view) to the logical time_dim as another source, and set the levels.
    -> Include your summary table in the logical fact table as another source, and set its level to month on the time dim.

    It will work. Let me know if I'm not clear. We can also expect further comments from the experts.

    -Madan
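
    A hedged sketch of the view described above (the table and column names are assumptions based on the post):

    -- One row per month; joins cleanly to the month-level summary fact
    CREATE OR REPLACE VIEW time_month_v AS
    SELECT DISTINCT year, month, monthkey
    FROM time_dim;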

  • Datastore missing after rebuilding the RAID array

    One of the drives in my RAID array failed a few days ago.

    I had a spare drive in the array, so the controller immediately began the rebuild process, and all the servers kept running throughout.

    When the rebuild was completed (I verified this via the RAID array's user interface and the log), I had to shut down the server to remove the defective disk (the chassis is not hot-swappable), and when I restarted the server, the datastore on the RAID array wasn't there anymore.  I also verified through the RAID controller interface that I had removed the (defective) drive and that the array was in a ready state when it came back up.

    In the vSphere client, when I click the Storage... Add link, the server sees the hardware, but if I click further it tells me that it will reformat the volume.  See the attachment.  I most certainly did not take the next step and reformat.  I simply took the screenshot and backed out.

    I found these instructions, but they are for an older version of ESXi, and I am unsure whether they are correct for ESXi 6.0.0 338124:

    VMware KB: Missing datastore after rebuilding a RAID disk/LUN

    Are these the steps I should follow?

    If these aren't the right instructions, can you tell me which version is for ESXi 6.0.0 338124? I couldn't find anything either.

    Thank you

    Hi ThompsG,

    Yes, there are two datastores for the virtual machine located on the RAID array.  The virtual machine itself was stored in a different datastore that was not on the RAID array.

    I spent about 48 hours last week, including this morning, trying to coax ESXi into recognizing the volumes, with no luck.  Finally, I gave up and removed the virtual machine's hard drives that were on the corrupt datastore.  Then the virtual machine came up without problems.

    Finally, since I have everything on these 2 volumes backed up to a cloud provider, I recreated the two datastores on the RAID array and began the restore process. It is currently running and has about 16 days left to go.

  • Missing right parenthesis when inserting distinct rows selected from another table

    Hello

    Could you help me with the following question?

    I have the following tables:

    CREATE TABLE table1 (
    ID varchar (12),
    col2 varchar (10),
    col3 varchar (10),
    level varchar (10))

    CREATE TABLE table2 (
    Id2 varchar (12),
    A varchar (10),
    B number (1),
    CONSTRAINT PK PRIMARY KEY (ID2, A));

    INSERT INTO table2 (ID2, A, B) SELECT ID, col2,
    MAX (CASE WHEN level = 'level1' then 1
              WHEN level = 'level2' then 2
              WHEN level = 'level3' then 3) as col3
    FROM table1 GROUP BY ID, col2;

    table1 has duplicates as follows:

    Id2 COL2 COL3 level

    A1 pepe football level1

    A1 pepe football level2

    A1 pepe football level1

    A1 pepe basket level2

    A1 pepe pingpong level3

    The output should contain rows with a unique key (ID2, col3), and the level must be the greatest:

    Id2 COL2 COL3 level

    A1 pepe football level2

    A1 pepe basket level2

    A1 pepe pingpong level3

    The script gives me the following error message:

    - missing right parenthesis, referring to the MAX function.

    Thanks in advance.

    Kind regards

    Hello

    Remember the ABC's of the GROUP BY:

    When you use a GROUP BY clause or an aggregate function, then everything in the SELECT clause must be:

    (A) an aggregate function,

    (B) one of the GROUP BY expressions,

    (C) a constant, or

    (D) something that depends on the above.  (For example, if you "GROUP BY TRUNC (dt)", you can SELECT "TO_CHAR (TRUNC (dt), 'Mon - DD')".)

    In your query, there are 5 columns in the SELECT clause.  The last one is a MAX (...) function; it is an aggregate, so it is fine.

    The first 2 columns are also named in the GROUP BY clause, so they are fine.

    The other 2 columns, country and internal_Id, do not match any of the above categories.  These 2 columns cause the error.

    There are many ways to avoid this error, each producing different results.  You could:

    • remove these 2 columns from the SELECT clause
    • add these 2 columns to the GROUP BY clause
    • use aggregate functions, such as MIN, on the 2 columns
    • remove country from the SELECT clause and add internal_id to the GROUP BY clause
    • remove internal_id from the SELECT clause and add country to the GROUP BY clause
    • ...

    What results do you want?

    Whenever you have a question, please post a small example of data (CREATE TABLE and INSERT statements) for all the tables involved, so the people who want to help you can recreate the problem and test their ideas.  Also post the results you want from that data, as well as an explanation of how you get those results from that data.

    Always say which version of Oracle you are using (for example, 11.2.0.2.0).

    See the forum FAQ: https://forums.oracle.com/message/9362002
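
    Applying those rules to the query in this thread, a hedged guess at what was intended (note that LEVEL is a reserved word in Oracle, so the real table presumably uses a different column name; the missing END is what raises the 'missing right parenthesis' error):

    INSERT INTO table2 (id2, a, b)
    SELECT id, col2,
           MAX(CASE WHEN level = 'level1' THEN 1
                    WHEN level = 'level2' THEN 2
                    WHEN level = 'level3' THEN 3
               END) AS b    -- END closes the CASE inside MAX()
    FROM table1
    GROUP BY id, col2;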

  • Modeling aggregate tables to speed up queries

    Hello
    Can someone help me with the topics
    "Model aggregate tables to speed up query processing" and
    "Model partitions and fragments to improve the usability and performance of queries"?

    I am new to this concept and have not worked on it before.

    Kind regards
    Arun

    Hi Arun,

    There are some good articles on aggregate awareness / aggregate persistence in OBIEE; check here:

    http://obiee101.blogspot.com/2008/11/OBIEE-making-it-aggregate-aware.html
    and here
    http://obiee101.blogspot.com/2008/11/OBIEE-aggregate-persistence-Wizard.html

    Also check Oracle documentation (obviously).

    For fragmentation, once again see these blogs on how to do it:
    http://108obiee.blogspot.com/2009/01/fragmentation-in-OBIEE.html
    and
    http://gerardnico.com/wiki/dat/OBIEE/fragmentation

    Let us know if you have specific questions after reading the links above.
    Cheers
    Alastair

  • Return the max length from a nested table

    Hello

    I want to create a function that accepts a nested table / array as input and returns the maximum length of the elements in the array.

    For example, if I pass the table as arr('India','Mumbai','Kolkata','Pune'), the function should return the max length 7 (Kolkata).

    Can we apply an aggregate function over the nested table, or is there an alternative?

    Kind regards
    Rakesh

    Hello

    Here's a possible solution:

    
    create or replace type tCharArray is table of varchar2(1000);
    /

    create or replace function get_max_size(p_array tCharArray) return number
    is
      l_result number;
    begin
      -- COLUMN_VALUE is the pseudo-column TABLE() exposes for a scalar collection
      select max(length(column_value))
        into l_result
        from table(p_array);
      return l_result;
    end;
    /
    
    select get_max_size(tCharArray('abc', 'aaaaa')) from dual;
    

    Best regards
    Nikolai

  • Single date dimension when creating aggregate tables

    Hi guys,

    I have a single date dimension (D1-D) with date_id as the key, and the granularity is at the day level. I have a fact table (F1-D) that holds daily transactions. Now I have created three aggregate tables: F2-M (aggregated to monthly), F3-Q (aggregated to quarterly), and F4-Y (aggregated to yearly). As I said, I have a single date table with date_id as the dimension key. I have other columns (month, quarter, year) in the date dimension.


    My question is: is this single dimension table sufficient to create the joins and maintain the BMM layer? I joined date_id to all the facts in the physical layer. In the BMM layer, I have one logical fact table with 4 sources. I have created the date dimension hierarchy, created the logical levels year, quarter, month, and day, and set their respective level keys. After doing this, I also set the logical levels for the 4 logical table sources of the fact table.

    Here, I get an error saying:



    WARNINGS:


    BUSINESS MODEL financial model:
    [39059] Logical dimension table D04_DIM_DATE has a source D04_DIM_DATE at the detail level D04_DIM_DATE that joins to a fact source at a higher level: F02_FACT_GL_DLY_TRAN_BAL.F03_FACT_GL_PERIOD_TRAN_BAL




    Can someone tell me why I get this error?

    In short - your month aggregate table must carry year information.

    That is so it can be rolled up to the parent levels of the hierarchy.

    In general, this is so you don't have to create an aggregate table for every situation - your month table can be used for year-level aggregates. That is still quite effective (12 times more data than needed, but better than 365 times).

    In your particular situation, where you have both a year and a month aggregate, you might get away without information from the parent levels - but I have not tested that scenario.

    For the second part, let's say you have a month description field and a month key field. When you select month description and revenue, OBIEE needs to know where to get the month description from. It can't get it from the day-level date dimension table, for the reasons mentioned previously. So you tell it to get it from the aggregate table. It is as simple as dragging the respective physical column from the aggregate table onto the existing logical column for the month description.

    Kind regards

    Robert
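
    A hedged sketch of what carrying the parent level looks like physically (all table and column names here are hypothetical):

    -- Month-level aggregate that also carries its parent (year) key,
    -- so the same table can satisfy year-level queries
    CREATE TABLE f_gl_month_agg AS
    SELECT d.year, d.monthkey, SUM(f.tran_bal) AS tran_bal
      FROM f_gl_daily f
      JOIN d04_dim_date d ON d.date_id = f.date_id
     GROUP BY d.year, d.monthkey;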

  • How to choose a skin when using the aggregator?

    I am writing a tutorial that I put into the aggregator (I use Captivate 6). Everything is perfect, but I want to change the generic skin. I know it is possible, as someone did it for the last tutorial that was made. Any suggestions? (Here's my boring table of contents.)

    untitled.JPG

    Hello

    You do this by modifying the skin in one of the projects used, then defining that project as the one the Aggregator uses. I don't have it open at the moment, but if I remember correctly, it will appear in the list with a dark green font to indicate it is the one that will be used.

    Cheers... Rick

  • GET THE MAX DATE FROM THE TARGET TABLE

    Hello friends,

    Can someone tell me how to extract the max date from the target table?

    Thank you.

    Azhar

    Sorry, why do you say once again that you are not able to use the aggregator operator to get a max value? You don't need a GROUP BY when taking a max value from a table, unless otherwise specified.

    Try it with a simple table-a-to-table-b map:

    table a (itemno number)

    table b (maxitem number)

    Use the aggregator and don't put anything in the group section of the OWB agg transformation. Generate the code and take a look.

  • Calculation problem with a fact table measure used in a bridge table model

    Hi all

    I am having problems with the calculation of a fact table measure since I used it as part of a calculation in a bridge table relationship.

    In a fact table, PROJECT_FACT, I had a column (PROJECT_COST) whose default aggregate is SUM. Whenever PROJECT_COST was used with any dimension, the appropriate aggregation was done at the appropriate levels. But not any more. One of the relationships from project_fact is to a dimension called PROJECT.

    PROJECT_FACT contains details by employee and by day worked on a project. So for one day, employee Joe could have a PROJECT_COST of $80 for project 123; the next day, Joe might have a PROJECT_COST of $40 for the same project.

    Dimension table, PROJECT, contains details of the project.

    A new feature was added to the software - several customers can now be charged to a PROJECT, where before only a single client was charged.
    This percentage breakdown of the charges is in a new table - PROJECT_BRIDGE. PROJECT_BRIDGE has PROJECT, CUSTOMER_ID, and BILL_PCT. The BILL_PCT values always add up to 1.

    Thus, the bridge table might look like...
    PROJECT CUSTOMER_ID BILL_PCT
    123 100 .20
    123 200 .30
    123 300 .50
    456 400 1.00
    678 400 1.00

    Where project 123 has a breakdown across multiple clients (.20, .30, .50).

    Let's say in PROJECT_FACT, if you sum all the PROJECT_COST for project = 123, you get $1000.


    Here are the steps I followed:

    - In the physical layer, PROJECT_FACT has a 1:M to PROJECT_BRIDGE, and PROJECT has a 1:M to PROJECT_BRIDGE.
    PROJECT_FACT => PROJECT_BRIDGE <= PROJECT

    - In the logical layer, PROJECT has a 1:M with PROJECT_FACT.
    PROJECT => PROJECT_FACT

    - The logical fact table source is mapped to the bridge table, PROJECT_BRIDGE, so it now has several tables mapped (PROJECT_FACT & PROJECT_BRIDGE). They are set to an INNER join.
    - I created a calculated measure, MULT_CUST_COST, using physical columns, which computes the sum of PROJECT_COST times the percentage in the bridge table. It looks like: PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT
    - I put MULT_CUST_COST in the presentation layer.

    We still want the old PROJECT_COST around until this is phased in, so it is in the presentation layer as well.


    Well, I ran a request with only PROJECT, MULT_CUST_COST (the new calculation), and PROJECT_COST (the original). I expected:

    PROJECT MULT_CUST_COST PROJECT_COST
    123 $1000 $1000

    I am getting that for MULT_CUST_COST; however, PROJECT_COST comes back at triple the value (perhaps because there are 3 percentage rows?)...

    PROJECT MULT_CUST_COST PROJECT_COST
    123 $1000 (correct) $3000 (incorrect - it has been tripled)

    If I were to look at the SQL, it should be:
    SELECT SUM (PROJECT_COST),
    SUM (PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT),
    PROJECT
    FROM ...
    GROUP BY PROJECT


    PROJECT_COST used to work properly before the bridge table was modeled.
    Any ideas on what I got wrong?

    Thank you!

    Hello

    Phew, what a long question!

    If I understand correctly, I think the problem is with your old cost measure, or rather with combining it with the new one in the same request. If you think about it, your request as explained above will bring back 3 rows from the database, which is why your old cost measure is multiplied. I think that if you took it out of the query, your bridge table would work properly for the new measure alone?

    I would consider migrating your historical data into the bridge table model so that you have one type of query. Each historical fact would have a single row in the bridge with a BILL_PCT of 1.0.

    Good luck

    Paul
    http://total-bi.com

  • Report with multiple COUNT columns from the same table

    I am new to discoverer, so I'm a little lost.

    I am working on creating a report to show usage data for the e-Business Knowledge Base. I have written the SQL query using subqueries; it is in the format:

    Solution number | Solution title | Solution views | Positive feedback | Negative feedback
    12345 | Title | 345 | 98 | 34


    The 'Views', 'Positive' and 'Negative' entries are stored in the same table, so I do a count where setid = setid and usedtype = VS, then a count where usedtype = PF, and another where usedtype = NF.

    In Discoverer, I can get the solution number, the title, and the totals, but I can't seem to figure out how to get a COUNT of three different things from the same table as columns on the same line.

    When I go to Edit Sheet -> Select Items, once I select the COUNT option on the UsedType column of the CS_KB_SET_USED_HISTS table, I can't select it again. I also haven't found a way to add a column based on a typed-in query.

    If someone could help it would be much appreciated.

    Thank you

    Published by: Toolman21 on December 2, 2010 14:17
    Edited to correct spacing.

    Hello
    You can separate the column with a CASE or DECODE.
    For example, create 2 calculations:

    case
    when usedtype = 'PF'
    then 1
    else 0
    end

    case
    when usedtype = 'NF'
    then 1
    else 0
    end

    After that, you can create a SUM (or COUNT) aggregation over those calculations.

    Tamir
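
    The same idea in plain SQL, as a hedged sketch (the names follow the post; COUNT ignores NULLs, so each conditional CASE counts only its own usage type):

    SELECT set_id,
           COUNT(CASE WHEN usedtype = 'VS' THEN 1 END) AS views,
           COUNT(CASE WHEN usedtype = 'PF' THEN 1 END) AS positive_feedback,
           COUNT(CASE WHEN usedtype = 'NF' THEN 1 END) AS negative_feedback
      FROM cs_kb_set_used_hists
     GROUP BY set_id;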

  • Expanding the TOC titles in the aggregator

    I have a series of tutorials, and linking them via the aggregator is exactly what I need for this project. However, as you can see in the screenshot below, some of my titles are cut off. Is it possible to adjust the amount of space, or maybe wrap the text of the titles? I played around with the font size for the table of contents in the movie master, and I seem to be in the right place. I defer to your wisdom, Cap gurus - any ideas?

    Krista

    AggTOC.JPG

    Hello

    If this is Captivate 5: you can allow more space for the TOC in the settings; the minimum is 250 pixels. I don't think wrapping the titles is possible; decreasing the font size is.
