nQSError 14020 when executing a cross-fact query

I get the following error message when I try to run a cross-fact query in OBIEE Answers:

State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error occurred. [nQSError: 14020] None of the fact tables are compatible with the query request D3.x. (HY000)

I've implemented a new RPD with only 4 tables and made sure that all levels of content have been set up correctly, but I still get the error. I have the following configuration:


Physical layer
I have two fact tables and three dimension tables:
F1 <- D1, D2
F2 <- D2, D3

BMM
I have one fact table and three dimension tables:
F <- D1, D2, D3

I created a dimension hierarchy for each dimension table and kept the two default logical levels - Total and Detail.

Each dimension table has a single LTS, set to the "Detail" level.

The fact table has two LTS, and levels are set up as follows:
F1: D1 = Detail, D2 = Detail, D3 = Total
F2: D1 = Total, D2 = Detail, D3 = Detail

Answers
I am trying to run the following query in OBIEE answers: D1.x, D2.x, D3.x

Could someone point me in the right direction?

-Tim

Hi Tim,

In your request, do you have just the dimensions? How about adding a measure to this report and checking whether you still get the same error?

Your LTS settings look fine to me (since you have merged the individual facts in the BMM). A query across several dimensions has to be routed through the fact relationships, so here it must span both facts ;). Could you post your request so that we can have a look?

Thank you
Diakité

Tags: Business Intelligence

Similar Questions

  • Find the query executed in webcenter content triggered from portal...


    Hello

    I want to see what query is executed in WebCenter Content when triggered from the portal. How can I see it in the log?

    1. Log in to the WebCenter Content Administration console and navigate to Administration - "System Audit Information".

    2. In the 'Tracing Sections Information' section, make a backup of the 'Active Sections'.

    3. In 'Active Sections', add the new criteria:

    systemdatabase, search*, requestaudit

    4. Select 'Full Verbose', 'Save', then 'Refresh'.

    5. In the upper right corner, click on "View Server Output"; it will show the SQL statements as below.

  • Inline query vs cross join

    Hi all

    Two tables which have no relationship to each other - for example, an Employees table and a SystemParameter table with a StartWorkTime column.

    We need a query with the data in the two tables:

    Get all employees and the startworktime (which is the same for everyone)

    Which is cheaper: an inline (scalar) subquery or a Cartesian product / cross join?

    Inline:

    SELECT name, function,
           (SELECT startworktime FROM systemparameter)
    FROM employees;

    Cartesian product:

    SELECT name, function, startworktime
    FROM employees
    CROSS JOIN systemparameter;

    Your opinion about this.

    Both do the same thing. I seriously doubt one would have a performance benefit over the other.
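The equivalence the answer describes can be checked with a quick sketch (Python's sqlite3 standing in for the database; the table contents are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, function TEXT);
    CREATE TABLE systemparameter (startworktime TEXT);
    INSERT INTO employees VALUES ('Ann', 'clerk'), ('Bob', 'admin');
    INSERT INTO systemparameter VALUES ('08:00');
""")

# Inline (scalar) subquery: startworktime is attached to each output row.
inline = conn.execute("""
    SELECT name, function,
           (SELECT startworktime FROM systemparameter) AS startworktime
    FROM employees
    ORDER BY name
""").fetchall()

# Cartesian product / cross join: every employee row is paired with the
# single systemparameter row, so the result is identical here.
crossed = conn.execute("""
    SELECT name, function, startworktime
    FROM employees
    CROSS JOIN systemparameter
    ORDER BY name
""").fetchall()

print(inline == crossed)  # True
```

With a guaranteed one-row parameter table the two forms return the same rows; any real cost difference would only show up in the execution plan.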

    Kind regards

  • Dynamic Query(execute immediate)

    Greetings,

    VERSION:-10 g

    The table's column names are month names. I store the current month name in a variable, and I have to pass it as the column name at runtime.

    TABLE DEALER_MAST

    DEALER_DIM_ID NUMBER
    APR NUMBER
    MAY NUMBER
    JUN NUMBER
    JUL NUMBER
    AUG NUMBER
    MS NUMBER


    I now have the code example below in my procedure


    v_dealer VARCHAR2 (3);
    xyz varchar2 (2000);

    SELECT TO_CHAR (SYSDATE, 'MON') INTO v_dealer FROM dual;


    DECLARE CURSOR a1 IS SELECT DEALER_ID FROM DEALER_MAST;

    BEGIN

    FOR j IN a1
    LOOP

    SELECT COUNT (*) INTO DEALER_COMM FROM subs_fact
    WHERE TO_CHAR (ACTIVATION_DATE, 'Mon-yy') = (select to_char (add_months(sysdate,-2), 'Mon-yy') FROM dual)
    -- AND TAB_ELEG = 1
    and DEALER_ID = j.DEALER_ID;


    -- Dynamically passing the column name

    xyz := 'SELECT ' || v_dealer || ' INTO DEALER_MAST FROM DEALER_MAST WHERE DEALER_DIM_ID = ' || j.DEALER_DIM_ID;

    EXECUTE IMMEDIATE (xyz);

    /*

    AFTER substitution the executed query should look like:
    SELECT SEP INTO DEALER_MAST FROM DEALER_MAST WHERE DEALER_DIM_ID = 24345

    But it does not store the data in the variable & gives an error like "keyword missing" on EXECUTE IMMEDIATE (xyz);

    */

    IF (DEALER_MAST > 2) THEN
    ---
    --
    END IF;

    ERROR: it does not store the data in the variable & gives an error like "keyword missing" on EXECUTE IMMEDIATE (xyz);

    Thanks in advance

    Maybe:

    l_var := j.DEALER_DIM_ID;

    xyz := 'SELECT ' || TO_CHAR (sysdate, 'MON') || ' FROM DEALER_MAST WHERE DEALER_DIM_ID = :l_var';

    EXECUTE IMMEDIATE xyz
       INTO DEALER_MAST
       USING l_var;


    Regards

    Etbin
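Etbin's pattern - concatenate the identifier into the SQL text, bind the value - carries over to any SQL host language. A minimal sketch in Python with sqlite3 (table name, column, and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dealer_mast (dealer_dim_id INTEGER, sep INTEGER)")
conn.execute("INSERT INTO dealer_mast VALUES (24345, 7)")

month_col = "sep"          # column name chosen at runtime (like TO_CHAR(SYSDATE,'MON'))
dealer_dim_id = 24345      # bound as a parameter, like the USING l_var clause

# Bind variables can only stand in for values, never for column names,
# so the identifier must be concatenated while the value stays bound.
sql = "SELECT " + month_col + " FROM dealer_mast WHERE dealer_dim_id = ?"
(value,) = conn.execute(sql, (dealer_dim_id,)).fetchone()
print(value)  # 7
```

The same split applies in PL/SQL: the INTO and USING clauses live on the EXECUTE IMMEDIATE statement itself, outside the dynamic string.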

  • SQL query execute count in tkprof

    Hi all

    I have a query that is slow.
    SELECT MAX(ID) FROM ID_TAB WHERE R_ID = :B1 
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        0      0.00       0.00          0          0          0           0
    Execute 649574    117.93     127.29          0          0          0           0
    Fetch   649574     20.40      20.85          0    1948722          0      649574
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total   1299148    138.33     148.14          0    1948722          0      649574
    
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 65     (recursive depth: 2)
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      cursor: pin S wait on X                       133        0.03          2.13
      cursor: pin S                                1179        0.00          0.01
      
    I know that this may be a bug or something else, but I have another question beyond that:

    Why is the execute count so high? I thought the execute count increases when you run DML. If that is not the case, what are the other scenarios where it can increase?

    Nico wrote:
    Thanks for the reply. One more thing:

    The clue is here:

    recursive depth: 2
    

    * SQL executed directly by a user runs at recursive depth: 0
    * SQL in a PL/SQL procedure called by SQL run directly by a user runs at recursive depth: 1
    * SQL executed within a trigger that fires automatically in response to SQL run directly by a user runs at recursive depth: 1
    * SQL executed directly by a user who calls a PL/SQL procedure that then causes a trigger to fire is one way to get recursive depth: 2
    * SQL executed directly by a user who calls a PL/SQL procedure that then calls a second PL/SQL procedure is another way to get recursive depth: 2

    Possible cause: suppose you have a trigger that fires on each row change. A user session wants to insert a large number of rows into a table, so it calls a PL/SQL procedure to handle the task. The PL/SQL procedure runs insert statements, and after each insert statement a trigger fires to obtain the next sequential number for a column in the table (or a separate table) by running "SELECT MAX(ID) FROM ID_TAB WHERE R_ID = :B1" against the database.
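Charles's scenario is easy to reproduce in miniature. In this sketch (Python's sqlite3, invented toy schema) a per-row trigger makes the recursive MAX(ID) lookup execute once per inserted row - exactly the pattern that inflates the Execute count in tkprof:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE id_tab (r_id INTEGER, id INTEGER);
    CREATE TABLE audit_log (n INTEGER);
    -- Fires once per inserted row, so the MAX(id) query inside it is
    -- executed once per row as well (recursive SQL from the trigger).
    CREATE TRIGGER trg AFTER INSERT ON id_tab
    BEGIN
        INSERT INTO audit_log
        SELECT COALESCE(MAX(id), 0) FROM id_tab WHERE r_id = NEW.r_id;
    END;
""")

for i in range(1000):
    conn.execute("INSERT INTO id_tab VALUES (1, ?)", (i,))

# One trigger firing (one recursive MAX query) per inserted row.
print(conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0])  # 1000
```

Scale the loop to the 649,574 inserts of the original workload and the tkprof numbers follow directly.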

    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.WordPress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to optimize the select query executed in a cursor for loop?

    Hi friends,

    I run the code below and clocked at the same time for each line of code using DBMS_PROFILER.
    CREATE OR REPLACE PROCEDURE TEST
    AS
       p_file_id              NUMBER                                   := 151;
       v_shipper_ind          ah_item.shipper_ind%TYPE;
       v_sales_reserve_ind    ah_item.special_sales_reserve_ind%TYPE;
       v_location_indicator   ah_item.exe_location_ind%TYPE;
    
       CURSOR activity_c
       IS
          SELECT *
            FROM ah_activity_internal
           WHERE status_id = 30
             AND file_id = p_file_id;
    BEGIN
       DBMS_PROFILER.start_profiler ('TEST');
    
       FOR rec IN activity_c
       LOOP
          SELECT DISTINCT shipper_ind, special_sales_reserve_ind, exe_location_ind
                     INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
                     FROM ah_item --464000 rows in this table
                    WHERE item_id_edw IN (
                             SELECT item_id_edw
                               FROM ah_item_xref --700000 rows in this table
                              WHERE item_code_cust = rec.item_code_cust
                                AND facility_num IN (
                                       SELECT facility_code
                                         FROM ah_chain_div_facility --17 rows in this table
                                        WHERE chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
                                          AND div_id = (SELECT div_id
                                                          FROM ah_div --8 rows in this table 
                                                         WHERE division = rec.division)));
       END LOOP;
    
       DBMS_PROFILER.stop_profiler;
    EXCEPTION
       WHEN NO_DATA_FOUND
       THEN
          NULL;
       WHEN TOO_MANY_ROWS
       THEN
          NULL;
    END TEST;
    The SELECT query inside the cursor FOR loop took 773 seconds.
    I tried to use BULK COLLECT instead of a cursor FOR loop, but it did not help.
    When I took the select query out separately and executed it with a sample value, it returned results in a fraction of a second.

    All tables have primary key index.
    Any ideas what can be done to make this code more efficient?

    Thank you
    Raj.
    DECLARE
      v_chain_id ah_chain_div_facility.chain_id%TYPE := ah_internal_data_pkg.get_chain_id (p_file_id);
    
      CURSOR cur_loop IS
      SELECT * -- better off explicitly specifying columns
      FROM ah_activity_internal aai,
      (SELECT DISTINCT aix.item_code_cust, ad.division, ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
         FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
        WHERE ai.item_id_edw = aix.item_id_edw
          AND aix.facility_num = acdf.facility_code
          AND acdf.chain_id = v_chain_id
          AND acdf.div_id = ad.div_id) d
      WHERE aai.status_id = 30
        AND aai.file_id = p_file_id
        AND d.item_code_cust = aai.item_code_cust
        AND d.division = aai.division;         
    
    BEGIN
      FOR rec IN cur_loop LOOP
        ... DO your stuff ...
      END LOOP;
    END;  
    

    Edited by: Dave Hemming on December 4, 2008 09:17
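The gist of Dave's rewrite - replace a query-per-row loop with one join - can be sketched like this (Python's sqlite3, with an invented two-table toy schema in place of the real star):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ah_activity_internal (item_code TEXT);
    CREATE TABLE ah_item (item_code TEXT, shipper_ind TEXT);
    INSERT INTO ah_activity_internal VALUES ('A'), ('B'), ('C');
    INSERT INTO ah_item VALUES ('A', 'Y'), ('B', 'N'), ('C', 'Y');
""")

# Row-by-row, like the SELECT inside the cursor FOR loop: one query per row.
slow = [conn.execute("SELECT shipper_ind FROM ah_item WHERE item_code = ?",
                     (r[0],)).fetchone()[0]
        for r in conn.execute(
            "SELECT item_code FROM ah_activity_internal ORDER BY item_code")]

# Set-based, like the rewritten cursor: one join, one pass over the data.
fast = [row[1] for row in conn.execute("""
    SELECT a.item_code, i.shipper_ind
    FROM ah_activity_internal a
    JOIN ah_item i ON i.item_code = a.item_code
    ORDER BY a.item_code
""")]

print(slow == fast)  # True
```

Same answers either way, but the join does one pass instead of one lookup per driving row, which is where the 773 seconds went.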

  • Logical derived column in the BMM - bounced visits

    In Web metrics, a "bounced visit" is when a session views one page on a Web site and then leaves. In my fact table, I capture the "Total Pages" viewed for each session, and I define this metric in my Business Model with a "sum" aggregation rule. I am trying to use this measure to calculate the "bounced visit" metric, but I am running into issues.

    Session ID      Total Pages
    1179860475      5
    1179861625      1   <= this is a bounced visit
    1179861920      7
    1179866260      2
    1179868693      13

    If I define "bounced visits" as

    CASE WHEN "Total Pages" = 1 THEN 1 ELSE 0 END

    what I see in the session logs is:

    CASE WHEN sum("Total Pages") = 1 THEN 1 ELSE 0 END

    The aggregation of "Total Pages" is done first, and then the derived metric is calculated. This leads to incorrect results. Is there any way to solve this in the business model? I know that I can go back to the ETL, calculate a "bounced visit" metric, store it in the fact, create aggregates, etc. I was looking for a short-term solution.
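The ordering problem described above is easy to demonstrate: applying the CASE before aggregation counts bounced sessions correctly, while applying it after the SUM (as the generated SQL does) tests the grand total instead. A sketch with Python's sqlite3, using the sample sessions from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE web_sessions (session_id INTEGER, total_pages INTEGER)")
conn.executemany("INSERT INTO web_sessions VALUES (?, ?)",
                 [(1179860475, 5), (1179861625, 1), (1179861920, 7),
                  (1179866260, 2), (1179868693, 13)])

# Correct: test each row BEFORE aggregating, then sum the 0/1 flags.
ok = conn.execute("""
    SELECT SUM(CASE WHEN total_pages = 1 THEN 1 ELSE 0 END)
    FROM web_sessions
""").fetchone()[0]

# What the generated SQL does: aggregate FIRST, then apply the CASE
# to the grand total (28), which is never equal to 1.
wrong = conn.execute("""
    SELECT CASE WHEN SUM(total_pages) = 1 THEN 1 ELSE 0 END
    FROM web_sessions
""").fetchone()[0]

print(ok, wrong)  # 1 0
```

That is why the metric has to be defined against the row-level (physical) column, as the reply below suggests, rather than on top of the already-aggregated logical column.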

    I tried other things:

    (1) make a copy of the "Total Pages" column and turn off aggregation; call it "Total Pages - no aggregation".

    This leads to queries of the form:

    select distinct T22583.TOTAL_PAGES as c1
    from WEB_SESSIONS_A1 T22583
    order by c1


    (2) create a logical column based on "Total Pages - no aggregation":

    bounced visit = CASE WHEN EnterpriseWarehouse."Web Sessions"."Total Pages - no aggregation" = 1 THEN 1 ELSE 0 END

    This leads to [nQSError: 14020] none of the fact tables are compatible with the query request Web Sessions.bounced visit.

    Sorry if I wasn't clear... you don't need a new select statement for this... all you need to do is define the column using the physical layer (not an existing logical column)... To do this, create a new column in the BMM... then go to the mapping tab, click the button, and enter the formula using the physical column... Let me know if that is clear...

  • Dilemma when-validate-Item and Go_Block/Execute Query

    Hey people out there - probably not a surprising question on this forum. I tried to search for it and came up with a few posts but did not find solutions. Hopefully I also searched this forum the right way. If this is the umpteenth post on this question, I apologize.

    So, I have 2 blocks:
    1. a control block that contains a STUDENT_ID field (database item = 'N')
    2. a database block that contains a STUDENT_ID field (database item = 'Y', and many other fields)

    My goal is to do an Execute_Query when the user enters the student ID and then presses TAB or places the mouse in the database block, and to display the record if it exists. As we know, a Go_Block is not allowed in When-Validate-Item, so I went a little out of my way to "simulate" this. The problem is that Execute_Query always fires, even when the record has already been retrieved. Here's what I have so far. The KEY-NEXT-ITEM trigger works fine if the user types the student ID and presses TAB. It also works well if the user clicks on STUDENT_BK, since the WHEN-NEW-BLOCK-INSTANCE trigger calls the KEY-NEXT-ITEM trigger. The problem occurs when, after the record has been retrieved, the user clicks on CONTROL.STUDENT_ID and then clicks on STUDENT_BK: the KEY-NEXT-ITEM trigger fires again.

    Suggestions or pointers would be more than welcome.

    Thank you!

    CONTROL.STUDENT_ID KEY-NEXT-ITEM trigger:
      if :control.student_id is not null and :SYSTEM.BLOCK_STATUS != 'CHANGED' then
        next_block;
        clear_block(NO_COMMIT);
        execute_query;
      else
        show_alert('Please enter Student ID');
      end if;
    STUDENT_BK WHEN-NEW-BLOCK-INSTANCE trigger:
    go_item('control.student_id');
    do_key('next_item');

    I think you should remove the "Copy Value from Item" property and add a WHERE clause to your STUDENT_BK block: student_id = :CONTROL.STUDENT_ID.

    Then when you do

    if STUDENT_BK.STUDENT_ID != :CONTROL.STUDENT_ID then
    

    That logic should address your requirement to query the block when the user enters the STUDENT_BK database block. If the IDs match, the record has already been queried; if they don't, your application logic fires and, with the WHERE clause added to your STUDENT_BK block, it will land on the record you want.

  • OBIEE query adding additional tables to the FROM clause

    Hello

    I have a logical table with multiple LTSs. When I try to query just one column from this logical table, OBIEE adds a few additional tables to the FROM clause of the query. Could you please let me know what could be the reason behind this issue?

    Below are the query I expect and the query OBIEE generates.

    I have a logical table - T_SO_ACCRUAL_PRODUCTS - with multiple logical table sources.

    The LTS tables are: INV_PRODUCT_GROUP1, Inv_Accrual_Groups, Inv_Grade_Code, Inv_Item_Classes, Inv_Item_Codes, PS_DAILY_STANDING_DETAILS, T_So_Customer_Request, PS_DAILY_STANDING_DETAILS_FREZ, T_So_Request_Details, T_SO_ADJUSTED_QTY, T_SO_ACCRUAL_MASTER, T_SO_ACCRUAL_PRODUCTS

    Now I have an ITEM_NUMBER column in the Inv_Item_Codes table (which I added to the T_SO_ACCRUAL_PRODUCTS LTS).

    In the BI analysis I select only ITEM_NUMBER, so the query should just be:

    select distinct 0 as c1,
         D1.c1 as c2
    from
         (select distinct T2010613.ITEM_NUMBER as c1
          from SALES_E0.INV_ITEM_CODES T2010613) D1

    But here is the query I get in the nqquery.log file:

    select distinct 0 as c1,
         D1.c1 as c2
    from
         (select distinct T2010613.ITEM_NUMBER as c1
          from SALES_E0.INV_ITEM_CODES T2010613,
               SALES_E0.PS_DAILY_STANDING_DETAILS T2010846,
               SALES_E0.PS_DAILY_STANDING_DETAILS_FREZ T2010861,
               T_SO_CUSTOMER_REQUEST T2011274,
               T_SO_REQUEST_DETAILS T2011561,
               SALES_E0.T_SO_REQUEST_TRANSACTIONS T2011657
          where (T2010613.ITEM_NUMBER = T2010846.ITEM_NUMBER and T2010613.ITEM_NUMBER = T2010861.ITEM_NUMBER and T2010613.ITEM_NUMBER = T2011561.ITEM_NUMBER and T2010613.ITEM_NUMBER = T2011657.ITEM_NUMBER and T2010846.CUSTOMER_NUMBER = T2011274.CUSTOMER_NUMBER and T2010861.CUSTOMER_NUMBER = T2011274.CUSTOMER_NUMBER and T2011274.REQUEST_NUMBER = T2011561.REQUEST_NUMBER and T2011561.ITEM_NUMBER = T2011657.ITEM_NUMBER and T2011561.LINE_NUMBER = T2011657.LINE_NUMBER and T2011561.REQUEST_NUMBER = T2011657.REQUEST_NUMBER)
         ) D1

    I don't understand why it is adding these additional tables when I select only one column from the INV_ITEM_CODES table.

    Does anyone have an idea why it adds these additional tables to the query?

    Thank you in advance for your help!

    Gerard

    [nQSError: 43119] Query Failed: [nQSError: 14025] no fact table exists at the requested level of detail

    Possible reasons:

    (1) the content level is not defined in the BMM layer of the RPD for one of the measures or dimensions

    Logical Table -> Content -> Logical Level tab -> Fact1/Fact2/Dim1: Source

    (2) Admin Tool WARNING [39008]: a logical dimension table ... has a source ... that does not join to any fact source

    -> check the business model diagram (and, along the way, the physical model diagram) for: Fact1, Fact2, Dim1

    (3) missing aggregation method on the fact measure (the "Aggregation" tab of the measure column)

    -> set an aggregation (perhaps SUM) for the measures

    It looks like you have a lot of tables but have added them as logical table sources when they may not logically belong there... If you want, you can send me your RPD and I can look at what you are doing.

  • Filtering of query output

    Hi, I use Forms Developer 10g and I recently learned about the WITH ... AS clause. I have an output that gives the details of a particular query; here is my query:

    WITH months AS (
        SELECT ADD_MONTHS(:V_DATE_FROM, LEVEL-1) m_first,
               (ADD_MONTHS(:V_DATE_FROM, LEVEL-0) - 1) m_last
        FROM dual
        CONNECT BY LEVEL < MONTHS_BETWEEN(:V_DATE_TO, :V_DATE_FROM) + 1
    ),
    load_per_month AS (
        SELECT conn_load,
               SUM(conn_load) max_conn_load,
               m_last,
               circuit_code, substation_code, classification
        FROM (SELECT substr(c1.circuit_code, 1, 5) circuit_code,
                     max(c1.connected_load) conn_load,
                     c1.classification, m_last, c1.substation_code
              FROM months CROSS JOIN spm_circuits c1
              WHERE c1.date_commission <= m_last
                AND (c1.date_decommission > m_first
                     OR date_decommission IS NULL)
                AND SUBSTR(c1.substation_code, 6, 4) IN ('N002')
              GROUP BY substr(c1.circuit_code, 1, 5),
                       c1.classification, m_last, substation_code)
        GROUP BY conn_load, m_last, circuit_code, substation_code, classification
    )
    SELECT NVL(SUM(NVL(conn_load, 0)), 0) connected_load, m_last,
           circuit_code, substation_code, classification
    FROM load_per_month
    WHERE conn_load = max_conn_load
    GROUP BY m_last, circuit_code, substation_code, classification
    ORDER BY m_last, circuit_code;

    The above query gives the connected load, substation_code and classification values for each circuit_code for the year, grouped by month.

    And in the above query, if I change the final SELECT to this:

    SELECT MAX (connected_load)
    FROM
    (
        SELECT NVL(SUM(NVL(conn_load, 0)), 0) connected_load,
               m_last
        FROM load_per_month
        WHERE conn_load = max_conn_load
        GROUP BY m_last
        ORDER BY m_last
    )

    I get an output value of 279, which corresponds to the total load for the month of November (the highest monthly total for the year). But I want to show the specific columns (connected_load, m_last, circuit_code, substation_code, classification) of that particular month. How can I do this? Thank you very much.

    Thank you guys! I at last created the query, by trial and error. Phew!
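For the record, one common way to get the full rows of the month with the maximum total is to compute the monthly totals once and then filter on their maximum. A sketch with Python's sqlite3 and invented sample data (the table here stands in for the load_per_month results):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE monthly_load
                (m_last TEXT, circuit_code TEXT, connected_load INTEGER)""")
conn.executemany("INSERT INTO monthly_load VALUES (?, ?, ?)",
                 [('2013-10-31', 'C1', 150),
                  ('2013-11-30', 'C1', 279),
                  ('2013-12-31', 'C1', 120)])

# Compute per-month totals once, then keep every column of the rows
# belonging to the month whose total equals the overall maximum.
rows = conn.execute("""
    WITH totals AS (
        SELECT m_last, SUM(connected_load) AS month_total
        FROM monthly_load
        GROUP BY m_last
    )
    SELECT ml.*
    FROM monthly_load ml
    JOIN totals t ON t.m_last = ml.m_last
    WHERE t.month_total = (SELECT MAX(month_total) FROM totals)
""").fetchall()

print(rows)  # [('2013-11-30', 'C1', 279)]
```

The same shape works in Oracle as an extra CTE on the end of the existing WITH chain, so the MAX and the column list no longer fight each other.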

  • OBIEE 11g - Connection Pool > "Execute before query" procedure error

    Hello

    I am trying to run a stored procedure before running a query. So, in Connection Pool > Connection Scripts > Execute before query, I have the following code:
    BEGIN PRC_RUN_MV_JOBS();  
    END;
    The PRC_RUN_MV_JOBS procedure calls DBMS_JOB.RUN (JOB NUMBER). It works fine when I run it using Toad or SQL Developer.

    And when I run an analysis, I get the following error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119]
     Query Failed: [nQSError: 17001] Oracle Error code: 6550, message: ORA-06550: line 1, column 27: PLS-00103: Encountered the symbol "" when expecting one of the following: 
    begin case declare end exception exit for goto if loop mod null pragma raise return select update while with <an identifier> <a double-quoted delimited-identifier> <a bind variable> 
    << close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe The symbol "" was ignored. at OCI call OCIStmtExecute: BEGIN 
    PRC_RUN_MV_JOBS(); END;. [nQSError: 17011] SQL statement execution failed. (HY000)
    Can someone please tell me how to fix this? Thanks in advance.

    Try this:

    BEGIN PRC_RUN_MV_JOBS;  END;
    

    Best regards
    Kalyan Chukkapalli
    http://123obi.com

  • Issue with mapping a SQL Server estimated plan to a currently executing query

    Is it possible to map an estimated plan (its parts) to a running query? For example, I have a simple query that has already been running for 5 minutes, and its estimated plan has 3 main parts: a clustered index seek on one of the join tables, a table scan on the second join table, and a nested loops join operation that takes the two inputs and outputs matching rows. Can I somehow find out which exact part of the plan SQL Server is executing at this minute - for example, that it is reading data pages #n - #m with the clustered index and/or matching key rows #x - #y?

    Hello

    I suggest you ask the question in this forum and check if that helps:
    http://social.msdn.Microsoft.com/forums/en-us/TransactSQL

    Hope that helps.

  • Why I have two different execution plans for the same query on two different servers

    Hello everyone.

    I need your help to solve the problem quickly.

    In a nutshell, we have two servers with the same version of Oracle RDBMS (11.2.0.4 EE). One of them is for development purposes and the other is the production one.

    We get two different execution plans for the same query executed on the two servers. One execution is OK and the other is too slow.

    So I have to make the slow query run using the same plan as the good one.

    Closing the thread.

  • Partitioning strategy for OBIEE query performance

    I am using partitioning for the first time, and I am having trouble determining whether I have partitioned my fact table in a way that will let partition pruning work with the queries OBIEE generates.  I've set up a simple example using a query I wrote to illustrate my problem.  In this example, I have a star schema with a fact table, and I join in two dimensions.  My fact table is LIST-partitioned on JOB_ID with RANGE subpartitions on TIME_ID, and those are the keys that link to the two dimensions I use in this query.


    select sum(boxbase)
    from TEST_RESPONSE_COE_JOB_QTR a
    join DIM_STUDY c on a.job_id = c.job_id
    join DIM_TIME b on a.response_time_id = b.time_id
    where c.job_name = 'FY14 CSAT'
    and b.fiscal_quarter_name = 'Quarter 1';


    As far as I can tell, because the query filters on columns in the dimensions instead of the columns in the fact table, partition pruning isn't actually happening.  I actually see slightly better performance from a non-partitioned table, even though I built this table specifically for the partitioning strategy that is now in place.


    If I run the next statement, it runs a lot faster, and the explain plan is very simple; it looks to me like it prunes down to a subpartition as I hoped.  But this isn't how any OBIEE-generated query will look.


    select sum(boxbase)
    from TEST_RESPONSE_COE_JOB_QTR
    where job_id = 101123480
    and response_time_id < 20000000;


    Any suggestions?  I get some benefit from partition exchange with this configuration, but if I'm going to sacrifice reporting performance then maybe it isn't worthwhile; at the very least, I would need to get rid of my subpartitions if they are not providing any benefit.


    Here are the explain plans I got for the two queries in my original post:

    Operation                                   Object Name                                    Rows    Bytes  Cost   PStart          PStop
    ------------------------------------------  ---------------------------------------------  ------  -----  -----  --------------  --------------
    SELECT STATEMENT Optimizer Mode=ALL_ROWS                                                   1              20960
      SORT AGGREGATE                                                                           1       13
        VIEW                                    SYS.VW_ST_5BC3A99F                             101 K   1 M    20960
          NESTED LOOPS                                                                         101 K   3 M    20950
            PARTITION LIST SUBQUERY                                                            101 K   2 M    1281   KEY(SUBQUERY)   KEY(SUBQUERY)
              PARTITION RANGE SUBQUERY                                                         101 K   2 M    1281   KEY(SUBQUERY)   KEY(SUBQUERY)
                BITMAP CONVERSION TO ROWIDS                                                    101 K   2 M    1281
                  BITMAP AND
                    BITMAP MERGE
                      BITMAP KEY ITERATION
                        BUFFER SORT
                          INDEX SKIP SCAN       CISCO_SYSTEMS.DIM_STUDY_UK                     1       17     1
                        BITMAP INDEX RANGE SCAN CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12                          KEY             KEY
                    BITMAP MERGE
                      BITMAP KEY ITERATION
                        BUFFER SORT
                          VIEW                  CISCO_SYSTEMS.index$_join$_052                 546     8 K    9
                            HASH JOIN
                              INDEX RANGE SCAN  CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX             546     8 K    2
                              INDEX FULL SCAN   CISCO_SYSTEMS.TIME_ID_PK                       546     8 K    8
                        BITMAP INDEX RANGE SCAN CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11                        KEY             KEY
            TABLE ACCESS BY USER ROWID          CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR        1       15     19679  ROWID           ROW L


    Operation                                   Object Name                                    Rows    Bytes  Cost   PStart  PStop
    ------------------------------------------  ---------------------------------------------  ------  -----  -----  ------  -----
    SELECT STATEMENT Optimizer Mode=ALL_ROWS                                                   1              1641
      SORT AGGREGATE                                                                           1       13
        PARTITION LIST SINGLE                                                                  198 K   2 M    1641   KEY     KEY
          PARTITION RANGE SINGLE                                                               198 K   2 M    1641   1       1
            TABLE ACCESS FULL                   CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR        198 K   2 M    1641   36      36


    Does it seem unreasonable to think that relying on our indexes on a non-partitioned table (or one partitioned only to help ETL) can actually work better than partitioning in a way where we get some dynamic pruning, but never static pruning?

    Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and query predicates that are typically used and the number of rows generally returned.

    Partition pruning eliminates partitions ENTIRELY - regardless of the number of rows in the partition or table. An index, on the other hand, gets ignored if the query predicate needs a significant number of rows, since Oracle can determine that it is cheaper simply to use multiblock reads and do a full scan.

    A table with 1 million rows and a query predicate that wants 100 K of them probably will not use an index at all. But the same table with two partitions could easily have one of the partitions pruned, cutting the "effective row count" to only 500 K or less.

    If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.

    select sum(boxbase)
    from TEST_RESPONSE_COE_JOB_QTR a
    join DIM_STUDY c on a.job_id = c.job_id
    join DIM_TIME b on a.response_time_id = b.time_id
    where c.job_name = 'FY14 CSAT'
    and b.fiscal_quarter_name = 'Quarter 1';

    So, what is a typical value for 'A.response_time_id'? What does a 'B.time_id' represent?

    Because one way of providing explicit partition keys could be to use a range of 'response_time_id' from the FACT table rather than a value of 'fiscal_quarter_name' from the DIMENSION table.

    As if 'Quarter 1' could correspond to a date range from '01/01/yyyy' to '03/31/yyyy'.

    Also, you said you are partitioning on JOB_ID and TIME_ID.

    But if your queries relate mainly to DATES / TIMES, you might be better off using TIME_ID for the PARTITIONS and JOB_ID, if necessary, for the subpartitioning.

    Date range partitioning is one of the most common around, and it serves both performance and ease of maintenance (deleting/archiving old data).

  • Two criteria in mind: one for af:query, second with bind var, both use same result table

    JDeveloper 12.1.2

    I have two view criteria, VC1 and VC2, both based on the same VO. For example, let's say I have this VO:

    SELECT Employees.EMPLOYEE, Employees.EMPLOYEE_ID, Employees.FIRST_NAME, Employees.LAST_NAME, Employees.DEPARTMENT_ID
    FROM EMPLOYEES Employees;

    Let this be VC1, used for af:query:

    SELECT * FROM (SELECT Employees.EMPLOYEE, Employees.EMPLOYEE_ID, Employees.FIRST_NAME, Employees.LAST_NAME, Employees.DEPARTMENT_ID
    FROM EMPLOYEES Employees) QRSLT WHERE 1 = 2


    And let's say I have VC2, with a bind variable:

    SELECT * FROM (SELECT Employees.EMPLOYEE, Employees.EMPLOYEE_ID, Employees.FIRST_NAME, Employees.LAST_NAME, Employees.DEPARTMENT_ID
    FROM EMPLOYEES Employees) QRSLT WHERE 1 = 2 AND (UPPER(QRSLT.DEPARTMENT_ID) = UPPER(:DeptIdBind))

    The user runs the af:query search, and the resulting table lists some employees. I made the last column of the results table a link:

    <af:column headerText="Department" id="c1" width="140">
      <af:link shortDesc="#{bindings.myVO.hints.DepartmentId.tooltip}" id="ot4"
               text="#{row.DepartmentId}"
               action="#{backingBeanScope.MyBean.byDept}" partialTriggers=":t1"/>
    </af:column>

    If the user now clicks on the link, I would expect the following:

    1. the values entered in the af:query stay

    2. the table refreshes, since the underlying VC2 is applied to the VO and executed. Here is the VOImpl and backing-bean code.

    The bean of support code

    public String byDept() {
        DCBindingContainer bindings =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding dcIteratorBindings =
            bindings.findIteratorBinding("EmployeesIterator");
        Row r = dcIteratorBindings.getCurrentRow();
        Object keyvar = r.getAttribute("DepartmentId");
        OperationBinding searchExp = ADFUtils.findOperation("searchByDept");
        searchExp.getParamsMap().put("deptId", keyvar);
        searchExp.execute();
        return null;
    }

    Impl VO

    public void searchByDepartment(String deptId) {
        setDeptIdBind(deptId);
        executeQuery();
    }

    Problem:

    The query (VC2) runs, but the table does not reflect it; that is, I don't see the list of employees of the selected department.

    Any idea?

    PPR = partial page rendering

    PPR refreshes a JSF component:

    AdfFacesContext.getCurrentInstance().addPartialTarget(uiComponent);

    where uiComponent is the component that you want to refresh.

    Have you checked which view criteria the executed query used?

    Timo
