global tables

Hello

As I understand it, there are two global arrays in a JCRE implementation: the byte array (bArray) passed to the install() method, and the APDU buffer.
Do you know what their sizes are? And because they are global, is it possible to store a reference in a class with code like this:

byte[] ref = bArray;

Does the JCRE raise an exception if such code exists in an applet? And how is the storage of references in class/instance variables prevented, as mentioned in the JCRE spec (section 6.2.2)?

Thank you

Hello

As mentioned in my first post, they are usually the same buffer. You can test it in your code with apdu.getBuffer().length.
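
A minimal sketch of the behaviour asked about above (the applet skeleton is illustrative, not from the original post): the JCRE treats the APDU buffer and the install() bArray as global arrays, and an attempt to store a reference to one in a class or instance field throws a SecurityException at runtime.

import javacard.framework.*;

public class DemoApplet extends Applet {
    private byte[] stash;                          // instance field

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new DemoApplet().register();
    }

    public void process(APDU apdu) {
        byte[] buf = apdu.getBuffer();             // a local reference is fine
        try {
            stash = buf;                           // storing it throws SecurityException
        } catch (SecurityException e) {
            ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
        }
    }
}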

See you soon,
Shane

Tags: Java

Similar Questions

  • apex_item and APEX global arrays

    Hello
    I use apex_item.checkbox and apex_item.text in a SQL report like this:
    SELECT
    id,
    apex_item.checkbox(1,id) " ", 
    apex_item.text(2,name) "name"
    FROM APEX_APPLICATION_FILES
    WHERE REGEXP_LIKE(name,'txt');
    and an after-submit process like this:
    DECLARE
    BEGIN
      -- Loop through the selected ids
      FOR i IN 1..APEX_APPLICATION.G_F01.COUNT
      LOOP
        IF APEX_APPLICATION.G_F01(i) IS NOT NULL THEN
          INSERT INTO INC9_TEST (t2) VALUES (APEX_APPLICATION.G_F02(i));
          wwv_flow.debug('MY PROCESS:' || APEX_APPLICATION.G_F02(i));
        END IF;
      END LOOP;
    END;
    I have two rows as sample data:
    Id  name
    1   abc
    2   def
    When I select the checkbox for Id 2, the global array apex_application.g_f01 keeps returning Id 1 instead of Id 2. But if I select both checkboxes, it correctly loops through Ids 1 and 2. Does anyone know why this is happening, and what the fix is for this strange behavior?

    Thank you

    OK - I explained this on the thread I linked to. You need to set the checkbox values to the row numbers. This can be done by using something like:

    APEX_ITEM.CHECKBOX (1, '#ROWNUM#')
    

    Now, if the user checks the boxes, the submitted values will be the row numbers.

    You can then use this to retrieve the value of NAME from the same row:

    DECLARE
     vROW NUMBER;
     vNAME VARCHAR2(100);
    BEGIN
     FOR i IN 1..APEX_APPLICATION.G_F01.COUNT
     LOOP
      vROW := APEX_APPLICATION.G_F01(i);
      vNAME := APEX_APPLICATION.G_F02(vROW);
      ... etc...
     END LOOP;
    END;
    

    So we first get the row number for each checked element, and then use it to get the NAME value from the corresponding row.

    Andy

  • How do I route out of a VRF to the global table?

    How do I build static routes (two-way) between the VRF and the global table?

    Cat 6509

    12.2 (33)

    Single VRF, Full BGP. EIGRP inside the VRF.

    I do not have a 6509, but on IOS you append the 'global' keyword to the VRF static route, and on the incoming interfaces I created a policy map to send traffic into the VRF.
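
    A minimal sketch of both directions (the prefixes, VRF name, and interface are illustrative; verify the syntax on your platform and IOS version):

    ! VRF -> global: the 'global' keyword resolves the next hop in the
    ! global routing table.
    ip route vrf CUSTOMER-A 192.0.2.0 255.255.255.0 10.0.0.1 global

    ! Global -> VRF: point the route at an interface that belongs to the
    ! VRF, together with the next hop reachable on that interface.
    ip route 198.51.100.0 255.255.255.0 GigabitEthernet0/1 10.1.1.2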

  • Are global arrays interconnected?

    Dear friends,
    I'm more than puzzled over the following.

    This is the global definition of the two arrays:

    var gasLocations      = [];                       // filled from template or current document  
    var gasLocationsPredef= [];                       // filled in SetUpArraysEtc () 
    

    They are populated in the SetUpArraysEtc() function:

    // locations for the list in the dialogue
    gasLocations1   = ['Above()', 'Below()', 'Left()', 'Right()']; // used in UnknownElement concatenated to gasFunctions
    var asLocations2    = ['[COL m]', '[ROW n]', '[CELL n, m]']; 
    gasLocationsPredef = gasLocations1.concat(asLocations2); // These must not be deleted from the list
    gasLocations    = gasLocationsPredef;
    

    Later, in a dialog function, the array gasLocations needs to be updated. I put a breakpoint right before the crucial moment:

    function ButtonAddLocation () { // ================================================================
    // Attention: this regex is different from that in ChecktermElements!
      var re_Def = /(\[ROW +\d+\]|\[COL +\d+\]|\[CELL +\d+, +\d+\]|Left *\(\d*\)|Right *\(\d*\)|Above *\(\d*\)|Below *\(\d*\))/;
      var item = wPalDS.p0.tabL.g1.p2.g1.sLocation.text;
      if (item.search(re_Def)) {
        alert (gsDSSyntaxErr);
        return;
      } 
    $.bp(true);
      UpdateListLocations (wPalDS.p0.tabL.g1.p1.listLocations, item);
      FillItemList (wPalDS.p0.tabL.g1.p1.listLocations,  gasLocations);
      PutRefPageItems ("Ref-Locations", "");          // update reference page
    } // --- end ButtonAddLocation
    
    function ButtonDelLocation () { // ================================================================
      var lstIndex = wPalDS.p0.tabL.g1.p1.listLocations.selection.valueOf();
      var locName = gasLocations[lstIndex];
      if (IsInArray (gasLocationsPredef, locName)) {
        alert ("Predefined items cannot be deleted");
        return;
      };
      DeleteItemLocations (wPalDS.p0.tabL.g1.p1.listLocations, locName)
      PutRefPageItems ("Ref-Locations", "");          // update reference page
    } // --- end ButtonDelLocation
    
    function UpdateListLocations (oList, item) { // ====================================================
    // update global array and list in dialogue;
      var locArray = gasLocations, j, nItems;
      locArray.push (item);
      gasLocations = locArray.sort ();                // update global array
      nItems = gasLocations.length;
      oList.removeAll ();                             // clear the list
      for (j = 0; j < nItems; j++) {
        oList.add ("item", gasLocations[j]);          // add item
      }
    } // --- end UpdateListLocations
    
    function DeleteItemLocations (oList, item) { // ====================================================
    // List is rebuilt from array. Function array.splice not available in ES
      var index = IsInArray (gasLocations, item);
      var lower = gasLocations.slice (0, index);      // lower part
      var upper = gasLocations.slice (index+1);       // upper part
      gasLocations    = lower.concat(upper);
      FillItemList (oList, gasLocations);
    } // --- end DeleteItemLocations
    
    function FillItemList (oList, aSource) { // =====================================================
      var j, nItems = aSource.length;
      oList.removeAll ();                             // clear the list
      for (j = 0; j < nItems; j++) {
        oList.add ("item", aSource[j]);               // add item
      }
    } // --- end FillItemList
    function IsInArray (array, what) { //==============================================================
    // indexOf is not supported in ExtendScript...
      var j;
      for (j = 0; j < array.length; j++) {
        if (array [j] == what ) { return j;}
      }
      return null;     // not found in array
    } // --- end IsInArray
    

    Now, guess what? After UpdateListLocations the two arrays have the same content! But gasLocationsPredef must not be updated, because it serves as a reference: list items corresponding to elements of this array must not be removed from the list. The ButtonDelLocation function therefore always finds the newly inserted item and refuses the deletion.
    I once read a warning about global variables - but I carefully avoid using them as output arguments to functions (only objects are passed "by reference").

    Who has an eagle eye for spotting my error?

    Hi Klaus,

    to copy the contents of an array (and ONLY the contents), you must use something like this:

    Array1 = Array2.slice ();

    With Array1 = Array2 you only create a second name for the same thing.
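
    A minimal illustration of the difference (variable names are made up):

    var a     = [1, 2, 3];
    var alias = a;                // the same array under a second name
    var copy  = a.slice();        // a new array with the same contents
    a.push(4);
    $.writeln(alias);             // 1,2,3,4 - follows the original
    $.writeln(copy);              // 1,2,3   - unaffected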

  • HTMLDB_APPLICATION.G_Fxx global arrays are not properly filled in IRR

    Hello

    Scenario 1:

    Unordered data

    I have an interactive report with a checkbox column and 4 editable fields. When I enter an editable field, a javascript on that field checks the checkbox in the first column. It works very well.

    Scenario 2:
    Sort on a column:

    I sort the data on the part number, and when I change one of the editable fields the box gets checked, which is correct. But when I save, the HTMLDB_APPLICATION.G_Fxx variables seem to be messed up when the IRR is sorted, and not all values are recorded.

    Has anybody faced a similar problem?

    Thank you
    Sukarna

    Hello

    Using ROWNUM is dangerous, because you can't point at a single ROWNUM in a query result. For example, you can do a SELECT x FROM y WHERE ROWNUM < 10, but you can't do a SELECT x FROM y WHERE ROWNUM > 10.

    This is because ROWNUM is assigned before the ORDER BY is applied, so "SELECT x FROM y WHERE ROWNUM < 10" gives a different result than "SELECT x FROM y WHERE ROWNUM < 10 ORDER BY z".

    Instead, use the ROWID (or a primary key column).
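
    A short illustration of the pitfall (table and column names are made up):

    -- Returns 9 arbitrary rows and only then sorts them:
    SELECT x FROM y WHERE ROWNUM < 10 ORDER BY x;

    -- Returns the 9 smallest values of x: sort first, then filter.
    SELECT x FROM (SELECT x FROM y ORDER BY x) WHERE ROWNUM < 10;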

  • Using a global temporary table in BI Publisher

    Hi all

    We have written a PL/SQL package that contains a function that fills data into a global temporary table. We call this package in the "beforeReport" trigger of our data model.

    The data query section of our data model contains a SELECT on this global temporary table. We are able to call the function, but no data comes back from the global temp table.

    Initially, we created the global table just to try it out. Do we need to create this global temporary table in the package itself?

    Please advise.

    Thank you!
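
    One common cause of this symptom (an assumption here, not something confirmed in the thread): if the temporary table was created with the default ON COMMIT DELETE ROWS, any commit between the "beforeReport" trigger and the data query empties it. A sketch of the alternative DDL (table and column names are illustrative):

    CREATE GLOBAL TEMPORARY TABLE my_report_gtt (
      id    NUMBER,
      label VARCHAR2(100)
    ) ON COMMIT PRESERVE ROWS;  -- rows survive commits within the session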

    Just use this:

    [code sample missing from the original post]
    
  • Routing global IP addresses over a VRF-based IPSec tunnel

    Hi all

    I would be extremely grateful if you have any tips on the following points.

    I have already set up an IPSec tunnel that had to be routed in a VRF because of how the setup goes out to the ISP. However, the traffic I need to move over this tunnel includes some IP addresses that are in the global routing table, so I can't put them in a VRF.

    I am new to VPNs and would be grateful if anyone can recommend a document I can refer to for this - or give an overview of how this can be addressed.

    Any help will be much appreciated.

    Looking forward to your response.

    Thank you very much

    One thing that comes to mind is to create a VTI tunnel between the peers for the non-VRF transit traffic. The tunnel interface itself will then be part of the default routing table.

    interface Tunnel200
    ip address x.x.x.x
    tunnel source fa0/1.100
    tunnel destination 10.1.1.2
    tunnel mode ipsec ipv4
    tunnel vrf inet
    tunnel protection ipsec profile PROFILENAME
    There's a command that could be added to put the tunnel interface into a specific VRF:
    ip vrf forwarding some-vrf
    There is no command to explicitly put the tunnel interface into the global table, but without this line the interface will probably belong to the default RT.

    I never did this myself, so I can't be 100% sure it'll work.

  • Independent LOV in each row of the table

    Hello

    I use JDev 12c. My requirement is: I want to add a LOV to a table in every row, and I want to get its value in my Java bean for the selected row. A snapshot is attached for clarity.

    acceptReject.PNG

    Help, please

    Kind regards

    Crusher

    Add a transient attribute to the entity object that holds the value selected from the LOV.

    Create a static view object to contain the possible values to select for the LOV. Then build a model-driven LOV on the transient attribute using the static VO.

    An example of this approach is presented here https://tompeez.wordpress.com/2013/04/05/using-one-viewobject-for-global-lookup-data/

    It uses a global lookup table to get the data to search instead of a static VO.

    Timo

  • Can I get the total number of records that meet the conditions of a query using the Table API?

    Hello

    A TableIterator<Row> is returned when I query using table indexes. If I want the total number of records, I count them one by one using the returned TableIterator<Row>.


    Can I directly get the total number of records that meet the query conditions?

    From the CLI I can directly get the total number of records meeting the query conditions, using the command: aggregate table -name tableName -count -index indexName -field fieldName -start startValue -end endValue.

    Can I get the same results using the Table API?

    I have used MongoDB and Oracle NoSQL for about a year. From my experience with these databases, I think MongoDB's query interface is powerful. In contrast, Oracle NoSQL's query interface is relatively simple, which results in a lot of work on the client side that usually takes a long time.

    Hello

    Counting records in a database is a tricky thing. Any system that gives you an accurate count of the records will have a concurrency hotspot on updates, namely the place where the count is maintained. Such a count is a performance problem in addition to a concurrent-access problem. The problem is even harder in a widely distributed system such as a NoSQL database.

    The CLI has an aggregate command that counts, but it does so by brute force - iterating over the keys that match the parameters of the operation. This is how you would do it with the API as well. There is not a lot of code, but you do have to write it. You definitely want to use the TableIterator returned by TableAPI.tableKeysIterator(), because key iteration is significantly faster than row iteration. Just iterate and count.

    If you use TableAPI.multiGet() with a key that contains a complete shard key, then you can in fact count the results, as they are returned in a single chunk (a list).
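
    A minimal sketch of the key-iteration approach described above (the class, store handle, and table name are illustrative):

    // Count matching records by iterating primary keys only; key iteration
    // avoids fetching full rows and is therefore significantly faster.
    import oracle.kv.KVStore;
    import oracle.kv.table.PrimaryKey;
    import oracle.kv.table.Table;
    import oracle.kv.table.TableAPI;
    import oracle.kv.table.TableIterator;

    public class KeyCounter {
        public static long countRows(KVStore store, String tableName) {
            TableAPI api = store.getTableAPI();
            Table table = api.getTable(tableName);
            PrimaryKey key = table.createPrimaryKey(); // empty key: scan all rows
            long count = 0;
            TableIterator<PrimaryKey> it = api.tableKeysIterator(key, null, null);
            try {
                while (it.hasNext()) {
                    it.next();
                    count++;
                }
            } finally {
                it.close();                            // always release the iterator
            }
            return count;
        }
    }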

    Kind regards

    George

  • Updating a global array

    Dear friends!

    I don't remember what I did a few days ago - global arrays are no longer being updated. I cut it down to this mechanism:

    var arrayOfObjects = [];             // global Array
    
    UpdateA (arrayOfObjects);            // array is NOT updated
    $.writeln ("1) arraOfObjects = \n" + arrayOfObjects);
    
    UpdateB (arrayOfObjects);            // array is NOT updated either
    $.writeln ("2) arraOfObjects = \n" + arrayOfObjects);
    
    function UpdateA (gArray, someotherStuff) {
    var locObject = new Object();
        locObject.Name = "A";
        locObject.Item = "Something";
    
        gArray = [locObject, locObject, locObject];
    $.writeln ("A) gArray = " + gArray);
    }
    
    function UpdateB (gArray, someotherStuff) {
    var locObject = new Object();
        locObject.Name = "A";
        locObject.Item = "Something";
      
    var localArray = [locObject, locObject, locObject];
        gArray = localArray.slice();
    $.writeln ("B) gArray = " + gArray);
    }
    

    The result is very clear: the global array is not affected at all by the update functions:

    A) gArray = [object Object],[object Object],[object Object]
    1) arraOfObjects = 
    
    B) gArray = [object Object],[object Object],[object Object]
    2) arraOfObjects = 
    
    

    Am I going blind?

    In the original script, the update function collects markers and puts them in the global array.

    It seems that I need a tool to save the changes...

    Klaus

    Dear friends,

    After a lot of fiddling around, I found solutions for the desired array functions. A key insight was that

    gArray = [];

    within a function creates a new array - it does not empty the original.
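
    A minimal illustration of the two behaviours (function names are made up):

    var gArray = [1, 2, 3];

    function WrongClear (arr) {
      arr = [];                   // rebinds the local name only
    }
    function RightClear (arr) {
      arr.length = 0;             // empties the array the caller passed in
    }

    WrongClear(gArray);
    $.writeln(gArray);            // 1,2,3 - unchanged
    RightClear(gArray);
    $.writeln(gArray);            // (empty)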

    So now I have a set of functions for arrays. It may be trivial to you, but not for me...

    gArray = [];
    gArray2 = ["1", "2", "3", "4", "5"];                                
    
    AmendArray (gArray, "one", "two", "three", "four", "five");          //1 gArray =
    $.writeln ("1 gArray = \n   " + gArray);                             //  one,two,three,four,five  OK
    
    DeleteArrayItem (gArray, 1);                       // remove item 2  //2 gArray =
    $.writeln ("2 gArray = \n   " + gArray);                             //  one,three,four,five      OK   
    
    CopyArray (gArray, gArray2);                                         //3 gArray =
    $.writeln ("3 gArray = \n   " + gArray);                             //  1,2,3,4,5                OK
    
    InsertArrayItem (gArray, 1, "two");                                  //4 gArray =
    $.writeln ("4 gArray = \n   " + gArray);                             //  1,two,2,3,4,5            OK
    
    ReplaceArrayItem (gArray, 3, "three");                               //5 gArray =
    $.writeln ("5 gArray = \n   " + gArray);                             //  1,two,2,three,4,5        OK
    
    ExtractArray (gArray, gArray2, 2, 3);          // extract items 3..4 //6 gArray, gArray2          OK
    $.writeln ("6 gArray, gArray2 = \n   " + gArray + "\n   " + gArray2);//  3,4,5
                                                                         //  1,2
    function AmendArray (array, items) {
      var j, n = arguments.length;
      for (j = 1;  j < n; j++) {
        array.push(arguments[j]);
      }
    }
    
    function DeleteArrayItem (array, index) {
    // array.splice(index, n-delete,insert-item1,.....,itemX)
      array.splice (index, 1);
    }
    
    function CopyArray (target, source) {
    // http://www.2ality.com/2012/12/clear-array.html tells:
    // target = [];        Creates extra garbage: existing array not reused, a new one is created.
    // target.length = 0;  JavaScript semantics dictate that all elements must be deleted; costs time
      var j, n = source.length;
      target.length = 0;
      for (j=0; j < n; j++) {
        target.push(source[j]);
      }
    }
    
    function ReplaceArrayItem (array, index, item) {
    // array.splice(index, n-delete,insert-item1,.....,itemX)
      array.splice (index, 1, item);
    }
    
    function InsertArrayItem (array, index, item) {
    // Insert item after index, pushing the rest 'upwards'
      array.splice (index, 0, item);
    }
    function ExtractArray (target, source, index, n) {
    // extract n items starting at index in source giving new array target
    // Create local extract and use copy method
      var lArray = source.splice(index, n);
      var j, n = lArray.length;
      target.length = 0;
      for (j=0; j < n; j++) {
        target.push(lArray[j]);
      }
    }
    

    Thanks for listening and for forcing me to think it through myself!

  • Business Model - Logical Table Source

    Can someone give me details on exactly when you would follow scenario 1 when modeling a logical table, exactly when you would follow scenario 2, and what the main difference in behavior is between the two? It would help if someone could illustrate with the equivalent SQL table joins.


    Scenario 1


    You drag a second physical table's column onto the logical table. This causes an additional field to appear in the logical table's column list, coming from a table other than the original, and causes a second table to appear under the original table in the sources folder.


    Scenario 2

    You drag a second physical table onto the existing logical table source of the logical table. On the surface the physical table source appears the same as before, but when you examine its properties you will see that the second table has been joined to it.


    Thanks for your comments,


    Robert.

    Scenario 1
    ---> This is more economical, and the BI Server is free to use its intelligence to pick sources based on the columns pulled into the criteria tab, provided you make the sources known using the content tab.
    In general, we do this for:
    Dimension extensions
    Fragmentation
    Aggregate tables
    Scenario 2
    ---> In this case we force the BI Server to go the way we tell it (forcing it to use the joins), and its intelligence may not be used.
    In general, we do this for:
    Fact extensions
    Measures that are based on certain conditions on the dimensions, which we might have to add/map
    Aggregations based on logical columns, which we used to do in the Siebel Analytics version; this is no longer needed in 10g and 11g

    Hope this helps

    Edited by: Srini VIEREN on 21 February 2013 09:17

  • Complex aggregation options in an ODI interface

    I have one source, and I need to build an aggregate table as the target.
    I've implemented the following example query:

    SELECT
    ATRIBUTE1,
    ATRIBUTE2,
    ATRIBUTE3,
    SUM (ATRIBUTE4),
    SUM (ATRIBUTE5),
    AVG (SUM(ATRIBUTE4+ATRIBUTE5)/ATRIBUTE6),
    ...
    FROM TABLE_AGG
    GROUP BY SUBSTR (ATRIBUTE1, 0, 6), ATRIBUTE2, ATRIBUTE3

    Are there options to implement this query, or something similar, please?

    Thank you

    Have you looked at this thread?

    How to use GROUP BY in ODI

    Just add the SUM operator to the required target fields in your ODI interface and the GROUP BY clause is automatically added by ODI. I notice that you have a GROUP BY field that does not exist in your select list - was that deliberate?
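
    For reference, a sketch of the kind of statement ODI generates once SUM is set on the aggregated target fields (assuming the remaining columns are the grouping ones):

    SELECT ATRIBUTE1,
           ATRIBUTE2,
           ATRIBUTE3,
           SUM(ATRIBUTE4),
           SUM(ATRIBUTE5)
    FROM   TABLE_AGG
    GROUP  BY ATRIBUTE1, ATRIBUTE2, ATRIBUTE3;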

  • Package global variable - Collection - Associative array

    All,

    What we are doing is...
    (1) We fill tab_emp_sal_comm with BULK COLLECT and browse it in a loop.
    (2) We check if the passed deptno is available in the tab_emp_sal_comm collection.
    (3) If it is available, we fill a collection called tab_emp_sal_comm_1 and push only the corresponding records into it.
    (4) From that collection, we want to fill the package's global collection, which is again of the same type.
    (5) By default, on each new call the old values are replaced, but I want to append to the global variable on each call and finally do one big update to the corresponding tables.
    (6) l_deptno will be a parameter, and its value will change on each call of this procedure in the real-time code.

    For the sake of ease, I've given a simulated example using the EMP table. The goal is to append to the global collection in the package. On each call the previously loaded values are replaced; I want them to remain available, and additional calls should only add the next set of rows instead of overwriting them.

    How can this be achieved? Please discuss. (A sketch of one approach follows the code below.)
    CREATE OR REPLACE PACKAGE employees_pkg
    IS
      type rec_sal_comm is record(esal emp.sal%type, ecomm emp.comm%type,edeptno emp.deptno%type);
      type at_emp_sal_comm is table of rec_sal_comm index by pls_integer;
      pkg_tab_emp  at_emp_sal_comm;
      pkg_tab_emp_1 at_emp_sal_comm;
    END;
    /
    -- Block Starts 
     declare
      -- Local variables here
      type emp_sal_comm is record(
        esal    emp.sal%type,
        ecomm   emp.comm%type,
        edeptno emp.deptno%type);
      type at_emp_sal_comm is table of emp_sal_comm index by pls_integer;
      tab_emp_sal_comm  at_emp_sal_comm;
      tab_emp_sal_comm1 at_emp_sal_comm;
      l_deptno          dept.deptno%type := 30;
      l_comm            number(7, 2) := 0;
      M_CNTR            NUMBER(7, 2) := 0;
    begin
    
      select sal, comm, deptno bulk collect into tab_emp_sal_comm from emp;
      for indx in 1 .. tab_emp_sal_comm.count loop
        if tab_emp_sal_comm(indx).edeptno = l_deptno then
          tab_emp_sal_comm1(indx).ecomm := tab_emp_sal_comm(indx).ecomm * 0.5;
          tab_emp_sal_comm1(indx).esal  := tab_emp_sal_comm(indx).esal * 0.75;
        end if;
      end loop;
      dbms_output.put_line(tab_emp_sal_comm1.count);
      dbms_output.put_line('**');
    
      m_cntr := tab_emp_sal_comm1.FIRST;
      loop
        exit when M_CNTR is null;
    --    dbms_output.put_line(M_CNTR || ' ** ' ||nvl(tab_emp_sal_comm1(M_CNTR).ecomm, 0));
        employees_pkg.pkg_tab_emp(m_cntr).ecomm := tab_emp_sal_comm1(M_CNTR)
                                                        .ecomm;
        employees_pkg.pkg_tab_emp(m_cntr).edeptno := tab_emp_sal_comm1(M_CNTR)
                                                          .edeptno;
        employees_pkg.pkg_tab_emp(m_cntr).esal := tab_emp_sal_comm1(M_CNTR).esal;
        m_cntr := tab_emp_sal_comm1.next(m_cntr);
    ---  other computations and calculations made based on the matched records
      end loop;
    
      employees_pkg.pkg_tab_emp_1 := employees_pkg.pkg_tab_emp;
     -- dbms_output.put_line('**');
    --  dbms_output.put_line(employees_pkg.pkg_tab_emp_1.count);
    end;
    -- Code will work in a Scott Schema with Emp Table.
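
    A minimal sketch of one way to make the package collection accumulate across calls instead of being overwritten (the helper name is made up; it assumes the employees_pkg spec above):

    -- Hypothetical helper for the employees_pkg body: append the passed
    -- rows after the current last index, so repeated calls accumulate.
    procedure append_rows (p_rows in at_emp_sal_comm) is
      l_offset pls_integer := nvl(pkg_tab_emp.last, 0);
      l_idx    pls_integer := p_rows.first;
    begin
      while l_idx is not null loop
        pkg_tab_emp(l_offset + l_idx) := p_rows(l_idx);
        l_idx := p_rows.next(l_idx);
      end loop;
    end append_rows;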

    Hi Ramarun,

    (1) The MULTISET operator, AFAIK, always gives a dense nested table. To create a sparse nested table, the only way is to delete some items with the DELETE method.

    (2) Using 1..NestedTableVar.COUNT is valid only when the collection is dense. With the result of MULTISET UNION you won't have a problem, but if you loop over a sparse collection you must use another method.
    Below is an example:
    Below is an example:

    declare
      type  t_set is table of number;
      set1  t_set := t_set(1,2,3,4,5,6,7,8,9,10);
      idx   integer; 
    
    begin
      -- make collection sparse
      set1.delete(3);
      set1.delete(7);
    
      idx := set1.first;
      while idx is not null
      loop
         dbms_output.put_line('Set('||idx||'): '||set1(idx));
         idx := set1.next(idx);
      end loop;
    end;
    / 
    
    Output:
    Set(1): 1
    Set(2): 2
    Set(4): 4
    Set(5): 5
    Set(6): 6
    Set(8): 8
    Set(9): 9
    Set(10): 10
    

    (3) You can use FORALL update/insert/delete with a nested table. If it is dense you can use 1..NestedTableVar.COUNT; if it is sparse, then you must use another method, as sketched below.
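
    A sketch of that sparse-collection alternative for FORALL, using INDICES OF (the target table t and its columns are illustrative):

    declare
      type t_set is table of number;
      set1 t_set := t_set(10, 20, 30, 40, 50);
    begin
      set1.delete(3);                  -- make the collection sparse
      forall i in indices of set1      -- visits only the existing indexes
        update t set c = set1(i) where id = i;
    end;
    /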

    Kind regards.
    Al

    Edited by: Alberto Faenza on 2 December 2012 13:35

  • Single date dimension when creating aggregation tables

    Hi guys,

    I have a single date dimension (D1-D) with date_id as its key, and the granularity is at the day level. I have a fact table (F1-D) that gives daily transactions. Now I have created three aggregation tables: F2-M (aggregated to monthly), F3-Q (aggregated to quarterly), and F4-Y (aggregated to yearly). As I said, I have a single date dimension table with date_id as the key, and it has additional month, quarter, and year columns.


    My question is: is this single dimension table sufficient to create the joins and maintain the BMM layer? I have joined date_id to all the facts in the physical layer. In the BMM layer, I have one logical fact table with 4 sources. I have created the date dimension hierarchy, created the logical levels year, quarter, month, and day, and set their respective level keys. After doing this, I also set the logical levels on the 4 logical table sources of the fact table.

    Here, I get an error saying:



    WARNINGS:

    BUSINESS MODEL Financial Model:
    [39059] Logical dimension table D04_DIM_DATE has a source D04_DIM_DATE at the detail level of D04_DIM_DATE that joins to higher-level fact sources F02_FACT_GL_DLY_TRAN_BAL and F03_FACT_GL_PERIOD_TRAN_BAL.




    Can someone tell me why I get this error?

    Answering in reverse: your month aggregate table must have information on the year.

    This is so it can be summarized at the parent hierarchy levels.

    In general, this is so you don't have to create an aggregation table for every situation - your month table can be used for year aggregates. That is still quite effective (12 times more data than needed, but better than 365 times).

    Thinking about your particular situation, where you have both a year AND a month aggregate, you might get away without information from the parent levels - but I have not tested this scenario.

    For the second part, let's say you have a month description and a month key field. When you select the month description and revenue, OBIEE needs to know where to get the month description from. It can't get it from the secondary date dimension table for the reasons mentioned previously. So you tell it to get it from the aggregate table. It is as simple as dragging the respective physical column from the aggregate table onto the existing logical column for the month description.

    Kind regards

    Robert

  • Implementing Aggregate Navigation in OBIEE

    Hello Guru

    Could someone please tell me what aggregate navigation is and how to configure it in OBIEE?

    Thank you

    Hello

    Aggregate navigation is an excellent technique for handling aggregated values in reports.
    You can check these links, which fully explain creating the aggregate table, setting up aggregation, and troubleshooting:

    http://www.rittmanmead.com/2006/11/aggregate-navigation-using-Oracle-BI-server/
    http://hiteshbiblog.blogspot.com/2010/04/OBIEE-aggregate-navigation-not-hitting.html

    You can also find this in the documentation section of the Oracle site.

    Thank you
    K.Babu
