vCOps groups and historical data

I create a group in the vSphere vCOps UI containing a number of virtual machines.

Once I have created the group, I can start to see metrics for it.

However, when I view the group in the custom user interface, it seems impossible to get any values from before the time the group was created.

For example, I create the group on February 1. The virtual machines have had individual metrics collected since their creation a month earlier, on January 1.

However, when I create the group, vCOps does not appear to extract any data from before February 1.

For example, I want to get CPU MHz for the whole group of virtual machines. Even though the data to calculate the group metric exists in the individual VMs' historical data, vCOps shows data only from the point when the group was created, not before.

Is that a correct understanding of the limitation of groups with respect to reporting on historical data?

What you describe is how all resources in vCOps behave with respect to data collection (whether attributes or supermetrics). Metrics begin collecting according to the assigned attribute package/supermetrics once the resource is created. Historical data from before the resource existed/was discovered is not available.

So I would not call this behavior specific to groups, because that is how all resources behave with respect to data collection.

Tags: VMware

Similar Questions

  • Management of historical data in BPM

    Hi all

    We have a requirement from the business for historical reporting of the transactions executed in our BPM processes (mainly workflow certifications). There are about 150 transactions per day with detailed data. We have been discussing how to handle this requirement:

    1. First, we proposed using our BI platform (OBIEE) for these reports, but these reports serve operational purposes, not analysis/analytics.

    2. We then began to explore the option of keeping the historical transactions in BPM, since there are not too many per day, although they do contain detailed data (and the historical data must be stored for up to 2 years). Personally, I don't like this option because BPM should not be used for this purpose, but the same objection applies to option 1, where we would be using the BI platform for something it is not intrinsically designed for.

    What do you think? What would you recommend?

    Any comments will be appreciated. Thank you very much in advance,

    -Carlos

    Hi Carlos,

    First, do you need to view this data directly in the workspace, or can it be made available through a separate screen/application?

    Regardless of the answer, you should not build reports directly against the internal tables of the Oracle BPM/SOA engine.

    My suggestion would be to build a database that contains the activity data relevant to your reports. This database can be populated using human task events, which are fired automatically when a task's status changes in the BPM engine.

    For example, when a task is assigned to someone, the engine fires an event; a listener can then take the relevant process and task information and place it in the historical database.

    That way, you can build your reports on top of this DB in whatever technology you prefer: Oracle BI Publisher, for example, or just a nice UI with search capabilities.
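
    To make the idea concrete, here is a minimal sketch of such a history table; the table and column names are hypothetical (not from any Oracle BPM schema), and the event listener would insert one row per task state change:

    -- hypothetical reporting table populated by a human task event listener
    create table task_history (
      task_id       varchar2(64),    -- BPM task identifier
      process_name  varchar2(128),   -- owning process
      assignee      varchar2(64),    -- user the task was assigned to
      task_state    varchar2(32),    -- e.g. ASSIGNED, COMPLETED
      changed_on    date             -- when the state change occurred
    );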

    To learn more about human task events, see this document

    http://docs.Oracle.com/middleware/1213/BPM/BPM-develop/SOA-BPM-human-task-design.htm#BPMPD87372

    If you would like more information, feel free to ask.

    See you soon

    José

  • vCOps and vCenter Collection levels

    How does the collection level I configure in vCenter affect vCOps? Will vCOps retain the same amount and granularity of statistics no matter which collection level vCenter Server is configured with?

    The vCenter Server collection level has nothing to do with vCOps and the data it collects. vCOps collects the 12 most recent data points from vCenter for each object (20-second intervals * 12 samples = 5 minutes), which fits within the 1-hour real-time performance statistics window and is not affected by the hourly/daily/weekly/monthly rollups.

  • Find emails in an email group, send to them, update contact and data card data - all in one

    Hello

    I wanted to know if I'll be able to do the following:

    1. Pull the names of all emails in a given email group

    2. Extract contact data

    3. Send an email from (1)

    4. Update the contact data and one or two data cards

    All of this I would need to do in a program step, with a custom cloud connector / API.

    Possible? Any guidance welcome.

    Thanks for your help,

    Adam

    Hi Adam,

    (1) You can search by name, or list all emails and filter them. You are not able to pass a group name or ID and retrieve the emails within the group.

    (2) You can use the standard or DTS WSDL to extract contact data (E9 and E10).

    With a cloud connector (Program Builder with external services), you would first list the contacts sitting at the program step and pull their contact IDs. You can then use the standard service to extract the contact data.

    (3) You can send an email to one contact at a time (E9 and E10) using the contact ID that comes from the program step, or send to an existing distribution list (E9 only).

    (4) you can do so via the standard WSDL (E9 and E10) or DTS WSDL (E9 only)

    Cheers, Aaron.

  • Creation date and import date

    When you import photos or videos into Photos from a folder, the application uses the import date rather than the original creation date. The result is that imports are all presented together under "Today." Many photos and videos were taken on different dates, so I would like them listed according to creation date rather than grouped under the import date. I went to "View" and checked "Date of creation". "Sort" doesn't work in Photos because it is always greyed out. Any help would be greatly appreciated!

    If you look in the Photos window, photos and videos are sorted by capture date with the oldest items at the top. This sort order cannot be changed.

    In the Imports window, items are sorted by the date they were imported into the library, with the oldest at the top. The sort order cannot be changed here either.

    However, you can use a smart album to include all the photos in the library and then sort them as you like:

    The smart album could be created with a date-based criterion (shown as a screenshot in the original post) that would include all the photos in the library. Now you can sort them as you like. Just make sure the date is early enough to catch all the photos in the library.

    Moments in Photos are the new Events, i.e. groups of photos sorted by capture date.

    When the iPhoto library was first migrated to Photos, a folder titled iPhoto Events was created in the sidebar, and all migrated iPhoto Events (which are now Moments) are represented by an album in this folder. Use the Command + Option + S key combination to open the sidebar if it is not already open.

    NOTE: several users have reported that if the event albums are moved out of the iPhoto Events folder in the sidebar, they disappear. It is not widespread, but several users have reported this problem. Therefore, if you want to ensure that you keep these event albums, do not move them out of the iPhoto Events folder.

    There is a way to simulate Events in Photos.

    When new photos are imported into the Photos library, go to the Last Import smart album, select all the photos, and use the File menu ➙ New Album option, or the Command + N key combination. Name it what you want. It will appear just above the iPhoto Events folder, where you can drag it into the iPhoto Events folder.

    When you click on the iPhoto Events folder, you will get a simulated iPhoto Events window.

    Albums and smart albums can be sorted by title, by date with the oldest first, or by date with the most recent first.

    Tell Apple what missing features you want restored, or new features added, in Photos via Apple Feedback.

  • How does Rename Historical Data.vi work? I get error -1967386622

    What format is required for the inputs to Rename Historical Data.vi?

    No matter what I try, I can't make it work; I get error -1967386622:

    HIST_RenameNodesCORE.vi, Citadel: (Hex 0x8ABC1002) The specified item, such as a computer, path, or folder, cannot be found.

    Given this hypothetical hierarchy in a Citadel 5 database in MAX, what are the inputs to rename ZZZ to XXX?

    - My computer (PC)
        - C__DB
            - AAA
                + AAA_Process_Name
                    (traces)
            - ZZZ
                + ZZZ_Process_Name
                    (traces)

    Thank you!!

    This knowledge base article describes exactly this error. The syntax is a bit tricky at times. To rename ZZZ to XXX, use the following inputs to the Rename Historical Data VI:

    database name: C_DB

    current name: \\ZZZ\

    new name: \\XXX\

    And for the ZZZ process:

    database name: C_DB

    current name: \\ZZZ\ZZZ_Process_Name

    new name: \\ZZZ\XXX_Process_Name

    To change both the folder and the process name, you would have to run the VI twice. Hope this helps!

  • Is there a good way to archive historical data?

    Our Planning cubes have become too big with 5 years of forecasting and budgeting data.

    Is there a good way to archive historical data?

    How do you guys do it?

    I know a simple way is to make a copy of the Planning Essbase cubes. However, if there are cell texts, attachments, and supporting details, these will be lost unless there is a way to archive the Planning RDBMS repository data as well. Even in that case, all links and hooks from the Essbase cubes into the RDBMS repository will be broken.

    The old-fashioned method is to print all reports to PDF and archive them.

    Given that the plan changes every month, do you reprocess history each time you archive?

    Thanks for your advice.

    This can be done in 2 ways...

    1. Just make a copy of the old data in a text file or another history Essbase cube, and clear only the historical data from the current application. This will keep other information, such as cell texts, intact. If users want to refer to the old cell texts/supporting details, they can do so by going directly into the application, and for the data part, they can look at the old PDF reports.

    2. Copy the Planning application to create a duplicate of the current application. Keep all the old data, cell texts, etc. in the old app; all previous reports will also point to this app. Then clear the current app, and simply provide read access to the old one. Users can be trained to use the older application for all historical data and the current app for ongoing budgets. The old app can also serve as the archive of all data.

  • Need help with Oracle SQL: merging records according to effective and term dates

    Hi all

    I need help figuring out this little challenge.

    I have groups and flags, with effective dates and term dates against these flags, as in the following example:

    GroupName  Flag_A  Flag_B  Eff_date  Term_date
    Group_A    Y       Y       20110101  99991231
    Group_A    N       N       20100101  20101231
    Group_A    N       N       20090101  20091231
    Group_A    N       N       20060101  20081231
    Group_A    N       Y       20040101  20051231
    Group_A    Y       Y       20030101  20031231
    Group_B    N       Y       20040101  99991231
    Group_B    N       Y       20030101  20031231

    As you can see, Group_A had the same flag combination (N, N) for three successive periods. I want to merge all successive time periods with the same flags into one, where the effective date will be the earliest of the merged periods and the term date the latest.

    So the final result should look like this:

    GroupName  Flag_A  Flag_B  Eff_date  Term_date
    Group_A    Y       Y       20110101  99991231
    Group_A    N       N       20060101  20101231
    Group_A    N       Y       20040101  20051231
    Group_A    Y       Y       20030101  20031231
    Group_B    N       Y       20030101  99991231

    Thanks for your help

    Here's the DDL script

    drop table TMP_group_test;

    create table TMP_group_test (
      groupname varchar2(8),
      flag_a    varchar2(1),
      flag_b    varchar2(1),
      eff_date  varchar2(8),
      term_date varchar2(8)
    );

    insert into TMP_group_test values ('Group_A', 'Y', 'Y', '20110101', '99991231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20100101', '20101231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20090101', '20091231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20060101', '20081231');
    insert into TMP_group_test values ('Group_A', 'N', 'Y', '20040101', '20051231');
    insert into TMP_group_test values ('Group_A', 'Y', 'Y', '20030101', '20031231');
    insert into TMP_group_test values ('Group_B', 'N', 'Y', '20040101', '99991231');
    insert into TMP_group_test values ('Group_B', 'N', 'Y', '20030101', '20031231');

    commit;

    Post edited by: user13040446

    This is the closest I have gotten to the solution.


    I create two ranks:

    rnk1: partition by groupname, order by eff_date desc: this rank sorts the records from most recent to oldest and resets for each group.

    rnk2 (dense): partition by groupname, order by flag_a, flag_b: this rank gives each group/flag combination a number, so that the rows are classified into "families".

    Then I use the analytic MIN function, MIN(eff_date) OVER (PARTITION BY groupname, rnk2): the idea is that, for each member of the same family, the new effective date is the family's minimum (and the maximum for the term date); at the end I just need DISTINCT so that the duplicates are gone.

    Now the problem: as you can see from the query below, records 1 and 6 (as identified by rownum) are placed in the same family because they have the same flag combination, but they are not successive, so each must keep its own effective date.

    If only I could distinguish between these two, that would solve my problem.


    Query:


    select rownum, groupname, flag_a, flag_b, eff_date, term_date, rnk1, rnk2
         , min(eff_date) over (partition by groupname, rnk2) min_eff
    from
    (
      select rownum,
             groupname, flag_a, flag_b, eff_date, term_date,
             rank() over (partition by groupname order by eff_date desc) rnk1,
             dense_rank() over (partition by groupname order by flag_a, flag_b) rnk2
      from dsreports.tmp_group_test
    )
    order by rownum

    Hello

    user13040446 wrote:

    Hi KSI.

    Thanks for your comments. You were able to distinguish between those highlighted lines, but lost lines 2, 3, and 4, which are supposed to have the same min date = 20060101.

    Please see the wanted table above for the final result I want to reach.

    Thanks again

    That first answer is basically correct, but in the main query you want to use the aggregate MIN function, not the analytic one, and GROUP BY the columns with common values, like this:

    WITH got_output_group AS
    (
        SELECT groupname, flag_a, flag_b, eff_date, term_date,
               ROW_NUMBER () OVER (PARTITION BY groupname
                                   ORDER BY eff_date
                                  )
             - ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                                   ORDER BY eff_date
                                  ) AS output_group
        FROM tmp_group_test
    )
    SELECT groupname, flag_a, flag_b,
           MIN (eff_date)  AS eff_date,
           MAX (term_date) AS term_date
    FROM got_output_group
    GROUP BY groupname, flag_a, flag_b,
             output_group
    ORDER BY groupname,
             eff_date DESC
    ;

    The result I get is

    GROUP_NA F F EFF_DATE TERM_DAT
    -------- - - -------- --------
    Group_A  Y Y 20110101 99991231
    Group_A  N N 20060101 20101231
    Group_A  N Y 20040101 20051231
    Group_A  Y Y 20030101 20031231
    Group_B  N Y 20030101 99991231

    which is what you asked for.
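
    For readers puzzled by the two ROW_NUMBER calls: within each groupname, the first numbering advances on every row, while the second advances only among rows sharing the same flag combination, so their difference stays constant exactly across a consecutive run of identical flags and changes whenever the run is broken. A quick way to see this against the sample data (this check query is an addition, not part of the original reply):

    SELECT groupname, flag_a, flag_b, eff_date,
           ROW_NUMBER () OVER (PARTITION BY groupname ORDER BY eff_date)  AS rn_all,
           ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                               ORDER BY eff_date)                         AS rn_flag,
           ROW_NUMBER () OVER (PARTITION BY groupname ORDER BY eff_date)
         - ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                               ORDER BY eff_date)                         AS output_group
    FROM tmp_group_test
    ORDER BY groupname, eff_date;

    For Group_A, the three N/N rows all get output_group = 2, while the two Y/Y runs get 0 and 4, which is why the two Y/Y periods are kept separate.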

  • Using a supermetric on historical data

    Once I create a supermetric, should it be able to access historical data as well? Or can it only pull data from the time it was created?

    You can only display historical data from before the supermetric was created if you use the super metric editor and "view supermetrics".

    Otherwise, supermetrics will only begin adding to the FSDB as they are calculated (once applied to a resource).

  • Historical data of esxtop?

    I'm somewhat familiar with esxtop as a real-time performance analyzer. Is it possible to obtain historical data from esxtop, or is it real-time monitoring only? I only need a few hours back, but a day would be ideal.

    -d is the delay in seconds and -n is the number of iterations. So -d 20 -n 4320 gives you 24 hours of data with a 20-second sampling interval.
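
    Putting those flags together: the usual way to capture esxtop history is batch mode, which writes the samples to a CSV file for later analysis (the output filename is just an example):

    # capture 24 hours of data at a 20-second sampling interval
    esxtop -b -d 20 -n 4320 > esxtop-24h.csv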

  • Update the values in the Table from another Table containing historical data

    So, I have two tables, a current table and a master table.  The current table is updated each week, and at the end of the week it is copied to the master table to keep historical data.  I update the current table early in the week and want to take the latest data from the master table and update the current table with it.  The current table could have additional IDs, or some of the IDs could have dropped off (those rows would receive no data from the master table).  I want to update only the rows in the current table that have existing data in the master table for the attr1, attr2, attr3 columns.  A particular ID may have more than one record in the master table; I want only the latest record to be used for updating the current table.  The data comes from a different database where no direct connection is possible, so I have to import the data every week.  Here are some create/insert statements:

    create table current_T (ID1 varchar(100),adate date,attr1 varchar(100),attr2 varchar(100),attr3 varchar(100))
    

    create table Master_T (ID1 varchar(100),adate date,attr1 varchar(100),attr2 varchar(100),attr3 varchar(100))
    
    

    begin
    insert into current_T (ID1,adate)
    values ('IE111','08/02/13');
    insert into current_T (ID1,adate)
    values ('IE112','08/02/13');
    insert into current_T (ID1,adate)
    values ('IE113','08/02/13');
    
    insert into master_T (ID1,adate,attr1,attr2,attr3)
    values ('IE111','08/01/13','yes','abc','123');
    insert into master_T (ID1,adate,attr1,attr2,attr3)
    values ('IE112','08/01/13','no','dgf','951');
    insert into master_T (ID1,adate,attr1,attr2,attr3)
    values ('IE113','08/01/13','no','dgf','951');
    insert into master_T (ID1,adate,attr1,attr2,attr3)
    values ('IE113','07/01/13','no','dgf','951');
    end;
    

    This has been a head-scratcher for me and any help would be greatly appreciated.  I'm coding in APEX 4.1.

    Thank you

    -Steve

    Not tested

    merge into current_t c
    using (select *
           from (select m.*,
                        row_number() over (partition by m.id1 order by m.adate desc) rn
                 from master_t m
                )
           where rn = 1
          ) u
    on (c.id1 = u.id1)
    when matched then update
      set c.adate = u.adate,
          c.attr1 = u.attr1,
          c.attr2 = u.attr2,
          c.attr3 = u.attr3
    when not matched then insert
      (c.id1, c.adate, c.attr1, c.attr2, c.attr3)
    values
      (u.id1, u.adate, u.attr1, u.attr2, u.attr3)
    ;
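
    As a quick sanity check of the "latest row per ID" subquery (this check query is an addition, not from the original reply), you can run it on its own against the sample data:

    select id1, adate, attr1, attr2, attr3
    from (select m.*,
                 row_number() over (partition by m.id1 order by m.adate desc) rn
          from master_t m
         )
    where rn = 1;

    With the sample inserts above, this returns the 08/01/13 row for each of IE111, IE112, and IE113, dropping IE113's older 07/01/13 row.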

  • Calculate the start and end date in CONNECT BY when the hierarchy changes




    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

    Hello, can you please help me or guide me in calculating the start and end dates for the logic below.

    I want to calculate the manager hierarchy up from the agent.
    The query below works fine and gives me the expected results,
    but when there is a change in the manager hierarchy, or a manager gets promoted,
    I need to calculate the start and end dates accordingly.
    CREATE TABLE PERSON_DTL
    (
      SID                 VARCHAR2(10 BYTE),
      EMP_MGRS_ID         VARCHAR2(10 BYTE),
      START_EFFECTIVE_DT  DATE,
      END_EFFECTIVE_DT    DATE
    );
    
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M100', 'M107', TO_DATE('05/20/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M101', 'M102', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('05/18/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('A100', 'M100', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M100', 'M101', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('05/18/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M107', 'M102', TO_DATE('05/20/2013 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M102', 'M103', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M103', 'M104', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('A101', 'M105', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into PERSON_DTL
       (SID, EMP_MGRS_ID, START_EFFECTIVE_DT, END_EFFECTIVE_DT)
     Values
       ('M105', 'M106', TO_DATE('01/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('12/31/9999 00:00:00', 'MM/DD/YYYY HH24:MI:SS'));
    COMMIT;
    SELECT   CONNECT_BY_ROOT (b.sid) agent_sid,
                 TRIM (
                    LEADING ',' FROM    SYS_CONNECT_BY_PATH (b.sid, ',')
                                     || ','
                                     || b.emp_mgrs_id
                 )
                    PATH,
                 START_EFFECTIVE_DT Start_dt,
                 END_EFFECTIVE_DT End_dt
          FROM   PERSON_DTL b
         WHERE   CONNECT_BY_ISLEAF = 1
    START WITH   sid IN ('A101', 'A100')
    CONNECT BY   PRIOR b.emp_mgrs_id = b.sid
    These are the results I am getting now.
    
    AGENT_SID    PATH                       START_DT    END_DT
    
    A100    A100,M100,M101,M102,M103,M104    1/1/2010    12/31/9999
    A100    A100,M100,M107,M102,M103,M104    1/1/2010    12/31/9999
    A101    A101,M105,M106                   1/1/2010    12/31/9999
    Results Required
    
    A100    A100,M100,M101,M102,M103,M104    1/1/2010    5/18/2013
    A100    A100,M100,M107,M102,M103,M104    5/20/2013   12/31/9999
    A101    A101,M105,M106                   1/1/2010    12/31/9999

    A WITH clause will make it readable. The idea: flag each row whose path is not a continuation of the previous row's path (using LAG), take a running SUM of those flags to number each hierarchy "era", and then collapse each era with MAX/MIN to get its full path and date range:

    with paths as
    (
          SELECT   CONNECT_BY_ROOT (b.sid) agent_sid,
                   TRIM (
                      LEADING ',' FROM    SYS_CONNECT_BY_PATH (b.sid, ',')
                                       || ','
                                       || b.emp_mgrs_id
                   )
                      PATH,
                   START_EFFECTIVE_DT Start_dt,
                   END_EFFECTIVE_DT End_dt, rownum rn
          FROM   PERSON_DTL b
          START WITH   sid IN ('A101', 'A100')
          CONNECT BY   PRIOR b.emp_mgrs_id = b.sid
    ),
    flagged as
    (
        select agent_sid,
               path,
               start_dt,
               end_dt, rn,
               case when path like lag(path) over(order by rn)||'%' then 0 else 1 end flg
        from paths
    ),
    summed as
    (
        select agent_sid, path, start_dt, end_dt,
               sum(flg) over(order by rn) sm
        from flagged
    )
    select agent_sid, max(path) path, max(start_dt) start_dt,
           min(end_dt) end_dt
    from summed
    group by agent_sid, sm
    order by agent_sid;
    
    AGENT_SID  PATH                                     START_DT  END_DT
    ---------- ---------------------------------------- --------- ---------
    A100       A100,M100,M101,M102,M103,M104            01-JAN-10 18-MAY-13
    A100       A100,M100,M107,M102,M103,M104            20-MAY-13 31-DEC-99
    A101       A101,M105,M106                           01-JAN-10 31-DEC-99
    
  • How to store historical data?

    Hi,

    I'll be very happy if someone can resolve my issue.

    I am trying to maintain historical data. I created a table using a materialized view, and every time I refresh the table with new values, the old values should move to the history table and the current table should be updated with the new values. I am using Oracle 10g. Can someone tell me how to do this?


    Thanks in advance.

    981145 wrote:
    I created a materialized view with the name test.

    Simply create the history table by copying the table structure:

    create table history_table as
    select sysdate as date_backup, a.* from m_view a where 1=2;
    

    981145 wrote: I will update this materialized view once a month.

    define a job with DBMS_SCHEDULER
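
    A minimal sketch of such a monthly job, using the history_table and M_VIEW names from this thread (the job name, schedule, and the exact PL/SQL block are illustrative assumptions):

    begin
      dbms_scheduler.create_job(
        job_name        => 'refresh_m_view_monthly',    -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin
                              insert into history_table
                              select sysdate, a.* from m_view a;
                              dbms_mview.refresh(''M_VIEW'', ''C'');
                            end;',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1', -- run on the 1st of each month
        enabled         => true);
    end;
    /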

    981145 wrote:
    When I refresh, the old values of the test table should move to test_hist, and test should be updated with the new values.

    When the job runs, it must first copy all the data from m_view into the history table:

    insert into history_table
    select sysdate, a.* from m_view a;
    

    and then call the dbms_mview.refresh:

    DBMS_MVIEW.REFRESH('M_VIEW', 'C'); -- complete refresh
    
  • Historical data in the audit tables

    We are facing the following requirement from our current customer.

    They have run a number of workflows manually for a few years, and they have retained the historical data in Excel spreadsheets. Now, they want to start using the BPM Suite.

    They would like to be able to query and display the BPM Suite audit trail of all workflows, i.e. not only those that will be executed with BPM Suite in the future, but also those that have been stored in Excel spreadsheets over the years.

    I have a few questions about this:

    Is this a reasonable request?
    My guess is that we need to load the historical data into the BPM audit tables (BPM_AUDIT_QUERY, etc.); is that correct?
    Any suggestions on how to get there?

    I have been reading the docs, Googling the web, and searching this forum, and I have not yet found any relevant material on this subject.

    Regards

    Juan

    Published by: Juan Algaba Colera on October 11, 2012 04:14

    It would be very difficult to load their data directly into the audit tables. Also, the audit tables are stored in the soa-infra schema, which is usually configured to be purged after a certain period of time so that it does not grow indefinitely, since it contains all the runtime information as well. You will need to determine how long they need this historical data and configure the purging and DB sizing appropriately for that amount of data.

    If the data in Excel truly maps one-to-one to the new BPM process, the best way to get the audit data into the process would be to use the APIs to create process instances and then automate the execution of each task as defined in the spreadsheets. That way these past instances would actually have been run through the system.

    In most cases this is not done, however. Most customers archive these worksheets somewhere in case they need to retrieve them, but do not try to make the old data visible from the BPM Workspace as an audit trail.

    Thank you
    Adam DesJardin

  • Historical data on locked objects

    Hello!

    I am using Oracle DB 9.2.0.8.
    Is it possible to get historical data on the objects held in the shared pool,
    by analogy with note ID 163424.1, which describes how to see the current contents of the buffer cache?


    Thank you and best regards,
    Paul

    Hello

    check the following views:

    DBA_HIST_LATCH_NAME
    DBA_HIST_LATCH
    DBA_HIST_LATCH_CHILDREN
    DBA_HIST_LATCH_PARENT
    DBA_HIST_LATCH_MISSES_SUMMARY
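
    For example, on a release with AWR (10g and later; these DBA_HIST views do not exist in 9.2), a sketch of a query over historical shared pool latch statistics might look like this (querying AWR views requires the Diagnostics Pack license):

    -- per-snapshot history of shared pool latch activity
    select snap_id, latch_name, gets, misses, sleeps
    from   dba_hist_latch
    where  latch_name = 'shared pool'
    order  by snap_id;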

    Best regards
    Nikolai
