Management of historical data in BPM

Hi all

We have a company requirement for historical reporting of the business transactions carried out in our BPM processes (mainly workflow approvals/certifications). There are about 150 transactions per day with detailed data. We have been discussing how to handle this requirement:

1. First of all, we proposed using our BI platform (OBIEE) for these reports, but these reports serve an operational purpose, not an analysis/analytics one.

2. We then started exploring the option of keeping the historical transactions in BPM itself, since there are not too many per day, although they do carry quite a lot of data (and the history must be kept for up to 2 years). Personally, I don't like this option because BPM should not be used for this purpose, but the same objection applies to option 1, where we would be using the BI platform for something it is not really designed for.

What do you think? What would you recommend?

Any comments will be appreciated. Thank you very much in advance,

-Carlos

Hi Carlos,

So, do you need to view the data directly in the workspace screen, or can it be made available through a separate screen/application?

Regardless of the answer, you should not build the reports directly on the internal Oracle BPM/SOA engine tables.

My suggestion would be for you to build a database that contains the activity data relevant for the reports. This database can be populated using the human task events that are fired automatically whenever a task's status changes in the BPM engine.

For example, when a task is assigned to someone, the engine fires a callback that can pick up the relevant process and task information and place it in the historical database.

That way, you can build your reports over this DB in whatever technology you prefer: Oracle BI Publisher, for example, or just a nice UI screen with search capabilities.
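As a rough illustration (the table and column names below are just assumptions, not anything mandated by BPM), the reporting database could start out as a single denormalized table that the event listener fills:

    -- hypothetical reporting table populated by a human task event listener
    create table wf_task_history (
      process_instance_id  varchar2(100),
      process_name         varchar2(200),
      task_id              varchar2(100),
      task_name            varchar2(200),
      task_state           varchar2(50),
      task_outcome         varchar2(50),
      assignee             varchar2(100),
      event_time           timestamp default systimestamp
    );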

To learn more about human task events, see this document

http://docs.oracle.com/middleware/1213/bpm/bpm-develop/soa-bpm-human-task-design.htm#BPMPD87372

If you would like more information, feel free to ask.

See you soon

José

Tags: Fusion Middleware

Similar Questions

  • Historical data in the audit tables

    We are facing the following requirement from our current customer.

    They have been running a number of workflows manually for a few years, and they have kept the historical data in Excel spreadsheets. Now they want to start using the BPM Suite.

    They would like to be able to query and display the BPM Suite audit trail for all workflows, i.e. not only the ones that will be executed with the BPM Suite in the future, but also the ones that have been stored in the Excel spreadsheets over the years.

    I have a few questions about this:

    Is this a reasonable request?
    My guess is that we would need to load the historical data into the BPM audit tables (BPM_AUDIT_QUERY, etc.); is that correct?
    Any suggestions on how to achieve this?

    I have been reading the docs, Googling the web and searching this forum, and I have not yet found any relevant material on this subject.

    Regards

    Juan

    Published by: Juan Algaba Colera on October 11, 2012 04:14

    It would be very difficult to load their data directly into the audit tables. Also, the audit tables live in the soa-infra schema, which is usually configured to be purged after a certain period of time so that it does not grow indefinitely, since it contains all the runtime information as well. You will need to determine how long they need this historical data and configure the purge and the size of the DB accordingly for that amount of data.

    If the Excel data really maps one-to-one to the new BPM process, the best way to get the audit data into the engine may be to use the APIs to create process instances and then automate the execution of each task as defined in the spreadsheets. That way those past instances will actually have been run through the system.

    In most cases this is not done, however. Most customers archive these spreadsheets somewhere in case they need to retrieve them, but do not try to make that old data visible from the BPM Workspace or to view it as an audit trail.

    Thank you
    Adam DesJardin

  • Historical data in the repository with regard to targets?

    Hi all


    I have a question regarding historical data. We have an 11.1 OMS and an 11.2 repository DB, and lots of target DBs uploading data to the OMS. Our question is: how far back can we go and look at performance data?

    Is that still dependent on the AWR retention policy on the target DB (our current retention on most of our databases is 30 days), or is there a setting in the OMS or the repository DB where I can say that I want to keep the AWR or other performance data for, let's say, 6 months? How can I do this? I don't want to set the AWR retention to 6 months on the actual target DBs. Is the AWR data of the targets uploaded to the repository DB anyway?

    See the documentation
    Oracle® Enterprise Manager Administration
    11g Release 1 (11.1.0.1)
    http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/repository.htm#i1030660

    for data retention strategies.

    The AWR configuration applies only to the monitored target database.

    The data collected (by the Agents) is stored in the repository and is subject to the repository's data retention policies.
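    For reference, if you did decide to widen the AWR window on a target itself, that is done with DBMS_WORKLOAD_REPOSITORY. A hedged sketch (values are in minutes and only illustrative; it needs the Diagnostics Pack and enough SYSAUX space):

    begin
      -- keep AWR snapshots for roughly 6 months, taken hourly (illustrative values)
      dbms_workload_repository.modify_snapshot_settings(
        retention => 6 * 30 * 24 * 60,
        interval  => 60);
    end;
    /

    The repository-side retention for the data uploaded by the Agents is governed separately, by the policies described in the document above.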

    Regards
    Rob
    http://oemgc.WordPress.com

  • How do I use Rename Historical Data.vi? I get error -1967386622

    What format is required for the inputs to Rename Historical Data.vi?

    No matter what I try, I can't make it work; I get error -1967386622:

    HIST_RenameNodesCORE.vi, Citadel: (Hex 0x8ABC1002) The specified item, such as a computer, path, or folder, cannot be found.

    Given this hypothetical hierarchy in the Citadel 5 universe in MAX, what inputs are needed to rename ZZZ to XXX?

    -My computer (PC)

    -C__DB

    -AAA

    + AAA_Process_Name

    (traces)

    -ZZZ

    + ZZZ_Process_Name

    (traces)

    Thank you!!

    This knowledge base article describes exactly this error. The syntax is a bit tricky at times. To rename the ZZZ entries to XXX, use the following inputs in the Rename Historical Data VI:

    database name: C_DB

    current name: \\ZZZ\

    new name: \\XXX\

    And for the ZZZ process:

    database name: C_DB

    current name: \\ZZZ\ZZZ_Process_Name

    new name: \\ZZZ\XXX_Process_Name

    To change both the folder and the process name, you would have to run the VI twice. Hope this helps!

  • Historical data of the Internet

    Can you create a text file of the Internet history data so that it can be printed?

    http://www.NirSoft.NET/utils/iehv.html
     
    You can record the history of the browser in a text file that can be printed.
     
     
    "pcdoctor48" wrote in message news: 684d2a0e-7b9c-437b-acc7-fa8d73f41f68...
    > How to record the history of the browser in a text file?
     
     
     
  • Is there a good way to archive historical data?

    Our Planning cubes are becoming too big with 5 years of forecasting and budgeting data.

    Is there a good way to archive historical data?

    How do you guys do it?

    I know a simple way is to just make a copy of the Planning Essbase cubes.  However, if there are cell texts, attachments and supporting details, these will be lost unless there is a way to also archive the Planning RDBMS repository data.  Even in that case, all the links and hooks from the Essbase cubes into that RDBMS repository will be broken.

    The old-fashioned method is to print all reports to PDF and archive them.

    Given that the plan changes every month, do you reprocess the history each time you archive?

    Thanks for your advice.

    This can be done in 2 ways...

    1. Just make a copy of the old 'DATA' into a text file or another history Essbase cube. Clear only the historical data from the current application. This keeps the other information, such as cell text, intact. If users want to refer to the old cell texts/supporting details they can do so by going directly into the application, and for the data part they can look at the old PDF reports.

    2. Make a copy of the Planning application to create a copy of the current app. Keep all the old data, cell texts, etc. in the old app, and point all the previous reports at it. Then clear the current app, and simply provide read access to the older app. Users can be trained to use the older application for all historical data and the current app for the existing budgets. This app can also serve as the archive of all the data for that period.

  • vCOps groups and historical data

    I create a group in the vSphere vCOps UI containing a number of virtual machines.

    Once I created the group, I can start to see the metrics from the group.

    However, it seems that when I view the metrics for the group in the Custom UI, it is impossible to get any values from before the time the group was created.

    For example, I create the group on February 1.  The virtual machines have had individual data and metrics collected for them since their creation a month earlier, on January 1.

    However, when I create the group, it seems that vCOps cannot pull in data from before February 1.

    For example, I want to get CPU demand in MHz for the whole group of virtual machines.  Even though the data to calculate the group values exists, if vCOps were to look back at the historical data of the individual VMs, vCOps only shows data from the point when the group was created, not before.

    Is that a correct understanding of the limitation of the groups regarding the preparation of reports on historical data?

    What you describe is how all resources in vCOps behave with respect to data collection (whether attributes or supermetrics). Metrics begin to be collected, according to the assigned attribute package/supermetrics, once the resource is created. Historical data from before the resource existed is not retrieved or made available.

    So I would not call this behavior specific to groups, because that is how all resources behave with respect to data collection.

  • Using a supermetric on historical data

    Once I create a supermetric, should it be able to access historical data as well?  Or can it only pull data from the time it was created onward?

    You can only display historical data that is older than the supermetric if you use the Super Metric Editor and "view supermetrics".

    Otherwise, supermetrics will only begin to be added to the FSDB as they are calculated (once applied to a resource).

  • Historical data of esxtop?

    I'm somewhat familiar with esxtop as a real-time performance analyzer. Is it possible to obtain historical data from esxtop, or is it real-time monitoring only? I only need a few hours back, but a day would be ideal.

    -d is the delay in seconds and -n is the number of iterations.  So -d 20 -n 4320 gives you 24 hours of data with a 20-second sampling interval (for example, in batch mode: esxtop -b -d 20 -n 4320 > stats.csv).

  • Update the values in the Table from another Table containing historical data

    So, I have two tables, a current table and a master table.  The current table is updated each week and, at the end of the week, is copied to the master table to keep historical data.  I update the current table early in the week and want to take the latest data from the master table and update the current table with that data.  The current table could have additional IDs, or some of the IDs could have been dropped (those rows would not have data in the master table).  I only want to update the rows in the current table that have existing data in the master for the attr1, attr2, attr3 columns.  A particular ID may have more than one record in the master table; I want only the latest record to be used for updating the current table.  The data comes from a different database where no direct connection is possible, so I have to import the data every week.  Here are some create/insert statements:

    create table current_T (ID1 varchar(100),adate date,attr1 varchar(100),attr2 varchar(100),attr3 varchar(100))
    

    create table Master_T (ID1 varchar(100),adate date,attr1 varchar(100),attr2 varchar(100),attr3 varchar(100))
    
    

    begin
      -- explicit date masks added so the inserts run regardless of NLS settings (assuming MM/DD/RR dates)
      insert into current_T (ID1,adate)
      values ('IE111',to_date('08/02/13','MM/DD/RR'));
      insert into current_T (ID1,adate)
      values ('IE112',to_date('08/02/13','MM/DD/RR'));
      insert into current_T (ID1,adate)
      values ('IE113',to_date('08/02/13','MM/DD/RR'));

      insert into master_T (ID1,adate,attr1,attr2,attr3)
      values ('IE111',to_date('08/01/13','MM/DD/RR'),'yes','abc','123');
      insert into master_T (ID1,adate,attr1,attr2,attr3)
      values ('IE112',to_date('08/01/13','MM/DD/RR'),'no','dgf','951');
      insert into master_T (ID1,adate,attr1,attr2,attr3)
      values ('IE113',to_date('08/01/13','MM/DD/RR'),'no','dgf','951');
      insert into master_T (ID1,adate,attr1,attr2,attr3)
      values ('IE113',to_date('07/01/13','MM/DD/RR'),'no','dgf','951');
    end;
    

    This has been a head-scratcher for me and any help would be greatly appreciated.  I'm coding in APEX 4.1.

    Thank you

    -Steve

    Not tested

    merge into current_t c
    using (select *
             from (select m.*,
                          row_number() over (partition by m.id1 order by m.adate desc) rn
                     from master_t m)
            where rn = 1) u
    on (c.id1 = u.id1)
    when matched then update
       set c.adate = u.adate,
           c.attr1 = u.attr1,
           c.attr2 = u.attr2,
           c.attr3 = u.attr3
    when not matched then insert
       (c.id1, c.adate, c.attr1, c.attr2, c.attr3)
    values
       (u.id1, u.adate, u.attr1, u.attr2, u.attr3);

  • How to store historical data?

    Hi,

    I'd be very happy if someone could resolve my issue.

    I am trying to maintain historical data. I created a table using a materialized view, and every time I refresh it with new values, the old values should move to a history table and the current table should be updated with the new values. I am using Oracle 10g. Can someone tell me how to do this?


    Thanks in advance.

    981145 wrote:
    I created a materialized view with the name test.

    Simply create the history table by copying the table structure:

    create table history_table as
    select sysdate as date_backup, a.* from m_view a where 1=2
    

    981145 wrote: I will update this materialized view once a month.

    define a job with DBMS_SCHEDULER

    981145 wrote:
    When I refresh, the old values of the test table should move to test_hist and test should be updated with the new values.

    When the job runs, it must first copy all the data from m_view into the history table:

    insert into history_table
    select sysdate, a.* from m_view a
    

    and then call the dbms_mview.refresh:

    DBMS_MVIEW.REFRESH('M_VIEW', 'C'); -- complete refresh
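    Putting those pieces together, a hedged sketch of the monthly job (the job name and schedule are made up for illustration):

    begin
      dbms_scheduler.create_job(
        job_name        => 'HIST_REFRESH_JOB',   -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin
                              insert into history_table
                              select sysdate, a.* from m_view a;
                              commit;
                              dbms_mview.refresh(''M_VIEW'', ''C'');
                            end;',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
        enabled         => true);
    end;
    /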
    
  • Historical data on locked objects

    Hello!

    I am using Oracle DB 9.2.0.8.
    Is it possible to get historical data on the objects engaged in the shared pool,
    by analogy with Note ID 163424.1, which describes how to see the current picture in the buffer cache?


    Thank you and best regards,
    Paul

    Hello

    check the following views:

    DBA_HIST_LATCH_NAME
    DBA_HIST_LATCH
    DBA_HIST_LATCH_CHILDREN
    DBA_HIST_LATCH_PARENT
    DBA_HIST_LATCH_MISSES_SUMMARY
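    As an untested sketch (these AWR views exist in 10g and later; check the column list against your version), historical latch statistics per snapshot could be pulled with something like:

    select s.begin_interval_time,
           l.latch_name,
           l.gets,
           l.misses,
           l.sleeps
      from dba_hist_latch l
      join dba_hist_snapshot s
        on s.snap_id = l.snap_id
       and s.dbid = l.dbid
       and s.instance_number = l.instance_number
     where l.latch_name like 'shared pool%'
     order by s.begin_interval_time;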

    Best regards
    Nikolai

  • Historical data report

    I am trying to get three kinds of historical data about what happened on a specific virtual machine and a specific ESX box in a cluster over a specific time range.

    vMotion: I saw a script by LucD that gets me the vMotions for that day http://communities.VMware.com/message/1165979#1165979 for a specific VM... but I don't know how to see the vMotions that were done on this ESX box as a whole.

    Get-Stat: memory, CPU and IO for a period of time. Get-Stat works well live... but how does it work with historical data? I tried something like: Get-Stat -Entity $vm -Stat mem.usage.average, disk.usage.average with date ranges... but I seem to have messed up somewhere?

    Finally, I am trying to use Get-VIEvent for the info, errors and warnings for that day. I seem to be doing this based on an example from LucD, but what is strange is that it does not match the vMotion script I ran earlier (some DRS events are in this viewer and not in the earlier vMotion script).

    Any help would be much appreciated.

    vMotion: the script you are referring to had a slightly different purpose. It looks for all the tasks that were triggered by DRS for a specific virtual machine. In other words, all the vMotion tasks that DRS initiates.

    Once the tasks are found, the script collects all the events associated with each task.

    And from these events, the source and destination ESX hosts are extracted.

    And yes, the script can be used for an ESX host. You just need to pass the MoRef of an ESX server in the entity field.

    Get-Stat: it depends on how you have the historical intervals configured.

    Have a look at them in the VI Client (VIC).

    Depending on what you have set there for each of the four intervals, the statistics data will be kept for a specific period of time.

    Also important are the statistics levels that you define for each interval; they determine which counters are available in each interval.

    What you get from the script are only the events that are related to DRS-initiated vMotions.

    If you do a manual vMotion, it will not be in the list produced by the script.

  • Maintenance of historical data

    Dear members,
    This is my second post in the forum. Let me explain the scenario first:

    "We have 2 tools - tool1 and tool2 - pointing to 2 different databases - db1 and db2 respectively. Currently, db1 performance is very poor due to the huge amount of data. We should only keep the most recent 18 months of data in db1. The older data beyond 18 months must be in db2 (in read-only mode), which tool2 connects to. So, whenever I need historical data, I will use tool2. At regular intervals the db1 data should be moved to db2."

    My idea is to use partitioning and a logical standby. At the end of each month, the oldest month of data would be moved to db2. Please let me know whether this will work for the concept above. If so, how do I implement it, and if not, what would be the right solution for this?

    Kind regards
    Mani
    TCS

    Partitioning is great on the source side (assuming you partition by date, of course).

    I'm not sure how a logical standby would help on the destination. A logical standby is meant to keep the standby database up to date with the primary; the standby database would not be read-only, it would constantly be applying transactions from the primary. And when you drop a partition on the primary, that drop would be passed on to the standby, so the standby would not keep the history.

    Instead of a logical standby, you could use Streams to replicate transactions and configure the streams to ignore certain DDL operations such as partition drops. This would allow you to maintain history on db2, but would not give you a read-only db2 database.

    You could potentially do partition exchanges on db1 on a regular basis: move the data that you want to remove into an intermediate non-partitioned table, move that table to db2 (via export/import, transportable tablespaces, etc.), and do a partition exchange to load the data into the partitioned table on db2. That gives you a read-only db2 and lets you maintain history, but requires some work to move the data each month, as sketched below.
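    A hedged sketch of that approach (all object and partition names here are hypothetical):

    -- On db1: swap the oldest month out of the partitioned table into a staging table
    alter table txn_history
      exchange partition p_2012_01 with table txn_history_stage
      including indexes without validation;

    -- Move txn_history_stage to db2 (export/import, transportable tablespace, ...),
    -- then on db2 swap it into the matching partition of the read-only history table
    alter table txn_history_archive
      exchange partition p_2012_01 with table txn_history_stage
      including indexes without validation;

    -- Back on db1, drop the now-empty partition
    alter table txn_history drop partition p_2012_01;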

    Of course, if you decide to partition db1, and assuming you do it correctly, I would tend to think that the performance problems would go away (or at least that keeping the old data around would no longer affect performance). One of the points of partitioning is that Oracle can then prune partitions for your queries so that they only look at the current partition, if that is all tool1 is interested in. So maybe all you need to do is partition db1 and you don't need db2 at all.

    Justin

  • How to manage updates of data in ArrayDataModel

    Hello

    I load some data from a JSON file into a QVariantList, which is then added to an ArrayDataModel that is used to display the data in a ListView. It all works very well. Now, I want to update certain values of a ListViewItem and propagate these changes to the ArrayDataModel -> QVariantList -> JSON file. How would I handle this? Should I update the DataModel, the ListView, or even the QVariantList? Loading and displaying data in a ListView is very well documented, but I can't find much information on the reverse direction.

    Any help will be greatly appreciated!

    You can access the ListView's dataModel items, so it is possible to manipulate them directly.

    Outside the ListView, connect to the dataModel signals such as:

    void   itemAdded (QVariantList indexPath)
    void    itemUpdated (QVariantList indexPath)
    void    itemRemoved (QVariantList indexPath)
    void    itemsChanged (bb::cascades::DataModelChangeType::Type eChangeType=bb::cascades::DataModelChangeType::Init, QSharedPointer< bb::cascades::DataModel::IndexMapper > indexMapper=QSharedPointer< bb::cascades::DataModel::IndexMapper >(0))
    

    You probably won't have to subscribe to all of them, since you know what changes you make to the dataModel. For example, if you only update elements, subscribe to itemUpdated, etc.

    In the signal handlers, extract the changes from the dataModel and save them to disk.

    PS: Don't forget to subscribe to the changes made when the dataModel is initially populated.
