OmniPortlet and Oracle data dictionary

Hello
I have this problem. I have to develop a portlet in order to:

- Send an email to all the users who belong to groups whose names begin with "shiftlist".

For example, if we have groups:

1. shiftlist_XX
2. shiftlist_YY
3. AA

I send email to all users who belong to the first and second groups.

Someone told me to query the data dictionary (SELECT username FROM all_users WHERE username LIKE 'shiftlist_%') to get the desired users.

In your opinion, is it possible to create a portlet with OmniPortlet to get this list of users?

Thank you very much

You can create an OmniPortlet with standard SQL and display the users, but if you want to send an email to them, I think the best approach is to develop a regular JSR 168 portlet.
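As a sketch, the data-dictionary query suggested above could look like the following. Note that `_` is itself a single-character SQL wildcard, so it should be escaped if a literal underscore is intended, and that ALL_USERS lists database accounts, which may or may not correspond to your portal users:

```sql
-- List database accounts whose names start with SHIFTLIST_
-- (ESCAPE makes the underscore literal rather than a wildcard).
SELECT username
  FROM all_users
 WHERE username LIKE 'SHIFTLIST\_%' ESCAPE '\';
```

Usernames are normally stored in uppercase, hence the uppercase pattern.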

Tags: Fusion Middleware

Similar Questions

  • Options for Oracle GoldenGate and Oracle Data Guard

    Hello

    Please give me suggestions on Oracle GoldenGate vs. Oracle Data Guard. Based on the options, I will choose one for the implementation.

    Thanks in advance

    Thank you
    Vincent

    Hello

    Oracle Dataguard:

    1. The primary and standby database platforms must be the same. (In 11g, however, heterogeneous Data Guard configurations are supported. Example: we can implement Oracle Data Guard between Oracle Linux 6.2 Server (x86_64) and Microsoft Windows 2008 Server R2 (x64).)

    2. The Oracle Database version must be the same on source and target.

    3. No additional license is required to use Oracle Data Guard.

    Oracle GoldenGate:

    1. The operating systems of the source and target databases do not have to be the same.

    2. The source and target database versions do not have to be the same (including the database software).

    3. An Oracle GoldenGate software license is required on both the source and the target database.

    Hope it helps...

    Thank you
    LaserSoft

  • Oracle data dictionary table partition information

    Hi guys,

    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for HP-UX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production


    The DW contains a number of tables. Some of them have partitions. Is there a data dictionary view that shows which partitions belong to which tables?

    Thanks in advance!

    Published by: abyss on May 24, 2011 10:19
       select * from all_tab_partitions where table_owner=  
    

    Choose what you want in the select list.

    for example

        select table_name,partition_name,tablespace_name from all_tab_partitions
                  where table_owner='SCOTT'
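    If you only need to know which tables are partitioned at all, and how, ALL_PART_TABLES returns one row per partitioned table (a sketch, reusing the SCOTT owner from the example above):

```sql
-- One row per partitioned table, with the partitioning scheme and
-- the number of partitions.
select table_name, partitioning_type, partition_count
  from all_part_tables
 where owner = 'SCOTT';
```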
    
  • Data dictionary... Help please

    I have created this query for my college course. I need help. By my reckoning, I have half of the requirement. I need another join clause in the statement for the views, but I don't know what to do. Any help will be appreciated! Thank you!



    Create and run a simple query on the Oracle data dictionary to show all the foreign key constraints in the HR schema. Note that foreign key constraints are constraint type 'R'. For each constraint, list the constraint type, the constraint name, the name of the referencing table, the position of the referencing column, the referencing column name, the name of the referenced constraint (i.e. the primary key constraint name from another table), the referenced table and the referenced columns. Sort your results by constraint name, table_name and position.

    HINTS: You'll need two different data dictionary views for this query. One of the views should be mentioned twice in the FROM clause, so you will need two alias names. With this arrangement, you will also need two join conditions.


    That's what I have so far...

    SELECT uc.constraint_type, uc.constraint_name, uc.table_name, r_constraint_name,
    ucc.table_name, ucc.position, ucc.column_name, ucc.constraint_name
    FROM user_constraints uc JOIN user_cons_columns ucc
    ON uc.table_name = ucc.table_name
    AND uc.constraint_name = ucc.constraint_name
    WHERE constraint_type = 'R'
    AND r_constraint_name IN
    (SELECT constraint_name
    FROM all_constraints
    WHERE constraint_type
    IN ('P', 'U')
    AND table_name = 'hr')
    ORDER BY uc.constraint_name, ucc.table_name, ucc.position;

    Try this.

    SELECT uc.constraint_type,
           uc.constraint_name,
           uc.table_name,
           uc.r_constraint_name,
           ucc1.table_name ref_table_name,
           ucc.position,
           ucc.column_name,
           ucc1.column_name ref_column_name
      FROM user_constraints uc
           JOIN user_cons_columns ucc
             ON uc.table_name = ucc.table_name
            AND uc.constraint_name = ucc.constraint_name
           JOIN user_cons_columns ucc1
             ON uc.r_constraint_name = ucc1.constraint_name
            AND ucc.position = ucc1.position
     WHERE uc.constraint_type = 'R'
    ORDER BY uc.constraint_name, ucc.table_name, ucc.position;
    

    G.

  • data dictionary views and result_cache

    Hello
    Are there special considerations when caching the results of a function that uses data dictionary views to determine its results?
    This issue arose because I have such a result-cached function whose result_cache objects do not get invalidated even when the underlying data dictionary views have changed, so the function returns "stale" values. Adding a RELIES_ON clause has not helped either.

    Here's what I'm trying to do:
    The function accepts a table name as its input and tries to determine all of its child tables by using the sys.dba_constraints view. The results are returned in a PL/SQL table and are cached, so that subsequent calls to this function use the result cache.
    Everything works well for the parent/child tables that were created before the function itself. All those results are correct.

    The problem starts when a child table is added to an existing parent table.
    The V$RESULT_CACHE_OBJECTS view shows the result of this function as "Published", and the output of the function does not show the newly created child table.
    The same happens when an existing child table is dropped; the function continues to return it in the output, since the result is fetched from the result cache.

    Oracle version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    >
    Restrictions on Result-Cached Functions

    To be result-cached, a function must meet all of these criteria:

    * It is not defined in a module that has invoker's rights, or in an anonymous block.
    * It is not a pipelined table function.
    * It does not reference dictionary tables, temporary tables, sequences, or nondeterministic SQL functions.
    For more information, see Oracle Database Performance Tuning Guide.
    * It has no OUT or IN OUT parameters.
    * No parameter has one of these types:
    o BLOB
    o CLOB
    o NCLOB
    o REF CURSOR
    o Collection
    o Object
    o Record
    * The return type is none of these:
    o BLOB
    o CLOB
    o NCLOB
    o REF CURSOR
    o Object
    o Record or PL/SQL collection that contains an unsupported return type

    It is recommended that a result-cached function also meet these criteria:

    * It has no side effects.
    For more information about side effects, see "Subprogram Side Effects".
    * It does not depend on session-specific settings.
    For more information, see "Making Result-Cached Functions Handle Session-Specific Settings".
    * It does not depend on session-specific application contexts.
    >
    http://download.Oracle.com/docs/CD/E11882_01/AppDev.112/e17126/subprograms.htm#LNPLS698
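    As a minimal sketch of the kind of function being discussed (the function, collection type usage, and table names here are invented for illustration), a result-cached function that reads the data dictionary could look like this. Per the restriction quoted above, the dependency on dba_constraints is exactly what the result cache does not track, which matches the stale results described in the question:

```sql
-- Hypothetical example: RESULT_CACHE on a function that reads the data
-- dictionary. The cached result is NOT invalidated when dictionary
-- content changes (e.g. a new child table is created).
CREATE OR REPLACE FUNCTION child_tables (p_table_name IN VARCHAR2)
  RETURN sys.odcivarchar2list
  RESULT_CACHE
IS
  l_children sys.odcivarchar2list;
BEGIN
  SELECT c.table_name
    BULK COLLECT INTO l_children
    FROM dba_constraints c
    JOIN dba_constraints p
      ON c.r_constraint_name = p.constraint_name
     AND c.r_owner           = p.owner
   WHERE c.constraint_type = 'R'
     AND p.table_name      = UPPER(p_table_name);
  RETURN l_children;
END child_tables;
/
```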

  • ODI and Siebel CRM - Oracle Siebel CRM Knowledge Modules in Oracle Data Integrator

    Hi all

    I'm looking for information on ODI and Siebel CRM.

    Part 1:

    For the moment, we do not use ODI to load data into Siebel; the guys on the team here do a lot of PL/SQL against the Siebel EIM system tables. (I'm not a Siebel person.)

    Part 2:

    Then from these Siebel tables, I use ODI BI Apps to finally get the data into some OBIEE dashboards.

    I thought that maybe there would be a better way to do part 1 using ODI.

    As I understand it, there are Oracle Siebel CRM Knowledge Modules in Oracle Data Integrator.

    http://docs.Oracle.com/CD/E21764_01/doc.1111/e17466/oracle_siebel.htm#ODIAA461

    Here's what we have:

    ODI 11.1.1.7.0 Patch 18204886

    ODI BI Apps 11.1.1.8.1

    Siebel Version 8.1.1

    Pointers from people who have done this would be appreciated.

    Eric

    After talking with my Siebel colleague, here's what I understand:

    ODI will load the EIM tables, which, he said, is 75% of the job; the last 25% is generating the .ifb file and then running that file via the Server Manager.

    Yes, ODI can load the data, but according to my Siebel colleague the .ifb file will have to be revised manually...

    When I get a moment, I'll do a small test case and see if I can build and run a simple example of this in ODI.

    A small proof of concept.

    Eric

  • Which LKM and IKM for fast data loading between MSSQL 2005 and Oracle 11g

    Hello

    Can anyone help me decide which LKMs and IKMs are best for data loading between MSSQL and Oracle?

    The staging area is Oracle. I need to load around 400 million rows from MSSQL to Oracle 11g.

    Best regards
    Muhammad

    "LKM MSSQL to ORACLE (BCP SQLLDR)" may be useful in your case; it uses BCP and SQL*Loader to extract from MSSQL and load into the Oracle database.

    Please see the details on these KMs at http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/ms_sqlserver.htm#BGBJBGCC

  • data replication with ODI between SQL SERVER and ORACLE

    Hello world
    First of all, I want to migrate database tables from SQL SERVER to ORACLE DB.
    Then I want to set up online (synchronous) replication from SQL SERVER to ORACLE using ODI.

    I have not used before ODI.
    How to use the ODI for this?

    1. Create a master repository and connect to the Topology Manager.
    2. In the Topology Manager, configure the following:
    2.1. Create a data server for the Oracle database under Oracle in the physical architecture.
    2.2. Create a data server for the SQL Server database under SQL Server in the physical architecture. For this you need the JDBC driver for SQL Server.
    2.3. Set up the logical schema and context.
    2.4. Create a work repository on the Topology Manager repositories tab.
    3. Connect to the Designer and follow these steps:
    3.1. Create a model for the SQL Server source and reverse-engineer (import) the datastores (tables) into the model.
    3.2. Do the same for the Oracle target model.
    3.3. Create an interface (mapping); define the source table from the schema, then add the target and bind them.
    3.4. On the flow tab, set the Knowledge Modules (KMs) to perform the load. You must have imported the KMs before creating the interface.
    3.5. In the IKM, set 'create target table' to 'yes'.
    4. Run the interface to load data from SQL Server to Oracle.

    Thank you
    Fati

  • Need help with Oracle SQL to merge records according to effective and term dates

    Hi all

    I need help to find this little challenge.

    I have groups and flags, with effective dates and term dates against those flags, as in the following example:

    GroupName Flag_A Flag_B Eff_date Term_date
    Group_A   Y      Y      20110101 99991231
    Group_A   N      N      20100101 20101231
    Group_A   N      N      20090101 20091231
    Group_A   N      N      20060101 20081231
    Group_A   N      Y      20040101 20051231
    Group_A   Y      Y      20030101 20031231
    Group_B   N      Y      20040101 99991231
    Group_B   N      Y      20030101 20031231

    As you can see, Group_A had the same flag combination (N, N) for three successive periods. I want to merge all time periods with the same flags into one, where the effective date will be the earliest and the term date the latest.

    So the final result should look like this:

    GroupName Flag_A Flag_B Eff_date Term_date
    Group_A   Y      Y      20110101 99991231
    Group_A   N      N      20060101 20101231
    Group_A   N      Y      20040101 20051231
    Group_A   Y      Y      20030101 20031231
    Group_B   N      Y      20030101 99991231

    Thanks for your help

    Here's the DDL script

    drop table TMP_group_test;

    create table TMP_group_test (
      groupname varchar2(8)
    , flag_a    varchar2(1)
    , flag_b    varchar2(1)
    , eff_date  varchar2(8)
    , term_date varchar2(8)
    );

    insert into TMP_group_test values ('Group_A', 'Y', 'Y', '20110101', '99991231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20100101', '20101231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20090101', '20091231');
    insert into TMP_group_test values ('Group_A', 'N', 'N', '20060101', '20081231');
    insert into TMP_group_test values ('Group_A', 'N', 'Y', '20040101', '20051231');
    insert into TMP_group_test values ('Group_A', 'Y', 'Y', '20030101', '20031231');
    insert into TMP_group_test values ('Group_B', 'N', 'Y', '20040101', '99991231');
    insert into TMP_group_test values ('Group_B', 'N', 'Y', '20030101', '20031231');

    commit;

    Post edited by: user13040446

    This is the closest I have come to a solution.


    I create two rankings:

    Rnk1: rank() over (partition by groupname order by eff_date desc): this ranking sorts each group's records from most recent to oldest, restarting at each group.

    Rnk2: dense_rank() over (partition by groupname order by flag_a, flag_b): this ranking gives each group/flag combination a number, so that rows are classified into "families".

    Then I use the analytic MIN function:

    min(eff_date) over (partition by groupname, rnk2): the idea is that, for each member of the same family, the new effective date is the family's minimum (and the maximum for the term date); at the end I just need DISTINCT so that the duplicates are gone.

    Now the problem. As you can see from the query below, records 1 and 6 (as identified by rownum) are placed in the same family because they have the same flag combination, but they are not successive, so each must keep its own effective date.

    If only I could distinguish between these two, that would solve my problem.


    Query:


    select rownum, groupname, flag_a, flag_b, eff_date, term_date, rnk1, rnk2,
           min(eff_date) over (partition by groupname, rnk2) min_eff
    from
    (
        select rownum,
               groupname, flag_a, flag_b, eff_date, term_date,
               rank() over (partition by groupname order by eff_date desc) rnk1,
               dense_rank() over (partition by groupname order by flag_a, flag_b) rnk2
        from dsreports.tmp_group_test
    )
    order by rownum;

    Hello

    user13040446 wrote:

    Hi KSI.

    Thanks for your comments; you were able to distinguish between the highlighted lines, but lost lines 2, 3 and 4, which are supposed to have the same min date = 20060101.

    Please see the desired table for the final result I want to reach.

    Thanks again

    The first answer is basically correct, but in the main query you want to use MIN as an aggregate function, not an analytic function, and GROUP BY the columns with common values, like this:

    WITH got_output_group AS
    (
        SELECT groupname, flag_a, flag_b, eff_date, term_date,
               ROW_NUMBER () OVER (PARTITION BY groupname
                                   ORDER BY eff_date
                                  )
             - ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                                   ORDER BY eff_date
                                  )  AS output_group
        FROM tmp_group_test
    )
    SELECT groupname, flag_a, flag_b,
           MIN (eff_date)  AS eff_date,
           MAX (term_date) AS term_date
    FROM got_output_group
    GROUP BY groupname, flag_a, flag_b, output_group
    ORDER BY groupname, eff_date DESC;

    The result I get is

    GROUP_NA F F EFF_DATE TERM_DAT
    -------- - - -------- --------
    Group_A  Y Y 20110101 99991231
    Group_A  N N 20060101 20101231
    Group_A  N Y 20040101 20051231
    Group_A  Y Y 20030101 20031231
    Group_B  N Y 20030101 99991231

    which is what you asked for.
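    To see why the difference of the two ROW_NUMBERs isolates consecutive runs, it can help to select both numberings side by side (a sketch against the tmp_group_test table created earlier in the thread):

```sql
-- Rows in the same consecutive run of (flag_a, flag_b) share an
-- output_group value; a non-consecutive repeat of the same flags
-- gets a different value, so it is merged separately.
SELECT groupname, flag_a, flag_b, eff_date,
       ROW_NUMBER () OVER (PARTITION BY groupname
                           ORDER BY eff_date)  AS rn_all,
       ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                           ORDER BY eff_date)  AS rn_flags,
       ROW_NUMBER () OVER (PARTITION BY groupname
                           ORDER BY eff_date)
     - ROW_NUMBER () OVER (PARTITION BY groupname, flag_a, flag_b
                           ORDER BY eff_date)  AS output_group
FROM tmp_group_test
ORDER BY groupname, eff_date;
```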

  • Import data dictionary stored procedures

    This is a revamp of an old archived thread:

    Re: import of data dictionary

    .. still relevant today with Data Modeler 4.1 (standalone) on Oracle 12c: I seem unable to import any stored procedure/function from the data dictionary, even though I follow the steps (see link) and get as far as the summary confirming the detection of the target objects:

    Capture.JPG

    However, at the time of the merge, the DDL preview (for import/merge, not generation) says:

    - CREATE PROCEDURE: 0
    - CREATE FUNCTION: 0

    Am I missing a trivial step?

    THX

    Hello

    I think I know what is causing the problem.

    There is a bug in version 4.1 where various object types (including functions and stored procedures) do not appear in the compare tree view if the "include physical properties in compare" property is not set.

    So the solution is to ensure that this property is set.  It is on the Data Modeler > DDL > DDL/Comparison preferences page.

    This bug will be fixed in the next version.

    David

  • A full import by using datapump will overwrite the target database data dictionary?

    Hello

    I have an 11g database of 127 GB. I did a full export using expdp as the SYSTEM user. I will import the created dump file (which is 33 GB) into a target 12c database.

    When I do the full import into the 12c database, its data dictionary is updated with new data. But what about the data the dictionary already contained? Will that also be changed?

    Thanks in advance

    Hello

    In addition to the other replies:

    To start, you need to know some basic things:

    The data dictionary base tables are owned by SYS, and most of these tables are created when the database is created.

    Thus, different Oracle database versions may have fewer or more data dictionary tables, with different structures; so if these SYS base tables were exported and imported between different Oracle versions, database features could be damaged because the tables would not correspond with the database version.

    See the Ref:

    SYS, owner of the data dictionary

    The Oracle database user SYS owns all the base tables and the user-accessible views of the data dictionary. No Oracle database user should ever alter (UPDATE, DELETE, or INSERT) any rows or schema objects contained in the SYS schema, because such activity can compromise data integrity. The security administrator must keep strict control of this central account.

    Source: http://docs.oracle.com/cd/B28359_01/server.111/b28318/datadict.htm

    For this reason, the export utilities do not export the SYS dictionary base tables, and this is noted in the documentation:

    Data Pump export Modes

    Note:

    Several system schemas cannot be exported because they are not user schemas; they contain Oracle-managed metadata and data. Examples of system schemas that are not exported include SYS, MDSYS, and ORDSYS.

    Source: https://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#SUTIL826

    That's why import cannot modify/alter/drop/create the data dictionary base tables. If they cannot be exported, they cannot be imported.

    Import just adds new non-SYS objects/data to the database; as a consequence, new rows are added to the dictionary base tables (for new users, new tables, PL/SQL code, etc.).

    I hope this answers your question.

    Kind regards

    Juan M

  • Data dictionary Synchonizing (reading) - database

    Hello

    I have been working for a while with Oracle Data Modeler, using the JDBC tab for the database connection. The database is SQL Anywhere, and the connection details look like: jdbc:sybase:Tds:localhost:2638?ServiceName=Hades&CHARSET=utf8.

    Unfortunately, I still have no clear idea about the relationship between the data dictionary and the connected database (via the JDBC adapter).

    - When is the database read? Is it read into the dictionary? I guess the connected database's information is read into ODM at startup?

    - If a change occurs in the database (for example, a column, a table, or a foreign key is added), can I manually update the ODM dictionary?

    - In the connection details, I can replace localhost with an IP address and press the Test button successfully. But I'm not sure: is the data dictionary now "filled" from the newly connected database without restarting ODM?


    Any clarification, or pointers to documentation, on how and when the data dictionary and the database are synchronized would be very welcome.

    Best regards

    Robert

    Hi Robert,

    Just to clarify: "data dictionary" refers to the metadata definitions held in the database.  File > Import > Data Dictionary imports these definitions into Data Modeler.

    (There is no separate "data dictionary" inside Data Modeler.)

    The blue button on the right ("Synchronize data dictionary with model") is similar in effect to opening your model and then doing File > Import > Data Dictionary (and setting the option to swap the target model in step 2 of the wizard).

    Both show a comparison between the current relational model and the current definitions in the database.

    David

  • DBMS_PARALLEL_EXECUTE TASK NOT VISIBLE IN DATA DICTIONARY (and no chunking of data)

    Hi all

    I have some standard code we use for typical parallel processing with 'dbms_parallel_execute':
    dbms_parallel_execute.create_Task
    dbms_parallel_execute.create_chunks_by_rowid
    dbms_parallel_execute.run_task
    get the status of the task, and retry/resume processing if needed
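    A minimal sketch of that sequence (the task and table names here are invented for illustration; the SQL statement must use the :start_id/:end_id binds that create_chunks_by_rowid provides):

```sql
DECLARE
  l_task   VARCHAR2(30) := 'demo_task';   -- hypothetical task name
  l_sql    CLOB;
  l_status NUMBER;
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task (task_name => l_task);

  -- Split the target table into rowid ranges of roughly 10,000 rows.
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid (
      task_name   => l_task,
      table_owner => USER,
      table_name  => 'BIG_TABLE',         -- hypothetical table
      by_row      => TRUE,
      chunk_size  => 10000);

  l_sql := 'UPDATE big_table SET processed = ''Y''
             WHERE rowid BETWEEN :start_id AND :end_id';

  -- Run the statement once per chunk, with 8 parallel jobs.
  DBMS_PARALLEL_EXECUTE.run_task (
      task_name      => l_task,
      sql_stmt       => l_sql,
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 8);

  l_status := DBMS_PARALLEL_EXECUTE.task_status (l_task);
  IF l_status = DBMS_PARALLEL_EXECUTE.FINISHED THEN
    DBMS_PARALLEL_EXECUTE.drop_task (l_task);
  END IF;
END;
/
```

    While such a task exists, it should be visible in dba_parallel_execute_tasks and its chunks in dba_parallel_execute_chunks.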

    But I'm not able to run it successfully in the production environment, although I have tested the same code on staging several times.

    I am not able to see the task information in dba_parallel_execute_tasks while my job is running in the production Oracle database.

    It simply goes into the retry section:
    WHILE (l_retry < 2 AND l_task_status != DBMS_PARALLEL_EXECUTE.FINISHED)
    LOOP
      l_retry := l_retry + 1;
      DBMS_PARALLEL_EXECUTE.resume_task (l_task_name);
      l_task_status := DBMS_PARALLEL_EXECUTE.task_status (l_task_name);
    END LOOP;

    and comes up with this exception:

    ORA-06512: at "SYS.DBMS_PARALLEL_EXECUTE", line 458
    ORA-06512: at "SYS.DBMS_PARALLEL_EXECUTE", line 494
    ORA-06512: at "pkg_name", line 1902
    ORA-29495: invalid state for resume task
    From the exception it seems something went wrong with the state of the task, but I suspect the task itself is not being created, and the chunk data for the specific table is not being stored.


    Have you ever encountered this with your code? I'm really at a loss as to what goes wrong: why I am not able to see the task in the data
    dictionary, why nothing is processed, and why I am not able to see the chunk information when my job executes.

    Hi all

    For this particular issue, chunking was in fact happening; somehow I wasn't able to see it in Toad, but I did when I queried through SQL*Plus. Something strange with Toad.

    But I debugged the issue and found it to be a failure occurring after the work was split into eight parallel threads.

    I got all the info related to these jobs when I queried dba_scheduler_job_run_details and found the status of the jobs as "FAILED", with certain security policies failing in the background processes that run the jobs, where they check the calling OS schema and IP address. I then raised a request for an ACL for this schema, which fixed the issue.

    Hope this info is useful.

    Thank you

    Sunil

  • Import of data dictionary

    Hello

    I have a problem with importing the data dictionary.

    1. I have all the tables in an Oracle 11 DB schema (Test is the schema name).

    2. In Data Modeler 4.0 I use Import / Data Dictionary.

    3. In the data dictionary import wizard I create the connection, select the schema and select the objects to import.

    4. After completing this process I see the tables in the relational model, but with the schema name: schema_name.table_name. I would like just table_name. I tried to use the compare-models options (before merging), but without result.

    Regards

    Hello

    Dimitar's response covers stopping the schema name from being displayed on the diagram.

    If you don't want the schema name to appear in the generated DDL, you can unset the "Include Schema in DDL" option on the Data Modeler/DDL preferences page (which is accessible in the Tools menu).

    David

  • Re-import table data dictionary

    Hello

    I recently created a relational model with more than 2,000 tables by importing from the data dictionary. Unfortunately, I left out a few tables when I originally created the model. How can I add the tables after the fact? I tried going through the re-import process again, but it doesn't seem to add the tables. Thank you!

    Hello

    Are there any error messages in the log file?  (This is normally the datamodeler.log file in the datamodeler\datamodeler\log folder, unless you set a different location on the environment/log preferences page.)

    Note that if you want to update the Java VM properties, the file to update is different for Data Modeler version 4.0.0.833.  AddVMOption settings should be updated in the product.conf file, not in the datamodeler64.conf file. Assuming you are on Windows 7, you should be able to find the product.conf file in the folder C:\Users\\AppData\Roaming\Oracle SQL Developer Data Modeler\4.0.0.833.

    David
