Fields "Synonym of entity source" and "Target entity synonym" for?

Hello

This is a newbie question.

Can someone please tell me what the fields "Source Entity Synonym" and "Target Entity Synonym" in the relationship properties dialog box are for? They are drop-down boxes and look as if you should be able to select from the list of synonyms defined for the entity in the entity properties dialog box.

Thanks in advance.

John

Using SQL Developer Data Modeler Version 4.1.1.888 on Windows 7 Home Premium 64-bit.

Hi John,

If you right-click on an entity object on the diagram and select Create Synonym from the context menu, another object representing the same entity is added to the diagram.

The fields "Synonym of entity Source" and 'Target entity synonym' in the relationship properties dialog box allow you to specify which of these multiple representations is connected to the relationship on the diagram.

David

Tags: Database

Similar Questions

  • AD used as a trusted source and target system

    Hello

    I have a requirement to use AD as both a trusted source and a target system from day zero and through a transitional period.  We have 18 applications to integrate with OAM for SSO.  Currently, authentication and authorization for these applications are done via AD.  However, the client wants to move to using LDAP for SSO.  The first phase includes 3 of the 18 apps.

    On day zero, I use AD as a trusted source to push users into OIM.  Then I run AD as a target system to link the users to their existing AD accounts.  Their AD groups will also be reconciled into OIM.

    Because there will be a transition period and the customer does not want to change the help desk process for creating AD accounts (for internal and external users), they asked that we continue to allow AD accounts to reconcile into OIM as both the trusted source and the target system.

    I have not used the AD connector as a trusted source before.  I intend for the AD trusted source reconciliation task to run first, then the AD target system reconciliation task to run after that.  It's the same AD connector.

    This would continue for as long as the customer wishes, until they want to switch to OIM for user creation and AD account provisioning through access policies.

    Does this sound OK to you?  Is there a catch I haven't thought of?

    Thank you

    Khanh

    Yes, you can run the trusted recon first and get all the identities created in OIM.

    Later, you can run the scheduled task for target recon to link the AD accounts with the OIM user profiles.

    Note: there is a field in the AD IT resource that needs updating if you want to switch between target and trusted recon.

    Configuration Lookup:

    This parameter holds the name of the lookup definition that stores configuration information used during reconciliation and provisioning.

    If you have configured your target system as a target resource, enter Lookup.Configuration.ActiveDirectory.

    If you have configured your target system as a trusted source, enter Lookup.Configuration.ActiveDirectory.Trusted.

    Default value: Lookup.Configuration.ActiveDirectory

    ~ J

  • ODI procedure with slow performance (SOURCE and TARGET are different Oracle databases)

    Hi experts,

    I have an ODI procedure, but it runs with slow performance (SOURCE and TARGET are different Oracle databases); you can see it below.

    My question is:

    Is it possible to write an Oracle BULK COLLECT in the 'Command on Target' (below)? Or:

    Is there an ODI KM that performs the task below in a faster way? If so, which KM can you guys suggest?

    I found the 'Oracle Control Append (DBLINK)' KM, but I am trying to avoid creating a database link.

    ===============================================================================

    * COMMAND ON SOURCE (technology: ORACLE, logical schema: ORACLE_DB_SOURCE):

    SELECT NUM_AGENCIA, NUM_CPF_CNPJ, NOM_PESSOA
    FROM <%=odiRef.getSchemaName("D")%>.<%=odiRef.getOption("P_TABELA")%>

    ===============================================================================

    * COMMAND ON TARGET (technology: ORACLE, logical schema: ORACLE_DB_TARGET):

    BEGIN
      INSERT INTO DISTSOB_OWNER.DISTSOB_PESSOA (NOM_PESSOA, NUM_CPF_CNPJ, FLG_ATIVO)
      VALUES ('#NOM_PESSOA', '#NUM_CPF_CNPJ', 'S');
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        NULL;
    END;

    ===============================================================================


    Thank you guys!

    Please use the IKM SQL to SQL Control Append... You can delete the unnecessary steps in the KM. E.g. if you won't create the I$ table, flow control etc., then you can remove the related steps.

    Please try with that.
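
    For reference, the BULK COLLECT idea from the question can be sketched in plain PL/SQL, but it needs a database link to the source (which the poster wanted to avoid). A minimal, hypothetical sketch (the link name src_db, the source schema/table names, the column sizes, and the 'S' value for FLG_ATIVO are all assumptions):

    DECLARE
      -- collections to hold the source rows fetched in one round trip
      TYPE t_varchar_tab IS TABLE OF VARCHAR2(4000);
      l_nomes t_varchar_tab;
      l_cpfs  t_varchar_tab;
    BEGIN
      -- src_db is a hypothetical database link pointing at the source database
      SELECT NOM_PESSOA, NUM_CPF_CNPJ
        BULK COLLECT INTO l_nomes, l_cpfs
        FROM SOURCE_SCHEMA.SOURCE_TABLE@src_db;

      -- array-bind the inserts; SAVE EXCEPTIONS keeps going past duplicate-key rows
      FORALL i IN 1 .. l_nomes.COUNT SAVE EXCEPTIONS
        INSERT INTO DISTSOB_OWNER.DISTSOB_PESSOA (NOM_PESSOA, NUM_CPF_CNPJ, FLG_ATIVO)
        VALUES (l_nomes(i), l_cpfs(i), 'S');
    EXCEPTION
      WHEN OTHERS THEN
        -- ORA-24381 means FORALL collected row-level errors (e.g. DUP_VAL_ON_INDEX); ignore them
        IF SQLCODE = -24381 THEN
          NULL;
        ELSE
          RAISE;
        END IF;
    END;
    /

    In practice, the KM-based approach above avoids hand-written PL/SQL altogether.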

  • FDM to connect to a relational source and target

    I'm new to this tool and have a few basic questions. I did my research but could not find a clear answer.

    Can I connect to a relational table as both source and target for an FDM application? The relational source isn't EBS or another application, just a relational table. I would like to load data from one relational table into another table using FDM. Am I looking at the wrong tool? Which adapter should I use? Is it the ERPI source adapter? Which target adapter?

    Can someone please advise? Thanks in advance.

    Hello

    If you're new to this tool, you should know that FDM is no longer available as of 11.1.2.4.

    It has been replaced by FDMEE, which is already available in 11.1.2.3.

    In any case, you can extract data from a relational DB by using an integration script. FDM doesn't load data to a non-Hyperion-EPM target database, but you could get it to work using a custom script.

    Regards

  • Standby database with source and target on different endian formats

    Hello

    I am dealing with a case where the source and target servers have different endian formats. I want to implement 11g Data Guard in this environment. How can I work around the problem of different endian formats?

    Thank you

    As indicated in the documentation you quote, Data Guard is not possible when the primary and standby databases are on platforms with different endian formats. If the endian format is the same, heterogeneous platforms may or may not be supported. The definitive documentation seems to be MOS Note 413484.1 (for physical standby) and 1085687.1 (for logical standby).

  • Importing source and target tables into the OLTP and OLAP Informatica folders

    In the Mapping Designer, every time I import a table from my OLTP or OLAP source, it is displayed in its own folder named after the instance it was imported from. The problem is that when I migrate my workflow to the lower repository, it is always looking for these sources and targets. How do I import or migrate the relevant OLTP and OLAP source and target tables in the Designer?

    Appreciate the help.

    Hi, before you migrate you can choose from the options explained below:

    1. Create the same global ODBC connection name for OLAP and OLTP and then import the tables into Informatica.

    2. Alternatively, after you import the source tables with any ODBC connection name, you can change the folder name as below:
    a. Select the table in the Source Analyzer workspace and edit it. On the Table tab, click Rename.
    b. Change the database name to 'OLAP' or 'OLTP' depending on your source.
    c. The source table is automatically moved to the OLAP or OLTP folder.

    3. If the table already exists in OLTP or OLAP, you can use the Reuse or Replace options when importing new mappings during the migration.

    Just test this scenario once and it applies to all source tables. I do the same during migration.

    Hope this helps

  • Source and target tabs in the procedure

    I'm trying to create a procedure to insert data into a table.

    My command on the Target tab:

    INSERT INTO insert_table

    My command on the Source tab:

    SELECT BATCHNUMBER, C, COUNT(*) FROM table_source GROUP BY BATCHNUMBER, C

    I selected the logical schemas correctly in the Source and Target tabs. I get the following error:


    java.sql.BatchUpdateException: ORA-00926: missing VALUES keyword

    What should my commands in the Target and Source tabs be?

    -app

    For string-type data, please enclose the value in single quotes; for example, if C is VARCHAR2:

    INSERT INTO TRGT_TABLE VALUES (#BATCHNUMBER,'#C',#COUNT) 
    

    Please try this
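
    Putting the two tabs together, a hedged sketch of a working source/target pair might look like this (column names are taken from the post; the CNT alias and the target column list are assumptions):

    -- Command on Source (logical schema of the source): selects the rows to load
    SELECT BATCHNUMBER, C, COUNT(*) CNT
      FROM table_source
     GROUP BY BATCHNUMBER, C

    -- Command on Target (logical schema of the target): runs once per source row,
    -- with #column replaced by the value from the current source row
    INSERT INTO insert_table (BATCHNUMBER, C, CNT)
    VALUES (#BATCHNUMBER, '#C', #CNT)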

  • Is it possible to have the source and target schema in the same instance of DB?

    Hi all

    I'm using Oracle 11g Release 1.
    I have set up the source with a different OWB location than the target.
    During deployment I get VLD-3064, and I can't deploy the mapping because of the many "table or view does not exist" warnings.

    Is it possible to have the source and target schemas in the same instance?
    How can I do that?

    Kind regards
    Martin

    Hi Martin!

    1. The target schema must have SELECT rights on the source tables/views
    (run the grant as a user with DBA rights; see the sketch below the list).

    2. "... no generated code will use the database link ..."
    This is only a warning and means there is no need to use a database link; your mapping will even execute faster than if it went through a database link.
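
    A small sketch of the grant from item 1, using hypothetical schema and object names:

    -- run as a user with DBA rights (src_owner, src_customers and owb_target are hypothetical names)
    GRANT SELECT ON src_owner.src_customers TO owb_target;

    -- then, connected as the target schema, confirm the source table is visible
    SELECT owner, table_name
      FROM all_tables
     WHERE owner = 'SRC_OWNER';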

    VLD-3064 error

    Greetings
    Guenther Herzog

  • Schema (source and target)

    Hello

    I have a basic question. I'm now on the Diagram tab of the interface in Designer, and I have no idea where I need to drag the source and target.

    Thanks for the support

    Sam

    Hi Sam,

    I assume you created the following before starting the 'Diagram' of your interface:

    1. A project
    2. Models (according to your requirements)

    Now, after you have provided the information on the "Definition" tab of the interface, in your Diagram you can go to "Models" and then drag and drop the required datastores from the models you created in step 2 above. When you drag in the source and target, you can also use auto-mapping.

    Hope this helps

    Sandeep Reddy, Enti
    HCC
    http://hyperionconsultancy.com/

  • Source and target data store mapping query

    I need to get the source-to-target mapping of an ODI interface.

    Which repository table do I query to get the mapping information?

    E.g.

    Interface: INT_SAMPLE

    Data stores: Source_DataStore with columns (colA, colB, colC); Target_DataStore with columns (colA, colB, colD)

    Now, if I open the QuickEdit tab and expand the Mapping section, the mapping is as follows:

    Source_DataStore.Cola = Target_DataStore.Cola

    Source_DataStore.colB = Target_DataStore.colB



    Now, I want to get the mapping information above, as well as the interface name and the remaining columns that are not mapped, using SQL (is it possible to query the ODI repository for the mapping?).

    Hi Prashant da Silva,

    Are you looking for a query to run against the repository?

    If so, this can help:

    select I.POP_NAME INTERFACE_NAME, ds.ds_name DATA_SET
          , s.lschema_name SOURCE_SCHEMA, NVL(S.TABLE_NAME, S.SRC_TAB_ALIAS) SOURCE_TABLE
          , mt.lschema_name TARGET_SCHEMA, I.TABLE_NAME TARGET_TABLE, c.col_name  TARGET_COLUMN, t.FULL_TEXT MAPPING_CRITERIA
      from SNP_POP i, SNP_DATA_SET ds, SNP_SOURCE_TAB s, SNP_TXT_HEADER t, SNP_POP_MAPPING m, SNP_POP_COL c, SNP_TABLE trg, snp_model mt
      where I.I_POP = DS.I_POP  (+)
        and DS.I_DATA_SET = S.I_DATA_SET (+)
        and T.I_TXT (+) = M.I_TXT_MAP
        and M.I_POP_COL (+) = C.I_POP_COL
        and M.I_DATA_SET = DS.I_DATA_SET (+)
        and C.I_POP (+) = I.I_POP
        and I.i_table = trg.i_table (+)
        and trg.i_mod = mt.i_mod (+);
    

    Just add a filter on UPPER(I.POP_NAME) = UPPER('<interface_name>').

    Kind regards

    JeromeFr

  • Must partition members in the source and target be the same?

    Hello gurus, I have a cube with partitions, and each partition has about 10 dimensions.

    In one dimension I have added a member and written a rule for this member in the source partition. Do I have to include the same member in the target partition when writing the rule?

    No, you don't have to have the same set of members on both sides.

    If you have...

    A, B, C, D

    ...in the source, and...

    A, B, C

    ...in the target, you have two options.

    First option: live with the "cell count mismatch" validation warnings and do nothing.  Then no data will appear in the target for 'D'; data for 'D' simply won't be pushed across the partition.

    Second option: map 'D' to one of 'A', 'B', or 'C'.

  • How can I change the source and target connections in OWB?

    Hello, I'm having trouble changing the source connection in OWB.

    My source login name is the same as before; the DB name and schema name it connects to for the source have now changed. Here are the steps that I did:

    1. Went to Control Center Manager and undeployed the source mapping.
    2. Went to the Connection Explorer, right-clicked on the source connection, then clicked Open Editor; changed the username, IP address, password, and DB name.
    3. Went to Control Center Manager and redeployed the mapping. It does not work... it says "table or view does not exist".

    Is there somewhere else I need to make the change? Am I missing something?

    Any help is appreciated...

    Hello

    Go to the module that uses the mentioned location.
    Edit the module and set the metadata location to the correct location.
    Go to Data Locations; the correct location should be among the selected locations. If it is, remove it and add it again.

    Now go to the module's configuration and then to the Identification section.
    Check the location value and make sure that it uses the correct location.

    I think it is sometimes useful to change the location to another location and back to the correct location.
    I hope this helps.

    Kind regards

    Emile

  • automatically create source and target tables

    Hello
    I need to convert many COBOL flat files (each one has a file description and a data file) into Oracle tables.
    I have just created my first project (using the 'flat file to Oracle table' online documentation) and it works.

    Is it possible to automate the operation without manually creating each single source table and target table?
    Where can I find a manual?

    Thanks for any advice
    Fabio

    Fabio,

    I do not see how ODI could do this process automatically except by accessing the repository directly, but that won't work for execution repositories.

    I suggest you do it manually...

    On the other hand, if you do not need updates, change to an SQL Append IKM, which only does inserts.

    Do not forget to set the IKM's 'Flow Control' option to 'No', which generates fewer steps and really increases performance...

    Does that make sense?

    Cezar Santos
    [www.odiexperts.com]

  • How to join several source tables and do a lookup?

    I have a requirement to load a target table by joining 4 source tables. I also do a lookup on a table field to translate codes, and I check for NULL values. What would be the best approach for loading the target table?
    Is it possible to do it in a single interface, or do I need to create multiple interfaces to achieve this?

    My source and target are both Oracle, and I am planning to use IKM Oracle Incremental Update (MERGE).

    Thank you

    You are heading in the right direction by creating one interface for this transformation.
    You will need to drag and drop the 4 source tables plus the lookup table into the Sources panel of the interface and then make the appropriate joins.
    Also, handle NULL values in the transformation. It depends on what you want to do with them: if you want to ignore them, use a filter.
    If you want them to raise errors, use a constraint.
    If you want to convert them, use NVL (see the sketch below).

    Start with IKM Oracle Incremental Update and, once that is successful, switch to IKM Oracle Incremental Update (MERGE).
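
    To illustrate the filter and NVL options above, two hedged fragments (table and column names are hypothetical):

    -- filter on the source datastore: ignore rows that have no code at all
    SRC_TABLE.CODE IS NOT NULL

    -- mapping expression on the target column: default unmatched lookups instead of loading NULL
    NVL(LOOKUP_TABLE.CODE_DESC, 'UNKNOWN')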

  • Data synchronization after restoring the source or target database

    Hi DBAs,

    I've implemented one-way Streams replication at the schema level on a transactional database (excluding some tables). It works very well, without any problems, after tuning Streams with the help of the health check scripts. My question is: if for some reason I have to perform a database-level recovery on the source database, do I need to rebuild Streams from scratch? The database size is close to 2 TB, and after recovery, even using Data Pump in parallel, re-instantiating could take hours, and the source wouldn't be available even after the recovery (while the Data Pump job was running with the source online, I was getting ORA-01555 errors and the job failed).

    Please advise, given the above circumstances, on the best way to re-synchronize data between the source and target.

    Thank you
    -Samar-

    I would export the data dictionary (a build) once again, just to avoid any multi-version data dictionary (MVDD) issues; it is something to do directly after the restore of the DB,
    because I'm not sure how this restored DB is perceived in terms of DBID. Have you checked in RMAN whether the source DB is still considered the same incarnation?
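
    For reference, the data dictionary build mentioned above can be re-run on the source with a call along these lines (a minimal sketch; run it as a user with Streams administrator privileges):

    SET SERVEROUTPUT ON
    DECLARE
      l_first_scn NUMBER;
    BEGIN
      -- extract the data dictionary to the redo log and return the SCN of the build
      DBMS_CAPTURE_ADM.BUILD(first_scn => l_first_scn);
      DBMS_OUTPUT.PUT_LINE('Dictionary build first_scn: ' || l_first_scn);
    END;
    /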

    Then re-process whatever messages are in the target error queues. I think that you will have problems moving the first_scn
    due to differences in some SCNs between the source and target system.logmnr_restart_ckpt$; the system will think it has holes.
    You may need to manually remove a bunch of rows to allow Streams to jump over the range of SCNs
    that have already been sent and are no longer present in the source system.logmnr_restart_ckpt$.
