Table name in ODI datastore under a different schema

Hello

We have a schema -> Others -> HFM_ESB -> TIME table.

Now I need to connect to the HFM_ESB.TIME table. When I try to give this table name in the ODI datastore, I get the message "invalid table name".

Any suggestions on how to reference and reverse-engineer these tables in ODI?

We were previously using synonyms and therefore saw no problems, but now we cannot use synonyms, so how do we get this working?

Appreciate a quick response as this is somewhat urgent.

Thanks in advance

Hello

Apparently HFM_ESB is your schema.

First, create the data server in ODI if you have not done so already: go to the Topology tab in ODI Studio, then under Physical Architecture right-click on Oracle and choose New Data Server. Enter a name for your database server, for example TRG_MyName, and specify the username/password, for example USER: ODI_STAGING, PASS: MyPass. On the JDBC tab specify the JDBC driver and URL, for example oracle.jdbc.OracleDriver and jdbc:oracle:thin:@localhost:1521:orcl respectively. Save and close the window.

Second, right-click on the (new) data server and choose New Physical Schema. Specify a name, then set Schema: HFM_ESB and Work Schema: ODI_STAGING.

Please note: if you do not have an ODI_STAGING schema, you can specify HFM_ESB as both the user and the work schema.

Third, under Logical Architecture right-click on the Oracle technology and create a new logical schema. Specify a name; you can leave the Global context and choose the (new) physical schema in the Physical Schemas column.

Fourth, go from the Topology tab to the Designer tab. In the Models pane, click on the folder icon and choose New Model. On the Definition tab, enter a name for your model, choose Oracle as the technology, and choose the logical schema we just created. Go to the Reverse Engineer tab; here you can leave everything at Standard / Table type, which means standard reverse-engineering of all tables in the given schema. If you only want to reverse-engineer the PERIOD table, go to the Selective Reverse-Engineering tab, tick all the boxes at the top, then under the Table Name column tick only the PERIOD box, and finally click the Reverse Engineer button (at the top left).
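If the reverse-engineer still reports an invalid table name, it can help to first check, connected as the data server's user, that the table is actually visible to that user. A minimal sketch (adjust the user and grant to your environment):

```sql
-- Run as the user configured in the ODI data server (e.g. ODI_STAGING).
-- If this returns no rows, the user lacks read access to the table, e.g.:
--   GRANT SELECT ON hfm_esb.TIME TO odi_staging;
SELECT owner, table_name
FROM   all_tables
WHERE  owner = 'HFM_ESB'
AND    table_name = 'TIME';
```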

Tags: Business Intelligence

Similar Questions

  • How to find the same column name in different tables in the same schema

    Hello

    I need to find the 'ename' column in different tables in the same schema.

    Thank you
    Nr

    Hello

    Try this query

    Select column_name, table_name from user_tab_columns where column_name = 'ENAME';

    Please award points and close the thread if your question is answered, or mark the reply as helpful if it was...

    Kind regards

    Lacouture.

  • Join of two tables in two different schemas

    Hi all

    I have a requirement to join two tables in two different schemas. How do I join these two tables in a view object?

    Thanks in advance.

    Regards
    Kaushik Guillaumin

    You can do just that by using the schema name in the view object's query ('schema_name.table_name') and also granting SELECT on the other schema's table to this schema's user.

    For example: your schema is TEST and there is another schema ACT, and you need a view object based on TEST that joins a table in schema ACT.

    In the view object's SQL you simply write:

    ACT.table_name

    and you also grant SELECT on the ACT schema's table to TEST.

    Regards
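    As a minimal sketch of the above (the schema names TEST and ACT come from the answer; the table and column names are hypothetical):

    ```sql
    -- as ACT: let TEST read the table
    GRANT SELECT ON act.dept_info TO test;

    -- view object query, running as TEST: qualify the other schema's table
    SELECT e.emp_id, e.dept_id, d.dept_name
    FROM   emp e
    JOIN   act.dept_info d
      ON   d.dept_id = e.dept_id;
    ```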

  • How to find data in a column prj_no across all tables in the same schema

    Hi all

    I need to find the list of tables that have data with prj_no = 'Axis_11', across all tables in the same schema.



    Thank you
    Nr

    PNR wrote:
    I need to find the list of tables that have data with prj_no = 'Axis_11', across all tables in the same schema.

    1. Find the tables with a column named PRJ_NO. You can find them in USER_TAB_COLUMNS.
    2. Write a query to read the data in each table, using the UNION/UNION ALL operators to merge the results for each table.
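    The two steps above can also be combined into one anonymous PL/SQL block with dynamic SQL (a sketch only; it assumes PRJ_NO is a character column and the value is stored exactly as 'Axis_11'):

    ```sql
    -- Find every table owning a PRJ_NO column, then probe each one.
    SET SERVEROUTPUT ON
    DECLARE
      v_cnt NUMBER;
    BEGIN
      FOR r IN (SELECT table_name
                FROM   user_tab_columns
                WHERE  column_name = 'PRJ_NO') LOOP
        EXECUTE IMMEDIATE
          'SELECT COUNT(*) FROM "' || r.table_name || '" WHERE prj_no = :1'
          INTO v_cnt USING 'Axis_11';
        IF v_cnt > 0 THEN
          dbms_output.put_line(r.table_name || ': ' || v_cnt || ' row(s)');
        END IF;
      END LOOP;
    END;
    /
    ```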

  • Moving subpartitions of a table to a duplicate table in another schema

    NOTE: I asked this question on the SQL and PL/SQL forum, but came here as I think it is more appropriate to this forum. I placed a pointer to this post on the original post.

    Hello ladies and gentlemen.

    We are currently involved in an exercise at my place of work where we are trying to logically organize our data by world region. For background, our production database is currently at version 10.2.0.3 and will soon move to 10.2.0.5.

    At the moment, all our data lives in the same schema. We are trying to produce a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas, each schema representing a world region.

    In our current schema, our data is range-partitioned by date and then list-subpartitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for the two identically named tables in the two schemas.

    Do any of you have an opinion on the best way to do this? Ideally, in the new schema I would like to create for each table an empty table with the appropriate range and list partitions defined. I did a few tests in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.

    I was wondering if, for migrating partitions across schemas where the table name and tablespace remain constant, there is a formal 'best practices' method to accomplish such a subpartition move cleanly, quickly and elegantly?

    Any helpful response is welcome.

    Cheers.

    James

    You CAN exchange a subpartition into another table by using a 'staging' (intermediate) table.

    See:

    SQL> drop table part_subpart purge;
    
    Table dropped.
    
    SQL> drop table NEW_part_subpart purge;
    
    Table dropped.
    
    SQL> drop table STG_part_subpart purge;
    
    Table dropped.
    
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    
    Table created.
    
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    
    Index created.
    
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'B');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'B');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'C');
    
    1 row created.
    
    SQL> insert into part_subpart values (11,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (11,'C');
    
    1 row created.
    
    SQL>
    SQL> commit;
    
    Commit complete.
    
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    
    Table created.
    
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    
    Table created.
    
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    
    Table truncated.
    
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    
    Table altered.
    
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    
    Table altered.
    
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
    
         COL_1 COL_2
    ---------- ------------------------------
            11 A
    
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    
    no rows selected
    
    SQL>
    

    I exchanged subpartition p_2_s_1 out of the part_subpart table into the NEW_part_subpart table, even with a different name for the subpartition (n_p_2_s_1) if you wish.

    NOTE: because the source and target tables are in different schemas, you need to move (or copy) the intermediate table STG_part_subpart from the first schema to the second schema after the first 'exchange subpartition' is done. You will need to do this for each subpartition to exchange.

    Hemant K Chitale

    Edited by: Hemant K Chitale on April 4, 2011 10:19
    Added details for cross-schema exchange.

  • Compare table structures in different schemas - help please

    Hi all

    I have a question...


    I have tables in different schemas, something like:


    Schema A
    -------------
    Table 1
    Table 2
    Table 3


    Schema B
    ------------
    Table 1
    Table 2
    Table 3


    Now the situation is that Table 1 and Table 2 will have a similar structure, or Table 1 in Schema B will have additional columns.

    and so on... for all the other tables...

    Example:

    Schema A:

    Desc Table1;

    Name             Type              Null
    -------------    -------           -------
    No               Number            Not null
    Name             Varchar2(10)      Not null
    Fee              Number(10,2)      Not null


    Schema B:

    Desc Table1;

    Name             Type              Null
    -------------    -------           -------
    DX_No            Number            Not null
    DX_Name          Varchar2(10)      Not null
    DX_Fee           Number(10,2)      Not null
    Comments         Varchar(2)
    Now I need to write some sort of procedure to compare these tables, which differ in their column names across the schemas, and export the result to an Excel sheet. Here the first three columns SHOULD BE CONSIDERED THE SAME, because apart from the DX_ prefix the rest of the column name is identical.

    In the same way, the Comments column is new in schema B only, so that should be noted in the Excel sheet as well.

    I don't know whether ADO or SQL Developer can handle this... Is there any PL/SQL block that I can write to do it?

    Thanks in advance...

    Comparing the columns of all the tables in the two schemas:

    select tc1.owner,tc1.TABLE_NAME,tc1.COLUMN_NAME,
           tc2.owner,tc2.TABLE_NAME,tc2.COLUMN_NAME
    from
      all_tab_columns tc1
      full outer
      join all_tab_columns tc2 on &user1=tc2.owner
                               and tc1.TABLE_NAME=tc2.TABLE_NAME
                               and tc1.COLUMN_NAME like '%'||tc2.COLUMN_NAME
    where
    tc1.owner=&user2
    and tc1.TABLE_NAME in (
                          select t1.table_name
                          from all_tables t1
                          join all_tables t2 on t2.owner=&user2 and t1.table_name=t2.table_name
                          where t1.owner=&user1)
    

    Edited by: xtender on 06.11.2010 12:43

  • ODI datastore length differs from the DB length - IKM throws 'value too large'

    When reverse-engineering, the ODI datastore gets a different length than the data length in the actual DB.

    The ODI datastore column details: char(44)
    Target DB column: varchar2(11 char)

    The I$ table inserts char(44) into varchar2(11 char) in the target. Even though the value of the source column is empty, ODI throws
    "ORA-12899: value too large for column (size: 44, maximum: 11)."

    You should always include the ODI version you are using.

    I assume that you are using the standard reverse-engineering method rather than an RKM.
    Because of a bug in the JDBC driver (ojdbc14.jar), standard reverse-engineering will multiply the lengths of all varchar2 columns by 4.
    Upgrade the JDBC driver or use the Oracle RKM to reverse-engineer the datastore.

    PS: Please mark answers as good/useful when you're done.
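    To confirm which side is wrong, you can compare the datastore length against the actual column definition in the database. A sketch (the schema and table names are placeholders):

    ```sql
    -- Actual definition in the target DB; char_used = 'C' means the length
    -- is declared in characters (e.g. varchar2(11 char)), 'B' means bytes.
    SELECT column_name, data_type, data_length, char_length, char_used
    FROM   all_tab_columns
    WHERE  owner = 'TARGET_SCHEMA'
    AND    table_name = 'TARGET_TABLE';
    ```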

  • How can I import tables from a different schema into an existing relational model... to add these tables to the existing model? Please help

    How can I import tables from a different schema into the relational model... to add these tables to the existing relational/logical model? Please help

    Note: I already have a relational/logical model ready for one schema... and I need to add more tables to this relational/logical model.

    Can I import them the same way I did before?

    But even if I do the same, how can I add them to the diagram? As the logical model has already been designed...

    Help, please...

    Thank you

    To view the logical model diagram in Bachman notation, right-click on a blank area of the diagram and select Notation > Bachman Notation.

    David

  • Changing all the table triggers in an Oracle schema using a script - possible?


    Is it possible to modify all the table triggers in an Oracle schema using a single script, or is modifying each table trigger separately the only way?

    A couple of alternatives come to mind.

    (1) You can go into SQL Developer, go to the schema, expand the Triggers node and select all the triggers that you want to change.  Right-click and choose Quick DDL --> Save to Worksheet.  Do a find-and-replace in the worksheet and then run the script.

    (2) If the trigger is the same for all 70 tables, you can move the PL/SQL out of the triggers and into a procedure in a package, and just call the procedure from the 70 triggers.  That way the code is kept in one place.  For new tables, you add a call to the procedure and you are done.  If all 70 triggers are not the same, see if you can pass parameters to allow a generalized procedure that they can all use.

    You have not indicated what your triggers do.  Are you inserting auditing columns, archiving data, inserting into a log table, updating another table, or something else?  What type of trigger is it?  What are you trying to accomplish with your triggers?

    Marcus Bacon
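    Option (2) above can be sketched like this (the package, log table and trigger names are hypothetical):

    ```sql
    -- Shared procedure holding the logic once
    CREATE OR REPLACE PACKAGE audit_pkg AS
      PROCEDURE log_change(p_table IN VARCHAR2, p_action IN VARCHAR2);
    END audit_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY audit_pkg AS
      PROCEDURE log_change(p_table IN VARCHAR2, p_action IN VARCHAR2) IS
      BEGIN
        INSERT INTO change_log (table_name, action, changed_at)
        VALUES (p_table, p_action, SYSDATE);
      END log_change;
    END audit_pkg;
    /
    -- Each of the 70 triggers then reduces to a one-line call
    CREATE OR REPLACE TRIGGER emp_aiud
    AFTER INSERT OR UPDATE OR DELETE ON emp
    BEGIN
      audit_pkg.log_change('EMP', 'DML');
    END;
    /
    ```

    A later change to the logic is then made once, in the package body, instead of in 70 triggers.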

  • Large table must be moved to another schema

    Hi Oracle gurus

    I'm using AIX 5.3 with Oracle 11.2.0.2.

    I just finished a job where I took a LOW partition from a partitioned table on our production database and loaded all of that partition's data into its own non-partitioned table on our archive database.  This table on Archive is 1.5 TB.

    Just to give you a brief overview of what I've done, successfully:

    - Created a non-partitioned table on Production with exactly the same structure as the partitioned table, apart from the partitions.  Then I moved all the relevant segments into the same tablespace as the table to make it transportable.

    - I then took an expdp of the table metadata using the transport_tablespaces parameter.

    - After putting the tablespace into read only, I used the cp command to transfer the data files to the new directory.

    - Then, on the ARCHIVE database, I used impdp to import the metadata and point it at the new files.

    parfile =

    DIRECTORY = DATA_PUMP_DIR

    DUMPFILE = dumpfile.dmp

    LOGFILE = logfile.log

    REMAP_SCHEMA = schema1:schema2

    DB11G = "xx", "xx", "xx"...

    My problem now is that due to some confusion I remapped to the wrong schema. This isn't a major problem, but I would like to tidy it up, so instead of REMAP_SCHEMA = schema1:schema2 it should have been REMAP_SCHEMA = schema1:schema3.

    The question: what is the best way to populate the table in schema3 (the 1.5 TB currently in schema2)?


    The question: what is the best way to populate the table in schema3 (the 1.5 TB currently in schema2)?

    The easiest way is to use EXCHANGE PARTITION to just 'swap' the segment in.

    You can only 'swap' between a partitioned table and a non-partitioned table, and you already have a non-partitioned table.

    So, create a partitioned table with a single partition in the new schema, and also a non-partitioned table in the new schema.

    Then exchange the old schema's table into the partition of the partitioned table in the new schema. Then exchange that partition with the new non-partitioned table in the new schema. No data should move at all: it is just a data dictionary operation.

    Using the HR and SCOTT schemas: suppose you have a copy of the EMP table in SCOTT that you want to move to HR.

    - as SCOTT
    grant all on emp_copy to HR;

    - as HR

    CREATE TABLE EMP_COPY AS SELECT * FROM SCOTT.EMP_COPY WHERE 1 = 0;

    -create a partitioned temp table with the same structure as the table
    CREATE TABLE EMP_COPY_PART
    PARTITION BY RANGE (empno)
    (partition ALL_DATA values less than (MAXVALUE)
    )
    AS SELECT * FROM EMP_COPY;

    - swap the table's segment in - very fast
    ALTER TABLE EMP_COPY_PART EXCHANGE PARTITION ALL_DATA WITH TABLE SCOTT.EMP_COPY;

    - now exchange again into the target table
    ALTER TABLE EMP_COPY_PART EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;

  • How to gather table statistics for tables in a different schema

    Hi all

    I have a table in one schema, and I want to gather statistics for that table from a different schema.

    I gave GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;

    And when I tried to run the command to gather statistics using

    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA1', TABNAME => 'T1');

    the call fails.

    Is there a way we can gather table statistics for tables in one schema from another schema?

    Thank you
    MK.

    You must grant ANALYZE ANY to schema2.

    SY.
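    A minimal sketch of that fix (the GRANT must be run by a DBA; the schema and table names come from the question):

    ```sql
    -- as a DBA: GRANT ALL on the table is not enough for statistics
    GRANT ANALYZE ANY TO schema2;

    -- then, connected as SCHEMA2
    BEGIN
      dbms_stats.gather_table_stats(ownname => 'SCHEMA1', tabname => 'T1');
    END;
    /
    ```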

  • USER_COLL_TYPES empty in a custom schema

    Hi all

    We created a custom schema and connect to the custom schema via JDBC.
    When calling a stored procedure via JDBC, I get the error message:
    ORA-00920: invalid relational operator
    However, if it is called from the APPS schema, the procedure returns successfully.

    When looking at the SQL trace, it seems that JDBC always runs a query against the USER_COLL_TYPES view:

    SELECT ELEM_TYPE_NAME, ELEM_TYPE_OWNER FROM USER_COLL_TYPES WHERE TYPE_NAME = :1

    And if no entry is found, the following query is executed (which causes the error, at the 'NOCYCLE' keyword):

    SELECT ELEM_TYPE_NAME, ELEM_TYPE_OWNER FROM USER_COLL_TYPES WHERE TYPE_NAME IN (SELECT TABLE_NAME FROM USER_SYNONYMS START WITH SYNONYM_NAME = :1 CONNECT BY NOCYCLE PRIOR TABLE_NAME = SYNONYM_NAME UNION SELECT :1 FROM DUAL)


    Conclusion: when the procedure is called from the APPS schema, the second query is not executed because entries are found by the first one, while in the custom schema no entries are found by the first query, which leads to the second query.

    So my question is: why does a SELECT * on USER_COLL_TYPES not return the entries in my custom schema, although USER_COLL_TYPES is granted to PUBLIC?
    Do I have to grant any other privileges to get the entries in the USER_COLL_TYPES view in my custom schema?

    Thank you
    Carolin

    The query against USER_COLL_TYPES is not explicitly run in my code - it is generated internally by JDBC...

    Does that mean I have to load all the collection types into my custom schema?

    Yes, I think so.

    Run this query as the APPS user and as your CUSTOM user to check the number of records:

    SQL> select count(*)
      2  from USER_COLL_TYPES;
    

    Thank you
    Hussein

  • Data Pump: import a table into another schema in 11g

    Hi all

    I have an Oracle 11.2 database and I have a requirement to import a few tables into a new schema using my export from the previous month. I can't import the whole schema as it is very large. I checked the REMAP_TABLE option, but it only creates the table in the same schema and renames it.


    For example, I have the table GAS.EMPLOYE_DATA and I want to import it to GASTEST.EMPLOYEE_DATA.

    Is there a way to do it using datapump?



    Appreciate any advice.

    Hello

    You can use the INCLUDE parameter in order to select only one table:

    REMAP_SCHEMA=SCOTT:SCOTT_TEST
    INCLUDE=TABLE:"IN ('EMP')"
    
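    Put together with the names from the question, a complete parfile might look like this (directory, dump file and log file names are placeholders; pass it to impdp with parfile=...):

    ```
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=exp_prev_month.dmp
    LOGFILE=imp_employe_data.log
    REMAP_SCHEMA=GAS:GASTEST
    INCLUDE=TABLE:"IN ('EMPLOYE_DATA')"
    ```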

    Hope this helps.
    Best regards
    Jean Valentine

  • Accessing a LONG column over a DB link and inserting into a table in the current schema

    I have a new requirement:
    I have to create a table in my current schema, selecting data from a table in a different schema using a database link, and the source table contains a LONG column.
    I wrote code like:

    CREATE TABLE tmp as
    SELECT a.index_name, a.index_type, a.uniqueness, a.table_name,
    b.column_name, c.column_expression, b.column_position
    FROM user_indexes@FRISKDEVI41B_ORCL a,
    user_ind_columns@FRISKDEVI41B_ORCL b,
    user_ind_expressions@FRISKDEVI41B_ORCL c
    WHERE b.index_name = a.index_name
    AND a.table_name = b.table_name
    AND b.index_name = c.index_name (+)
    AND c.table_name (+) = b.table_name
    AND b.column_position = c.column_position (+)
    ORDER BY a.table_name;

    and when I try to run this I am getting an error like "ORA-00997: illegal use of LONG datatype". Can anyone help me solve this?

    Hello

    SQL> create or replace type tmp_type as object(
      2    index_name       varchar2(30),
      3    table_name       varchar2(30),
      4    column_expression clob,
      5    column_position   number(10)
      6  )
      7  /
    
    Type created.
    
    SQL> create type tmp_tab as table of tmp_type;
      2  /
    
    Type created.
    
      1  create or replace function user_ind_expr return tmp_tab pipelined as
      2    cl clob;
      3  begin
      4    for r in (select * from user_ind_expressions@FRISKDEVI41B_ORCL) loop
      5      cl := r.column_expression;
      6      pipe row (tmp_type(r.index_name, r.table_name, cl, r.column_position));
      7    end loop;
      8    return;
      9* end;
    SQL> /
    
    Function created.
    
    SQL> CREATE TABLE tmp as
      2  SELECT a.index_name,a.index_type,a.uniqueness,a.table_name,
      3  b.column_name,c.column_expression,b.column_position
      4  FROM user_indexes@FRISKDEVI41B_ORCL a,
      5  user_ind_columns@FRISKDEVI41B_ORCL b,
      6  table(user_ind_expr) c
      7  WHERE b.index_name = a.index_name
      8  AND b.table_name = a.table_name
      9  AND b.index_name = c.index_name (+)
     10  AND b.table_name = c.table_name (+)
     11  AND b.column_position = c.column_position (+)
     12  ORDER BY a.table_name
    SQL> /
    
    Table created.
    

    Bartek

  • Need table info based on the schema name

    I need the following information for a given schema name:

    1. Table name
    2. Schema name
    3. Number of rows for each table (catalog column num_rows)
    4. Size of the table (MB)

    I tried the query below but I can't make it work.

    Select col.table_name,
           col.col_cnt as column_count,
           rc.row_cnt as row_count,
           s.size_in_MB as table_size_in_MB
    From
    (
      /* number of columns */
      SELECT upper(table_name) table_name, COUNT(*) col_cnt
      FROM dba_tab_columns
      WHERE owner = 'V500'
      GROUP BY upper(table_name)
    ) col

    Join

    (
      /* number of rows */
      Select table_name,
             to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name)), '/ROWSET/ROW/C')) as row_cnt
      from dba_tables
      where (iot_type != 'IOT_OVERFLOW' or iot_type is null)
      and owner = 'SCOTT'
    ) rc
    On upper(col.table_name) = upper(rc.table_name)

    Join

    (
      /* size of the table in MB */
      SELECT owner, table_name, (sum(bytes) / 1024 / 1024) size_in_MB
      FROM dba_segments
      WHERE segment_type = 'TABLE'
      and owner = 'SCOTT'
      GROUP BY owner, table_name
    ) s
    On upper(col.table_name) = upper(s.table_name);

    Paul








    I think you need this query:

    Select t.table_name,
           -- to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from BSB_GL.'||table_name)), '/ROWSET/ROW/C')) as row_cnt
           t.num_rows as row_cnt,
           count(tc.column_name) as column_count,
           (sum(s.bytes) / 1024 / 1024) as size_in_MB
    from dba_tables t
    join dba_tab_columns tc
      on tc.owner = 'BSB_GL'
     and tc.table_name = t.table_name
    join dba_segments s
      on s.segment_name = t.table_name
     and s.owner = 'BSB_GL'
    where (t.iot_type != 'IOT_OVERFLOW' or t.iot_type is null)
    and t.owner = 'BSB_GL'
    group by t.table_name, t.num_rows

    If you have gathered stats, the NUM_ROWS column in dba_tables gives you the number of rows in the table correctly. And this will be faster than your query.

    ----

    Ramin Hashimzade
