user source expdp

Hi all

How can I determine the source user (schema) of a dump file generated with expdp?
(This information is not returned by the GET_DUMPFILE_INFO procedure.)

Thank you very much...

The only way I know how to do this would be to do the following.

From your script, you would add another impdp command:

impdp system/password... directory=... sqlfile=my_sql.sql

Then you can extract the source schema with a string search such as:

grep "\-\-CONNECT" my_sql.sql | grep -v SYSTEM | awk '{print $3}' | head -1

The grep -v SYSTEM filters out the user running the impdp sqlfile command; if you run it as a different user, you must change that filter too. You may be able to reduce the sqlfile job's work if you know which object types are in the dump file, but if you don't, a complete run is needed.
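Putting the two steps together, a minimal sketch (the directory object DP_DIR, the dump file name, and the exact format of the CONNECT comment lines are assumptions; check what your impdp version actually writes into the sqlfile):

```shell
#!/bin/sh
# Sketch: find the source schema recorded in a Data Pump export file.

# Step 1 needs a real database, so it is commented out here.
# The directory object name DP_DIR is an assumption:
# impdp system/password directory=DP_DIR dumpfile=source.dmp sqlfile=my_sql.sql

# A small fake my_sql.sql stands in for the impdp output so the text
# processing below can be demonstrated (the "-- CONNECT" line format is
# an assumption about what impdp writes):
cat > my_sql.sql <<'EOF'
-- CONNECT SYSTEM
CREATE TABLE ...
-- CONNECT SCOTT
CREATE PROCEDURE ...
EOF

# Step 2: take the first CONNECT line that is not the importing user.
grep -- "-- CONNECT" my_sql.sql | grep -v SYSTEM | awk '{print $3}' | head -1
# prints: SCOTT
```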

I hope this helps.

Dean

Tags: Database

Similar Questions

  • Multiple user event sources for a single event structure

    I have five different sources that I want to use to generate user events handled by a single event structure, acting as a producer loop. It does not seem possible to use five different create/generate-user-event nodes with a single event structure, and the event structure does not allow dynamic events to be registered more than once. Is it possible to do this using an event structure and user events?

    Kind regards

    Austin

    Attached again in LV2010 (good guess?). I think this is what you need.

    Michael.

  • Cannot edit most user source in SQL Developer

    Hello

    I created an Oracle 12.1.0.2 container database called DEVELOP and loaded from 11gR2 a few hundred packages and 600 tables, with hundreds of pages of APEX 5.0.

    I did it using impdp, and the applications run well in APEX 5.0.2; I have just tested all applications.

    Every version of SQL Developer (4.1.2, 4.1.3, and the version that comes with the 12c download) refuses to let me edit any package, procedure, function, or type.

    All I see is "CREATE OR REPLACE".

    I can see all the tables and data.

    I saw some reported problems that suggested using 12.1.0.2, which is what I use.

    I open a worksheet using the SYSDBA account for the container, then:

    select * from DBA_OBJECTS where owner = 'LSDB' and object_type = 'PACKAGE';

    I see all of my packages.

    I open a worksheet as LSDB, the account that owns these objects:

    When I run select * from USER_OBJECTS where object_type = 'PACKAGE'; I see all my packages, for example.

    When I run select DISTINCT NAME from USER_SOURCE; I don't see any source code at all, even though I can right-click each package in SQL Developer, load the entire package into a window, and edit it.

    The source code is somewhere, simply not visible to the tree viewer, it seems.

    I don't see many discussions of this issue for 12c 12.1.0.2.

    Because it is a 12c test, I granted all roles and privileges to the LSDB account.

    I am new to 12c and SQL Developer 4.1.x, and I'm no wizard with the inner workings of Oracle or SQL Developer; this behavior seems very strange to me.

    Any help would be greatly appreciated.

    Thank you

    Bernie

    Jeff,

    Here is the right solution to this problem.

    This problem was caused by a failure to completely upgrade the PDB$SEED container to 32K extended VARCHAR2.

    The pluggable database DEVELOP had been upgraded to 32K VARCHAR2 correctly, but PDB$SEED was not.

    This resulted in an ORA-14696 error when creating a new pluggable database, DEVELOP2.

    Once the PDB$SEED container was properly upgraded to 32K VARCHAR2 using the following commands, SQL Developer displayed all the source properly, and the new pluggable database DEVELOP2 could also be created.

    You can close this topic and thanks for your help.

    BPW

    SQL> conn / as sysdba

    SQL> startup mount

    SQL> ALTER DATABASE OPEN MIGRATE;

    Database altered.

    SQL> SELECT con_id, name, open_mode FROM v$pdbs;

    CON_ID NAME                           OPEN_MODE

    ---------- ------------------------------ ----------

    2 PDB$SEED                       MIGRATE

    SQL> ALTER SESSION SET container = PDB$SEED;

    Session altered.

    SQL> SELECT SYS_CONTEXT('USERENV', 'CON_NAME') AS "Container" FROM dual;

    Container

    ------------------------

    PDB$SEED

    SQL> ALTER SYSTEM SET max_string_size = EXTENDED;

    System altered.

    SQL> @?/rdbms/admin/utl32k.sql

    PL/SQL procedure successfully completed.

  • EXPDP/IMPDP

    Hi all

    How do I create a sqlfile using impdp, and then run it, so that the procedures get replaced by the more recent source from the expdp of the source database?

    But the sqlfile has a bug that puts spaces inside the SQL statements, which causes errors.

    For example:

    ALTER PROCEDURE "BATCH"."SP_COUNT_DLS_DRCR"

    COMPILE

    PLSQL_OPTIMIZE_LEVEL = 2

    PLSQL_CODE_TYPE = INTERPRETED

    PLSQL_DEBUG = FALSE PLSCOPE_SETTINGS = 'IDENTIFIERS:NONE'

    REUSE SETTINGS TIMESTAMP '2014-01-10 10:12:26'

    /

    It has a space before REUSE, and the script errors on the REUSE statement.

    Any tips or tricks on this subject?

    Thank you very much

    Hello

    Yep - sorry, I thought you meant you only had blank lines and that was the problem :-)

    The sqlfile option just dumps the text that import would try to run - so it just contains CREATE instead of CREATE OR REPLACE.

    What you have to do is edit the file and do a global replace of 'CREATE' with 'CREATE OR REPLACE'...

    Still don't know why Oracle did not add this as an option - it is one of the main things missing from impdp - perhaps there is a technical reason why they don't want to implement it?

    Cheers,

    Rich
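The global replace Rich describes can be scripted; a minimal sketch (the file name is illustrative, a tiny sample file stands in for the real sqlfile, and the statement types listed in the pattern are assumptions - widen or narrow them for your dump):

```shell
#!/bin/sh
# Sketch: turn CREATE into CREATE OR REPLACE in an impdp sqlfile, so that
# re-running the file replaces existing stored code instead of failing.

# Sample file standing in for the real impdp sqlfile:
cat > proc_ddl.sql <<'EOF'
CREATE PROCEDURE "BATCH"."SP_COUNT_DLS_DRCR" AS
BEGIN
  NULL;
END;
/
EOF

# Only rewrite stored-code statements, not CREATE TABLE and friends;
# keep a .bak copy of the original file.
sed -i.bak -E 's/^CREATE (PROCEDURE|FUNCTION|PACKAGE|TRIGGER|TYPE)/CREATE OR REPLACE \1/' proc_ddl.sql

head -1 proc_ddl.sql
# prints: CREATE OR REPLACE PROCEDURE "BATCH"."SP_COUNT_DLS_DRCR" AS
```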

  • With the help of expdp/impdp in pump data

    Hi all

    I am a newbie to Oracle data import and export using Oracle Data Pump. I use Oracle 10g Release 2 on Windows 7.

    After creating the directory object 'test_dir' and granting read, write privileges to user scott, I connect as scott and create the table 'test' with two rows.

    Then I run expdp at the command prompt as follows:

    C:\users\administrator > expdp scott/tiger@orcl tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=expdp_test.log

    Export: Release 10.2.0.3.0 - Production on Monday, June 13, 2011 20:20:54

    Copyright (c) 2003, 2005, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/***@orcl tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=expdp_test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "SCOTT"."TEST"  0 KB  0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      D:\DAMP\TEST.DMP
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 20:21:02

    My question is why Data Pump seems to export the table 'test' with no rows (that is, the line: . . exported "SCOTT"."TEST" 0 KB 0 rows). How can I export the table together with its rows?


    I dropped the table test, then I ran the impdp command as follows:

    C:\users\administrator > impdp scott/tiger tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=impdp_test.log

    Import: Release 10.2.0.3.0 - Production on Monday, June 13, 2011 20:23:18

    Copyright (c) 2003, 2005, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/* tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=impdp_test.log
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SCOTT"."TEST"  0 KB  0 rows
    Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 20:23:21


    Then select * from test returned no rows.

    Please, can someone shed light on this... What I expected from the Data Pump operation was that it would export and import the table together with its data, if I'm not mistaken.

    Regards
    Sadik

    Sadik wrote:
    It had two rows

    Export disagrees with you.
    Did you COMMIT after the INSERT and before the export?

  • Cannot verify the source of MS SQL server

    Hi all
    I am facing a problem trying to verify an MS SQL Server source with Oracle Audit Vault.
    This is my first experience with MS SQL Server. On this AV I have already successfully registered some Oracle and Sybase sources.
    MS SQL Server 2005 x32.

    avmssqldb check - CBC srvarch01.rccf.ru:1433
    Enter the source username: svc_is
    Enter the password for the source:
    Unable to connect to the source database. Please ensure that the source is running.
    Error occurred, please check the log for details

    Log:
    Sep 15, 2010 15:11:18 Thread-10 FINE: The AV version is 10.2.3.2.0
    Sep 15, 2010 15:11:27 Thread-10 FINE: Exception: com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'svc_is'.
    Sep 15, 2010 15:11:27 Thread-10 FINE: Exception: Unable to connect to the source database. Please ensure that the source is up
    and running.
    com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'svc_is'.
    com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError (unknown Source)
    com.microsoft.sqlserver.jdbc.IOBuffer.processPackets (unknown Source)
    com.microsoft.sqlserver.jdbc.SQLServerConnection.logon (unknown Source)
    com.microsoft.sqlserver.jdbc.SQLServerConnection.connect (unknown Source)
    com.microsoft.sqlserver.jdbc.SQLServerDriver.connect (unknown Source)
    java.sql.DriverManager.getConnection(DriverManager.java:512)
    java.sql.DriverManager.getConnection(DriverManager.java:171)
    oracle.av.plugin.common.DAO.getConnection(DAO.java:161)
    oracle.av.plugin.common.DAO. < init > (DAO.java:134)
    oracle.av.plugin.sql.command.Verify.runQuery(Verify.java:147)
    oracle.av.plugin.sql.command.Verify.checkVersion(Verify.java:224)
    oracle.av.plugin.sql.command.Verify.execute(Verify.java:570)
    oracle.av.plugin.command.Command.process(Command.java:190)
    oracle.av.plugin.sql.command.AVMSSQLDBUtility.execute(AVMSSQLDBUtility.java:92)
    oracle.av.plugin.sql.command.AVMSSQLDBUtility.main(AVMSSQLDBUtility.java:147)

    Will be grateful for any help.

    Edited by: user12973514 on 15.09.2010 04:16

    Hi Vlad,

    Yes, this can be a problem.
    So check the following:
    a. whether you have copied the driver (sqljdbc.jar) to $ORACLE_HOME/jlib
    b. create the source user with sp_addlogin, which should allow the use of SQL Server authentication.

    Then check the issue again.

    Thank you
    Sisi

  • migrate rman catalog database 10.2 to 11.2

    My rman catalog is currently in a 10.2.0.4 SE database on Windows.

    I'm moving it to 11.2.0.4 SE on Linux.

    I thought at first it would be a simple expdp of the source schema, then impdp into the target.

    The database is created. The user created "rman" and recovery_catalog_owner granted to him.

    Exported from the current catalog with "expdp system/* directory=dpump dumpfile=exp_rmcat_catalog.dmp logfile=exp_rmcat_catalog.log schemas=rman"

    "Copy the dump file to the new server and import with."

    Impdp ${localuser} / patterns$ {localpwd} = directory of rman dumpfile logfile = DPUMP = rmcat_import.log = ${dumpfile} table_exists_action = replace

    But on a second trial of the import, it threw a lot of errors like

    ORA-39111: Dependent object type OBJECT_GRANT:"RMAN" skipped, base object type VIEW:"RMAN"."TSATT_V" already exists

    OK, I understand that 'table_exists_action' applies to tables and not to non-table objects.  I was a bit surprised to find non-table objects in the rman schema, but it occurs to me that since they are there, and they come from a 10g database, could there be issues importing them into an 11g database?  Would I be better off starting clean, issuing a CREATE CATALOG and then importing only the tables?

    All of the databases being backed up with rman are 11.2.0.2 or 11.2.0.4.  The rman catalog itself is the last 10g database I have.

    Ah, I forgot the IMPORT CATALOG command.

    So...

    create the rman schema owner in the new catalog database

    create the catalog in the new database

    IMPORT CATALOG with NO UNREGISTER, one source database at a time.

  • Partitioning of a table in the production

    Hello

    Oracle 10.2.0.4

    I partitioned a table of about 100 million rows (62 GB) on the DEV source server. The target database was newly created. The table was range-partitioned on a date column as follows:
    PARTITION BY RANGE (ENTRY_DATE_TIME)
    (
      PARTITION ppre2012 values less than (TO_DATE('01/01/2012','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
      PARTITION p2012 values less than (TO_DATE('01/01/2013','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
      PARTITION p2013 values less than (TO_DATE('01/01/2014','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
      PARTITION p2014 values less than (MAXVALUE) TABLESPACE WST_LRG_D
    )
    It's one partition per year: everything before 2012 goes to ppre2012, then 2012 to p2012, and so on. There are 20 million rows in 2012, and about 75 million rows in ppre2012. We kept both the (partitioned) target and the (unpartitioned) source tables on DEV for comparison. Queries normally hit the partition for the current year. Just to say that I am a developer and do not have full visibility into the production instance.

    Now that our tests are complete, we would like to promote this to production. Of course in production we would not need both source and target tables. In all likelihood this will be done in a weekend window. This is why I would like to suggest the following.

    (1) use expdp to export the source table
    (2) drop the source table
    (3) re-create the source table "partitioned", with no indexes
    (4) use impdp to load the data back into the table
    (5) create a global index (it's a unique index to enforce uniqueness) and the rest of the indexes as local
    (6) run dbms_stats.gather_table_stats(user, 'SOURCE', cascade => true). It takes about 2 hours on dev

    My concern is whether importing 100 million rows will cause issues with undo segments. Could we import data first into, say, the current 2012 partition (20 million rows)? Any practical advice is appreciated.

    Thank you

    Published by: 902986 on November 2, 2012 02:08

    Published by: 902986 on November 2, 2012 03:10

    >
    - Why do I get "ORA-12838: cannot read/modify an object after modifying it in parallel"?
    >
    Why don't you look up the error and read the cause that applies?
    http://ORA-12838.ora-code.com/
    >
    ORA-12838: cannot read/modify an object after modifying it in parallel
    Cause: Within the same transaction, an attempt was made to read or modify a table after it had been modified in parallel or with direct load. That is not permitted.
    >
    Did you do a 'direct load'? Yes - so you cannot do anything else in that transaction. Do a COMMIT or ROLLBACK first.

  • Is it possible to create custom antimalware definition?

    I have a need to create a custom definition for alerting us on crypto activity.  The definition would detect creation of howdecrypt*.* files, but not block them, ONLY monitor.  That way, we can track the source system spreading the ransomware and the source user account.

    Please point me to the documentation on how to do this if possible.

    Thank you.

    Hello Clorin,

    Welcome to the Microsoft Community Forum.

    The question you posted would be better suited to the Microsoft Developer Community.

    Please visit the link below to find a community that will support what you are asking:

    Microsoft Developer Network

    https://social.msdn.Microsoft.com/forums/en-us/0568779f-7ded-45C6-B967-0f34530b5fb9/antimalware-service-executable?Forum=offtopic

    Hope the information helps. Let us know if you need help with Windows-related issues. We will be happy to help you.

    Thank you

  • Creating a cron for export and deletion

    I'm planning the export backup to run as follows: a weekly metadata-only backup of the structure on the rept database,
    as well as a few other exports of 35 tables.

    And the script should keep no more than three copies of the backup at any time, which means the script will check the files in the backup area by date range, and files older than 3 weeks will be deleted.

    Please, can anyone help with this.

    Try this:

    #!/bin/sh

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    TODAY=$(date +%Y-%m-%d)

    cd

    expdp user/pass directory=dpdumpdir dumpfile=$TODAY.dmp logfile=$TODAY.log exclude=grant include=TABLE:"IN (<list of tables to export>)" content=metadata_only

    # Zip the .dmp and .log files to save space

    tar czvf $TODAY.tar.gz $TODAY.dmp $TODAY.log

    # If the zip was successful, delete the original dump and log files used in the export command

    VAR=$?

    if [ $VAR -eq 0 ]; then

    rm $TODAY.dmp

    rm $TODAY.log

    fi

    # Delete backup and log files that are older than 3 days

    find . -mtime +3 -exec rm -f {} \;

    Onkar
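The age-based cleanup at the end of the script can be sanity-checked without a database; this sketch backdates a fake dump file with touch -t (the directory and file names are illustrative) and shows that find -mtime +3 only matches the old one:

```shell
#!/bin/sh
# Sketch: verify the retention logic of the backup script with fake files.
mkdir -p bkp_demo

touch bkp_demo/new_backup.dmp                  # modified just now
touch -t 202001010000 bkp_demo/old_backup.dmp  # backdated to 2020, well past 3 days

# Same predicate the script uses: files modified more than 3 days ago.
find bkp_demo -name '*.dmp' -mtime +3 -exec rm -f {} \;

ls bkp_demo
# prints: new_backup.dmp
```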

  • How to initialize the Java variable within pl/sql block in the ODi procedure

    I have a step in an ODI procedure that uses Oracle technology.

    I want to initialize a Java variable inside it.

    Please help me with this.

    <@ if (odiRef.getOption("USE_PUBADMIN_PARAM_TABLE").equals("1")) { @>

    DECLARE

    EH_FAILURE_MESSAGE_TEXT VARCHAR2 (4000);

    EH_FIXED_VERSION VARCHAR2 (4000);

    EH_ISSUE_TYPE VARCHAR2 (4000);

    EH_PRIORITY VARCHAR2 (4000);

    EH_SUMMARY VARCHAR2 (4000);

    EH_DESCRIPTION VARCHAR2 (4000);

    EH_PROJECT_ID VARCHAR2 (4000);

    EH_COMPONENT VARCHAR2 (4000);

    EH_AFFECTED_VERSION VARCHAR2 (4000);

    EH_CUSTOMPROPXML VARCHAR2 (4000);

    EH_LOG_JIRA VARCHAR2 (4000);

    EH_CONTINUE_ON_ERROR VARCHAR2 (4000);

    EH_SEND_MAIL_NOTIFICATION VARCHAR2 (4000);

    EH_NOTIFICATION_RECIPENTS VARCHAR2 (4000);

    EH_JIRAJARPATH VARCHAR2 (4000);

    BEGIN

    SELECT

    DECODE('<%=odiRef.getOption("FAILURE_MESSAGE_TEXT")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_FAILURE_MESSAGE_TEXT'), '<%=odiRef.getOption("FAILURE_MESSAGE_TEXT")%>'),

    DECODE('<%=odiRef.getOption("FIXED_VERSION")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_FIXED_VERSION'), '<%=odiRef.getOption("FIXED_VERSION")%>'),

    DECODE('<%=odiRef.getOption("ISSUE_TYPE")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_ISSUE_TYPE'), '<%=odiRef.getOption("ISSUE_TYPE")%>'),

    DECODE('<%=odiRef.getOption("PRIORITY")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_PRIORITY'), '<%=odiRef.getOption("PRIORITY")%>'),

    DECODE('<%=odiRef.getOption("SUMMARY")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_SUMMARY'), '<%=odiRef.getOption("SUMMARY")%>'),

    DECODE('<%=odiRef.getOption("DESCRIPTION")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_DESCRIPTION'), '<%=odiRef.getOption("DESCRIPTION")%>'),

    DECODE('<%=odiRef.getOption("PROJECT")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_PROJECT_ID'), '<%=odiRef.getOption("PROJECT")%>'),

    DECODE('<%=odiRef.getOption("COMPONENT")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_COMPONENT'), '<%=odiRef.getOption("COMPONENT")%>'),

    DECODE('<%=odiRef.getOption("AFFECTED_VERSION")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_AFFECTED_VERSION'), '<%=odiRef.getOption("AFFECTED_VERSION")%>'),

    DECODE('<%=odiRef.getOption("CUSTOMPROPXML")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_CUSTOMPROPXML'), '<%=odiRef.getOption("CUSTOMPROPXML")%>'),

    DECODE('<%=odiRef.getOption("LOG_JIRA")%>', '0', PBA_PARAM_PKG.GET_PARAMETER('EH_LOG_JIRA'), '<%=odiRef.getOption("LOG_JIRA")%>'),

    DECODE('<%=odiRef.getOption("CONTINUE_ON_ERROR")%>', '0', PBA_PARAM_PKG.GET_PARAMETER('EH_CONTINUE_ON_ERROR'), '<%=odiRef.getOption("CONTINUE_ON_ERROR")%>'),

    DECODE('<%=odiRef.getOption("SEND_MAIL_NOTIFICATION")%>', '0', PBA_PARAM_PKG.GET_PARAMETER('EH_SEND_MAIL_NOTIFICATION'), '<%=odiRef.getOption("SEND_MAIL_NOTIFICATION")%>'),

    DECODE('<%=odiRef.getOption("NOTIFICATION_RECIPENTS")%>', NULL, PBA_PARAM_PKG.GET_PARAMETER('EH_NOTIFICATION_RECIPENTS'), '<%=odiRef.getOption("NOTIFICATION_RECIPENTS")%>'),

    PBA_PARAM_PKG.GET_PARAMETER('EH_JIRAJARPATH')

    INTO EH_FAILURE_MESSAGE_TEXT, EH_FIXED_VERSION, EH_ISSUE_TYPE, EH_PRIORITY, EH_SUMMARY, EH_DESCRIPTION, EH_PROJECT_ID, EH_COMPONENT, EH_AFFECTED_VERSION, EH_CUSTOMPROPXML, EH_LOG_JIRA, EH_CONTINUE_ON_ERROR, EH_SEND_MAIL_NOTIFICATION, EH_NOTIFICATION_RECIPENTS, EH_JIRAJARPATH

    FROM dual;

    /* I want to start as below. I don't want to use the source user and target user concept.

    Please help me with the concept below.

    */

    <@

    String V_EH_FAILURE_MESSAGE_TEXT = EH_FAILURE_MESSAGE_TEXT;

    String V_EH_FIXED_VERSION = EH_FIXED_VERSION;

    String V_EH_ISSUE_TYPE = EH_ISSUE_TYPE;

    String V_EH_PRIORITY = EH_PRIORITY;

    String V_EH_SUMMARY = EH_SUMMARY;

    String V_EH_DESCRIPTION = EH_DESCRIPTION;

    String V_EH_PROJECT_ID = EH_PROJECT_ID;

    String V_EH_COMPONENT = EH_COMPONENT;

    String V_EH_AFFECTED_VERSION = EH_AFFECTED_VERSION;

    String V_EH_CUSTOMPROPXML = EH_CUSTOMPROPXML;

    String V_EH_LOG_JIRA = EH_LOG_JIRA;

    String V_EH_CONTINUE_ON_ERROR = EH_CONTINUE_ON_ERROR;

    String V_EH_SEND_MAIL_NOTIFICATION = EH_SEND_MAIL_NOTIFICATION;

    String V_EH_NOTIFICATION_RECIPENTS = EH_NOTIFICATION_RECIPENTS;

    String V_EH_JIRAJARPATH = EH_JIRAJARPATH;

    @ >

    END;

    <@ } @>

    I have fixed this problem myself. No need to look into it.

  • Error in my first interface

    Hi all

    I created my first interface with ODI 11g.

    Everything looks fine in the topology browser.

    I'm simply passing data from a source Oracle table to another target Oracle table in the same database.

    Source and target are on my Oracle 11g laptop in 2 different schemas, src_schema and tgt_schema, in the same database;
    I created a work_schema for ODI temporary processing.

    in the topology
    (1) for the source, the dataserver user is src_schema; in the physical schema, the schema name is again src_schema for both the schema and the work schema

    (2) for the target, the user is tgt_schema; in the physical schema, the schema name is still tgt_schema for the schema, and the work schema is work_schema

    For the models, all is fine; I reverse-engineered the source customer table, and also got the target table f_customer.

    granted connect, resource, dba to all 3 users above.

    Used these KMs:
    (1) LKM SQL to Oracle
    (2) IKM Oracle Incremental Update

    I set FLOW_CONTROL to false in the target.

    I found that the code below does not work when executed, because the SELECT statement fails.



    <code>
    my code
    /* code */

    You can define a logical primary key in ODI even where there is no key at the database level. In the interface, select the desired target column and set it as the key.
    You can also create it at the data store level (expand the data store and create a constraint).

    Chantal
    http://dwteam.in

  • Confidence level of VMmark 2.1

    Hello-


    I have a problem with the trust level setting in my staf.cfg file that I can't seem to get past.

    Here is my error; I have attached my staf.cfg file for reference:

    20110801-11: 44:32 Stafcmd process: tile 0: OlioWeb: restore filestore could not begin/end. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant's level of confidence 3 on machine localhost
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:32 Stafcmd process: tile 0: OlioWeb: restore filestore also returned: STAFResultContext. User display job log
    20110801-11: 44:32 Stafcmd: Tile0: FS copy file:ConfigOlioJSP.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant's level of confidence 3 on machine localhost
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:32 Stafcmd: Tile0: FS copy the file: ConfigServer_OLIOjspDB.sh failed with RC = 25, STAFResult = 4 confidence level required for the request for the COPY of the MSDS.
    Machine/user source trust level 3 on TOMACHINE olio - db.example.com
    Machine source: tcp://192.168.2.10 (tcp://192.168.2.10)
    The user source: none://anonymous
    20110801-11: 44:32 Stafcmd process: tile 0: OlioDB: restore DB failed to start/finish. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant a confidence level 3 on machine olio - db.example.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:32 Stafcmd process: tile 0: OlioDB: restore DB also returned: STAFResultContext. User display job log
    20110801-11: 44:32 Stafcmd: Tile0: FS copy file:ConfigOlioJSP.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant a confidence level 3 on machine olio - db.example.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:32 Stafcmd: FS copy the file: ConfigServer_DS2Web.sh failed with RC = 25, STAFResult = 4 confidence level required for the request for the COPY of the MSDS.
    Source machine/user is the trust level 3 on TOMACHINE DS2 - WEB.eng.vmware.com
    Machine source: tcp://192.168.2.10 (tcp://192.168.2.10)
    The user source: none://anonymous
    20110801-11: 44:32 Stafcmd process: tile 0: DS2WebA: clean up old logs could not begin/end. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:32 Stafcmd process: tile 0: DS2WebA: clean up old logs also returned: STAFResultContext. User display job log
    20110801-11: 44:32 Stafcmd: FS copy file:ConfigDS2web.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd: FS copy the file: ConfigServer_DS2Web.sh failed with RC = 25, STAFResult = 4 confidence level required for the request for the COPY of the MSDS.
    Source machine/user is the trust level 3 on TOMACHINE DS2 - WEB.eng.vmware.com
    Machine source: tcp://192.168.2.10 (tcp://192.168.2.10)
    The user source: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2WebB: clean up old logs could not begin/end. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2WebB: clean up old logs also returned: STAFResultContext. User display job log
    20110801-11: 44:33 Stafcmd: FS copy file:ConfigDS2web.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd: FS copy the file: ConfigServer_DS2Web.sh failed with RC = 25, STAFResult = 4 confidence level required for the request for the COPY of the MSDS.
    Source machine/user is the trust level 3 on TOMACHINE DS2 - WEB.eng.vmware.com
    Machine source: tcp://192.168.2.10 (tcp://192.168.2.10)
    The user source: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2WebC: clean up old logs could not begin/end. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2WebC: clean up old logs also returned: STAFResultContext. User display job log
    20110801-11: 44:33 Stafcmd: FS copy file:ConfigDS2web.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant a confidence level 3 on machine DS2 - WEB.eng.vmware.com
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd: FS copy the file: ConfigServer_DS2db.sh failed with RC = 25, STAFResult = 4 confidence level required for the request for the COPY of the MSDS.
    Source machine/user is confidence level 3 on TOMACHINE localhost
    Machine source: tcp://192.168.2.10 (tcp://192.168.2.10)
    The user source: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2DB: clean up old logs and restore the database failed to start/finish. Gave: RC = 25, STAFResult = trust level 5 required to request to START the service
    Applicant's level of confidence 3 on machine localhost
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 Stafcmd process: tile 0: DS2DB: clean up old logs and restore the database also returned: STAFResultContext. User display job log
    20110801-11: 44:33 Stafcmd: FS copy file:ConfigDS2DB.txt failed with RC = 25, STAFResult = 4 confidence level required for the COPY of the FS service request
    Applicant's level of confidence 3 on machine localhost
    Machine applicant: tcp://192.168.2.10 (tcp://192.168.2.10)
    User: none://anonymous
    20110801-11: 44:33 already done ListVMs
    20110801-11: 44:33 already done ListHosts
    20110801-11: 44:34 gHostNames: ['10.12.78.60', ' 10.12.78.61']: NumHosts 2
    20110801-11: 44:34 calculated sVMotionBurstQueueSize = 1
    20110801-11: 44:34 UserSpecified: NumLUNs: 1: TargetLUN (s) = "[" MSA_LUN14"].
    20110801-11: 44:34 already done ListVMs
    20110801-11: 49:28 Storage vMotion located TargetLUN and all virtual machines are on different LUNS
    20110801-11: 49:28 already done ListHosts
    20110801-11: 49:28 gHostNames: ['10.12.78.60', ' 10.12.78.61']: NumHosts 2
    20110801-11: 49:29 calculated DeployBurstQueueSize = 1
    20110801-11: 54:22 Info: model "standby_config": located
    20110801-11: 59:16 Info: Customizing OS "Vmark2_Standby_Custom": located
    20110801 12:04:11 info: DeployLUN "MSA_LUN14": located
    20110801 12:04:16 could not complete setup for the following 6 Wklds: ['OlioWeb Tile0 has no setup', 'OlioDB Tile0 has no setup: error copying ConfigOlioJSPdb.txt', 'DS2WebA Tile0 has no setup', 'DS2WebB Tile0 has no setup', 'DS2WebC Tile0 has no setup', 'DS2DB Tile0 has no setup']

    Yes, all virtual machines must have their individual STAF.cfg files updated for your test environment.  I believe this is mentioned in the troubleshooting section, but I'm not sure it is clearly spelled out in the benchmarking guide, so I'll file a bug on this to make sure it gets into the next version of the guide.
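    The RC=25 errors above mean the requesting machine's trust level (3) is below what the PROCESS START (5) and FS COPY (4) requests need. A sketch of the kind of STAF.cfg trust entry that would raise it, using the requester address from the log (check the STAF documentation for the exact syntax of your STAF version):

    ```
    # Grant trust level 5 to the prime client shown in the log
    TRUST LEVEL 5 MACHINE tcp://192.168.2.10
    ```

    Each VM's local STAF.cfg needs the equivalent entry, then STAFProc must be restarted for it to take effect.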

    -Joshua

  • DMU hangs when connecting after an aborted scan

    After a database shutdown in the middle of a DMU scan, we cannot connect to the database with DMU anymore.
    The GUI says "Connecting to the database...", the status panel says "Connection successful", and then DMU just hangs.

    DMU runs this query against the database:

    select count(*)
      from sys.source$ s, sys.obj$ o
     where s.obj# = o.obj#
       and o.type# = 13 /* TYPE */
       and s.rowid in (select decode(substr(cast(e.row_id as varchar2(4000)), 1, 1),
                                     '*', null, e.row_id)
                         from system.dum$exceptions e, sys.obj$ o, sys.user$ u
                        where e.obj# = o.obj#
                          and o.owner# = u.user#
                          and o.name = 'SOURCE$'
                          and u.name = 'SYS')

    It takes forever.

    Has anyone else had this problem?

    Would it be OK to just drop everything by running \dmu\admin\drop_repository.sql?

    I suspect an index on SYSTEM.DUM$EXCEPTIONS is missing. Unless your scan results are worth many hours of scan time, the best approach is indeed to run \dmu\admin\drop_repository.sql and then connect to the database with DMU as if it were the very first connection.
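    A quick way to confirm the missing-index theory before dropping anything, sketched as a SQL*Plus check (the index name below is hypothetical, and creating it is only a workaround for preserving scan results rather than dropping the repository):

    ```sql
    -- List any indexes currently on the DMU exceptions table
    select index_name, column_name
      from dba_ind_columns
     where table_owner = 'SYSTEM'
       and table_name  = 'DUM$EXCEPTIONS';

    -- Hypothetical workaround: index OBJ# so the subquery in the hanging
    -- statement no longer has to full-scan the exceptions table
    create index system.dum$exceptions_i1 on system.dum$exceptions (obj#);
    ```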

    -Sergiusz

  • STREAMS_POOL_SIZE for Data Pump on database 10.2.0.3

    Hi all

    We have a 10g Dev database. We do our usual data backups using the classic exp command every day.

    We want to change the process to use Data Pump, since our new 11g servers now use Data Pump.

    When we first ran Data Pump, most of the databases (we have about five on 10g) got an error saying that STREAMS_POOL_SIZE needed to be set; it did not exist in the init.ora before, and setting it to a value (e.g. 1 MB) solved the problem.

    My question is: why did expdp work on some 10g databases while impdp wouldn't? Does this mean expdp needs less memory than impdp?

    Or is it possible that there were fewer users connected while expdp was running?

    Thank you very much!

    MissGuided


    Regular backups are a job for RMAN, not expdp; use expdp only if you specifically want a logical export.
    To size the pool, you can check V$STREAMS_POOL_ADVICE for how much memory you need to configure.
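    A sketch of what that sizing check might look like (the 48M value is only an illustration; pick the smallest size where the estimated spills flatten out):

    ```sql
    -- Rows with STREAMS_POOL_SIZE_FACTOR = 1 correspond to the current size
    select streams_pool_size_for_estimate,
           streams_pool_size_factor,
           estd_spill_count,
           estd_unspill_count
      from v$streams_pool_advice
     order by streams_pool_size_factor;

    -- Then set the pool explicitly so Data Pump no longer complains
    alter system set streams_pool_size = 48M scope = both;
    ```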
