How to export PSIBWSDL from the source database

Hello

I get the following error when running UPGCOUNT. In the Metalink doc, it is suggested to export PSIBWSDL from the source database.

How should I do this?

Thank you.


What is the error? I don't see the error message.
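In case it helps while you chase the actual error: if the Metalink note wants the PSIBWSDL table exported from the source database, a Data Pump sketch might look like this (the SYSADM owner, password placeholder and file names are assumptions — check the note for the exact objects it asks for):

```shell
# Hypothetical single-table export; adjust owner/table to what the note lists.
expdp system/<password> DIRECTORY=DATA_PUMP_DIR DUMPFILE=psibwsdl.dmp \
      LOGFILE=psibwsdl_exp.log TABLES=SYSADM.PSIBWSDL
```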

Tags: Oracle Applications

Similar Questions

  • How to add the source database

    The source database server has 4 instances.
    I have already installed the Oracle Audit Vault Agent on this server.
    And when I add the source DB user, I can't find all the DV roles (e.g. DV_ACCTMGR).

    My approach is:
    On Server1 (Audit Vault Server)
    - Install Oracle Audit Vault Server on Server1
    - Add the agent on Server1: avca add_agent -agentname HOLTUU1 -agenthost 192.0.66.87

    On Server2 (Audit Vault Agent)
    - this server has the 4 instances
    - Create OS user 'oaudit', create ORACLE_BASE and ORACLE_HOME
    - Install Oracle Audit Vault Agent

    Perhaps I made a mistake in one or more steps.
    Please advise.

    Dear suredch,

    Yes, you are right; this will create a new Oracle Home, and you need to change the Home the AV agent uses.

    And the 2nd question, the difference between AV and DV?
    They are two different products; don't be confused. There is no need to have Database Vault on the source database. You can add your source database to the AV server without Database Vault.

    And the database configurations it asks you for are the recommended settings you need in place to add the source database to the AV server.

    Well my dear, you can contact me personally at [email protected] so I can give you a full doc covering the entire process step by step, which I used in my scenario; it would be the best practice for you, and you will understand.

    Regards

  • Which session performed bulk (DML) operations during a given period in the source database?

    Hi all


    How do I find which session was doing bulk (DML) operations during a particular period in the source database?


    The source database: SRCDB - version: 11.2.0.3 (2 node RAC on RHEL OS)

    Target database: TGTDB - version: 11.2.0.3 (Non-RAC)

    Source -> Target (unidirectional replication using GoldenGate Replicat)


    In our (source) database, I observed some bulk inserts/updates, and because of them the corresponding Replicat was delayed applying the inserts/updates, and a lag (processing gap) occurred on the target database.

    When I checked the source database, I was unable to find the session/SQL that caused the bulk INSERT/UPDATE/DELETE (batch), since the transaction had already finished.

    1. Is it possible to get, from history, the SQL_ID or session that generated the most redo/DML operations?

    For example: the current time is 11:00 and I need to get the session/SQL that caused bulk DML operations between 09:00 and 11:00.

    2. When I checked the SQL history (dba_hist_sqlstat) based on CPU time, I found some UPDATE statements. But I was unable to find which UPDATE SQL caused the most updates to a table.

    For example:

    update t1 set attribut1 = 'XX' where SHIPMENT_GID = :p1  /* it updated 100 rows */

    update t1 set attribut1 = 'XX', attribut2 = 'YY' where SHIPMENT_GID = :p2  /* it updated 5000 rows */

    Please provide relevant SQL to identify in above cases...

    Kind regards

    Veera

    column MODULE format a15

    column BEGIN_INTERVAL_TIME format a25

    column END_INTERVAL_TIME format a25

    select distinct a.*
    from
    (
      select sql.snap_id, sql.module, sql.sql_id,
             decode(t.command_type, 2,'INSERT', 6,'UPDATE', 7,'DELETE') cmd_type,
             s.begin_interval_time, s.end_interval_time,
             sql.executions_total, sql.rows_processed_total,
             sql.elapsed_time_total/1000000 "elapsed time (in seconds)"
      from   dba_hist_sqlstat sql, dba_hist_snapshot s, dba_hist_sqltext t
      where  s.snap_id = sql.snap_id
      and    sql.sql_id = t.sql_id
      and    s.begin_interval_time between to_date('03-Apr-2013 16:00:00', 'dd-mon-yyyy hh24:mi:ss')
                                       and to_date('03-Apr-2013 17:00:00', 'dd-mon-yyyy hh24:mi:ss')
      and    sql.sql_id in
      (
        select distinct stat.sql_id
        from   dba_hist_sqlstat stat
        join   dba_hist_sqltext  txt  on (stat.sql_id  = txt.sql_id)
        join   dba_hist_snapshot snap on (stat.snap_id = snap.snap_id)
        where  snap.begin_interval_time between to_date('03-Apr-2013 16:00:00', 'dd-mon-yyyy hh24:mi:ss')
                                            and to_date('03-Apr-2013 17:00:00', 'dd-mon-yyyy hh24:mi:ss')
        and    txt.command_type in (2, 6, 7)
      )
    ) a
    order by a.snap_id asc;
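    For question 1 (which session generated the bulk DML in a window), the AWR ASH history can also be queried directly — a sketch, assuming the Diagnostics Pack is licensed (timestamps follow the example above):

    ```sql
    -- Sampled activity per session/SQL for DML statements in the window;
    -- sessions with the most samples are the likely bulk-DML culprits.
    SELECT session_id, session_serial#, sql_id, COUNT(*) AS samples
    FROM   dba_hist_active_sess_history
    WHERE  sample_time BETWEEN TIMESTAMP '2013-04-03 09:00:00'
                           AND TIMESTAMP '2013-04-03 11:00:00'
    AND    sql_opname IN ('INSERT', 'UPDATE', 'DELETE')
    GROUP  BY session_id, session_serial#, sql_id
    ORDER  BY samples DESC;
    ```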

    I hope this SQL helps.

    Kind regards

    Harman

  • How to store images in the oracle database and get back on a jsff page in ADF?

    Dear sir,

    How to store images in the oracle database and get back on a jsff page in ADF?

    I have students and employees in my database and want to store their pictures against their ID.

    How to do this?

    Regards

    Hi,

    You can check the links below, which explain this.

    https://tompeez.WordPress.com/2011/11/26/jdev11-1-2-1-0-handling-imagesfiles-in-ADF-part-2/

    Johny tips: ADF: display image files from database as a popup in Application Web ADF

    See you soon

    AJ

  • How to export workspace settings in Photoshop CC?

    How do I export my workspace settings in Photoshop CC?

    Specifically, the location of my tools, palettes and so on.

    I created my workspace and saved it - it appears in my Window > Workspace menu, but there is no export option there.

    Other instructions I found (for CS6) say to go into the Edit > Presets > Preset Manager menu - but that has only brushes, swatches, etc. - no option I can find for the workspace layout.

    PC instructions, please. Feel free to include Mac, if they are different, so the next person can find a complete solution.

    (I hope this carries over to INDD & AI - or I'll post on those forums as well.)

    Please and thank you.

    If you can access another user's system files on the local or a remote computer, you can import that user's Photoshop preferences files.

  • Informatica Workflow not able to connect to the source database.

    Hello

    I completed the installation of OBI Apps. All test connections work properly, and I have also configured the relational connection in Informatica Workflow. The passwords, user names and connect strings are correct. Yet the tasks that need to connect to the source database always fail when I run an ETL. I checked the logs of the workflow sessions and got the following error:

    READER_1_1_1 > DBG_21438 Reader: Source is [UPG11i], [obiee] users
    READER_1_1_1 > CMN_1761 Timestamp event: [Fri Sep 05 18:01:37 2008]
    READER_1_1_1 > RR_4036 Error connecting to database []
    Database driver error...
    Function name: logon
    ORA-12154: TNS: could not resolve the connect identifier specified

    Database driver error...
    Function name: connect
    [Database error: unable to connect to the database using the user [obiee] and the connection string [UPG11i].]

    From DAC, I am able to connect to the databases. There seems to be a problem with the relational connections. What drivers are involved, and where should they be installed?

    Help, please.

    Thank you and best regards,
    Soumya.

    It seems that the tnsnames.ora file Informatica is pointing to does not have the appropriate entry for your database. The safest thing to do is to verify that all the Oracle settings contain the source database's connection information.
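    A quick first check is whether the connect identifier from the log resolves at all from the Informatica server (UPG11i is the connect string shown in the error above):

    ```shell
    # Run on the Informatica server, with the same ORACLE_HOME/TNS_ADMIN
    # environment as the Integration Service; failure here reproduces ORA-12154.
    tnsping UPG11i
    ```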

  • Reg: Exporting data from a physical standby database

    Hi all

    We have an Oracle 11gR1 Standard Edition One environment, and I need to export data from the physical standby database.

    Can anyone suggest how to do it safely (while the standby is up)?

    Kind regards

    Konda.

    Oracle Data Guard is available only as a feature of Oracle Database Enterprise Edition. It is not available with Oracle Database Standard Edition.

    So you must either export data only from the primary, or use EXP instead of EXPDP on the standby database, because EXPDP creates a temporary master table for the duration of the export (which is not possible on a read-only standby).

    Regards

    Mr. Mahir Quluzade
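    For the EXP route, a sketch (the schema name, password placeholder and connect string are illustrative; the standby must be open read-only while the export runs):

    ```shell
    # Classic export does not create a master table in the database,
    # so it can read from a read-only standby where EXPDP cannot.
    exp system/<password>@standby_tns OWNER=scott FILE=scott.dmp LOG=scott_exp.log
    ```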

  • Restore the problem - how to get rid of the corrupt database?

    People,

    Some dumb developers managed to trash our Oracle database by making a mistake in SQL Developer.

    This database has been / is using the "autobackup" option as proposed in the installation wizard.

    For some reason, it seems that the restore is not complete and consistent.

    I get an error when mounting the database:

    ORA-01248: file 5 was created in the future of incomplete recovery
    ORA-01110: data file 5: <path to file>

    Not sure I understand what happened, but I can live without this specific database, assuming the others are OK.

    Any advice on how to get rid of this specific database and keep using the rest of the server?

    The next step will be to understand why autobackup does not do what it is supposed to do...

    This is Oracle 11g Standard on Oracle Linux.

    Hello
    Before you start, can you confirm that you have a backup somewhere so that you are able to do a new restore?
    (please take one now)
    If so, you can try to drop datafile 5 from the database:

    startup mount
    alter database datafile 5 offline drop;
    alter database open resetlogs;

    If you are not sure whether the corruption/inconsistency extends into the SYS schema (or the controlfile), please do an export of the relevant schemas and re-create the database (as indicated by damorgan).
    Let us know your results.
    Kind regards
    Tycho

  • How to safeguard data in the SAP database?

    Can someone please tell me how to safeguard data in SAP databases?

    Thank you

    Aerts

    Hi AEK.

    Take a look at OSS note 105047 - Support for Oracle functions in the SAP environment; you will find this under

    14 Oracle Data Guard

    You can use "physical Standby".
    You cannot use "logical Standby".
    You are allowed to use Fast-Start Failover (FSFO), but it is not supported by SAP.
    You can use Data Guard Broker.
    You can use Maximum Performance mode, Maximum Availability mode and Maximum Protection mode.
    In the case of Maximum Availability and Maximum Protection, you must make sure the network connection is fast in order to avoid performance problems.
    Maximum Protection causes the primary database to terminate in the event of problems on the standby database.

    And you will find the 2010 Oracle white paper: http://www.oracle.com/us/solutions/sap/wp-ora4sap-dataguard11g-303811.pdf

    Maybe some SAP users have answers for you: http://scn.sap.com/community/oracle/content?query=guard

    Regards
    Kay

  • How to export table data with cell colouring according to value

    Hi all

    I use jdeveloper 11.1.1.6

    I want to export data from a table with a lot of formatting, such as colouring cells based on value and so on. How can I do this?

    You can use Apache POI - http://poi.apache.org/

    See this: http://www.techartifact.com/blogs/2013/08/generate-excel-file-in-oracle-adf-using-apache-poi.html

  • PE 9 - Why is my exported file larger than the source?

    Hello

    I am new to Adobe Premiere Elements 9. I have a digital video recorder connected to my TV. It produces MPEG-2 TS video files. I want to import them into PE9 just to cut out the ads.

    I tried with a source file that is 1 GB in size. I imported it and removed the ads, but when I export using Share > Computer and select settings that mimic the source, I still end up with a 3.7 GB file. I thought it should be smaller than 1 GB because the video is now about 5 minutes shorter.

    I chose my export settings:

    Multiplexer: TS

    Audio: left intact

    Video:

    Video basic settings:

    Quality = 4

    Everything else set to automatic (depending on source)

    Render at Maximum Depth: off

    Flow settings:

    Encoding: VBR, 1 Pass

    Bitrate - Custom level

    Target, Min and Max values all set to 15

    GOP settings:

    M frames: 3

    N frames: 12

    Is my DVR better at video encoding/compression than PE9, or are my export settings wrong? Shouldn't the exported file be less than 1 GB?

    As Steve says, there are two parameters that affect the size of the file:

    Duration

    Bitrate

    With a fixed duration, the only way to adjust the file size is to reduce the bitrate (lower bitrate = smaller file, but at the expense of quality).
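    That rule of thumb (size ≈ bitrate × duration) can be sanity-checked with a little arithmetic — the 15 Mb/s and 33-minute figures below are illustrative, ignoring audio and container overhead:

    ```python
    def size_gb(bitrate_mbps: float, minutes: float) -> float:
        """Approximate file size in GB for a constant video bitrate."""
        # Mb/s -> MB/s (divide by 8), minutes -> seconds, MB -> GB
        return bitrate_mbps / 8 * minutes * 60 / 1000

    # A ~33-minute programme at 15 Mb/s lands near 3.7 GB - in line with
    # the 3.7 GB export described above, regardless of the 1 GB source size.
    print(round(size_gb(15, 33), 1))
    ```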

    Now, some CODECs maintain perceived quality at a lower bitrate better than others, but it is still the bitrate that determines the file size.

    Some export/CODEC options list the bitrate directly in Mb/s, while others instead have a 'quality' setting; the Mb/s bitrate is not shown directly but is driven by that quality parameter.

    An important consideration is what will be done with the output file. If it will be used for additional editing, then file size is not a big problem, as I would aim for ultimate quality. If the output file is the final deliverable, then there are several considerations: the recipient's computer platform, the drive, and the required quality of the file.

    Good luck

    Hunt

  • How do I know if the database is RAC or single instance?

    Hello

    How can you find out whether a database is a single-instance database or RAC? Is there any table, view, or setting in the parameter file from which you can tell it is RAC?

    any suggestion will be appreciated


    Thanks and greetings
    VD
    11:41:57 SQL> show parameter cluster
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ---------------------------
    cluster_database                     boolean     FALSE
    cluster_database_instances           integer     1
    cluster_interconnects                string
    

    http://www.MCS.csueastbay.edu/support/Oracle/doc/10.2/server.102/b14237/initparams023.htm
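    Besides the cluster_database parameter shown above, a couple of quick SQL checks work from any session:

    ```sql
    -- PARALLEL is YES on a RAC instance, NO on a single instance
    SELECT parallel FROM v$instance;

    -- shows whether the RAC option is linked into the server binary at all
    SELECT value FROM v$option
    WHERE  parameter = 'Real Application Clusters';
    ```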

  • How to pass variables from the Source tab (SQL) to the Target tab (Java BeanShell) in ODI Knowledge Modules

    Hi all

    My name is Alessandro and I am new to the community.

    I have a problem with a custom KM step when I try to pass a variable from the source to the target.

    The ODI version that I use is 11.1.1.5.0.

    I created a KM step following the instructions in the Metalink document (Doc ID 728636.1).

    But when I insert the value of the variable into a test table, the value in the table is the name of the variable.

    Where am I going wrong?

    This is what I am doing.

    (1) I create a step in my KM with SQL on the source tab and Java BeanShell on the target tab. In the source tab I get the value from a query, with the column named "LAST_UPDATE".

    In the target tab, I assign the variable jv_last_update the value of the variable #LAST_UPDATE, with the same name as the selected column.

    source_tab.jpg target_tab.jpg

    (2) I created a second step where I insert the value of the variable jv_last_update into a table (to debug the value of the variable):

    insert_step.jpg

    (3) When I look at what I have in the table, the value in every row is the name of the variable, "#LAST_UPDATE".

    result_table.jpg

    Thanks in advance


    Alessandro

    Hi Alessandro,

    Interesting... I have not read the Oracle document you sent yet, but one thing I can guarantee you is that it doesn't work. If Oracle says it should work, maybe it's a bug, or maybe it has changed in newer versions of ODI. I am also on ODI 11.1.1.5 and I tried many different ways to pass SQL results to Java variables with all sorts of different labels, but none of them worked for me either. But I managed to do it a different way, so please see below if it matches your needs:

    It is all just one step; the second step is just to show that it worked:

    In the first step, on the source tab, select Oracle technology and point to the logical schema where you want to run the query:

    On the target tab, write the Java BeanShell code with the SQL that you want to run:

    Now your variable should have the correct value from your SQL. To test it, I just wrote a Jython 'raise' command in the second step:

    The result is 'X' as expected:

    Hope it helps.

    Thank you!

  • ODI filter on the source database

    Dear all,

    I'm trying to add filters on the source. I managed to add the source filter as xyzinterface.Period = 'JAN'.

    However, I would like the filter to be a list, such as Jan, Feb, Mar... Nov, Dec.

    Please suggest.

    838332 wrote:
    Dear all,

    I'm trying to add filters on the source. I managed to add the source filter as xyzinterface.Period = 'JAN'.

    However, I would like the filter to be a list, such as Jan, Feb, Mar... Nov, Dec.

    xyzinterface.Period in ('JAN', 'FEB', ...)

    Please suggest.

  • How to find changes in the source tables?

    How does the crawler know what has changed in a source table without a LASTMODIFIEDDATE column in the database source?

    Source table crawls cannot be incremental - the crawler must extract all rows and compare them with the rows previously crawled (via a hash "checksum" value) to see if they have changed.
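    The same compare can be sketched inside the database with ORA_HASH (the src_table name and columns are made up for illustration):

    ```sql
    -- state from the previous crawl: one hash per row
    CREATE TABLE crawl_snapshot AS
    SELECT id, ORA_HASH(col1 || '|' || col2) AS row_hash
    FROM   src_table;

    -- rows that are new or changed since that crawl
    SELECT s.id
    FROM  (SELECT id, ORA_HASH(col1 || '|' || col2) AS row_hash
           FROM   src_table) s
    LEFT  JOIN crawl_snapshot p ON p.id = s.id
    WHERE p.id IS NULL
       OR p.row_hash <> s.row_hash;
    ```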
