User IMP in a different database schema

Scenario
=====
Oracle 9i Enterprise Edition (9.2.0.8)
Windows Server 2003 (32-bit)
--- to ---
Oracle 11g Enterprise Edition (11.2.0.3)
Windows Server 2008 R2 Standard (64-bit)


Hi all

I'm doing an upgrade from 9i to 11g and I'm using native exp/imp to migrate the data... I exported the 9i schemas into a dmp file and I'm going to import it into 11g. The 11g database is a new database with none of the existing schemas...

My newbie question is: should I manually create these exact schemas and their tables in 11g before importing the data? If so, there are many schemas and objects to create... one by one... This is the tedious problem that I face...
Is there a way around that? Thank you.

Hello
Old exp/imp creates no users unless you do a 'full' export (10g expdp/impdp will do that even for schema-level exports). You need to create all the users first if you exported from a list of schemas. Is it possible to re-export as FULL?
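
A minimal sketch of both options (usernames, passwords and file names here are hypothetical):

    REM Option 1: a FULL export carries the CREATE USER statements with it
    exp system/manager FULL=Y FILE=full9i.dmp LOG=full9i_exp.log
    imp system/manager FULL=Y FILE=full9i.dmp LOG=full9i_imp.log

    REM Option 2: keep the schema-level dump, but pre-create each user on 11g first,
    REM e.g. in SQL*Plus: create user scott identified by tiger quota unlimited on users;
    imp system/manager FROMUSER=scott TOUSER=scott FILE=schemas9i.dmp LOG=imp_scott.log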

Kind regards
Rich

Tags: Database

Similar Questions

  • moving a schema to a different database (datapump or?)

    Friends and Experts,

    DB: 11gR2

    OS: Linux

    (Sorry for the long post, but I wanted to give all the information.)

    I am moving a schema from host-1 to host-2; the schema size is 400 GB.

    I had planned to use datapump, since I'm most comfortable with this method, and to export everything, including statistics.

    One partitioned table is 300 GB (including indexes).

    There are 170 GB of segments with segment type LIKE '%TABLE%'.

    The schema has tables, partitioned tables, LOB segments, LOB indexes, indexes and index partitions.

    The export was then killed by a snapshot too old error after writing about 250 GB of dumpfile.

    Host-1 has only 4 CPUs, so I can't really use more than 2 parallel channels in the export parameter file; I tried 4 channels and the host load reached 10 within a few minutes. That export was killed on the spot to avoid bringing down the host.

    Host-2 is faster, so I don't expect delays during the import.

    We have no license for GoldenGate, but we do have Advanced Compression.

    Problem:

    I started the export, but it was so slow that it only generated 10 GB/hr; I let it run as a test and it failed after 14 hours with a snapshot too old error.

    Not many processes run on the host or against this schema; the main problem I see is that the host/disk is slow, and I don't see any realistic way to move this huge schema to another database. In the worst case I could lock the schema before the maintenance window for the 15 hours of export plus another 10 hours or so for the import, but I still don't think the export would finish.

    Questions:

    1. What can be done here to move the schema using datapump?

    2. Is there any other safe method to move the schema? I know this can be done through transportable tablespaces, but I have never done that, and this is a production schema, so I don't want to take the risk.

    3. Can anyone with a similar project share their experience?

    4. Any other advice/methods/suggestions?

    Export parameter file:

    DIRECTORY = DATA_PUMP_REFRESH

    DUMPFILE = EXP_01.DBF, EXP_02.DBF

    LOGFILE = EXP_USER1.LOG

    PARALLEL = 2

    SCHEMAS = USER1

    CONTENT = ALL

    I added the PARALLEL parameter and sized the dumpfiles according to the segment sizes.

    You should set your undo retention to at least the duration of your longest-running query (and ensure that the undo tablespace can grow by that much).
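
    For reference, a sketch of the check and setting meant here (the retention value is hypothetical; pick one longer than your longest query):

    -- longest query (in seconds) observed in the recent undo statistics
    select max(maxquerylen) from v$undostat;

    -- raise undo retention above that, e.g. 15 hours
    alter system set undo_retention = 54000;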

    Your 'senior' is correct in saying "we will not do this for the first time in production", but one could say that about anything. If it is available to you, transportable tablespace will probably be the fastest and easiest option.

    Yes, you will still need enough UNDO for a network-link Data Pump job, but if writing the dumpfile to disk is the cause of the speed problem, the network-link approach *may* be faster, which also means you need less undo.
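
    As a hedged sketch of such a network-link import (the link name src_link and schema are hypothetical): a database link on the new host points back to the source, and no dumpfile is written at all, because the data streams across the link:

    impdp system/manager NETWORK_LINK=src_link SCHEMAS=user1 DIRECTORY=data_pump_refresh LOGFILE=net_imp.log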

  • JCA database adapter issue because of the database schema dependency

    Hello

    I use the JCA database adapter to call a stored procedure. The adapter uses the database schema name when calling the stored procedure, and the schema name is also embedded in the namespaces of the XSD and WSDL files.
    If I have to use a different database schema name, how can I change it without a lot of code changes?

    Thank you!

    You can change this property in the JCA file

  • How to choose a work schema from a different database instead of the target database

    Hi everyone, can you please help with my problem.

    Q1. Is it mandatory that both the target schema and the work schema be in the same database? If not, how do I choose a work schema from another database instead of the target database's schema? Please suggest.

    __Description__

    My ODI work schemas are in one database and the target is in a different database.
    Now I'm creating the data server with either the target's credentials or the work schema's credentials.
    Here's the problem:

    I can't see the work schema in the drop-down list when creating the physical schema on a data server defined with the target's credentials.

    Obviously it cannot show a work schema that resides in another database, because the data server holds the target database's credentials.

    And vice versa.

    Please suggest

    Regards
    Chantal

    Hello

    Your KM will create the work tables in the schema you write to, and that's why you have the problem:

    create table <%=odiRef.getTable("L", "INT_NAME", "W")%>

    I don't know of an option to replace this 'W' that says "go and get the work schema from the staging area I've specified".

    I guess you could customize the KM and move all the commands to execute on the Source tab, or specify the schema for each of the steps to be the logical schema of the various staging areas.

    BUT I don't know if it would work throughout, and I think it would create a massive amount of unnecessary work.

    Is there a decent reason why you can't have the staging area on the target?

  • How to use third-party web service references with the Oracle Database Schema Cloud Service

    APEX 5.0

    Oracle Database Schema Cloud Service


    I'm in the middle of doing a proof of concept.  Basically, I need an application with stored data, a UI, user security, data loading, and the ability to post data to an external web service.  It seems that with the Oracle Database Schema Cloud Service, it is not possible to use web service references that are outside the domain.

    If I try to use a service reference via http, I get:

    ORA-20987: APEX - The requested URL was forbidden. Contact your application administrator.

    If I try to use the same service reference via https, I get:

    ORA-29273: HTTP request failed

    ORA-06512: at "SYS.UTL_HTTP", line 1130

    ORA-29259: end-of-input reached

    I read somewhere that on the cloud services only https can be used.  Is this true?

    And then I read somewhere that to use https, a wallet must be configured to store the certificates, etc.  However, I read somewhere else that a wallet cannot be configured because there is no access to the database instance with the Oracle Database Schema Cloud Service.  Is this true?

    If both are true, how can I make a call to post data to an external web service?  Or do I need to use a different cloud service?  Or do I need my own Oracle DB instance?

    Any help would be great.  Thank you!

    It turns out there was a problem with the remote REST service.  After successfully calling a REST service that I created using SQL Workshop, I tried different remote REST services and they all work.  Sorry for the confusion.  I thought it was very strange that the Database Schema Service wouldn't be able to do this easily.
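
    For reference, a minimal sketch of posting to an external REST endpoint from PL/SQL with the APEX_WEB_SERVICE API (the URL and payload here are hypothetical):

    declare
      l_response clob;
    begin
      l_response := apex_web_service.make_rest_request(
        p_url         => 'https://example.com/api/items',  -- hypothetical endpoint
        p_http_method => 'POST',
        p_body        => '{"name":"test"}' );
      dbms_output.put_line( l_response );
    end;
    /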

  • APEX on one database using a different database's security

    I'm new to APEX, so please forgive me if my question is elementary or comes across as ignorant.  My organization is using APEX for the first time and is looking to fill a specific role.  I don't know if what we want can be done with APEX.

    Here's what we want to do:

    We want to create an app of the APEX on A database.

    My APEX application will be used to modify tables in database B.

    Users have usernames, passwords and access set up on database B.

    When users access the APEX application, we want the application to use database B's security.  In other words, it connects using the IDs and passwords from database B.

    So:

    I go to the APEX application.

    It prompts me for a user ID; I enter the one I use when I log on to database B.

    It asks me for a password; I enter the one I use with my database B ID.

    I click OK.

    Forms are loaded with the data accessible to my ID on database B.

    Changes made on the forms are marked with my user ID as the one making the changes to the database B rows.

    In other words, I just want to use database A to build and host the application.  Anyone can run the application, but they must connect using their database B ID and password to make changes.

    (1) Is this possible?

    (2) How do we configure this in the APEX application?

    Thanks for your help on this.

    8dc1e333-95ad-4714-9820-16d3e4296c4d wrote:

    In other words, I just want to use database A to build and host the application.

    APEX does not work like that.

    APEX is nothing more than a bunch of PL/SQL code that runs on the database it is installed on, and it runs that code as the "parsing schema".

    Comment on "works on the basis of data"

    If you want APEX to read data from a different database, you use a DATABASE LINK

    for example, a report SELECT:

    Select * from scott.emp@db2

    I doubt the APEX wizards will like it, though.

    comment "works like «pattern analysis"»

    If your application's parsing schema is 'BOB', all SQL and PL/SQL code will run as BOB.

    The "DB"account "security" is more a misnomer.  APEX only checks that the entered name and password match that of the database that it is installed on.

    Once verified, APEX performs a 'switch user' to the parsing schema (proxy authentication in Oracle).

    That's how web applications work... they use a shared schema.

    WORKAROUND

    Connect to APEX via ORDS (the APEX Listener), not the EPG.  That will prevent people from accessing the database directly.

    The next thing you need is a dedicated PARSING SCHEMA for executing the SQL and PL/SQL.

    Like any other database USER, it shouldn't be the same schema as the one that contains all of your data. (parsing schema != data schema)

    Personally, I like to keep my 'workspace [schema]' separate as well.

    You will most likely need to use Virtual Private Database (VPD) to control data access.

    Required code changes

    If any of your code uses USER for a username column, you need to change it to COALESCE(V('APP_USER'), USER).

    (I prefer COALESCE over NVL because I anticipate other frameworks that work similarly to APEX.)
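
    As a sketch of that change, an audit trigger (table and column names are hypothetical) that records the APEX user when one is present and falls back to the database user otherwise:

    create or replace trigger emp_audit_biu
      before insert or update on emp
      for each row
    begin
      -- V('APP_USER') is set inside an APEX session; USER covers direct DB access
      :new.updated_by := coalesce( v('APP_USER'), user );
      :new.updated_on := systimestamp;
    end;
    /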

    MK

  • Can I plug a tablespace into a different database?

    Version: 11.2.0.3 / RHEL 5.8

    We have a non-prod database with a weekly schema-level backup using datapump. This DB contains QA schemas which are critical to our product releases.

    The SYSTEM tablespace datafile has been corrupted. Since the last logical backup taken with expdp 5 days ago, many changes were made to the business schema objects, so restoring the schema from the expdp dumpfile alone won't be of much help.

    We have a critical QA schema called LMFS_QA that uses a tablespace (LMFS_QA_DATA) with 4 datafiles. All the datafiles were fine when the DB went down because of the SYSTEM tablespace corruption.

    Is there any way I can plug this tablespace into another healthy database after the LMFS_QA user has been created in that DB?

    Is there any way I can plug this tablespace into another healthy database after the LMFS_QA user has been created in that DB?

    Sorry to say, but the short answer is no.  You cannot just plug these datafiles into a different database.

    Presumably you don't have any rman backups, or you wouldn't be asking this question?  Critical databases should be in archivelog mode and backed up using rman.
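
    A quick way to check that, for reference:

    -- should report ARCHIVELOG on a production database
    select log_mode from v$database;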

    Use your expdp file and re-apply your changes.

    Is there no way to recover your "corrupt" database?  Have you logged a call with Oracle?

  • Access to a database/schema on another server...

    Hello - I have a need that requires me to access additional information in a schema on a server different from the one my APEX installation runs on (i.e. 3rd-party data that is not part of the schema APEX natively accesses)...

    My APEX server runs version 3.2.1.00.11 on top of Oracle 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production. We will call this server "A".

    The target server is currently an Oracle server, but will eventually migrate to a Teradata installation (no, I have no say in the matter!). We will call this server "B". No idea what versions are involved in either case.

    I understand that if I needed to access different schemas on the same server (i.e. server A), it would be easy enough to do using grant statements.

    Also, I understand that to access a schema on server B, I could use a DB link.

    My questions are:

    (1) Does Oracle allow a DB link to Teradata? I found the following thread that seems to indicate that it is possible (or at least used to be; I don't know if it's still a valid configuration): Re: Teradata to Oracle connection

    (2) Is there another way to make this external data source available? A tech on our 'IT Architecture' team said DB links are not recommended or a best practice. He proposed adding the data source 'directly'?

    This thread (add multiple schemas to a workspace) seems to talk about adding multiple schemas to a workspace, but I don't have access to that part of our Oracle server (I'm only a workspace administrator).

    (3) If I create a view that accesses the tables through the DB link to server B while it is Oracle, and then update the DB link to point to the new Teradata server when the migration happens, will it break anything within APEX?

    My assumption is that, as long as the table names stay the same, APEX only cares that the view is valid, not what feeds the view.


    The rationale I was given against using DB links was that "it doesn't make sense for APEX to go down to the Oracle database (its native underlying server/schema), cross over to another server, and come back to the Oracle database, which then goes back up to APEX; it makes more sense for APEX to go straight to the other data source." Normally I would counter that they (APEX/infrastructure) are on the same server, so it doesn't really matter that it might have to go through 1 additional 'service' or 'interface', but this area is not my specialty.

    Also, technically speaking, our APEX service is already separated from its native schema/data on different servers (for load balancing), so in this case a middle server (i.e. APEX server A -> native schema/data server where the DB link would live -> server B) really might add an unnecessary extra hop compared to (APEX server A) -> server B. Note: I am sure our architect does not know this is the case, so it is not part of his rationale.

    Thoughts?

    Thank you!
    Jim

    (1) There is a wide variety of databases that you can access from Oracle using DB links. A main purpose of a DB link is to provide connectivity between databases that are not of the same type, through some kind of driver or translator. Few databases provide native connectivity to anything other than their own products.

    (2) I would ask that other dba what he or she recommends instead, and whether he or she has any documentation. The long and short of it is (as I explained above) that few database vendors provide native support for other database engines, which is what the kind of "direct connection" that person implies would require. My suspicion is that this person is an ODBC user and equates the use of ODBC drivers with a "direct connection", which is far from accurate. ODBC only provides a generic interface to a database, at the expense of speed, functionality and efficiency, because of command translation and overhead.

    (3) Oracle generally doesn't care what is on the far end of a connection to a different database, and neither does APEX. As long as you can build a database link to the database in question, you shouldn't have any problems querying its data, though with reduced performance due to round trips, networking, drivers, etc.

    Really, APEX is intended to run against, and integrates best with, Oracle databases. If your main data is on another platform, APEX may not be the best solution for your needs.
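
    If you do go the DB link route, a minimal sketch (link, user and table names here are hypothetical); when server B migrates, only the link definition is recreated, and the view name APEX sees never changes:

    -- link to server B; repoint this at the gateway after the migration
    create database link b_link
      connect to app_user identified by app_pwd
      using 'SERVER_B_TNS_ALIAS';

    -- wrap the remote table in a local view so APEX only depends on the view
    create or replace view remote_orders as
      select * from orders@b_link;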

  • Different databases with the same name in 1 cluster?

    Hello fellow DBAs,

    I wish to discuss with you the following situation:

    We have a 4 RAC Cluster node.

    Nodes 1 and 2 host 11.2.0.3 databases (Enterprise Edition)

    Nodes 3 and 4 host 11.2.0.4 databases (Standard Edition)

    GI 12.1 is on all nodes, and all are part of the one cluster.

    OS: Oracle Linux 6.5 (with ASM)

    Our company wants to migrate the EE databases from nodes 1 and 2 to nodes 3 and 4:

    - Downgrade from EE to SE

    - Upgrade from 11.2.0.3 to 11.2.0.4

    Normally I would do this by creating the databases with DBCA under a new name and migrating the data using Data Pump.

    The problem is that the company wants to keep the same database name on all nodes.

    Is it possible to have the same database name on all 4 nodes simultaneously (from 2 different Oracle RDBMS homes with different versions) while all 4 nodes share 1 GI?

    Is this a supported method?

    Let's see if I can describe it in more detail.

    You have a database named ORCL in your RAC environment. Create a new blank database named NEWDB. Perform an export dump of ORCL and import it into NEWDB. At this point, you have the data in the new database. Now stop the original database:

    srvctl stop database -d orcl -o immediate

    ORCL is no longer running on any node. We just need to allow users to connect to the new database with a service name. But before I can do that, I need to remove ORCL from the cluster registry:

    srvctl remove database -d orcl

    Now create a service named ORCL that connects to NEWDB:

    srvctl add service -d newdb -s orcl -r newdb1,newdb2,newdb3,newdb4

    Your applications still try to connect to "orcl" on this same cluster. They don't know the DB has changed its name.
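
    For reference, the client side of this trick is just a connect descriptor whose SERVICE_NAME is still orcl (host and port here are hypothetical):

    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )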

    So you change only the service name for the new database? I guess the ASM paths will have different names, right?

    ASM doesn't care. The ASM paths come into play when you create the new db.

    HTH,

    Brian

  • Oracle Database Schema Cloud Service... APEX CLOUD_SCHEDULER & E-mail procedure

    Hello professionals and experienced APEX users.

    I am currently using the Oracle Database Schema Cloud Service, which is linked to my Java SaaS extension service.

    I have a requirement to send reports to certain e-mail accounts daily. Hence the need to use the CLOUD_SCHEDULER API, the APEX_MAIL API, and a custom GET_REPORT function returning a BLOB, which converts a SQL result set into a CSV file to be attached to the e-mails.

    1. The GET_REPORT function returns the query result set as a CSV file in a BLOB. Works standalone.

    2. I created the SENDEMAIL procedure that accepts the sender and receiver e-mail addresses, a subject, and a BLOB (from GET_REPORT) to be attached to the e-mail, and then calls APEX_MAIL.PUSH_QUEUE. This works standalone.

    3. I created the ScheduleCSVToEMAIL procedure that accepts the e-mail to, from, cc, table filter text and table name; this procedure combines 1 and 2 as below...

    SQL_TEXT := 'SELECT * FROM '||TABLE_NAME||' WHERE ' || SQL_FILTER ||'';
    REPORT := GET_REPORT(SQL_TEXT);
         SENDENDVI_MAIL2(TO_ADDR,COMMA_CC, FROM_ADDR, EM_TITLE,EM_TITLE,null,null,REPORT,null);
    

    The above works fine when run standalone.

    4. I created the CLOUD_SCHEDULER program as below:

    BEGIN
      CLOUD_SCHEDULER.CREATE_PROGRAM(
      program_name => 'emailtest1',
      program_action => 'SENDENDVI_MAIL3',
      program_type => 'STORED_PROCEDURE',
      number_of_arguments=>10, enabled =>false
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>1,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>'[email protected]'
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>2,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>'[email protected]'
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>3,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>'[email protected]'
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>4,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>SYSDATE||'_ENDVI_VH_INSURANCE'
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>5,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>null
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>6,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>null
      );
       CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>7,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>null
      );
       CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>8,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>null
      );
       CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>9,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>'G_EMAIL_VH_INSURANCE_INFO'
      );
      CLOUD_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
      program_name => 'emailtest1',
      argument_position=>10,
      argument_type=>'VARCHAR2',
      DEFAULT_VALUE=>null
      );
      
      CLOUD_SCHEDULER.ENABLE('emailtest1');
    END;
    

    and

    BEGIN
      CLOUD_SCHEDULER.CREATE_JOB('emailtestrun1', program_name=>'emailtest1');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',1,'[email protected]');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',2,'[email protected]');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',3,'[email protected]');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',4,SYSTIMESTAMP||'_ENDVI_VH_INSURANCE');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',5,SYSTIMESTAMP||'_ENDVI_VH_INSURANCE');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',6,NULL);
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',7,NULL);
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',8,NULL);
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',9,'G_EMAIL_VH_INSURANCE_INFO');
      CLOUD_SCHEDULER.SET_JOB_ARGUMENT_VALUE('emailtestrun1',10,NULL);
      CLOUD_SCHEDULER.ENABLE('emailtestrun1');
    END;
    

    The above executed, but the job obviously failed; I queried the "USER_SCHEDULER_JOB_RUN_DETAILS" view and got the failure reason below.

    ORA-20001: This procedure must be invoked from within an application session.
    ORA-06512: at "APEX_040200.WWV_FLOW_MAIL", line 339
    ORA-06512: at "APEX_040200.WWV_FLOW_MAIL_API", line 97
    ORA-06512: at "F1ZKNWJD2RE1.SENDENDVI_MAIL3", line 56
    
    

    Please help with what I need to do to get my requirement working.

    Note: I already tried using the scheduler in the format below, which gave the same result:

    BEGIN
      CLOUD_SCHEDULER.create_job (
        job_name        => 'ENDVI_AUT_EM_ROCKET',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN SCHEDULE_CSV_EMAIL(''G_EMAIL_VHROCKETDETAILS'',''[email protected]'',''[email protected]'',''[email protected],[email protected]'',NULL,SYSTIMESTAMP||''_TEST SCHEDULE_CSV_EMAIL'',NULL,NULL,NULL); END;',
        repeat_interval => 'FREQ=MINUTELY; INTERVAL=3;',
        enabled         => TRUE);
    
    
      CLOUD_SCHEDULER.set_attribute (
        name      => 'ENDVI_AUT_EM_VH_INSURANCE',
        attribute => 'max_runs',
        value     => 20);
      CLOUD_SCHEDULER.enable(name => 'ENDVI_AUT_EM_VH_INSURANCE');
    END;
    

    Correct Answer

    Hi oladslw,

    You must perform an additional step before calling the APEX_MAIL API outside of an Application Express application. Two ways to achieve this are described in the APEX_MAIL documentation (see the first Note) and in the APEX_UTIL documentation. Another way is:

    for c1 in ( select workspace_id
                  from apex_workspace_schemas
                 where workspace_name = sys_context( 'userenv', 'current_schema' )
                   and rownum = 1 ) loop
        apex_util.set_security_group_id( p_security_group_id => c1.workspace_id );
    end loop;
    

    The code above retrieves your workspace_id from the APEX dictionary based on the current schema (yours), then sets the APEX security context. After that, you will be able to call APEX_MAIL within the same database session.
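
    Putting it together inside a scheduled procedure, a hedged sketch of the full call sequence (addresses, subject and the report query are hypothetical; GET_REPORT is the poster's own BLOB function):

    declare
      l_mail_id number;
    begin
      -- restore the APEX security context first (the loop from above)
      for c1 in ( select workspace_id
                    from apex_workspace_schemas
                   where workspace_name = sys_context( 'userenv', 'current_schema' )
                     and rownum = 1 ) loop
        apex_util.set_security_group_id( p_security_group_id => c1.workspace_id );
      end loop;

      -- queue the mail, attach the CSV, then flush the queue
      l_mail_id := apex_mail.send(
        p_to   => 'user@example.com',
        p_from => 'app@example.com',
        p_subj => 'Daily report',
        p_body => 'Report attached.' );
      apex_mail.add_attachment(
        p_mail_id    => l_mail_id,
        p_attachment => get_report( 'select * from dual' ),
        p_filename   => 'report.csv',
        p_mime_type  => 'text/csv' );
      apex_mail.push_queue;
    end;
    /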

    Thank you

    Vlad

  • How to create connection pools to two different databases in OBIEE

    Hello

    I have a requirement where a few reports come from one database and a few others from another database. I created two connection pools and added the connection details. But when I ran the report I got the error below.

    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist at OCI call OCIStmtExecute. [nQSError: 17010] SQL statement preparation failed. (HY000)

    Now, I would like to know if I can create connection pools to two different databases. If so, how is that done?

    Thank you!

    Hi Antonio,

    Cross-database joins are possible, no problem there. But you do not do it with a single database object in the physical layer, using 2 connection pools connecting to sources A and B and adding the objects of A and B as children of your single database.

    You create 2 databases in the physical layer, each with its own connection pool, and then you can do your cross-database joins in the physical diagram.

    As a picture is worth a thousand words, take a look:

    The physical diagram uses objects from A and B and defines the join rules between them; that is a cross-database join. But in the physical layer, databases A and B are 2 separate objects, each with its own connection pool.

  • REPLICAT: SQLEXEC lookup on a table residing in a different database

    Hi all

    I'm using GoldenGate 12c as an ETL tool.

    I need to do a lookup on a table that resides in a different database than the target one in a REPLICAT process, and insert the records into the target database when an insert on a source table is detected.

    I took a chance with SQLEXEC but I don't know how to make it work.

    Would my REPLICAT process have two USERID clauses, one connecting to the target database and the other to access the external lookup database?

    ggsci> edit params rep

    REPLICAT rep

    SETENV (ORACLE_SID = DB1)

    SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)

    USERID ggs_owner@db1, PASSWORD ggs_owner

    USERID ggs_owner@db2, PASSWORD ggs_owner

    MAP <source_table>, TARGET <target_table>, &

    SQLEXEC (ID insertLookup, &

    QUERY 'insert into schema1.table1 (select * from db2:schema2.tab2) where ...', &

    PARAMS (...)), &

    I did not find an example of a SQLEXEC lookup against a table residing in a different database.

    I would be grateful if someone could provide an example of this. Thank you in advance.

    OGG Admin Guide 17.4 (http://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_customcode.htm#i1047744)

    Oracle GoldenGate provides an event marker system, also known as the event marker infrastructure (EMI), which enables the Oracle GoldenGate processes to take a defined action based on an event record in the transaction log or in the trail (depending on the data source of the process).

    ---------------

    As the documentation quoted above says, OGG events are based on the transaction log or the trail, but in your case the event comes from the outside world (e.g. an external database) while the rest stays the same. In most cases users want to use or look up data based on event records; your need is different, and so is the event record. It shouldn't be a problem.
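
    As a sketch of one way to avoid a second USERID clause altogether (all names here are hypothetical): create a database link in the target database that points at the lookup database, and let SQLEXEC query through it:

    MAP src_schema.src_tab, TARGET tgt_schema.tgt_tab, &
    SQLEXEC (ID lookup, &
      QUERY 'select descr from schema2.tab2@db2_link where id = :p_id', &
      PARAMS (p_id = id)), &
    COLMAP (USEDEFAULTS, descr = @GETVAL(lookup.descr));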

    Haddi

  • How to read data with different XML schemas within a single connection?

    • I have an Oracle 11g database.
    • I access it via the JDBC thin driver, version 11.2.0.3, same version for xdb.
    • I have several tables, each with an XMLType column, all schema-based.
    • There are three different XML schemas registered in the DB.
    • I may need to read the XML data from multiple tables.
    • If all the XMLTypes use the same XML schema, there is no problem.
    • If the schemas are different, the second read throws a BindXMLException.
    • If I reset the connection between reads of XMLType columns with different schemas, it works.

    The question is: how can I configure the driver or the connection to be able to read data with different XML schemas without resetting the connection (which is expensive)?

    The code to get the data from the XMLType is the textbook case:

    ResultSet resultSet = statement.executeQuery( sql );
    String result = null;
    while (resultSet.next()) {
        SQLXML sqlxml = resultSet.getSQLXML(1);
        result = sqlxml.getString();
        sqlxml.free();
    }
    resultSet.close();
    return result;

    It turns out that I needed to serialize the XML on the server and read it as a BLOB. Like this:

    final Statement statement = connection.createStatement();
    final String sql = String.format(
        "select xmlserialize(content xml_content_column as blob encoding 'UTF-8') from %s where key='%s'",
        table, key );
    ResultSet resultSet = statement.executeQuery( sql );
    String result = null;
    while (resultSet.next()) {
        Blob blob = resultSet.getBlob( 1 );
        InputStream inputStream = blob.getBinaryStream();
        result = new Scanner( inputStream ).useDelimiter( "\\A" ).next();
        inputStream.close();
        blob.free();
    }
    resultSet.close();
    statement.close();
    System.out.println( result );
    return result;
    

    Then it works. Still, I can't get it to work with XMLType in the ResultSet. On the client, the XML unwrapping blows up when switching to another XML schema. A JDBC/XDB problem?

  • importing a database schema from 9i to 10g

    Hello

    We exported a dump of a 9i database schema and now I want to import it into a 10g database.

    May I know the proper method to do so?

    What are all the things to take care of?

    I know that we can only use 'imp' to import it.

    Please let me know...

    Kind regards

    Milan

    Milan wrote:
    Hello

    We exported a dump of a 9i database schema and now I want to import it into a 10g database.

    May I know the proper method to do so?

    What are all the things to take care of?

    I know that we can only use 'imp' to import it.

    Please let me know...

    Kind regards

    Milan

    If you are importing a single schema, you could just start importing and then correct any errors (or even drop the schema and re-import).

    In any case, this document is more than enough:

    http://www.orafaq.com/wiki/Import_Export_FAQ

    Just a suggestion: use the BUFFER parameter when importing:

    imp BUFFER=10000000 ...

    Regards

    Grosbois

  • Executing a SQL query across 2 different Oracle databases

    Hi all

    In Microsoft SQL Server, we can run this type of SQL query across 2 different databases:

    Select * from TEST1.dbo.GENERIC_TABLE1 union select * from TEST2.dbo.GENERIC_TABLE2;

    TEST1 and TEST2 here are 2 different databases.

    Can we do the same in Oracle?

    Of course you can do it.
    Create a [database link | http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/statements_5005.htm] from DB1 to DB2.

    Grant SELECT on the tables of the DB2 schema you want to connect with.

    And then you can query it like this:

    select * from DB1schema.emp
    union
    select * from emp@dblink_name;
    
