Best practices for the Data Pump export/import process?

We are trying to copy an existing schema to another, newly created schema. The Data Pump export of the source schema succeeded.

However, we ran into errors when importing the dump file into the new schema, with the schema, tablespaces, etc. remapped.
Most of the errors occur in PL/SQL... For example, we have views like the one below in the original schema:
"
CREATE the VIEW * oldschema.myview * AS
SELECT col1, col2, col3
OF * oldschema.mytable *.
WHERE coll1 = 10
.....
"
Quite a few functions, procedures, packages and triggers also contain "oldschema.mytable" in their DML (INSERT, SELECT, UPDATE), for example.

We get the following errors in the import log:
ORA-39082: Object type ALTER_FUNCTION: "TEST"."MYFUNC" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE: "TEST"."MYPROCEDURE" created with compilation warnings
ORA-39082: Object type VIEW: "TEST"."MYVIEW" created with compilation warnings
ORA-39082: Object type PACKAGE_BODY: "TEST"."MYPACKAGE" created with compilation warnings
ORA-39082: Object type TRIGGER: "TEST"."MY_TRIGGER" created with compilation warnings

Many of the actual errors / invalid objects in the new schema are due to:
ORA-00942: table or view does not exist

My questions are:
1. What can we do to correct these errors?
2. Is there a better way to do the import under these conditions?
3. Should we update the PL/SQL and recompile in the new schema? Or update it in the original schema first and then export?

Your help will be greatly appreciated!

Thank you!

@?/rdbms/admin/utlrp.sql

will recompile invalid objects across all schemas in the database. In your case, you are remapping from one schema to another, and utlrp will not be able to compile those objects.

The impdp SQLFILE option lets you generate the DDL from the export dump; change the schema name globally in that file and run the script in SQL*Plus. This should resolve most of your errors. If you still see errors after that, then proceed to utlrp.sql.
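A minimal sketch of that approach (the directory object, dump file, and schema names below are placeholders, not taken from this thread):

"
# remap.par -- Data Pump import parameter file; adjust all names
DIRECTORY=dp_dir
DUMPFILE=expdat.dmp
REMAP_SCHEMA=oldschema:newschema
SQLFILE=remapped_ddl.sql
"

Running impdp system parfile=remap.par then writes the (already remapped) DDL to remapped_ddl.sql without importing anything; globally replace the remaining hard-coded OLDSCHEMA. references inside the PL/SQL and view text, run the edited script as the new user in SQL*Plus, and finish with @?/rdbms/admin/utlrp.sql for anything still invalid.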

-André

Tags: Database

Similar Questions

  • Best practices for the storage of VMs and VHDs

    No doubt this question has been answered more than once... sorry.

    I would like to know the best practice for storing a VM and its virtual hard disk on a SAN.

    Is there any advantage to, or does it make sense to, keep them on separate LUNs?

    Thank you.

    It will really depend on the application in the virtual machine - but for most applications there is no problem storing everything on the same datastore.

  • What is the best practice for enumerations in ADF?

    Dear all,

    What is the best practice for enumerations in ADF?

    I need to add enumerations to my application, e.g. gender, marital status.

    How should I deliver them? Declarative custom components, or is there another way?

    Thank you.
    Angelique

    Check out the topic '5.3 Populating View Object Rows with Static Data' in the Dev Guide:
    http://download.Oracle.com/docs/CD/E17904_01/Web.1111/b31974/bcquerying.htm#CEGCGFCA

  • vSphere 5 networking best practices for using 4 x 1 GB NICs?

    Hello

    I'm looking for networking best practices for using four 1 GB NICs with vSphere 5. I know there are a lot of best practices for 10 GB, but our current config only supports 1 GB. I need to include management, vMotion, virtual machine (VM) and iSCSI traffic. If there are others you would recommend, please let me know.

    I found a diagram that resembles what I need, but it's for 10 GB. I think it would work...

    vSphere 5 - 10GbE SegmentedNetworks Ent Design v0_4.jpg (I got this diagram HERE - rights go to Paul Kelly)

    My next question is how much of a traffic load each of these takes across the network, percentage-wise.

    For example, 'Management' is very small and the only time it is really in use is during agent installation, when it uses 70%.

    I need the percentage of bandwidth, if possible.

    If anyone out there can help me, that would be so awesome.

    Thank you!

    -Erich

    Without knowing your environment, it would be impossible to give you an idea of the bandwidth usage.

    That said if you had about 10-15 virtual machines per host with this configuration, you should be fine.

    Sent from my iPhone

  • Best practices for image compression in DPS

    Hello! I've been reading up on best practices for image compression in DPS, and I read that source assets for panoramas, image sequences, pan-and-zoom images and audio skins are NOT resampled on download. You need to resize and compress them before dropping them into your article, because DPS will not do it for you. OK, can do!

    I've also read that source assets for slideshows, scrolling images, and buttons ARE resampled as PNG images. Does this mean that DPS will compress them for you when you build the article? Does this mean I shouldn't bother resizing these images at all? Can I just pop in the 15 MB, 300 DPI files used in the print magazine and DPS will compress them at article build - and this will have no effect on the file size?

    And is this also the case with static background images?


    Thanks for your help!

    All images are automatically resampled based on the size of the folio you create. You can put in any image resolution you want; it doesn't matter.

    Neil

  • Best practices for the use of reserved words

    Hello
    What is the best practice for the use of reserved words as column names?
    For example, if I insisted on using the word COMMENT as a column name, as follows:

    CREATE TABLE ...
    "COMMENT" VARCHAR2(4000)
    ...

    What impact down the track could I expect, and what problems should I be aware of when doing something like that?

    Thank you
    Ben

    The best practice is NOT to use reserved words anywhere.
    Developers are human beings. Humans have their moments of forgetting things.
    They will forget to use the "", or you can force them to use the "" everywhere.
    Both methods are Oracle-certified ways to end up in hell.

    ----------
    Sybrand Bakker
    Senior Oracle DBA
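
    To illustrate the burden Sybrand describes (a hypothetical table, not from the thread): once a column is created with a quoted identifier, every later reference must repeat the exact same quoting and case:

    "
    CREATE TABLE notes ("COMMENT" VARCHAR2(4000));

    SELECT comment FROM notes;    -- fails: COMMENT is a reserved word
    SELECT "COMMENT" FROM notes;  -- works, but only with this exact quoting and case
    "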

  • Best practices for the integration of the Master Data Management (MDM)

    I'm working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be near real time, but my findings to date suggest there is no such option. Any integration will involve some kind of schedule.

    Here are the options we came up with:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to Eloqua to push data into our MDM.

    My questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based / callback-driven)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get validated into it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would add identification fields to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent from the cloud connector step.

    There isn't really anything much like outbound messaging, unfortunately. You can have form submits send data to your server immediately (it would be a bit like running integration rules from the steps in form processing).

    Cheers,

    Ben

  • Best practices for applying sharpening in your workflow

    Recently I've been trying to get a better understanding of some of the best practices for sharpening in a workflow. I guess I hadn't realized it, but there are several places to sharpen. Which are the best? Are they additive?

    My typical workflow involves capturing an image with a professional DSLR in RAW or JPEG, importing it into Lightroom, and exporting to a JPEG file for the screen or for printing, both at a lab and locally.

    There are three places in this workflow to add sharpening: in the DSLR, manually in Lightroom, and when exporting to a JPEG file or printing directly from Lightroom.

    It is my understanding that no sharpening is applied to RAW images even if you add sharpening in your DSLR. However, sharpening will be applied to JPEGs from the camera.

    Back to my question: is it preferable to manually sharpen in the DSLR, in Lightroom, or to wait until you export or output your final JPEG or print? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?

    You have to treat the two formats differently. RAW files never have any sharpening applied by the camera, only JPEG files do. Sharpening is often considered a workflow with three steps (see here for a seminal paper on this idea).

    I. A capture sharpening step, which compensates for the loss of sharp detail due to the Bayer matrix and the anti-aliasing filter, and sometimes the lens or diffraction.

    II. A creative sharpening step, where some details in the image are "highlighted" with sharpness (think eyelashes on a model's face), and

    III. Output sharpening, where you compensate for the loss of sharpness due to scaling/resampling and the properties of the output medium (such as blur because of how a printing process works, or blur because of the way an LCD screen lays out its pixels).

    All three are implemented in Lightroom. I. and III. are essential and basically must always be performed. II. is up to your creative mind. I. is the sharpening that you see in the Develop panel. You need to zoom to 1:1 and optimize the settings. The defaults are OK but quite conservative. Usually you can increase the masking value a little so that you don't sharpen noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding the optimal settings here. It is for Camera Raw, but the principle remains the same. Most photos will benefit from a bit of optimization. Don't go overboard; just aim for good sharpness at 1:1.

    Stage II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for it. Stage III, however, is very much essential. This is done in the Export, Print, or Web panel. You can't really preview these things (especially print-oriented sharpening), and it will take a little experimentation to see what you like.

    For JPEGs, sharpening has already been done in the camera. You could add a small amount of extra capture sharpening in some cases, or simply lower the in-camera sharpening and then have more control in post, but generally it is best to leave it alone. Stages II and III, however, are still necessary.

  • Measuring time server-side? Best practices for a turn-based game

    Hello

    What would be the best practice for measuring time in a turn-based game?

    I was looking at the room timeout, but using that would mean that for each round I'd have to put users in a new room?

    Is there a way I can measure time server-side and keep users in the same room?

    If so, I could use PHP; otherwise, we would need Java, which allows measuring elapsed time.

    Cheers,

    G

    Hello

    You can definitely use PHP or Java - we provide server integration libraries for either. I don't know exactly what the use case is, so I can't comment on what makes the most sense, but if it is not information which must be totally secure, timing on the client can be a viable approach as well.

    Nigel

  • Best practices for handling data for a large number of indicators

    I'm looking for suggestions or recommendations on how best to manage a user interface with a "large" number of indicators. By large I mean enough to make the block diagram quite big and ugly after the data processing for each indicator is added. Data must be "unpacked" and then decoded, e.g. booleans, bit-shifted binary fields, etc. The indicators are updated once per second. I'm leaning towards a method that worked well for me before: binding a network shared variable to each indicator, then using several subVIs to process each particular piece of data and write to the appropriate variables.

    I was curious what others have done in similar circumstances.

    Bill

    I highly recommend that you avoid references.  They are useful if you need to update the properties of an indicator (color, font, visibility, etc.) or when you need to decide at run time which indicator to update, but they are not a good general solution for writing indicator values.  Do the processing in a subVI, but aggregate the data into an output cluster and then unbundle it for display.  It is more efficient (writing to references is slow) - and while that won't matter at a 1 Hz refresh rate, it is still not good practice.  It takes about the same amount of block-diagram space to build an array of references as it does to unbundle the data, so you're not saving space.  I know I sound very categorical about this; earlier in my career, I took over maintenance of an application that made excessive use of references, and it made it very difficult to follow where data came from and how it got there.  (By the way, that application also maintained both a pile of references and a cluster of data, the idea being that you would update the front-panel indicator through the reference any time you changed the associated value in the data cluster; unfortunately, people would often update only one or the other, leading to unexpected behavior.)

  • Best practices for a "data" drive shared between VMs?

    Hello

    So on my ESXi box, I have a 250 GB drive. I was wondering what the best practice is for having a "data" drive shared between VMs? I'm pretty new to virtualization, so I'd appreciate your views.

    I would basically have the following drive configuration...

    Win 2008 R2 - 60 gb

    Win 2008 R2 - 60 gb

    Ubuntu 10.10 - 20 GB

    DATA (shared between the two 2008 boxes) - 100 GB

    Thank you.

    The only way to do this is to assign the drive to one virtual machine and create a network share. Unless you use a file system that supports concurrent file access, attempting to present the disk to several systems would likely end in data corruption.

    André

  • Best practices for storing program config data on Vista?

    Hi all,

    I'm looking for recommendations about where (and how) to best store program configuration data for a LV executable running under Vista.  I need to store a number of things like the window location, control values, etc.  Under XP I stored it right in the VI's own execution path.  But in Vista, certain directories (for example, C:\Program Files) are now restricted without administrator rights, so if my program runs from there, I don't think it's going to be able to write its configuration file.

    Also, at the moment I'm just using Write to Spreadsheet File to store my variables.  Is that good, or are there better suggestions?

    Thank you!

    For configuration data, I use the Configuration File VIs or the OpenG Configuration VIs. The INI format proved to be flexible during development (adding new values is backward compatible).

    XML would be nice, but the current implementation is inflexible when you add/change the data structure (the old configuration file cannot be reused).

    If the amount of data is small (just window position and size, or a few file paths), the Windows registry is an alternative (giving a unique set for each user, if you wish).

    I have no experience with Vista.

    Felix

  • Formula needed, and best practices, for calculating database storage

    Hi Oracle gurus,

    I need your help calculating the storage requirement for a production database.

    Thank you

    Hitgon

    I query DBA_DATA_FILES to show total allocated space:

    SELECT SUM (bytes) AS allocated_bytes FROM dba_data_files;

    And for "used" space, I run this:

    SELECT SUM (bytes) AS used_bytes FROM dba_segments;

    We don't need to digress into a discussion of what is truly used, as everyone knows there is unused space within DBA_SEGMENTS. But it works for management!

    I have an automated report that is sent to me monthly. The same report also breaks it down by tablespace... allocated and used, as I noted above. Then I put it in Excel to generate a graph.

    Cheers,
    Brian
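
    The per-tablespace breakdown Brian mentions could be sketched roughly like this (a simple combination of the two queries above; column names are from the Oracle data dictionary, and the layout is just one way to do it):

    "
    SELECT f.tablespace_name,
           SUM(f.bytes) AS allocated_bytes,
           (SELECT SUM(s.bytes)
              FROM dba_segments s
             WHERE s.tablespace_name = f.tablespace_name) AS used_bytes
      FROM dba_data_files f
     GROUP BY f.tablespace_name;
    "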

  • Best practices for virtual drive configuration on UCS C210

    Hello

    I have two C210 M2 servers with the LSI MegaRAID 9261-8i 6G card, each with 10 x 135 GB HDDs. When I tried the automatic RAID configuration selection, the system created a virtual disk with RAID 6. My question is: what is the best practice for configuring the virtual drive? Is it RAID 1 plus RAID 5, or all in a single drive with RAID 6? Any help will be appreciated.

    Thank you.

    Since you've decided to run the UC apps on this server, the voice applications have their recommendations specified here:

    http://docwiki.Cisco.com/wiki/Tested_Reference_Configurations_%28TRC%29

    I think your C210 server specifications might correspond to TRC #1, where you need to have

    RAID 1 - first two drives for VMware

    RAID 5 - the remaining 8 hard drives in the datastore for the virtual machines (CUCM and CUC)

    HTH

    Padma

  • Best practices for UCS Manager for the smooth running of our environment

    Hi team

    We are standing up a data center with Cisco UCS blades. I want a UCS Manager best practices guide to check that everything is configured correctly, in accordance with Cisco's recommendations and standards, for the smooth running of the environment.
    Can someone provide suggestions? Thank you.

    Hey Mohan,

    Take a look at the following links. They should provide an overview of the information you are looking for:

    http://www.Cisco.com/c/en/us/products/collateral/servers-unified-computi...

    http://www.Cisco.com/c/en/us/support/servers-unified-computing/UCS-manag...

    HTH,

    Wes
