Network export in datapump

Hello... I'm not a DBA, and I'm trying to do a network export/import using Data Pump in SQL Developer. I'm supposed to do this via the DBA utility wizards in SQL Developer (View > DBA). I have the DATAPUMP_EXP_FULL_DATABASE privilege on the source and the target (and the corresponding import privilege as well).

Please guide me on the process (on the command line I know I need to add NETWORK_LINK=source_database_link to a normal export), but how can this be done in the wizard?

Please suggest.

Do you have Metalink access? If so, please check: Impdp fails over NETWORK_LINK with ORA-904 "PARENT_PROCESS_ORDER": invalid identifier (Doc ID 1170933.1)
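For reference, the command-line equivalent of a network-mode import would look roughly like this (a minimal sketch; the schema name and directory object are placeholders, not values from your system):

impdp username/password@target_db NETWORK_LINK=source_database_link SCHEMAS=MYSCHEMA DIRECTORY=DATA_PUMP_DIR LOGFILE=net_imp.log

With NETWORK_LINK the rows are pulled directly over the database link, so no DUMPFILE parameter is needed.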

Thank you

Tags: Database

Similar Questions

  • DataPump Export API - getting the number of rows exported

    Hello

    I'm working on a procedure to export the data in a table partition before dropping the partition. It will be run by the database Scheduler, which is why I want to run the Data Pump job using the API.

    I wonder if it is possible to get the number of rows exported. I would compare it with the number of rows in the partition before dropping it.


    Thank you

    Krystian

    Hello

    I don't know exactly in what form you want the number of rows exported per partition, but here are a few ideas:

    1. Create a log file using 'add_file':

    -- Add a log file
    dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
                           dbms_datapump.ku$_file_type_log_file);

    This is also included in my example below. Here is the content of DEPTJOB.log after the job completes (the Oracle directory object 'd' points to /tmp in my example):

    $ cat /tmp/DEPTJOB.log
    Starting "SCOTT"."DEPTJOB":
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows
    . . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows
    . . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows
    Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.DEPTJOB is:
      /tmp/dept.dmp
    Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    You can then review or extract the information from the log file.

    2. Keep the master table and query it for the completed rows.

    Use the parameter "KEEP_MASTER":

    -- Keep the master table from being deleted after the job completes
    dbms_datapump.set_parameter(h, 'KEEP_MASTER', 1);

    Here is my example; the query against the master table is at the end.

    $ sqlplus scott/tiger @deptapi

    SQL*Plus: Release 12.2.0.0.2 Beta on Fri Jan 22 12:55:52 2016

    Copyright (c) 1982, 2015, Oracle.  All rights reserved.

    Last successful login time: Friday, January 22, 2016 12:55:05-08:00

    Connected to:

    Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta

    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    Connected.

    SQL > SET FEEDBACK 1

    SQL> SET NUMWIDTH 10

    SQL > SET LINESIZE 2000

    SQL > SET TRIMSPOOL ON

    SQL > SET TAB OFF

    SQL > SET PAGESIZE 100

    SQL > SET SERVEROUTPUT ON

    SQL >

    SQL> Rem Save the old scott.dept table

    SQL> rename dept to dept_old;

    Table renamed.

    SQL >

    SQL> Rem Re-create it with partitions

    SQL> CREATE TABLE dept (deptno NUMBER, dname VARCHAR2(14), loc VARCHAR2(13))
      2  PARTITION BY HASH (deptno) PARTITIONS 3;

    Table created.

    SQL >

    SQL> Rem Populate the dept table

    SQL> insert into dept select * from dept_old;

    4 rows created.

    SQL>

    SQL> Rem Now create a Data Pump job to export SCOTT.DEPT using the API

    SQL> DECLARE
      h NUMBER;                    -- Data Pump handle
      jobState VARCHAR2(30);       -- To keep track of job state
      ind NUMBER;                  -- Loop index
      le ku$_LogEntry;             -- For WIP and error messages
      js ku$_JobStatus;            -- The job status from get_status
      jd ku$_JobDesc;              -- The job description from get_status
      sts ku$_Status;              -- The status object returned by get_status
      sql_stmt VARCHAR2(1024);
      partition_name VARCHAR2(50);
      rows_completed NUMBER;
    BEGIN
      --
      -- Set up the job based on the operation to perform.
      --
      h := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'DEPTJOB', NULL);
      dbms_datapump.add_file(h, 'dept.dmp', 'd', NULL,
                             dbms_datapump.ku$_file_type_dump_file, 1);

      -- Add a log file
      dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
                             dbms_datapump.ku$_file_type_log_file);

      dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
      dbms_datapump.metadata_filter(h, 'NAME_LIST', '''DEPT''');

      --
      -- Start the job.
      --
      dbms_datapump.set_parameter(h, 'SILENT', 'banner');

      -- Keep the master table from being deleted after the job completes
      dbms_datapump.set_parameter(h, 'KEEP_MASTER', 1);

      dbms_datapump.start_job(h);

      --
      -- Loop to grab output from the job and write it to the output log.
      --
      jobState := 'UNDEFINED';
      WHILE (jobState != 'COMPLETED') AND (jobState != 'STOPPED')
      LOOP
        dbms_datapump.get_status(h,
               dbms_datapump.ku$_status_job_error +
               dbms_datapump.ku$_status_wip, -1, jobState, sts);

        --
        -- If we received WIP or error messages for the job, display them.
        --
        IF (BITAND(sts.mask, dbms_datapump.ku$_status_wip) != 0)
        THEN
          le := sts.wip;
        ELSE
          IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          THEN
            le := sts.error;
          ELSE
            le := NULL;
          END IF;
        END IF;

        IF le IS NOT NULL
        THEN
          ind := le.FIRST;
          WHILE ind IS NOT NULL
          LOOP
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          END LOOP;
        END IF;
      END LOOP;

      --
      -- Release the job.
      --
      dbms_datapump.detach(h);

      --
      -- Any exceptions that propagate to this point are caught;
      -- the details are retrieved from get_status and displayed.
      --
    EXCEPTION
      WHEN OTHERS THEN
        BEGIN
          dbms_datapump.get_status(h,
                 dbms_datapump.ku$_status_job_error, 0,
                 jobState, sts);
          IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          THEN
            le := sts.error;
            IF le IS NOT NULL
            THEN
              ind := le.FIRST;
              WHILE ind IS NOT NULL
              LOOP
                dbms_output.put_line(le(ind).LogText);
                ind := le.NEXT(ind);
              END LOOP;
            END IF;
          END IF;

          BEGIN
            dbms_datapump.stop_job(h, 1, 0, 0);
          EXCEPTION
            WHEN OTHERS THEN NULL;
          END;

        EXCEPTION
          WHEN OTHERS THEN
            dbms_output.put_line('ORA-00000: An unexpected exception during ' ||
                                 'the exception handler. ' ||
                                 'sqlcode = ' || TO_CHAR(SQLCODE));
        END;
    END;
    /

    Starting "SCOTT"."DEPTJOB":
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows
    . . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows
    . . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows
    Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.DEPTJOB is:
      /tmp/dept.dmp
    Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    PL/SQL procedure successfully completed.

    SQL >

    SQL> Rem Query the master table for the number of completed rows

    SQL> column partition_name format a10

    SQL> column completed_rows format 9999

    SQL> SELECT partition_name, completed_rows FROM scott.deptjob WHERE base_object_name = 'DEPT';

    PARTITION_ COMPLETED_ROWS

    ---------- --------------

    SYS_P1581 2

    SYS_P1583 1

    SYS_P1582 1

    3 rows selected.

    SQL >

    SQL > "EXIT";

    3. You could even extract the information from the command-line invocation:

    $ sqlplus scott/tiger @deptapi.sql | grep 'exported' | awk '{print "Table: " $4, "loaded", $7, $8}'
    Table: "SCOTT"."DEPT":"SYS_P1581" loaded 2 rows
    Table: "SCOTT"."DEPT":"SYS_P1583" loaded 1 rows
    Table: "SCOTT"."DEPT":"SYS_P1582" loaded 1 rows

  • Advantages of Data Pump export/import over the original export and import

    Hello

    Please let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).

    Hello

    Please let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).

    There are many advantages to using DATAPUMP.

    For example, with the INCLUDE / EXCLUDE parameters you can filter exactly which objects and/or object types you intend to export or import, which is not easy with the original Export/Import (except for tables, indexes, constraints, ...).

    You can import directly over a NETWORK_LINK without using a dump file.

    You have many interesting features like COMPRESSION, FLASHBACK_SCN / FLASHBACK_TIME, ...

    You can use the PL/SQL API to perform your export / import instead of using the command-line interface.

    Moreover, DATAPUMP is much better optimized than the original Export/Import and uses direct path or external tables, ... and what about the REMAP_% parameters, which let you rename datafiles, schemas, tablespaces, ...
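    As a rough illustration of these parameters (the schema, directory and file names below are made up, not from the original post):

    expdp scott/tiger DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp SCHEMAS=SCOTT EXCLUDE=STATISTICS

    impdp scott/tiger DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp REMAP_SCHEMA=SCOTT:SCOTT_COPY REMAP_TABLESPACE=USERS:USERS2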

    There is a lot to say about DATAPUMP. You will find a very good overview of this tool at the following links:

    http://www.Oracle-base.com/articles/10G/OracleDataPump10g.php

    http://www.Oracle-base.com/articles/11g/DataPumpEnhancements_11gR1.php

    Hope this helps.
    Best regards
    Jean Valentine

  • Current date/time for FLASHBACK_TIME in a 10g DataPump export

    Does anyone know how to specify the current date and time for FLASHBACK_TIME in a 10gR2 expdp?

    11gR2 Data Pump helpfully allows systimestamp, so you can specify
    FLASHBACK_TIME=systimestamp

    However, 10gR2 Data Pump does not have this option. I could use the SCN or explicitly specify the date & time, but I was wondering whether it is possible to easily specify the current date/time in 10gR2 expdp.

    Thank you
    Jim

    I know there was some work done to allow specifying

    flashback_time = sysdate

    in 11g, but I don't remember whether it was a bug fix or not. I believe it was. I don't know if it was backported; it could have been. You can contact Oracle Support to see if there is a patch available for the version you are on.

    If you call it from a script, you can use something like:

    > set echo off
    > set linesize 132
    > connect username/password
    > set heading off
    > spool expdp.par
    > select 'flashback_time=' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') from dual;
    > spool off
    > exit

    Then, you can use this:

    expdp username/password parfile=expdp.par ...

    Hope this helps

    Dean

  • Import with Data Pump when the Data Pump export was executed as user SYS

    Hi all

    all I have is a dump file and the log file of a Data Pump export. The export was executed as user SYS:

    Export: Release 11.2.0.1.0 - Production on Wed Dec 3 12:02:22 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ;;;
    Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    Starting "SYS"."SYS_EXPORT_FULL_01": sys/***@database AS SYSDBA directory=data_pump_dir dumpfile=db_full.dmp logfile=db_full.log full=y
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 52.84 GB

    Now I have to import (with Data Pump) the users USER01 and USER02 from that export. But I don't know the names of the source database tablespaces.

    I want to keep the user names (USER01/USER02). That's why I created these users in the target database.

    The questions I have are:

    - Should I start the Data Pump import as user SYS?

    - What settings should I use to import the users USER01 and USER02 and their data into the target database? Since I do not know the names of the tablespaces

    in the source database, the REMAP_TABLESPACE parameter will not help.

    Any help will be appreciated

    J.

    Hi J.

    The questions I have are:

    - Should I start the Data Pump import as user SYS?

    No, you need to import with a user that has the "imp_full_database" role.

    - What settings should I use to import the users USER01 and USER02 and their data into the target database? Since I don't know the names of the tablespaces in the source database, the REMAP_TABLESPACE parameter will not help.

    Well, one idea is to generate a schema-level import SQLFILE and see in the DDL which tablespaces it will try to create the objects in:

    impdp \'/ as sysdba\' directory=<dir> dumpfile=<dump_file> schemas=USER01,USER02 sqlfile=<sql_file>
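    Once you can see the tablespace names in the generated DDL, a follow-up import could remap them; a rough sketch (the source tablespace name here is a placeholder):

    impdp \'/ as sysdba\' directory=<dir> dumpfile=<dump_file> schemas=USER01,USER02 remap_tablespace=<source_ts>:USERS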

    For more information, take a look at the import documentation

    http://docs.Oracle.com/CD/B19306_01/server.102/b14215/dp_import.htm

    Hope this helps,

    Kind regards

    Anatoli A.

  • DataPump: can dump files copied during the Data Pump export still be used?

    Hello

    Let's say that a Data Pump export with the settings

    DUMPFILE=exportdp_DUMPFILE_%U.dmp
    FILESIZE=2G
    PARALLEL=2

    will generate 20 dump files of 2G each. The Data Pump workers will create two dump files in parallel and write to both of them in parallel. As soon as a dump file reaches a size of 2G, the next file will be created and data will be written into it.
    My question now is:
    can files which have already reached the size of 2 GB be copied to another server? This would have the advantage that I don't have to wait until the end of the Data Pump export before starting the transfer of files to the other server. This would certainly save some time and speed up the Data Pump export/import process...
    Or should I wait until the end of the Data Pump export before the files can be copied - simply because Data Pump will still update the files with 'something'?

    Any help will be appreciated

    Rgds
    JH

    You could start copying when the files appear to be full, but then you may need to copy a bit more if they are updated after the copy. This only happens if your master table spans the dump files.

    Dean

  • DataPump export with the QUERY option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to export a few sets of records from production using a query. The query joins several tables. Since we have the BLOB data type, we export using Data Pump.

    We have lower environments, but they do not have the same set of data and tables, and therefore I am not able to simulate the same query there. So I created a small table and mocked up the query.

    My command is

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:'"where num < 3"' directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log

    The query in the command pulls two records. After running the command above, I see an 80 KB dump file,
    and in the export log file

    I see Total estimation using BLOCKS method: 64 KB.
    . . exported DUMP.DUMP1  4.921 KB  2 rows.

    My doubts are:
    (1) Is the command I am running correct?
    (2) The estimate said 64 KB, whereas it also says 4.921 KB was exported, but the dump file created is 80 KB. Was it exported correctly?
    (3) Given that I run it as the SYSTEM user, does it export all the data apart from the 2 rows? We must send the dump file to another department, and we should not export any data other than the query output.
    (4) If I do not use "tables=dump.dump1" in the command, the export becomes a big mess. I don't know which is right.

    Your answers will be most helpful.

    The short answer is 'YES', it did the right thing.

    The long answer is:

    The query in the command pulls two records. After running the command above, I see an 80 KB dump file,
    and in the export log file

    I see Total estimation using BLOCKS method: 64 KB.
    . . exported DUMP.DUMP1  4.921 KB  2 rows.

    My doubts are:
    (1) Is the command I am running correct?

    Yes, as long as your query is correct. Data Pump will only export the rows that match that query.

    (2) The estimate said 64 KB, whereas it also says 4.921 KB was exported, but the dump file created is 80 KB. Was it exported correctly?

    The estimate is made using the full table. Since you did not specify otherwise, it used the BLOCKS estimation method: basically, how many blocks have been allocated to this table. In your case, I guess that was 80 KB.

    (3) Given that I run it as the SYSTEM user, does it export all the data apart from the 2 rows? We must send the dump file to another department, and we should not export any data other than the query output.

    It will not export all the data, but it is going to export metadata. It exports the table definition, all indexes on it, all the statistics on the table and indexes, etc. This is why the dump file can be bigger. There is also a 'master' table describing the export job that gets exported. It is used by export and import to find out what is in the dumpfile, and where in the dumpfile those things are. It is not user data, but this table needs to be exported and takes up space in the dumpfile.

    (4) If I do not use "tables=dump.dump1" in the command, the export becomes a big mess. I don't know which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems your command is correct.

    If you do not want any metadata exported, you can add:

    content=data_only

    at the command line. This will export only the data, and when the dumpfile is imported, the table must already exist.
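    As an aside, putting the QUERY clause in a parameter file avoids most of the shell quoting headaches; a minimal sketch (the parfile name is made up):

    $ cat exp_dp.par
    tables=dump.dump1
    query=dump.dump1:"WHERE num < 3"
    directory=DATA_PUMP_DIR
    dumpfile=exp_dp.dmp
    logfile=exp_dp.log

    $ expdp system/<pwd>@orcl parfile=exp_dp.par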

    Dean

  • How to create a network directory in datapump

    Hello

    I use the original export utility to export the database to a network folder \\oranasxxx\oraclebackup. This has been possible because I can specify file=\\oranasxxx\oraclebackup\xxx.dmp

    However, I intend to move the backups to Data Pump, for which I need to create a directory object inside the database (create directory dir as '\\oranasxxx\oraclebackup').
    When I create a directory as above, Oracle cannot see the network directory.

    It says:

    ORA-39002: invalid operation
    ORA-39070: unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation

    Is there any way to create a directory object for a network location and take an expdp dump of the data directly to the network folder?

    Thanks for your suggestions.

    Hello

    Check Metalink note ID 428130.1 - Unable to export to a mapped network drive using DataPump; I think it explains how to use a network directory with Data Pump.
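    For reference, creating the directory object itself looks roughly like this (the directory name is just an example); the usual catch, which is what the note covers, is that it is the database server process - not your client session - that must be able to reach the share:

    CREATE OR REPLACE DIRECTORY net_backup_dir AS '\\oranasxxx\oraclebackup';
    GRANT READ, WRITE ON DIRECTORY net_backup_dir TO system;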

    Ioan

    Published by: ioan on 30.12.2009 07:46

  • Running more than one import from the same export

    Hello

    Is it a good idea (to save time) to run several imports (against different databases, of course) from the same set of export files, using the Data Pump export/import utility?

    I mean running several impdp commands against different databases using the same dump file.

    Kind regards.

    Hello

    I assume you would explicitly name the log file for each separate task, so that would not be a problem - you can even put the log file in a separate directory from the dumpfiles using the DIR:file syntax - it does not have to be the same directory as the dumpfiles.

    If you tried to use the same log file, then they would probably overwrite each other's data, or you would get a very messy file - but I don't think the jobs would fail. A sketch of what this could look like is below.
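    For example (names are hypothetical; same dump file, different target databases, separate log files):

    impdp system/password@db1 DIRECTORY=DUMP_DIR DUMPFILE=exp_full.dmp LOGFILE=LOG_DIR:imp_db1.log
    impdp system/password@db2 DIRECTORY=DUMP_DIR DUMPFILE=exp_full.dmp LOGFILE=LOG_DIR:imp_db2.log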

    Cheers,

    Rich

  • Tablespace-level export, schema-level import - is it possible?

    Hello

    If I have a tablespace-level Data Pump export (run with TABLESPACES=<list of tablespaces>), is it possible to import only the tables and dependent objects of a specific schema residing in the exported tablespaces? DB version is 11.2.0.3.0. According to the documentation, it should be possible: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#i1011943 : "A schema import is specified using the SCHEMAS parameter. The source can be a full, table, tablespace, or schema-mode export dump file set or another database."

    A quick test, however, seems to show that this is not so:

    (1) source DB - I have two tablespaces (TS1, TS2) and two schemas (USER1, USER2):

    SQL> select segment_name, tablespace_name, segment_type, owner
      2  from dba_segments
      3  where owner in ('USER1', 'USER2');

    OWNER  SEGMENT_NAME    SEGMENT_TYPE       TABLESPACE_NAME

    ------ --------------- ------------------ ----------------

    USER1 UQ_1 INDEX TS1

    USER1 T2 TABLE TS1

    USER1 T1 TABLE TS1

    USER2 T4 TABLE TS2

    USER2 T3 TABLE TS2

    (2) I run a tablespace-level export:

    $ expdp system directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp

    Export: Release 11.2.0.3.0 - Production on Fri Jul 11 14:02:54 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Start "SYSTEM". "" SYS_EXPORT_TABLESPACE_01 ": System / * Directory = dp_dir tablespaces = ts1, ts2 dumpfile = test.dmp

    Current estimation using BLOCKS method...

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 256 KB

    Processing object type TABLE_EXPORT/TABLE/TABLE

    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    . . exported "USER1"."T1"    5.007 KB    1 rows

    . . exported "USER1"."T2"    5.007 KB    1 rows

    . . exported "USER2"."T3"    5.007 KB    1 rows

    . . exported "USER2"."T4"    5.007 KB    1 rows

    Master table "SYSTEM"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded

    ******************************************************************************

    "(3) I'm trying to import only the objects belonging to User1 and I get the 'ORA-39039: schema '(' USER1')' expression contains no valid schema" error: "

    Impdp system directory $ = dp_dir patterns = USER1 dumpfile = test.dmp

    Import: Release 11.2.0.3.0 - Production on Fri Jul 11 14:05:15 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    ORA-31655: no data or metadata objects selected for job

    ORA-39039: Schema expression "('USER1')" contains no valid schemas.

    (4) However, the dump file clearly contains the owner of the tables:

    $ impdp system directory=dp_dir dumpfile=test.dmp sqlfile=imp_dump.txt

    excerpt from imp_dump.txt:

    -- new object type path: TABLE_EXPORT/TABLE/TABLE

    CREATE TABLE "USER1"."T1"

       ("DUMMY" VARCHAR2(1 BYTE)

       )

    So is it possible to somehow filter the objects belonging to a certain schema?

    Thanks in advance for any suggestions.

    Swear

    Hi Swear,

    This prompted a little investigation on my part which I thought was worthy of a blog post in the end...

    Oracle DBA Blog 2.0: A datapump bug or a feature and an obscure workaround

    Initially, I thought you had made a mistake, but that doesn't seem to be the way it behaves. I've included a few possible solutions - see if one of them does what you want to do...
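    For what it's worth, one possible workaround (not necessarily one of those described in the blog post) is to fall back to a table-mode import and name the schema's tables explicitly, since a table-mode import is allowed from a tablespace-mode dump file set:

    $ impdp system directory=dp_dir dumpfile=test.dmp tables=USER1.T1,USER1.T2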

    Cheers,

    Rich

  • Making exported data available for querying

    What is the best way to make a 25 GB table that has been exported with Data Pump available to users for querying only? External tables? If external tables are the answer, will I have to import it back in and then somehow get it out as comma-separated data? Let me know your thoughts.

    kirkladb wrote:
    What is the best way to make a 25 GB table that has been exported with Data Pump available to users for querying only? External tables? If external tables are the answer, will I have to import it back in and then somehow get it out as comma-separated data? Let me know your thoughts.

    External tables are mainly used for reading flat files, or files in a proprietary format.

    If you want to read the data in a table contained in an Oracle Data Pump export, then you will need to import it into a database first.

  • Problem with triggers after Data Pump import with schema remapping

    I did an export/import using Data Pump and remapped the schema name to a new name. Everything came across fine except for the triggers, which still reference the old schema name. Does anybody know a quick way I can change all of these triggers to point at the new schema?

    You can use the dbms_metadata.get_ddl function: run it against the old schema's triggers, edit the resulting scripts, and then run them against the new schema.

    declare
      r varchar2(4000);
    begin
      -- select dbms_metadata.get_ddl('TRIGGER', trigger_name) from user_triggers
      for c in (select trigger_name from user_triggers) loop
        r := dbms_metadata.get_ddl('TRIGGER', c.trigger_name);
        dbms_output.put_line('R = ' || r);
      end loop;
    end;
    /
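    For example, to spool the trigger DDL with the old schema name swapped for the new one (a rough sketch; OLD_SCHEMA and NEW_SCHEMA are placeholders):

    set long 100000 pagesize 0 trimspool on
    spool fix_triggers.sql
    select replace(dbms_metadata.get_ddl('TRIGGER', trigger_name, 'NEW_SCHEMA'), 'OLD_SCHEMA', 'NEW_SCHEMA')
    from dba_triggers where owner = 'NEW_SCHEMA';
    spool off

    Review fix_triggers.sql before running it, since a blind text replace can hit unintended matches.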

  • FLASHBACK_SCN option in datapump

    I am using the Data Pump FLASHBACK_SCN option by setting a variable in a script to the value of "SELECT dbms_flashback.get_system_change_number FROM dual;" in the following way.

    sqlplus "/ as sysdba" < < EOF
    Go head
    coil flashback_scn.output
    Dbms_flashback.get_system_change_number SELECT FROM dual;
    spool off
    "exit";
    EXPRESSIONS OF FOLKLORE
    SED ' / ^ $/ to flashback_scn.output | \
    SED-e "s / [< tab >] * / / g ' > flashback_scn.list
    Export FLASHBACK_SCN = 'cat flashback_scn.list '.

    The FLASHBACK_SCN variable is then passed to that option and the Data Pump export is run. It worked well on a test database, but there the SCN is a much smaller number than in production. Now I try to run it in production, but the value that comes back from "SELECT dbms_flashback.get_system_change_number FROM dual;" is as follows.

    GET_SYSTEM_CHANGE_NUMBER
    ------------------------
    9.2196E+12


    Data Pump does not see this as a valid SCN, and if I run a select statement using the "AS OF SCN" option, I just get the error "specified number is not a valid system change number".


    Can anyone suggest a way to get the value as a plain number and not a value raised 'to the power of' whatever?

    Kind regards.

    Graeme.

    Hi Graeme,
    It is only a display issue in SQL*Plus.
    Try this:

    sqlplus '/ as sysdba' << EOF
    set numwidth 22
    set head off
    spool flashback_scn.output
    SELECT dbms_flashback.get_system_change_number FROM dual;
    spool off
    exit
    EOF

    NUMWIDTH changes the display format for numbers (in this case to 22 digits; you can set it to whatever you want).

    Also, I would take the date and use FLASHBACK_TIME instead of FLASHBACK_SCN, but that is not really important.

    HTH
    Liron Amitzi
    Senior DBA Consultant
    http://www.dbsnaps.com

  • Export of dmp + compress (using a pipe)

    Hello

    can you help me correct this procedure?
    I need to compress the dmp file on the fly to save space.
    ---
    #!/bin/ksh
    export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
    export PIPE=exp.pipe
    rm $PIPE
    mknod $PIPE p
    gzip -c < $PIPE > exp.dmp.gz &
    $ORACLE_HOME/bin/expdp system/oracle@DB11G DUMPFILE=$PIPE LOGFILE=exp_pipe.log SCHEMAS=NOKRWW INCLUDE=TABLE
    ---

    the error I get now is:
    ---
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    With partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/u01/app/oracle/admin/DB11G/dpdump/exp.pipe"
    ORA-27038: created file already exists
    Additional information: 1
    ---

    Unlike classic export, Data Pump does not support writing to a pipe; use the COMPRESSION parameter instead:

    http://download.Oracle.com/docs/CD/B28359_01/server.111/b28319/dp_export.htm#BABCAJHC
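    For example, in 11g something along these lines could be used instead (a sketch based on the script above; note that COMPRESSION=ALL requires the Advanced Compression option, while COMPRESSION=METADATA_ONLY does not):

    expdp system/oracle@DB11G SCHEMAS=NOKRWW INCLUDE=TABLE DUMPFILE=exp.dmp LOGFILE=exp.log COMPRESSION=ALL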

    Werner

  • DataPump - workspace, users, applications

    Hey, this is a newbie question.

    We are looking at a Data Pump job to export our workspace, users, applications, and data.
    It would be run every night.
    To test it, we dropped the schema, recreated the schema and imported from the Data Pump export.

    We have all our tables, views, etc.
    But we did not get the workspace, users and applications.

    Is it possible to get these with Data Pump?

    (We know that we can create the workspace & schema within the APEX administration interface and import the workspace, etc., that was exported using APEX. However, our DBA suggested the other way round.)

    Hello

    It won't save your workspace, users or applications, since those do not live in your (parsing) schema; they are stored in the underlying APEX metadata.

    Currently there is no easy way to automate the export of workspaces, but you can automate the export of the applications themselves. You may find my blog post on this useful -

    http://Jes.blogs.shellprompt.NET/2006/12/12/backing-up-your-applications/

    Fortunately, the workspace definition does not tend to change frequently, so exporting it manually (along with the users) should not be as tedious as it would be to export the applications manually.

    Hope this helps,

    John.
    --------------------------------------------
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.sumneva.com (formerly http://www.apex-evangelists.com)
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    AWARDS: Don't forget to mark correct or useful posts on the forum, not only for my answers, but for everyone!
