DataPump

Hi all

We are on 11.2.0.3 and I want to split the export into several .dmp files. Please let me know whether my syntax is correct or not. I'd appreciate your help.

expdp \"sys/<password>@<db> as sysdba\" schemas=PS_DW directory=AP_DATAPUMP dumpfile=PS_DW%U.dmp logfile=PS_DW_`date +%m%d%Y_%H%M%S`.log parallel=10 cluster=N compression=METADATA_ONLY

The syntax is correct, and you can even restrict the size of each dump file with the FILESIZE parameter.
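For instance (a sketch with assumed names and sizes), FILESIZE caps each piece of the dump file set while %U numbers the pieces:

```shell
# Hypothetical example: no piece of the dump file set grows past 5 GB.
# %U expands to a two-digit sequence number (01, 02, ...).
expdp \"sys/<password> as sysdba\" full=y directory=AP_DATAPUMP \
      dumpfile=PS_DW_%U.dmp filesize=5G parallel=10 \
      compression=METADATA_ONLY logfile=ps_dw.log
```

Data Pump starts a new PS_DW_02.dmp, PS_DW_03.dmp, ... whenever the current piece reaches the FILESIZE limit.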

I tried it on a test database; here is the result:

bash-3.2$ expdp full=y dumpfile=exp_%U.dmp compression=metadata_only logfile=PS_DW_`date +%m%d%Y_%H%M%S`.log parallel=4

Export: Release 11.2.0.2.0 - Production on Tue Aug 26 23:28:05 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

..

..

..

Dump file set for SYS.SYS_EXPORT_FULL_01 is:

/oraeng/product/base/Admin/testdb/dpdump/exp_01.dmp

/oraeng/product/base/Admin/testdb/dpdump/exp_02.dmp

/oraeng/product/base/Admin/testdb/dpdump/exp_03.dmp

/oraeng/product/base/Admin/testdb/dpdump/exp_04.dmp

Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 23:30:13

I hope it helps.

Kind regards

Bigot

Tags: Database

Similar Questions

  • Export of DataPump API - number of rows exported using

    Hello

I'm working on a procedure to export the data in a table before dropping a partition. It will be run by the database Scheduler, which is why I want to run the datapump job using the API.

I wonder if it is possible to get the number of rows exported. I would compare it with the number of rows in the partition before dropping the partition.


    Thank you

    Krystian

    Hello

I don't know exactly how you want the number of rows per partition that were exported, but here are a few ideas:

1. Create a log file by using 'add_file':

-- Add a log file

dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
                       dbms_datapump.ku$_file_type_log_file);

It is also included in my example below.  Here is the content of DEPTJOB.log after the job completes (located in the Oracle directory object 'd' in my example):

    $ cat /tmp/DEPTJOB.log

Starting "SCOTT"."DEPTJOB":

Processing object type TABLE_EXPORT/TABLE/TABLE

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

. . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows

. . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows

. . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows

Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded

    ******************************************************************************

Dump file set for SCOTT.DEPTJOB is:

  /tmp/dept.dmp

Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    You can then review or extract the information from the log file.
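For example, a small awk filter (a sketch; it assumes the log line format shown above) pulls out the per-partition row counts:

```shell
# Sample lines in the format Data Pump writes to DEPTJOB.log
printf '%s\n' \
  '. . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows' \
  '. . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows' |
awk '/exported/ { print $4, $(NF-1), "rows" }'   # $4 = qualified name, $(NF-1) = row count
```

Each matched log line comes out as the partition-qualified table name followed by its row count.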

2. Keep the master table and query it for the completed rows.

    Use the parameter "KEEP_MASTER":

-- Keep the master table from being deleted after the job completes

dbms_datapump.set_parameter(h, 'KEEP_MASTER', 1);

Here's my example; the query against the master table is at the end.

    $ sqlplus scott/tiger @deptapi

SQL*Plus: Release 12.2.0.0.2 Beta on Fri Jan 22 12:55:52 2016

    Copyright (c) 1982, 2015, Oracle.  All rights reserved.

Last Successful login time: Fri Jan 22 2016 12:55:05 -08:00

    Connected to:

Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    Connected.

    SQL > SET FEEDBACK 1

SQL > SET NUMWIDTH 10

    SQL > SET LINESIZE 2000

    SQL > SET TRIMSPOOL ON

    SQL > SET TAB OFF

    SQL > SET PAGESIZE 100

    SQL > SET SERVEROUTPUT ON

    SQL >

SQL > Rem save the old scott.dept table

SQL > rename dept to dept_old;

Table renamed.

    SQL >

    SQL > Rem re-create it with partitions

SQL > CREATE TABLE dept (deptno NUMBER, dname VARCHAR2(14), loc VARCHAR2(13))
  2  PARTITION BY HASH (deptno) PARTITIONS 3;

Table created.

    SQL >

    SQL > Rem fill the dept table

    SQL > insert into dept select * from dept_old;

4 rows created.

    SQL >

SQL > Rem now create the datapump job to export SCOTT.DEPT using the API

SQL > DECLARE
  2    h NUMBER;                  -- Datapump handle
  3    jobState VARCHAR2(30);     -- To keep track of job state
  4    ind NUMBER;                -- Loop index
  5    le ku$_LogEntry;           -- For WIP and error messages
  6    js ku$_JobStatus;          -- The job status from get_status
  7    jd ku$_JobDesc;            -- The job description from get_status
  8    sts ku$_Status;            -- The status object returned by get_status
  9    sql_stmt VARCHAR2(1024);
 10    partition_name VARCHAR2(50);
 11    rows_completed NUMBER;
 12
 13  BEGIN
 14  --
 15  -- Set up the job based on the operation to perform.
 16  --
 17    h := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'DEPTJOB', NULL);
 18    dbms_datapump.add_file(h, 'dept.dmp', 'd', NULL,
 19                           dbms_datapump.ku$_file_type_dump_file, 1);
 20
 21    -- Add a logfile
 22    dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
 23                           dbms_datapump.ku$_file_type_log_file);
 24
 25    dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
 26    dbms_datapump.metadata_filter(h, 'NAME_LIST', '''DEPT''');
 27
 28
 29  --
 30  -- Start the job.
 31  --
 32    dbms_datapump.set_parameter(h, 'SILENT', 'banner');
 33
 34    -- Keep the master table from being deleted after the job completes
 35    dbms_datapump.set_parameter(h, 'KEEP_MASTER', 1);
 36
 37    dbms_datapump.start_job(h);
 38
 39  --
 40  -- Loop to grab output from the job and write it to the output log.
 41  --
 42    jobState := 'UNDEFINED';
 43    WHILE (jobState != 'COMPLETED') AND (jobState != 'STOPPED')
 44    LOOP
 45      dbms_datapump.get_status(h,
 46               dbms_datapump.ku$_status_job_error +
 47               dbms_datapump.ku$_status_wip, -1, jobState, sts);
 48
 49      --
 50      -- If we received WIP or error messages for the job, display them.
 51      --
 52      IF (BITAND(sts.mask, dbms_datapump.ku$_status_wip) != 0)
 53      THEN
 54        le := sts.wip;
 55      ELSE
 56        IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
 57        THEN
 58          le := sts.error;
 59        ELSE
 60          le := NULL;
 61        END IF;
 62      END IF;
 63
 64      IF le IS NOT NULL
 65      THEN
 66        ind := le.FIRST;
 67        WHILE ind IS NOT NULL
 68        LOOP
 69          dbms_output.put_line(le(ind).LogText);
 70          ind := le.NEXT(ind);
 71        END LOOP;
 72      END IF;
 73    END LOOP;
 74
 75  --
 76  -- Release the job.
 77  --
 78    dbms_datapump.detach(h);
 79
 80  --
 81  -- Any exceptions that propagated to this point will be captured.
 82  -- The details are retrieved from get_status and displayed.
 83  --
 84  EXCEPTION
 85    WHEN OTHERS THEN
 86      BEGIN
 87        dbms_datapump.get_status(h,
 88                 dbms_datapump.ku$_status_job_error, 0,
 89                 jobState, sts);
 90        IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
 91        THEN
 92          le := sts.error;
 93          IF le IS NOT NULL
 94          THEN
 95            ind := le.FIRST;
 96            WHILE ind IS NOT NULL
 97            LOOP
 98              dbms_output.put_line(le(ind).LogText);
 99              ind := le.NEXT(ind);
100            END LOOP;
101          END IF;
102        END IF;
103
104        BEGIN
105          DBMS_DATAPUMP.STOP_JOB(h, 1, 0, 0);
106        EXCEPTION
107          WHEN OTHERS THEN NULL;
108        END;
109
110      EXCEPTION
111        WHEN OTHERS THEN
112          dbms_output.put_line('ORA-00000: An unexpected exception during ' ||
113                               'the exception handler. ' ||
114                               'sqlcode = ' || TO_CHAR(SQLCODE));
115      END;
116  END;
117  /

Starting "SCOTT"."DEPTJOB":

Processing object type TABLE_EXPORT/TABLE/TABLE

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

. . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows

. . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows

. . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows

Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.DEPTJOB is:

  /tmp/dept.dmp

Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    PL/SQL procedure successfully completed.

    SQL >

SQL > Rem query the master table for the number of completed rows

SQL > column partition_name format a10

SQL > column completed_rows format 9999

SQL > SELECT partition_name, completed_rows FROM scott.deptjob WHERE base_object_name = 'DEPT';

    PARTITION_ COMPLETED_ROWS

    ---------- --------------

    SYS_P1581 2

    SYS_P1583 1

    SYS_P1582 1

3 rows selected.

    SQL >

SQL > exit

3. You could even extract the information from the command-line call:

$ sqlplus scott/tiger @deptapi.sql | grep exported | awk '{print "Table:", $4, "loaded", $7, $8}'

Table: "SCOTT"."DEPT":"SYS_P1581" loaded 2 rows

Table: "SCOTT"."DEPT":"SYS_P1583" loaded 1 rows

Table: "SCOTT"."DEPT":"SYS_P1582" loaded 1 rows

  • Cannot export TEXT Index preferences via datapump

    Hello

Source db: 10.2.0.3

    Source o/s: Win 2003

    Target db: 12.2.0.1

Target o/s: Win 2012

    Please bear with me, since I am new to the Oracle TEXT and I do not have a background of developer side Oracle.

I took a datapump export of my user schema on the source db and did a datapump import into the target db, and it failed to create an index on one of the tables (error DRG-10700). After some research on Google and searching through Oracle metalink, I found that my table contains a text search index for which certain CTX preferences present in the source never made it to the target db. To solve the problem I had to create these preferences manually on the target db. So my question is: does anyone know why datapump did not export the default CTX preferences?

Here is a post that I found useful, which recommends scripting the preferences on the source db and recreating them on the target db.

    https://community.Oracle.com/thread/2152091?start=0

    Is it reasonable to assume that datapump doesn't handle/export ctx preferences?

    -Learner

    This post may be useful - https://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:7469700400346057467

  • moving to a different database schema (datapump or?)

    Friends and Experts,

DB: 11gR2

    OS: Linux

(Sorry for the long post, but I did not want to leave out any information.)

I am moving a schema from host-1 to host-2; the schema size is 400 GB.

I had planned to use datapump, since I'm most comfortable with that method; it exports everything, including statistics.

One 300 GB partitioned table (including indexes).

170 GB of segments like '%TABLE%'.

The schema has partitioned tables, LOB segments, LOB indexes, indexes and index partitions.

The export was then killed by a snapshot-too-old error after it had written about 250 GB of dump file.

Host-1 has only 4 CPUs, so I can't really use more than 2 parallel channels in the export parameter file; I tried 4 channels and the host load reached 10 within a few minutes. The export was killed on the spot to avoid bringing down the host.

Host-2 is faster, so I don't expect delays during the import.

We have no license for GoldenGate, but Advanced Compression helped.

    Problem:

I started the export but it was so slow that it only generated 10 GB/hr; I let it run as a test and it failed after 14 hours with a snapshot-too-old error.

Not many processes run on the host or against the schema; the main problem I see is that the host/disk is slow, and I don't see any way to move this huge schema to the other database. In the worst case I lock the schema before the maintenance window for 15 hours, plus another 10 hours or so for the import, but I still don't think the export will finish.

Questions:

    1. What can be done here to move the schema using datapump?

2. Any other safe method to move the schema? I know this can be done with transportable tablespaces, but I have never done that, and this is a production schema, so I don't want to take the risk.

3. Can anyone with a similar project share their experience?

    4. any other advice/method/suggestions?

Export parameter file:

    DIRECTORY = DATA_PUMP_REFRESH

    DUMPFILE = EXP_01.DBF, EXP_02.DBF

LOGFILE = EXP_USER1.LOG

    PARALLEL = 2

    SCHEMAS = USER1

    CONTENT = ALL

I added the PARALLEL parameter, and have listed the segment sizes above.

Your undo retention should be at least the duration of your longest-running query (and make sure undo can actually grow to that much space).

Your 'senior' is correct in saying "we will not do this for the first time in production", but that could be said about anything. If anything, transportable tablespace will probably be the fastest and easiest option.

Yes, you will still need enough UNDO for a network-link data pump job, but if writing the dump file is the cause of the speed problem, the network-link approach *may* be faster, which also means you need less undo.
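A sketch of such a network-link job (the link name, TNS alias and credentials are assumptions), run entirely on host-2 so no dump file is ever written on the slow source host:

```shell
# 1. On host-2, create a link pointing back at host-1:
#    SQL> CREATE DATABASE LINK source_link CONNECT TO system
#         IDENTIFIED BY <password> USING '<host1_tns_alias>';
# 2. Pull the schema straight across the link; no DUMPFILE parameter,
#    the directory is only needed for the log file:
impdp system/<password> network_link=source_link schemas=USER1 \
      directory=DATA_PUMP_REFRESH logfile=imp_user1.log parallel=2
```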

  • How to stop oracle datapump job when importing

    Hello world

I would like to stop the import job without killing the process at the operating-system level.


    any suggestions?



    Hello

You can stop it through the datapump utility itself.

impdp user/password attach=<job_name>

    Then type:

KILL_JOB (or STOP_JOB).

You can find the job name in dba_datapump_jobs...
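For example (a sketch; the job name SYS_IMPORT_FULL_01 is an assumption, take the real one from dba_datapump_jobs), attaching drops you at an interactive prompt where the commands are available:

```shell
# Find the job name first:
#   SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;
impdp user/password attach=SYS_IMPORT_FULL_01
# at the interactive prompt:
#   Import> STOP_JOB=IMMEDIATE    (pauses; restartable later with START_JOB)
#   Import> KILL_JOB              (detaches all sessions, deletes the job and its master table)
```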

    Hope this helps,

Anatoli.

  • IMPDP import DataPump fails with error ORA-31626 ORA-6512 31637 ORA ORA-31632 ORA-31635

    Hello


When attempting an impdp, I get the following sequence of errors:


ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 885
ORA-31637: cannot create job SYS_IMPORT_TABLE_01 for user LSU
ORA-31632: master table "LSU.SYS_IMPORT_TABLE_01" not found, invalid, or inaccessible
ORA-31635: unable to establish job resource synchronization
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT_INT", line 1986
ORA-04063: package body "SYS.DBMS_LOCK" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOCK"


The parameters passed to impdp are below:

impdp sjm/xxxxxx@flman700 DIRECTORY=temp_dir DUMPFILE=dyman01_exp01.dmp,dyman01_exp02.dmp,dyman01_exp03.dmp,dyman01_exp04.dmp,dyman01_exp05.dmp,dyman01_exp06.dmp LOGFILE=a.log TABLES=CST_ACTVT TABLE_EXISTS_ACTION=APPEND


Any help on how to proceed would be appreciated. I cannot find any hung datapump jobs in the target database.

Source and target database version - 10.2.0.3


    Thank you


    Sery

    Hello

According to DBA_DEPENDENCIES (for 11.1 anyway), the dependencies of DBMS_LOCK are on DUAL and DBMS_LOCK_ALLOCATED - do these two exist?

I suggest you check those, and then re-run catalog and catproc.
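A sketch of that check (run as SYS; the catalog script paths assume a standard $ORACLE_HOME layout):

```sql
-- Do the objects DBMS_LOCK depends on actually exist and compile?
SELECT referenced_owner, referenced_name, referenced_type
  FROM dba_dependencies
 WHERE owner = 'SYS' AND name = 'DBMS_LOCK';

-- If the package body stays invalid, re-run the dictionary scripts
-- and recompile invalid objects:
-- SQL> @?/rdbms/admin/catalog.sql
-- SQL> @?/rdbms/admin/catproc.sql
-- SQL> @?/rdbms/admin/utlrp.sql
```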

Cheers,

    Rich

  • Parallel degree importing Datapump

    Hello

For several imports that I ran, I observed a constant degree of parallelism = 10 when creating indexes.

Moreover, I have parallel_degree_limit = CPU and parallel_degree_policy = MANUAL.

The first parameter specifies that Oracle sets an upper limit on the DOP.

Does this explain the observed value?

    Kind regards

    Hello

Parallel import goes like this:

All the table and PL/SQL definitions are created serially.

All table data loads in parallel - not a parallel insert into a single table, but one separate datapump worker process per table - so EMP would be loaded by slave1 and DEPT by slave2, at the same time. In your case 10 separate tables would all be imported at the same time.

The indexes are built with the degree of parallelism you specify on the import.

Constraints are enabled serially by a single process, I think.
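One way to watch this while an import runs (a sketch, queried as a DBA) is the Data Pump job views:

```sql
-- Running jobs, their requested degree, and the attached worker sessions
SELECT j.owner_name, j.job_name, j.state, j.degree, s.saddr
  FROM dba_datapump_jobs j
  JOIN dba_datapump_sessions s
    ON s.owner_name = j.owner_name AND s.job_name = j.job_name;
```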

I wrote a short blog post about this earlier that may be useful: it contains pictures! :-)

    Oracle DBA Blog 2.0: What happens during an import parallel datapump?

Cheers,

    Rich

  • Using the parallel option with DataPump

    Hello

Does it make sense to do a DataPump export and import with PARALLEL > 1 when I only have a single dump file?

    Thanks and greetings

    Hello

    Thank you for the feedback.

But as I said, I am concerned about the case of a single dump file.

It does not work and aborts. So I can't use this option unless I generate the export into multiple dump files.

    Kind regards

  • Import with datapump when exporting datapump was executed as user SYS

    Hi all

all I have is a dumpfile and the log file of a datapump export. The export was executed as user SYS:

Export: Release 11.2.0.1.0 - Production on Wed Dec 3 12:02:22 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_FULL_01": sys/***@database AS SYSDBA directory=data_pump_dir dumpfile=db_full.dmp logfile=db_full.log full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 52.84 GB

Now I have to import (with datapump) the users USER01 and USER02 from that export. But I don't know the names of the source database tablespaces.

I want to keep the user names (USER01/USER02). That's why I created these users in the target database.

    The questions I have are:

- Should I start the datapump import as user SYS?

- Which parameters should I use to import the users USER01, USER02 and their data into the target database? Since I do not know the names of the tablespaces

in the source database, the REMAP_TABLESPACE parameter will not help.

    Any help will be appreciated

    J.

    Hi J.

    The questions I have are:

    -should I start the import by datapump as user SYS?

No, you need to import with a user that has the "imp_full_database" role.

- Which parameters should I use to import the users USER01, USER02 and their data into the target database? Since I don't know the names of the tablespaces in the source database, the REMAP_TABLESPACE parameter will not help.

Well, one idea is to generate a schema-import sqlfile and see in the DDL which tablespaces it will try to create the objects in.

impdp \'/ as sysdba\' directory=<directory> dumpfile=<dumpfile> schemas=USER01,USER02 sqlfile=<file>.sql

    For more information, take a look at the import documentation

    http://docs.Oracle.com/CD/B19306_01/server.102/b14215/dp_import.htm

    Hope this helps,

    Kind regards

Anatoli.

  • 12 c to 11g, ORA-39142 DataPump: incompatible version number 4.1

Hi Oracle gurus,

I tried to transfer data from 12c to 11g.

I took a datapump backup on 12c with VERSION=12.1

and I tried to import into 11g with VERSION=12.1

and get this error when importing:

    ORA-39142: incompatible version number 4.1

    Please help me in this regard.

    Thank you in advance

    Kind regards

    REDA

Hi Reda,

You just need to set VERSION=11.2 instead of 12.1 to tell expdp "extract data and metadata in Oracle 11.2 format" - you can then load that just fine.
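A minimal sketch (schema and file names are assumptions):

```shell
# On the 12c source: write the dump in 11.2-compatible format
expdp system/<password> schemas=<schema> version=11.2 \
      directory=DATA_PUMP_DIR dumpfile=exp_112.dmp logfile=exp_112.log

# On the 11g target: import as usual, no VERSION parameter needed
impdp system/<password> schemas=<schema> \
      directory=DATA_PUMP_DIR dumpfile=exp_112.dmp logfile=imp_112.log
```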

See the example here: Oracle DBA Blog 2.0: downgrading with datapump

Cheers,

    Rich

  • DIRECTORY DATAPUMP with variable Date

    Dear friends,

I'm using Oracle 11gR2. When exporting my database, I use the script below to split my DMP files:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export ORACLE_SID=ISLPRIM

Date=`date +%d%b%Y`_`date +%I%M%p`

Dat=`date +%d%b%Y`

mkdir -p /asm/ISLDB.$Dat

sqlplus / as sysdba <<!

create or replace directory test_dir as '/asm/ISLDB.$Dat';

exit;

!

expdp system/sys123 directory=test_dir dumpfile=exp1%U_$Date.dmp schemas=<user> parallel=4 logfile=db_$Date.log

My intention here is first to create today's dated directory under the /asm filesystem, and the dated export dump files are created inside it. But my problem now: how can I create the DATAPUMP DIRECTORY object using a date variable?

    Need your help...

    Hello

Use a shell script in which you compute the directory path/name and generate a SQL script with the syntax to create the directory:

    connect via sqlplus

    Run the script

exit

Delete the temporary sql file
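A minimal sketch of those steps (paths are placeholders; the sqlplus call is commented out since it needs a database):

```shell
#!/bin/sh
BASE=${BASE:-/tmp}                 # in the question this would be /asm
Dat=$(date +%d%b%Y)                # e.g. 03Dec2014
DIRPATH="$BASE/ISLDB.$Dat"

mkdir -p "$DIRPATH"

# Generate a temporary SQL script with the computed path substituted in
cat > /tmp/create_dir.sql <<EOF
create or replace directory test_dir as '$DIRPATH';
exit;
EOF

# Run it and clean up (requires a database, hence commented out here):
# sqlplus -s "/ as sysdba" @/tmp/create_dir.sql
# rm -f /tmp/create_dir.sql

cat /tmp/create_dir.sql
```

The same $DIRPATH variable can then be reused for the expdp DUMPFILE/LOGFILE names.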

    Kind regards

    Ionut

  • DataPump with different time zone files

    Hello

I'm trying to import a schema with datapump from a 12c Oracle instance into an 11.2 Oracle instance. I got an error caused by different time zone files: 11.2 is on version 14 and 12c is on version 18. So I tried importing a single table (a table with no TIMESTAMP WITH TIME ZONE column): I made a new export with just that one table and imported it. That worked well. So I modified my schema export to exclude the tables with TIMESTAMP WITH TIME ZONE columns. When trying to import this new dump, it still fails for the same reason.

So importing one table worked, but importing the schema failed. Does anyone have a suggestion for how I can import this dump? And no, I can't install a patch with the new time zone files.

    sql_coder

I could solve it myself: I had to exclude more tables, not only those with TIMESTAMP WITH TIME ZONE columns, but also tables with data of AQ$_% types.

    sql_coder

  • How to import with datapump tablespaces and another schema?

Hi, I need to import with datapump into another schema, with the data in tablespace 'x' and the indexes in tablespace 'y'.

    example:

    My schema: prueba

I need to import the schema "prueba", but the data must go into the tablespace called "Tb1" and its indexes into the tablespace called "Tb2".

    So, how can I do?

    Any help will be appreciated.

As noted above, do two imports, use EXCLUDE/INCLUDE to limit each import, and map the objects to the appropriate tablespace (you need to know which tablespace they came from, though):

import 1 - exclude=index remap_tablespace=<source_ts>:tb1

import 2 - include=index remap_tablespace=<source_ts>:tb2

  • Definition of version in LKM oracle for oracle (datapump)

create table <%=odiRef.getTable("COLL_NAME")%>
(
  <%=odiRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)
ORGANIZATION EXTERNAL
(
  TYPE oracle_datapump
  DEFAULT DIRECTORY <%=odiRef.getOption("X_DATAPUMP_ORACLE_DIR")%>
  LOCATION ('<%=odiRef.getOption("X_DATAPUMP_NAME")%>.dmp')
)
PARALLEL
AS SELECT <%=odiRef.getPop("DISTINCT_ROWS")%>
  <%=odiRef.getColList("", "[EXPRESSION]", ",\n\t", "", "")%>
FROM <%=odiRef.getFrom()%>
WHERE (1 = 1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>

I have this code, which works, to generate a datapump file. However, I must pin it to version 10.2 to make it compatible.

I tried putting VERSION 10.2 / VERSION=10.2 / VERSION "10.2" / VERSION='10.2' / ... almost everywhere, and I can't seem to find the exact spot because I keep getting errors.

    This seems like a stupid question that I hope gets easily solved but where do I put the correct statement?

    Thank you!

    Found the problem:

create table <%=odiRef.getTable("COLL_NAME")%>
(
  <%=odiRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)
ORGANIZATION EXTERNAL
(
  TYPE oracle_datapump
  DEFAULT DIRECTORY <%=odiRef.getOption("X_DATAPUMP_ORACLE_DIR")%>
  ACCESS PARAMETERS
  (
    VERSION '10.2'
    NOLOGFILE
  )
  LOCATION ('<%=odiRef.getOption("X_DATAPUMP_NAME")%>.dmp')
)
PARALLEL
AS SELECT <%=odiRef.getPop("DISTINCT_ROWS")%>
  <%=odiRef.getColList("", "[EXPRESSION]", ",\n\t", "", "")%>
FROM <%=odiRef.getFrom()%>
WHERE (1 = 1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>

  • DataPump to export some tables

    Hi all

I would like to export certain tables via datapump - for example, all tables starting with EMP, but not EMP_ADMIN. The following command does NOT work for me:

expdp / SCHEMAS=MYSCHEMA INCLUDE=TABLE:"LIKE 'EMP%'" EXCLUDE=TABLE:"LIKE '%EMP_ADMIN'" DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=y

    Any suggestion on how to use the name clause in the INCLUDE parameter?

    Thank you!

    Hello

I've updated my blog post linked above - I responded to your comment there.

The problem is that you can't say

select x from y
union z

you have to say

select x from y
union
select z from dual

The additional table names have to be selected from somewhere.

Cheers,

    Rich

  • DataPump on Standby Database

Is it possible to run a datapump export against my standby database?  It works using the old Oracle export utility, but I need to switch to Datapump.  Some tables cannot be backed up using the old exp.

Our standby database is in recovery mode.

    The steps we are taking today are.

1. Take the standby database out of recovery mode and open it read-only.

2. Export using the old export utility.

3. Once the export is complete, put the standby database back into recovery mode.

By switching to datapump we should be able to shorten the export time, and data pump has no problem backing up all the tables.

How can I, if it is possible at all, use a Datapump export against a read-only standby database?

    Thank you

No.

DataPump must create (and insert into) tables in the database - its master table - so it cannot run against a read-only database.
