EXPORT DATAPUMP with query on a table

Hello

I would like to ask how to include a query in a parfile when using EXPDP.

My statement is:

Select * from apt.sales where TO_CHAR(or_date, 'YYYY') = '2011';

My goal is to export only the OR records (receipts) generated in 2011 from the table APT.SALES.

Is my parfile correct?

schemas = APT
Query = APT.SALES:'"where TO_CHAR(or_date,'YYYY') = '2011'" ' "
Directory = DATA_PUMP_DIR
dumpfile = ORin2011.dmp
logfile - ORin2011.log

I'm working on an Oracle 10.2.0.1 database in Windows Server 2003.

Help, please.

Try to:
1. use the TABLES keyword, otherwise all of the schema's data is exported
2. use simpler quoting for the QUERY parameter
3. use '=' for the LOGFILE parameter

Example with the table HR. T on Windows:

tables=HR.T
query=HR.T:"where TO_CHAR(x,'YYYY') = '2011'"
directory=DATA_PUMP_DIR
dumpfile=ORin2011.dmp
logfile=ORin2011.log
C:\>expdp system/xxx parfile=et.par

Export: Release 11.2.0.2.0 - Production on Tue Oct 18 08:17:36 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** parfile=et.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."T"                                    5.007 KB       1 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  C:\ORACLEXE\APP\ORACLE\ADMIN\XE\DPDUMP\ORIN2011.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 08:17:40

Tags: Database

Similar Questions

  • Export DataPump with the query option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to export a few sets of records from production using a query. The query touches several tables. Since we have BLOB data types, we export using Data Pump.

    We have lower environments, but they do not have the same set of data and tables, so we are not able to simulate the same query there. I created a small table and a simplified query instead.

    My command is

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:'"where num < 3"' directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log

    The query in the command pulls two records when run directly. After running the command above, I see an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    . . exported Dump.Dump1    4.921 KB    2 rows

    My doubts are:
    (1) Is the command that I am running correct?
    (2) The estimate said 64 KB, but it also says 4.921 KB was exported, and the dump file created is 80 KB. Was it exported correctly?
    (3) Given that I run it as SYSTEM, does it export all the data rather than just the 2 rows? We must send the dump file to another department, and we should not export any data other than the query output.
    (4) If I do not use "tables=dump.dump1" in the command, the export dump file gets much bigger. I don't know which is right.

    Your answers will be very useful.

    The short answer is 'YES', you are doing the right thing.

    The long answer is:

    The query in the command pulls two records when run directly. After running the command above, I see an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    . . exported Dump.Dump1    4.921 KB    2 rows

    My doubts are:
    (1) Is the command that I am running correct?

    Yes. As long as your query is correct, Data Pump will export only the rows that match that query.

    (2) The estimate said 64 KB, but it also says 4.921 KB was exported, and the dump file created is 80 KB. Was it exported correctly?

    The estimate is made against the full table. Since you did not specify otherwise, it used the BLOCKS estimation method: basically, how many blocks have been allocated to the table. In your case, I guess that was 80 KB.

    (3) Given that I run it as SYSTEM, does it export all the data rather than just the 2 rows? We need to send the dump file to another department. We should not export any data other than the query output.

    It will not export all the data, but it is going to export metadata. It exports the table definition, all indexes on it, all the statistics on the table and indexes, etc. This is why the dump file can be bigger. There is also a 'master' table that describes the export job which gets exported. This is used by export and import to track what is in the dumpfile, and where in the dumpfile those things are. It is not user data, but this table needs to be exported and takes up space in the dumpfile.

    (4) If I do not use "tables=dump.dump1" in the command, the export dump file gets much bigger. I don't know which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems that your command is correct.

    If you do not want any metadata exported, you can add:

    content=data_only

    to the command line. This will export only the data, and when the dumpfile is imported, the table must already exist.
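
    For example, the earlier command with that parameter added might look like this (a sketch based on the command shown above; the dump and log file names are changed only so the first export is not overwritten):

    expdp system/<pwd>@orcl tables=dump.dump1 content=data_only query=dump.dump1:'"where num < 3"' directory=DATA_PUMP_DIR dumpfile=exp_dp_data.dmp logfile=exp_dp_data.log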

    Dean

  • DataPump export using the query option

    Hello

    OracleVersion: 10.2.0.1
    OperatingSystem:windows

    I need to take a Data Pump export based on a specific condition, but I'm not able to get the backup.

    Please help me. Is it possible to take the backup with this condition or not?
    E:\oracle\dbdump>expdp scott/tiger directory=dbdump dumpfile=extract_1.dmp logfile=extract_1.log tables=emp,dept query='"where deptno< (select deptno from dept where loc = DALLAS) "'
    
    Export: Release 10.2.0.1.0 - Production on Thursday, 29 October, 2009 12:16:20
    
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=dbdump dumpfile=extract_1.dmp logfile=extract_1.log tables=emp,dept query='where deptno< (select deptno from dept where loc = DALLAS) '
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 128 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
    ORA-31693: Table data object "SCOTT"."DEPT" failed to load/unload and is being skipped due to error:
    ORA-00904: "DALLAS": invalid identifier
    ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
    ORA-00904: "DALLAS": invalid identifier
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      E:\ORACLE\DBDUMP\EXTRACT_1.DMP
    Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 2 error(s) at 12:17:10

    Hello..

    Try

    expdp scott/tiger directory=dbdump dumpfile=extract_1.dmp logfile=extract_1.log tables=emp,dept query=\"where deptno< (select deptno from dept where loc = 'DALLAS') \"
    

    HTH
    Anand

    Edited by: Anand... on October 29, 2009 13:04
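
    As an alternative to escaping the quotes on the command line, the same condition could go into a parameter file (a sketch based on the command above; file names unchanged), run with expdp scott/tiger parfile=<your parfile>:

    tables=emp,dept
    query="where deptno < (select deptno from dept where loc = 'DALLAS')"
    directory=dbdump
    dumpfile=extract_1.dmp
    logfile=extract_1.log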

  • Querying the oracle table that has a column with the name of "FILE".

    Hi all

    I need to query an Oracle table that has a column with the name "FILE".

    I can query all columns with the query "select * from table".

    But I'm not able to use the query "select file from table".

    This table was converted from MS Access to Oracle and I need to keep this column with the name "FILE".

    Any suggestions?

    Thank you

    Abdellaoui

    Hello

    FILE is an Oracle keyword, so it's not a good column name.

    Use FILEDATE, or DATE_FILED, or something else that is not in V$RESERVED_WORDS.KEYWORD as the column name.

    If you need to use FILE, then you must use quotation marks.
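
    For example (a minimal sketch, assuming the table is named MY_TABLE; quoted identifiers are case sensitive and are normally stored in uppercase):

    SELECT "FILE" FROM my_table;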

  • How to join the per_rating_levels table with this query

    Dear all,

    Guide me how 2 join the per_rating_levels table with this query because I want 2 see per_rating_levels.name against all employees.
    When I join this table with the query it shows several records/repetitions against each record.

    Query:

    SELECT
    papf.full_name employee_name,
    papf1.full_name supervisor_name,
    papf.employee_number,

    hr_general.decode_job(paaf.job_id) job_name,
    hr_general.decode_organization(paaf.organization_id) department,
    pc.name, pce.comments EmployeeComments,
    (select pce1.comments from per_competence_elements pce1
    where
    pce.assessment_id = pce1.assessment_id
    AND pce.competence_id = pce1.competence_id
    AND pce1.object_id = pce.object_id) ManagerComments

    --(select rtl.name from per_rating_levels rtl where pc.rating_scale_id = rtl.rating_scale_id)


    FROM per_all_people_f papf,
    per_all_people_f papf1,
    per_all_assignments_f paaf,
    per_appraisals pa,
    per_appraisal_templates pat,
    per_assessments pas,
    per_competence_elements pce,
    per_competences pc


    WHERE papf.person_id = paaf.person_id
    AND paaf.supervisor_id = papf1.person_id
    AND paaf.primary_flag = 'Y'
    AND pa.appraisee_person_id = papf.person_id
    AND pa.appraisal_template_id = pat.appraisal_template_id
    AND pa.appraisal_id = pas.appraisal_id
    AND pat.assessment_type_id = pas.assessment_type_id
    AND pas.assessment_id = pce.assessment_id
    AND pce.object_id = papf.person_id
    AND pce.competence_id = pc.competence_id
    AND trunc(sysdate) BETWEEN papf.effective_start_date AND papf.effective_end_date
    AND trunc(sysdate) BETWEEN papf1.effective_start_date AND papf1.effective_end_date
    AND trunc(sysdate) BETWEEN paaf.effective_start_date AND paaf.effective_end_date

    --AND papf.employee_number = :p_employee_number
    --AND pa.appraisal_date = :p_appraisal_date
    --AND papf.business_group_id = :p_bg_id

    ORDER BY papf.employee_number


    Regards

    user10941925 wrote:
    Dear all,

    Guide me how 2 join the per_rating_levels table with this query because I want 2 see per_rating_levels.name against all employees.
    When I join this table with the query it shows several records/repetitions against each record.

    Does '2' in your question mean "to"? If so, please do not use instant-message text speak in this forum.

    Now, I suppose that PER_RATING_LEVELS is a table in your application, and you are trying to include this table in an existing query. But in doing so, you ended up with a Cartesian product, correct?

    In fact, how do you think someone on a public forum, without any knowledge of your tables and data structures, could help you?

    Let's see, here's your query. I have formatted it.

    
    select papf.full_name                                       employee_name
         , papf1.full_name                                      supervisor_name
         , papf.employee_number                                 employee_number
         , hr_general.decode_job(paaf.job_id)                   job_name
         , hr_general.decode_organization(paaf.organization_id) department
         , pc.name                                              name
         , pce.comments                                         employeecomments
         , (
              select pce1.comments
                from per_competence_elements pce1
               where pce.assessment_id = pce1.assessment_id
                 and pce.competence_id = pce1.competence_id
                 and pce1.object_id = pce.object_id
           )                                                    managercomments
      from per_all_people_f        papf
         , per_all_people_f        papf1
         , per_all_assignments_f   paaf
         , per_appraisals          pa
         , per_appraisal_templates pat
         , per_assessments         pas
         , per_competence_elements pce
         , per_competences         pc
     where papf.person_id           = paaf.person_id
       and paaf.supervisor_id       = papf1.person_id
       and paaf.primary_flag        = 'Y'
       and pa.appraisee_person_id   = papf.person_id
       and pa.appraisal_template_id = pat.appraisal_template_id
       and pa.appraisal_id          = pas.appraisal_id
       and pat.assessment_type_id   = pas.assessment_type_id
       and pas.assessment_id        = pce.assessment_id
       and pce.object_id            = papf.person_id
       and pce.competence_id        = pc.competence_id
       and trunc(sysdate) between papf.effective_start_date  and papf.effective_end_date
       and trunc(sysdate) between papf1.effective_start_date and papf1.effective_end_date
       and trunc(sysdate) between paaf.effective_start_date  and paaf.effective_end_date
    order
        by papf.employee_number 
    

    Now, you want to add PER_RATING_LEVELS to the FROM list so that you can use its NAME column.

    The first thing you need to do is determine the relationship between PER_RATING_LEVELS and the other tables. A relationship can be:

    1. one-to-one
    2. one-to-many
    3. many-to-many

    When you tried the join, your join condition resulted in the 2nd or 3rd type of relationship. So sit down with someone who knows the business and the data and find the column(s) that uniquely identify a row of PER_RATING_LEVELS.
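
    Purely as an illustration of that point (dummy inline data, not the actual HRMS tables), a join on a non-unique key multiplies rows while a join on the unique key does not:

    with emp as (
      select 101 person_id, 'Alice' full_name, 1 rating_scale_id, 3 rating_level_id from dual
    ),
    rating_levels as (
      select 1 rating_scale_id, 3 rating_level_id, 'Good'      name from dual union all
      select 1 rating_scale_id, 4 rating_level_id, 'Excellent' name from dual
    )
    -- joining emp to rating_levels on rating_scale_id alone would return 2 rows for Alice;
    -- joining on the unique rating_level_id returns exactly 1
    select e.full_name, r.name rating_name
    from   emp e
    join   rating_levels r on r.rating_level_id = e.rating_level_id;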

  • Single selection problem with a table (Query Page)

    Single selection problem with a table (Query Page)

    I have a table backed by a VO, with a transient attribute for the single-selection column.
    My requirement is that I need to identify the row that was selected in the table.

    I associated a fireAction with the singleSelection column, so that whenever the user selects a row
    I loop through that VO using a rowIterator.
    But when running the loop, the transient attribute that is mapped to the singleSelection shows as "N"/null
    for all the rows...

    So how do you identify the selected row with singleSelection in a table?

    -Sasi

    In the property inspector of the item on which you've put the firePartialAction, you can find a property named "Parameters". Mention the primary key there as your parameter.

    You can then get the parameter using pageContext.getParameter();

    -Anand

  • Parallel sessions on Export Datapump (10.2.0.4)

    Hello

    We use Oracle 10.2.0.4 on Solaris and I am exporting a table using Data Pump export.

    The export includes a query that selects from three tables based on the relevant conditions. The parfile specifies 'parallel=4' and the dumpfile setting uses %U so that it creates an appropriate number of files.
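
    For reference, a minimal sketch of that kind of parfile (the values here are placeholders, not the poster's actual file); %U makes Data Pump number the dump files exp_xyz_01.dmp, exp_xyz_02.dmp, and so on:

    directory=DATA_PUMP_DIR
    dumpfile=exp_xyz_%U.dmp
    logfile=exp_xyz.log
    parallel=4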

    When I run the export using my own (DBA) account (i.e. expdp mr_dba parfile=exp_xyz.par) the export completes in 15 minutes and creates four dumpfiles. When I run the export as the schema owner using the exact same parfile (i.e. expdp schema_own parfile=exp_xyz.par) the export takes more than two hours and only creates two dumpfiles.

    Could someone suggest the things I could look at to find out why there is such a difference in elapsed time? The exports have been run repeatedly by both users on the box under similar load, and the results are pretty consistent, that is 15 minutes for my user and two hours for the schema owner.

    The schema owner has a different profile and a different resource consumer group, but both my profile and the schema owner's profile have 'sessions_per_user' set to UNLIMITED. In Resource Manager, the Parallel_Degree_Limit_P1 value is set to 16 for my consumer group and is not defined at all for the schema owner's consumer group.

    I did notice that when exporting under the schema owner, DBA_DATAPUMP_SESSIONS showed a DBMS_DATAPUMP session, a MASTER session and two WORKER sessions. When I run it under my user name, it shows those four sessions but also shows three EXTERNAL TABLE sessions. This suggests that it is using a different approach, but I don't know why that would be.

    Any advice would be very welcome. I have not posted specific information on the parameter file or the tables as I don't know what information people may require - so if you need the details of anything please let me know.

    Thank you very much.

    Is the schema that takes more time privileged or not? If it does not have the privilege, can you grant it and let me know if that makes a difference? I seem to remember hearing something about parallel and non-privileged accounts, but can't seem to find anything about it.

    Thank you

    Dean

  • Export (expdp) with where clause

    Hello gurus,

    I'm trying to export with a where clause. I am getting the error below.


    Here is my export command.
    expdp "'/ as sysdba'" tables = USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= “USER1.TABLE1:where auditdate>'01-JAN-10'”
    Here is the error
    [keeth]DB1 /oracle/data_15/db1> DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate>'01-JAN-10'                    <
    
    Export: Release 11.2.0.3.0 - Production on Tue Mar 26 03:03:26 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TABLE_03":  "/******** AS SYSDBA" tables=USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 386 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-31693: Table data object "USER1"."TABLE1" failed to load/unload and is being skipped due to error:
    ORA-00933: SQL command not properly ended
    Master table "SYS"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SYS.SYS_EXPORT_TABLE_03 is:
      /oracle/data_15/db1/TABLE1.dmp
    Job "SYS"."SYS_EXPORT_TABLE_03" completed with 1 error(s) at 03:03:58
    Version
    SQL> select * from v$version;
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Hello

    You must use a parameter file. Another question: I see you are using 11g, so why not take full advantage of Data Pump?
    Data Pump is faster and has more features and regular enhancements compared to the classic imp and exp.

    You can do the following:

    sqlplus / as sysdba
    
    Create directory DPUMP_DIR3 as '<type here the OS path that you want to export to>';
    

    then touch a file:
    touch par.txt

    In this file, type the following lines:

    tables=schema.table_name
    dumpfile=yourdump.dmp
    DIRECTORY=DPUMP_DIR3
    logfile=Your_logfile.log
    QUERY =abs.texp:"where hiredate>'01-JAN-13' "
    

    then proceed as follows:
    expdp username/password parfile=par.txt

    If you will import this 11g export into a 10g database, you should add a VERSION parameter (e.g. version=10.2) in the above parameter file.
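
    Applied to the original command in this question, the whole parfile might look like this (a sketch reusing the names from the command above):

    tables=USER1.TABLE1
    directory=DATA_PUMP
    dumpfile=TABLE1.dmp
    logfile=TABLE1.log
    query=USER1.TABLE1:"where auditdate > '01-JAN-10'"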

    BR
    Mohamed enry
    http://mohamedelazab.blogspot.com/

  • Problem in the export using the QUERY functionality

    Problem in the export using the QUERY functionality


    I'm trying to export some rows of a table using the query functionality
    and I am getting some errors... The syntax I'm using is

    expdp system@orcl QUERY=scott.emp:'"WHERE emp_no = 123455"'
    DIRECTORY=data_pump_dir DUMPFILE=data_pump.dmp
    LOGFILE=data_pump_12345.log INDEX=n

    Can someone tell me please the problem with that statement

    I also tried to use the classic exp utility

    exp system@orcl file=orcl_export.dmp log=orcl_export.log
    tables=scott.emp indexes=n QUERY=\"WHERE emp_no\=123455\"

    and got this error

    EXP-00008: ORACLE error 904
    ORA-00904: invalid identifier

    My OS is Solaris.
    Please let me know what the problem is.

    Hello

    Try to create a parfile and use that; otherwise, you will need to escape each clause correctly to run exp or expdp successfully.

    test.par

    tables=emp
    query="WHERE emp_no=123455"
    
    or
    tables=myobjects
    query="WHERE owner='SYS'"
    
    $> exp username/password parfile=test.par
    
    Export: Release 10.2.0.1.0 - Production on Thu Mar 19 10:17:48 2009
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining Scoring Engine options
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    
    About to export specified tables via Conventional Path ...
    . . exporting table                      MYOBJECTS      22650 rows exported
    Export terminated successfully without warnings.
    

    Regards

    Edited by: OrionNet on March 19, 2009 10:21
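
    For the expdp attempt quoted at the top of this item, an escaped command line might look like this on Solaris (a sketch; the parfile above remains the simpler option):

    expdp system@orcl tables=scott.emp directory=data_pump_dir \
      dumpfile=data_pump.dmp logfile=data_pump_12345.log \
      query=scott.emp:'"where emp_no = 123455"'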

  • WITH clause versus inline view

    Dear Experts,

    Are the WITH clause and an inline view identical from a performance point of view?

    Which query is better?

    Query with the WITH clause:

    with table_getbuckets as
    (
    SELECT * FROM TABLE(pack_activitymonitoring.f_getmatbuckets (start_settle_date,end_settle_date))
    )
    SELECT
          nvl(baludhar.pct,0) AS pctudhar
    FROM t_option opt
    LEFT OUTER JOIN table_getbuckets balinv ON
                opt.a_id = balinv.collateral_a_id AND
                opt.s_id = balinv.s_id
    WHERE opt.for_principal=1
    
    

    Query without the WITH clause:

    SELECT
          nvl(baludhar.pct,0) AS pctudhar
    FROM t_option opt
    LEFT OUTER JOIN (
                     TABLE(pack_activitymonitoring.f_getmatbuckets
                                                        (start_settle_date,end_settle_date))
      ) balinv ON
                opt.a_id = balinv.collateral_a_id AND
                opt.s_id = balinv.s_id
    WHERE opt.for_principal=1
    
    

    In this case, since you are using the inline view only once, I expect them to perform the same.

    For purposes of readability, I prefer the one that uses the WITH clause.

  • query the partition tables

    Hi all

    I want to find the table partitions that are over 1 GB in size.

    How can I query this?

    Hi;

    You can also try the dba_tab_partitions view.
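
    For the size itself, a minimal sketch (assuming access to the DBA views) would take the segment sizes from dba_segments:

    select owner, segment_name, partition_name,
           round(bytes/1024/1024/1024, 2) as size_gb
    from   dba_segments
    where  segment_type = 'TABLE PARTITION'
    and    bytes > 1024*1024*1024
    order  by bytes desc;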

    Regards,
    HELIOS

  • Query a nested table of sdo_geometry

    I have the following situation in my database:
    -----
    SQL> drop type obj_type1 force;
    create or replace TYPE obj_type1 AS OBJECT (id integer, geom sdo_geometry);

    drop type obj_type2 force;
    create or replace TYPE obj_type2 AS OBJECT (id integer, m obj_type1);

    drop type geom_tab force;
    create or replace TYPE geom_tab AS TABLE OF obj_type2;

    drop table tab1;
    create table tab1 (name varchar2(40), geom_info geom_tab) nested table geom_info store as geom_nt;

    -- inserting data
    insert into tab1 values ('1', geom_tab(
      obj_type2(1, obj_type1(1, MDSYS.SDO_GEOMETRY(2002, NULL, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1), MDSYS.SDO_ORDINATE_ARRAY(2804.31, 12678.5, 12679.2, 2832.08)))),
      obj_type2(2, obj_type1(2, MDSYS.SDO_GEOMETRY(2002, NULL, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1), MDSYS.SDO_ORDINATE_ARRAY(2832.08, 12677.9, 12678.5, 2859.85))))));
    insert into tab1 values ('2', geom_tab(
      obj_type2(1, obj_type1(1, MDSYS.SDO_GEOMETRY(2002, NULL, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1), MDSYS.SDO_ORDINATE_ARRAY(2859.85, 12677.3, 12677.9, 2887.63)))),
      obj_type2(2, obj_type1(2, MDSYS.SDO_GEOMETRY(2002, NULL, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1), MDSYS.SDO_ORDINATE_ARRAY(2887.63, 12676.7, 12677.3, 2915.4))))));
    -- updating the metadata
    INSERT INTO user_sdo_geom_metadata
    (TABLE_NAME,
    COLUMN_NAME,
    DIMINFO,
    SRID)
    VALUES (
    'geom_nt',  -- name of the nested table
    'm.geom',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', -9267, 28329, 5E-7), MDSYS.SDO_DIM_ELEMENT('Y', -2598, 26587, 5E-7)),
    NULL  -- SRID value
    );
    commit;
    -- creating a spatial index
    create index geom_ump_idx1 on geom_nt(m.geom) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    -- another table to query against
    create table querypointss (geometry sdo_geometry);
    INSERT INTO user_sdo_geom_metadata
    (TABLE_NAME,
    COLUMN_NAME,
    DIMINFO,
    SRID)
    VALUES (
    'querypointss',  -- name of the query table
    'geometry',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', -9267, 28329, 5E-7), MDSYS.SDO_DIM_ELEMENT('Y', -2598, 26587, 5E-7)),
    NULL  -- SRID value
    );
    insert into querypointss values (
    MDSYS.SDO_GEOMETRY(2001, NULL, MDSYS.SDO_POINT_TYPE(4224.0, 6828.0, NULL), NULL, NULL));
    commit;
    -- the following query works fine
    SELECT *
    FROM tab1 c, table(c.geom_info) tt, querypointss pp
    WHERE mdsys.sdo_anyinteract(tt.m.geom,
    MDSYS.SDO_GEOMETRY(2002, NULL, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1), MDSYS.SDO_ORDINATE_ARRAY(2859.85, 12677.3, 12677.9, 2887.63))
    ) = 'TRUE';
    -- but when the second argument of sdo_anyinteract is a column in a table, it does not work
    SELECT *
    FROM tab1 c, table(c.geom_info) tt, querypointss pp
    WHERE mdsys.sdo_anyinteract(tt.m.geom,
    pp.geometry
    ) = 'TRUE';
    -----
    -----
    and it returns this error:
    Error report:
    SQL Error: ORA-13268: error obtaining dimension from USER_SDO_GEOM_METADATA
    ORA-06512: at "MDSYS.MD", line 1723
    ORA-06512: at "MDSYS.MDERR", line 8
    ORA-06512: at "MDSYS.SDO_3GL", line 132
    ORA-06512: at "MDSYS.SDO_3GL", line 256
    13268. 00000 - "error obtaining dimension from USER_SDO_GEOM_METADATA"
    *Cause:    There is no entry in the USER_SDO_GEOM_METADATA view for the specified
               geometry table.
    *Action:   Insert an entry for the destination geometry table with the
               correct dimension information.

    What about

    /*+ leading(pp tt) use_nl(pp tt) use_index(tt geom_ump_idx1) */

    or

    /*+ leading(pp tt) use_nl_with_index(tt geom_ump_idx1) */
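
    Putting the suggested hints into the failing query would look something like this (a sketch, written with the standard LEADING, USE_NL and INDEX hint spellings):

    SELECT /*+ leading(pp tt) use_nl(pp tt) index(tt geom_ump_idx1) */ *
    FROM   tab1 c, table(c.geom_info) tt, querypointss pp
    WHERE  mdsys.sdo_anyinteract(tt.m.geom, pp.geometry) = 'TRUE';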

  • error when exporting datapump

    Hi gurus,

    Oracle Version: 10.2.0.3
    Operating system: Linux

    Today we had a problem when the Data Pump export ran on the production server; the error is:
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 9.126 GB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4869,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4868,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4867,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4869,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4868,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4867,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_TRANS"         213.3 MB 2017789 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_EXTERNAL_LOG"     712.6 MB 14029747 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_MASTER"        198.1 MB  915035 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_APPORTIONS"    110.7 MB 3951926 rows
    Can someone please help me? It is a serious problem and I need to know how to solve it.

    Thank you and best regards,
    Kahina Prasad.S

    SIDDABATHUNI wrote:
    Hi orawiss,

    Here is my export command

    expdp username/password directory=dbdump dumpfile=D_JUN09_$DT.dmp exclude=statistics,grants,PROCOBJ logfile=D_JUN09_$DT.log job_name=QFU
    

    But only this morning I added the PROCOBJ parameter to the exclude.

    Thanks and best regards,
    Kahina Prasad.S

    Bug 4358907, per the MOS note: ORA-39127 using datapump expdp [ID 451987.1]

    Try without using the EXCLUDE option.
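
    That is, something like this (a sketch of the command above with the EXCLUDE parameter removed):

    expdp username/password directory=dbdump dumpfile=D_JUN09_$DT.dmp logfile=D_JUN09_$DT.log job_name=QFU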

  • Need help with query to get days of work there are using the calendar BOM

    RDBMS: 10.2.0.4.0
    Oracle Applications: 11.5.10.2

    I am trying to use the BOM calendar that already exists in EBS to calculate working days for a report that I am working on. The BOM calendar stores the working and non-working days, with weekend days and holidays listed as non-working days. The following query gives correct results, but I'm looking for a different way to do the same thing without the union. I have nothing against unions, but I feel that I'm missing a more elegant way to get there. The query will run in Discoverer Plus 10.1.2.3, so using a 'By' statement is not supported.
    sample data
    
    calendar_code                calendar_date                 seq_num                     
    SAC-WRKDAY                   12/3/2010                     1817
    SAC-WRKDAY                   12/4/2010                                                              
    SAC-WRKDAY                   12/5/2010                                                              
    SAC-WRKDAY                   12/6/2010                     1818                                 
    SAC-WRKDAY                   12/7/2010                     1819                                 
    SAC-WRKDAY                   12/8/2010                     1820     
    SAC-WRKDAY                   12/9/2010                     1821
    SAC-WRKDAY                   12/10/2010                    1822
    SAC-WRKDAY                   12/11/2010                   
    SAC-WRKDAY                   12/12/2010
    SAC-WRKDAY                   12/13/2010                    1823             
    select calendar_code
         , calendar_date
         , seq_num
         , sum(decode(bcd.seq_num,null,0,1))over(partition by bcd.calendar_code order by bcd.calendar_date desc) workdays_ago
    from   bom.bom_calendar_dates bcd
    where  calendar_code = 'SAC-WRKDAY'     
    and    trunc(bcd.calendar_date) < trunc(sysdate)
    
    union
    
    select calendar_code
         , calendar_date
         , seq_num
         , -sum(decode(bcd.seq_num,null,0,1))over(partition by bcd.calendar_code order by bcd.calendar_date ) workdays_ago
    from   bom.bom_calendar_dates bcd
    where  calendar_code = 'SAC-WRKDAY'
    and    trunc(bcd.calendar_date) > trunc(sysdate)
    sample output
    
    calendar_code                calendar_date                 seq_num                      workdays_ago
    SAC-WRKDAY                   12/3/2010                     1817                                  3
    SAC-WRKDAY                   12/4/2010                                                           2
    SAC-WRKDAY                   12/5/2010                                                           2
    SAC-WRKDAY                   12/6/2010                     1818                                  2
    SAC-WRKDAY                   12/7/2010                     1819                                  1
    SAC-WRKDAY                   12/9/2010                     1821                                 -1
    SAC-WRKDAY                   12/10/2010                    1822                                 -2
    SAC-WRKDAY                   12/11/2010                                                         -2
    SAC-WRKDAY                   12/12/2010                                                         -2
    SAC-WRKDAY                   12/13/2010                    1823                                 -3

    Hello

    Of course, you should be able to combine these queries, something like this:

    select calendar_code
         , calendar_date
         , seq_num
         , sum ( CASE
               WHEN  bcd.seq_num IS NULL
               THEN  0
               WHEN  bcd.calendar_date < TRUNC (SYSDATE)
               THEN  1
               ELSE  -1
              END
            ) over ( partition by  bcd.calendar_code
                  ,            SIGN (SYSDATE - bcd.calendar_date)
                  order by        ABS  (SYSDATE - bcd.calendar_date)
                ) workdays_ago
    from   bom.bom_calendar_dates bcd
    where  calendar_code = 'SAC-WRKDAY'
    and    trunc(bcd.calendar_date) < trunc(sysdate)
    

    If you would like to post some CREATE TABLE and INSERT statements for the sample data, then I could test it.

    t_norwillo wrote:
    ... I have nothing against unions, but I feel that I'm missing a more elegant way to get there.

    Good thinking!
    When the two branches of a UNION query the same table, there is usually a more efficient way: something that requires only one pass through the table.

  • BAD RESULTS WITH OUTER JOINS AND TABLES WITH A CHECK CONSTRAINT

    Hi All,
    Could anyone tell me when we would encounter this bug? Please help me with a simple example so that I can search for it in my DB.


    Bug:-8447623

    Bug Desc: BAD RESULTS WITH OUTER JOINS AND TABLES WITH A CHECK CONSTRAINT


    I ran the outer join queries with check constraints on 11g 11.1.0.7.0 and 10gR2, but the result is the same. I need to know, from the experts and people who have already experienced this bug, the scenario in which I would face it.


    Version:
    SQL> select * from v$version;
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Solaris: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production

    Why do you not use the test case from the bug description in Metalink (we obviously can't post it here because it would violate Metalink's copyright)? Your test case is not a candidate for join elimination, so it does not hit the bug.

    Have you really read the description of the bug in Metalink rather than just looking at the title? The bug description is quite clear that a query plan involving join elimination is a necessary condition. A bug title will never tell the whole story.

    Are you trying to work through the few tens of thousands of bugs in 11.1.0.7, many of which are not published, to determine whether your application would be affected by each one? Wouldn't it be orders of magnitude easier to upgrade the application to 11.1.0.7 in a test environment and test the application to see what, if anything, breaks? Understand that the vast majority of the problems that people experience during an upgrade are not the result of bugs; they are the result of documented changes in behaviour, such as changes in query plans. And among those who do encounter bugs, a relatively large fraction hit newly introduced ones. Even if you completed the Herculean task of verifying every bug against your code base, it would not make the upgrade significantly easier. In addition, by the time you actually finished that analysis, Oracle would likely have released 3 or 4 new versions.

    And at this stage, would it not be wise to consider an upgrade to 11.2?

    Justin
