Current date/time for FLASHBACK_TIME in a 10g Data Pump export

Does anyone know how to specify the date and time for FLASHBACK_TIME in a 10gR2 expdp?

11gR2 Data Pump usefully allows SYSTIMESTAMP to be given directly, so you can specify
FLASHBACK_TIME=systimestamp

However, 10gR2 Data Pump does not have this option. I could use the SCN or specify the date and time explicitly, but I was wondering whether it is possible to easily specify the current date/time in a 10gR2 expdp.
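
For reference, the explicit form I would otherwise have to type looks something like this in a parameter file (the timestamp value is just an example):

FLASHBACK_TIME="TO_TIMESTAMP('2010-09-17 10:00:00', 'YYYY-MM-DD HH24:MI:SS')"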

Thank you
Jim

I know there was some work to allow specifying

flashback_time = sysdate

during the 11g timeframe, but I don't remember whether it went in as a bug fix or not. I believe it did. I don't know if it was backported; it could have been. You can contact Oracle Support to see if there is a patch available for the version you are on.

If you call it from a script, you can use something like:

> set echo off
> set linesize 132
> connect username/password
> set heading off
> set feedback off
> spool expdp.par
> select 'flashback_time="' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') || '"' from dual;
> spool off
> exit

Then, you can use this:

expdp username/password parfile=expdp.par ...

Hope this helps

Dean

Tags: Database

Similar Questions

  • Advantage of Data Pump export and import over the original export and import

    Hello

    Please let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).

    Hello

    There are many advantages to using Data Pump.

    For example, with the INCLUDE/EXCLUDE parameters you can filter exactly which objects and/or object types you intend to export or import, which is not easy with the original export/import (except for tables, indexes, constraints, ...).
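
    For instance, a minimal sketch of such filtering (assuming a SCOTT schema and a DATA_PUMP_DIR directory object; note that INCLUDE and EXCLUDE cannot be combined in the same job, and the INCLUDE clause is usually easier to put in a parameter file to avoid shell quoting):

    expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_tabs.dmp schemas=scott include=TABLE:"IN ('EMP','DEPT')"
    expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_nostats.dmp schemas=scott exclude=STATISTICS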

    You can import directly over a NETWORK_LINK without using a dump file.

    You have many interesting features like COMPRESSION, FLASHBACK_SCN/FLASHBACK_TIME, ...

    You can use the PL/SQL API (DBMS_DATAPUMP) to perform your export/import instead of using the command-line interface.

    Moreover, Data Pump is much better optimized than the original export/import and uses direct path or external tables, ... and what about the REMAP_% parameters that let you remap datafiles, schemas, tablespaces...

    There is a lot to say about Data Pump. You will find a very good overview of this tool at the following links:

    http://www.Oracle-base.com/articles/10G/OracleDataPump10g.php

    http://www.Oracle-base.com/articles/11g/DataPumpEnhancements_11gR1.php

    Hope this helps.
    Best regards
    Jean Valentine

  • Data Pump API export - getting the number of rows exported

    Hello

    I'm working on a procedure to export a partition's data before dropping the partition. It will be run by the database scheduler, which is why I want to run the Data Pump job using the API.

    I wonder if it is possible to get the number of rows exported. I would like to compare it with the number of rows in the partition before dropping the partition.


    Thank you

    Krystian

    Hello

    I'm not sure exactly how you want to obtain the number of rows exported per partition, but here are a few ideas:

    1. Create a log file by using 'add_file':

    -- Add a log file

    dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
                           dbms_datapump.ku$_file_type_log_file);

    It is also included in my example below.  Here is the content of DEPTJOB.log after the job ran (it is written to the Oracle directory object 'd' in my example):

    $ cat /tmp/DEPTJOB.log

    Starting "SCOTT"."DEPTJOB":
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows
    . . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows
    . . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows
    Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.DEPTJOB is:
      /tmp/dept.dmp
    Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    You can then review or extract the information from the log file.

    2. Keep the master table and query it for the completed rows.

    Use the 'KEEP_MASTER' parameter:

    -- Keep the master table from being deleted after the job ends

    dbms_datapump.set_parameter(h,'KEEP_MASTER',1);

    Here is my example; the query against the master table is at the end.

    $ sqlplus scott/tiger @deptapi

    SQL*Plus: Release 12.2.0.0.2 Beta on Fri Jan 22 12:55:52 2016

    Copyright (c) 1982, 2015, Oracle.  All rights reserved.

    Last successful login time: Friday, January 22, 2016 12:55:05-08:00

    Connected to:

    Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta

    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    Connected.

    SQL > SET FEEDBACK 1

    SQL > SET NUMWIDTH 10

    SQL > SET LINESIZE 2000

    SQL > SET TRIMSPOOL ON

    SQL > SET TAB OFF

    SQL > SET PAGESIZE 100

    SQL > SET SERVEROUTPUT ON

    SQL >

    SQL > Rem Save the old scott.dept table

    SQL > rename dept to dept_old;

    Table renamed.

    SQL >

    SQL > Rem re-create it with partitions

    SQL > CREATE TABLE dept (deptno NUMBER, dname VARCHAR2(14), loc VARCHAR2(13)) PARTITION BY HASH (deptno) PARTITIONS 3
      2  ;

    Table created.

    SQL >

    SQL > Rem fill the dept table

    SQL > insert into dept select * from dept_old;

    4 rows created.

    SQL >

    SQL > Rem Now create a Data Pump job to export SCOTT.DEPT using the API

    SQL > DECLARE
      h              NUMBER;          -- Data Pump handle
      jobState       VARCHAR2(30);    -- To keep track of the job state
      ind            NUMBER;          -- Loop index
      le             ku$_LogEntry;    -- For WIP and error messages
      js             ku$_JobStatus;   -- The job status from get_status
      jd             ku$_JobDesc;     -- The job description from get_status
      sts            ku$_Status;      -- The status object returned by get_status
      sql_stmt       VARCHAR2(1024);
      partition_name VARCHAR2(50);
      rows_completed NUMBER;
    BEGIN
      --
      -- Set up the job based on the operation to perform.
      --
      h := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'DEPTJOB', NULL);
      dbms_datapump.add_file(h, 'dept.dmp', 'd', NULL,
                             dbms_datapump.ku$_file_type_dump_file, 1);

      -- Add a log file
      dbms_datapump.add_file(h, 'DEPTJOB.log', 'd', NULL,
                             dbms_datapump.ku$_file_type_log_file);

      dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
      dbms_datapump.metadata_filter(h, 'NAME_LIST', '''DEPT''');

      --
      -- Start the job.
      --
      dbms_datapump.set_parameter(h, 'SILENT', 'banner');

      -- Keep the master table from being deleted after the job ends
      dbms_datapump.set_parameter(h, 'KEEP_MASTER', 1);

      dbms_datapump.start_job(h);

      --
      -- Loop to grab the output of the job and write it to the output log.
      --
      jobState := 'UNDEFINED';
      WHILE (jobState != 'COMPLETED') AND (jobState != 'STOPPED')
      LOOP
        dbms_datapump.get_status(h,
                                 dbms_datapump.ku$_status_job_error +
                                 dbms_datapump.ku$_status_wip, -1, jobState, sts);

        --
        -- If we received WIP or error messages for the job, display them.
        --
        IF (BITAND(sts.mask, dbms_datapump.ku$_status_wip) != 0)
        THEN
          le := sts.wip;
        ELSE
          IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          THEN
            le := sts.error;
          ELSE
            le := NULL;
          END IF;
        END IF;

        IF le IS NOT NULL
        THEN
          ind := le.FIRST;
          WHILE ind IS NOT NULL
          LOOP
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          END LOOP;
        END IF;
      END LOOP;

      --
      -- Release the job.
      --
      dbms_datapump.detach(h);

    --
    -- Any exceptions that propagate to this point will be captured.
    -- The details are retrieved from get_status and displayed.
    --
    EXCEPTION
      WHEN OTHERS THEN
        BEGIN
          dbms_datapump.get_status(h,
                                   dbms_datapump.ku$_status_job_error, 0,
                                   jobState, sts);
          IF (BITAND(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          THEN
            le := sts.error;
            IF le IS NOT NULL
            THEN
              ind := le.FIRST;
              WHILE ind IS NOT NULL
              LOOP
                dbms_output.put_line(le(ind).LogText);
                ind := le.NEXT(ind);
              END LOOP;
            END IF;
          END IF;

          BEGIN
            dbms_datapump.stop_job(h, 1, 0, 0);
          EXCEPTION
            WHEN OTHERS THEN NULL;
          END;

        EXCEPTION
          WHEN OTHERS THEN
            dbms_output.put_line('ORA-00000: An unexpected exception during ' ||
                                 'the exception handler. ' ||
                                 'sqlcode = ' || TO_CHAR(SQLCODE));
        END;
    END;
    /

    Starting "SCOTT"."DEPTJOB":
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . exported "SCOTT"."DEPT":"SYS_P1581"    5.929 KB    2 rows
    . . exported "SCOTT"."DEPT":"SYS_P1582"    5.914 KB    1 rows
    . . exported "SCOTT"."DEPT":"SYS_P1583"    5.906 KB    1 rows
    Master table "SCOTT"."DEPTJOB" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.DEPTJOB is:
      /tmp/dept.dmp
    Job "SCOTT"."DEPTJOB" successfully completed at 00:00

    PL/SQL procedure successfully completed.

    SQL >

    SQL > Rem Query the master table for the number of completed rows

    SQL > column partition_name format a10

    SQL > column completed_rows format 9999

    SQL > select partition_name, completed_rows from scott.deptjob where base_object_name = 'DEPT';

    PARTITION_ COMPLETED_ROWS

    ---------- --------------

    SYS_P1581 2

    SYS_P1583 1

    SYS_P1582 1

    3 rows selected.

    SQL >

    SQL > exit

    3. You could even extract the information from the command-line invocation:

    $ sqlplus scott/tiger @deptapi.sql | grep "exported" | awk '{print "Table: " $4, "loaded", $7, $8}'

    Table: "SCOTT"."DEPT":"SYS_P1581" loaded 2 rows
    Table: "SCOTT"."DEPT":"SYS_P1583" loaded 1 rows
    Table: "SCOTT"."DEPT":"SYS_P1582" loaded 1 rows

  • Network export in datapump

    Hello... I'm not a DBA, and I'm trying to do a network import/export using Data Pump in SQL Developer. I'm supposed to do this via the utility wizards in SQL Developer (View > DBA). I have the DATAPUMP_EXP_FULL_DATABASE privilege on the source and the target (and the respective import privilege as well).

    Please guide me on the process (on the command line I know I need to add NETWORK_LINK=source_database_link to a normal export), but how do we do it in the wizard?

    Please suggest.

    Do you have Metalink access? If so, please check: Impdp fails over Network_link with ORA-904 "PARENT_PROCESS_ORDER": invalid identifier (Doc ID 1170933.1)
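
    For reference, the command-line equivalent the question alludes to might look like this (all names here are placeholders; with NETWORK_LINK no dump file is needed, and the directory is only used for the log file):

    impdp target_user/password@target_db network_link=source_database_link schemas=USER01 directory=DATA_PUMP_DIR logfile=net_import.log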

    Thank you

  • Current hour - 4

    Hello, I am building a subselect to display the current day, the month, and the hour minus 4, in the format DDMM(HH-4). For example, current date 17 Sep 10:00 -> required select output: 170906.

    I tried the following and it's close enough:

    SELECT TO_CHAR (SYSDATE, 'DD') ||
           TO_CHAR (SYSDATE, 'MM') ||
           DECODE ((TO_CHAR (SYSDATE, 'HH24')),
                   -4, 00,
                   -3, 01,
                   -2, 02,
                   -1, 03,
                   (TO_CHAR (SYSDATE, 'HH24') - 04))
    FROM DUAL;

    Unfortunately it truncates the leading 0 of the hour, and I need the hour in HH24 format:
    -> required: 170906
    -> what I get: 17096

    Any help will be much appreciated.

    Thank you

    Or you can use interval notation to be even clearer:

    SQL> select sysdate
      2       , to_char(sysdate - interval '4' hour,'ddmmhh24')
      3    from dual
      4  /
    
    SYSDATE             TO_CHA
    ------------------- ------
    17-09-2010 09:38:04 170905
    
    1 row selected.
    

    Kind regards
    Rob.

  • Import with Data Pump when the Data Pump export was executed as user SYS

    Hi all

    All I have is a dump file and the log file of a Data Pump export. The export was executed as user SYS:

    Export: Release 11.2.0.1.0 - Production on Wed Dec 3 12:02:22 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ;;;
    Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    Starting "SYS"."SYS_EXPORT_FULL_01":  "sys/***@database AS SYSDBA" directory=data_pump_dir dumpfile=db_full.dmp logfile=db_full.log full=y
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 52.84 GB

    Now I have to import (with Data Pump) the users USER01 and USER02, but I don't know the names of the source database's tablespaces.

    I want to keep the user names (USER01/USER02). That's why I created these users in the target database.

    The questions I have are:

    - Should I start the Data Pump import as user SYS?

    - Which parameters should I use to import the users USER01 and USER02 and their data into the target database? Since I do not know the names of the tablespaces

    in the source database, the REMAP_TABLESPACE parameter will not help.

    Any help will be appreciated

    J.

    Hi J.

    The questions I have are:

    - Should I start the Data Pump import as user SYS?

    No, you should do the import with a user that has the "imp_full_database" role.

    - Which parameters should I use to import the users USER01 and USER02 and their data into the target database? Since I don't know the names of the tablespaces in the source database, the REMAP_TABLESPACE parameter will not help.

    Well, one idea is to generate a SQLFILE from a schema-mode import and check in the DDL which tablespaces it will try to create the objects in.

    impdp \'/ as sysdba\' directory=<dir> dumpfile=<dumpfile> schemas=USER01,USER02 sqlfile=<file.sql>
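
    Once the tablespace names are visible in the generated SQL file, a follow-up import could either pre-create those tablespaces or remap them; a rough sketch (OLD_TS/NEW_TS and the file names are placeholders, not values from your export):

    impdp \'/ as sysdba\' directory=<dir> dumpfile=db_full.dmp schemas=USER01,USER02 remap_tablespace=OLD_TS:NEW_TS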

    For more information, take a look at the import documentation

    http://docs.Oracle.com/CD/B19306_01/server.102/b14215/dp_import.htm

    Hope this helps,

    Kind regards

    Anatoli A.

  • DataPump: can dump files copied during the Data Pump export still work?

    Hello

    Let's say that a Data Pump export with the parameters

    DUMPFILE=exportdp_DUMPFILE_%U.dmp
    FILESIZE=2G
    PARALLEL=2

    will generate 20 dump files of 2 GB each. The Data Pump workers will create two dump files in parallel and write to both of them in parallel. As soon as a dump file reaches a size of 2 GB, the next file is created and data is written to it.
    My question now is:
    Can the files that have already reached their 2 GB size be copied to another server? This would have the advantage that I don't have to wait until the end of the Data Pump export before starting the file transfer to the other server. That would certainly save some time and speed up the Data Pump export/import process...
    Or should I wait until the Data Pump export has finished before the files can be copied - simply because Data Pump will still update the files with 'something'?

    Any help will be appreciated

    Rgds
    JH

    You could start copying when the files appear to be full, but you may then need to re-copy some of them if they are updated after your copy. That only happens to the dump files that the master table spans.

    Dean

  • Export DataPump with the query option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to extract a few sets of records using a query in a production export. The query joins several tables. Since we have the BLOB data type, we export using Data Pump.

    We have lower environments, but they do not have the same set of data and tables, so I was not able to simulate the same query there. Instead I created a small table and mimicked the query.

    My order is

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:"where num < 3" directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log

    The query in the command pulls exactly two records. Running the command above, I get an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    . . exported "DUMP"."DUMP1"    4.921 KB    2 rows

    My doubts are:
    (1) Is the command I am running correct?
    (2) The estimate said 64 KB, whereas it also says 4.921 KB exported. But the dump file created is 80 KB. Was it exported correctly?
    (3) Given that I run it as SYSTEM, will it export all the data apart from the 2 rows? We must send the dump file to another department; we should not export any data other than the query output.
    (4) If I do not use "tables=dump.dump1" in the command, the export file gets big. I don't know which is right.

    Your answers would be very helpful.

    The short answer is 'YES', it did the right thing.

    The long answer is:

    The query in the command pulls exactly two records. Running the command above, I get an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    . . exported "DUMP"."DUMP1"    4.921 KB    2 rows

    My doubts are:
    (1) Is the command I am running correct?

    Yes. As long as your query is correct, Data Pump will export only the rows that match that query.

    (2) The estimate said 64 KB, whereas it also says 4.921 KB exported. But the dump file created is 80 KB. Was it exported correctly?

    The estimate is made using the full table. Since you did not specify an estimate method, it used the BLOCKS estimation method: basically, how many blocks have been allocated to that table. In your case, I guess that was 80 KB.

    (3) Given that I run it as SYSTEM, will it export all the data apart from the 2 rows? We need to send the dump file to another department. We should not export any data other than the query output.

    It will not export all of the data, but it will export metadata. It exports the table definition, all indexes on it, all the statistics on the table and indexes, etc. This is why the dump file can be bigger. There is also a 'master' table that describes the export job, and it gets exported too. It is used by export and import to find out what is in the dump file and where in the dump file those things are. It is not user data, but it needs to be exported and takes up space in the dump file.

    (4) If I do not use "tables=dump.dump1" in the command, the export file gets big. I don't know which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems your command is correct.

    If you do not want any metadata exported, you can add:

    content=data_only

    to the command line. This will export only the data, and when the dump file is imported, the table must already exist.
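
    For example, a data-only variant of the command might look like this (a sketch; same placeholders as in the original command):

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:"where num < 3" content=data_only directory=DATA_PUMP_DIR dumpfile=exp_dp_data.dmp logfile=exp_dp_data.log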

    Dean

  • 10g RAC archive current log

    How does the command 'alter system archive log current' work with 10g RAC? Will it apply to all instances of the RAC or just to the connected instance?

    Since your handle is 'RAC_DBA', I think you should be able to test this and answer your own question?
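
    A quick way to test it yourself might be something like this (a sketch; it assumes the database is in archivelog mode and that you can query v$archived_log):

    SQL> alter system archive log current;
    SQL> select thread#, max(sequence#) from v$archived_log group by thread#;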

  • Tablespace-level export, schema-level import - is it possible?

    Hello

    If I have a tablespace-level Data Pump export (performed with TABLESPACES=<list of tablespaces>), is it possible to import only the tables and dependent objects of a specific schema residing in the exported tablespaces? The DB version is 11.2.0.3.0. According to the documentation it should be possible: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#i1011943 : "A schema import is specified using the SCHEMAS parameter. The source can be a full, table, tablespace, or schema-mode export dump file set or another database."

    A quick test, however, suggests that it is not so:

    (1) Source DB: I have two tablespaces (TS1, TS2) and two schemas (USER1, USER2):

    SQL> select segment_name, tablespace_name, segment_type, owner
      2  from dba_segments
      3  where owner in ('USER1', 'USER2');

    OWNER  SEGMENT_NAME  SEGMENT_TYPE  TABLESPACE_NAME
    ------ ------------- ------------- ----------------
    USER1  UQ_1          INDEX         TS1
    USER1  T2            TABLE         TS1
    USER1  T1            TABLE         TS1
    USER2  T4            TABLE         TS2
    USER2  T3            TABLE         TS2

    (2) I perform a tablespace-level export:

    $ expdp system directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp

    Export: Release 11.2.0.3.0 - Production on Fri Jul 11 14:02:54 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Starting "SYSTEM"."SYS_EXPORT_TABLESPACE_01":  system/******** directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp

    Estimate in progress using BLOCKS method...

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 256 KB

    Processing object type TABLE_EXPORT/TABLE/TABLE

    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    . . exported "USER1"."T1"    5.007 KB    1 rows
    . . exported "USER1"."T2"    5.007 KB    1 rows
    . . exported "USER2"."T3"    5.007 KB    1 rows
    . . exported "USER2"."T4"    5.007 KB    1 rows

    Master table "SYSTEM"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded

    ******************************************************************************

    (3) I try to import only the objects belonging to USER1 and I get the error 'ORA-39039: Schema expression "('USER1')" contains no valid schemas':

    $ impdp system directory=dp_dir schemas=USER1 dumpfile=test.dmp

    Import: Release 11.2.0.3.0 - Production on Fri Jul 11 14:05:15 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    ORA-31655: no data or metadata objects selected for job

    ORA-39039: Schema expression "('USER1')" contains no valid schemas.

    (4) However, the dump file clearly contains the owner of the tables:

    $ impdp system directory=dp_dir dumpfile=test.dmp sqlfile=imp_dump.txt

    Excerpt from imp_dump.txt:

    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "USER1"."T1"
       ("DUMMY" VARCHAR2(1 BYTE)
       );

    So, is it possible to somehow filter only the objects belonging to a certain schema?

    Thanks in advance for any suggestions.

    Swear

    Hi Swear,

    This prompted a little investigation on my part, which I thought was worthy of a blog post in the end...

    Oracle DBA Blog 2.0: A datapump bug or a feature and an obscure workaround

    Initially I thought you had made a mistake, but that doesn't seem to be the way it behaves. I've included a few possible solutions; see if one of them does what you want to do...
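
    Not knowing which workaround the blog post settles on, one fallback that is sometimes used is to switch to a table-mode import and list the schema's tables explicitly, for example (a rough sketch using the objects from the test above):

    $ impdp system directory=dp_dir dumpfile=test.dmp tables=USER1.T1,USER1.T2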

    Cheers,

    Rich

  • expdp 10g default tablespace: ORA-31626, ORA-31633, ORA-06512, ORA-01647

    Hello

    I currently have a 10g database running on an AIX server. One of its tablespaces (CTSPRODDOC) has data files in two locations:

    1 /doc/ctsproddoc.dbf

    2 /image/ctsproddoc_01.dbf

    When I try to export the tablespace using the following command:

    expdp cts/cts directory=dmpdir1 tablespaces=CTSPRODDOC dumpfile=CTSPRODDOC.dmp

    it fails with the following error messages:

    ORA-31626: job does not exist

    ORA-31633: unable to create master table "CTSPRODDOC.SYS_EXPORT_TABLESPACE_05"

    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

    ORA-06512: at "SYS.KUPV$FT", line 863

    ORA-01647: tablespace 'CTSPRODDOC' is read-only, cannot allocate space in it

    I was able to successfully export a different tablespace from the same database to the same directory, but I am unable to export CTSPRODDOC.

    Can someone please advise.

    Thank you

    Hem

    Hello

    I suspect that the default tablespace for the CTS user is CTSPRODDOC. Can you change it to something else, or do the export as a different user that has a different default tablespace?
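
    For instance, a sketch of how you might check and change it (assuming the export user is CTS and that a writable tablespace such as USERS exists):

    SQL> select username, default_tablespace from dba_users where username = 'CTS';
    SQL> alter user cts default tablespace users;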

    Cheers,

    Harry

  • Does it really take 22 hours for AME to transcode 1 hour of ProRes (apcn) with the YouTube preset?

    My question is: will it always take this long?

    If you haven't already checked my system info, I'm at the tail end of a 6-year-old HP 8400.  I (finally) got the OK to acquire a new system.  I know that this new system (http://www.pugetsystems.com/saveconfig.php?id=111597) will improve timeline and preview speed and After Effects rendering, and should export most codecs much faster, but I'm concerned about the straight transcoding work I do in AME (Adobe Media Encoder), more precisely ProRes to something else.

    The company I work for stages and produces a lot of live corporate events.  I create content for the event, then handle whatever the client wants done with the recorded footage afterwards.  Sometimes that's the full package and sometimes it's just something for them to share internally and on their website.  Whatever it is, generally the first thing they want is to simply see the footage.  So, regardless of what the footage later becomes, I run it through AME and turn it into H.264 using the YouTube-friendly preset.

    Currently, on my aging machine, AME uses only about 30% of my 8 cores, only 7-8 GB of my 20 GB of RAM and, of course, 0% of my GTX 285.  So, should I assume that, for lack of codec support, AME will take just as long to transcode on a machine with twice the cores (at higher clock speeds) and 3 times the RAM?  Will it just use even less of the system and still take 22 hours?  Will I at least see a reduction in time from moving from DDR2 to DDR3 and the other lower-latency benefits of current hardware?

    I was going to include the MediaInfo output for the source and final files, as well as the AME log and some other info, but since I've already written a wall of text, that can wait until someone needs it to answer my question.

    Thanks in advance.

    Jason

    BTW - I've been using Adobe products since the mid-1990s (Premiere 4.0, After Effects "Melmet") and I've been lurking here and scraping info from all of you across at least 2 forums; this is my first post in the "new" forum.

    Hi Jason,

    How can I put things in perspective for you? It sounds like you don't know what to expect, which is understandable when you're coming from really old hardware. And which version of Premiere are you running?

    Computer technology has come along in leaps and bounds in recent years, nothing short of amazing! Since CS5, Premiere uses the GPU (graphics card) to provide hardware acceleration for editing and effects, using the Mercury Playback Engine. MPE can run on CPU only, but it makes a world of difference when GPU hardware is used as well. Of course the software and the OS are 64-bit now, which really helps, especially with the large amounts of RAM allowed.

    Years ago it was almost mandatory to get a dual-processor machine for editing. Today's quad-core Core i7 machines essentially have 4 processors on a single chip, so it's like having more than one processor. Add Hyper-Threading and the 4 cores look like 8 cores to the software.

    So, basically, any recent Core i7 machine will do a great job of editing and exporting HD footage from Premiere CS6. Many effects, titles, overlays and scaling of the video are all possible without a red bar! I once stacked 15 or 20 layers of 1080i AVCHD, making several PIPs on the screen, and it played back smoothly without dropping a frame.

    If you work mainly with Premiere, then a 4-core or 6-core i7 machine will make most editors very happy. If After Effects and/or 3D animation is a big part of your workflow, that's where a dual-Xeon machine can make a difference. In Premiere itself you would see little difference: an export that already runs in real time won't go any faster on an $11k machine.

    On a nice i7 machine costing $3-5k, you should expect to export H.264 in real time or better. With DV footage going to the MPEG-2 DVD format, a two-hour timeline can be exported in under 10 minutes. Seriously. The audio .wav of a two-hour timeline? Seconds to export.

    Puget is a respected company using nice gear, and if you feel the need for "the best of the best" and have the budget to back it up, that $11k system will certainly rock. We are also an Adobe Certified System Integrator and would be happy to consult and provide comparison pricing as well.

    Here is a link to Dave Helmly discussing new workstation options for CS6 - http://blog.sharbor.com/blog/2012/12/adobes-dave-helmly-discusses-thunderbolt-for-pc-and-t riptide sunami /

    Thank you for your attention

    Jeff Pulera
    Safe Harbor computers

    sharbor.com

  • DataPump - workspace, users, applications

    Hey, this is a newbie question.

    We are looking to set up a Data Pump job to export our workspace, users, applications, and data.
    It would run every night.
    To test it, we dropped the schema, recreated the schema, and imported from the Data Pump export.

    We got our tables, views, etc.
    But we did not get the workspace, users, and applications.

    Is it possible to get those with Data Pump?

    (We know that we can create the workspace & schema within the APEX administration interface and import the workspace etc. that were exported using APEX. However, our DBA suggested doing it the other way round.)

    Hello

    No, it won't save your workspace, users or applications, since those do not live in your (parsing) schema; they are stored in the underlying APEX metadata.

    Currently there is no easy way to automate the export of workspaces, but you can automate the export of the applications themselves. You may find my blog post on this useful -

    http://Jes.blogs.shellprompt.NET/2006/12/12/backing-up-your-applications/

    Fortunately the workspace definition does not tend to change frequently, so exporting it manually (along with the users) should not be as tedious as if you had to manually export the applications.

    Hope this helps,

    John.
    --------------------------------------------
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.sumneva.com (formerly http://www.apex-evangelists.com)
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    Rewards: Don't forget to mark correct or helpful posts on the forum, not only for my answers but for everyone!

  • EXPDP with FLASHBACK_SCN/TIME

    Will it take more time to take an EXPDP backup with FLASHBACK_SCN/FLASHBACK_TIME? If so, how big will the difference be?

    Hello

    My guess is: it depends. When a table is exported using Data Pump, the table is always exported consistently. Let's say you have a table that is partitioned into 10 partitions, and say the export job runs with parallel=1. That means each partition is exported serially. There could be minutes between partitions, or, if they are large enough, maybe even hours. When Data Pump assigns the first partition to be exported, it remembers the SCN and uses it for all partitions of the same table, so even if you do not specify a flashback value, flashback is always used for partitioned tables. Now, if your database has no partitioned tables at all, then no flashback would be used.

    So the performance impact would vary according to what your database looks like. If you have many partitioned tables, then I think the hit is not necessarily noticeable, because Data Pump is already using flashback. If you have no partitioned tables and you use flashback, you may see a performance hit, since all data would then be exported using flashback.

    In addition, flashback is only used for data; it is never used for metadata. So if you have a lot of data, you may see more of a performance hit than if you don't have much data.

    As for the difference between FLASHBACK_TIME and FLASHBACK_SCN: Data Pump converts the flashback time into an SCN value. So once that conversion has taken place, the SCN value is used for all the data unloading.
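
    For reference, the two forms look something like this (the values are placeholders; on 10g the time form is usually easiest to put in a parameter file to avoid shell quoting):

    flashback_time="TO_TIMESTAMP('2014-07-11 14:00:00', 'YYYY-MM-DD HH24:MI:SS')"
    expdp system/<pwd> schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp flashback_scn=1234567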

    Dean

  • 10g exp dump file security question

    Hi all
    If I lose my dmp file, is there a way to secure that dmp file?
    I mean so that people cannot view its contents by using 'strings' or other dump commands.

    exp cannot support encryption with 10g - is that right?

    Thank you

    Are you using the Data Pump version of the export utility, or the classic version? The Data Pump version offers options to specify encryption settings (ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE and ENCRYPTION_PASSWORD). The classic version does not.

    Since you are using 10g, you should really be using the Data Pump version, but your use of 'exp' in the subject implies that you are using the classic version.
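
    For example, a sketch of a password-protected Data Pump export (the schema, directory and password are placeholders; note that the full set of ENCRYPTION parameters only arrived with 11g, while 10g Data Pump has just ENCRYPTION_PASSWORD, which only protects columns already encrypted with Transparent Data Encryption):

    expdp system/<pwd> schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_enc.dmp encryption_password=MySecret123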

    Justin
