Transportable tablespaces in NOARCHIVELOG mode

Can transportable tablespaces be used when the source and destination databases are in NOARCHIVELOG mode?

Fahd Mirza says:
Can transportable tablespaces be used when the source and destination databases are in NOARCHIVELOG mode?

Yes. It is the exp/imp utility doing the work, so it has nothing to do with archivelog mode.

Regards,
Rajesh
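The classic exp/imp transportable tablespace flow can be sketched as below; user names, passwords, tablespace names, and paths are hypothetical placeholders:

```
# Make the tablespace read-only on the source first:
#   SQL> ALTER TABLESPACE users READ ONLY;

# Export only the tablespace metadata (classic exp):
exp system/password transport_tablespace=y tablespaces=users file=tts_users.dmp

# Copy users01.dbf and tts_users.dmp to the destination, then plug in:
imp system/password transport_tablespace=y datafiles='/u01/oradata/users01.dbf' file=tts_users.dmp
```

Since exp/imp only moves metadata and the datafiles are copied at the OS level, no redo or archived logs are involved in the transport itself.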

Tags: Database

Similar Questions

  • Need help with transportable tablespaces: converting and moving the datafiles

    I need help with transportable tablespaces: after converting and moving the datafiles to the target server, how do I attach them to the ASM instance?

    And how do I start the database? Which controlfile should I use?

    Thank you.

    I hope this helps.

    http://taliphakanozturken.wordpress.com/2012/03/27/moving-datafiles-and-redo-log-files-from-filesystem-to-asm-storage/

    http://taliphakanozturken.wordpress.com/2012/03/21/how-to-move-a-datafile-from-file-system-to-asm-disk-group/

    Talip Hakan Öztürk
    http://taliphakanozturken.wordpress.com/

  • RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process

    Hi Oracle Community,

    I have a few databases (11.2.0.4 2-node on POWER, and 12.1.0.2) with Data Guard. I back these databases up with RMAN to tape, only on the primaries.

    During the backup, I sometimes get RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process

    Because of this warning, the backup status is "COMPLETED WITH WARNINGS". Is it possible to suppress this warning, or perhaps change the RMAN configuration to avoid it appearing?

    Regards,

    J

    What is the problem with these warnings? If you do not want to see them, why not simply perform the removal of the archivelogs in a separate step? For example:

    RMAN> backup database plus archivelog;
    RMAN> delete noprompt archivelog all completed before 'sysdate-1';
    
  • Delete archived logs > 1 day

    Hi all

    9i

    RHEL5

    I once posted a thread here on how to delete archived logs older than 1 day on both the PRIMARY and the standby database.

    But I can't find it anymore.

    Is it possible to search the contents of all my threads using the keyword "delete archive logs"?

    Thank you all,

    JC

    Hello;

    Your old thread:

    Remove the archivelogs and old backups

    Best regards

    mseberg

  • Transportable tablespace

    Hello

    Does transportable tablespace mean using impdp/expdp? Is it included in Oracle Standard Edition?

    Thank you.

    Hello

    Oracle Data Pump is also available in Standard Edition. Standard Edition does not allow applying parallelism to Data Pump, but jobs can still be run.

    Import Transportable Tablespaces is also available in Standard edition.

    For more details, see feature availability by edition here:

    http://docs.oracle.com/cd/B28359_01/license.111/b28287/editions.htm#DBLIC116

    Kind regards

    IonutC

  • Purging archived logs on primary and standby with RMAN in Data Guard


    Hi, I have seen a couple of commands regarding purging archived logs in a Data Guard configuration:

    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    Q1. Does the above only remove archived logs on the primary, or on both primary and standby?

    Q2. If the above deletes archived logs on the primary, does it really remove them immediately, or does it just let the FRA delete them when space is needed?

    I also saw

    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP;

    Q3. What does the above do, and again, is it something you run on the primary side?

    I saw the following advice in the Data Guard Concepts & Administration manual:

    Configure DB_UNIQUE_NAME in RMAN for each database (primary and standby) so RMAN can connect remotely to it.

    Q4. Why would I want my primary to connect to the RMAN repository of the standby (I use the local controlfile)? Is this so that I can define RMAN configuration settings on the standby?

    Q5. Should I only work with the RMAN repository on the primary, or should I also do things in the RMAN repositories (i.e. controlfiles) of the standbys?

    Q6. If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. controlfile-based) while the standby is just mounted?

    Q7. Similarly, if I have a logical standby (i.e. effectively read-only), can I even connect to the standby's local RMAN repository?

    Q8. What is the most common way to schedule an RMAN backup in such an environment, e.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, as the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running.

    Any idea greatly appreciated,

    Jim

    Does the above only remove archived logs on the primary, or on both primary and standby?

    When you CONFIGURE a deletion policy, the configuration applies to all archivelog destinations,

    including the flash recovery area. BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG obey this configuration, as does the flash recovery area.

    You can also CONFIGURE an archived redo log deletion policy so that logs become eligible for deletion only after being applied to, or transferred to, standby database destinations.

    If the above deletes archived logs on the primary, does it really remove them immediately, or does it just let the FRA delete them when space is needed?

    It's a configuration; it will not erase anything by itself.

    If you want to use the FRA for automatic removal of archivelogs on a physical standby database, do this first:

    1. Make sure DB_RECOVERY_FILE_DEST points to the FRA - see the parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE.

    2. Set the RMAN deletion policy on both primary and standby - CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
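    Putting the two steps together, a sketch of the setup (the FRA location and size are placeholders):

    ```
    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    ```

    With this in place, applied archivelogs in the FRA become reclaimable and are deleted automatically under space pressure.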

    If you want to keep archives longer, you can control how long logs are kept by adjusting the size of the FRA.

    Great example:

    http://emrebaransel.blogspot.com/2009/03/delete-applied-archivelogs-on-standby.html

    Everything Oracle-related there is worth a peek:

    http://emrebaransel.blogspot.com/

    What does the above do, and again, is it something you run on the primary side?

    I would never use it. I have always set it this way:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    I would only use 'BACKED UP' on a system without Data Guard. Generally it tells Oracle how many times an archived log must be backed up before it is removed.

    Why would I want my primary to connect to the RMAN repository of the standby?

    Because if you fail over, you want to be able to back up there, too.

    Also, if it stays in the standby role for a while, you still want backups of it.

    Should I only work with the RMAN repository on the primary, or should I also do things in the RMAN repositories (i.e. controlfiles) of the standbys?

    Always use an RMAN catalog database with Data Guard.

    If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. controlfile-based) while the standby is just mounted?

    Same answer as Q5: use a catalog database.

    Similarly, if I have a logical standby (i.e. effectively read-only), can I even connect to the standby's local RMAN repository?

    Same answer as Q5: use a catalog database.

    What is the most common way to schedule an RMAN backup in such an environment, e.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, as the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running.

    I think cron is still the most common, but they all work. I like cron because even if the database has a problem, the job still runs and reports its results.

    Best regards

    mseberg

    Summary

    Always use an RMAN catalog database.

    Always use the FRA with RMAN.

    Always set the deletion policy to "APPLIED ON ALL STANDBY".

    With this configuration, DB_RECOVERY_FILE_DEST_SIZE determines how long archives are kept.

    Post edited by: mseberg

  • Transportable tablespaces and datafiles

    Hello

    I wonder if the following is possible.

    Using 11.2.0.3.

    A daily Data Pump export job exports an entire table to a dumpfile.

    Then old partitions are dropped, e.g. after 1 day.

    On the destination database, days later (imagine day 4), if needed, import the current dumpfile and the datafile of the affected tablespace, which will restore the dropped partition (i.e. the one from day 1).

    Thank you

    I'm a little confused as to what you're asking, but since you're on 11.2.0.3 you have transportable table mode available.  You can export a single partition of a table using transportable tablespace data movement.  It helps a lot if you create a new tablespace for each partition.  It also works if you do not, but then there is a lot of additional data you need to store.  Let me give you an example:

    table foo with:

    partition 1 - tablespace 1 - day 1

    partition 2 - tablespace 2 - day 2

    partition 3 - tablespace 3 - day 3

    partition 4 - tablespace 4 - day 4

    If you were to run this export:

    expdp username/password tables=foo:partition1 transportable=always dumpfile=foo_part1.dmp ...

    you would get a dumpfile with just partition1.  You must save both the dumpfile and the datafile in case you ever need to restore.  To restore, run a command like:

    impdp username/password@DB11G dumpfile=foo_part1.dmp ...

    It would import only the partition, and it would create a table called foo_partition1.  You can then use exchange partition to put the data back into the table where you need it restored.
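    The exchange-partition step mentioned above can be sketched like this (table and partition names are hypothetical):

    ```sql
    -- Swap the imported standalone table back into the partitioned table
    ALTER TABLE foo EXCHANGE PARTITION partition1 WITH TABLE foo_partition1;
    ```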

    If all of these partitions had been in the same tablespace, you could do the same thing for export and import, but you would need to keep a copy of the dumpfile along with a copy of the tablespace for every day.  Then:

    day 1 - dumpfile plus datafile with just day 1 data

    day 2 - dumpfile plus datafile with day 1 and day 2 data

    day 3 - dumpfile plus datafile with day 1, day 2, and day 3 data

    etc.

    The reason for this is that the import checks that the tablespace has the same characteristics as when it was exported.  It uses a checksum or something similar to verify this.  So if you added data on day 2, the datafile would no longer match and the import would fail.

    I hope this helps.

    Dean

  • Issue during the metadata import for transportable tablespaces

    Hello

    We are migrating one of our databases from Linux to AIX (test phase). We are using transportable tablespaces for this. All the steps are done and we are stuck at the metadata import. When importing metadata, we need the full names of the datafiles to be plugged in. There are 309 datafiles, and the names of all the datafiles must be on a single line. Now, if we try to write them on a single line using the vi editor, the editor gives the error 'ex: 0602-140'.

    Is there a way we can work around this?

    Kind regards

    Try the parfile option: create a file in the vi editor that contains the datafile option spread over 309 lines. This way you do not need to put them all on the same line.
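    As a sketch, the parfile can even be generated rather than typed; the directory and file names below are hypothetical stand-ins:

    ```shell
    # Build an impdp parfile with one TRANSPORT_DATAFILES entry per line,
    # so nothing has to fit on a single editor line.
    mkdir -p /tmp/tts_demo
    touch /tmp/tts_demo/users01.dbf /tmp/tts_demo/users02.dbf   # stand-ins for the 309 datafiles

    ls /tmp/tts_demo/*.dbf | sed "s|.*|TRANSPORT_DATAFILES='&'|" > /tmp/tts_import.par

    cat /tmp/tts_import.par
    # TRANSPORT_DATAFILES='/tmp/tts_demo/users01.dbf'
    # TRANSPORT_DATAFILES='/tmp/tts_demo/users02.dbf'
    ```

    The import is then run as `impdp ... parfile=/tmp/tts_import.par`.
    
    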

  • Reg: applying archived logs after moving datafiles

    Hi all

    On the primary server I moved the datafiles from the D: drive to the E: drive using the command line:
    C:\>Move <source_path> <destination_path>
    Will the redo generated by the move operation be applied on the standby database?
    Also, what happens if the datafiles are moved manually on the primary database (i.e. without using the command prompt)?


    Thank you
    Madhu

    See this doc. Search for the keywords "Rename a Datafile in the Primary Database":

    http://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ps.htm#i1034172

    Also, you need to update the primary database controlfile after any file move is made...
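    A sketch of that controlfile update on the primary, assuming a tablespace named USERS and hypothetical Windows paths:

    ```sql
    ALTER TABLESPACE users OFFLINE NORMAL;
    -- move the file at the OS level: Move D:\ORADATA\USERS01.DBF E:\ORADATA\USERS01.DBF
    ALTER TABLESPACE users RENAME DATAFILE 'D:\ORADATA\USERS01.DBF' TO 'E:\ORADATA\USERS01.DBF';
    ALTER TABLESPACE users ONLINE;
    ```

    Note that, per the linked doc, the rename is not propagated to the standby; the standby controlfile must be updated separately.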

    And please also close this thread:

    Reg: applying archived logs after moving datafiles

    as that helps keep the forum clean.

  • Huge archived log size

    I have 5 redo log files, each 4 GB. There are huge transaction volumes (1800-2000 per second), so we have 5-6 log switches every hour. That means archived logs grow by 20-24 GB every hour, about 480 GB per day.

    I had planned to take a level 1 incremental backup daily and a level 0 weekly, with the option to delete archived logs before 'sysdate-1'.

    But we have little storage, so it is impossible to keep 480 GB per day and to back up those 480 GB for 7 days ((480 * 7) GB ≈ 3.4 TB). How can I handle this situation? Please help.

    1. Disk is cheap.
    2. Push the issue back to whoever built the application.
    3. You can use LogMiner to check what is happening.

    You need to address this with the application developers and/or clients, and/or buy disk.
    There are no other solutions.
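    A minimal LogMiner sketch for point 3, assuming the online catalog as the dictionary and a hypothetical archived log name:

    ```sql
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/1_12345_678.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

    -- Which segments generate the most change records in this log?
    SELECT seg_owner, seg_name, COUNT(*) AS changes
    FROM   v$logmnr_contents
    GROUP BY seg_owner, seg_name
    ORDER BY changes DESC;

    EXECUTE DBMS_LOGMNR.END_LOGMNR;
    ```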

    -----------
    Sybrand Bakker
    Senior Oracle DBA

    Edited by: sybrand_b on July 16, 2012 11:11

  • Standby not synchronized after testing transportable tablespaces etc. on the primary

    This is a newly built environment, not yet in use.

    DB version is 11.2.0.3 on Linux, configured as a two-node primary/standby RAC, and storage is on ASM.

    The primary DB was tested with a data migration using the transportable tablespace method. The imported datafiles were put on a local filesystem, and I have to move them to ASM eventually.

    So the standby db thought the datafiles must be on the local filesystem, and it got stuck.

    Here is the error from the standby log:
    ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
    ORA-01110: data file 4: '/stage/index.341.785186191'

    and the datafile names on the standby are all on the local filesystem: /stage/.../.

    However on the primary, the datafile names are in ASM:
    +DAT/PRD/datafile/arindex.320.788067609

    How do I resolve this situation?

    Thank you

    user569151 wrote:
    so after dropping the standby db, I use the RMAN duplicate from active database command?

    Yes, after dropping the database:

    (1) start the standby in NOMOUNT
    (2) check the network configuration
    (3) make sure the password file exists
    (4) now run RMAN DUPLICATE from the active database.
    That should be fine.
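    A sketch of the rebuild, with hypothetical net service names:

    ```
    RMAN> CONNECT TARGET sys@prod_primary
    RMAN> CONNECT AUXILIARY sys@prod_standby
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
    ```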

  • Why use transportable tablespaces for a migration between platforms with the same endian format?

    RDBMS version: 10.2.0.4

    We intend to migrate our database to another platform. Both platforms are BIG endian. Searching on Google, I came across the following link:
    http://levipereira.files.wordpress.com/2011/01/oracle_generic_migration_version_1.pdf

    In this IBM paper, they migrate from Solaris 5.9 (SPARC) to AIX 6. Both are BIG endian. Since both have the same endian format, couldn't they use TRANSPORTABLE DATABASE? Why use RMAN CONVERT DATAFILE (transportable tablespace)?

    In this IBM paper, they migrate from Solaris 5.9 (SPARC) to AIX 6. Both are BIG endian. Since both have the same endian format, couldn't they use TRANSPORTABLE DATABASE? Why use RMAN CONVERT DATAFILE (transportable tablespace)?

    They do use transportable database - they do not import data into the dictionary and do not create users. Rather than converting all the datafiles, they used CONVERT DATAFILE only where needed (you only need to convert the undo + system tablespaces) - see MOS Note: Avoid Datafile Conversion during Transportable Database [ID 732053.1].

    Basic steps to convert a database:
    1. Check the prerequisites.
    2. Identify directories and external files with DBMS_TDB.CHECK_EXTERNAL.
    3. Shut down (consistently) and restart the source database in READ ONLY mode.
    4. Use DBMS_TDB.CHECK_DB to ensure the database is ready to be transported.
    5. Run the RMAN CONVERT DATABASE command.
    6. Copy the converted files to the target database. Note that this implies you will need 2x the storage on the source for the converted files.
    7. Copy the parameter file to the target database.
    8. Set up the configuration files as required (spfile, listener.ora, tnsnames.ora, etc.).
    9. Start the new database!
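    Step 4 can be sketched as below; the platform string must match a name from V$DB_TRANSPORTABLE_PLATFORM, and 'AIX-Based Systems (64-bit)' is an assumed example:

    ```sql
    SET SERVEROUTPUT ON
    DECLARE
      ok BOOLEAN;
    BEGIN
      -- Returns FALSE (and reports why) if the database cannot be transported
      ok := DBMS_TDB.CHECK_DB('AIX-Based Systems (64-bit)', DBMS_TDB.SKIP_NONE);
      DBMS_OUTPUT.PUT_LINE(CASE WHEN ok THEN 'ready to transport' ELSE 'not ready' END);
    END;
    /
    ```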

    All other details are:
    http://docs.oracle.com/cd/B19306_01/backup.102/b14191/dbxptrn.htm#CHDFHBFI

    Lukas

  • Migrating a partitioned table to a new environment with transportable tablespaces

    I created a table partitioned into 2 partitions (2010 & 2011) and used transportable tablespaces to migrate the data to a new environment. My question is: if I decide to add a partition (2012) in the future, can I simply transport the new partition along with its associated tablespace datafile, or do I have to transport all the partitions (2010, 2011, 2012)?

    user564785 wrote:
    I created a table partitioned into 2 partitions (2010 & 2011) and used transportable tablespaces to migrate the data to a new environment. My question is: if I decide to add a partition (2012) in the future, can I simply transport the new partition along with its associated tablespace datafile, or do I have to transport all the partitions (2010, 2011, 2012)?

    Yes, why not:
    (1) Create the 2012 partition data in a new tablespace on the source.
    (2) Transport the tablespace.
    (3) Add the partition to the existing partitioned table, or use exchange partition.

    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
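    Step 3 might look like this; table, partition, and staging-table names are hypothetical:

    ```sql
    -- After transporting the 2012 tablespace, attach its data to the table:
    ALTER TABLE sales ADD PARTITION p2012 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD'));
    ALTER TABLE sales EXCHANGE PARTITION p2012 WITH TABLE sales_2012 INCLUDING INDEXES;
    ```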

  • Log archiving

    Dear experts,

    I have to test the archived log size generated by each transaction. The following steps need to be done for this test:

    1. Run bulk DML scripts.
    2. Find the size of the archived log files created in the archive location.

    Based on that, we have to report how much archived redo a particular transaction generated. Can you please provide a script to do this? Thanks in advance.

    Oragg wrote:
    Dear experts,

    I have to test the archived log size generated by each transaction. The following steps need to be done for this test:

    1. Run bulk DML scripts.
    2. Find the size of the archived log files created in the archive location.

    Based on that, we have to report how much archived redo a particular transaction generated. Can you please provide a script to do this? Thanks in advance.

    Let's assume that you have loaded data over 1 day or 1 hour; then:

    alter session set nls_date_format = 'YYYY-MM-DD HH24';

    select trunc(completion_time, 'HH24') time,
           sum(blocks * block_size)/1024/1024 size_mb
    from   v$archived_log
    group by trunc(completion_time, 'HH24')
    order by 1;
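    To attribute redo to a single transaction rather than to whole archived logs, an alternative sketch is to snapshot the session's 'redo size' statistic before and after the bulk DML:

    ```sql
    -- Redo (in bytes) generated so far by the current session
    SELECT s.value
    FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name = 'redo size';

    -- run the bulk DML here, then re-run the query;
    -- the difference between the two values is the redo that DML generated
    ```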
    
  • ORA-00439: feature not enabled: export transportable tablespaces

    Hi all

    I got the following error while using the transportable tablespace feature. My OS is Windows XP and the Oracle version is 10.2.0.1.0.

    ORA-00439: feature not enabled: export transportable tablespaces

    What is the edition of your db? Is it Enterprise Edition or Standard Edition? Transportable tablespace export is an Enterprise Edition-only feature.

    http://www.oracle.com/us/products/database/product-editions-066501.html

    HTH
    Aman...
