Migrating data files to a new location

Due to a lack of space that may occur in the next few months, I have to move oradata (datafiles) to a subsystem (partition) on a different disk. Is there a manual on how to do it correctly?

Thanks for all the answers!

Published by: 40 on November 10, 2008 15:30

You can follow this Oracle doc to move your data files.

Procedure to rename datafiles in a single tablespace

http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/dfiles.htm#sthref1386
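
In outline, the documented approach is: take the tablespace offline, move the file at the operating system level, rename it inside the database, and bring the tablespace back online. A minimal sketch, assuming a tablespace called USERS and placeholder paths:

    ALTER TABLESPACE users OFFLINE NORMAL;
    -- move the datafile at the OS level, e.g.:
    -- $ mv /u01/oradata/orcl/users01.dbf /u02/oradata/orcl/users01.dbf
    ALTER TABLESPACE users RENAME DATAFILE
      '/u01/oradata/orcl/users01.dbf' TO '/u02/oradata/orcl/users01.dbf';
    ALTER TABLESPACE users ONLINE;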

Tags: Database

Similar Questions

  • Full export and import and the data file location

    Hi guys,

    can someone please confirm...

    If I export the entire database and my tablespace X uses the datafile loc1/test1.dbf, and in the database into which I now import, the datafile is at loc2/test1.dbf, it doesn't make any difference and the data gets imported correctly, yes?


    No, it makes no difference, and the data gets imported correctly.
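
    A hedged sketch of that scenario with Data Pump (the directory object DP_DIR and the credentials are placeholders); because tablespace X already exists in the target database with its datafile under loc2, the differing path plays no role in the import:

    -- on the source host
    expdp system/*** full=y directory=DP_DIR dumpfile=full.dmp logfile=full_exp.log
    -- on the target host (tablespace X already exists there, with its datafile under loc2)
    impdp system/*** full=y directory=DP_DIR dumpfile=full.dmp logfile=full_imp.log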

  • Data file not shipped from the primary database to the standby database

    Hello

    (1) Created a new user
    (2) Created a new tablespace and added a datafile on the primary database
    (3) Set that tablespace as the user's default

    But so far, this new data file has not been shipped to the standby database.

    Version of database - 10.2.0.3
    OS - Windows

    Settings on primary and standby:

    SQL> show parameter standby_file_management;

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    standby_file_management              string      AUTO

    Both have the same file system; the only difference is the case of the path.

    Primary data file location - D:\testdata

    Standby data file location - D:\TESTDATA

    Help, please.

    Is the MRP process running on the standby database?

    If it is, then try a log switch on the primary database and share your result.
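
    A minimal sketch of those checks (standard dictionary views, run as SYSDBA):

    -- on the standby: is MRP running, and which sequence is it applying?
    SELECT process, status, sequence# FROM v$managed_standby;
    -- on the standby: confirm new datafiles will be created automatically
    SHOW PARAMETER standby_file_management
    -- on the primary: force a log switch so the redo for the new datafile ships
    ALTER SYSTEM SWITCH LOGFILE;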

  • I'm trying to migrate data from one server to a new one, but the file permissions of users' files and folders are lost.

    original title: robocopy

    I'm trying to migrate data from one server to a new one, but the file permissions of users' files and folders are being lost. So far, this is what I did: I used robocopy \\server1\share \\server2\share /sec /mir and robocopy \\server1\share \\server2\share /e /s /copyall. It seems they copied all the files with the users' permissions on the files, but not on the folders. For example, if a user makes a folder with files in it, the files appear to have the appropriate permissions, but the root folder and subfolders do not. How can I fix this, and what is the difference between /s /mir and /e /s /copyall?

    Hello

    The Server forums are supported on TechNet; please create a new post at the following link:

    http://social.technet.Microsoft.com/forums/en/category/WindowsServer/
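
    As background on the flags (a hedged note, not taken from the reply above): /MIR is /E plus /PURGE and, like /S, copies only data, attributes and timestamps by default; /SEC is shorthand for /COPY:DATS and /COPYALL for /COPY:DATSOU (adding security, owner and auditing information). One way to re-apply NTFS security to a tree that has already been copied is the /SECFIX switch, sketched here with the paths from the question:

    rem re-apply security information across an already-copied tree
    robocopy \\server1\share \\server2\share /E /COPYALL /SECFIX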

  • Trying to restore the data file, get error "you don't have permission to save in this location" in Vista

    I am new to Windows Vista, but relatively competent with older versions. I installed the software on my new desktop computer. I am the only user on this computer and I have administrative rights. I installed the software in the Program Files directory. In the software, there is a folder for the working data file. I am now trying to 'restore' my data file onto this new computer so that I can start working. I get an error message that says "you don't have permission to save in this location. Contact an administrator." I have looked at similar questions on this site and followed the advice to take ownership of the folder, without success. I also created a new folder on the D drive, thinking maybe Vista just wanted me to keep C clean, but it gives me the same message. I can't save to the folder I created. Any advice?

    Program Files is a protected directory. If possible, create the working folder for the data in your user account instead. Is it possible in the program's options to point the program at a directory in your user account? Or, if you think any other user might need to use this program, you could create the directory under Public. MS - MVP - Elephant Boy Computers - Don't Panic!

  • How to store the ASO .dat file in a different location

    Hi all

    I have a requirement where I need to store the .dat (ASO) file in a location other than the folder where the application artifacts are stored. In BSO, we can right-click the database, choose Edit > Properties, and on the Storage tab choose where the data and index files must be stored. Do we have a similar option for ASO too? Because the space we have for the application files is limited, and we have a separate SAN drive for storing the data and index files, which is where the BSO .pag files are stored. Since our ASO cube is getting bigger, I want to move its data to the same location as the BSO data, I mean to the drive designated for data. Any suggestion will be highly appreciated.

    Thank you.

    Yes, ASO has the notion of 'tablespaces'. There are four (default, temp, log and metadata), but you probably just want to move the 'default' and 'temp' tablespaces to your SAN location. Temp will look really small (i.e. empty) until you run a restructure or an aggregation, then it will blow up to the size of the default tablespace, so it is important not to miss it.

    See the section "Managing storage for ASO databases" here: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/asysadmn.html, or the MaxL "alter tablespace" command here: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/maxl_alttabl.html.

    EDIT: Actually, looking at the documentation, I see that you can't move the metadata or log tablespaces. But you're probably only going to worry about 'default' and 'temp' anyway.
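
    A hedged MaxL sketch of what that might look like (the application name ASOApp and the SAN paths are placeholders; check the "alter tablespace" reference linked above for the exact clauses your version supports):

    /* add the SAN directory as a file location for the default and temp tablespaces */
    alter tablespace ASOApp.'default' add file_location '/san/essbase/asoapp/data';
    alter tablespace ASOApp.'temp' add file_location '/san/essbase/asoapp/temp';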

  • Getting the former locations of the data files and redo logs

    Version: 11.2
    Platform: Solaris 10

    When we manage hundreds of DBs, we do not know the file locations for all of these DBs. Say a DB goes down and you have all the required RMAN backups.

    When you restore the DB to a new path on a new server, you must run the commands below for the data files and ORLs. But how do we know:

    A. the old location of the data files

    B. the old location of the online redo logs, so that I can run:

    run
    alter database rename file 'oldPath_of_OnlineRedoLogs' to 'newPath_of_OnlineRedoLogs';  -- without this command, the restored control file will still reflect the old locations
    run {
    set newname for datafile 1 to '/u04/oradata/lmnprod/lmnprod_system01.dbf' ;
    set newname for datafile 2 to '/u04/oradata/lmnprod/lmnprod_sysaux01.dbf' ;
    set newname for datafile 3 to '/u04/oradata/lmnprod/lmnprod_undotbs101.dbf' ;
    set newname for datafile 4 to '/u04/oradata/lmnprod/lmnprod_audit_ts01.dbf' ;
    set newname for datafile 5 to '/u04/oradata/lmnprod/lmnprod_quest_ts01.dbf' ;
    set newname for datafile 6 to '/u04/oradata/lmnprod/lmnprod_yelxr_ts01.dbf' ;
    .
    .
    .
    .
    .
    }

    Hello

    With Oracle 11.2, you can use the 'SET NEWNAME FOR DATABASE' feature together with OMF.

    SET NEWNAME FOR DATABASE TO '/oradata/%U';
    RESTORE DATABASE;
    SWITCH DATAFILE ALL;
    SWITCH TEMPFILE ALL;
    RECOVER DATABASE;
    

    After the database restore and recover (i.e. before OPEN RESETLOGS) you can rename the redo logs. Just query the MEMBER column from v$logfile and issue: alter database rename file 'oldPath_of_OnlineRedoLogs' to 'newPath_of_OnlineRedoLogs';
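
    A minimal sketch of that step, with placeholder paths:

    -- list the (old) online redo log members recorded in the restored control file
    SELECT member FROM v$logfile;
    -- for each member, before OPEN RESETLOGS:
    ALTER DATABASE RENAME FILE '/old/path/redo01.log' TO '/new/path/redo01.log';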

    When we use ASM it is much easier to use OMF, because Oracle automatically creates the directory structure.
    But when we use a file system, OMF is often not used, because DBAs dislike system-generated file names on a file system.

    If you don't want OMF on a file system, you can use the script in the thread below to help you restore using readable names for datafiles, tempfiles, and redo logs.

    {message: id = 9866752}

    Kind regards
    Levi Pereira

  • Migration - MySQL to Oracle - data files not generated

    Hello

    I have been using the SQL Developer migration assistant to move data from a MySQL 5 database to an Oracle 11g server.
    I have used it successfully a couple of times and it has all worked.

    However, I am currently having a problem whereby no offline data file is generated. The control files and all the other scripts are generated... just no data file.
    It worked before, so I'm a bit puzzled as to why it no longer works.

    I looked at the migration information logs and there are no errors shown - datamove is marked as success.
    I tried deleting and recreating the migration repository and checked all grants and privs.

    If there were an error message it would be something to go on, but I have tried several times and checked everything I can think of.

    I also tried the command-line migration approach... same thing. Everything runs fine... no errors... but only the table creation and control script files are generated.
    The source schema is very simple and there are only tables to migrate... no procedures or anything else.

    Can anyone suggest anything?

    Thank you very much
    Mike

    Hi Mike,

    Just so I'm clear:
    You are using SQL Developer 3.0?
    You walked through the migration wizard and chose offline mode for the data move.
    The DDL generation files are created, as are the scripts to move the data.
    But no data (DAT) file is created and no data has been loaded into the Oracle target tables.

    With an offline data move, SQL Developer generates (saved in your project, under the DataMove directory) 2 sets of scripts:
    (a) a set of scripts to unload the data from MySQL into DAT files;
    (b) a set of scripts to load the data from the DAT files into the Oracle target tables.
    These scripts must be run by hand, specifying the connection details for the source MySQL and target Oracle databases.

    "no offline data file generated. Control files and all other scripts generated don't... just no data file. »
    «.. . but only the creation and control file table scripts are generated. »

    Do you mean:
    (1) the DAT files are not generated automatically? They should not be; you need to run the scripts yourself.
    (2) after manually running the scripts, the DAT files are not present, or the DAT files are present but the data does not load into the Oracle tables?
    (3) the scripts to move the data in offline mode do not get generated?

    Kind regards
    Dermot
    SQL development team.

  • Accidentally created a data file with the same name but a different location

    Hi all

    I accidentally created a data file in a tablespace with the same name as an existing one but in another location.

    Existing data file: /u03/datafile/REKON65.DBF

    New data file: /u04/datafile/REKON65.DBF

    My question: what happens to the file /u03/datafile/REKON65.DBF? Is the data inside /u03/datafile/REKON65.DBF gone?


    Thank you

    Indra says:
    Hi all

    I accidentally created a data file in a tablespace with the same name as an existing one but in another location.

    Existing data file: /u03/datafile/REKON65.DBF

    New data file: /u04/datafile/REKON65.DBF

    My question: what happens to the file /u03/datafile/REKON65.DBF?

    nothing

    Is the data inside /u03/datafile/REKON65.DBF gone?

    not gone
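
    If the accidentally added file is still empty, a hedged sketch of how to remove it (10gR2 and later; the tablespace name REKON is a placeholder):

    -- drops the empty datafile added by mistake; it fails if the file already contains data
    ALTER TABLESPACE rekon DROP DATAFILE '/u04/datafile/REKON65.DBF';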

  • Error after moving the temporary data file to another location

    Hello

    I am doing some large tests and for that I created the tablespaces on my external hard drive. I moved the temporary data file to the new location on that drive by following the procedure below.
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  313860096 bytes
    Fixed Size                  1332892 bytes
    Variable Size             218106212 bytes
    Database Buffers           88080384 bytes
    Redo Buffers                6340608 bytes
    Database mounted.
    SQL> ALTER DATABASE RENAME FILE 'D:\app\Administrator\oradata\orcl\TEMP01.DBF' TO 'G:\KAM\TEMP01.DBF';
    
    Database altered.
    
    SQL> ALTER DATABASE OPEN;
    
    Database altered.
    I have shut down the DB 2-3 times, and even restarted the computer, but when I run the query:

    SQL> select * from dba_data_files;

    it does not show the temporary data file at its new location.

    In any case, I ran my query but got the following error.

    ERROR at line 13:
    ORA-01115: IO error reading block from file 15 (block # 2507160)
    ORA-27070: async read/write failed
    OSD-04016: Error queuing an asynchronous I/O request.
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    ORA-01115: IO error reading block from file 15 (block # 2507160)
    ORA-27070: async read/write failed
    OSD-04016: Error queuing an asynchronous I/O request.
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    I'll appreciate any suggestions.

    Thanks and regards

    Oracle 11g (11.1.0.6.0), Windows XP, NOARCHIVELOG mode

    Shutting down your database is unnecessary. This operation can be done with the instance open.

    Given the steps below:

    CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u01/oraindx/temp01.dbf' SIZE 5M;

    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

    DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

    CREATE TEMPORARY TABLESPACE temp TEMPFILE '/u01/oradata/DB01/temp01.dbf' SIZE 1024M;

    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

    DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
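
    As a follow-up check (a minimal sketch): temp files are listed in dba_temp_files / v$tempfile rather than dba_data_files, which is why the query in the question showed nothing:

    -- verify the tempfile now points at the new location
    SELECT file_name, tablespace_name, bytes FROM dba_temp_files;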

    Orawiss

  • Moving the ORACLE_HOME, data files, control files and redo log files to a new location

    Version: 10.2.0.1.0

    The location (ORACLE_HOME) of one of our test DBs was wrongly installed under X11R6. We have used it for a year now. Now we are planning to move the ORACLE_HOME to the new location /u02. Due to another disk maintenance activity, we will move all data files, redo log files, control files and tempfiles to a different location as well!

    This database is not in ARCHIVELOG mode (luckily).

    If I do a new install of 10.2.0.1.0 in /u02, I can't use the system01.dbf, sysaux01.dbf, undotbs01.dbf files from the old installation for this new installation. Right?

    What approach should I take to do this?

    Hello

    1. Take a cold backup
    2. Restore it to the new location
    2.1 Change all the file locations in init.ora to the new location
    2.2 STARTUP NOMOUNT
    2.3 ALTER DATABASE MOUNT
    2.4 Change the location of each datafile and redo log file (ALTER DATABASE RENAME FILE), as in the sketch below
    2.5 ALTER DATABASE OPEN;

    3. Take a cold backup (optional)
    4. Put the DB in ARCHIVELOG mode (required)
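
    A minimal sketch of steps 2.2-2.5, assuming the cold-backup copies are already in place under /u02/oradata/DB01 (paths are placeholders) and that the control_files parameter in init.ora already points at the new location:

    STARTUP NOMOUNT
    ALTER DATABASE MOUNT;
    ALTER DATABASE RENAME FILE '/old/path/oradata/DB01/system01.dbf'
                            TO '/u02/oradata/DB01/system01.dbf';
    -- repeat the RENAME for every datafile, tempfile and online redo log member
    ALTER DATABASE OPEN;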

    Kind regards
    Taj

  • Data file location for the File/FTP adapter

    Hello

    When I create a BPEL process that uses the File/FTP adapter to read the data in a sample data file "test.xml", my process has mainly two nodes:
    a. the File/FTP adapter reading the data file "test.xml";
    b. a receive node to read the data from the File/FTP adapter.

    I find that I am able to 'make' (compile) the BPEL process.

    When I try to deploy, I get one of two possible outcomes, depending on the location of the data file "test.xml".
    Working scenario:
    If I place the file "test.xml" under the Application Server home, the BPEL process extracts the data successfully.

    Failure scenario:
    If I place "test.xml" in any folder other than the one under the Application Server home, deployment fails with the following error message:

    build.xml:79 <filelocation>: there was a problem connecting to the server "servername" using port "portnumber": bpel_bpelprocessname_1.0.jar failed to deploy. Exception message is: ORABPEL-09903
    Could not initialize activation agent.
    An error occurred while initializing an activation agent for process 'bpelprocessname', version "1.0".
    Please ensure that the activation agents are configured correctly in the BPEL deployment descriptor (bpel.xml).
    oracle.tip.adapter.fw.agent.jca.JCAActivationAgent: java.lang.reflect.InvocationTargetException
    at com.collaxa.cube.engine.core.BaseCubeProcess.startAllActivationAgents(BaseCubeProcess.java:370)
    at com.collaxa.cube.engine.deployment.DeploymentManager.activateDefaultRevision(DeploymentManager.java:1577)
    at com.collaxa.cube.engine.deployment.DeploymentManager.setDefaultRevision(DeploymentManager.java:1536)
    at com.collaxa.cube.engine.deployment.DeploymentManager.deployProcess(DeploymentManager.java:886)
    at com.collaxa.cube.engine.deployment.DeploymentManager.deploySuitcase(DeploymentManager.java:728)
    at com.collaxa.cube.ejb.impl.BPELDomainManagerBean.deploySuitcase(BPELDomainManagerBean.java:445)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
    at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
    at com.evermind.server.ejb.interceptor.system.JAASInterceptor$1.run(JAASInterceptor.java:31)
    at com.evermind.server.ThreadState.runAs(ThreadState.java:646)
    at com.evermind.server.ejb.interceptor.system.JAASInterceptor.invoke(JAASInterceptor.java:34)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
    at com.evermind.server.ejb.interceptor.system.TxRequiredInterceptor.invoke(TxRequiredInterceptor.java:50)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
    at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
    at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
    at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
    at DomainManagerBean_RemoteProxy_4bin6i8.deploySuitcase(Unknown Source)
    at com.oracle.bpel.client.BPELDomainHandle.deploySuitcase(BPELDomainHandle.java:319)
    at com.oracle.bpel.client.BPELDomainHandle.deployProcess(BPELDomainHandle.java:341)
    at deployHttpClientProcess.jspService(_deployHttpClientProcess.java:376)
    at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
    at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:462)
    at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:594)
    at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:518)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:65)
    at oracle.security.jazn.oc4j.JAZNFilter$1.run(JAZNFilter.java:396)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAsPrivileged(Subject.java:517)
    at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:410)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:623)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:370)
    at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:871)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:453)
    at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:302)
    at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:190)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:595)


    I am interested to know:
    "Is there a way through which I can successfully deploy my BPEL process with my data file that is placed outside the Application Server home folder?"

    Thank you
    Shakur

    Absolutely.

    Which adapter are you actually using, FTP or File?

    I'll assume File.

    When you create your BPEL process you can point it at either a logical or a physical directory. Logical is the better option because it allows you to change it on the fly. It's a little harder because you need to edit bpel.xml with the directory before you deploy.

    You must ensure that the user who starts the SOA Suite has full rights to this directory, and that the case of the path matches the directory. I would also use directories that don't have spaces or special characters.

    cheers
    James

  • Restore a specific data file from a full backup to a new location

    Hello

    I have a test instance (10.2.0.4). I took a full backup of the database and then dropped a data file.

    Now, I have created an auxiliary instance where I restored the controlfile from the full backup using:

    RMAN> run {
      set until time "to_date('2011-03-16:15:09:00','yyyy-mm-dd:hh24:mi:ss')";
      restore controlfile to '/home/rtest/aux/control01.ctl';
    }

    Now, what is the procedure to restore the dropped data file to '/home/rtest/aux/***.dbf'?

    Help, please.

    Thanks in advance.

    Kind regards.

    You can 'transport' the tablespace containing the datafile after completing the point-in-time recovery on the auxiliary.

    All data in the tablespace must be self-contained - for example, you can't have FKs referencing tables in other tablespaces.

    See http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#sthref1281
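
    A hedged sketch of the point-in-time restore on the auxiliary before the transport (datafile numbers and paths are placeholders; on 10.2, SET NEWNAME has to be repeated for every datafile):

    RMAN> run {
      set until time "to_date('2011-03-16:15:09:00','yyyy-mm-dd:hh24:mi:ss')";
      set newname for datafile 1 to '/home/rtest/aux/system01.dbf';
      set newname for datafile 2 to '/home/rtest/aux/undotbs01.dbf';
      # ...one SET NEWNAME per datafile...
      restore database;
      switch datafile all;
      recover database;
    }
    RMAN> sql "alter database open resetlogs";

    Then make the tablespace read only and export it with expdp transport_tablespaces=<ts_name>, as described in the documentation link above.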

    Hemant K Collette

  • XTTS - problem copying the data file into ASM

    I am testing a database migration from AIX to Linux using cross-platform transportable tablespaces.
    DB version: Source: 10.2.0.4
    Destination: 10.2.0.5
    OS version: Source: AIX 6.1 - AIX-Based Systems (64-bit)
    Destination: Red Hat Linux - Linux x86 64-bit

    I ran the commands below before copying the data file to the destination.
    EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TBLSP1,TBLSP2', TRUE);
    
    SELECT * FROM TRANSPORT_SET_VIOLATIONS;
    
    no rows selected
    
    alter system archive log current;
    
    alter tablespace TBLSP1 read only;
    
    alter tablespace TBLSP2 read only;
    
    expdp DUMPFILE=xtts_exp.dmp DIRECTORY=DUMP_DIR logfile=xtts_exp.log TRANSPORT_TABLESPACES=TBLSP1,TBLSP2
    
    CONVERT TABLESPACE TBLSP1,TBLSP2 
    TO PLATFORM 'Linux x86 64-bit'
    FORMAT '/dataimport/%U';
    /dataimport is a shared file system mounted on the Linux server, and I am able to see the data file there. But I receive the error below when trying to copy the data file on the Linux server. Could someone let me know if I missed something / how to fix this error?
    $rman target /
    
    RMAN> copy datafile '/dataimport/data_D-DBMGRT_I-3320277811_TS-TBLSP1_FNO-26_05m8miia' to '+DATA';
    Starting backup at 05-APR-11
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 04/05/2011 10:19:44
    RMAN-20201: datafile not found in the recovery catalog
    RMAN-06010: error while looking up datafile: /dataimport/data_D-DBMGRT_I-3320277811_TS-TBLSP1_FNO-26_05m8miia

    To be honest, I can't really see what you have done from the OP, so I can't really comment on it. I see you have a TTS export, but that's all.

    If you have followed the whole list I mentioned (did you set the TTS tablespaces back to read write?), then you can try to back up the file with RMAN, but I suspect that it will not work. Do you get the same error if you move the file from the shared location (/dataimport) to a local directory on the target side? Maybe it doesn't like the fact that it's an NFS mount (or whatever the share is).

    Or maybe it's because you are going from 10.2.0.4 to 10.2.0.5. I have always kept the same patch set, and upgraded only once I had plugged the tablespaces into the target database.
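
    One approach often used to get a converted file into an ASM disk group, shown here only as a hedged sketch (it is not what the reply above suggests; the platform name must match your source, and the file name is taken from the question), is to run RMAN CONVERT DATAFILE on the destination and write straight into the disk group:

    RMAN> CONVERT DATAFILE '/dataimport/data_D-DBMGRT_I-3320277811_TS-TBLSP1_FNO-26_05m8miia'
          FROM PLATFORM 'AIX-Based Systems (64-bit)'
          FORMAT '+DATA';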

  • When I back up my data files, I want to include my bookmarks. Which folders and/or files should I include in my backup routine?

    Running Windows XP, using "ntbackup" to do the job. All software is located on C:; all my data files are on a second physical drive, E:.

    Firefox stores the bookmarks and the browser history in places.sqlite; it no longer uses an HTML file by default, but it can create an HTML backup of the bookmarks.

    There are 10 daily rotating JSON backups in the bookmarkbackups folder inside the Firefox profile folder, one for each day on which Firefox is started.

    You can make Firefox create an automatic HTML backup (bookmarks.html) when you exit Firefox if you set the pref browser.bookmarks.autoExportHTML to true on the about:config page, which you can open via the address bar.
