A fundamental question about archiving

Versions: 10.2, 11.1, 11.2
Platforms: Unix, Unix-like

This isn't a question specific to Dataguard, but I thought this forum would be the best place to get a response from the data experts here.

I have a DB in archive log mode. It has 3 redo log files (file1, file2, file3):
file1

 |----------------> file1 got full, LGWR starts writing to file2
 |
 V

file2

 |----------------> file2 got full, LGWR starts writing to file3
 |
 V

file3
 |----------------> file3 got full, LGWR starts the new cycle and starts writing to file1
Will the archived log for file1 be created as soon as LGWR begins writing to file2, or only when the new cycle starts, i.e. when the last redo log in the group (file3 in the example above) gets full and LGWR begins writing to file1 again?

Hello

What you say is correct; it is cyclical: once file3 is archived, the cycle goes back to file1 again.
One more point: the filled redo log (file1) is handed to the archiver as soon as the log switch occurs, that is, as soon as LGWR starts writing to file2; LGWR cannot reuse file1 on the next cycle until it has been archived.
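
A quick way to watch this on your own database is to query V$LOG around a log switch; a minimal sketch, assuming ARCHIVELOG mode and a SYSDBA connection:

-- show each online redo log group, which one is CURRENT,
-- and whether it has already been archived (ARCHIVED = YES/NO)
SELECT group#, sequence#, status, archived
FROM   v$log
ORDER  BY group#;

-- force a log switch and re-run the query: the group that was CURRENT
-- should show ARCHIVED = YES shortly afterwards, while LGWR is already
-- writing to the next group
ALTER SYSTEM SWITCH LOGFILE;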

How Oracle Database Writes to the Redo Log

The redo log of a database consists of two or more redo log files. The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if the database is in ARCHIVELOG mode). For more information, see "Managing Archived Redo Logs".

LGWR writes to redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again. The figure in the linked documentation illustrates this circular writing of the redo log files; the numbers next to each line indicate the sequence in which LGWR writes to each redo log file.

I'll mention just one link; there you can see an image of how the redo log cycle works.

http://docs.oracle.com/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006187

Thank you.

Tags: Database

Similar Questions

  • Question about archiving of projects

    I have about 60-70 projects on my work drive and plan to archive them this winter during our downtime. I looked at a few tutorials on how to use the Project Manager to consolidate a project, but I was wondering about one thing. Almost all of my projects have AE comps in them that use images, footage, etc. from other places on the hard drive. Will it actually pull those files into the archive too?

    It will catch most footage elements - video, audio, photos, etc. - but it will not trim the vast majority of them (sequences, specifically). It will create a pretty big backed-up media folder - you may have to clean it up a little later. AE comps don't come along - well, they remain in the Premiere Pro project, but the AE projects themselves are not copied. I just copy those manually and then relink them - kind of a pain.

    I recommend doing a test at your convenience to determine a good workflow. Personally, I gave up on the Project Manager long ago, especially since hard drives are pretty cheap these days - I just ordered 4 x 2 TB drives and a RAID 5 unit for about 6 TB of storage for $400 - so I find it easier to copy everything over wholesale. I try to be mindful on the front end about what I put into the project and where the media actually lives.

  • X121e fundamental questions about the i3

    Hi, I'm on the verge of ordering the i3 model of the ThinkPad x121e, but I saw that it is only 1.3 GHz and I'm worried it may not be powerful enough for Dreamweaver. Would that be the case?

    I'm really going to use it as a standard netbook, but I need Dreamweaver on there for emergency situations and maybe Corel PaintShop Pro 12 for web stuff.

    Adobe's technical specifications are unfortunately vague about CPUs, so I wanted to know if you'd had any luck with it, or whether it's better to get the AMD E-350.

    Hello, newbie on the forums here. I just got my x121e last week myself (AMD Fusion, 4 GB RAM model).

    I have Adobe Web Premium Suite 5.5 installed on it and I can't complain about the performance. The only time I ran into a 'problem' is when doing something particularly demanding, such as massive filters in Photoshop.

    Live editing mode in Dreamweaver is not that useful - it's a little too slow - but I found that viewing the same content in Firefox on its own works perfectly.

    I would say that if you're not expecting miracles - light editing or smaller projects - the x121e should be fine. Maybe not if you build very complex websites.

    My original idea for the x121e was to bring it to seminars and training sessions (Dreamweaver, among others), and it hasn't let me down so far.

    I can't comment on the difference between the i3 and the E-350 processors, but considering the price and convenience of the x121e I think it's well worth having as a netbook. AFAIK the i3 is faster, but I didn't want to pay the premium for the Intel chip, and so far I am happy with the choice.

    Hope it helps.

  • 10.2.0.4 to 11.2.0.1 upgrade; a fundamental question about ORACLE_HOME

    Operating system: Solaris 5.10

    I have only ever applied patch sets before. This is the first time I am doing an upgrade from one version to another.

    This is what I gathered from Doc ID 429825.1.

    To upgrade a 10.2.0.4 DB to 11.2.0.1

    Step 1. Install 11.2.0.1 (software only) into a separate home

    If my current 10.2.0.4 ORACLE_HOME is /u01/app/oracle/product/10.2.0/db
    then install 11.2.0.1 software in /u01/app/oracle/product/11.2.0/db

    Step 2. Run utlu112i.sql and resolve all the problems mentioned in its output

    Step 3. Shut down the 10g DB

    Step 4. Copy the new init.ora file with the new 11g settings into the 11.2 ORACLE_HOME/dbs

    Step 6. Set ORACLE_HOME to the new 11.2 ORACLE_HOME

    Step 7. Set ORACLE_SID to the 10g DB and start it in UPGRADE mode
    startup upgrade pfile=11g_R2HOME/dbs/init.ora
    Step 8. Run 11gR2_home/rdbms/admin/catupgrd.sql


    My question concerns Step 6. How can you start a 10.2.0.4 DB after you have set ORACLE_HOME to the 11.2 home?

    The previous poster gave you exactly the right recommendations.
    Regarding your question about Step 6 (or maybe you meant 7), the answer is that you start up using 'upgrade', not normal mode. You are starting the 'old' version of the database under the new ORACLE_HOME/binaries and running the special scripts (catupgrd.sql) to upgrade the database itself. In other words, STARTUP UPGRADE is not a normal startup; it opens the database in a special mode.
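
    As a rough illustration, the switch looks something like this (a minimal sketch only; the home path is the one from the steps above, and the shell commands are shown as comments because they depend on your environment):

    -- in the shell, point the environment at the new 11.2 home first, e.g.
    --   export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db
    --   export ORACLE_SID=<your existing 10.2.0.4 SID>
    --   export PATH=$ORACLE_HOME/bin:$PATH
    -- then start SQL*Plus from that home:
    CONNECT / AS SYSDBA
    STARTUP UPGRADE PFILE=/u01/app/oracle/product/11.2.0/db/dbs/init.ora
    @?/rdbms/admin/catupgrd.sql
    -- when catupgrd.sql finishes, restart the database normally and run
    -- utlrp.sql to recompile invalid objects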

  • (I hope) a fundamental question about the buttons on the Flash site

    I'm new to Flash & ActionScript and learning as I create a basic site.  My question is: I have a home page on the site with navigation buttons, but on all the other pages of the site the buttons need to be in a different position - is it possible to have the buttons move once the user leaves the home page? Or would it make more sense to create a new set of buttons for the rest of the site?  If the latter is the better option, how do I get rid of the original set once the user leaves the home page so they are not sitting there in the wrong place?

    I use a Macbook Pro OSX 10.6.6 and Flash Professional CS4.

    Also learning through a Lynda.com membership if it's useful to know.

    Thanks for the tips!

    DisplayObjects can change from one keyframe to another.  You can change their positions in the authoring environment or with ActionScript.

  • Hello. A fundamental question about secure PDF

    How can I view an attachment placed in a secured PDF on iOS and OS X Apple devices?

    Thank you.

    Hello Ronen,

    You can find the attachment icon at the bottom right on your iOS device.

    In addition, on Mac, you can locate the attachment icon in the left sidebar of Acrobat / Reader DC.

    Kind regards
    Rahul

  • Fundamental questions about the behavior of tables

    I have a table formatted with four fixed-width columns, say 20, 100, 200 and 80. When manipulating this table graphically I can sometimes add and delete rows just fine, but then, unexpectedly, deleting a row causes all the columns to jump to equal widths: 100, 100, 100, 100. It almost always happens if I delete the last row; it never happens if I delete it in the code. Any help would be greatly appreciated.

    Using CS6 in a Windows 7 environment.

    So I can't be too harsh with them; I built a company around Adobe products over 15 years.  What I can say is that to use them effectively, you really have to know your basics.  This is especially true with DW.  If you approach it with the mentality that you're using it as a tool and you want to learn the tricks of the trade through and through, then you learn that there are things you do and things you don't do, even if you COULD do them.  Complex tables are one of those things you USED to have to learn in a hurry.  Today, I guess a lot of people get into the swing of things without ever going through the table-layout stage.  But I learned it properly, and I still believe that tables can be a powerful tool in your arsenal that you pull out as needed but keep well oiled just in case.  I may only need tables once every 100 pages, but when I need them they are very handy to have around.

    That said, I think that KISS is a good approach for you here.

  • Various Questions about wireless access controller

    Please help me with these fundamental questions about the role of the wireless access controller (AC).

    Assume that the access controller and Access Point are connected via IP:

    - Do wireless frames sent from the AP to the AC include the original MAC header (as used on the wireless link)?  If yes, does a Cisco AC bridge between the WLAN and the LAN it is plugged into (meaning that it outputs Ethernet frames as if they had been sent by the mobile stations)?

    - Is the AC necessarily the default gateway for the mobile stations? I guess not. But can it be the default gateway?

    - Can the Cisco AC function as a DHCP relay?

    The AP creates a tunnel to the controller. All IP traffic from the AP to the controller is sourced from the AP's IP address and destined to the AP-manager interface on the controller. The wireless client traffic is encapsulated inside this tunnel. When it hits the controller, the CAPWAP encapsulation is removed, leaving the client's original packet to be sent to the local network through the controller.

    The controller should not be the default gateway for wireless clients, because it is not a router. Think of it as a device that converts wireless traffic into wired traffic.

    Normally, the controller acts as a DHCP proxy. Once the client has joined a WLAN, the controller sends DHCP packets to the DHCP server on behalf of clients, much like the IP helper address normally configured on a router for wired clients. You can also configure the controller to act as a DHCP server for wireless clients.

  • Questions about database parameters when using a fast recovery area and writing two copies of archived redo logs

    My databases are 11.2.0.3.7 Enterprise Edition. My OS is AIX 7.1.

    I am converting databases to use individual fast recovery areas and have two questions about what values to assign to the database parameters related to archived redo logs. This example refers to one database.

    I read that if I specify

    log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'

    the names of the archived redo logs written to the default fast recovery area are '%t_%s_%r.dbf'.

    In the past my archived redo logs have been named based on the parameter

    log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc'

    I think log_archive_format will be ignored for archived redo logs written to the fast recovery area.

    I am planning to write a second copy of the archived redo logs based on the parameter

    alter system set log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch';

    If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    Before my use of a fast recovery area, I used the OEM 12c console to specify database backup settings that deleted archived redo logs after 1 backup. The Oracle manuals say to instead specify a deletion policy of "none" and let Oracle delete logs in the fast recovery area as needed. Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?

    Thank you

    Bill

    If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    They will be named 'GPAIDT_archive_log_%t_%s_%r.arc'. LOG_ARCHIVE_FORMAT is only ignored for directories under OMF (such as the fast recovery area).

    Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?

    You can keep the deletion policy as it is. From the Oracle documentation on CONFIGURE ARCHIVELOG DELETION POLICY: "The archived redo log deletion policy applies to all archiving destinations, including the fast recovery area."
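
    For reference, a minimal sketch of the two-destination setup discussed in this thread (the parameter values are the ones quoted above; verify them in your own environment before applying anything):

    ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch' SCOPE=BOTH;
    -- log_archive_format is a static parameter, so it needs SCOPE=SPFILE and a restart
    ALTER SYSTEM SET log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc' SCOPE=SPFILE;

    -- after a few log switches, check the file names each destination produced
    SELECT dest_id, name
    FROM   v$archived_log
    WHERE  dest_id IN (1, 2)
    ORDER  BY sequence#, dest_id;

    -- Note: RMAN's CONFIGURE ARCHIVELOG DELETION POLICY decides when logs are
    -- eligible for deletion, but files outside the fast recovery area are only
    -- actually removed by an explicit RMAN DELETE ARCHIVELOG (or a similar job)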

  • Basic technical questions about the X1 Carbon Touch

    Please forgive me, I'm still recovering after the shock and frustration generated by dealing with Lenovo Tech Support.

    I recently received my X1 Carbon Touch and I have been trying to work out some, I think, fundamental questions.

    As you know, for Win 8, Lenovo doesn't have a custom/proprietary recovery solution. However, there is a Lenovo recovery partition on the machine. I'm /assuming/ this, somehow, can be used through the default / built-in Windows recovery mechanisms, but I want to know exactly how without necessarily experimenting with it myself.

    In addition, the computer seems to be equipped with an alternative standby mechanism, some Intel fast-resume feature or something like that. It is enabled in the BIOS and apparently makes use of a hidden partition of ~8 GB on the SSD. However, I find no indication of this mechanism working. There are no settings that I could find in Windows. I changed the BIOS setting to 'Immediately' in order to test the functionality, but putting the computer to sleep does not seem to produce behavior that is in any way different from regular sleep.

    In any case, I would like advice or information on my questions above. I want to understand how the fast-resume thing works in order to decide whether I want to reclaim the partition or not. I would also like to know my options should I ever need to re-install the factory-delivered OS, without actually going through the steps. I would also appreciate it if someone could tell me the best way to get support from Lenovo.

    /* I called Lenovo technical support with these issues. The person who answered, and those to whom I was then transferred, had no idea what I was talking about. They had difficulty understanding terms like partition, standby, sleep, resume, and even software/hardware and implementation. It was suggested that my issue might be Intel's and not Lenovo's... They seemed to really come from another computer age. I asked to speak with someone who would know some basics of the machine that Lenovo delivered to me. I find that reasonable. At that point, I was told that to talk to a knowledgeable person I would have to pay a fee. Seriously?

    */

    elfstone,

    To find out how the various Windows recovery methods work, please see this Microsoft web site:

    http://windows.microsoft.com/en-us/windows-8/restore-refresh-reset-pc

    The sole purpose of the recovery partition is to work with the "Remove everything and reinstall Windows" function of Windows 8.  If you have specific questions about the factory recovery that you can't get answered from the Microsoft articles, let me know.

    Regarding the Intel hibernation scheme, as you noted, it requires a dedicated partition on your SSD which cannot be used for other purposes.  The only setting in Windows is inside the Lenovo Settings application, in the power section.  Lenovo calls this function "30 Day Standby".  The only thing you can do is turn it on or off.  Here's how "30 Day Standby" works:

    1. When you close the lid, or otherwise put the system into standby mode, the system stays in normal standby for 3 hours.

    2. At the 3-hour mark, the system will wake up and check things like AC power not attached, wake-on-LAN not activated, USB devices not attached, etc.  If conditions permit, the system enters the 30-day standby, i.e. deep sleep.  If not, the system returns to normal 'sleep' mode.

    3. Deep sleep means that the contents of memory are written to the special hibernation partition on the SSD.  It is very similar to traditional hibernation except that it happens faster, using BIOS methods instead of Windows methods.  But it is nowhere near as fast as a normal sleep/resume.

    4. The system comes out of deep sleep when the lid is opened or when you press the power button.

    You have found the BIOS settings, but in fact they are ignored when Lenovo Settings (and the Lenovo Settings dependency package) are installed on the system.  I don't really know why this design choice was made.

    Personally, I don't see value in 30-day standby, not enough value to want to give up 8 GB of my expensive SSD.  I exclusively use sleep/resume.  The battery will last several days.  And if I'm going to be away from the computer for a long time, then I'll just shut it down.

  • Small questions about Dataguard

    Hello

    I have a few questions about Dataguard. I'm new to DG, so bear with me for posting these fundamental questions:

    We plan to install Oracle 12c on Win 2012 R2 with the primary and a physical standby on different physical servers. We are looking for an HA solution but cannot afford to license RAC. The primary & standby are both going to be in the same data center, connected via a 1 gigabit network. So I have a few questions related to this.

    (a) Since both our primary & standby are going to be in the same data center with good connectivity, can I assume that although Data Guard is more of a DR solution, in our case it would be an HA solution since we have no network latency?

    (b) Is it safe to assume that when the primary fails, the standby can automatically take over the primary role without any manual intervention, possibly using the Dataguard broker?

    (c) I understand that when the primary breaks down, existing connections fail over automatically from the primary to the standby db (the new primary), but how will my web-based Java application know that new connections must be sent to the standby db server (the new primary)? I guess Oracle Net has a provision to handle this? Can it be automated?

    (d) Given that I have only 2 physical servers, can I deploy the Dataguard broker on a normal PC (since it is a thin client), assuming the PC is kept in the data center and is online 24/7?

    (e) I was reading about the protection modes; taking into account our current situation & infrastructure, should I choose 'Maximum Availability' mode instead of 'Maximum Performance'?

    (f) What is your view of Oracle Fail Safe? I don't hear much about Oracle Fail Safe; is that because people generally go for a combination of RAC & Dataguard for HA? We have some additional storage that could be shared between the two servers, but I feel that with literally no network latency DG can provide us HA.

    Regards

    Learner

    (a) Since both our primary & standby are going to be in the same data center with good connectivity, can I assume that although Data Guard is more of a DR solution, in our case it would be an HA solution since we have no network latency?

    Well, it will not provide the level of HA that RAC does. RAC gives you Transparent Application Failover. With a physical standby, you have to allow for some delay for a failover operation. You can implement the observer, which can automate the failover and reduce that time.

    (b) Is it safe to assume that when the primary fails, the standby can automatically take over the primary role without any manual intervention, possibly using the Dataguard broker?

    It isn't really the DG broker, but rather the observer that you will need to implement. The DG broker by itself does not automate the failover. It is the observer that "observes" (hence its name) that the primary is down and initiates the failover process. To do this, the observer uses the DG broker, so the broker is still necessary.
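
    As a rough illustration of what automatic failover involves on the database side (a hedged sketch only; the broker configuration and the observer themselves are managed through DGMGRL, and nothing here is specific to the poster's environment):

    -- fast-start failover requires Flashback Database on primary and standby
    ALTER DATABASE FLASHBACK ON;

    -- after fast-start failover is enabled and an observer is started from
    -- DGMGRL (ENABLE FAST_START FAILOVER; START OBSERVER;), you can verify
    -- the state from the primary:
    SELECT fs_failover_status, fs_failover_observer_present
    FROM   v$database;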

    (c) I understand that when the primary breaks down, existing connections fail over automatically from the primary to the standby db (the new primary), but how will my web-based Java application know that new connections must be sent to the standby db server (the new primary)? I guess Oracle Net has a provision to handle this? Can it be automated?

    You'll want to read up on fast-start failover. http://www.oracle.com/technetwork/articles/smiley-fsfo-084973.html

    (d) Given that I have only 2 physical servers, can I deploy the Dataguard broker on a normal PC (since it is a thin client), assuming the PC is kept in the data center and is online 24/7?

    The DG broker is required to run on the database servers; it runs on the primary and the standby. I think what you're really asking is whether the observer can run on a PC. Yes, it can. However, it is probably not a good idea. A PC will not have the internal hardware redundancy a server can have, so the PC may fail more often than a server would. And if the gods are unkind, it will fail when you need it most.

    (e) I was reading about the protection modes; taking into account our current scenario & infrastructure, should I choose 'Maximum Availability' mode instead of 'Maximum Performance'?

    Your choice should depend on how much data loss is acceptable.
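
    For what it's worth, once redo transport to the standby is synchronous, the protection mode itself is switched with a single statement; a sketch under the assumption that 'standby_tns' is a placeholder Oracle Net service name for the standby:

    -- Maximum Availability needs a synchronous, acknowledged destination
    ALTER SYSTEM SET log_archive_dest_2=
      'SERVICE=standby_tns SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
      SCOPE=BOTH;

    ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

    -- confirm the current mode
    SELECT protection_mode, protection_level FROM v$database;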

    (f) What is your view of Oracle Fail Safe? I don't hear much about Oracle Fail Safe; is that because people generally go for a combination of RAC & Dataguard for HA? We have some additional storage that could be shared between the two servers, but I feel that with literally no network latency DG can provide us HA.

    Oracle Fail Safe only works on Windows with a Microsoft Failover Cluster. Few DBAs run Oracle on MSFC.

    Cheers,
    Brian

  • a few questions about the migration of content/web content

    Hi all


    I have a few questions about the migration of the web site and its content (from development to test) which are not clear to me after reading the Oracle documentation.

    --> When we do site replication, will the content (data files) be migrated?

    --> When we migrate content, will all the EDs and RDs etc. (any web site object type) be migrated?

    --> Are both steps necessary when we migrate a web site?


    Thank you

    -Yves

    --> When we do site replication, will the content (data files) be migrated?

    It depends on how you do it. The 'Site Studio Replicator' tool won't move any content, only the structure of the site. The 'Manage Site Replication' page can be used to migrate content along with the structure of the site, but I don't recommend it for large sites; I use separate archiver tasks to move the content. The 'Backup and Restore' page stores the entire site in a ZIP file, which is not advisable for large sites.

    --> When we migrate content, will all the EDs and RDs etc. (any web site object type) be migrated?

    Yes. It uses the xWebsites metadata field to identify the items that belong to the site.

    --> Are both steps necessary when we migrate a web site?

    Depends on how you do it, but yes, all bits are needed.

  • Questions about installing ColdFusion version 9.0.2 over an existing 9.0.1 installation

    Hello

    I'm currently running ColdFusion 9.0.1 Enterprise Edition and plan to upgrade to version 9.0.2. I understand that this requires uninstalling the existing 9.0.1 and performing a new installation of version 9.0.2. My concerns are below:

    1. For the 9.0.2 installation, the 9.0.2 installer (demo/trial version) must be downloaded, and the existing serial number (used in version 9.0.1) can be entered during installation to convert it to fully registered mode - please correct me if my interpretation is incorrect here. My question is: will the existing serial key (as shown in the Admin page of the CF 9.0.1 installation) work with version 9.0.2 as well?

    2. There are many things configured in the current 9.0.1 installation - for example, the data sources configured in the Admin page. Is there a way I can save these configurations from the existing version and migrate the settings to the new 9.0.2 install, or do I have to redo the entire configuration manually after the new installation? Data sources are one thing I can think of; what other configurations can I save, if such a provision is available?

    3. In the 9.0.2 installation there is a step about "configure the web server for the CF connector" that lets me add web servers/sites during installation. If I select "built-in web server" as the option (and so do not configure any web server/site), can I change this and configure the server/site later, after completing the installation?

    What are the things I need to be careful about during this installation of version 9.0.2 over 9.0.1?

    Thank you!

    Arun-

    I completed a successful 9.0.2 installation; answering my own questions here for future reference -

    1. Yes. Only the serial number is required (as shown in the CF Admin page of the existing installation).

    2. Configurations (for example, data sources) can be exported. Go to the CF Admin page, click Packaging & Deployment, then click ColdFusion Archives. Basically, you export settings as files with the .car archive extension. Once exported from the current installation, they can be imported using the deploy option for the .car file.

    3. The server configuration can be modified (add/remove) by using the "Web Server Configuration Tool", which is installed by default with CF 9. You can find it under Program Files (on Windows).

    I had some problems during the installation. After installation, the CF Admin page would not render. Virtually no .cfm page would load, because the extension was not recognized by IIS (or whatever web server you are using). To fix this, use the "Web Server Configuration Tool" mentioned above: launch it and enable the checkbox related to rendering .cfm pages (I forget the exact name). Once this is done, .cfm pages should load. I had to enable this and then do a reinstall, though.

    -arun

  • MAA - RAC & DataGuard conceptual question about a figure (10.2 doc)

    Hello experts,

    I have a question about the figure:
    'D.1.2 Setting Up a Multi-Instance Primary with a Multi-Instance Standby'
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/img/rac_arch.gif

    on the page http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/rac_support.htm

    A detailed explanation is provided here:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/img_text/rac_arch.htm

    It is said:
    "This illustration shows a primary database archiving its online redo logs to a multi-instance standby database in a Real Application Clusters environment. In this configuration, there are two primary database instances: primary Instance A and primary Instance B. There are also two standby instances: standby receiving Instance C and standby recovery Instance D. The definition and purpose of receiving and recovery instances are described in the text that follows the illustration. Each primary instance uses an LGWR process to write the online redo logs and the local archived redo logs on the primary instance. In addition, the LGWR process on primary Instance A sends its changes over an Oracle Net network to the RFS process on primary Instance B and to the RFS process on standby receiving Instance C. Primary Instance B sends its changes over an Oracle Net network to the RFS process on standby recovery Instance D. The RFS process on each standby instance writes to local standby redo logs. The figure also shows how the ARCn process on standby receiving Instance C sends its changes over an Oracle Net network to the RFS process on standby recovery Instance D. The ARCn process on standby recovery Instance D also archives its changes to local archived redo logs."

    Questions I would like to ask, because I'd like to understand the internals better:
    (1) Why is it written that LGWR writes the archived logs, and not the ARCH process? Is this some sort of documentation error?
    (2) What is the reason that LGWR sends redo changes to the RFS process on the other instance of the same (primary) cluster? What is the purpose? What happens if there are many nodes in the primary? Does that mean the instance would multicast the redo to each of them in this way?
    (3) On the standby site: the MRP is located on Instance D in this scenario? (Only one standby instance is applying the redo, but several can receive redo and write it to the SRLs?)

    (1) Why is it written that LGWR writes the archived logs, and not the ARCH process? Is this some sort of documentation error?
    Yes, it seems that this picture has been simplified; it does not show the appropriate level of detail.
    (2) What is the reason that LGWR sends redo changes to the RFS process on the other instance of the same (primary) cluster? What is the purpose? What happens if there are many nodes in the primary? Does that mean the instance would multicast the redo to each of them in this way?
    This shows what we call "cross-instance archiving". If you enable it, you are storing the redo on multiple nodes. So if you are in a cluster and archive only locally, and that node dies, how do you get at the archived redo you need for recovery? Putting the archives in several places gives you extra security, for those who are paranoid. I think it was more useful in the days when Oracle shipped only archived logs; now that LGWR writes to the remote node, you are less likely to need it.
    (3) On the standby site: the MRP is located on Instance D in this scenario? (Only one standby instance is applying the redo, but several can receive redo and write it to the SRLs?)
    Yes, one instance applies the redo.
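
    For context, redo shipping from a primary instance to a remote destination (whether another primary instance for cross-instance archiving, or a standby receiving instance) is driven by the LOG_ARCHIVE_DEST_n parameters; a hedged 10.2-style sketch, with 'standby_tns' as a placeholder Oracle Net service name:

    -- ship redo from the primary to the remote destination via LGWR, async
    -- (the local archiving destination, e.g. LOG_ARCHIVE_DEST_1, stays as is)
    ALTER SYSTEM SET log_archive_dest_2=
      'SERVICE=standby_tns LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
      SCOPE=BOTH;

    -- on the receiving side, an RFS process writes the incoming redo into
    -- standby redo logs; a single MRP instance then applies it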

  • A question about restoring a cold backup (backup control file output is not clear)

    Hello

    I had another question about restoring from a cold backup. My database is in NOARCHIVELOG mode, and after taking a consistent cold backup, all I have to do is restore that backup, right? The reason I ask is that when I back up my control file to trace, I see statements like these:
    -- Commands to re-create incarnation table
    -- Below log names MUST be changed to existing filenames on
    -- disk. Any one log file from each branch can be used to
    -- re-create incarnation records.
    -- ALTER DATABASE REGISTER LOGFILE '/uo1/app1/arch1_1_647102958.dbf';
    -- Recovery is required if any of the datafiles are restored backups,
    -- or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    -- Database can now be opened normally.
    ALTER DATABASE OPEN;
    -----
    My database is in NOARCHIVELOG mode, so I don't know why these statements (REGISTER LOGFILE and so on) are in the control file backup. When I restore the cold backup of this database, will it still work OK? (There are no archived log files; I only have the CRD files in the cold backup - no archive log files.)

    Thank you
    Cedric

    These are generic messages in the control file trace script. They do not affect you.
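
    For completeness, restoring a consistent cold backup of a NOARCHIVELOG database is essentially "copy the files back and open"; a minimal sketch, assuming every control, redo and data file from the cold backup has been copied back to its original location:

    -- with the instance down, copy the backed-up control, redo and data
    -- files back to their original locations, then:
    STARTUP MOUNT
    -- no media recovery is needed after restoring a consistent (cold) backup
    ALTER DATABASE OPEN;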
