Doubts about standby databases

My understanding is that whenever an online redo log is archived, the same redo log is transported to the standby - or is it rather that the redo generated on the primary is also written, as it is generated, to redo logs on the standby database? Which is true?

And what is real-time apply?

I am not asking about sizing. Consider that a physical standby database will also have online redo log files (which can be used when the role is switched).
My question is: will those same files be used as standby redo log files, or is a separate set necessary?

No, you must create additional standby redo log files. The online redo log files are not used while the database is in the standby role; they come into play again when you perform a switchover. Until then, redo transport writes into the standby redo log files.
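
For what it's worth, a minimal sketch of the usual commands (file paths, sizes and group numbers are placeholders; a common convention is one more standby log group per thread than online log groups, all the same size):

    -- On the standby (and ideally also on the primary, for future role switches):
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u01/oradata/stby/srl10.log') SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/u01/oradata/stby/srl11.log') SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 ('/u01/oradata/stby/srl12.log') SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 13 ('/u01/oradata/stby/srl13.log') SIZE 512M;

    -- Real-time apply: recover from the standby redo logs as they are being written
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;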

Thanks for the clarification of the standby logic.

Thank you.

Tags: Database

Similar Questions

  • Doubt about database encryption

    I'm trying to encrypt a database using a code signing key. I followed the .key file generation process with the BlackBerry signing key tool, created a GWL.key file and tried to use it in my project, but I'm getting an exception, even though the database itself is created successfully... Here is my code to encrypt the .db file:

    DatabaseSecurityOptions dbso = new DatabaseSecurityOptions(false);
    cardDetails = DatabaseFactory.create(uri, dbso);

    CodeSigningKey codeSigningKey = CodeSigningKey.get(CodeModuleManager.getModuleHandle("SQLite"), "GWL");

    try
    {
        // Encrypt and protect the database.  If the database is already
        // encrypted, the method will exit gracefully.
        DatabaseFactory.encrypt(uri, new DatabaseSecurityOptions(codeSigningKey));
    }
    catch (DatabaseException dbe)
    {
        Dialog.inform("Encryption failed - " + dbe.toString());
    }

    CodeModuleManager.getModuleHandle("SQLite")
    

    The above line is returning '0', and I get an IllegalArgumentException on the following line:

    CodeSigningKey codeSigningKey = CodeSigningKey.get(CodeModuleManager.getModuleHandle("SQLite"), "GWL");
    

    I solved the problem... It was with the name of the module... Thanks anyway...

  • Doubts about RAC infrastructure with a disk array

    Hello everyone,

    I am writing because we have a doubt about the correct infrastructure to implement RAC.

    Please let me first explain the current design we use for Oracle DB storage. Currently we run multiple instances on multiple servers, all connected to a SAN disk storage array. Since we know the array is a single point of failure, we keep redundant controlfiles, archived logs and online redo logs both on the array and on the internal drives of each server; if the array fails completely, we 'just' need to restore the nightly cold backup, apply the archived and online redo logs, and everything is OK. This is possible because the instances are standalone and we can accept that downtime of about one hour.

    Now we want to reuse these servers and this array for a RAC solution. We know the array is our only single point of failure, and we wonder whether it is possible to have a multi-node RAC solution (not just a single node) with redundant controlfiles/archivelogs/online redo logs on internal drives. Is it possible to have each RAC node write a full set of controlfiles/archivelogs/online redo logs to its internal drives and apply those files consistently when the ASM storage used for RAC is being restored (e.g. with a softlink to an internal drive and using a single node)? Maybe the recommended solution is to have a second array to avoid this single point of failure?

    Thank you very much!

    CSSL wrote:

    Maybe the recommended solution is to have a second array to avoid this single point of failure?

    Correct. That is the right solution.

    In this case, you can also decide to simply stripe across both arrays and mirror the data of array1 onto array2 using the ASM redundancy options.
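
    For illustration, a minimal ASM sketch of that idea, with hypothetical disk names and one failure group per array:

    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP array1 DISK '/dev/mapper/array1_lun1', '/dev/mapper/array1_lun2'
      FAILGROUP array2 DISK '/dev/mapper/array2_lun1', '/dev/mapper/array2_lun2';
    -- Each extent is mirrored across the two failure groups, so losing one array still leaves a full copy.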

    Keep in mind that redundancy is also necessary for connectivity. You need at least 2 switches to connect to the two arrays, two HBA ports on each server, and 2 fibre runs, one to each switch. You will need multipathing driver s/w on the server to deal with the multiple I/O paths to the same storage LUN.

    Similarly, you will need to repeat this for your interconnect: 2 private switches and 2 NICs on each server, which are bonded. Then connect these 2 network cards to the 2 switches, one NIC per switch.

    Also, don't forget spare parts. Spare switches (one for storage and one for the interconnect). Spare cables - fibre and whatever is used for the interconnect.

    Bottom line - redundancy is not a cheap solution to have. What you can do is combine the storage protocol/connection layer with the interconnect layer and run both on the same architecture. The Oracle Database Machine and Exadata do this from storage to servers: you can run your storage protocol (e.g. SRP) and your interconnect protocol (TCP or RDS) on the same 40Gb InfiniBand infrastructure.

    Likewise, 2 InfiniBand switches are needed for redundancy, plus 1 spare, with each server running a dual-port HCA and one cable to each of these 2 switches.

  • Doubts about licenses

    Hi all

    I have a few doubts about the price of licenses.

    I understand I can deploy an APEX server on 11g XE free of charge, but what happens if I want to install a non-XE version?

    Imagine a billing application for 10 users, and assume that Standard Edition is sufficient. Using [this price list | http://www.oracle.com/us/corporate/pricing/technology-price-list-070617.pdf], how much exactly will it cost?

    Do I understand correctly that I can license either per user or per server, or do I have to license both the users and the server?

    Kind regards.

    Hello
    The license metric is Named User Plus or per-CPU licensing (see the core factor table).

    For a quote, you can take a look at the Oracle Store or ask your Oracle reseller for an exact price.

    Regards,
    Peter

  • I have a doubt about .folio files and publications

    Hello, I'm new here.

    I want to start working with DPS, but I have a doubt about which version to buy.

    At the moment I have one customer who just wants to publish a magazine, but my intention is to have more customers and publish more magazines.

    If I buy the Single Edition of DPS, I read that I can publish a single .folio file. What does that mean? Does each folio file represent a publication?

    Please, I need help understanding this before I purchase the software.

    Thank you very much

    Paul

    Here's a quick blog post I wrote comparing Single Edition and multi-folio apps:

    http://boblevine.us/Digital-Publishing-Suite-101-single-Edition-vs-multi-Folio-apps/

    Bob

  • Doubts about event handlers

    Hello

    I have some doubts about the event handlers in OIM 11.1.1.5...

    (1) I want to use the same event handler for both the post-insert and post-update tasks... Can I use the same handler for both? If yes, how do I do it?

    (2) Can I create a single plugin.xml, add all the jar files into a single lib folder and zip them all together? If yes, what changes do I need to make? Do I only need to add plugin tags for the different classes in the plugin.xml file, or do I need to do something extra as well?

    (3) If I need to change something in any handler class, do I need to unregister the plugin and register it again?
    If yes, do I also need to delete the event handler using the weblogicDeleteMetadata command?

    (4) We import the event handlers from a path such as eventmanager/db/..., and we add all the event handler .xml files to this folder, since weblogicImportMetadata recursively imports all files in that folder. Now, if I need to change anything in one of the event handler classes and we import again from the same eventmanager/db folder, what happens - does it create copies of the event handlers? Or should I only add the EventHandler.xml files for the classes I actually changed?

    (5) Given that I need to generate email IDs on user creation during recon, and update the email ID when the first name or surname is updated... should I use (entity-type='User' operation='CREATE') in the EventHandlers.xml, or something else?


    Help me clarify my doubts...

    Yes, in the post-update handler you need to first check whether the first or last name actually changed before updating the email ID, rather than always recalculating the email ID. You can check whether the name fields were updated by comparing them with the previous values, as in the earlier code.

    -Marie

  • Doubt about appsutil.zip in R12

    Hi all
    I have a doubt about applying the latest Rapid Clone patches on 12.1.3. I have applied the patch using adpatch. After that, the appsutil directory
    in the RDBMS Oracle home must be synchronized. I created appsutil.zip on the application tier and copied it to the RDBMS Oracle home. If I move the old appsutil to appsutil.old and extract appsutil.zip, the new appsutil directory will not contain the context file (I think). So I have to run AutoConfig based on the old context file. Below I have summarized the steps I follow. Please check and correct me if I'm wrong.

    Copy appsutil.zip from $INST_TOP/admin/out to the RDBMS Oracle home
    cp $CONTEXT_FILE /tmp/mytest_vis.xml
    mv appsutil appsutil.orig
    unzip appsutil.zip
    Run AutoConfig based on /tmp/mytest_vis.xml


    Thank you
    Jay

    Jay,

    Is there a particular reason for using the old context file? What is the difference between the context file that will be generated by adbldxml.pl and the old context file?

    If there are updates in the application, they will be reflected in the new xml file generated by adbldxml.sh, but not in the old file.

    So it is always best to run adbldxml.sh and then AutoConfig.

    Amulya

  • Some doubts about Oracle DBA...

    Respected professionals,

    I am new to Oracle RAC DBA work. I have only just learned RAC and have some doubts associated with Oracle RAC databases, plus a few other basic DBA questions.

    (1) A client is connected to one of the RAC nodes of a three-node RAC database through the SCAN listener. The node to which it is connected dies. My question is: does its connection automatically move to the next available node

    without using TAF? Or does it need to connect again? Do I have to configure TAF as in 10g RAC? Or is configuring TAF on top of SCAN completely misconceived?

    (2) A database connection request will go to one of the nodes of the three-node RAC system (thanks to load balancing). How can I connect remotely through PuTTY to a particular node?

    Do I need to enter the particular node's TNS entry (virtual hostname or IP) in tnsnames.ora on my Oracle client system?

    (3) We have a SCAN listener with three IP addresses for the 3-node RAC. If I add a new node, do I need to create another SCAN listener with a single IP address, or do I need to add another IP to the existing SCAN listener? What should I do so the new node can be reached through the existing SCAN listener?

    My sincere apologies if the above three questions are completely misguided or wrong.

    (4) 11g RMAN incremental backups are faster compared with 10g RMAN. Why?

    (5) How do we connect using sqlplus? (No high-level architectural decoration.) I mean, what happens when we connect to the database using SQL*Plus (sqlplus scott/tiger)?

    My sincere apologies for any inconvenience caused... I know this isn't a chat box... Please forgive me if my way of questioning is insulting or rude.

    Thank you

    Hello

    (1) A client is connected to one of the RAC nodes of a three-node RAC database through the SCAN listener. The node to which it is connected dies. My question is: does its connection automatically move to the next available node

    without using TAF? Or does it need to connect again? Do I have to configure TAF as in 10g RAC? Or is configuring TAF on top of SCAN completely misconceived?

    If TAF is not configured, you will have to reconnect.

    TAF can be configured on the client side (tnsnames.ora) or on the server side (srvctl).

    (2) A database connection request will go to one of the nodes of the three-node RAC system (thanks to load balancing). How can I connect remotely through PuTTY to a particular node?

    Do I need to enter the particular node's TNS entry (virtual hostname or IP) in tnsnames.ora on my Oracle client system?

    You can connect to one of the nodes via PuTTY using the node's public IP.

    To connect to a particular instance through SQL*Plus, you must specify the node VIP in tnsnames.ora.
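
    For illustration, a client-side entry along these lines (hostnames, service and instance names are placeholders) pins the connection to one instance via its node VIP; the FAILOVER_MODE section is roughly what client-side TAF looks like:

    ORCL_NODE1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.example.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl.example.com)
          (INSTANCE_NAME = orcl1)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
        )
      )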

    (3) We have a SCAN listener with three IP addresses for the 3-node RAC. If I add a new node, do I need to create another SCAN listener with a single IP address, or do I need to add another IP to the existing SCAN listener? What should I do so the new node can be reached through the existing SCAN listener?

    You don't need to create another SCAN listener.

    For your new node, REMOTE_LISTENER must point to the SCAN listener and LOCAL_LISTENER must be set to the node VIP.

    Ivica

    Post edited by: Ivica Arsov

  • Some doubts about navigation in Unifier

    Hi all

    I have a few questions about Unifier navigation.

    Is it possible to move access from admin mode to user mode?

    I mean, if a particular feature such as Shell Manager can only be accessed from admin mode, is it possible to provide access to it in user mode as well?

    If so, how?

    My 2nd question is: currently we can access company-level BPs such as the "Company Logs" or "Resource Manager" under the 'Company Workspace' shell.

    Is it possible to move the "Company Logs" or "Resource Manager" into another folder? If yes, how?

    I tried in "User mode navigation" to move the company-level BPs to the Home shell, but I couldn't do it.

    To answer your questions:

    (1) The user-mode navigator can only include user-level features. You cannot change the admin-mode view or move admin functions to user mode.

    (2) You cannot move these to the Home tab.

  • Confused about databases.

    A few days ago, I was told to look into using a database to receive information from my e-commerce site. That's what I've been doing, although I'm having a little trouble. I'm extremely new to basically everything involved: MySQL, the WAMP server, phpMyAdmin. Basically, I don't know what I'm doing. I've searched tons of tutorials online on this topic and it seems I'm making no progress, although I was able to set up a SQL server even if I have no idea what it does. I just want to know if there is an easier way to go about it. I want to include online registration, PayPal, add to cart etc. on my site, but I have not found a way to accomplish this for someone who only has a local root folder on their computer. Should I learn PHP? SQL? Database principles? As you can see, I'm very lost and very frustrated about this subject. Any help will be much appreciated.

    I think I remember Nancy mentioning that Adobe Business Catalyst can give you ecommerce without having to learn to code yourself. You might consider that option.

    Otherwise, what you describe is an advanced project. And yes, you would need to learn PHP, SQL and a bit of each of the others. You can certainly learn the tools and do the work yourself if you want, but do not expect to be finished within a year.

  • Doubt about the Index

    Hi all

    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production."
    AMT for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a question about indexes. Is a "WHERE" clause required for an index to be useful? I tried to find out myself but could not.
    In this example I haven't used a WHERE clause, only GROUP BY, but it does a full scan. Is it possible to get a range scan or something else using GROUP BY?
    SELECT tag_id FROM taggen.tag_master GROUP by tag_id 
    
    Explain Plan:
    Plan hash value: 1688408656
     
    ---------------------------------------------------------------------------------------
    | Id  | Operation             | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   1 |  HASH GROUP BY        |               |  4045 | 20225 |     6  (17)| 00:00:01 |
    |   2 |   INDEX FAST FULL SCAN| TAG_MASTER_PK |  4045 | 20225 |     5   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Hello

    SamFisher wrote:
    So that's why it does a full scan. Is it possible to avoid the full scan without using a WHERE clause?
    I was guessing a HAVING clause might, but I don't really know.

    Why?
    If this query is producing the right results, then you need a full scan.
    If you somehow fool the optimizer into doing a range scan, it will be slower.
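
    For contrast, a predicate on the indexed column is what typically lets the optimizer use a range scan - a hypothetical example against the same table:

    -- With a selective WHERE on tag_id, an INDEX RANGE SCAN becomes possible
    SELECT tag_id
    FROM   taggen.tag_master
    WHERE  tag_id BETWEEN 1000 AND 1100
    GROUP  BY tag_id;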

  • Some doubts about 11g R2 Policy Based/Scan

    Hello

    I went through several documents, including the Oracle documentation, to understand the new Policy-Managed/SCAN features. I also watched a few RAC SIG webinars by Markus, which were useful, but I'm still not able to understand the concept. I would be grateful if someone could please help me understand it.

    The documentation says the advantage of creating a server pool (policy-managed) is that it manages the workload automatically. I do not understand how it will manage the workload automatically. I do understand that, according to importance and the min/max settings, servers will be allocated to or deallocated from pools.

    For example - one server pool: a high_priority pool with min 2 servers, primarily serving 2 instances of the payroll database;
    another server pool: a low_priority pool with min 2 servers and low priority, serving 2 instances of the HR database.

    In this case, if the high_priority pool loses a server, it should take one server away from the low_priority pool. (That's my understanding so far after going through the documentation; please correct me if I'm mistaken.)

    1st question: does that mean it will stop an HR instance running in the low_priority pool, allocate that server to high_priority,
    and as a result a payroll instance comes up?

    2nd question: can I still get the TAF feature when using server pools? I read that I can have a uniform service that runs on all instances
    (I guess that is like 'preferred')... or the service can run on any single server (singleton).


    Question 3: with the SCAN feature, where we set remote_listener = SCAN_LISTENER, clients connect first to the SCAN and are then redirected to a local listener.
    What is the advantage of having a VIP for each node? The SCAN listener knows which instances are up and which is least loaded, so it can forward connections directly to the
    local listener. (We could just use the public IP for the local listener.)

    My doubt is: in 10g, we used the VIP only to get quick notification and avoid the TCP timeout... and then we got redirected to some other listener/node to connect.
    Now SCAN can do the same thing...

    Thank you in advance; I appreciate your time.

    Kind regards
    Lyxx

    Hello

    In this case, if the high_priority pool loses a server, it should take one server away from the low_priority pool. (That's my understanding so far after going through the documentation; please correct me if I'm mistaken.)

    Yes, that is the correct hypothesis.

    1.) Yes. The HR database instance will be stopped and a third payroll instance will start.
    2.) TAF will work for uniform and singleton services. However, if you only have 1 server in the server pool (or have a singleton service), there is no sense in working with preconnect.
    Note: TAF timing differs between uniform and singleton services; if the service fails over, it needs extra time for an instance to start. So you may have to configure TAF for such services with a higher number of retries.
    3.) The SCAN will redirect the client to the VIP for the final connection to the local listener. If it used the public IP address and the server failed at exactly that moment, the client would wait for the TCP/IP timeout (which may take some time).
    With the VIP, the client is told immediately that its connect was unsuccessful, and it retries via the SCAN to get a new connection.

    While the SCAN does the same redirection as the VIP, it has 2 advantages:
    a.) No matter how big your cluster is, you always need only 3 SCANs.
    b.) And they stay the same, no matter whether the cluster changes (the VIPs do not).

    Regards,
    Sebastian

  • Doubt about the restoration

    Hi all

    I have a doubt.

    I took a backup of a database.

    While restoring it, I found that it is restoring the control file into the $ORACLE_HOME/dbs folder instead of the actual location.

    Can someone explain to me why this happens?

    The actual controlfile location is '/orasoft/test'.

    If 'restore spfile' already fails, you end up with a "dummy" spfile, which does not have a CONTROL_FILES entry. I guess that you did not specify the DBID, which is mandatory when a catalog is not used. Here is an example from the documentation of how to restore the spfile:

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14192/recov004.htm#sthref582

    After a successful spfile restore, issue 'startup force nomount' so that the instance is restarted with the correct spfile. To restore the controlfiles and the 'rest' of the database, again follow the docs:

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14192/recov004.htm#sthref564
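
    Roughly, the sequence looks like the sketch below (the DBID is a placeholder, and the commands assume controlfile autobackups are in use):

    RMAN> SET DBID 1234567890;                  -- placeholder: your database's DBID
    RMAN> STARTUP FORCE NOMOUNT;                -- starts with a dummy parameter file if none exists
    RMAN> RESTORE SPFILE FROM AUTOBACKUP;
    RMAN> STARTUP FORCE NOMOUNT;                -- restart with the restored spfile
    RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;
    RMAN> ALTER DATABASE OPEN RESETLOGS;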

    Werner

  • Little doubt about Data Pump import

    Hello

    Oracle version: 10.2.0.1
    Operating system: linux

    I have a small doubt here; please help me. That is to say:

    I took a backup of the emp and dept tables. Now I need to import only the emp table into another schema, based on the condition
    select * from emp where deptno in (select deptno from dept where loc='DALLAS')
    Here is the import command I tried, which failed. Please help me with how to do it.
    E:\oracle\dbdump>impdp sample/sample directory=dbdump dumpfile=TABLES.DMP logfile=tales_imp.log remap_schema=scott:sample tables=emp remap_tablespace=users:sample query=\"where deptno in \(select deptno from dept where loc='DALLAS')\"
    
    Import: Release 10.2.0.1.0 - Production on Thursday, 29 October, 2009 17:59:05
    
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SAMPLE"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
    Starting "SAMPLE"."SYS_IMPORT_TABLE_01":  sample/******** directory=dbdump dumpfile=TABLES.DMP logfile=tales_imp.log remap_schema=scott:sample tables=emp remap_tablespace=users:sample query="where deptno in \(select deptno from dept where loc='DALLAS')"
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "SAMPLE"."EMP" failed to load/unload and is being skipped due to error:
    ORA-00911: invalid character
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
    ORA-31685: Object type POST_TABLE_ACTION failed due to insufficient privileges. Failing sql is:
    BEGIN
     SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('SCOTT','EMP');
     END;
    
    Job "SAMPLE"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 17:59:15

    SIDDABATHUNI wrote:
    Hello

    What I have here is an export of the full schema. Now I want to import only a single table based on a specific condition rather than importing the full dump.

    I got the error when using a parfile as you suggest.

    No - as I said, in a parfile there is +no need for backslashes+...
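
    For reference, a parfile along those lines might look like the sketch below (same hypothetical file and schema names as the command above); inside a parfile the quotes need no shell escaping:

    # import_emp.par
    directory=dbdump
    dumpfile=TABLES.DMP
    logfile=tales_imp.log
    remap_schema=scott:sample
    remap_tablespace=users:sample
    tables=emp
    query=emp:"WHERE deptno IN (SELECT deptno FROM dept WHERE loc='DALLAS')"

    impdp sample/sample parfile=import_emp.par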

    Nicolas.

  • Doubts about Cache Fusion - PI (past image) in RAC

    We are running an Oracle 10.2.0.4 database configured as a 2-node RAC on Linux x86_64.

    I was going through the Oracle documentation and some books that discuss Cache Fusion (read-read, read-write, write-write behavior) etc.

    It was mentioned that row locking is similar to single instance, so whenever a block is needed, an instance finishes its operation on the block and transfers the block to the other instance that requires exclusive access.
    What I understood is that once its operation is complete, the first instance transfers the block to the other instance so it can have exclusive access to that block.

    Also, the first instance keeps a PI (past image) of the block that it can use for read operations, but it cannot change the block, since it has transferred the block to the other instance and now has only shared access.

    Now, when a checkpoint occurs, all instances should flush their PIs. In that case, if the first instance is in the middle of a select statement that uses the PI of the block, what will happen to the query - will it raise an error (something like snapshot too old)?

    I'm not able to understand the notion of a PI and what happens when data blocks are flushed to disk.

    Evelyne wrote:

    What I still don't understand is: what is the meaning of the "image" here?

    A PI (past image) is a copy of a block which is known to have existed as the current (CUR) version of the block at some point in the past, so it can be used as a starting point for recovery if the node that holds the current block goes down. Thus PI blocks can reduce recovery times in the database.

    In my view, PI blocks can also be used by the holder of the PI for generating local CR copies without reference to DRM - if the SCNs are appropriate. Thus PIs can also reduce interconnect traffic.

    Initially, there was a rule that when a node had written the current block, it should send a message to all other nodes to mark their PI copies as free buffers (in effect, forget them) - although I once saw a note that Oracle may change this to convert PI blocks into CR blocks.
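
    For what it's worth, you can watch the buffer states from SQL - a small sketch against gv$bh (the file and block numbers are placeholders):

    -- Buffer copies of one block across instances; STATUS values include xcur, scur, cr, pi, free
    SELECT inst_id, status, COUNT(*)
    FROM   gv$bh
    WHERE  file#  = 4
    AND    block# = 1234
    GROUP  BY inst_id, status;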

    Regards,
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "All experts it is a equal and opposite expert."
    Clarke
