ASM for an ERP database

Hello

Can I use ASM for one database instance (the ERP one) and a normal ext3 file system for the other? If so, what impact would this have on the system?


Here is the setup:
Server1:
Database = prod (used for Oracle E-Business Suite R12);
uses ext3 as a normal file system.
Database = prod1 (used for the Oracle management utility v19.x);
I intend to use ASM for its storage.
My question: can I have ASM and a normal file system on the same server, each serving a different database?
Please also provide some documentation.

Double post? - ASM filesystem and ext.

Tags: Oracle Applications

Similar Questions

  • Cannot use ASM for database storage

    Hi all

    I use Oracle Linux 5.4 and 11gR2 Grid Infrastructure.

    While using DBCA to configure the database, when I select Automatic Storage Management (ASM) as the storage type, I can see the disk group that was created; but when I proceed, it gives me the following error:

    Cannot use ASM for database storage for the following reason:
    DISKGROUP not found.

    Please help me; thanks in advance.

    See the link: Thread: DBCA could not detect diskgroup

  • A separate FRA diskgroup for each prod database?

    Hello

    Env: Oracle 11gR2 EE (11.2.0.3), RHEL 6.2 64-bit

    Storage: file system

    Databases: ten on PROD and twenty on DEV

    I have two existing servers with the above configuration - one DEV and one PROD. I have to move all the databases from those two servers to new servers.

    I have two options. One is to configure the new servers exactly the same way (same Oracle software, same mount points, directory structures, etc.) and move/copy the databases over. The other is to use ASM for storage instead of a file system.

    The customer asks for a separate FRA diskgroup for each database. His reasoning is that if a shared FRA gets filled with archivelogs because of some process in one database, all the databases stop responding.

    • Is this a legitimate concern, and what is the best way to deal with this kind of situation?
    • Should I create a separate FRA diskgroup for each database?

    The archivelogs are backed up every 20 minutes for each PROD database.

    Please advise!

    Best regards

    You can limit the FRA for each database with the DB_RECOVERY_FILE_DEST_SIZE parameter.
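    A minimal sketch of that approach (the diskgroup name and size are illustrative; set one quota per database):

    ```sql
    -- Cap this database's share of a common FRA diskgroup.
    -- Archiving in THIS database stalls only when ITS quota is exhausted,
    -- so one runaway database cannot fill the whole diskgroup.
    ALTER SYSTEM SET db_recovery_file_dest      = '+FRA' SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest_size = 100G   SCOPE=BOTH;

    -- Monitor usage of the quota per file type:
    SELECT file_type, percent_space_used, percent_space_reclaimable
    FROM   v$recovery_area_usage;
    ```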

  • What to do on the primary database when a physical standby is converted to a snapshot standby

    Hello

    Does the primary database need flashback database enabled when a physical standby is converted to a snapshot standby? Or is there anything else to do on the primary database? I found some documents that enable flashback on the primary database as part of this procedure, but I think it is not needed.

    Thank you

    Best regards.

    I did this recently; I did not configure flashback on the primary, only on the standby. I converted the standby, ran the tests, and then flashed back the changes. The primary database continued to send archive logs to the standby site. Once the snapshot standby was converted back to a physical standby, as mseberg mentioned, it catches up to the synchronized state after MRP is started.
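    For reference, a sketch of the non-broker procedure - everything runs on the standby, nothing on the primary:

    ```sql
    -- On the physical standby (11g): stop redo apply, then convert.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    -- ... run read/write tests against the snapshot standby ...
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    ```

    The conversion implicitly creates a guaranteed restore point on the standby, which is why flashback logging is only needed there.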

  • How can we suggest a new OCE certification for very large databases?

    On which web site, or at what phone number, can we suggest the creation of a VLDB OCE certification?

    The largest databases I've ever worked with were barely more than 1 trillion bytes.

    Some people have told me that the duties of a DBA change completely when you have a VERY BIG DATABASE.

    I could guess that some of the following configuration topics might be covered in it:

    * Partitioning

    * Parallelism

    * Larger block sizes - DSS vs. OLTP

    * etc.

    Where could I send a recommendation?

    Thank you, Roger

    This forum is probably your best choice; Brandye would be one of the contacts I would use to make such a request.  If you are really interested in this, I would go further and make the request more specific.  Most available Oracle certifications have a corresponding course available from Oracle University (OU). If you want to propose a new certification, one of the most compelling arguments would be to suggest one that tests information for which there are already one or more courses.  Looking at the current 12c course offerings, the following two seem to come closest to what you are asking for:

    Oracle Database 12c: Implement Partitioning Ed 1

    Parallel Processing in Oracle Database 12c Ed 1

    These areas are covered somewhat in 1Z0-117.  You would need to make a case for a certification that hits different areas of knowledge.  Looking at the courses for both subjects, partitioning has more material that is not covered in depth in 117.  However, there is supposed to be a 12c Data Warehousing expert certification coming out this year that may require good partitioning knowledge.

  • Configuring the DBUM connector for more than one Oracle database

    Hi, I have configured the DBUM connector for provisioning users to one Oracle database. I need to provision users to other databases in the test and production environments. I use OIM 11.1.2.2 and DB version 11gR2.

    I duplicated the job using an existing task: create n jobs and provide a different IT resource and schedule for each one (see: https://community.oracle.com/message/10594440).

    When I provision my users, I cannot see the DB roles associated with the resource; I see only the roles for the first configured DB. What is the way to do this? I have read the documentation and it talks about copying the connector files. Are there more detailed steps for doing this?

    Thanks in advance.

    Check how to clone the connector:

    http://docs.Oracle.com/CD/E21764_01/doc.1111/e14308/conn_mgmt.htm#OMADM4457

  • Longest service time value for all databases registered in Oracle Enterprise Manager

    Hello

    We use Oracle Enterprise Manager 11g Grid Control in our environment.

    I want to capture the longest service time values for all the databases that are registered in Grid Control.

    Is there any query to get these values from the Grid Control repository?

    Please help me.

    Thank you

    I don't have an 11g to test on, but try querying mgmt$metric_daily or mgmt$metric_details for target_type = 'oracle_database', and do a LIKE on metric_column looking for this particular metric. Once you have found the metric_name + metric_column pair, you can pull it for all the DBs.
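    A sketch of that search against the repository views (the LIKE pattern is an assumption; substitute the metric_name/metric_column pair the first query actually returns):

    ```sql
    -- Step 1: find the metric_name/metric_column pair for the metric.
    SELECT DISTINCT metric_name, metric_column, column_label
    FROM   mgmt$metric_daily
    WHERE  target_type = 'oracle_database'
    AND    lower(metric_column) LIKE '%service%';

    -- Step 2: pull the daily values for all registered databases.
    SELECT target_name, rollup_timestamp, average, maximum
    FROM   mgmt$metric_daily
    WHERE  target_type   = 'oracle_database'
    AND    metric_name   = '<metric_name from step 1>'
    AND    metric_column = '<metric_column from step 1>'
    ORDER  BY target_name, rollup_timestamp;
    ```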

  • Data modeling for a small database tutorial - understanding the part "Creating relationships between entities"

    I'm trying to understand and follow the tutorial "Data Modeling for a Small Database".

    In this tutorial, I'm supposed to give the Transactions entity two attributes that refer to the Patrons (patron_id) and Books (book_id) entities (2.1.4).

    Later, I add two one-to-many relationships, which duplicate those attributes in the Transactions entity (patron_id1 and book_id1) (2.1.5).

    So here is my question: what is the purpose of creating the attributes in step 2.1.4 if they are then duplicated in step 2.1.5?

    In case it matters, I use Oracle SQL Developer Data Modeler Version 4.0.0.825 Build 825 on jdk1.7.0_25.

    Bonus question: how do I show attribute types on the logical diagram? I can't find the option anywhere...

    I would be really grateful for any and all answers!

    You are looking at the documentation for version 2.  I checked 3.3 and 4.0 EA3 and the tutorial is corrected there; you can download the latest version and use its documentation.

  • ASM for an 11.2 stand-alone DB

    DB version: 11.2.0.3

    Platform: RHEL 5.8

    We want to use ASM as our storage for a stand-alone DB. I believe that from 11.2 on, ASM is no longer part of the RDBMS home. Do we need to install Grid Infrastructure to use ASM for a stand-alone DB?

    Hello

    Yes, you need to install Grid Infrastructure.

    Check this:

  • Caching enabled for a database - does it apply to all clients on that DB?

    When I enable client result caching for a specific database (mDBService, for example) with the following (taking effect at the next restart of mDBService):

    sqlplus <user>@mDBService   (connect string redacted in the original post)
    ALTER SYSTEM SET client_result_cache_size = 50M SCOPE=SPFILE;

    does this mean that any client connecting to that DB (SQL*Plus or any other API) caches its results?

    In other words, is it a directive from the DB service to every connected client to cache results, because client_result_cache_size is enabled?

    Published by: Goodfire George on May 27, 2013 14:25

    Well, you are changing a setting at the database level, so unless a client overrides it with its own client-side settings, then yes, all clients will see this db value.

    Nicolas.

    PS: I meant to say that although this parameter itself is not modifiable at the session level, it can be overridden on the client side.
    Read more:
    http://docs.Oracle.com/CD/E11882_01/server.112/e25513/initparams026.htm#REFRN10287

    Published by: Gasparotto N on May 27, 2013 14:30
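    As Nicolas notes, the server-side default can be overridden per client. A sketch of the client-side sqlnet.ora entries (the values are illustrative):

    ```
    # Client-side sqlnet.ora: overrides the server's client_result_cache_size
    # for this client only (setting the size to 0 disables the cache here).
    OCI_RESULT_CACHE_MAX_SIZE = 10485760
    OCI_RESULT_CACHE_MAX_RSET_SIZE = 65536
    OCI_RESULT_CACHE_MAX_RSET_ROWS = 5000
    ```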

  • Looking for an XML-enabled database system

    Hi all

    I'm looking for an RDBMS (Relational Database Management System) that manages XML data in a relational way.
    I found the IBM DB2 XML Extender; however, I don't know if this system is still in use (its last release was in 2003).
    I would be very grateful if you could help me find what I'm looking for.

    PS: for XML data processing in an RDBMS, there are two categories: XML-enabled systems (they store XML data using a special data type such as a BLOB, CLOB, ...), and native XML systems (XML data is stored in its hierarchical structure, so the fundamental unit of storage is the XML document). So I am looking for an XML-enabled system, NOT a native XML system.

    Thanks in advance.

    Best regards

    I misunderstood what you are looking for.

    >
    But I am looking for an XML-enabled database; Oracle XML DB is a native database.
    >
    and
    >
    PS: for XML data processing in an RDBMS, there are two categories: systems that support XML (they store XML data using a special data type such as a BLOB, CLOB, ...), and native XML systems (XML data is stored in its hierarchical structure, so the fundamental unit of storage is the XML document). So I am looking for a system that supports XML and NOT a native XML system.
    >

    If I understand you correctly:

    XML-enabled database - stores XML data using a special data type like BLOB or CLOB
    Native XML database - stores XML data in its hierarchical structure

    Looking at Oracle XML DB, you can store your XML file in an XMLType as a CLOB or BLOB, or as a file in the operating system.
    So it looks like Oracle XML DB is both an XML-enabled and a native XML database :)

    I'm going crazy ;)
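    In Oracle terms the distinction comes down to a storage clause on the XMLType column; a sketch (assuming 11g syntax, with hypothetical table names):

    ```sql
    -- "XML-enabled" style: the document is kept as plain character data.
    CREATE TABLE docs_clob (doc XMLTYPE)
      XMLTYPE doc STORE AS CLOB;

    -- Native-leaning style: the document is parsed into binary XML.
    CREATE TABLE docs_bin (doc XMLTYPE)
      XMLTYPE doc STORE AS BINARY XML;
    ```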

  • Upgrade from 3.2 to 4.1 - error handling for unavailable database links

    Hello

    I have a 3.2 -> 4.1 upgrade problem related to error handling for broken database links.

    I have a conditional "Exists" button on a page whose condition contains a SQL query against linked tables. However, for the 10 minutes every day when the target database of the link goes down for a cold backup, the query fails. In the old APEX 3.2, I just got an error within the region where the button is located, but otherwise the page was still viewable:

    "Exists condition is not valid: ORA-02068: following severe error from MYDBLINK ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist."

    However, in APEX 4.1.0.00.32 I get the following unhandled error, and clicking 'OK' brings me to the edit page when logged in as a developer.

    That is, the page cannot run at all while the database link is failing, because of this one region.

    Error processing condition.
    ORA-12518: TNS:listener could not hand off client connection
    Technical information (only visible for developers):
    is_internal_error: true
    apex_error_code: APEX.CONDITION.UNHANDLED_ERROR
    ora_sqlcode: -12518
    ora_sqlerrm: ORA-12518: TNS:listener could not hand off client connection
    component.type: APEX_APPLICATION_PAGE_REGIONS
    component.id: 4
    component.name: Alerts today
    error_backtrace:
    ORA-06512: at "SYS.WWV_DBMS_SQL", line 1041
    ORA-06512: at "APEX_040100.WWV_FLOW_DYNAMIC_EXEC", line 687
    ORA-06512: at "APEX_040100.WWV_FLOW_CONDITIONS", line 272

    Users generally see this:

    Error processing condition.
    ORA-01034: ORACLE not available ORA-02063: preceding line from MYDBLINK

    Clicking 'OK' takes the user to another page; I don't know how APEX decides that, but it is not a concern at the moment.

    I did a search and read the http://www.inside-oracle-apex.com/apex-4-1-error-handling-improvements-part-1/ page, but the new APEX error handling is not clear to me, and I don't know whether the apex_error_handling_example provided on that page is applicable to this situation.

    Hello

    It was my fault; I forgot that the condition code is compiled on the fly, which already fails if the remote table/view is not accessible. Nice that you found a workaround yourself.

    Regards
    Patrick
    -----------
    My Blog: http://www.inside-oracle-apex.com
    APEX Plug-Ins: http://apex.oracle.com/plugins
    Twitter: http://www.twitter.com/patrickwolf

  • How do we find out the number of users present for a database?

    How do we find out the number of users present for a database,
    both those who are currently connected and those who are not connected yet?

    Thanks in advance

    Regards
    SUBJ.

    Using v$session...
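    A sketch of both counts (assuming "present" means database accounts and "connected" means current sessions):

    ```sql
    -- Currently connected sessions, per user (background processes
    -- have NULL username and are excluded):
    SELECT username, COUNT(*) AS sessions
    FROM   v$session
    WHERE  username IS NOT NULL
    GROUP  BY username;

    -- All user accounts in the database, connected or not:
    SELECT COUNT(*) AS total_users FROM dba_users;
    ```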

  • One SCN for the entire database and different SCNs for the data files?

    DB version: 11g

    I always thought that there is a single SCN for the database as a whole.
    A quote from the link below says:
    "When a checkpoint is completed, Oracle stores the SCN individually in the control file for each data file."
    http://www.dbapool.com/articles/1029200701.html

    What does that mean? Is there one SCN for the entire database, and an individual SCN for each data file?

    Well, unfortunately, the article says more bad things than good. Or if I can't call them wrong, they are rather confusing and, rather than clearing things up for the reader, make them look more confused.

    First things first: the SCN is used for the read consistency (CR) mechanism and is the backbone of the notion of multiversioning. The checkpoint is the mechanism by which recovery is decided. Contrary to what the article says, not every kind of checkpoint updates both the data file and the control file, and there is not just one type of checkpoint either. In addition, the article says that LAST_CHECKPOINT is set to NULL, while it is actually set to infinity, since at the moment the database is opened it is not possible to know what the last checkpoint number for the file will be. In the case of a complete checkpoint, this number is recorded and is also matched against the control file by the database at the next startup. If they do not match, there is an inconsistency between the stop checkpoint of the data file and the stop checkpoint recorded in the control file, leading to an instance recovery.

    There are several types of checkpoints, and similarly there are several types of SCNs as well. Without going into the details of these, IMO the article simply means that when a checkpoint write for a file happens, Oracle updates the checkpoint SCN in the file header, and this is recorded in the control file as well.
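    You can see both kinds side by side (a quick sketch; requires SELECT on the v$ views):

    ```sql
    -- The database-wide checkpoint SCN (from the control file):
    SELECT checkpoint_change# FROM v$database;

    -- The per-datafile checkpoint SCNs, as recorded in the control file
    -- and in each data file header; they match while the files are consistent:
    SELECT file#, checkpoint_change# FROM v$datafile;
    SELECT file#, checkpoint_change# FROM v$datafile_header;
    ```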

    HTH
    Aman...

  • A query takes twice as long for an identical database on two separate systems

    Hi, I'm looking for help with a performance issue on a particular database.

    I'm running the query

    for $o in collection("rx.dbxml")/RX[dbxml:metadata("janusId") = "12345"]
    order by $o/@dos descending
    return <item id="{replace(dbxml:metadata("dbxml:name", $o), " ", "_")}">{$o}</item>

    on two different systems through a Python xmlrpclib ServerProxy object. This is my test query, but the other queries' performance shows the same problem.

    The data on both systems are identical, and the time required to execute this query is the same on both systems for every database except one. For that particular database, the time required to run the query on system B is double that on system A.

    Since this is only the case for one particular database, it makes me think it's a data issue - however, the data are identical on both systems.

    Is anyone aware of any data-related issues that could trigger such a performance hit given the configurations below? What should be my next step in diagnosing this problem?

    System A
    ---
    - 1 x quad-core Intel Xeon CPU E5450 @ 3.00GHz
    - 32 GB RAM

    - Red Hat Enterprise Linux Server 5.1 (Tikanga)
    - Berkeley DB XML 2.4.13 patched to 2.4.16
    - bsddb3 4.5.0
    - Python 2.4.4
    - Twisted 8.2.0
    - SOAPpy 0.12.0

    - File system: /dev/sda5, size 39G, used 33G, avail 4.3G, 89% used
    ...
    ---

    System B
    ---
    - 2 x dual-core Intel Xeon CPU @ 2.80 GHz
    - 12 GB RAM

    - Red Hat Enterprise Linux Server 5.3 (Tikanga)
    - Berkeley DB XML 2.4.16
    - bsddb3 4.7.5
    - Python 2.4.4
    - Twisted 8.2.0
    - SOAPpy 0.12.0

    - File system: /dev/mapper/VolGroup00-LogVol00, size 384G, used 88G, avail 277G, 25% used
    ...
    ---

    Thanks for any help.

    Try looking at the output of db_stat on each system for the query itself:
    1. Clear the stats: db_stat -Z -m
    2. Run the query.
    3. Look at the stats: db_stat -m

    Note any suspicious differences. You can/should also monitor the disk and the I/O during the queries. If the query plans are really the same and the data are the same, that points to system differences, which would essentially be I/O. At least that is the first place to look.

    Kind regards
    George
