Location of the repository DB - best practices

Hello

We have two main Oracle DBs, on Production and Test servers.  The Production server uses Data Guard, so there is of course a standby DB server as well.

I'm standing up EM 12c Cloud Control on a new server. Is it better to install the repository DB on the SAME (local) server, or on our Production DB server?  Wherever it ends up, I'll set up Data Guard to replicate the repository to a standby DB.

I'd welcome any and all answers.  I looked through a few white papers, but can't seem to find the answer to this.

Thank you

I suggest installing it on the same server, as long as you have a good amount of space there.
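Whichever host you choose, it's worth checking free space on the target volume first; a minimal sketch (ORACLE_BASE is a placeholder path and defaults to $HOME here just so the snippet runs anywhere):

```shell
# Report free space (in KB) on the volume that would hold the
# repository datafiles. ORACLE_BASE is a placeholder path.
ORACLE_BASE=${ORACLE_BASE:-$HOME}
free_kb=$(df -Pk "$ORACLE_BASE" | awk 'NR==2 {print $4}')
echo "${free_kb} KB free"
```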

Kind regards,

Ansari

Tags: Enterprise Manager

Similar Questions

  • Changing the vSwitch - best practice question

    Hello

    Here is the scenario: I have an XP guest and I want to change the vSwitch it is connected to.

    Is it safe to simply edit the guest's properties in the vSphere client, select the different vSwitch, and click OK (without shutting the guest down)?

    Here is what happened last week, and I would like feedback to see if I did something outside best practices.

    I have an XP machine whose job is to move files within our company, via an in-house app. It had performance problems; network bandwidth was certainly an issue, but so was sustained 100% CPU usage whenever the in-house application was running. I made a few changes: first, I moved the network card to a vSwitch with no other guests connected, giving this guest a non-shared network connection.

    I stopped our in-house application that copies the files, edited the client settings, changed to the other vSwitch, and selected OK...  Everything seemed fine; I restarted the apps and found no problems.

    The next day, I increased the RAM on the guest from 512 MB to 1024 MB, as available physical RAM was low and I suspected disk caching. I shut the guest down, edited the memory up to 1024 MB in vSphere...  Restarted it, and here is the reason for my questions: the profile used to run the application had been corrupted and would not load.

    I should add that the application users often use "End Task" in Windows to kill the process when it sometimes hangs.  Not something I tend to do, and I think it may be a cause of file corruption.

    My boss believes my approach is the cause of the profile corruption, citing specifically the way I changed the vSwitch the guest was connected to.  My understanding at this point is that my approach was equivalent to unplugging a machine from one physical switch and plugging it into another, and I don't see how that could corrupt a Windows profile.

    Ideas from the community?  I expect brutal honesty if my method is in error.

    Changing the vSwitch a virtual machine is connected to during operation is perfectly acceptable, as long as the switch you move to can access the subnet the virtual machine is configured for - it's like unplugging a machine from one physical switch and plugging it into a new physical switch.

  • File size - best practices

    I am putting together my first DPS iPad app and looking for a few tips to keep the file size manageable.

    I did a quick project of about 6 pages and it came out at a huge 28 MB. My final document will be about 100 pages (x 2 for both orientations).

    I really did nothing to try to reduce file size in this project: I used PSD images with layers and did not resize them to 1024 x 768, and the same goes for any JPGs I used. Should I? Also, I did not bother to check whether the images were CMYK or RGB. Basically, I was working on the assumption that the images would be optimized in the process.

    I created the files using JPG because I wanted to use the Desktop Content Viewer, and I gather PDFs will help with file size. But will that make a big difference?

    Is the largest part of this 28 MB just DPS overhead, or will it grow proportionally as I add pages?

    Just looking for best practices.

    Thank you

    Brian

    A few tips:

    (1) We re-sample images to the right size, so don't worry if your source images are bigger than they need to be.

    (2) Publish as PDF. This will make a big difference if you have a lot of text content.

    (3) Don't publish two orientations. While we do support the feature, it means you have to do twice the layout work and the file size is quite a bit larger. Choose the orientation that works best for your content, and stick with that. The vast majority of our publishers do only portrait or only landscape, not both.

    Neil

  • What is the new 'best practice' for syncing VM time in vSphere 4.0?

    I'm using my DC as the time source, synced to an external time source... is that still what's recommended?

    All of my virtual machines are Windows.

    Correct.  The Windows Time service is set to Automatic and started by default, and VMware Tools guest time synchronization is not enabled.
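    For reference, pointing the DC at an external source is usually a matter of configuring the Windows Time service with w32tm; a hedged sketch (the NTP pool hosts below are just examples):

```
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /query /status
```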

  • Best location for the archived redo logs

    Hello

    I am writing up instructions and I want to make life easier for the DBA who ends up with my job.

    So, as the title says: what is the 'standard/best practice' location for archived redo logs? Particularly dest_1, which is usually local on the same server.

    Thank you.

    Hello

    For you, I recommend using the flash/fast recovery area.

    Configuring Archived Redo Log Locations

    Oracle recommends that you use the fast recovery area as an archiving location, because archived logs there are managed automatically by the database. The file names generated for archived logs in the fast recovery area are Oracle Managed Files names and are not determined by the LOG_ARCHIVE_FORMAT parameter. Whatever archiving scheme you choose, it is always advisable to create multiple copies of archived redo logs.
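    As a sketch, sending archived logs to the fast recovery area comes down to a few initialization parameters (the size and path below are placeholders for your environment):

```sql
-- Size and location of the fast recovery area (placeholders)
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
-- Send archived redo logs to the FRA
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
```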

    Ref 1:

    http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfb.htm#CHDEHHDH

    Ref 2:

    http://docs.oracle.com/cd/E11882_01/server.112/e17157/unplanned.htm#BABEEEFH

    Ref 3:

    http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfb.htm#BRADV89418

    Kind regards

    Juan M

  • Supported/best practice way to restore a Planning application between servers

    I have two servers with Hyperion Planning 9.3.1 (prod and dev). I want to copy the application called "BFS" from the Production server to "NewBFS" on the Dev server.

    As directed by our consultants, we did the following:

    1. Back up the repository database that contains the production BFS application
    2. Restore the .bak file to the "NewBFS" database on the dev server
    3. Resync orphaned logins (SQL Server logins to database users)
    4. Log in to Planning via the default admin user ID
    5. Go into the app's settings and change the URL
    6. Enter the Shared Services URL
    7. Manage the database
    8. Check all the boxes and click Refresh
    9. Go to Shared Services and resynchronize the Native Directory users

    However, when we try to log in to Planning as anything other than "admin", we receive the error "user xx is not configured...".

    From my experience, the user tables in the DB are either still referencing production and/or have not resynced correctly.

    So, long story short... can I restore a Planning application to another server, and if so, what is the supported/best practice way to do it?

    Thank you

    JTS

    It is in the hyperion\planning\bin directory.

    Use: updateusers.cmd serverName adminName password applicationName

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Win2008R2 DC disk types - best practices...

    Hello

    I am preparing to upgrade my AD environment to Win 2008R2, and I am asking for feedback on best practices for selecting the disk types and virtual machine parameters.  I intend to install fresh from scratch and transfer the AD contents and much more afterwards.

    I intend to create a 50 GB volume for the OS itself and a 200 GB volume for home directories and other user data.

    In a virtual environment, is it advisable to locate the SYSVOL and AD database in a separate location other than the default system drive?

    For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

    My 200 GB user-data drive should ideally be reassignable, intact, to another VM, to preserve the data in case of failure.

    Which disk 'provisioning' would it be normal to use with these types of disks?

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    Once such a virtual disk is created, is it possible to increase or decrease its size?

    Thank you very much for any comments on these questions.

    Best regards

    Tor

    Hello.

    In a virtual environment, is it advisable to locate the SYSVOL and AD database in a separate location other than the default system drive?

    Yes, follow the same best practices here as you would in a physical environment.

    For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

    Probably yes.  Why would you consider storing it elsewhere?  Are you doing something with SAN replication, or is there another reason?

    Which disk 'provisioning' would it be normal to use with these types of disks?

    I tend to use thin provisioning for OS volumes.  For the absolute best performance, you can go with eager-zeroed thick disks.  If you're talking about standard file sharing for user data, then any disk type would probably be fine.  Check out "Performance Study of VMware vStorage Thin Provisioning" (http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf) for more information.

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    This can depend on how you back up the server, but I would guess you wouldn't use independent mode.  If you do, make sure you use persistent: you do not want to lose user data.

    Once such a virtual disk is created, is it possible to increase or decrease its size?

    Growing is very easy with vSphere and 2008, and it can be done without downtime.  Shrinking is a more convoluted process and will most likely require manual intervention and downtime on your part.
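    For reference, a hedged sketch of the grow path (the datastore path, disk name, and sizes are placeholders): extend the VMDK on the ESXi side, then extend the NTFS volume inside 2008 with diskpart:

```
# On the ESXi host (or grow the disk from the vSphere client):
vmkfstools -X 60G /vmfs/volumes/datastore1/dc01/dc01.vmdk

rem Inside Windows Server 2008, extend the volume into the new space:
diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 1
DISKPART> extend
```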

    Good luck!

  • ESXi 4 DRS best practices

    DRS allows hosts to reach very high CPU usage before it moves virtual machines. The issue is that a customer manager has read-only access to vCenter. He sees that some ESX hosts are constantly reaching CPU usage of up to 98%, with a warning on the host, and he is asking about it.

    I'm not aware of any harmful effects on the production environment, but it is quite noticeable when viewing the cluster's Hosts tab. Sometimes four of the sixteen hosts show a warning during production hours. The attached picture shows a typical morning, with some hosts at 98% and some at 50%.

    The cluster has failover capacity to run on 11 of its 16 hosts, but we have very busy periods.

    Is there anything that can be done to configure DRS differently? Could we use affinity rules to keep busy VMs on different hosts?

    Among the DRS best practices:

    (1) When deciding which hosts to group into a DRS cluster, try to choose hosts that are as homogeneous as possible in CPU and memory. This ensures higher stability and predictability of performance. VMotion is not supported across hosts with incompatible processors, so with heterogeneous systems that have incompatible processors, DRS is limited in the number of opportunities it has to improve the load balance across the cluster.

    (2) When multiple ESX hosts in a DRS cluster are VMotion-compatible, DRS has more choices for balancing the load across the cluster.

    (3) Do not specify affinity rules unless you have a specific need to do so. In some cases, however, specifying affinity rules can improve performance.

    (4) Allocate resources to virtual machines and resource pools with care. Be aware of the impact of limits, reservations, and virtual machine memory overhead.

    (5) Virtual machines with smaller memory sizes or fewer virtual CPUs offer more opportunities for DRS to migrate them to improve the balance across the cluster. Virtual machines with larger memory or more virtual CPUs add more constraints to migration. Therefore, configure only as many virtual CPUs and as much memory for a virtual machine as it actually needs.

  • 10 GB NIC redundancy - best practice

    Hi all

    I have now successfully deployed 5 different vSphere clusters using 4 x 1 GB network adapters in all the ESX/ESXi host servers. I used 2 active/passive NICs in vSwitch0 for vMotion and management-network traffic (or Service Console for ESX), and 2 active/active NICs in vSwitch1 for VM traffic, as shown below. The uplinks are 802.1q trunks to redundant Cisco switches.

    We are now moving to 10 GB networks. I have the opportunity to keep this same 4-NIC design (using HP Flex-10 vNICs or Cisco Palo cards) or to go with a simpler 2-NIC model. What is the recommended best practice for separating VMkernel port and VM port group traffic among the physical NICs? Any suggestions or links to published VMware documents would be greatly appreciated.

    Thank you!

    ScreenHunter_01 Mar. 04 14.32.jpg

    Take a look at my post here: https://www.myciscocommunity.com/message/63033#63033

    It may help with your design questions.  There is also a generic 'best practices' guide there for 10G networks & VMware environments.

    As a Cisco employee I'm obviously partial to the Palo adapter.  Unlike the Flex-10 option, you do not set a 'max' bandwidth per virtual NIC; rather, you set a minimum guarantee for each virtual NIC. Given that you can create multiple virtual NICs with Palo, you can create one each for VMotion, VM data, IP storage, etc., and define your QoS guarantees quickly & easily, while still offering redundancy with host-transparent fabric failover.

    Let us know if you have specific questions about a deployment.

    Kind regards

    Robert

  • Virtual machine configuration for SRM - swap file / virtual memory files - replication best practice?

    Hello, I am very new to the VMware DR model and have a few lingering questions.

    What is the recommended best practice regarding Windows guest virtual machines and virtual memory files?  I think it would be pointless to replicate this constantly changing data to the DR site.  I have several virtual machines with 6-8 GB of memory, and I'm wondering how I can isolate the ESX memory swap files and the Windows guest virtual memory so they do not get replicated to the DR site as often as the OS/data, if it's even necessary to replicate them at all.

    We use vSphere vCenter 4.1 in linked mode and SRM 4.1, both on-site, with Celerra NX4s running Replicator V2.

    I wonder if I can place the page files on a file system that is NOT replicated, or on a file system that is only replicated every 24 hours, or once a week.

    My reasoning here is: once the file exists, the operating system doesn't really care about preserving changes to it, and Windows dumps its virtual memory on a restart, as does the .vswp file.

    What I mean is building a separate virtual disk and placing the Windows swap file on that virtual disk.  That virtual disk could live in a datastore hosted on a Celerra file system that does not get replicated as frequently as the OS/data file systems.

    Or am I completely off base here?

    I think that if you tried to replicate these files and maintain a 10-minute sync interval, you would need a ton of bandwidth.

    Any suggestions or recommendations, or even pointers to worthwhile articles, earn points. :D

    Thanks for your time.

    Since the swap file location is configured at the cluster level and has no impact on SRM, replication is not a factor.  Just remember not to replicate the LUNs holding the swap files; simply have a LUN configured on the recovery side as well, and define it as the cluster's swap file location.  Also, on critical virtual machines we define memory reservations, so the page file is not a factor at all.
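    For completeness, the same thing can also be pinned per-VM; a sketch of the .vmx setting (the datastore path is a placeholder), alongside the cluster-level swap file location described above:

```
sched.swap.dir = "/vmfs/volumes/local_nonreplicated/myvm"
```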

    Kind regards...

    Jamie

    If you found this information useful, please consider awarding points for 'Correct' or 'Helpful'.

    Remember, if it isn't one thing, it's your mother...

  • Physically moving a server blade to a different slot - best practices?

    Hello

    Is there a best practice for physically moving a blade to another slot?  I can't find a document about it.

    We want to move a blade to another slot in order to keep similar environments grouped together, but we do not want to disassociate the profile.

    Can we just unassign it, then take it out, put it in the other slot, and let it automatically reactivate?  Will the profile remain with the server?

    Or do you have to disassociate the profile and then re-associate it again?

    Thank you

    A

    Hi Adrien,

    Thank you for pointing that out; I guess I wrote too soon.

    Yes, it does take a decommission.

    Here are the updated steps:

    (1) Shut down the operating system on the blade.

    (2) Disassociate the service profile and wait for it to disassociate fully (check the status on the blade's FSM tab).

    (3) Decommission the blade in UCSM (check the FSM tab for the progress of the process).

    (4) Remove the blade from its current slot.

    (5) Move the blade into the new slot.

    (6) Wait for the blade to be fully discovered (again, check the status on the blade's FSM tab); you may need to acknowledge the slot to accept the blade and trigger the discovery.

    (7) Once discovered, associate the profile back.

    (8) Boot the operating system.

    I will check the documents and let you know.

    . / Afonso

  • Best practices for DRM versions

    Hi Experts,

    I need help with the question below; any suggestions are much appreciated.

    I want to know the best practice for maintaining DRM versions when I have certain common hierarchies upstream in my DRM application.

    My flows to downstream applications are different, i.e. various e-business systems.

    Do I have to keep these hierarchies as different versions, or is it possible to manage these hierarchies within a single version?

    The reason for this question is that I need to map my common hierarchies onto the hierarchies of two different companies that sit in two different versions, and if I'm not mistaken, mapping members across two different versions is not possible.

    So do I have to have these common hierarchies in two different versions, one for each of the two different companies, or is there any way to have them in one version and map them to the members of the second version?

    I hope you follow my question; please write back if anything is confusing.

    Thank you

    Madhabika

    Hello

    The best way to group related hierarchies is to put them in a hierarchy group. Versions are mainly used to maintain the data lifecycle; if you have quite separate sets of hierarchies, you can keep them in different versions and maintain their lifecycles separately.

    Inter-version communication and mappings are impossible, so if members are mapped, the best thing would be to keep them all in a single version.

    Thank you

    Denzz

  • What is the best practice for block sizes across multiple layers: hardware, hypervisor, and VM OS?

    The example below is not a real setup I work with, but it should get the message across. Here is my example layering, for reference:

    (Layer 1) Hardware: the hardware RAID controller

    • 1 TB volume configured with a 4K block size (RAW)?


    (Layer 2) Hypervisor: ESXi datastore

    • The 1 TB from the RAID controller, formatted with VMFS5 @ 1 MB block size.


    (Layer 3) The VM OS: Server 2008 R2 w/SQL

    • 100 GB virtual HD using NTFS @ 4K block size, for the OS.
    • 900 GB virtual HD set up using NTFS @ 64K block size, to store the SQL database.

    It seems that VMFS5 is limited to a 1 MB block size only. Would it be preferable for all or some of the block sizes to match across the different layers, and why or why not? What is the performance effect of the different block sizes at the other layers? Could you suggest a better alternative, or best practices, for the sample configuration above?

    If a SAN were involved instead of a hardware RAID controller in the host, would it be better to store the OS vmdk on the VMFS5 datastore and create a separate iSCSI LUN formatted with a 64K block size, then attach it with the iSCSI initiator in the operating system and format it at 64K there? Does matching block sizes across the layers increase performance, and is it advisable? Any help answering and/or explaining best practices is greatly appreciated.

    itsolution,

    Thanks for the points for a helpful response.  I wrote a blog post about this which I hope will help:

    Partition alignment and VMware 5 block sizes | blog.jgriffiths.org

    To answer your questions, here goes:

    I have (around) 1 TB of space and create two virtual drives.

    Virtual Drive 1 - 10 GB - used for the hypervisor OS files

    Virtual Drive 2 - 990 GB - used for the VMFS datastore / VM data storage

    The default allocation element size on the PERC 6/i is 64 KB, but it can be 8, 16, 32, 64, 128, 256, 512, or 1024 KB.

    What block size would you use for array 1, which is where the actual hypervisor will be installed?

    -> If you have two arrays, I would set the block size on the hypervisor array to 8 KB.

    What block size would you use for array 2, which will be used as the VM datastore in ESXi?

    -> I'd go with 1024 KB, given the VMFS 5 block size.

    - Do you want 1024 KB so that it matches the VMFS block size which will eventually be formatted on top of it?

    -> Yes

    -> Yes

    * Consider that this datastore would eventually contain several virtual hard drives for each OS, SQL database, and SQL log volume, formatted with NTFS at the recommended block sizes: 4K, 8K, 64K.

    -> The problem here is that VMFS will go with 1 MB no matter what you do, so carving smaller sizes lower down in the RAID will cause no problems, but it does not help either.  You have 4K sectors on the disk, then 1 MB RAID, 1 MB VMFS, and 4K, 8K, or 64K in the guest.   Really, the 64K gains are somewhat lost when the back-end storage works in 1 MB blocks.

    If the RAID stripe element size is set to 1024 KB so that it matches the VMFS 1 MB block size, would that be better practice, or does it make no difference?

    -> Whether it's 1024 KB or 4 KB chunks, it doesn't really matter.

    What effect does this have on the OS/virtual HDs, and their respective block sizes, installed on top of the stripe and VMFS element sizes?

    -> The effect on performance is minimal, but it does exist.   It would be a lie to say it didn't.

    I could be completely off in my thinking on the overall picture, but to me it seems there must be some kind of correlation between the three different "layers", as I call them, and a best practice in use.

    Hope that helps.  I will say I have run SQL and Exchange virtualized for a long time without any block-size problems and without changing anything in the operating system; I just stuck with the standard Microsoft sizes.  I'd be much more concerned about the performance of the RAID controller in your server.  They keep making these things cheaper and cheaper, with less and less cache.  If performance is the primary concern, then I would consider an external array or a RAID 5/6 solution, or at least look at the amount of cache on your RAID controller (read cache is normally essential for a database).
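    The nesting described above can be sanity-checked with a little arithmetic: each size in the stack divides evenly into the one above it, so an aligned guest I/O never straddles an underlying block boundary more than necessary. A toy check with the sizes from this thread:

```shell
sector=4096             # disk sector: 4 KiB
vmfs=$((1024 * 1024))   # VMFS5 block: 1 MiB
guest=$((64 * 1024))    # NTFS cluster on the SQL volume: 64 KiB
# Zero remainders mean the sizes nest cleanly across the layers
echo "$(( vmfs % guest )) $(( guest % sector ))"   # prints: 0 0
```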

    Just my two cents.

    Let me know if you have any additional questions.

    Thank you

    J

  • Recommendations or best practices around editing audio data on a network share?

    I have several users editing audio located on the same network share. They are always complaining about the performance.  Is it a best practice to edit audio located on the network?  I see so many issues (latency, possible corruption, etc.) with that from the IT point of view, but I would like the opinion of those more familiar with the application and its best practices.  Thanks in advance.

    It's crazy! For any audio to be edited with any degree of speed and safety, it must be downloaded to a local machine, edited there, and then the final result re-saved to the network.

    You might as well do this deliberately anyway: the moment you make a change, you are storing a temp version of the file locally on the editing machine, and it is the actual Save, or Save As, that commits it back to the network drive. Also, you would be working on a copy, so the original is still available in case of a screw-up, which would not be the case if you edited the original files directly on the network; so it is intrinsically safer.
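    A minimal sketch of that workflow in shell terms (every path below is a stand-in for the real share and scratch locations): copy down, edit the local copy, save the result back, and the original survives untouched:

```shell
share=$(mktemp -d)    # stands in for the network share
scratch=$(mktemp -d)  # local editing scratch space
printf 'original take\n' > "$share/take1.wav"

cp "$share/take1.wav" "$scratch/take1.wav"        # download before editing
printf 'edited take\n' > "$scratch/take1.wav"     # the edit happens locally
cp "$scratch/take1.wav" "$share/take1-edit.wav"   # save back to the share

cat "$share/take1-edit.wav"   # prints: edited take
```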

  • Where to put the Java code - best practices

    Hello. I work with JDeveloper 11.2.2. I'm trying to understand the best practices for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf, it seemed that the application module was the preferred location (although many examples in the PDF reside in main methods). After some time coding, though, I noticed that a number of libraries had been imported, and I wondered whether this would impact performance.

    I looked at articles posted on the forum, in particular 'Re: programmatically access the service method (client interface)'. That link mentions accessing code from a backing bean, and the bulk of the recommendations seem to be to use the data control by dragging it onto the JSF page, or to use the bindings to access code.

    My interest lies in where to put the Java code in the first place: in the view object, entity object, application module... or elsewhere, such as a backing bean?

    I can describe my best guesses about where to put the code, with their advantages and disadvantages:

    1. In the application module
    Benefits: a central location for the code makes development and support easier, as there are not multiple access points. Rather as a data control centralizes the services, the application module can act as a conduit for the different pieces of code you have in your model objects.
    Cons: everything in one place means the application module becomes bloated. I don't know how memory works in Java: if the app module imports tons of different libraries, are they all loaded even when a method that just re-runs a simple query is called? A memory hog?

    2. Write the code in the objects it affects. If you write code that accesses a view object, write it in the view object, then make it visible to the client.
    Benefits: the code is accessible through fewer conduits (for example, I expect that if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code involved).
    Cons: the code gets spread out and is more difficult to locate, etc.

    I would greatly appreciate your thought on the issue.


    Kind regards
    Stuart

    Published by: Stuart Fleming on May 20, 2012 05:25

    Published by: Stuart Fleming on May 20, 2012 05:27

    The first point here is that when you say 'where to put the Java code' and you're referring to ADF BC, you put business-logic Java code in the ADF Business Components. Of course it is perfectly fine to have Java code in the ViewController layer that serves the user-interface layer; just don't put business logic in the user-interface layer, and don't put user-interface logic in the model layer. In your 2 examples you seem to be considering the ADF BC layer only, so I'll assume you're asking only about business-logic Java code.

    Meanwhile, I'm not keen on the term 'best practices', in that people follow best practices without thinking; usually best practices come with conditions, and people forget to apply them. Fortunately you haven't done that here, as you have thought through the pros and cons of each (nice work).

    Anyway, back on topic, and stepping off my soapbox. Regarding where to put your code, my thoughts:

    (1) If you have only 1 or 2 methods, define them in the AppModuleImpl.

    (2) If you have hundreds of methods, or there is a chance #1 above will turn into #2, divide the code between the AppModuleImpl, the ViewImpls, and the ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods, making it unreadable. Put the code where it logically belongs instead: methods that operate on a specific row of a VO go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across your whole module go into the associated AppModuleImpl.

    To be honest, whichever option you choose, the one thing I recommend as a best practice is to be consistent and document the standard, so your other programmers know it too.

    BTW, there is no issue with loading a lot of libraries/imports in a class; imports carry no performance cost. However, if your methods require a lot of class variables, then yes, there will be a memory cost.

    On a side note, if you are interested in more ideas on how to build ADF applications properly, think about joining the 'ADF EMG', a free online forum which deals with ADF architecture, best practices (cough), deployment architectures, and more.

    Kind regards

    CM.
