Exadata performance

In our exachk results, there is a finding about shared_servers.

Our current production environment has SHARED_SERVERS set to 1.

This is the exachk finding I got:

Benefit / Impact:

As an Oracle kernel design decision, shared servers are intended for quick transactions and therefore do not issue serial (non-PQ) direct reads. Consequently, shared servers do not perform serial (non-PQ) Exadata smart scans.

The impact of verifying that shared servers are not performing serial full table scans is minimal. Changing the shared server configuration and application behavior to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.

Risk:

Shared servers performing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.

Action / repair:

To verify that shared servers are not in use, run the following SQL query as the "oracle" user:

SQL> select NAME, VALUE from v$parameter where name = 'shared_servers';

The expected output is:

NAME                           VALUE
------------------------------ ------
shared_servers                 0

If the output is not '0', use the following command as the "oracle" user, with the environment variables correctly set, and check the output for any 'SHARED' handler configurations:

$ORACLE_HOME/bin/lsnrctl service

If shared servers are confirmed to be present, check whether they are performing serial full table scans. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be changed to favor normal Oracle foreground processes, so that serial direct reads and Exadata smart scans can be used.

On our production environment, the lsnrctl service output currently shows all handlers as "LOCAL SERVER".

What should I do here?

Thanks again in advance.

XMLDB is an optional component on Oracle 11gR2, but it is included in the default Exadata install. In any case, keeping the system as-is avoids any risk of impacting XMLDB.
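
Before changing anything, it may be worth confirming whether the shared servers are actually doing any work. A minimal sketch, using only the standard V$SESSION and V$SQL views, could look like this:

-- Sketch: list shared-server user sessions and the SQL they are currently running.
-- V$SESSION.SERVER shows SHARED while a shared server is actively working for the
-- session, and NONE while the session sits idle on a dispatcher.
select s.sid, s.server, s.sql_id, q.sql_text
from   v$session s
left   join v$sql q
       on  q.sql_id = s.sql_id
       and q.child_number = s.sql_child_number
where  s.type = 'USER'
and    s.server in ('SHARED', 'NONE');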

Marc

Tags: Database

Similar Questions

  • How to perform a complete shutdown of the Exadata machine

    In the next month or so, we have to shut down our Exadata Database Machine (V2).

    I understand the steps necessary to shut down the compute nodes and storage cells themselves, but not so much the switches.

    Can I shut down the InfiniBand switches before the Cisco switch, or vice versa? I presume the order is reversed when we bring the machine back up?

    How important is it for us to do a clean shutdown (I presume you can run 'shutdown -h now' on the IB switches since they are Linux?), or is simply shutting down the compute nodes and storage cells themselves and pulling power at the PDU sufficient?

    Any advice would be appreciated.

    Mark

    >

    How important is it for us to do a clean shutdown (I presume you can run 'shutdown -h now' on the IB switches since they are Linux?), or is simply shutting down the compute nodes and storage cells themselves and pulling power at the PDU sufficient?

    You will not be able to shut down the IB switches; that's by design. After all the storage cells and compute nodes are shut down, power can be removed at the PDUs, which will obviously stop the switches.

    In addition, there is no particular order for starting/stopping the switches. The Cisco Ethernet switch and the IB switches can be started at the same time. All of these switches must be operational (allow 4 or 5 minutes for them to boot) before starting the storage cells, and then the compute nodes.

    Startup and shutdown procedures are in the documentation: the owner's guide chapter on maintaining the Database Machine (it should be chapter 7).

  • Difference in performance between CTAS and INSERT /*+ APPEND */ INTO

    Hi all

    I have a question about CTAS and "Insert /*+ Append */ Into" statements.

    In the following case, there is a difference in execution times on EXADATA that I did not understand.

    The two source tables (g02_f01 and g02_f02) do not have any partitions. I also tried hash-partitioning the tables on the "ip_id" column and running the same statements against the partitioned tables; it did not change the execution times.

    I gathered statistics for all tables. Both tables have 13,176,888 records and the same unique "ip_id" column. I want to combine these tables into a single table.

    First statement:

    insert /*+ append parallel(a,16) */ into dg.tiz_irdm_g02_cc a

    (ip_id, process_date,...)

    select /*+ parallel(a,16) parallel(b,16) */ *

    from tgarstg.tst_irdm_g02_f01 a,

         tgarstg.tst_irdm_g02_f02 b

    where a.ip_id = b.ip_id


    Elapsed => 45:00 minutes


    Second statement:

    create table dg.tiz_irdm_g02_cc nologging parallel 16 compress for query as

    select /*+ parallel(a,16) parallel(b,16) */ *

    from tgarstg.tst_irdm_g02_f01 a,

         tgarstg.tst_irdm_g02_f02 b

    where a.ip_id = b.ip_id

    Elapsed => 04:00 minutes


    Execution plans are:


    1. Insert statement execution plan:

    Plan hash value: 3814019933

    ----------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ----------------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT                  |                  |    13M|    36G|         |   127K  (1)| 00:00:05 |        |      |            |
    |   1 |  LOAD AS SELECT                   | TIZ_IRDM_G02_CC  |       |       |         |            |          |        |      |            |
    |   2 |   PX COORDINATOR                  |                  |       |       |         |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)            | :TQ10002         |    13M|    36G|         |   127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
    |*  4 |     HASH JOIN BUFFERED            |                  |    13M|    36G|    921M |   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |    13M|    14G|         |  5732   (5)| 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |    13M|    14G|         |  5732   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |    13M|    14G|         |  5732   (5)| 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |    13M|    14G|         |  5732   (5)| 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |    13M|    21G|         | 18353   (3)| 00:00:01 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |    13M|    21G|         | 18353   (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |    13M|    21G|         | 18353   (3)| 00:00:01 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |    13M|    21G|         | 18353   (3)| 00:00:01 |  Q1,01 | PCWP |            |
    ----------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    4 - access("AIRDM_G02_F01"."IP_ID"="AIRDM_G02_F02"."IP_ID")

    2. CTAS execution plan:

    Plan hash value: 3613570869

    ----------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ----------------------------------------------------------------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT            |                  |    13M|    36G|         |   397K  (1)| 00:00:14 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |         |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10002         |    13M|    36G|         |   255K  (1)| 00:00:09 |  Q1,02 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                 | TIZ_IRDM_G02_CC  |       |       |         |            |          |  Q1,02 | PCWP |            |
    |*  4 |     HASH JOIN                     |                  |    13M|    36G|   1842M |   255K  (1)| 00:00:09 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |    13M|    14G|         | 11465   (5)| 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |    13M|    14G|         | 11465   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |    13M|    14G|         | 11465   (5)| 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |    13M|    14G|         | 11465   (5)| 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |    13M|    21G|         | 36706   (3)| 00:00:02 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |    13M|    21G|         | 36706   (3)| 00:00:02 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |    13M|    21G|         | 36706   (3)| 00:00:02 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |    13M|    21G|         | 36706   (3)| 00:00:02 |  Q1,01 | PCWP |            |
    ----------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    4 - access("AIRDM_G02_F01"."IP_ID"="AIRDM_G02_F02"."IP_ID")

    Oracle version:

    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

    PL/SQL Release 11.2.0.4.0 - Production

    CORE 11.2.0.4.0 Production

    TNS for Linux: Version 11.2.0.4.0 - Production

    NLSRTL Version 11.2.0.4.0 - Production

    Notice how the extra distribution has disappeared with the non-partitioned tables.

    I think that with the partitioned tables Oracle tried to balance the number of slaves against the number of partitions it expected to use and decided to distribute the data to get a "fair share" of the workload, but had not allowed for the side effect of the buffered hash join that was going to appear, plus the extra messaging for the distribution.

    You could try the pq_distribute() hint on the insert to tell Oracle that it should not distribute like that. For example, based on your original code:

    insert /*+ append parallel(a,16) pq_distribute(a, none) */ into dg.tiz_irdm_g02_cc a ...

    This may give you the performance you want with the partitioned tables, but check what it does to the space allocation, as it can introduce a large number (16) of extents per segment that are not completely filled and can therefore be rather wasteful of space.
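
    As a rough illustration only (owner and table name are taken from the statements above), a query along these lines shows how many extents each partition ended up with and how much space was allocated:

    -- Sketch: extent count and allocated space per partition after the load
    select segment_name,
           partition_name,
           count(*)             as extents,
           sum(bytes)/1024/1024 as allocated_mb
    from   dba_extents
    where  owner = 'DG'
    and    segment_name = 'TIZ_IRDM_G02_CC'
    group  by segment_name, partition_name
    order  by partition_name;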

    Regards,

    Jonathan Lewis

  • RTO in Exadata

    I have a few questions on RTO; please help.

    • For a 2 TB database, what is the minimum possible recovery time objective (RTO) on an Exadata X5-2 quarter rack?
    • Is it possible to get a 2-minute RTO for a 2 TB database running on an X5-2 quarter rack?
    • What are the ways to reduce the RTO?
    • How long does it take to perform a full restore of a 2 TB database on a quarter rack?

    Thank you

    Aerts

    Abhi-

    Here are my best attempts to answer your questions:

    • For a 2 TB database, what is the minimum possible recovery time objective (RTO) on an Exadata X5-2 quarter rack?

      • There are so many variables that affect the achievable RTO (the tools used, the network, storage, platform, and the distance/latency between the primary and the standby) that it is difficult to give a minimum possible RTO. Theoretically, using a solution like GoldenGate in a multi-master (two-way, active/active replication) configuration, you might reach an RTO of 0. This assumes that your application and database can support such a configuration. A more traditional and less complex DR configuration using Data Guard should be able to deliver an RTO measured in a few minutes.
    • Is it possible to get a 2-minute RTO for a 2 TB database running on an X5-2 quarter rack?
      • In a real primary/secondary disaster recovery architecture with Exadata, the best solution is Data Guard. If it is correctly configured and you are able to minimize the latency between the primary and the standby, a Data Guard switchover or failover should complete in minutes, and it should be possible to get close to a 2-minute RTO.
    • What are the ways to reduce the RTO?
    • How long does it take to perform a full restore of a 2 TB database on a quarter rack?
      • It depends on where you are restoring from. We have seen great backup and restore times using a ZFS storage appliance connected to the Exadata via InfiniBand. With a 2-controller, 4-disk-tray configuration we were able to restore a 2 TB database in just 8 minutes.

    HTH,

    Kasey

  • Need help with performance & memory settings in a data warehousing environment

    Dear all,

    Good day.

    We successfully migrated from a 4-node Exadata V2 rack to a 2-node Exadata X4-2 quarter rack. However, we are facing performance problems with a few workloads, while others have indeed shown a good improvement.

    1. The total memory at the OS level is 250 GB on each node (two compute nodes for a quarter rack).

    2. I would be grateful if someone could help me with the best formula to calculate the SGA and PGA (as well as the distribution of the shared_pool, large_pool, etc.), or is Automatic Memory Management recommended?

    3. We ran the exachk report, which recommends setting up HugePages.

    4. When we tried to increase the SGA to more than 30 GB, the system would not allow it. We did, however, set the PGA to 85 GB.

    5. In addition, we noted that some of the queries with joins and indexes take more time.

    Any advice would be greatly appreciated.

    Best regards,

    Vikram.

    Hi Vikram,

    There is no single formula for SGA and PGA, but the best practice for OLTP environments is that, of the memory you give the database (which can be up to 80% of the server's total RAM), you can allocate 80% to the SGA and 20% to the PGA. For Data Warehouse environments, the split is more like 40% SGA and 60% PGA, or it may be up to 50%-50%. In addition, some docs discourage keeping Automatic Memory Management when you use a big SGA (> 10 GB).

    Since you use a RAC environment, you need to configure HugePages. And if the system is not allowing you to increase the SGA, take a look at the kernel shared memory and semaphore settings; they are probably set to lower values. As for the poorly performing queries, we would need the table structures and execution plans, and you should also check whether smart scan is in play.
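
    For example, with 250 GB per node and a warehouse-style split of roughly 40% SGA / 60% PGA over about 200 GB given to the database, a hypothetical sketch of the settings (values are illustrative only, AMM disabled, and the OS HugePages configuration must be sized to match the SGA) would be:

    -- Sketch: manual SGA/PGA targets instead of AMM (example values, not a recommendation)
    alter system set sga_target = 80g scope=spfile sid='*';
    alter system set pga_aggregate_target = 120g scope=spfile sid='*';
    -- force the SGA into HugePages (11.2.0.2+); the instance will not start if HugePages are too small
    alter system set use_large_pages = 'ONLY' scope=spfile sid='*';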

    Kind regards.

  • Multiple tenants and the Exadata disk configuration

    Hello

    Pre-built Exadata, 2-node RAC, 3 ASM disk groups configured.

    Please may I know what the recommended disk group configuration is for multi-tenant consolidation on Exadata.

    Thank you

    Since ASM actually balances all disk groups across all disks on all storage servers, it is actually irrelevant whether you have one or several disk groups, as long as each disk group is spread across all the disks. This is one reason why having fewer disk groups can be beneficial, because they are then guaranteed to be spread across all disks in the Exadata rack. If people try to change the standard configuration after installation, there is a chance of getting it wrong, dropping disks from disk groups and striping non-uniformly, leading to a loss of performance, availability and recoverability.
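
    As a quick sanity check (a sketch only, run from the ASM instance), you can confirm that every disk group has the same number of disks in each failgroup, i.e. on each storage cell:

    -- Sketch: disks and capacity per disk group and failgroup (one failgroup per cell)
    select dg.name        as diskgroup,
           d.failgroup,
           count(*)        as disks,
           sum(d.total_mb) as total_mb
    from   v$asm_diskgroup dg
    join   v$asm_disk d on d.group_number = dg.group_number
    group  by dg.name, d.failgroup
    order  by dg.name, d.failgroup;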

  • RMAN backup on Exadata with ZFS

    Our company has recently decided to start doing RMAN backups to ZFS instead of regular tape backups.

    I would like to know the right steps to check whether the ZFS appliance is configured correctly, how to set up the RMAN backups, and so on.

    I would appreciate it if someone here could point me to the right document. I have read the ZFS Q&A and the ZFS backup best practices; they are all very unclear, and none of them seem to provide a concrete step-by-step procedure.

    I wonder if Oracle provides step-by-step instructions for the DBA.

    9233598-

    Do the Q&A documents you examined include the note "Oracle ZFS Storage: FAQ: Exadata RMAN Backup with Oracle ZFS Storage Appliance (Doc ID 1354980.1)"? That note has a lot of information, is updated regularly, and includes information about configuring ZFS and the RMAN backups to the ZFS appliance.

    Some useful white papers, one of which I have used in the past and which was updated again last year: "Protecting Oracle Exadata with the Sun ZFS Storage Appliance: Configuration Best Practices" and "Backup and Recovery Performance and Best Practices using the Sun ZFS Storage Appliance with Oracle Exadata Database Machine".

    These documents go into a lot of detail, including providing example scripts.

    HTH.

    Thank you

    Kasey

  • Exadata redo

    Hi, I need advice here.

    I created databases on our X3 full rack RAC environment. The DB version is 11.2.0.4.

    I ran exachk, and it displays this warning:

    The Database Machine supports extremely high redo generation rates, so you should plan to increase the size of the redo log files. After the initial deployment, the online redo logs should be 4 GB each.

    My current redo log size is 1 GB; on the old non-Exadata RAC environment the redo log size was 500 MB.

    This environment has an enormous amount of data and a lot of OLTP plus query workload, so it's sort of a mixed environment.

    How can I be sure that the redo log size is optimal, and where should I place the redo logs? On what type of disks: flash or regular disks?

    Thanks in advance.

    Based on your current redo usage, 1500 MB is fine.

    That being said, I don't think having 4 GB logs would adversely affect performance either, so maybe that's why the exachk threshold is a bit on the high side.
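
    To sanity-check the size yourself, a common rule of thumb is to look at how often the logs switch at peak load; if you are switching every few minutes, larger groups such as the 4 GB that exachk suggests are worth creating. A sketch (the group numbers are illustrative):

    -- Sketch: log switches per hour over the last 7 days
    select trunc(first_time, 'HH24') as hour, count(*) as switches
    from   v$log_history
    where  first_time > sysdate - 7
    group  by trunc(first_time, 'HH24')
    order  by 1;

    -- If switching too often, add larger groups on each thread, then drop the old
    -- 1 GB groups once they are INACTIVE:
    alter database add logfile thread 1 group 11 size 4g;
    alter database add logfile thread 2 group 21 size 4g;
    -- alter database drop logfile group 1;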

  • AIX vs EXADATA

    Hello

    Can someone please send me some good links where I can see how EXADATA compares to AIX.

    Also, any tips for query tuning when moving from AIX to EXADATA.

    Hi Raghu,

    From a quick search on Google (assuming that you are referring to AIX on a POWER-type platform):

    http://www.Oracle.com/us/products/database/Exadata-vs-IBM-1870172.PDF

    http://www.SlideShare.NET/miguelnoronha/why-Oracle-on-IBM-POWER7-is-better-than-Oracle-Exadata-the-advantages-of-IBM-POWER7-systems

    http://public.DHE.IBM.com/common/SSI/ECM/en/orw03036usen/ORW03036USEN.PDF

    Regarding the performance management during the migration to an Exadata platform:

    http://www.Oracle.com/au/products/database/xmigration-11-133466.PDF

    http://www.SlideShare.NET/tanelp/Tanel-Poder-performance-stories-from-Exadata-migrations

    http://kerryosborne.Oracle-guy.com/papers/tuning%20Exadata.PDF

    (the general advice for Oracle platform migrations/upgrades also applies here)

    HTH!

    Marc

  • Is it possible to create additional disk groups on Exadata?

    We have a need for a few more disk groups other than +DATA and +RECO.

    How do we proceed? The Exadata version is X3.

    Thank you

    user569151-

    As 1188454 said, it's possible. First of all, I would ask why you need to create additional disk groups; the data, reco and dbfs disk groups are created by default. I often see Exadata users question the default disk groups and add or modify disk groups to follow what they did previously on non-Exadata RAC/ASM environments. However, usually the data and reco disk groups are sufficient and allow the best flexibility and performance. One reason to create multiple disk groups could be wanting two different redundancy options for the data disk groups, for example to have a prod database on high redundancy and a test database on normal redundancy; but beyond that there is not much need to change.

    When adding disk groups you will also need to rearrange and add new grid disks. You should keep the grid disk prefix and the corresponding disk group names equivalent. Remember that all of the Exadata storage is already allocated to the existing grid disks and disk groups, and it is necessary to keep the configuration balanced to optimize performance. So adding and resizing grid disks and disk groups is not a trivial task if you already have DB environments running on the Exadata, especially if you do not have enough free space to drop all of the grid disks in a failgroup, because then it requires moving data off the disks before you can add and resize grid disks. I have also run into problems with grid disk resizing that end up forcing you to move data off the disks being resized, even when you think you have enough space to drop a whole failgroup.

    Be sure to correctly estimate the size of the disk groups, factoring in redundancy, failgroups, the space reserved to handle a cell failure, and the anticipated growth of the data in the disk groups. If you run out of space in a disk group, you will need to go through the process of resizing all of the grid disks and disk groups as a result, or buy an additional Exadata storage expansion. This is one of the reasons why it is often better to stick with just the existing DATA and RECO.

    To add new grid disks and disk groups and resize the others, become very familiar with the information and follow the steps in the section "Resizing Storage Griddisks" in chapter 7 of the Exadata Machine user's guide, as well as the information and examples in the MOS note "Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)". For grid disk addition or resize operations I also often like to refer to the MOS note "How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)". The use case may not match yours, but several steps in that note are useful, as it discusses adding new grid disks and even covers creating a new disk group for occasions when you have cell disks of different sizes.

    Also be sure that you stick to the Exadata storage recommendations as described in "Oracle Exadata Best Practices (Doc ID 757552.1)". For example, the total number of grid disks per storage server for a given prefix name (for example, DATA) must match on all storage servers where that prefix name exists. Also, for best performance you should have each grid disk prefix and its corresponding disk group spread evenly over every storage cell. You will also need to maintain failgroup integrity, separating failgroups by cell, so that you can tolerate the loss of a cell without losing data.
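
    For illustration only, once matching grid disks (here assumed to be prefixed DATA2) have been created on every cell, the ASM side is a sketch like the following; the name, redundancy and attribute values are assumptions to adapt, not a prescription:

    -- Sketch (ASM instance): new disk group over grid disks named DATA2_* on all cells
    create diskgroup DATA2 normal redundancy
      disk 'o/*/DATA2_*'
      attribute 'compatible.rdbms'        = '11.2.0.4',
                'compatible.asm'          = '11.2.0.4',
                'cell.smart_scan_capable' = 'TRUE',
                'au_size'                 = '4M';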

    Hope that helps. Good luck!

    -Kasey

  • Tasks on Exadata

    Dear experts,

    I'm new to Exadata and I have gotten a chance to work on it.

    The tasks I need to perform are:

    (1) An installation checklist, i.e. how can I check that everything is properly installed.

    (2) Upgrade the Exadata OS version.

    (3) Upgrade the database.

    (4) Migrate a test Oracle database to Exadata.

    (5) Right now the RAC servers are installed without a domain name; I need to add the domain name to the existing RAC.

    Thanks in advance. Are there any useful documents to follow so I can work through these easily?


    Thank you
    Shir khan

    Hello

    I recently wrote a blog on exachk. It is a great tool.

    Thank you.

    Exachk – Oracle Exadata Diagnostic Tool | Mehmet Eser's Oracle Blog

  • Exadata features and execution plans

    Hi all

    I have questions about Exadata functionality.
    We have met a performance problem in the current system and want to improve it, so we are studying how much benefit Exadata can give.
    After reviewing the daily SQL executions, we find a certain amount of "index range scan" and "index full scan" operations (from the execution plans). Does Exadata help these operations (e.g. via smart scan)? On the other hand, can all SQL executions (for example full table scans) benefit from Exadata?

    Thank you and best regards,
    M.T.

    903714 wrote:
    Dear Christian, Noel, Pal Singh,

    Thanks for your information.
    Currently, SPA is not available for evaluation. Is there any document describing which types of operation (e.g. index range scan, full table scan) are eligible for Exadata performance benefits?

    Kind regards
    M.T.

    Yes, you can simulate it by setting CELL_OFFLOAD_PLAN_DISPLAY to ALWAYS. With this setting at the session level you can see the explain plan change as if the optimizer thought Exadata storage were available.
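
    For example (a sketch only; the table name is hypothetical):

    -- Sketch: show offload-aware plan output even on a non-Exadata system
    alter session set cell_offload_plan_display = always;
    explain plan for select * from sales_fact where amount > 1000;
    select * from table(dbms_xplan.display);
    -- look for TABLE ACCESS STORAGE FULL and storage(...) predicates in the output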

    Please note that Oracle would not recommend using (in fact, scanning) all your indexes on Exadata, because an FTS is much more effective on Exadata. Once you have moved your application to Exadata, you can set the main indexes to INVISIBLE mode and measure the benefits you get (they should be huge :)).
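
    A minimal sketch of that test (the index name is hypothetical):

    -- Sketch: hide an index from the optimizer to see whether FTS plus smart scan wins
    alter index app.sales_fact_ix1 invisible;
    -- a test session can still see invisible indexes for comparison:
    alter session set optimizer_use_invisible_indexes = true;
    -- revert when done:
    alter index app.sales_fact_ix1 visible;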

    Also check
    http://Antognini.ch/2010/04/Exadata-storage-server-and-the-query-optimizer-part-1/

  • Migration to exadata - DB_BLOCK_SIZE

    Hello
    We plan to move our DWH, a 5 TB 11gR2 database running on a Linux RAC machine, to a 1/4 Exadata rack.
    We are running with a 32K block size.
    Our statements mostly use FTS, and the number of indexes is low.
    We would like your advice on whether to stay with the 32K block size on the Exadata machine, or to create the database on the Exadata machine with an 8K block size.

    Thank you

    If you are able to (have the time and resources available), testing both 8 KB and 32 KB would be the best thing.

    I've never tested it (all my clients use an 8 KB block size), but I'd be surprised if 32 KB made much difference in performance.

    So if you have the choice, I would stick with 8 KB on Exadata.

  • SQL Tuning for Exadata

    Hello

    I would like to know Oracle Exadata-specific SQL tuning methods that could improve database performance.
    I am aware that Oracle Exadata runs Oracle 11g, but I would like to know whether there is anything specific w.r.t. SQL tuning on Exadata.

    Regards,
    Sunil

    Well, there are some things that are very different on Exadata. Keep in mind all the standard Oracle SQL tuning you have already learned, as Exadata runs the standard 11g database code, but there are many optimizations that were added that you should be aware of. At a high level, if you are doing OLTP-type work you should try to make sure you take advantage of the Exadata Smart Flash Cache, which will greatly accelerate your small I/Os, but long-running queries are where the big benefits appear. The high-level tuning approach for them is the following:

    1. Check whether you are getting Smart Scans.
    2. If you are not, fix whatever is preventing them from being used.

    We have been involved in somewhere between 25-30 Database Machine installations now, and in many cases a little bit of effort changes performance considerably. If you are only getting a 2-3x improvement over your previous platform on these long-running queries, you are probably not getting the full benefit of the Exadata optimizations. So the first step is to learn how to determine whether you are getting Smart Scans or not, and on which parts of the statement. Wait events, session statistics, V$SQL and SQL Monitoring are all viable tools that can show you this information.
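
    As an illustration of the V$SQL route (a sketch, assuming 11.2 and a known SQL_ID), the offload columns show how many bytes were eligible for smart scan and how many actually crossed the interconnect:

    -- Sketch: smart scan / offload indicators for one statement
    select sql_id,
           io_cell_offload_eligible_bytes,
           io_cell_offload_returned_bytes,
           io_interconnect_bytes
    from   v$sql
    where  sql_id = '&sql_id';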

  • Will hash subpartitioning become outdated in the Exadata era?

    RANGE partitioning (on SALES_DAY) with HASH subpartitioning (on PRODUCT_ID) has been a fairly common way to organize fact tables in ORACLE.

    Since Exadata automatically distributes data across the storage servers in 4 MB chunks, should we still subpartition tables by hash on a high-cardinality column? Will partition-wise joins still be offloaded to the storage servers without hash subpartitioning? Please advise and explain.


    Another question: since PARTITION BY HASH and SUBPARTITION BY HASH both use the same algorithm, in the form of the ORA_HASH() function, what kind of hash function does Exadata use?

    Thank you.

    Even with a shared-nothing database, the hash distribution key must be specified. Some default to a column or to random distribution, but generally you will want to choose this yourself for optimal performance.

    Who knows, the future of Exadata may hold a lot of good things... :)
    --
    Kind regards
    Greg Rahn
    http://structureddata.org
