Database mode for a data warehouse

I am running Oracle 10g Release 2 on Solaris 5.10.

Is the recommendation correct that a data warehouse database must run in dedicated server mode?

Regards,

Long-running queries are unsuitable for shared server mode, because a long query will tie up a shared server for its whole duration and reduce the number of shared servers available.
Data warehouses usually run long-running queries.
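As a sketch (the alias, host, and service name below are placeholders, not from this thread), a client can be forced onto a dedicated server connection via the SERVER clause in tnsnames.ora:

```
DWH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dw-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dwh.example.com)
      (SERVER = DEDICATED)
    )
  )
```

Alternatively, setting the instance parameter SHARED_SERVERS to 0 disables shared server altogether.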

----------
Sybrand Bakker
Senior Oracle DBA

Tags: Database

Similar Questions

  • Need ideas for comparing current Oracle EBS data against the data warehouse to establish that the data matches

    Hello, I am new to the Oracle forum. I'm a BI developer and I need to compare Oracle EBS data in my organization with the data in the data warehouse to make sure they match. I am using Informatica for this process, pulling from the two sources and comparing. Can someone give me a brief example of how to build this process, or of similar methods with Informatica and its transformations, so that it can be useful? Thanks in advance. Let me know if you need more information about the process.

    It looks like you are trying to build a reconciliation process? That is, you have implemented BIAPPS (or something custom) and now want to check your ETL? If that is the case, then it's good enough for a test case: we usually start at a summary level (actual totals per company group, for example), then a subset of other queries, e.g. per level in the org hierarchy, by position, by dates, etc.

    and much more expensive than the implementation of OBIEE

    I don't think there are many things in the world that are more expensive than an OBIEE implementation!

  • Need to reset the Data Warehouse

    Hello

    I need to reset the data warehouse so that I can start my first ETL process. However, I am not able to do it from the DAC client, version 7.9.5. When I click 'Tools' -> 'ETL Management' -> 'Reset Data Warehouse', nothing pops up, and I don't know why; all other links and windows open, with the exception of this one. How can I solve this problem?

    Is it possible for me to do the reset manually?

    I checked on the internet and found the Q&A below:

    Q: How do you run a full load (instead of incremental)?
    A: Reset the data warehouse. Just truncate the table S_ETL_REFRESH_DT in the DAC repository.

    So in our case, the table to truncate will be W_ETL_REFRESH_DT, right? Please confirm. We have planned to run the ETL at 18:00 today.
    Are there any other tables I need to truncate this way?
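    If you do end up resetting manually, a minimal sketch (assuming W_ETL_REFRESH_DT is the correct refresh table for your DAC repository; verify the table name against your version's documentation before running anything):

    ```sql
    -- Run as the DAC repository schema owner. Clearing the refresh dates
    -- makes the next execution plan run as a full load.
    TRUNCATE TABLE W_ETL_REFRESH_DT;
    ```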

    Thank you
    Saran.

    Hello

    I would worry that if this window does not work correctly, you should probably work out why. I use JDK 1.5.0_17 in our environments (with DAC 7.9.5) and that works fine; I suggest trying another version of the JDK and seeing if that fixes the problem.

    Kind regards

    Matt

  • PowerCLI script for DatastoreClusters and datastores with size info, datacenter, and clusters

    Hello - I am looking to retrieve the DatastoreClusters and then list the datastores within them, along with their size info (total size, used space, free space, provisioned space, uncommitted space) and the total number of virtual machines on each datastore. I would also like to know which datacenter and cluster they are on. Is this possible? I might also want to limit the output to datastores with 13 percent free space or less.

    Thank you

    LORRI

    Of course, try it this way:

    Get-Datastore |
    Select @{N='Datacenter';E={$_.Datacenter.Name}},
        @{N='DSC';E={Get-DatastoreCluster -Datastore $_ | Select -ExpandProperty Name}},
        Name, CapacityGB,
        @{N='FreespaceGB';E={[math]::Round($_.FreespaceGB,2)}},
        @{N='ProvisionedSpaceGB';E={[math]::Round(($_.ExtensionData.Summary.Capacity - $_.ExtensionData.Summary.FreeSpace + $_.ExtensionData.Summary.Uncommitted)/1GB,2)}},
        @{N='UnCommittedGB';E={[math]::Round($_.ExtensionData.Summary.Uncommitted/1GB,2)}},
        @{N='VM';E={$_.ExtensionData.VM.Count}} |
    Export-Csv report.csv -NoTypeInformation -UseCulture
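    To limit the output to datastores at 13 percent free space or less, as asked above, a Where-Object filter can be inserted after Get-Datastore. This is an untested sketch of the idea, not part of the original reply:

    ```powershell
    # Keep only datastores whose free space is at or below 13% of capacity.
    Get-Datastore |
    Where-Object { $_.CapacityGB -gt 0 -and ($_.FreeSpaceGB / $_.CapacityGB) -le 0.13 } |
    Select Name, CapacityGB, FreeSpaceGB |
    Export-Csv low-space-report.csv -NoTypeInformation -UseCulture
    ```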

  • How to browse the datastores to run vmkfstools

    I am a huge n00b at vMA, vCLI, PowerCLI, etc.

    I'm trying to clone a virtual machine on ESXi. I have installed vMA, I already added my server with vifp addserver, and I have already run vifpinit against my server. But I don't know what's next. How do I view my datastores so I can run vmkfstools -i against my VM?

    Thank you

    There are many ways to 'browse'. Some of the default canned scripts are not the best, but they basically get you what you need, for example locating the VMDK(s) of a virtual machine if you already know the datastore or the host.

    Take a look at /usr/lib/vmware-vcli/apps/host/dsbrowse.pl

    With the remote version of vmkfstools, the command arguments for the VMDK are slightly different from what you would be used to if you had worked only with the classic Service Console vmkfstools. The VMware API defines a datastore path using the following format:

    \[datastore_name\] folder/anotherfolder/anotheranotherfolder

    So, to make it more concrete, let's say we have datastore1 and datastore2, and a virtual machine named 'myvm' that lives on datastore1 with one VMDK named 'myvm.vmdk' (and its backing 'myvm-flat.vmdk'). Now you want to copy myvm's VMDK to datastore2, into a folder 'virtualmachines/myvm-clone'.

    Now if you logged on to your ESX or ESXi host, you would see the following for 'myvm' on datastore1:

    /vmfs/volumes/datastore1/myvm/myvm.vmx

    /vmfs/volumes/datastore1/myvm/myvm.vmdk

    /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk

    ...etc.

    Now, to do the actual copy using the vCLI vmkfstools:

    vmkfstools -i "[datastore1] myvm/myvm.vmdk" -d thin "[datastore2] virtualmachines/myvm-clone/myvm-clone.vmdk"

    * The assumption is that the "virtualmachines/myvm-clone" subdirectory exists. Again, file system manipulation can be done using vifs, which is also well documented by Dave Mishchenko at http://www.vm-help.com/esx/esx3i/esx_3i_rcli/vifs.php

    Once again, this should be well documented in the vCLI guide, and the additional canned scripts can be found here: http://www.vmware.com/support/developer/viperltoolkit/viperl40/doc/vsperl_util_index.html

    I also wrote a collection of scripts using the vSphere SDK for Perl (which is what the vCLI is written in) that provides additional automation and hopefully improves on what exists in the vSphere API. These non-PS scripts are in the vGhetto script repository.

    If I may ask you to spend time going through the documentation: once you understand how it works, it's pretty simple to use. Yes, it's not polished, and some things may seem curious, but hopefully the repository I created will help with any features/functions missing from the default canned scripts. The main point I want to drive home, which a lot of people may not understand, is that whatever you can do in the vSphere Client against an ESX(i) host, or with PowerCLI or VI Java, can also be done with the vSphere SDK for Perl. It may not all be surfaced within the vCLI, but it's all using the same exposed API; it's just a matter of how it's pulled up. And I have to give the PowerShell guys credit: they did a good job there, hence the ease of use and understanding.

    I hope this clears things up; let me know if you have more questions.

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    Twitter: @lamw

    vGhetto script repository

    Introduction to the vMA (tips/tricks)

    Getting started with vSphere SDK for Perl

    VMware Code Central - Scripts/code samples for developers and administrators

    VMware developer community

    If you find this information useful, please award points for "correct" or "helpful".

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, which I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I make ten 1.8 TB datastores, I can see a problem down the road when I need to expand a VMDK but can't because there is not enough free space on its datastore; that would be less of a problem if I had one giant datastore in the first place.

    Thank you.

    First of all, it's one of those "how long is a piece of string" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the storage IO, the types of virtual machines, etc. etc. etc.

    Things to consider are, for example, whether you have storage that deduplicates, and whether storage cost is a major factor (and so on).
    Of course... almost always, a cost reduction equates to a drop in performance.

    In any case, as a very loose rule, I (in most cases) size LUNs somewhere between 400 and 750 GB and rarely (if ever) have more than 30 VMDKs per LUN.

    I almost always redirect this question to the following resources:

    First of all, the configuration maximums:
    http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf

    http://www.gabesvirtualworld.com/?p=68
    http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1350469,00.html
    http://communities.VMware.com/thread/104211
    http://communities.VMware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although André's post above covers most of them)

  • Need help with a vSphere script for pulling data, packaging it, and sending it to a repository

    Greetings PowerCLI gurus.


    Can anyone offer suggestions on a script which can query vSphere and pull the following fields for each virtual machine:

    Name, State, Status, Host, Provisioned Space, Used Space

    Then format it into a CSV file and send the file to an FTP server?

    Much respect to all, thanks a lot in advance.

    Hello-

    Happy to help you.

    OK, well, if this repository is accessible through a UNC path, you might copy directly using Copy-Item.  If you need to use different credentials, you can encrypt and store them in an XML file.  Hal Rottenberg wrote up how to do that: http://halr9000.com/article/531.

    Or, if this data repository is something that supports secure copy (scp) or secure FTP (SFTP), those would be good options.  Again, you can store the alternative credentials in encrypted form in an XML file and use them as needed.

    Certainly, there is a balance to be struck between security and ease of use.  It may be that the transmitted data is not considered sensitive at all, and clear-text transfers are acceptable.  It is probably still a good idea to take measures to protect at least the credentials.
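    For the collection step of the original question, here is a rough PowerCLI sketch (untested; property names may differ by PowerCLI version, and the FTP upload is left to whichever transfer method you settle on):

    ```powershell
    # Pull the requested fields for every VM and write them to a CSV file.
    Get-VM |
    Select Name,
        @{N='State';E={$_.PowerState}},
        @{N='Status';E={$_.ExtensionData.OverallStatus}},
        @{N='Host';E={$_.VMHost.Name}},
        @{N='ProvisionedSpaceGB';E={[math]::Round($_.ProvisionedSpaceGB,2)}},
        @{N='UsedSpaceGB';E={[math]::Round($_.UsedSpaceGB,2)}} |
    Export-Csv vm-report.csv -NoTypeInformation -UseCulture
    ```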

  • Placeholder datastores for storage policy protection groups (SPPG)

    I read somewhere that a placeholder datastore is not required for storage policy protection groups. My question is: why is it not needed?

    Hi Eric,

    An excerpt from the SRM 6.1 documentation (Site Recovery Manager 6.1 Documentation Center) that may help you:

    For storage policy protection, Site Recovery Manager applies the inventory mappings to virtual machines when you run a recovery plan that contains a storage policy protection group.

    With array-based and vSphere Replication protection groups, Site Recovery Manager applies the inventory mappings at the time you configure protection on a virtual machine. With storage policy protection groups, protection being dynamic, Site Recovery Manager applies the inventory mappings only at the time you run a recovery plan. Virtual machine placement decisions are made according to the inventory mappings during execution of a recovery plan, and Site Recovery Manager does not create placeholder VMs on the recovery site.

    Hope that this is sufficient,

    Daniel G.

  • Example workflow that handles multiple datastores


    I understand that there is no out-of-the-box workflow that manages assignment across several datastores. Can someone share an example of how to do this? I'm trying to use one of the canned workflows to create virtual machines from templates.

    Found a post that does what I wanted:

    https://communities.VMware.com/thread/464838

  • How can I partition a 5.5 TB RAID logical volume on an ML350 G6 server for ESX datastore creation?

    Today we received a new ML350 G6 server. It has 8 hard disks of 1 TB each. I have created a single RAID 6 volume, and the logical unit has 5.5 TB of usable space. The server boots an ESXi 3.5 image from a USB key provided by HP with the server. When I log in and check Configuration > Storage Adapters > Smart Array (the physical disk controller), I can't see any disk. I know that ESX will not support a volume size of more than 2 TB.

    I don't want to create smaller RAID volumes (for example, two RAID 6 volumes with 4 hard drives each), because that would reduce the amount of disk space I can use.

    Is it possible to create 3 partitions on this 5.5 TB RAID volume with the configuration below? Are there any tools anyone knows of that I can use to create the partitions? I tried GParted, but it doesn't work.

    LUN 0 - 2 TB

    LUN 1 - 2 TB

    LUN 2 - 1.5 TB

    You cannot have a LUN that is larger than 2 TB.

    Create multiple logical disks, not different partitions.

    André

  • What are the best solutions for a data warehouse configuration on 10gR2?

    I need help with solutions to propose to my client for the upgrade of their data warehouse.

    Current configuration: Oracle 9.2.0.8 database. This database contains the data warehouse plus a LOB store on the same host. The sizes are 6 TB (3-year retention policy plus the current year) and 1 TB respectively. The ETL tool and the BO reporting tools are also hosted on the same host. The current configuration performs really poorly.

    The customer cannot go for major architectural or configuration changes to its existing environment right now due to some constraints.

    However, they have agreed to move the databases onto hosts separate from the ETL tools and BO objects. We also plan to upgrade the database to 10gR2 to achieve stability and better performance, and to overcome the current headaches.
    We cannot upgrade the database to 11g, as the BO version is 6.5, which is not compatible with Oracle 11g. And the customer cannot afford to spend on anything other than the database.

    So, my role is essential in providing a solid solution for better performance and a successful migration of the Oracle database from one host to another (same platform and OS), in addition to the upgrade.

    What I am thinking of now is the following:

    Move the Oracle database and data mart to the separate host.
    The host will be the same platform, i.e. HP Superdome with the 32-bit HP-UX operating system (we are unable to change to 64-bit, as the ETL tool does not support it).
    Install new Oracle 10g software on the new host and move the database to it.
    Look into the new 10gR2 features useful for a data warehouse, i.e. the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    Also look at RAC to provide an even better solution, as our main motivation is to show an extraordinary performance improvement.
    I need your help preparing a good roadmap for my assignment. Please advise.

    Thank you

    Tapan

    Two major changes from 9i to 10g which will impact performance are:
    a. changes in the default values when executing GATHER_%_STATS
    b. changes in the behavior of the optimizer

    Oracle has published several white papers on them.

    Since 10g is no longer in Premier Support, you'll have some trouble logging SRs with Oracle Support; they will keep asking you to use 11g.

    The host will be the same platform, i.e. HP Superdome

    Why do you need an export if you are not changing platforms? You could install 9.2.0.8 on the new server, clone the database, and then upgrade it to 10gR2 / 11gR2.

    Hemant K Chitale

  • Do the Siebel Applications come with demonstration data, like the Vision data for EBS?

    Hi all

    I'm pretty new to Siebel Business Applications. I'm trying to install Siebel 8.1.1.5 on a virtual machine and would like to run an Oracle BI Applications ETL to build the data warehouse. For this I need a Siebel source system with some demonstration data.

    For Oracle E-Business Suite, we get a Vision instance with demo data... does Siebel likewise provide demonstration data for Siebel Business Applications?

    Thanks for your time,

    DK

    Yes, it's called the Sample Database. It can be installed separately.
    At the moment I don't remember whether it is part of the normal client download or a separate download.

  • Question on upgrading datastores from version 3 to 5

    Background:

    We have a group of hosts running ESX 4.1 that I migrated to ESXi 5.

    All our hosts are connected to a SAN.

    All existing datastores are file system version 3.21.

    I want to upgrade all my datastores to version 5.  I know there is an option to convert version 3 datastores to version 5, but a VMware support technician told me that an upgraded datastore would not have all the features a newly created version 5 datastore would.  He said it would retain the characteristics of version 3.

    Question:

    If I delete a datastore in vCenter (after I have migrated the guests off it) and then recreate the datastore using the same LUN, will this be a native version 5 datastore, or will it still retain the version 3 characteristics when vCenter reuses the LUN?

    I'd rather not have to unmap the LUNs and recreate blank ones for native version 5 datastores, if possible.

    When you remove the datastore, the LUN will be empty.  When adding it back, you can format the LUN as VMFS5.

  • Creating data warehouse schemas for Reporting with Oracle XE

    Is it possible to run the loader imports against Oracle XE for the database and the dw (data warehouse)?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that in this database we do not have the "Advanced replication" feature:

    SQL> select * from v$option where parameter = 'Advanced replication';
    
    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE
    

    The CIM log:

    Info Mon Feb 23 14:16:00 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for datasource Reporting loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.Admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been executed.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been executed.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for datasource Reporting data warehouse

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting data setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified against Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    I can't tell from the log extract you've posted which version of Oracle Commerce you are installing.

    ++++

    Thank you

    Gareth

    Please mark any reply as "Correct Answer" or "Helpful Answer" if it helps and answers your question, so that others can identify the correct/helpful reply among the many replies.

  • ESXi 5.0 iSCSI target: path active and connected, devices mounted, but datastores not showing in the Datastores view

    Hi,

    I am looking for help to point me in the right direction.  This problem occurred after a system reboot.

    I'm on VMware ESXi Essentials 5.0.0, build 821926.

    I use the StarWind software as my iSCSI target.

    I use iSCSI to connect from my storage server to the ESXi hosts.

    I have 2 datastores showing inactive under Datastore Clusters and in the Datastores view.  I have a third datastore on the same server that is loading properly.

    I currently see the same behavior on three ESXi hosts:

    Under Configuration > Storage Adapters, the iSCSI path is active. The datastores appear under Devices and are mounted, but in the Datastores view they are inactive. All are on the same storage server.

    On the StarWind server, I have:

    built a new target

    removed and re-added the devices to the new target

    Changed the IP address of the target.

    On the ESXi hosts, I removed the iSCSI server from both dynamic and static discovery and added it back under the same or new IPs.

    I can restart the ESXi hosts and the storage servers and get back to the same point.

    Every time I end up in the same place: paths active, devices mounted, datastores inactive.

    I don't know what else to share; let me know what you need to know to help me out.

    Thank you

    Dylan

    OK, in case someone comes across my ramblings, this may help.  My original issue was datastores that would not come up.  They were visible as devices in the storage section under Devices, and as devices on the iSCSI adapter.

    When I tried to force mount them (Add Storage > Keep the existing signature > Mount), they wouldn't mount.  I did some research, and my issue was that after a failed force-mount attempt the datastores got attached as snapshots.  I then force mounted them as snapshots and got the datastores back.

    I followed this KB to identify and resolve the problem:

    vSphere handling of LUNs detected as snapshot LUNs (KB 1011387)
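    For reference, on ESXi 5.0 the force-mount described in that KB can also be done from the ESXi shell; the datastore label below is a placeholder, not from this thread:

    ```
    # List VMFS volumes detected as snapshots
    esxcli storage vmfs snapshot list

    # Persistently mount one of them while keeping its existing signature
    esxcli storage vmfs snapshot mount -l "datastore1"
    ```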

    When that was done, I tried to add the VMs back and hit a new problem.  Powering on the VMs would time out.  The vmkernel.log showed:

    2014-05-27T07:20:40.010Z [564D7B90 verbose 'Vmsvc'] Vmsvc: Filtering VMs: ignored VM vim.VirtualMachine:75, not ready for power state request

    AND

    2014-05-27T03:45:47.821Z cpu4:2052 NMP: nmp_PathDetermineFailure:2084: SCSI cmd RESERVE failed on path vmhba35:C0:T1:L0; reservation state on device eui.cff548d1952b1e4c is unknown.

    I was seeing huge read/write latency, 3,000 ms and more.

    After several searches, I got into the ESXi shell and found that there was no reservation conflict.

    On a whim, I removed from the inventory all the virtual machines that were now inaccessible.  I then added a DOS virtual machine. Voila! The latency dropped to 1.2 to 0.7 ms for all datastores.

    Ultimately, the instructions said you may need to add the virtual machines back into the inventory, but not to remove all the virtual machines first.  I was adding VMs that were no longer in the inventory, and I hadn't removed the old, stale virtual machines from the inventory.

    A rookie mistake, yes.
