VSAN design for datastores

Hi, I'm new to VSAN. I work for a small company, and we are considering implementing VSAN on a 3-node cluster.

I want to configure VSAN so that I can store the production databases on the faster (15K RPM) disks and other data on the slower (7.2K RPM) disks. Is there a way I can achieve this in VSAN? As I see it, I can only create one vsanDatastore (created automatically) and assign the disks to disk groups, but there is no option to store a VMDK on a selected disk group. Or maybe create 2 vsanDatastores (don't know if that's possible or not) and configure the disk groups so that one has the faster disks and the other the slower ones.

Can someone please guide me on how to achieve this? Any suggestion will be appreciated.

Although it is possible to create one disk group with 1 x SSD and 3 x 600 GB 15K RPM disks and another with 1 x SSD and 2 x 4 TB 7,200 RPM disks, I strongly advise you against it.

There is (currently) no way to force your VMs to use the 'fast' or the 'slow' disk groups.

All disk groups are aggregated into one large datastore.

Currently, the best practice and recommendation is to use identically configured disk groups.
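
If you want to sanity-check this, every host in the cluster should report the same single vsanDatastore. A minimal PowerCLI sketch (assuming a recent PowerCLI session already connected to vCenter; the cluster name is hypothetical):

# each VSAN cluster member should see the one aggregated vsanDatastore
Get-Cluster "VSAN-Cluster" | Get-VMHost |
    Get-Datastore | Where-Object { $_.Type -eq "vsan" } |
    Select-Object Name, Type, CapacityGB -Unique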

Tags: VMware

Similar Questions

  • Create Reporting data warehouse schemas with Oracle XE

    Is it possible to run the Core and DW loader imports with the Oracle XE database?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that the database does not have the "Advanced replication" feature:

    SQL> select * from v$option where parameter = 'Advanced replication';

    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE


    CIM log:

    Info Mon Feb 23 14:16 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports not previously run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting Loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports not previously run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports not previously run.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports not previously run.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting database setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    I can't tell which version of Oracle Commerce you installed from the log extract you've posted.

    ++++

    Thank you

    Gareth

    Please mark any reply as "Correct Answer" or "Helpful Answer" if it helps and answers your question, so that others can identify the correct/good reply among the many replies.

  • What are the best solutions for a data warehouse configuration in 10gR2

    I need help with solutions to propose to my client for the data warehouse upgrade.

    Current configuration: Oracle database 9.2.0.8. This database contains the data warehouse plus one more database on the same host. Their sizes are respectively 6 terabytes (3-year retention policy + current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. The current configuration performs really poorly.

    The customer cannot go for major architectural or configuration changes to the existing environment right now due to some constraints.

    However, they have agreed to separate the databases onto hosts separate from the ETL tools and BO objects. We also plan to upgrade the database to 10gR2 to achieve stability, get better performance, and overcome the current headaches.
    We cannot upgrade the database to 11g, as BO is at version 6.5, which is not compatible with Oracle 11g. And the customer cannot afford to spend on anything other than the database.

    So my role is essential: provide a sound solution for better performance and carry out a successful Oracle migration from one host to another (same platform and OS), in addition to the upgrade.

    What I am thinking of now is the following:

    Move the Oracle database and mart to the separate host.
    The host will be the same platform, i.e. HP Superdome with the 32-bit HP-UX operating system (we are unable to change to 64-bit as the ETL tool does not support it).
    Install the new Oracle 10g database on the new host and move the data to it.
    Explore all the new 10gR2 features relevant to the data warehouse, i.e. the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration behavior.
    Also look at RAC to provide an even better solution, as our main motivation is to show an extraordinary performance improvement.
    I need your help to prepare a good roadmap for my assignment. Please suggest.

    Thank you

    Tapan

    Two major changes from 9i to 10g which will impact performance are:
    a. changes in the default values used when executing GATHER_%_STATS
    b. changes in the behavior of the optimizer

    Oracle has published several white papers on them.

    Since 10g is no longer in Premier Support, you'll have some trouble logging SRs with Oracle Support - they will keep asking you to use 11g.

    The host will be the same platform, i.e. HP Superdome

    Why do you need to export if you are not changing platforms? You could install 9.2.0.8 on the new server, clone the database, and then upgrade it to 10gR2 / 11gR2.

    Hemant K Collette

  • Upgrade question for datastores - want a confirmation

    I have read the upgrade guide and think I have the answer, but would like confirmation.

    Here is my current scenario:

    Three ESX 3.5 Update 3 hosts share a single Fibre Channel LUN and two iSCSI LUNs. The LUNs are all VMFS3 format.

    vCenter 4.0. (It was just upgraded from 2.5 to 4.0 today.)

    I would like to add a fourth server running vSphere ESX 4.0 to the datacenter. I would like this server to share the same VMFS3 LUNs as the other ESX 3.5 hosts. Then I will move the VMs from the 3.5 servers to the 4.0 server one at a time, upgrading the VMware Tools on each VM as I move it. When I have enough VMs off one of the 3.5 servers, I can upgrade that host to 4.0, and keep slowly moving VMs off the 3.5 hosts until they are all on 4.0 hosts and the 3.5 hosts have been upgraded to 4.0.

    What I want to confirm is that I can add an existing shared VMFS3 LUN to the new 4.0 host without the 4.0 host formatting or otherwise destroying the VMFS3 datastore and the 75 VMs that live on it.

    Looking at page 10 of the upgrade guide, it says "no VMFS upgrade is required if you are upgrading from ESX 3.x with VMFS3 datastores", so I guess it's OK. I guess what I don't know is: if I share an existing LUN with a new ESX 4 server, does anything about this scenario change?

    Thanks for your help.

    What I want to confirm is that I can add an existing shared VMFS3 LUN to the new 4.0 host without the 4.0 host formatting it

    That's right - ESX 4.0 is non-destructive unless you explicitly tell it to erase/format partitions.

    If I share an existing LUN with a new ESX 4 server, does anything about this scenario change?

    I don't know exactly, but ESX 3.0.2 used VMFS 3.21, and 3.0.3 introduced VMFS 3.31.  ESX 4.0 is at 3.41 (I THINK), but it doesn't matter - they are ALL compatible; these are minor revisions and there is no need to reformat.
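
    If you want to double-check the VMFS version each host reports before and after the migration, it is visible from PowerCLI. A quick sketch (assuming a newer PowerCLI session connected to vCenter; the property path follows the vSphere API object model):

    # list each datastore with the VMFS version reported by vCenter
    Get-Datastore | Select-Object Name, Type,
        @{Name = "VmfsVersion"; Expression = { $_.ExtensionData.Info.Vmfs.Version }}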

  • Converter shows wrong size for datastores

    The datastore sizes are not updated in Converter. I freed some space on the datastores, but Converter still displays the old sizes. All ESX hosts show the right size, so what does Converter look at? How do you update it?

    Never mind - I refreshed the hosts and it shows the correct size now.
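
    For anyone who lands here with the same issue, the refresh can also be scripted with PowerCLI instead of clicking through each host (a sketch, assuming a connected session):

    # refresh storage info and rescan VMFS volumes on every host
    Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null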

    Moderators or OP, you can mark this thread as answered, since it appears the OP has solved the problem.

  • OBIEE reverse-engineering to go from SQL Server to a data warehouse

    Hello
    I am new to data warehouse modeling. We currently have an OBIEE environment in place where the data source is the transactional tables in SQL Server. The SQL Server data is to be moved to a non-Oracle data warehouse, and I need to produce a logical data model for the warehouse people at my company. Unfortunately, the SQL Server data was never modeled, so I based the model on the logical and physical diagrams/relationships in OBIEE.
    My question concerns the validity of the following relationships for use in a data warehouse, based on what is currently in OBIEE. When I put this model through Erwin, I wonder if I'm off base with the relationships (modeling-wise, nothing personal):

    Dimension 1 has a 0:M with Dimension 2
    Dimension 1 has a 0:M with Dimension 3
    Dimension 2 has a 0:M with Dimension 3
    Dimension 2 and Dimension 3 have a 0:M with Fact 1

    Through the use of aliases and the like, this works in OBIEE. Will this work as a data model in a data warehouse environment?

    Thank you!

    I think you started off on the wrong foot. I suggest you Google "Kimball methodology" and read a few articles. Your DWH model should not be based on your transactional tables. You should ask your business users which "questions" they want answered in the DWH, then model your DWH based on that. You cannot model a DWH without knowing which questions you must answer. For example, if your users want to know sales per day and per branch, you will have a sales fact with a sales amount measure, joined to two dimensions: branch and time. The number of facts will depend on the questions you need to answer, the data types, and their granularity.

  • Total latency on datastores

    Hello

    My host has some datastores; one of them has high latency. What should I do?

    We know total latency = device latency + kernel latency. If either device or kernel latency is high, how should we solve the problem?

    Thank you

    What I often find is that latency alerts for datastores are related to storage moves (Storage vMotion) - large chunks of data being moved to meet a project need. Are your datastores showing high latency without an obvious cause, for long periods of time?

    I had the internal vCenter team provide guidance on what they consider acceptable read and write latency, and made those hard KPI thresholds in vCOps. So when an alert occurs, it's a matter of understanding planned vs. unplanned, and then it's a matter of drilling into the data with your experts.

    In my humble OPINION - being the vCOps admin does not mean you know the environment better than your teams... It means you have useful data to share, so that together you can develop the right thresholds, plus an SOP for when latency occurs and the steps to follow. We are working through such things now... and testing the adapter for Symmetrix. It does not look at the HBA level, so we miss some key data points, but we are working on the txt adapter and other methods to get better data into vCOps.
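
    As a starting point for that drill-down, the per-datastore latency counters can be pulled with PowerCLI. A sketch (the host name is hypothetical, the counter names are the standard vSphere datastore performance counters, and the Instance column holds the datastore's internal ID):

    # worst read/write latency (ms) per datastore over the last day
    Get-VMHost "esx01.example.local" |
        Get-Stat -Stat "datastore.totalReadLatency.average",
                       "datastore.totalWriteLatency.average" -Start (Get-Date).AddDays(-1) |
        Group-Object Instance |
        Select-Object Name, @{Name = "MaxLatencyMs"; Expression = { ($_.Group | Measure-Object -Property Value -Maximum).Maximum }}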

  • RMAN backup and archivelog expiration guidelines for data warehouse environments

    Operating systems: Windows Server 2008
    DB: Oracle 11 g 2

    Are there guidelines on how often one should take an RMAN backup, and what expiration to assign to archivelogs, in data warehouse environments?
    It appears to me that once a day would be enough for backups in a data warehouse environment that is refreshed nightly. For expiration, I wouldn't expect less than 1 week.
    Cheers!

    I agree with damorgan

    See the links below for best practices.

    http://www.Oracle.com/technetwork/database/features/availability/311394-132335.PDF
    https://blogs.Oracle.com/Datawarehousing/entry/data_warehouse_in_archivelog_m

    Hope this helps,

    Regards,
    http://www.oracleracexpert.com
    Understand the power of the Oracle RMAN
    http://www.oracleracexpert.com/2011/10/understand-power-of-Oracle-RMAN.html
    Duplication of data base RAC with RMAN
    http://www.oracleracexpert.com/2009/12/duplicate-RAC-database-using-RMAN.html

  • Choice of design pattern for a data acquisition system

    Hi all

    I have a question about selecting the design pattern / architecture for a data acquisition system.

    Here are the details of the desired system:

    There is data acquisition hardware, and I need to drive it based on the settings on the user interface.

    The data acquisition period and the channel list to scan must be selected on the user interface. In addition, there are many interactions with the user interface; for example, if the user selects a channel to add to the scan list, I need to enable and make visible other controls on the user interface.

    When the user completes the channel selection, he will press the button to start the data acquisition. Then I also need to show the scanned values on a graph in real time and save them to a txt file.

    I know that I cannot use the producer/consumer pattern here as-is, because the data acquisition loop has to wait for the channel-scan settings, and it runs at a period set by the user. If the user interface loop runs faster than the consuming loop (the data acquisition loop), the queue will grow bigger and bigger. If I use a notifier instead, there will be some data loss coming from the user interface.

    Is there an idea about this? Is there a design pattern suitable for this case?

    Thanks in advance

    Best regards

    Veli BAYAR

    Embedded systems software and hardware engineer

    Veli,

    I recommend the producer/consumer pattern with some modifications.

    You might need three loops.  I can't tell for sure from your brief description.

    The User Interface loop responds to the user for configuration entries and the start/stop of acquisition.  The parameters and commands are passed to the Data Acquisition loop via a queue. In this loop is a state machine with Idle, Configuration, Acquisition, and Stop states (and perhaps others). The data is sent to the Processing loop through another queue. The Processing loop performs any data processing, displays the data to the user, and records to file. A notifier can be used to send the Stop command from the UI loop to the other loops.  If the amount of processing is minimal and the file write times are not too long, the processing-loop functions might be able to happen in the timeout case of the UI loop's event structure.  This makes things a little easier, but is not as flexible when changes need to be made.

    I'm not sure there is a named design pattern for this exact configuration, but it is essentially a combination of the producer/consumer (data) and producer/consumer (events) design patterns.
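
    The question is about LabVIEW, where this would be queues between loop structures, but the handoff itself is language-agnostic. Below is a minimal two-loop reduction in PowerShell, purely to illustrate the idea (the sample generator, scan period, and file name are all made up; a BlockingCollection stands in for the LabVIEW queue):

    # producer/consumer sketch: an acquisition "loop" feeds samples to a
    # processing "loop" through a thread-safe queue (illustrative only)
    $queue = New-Object 'System.Collections.Concurrent.BlockingCollection[double]'

    # producer: stands in for the data acquisition loop, running on its own thread
    $producer = [powershell]::Create().AddScript({
        param($q)
        foreach ($i in 1..20) {
            $q.Add([math]::Sin($i / 3.0))   # fake sample instead of a DAQ read
            Start-Sleep -Milliseconds 100   # stands in for the user-chosen scan period
        }
        $q.CompleteAdding()                 # tells the consumer no more data is coming
    }).AddArgument($queue)
    $async = $producer.BeginInvoke()

    # consumer: stands in for the processing loop (here it just logs to a txt file)
    foreach ($sample in $queue.GetConsumingEnumerable()) {
        "{0:N3}" -f $sample | Add-Content -Path "scan_log.txt"
    }

    $producer.EndInvoke($async)
    $producer.Dispose()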

    Lynn

  • Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 Installation Guide for Linux x86 (64-bit)

    Hi all

    I'm looking for the full installation guide for Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect adapters 9.6.1 for Linux x86 (64-bit). I just wonder if there is any URL you can recommend for the installation. Please advise.

    Looks like these are what you are asking for:

    http://docs.Oracle.com/CD/E25054_01/fusionapps.1111/e16814/postinstsetup.htm

    http://ashrutp.blogspot.com.by/2014/01/Informatica-PowerCenter-and.html

    Informatica PowerCenter 9 Complete Installation and Configuration Guide | Informatica Training & Tutorials

  • Need ideas to compare current Oracle EBS data against the data warehouse to make sure the data matches

    Hello, I am new to the Oracle forum. I'm a BI developer and I need to compare the Oracle EBS data in my organization with the data in the data warehouse to make sure they match. I am using Informatica for this process, pulling from both sources and comparing. Can someone give me a brief example of this process, or of similar methods with Informatica and its transformations, so that it can be useful? Thanks in advance. Let me know if you need more information about the process.

    Looks like you are trying to build a reconciliation process? That is, you may have implemented BIAPPS (or something custom) and now want to check your ETL? If so, it's good enough for a test case - we usually start at the top level (aggregate numbers for each business group, for example), then a subset of other queries, for example per level in the org hierarchy, by position, by dates, etc.

    and much more expensive than the implementation of OBIEE

    I don't think there are many things in the world more expensive than an OBIEE implementation!

  • Updating datastores for a VMware View pool using PowerCLI

    I'm using this as a starting point:

    VMware View 5.2 Documentation Library

    I want to combine both functions and use variables that point to .txt files listing my new and old datastores.

    I edited it a bit, combining both functions and the variables for the old and new lists, but I don't know the syntax for feeding the variables from the text files.  Any PowerShell / PowerCLI gurus?

    # PowerShell function to add new, then remove old, datastores in an automatic pool.

    # UpdateDatastoresForAutomaticPool

    # Parameters

    # $Pool ID of the pool to update.

    # $OldDatastore full path to OldDatastore.txt listing datastores to remove.

    # $NewDatastore full path to NewDatastore.txt listing datastores to add.

    $Pool = "C:\powercli\PersistentPools.txt"

    $OldDatastore = "C:\powercli\OldDatastore.txt"

    $NewDatastore = "C:\powercli\NewDatastore.txt"

    function RemoveDatastoreFromAutomaticPool {
        param ($Pool, $OldDatastore)
        $PoolSettings = Get-Pool -Pool_id $Pool
        $currentdatastores = $PoolSettings.datastorePaths
        $datastores = ""
        foreach ($path in $currentdatastores.split(";")) {
            if (-not ($path -eq $OldDatastore)) {
                $datastores = $datastores + "$path;"
            }
        }
        Update-AutomaticPool -Pool_id $Pool -datastorePaths $datastores
    }

    function AddDatastoreToAutomaticPool {
        param ($Pool, $NewDatastore)
        $PoolSettings = Get-Pool -Pool_id $Pool
        $datastores = $PoolSettings.datastorePaths + ";$NewDatastore"
        Update-AutomaticPool -Pool_id $Pool -datastorePaths $datastores
    }

    Thank you

    -Matt

    You are using the literal file paths instead of the contents of the files. Assuming each file is a list with one entry per line, you need to change your code to actually read the data, for example:

    $oldstores = Get-Content "C:\powercli\OldDatastore.txt"

    foreach ($path in $currentdatastores.split(";")) {
        if (-not ($oldstores -contains $path)) {
            ...
        }
    }
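
    Putting the two together, the removal function might end up like this (an untested sketch: it assumes the View PowerCLI Get-Pool / Update-AutomaticPool cmdlets used above and one datastore path per line in the txt file):

    function RemoveDatastoresFromAutomaticPool {
        param ($Pool, $OldDatastoreFile)
        # one datastore path per line in the txt file (assumed format)
        $oldstores = Get-Content $OldDatastoreFile
        $PoolSettings = Get-Pool -Pool_id $Pool
        $datastores = ""
        foreach ($path in $PoolSettings.datastorePaths.split(";")) {
            # keep only the paths that are not listed in the removal file
            if (-not ($oldstores -contains $path)) {
                $datastores = $datastores + "$path;"
            }
        }
        Update-AutomaticPool -Pool_id $Pool -datastorePaths $datastores
    }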

  • The number of vSphere HA heartbeat datastores for this host is 1

    I have only one giant LUN created and no space to create another.

    So now HA gives me this error on my hosts: Capture.JPG

    What should I do?

    Right-click the cluster, click Edit Settings, go to vSphere HA -> Advanced Options and add the entry das.ignoreInsufficientHbDatastore with the value true... Disable and re-enable HA on the cluster, and the warning will disappear.

    VMware KB: HA error: The number of heartbeat datastores for host is 1, which is less than required: 2
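
    The same advanced option can also be set with PowerCLI (a sketch; the cluster name is hypothetical, and New-AdvancedSetting assumes PowerCLI 5.x or later):

    # add the HA advanced option, then bounce HA so it takes effect
    $cluster = Get-Cluster "Cluster01"
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.ignoreInsufficientHbDatastore" -Value "true" -Confirm:$false
    Set-Cluster -Cluster $cluster -HAEnabled:$false -Confirm:$false
    Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false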

  • The number of heartbeat datastores for host is 0, which is less than required: 2

    Hello

    I'm having trouble creating my DRS cluster + storage DRS; I have 3 ESXi 5.1 hosts for the task.

    First I created the cluster, no problem with that, then the storage DRS was created, and now I see this in the Summary tab:

    "The number of heartbeat for the host data warehouses is 0, which is less than required: 2".

    I searched the web, and there are similar problems when people have only a single datastore (the one that came with ESXi) and need to add another, but in my case... vCenter doesn't detect any...

    In the storage views I see the datastore (VMFS), but for some strange reason the cluster does not.

    In order to reach the minimum number of heartbeat datastores (2), can I create an NFS datastore and mount it on ALL 3 ESXi hosts? Would vCenter consider that a valid config?

    Thank you

    You probably only have local datastores, which are not what HA requires for this feature (datastore heartbeating) to work properly.

    You will need either 2 iSCSI, 2 FC, or 2 NFS volumes... or a combination of any of them, for this feature to work. If you don't want to use this feature, you can also turn it off:

    http://www.yellow-bricks.com/2012/04/05/the-number-of-vSphere-HA-heartbeat-datastores-for-this-host-is-1-which-is-less-than-required-2/
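
    If you do go the NFS route, the same export can be mounted on all three hosts in one go with PowerCLI (a sketch; the cluster name, NFS server, export path, and datastore name are hypothetical):

    # mount one shared NFS export on every host in the cluster
    Get-Cluster "Cluster01" | Get-VMHost | ForEach-Object {
        New-Datastore -Nfs -VMHost $_ -Name "NFS-HB01" -NfsHost "nas01.example.local" -Path "/exports/vmware_hb"
    }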

  • Get-Stat -Disk for virtual machines on NFS datastores

    Hi all

    Does Get-Stat -Disk work for VMs on NFS datastores?

    $myVM | Get-Stat -Disk

    It doesn't seem to work for VMs on NFS datastores, but it does work for VMs on VMFS datastores.

    According to a VMware presentation at http://webcache.googleusercontent.com/search?q=cache:h78Db7LqHcwJ:www.slideshare.net/vmwarecarter/powercli-workshop+%2Bget-stat+%2Bnfs&cd=2&hl=en&ct=clnk&gl=au&source=www.google.com.au

    "WARNING: NFS performance statistics are not available (coming in a future version of vSphere)."

    When will these statistics be available for NFS datastores?

    Kind regards

    marc0

    The answer is in the Instance property of the data that Get-Stat returns.

    (1) Get-Stat -Disk ==> the canonical name of the LUN on which the hard disk resides

    (2) Get-Stat virtualdisk stats ==> the SCSI ID of the virtual disk inside the VM

    (3) Get-Stat datastore stats ==> the name of the datastore

    (1) gives you statistics to view the virtual machine's I/O activity at the LUN level. For a VM with several virtual disks on the same datastore, this will show the combined I/O statistics. It will also include the I/O the VM generates on the LUN outside its virtual disks, such as swap and flash-related files...

    (2) gives statistics for one specific virtual disk of your virtual machine.

    (3) gives the I/O statistics of your VM against a specific datastore. Interesting when you have a datastore with multiple extents (multiple LUNs).
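
    So for VMs on NFS, the datastore and virtualdisk metric groups are the ones to query, in vSphere versions where those counters are exposed. A sketch (the counter name is a standard vSphere performance counter; realtime stats assume the VM's host is connected):

    # per-datastore read rate for one VM; Instance identifies the datastore
    Get-VM "myVM" | Get-Stat -Stat "datastore.read.average" -Realtime |
        Select-Object Timestamp, Instance, Value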

    I hope that clarifies it a bit.
