Placeholder datastores

Hi all

I am confused about how to use the placeholder datastore... What is best practice?

I understand that you should not give the placeholder a very large datastore, right? For example, is 5 to 10 GB of reserved datastore space enough? What size is recommended? I have 200 VMs replicated. How much datastore space should I use?

Would you advise me to use the datastores where the virtual machines themselves are stored?

Thank you

It is best to designate a small datastore as the placeholder datastore, say 10 GB. That is more than enough for a couple of hundred virtual machines.

When your virtual machines are replicated from the protected site to the recovery site, whether you use vSphere Replication or array-based replication, SRM helps you create Protection Groups and then create Recovery Plans using that same set of Protection Groups. As part of this process, at your recovery (destination) site, the placeholder datastore is used to hold the placeholder VMs [the small VM files: .vmx, .vmsd, .vmxf, etc.] for each protected VM.

Because only these small files are involved in creating the placeholder VM, it does not consume much space, so a small datastore is perfectly fine as the placeholder datastore.

The actual VM data is replicated from the source datastore to the destination datastore. During a failover/recovery, these placeholder VMs are replaced by the actual VMs.

In addition, I should also mention that if you replicate from Site A (source) to Site B (destination), you would ideally only need to configure a placeholder datastore on the destination site, but please configure one for your source site too, so that 'Reprotect' and 'Failback' work fine.

I have 200 VMs replicated. How much datastore space should I use?

One small datastore per site as the placeholder datastore would be sufficient to store the placeholders for these 200 VMs.

Would you advise me to use the datastores where the virtual machines themselves are stored?

No, please use a separate datastore as the placeholder datastore.
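
If it helps, here is a minimal PowerCLI sketch for sanity-checking a candidate placeholder datastore before mapping it in SRM. It assumes an existing Connect-VIServer session to the recovery-site vCenter, and the datastore name 'PlaceholderDS' is only an example:

    # Example only: 'PlaceholderDS' is a hypothetical small, non-replicated datastore.
    $ph = Get-Datastore -Name 'PlaceholderDS'

    # Placeholder VMs hold only the small descriptor files (.vmx, .vmxf, .vmsd),
    # so a 5-10 GB datastore leaves plenty of headroom even for ~200 protected VMs.
    $ph | Select-Object Name, CapacityGB, FreeSpaceGB

    if ($ph.FreeSpaceGB -lt 1) {
        Write-Warning "Less than 1 GB free on $($ph.Name) - consider cleaning it up."
    }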

Tags: VMware

Similar Questions

  • The best location for the vCenter VM and placeholder datastores

    Hello

    I have an SRM installation with 2 sites.  The sites are connected by a metro layer 2 Ethernet trunk between the sites.

    Site A

    1 EqualLogic PS6100

    2 ESX servers configured as a cluster

    There is a single datastore volume on the SAN with all the virtual machines in this cluster

    There is one instance of vCenter at this site, located on the same datastore as the other virtual machines

    There are also a few other volumes on the SAN used for RDM disks in virtual machines

    I have also created a small 1 GB volume as a VMFS datastore and have mapped it in SRM to use as the placeholder datastore

    What are your thoughts on that?  Is this the best way?  Could I use local storage on the ESX servers?  Could my main VM datastore be used for the placeholders?

    SRM is installed on the same VM as vCenter

    SRM uses SQL Express on the local vCenter VM

    vCenter also uses SQL Express

    Site B

    1 EqualLogic PS4100

    1 ESX server, so no clustering

    There is a single datastore volume on the SAN with all the virtual machines running on the ESX server

    Per SRM requirements, there is a separate instance of vCenter running as a virtual machine, and it is not in Linked Mode.  Should it be?

    The number of volumes on the SAN and their purposes are the same as Site A; the only difference is that the volume sizes are smaller because fewer virtual machines run on this site.

    Replication has been implemented and verified to be working bi-directionally.

    We use array-based replication

    OK, here's the problem... When I do a planned recovery, the process stops all the virtual machines on this site, which includes the vCenter/SRM server.  I guess that happens because it's on the same datastore as my other virtual machines?

    If I run the recovery in disaster mode, vCenter is then moved to the other site and powered on.  Because the sites are layer 2 and on the same VLAN, SRM shows the sites as connected, since the two SRM instances can still reach each other even though they are now running on the same site.  This does not seem right to me.

    Can someone suggest a better configuration?

    Thank you

    Brian

    Hello

    There is one instance of vCenter at this site, located on the same datastore as the other virtual machines

    What are your thoughts on that?  Is this the best way?

    Apart from the vCenter VM residing on the replicated datastore, it's OK.

    Could I use local storage on the ESX servers?

    I think you can, but it is not recommended, because it creates a dependency on the availability of a single ESX host.

    Could my main VM datastore be used for the placeholders?

    I would not recommend that. It would create a mess on the datastore. Stick with the 1 GB placeholder datastore you created, and do not replicate it.

    SRM uses SQL Express on the local vCenter VM

    vCenter also uses SQL Express

    Keep in mind that SQL Express is only suited to very small environments (I don't remember the exact limits right now). Make sure you do not exceed them.

    1 ESX server, so no clustering

    There is a single datastore volume on the SAN with all the virtual machines running on the ESX server

    Does this host have sufficient capacity to run all the virtual machines together (Site A and Site B)?

    I recommend having two hosts to parallelize and accelerate the recovery process (multiple virtual machines can be powered on simultaneously).

    Per SRM requirements, there is a separate instance of vCenter running as a virtual machine, and it is not in Linked Mode.  Should it be?

    You don't have to configure them in Linked Mode. That is for ease of management only (licenses, roles and inventory).

    OK, here's the problem... When I do a planned recovery, the process stops all the virtual machines on this site, which includes the vCenter/SRM server.  I guess that happens because it's on the same datastore as my other virtual machines?

    Yep. Do not put the vCenter VM on the VM datastore, and do not replicate it at all.

    If I run the recovery in disaster mode, vCenter is then moved to the other site and powered on.  Because the sites are layer 2 and on the same VLAN, SRM shows the sites as connected, since the two SRM instances can still reach each other even though they are now running on the same site.  This does not seem right to me.

    Do not replicate the vCenter/SRM VM and that will not happen.

    Michael.

  • How to change the placeholder datastore?

    Hello

    This might be a silly question, but after my first selection of placeholder datastores I want to change them; however, when I click to select them, the original datastores I chose are greyed out... so I can't deselect them?

    I can add more datastores as placeholders, but I can't remove the currently selected ones? Do I have to disable the replication first or something?

    Cheers,

    Bilal

    Check if this helps: Site Recovery Manager - Change a Placeholder Datastore | VMware vSphere Blog - VMware Blogs

  • Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 Installation Guide for Linux x86 (64-bit)

    Hi all

    I'm looking for the complete installation guide for Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 for Linux x86 (64-bit). I just wonder if there is any URL that you can recommend for the installation. Please advise.

    Looks like these are what you are asking for:

    http://docs.Oracle.com/CD/E25054_01/fusionapps.1111/e16814/postinstsetup.htm

    http://ashrutp.blogspot.com.by/2014/01/Informatica-PowerCenter-and.html

    Informatica PowerCenter 9 Installation and Configuration Guide Complete | Informatica Training & Tutorials

  • Need ideas for comparing current Oracle EBS data against the data warehouse to establish that the data matches

    Hello, I am new to the Oracle forums. I'm a BI developer and I need to compare Oracle EBS data in my organization with the data in the data warehouse to make sure they match. I am using Informatica for this process, pulling from the two sources and comparing them. Can someone give me a brief example of how to do this process, or of similar methods with Informatica and its transformations, that might be useful? Thanks in advance. Let me know if you need more information about the process.

    Looks like you are trying to do a reconciliation process? That is, you may have implemented BIAPPS (or something custom) and now want to check your ETL? If that is the case then it's a good test case - we usually start at a high level (actual totals for each business group, for example), then a subset of other queries, for example per level in the org hierarchy, by position, by dates, etc.

    and much more expensive than the implementation of OBIEE

    I don't think there are many things in the world that are more expensive than an OBIEE implementation!

  • Do we need a data warehouse if we only create dashboards and reports in OBIEE?

    Hello! I'm new to OBIEE.

    My organization has decided to build their reports and dashboards using OBIEE. I am involved in this effort, but I don't have in-depth knowledge of OBIEE.  My question is: do we need to set up a data warehouse? Or do I just need to install OBIEE, create a repository, then create a data source in BI Publisher and then create dashboards or reports?

    I'm confused, so please help me in this regard. Please share any document or link where I can easily understand these things. Thank you

    Please share any document or link where I can easily understand these things. Thank you

    OBIEE is not a piece of software you can run without a good understanding of its complex concepts. I would really recommend attending a training course, or at least reading a book (for example this or this). There are MANY general blog posts on OBIEE, many of which are of poor quality and are just step-by-step guides on how to do a particular task without explaining the overall picture.

    If you want to use OBIEE and make it a success, you have to learn and understand the basics.

    To answer your question directly:

    - BI Publisher is not the same thing as OBIEE. It is a component of it (but is also available standalone). OBIEE makes data accessible through 'Dashboards', which are made up of 'Analyses' written in the Answers tool. Dashboards can also contain BI Publisher content if you want.

    - OBIEE can report against many different data sources, one or more data warehouses as well as transactional systems. Most OBIEE implementations that perform well are built against a dedicated DW, but it is not a mandatory requirement.

    - Whether you report against a real DW or not, when you build the OBIEE repository you build a "virtual" data warehouse; in other words, you dimensionally model all of your business data into a set of logical star schemas.

  • Creating data warehouse schemas for Reporting with Oracle XE

    Is it possible to run the loader import with an Oracle XE database for the DW?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that in this database edition we do not have the "Advanced replication" feature.

    SQL> select * from v$option where parameter = 'Advanced replication';

    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE

    CIM log:

    Mon Feb 23 14:16 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager top level module list for Datasource Reporting Loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager top level module list for Datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been executed.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager top level module list for Datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been executed.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager found 2 of 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting data setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified against Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    Which version of Oracle Commerce are you installing? I can't tell from the log extract you've posted.

    ++++

    Thank you

    Gareth

    Please mark any reply as "Correct Answer" or "Helpful Answer" if it helps and answers your question, so that others can identify the correct/helpful replies among the many replies.

  • Integration Hub - is there a need for an Eloqua to marketing data warehouse integration?

    We believe that at the heart of real enterprise Eloqua adoption, where it often has to embrace existing platforms and architecture, there is a great need for marketing middleware sitting between IT and marketing systems.

    So we have developed the CleverTouch Marketing Integration Hub, allowing Eloqua users to dynamically integrate with their data warehouse and existing IT infrastructure.

    Is there a requirement for such a product in the enterprise space?

    To date, we have been delighted with Eloqua's enthusiasm and support - real enterprise thinking - but we would also like to get comments from end users and the partner community.

    http://bit.LY/gfbEiI

    YES!  Something is needed to fill this gap in enterprise-class reporting.  Please tell us more.

  • EPM 11.1.2 Essbase data warehouse infrastructure

    Hello

    We will be implementing Hyperion Planning 11.1.2 and we intend to have the data warehouse push budget data into Hyperion Planning, and have Hyperion push to and retrieve data from Essbase.  Does it also make sense to push and pull data between Essbase and the data warehouse? To make it clearer: we take the budget data from the data warehouse and push it to Hyperion Planning.  The budget data will also be pushed from Essbase to the data warehouse.  Hyperion Planning will then do the what-if analysis and push back to Essbase, and Essbase will push the what-if scenarios back to the data warehouse.

    Please let me know if the scenario needs clarification.

    Thank you

    I have done something similar in the past; the concept is perfectly feasible.

    Cheers

    John

    http://John-Goodwin.blogspot.com/

  • Removing SDRS datastores from a host

    Hello

    I have a scenario where I want to migrate a host and VMs between two clusters. Unfortunately I have a number of SDRS datastores attached to the host; no virtual machines are running from these datastores on this host... I'm trying to unmount the datastores but, as expected, I get the error indicating that the datastores are part of an SDRS cluster.

    Is it possible to remove the datastores from the host without disruption?

    Thank you

    Steve

    So, what you want is to remove some datastores from the SDRS cluster and move a host from one DRS cluster to another, is that it?

    Try the following steps:

    (1) Remove the datastores from the SDRS cluster - just move the datastores outside the SDRS cluster;

    (2) Move the host to the new target DRS cluster; if you have virtual machines running on it and cannot put the host into maintenance mode, you will need to disconnect the host, remove it from the inventory and then add it again to the new cluster (see the sketch after these steps);

    (3) If you wish, add the datastores to the SDRS cluster of the new DRS cluster.
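
    For reference, a rough PowerCLI sketch of step 2, using hypothetical host and cluster names and assuming the running VMs can be evacuated or powered off; otherwise use the disconnect / remove / re-add route:

    # Hypothetical names - adjust to your environment.
    $vmHost = Get-VMHost -Name 'esx01.lab.local'
    $target = Get-Cluster -Name 'NewCluster'

    # Clean path: evacuate the host, move it, then reconnect it.
    Set-VMHost -VMHost $vmHost -State Maintenance
    Move-VMHost -VMHost $vmHost -Destination $target
    Set-VMHost -VMHost $vmHost -State Connected

    # Alternative when maintenance mode is not possible (VMs keep running):
    # Set-VMHost -VMHost $vmHost -State Disconnected
    # Remove-VMHost -VMHost $vmHost -Confirm:$false
    # Add-VMHost -Name 'esx01.lab.local' -Location $target -User root -Password 'xxx' -Force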

  • Selecting datastores in a provisioning script

    Hi all

    First of all, I want to say thank you for all the help provided in these communities.  It has been very valuable over the last few years.

    I have been working on a provisioning script for over a week now and have it almost ready for release, but I got stuck on quite a basic element - selecting datastores.

    The idea is that we can use this to launch several identical environments on demand - ripe for automation!

    We get the number of machines required in $clcount, and $dslist can be equal to something like this...

    Name        FreeSpaceGB    CapacityGB
    SAN-ds-3    3,399.11       5,119.75
    SAN-ds-4    1,275.26       5,119.75
    SAN-ds-2    661.813        5,119.75
    SAN-ds-5    292.342        5,119.75
    SAN-ds-8    273.204        5,119.75

    My method works as long as the number of machines is less than the number of available datastores, but fails if the number of machines exceeds the number of available datastores.

    # Assumes an existing PowerCLI connection and that $template is defined elsewhere.
    $resources = Get-Cluster "Compute 1"
    $OSSpec = Get-OSCustomizationSpec "Base 2012 R2"
    $dslist = Get-Datastore | Where-Object {$_.Name -match "SAN" -and $_.FreeSpaceGB -gt 200} | Sort-Object FreeSpaceGB -Descending
    $folder = Get-Folder "Lab 2"
    $clcount = 17
    $envn = "Lab2-"
    $OSSpec = $OSSpec | New-OSCustomizationSpec -Name Temp-Spec -Type NonPersistent -Confirm:$false
    foreach ($num in 1..$clcount) {
        $suffix = "{0:D2}" -f $num
        # Fails once $num exceeds the number of datastores in $dslist.
        $datastore = $dslist[$num - 1]
        $OSSpec = $OSSpec | Set-OSCustomizationSpec -NamingScheme Fixed -NamingPrefix "APPCL$suffix"
        New-VM -Name "${envn}APPCL$suffix" -Template $template -OSCustomizationSpec $OSSpec -Location $folder -ResourcePool $resources -Datastore $datastore
    }
    ## End build client machines
    $OSSpec | Remove-OSCustomizationSpec -Confirm:$false
    
    

    I know this would be easy to solve with datastore clusters and SDRS, but I believe that would always choose the datastore with the most free space, and as you can see our environment can be a little unbalanced, so I am trying to build a little more intelligence into how these machines are distributed across the datastores.

    Any help or pointers in the right direction would be greatly appreciated!

    Use % (the remainder of a division) to transform $num into something that will always be less than the number of datastores:

    $datastore = $dslist[$num % $dslist.Count]

    Then change the sort to ascending.
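
    Putting that together with the loop from the question, a minimal sketch (same variable names as above, sort switched to ascending as suggested) might look like this:

    # Ascending sort, as suggested above.
    $dslist = Get-Datastore | Where-Object {$_.Name -match "SAN" -and $_.FreeSpaceGB -gt 200} | Sort-Object FreeSpaceGB

    foreach ($num in 1..$clcount) {
        $suffix = "{0:D2}" -f $num
        # The modulo index wraps around once $num exceeds the number of datastores.
        $datastore = $dslist[$num % $dslist.Count]
        $OSSpec = $OSSpec | Set-OSCustomizationSpec -NamingScheme Fixed -NamingPrefix "APPCL$suffix"
        New-VM -Name "${envn}APPCL$suffix" -Template $template -OSCustomizationSpec $OSSpec -Location $folder -ResourcePool $resources -Datastore $datastore
    }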

  • Cluster datastores

    I would like to create a dashboard that displays a list of all VMware clusters and, for each one, the datastores that it uses. I am looking for a way to create the dashboard without dragging each datastore into the dashboard individually.

    1. Is this possible with a query in the user interface?

    Data > VMWare > Datacenters > (datacenter name) > Clusters > (cluster name) > ESX Hosts > (host name) > Storage > Datastores

    If you just want the names, and to keep it a little cleaner, I would try this; it also removes the need for the additional query and keeps things neat.

    clusters = server.get("QueryService").queryTopologyObjects("!VMWCluster")
    output = []
    for (cluster in clusters)
    {
        datastores = cluster.esxServers.datastores
        for (datastore in datastores)
        {
            map = [VMWCluster: cluster.name, VMWDatastore: datastore.name]
            output.add(map)
        }
    }
    return output

  • ESXi 5.5u2 host will not mount FC datastores

    I am the admin in a test lab, so many times I have to "make do" with whatever hardware I have.  I recently repurposed a Dell R910 with 64 GB of memory as an ESXi 5.5u2 host.  It has some SAS storage (2.5 TB), some directly connected FC storage (11 TB), and some FC HBAs for use by VMs to send data to the tape library.

    Recently we had a power failure, and this host did not reconnect to the FC storage when it came back up.  (Having to "make do" with what I have means not having UPS capacity.)  I can tell the host's HBAs see it properly and see the LUNs on the RAID controllers, but it refuses to mount.  I go through the Add Storage wizard, and it sees that there is a VMFS datastore there, but it still refuses to mount.  I also tried removing the passthrough of the two FC HBAs that had been set up for that purpose, but it made no difference.

    I now have three users unable to work because they cannot access their virtual machines on this FC storage.  I can't even move them off to use the SAS storage.

    I think it might be file system corruption, but I'm not sure.  Does anyone have suggestions?

    Have you already tried to mount the datastores from the command line (see, for example, http://kb.vmware.com/kb/1011387)?

    André

  • Generating a report of VM disks and datastores

    Is it possible to generate a report in vCenter of the VMs, showing which datastores their disks were added to?

    For example, I have VMs that were created long ago and much later had disks added on other datastores; I need to know whether those disks are free, or whether I can delete them from the datastores.

    I recommend you use RVTools, which is free; you can download it from the following link: http://www.robware.net/

    For your need, after running RVTools go to the vHealth tab and look for "Zombie VMDK"; those are VMDK files found on the datastores that are not associated with any VM, and are therefore wasting datastore space.
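
    As a cross-check from PowerCLI, here is a minimal sketch (assuming an active Connect-VIServer session) that lists the VMDK paths currently attached to registered VMs; anything RVTools flags as a Zombie VMDK should not appear in this list:

    # Collect every VMDK path attached to a registered VM.
    $inUse = Get-VM | Get-HardDisk | Select-Object -ExpandProperty Filename

    # Review these against the RVTools vHealth output before deleting anything
    # from the datastores.
    $inUse | Sort-Object -Unique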

  • Renaming datastores

    Hi all.  I have a simple script to rename datastores.  The input is a list of datastore names.  The new name is the current datastore name prefixed with "DoNotUse_".  However, it is very slow.

    Any suggestions on speeding it up?  Thank you very much

    $DSList = Get-Content -Path c:\temp\DSlist.txt

    Foreach ($DS in $DSList) {
        $NewDS = "DoNotUse_$DS"
        Get-Datastore -Name $DS | Set-Datastore -Name $NewDS
    }

    See if the following is faster:

    $dsList = Get-Content -Path c:\temp\DSlist.txt

    $filter = $dsList -join '|'

    Get-View -ViewType Datastore -Property Name -Filter @{'Name' = $filter} | %{
        $_.Rename("DoNotUse_$($_.Name)")
    }
