Placeholder datastores for SPPGs

I read somewhere that a placeholder datastore is not required for storage policy protection groups (SPPGs). My question is: why is it not necessary?

Hi Eric,

Here is an excerpt from the SRM 6.1 documentation (Site Recovery Manager 6.1 Documentation Center) that may help you:

For storage policy protection, Site Recovery Manager applies the inventory mappings when you run a recovery plan that contains a storage policy protection group.

With array-based replication and vSphere Replication protection groups, Site Recovery Manager applies the inventory mappings at the moment you configure protection on a virtual machine. With storage policy protection groups, protection being dynamic, Site Recovery Manager applies the inventory mappings only at the moment you run a recovery plan. Virtual machine placement decisions are made according to the inventory mappings during execution of the recovery plan, and Site Recovery Manager does not create placeholder VMs on the recovery site.

Hope this is sufficient,

Daniel G.

Tags: VMware

Similar Questions

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, which I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I make 10 separate 1.8 TB datastores, I can see a problem down the road when I need to expand a VMDK but can't because there is not enough free space on its datastore; that would be less of a problem if I had one giant datastore from the start.

    Thank you.

    First of all, this is one of those "how long is a piece of string?" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the IO profile, the type of virtual machines, etc. etc.

    Things to consider are, for example, whether you have storage that deduplicates and whether storage cost is a major factor (and so on).
    Of course... almost always, a reduction in cost comes with a drop in performance.

    In any case, as a very loose rule of thumb, I (in most cases) size LUNs somewhere between 400 and 750 GB and rarely (if ever) put more than 30 VMDKs per LUN.
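    As a back-of-the-envelope check of that rule of thumb, here is a short sketch of the arithmetic (the function name and sample numbers are illustrative, not from the thread):

    ```python
    # Minimal sizing sketch: how many datastores do you need if LUNs are
    # capped at 750 GB and 30 VMDKs each? Both limits must be satisfied.
    import math

    def min_datastores(total_gb, vmdk_count, max_lun_gb=750, max_vmdks_per_lun=30):
        """Return the minimum datastore count satisfying both limits."""
        by_capacity = math.ceil(total_gb / max_lun_gb)
        by_vmdk_count = math.ceil(vmdk_count / max_vmdks_per_lun)
        return max(by_capacity, by_vmdk_count)

    # E.g. 10 x 1.8 TB of raw capacity (18000 GB) holding 120 VMDKs:
    print(min_datastores(18000, 120))  # 24 - capacity is the limiting factor
    ```

    Whichever limit is hit first (capacity or VMDK count) dictates the number of datastores.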

    I almost always redirect this kind of request to the following resources:

    First of all, the configuration maximums:
    http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf

    http://www.gabesvirtualworld.com/?p=68
    http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1350469,00.html
    http://communities.vmware.com/thread/104211
    http://communities.vmware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although André's post above covers most of them)

  • Database mode for the data warehouse

    I am running Oracle 10g Release 2 on Solaris 5.10.

    Is the recommendation that the data warehouse database should run in dedicated server mode correct?

    Regards

    Long-running queries are unsuitable for shared server mode, because a long query will tie up a shared server and reduce the number of shared servers available.
    Data warehouses usually run long-running queries.
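    For reference, dedicated server connections can be requested per connect descriptor on the client side; a minimal tnsnames.ora sketch (the host, port and service name below are placeholders, not taken from this thread):

    ```
    DW =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dw-host.example.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = dw.example.com)
          (SERVER = DEDICATED)
        )
      )
    ```

    With SERVER = DEDICATED, each session gets its own server process, so a long-running query only occupies its own process rather than a shared server.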

    ----------
    Sybrand Bakker
    Senior Oracle DBA

  • PowerCLI script for DatastoreClusters, datastores and their size info, datacenter, clusters

    Hello - I am looking to list the DatastoreClusters and then the datastores within them, along with their size info (total size, used space, free space, provisioned space, uncommitted space) and the total number of virtual machines on each datastore. I would also like to know which datacenter and cluster they are in. Is this possible? I might want to limit the output to datastores that have 13 percent free space or less.

    Thank you

    LORRI

    Of course, try it this way:

    Get-Datastore |
    Select @{N='Datacenter';E={$_.Datacenter.Name}},
        @{N='DSC';E={Get-DatastoreCluster -Datastore $_ | Select -ExpandProperty Name}},
        Name, CapacityGB,
        @{N='FreespaceGB';E={[math]::Round($_.FreespaceGB,2)}},
        @{N='ProvisionedSpaceGB';E={[math]::Round(($_.ExtensionData.Summary.Capacity - $_.ExtensionData.Summary.FreeSpace + $_.ExtensionData.Summary.Uncommitted)/1GB,2)}},
        @{N='UnCommittedGB';E={[math]::Round($_.ExtensionData.Summary.Uncommitted/1GB,2)}},
        @{N='VM';E={$_.ExtensionData.VM.Count}} |
    Export-Csv report.csv -NoTypeInformation -UseCulture
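    If you also want the "13 percent of free space or less" restriction from the question, one way is to insert a filter such as `Where-Object {$_.FreespaceGB / $_.CapacityGB -le 0.13}` into the pipeline before the Select. As a language-neutral sketch of the same arithmetic (the function name and sample data below are made up for illustration, not PowerCLI output):

    ```python
    # Sketch of the "13% free space or less" filter from the question.
    def low_space(datastores, threshold=0.13):
        """Return names of datastores whose free/capacity ratio is <= threshold."""
        return [d["Name"] for d in datastores
                if d["FreespaceGB"] / d["CapacityGB"] <= threshold]

    stores = [
        {"Name": "ds01", "CapacityGB": 1000.0, "FreespaceGB": 100.0},  # 10% free
        {"Name": "ds02", "CapacityGB": 500.0, "FreespaceGB": 200.0},   # 40% free
    ]
    print(low_space(stores))  # ['ds01']
    ```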

  • Example of a workflow that handles multiple datastores


    I understand that there is no out-of-the-box workflow that manages the assignment of several datastores. Can someone share an example of how to do this? I'm trying to use one of the canned workflows to create virtual machines from templates.

    Found a post that did what I wanted:

    https://communities.vmware.com/thread/464838

  • How can I partition a 5.5 TB RAID logical volume on an ML350 G6 server to create ESX datastores?

    Today we received a new ML350 G6 server. It has 8 hard disks of 1 TB each. I have created a single RAID 6 volume, and the logical unit has 5.5 TB of usable space. The server boots an ESXi 3.5 image from a USB key provided by HP with the server. When I log in and check Configuration > Storage Adapters > Smart Array P410i for the physical disk controller, I can't see any disk. I know that ESX will not support a volume size of more than 2 TB.

    I don't want to create smaller RAID volumes (for example two RAID 6 volumes with 4 hard drives each), because that would reduce the amount of disk space I can use.

    Is it possible to create 3 partitions on this 5.5 TB RAID volume with the configuration below? Does anyone know of any tools I can use to create the partitions? I tried GParted, but it doesn't work.

    LUN 0 - 2 TB

    LUN 1 - 2 TB

    LUN 2 - 1.5 TB

    You cannot have a disk that is larger than 2 TB.

    Create multiple logical drives, not different partitions.

    André

  • The best location for the vCenter VM and placeholder datastores

    Hello

    I have an SRM installation with 2 sites.  The sites are connected by a layer 2 metro Ethernet trunk between the sites.

    Site A

    1 EqualLogic PS6100

    2 ESX servers configured as a cluster

    There is a single datastore volume on the SAN with all the virtual machines in this cluster

    There is one instance of vCenter at this site, located in the same datastore as the other virtual machines

    There are also a few other volumes on the SAN used for RDM disks in virtual machines

    I have also created a small 1 GB volume as a VMFS datastore and have mapped it in SRM to use as a placeholder datastore

    What are your thoughts on that?  Is this the best way?  Could I use local storage on the ESX servers?  Or could my main VM datastore be used for the placeholders?

    SRM is installed on the same VM as vCenter

    SRM uses SQL Express on the local vCenter VM

    vCenter also uses SQL Express

    Site B

    1 EqualLogic PS4100

    1 ESX Server, so no clustering

    There is a single datastore volume on the SAN with all the virtual machines running on the ESX server

    According to the requirements of SRM, there is a separate instance of vCenter operating as a virtual machine, and it is not in Linked Mode.  Should it be?

    The number of volumes on the SAN and their purposes are the same as at Site A; the only difference is that the volumes are smaller because fewer virtual machines run on this site.

    Replication has been implemented and verified to be working bi-directionally.

    We use array-based replication

    OK, here's the problem... When I do a planned recovery, the process stops all the virtual machines on this site, which includes the vCenter/SRM server.  I guess that happens because it's on the same datastore as my other virtual machines?

    If I run the recovery in emergency mode, vCenter is then moved to the other site and powered on.  Because the sites are layer 2 and on the same VLAN, SRM shows the sites as connected, since the 2 SRM instances can communicate even though they are now running on the same site.  This does not seem right to me.

    Can someone suggest a better configuration?

    Thank you

    Brian

    Hello

    There is one instance of vCenter at this site, located in the same datastore as the other virtual machines

    What are your thoughts on that?  Is this the best way?

    Apart from the vCenter VM residing on the replicated datastore, that's OK.

    Could I use local storage on the ESX servers?

    I think you can, but it is not recommended, because it creates a dependency on the availability of a single ESX host.

    Or could my main VM datastore be used for the placeholders?

    I would not recommend this. It will create a mess on that datastore. Stick with the 1 GB placeholder datastore you created, and do not replicate it.

    SRM uses SQL Express on the local vCenter VM

    vCenter also uses SQL Express

    Keep in mind that SQL Express is only suited to environments of very small size (I don't remember the exact limits right now). Make sure that you do not exceed them.

    1 ESX Server, so no clustering

    There is a single datastore volume on the SAN with all the virtual machines running on the ESX server

    Does this host have sufficient capacity to run all the virtual machines together (Site A and Site B)?

    I recommend having two hosts to parallelize and accelerate the recovery process (multiple virtual machines can be powered on simultaneously).

    According to the requirements of SRM, there is a separate instance of vCenter operating as a virtual machine, and it is not in Linked Mode.  Should it be?

    You don't have to configure them in Linked Mode. It is for ease of management only (licenses, roles and inventory).

    OK, here's the problem... When I do a planned recovery, the process stops all the virtual machines on this site, which includes the vCenter/SRM server.  I guess that happens because it's on the same datastore as my other virtual machines?

    Yep - do not put the vCenter VM on the datastore with the other VMs, and do not replicate it at all.

    If I run the recovery in emergency mode, vCenter is then moved to the other site and powered on.  Because the sites are layer 2 and on the same VLAN, SRM shows the sites as connected, since the 2 SRM instances can communicate even though they are now running on the same site.  This does not seem right to me.

    Do not replicate the vCenter / SRM VM and that will not happen.

    Michael.

  • Creating the data warehouse schemas for Reporting with Oracle XE

    Is it possible to run the loader import with an Oracle XE database for both the core database and the data warehouse?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that with this database edition we do not have the "Advanced replication" feature.

    SQL> select * from v$option where parameter = 'Advanced replication';
    
    
    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE
    
    
    

    The CIM log:

    Info Mon Feb 23 14:16 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Reporting data warehouse Datasource: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Loader Datasource: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Publishing Datasource: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been executed.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Production Core Datasource: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been executed.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for the Reporting data warehouse data source

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Reporting data warehouse Datasource: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting data setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified this with Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    I can't tell which Oracle edition you installed from the log extract you've posted.

    ++++

    Thank you

    Gareth

    Please mark an update as "Correct Answer" or "Helpful Answer" if it helps answer your question, so that others can identify the correct/helpful update among the many updates.

  • ESXi v5.0 U1 - enable SIOC on existing NFS datastores?

    I am reading up on implementing SIOC on our NFS datastores because of a problem described in the following KB:

    http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&externalId=2016122&sliceId=2&docTypeID=DT_KB_1_1&dialogID=669628658&stateID=1%200%20669648380

    I have opened a support ticket and have decided to enable SIOC for our datastores, even though support could not tell me whether it was something I could do in production.  My assumption is that adding this additional configuration should not be a problem, although I would like confirmation.  Has anyone enabled SIOC on live production datastores?

    Thank you

    The f

    Yes, you can safely enable SIOC on an active datastore with running virtual machines.

  • Question on upgrading datastores from version 3 to 5

    Background:

    We have a group of hosts running ESX 4.1 that I migrated to ESXi 5.

    All our hosts are connected to a SAN.

    All existing datastores are file system version 3.21.

    I want to upgrade all my datastores to version 5.  I know that there is an option to convert version 3 datastores to version 5, but a VMware support technician told me that an upgraded datastore would not have the same features a newly created version 5 datastore would.  He said it would retain the characteristics of version 3.

    Question:

    If I delete a datastore in vCenter (after I have migrated the guests off it) and then recreate the datastore using the same LUN, will this be a native version 5 datastore, or will it still retain the version 3 characteristics when vCenter reuses the LUN?

    I'd rather not have to unmap the LUNs and recreate blank ones for native version 5 datastores if possible.

    When you remove the datastore, the LUN will be empty.  When you add it again, you can format the LUN as VMFS5.

  • [SRM 4.1] Dealing with local datastores?

    Hello

    I'm currently tasked with installing SRM 4.1 on our company's vSphere, and while I have done this before, I have never worked with VMs on local datastores. There are three additional data centers that I'll be failing over. These three data centers all run on Cisco UCS 210 M2 servers spread across two datastores. The virtual machines are on the second VMFS partition of the UCS.

    I don't know why it was built this way, as I wasn't there when it was put into service (it seems strange, though, as they have a shedload of space on the Symmetrix). So I'm really asking (someone with a more experienced eye): what are the options for local datastores with SRM? I'm guessing limited to no support... so I think I might look at svMotion.

    Thanks for any advice.

    Hello

    With SRM 4.1, the only option is array-based replication, i.e. SRM is only able to protect virtual machines residing on supported storage arrays with replication configured between them. SRM itself does not perform the replication. SRM is able to discover your replicated devices and perform certain operations on the storage array through the SRA (storage replication adapter) - software written by the storage vendor.

    So yes, unless you use a storage device that can present these local datastores as shared ones and replicate them (and is supported with SRM), you cannot use local datastores. I have limited knowledge of such devices; maybe the other guys will be able to help more.

    In SRM 5 an extra option was introduced - vSphere Replication - which allows replication of virtual machines between ESXi hosts. You will need vCenter / SRM and ESXi 5 for this to work.

    I do not understand your configuration, though. How many data centers do you have? SRM only supports one-to-one and many-to-one scenarios.

  • How to change alerts on datastores?

    Greetings! I have a situation where I know that one of my datastores will be almost full all the time, but it never fills up completely, so I'm not worried about watching it - yet there is an alert I want to make go away. On the other hand, I still want to monitor all the other datastores. The problem is that it seems I can't put some datastores under one alert definition and another datastore under a different definition. Does anyone know how to assign alert definitions, some for specific datastores and others to monitor the remaining datastores? I am using vCenter Server 4.1.

    Thank you!

    Christopher Beard

    You can turn off the alarm at the vCenter level, then go to your datastores view in your inventory and set up alarms there. However, you will need to do this for each datastore.

  • Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect adapters 9.6.1 Installation Guide for Linux x86 (64-bit)

    Hi all

    I'm looking for a complete installation guide for Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect adapters 9.6.1 for Linux x86 (64-bit). I just wonder if there is any URL that you can recommend for the installation. Please advise.

    Looks like these are what you are asking for:

    http://docs.oracle.com/cd/E25054_01/fusionapps.1111/e16814/postinstsetup.htm

    http://ashrutp.blogspot.com.by/2014/01/Informatica-PowerCenter-and.html

    Informatica PowerCenter 9 Installation and Configuration Guide Complete | Informatica training & tutorials

  • Need ideas for comparing current Oracle EBS data against the data warehouse to establish that the data matches

    Hello, I am new to the Oracle forums. I'm a BI developer and I need to compare Oracle EBS data in my organization with the data in the data warehouse to make sure they match. I am using Informatica for this process, pulling from the two sources and comparing them. Can someone give me a brief example of this process, or of similar methods with Informatica and its transformations, that could be useful? Thanks in advance. Let me know if you need more information about the process.

    It sounds like you are trying to build a reconciliation process? That is, you may have implemented BIAPPS (or something custom) and now want to verify your ETL? If that's the case then it's good enough for a test case - we usually start with high-level checks (for example, totals for each business group), then a subset of other queries, for example per level in the org hierarchy, by position, by dates, etc.

    and much more expensive than an OBIEE implementation

    I don't think there are many things in the world more expensive than an OBIEE implementation!

  • Updating datastores for a VMware View pool using PowerCLI

    Using this as a starting block:

    VMware View 5.2 Documentation Library

    I want to combine both functions and have the variables use ".txt" files with my new and old datastores listed.

    I've edited it a bit, combining both functions and the variables for the old and new lists, but I'm not sure about the syntax to provide the variables from text files.  Any PowerShell / PowerCLI gurus?

    # PowerShell functions to add new, then remove old datastores in an automatic pool.

    # UpdateDatastoresForAutomaticPool

    # Parameters

    # $Pool          ID of the pool to update.

    # $OldDatastore  Full path to OldDatastore.txt listing datastores to be removed.

    # $NewDatastore  Full path to NewDatastore.txt listing datastores to add.

    $Pool = "C:\powercli\PersistentPools.txt"

    $OldDatastore = "C:\powercli\OldDatastore.txt"

    $NewDatastore = "C:\powercli\NewDatastore.txt"

    function RemoveDatastoreFromAutomaticPool {
        param ($Pool, $OldDatastore)
        $PoolSettings = Get-Pool -pool_id $Pool
        $currentdatastores = $PoolSettings.datastorePaths
        $datastores = ""
        foreach ($path in $currentdatastores.Split(";")) {
            if (-not ($path -eq $OldDatastore)) {
                $datastores = $datastores + "$path;"
            }
        }
        Update-AutomaticPool -pool_id $Pool -datastorePaths $datastores
    }

    function AddDatastoreToAutomaticPool {
        param ($Pool, $NewDatastore)
        $PoolSettings = Get-Pool -pool_id $Pool
        $datastores = $PoolSettings.datastorePaths + ";$NewDatastore"
        Update-AutomaticPool -pool_id $Pool -datastorePaths $datastores
    }

    Thank you

    -Matt

    You are using literal strings instead of the contents of the files. Assuming the content of each file is a list with one entry per line, you need to change your code to actually read the data, for example:

    $oldstores = Get-Content "C:\powercli\OldDatastore.txt"

    foreach ($path in $currentdatastores.Split(";")) {
        if (-not ($oldstores -contains $path)) {
            ...
        }
    }
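    The list surgery on the semicolon-separated datastorePaths string amounts to the following (shown here as a language-neutral sketch; the function name and sample paths are illustrative, not View API values):

    ```python
    # Rebuild a semicolon-separated datastore path string: drop the old
    # paths (read from a file in the real script) and append the new ones.
    def update_paths(current, old, new):
        """current is 'a;b;c'; old and new are lists of path strings."""
        kept = [p for p in current.split(";") if p and p not in old]
        return ";".join(kept + new)

    print(update_paths("/dc/ds1;/dc/ds2;/dc/ds3", ["/dc/ds2"], ["/dc/ds4"]))
    # /dc/ds1;/dc/ds3;/dc/ds4
    ```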
