Example of a workflow that allows for multiple datastores


I understand that there is no out-of-the-box workflow that manages the assignment of several datastores. Can someone share an example of how to do this? I'm trying to use one of the canned workflows to create virtual machines from templates.

Found a post that did what I wanted:

https://communities.VMware.com/thread/464838

Tags: VMware

Similar Questions

  • LM 4 multiple datastores

    Hey all,

    I am very new when it comes to Lab Manager; it's just what we have in our environment.

    I have a group of 5 ESX 4 hosts that have 2 iSCSI datastores attached to all of them. I just added another one today because the first is 90% full. It's only a 1 TB store, so I'm working on obtaining more space. For now, I presented a second iSCSI store that has about 600 GB of free space to all 5 ESX hosts.

    Lab Manager is currently at 4.0.1.

    In LM under Resources -> Datastores I see the two stores. The one I added has a green check mark under "VM creation enabled", so I should be able to start using it. But when I try to add a virtual machine to one of my configurations, the "Datastore" field is grayed out and I cannot change it to the new datastore. I don't know what I have done wrong, but I need to have more space for the devs to play with.

    If anyone can help that would be swell!

    Thanks

    Jimmy Dean

    To move a template: you could make a full clone of the template to the other datastore, and then delete the original template. You can do the same for configurations as well. However, be aware that full-cloning VMs causes their disk chain to collapse into a single base disk, so you should expect the copied VMs to take more space than the originals.

    Alternatively, you can use the SSMove utility provided with Lab Manager to move the directory trees to the new datastore.

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, which I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I made 10 separate 1.8 TB datastores, I can see a problem down the road when I need to expand a VMDK but can't because there is not enough free space on its datastore; that would be less of a problem if I had one giant datastore from the start.

    Thank you.

    First of all, it's one of those "how long is a piece of string" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, storage I/O, the type of virtual machines, etc. etc. etc.

    Things to consider include, for example, whether you have storage that deduplicates and whether storage cost is a major factor (and so on).
    Of course, almost always, a cost reduction is equivalent to a drop in performance.

    In any case, a very loose rule of thumb I follow (in most cases) is to size LUNs somewhere between 400 and 750 GB, and rarely (if ever) have more than 30 VMDKs per LUN.

    I almost always redirect this question to the following resources:

    First of all, the configuration maximums:
    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_config_max.PDF

    http://www.gabesvirtualworld.com/?p=68
    http://SearchVMware.TechTarget.com/Tip/0,289483,sid179_gci1350469,00.html
    http://communities.VMware.com/thread/104211
    http://communities.VMware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although André's post above covers most of them)
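    As a rough way to check the "no more than 30 VMDKs per LUN" guideline above, a PowerCLI one-liner can count virtual disks per datastore. This is a sketch, not from the original thread: it assumes an active `Connect-VIServer` session and parses the datastore name out of each disk's `[datastore] path/file.vmdk` filename.

```powershell
# Sketch: count VMDKs per datastore to sanity-check the ~30-per-LUN guideline.
# Assumes you are already connected with Connect-VIServer.
Get-VM | Get-HardDisk |
  Group-Object { $_.Filename.Split(']')[0].TrimStart('[') } |
  Select-Object @{N='Datastore';E={$_.Name}},
                @{N='VMDKCount';E={$_.Count}} |
  Sort-Object VMDKCount -Descending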

  • Database server mode for a data warehouse

    I am running Oracle 10g Release 2 on Solaris 5.10.

    Is the recommendation that a data warehouse database should run in dedicated server mode correct?

    Regards

    Long-running queries are unsuitable for shared server mode, because a long-running query will tie up a shared server and reduce the number of shared servers available.
    Data warehouses usually run long-running queries.
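    A client can also request dedicated server mode explicitly in its connect descriptor, regardless of how the instance is configured. A minimal tnsnames.ora sketch (the alias, host, and service names below are placeholders, not from the original post):

```text
# Hypothetical tnsnames.ora entry; (SERVER = DEDICATED) forces a
# dedicated server process for connections made through this alias.
DWH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dwh.example.com)
      (SERVER = DEDICATED)
    )
  )
```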

    ----------
    Sybrand Bakker
    Senior Oracle DBA

  • PowerCLI script for DatastoreClusters, datastores with size info, Datacenters, Clusters

    Hello - I am looking to list the DatastoreClusters and then the datastores within them, along with their size information (total size, used space, free space, provisioned space, uncommitted space) and the total number of virtual machines on each datastore. I would also like to know which datacenter and cluster they are in. Is this possible? I might want to limit the output to datastores that have 13 percent free space or less.

    Thank you

    LORRI

    Sure, try it this way

    Get-Datastore |
    Select-Object @{N='Datacenter';E={$_.Datacenter.Name}},
        @{N='DSC';E={Get-DatastoreCluster -Datastore $_ | Select-Object -ExpandProperty Name}},
        Name, CapacityGB,
        @{N='FreespaceGB';E={[math]::Round($_.FreespaceGB,2)}},
        @{N='ProvisionedSpaceGB';E={[math]::Round(($_.ExtensionData.Summary.Capacity - $_.ExtensionData.Summary.FreeSpace + $_.ExtensionData.Summary.Uncommitted)/1GB,2)}},
        @{N='UncommittedGB';E={[math]::Round($_.ExtensionData.Summary.Uncommitted/1GB,2)}},
        @{N='VM';E={$_.ExtensionData.VM.Count}} |
    Export-Csv report.csv -NoTypeInformation -UseCulture
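    To limit the output to datastores with 13 percent free space or less, as mentioned in the question, a `Where-Object` clause can be inserted right after `Get-Datastore`. This is a sketch along the same lines as the script above, untested against a live vCenter:

```powershell
# Sketch: keep only datastores whose free space is at or below 13% of capacity.
# Assumes an active Connect-VIServer session.
Get-Datastore |
  Where-Object { $_.FreeSpaceGB / $_.CapacityGB -le 0.13 } |
  Select-Object Name, CapacityGB,
    @{N='FreePct';E={[math]::Round(100 * $_.FreeSpaceGB / $_.CapacityGB, 1)}} |
  Export-Csv lowspace.csv -NoTypeInformation -UseCulture
```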

  • Is there a group policy setting that allows Silverlight to store data for a limited user account with a roaming profile?

    We have customers that use our Silverlight application. The application fails with errors because the software is not able to store data in the user's roaming profile folder. So far, the developers have a workaround setting which allows the application to run, but it makes it really slow. We are trying to get away from that. So my question is: is there a group policy setting a system administrator can set up to allow Silverlight to store data for a limited user account with a roaming profile? Maybe allow data to be written locally, but only to the specific folder that Silverlight requires?

    Thank you!
    Carlos.

    Hi Carlos,

    Are these computers joined to a domain?

    If the computers are on a domain, then I suggest you ask your question on the Microsoft TechNet forums:

    Windows XP IT Pro category

    If this isn't the case, you can post your question on the Microsoft Silverlight forums.

  • How can I partition a 5.5 TB RAID logical volume on an ML350 G6 server for ESX datastore creation

    Today we received a new ML350 G6 server. It has 8 hard disks of 1 TB each. I have created a single RAID 6 volume, which gives 5.5 TB of usable space for the logical unit. The server boots an ESXi 3.5 image from a USB key provided by HP with the server. When I log in and check Configuration > Storage Adapters > Smart Array P400i physical disk controller, I couldn't see any disk. I know that ESX will not support volume sizes of more than 2 TB.

    I don't want to create smaller RAID volumes (for example two RAID 6 volumes with 4 hard drives each), because that would reduce the amount of disk space I can use.

    Is it possible to create 3 partitions on this 5.5 TB RAID volume with the configuration below? Are there any tools anyone knows of that I can use to create the partitions? I tried GParted, but it doesn't work.

    LUN 0 - 2 TB

    LUN 1 - 2 TB

    LUN 2 - 1.5 TB

    You cannot have a drive that is larger than 2 TB.

    Create multiple logical disks, not different partitions.

    André

  • Placeholder datastores for SPPGs

    I read somewhere that a placeholder datastore is not required for storage policy protection groups. My question is: why is it not necessary?

    Hi Eric,

    An excerpt from the SRM 6.1 documentation (Site Recovery Manager 6.1 Documentation Center) that may help you:

    For storage policy protection, Site Recovery Manager applies the inventory mappings when you run a recovery plan that contains a storage policy protection group.

    With array-based replication and vSphere Replication protection groups, Site Recovery Manager applies the inventory mappings at the time you configure protection on a virtual machine. With storage policy protection groups, protection being dynamic, Site Recovery Manager applies the inventory mappings only at the time you run a recovery plan. Virtual machine placement decisions are made according to the inventory mappings during execution of a recovery plan, and Site Recovery Manager does not create placeholder VMs on the recovery site.

    Hope this is sufficient,

    Daniel G.

  • R1: tcAPIException: Duplicate schedule item for a task that does not allow multiples

    Hello

    I am struggling with the following task:

    I have to ensure that an account exists for a given resource. I do this with tcUserOperationsIntf.provisionObject().

    I created a createUser task to create the account.

    The task code checks whether a matching account already exists.

    If no account exists, one is created in a disabled state, and the status of the OIM account object is set to "Disabled" through the task's return-code mapping.

    If it exists, it is "linked" to the OIM account.

    The problem is that if the existing account is enabled, I have to change the OIM account status to "Enabled" as well.

    To implement this (thank you, Kevin Pinski: https://forums.oracle.com/thread/2564011) I created an additional "Switch" task which is triggered by a special task return code. This task always succeeds, and its only side effect is setting the status to "Enabled".

    I keep getting the exception "Duplicate schedule item for a task that does not allow multiples":

    This is the stack trace:

    Thor.API.Exceptions.tcAPIException: Duplicate schedule item for a task that does not allow multiples.
        at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2925)
        at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2666)
        at Thor.API.Operations.tcUserOperationsIntfEJB.provisionObjectx(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
        at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
        at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
        ... skipped
        at Thor.API.Operations.tcUserOperationsIntfDelegate.provisionObject(Unknown Source)
        ... skipped

    What did I do wrong?

    Kind regards

    Vladimir

    Solved. I just forgot to mark the additional task as "Conditional", so the system tried to run it twice.

  • Create data warehouse schemas for Reporting with Oracle XE

    Is it possible to run the loader and DW imports against an Oracle XE database?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that in this database we do not have the "Advanced replication" feature:

    SQL> select * from v$option where parameter = 'Advanced replication';

    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE

    CIM log:

    Info Mon Feb 23 14:16:00 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been run.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been run.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting data setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified with Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    I can't tell from the log extract you've posted what version of Oracle Commerce you installed.

    ++++

    Thank you

    Gareth

    Please mark any reply as "Correct answer" or "Helpful answer" if it helps and answers your question, so that others can identify the correct/helpful replies among the many.

  • Suppress the HBA rescan when creating datastores?

    I created a script to create multiple datastores (100+), but noticed that after each one is created, it triggers a rescan of all the HBAs the LUNs are presented to.

    Is it possible to hold this rescan until all datastores have been created, and then just scan once at the end?

    Example line of code for the datastore creation:

    New-Datastore -Server vcenter -VMHost esx.domain.com -Name san-lun-01 -Path naa.xxxxxxxxxxxxxxxxxxxxxxxx -Vmfs -BlockSizeMB 4

    Thanks a bunch for any help!

    The rescan is actually triggered by vCenter. You could follow Duncan's post to disable this manually, or you can add a line to your script that sets the parameter to disable the rescan and then changes it back afterwards...

    http://www.yellow-bricks.com/2009/08/04/automatic-rescan-of-your-HBAs/
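    The setting Duncan's post describes is the vCenter advanced setting `config.vpxd.filter.hostRescanFilter`. A hedged sketch of the wrap-around approach in PowerCLI (the cmdlet names come from PowerCLI's advanced-setting support; verify them against your PowerCLI version, and note this needs a live vCenter connection):

```powershell
# Sketch: turn off vCenter's automatic host rescan while bulk-creating datastores.
# Assumes an active Connect-VIServer session to vCenter.
$vc = $global:DefaultVIServer
New-AdvancedSetting -Entity $vc -Name 'config.vpxd.filter.hostRescanFilter' `
    -Value $false -Confirm:$false

# ... create the 100+ datastores here with New-Datastore ...

# Restore the default behaviour and rescan once at the end.
Get-AdvancedSetting -Entity $vc -Name 'config.vpxd.filter.hostRescanFilter' |
    Set-AdvancedSetting -Value $true -Confirm:$false
Get-VMHost | Get-VMHostStorage -RescanAllHba | Out-Null
```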

  • ESXi 5.0 U1 - enable SIOC on existing NFS datastores?

    I am reading up on implementing SIOC on our NFS datastores because of a problem described in the following KB:

    http://KB.VMware.com/selfservice/search.do?cmd=displayKC&docType=kc&externalId=2016122&sliceId=2&docTypeID=DT_KB_1_1&dialogID=669628658&StateID=1%200%20669648380

    I have opened a support ticket, and we have decided to enable SIOC for our datastores, even though support could not tell me whether it was something I could do in production.  My assumption is that adding this additional configuration would not be a problem, although I would like confirmation.  Has anyone enabled SIOC on live production datastores?

    Thank you


    Yes, you can safely enable SIOC on an active datastore with running virtual machines.
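    For reference, SIOC can also be enabled from a script through the vSphere API's StorageResourceManager. The following is only a sketch of that API call (the datastore name is a hypothetical placeholder, and the managed-object plumbing should be double-checked against the vSphere API reference for your version):

```powershell
# Sketch: enable Storage I/O Control on one datastore via the vSphere API.
# Assumes an active Connect-VIServer session; 'nfs-datastore-01' is hypothetical.
$ds   = Get-Datastore -Name 'nfs-datastore-01'
$si   = Get-View ServiceInstance
$srm  = Get-View $si.Content.StorageResourceManager
$spec = New-Object VMware.Vim.StorageIORMConfigSpec
$spec.Enabled = $true
$srm.ConfigureDatastoreIORM_Task($ds.ExtensionData.MoRef, $spec) | Out-Null
```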

  • [SRM 4.1] Dealing with local datastores?

    Hello

    I'm currently tasked with installing SRM 4.1 on our company's vSphere environment, and while I have done this before, I have never worked with VMs on local datastores. There are three additional data centers that I'll be failing over. These three data centers all run on Cisco UCS 210 M2 servers, spread across two datastores. The virtual machines live on the second VMFS partition of the UCS.

    I don't know why it was built this way, as I wasn't there when it was put into service (it seems strange, though, as they have a shedload of space on the Symmetrix). So I'm really asking (anyone with a more experienced eye): what are the options for local datastores with SRM? I'm guessing limited to no support... so I think I might look at svMotion.

    Thanks for any advice.

    Hello

    With SRM 4.1, the only option is array-based replication, i.e. SRM is only able to protect virtual machines residing on supported storage arrays with replication configured between them. SRM itself does not perform the replication. SRM is able to discover your replicated devices and perform a few operations on the storage array through the SRA (storage replication adapter) - software written by the storage vendor.

    So unless you use a storage device which can present these local datastores as shared ones and replicate them (and is supported with SRM), you cannot use local datastores. I have limited knowledge of such devices; maybe the other guys will be able to help more.

    In SRM 5 an extra option was introduced - vSphere Replication, which allows the replication of virtual machines between ESXi hosts. You will need vCenter/SRM and ESXi 5 for this to work.

    I do not understand your configuration, though. How many data centers do you have? SRM only supports one-to-one and many-to-one scenarios.

    Michael.

  • Question on upgrading datastores from version 3 to 5

    Background:

    We have a group of hosts running ESX 4.1 that I have migrated to ESXi 5.

    All our hosts are connected to a SAN.

    All existing datastores are file system version 3.21.

    I want to upgrade all my datastores to version 5.  I know that there is an option to convert version 3 datastores to version 5, but a VMware support technician told me that an upgraded datastore would not have all the features a newly created version 5 datastore would.  He said that it would retain the characteristics of version 3.

    Question:

    If I delete a datastore in vCenter (after I have migrated the guests off it) and then recreate the datastore using the same LUN, will this be a native version 5 datastore, or will it still retain the version 3 characteristics when vCenter reuses the LUN?

    I'd rather not have to unmap the LUNs and create fresh ones for native version 5 datastores, if possible.

    When you remove the datastore, the LUN will be empty.  When adding it back, you can format the LUN as VMFS5.
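    The remove-and-recreate step can be scripted with PowerCLI. A sketch only (the host, datastore names, and the naa path are placeholders; `New-Datastore`'s `-FileSystemVersion` parameter should be confirmed against your PowerCLI release before relying on it):

```powershell
# Sketch: remove the old VMFS3 datastore and recreate it as native VMFS5
# on the same LUN. Assumes an active Connect-VIServer session; names are
# hypothetical placeholders.
$esx = Get-VMHost 'esx.domain.com'
Remove-Datastore -Datastore (Get-Datastore 'old-vmfs3-ds') -VMHost $esx -Confirm:$false
New-Datastore -VMHost $esx -Name 'new-vmfs5-ds' `
    -Path 'naa.xxxxxxxxxxxxxxxxxxxxxxxx' -Vmfs -FileSystemVersion 5
```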

  • How to change alarms on datastores?

    Greetings! I have a situation where I know that one of my datastores will be almost full all the time, but it never quite fills up, so I'm not worried about watching it - yet there's an alert I want to make disappear. On the other hand, I do want to monitor all the other datastores. The problem is that it seems I can't include some datastores under one alarm definition and another datastore under a different definition. Does anyone know how to assign alarm definitions, one for specific datastores and another to monitor a different datastore? I am using vCenter Server 4.1.

    Thank you!

    Christopher Beard

    You can turn off the alarm at the vCenter level, then go into the Datastores view in your inventory and set up alarms there instead. However, you will need to do this for each datastore.
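    The first half of that (turning off the built-in alarm at the vCenter level) can be done from PowerCLI; "Datastore usage on disk" is the name of vCenter's default datastore capacity alarm. A sketch, assuming an active `Connect-VIServer` session; the per-datastore alarms are then defined on each datastore object in the vSphere Client as described above:

```powershell
# Sketch: disable the built-in datastore capacity alarm at the vCenter level.
Get-AlarmDefinition -Name 'Datastore usage on disk' |
    Set-AlarmDefinition -Enabled:$false
```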
