SRM and vSphere HA heartbeat datastores

Hello

When you use VMware HA and SRM together, can SRM fail over datastores to the recovery site that the hosts also use for HA heartbeating? Do locking issues arise when HA is using a datastore for heartbeats and SRM wants to fail over that same datastore?

Gabrié

Yes, they can.

Tags: VMware

Similar Questions

  • Moving a cluster configured with SRM and vSphere Replication

    All,

    We are going to move 2 clusters in the near future to achieve a geographical distance between the 2 sites.

    One cluster runs VMware 5.0 with SRM and vSphere Replication, and the other cluster runs VMware 5.5 with SRM in combination with vSphere Replication.

    What is the best way to move these clusters? Must I stop the replication or just pause it, and are these actions performed in SRM, or do we need to perform any action on the virtual machines themselves?

    I read that stopping replication sets the state to "not configured" and destroys all files in the target directory.

    Does anyone have experience with physically moving a cluster managed by SRM (and vSphere Replication)?

    Any help is welcome

    Thank you

    Gerbert Coolen

    Just pause replication before you stop the VR appliance on the target/DR site. See: stop vs. pause with vSphere Replication - VMware vSphere Blog - Articles from VMware

  • [SRM 4.1] Dealing with local datastores?

    Hello

    I'm currently tasked with an SRM 4.1 installation on our company's vSphere environment, and while I have done this before, I have never worked with VMs on local datastores. The machines in question all run on Cisco UCS 210 M2 servers spread across two local datastores, and each virtual machine sits on the second VMFS partition of its UCS server.

    I don't know why it was set up this way, as I wasn't there when it was put into service (it seems strange, though, as there is a shedload of space on the Symmetrix). So I'm really asking (someone with a more experienced eye): what are the options for local datastores with SRM? I'm guessing limited to no support... so I think I might look at svMotion.

    Thanks for any advice.

    Hello

    With SRM 4.1, the only option is array-based replication, i.e. SRM can only protect virtual machines residing on supported storage arrays with replication configured between them. SRM itself does not perform the replication. SRM is able to discover your replicated devices and perform a few operations on the storage array through the SRA (storage replication adapter) - software written by the storage vendor.

    So unless you use a storage device which can present these local datastores as if they were shared ones and replicate them (and is supported with SRM), you cannot use local datastores. I have limited knowledge of such devices; maybe the other guys will be able to help more.

    In SRM 5 an extra option has been introduced - vSphere Replication, which allows replication of virtual machines between ESXi hosts. You will need vCenter, SRM and ESXi 5 for this to work.

    I do not understand your configuration, though. How many datacenters do you have? SRM only supports one-to-one and many-to-one scenarios.

    Michael.

  • List VMFS version and block size of all datastores

    I'm looking for a PowerShell script (or preferably a one-liner) to list all VMFS datastores with their version number and block size.

    I am a novice with PowerShell and the VI Toolkit, but I know how to do the following:

    I can list all datastores that begin with a specific name and sort them alphabetically:

    Get-Datastore -Name eva* | Sort-Object

    Name         FreeSpaceMB  CapacityMB

    EVA01VMFS01        81552      511744

    EVA01VMFS02       511178      511744

    EVA01VMFS03       155143      511744

    EVA01VMFS04        76301      511744

    EVA01VMFS05       301781      511744

    etc...

    I can get the info for a specific datastore with the following commands:

    $objDataStore = Get-Datastore -Name 'EVA01VMFS01'

    $objDataStore | Format-List

    DatacenterId: Datacenter-datacenter-21

    ParentFolderId: Folder-group-s24

    DatastoreBrowserPath: vmstores:\vCenter-test.local@443\DataCenter\EVA01VMFS01

    FreeSpaceMB: 81552

    CapacityMB: 511744

    Accessible: true

    Type: VMFS

    ID: Datastore-datastore-330

    Name: EVA01VMFS01

    But that's all as far as my knowledge goes.

    Someone out there who could help me with this one?

    This information is not available in the default properties of the DatastoreImpl object.

    But this information is available in the SDK Datastore object.

    You can view these values like this.

    Get-Datastore | Get-View | Select-Object Name,
                                        @{N="VMFS version";E={$_.Info.Vmfs.Version}},
                                        @{N="BlocksizeMB";E={$_.Info.Vmfs.BlockSizeMB}}
    

    If you are using PowerCLI 4.1, you can check with

    Get-PowerCLIVersion
    

    Then, you can use the New-VIProperty cmdlet.

    Something like this:

    New-VIProperty -Name VMFSVersion -ObjectType Datastore `
         -Value {
              param($ds)
    
              $ds.ExtensionData.Info.Vmfs.Version
         } `
     -BasedOnExtensionProperty 'Info' `
         -Force
    
    New-VIProperty -Name VMFSBlockSizeMB -ObjectType Datastore `
         -Value {
              param($ds)
    
              $ds.ExtensionData.Info.Vmfs.BlockSizeMB
         } `
     -BasedOnExtensionProperty 'Info' `
         -Force
    
    Get-Datastore | Select Name,VMFSVersion,VMFSBlockSizeMB
    

    ____________

    Blog: LucD notes

    Twitter: lucd22

  • Need help with a vSphere data script: packaging the output and sending it to a repository

    Greetings PowerCLI gurus.


    Can anyone offer suggestions on a script which can query vSphere and pull the following fields for each virtual machine:

    Name, State, Status, Host, Space in use, Space used

    then format it into a CSV file and send the file to an FTP server?

    Much respect to all, thanks a lot in advance.

    Hello-

    Happy to help you.

    OK, well, if this destination is accessible through a UNC path, you might make a copy directly using Copy-Item.  If you need different credentials, you can encrypt and store them in an XML file.  Hal Rottenberg wrote a how-to: http://halr9000.com/article/531.

    Or, if this destination is something that supports secure copy (scp) or secure FTP (SFTP), those would be good options.  Again, you can store the alternative credentials encrypted in an XML file and use them as needed.

    Certainly, there is a balance to be struck between security and ease of use.  It may be that the transferred data is not considered sensitive at all, and cleartext transfers are acceptable.  It is probably still a good idea to take measures to protect the credentials at least.
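    As a language-neutral sketch of the pipeline being discussed (the question asked for PowerCLI, where Export-Csv would do the CSV step), here is a minimal Python standard-library version. The field names, sample data, FTP host and credentials are all hypothetical placeholders, not values from the thread:

```python
import csv
import io
from ftplib import FTP

# Columns mirroring the fields requested in the question (names are assumptions).
FIELDS = ["Name", "State", "Status", "Host", "SpaceUsedGB", "SpaceProvisionedGB"]

def vm_report_csv(vms):
    """Render a list of per-VM property dicts as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for vm in vms:
        writer.writerow(vm)
    return buf.getvalue()

def upload_csv(text, host, user, password, remote_name="vm_report.csv"):
    """Upload the CSV text to an FTP server (host and credentials are placeholders)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary("STOR " + remote_name, io.BytesIO(text.encode("utf-8")))

# Hypothetical rows standing in for what a vSphere query would return.
sample = [{"Name": "vm01", "State": "PoweredOn", "Status": "green",
           "Host": "esx01", "SpaceUsedGB": 18, "SpaceProvisionedGB": 40}]
print(vm_report_csv(sample))
# upload_csv(vm_report_csv(sample), "ftp.example.com", "user", "secret")
```

    As the reply notes, ftplib sends credentials in cleartext; swapping the upload for SFTP (or scp) keeps the same shape while protecting them in transit.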

  • Deleting datastores

    We run vCenter 4.1 with an iSCSI SAN (managed by IBM System Storage DS Storage Manager 10).  We have a few LUNs defined on the SAN and those LUNs mapped to datastores, but not all of the space on the SAN has yet been allocated to any LUN.

    A former employee set up our current LUNs, so we are learning the entire creation process for ourselves.  We wanted to do a test before making drastic changes, but our test... showed us that we didn't yet completely understand what we were doing.

    Here's what we did and how it went wrong.

    • We created a new 250 GB test LUN on the SAN (out of the unallocated space).
    • In vSphere Client, we went through the 'Add datastore' wizard and completed all the steps for this new LUN.  Everything went great and we had a new VMFS datastore available to us.
    • As a test, we migrated a small virtual machine to the new VMFS. It worked.
    • Then we migrated it back to a pre-existing datastore. That worked too.
    • Then, in vSphere Client, we removed our new test datastore.
    • Then, in IBM System Storage Manager, we removed our new test LUN.
    • Then, just to make sure the space had been reclaimed, we created a new-new test LUN, also 250 GB.  It was assigned the same LUN number as the previous 250 GB test LUN.
    • We returned to vSphere Client and tried to 'Add a datastore' using this new-new test LUN.  Everything works until we get to the disk-layout part of the wizard.  It shows 250 GB as the 'Capacity' of the LUN, but '-' as 'Available'... and the "Next" button is grayed out.

    add-storage_scrnsht.png

    So... how can we do this better?  Is it possible to delete a datastore (or a LUN) so that the space of the deleted datastore/LUN becomes 'available' for future LUN/datastore creation?  We really need to be able to create a few large LUNs in our free space on the SAN, then remove the old, smaller LUNs created by the former employee and make our new, bigger ones out of the freed space (since extending a LUN does not work... the ex-employee tried this several times and we have hundreds of gigs tied up in extended-but-unavailable LUN space).

    How can we achieve this?

    Thanks in advance for the attention and/or advice.

    So, if I understand your situation, you had a 250 GB LUN with VMFS on it, then you removed the datastore and then removed the LUN from the SAN.

    Then you created a new LUN on your storage array, with the same size and LUN number, but when you want to create a VMFS datastore on it you can't, and the size appears incorrect in the vSphere Client.

    I wonder if it might be that the ESXi host is confused by the fact that it's the same LUN number. You could switch to the storage adapters view in the Configuration tab and do a 'Rescan All'. Make sure you see the new empty LUN when you select the vmhbaXX, and then try to recreate the datastore.

  • Help! Data disk lost under "Browse datastores"

    I run a Win2K3 domain controller on the free version of ESXi 4.0 / vSphere.

    I added an independent virtual disk on a separate physical SATA disk installed in the same machine.

    I brought the server down to change it from 1 CPU to 2 CPUs. When I went to power it back up, I got an error that says:

    "File <filename not specified> not found".

    I tried changing the setting back to 1 CPU, without success.

    So I created a new custom virtual machine and pointed it at the server's virtual disk; no problem.

    But when I try to add the second virtual disk (the data drive) via vSphere Client and get the "Browse datastores" dialog,

    the dialog box appears empty, even though if I ssh to the machine I can see the virtual disk file.

    Can someone help me?

    Here is the virtual disk I need to add, but which is not seen via the vSphere Client:

    /vmfs/volumes/4afac8a0-7fd783d3-bc9f-0019b93534d4/Win2k3-DC1 # ls -al

    drwxrwxrwx 1 root root 420 Oct 25 00:56 .

    drwxr-xr-t 1 root root 1120 Oct 25 00:55 ..

    -rw------- 1 root root 1495335814144 Oct 25 00:26 Win2K3-DC1_1-flat.vmdk

    I guess the fact that vSphere has trouble seeing this file is why the original VM does not come back up.

    Help, please!

    Win2K3-DC1_1-flat.vmdk is probably your data; please back it up first.

    I don't know whether the commands below will work or not, but you can give them a try.

    1. cd /vmfs/volumes/4afac8a0-7fd783d3-bc9f-0019b93534d4/Win2K3-DC1

    2. cp Win2K3-DC1_1-flat.vmdk Win2K3-DC1_1-flat.vmdk.backup

    3. vmkfstools -c 1g Win2K3-DC1_recovery.vmdk

    4. vi Win2K3-DC1_recovery.vmdk and change

    RW 2097152 VMFS "Win2K3-DC1_recovery-flat.vmdk"

    to

    RW 2920577762 VMFS "Win2K3-DC1_1-flat.vmdk"

    5. Power on your VM and add Win2K3-DC1_recovery.vmdk to it
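    The descriptor edit in step 4 works because the RW value is the extent size counted in 512-byte sectors; the sketch below just checks that arithmetic against the file size reported by ls above (the function name is ours, not a VMware tool):

```python
SECTOR_SIZE = 512  # VMDK descriptor extents are counted in 512-byte sectors

def descriptor_sectors(size_bytes):
    """Return the 'RW' value for a flat extent of the given size in bytes."""
    if size_bytes % SECTOR_SIZE:
        raise ValueError("flat extent should be a whole number of sectors")
    return size_bytes // SECTOR_SIZE

# Size of Win2K3-DC1_1-flat.vmdk as reported by 'ls -al' above.
flat_size = 1495335814144
print(descriptor_sectors(flat_size))  # 2920577762, the value written in step 4
```

    The same check explains the placeholder line the 1 GB descriptor starts with: 1 GiB / 512 = 2097152 sectors.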

    Binoche, VMware VCP, Cisco CCNA

  • EPM 11.1.2 Essbase data warehouse infrastructure

    Hello

    We'll be implementing Hyperion Planning 11.1.2, and we intend to have the data warehouse push the budget data to Hyperion Planning, and to have Hyperion push and retrieve data in Essbase.  My question is: does it also make sense to push and pull data between Essbase and the data warehouse? To make it clearer: we take the budget data from the data warehouse and push it to Hyperion Planning.  The budget data will also be pushed from the data warehouse to Essbase.  Hyperion Planning will then do the what-if analysis and push the results back to Essbase, and Essbase will push the hypothetical scenarios back to the data warehouse.

    Please let me know if the scenario needs clarification.

    Thank you

    I did something similar in the past; the concept is perfectly feasible.

    Cheers

    John

    http://John-Goodwin.blogspot.com/

  • Cluster datastores

    I would like to create a dashboard that displays a list of all VMware clusters and, for each cluster, the datastores it uses. I am looking for a way to create the dashboard without dragging each individual datastore into it.

    1. Is this possible with a user-interface query?

    Data > VMWare > Datacenters > (datacenter name) > Clusters > (cluster name) > ESX Hosts > (host name) > Storage > Datastores

    If you just want the names, and to keep things a little cleaner, I would try this; it also removes the need for the additional query and keeps things neat:

    clusters = server.get("QueryService").queryTopologyObjects("VMWCluster")

    output = []

    for (cluster in clusters)
    {
        datastores = cluster.esxServers.datastores
        for (datastore in datastores)
        {
            map = [VMWCluster: cluster.name, VMWDatastore: datastore.name]
            output.add(map)
        }
    }

    return output

  • Oracle Warehouse Builder 11g tutorials

    Hello

    I'm new to data warehousing and need to work with Oracle Warehouse Builder 11g.

    I'm going through the following Oracle tutorial:

    http://www.Oracle.com/technology/OBE/11gr1_owb/index.htm

    Can someone kindly guide me to any other good tutorials?

    Thanks in advance,
    Kind regards

    The OBE is a good start. I also recommend the OWB manuals and other books on OWB and data warehousing in general. You can find a lot of help on blogs, this forum and the Oracle wiki.

    http://www.training-classes.com/learn/_K/o/r/a/oracle_warehouse_builder/_T/ILT/

    http://blogs.Oracle.com/

    http://wiki.Oracle.com/

    And don't forget Metalink:

    https://support.Oracle.com/CSP/UI/Flash.html

  • SRM 4 and NFS datastores

    Hi, I have a problem with SRM and NFS datastores.

    When I set up the array manager through the SRM plugin in the vCenter client, I get the following message when scanning for NFS datastores:

    "Replicated devices could not be matched with inventory datastores."

    We have a vSphere 4.0.1 server with ESX 3.5u4 hosts.

    SRM is at version 4.0.

    Our storage is IBM N series N5600 filers running Data ONTAP 7.2.5.1.

    The SRA we use is IBM's version 1.4.

    Has anyone ever experienced this problem, and if so, how did you fix it?

    Thank you

    Mahmood

    I have used this SRA without any problems, so I suspect that something is wrong with your replicated NFS export configuration.

    To help further, I would need to see the SRM log from the protected-site SRM server, captured immediately after seeing this message. Can you upload that to this thread?

    I also suggest that you open an SR with VMware support, because they can help here as well.

    Cheers

    Lee

  • Do we need a data warehouse if we only create dashboards and reports in OBIEE?

    Hello! I'm new to OBIEE.

    My organization has decided to build their reports and dashboards using OBIEE. I am involved in this project, but I don't have in-depth knowledge of OBIEE.  My question is: do we need to have a data warehouse installed? Or do I just need to install OBIEE, create a repository, then create a data source in BI Publisher, and then create dashboards and reports?

    I'm confused, so please help me in this regard. Please share any document or link where I can easily understand these things. Thank you


    OBIEE is not software you can run well without a good understanding of its complex concepts. I would really recommend attending a training course, or at least reading a book (for example this or this). There are MANY general blog posts on OBIEE, many of which are of poor quality and are just step-by-step guides on how to do a particular task, without explaining the overall picture.

    If you want to use OBIEE and make it a success, you need to learn and understand the basics.

    To answer your question directly:

    - BI Publisher is not the same thing as OBIEE. It is a component of it (but is also available standalone). OBIEE makes data accessible through 'Dashboards', which are made up of 'Analyses' written in the Answers tool. Dashboards can also contain BI Publisher content if you want.

    - OBIEE can report against many different data sources, both data warehouses and transactional systems. Most OBIEE implementations that perform well are based on a dedicated DW, but it is not a mandatory condition.

    - Whether you report against a real DW or not, when you build the OBIEE repository you build a "virtual" data warehouse; in other words, you dimensionally model all your business data into a set of logical star schemas.

  • The number of vSphere HA heartbeat datastores for host is 1

    I only have one giant LUN created, and no space to create another.

    So now vSphere gives me this HA error on my hosts: Capture.JPG

    What should I do?

    Right-click the cluster, click Edit Settings, go to vSphere HA -> Advanced Options and add the das.ignoreInsufficientHbDatastore entry with the value true... then disable and re-enable HA on the cluster and the warning will disappear.

    VMware KB: HA error: The number of heartbeat datastores for host is 1, which is less than required: 2

  • How to speed up file transfers to and from ESXi datastores

    Hi people!

    Our company has a large number of VMware ESXi 3.5 and 4.0 servers. (Note: this is the free ESXi, not the paid ESX Server.) They generally work very well for us. The only problem area I have is transferring files to and from the datastores on the machines.

    Because we do not have the paid ESX Server, I can't use vMotion or VirtualCenter to transfer virtual machines from one location to another. So, if I need to move a virtual machine to a different host, the only solution I have is to download the VM's files from the datastore to a local or network drive and then upload them to the new datastore, using the vSphere Client. The problem with this is that I have found it to be extremely slow. It can take hours just to copy a virtual machine.

    Does anyone know of a better way to do this that would reduce the transfer time?

    Thank you!

    -

    Tom

    Hello

    Use Veeam FastSCP 3.0; it's freeware. You can connect any number of ESXi servers to it.

    You can move or copy files directly from one server to another server...

    Regards

    Senthil Kumar D

  • Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 Installation Guide for Linux x86 (64-bit)

    Hi all

    I'm looking for the full installation guide for Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 for Linux x86 (64-bit). I just wonder if there is any URL that you can recommend for the installation. Please advise.

    It looks like these are what you are asking for:

    http://docs.Oracle.com/CD/E25054_01/fusionapps.1111/e16814/postinstsetup.htm

    http://ashrutp.blogspot.com.by/2014/01/Informatica-PowerCenter-and.html

    Informatica PowerCenter 9 Installation and Configuration Guide Complete | Informatica training & tutorials
