Cloned VM spans multiple datastores

Hi all

I have a virtual machine that spans 2 datastores and I don't know how it ended up like this:

I cloned a VM from a template.  The template is on my datastore VOL3.

In the Summary tab for my VM, it shows that it is connected to both the VOL2 and VOL3 datastores.  When I browse both the VOL2 and VOL3 datastores, I see all the files associated with the virtual machine on VOL2, so I do not know why the VM is reported as connected to VOL3.  It has only 1 drive, which maps to a single vmdk on VOL2.

One note: when I cloned the virtual machine from the template, I forgot that I had the CD drive connected to a .iso on VOL3; maybe that's the cause?  I disconnected the virtual CD, but no change.  It doesn't seem to be hurting anything at the moment, but I guess it could complicate things down the road.

Thank you

Chad

I think the reason you see your VM split across several datastores is the .iso file mounted in the virtual machine's configuration. Instead of just disconnecting the drive (which still references the .iso file), change the CD drive configuration to use a client device.

I hope this helps.

Best wishes / Saludos

Please don't forget to award points to the "useful" or "correct" answers that helped you.

________________________________________

Nicolas Solop

VCP 410 - VCP 310 - VAC - VTSP

My LinkedIn profile

Join the Virtualización en Español group on LinkedIn


Tags: VMware

Similar Questions

  • Multiple datastores?

    Hello.

    I am putting together a configuration for my VM project and its design document.

    I am running 4 vSphere 5 Enterprise hosts, each on a Dell R710, all connected to a Dell EqualLogic PS6100XS for their shared iSCSI storage.

    Looking at best practices, and not knowing the practical limits of datastores... what is the best setup here?  Is it better to create one large (7 TB) volume and present it to vSphere as storage... or...

    Create several datastores (each 1 TB in size) for use with Storage vMotion?

    Thank you

    J.

    There's more to it than just the workload, IMO.

    You must consider that you usually get slightly better performance (5-10%) with several datastores/LUNs, simply because there are more queues available.

    Also, several datastores means you have more options for backups, prioritization, etc.  Finally, you also have more possibilities for replication in the future.

    I would *not* create a single 7 TB datastore.  I'd be more likely to create 7-10 smaller datastores and spread my virtual machines across them.

  • Risk of rebalancing linked-clone datastores

    For some time I have been told never to Storage vMotion a linked-clone VM, because that would cause the sparse disk to expand to the size of the replica.

    Now, one of the View 5.1 guides mentions using rebalancing to distribute the load of linked clones across datastores. So my question is: does rebalancing also run the risk of actually taking more space by causing the sparse disks to expand?

    You should be aware that a rebalance also does a refresh.

    Linjo

  • Internal datastores for DMZ hosts?

    Hi all

    I hope I'm posting in the right section, and this makes sense.

    We currently run two separate clusters, managed by vCenter.  One for all our internal servers and one for our DMZ servers.  On our DMZ hosts we have nic1 and nic2 teamed for the service console, vmkernel, and datastores.  These two network adapters are connected to our internal network.  We then have nic3 on dmz1 and nic4 on dmz2.  Each set of network interface cards is assigned its own vSwitch.

    Datastores for all guest VMs on the DMZ cluster are NFS targets on our internal SAN network. Each guest virtual machine is only assigned 1 network card, with access to its particular DMZ.

    My question is about security.  We have been operating this way for a while, but my network/security guy is concerned that somehow a virtual machine could be hacked to access all the cards connected to the host, or that a hacked virtual machine could potentially give someone access to internal data on our SAN.

    What are the best practices for this scenario?  Do I currently have security vulnerabilities?  Assuming this configuration is OK, what information can I give my network guy to ease his concern?

    Our SAN is a NetApp.

    Edit: fixed typo, nic4 is for dmz2

    Hi SLCSam

    The way I understand how ESXi handles this (and someone please correct me if I'm wrong)

    is that ESXi handles all of the virtual machine's disk traffic independently.

    The VM only knows how to send SCSI commands to the hypervisor, which then rewrites the commands and sends them to the virtual disk files.

    This is why each VM has access to its own virtual disk and nothing else on the datastore.

    So if one of the VMs is hacked, the data on that virtual machine may be exposed, but that's all.

    The virtual machines running on these clusters are all independent computing environments and do not have access to each other's files.

    Think of it as running several independent servers: one cannot write to another's disks except via CIFS or similar.

    From what you wrote, you have nic1 and nic2 as a trunk serving the admin and storage networks.

    You then have 2 different DMZs on nic3 and nic2.

    Nic2 therefore seems to be linked to the storage network in the DMZ.

    That is a major problem, since if one of the virtual machines in the DMZ is compromised, said VM could talk to the SAN via NFS (since they are on the same layer-2 network) and this would expose the disk files of other (possibly internal) virtual machines.

    This assumes there are no VLANs involved; if the DMZ and the storage network are on different VLANs, this problem does not occur.

    If it was a typo and the admin and storage networks are on nic0 and nic1, or there is VLAN configuration separating the layer-2 traffic, then there is no problem with this setup, even if a DMZ virtual machine is hacked.

    Regards,

    Cyclooctane

  • What are the best solutions for data warehouse configuration on 10gR2?

    I need help with the solutions to offer my client for the data warehouse upgrade.

    Current configuration: Oracle 9.2.0.8 database. This database contains the data warehouse plus an additional data mart on the same host. The sizes are respectively 6 TB (3-year retention policy + current year) and 1 TB. The ETL tool and the BO reporting tools are also hosted on the same host. The current configuration performs really poorly.

    The customer cannot go for major architectural or configuration changes to their existing environment right now due to some constraints.

    However, they have agreed to separate the databases onto hosts separate from the ETL tools and BO. We also plan to upgrade the database to 10gR2 to achieve stability and better performance, and to overcome the current headaches.
    We cannot upgrade the database to 11g, as the BO version is 6.5, which is not compatible with Oracle 11g. And the customer cannot afford to spend on anything other than the database.

    So, my role is essential in providing a solid solution for better performance and carrying out a successful Oracle migration from one host to another (same platform and OS), in addition to the upgrade.

    So far I am thinking of the following:

    Move the database and the Oracle mart to the separate host.
    The host will be the same platform, i.e. HP Superdome with the 32-bit HP-UX operating system (we are unable to change to 64-bit as the ETL tool does not support it).
    Install new Oracle 10g on the new host and move the data to it.
    Explore all the new 10gR2 features useful for a data warehouse, i.e. the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    Also look at RAC to provide an even better solution, as our main motivation is to show an extraordinary performance improvement.
    I need your help to prepare a good roadmap for my assignment. Please advise.

    Thank you

    Tapan

    Two major changes from 9i to 10g that will impact performance are:
    a. changes in the default values when executing GATHER_%_STATS
    b. changes in the behavior of the optimizer

    Oracle has published several white papers on these.

    Since 10g is no longer under Premier Support, you'll have some trouble logging SRs with Oracle Support - they will keep asking you to use 11g.

    The host will be the same platform, i.e. HP Superdome

    Why do you need an export if you are not changing platforms? You could install 9.2.0.8 on the new server, clone the database, and then upgrade it to 10gR2 / 11gR2.

    Hemant K Chitale

  • VM with disks on different datastores: snapshots on the same datastore?

    Hello

    We have several virtual machines with disks on different datastores for RAID reasons.

    Last night, we found that the snapshots for all disks are placed on the main datastore (the one holding the .vmx) and not on each disk's own datastore.

    How can we change this, as our OS datastores are small and run out of space when running snapshots?

    Thank you

    Frogbeef

    You can change the location of snapshots, but they will all go to that one place - as far as I know, you cannot change the location per disk.

    The KB http://kb.vmware.com/kb/1002929 explains how to do this.
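    As a rough sketch of the mechanism that KB covers: the snapshot working directory is controlled by a workingDir entry in the VM's .vmx file, and the VM must be reloaded or re-registered for the change to take effect. The datastore path below is a placeholder, not a real path from this thread:

```
workingDir = "/vmfs/volumes/BIG-DATASTORE/vm-snapshots/"
```

    Note that, as the answer says, this redirects all of the VM's snapshot files to that one location; it is not a per-disk setting.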

    Kind regards

    Marcelo Soares

    VMWare Certified Professional 310/410

    Master virtualization technology

    Globant Argentina

    Please remember to award points for "useful" or "correct" answers.

  • Do we need a data warehouse if we only create dashboards and reports in OBIEE?

    Hello! I'm new to obiee.

    My organization has decided to build its reports and dashboards using OBIEE. I am involved in this effort, but I don't have in-depth knowledge of OBIEE.  My question is: do we need to install a data warehouse? Or do I just need to install OBIEE, create a repository, then create a data source in BI Publisher and then create dashboards or reports?

    I'm quite confused, so please help me in this regard. Please share any document or link where I can easily understand these things. Thank you


    OBIEE is not software you can run without a good understanding of its complex concepts. I would really recommend attending a training course, or at least reading a book (for example this or this). There are MANY general blog posts on OBIEE, many of which are of poor quality and are all step-by-step guides on how to do a particular task, without explaining the overall picture.

    If you want to use OBIEE and make it a success, you need to learn and understand the basics.

    To answer your question directly:

    - BI Publisher is not the same thing as OBIEE. It is a component of it (but is also available standalone). OBIEE makes data accessible through "Dashboards", which are made up of "Analyses" written in the Answers tool. Dashboards can also contain BI Publisher content if you want.

    - OBIEE can report against many different data sources, one or more data warehouses as well as transactional ones. Most OBIEE implementations that perform well are built against a dedicated DW, but it is not a mandatory condition.

    - Whether you report against a real DW or not, when you build the OBIEE repository you build a "virtual" data warehouse; in other words, you dimensionally model all your business data into one set of logical star schemas.

  • Removing SDRS datastores from a host

    Hello

    I have a scenario where I want to migrate a host and its VMs between two clusters. Unfortunately I have a number of SDRS datastores attached to the host; no virtual machine is running from those datastores on this host... I'm trying to unmount the datastores but, as expected, I get the error indicating that the datastores are part of an SDRS cluster.

    Is it possible to remove the datastores from the host without disruption?

    Thank you

    Steve

    So, what you want is to remove some datastores from the SDRS cluster and move a host from one DRS cluster to another, is that it?

    Try the following steps:

    (1) To remove the datastores from the SDRS cluster, just move the datastores out of the SDRS cluster;

    (2) Move the host to the new target DRS cluster; if you have virtual machines running on it and cannot put the host into maintenance mode, you will need to disconnect the host, remove it from the inventory, and then add it again to the new cluster.

    (3) If you wish, add the datastores to the SDRS cluster of the new DRS cluster.

  • Collecting datastores in a provisioning script

    Hi all

    First of all, I want to say thank you for all the help provided in these communities.  It has been very valuable over the years.

    I have been working on a provisioning script for over a week now and have it almost ready for release, but I got stuck on a pretty basic element - collecting datastores.

    The idea is that we can use this to spin up several identical environments on demand - ripe for automation!

    We get the number of machines required in $clcount, and $dslist can be equal to something like this...

    Name       FreeSpaceGB   CapacityGB
    SAN-ds-3   3,399.11      5,119.75
    SAN-ds-4   1,275.26      5,119.75
    SAN-ds-2   661.813       5,119.75
    SAN-ds-5   292.342       5,119.75
    SAN-ds-8   273.204       5,119.75

    My method works as long as the number of machines is less than the number of available datastores, but fails if the number of machines exceeds the available datastores.

    $resources = Get-Cluster "Compute 1"
    $OSSpec = Get-OSCustomizationSpec "Base 2012 R2"
    $dslist = get-datastore | where {$_.Name -match "SAN" -and $_.FreeSpaceGB -gt 200} | Sort FreeSpaceGB -Descending
    $folder = Get-Folder "Lab 2"
    $clcount = "17"
    $envn = "Lab2-"
    $OSSpec = $OSSpec | New-OSCustomizationSpec -Name Temp-Spec -Type NonPersistent -Confirm:$false
    foreach ($num in 1..$clcount){
        $suffix = "{0:D2}" -f $num
        $datastore = $dslist[$num-1]
        $OSSpec = $OSSpec | Set-OSCustomizationSpec -NamingScheme fixed -NamingPrefix "APPCL$suffix"
        New-VM -Name $envn"APPCL"$suffix -Template $template -OSCustomizationSpec $OSSpec -Location $folder -ResourcePool $resources -Datastore $datastore
    }
    ##End build Client Machines
    $OSSpec | Remove-OSCustomizationSpec -Confirm:$false
    
    

    I know this would be easy to solve with datastore clusters and SDRS, but I believe that would always choose the datastore with the most free space, and as you can see our environment can be a little unbalanced, so I am trying to build a little more intelligence into distributing these machines across the datastores.

    Any help or pointers in the right direction would be greatly appreciated!

    Use % (the remainder after division) to transform $num into an index that is always less than the number of options:

    $datastore = $dslist[($num - 1) % $dslist.Count]

    Then change the sort to ascending.
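    The wrap-around behaviour of % can be sketched outside PowerCLI. This is only the indexing logic, shown in plain JavaScript with made-up datastore names, not the provisioning script itself:

```javascript
// Round-robin selection: the index wraps around when the number of
// machines exceeds the number of datastores (names are placeholders).
const datastores = ["SAN-ds-3", "SAN-ds-4", "SAN-ds-2"];

function pickDatastore(num, list) {
  // num is 1-based, like $num in the script; % keeps the index in range
  return list[(num - 1) % list.length];
}

const assignments = [];
for (let num = 1; num <= 7; num++) {
  assignments.push(pickDatastore(num, datastores));
}
console.log(assignments.join(", "));
```

    With 7 machines and 3 datastores, the 4th machine wraps back to the first datastore instead of indexing past the end of the list.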

  • Are VMFS3-formatted iSCSI datastores compatible with ESXi 5.5?

    Hi all

    I'm currently building an ESXi 5.5 environment to run in parallel with my existing 4.1 environment, and I need the new ESXi 5.5 servers to connect via iSCSI to a couple of VMFS3-formatted LUNs.

    I can see the LUNs via the iSCSI storage adapter, but when I try to mount one it asks me to format it first. I wanted to confirm whether anyone else has had this problem, or can confirm that VMFS3 is fully compatible with ESXi 5.5 connected via iSCSI.

    Thanks in advance.

    Looks like the MTU on my NICs was set too high. Once I lowered it to around 1500, the datastores began to present correctly.

    Thanks for the help, I think I have a handle on it.

    Thank you, MP.

  • Moving a large VM to new datastores

    I've read some of the other discussions on this topic... I have a slight twist... can someone offer some guidelines/options on the best way to move a very large virtual machine from a NetApp array to an EMC VNX array?  The virtual machine has 4 disks - 1+ to 2 TB each - and is currently running on an ESX 4.0 host.  We want to move the datastores and the guest to a new ESXi 5 host.  It can be done in 2 steps if necessary.  Downtime will be difficult and should be kept to a minimum.

    I have used vMotion and Converter.  A test vMotion generated a complaint about block size.  Converter is very slow and often times out.  If that complaint can be worked around, I can use the datastore browser to move the .vmdk files, then unregister/re-register the .vmx.  Slow perhaps, but it should be better than Converter.  Unfortunately, the 4 disks are all on separate datastores and therefore do not reside on the datastore containing the .vmx.  Can I still do this by editing the .vmx to point to the new disk locations?

    The reason I was checking whether Storage vMotion is possible is mainly to avoid outages. With an ESXi 5 host that has access to all the datastores, you can Storage vMotion the powered-on VM, no matter what the VMFS versions and block sizes are. However, keep in mind that migration between datastores with different block sizes is slower than migration between datastores with the same block size.

    André

  • Sorting datastores in a cluster by free space

    Hello, I am trying to sort the datastores in a cluster by the amount of free space available, so that I can return the cluster and a datastore to place a virtual machine on. So far, I have this:

    var datacenter = Server.findForType("VC:Datacenter", "vc06vc.lab.601travis.jpmchase.net/datacenter-22");

    var clusters = datacenter.hostFolder.childEntity;

    var dataStores = new Array();

    System.log(clusters);

    // Comparator: ascending by free space
    function sortDataStores(a, b) {
        return a.summary.freeSpace - b.summary.freeSpace;
    }

    for each (var cluster in clusters) {

        var clusterName = cluster.name;

        var clusterDataStores = cluster.datastore;

        System.log(clusterName + " " + clusterDataStores);

        var dataStoreRank = new Array();

        for each (var ds in clusterDataStores) {
            dataStoreRank.push(ds);
        }

        // Sort this cluster's datastores by free space
        dataStoreRank.sort(sortDataStores);

        dataStores.push(dataStoreRank);
    }

    but I get a syntax error when I run it in the vCO CLI

    Rather than troubleshoot your code, I thought I should share code I've had for a few years. I've just added it to the Documents tab of this community, here: example datastore-sorting action and workflow
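    The sorting that the snippet above is attempting can be illustrated in plain JavaScript; the datastore objects and free-space numbers here are invented, mimicking the vCO summary.freeSpace property:

```javascript
// Sort datastore-like objects by free space, descending, so the
// first element is the candidate with the most room (sample data).
const dataStores = [
  { name: "ds-a", summary: { freeSpace: 200 } },
  { name: "ds-b", summary: { freeSpace: 900 } },
  { name: "ds-c", summary: { freeSpace: 500 } },
];

function byFreeSpaceDesc(a, b) {
  return b.summary.freeSpace - a.summary.freeSpace;
}

// slice() copies the array so the original order is preserved
const ranked = dataStores.slice().sort(byFreeSpaceDesc);
console.log(ranked.map(ds => ds.name).join(", "));
```

    Defining the comparator once, outside any loop, and having it return the difference of the two free-space values avoids one likely source of the syntax error in the original script.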

  • VDP 5.5: unable to obtain performance analysis data for datastores

    Hi all

    I have two VDP appliances running version 5.5.5.180.

    All of a sudden I cannot connect to the Web Client for either appliance.

    VDP status shows all services green on both VDP appliances.

    root@vdp:~/#: dpnctl status all
    Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
    dpnctl: INFO: gsan status: up
    dpnctl: INFO: MCS status: up.
    dpnctl: INFO: Backup scheduler status: up.
    dpnctl: INFO: axionfs status: up.
    dpnctl: INFO: Maintenance windows scheduler status: enabled.
    dpnctl: INFO: Unattended startup status: enabled.
    
    

    Clicking on the Storage tab displays the error message "Unable to obtain performance analysis data for datastores", and no datastores are listed.

    • Restarting vCenter and the VDP appliances doesn't change anything.
    • I can connect to the VDP appliances just fine.
    • Checkpoints are being created.

    I found similar topics but no answers... (VDP 5.5 ERROR)

    I opened a support case, and it turned out that the password of the VDP user (a user defined in the vsphere.local domain) that was used to access vCenter had expired. Apparently, there's a bug in some vCenter versions that makes these passwords expire after 65 days.

  • Datastore heartbeating

    If a virtual machine cannot access the datastores used for heartbeating, is it possible that it would raise an alarm such as "HA virtual machine monitoring error" with the reason "VMware Tools heartbeat failure"?

    Wow... several topics in one question

    The datastore heartbeat is used by the Storage DRS cluster: it checks whether a datastore is offline or nearly full, and would then try to migrate to another DS. The VMware Tools heartbeat failure you are receiving is part of the HA [tools only] VM monitoring options.

    The "VM HA monitoring error" is the result of HA detecting that it is not receiving the VMware Tools heartbeat from the guest, which could be offline because the datastore is missing or was detached by someone.

  • Evaluating vCOPS 5.7.1 - did they add a way to exclude specific datastores?

    Hello

    I tried vCOPS in the past and I know this was not possible then.

    I have just started a new trial, but I still want to exclude all the local datastores, plus a couple more that come from a file server that is always 99% full. These datastores are not used for virtual machines and shouldn't affect the calculations in vCOPS.

    Is there a way to exclude them from the calculations? Thank you.

    Put all your local datastores in vCenter into a folder called "LOCALS" (or anything else you like) - this folder will then also be visible to vCOPS (it may take 5 minutes). Then in vCOPS create a new policy as a clone of "Ignore these objects". On the association page choose "Folder" and add your "LOCALS" folder. Check "current effective policy"; it must show "ignore objects".

    Now, whenever you add a host to vCenter, make sure you put its local datastore into the "LOCALS" folder. This way they will be excluded from the calculations.
