Sharing NFS on RDM

Hello

I have to come up with an interim design, and I was wondering whether the approach below is the obvious one.

The idea is that the data should be accessible by several virtual machines (web server nodes) if needed; however, a single node is fine for now.

I created a virtual RDM (1 TB on the SAN) and mapped it to my VM. It is formatted with ext3.

What I might need to do in the future is give other nodes access to this data.

So what I thought I would do, when this need arises, is export this RDM as an NFS share and mount the additional VMs on that share.

My question is: what would be the best solution?

- Leave the RDM mapped to VM1, create the NFS share on it, and point the other web nodes at that share? Is there a problem with one node writing directly while the others write over NFS (locking, corruption)?

- Map the RDM to a dedicated NFS server and export it to all the web nodes?

The main reason for choosing the RDM is that it has to stay flexible, because we don't really have a final design at this point. And the dedicated NFS server could be a hardware solution.

Are there any pitfalls regarding VMotion and RDMs in this scenario?

See you soon

Hello

What I might need to do in the future is give other nodes access to this data.

So what I thought I would do, when this need arises, is export this RDM as an NFS share and mount the additional VMs on that share.

This is the best solution, because mapping an RDM to multiple virtual machines requires a clustered file system, and ext3 is not a clustered file system.

My question is: what would be the best solution?

- Leave the RDM mapped to VM1, create the NFS share on it, and point the other web nodes at that share? Is there a problem with one node writing directly while the others write over NFS (locking, corruption)?

It is a good solution.

- Map the RDM to a dedicated NFS server and export it to all the web nodes?

I think this is the better solution, because you can tune the NFS server independently of the web servers. Web servers and NFS servers are often tuned in different ways for optimal performance. In addition, you could eventually present the LUN to a physical NFS server as well.

The main reason for choosing the RDM is that it has to stay flexible, because we don't really have a final design at this point. And the dedicated NFS server could be a hardware solution.

RDM is a great solution for this.

Are there any pitfalls regarding VMotion and RDMs in this scenario?

Make sure the RDM is a "virtual RDM" and you will have no problems. Physical mode might work, but virtual mode is better.

Best regards

Edward L. Haletky

VMware communities user moderator

====

Author of the book "VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers", Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

Tags: VMware

Similar Questions

  • Shared NFS on Ubuntu

    Hello

    I use an Ubuntu 10.04 box to host the NFS shares for my ESXi nodes.

    I did all the NFS configuration in Ubuntu and added the NFS share to the hosts' storage section in vCenter Server.

    When I try to create a new virtual machine on one of the NFS datastores, or storage-migrate a virtual machine's data onto it, I get an error that says "cannot access the file...".

    I can browse NFS content, but it seems that it cannot write to it.

    In Ubuntu, I granted write permission on this NFS export.

    Does anyone know the solution?

    Thank you

    Ali

    You need no_root_squash, not root_squash (which is the default).
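    For illustration, a minimal /etc/exports entry on the Ubuntu box might look like this (the path and subnet are examples, not taken from the thread):

        # /etc/exports on the Ubuntu NFS server
        /srv/esxi-datastore 192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check)

        # apply the change without restarting the service
        sudo exportfs -ra

    Without no_root_squash, writes from the ESXi vmkernel (which connects as root) get remapped to the anonymous user, which is why browsing works but writing fails.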

  • Unable to create an RDM on an NFS share - function not implemented

    Hi all

    I have a problem creating a raw device mapping on an NFS share. I get the error "Failed to create rawdisk: Function not implemented (2490377)" when using vmkfstools:

    /usr/sbin/vmkfstools -z /vmfs/devices/disks/naa.60a9800050336d755a5a5a3473355659 /vmfs/volumes/VMDATANFS/eea-cl1-rdm/eeacl1-sbd1.vmdk

    I tried several suggestions from the forums, such as creating the mapping file in the right directory, but it just doesn't work.

    The environment is a fresh vSphere 4.1 installation whose NFS storage is provided by a NetApp FAS2040. For a virtualized Linux cluster we want/need to create a bundle of RDM disks. When I add a disk in the vSphere Client (via vCenter, or connected directly to the ESX host) and select Raw Device Mappings, then select the exposed LUN, I have to specify the datastore, but I can only select "local" and not the NFS share my VM runs on. The NFS datastore simply does not appear in the list, nor can I select "Store with Virtual Machine" (the virtual machine is sitting on the NFS datastore, of course).

    To test, I recreated the NFS datastore with a lowercase name; that does not work either.

    I also created a VMFS volume on the NetApp and attached it to the host through iSCSI. This volume does appear in the datastore list for storing the RDM when using vCenter to add an RDM disk to a virtual machine, and I can also use the vmkfstools command line successfully this way.

    I would think an RDM mapping file ought to be supported on an NFS share, right?

    Thanks in advance!

    Welcome to the forums!

    You need a LUN or something similar that is accessible through SCSI (such as iSCSI). NFS is not supported for this feature.

    http://pubs.VMware.com/vsp40u1/wwhelp/wwhimpl/js/html/wwhelp.htm#href=server_config/c_limitations_of_raw_device_mapping.html#1_12_9_7_8_1
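    As a sketch of the usual workaround (which matches what the poster found): keep the RDM mapping file on a small VMFS datastore, while the VM itself can stay on NFS. The VMFS datastore name below is an example:

        # create the RDM pointer on a VMFS volume, not on the NFS datastore
        /usr/sbin/vmkfstools -z /vmfs/devices/disks/naa.60a9800050336d755a5a5a3473355659 \
            /vmfs/volumes/VMFS_DS/eea-cl1-rdm/eeacl1-sbd1.vmdk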

    AWo

    VCP 3 & 4

    \[:o]===\[o:]

    = You want to have this ad as a ringtone on your mobile phone? =

    = Send 'Assignment' to 911 for only $999999,99! =

  • Creating a template on an NFS share with ESXi 4.1

    Hi all

    I would like to build a template from a Debian 6 server installation and keep the file on an NFS share so I can copy it back when needed.

    Currently, creating the template and uploading it to the NFS share is not a problem.

    The trouble is that as soon as I want to bring the *.vmdk image back onto one of the servers, the copy takes a crazy amount of time.

    Uploading the image took me 2 minutes (stopwatch in hand), using the following command:


    vmkfstools -i /vmfs/volumes/datastore1/template/template.vmdk -d thin /partagenfs/template.vmdk

    Downloading the image back uses the following command:


    vmkfstools -i /partagenfs/template.vmdk -d thin /vmfs/volumes/datastore1/serveurXYZ/template.vmdk

    And the download I launched 1 h 30 ago is still not finished.

    Does anyone have an idea?

    Thank you very much.

    Hi Hekeo88,

    Welcome to the VMware forums.

    Why not create a datastore on your NFS share? That way you will be operating under standard conditions and it should work much better. That assumes, of course, that your NFS storage makes sense in terms of performance.
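    A sketch of that approach, assuming command-line access to the ESXi 4.1 host (the NFS server address and datastore label are examples):

        # mount the NFS export as a regular datastore
        esxcfg-nas -a -o 192.168.1.50 -s /partagenfs nfs_templates

        # then clone between datastores as usual
        vmkfstools -i /vmfs/volumes/nfs_templates/template.vmdk -d thin \
            /vmfs/volumes/datastore1/serveurXYZ/template.vmdk

    Once the share is a datastore, both ends of the copy go through the vmkernel storage stack instead of a slow ad-hoc mount.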

    Good luck!

    @+

    Franck

  • Cannot delete files on an NFS share after installing Forefront 2010

    After installing Forefront Endpoint Protection 2010, I am unable to delete files from a Server 2003 NFS share. Before installing it I had no problem. I removed Forefront as a test and was able to delete the file; after reinstalling Forefront I was not able to delete the file again.

    You have posted in the forum for Microsoft Security Essentials. Forefront Endpoint Protection has its own forum.

    http://social.technet.Microsoft.com/forums/en-us/FCSNext/threads

    I hope this helps.

  • NFS share on Win2k8

    I created a Windows 2k8 NFS share. I am able to add the storage without any problem. The problem comes when I try to create a virtual machine. It all starts, but then fails, stating that the vmx file already exists. I browse the datastore and find that it did begin to create the files needed for the virtual machine, but never completed.

    I then try to delete them in the datastore browser, and it says it can't remove the files. I have to connect to the storage server and change the permissions to give the Admin full control before I can delete the files.

    Has anyone run into similar problems?

    I have the following setup on my NFS share.

    Encoding: ANSI

    Allow anonymous access

    anonuid:-2

    anongid:-2

    permissions:

    ESX1 enabled root r/w access

    ESX2 enabled root r/w access

    Security on the nfs root folder:

    ANON: Full Control

    Thanks for any info!

    Hello

    I strongly recommend that you change your permissions configuration.

    permissions:

    ALL MACHINES allowed root access

    The reason is:

    When ESX mounts the NAS share, it is the vmkernel IP that connects to the NAS server - not the hostname (esx1 or esx2).

    The hostname resolves to the console OS IP.
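    For illustration, with Windows Services for NFS this can also be set from the command line; a sketch, where the share name and path are examples, not taken from this thread:

        rem grant read-write and root access to ALL MACHINES, keep anonymous access enabled
        nfsshare nfsroot=D:\NFS -o anon=yes -o rw -o root

    Specifying rw and root without a host list applies them to all machines, which is the setting recommended above.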

    Please let me know if it is unclear

  • Shared storage via NFS for RAC

    Hello

    I intend to set up a 2-node RAC. However, I only have internal disks to use and no NAS/SAN. So I intend to share some of the spare capacity I have on the internal disks of nodes 1 & 2 and build shared storage using NFS. However, I have a few questions about it.

    Q1. Since the NFS mount will effectively be file-system storage, does this mean that for the clusterware storage and the RAC database I can only use the "shared file system" option and not "ASM"?

    What is confusing me is that I think I read that ASM disks can be whole physical disks, disk partitions, SAN LUNs, or NFS files?

    If that is the case, it suggests that I can create an ASM disk based on an NFS mount?

    I don't know how to do this - any help or advice would be appreciated, because ideally I would prefer to create the candidate ASM disks on the shared NFS disk.

    Thank you

    Jim

    You can use NFS as your clustered file system, but if you want to use ASM on top of it, that is no problem: use dd to create large files on the shared file system, for example,

    dd if=/dev/zero of=/mnt/nfs/asmdisk1 bs=1048576 count=2048

    and set your asm_diskstring parameter to point to them (or specify them as the devices to be used at installation time). It is very easy to set up and is fully supported by Uncle Oracle. I recorded a few demos that include this a year or two ago: Oracle ASM free tutorial.
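    A minimal sketch of the remaining steps, assuming the export is mounted at /mnt/nfs (the server and path are examples; the mount options are the ones commonly recommended for Oracle files over NFS, so check your platform's documentation):

        # mount the shared export with Oracle-friendly options
        mount -t nfs -o rw,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,actimeo=0 node1:/u02/nfs /mnt/nfs

        # point ASM at the disk files
        sqlplus / as sysasm <<'EOF'
        ALTER SYSTEM SET asm_diskstring='/mnt/nfs/asmdisk*' SCOPE=BOTH;
        EOF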

    --

    John Watson

    Oracle Certified Master s/n

  • SLES + OCFS2 shared physical RDM

    I intend to install a 3-node SLES Linux OCFS2 cluster using an RDM in physical mode.

    I did this once before with two VMs: I set the SCSI bus sharing to physical on the 1st virtual machine, and then for the 2nd server selected the 1st VM's mapping VMDK file. This gave the desired effect of allowing the physical RDM to be shared, but it meant that vMotion was not possible for either of the two servers.

    What I intend to do this time is configure all 3 VMs the same way, each with the physical RDM attached but without bus sharing enabled on the SCSI bus. This is possible, I think, by setting config.vpxd.filter.rdmFilter to false before adding the RDM to each virtual machine (see the sketch below). Anti-affinity rules will then be set to stop the 3 VMs running on the same host. This should allow OCFS2 sharing of the RDM as well as vMotion of the virtual machines.
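    For reference, that filter is a vCenter Server advanced setting; a sketch of the key/value pair to add (vSphere Client: Administration > vCenter Server Settings > Advanced Settings):

        # vCenter advanced setting that disables the RDM LUN filter
        config.vpxd.filter.rdmFilter = false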

    I think it will work. Can someone tell me if there are problems with it? Is this a supported configuration?

    Thank you

    Neil.

    More than 2 nodes is not supported by VMware. OCFS2 itself supports more than two nodes; it was popular with Oracle RAC until everyone opted for ASM. VMware does not spell out exactly what the non-MSCS clustering support or requirements are; most statements are based on the limitations of MSCS clusters in VMware. I am about to implement a number of clusters for a customer with OCFS2, but we will be limited to two nodes. We were going to just use FT for our high-availability needs, but with the guest operating system as a single point of failure it failed to meet the application SLAs.

  • NFS datastore on vCenter ESXi hosts

    I'm running out of space on our NAS storage.

    Another unit would be very costly.

    It seems easy enough to add another datastore from an NFS share.

    Are there problems I should be aware of if I were to add an NFS share from a physical Windows 2008 R2 server?

    I understand that, since the share is on a SATA drive, performance will be fairly poor.

    I was thinking of it more just for backup.

    Can snapshots be taken of VMs that are located on a VMFS5 datastore and stored on an NFS datastore?

    There will be no problems with adding NFS shares from different targets to the ESXi hosts.

    However, putting snapshots on a different storage tier is not a good idea. Snapshots in VMware products work as a chain. When you create a snapshot, only the modified data is written to and read from the snapshot/delta files. The parent files are still in use, but opened read-only.

    Remember, a snapshot is not a backup!

    André

  • Mounting NFS using CSV?

    All I'm trying to do is change a well-known NFS script from the internet to use a CSV file rather than hard-coding all the data in the PS script. Once I import my CSV, the variables do not populate correctly. I know it's probably easy, but PS is not my strong hand. Can someone point me in the right direction? Thank you!

    # User variables: adjust for the environment
    ###################################

    # Load the VMware snap-in
    if (-not (Get-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue)) {
        Add-PSSnapin VMware.VimAutomation.Core
    }

    # Define the location of the credentials file
    $credsFile = "C:\Scripts\creds.crd"

    # Import the credentials file
    $Creds = Get-VICredentialStoreItem -File $credsFile

    # Cluster name
    $clusterName = "*MaintenanceCluster"

    I WANT TO ELIMINATE THIS PART TO USE CSV.

    # NFS host IP
    #$nfsHost = "nfshost.domain.com"

    # Name of the NFS share
    #$nfsShare = "/vol/atl_vsphere_nfs_isos"

    # New datastore name
    #$nfsDatastore = "atl_vsphere_nfs_isos"

    $import = Import-Csv "C:\Scripts\datastorelist.csv"

    ####################################################################################################################
    # Start of execution
    ####################################################################################################################

    # Connect to vCenter using credentials provided in the credentials file
    Connect-VIServer -Server $Creds.Host -User $Creds.User -Password $Creds.Password -WarningAction SilentlyContinue | Out-Null

    echo "Connected to vCenter."
    echo "Adding the NFS share to the ESXi hosts."

    foreach ($esx in Get-Cluster -Name $clusterName | Get-VMHost | Sort-Object Name)
    {
        $esx | New-Datastore -Nfs -Name $nfsDatastore -NfsHost $nfsHost -Path $nfsShare
        echo "NFS share added to $esx"
    }

    echo "Completed."

    Disconnect-VIServer -Server $Creds.Host -Force -Confirm:$false

    Try like this

    It assumes that your CSV file looks like this:

    "nfsHost","nfsShare","nfsDatastore"
    "srv1","share1","datastore1"
    "srv1","share2","datastore2"
    "srv2","share4","datastore3"

    # User variables: adjust for the environment
    ###################################

    # Load the VMware snap-in
    if (-not (Get-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue)) {
        Add-PSSnapin VMware.VimAutomation.Core
    }

    # Define the location of the credentials file
    $credsFile = "C:\Scripts\creds.crd"

    # Import the credentials file
    $Creds = Get-VICredentialStoreItem -File $credsFile

    # Cluster name
    $clusterName = "*MaintenanceCluster"

    I WANT TO ELIMINATE THIS PART TO USE CSV.

    # NFS host IP
    #$nfsHost = "nfshost.domain.com"

    # Name of the NFS share
    #$nfsShare = "/vol/atl_vsphere_nfs_isos"

    # New datastore name
    #$nfsDatastore = "atl_vsphere_nfs_isos"

    ####################################################################################################################
    # Start of execution
    ####################################################################################################################

    # Connect to vCenter using credentials provided in the credentials file
    Connect-VIServer -Server $Creds.Host -User $Creds.User -Password $Creds.Password -WarningAction SilentlyContinue | Out-Null

    echo "Connected to vCenter."
    echo "Adding the NFS shares to the ESXi hosts."

    Import-Csv "C:\Scripts\datastorelist.csv" | %{
        foreach ($esx in Get-Cluster -Name $clusterName | Get-VMHost | Sort-Object Name)
        {
            $esx | New-Datastore -Nfs -Name $_.nfsDatastore -NfsHost $_.nfsHost -Path $_.nfsShare
            echo "NFS share added to $esx"
        }
        echo "Completed."
    }

    Disconnect-VIServer -Server $Creds.Host -Force -Confirm:$false

  • Adding an NFS datastore to all hosts

    I would like to add an NFS datastore to all my hosts, which are spread across different datacenters in vCenter. Is there any way I can change this script to make that work, with a list of all hosts or maybe a list of datacenters?
    We have more than 50 datacenters and I don't want to have 50 different scripts.

    # User variables: adjust for the environment
    ###################################

    # Load the VMware snap-in
    if (-not (Get-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue)) {
        Add-PSSnapin VMware.VimAutomation.Core
    }

    # Define the location of the credentials file
    $credsFile = "C:\Scripts\creds.crd"

    # Import the credentials file
    $Creds = Get-VICredentialStoreItem -File $credsFile

    # Datacenter name
    $datacenterName = "Ourdatacentername"

    # NFS host IP
    $nfsHost = "xx.xx.xx.xx"

    # Name of the NFS share
    $nfsShare = "oursharename"

    # New datastore name
    $nfsDatastore = "temp"

    ####################################################################################################################
    # Start of execution
    ####################################################################################################################

    # Connect to vCenter using credentials provided in the credentials file
    Connect-VIServer -Server $Creds.Host -User $Creds.User -Password $Creds.Password -WarningAction SilentlyContinue | Out-Null

    echo "Connected to vCenter."
    echo "Adding the NFS share to the ESXi hosts."

    foreach ($esx in Get-Datacenter -Name $datacenterName | Get-VMHost | Sort-Object Name)
    {
        $esx | New-Datastore -Nfs -Name $nfsDatastore -NfsHost $nfsHost -Path $nfsShare
        echo "NFS share added to $esx"
    }

    echo "Completed."

    Disconnect-VIServer -Server $Creds.Host -Force -Confirm:$false

    Try the attached script

  • NFS datastore issue


    Hello

    We have a couple of ESX clusters which, for the most part, connect to FC storage. However, each also connects to a single NFS store, hosted on a Windows server at its respective site.

    Site1ESXCluster connects to the Site1WinNFSStore

    Site2ESXCluster connects to the Site2WinNFSStore

    Last week we had network problems that appear to have caused the NFS shares to become inactive on the ESX hosts:

    inactive.JPG

    I assumed the mounts had gone stale and restarted the servers, but still no luck.

    I unmounted the share and tried to remount it, but I get this error:

    NFSerror.JPG

    The Windows servers receive the request and log an event saying the mount operation succeeded for <ESXIP>.

    If I try to add the NFS share Site1WinNFSStore to Site2ESXCluster, it works fine, and vice versa. But they just will not connect to the stores they once had.

    I also connected a non-clustered ESX host to the store in question, with the same result.

    There are no access restrictions on the NFS stores on the Windows servers.

    It is almost as if the ESX hosts refuse to connect to a store once it has had a problem!

    Any ideas?

    I know it sounds crazy, but have you restarted the Windows NFS servers?

    I had a similar problem with Windows NFS shares mounted from a CentOS 6.4 box.

    In that case, a loss of network connectivity caused a stale mount on the CentOS system.

    The best way to clean up was to restart both the CentOS system and the Windows NFS server.

    If you haven't tried it, it may be worth a go.
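    If a restart alone doesn't clear it, it may also be worth forcing a clean unmount/remount from the ESX side. A sketch using the datastore name from this thread (the server address and export path are examples):

        # remove the stale mount, then re-add it
        esxcfg-nas -d Site1WinNFSStore
        esxcfg-nas -a -o site1winnfs.example.com -s /NFSStore Site1WinNFSStore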

    Regards

  • NFS share gets "(1)" appended when adding a new host to an existing data center

    I'm sure it's a simple answer, but I've been beating my head against the wall trying to find a solution. The scenario is as follows:

    1. vCenter has several hosts in different clusters, all built at the same time, with one NFS share that is shared among all of them.

    2. I add a new host to the data center (I tried it standalone and also added to a cluster).

    3. The new host sees its local disk as a datastore, as expected, but I still have to add the shared NFS datastore.

    4. At the host level, I go to Configuration > Storage > Add Storage and add the NFS share.

    It shows up and mounts normally, but with "NFS (1)" as the name. I can't rename it, as it says that "NFS" already exists. I understand that it exists in the vCenter DB, but how do I add this new host to an existing environment and keep the name of the NFS share? vCenter 5.5, ESXi 5.5.

    Thanks for your help

    Neil

    Finally found the problem.

    When mounting the datastore I had used FQDN.domain.com /Datashare.

    I needed to use FQDN.DOMAIN.COM /Datashare.

    The case difference in the fully qualified domain name caused a variance in the symbolic name, which is what caused the problem.
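    In other words, the datastore identity is the exact server/path string. A sketch of adding the share from the host command line with a consistent server name (names taken from this post):

        # the server string must match the existing hosts character-for-character
        esxcfg-nas -a -o FQDN.DOMAIN.COM -s /Datashare NFS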

  • Cannot mount Debian NFS on ESXi 5.1 - please help

    I've tried everything. I'm extremely frustrated at being stuck on this beginner step...

    The idea was to install a Debian box for NFS and use the NFS share as a datastore in ESXi.

    Installed Debian, set up nfs-kernel-server.

    Tried to add it through vSphere.

    I get the following message:

    Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "192.168.1.95" failed.
    NFS mount 192.168.1.97:/esxi failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.

    I've looked everywhere and found very little on this subject... I think it's pretty beginner-level, so no one goes in depth on how to do it step by step... I think the share is good and the permissions are good, but I don't know how to check that from either the host side or the client side...

    Any help would be appreciated... I have tried a lot of online how-tos with no luck.

    Jon

    What is the absolute path to /esxi?

    You said "my root" above, and I assumed that to mean that esxi is a directory directly under /, but is that correct? It looks like maybe not.
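    For what it's worth, a minimal sketch for verifying the Debian side (the export path and subnet are examples):

        # /etc/exports on the Debian box
        /esxi 192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check)

        # re-export and check what is actually offered
        exportfs -ra
        exportfs -v
        showmount -e localhost

    If showmount lists the export but ESXi is still denied, the client restriction (the subnet above) is the usual suspect.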

  • ESXi 5.0 servers don't recognize the same NFS share as the same datastore

    I have a vSphere 5.0 setup running in VMware Workstation 8.  I have two ESXi servers that are mounting the same NFS share (same IP address and same directory path), but for some reason they don't recognize it as the same datastore.  I deleted and re-added the datastore and checked that I am using the same path on both nodes.  However, vSphere won't let me use the same name when I mount the NFS share as a datastore on the second node: vSphere changes the name from "NFSdatastore" to "NFSdatastore (1)" and will not let me change it, saying the name already exists.

    The NFS file server is an OpenFiler 2.99 VM with IP 192.168.0.100 (in DNS as "openfiler1").  The path is "/mnt/vsphere/vcs/datastore".

    The NFS mount works great as a datastore on the two ESXi servers, except for the fact that vSphere sees two datastores.  I have a Red Hat VM running on each of them, and when I browse each ESXi server's datastore, I can definitely see both VMs' configs, as well as a few ISO images that I put on the NFS share.  However, I can't vMotion, as it complains about not being able to access the configuration file and the virtual disk file (vmdk) of the other node.

    On the OpenFiler, the directory /mnt/vsphere/vcs/datastore is owned by "ofguest" in the "ofguest" group, and the permissions are "drwxrwsrwx+".  All files under the directory are also owned by "ofguest" in the "ofguest" group.  The .vmx files have permissions "rwxr-xr-x+", and the .vmdk files have permissions "rw..."

    I have done a lot of Google searching, read the best practices, etc.  Any help is appreciated.

    UPDATE: Figured it out - a trailing / on one of the NFS mount paths.  DOH!
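    For anyone hitting this later: a quick way to spot such a mismatch on ESXi 5.x is to compare the NFS mount table on both hosts; the Host and Share columns must match character-for-character:

        esxcli storage nfs list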

    Maybe this can help you:

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=1005930

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=1005057

    Regards
