Re-mapping an MSCS RDM quorum disk

I have a problem with an Exchange 2003 MSCS cluster on VMware: VM1 (active), VM2 (passive).

VM2 was powered off and will not power back on, failing with the error:

Virtual disk 'X' is a direct access mapped LUN which is not accessible

I've referenced this KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1016210

and contacted VMware support.

We discovered that the vml identifier in the RDM mapping file is completely different from the vml identifiers on all 4 hosts for that RDM; in fact, the vml in the RDM mapping file does not exist on any of the hosts.

VM2 is currently offline and has had its RDM mapping deleted, and support advised the following steps:

1. Power off both nodes.

2. On node 1, note the SCSI controller that is used for the RDM (disk 2).

3. Remove the RDM from the virtual machine and delete the disk (this removes the pointer file to the RDM; it does not erase the data).

4. Add the RDM back to node 1, using the same SCSI controller (which I think is SCSI 1:1) and physical bus sharing.

5. Add the RDM to node 2, but this time add the LUN as an existing disk, using the pointer file created in step 4.

6. Power on node 1.

7. Power on node 2.
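
Before starting (and again after step 4), it may help to confirm which vml/naa identifier the pointer file actually references and that every host presents that LUN. A minimal PowerCLI sketch, assuming hypothetical VM, disk, and cluster names:

$vm  = Get-VM -Name "exch-node1"
$rdm = Get-HardDisk -VM $vm -DiskType RawPhysical,RawVirtual |
    Where-Object { $_.Name -eq "Hard disk 2" }
# DeviceName holds the vml identifier recorded in the mapping file
$rdm | Select-Object Filename, DeviceName, ScsiCanonicalName
# Confirm every host in the cluster presents the same LUN
foreach ($esx in Get-Cluster "ProdCluster" | Get-VMHost) {
    Get-ScsiLun -VmHost $esx -CanonicalName $rdm.ScsiCanonicalName |
        Select-Object @{N = "Host"; E = { $esx.Name }}, CanonicalName, ConsoleDeviceName
}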

I can see the logic, but I'm worried because this RDM is the quorum disk for the cluster, so if there are any issues/differences when VM1 starts up with the new mapping, the cluster will not start and we will lose Exchange. (The other resources in the cluster are NetApp iSCSI LUNs connected with SnapDrive inside the virtual machine.)

Has anybody had to do something similar?

Would taking a copy of the VM directory before doing any unmapping or removal on VM1 provide a rollback option?

any advice appreciated,

GAV

I did something similar, and the process they describe is what you need to do.

I also did it without deleting the pointer on the first VM, but deleting the pointer won't matter.

The disk is already signed, and the signature lives on the raw LUN, not in the pointer file.
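
On the rollback question: yes, copying the VM directory preserves the .vmx and the RDM descriptor files, which is all the remap touches. A minimal PowerCLI sketch, with hypothetical datastore, folder, and file names:

$ds = Get-Datastore -Name "lun1"
New-PSDrive -Name ds -PSProvider VimDatastore -Root "\" -Location $ds | Out-Null
# Copy only the small descriptor/config files; the -rdmp mapping file is
# recreated anyway when the RDM is re-added in step 4
Copy-DatastoreItem -Item ds:\exch-node1\exch-node1.vmx -Destination C:\backup\
Copy-DatastoreItem -Item ds:\exch-node1\exch-node1_2.vmdk -Destination C:\backup\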

Tags: VMware

Similar Questions

  • MSCS RDM disks offline after VMware host and firmware patching and multipath IOPS/s change - SQL MSCS RDM disks

    Hello

    Yesterday we installed a fifth ESXi host at our development center, and along with that we also had a maintenance window for the other hosts.

    We did the following:

    • HP ProLiant Service Pack upgrade to 2014.06.0_784915_001. We have HP BL460c Gen8 servers. The old SPP dated from September 2013.
    • Installed patches for ESXi 5.1 U1, the latest 10-15 patches since February. Did not install 5.1 U2.
    • Upgraded VMware Tools on the virtual machines.
    • Changed the multipath IOPS/s for HP 3PAR from 100 to 1 (esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule").

    After this maintenance our little MSCS SQL cluster stopped working. It does not start because it cannot access the drives. I can see the drives in Disk Management, but they are all in an offline state. I can't do anything with them, except for a single disk. That one is online!

    Does anyone here have any idea what on Earth is wrong? If I don't solve it by tomorrow I will have to open some sort of case with Microsoft or VMware. I have not tried to remove/re-add the RDM disks.

    Windows 2008 x64 Enterprise

    SQL 2008 R2

    ESXi 5.1 U1 (latest patches)

    Thank you!

    Finally solved with assistance from HP support.

    Solved per this VMware KB: Changing a LUN to use a different Path Selection Policy (PSP)
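
    For reference, a minimal PowerCLI sketch of that kind of change, with a hypothetical canonical name for one cluster LUN (MSCS RDMs were not supported on Round Robin on vSphere 5.1; Fixed, shown here, and MRU were the usual choices):

    # Hypothetical LUN ID; repeat for each MSCS RDM LUN
    $lunId = "naa.60002ac0000000000000000000001234"
    foreach ($esx in Get-VMHost) {
        $lun = Get-ScsiLun -VmHost $esx -CanonicalName $lunId
        # Fixed requires a preferred path; the first active path is used here
        $lun | Set-ScsiLun -MultipathPolicy Fixed -PreferredPath (Get-ScsiLunPath -ScsiLun $lun)[0]
    }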

    The "funny", it is that the cluster has been around for 4-5 years and works on EVA SAN and now 3PAR with RR as a policy path, and all of a sudden it stopped working. Perhaps a host profile political trigged in, I don't know. And I could also see and access 5/6 disks, why not 6? And was on the way from RR.

    I asked HP about this.

    Thank you for all your help.

  • SRM Recovery Plan with non-replicated virtual RDM disks

    We are trying to do an SRM test with a virtual machine that has virtual RDM disks attached; however, these volumes are not replicated with RecoverPoint, since they already exist on the DR side. So basically we are already replicating the virtual machine and its virtual disks, but these 3 RDMs sit outside of that. The reason is that they are massive volumes that had already been replicated using another product before we virtualized, and there is no need to replicate this data again because the volumes are never going to change.


    Then...

    Within SRM, when you look at the Protection Groups tab, we see a warning "Not configured VMs: 1" because SRM/RecoverPoint believes this virtual machine has volumes that are not replicated.

    Is there any way to exclude these, or to tell SRM to ignore them, since we will manually add the RDMs on the DR side ourselves? Also, on the DR side I don't see the VM under Hosts and Clusters the way I see the other replicated virtual machines; is that because SRM thinks it isn't fully replicated? Or is there a manual step where you need to create a virtual machine on the DR side and disable this option in order to see it? Also, we do see the VM listed on the Virtual Machines tab for this particular protection group, the other two VMs residing on the same LUN appear as well, and I have already tested those two machines successfully.

    Here are the RDM settings in the .vmx file:

    scsi0:4.fileName = "server_3.vmdk"
    scsi0:4.mode = "persistent"
    scsi0:4.ctkEnabled = "FALSE"
    scsi0:4.deviceType = "scsi-hardDisk"
    scsi0:4.present = "TRUE"
    scsi0:4.redo = ""

    scsi0:5.fileName = "server_4.vmdk"
    scsi0:5.mode = "persistent"
    scsi0:5.ctkEnabled = "FALSE"
    scsi0:5.deviceType = "scsi-hardDisk"
    scsi0:5.present = "TRUE"
    scsi0:5.redo = ""

    scsi0:6.fileName = "server_5.vmdk"
    scsi0:6.mode = "persistent"
    scsi0:6.ctkEnabled = "FALSE"
    scsi0:6.deviceType = "scsi-hardDisk"
    scsi0:6.present = "TRUE"
    scsi0:6.redo = ""

    Hello

    You do not see the VM on the DR side (usually it's called a shadow or placeholder VM) because the VM is currently not protected by SRM. You cannot create it manually.

    What you can do is run the "Configure Protection" wizard for your virtual machine. One of the steps, "Storage Options" or something like that, allows you to select specific disks and mark them as detached, which lets you protect a machine that has non-replicated virtual disks.

    Michael.

  • Virtual disk "Hard disk 3' is a direct access mapped lun that is not accessible

    I am currently in the process of upgrading our cluster from ESX 4.0 U1 to 4.1. In the process, I came across a 2-node physical MSCS cluster with RDMs. I can stop the VMs and vMotion them between the 4.0 hosts. But as soon as I upgrade hosts to 4.1, I cannot vMotion the VMs to those 4.1 hosts. The error I get is: "Virtual disk 'Hard disk 3' is a direct access mapped LUN which is not accessible". I found KB article http://kb.vmware.com/kb/1016210

    which talks about ensuring that the LUN number is the same in the presentation to all ESX hosts, and I checked: that is the case. Moreover, as I said, vMotion still works to the hosts I have not yet upgraded. I have not tried unpresenting all the LUNs and re-presenting them; I don't know if I should try that. Looking for suggestions.

    Thank you
    greendxr

    You have to power off the node, remove the RDM disks, migrate the virtual machine, and then re-add the RDM disks (a PowerCLI sketch follows below).

    The RDM LUNs must be presented to all hosts.

    André
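
    A hedged sketch of that remove/migrate/re-add sequence in PowerCLI, assuming a single RDM and hypothetical names (the node must be powered off):

    $vm  = Get-VM -Name "mscs-node1"
    $rdm = Get-HardDisk -VM $vm -DiskType RawPhysical
    # Note the backing LUN before removing the mapping
    $lunId = $rdm.ScsiCanonicalName
    # Removes only the mapping pointer; the data on the raw LUN is untouched
    Remove-HardDisk -HardDisk $rdm -DeletePermanently:$false -Confirm:$false
    # ... migrate the VM to the 4.1 host, then re-add the RDM:
    New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/$lunId"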

  • SRM 5.1 & NetApp - VM with RDM and MSCS

    Hello

    I am implementing SRM 5.1 with NetApp (with SnapMirror) and I have several questions:

    - Replication of the volumes through the array manager runs successfully.

    - What about replication of virtual machines with RDMs? Can it also be done through the array manager (with SnapMirror), or is it better to do it with vSphere Replication? Can it be done with vSphere Replication at all?

    - Are there any tips for the replication of virtual machines in a Microsoft cluster?

    Thanks for everything!

    I'm not a NetApp person, but this should be documented in the SRA documentation.

    Another point is that MSCS RDMs cannot be configured with the Round Robin policy (which is usually the default for NetApp LUNs). You must reconfigure those LUNs to MRU.
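
    A minimal PowerCLI sketch of that reconfiguration, assuming a hypothetical canonical name for the cluster LUN:

    # Hypothetical LUN ID; repeat for each MSCS RDM LUN
    foreach ($esx in Get-VMHost) {
        Get-ScsiLun -VmHost $esx -CanonicalName "naa.60a98000123456789012345678901234" |
            Set-ScsiLun -MultipathPolicy MostRecentlyUsed
    }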

    Edit: I just did a quick Google search and found this PDF, which looks like what you need:

    http://www.NetApp.com/us/media/tr-4064.PDF

    Michael.

  • Convert RDM to virtual disk

    I have two or three physical-mode RDMs and want to convert them to VMDK... What is the process to do this?

    I got your PM. Basically, just treat your virtual machines with RDMs like a physical server and run them through the conversion process, and the disks will automatically be created in VMDK format. Just make sure you select the RDM-attached disks so they get converted. This requires you to have enough storage space, but it is the same process as with a physical server.
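
    As an aside, for virtual-mode RDMs a Storage vMotion that changes the disk format also converts the disk to a regular VMDK; physical-mode RDMs need the converter approach described above. A hedged PowerCLI sketch, assuming a recent PowerCLI and hypothetical VM and datastore names:

    # Storage vMotion with a format change converts virtual-mode RDMs to VMDK
    Get-VM -Name "rdm-vm" |
        Move-VM -Datastore (Get-Datastore "vmfs-ds1") -DiskStorageFormat Thick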

    If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thank you!

    Kind regards

    Stefan Nguyen

    VMware vExpert 2009

    iGeek Systems Inc.

    VMware, Citrix, Microsoft Consultant

  • Best practices - adding an RDM to the second node (W2K3) of an MSCS cluster with virtual machine nodes across physical hosts


    I was unable to find another thread on this.

    When you add raw disks to the second node in a "virtual machines across physical hosts" cluster (cluster across boxes):

    VMware says to point the shared storage disks at the same location as the first node's shared storage disks*:

    - Select "Use an existing virtual disk"...

    - In the disk path, browse to the location of the (quorum) disk specified for the first node.

    - Select the same virtual device node you chose for the first virtual machine's shared storage disks, i.e. SCSI (1:0)...

    In other words, to add the RDM to mscs-node2, browse to /vmfs/volumes/lun1/mscs-node1/mscs-node1_2.vmdk (mscs-node1_2-rdmp.vmdk).

    For years we have instead added the RDM directly to the second node, specifying the RDM as a new disk rather than an existing one; in general we do it directly on the host, not through vCenter, and it seems to work fine.

    As for which is the safest way: the official method can cause all sorts of problems if you need to unregister the RDM on the first node (and here is where I found no official documentation).

    Do you delete or keep the file descriptor? We tried to keep it, but ended up with several mappings to the same .vmdk/-rdmp.vmdk, so this system now has disk2_.vmdk/disk2_-rdmp.vmdk and disk4_.vmdk/disk4_-rdmp.vmdk pointing to the same raw LUN.

    What really bothers me is safety; these are very important boxes. I would prefer to keep the .vmdk and -rdmp.vmdk files in separate datastores and not have this dependency on the head node.

    Your comments please: we are considering setting up MSCS clusters with the RDM pointers on separate paths; are there any risks associated with this?

    * Ref: "Setup for Microsoft Cluster Service - 4.1 and Failover Clustering".

    I realize there was an error in my logic.

    When working with the main node, if there is a requirement to unmap the raw disks (moving to another VMware cluster, cloning the system, etc.):

    Take note of the location of all the rdmp.vmdks.

    Remove each RDM disk without deleting it.

    To add it back (see the PowerCLI sketch at the end of this post):

    Add it with "Use an existing virtual disk" (yes, I know it is really raw, but once the pointer has been created the host thinks it is virtual).

    Browse to the existing raw device mapping, which appears like a .vmdk*, and add it using the former SCSI location.

    The GUI hides the descriptor:

    A virtual disk has a .vmdk and a -flat.vmdk.

    A raw disk has a .vmdk and a -rdmp.vmdk (the flat file is replaced by the mapping).

    A suggestion from one of my colleagues is to place all the RDM pointers in a single small datastore, which also gives visibility of which virtual machines have raw disks.
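
    A hedged PowerCLI sketch of the documented "existing disk" approach for the second node (all names hypothetical):

    $node2 = Get-VM -Name "mscs-node2"
    # Attach node 1's existing RDM pointer as the shared disk on node 2
    $disk = New-HardDisk -VM $node2 -DiskPath "[lun1] mscs-node1/mscs-node1_2.vmdk"
    # Put the shared disk's controller into physical bus sharing mode
    Get-ScsiController -HardDisk $disk |
        Set-ScsiController -BusSharingMode Physical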

  • VMware 5.1 with virtual compatibility mode RDMs and a Microsoft SQL cluster

    Hello

    I am a bit confused by the VMware documentation and hope someone can point me in the right direction.

    I want to know if it is possible and supported to create a 2-node SQL 2008 R2 cluster (the Server 2008 R2 SP2 nodes are VMs) on a 2-node VMware 5.1 cluster using virtual compatibility mode RDMs.

    When you read the PDF for vSphere 5.1 at the link below, on page 9 there is a note stating: "NOTE: Clusters on multiple physical machines with non-pass-through RDM is supported only for Windows Server 2003 clusters. It is not supported for clustering with Windows Server 2008."

    http://pubs.VMware.com/vSphere-51/topic/com.VMware.ICbase/PDF/vSphere-ESXi-vCenter-Server-51-Setup-MSCS.PDF

    So does that mean that what I want, a 2-node SQL 2008 R2 cluster, is not supported?

    But I also found the link below, and in the table, the RDM column on the SQL Cluster row says "Yes" with a footnote 2.

    VMware KB: Microsoft Clustering on VMware vSphere: guidelines for supported configurations

    Footnote 2 redirects -> for more information on shared disk configurations, refer to the Disk Configurations section in this article.

    -> Disk configurations

    • RDM: Configurations using shared storage for Quorum and/or data must be on Fibre Channel (FC)-based RDMs (physical mode for "cluster across boxes" (CAB), virtual mode for "cluster in a box" (CIB)) in vSphere 5.1 and earlier versions. RDMs on non-FC storage (iSCSI and FCoE) are supported only in vSphere 5.5; however, in earlier versions FCoE is supported in very specific configurations. For more information, see note 4 under the Microsoft clustering solutions table above.

    Which leads to note 4 ->

    1. In vSphere 5.5, native FCoE is supported. In vSphere 5.1 Update 1 and 5.0 Update 3, a two-node cluster configuration with Cisco VIC (VIC-1240/1280) cards and driver version 1.5.0.8 is supported with the Windows 2008 R2 SP1 64-bit guest operating system. For more information, see the VMware Hardware Compatibility Guide:

    This seems to mean that it is supported, so I'm confused.

    Hi Bypy,

    A two-node Windows 2008 R2 SQL virtual cluster with RDMs is supported only for CIB, or "cluster in a box", that is, when the two virtual machines reside on the same ESXi host. The same configuration is not supported for CAB, or "cluster across boxes" (virtual machines running on different ESXi hosts).

    For CAB, you go with physical mode RDMs.

    According to this link, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959

    An SQL cluster using Windows 2008 R2 is supported with both physical and virtual mode RDMs. The choice between physical and virtual mode depends on whether you want CAB or CIB, respectively.

    I hope this helps.

    Cheers,

    Arun

  • Fixed multipathing policy and balanced preferred paths for a set of RDM LUNs

    Hi all

    I was hoping that someone with some PowerCLI experience might help me with the following problem.

    Short version:

    I need to adapt this script http://vmjunkie.wordpress.com/2009/01/29/balancing-lun-paths-on-your-esx-hosts-with-powershell/ to accept input of canonical names from a text file...

    Long version:

    We have a number of RDM LUNs used by Microsoft Failover Clusters. In accordance with the recommendation from the storage vendor (Dell), we need to set all these LUNs to the FIXED multipathing policy, and we want to balance them across both paths.

    I can get the canonical name for each RDM LUN using this script:

    Get-VM -Location "Cluster Name" | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName

    I then manually identify the LUNs that have to be changed from the text output. So, basically, I have a text file containing the canonical name of each RDM LUN I need to change to Fixed.

    LUNS.txt

    naa.6000d31000331e0000000000000000c0
    naa.6000d31000331e0000000000000000c1
    naa.6000d31000331e0000000000000000c2
    naa.6000d31000331e0000000000000000c3
    naa.6000d31000331e0000000000000000c4
    naa.6000d31000331e0000000000000000c5

    I can't just run this command to set the Fixed multipathing policy, because I need the PreferredPath variable:

    Get-VMHost "host name" | Get-ScsiLun "naa.6000d31000331e0000000000000000c0" | Set-ScsiLun -MultipathPolicy "Fixed"


    "If the MultipathPolicy parameter is set to 'Fixed', you must specify the parameter of PreferredPath."

    I can then get the paths by using this command:

    Get-VMHost "host name" | Get-ScsiLun "naa.6000d31000331e0000000000000000c0" | Get-ScsiLunPath

    Name       SanID                      State      Preferred
    ----       -----                      -----      ---------
    fc.2000... 50:00:D3:10:00:33:1E:19    Active     False
    fc.2000... 50:00:D3:10:00:33:1E:1A    Active     False

    Basically, I need to adapt this script (from here: http://vmjunkie.wordpress.com/2009/01/29/balancing-lun-paths-on-your-esx-hosts-with-powershell/) to take a text file of canonical names as input.

    # Cluster-wide LUN Path Load Balancing Script
    # Written by Justin Emerson, http://vmjunkie.wordpress.com
    # Idea originally from a PERL script I saw here:
    # This script requires the VI Toolkit version 1.5
    # NOTE: This script assumes that every LUN has the same number of paths.
    #       If you have multiple storage arrays, and they have different numbers of paths,
    #       I make no guarantees that this will work!
    # If you have an improvement to this script, please feel free to leave a comment on my blog!
    Write-Host "This script will modify the policy of all your shared LUNs on all ESX Servers" -ForegroundColor Cyan
    Write-Host "in a Cluster to Fixed and select a preferred path in a round-robin fashion." -ForegroundColor Cyan
    if ($args.Length -eq 0) { $clusterName = Read-Host "Please enter the Cluster name" } else { $clusterName = $args[0] }
    $VMHosts = Get-Cluster $clusterName | Get-VMHost
    # Run through this loop for each host in the cluster
    foreach ($VMHost in $VMHosts)
    {
        # Keep only disks of luntype "disk" to avoid any storageArrayController devices.
        # Filter to only objects where the ConsoleDeviceName starts with vml to avoid any DAS disks.
        # Note: I have tested both HP EVA and Xiotech storage and SAN LUNs always appear this way.
        # Please check if this is the same on your storage before running.
        $luns = $VMHost | Get-ScsiLun -LunType disk |
            Where-Object { $_.ConsoleDeviceName -like "/vmfs/devices/disks/vml*" } | Sort-Object CanonicalName
        $firstLUNPaths = Get-ScsiLunPath $luns[0]
        $numPaths = $firstLUNPaths.Length
        $count = 0
        foreach ($lun in $luns)
        {
            if ($count -ge $numPaths) { $count = 0 }
            $paths = Get-ScsiLunPath -ScsiLun $lun
            $lun | Set-ScsiLun -MultipathPolicy Fixed -PreferredPath $paths[$count]
            $count += 1
            # Sleep for 30 seconds as I've heard some arrays don't like doing this too fast.
            Start-Sleep -Seconds 30
        }
    }
    Any help would be greatly appreciated.
    Cheers,
    Patrick

    Do you have something like this in mind?

    $esxName = "MyEsx" $lunFile = "./luns.txt"
    $lunNames = Get-Content $lunFile $VMHost = Get-VMHost -Name $esxName
    # Find the LUN with the least number of paths
    $leastLUNPaths = Get-ScsiLun -VmHost $VMHost -LunType disk | Sort-Object -Descending -Property {
      Get-ScsiLunPath -ScsiLun $_ | Measure-Object | Select -ExpandProperty Count} | Select -First 1 | Get-ScsiLunPath
    $numPaths = $leastLUNPaths.Length
    $count = 0 foreach ($lunName in $lunNames)
    {
      $lun = Get-ScsiLun $lunName -VmHost $VMHost  if ($count -ge $numPaths) { $count = 0 }
      $paths = Get-ScsiLunPath -ScsiLun $lun  $lun|Set-ScsiLun -MultipathPolicy Fixed -PreferredPath $paths[$count]
      $count += 1  # Sleep for 30 seconds as I've heard some arrays dont like doing this too fast.
      Start-Sleep -Seconds 30}
    
  • Multipathing settings for RDMs attached to ESXi 5 guests

    We are implementing a new SAN, an EMC VNX, and have been advised by the vendor doing the installation to set multipathing for all devices to Round Robin. They also advised us to change the default IOPS/s from 1000 to 1. So I wrote a PowerCLI script to do this and everything works great. I run the script every day to ensure that all new devices are updated with the recommended settings.

    Yesterday when I ran the script, our SQL server VM with attached RDMs started having huge disk latency issues. As soon as we changed the IOPS/s back to the default of 1000, the disk latency issues went away. We could not find any EMC information saying what the RDM devices' IOPS/s should be set to, or what their multipathing policy should be (they are currently running on Round Robin without any significant problem), but we found information from other storage vendors saying that we should go with Microsoft's best-practice settings, with multipathing set to at least MRU.

    Does anyone out there have any information or advice on what we should do? We will of course make sure that the IOPS/s for the RDMs are set to 1000, but what are the recommendations for multipathing of the RDMs?

    Cheers

    Ant

    As a general rule, multipathing for RDMs should be the same as for VMFS datastores, with the exception of MSCS RDMs, which VMware requires to be set to a non-Round-Robin path policy.

    I'm going to guess that in your case you had a few problem LUNs whose path behavior you should investigate (or get PowerPath/VE for your environment and never worry about it again).
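
    For the LUNs that stay on Round Robin, the per-device IOPS limit can be set back from PowerCLI through esxcli. A hedged sketch using Get-EsxCli (host and device names hypothetical):

    $esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
    # Restore the Round Robin IOPS limit to the default of 1000 for one device
    $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke(@{
        device = "naa.6006016012345678901234567890abcd"
        type   = "iops"
        iops   = 1000
    })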

  • Thin/TBZ disks cannot be opened in multiwriter mode

    Hello

    I have 2 virtual machines that I'm setting up as an MS cluster. I created the shared quorum disk (tried both an RDM and creating a new thick disk), but I get this error when I try to power on the machine. The problem is, the error references the main drive, which is not the one I am trying to share between the servers. That disk IS thin, but I don't understand why that would matter, since it is not being shared. I created a new controller for each of the new disks. Must my main drive also be thick, even though it won't be shared?

    Thank you.

    I guess that's a Microsoft requirement for it to be considered a supported configuration, and the same on the VMware side. Also, how big a compromise is it to eagerzeroedthick your cluster nodes?
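
    For what it's worth, a minimal PowerCLI sketch for creating the shared disk eagerzeroedthick in the first place, assuming a recent PowerCLI and hypothetical VM, size, and datastore names:

    $vm = Get-VM -Name "sql-node1"
    # Shared MSCS disks are expected to be eagerly zeroed, not thin
    New-HardDisk -VM $vm -CapacityGB 1 -StorageFormat EagerZeroedThick -Datastore "shared-ds"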

    If you found this helpful at all, please award points using Correct or Helpful! Thank you!

  • iSCSI Target VM: VMFS or RDM?

    I have 300 GB local disks in some of my hosts that are connected to our SAN. I would like to take advantage of this local space for templates and ISO files. I intend to install the iSCSI Target Framework (tgt). My question is...

    Should this CentOS VM that runs our iSCSI target just use the local disks through VMFS, or should I try to present the local disk to it as an RDM (Raw Device Mapping)?

    Since the machine will use local disks, vMotion and the like are out of the question anyway, so I think this could be a good situation for the use of an RDM.

    Hello.

    Should this CentOS VM that runs our iSCSI target just use the local disks through VMFS, or should I try to present the local disk to it as an RDM (Raw Device Mapping)?

    I would use VMFS for this. You will have more flexibility with VMFS, and the RDM approach for local disks seems to add unnecessary complexity.

    Good luck!

  • Local disk D

    While error-checking my local disk D, I accidentally shut it down before the error checking finished, and now I cannot open my local drive D. It says access denied. My computer is Windows 7 SP1. What should I do? Please help me.


    Your first step is to launch Disk Management (diskmgmt.msc) and check whether the D: drive has a valid file system.
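
    If the file system looks intact, letting the interrupted check run to completion may clear the state. A hedged example from an elevated command prompt (this locks the volume while it runs):

    chkdsk D: /f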

  • Get the local SCSI address of virtual disks (Bus:Target)

    Hi all

    I'm having a little trouble getting this info for a couple of virtual machines. I need the local bus:target address for all virtual disks, VMDK or RDM. I have seen a few examples that compare the Windows configuration to the virtual machine configuration, but I am not interested in that right now; I just need the virtual machine configuration. I would appreciate your help. Thank you...

    Try something like this

    Get-VM | Get-HardDisk |
        Select @{N = "VM Name"; E = { $_.Parent.Name }},
            @{N = "HD Name"; E = { $_.Name }},
            @{N = "SCSIid"; E = {
                $strControllerKey = $_.ExtensionData.ControllerKey.ToString()
                "{0}:{1}" -f $strControllerKey[$strControllerKey.Length - 1], $_.ExtensionData.UnitNumber } }

  • Voting files location: quorum failgroup or regular failgroup?

    About the notion of a quorum failgroup, Oracle writes:
    QUORUM disks, or disks in quorum failure groups, cannot contain any database files,
    the Oracle Cluster Registry (OCR), or dynamic volumes. However, QUORUM disks
    can contain the voting file for Cluster Synchronization Services (CSS). Oracle ASM
    uses quorum disks or quorum failure groups for the voting file whenever
    possible.

    To sum up, I think the difference between a regular failgroup and a quorum failgroup is that a quorum failgroup can only contain voting files, while a regular one can contain several types of files.
    So I don't see any advantage in placing voting files in a quorum failgroup rather than a regular one. Why has Oracle introduced the notion of a quorum failgroup?
    Thanks in advance.

    Why has Oracle introduced the notion of a quorum failgroup?

    You must configure an odd number of voting disks because, as far as the voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). To be able to tolerate the failure of n voting files, at least 2n + 1 must be configured for the cluster (n = number of voting-file failures tolerated); for example, with n = 1 you need 3 voting files and can survive the loss of one.

    If you lose half or more of all your voting disks, then nodes get evicted from the cluster, or evict themselves from the cluster.

    For this reason, when using Oracle-managed redundancy for your voting disks, Oracle recommends that customers use 3 or more voting disks.

    If you use only one hardware storage system and it fails, the whole cluster goes down, no matter how many voting disks are configured.

    The problem in an extended cluster configuration (Extended RAC) is that most installations use only two storage systems (one at each site), which means that the site that hosts the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where the n + 1 voting files are configured fails, the entire cluster goes down, because Oracle Clusterware loses the majority of the voting files.

    To avoid a complete cluster failure, Oracle supports a third voting file on a cheap, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server, which belongs to a production environment.

    Thus, you create a "cooked" file on NFS (presented as a disk) and present it to ASM. From that point on, ASM does not know that this ASM disk sits across the network (WAN) and is a "cooked" file.

    Then you must mark this ASM disk as QUORUM, so that Oracle uses it only to store the voting file. This prevents it from causing performance problems or data loss by storing data (such as data files).
