Performance: RDM vs. VMDK disks on a Fibre Channel SAN

Hello,

I have a question that I'd like someone with more experience to help me resolve.

In a SAN environment with Fibre Channel disks, the question is whether to host the virtual disk that contains a database as a VMDK or as an RDM.

Is the performance I'll get very different?

Thanks.

Hi Jquilez, I recommend reading the following two VMware benchmarks, which address your question:

http://www.VMware.com/files/PDF/vmfs_rdm_perf.PDF

http://www.VMware.com/files/PDF/performance_char_vmfs_rdm.PDF

Remember that when you access LUNs as Raw Device Mappings you attach directly to the hardware and format the unit directly on the disk array, whereas with VMFS you have that intermediate layer. Personally, I do see higher speed when accessing RDMs vs. VMDKs, but only in specific cases. Keep in mind that with an RDM in physical mode you can still do VMotion and HA, and with a virtual RDM you can also clone.

Tags: VMware

Similar Questions

  • Thin Provisioning - SAN or VMDK layer

    Hello

With the option to set up a thin disk at the vSphere layer or at the SAN layer, which is best? Is it possible to thin-provision at both the SAN and the VMDK level, or could that eventually create confusion? I guess SAN provisioning would offer better performance too?

    See you soon

    gogogo5

    Very detailed information on this subject available on the blog of Chad from EMC here: http://virtualgeek.typepad.com/virtual_geek/2009/04/thin-on-thin-where-should-you-do-thin-provisioning-vsphere-40-or-array-level.html

    Kind regards

    Hany Michael

    HyperViZor.com | Virtualization and everything around it

  • Two questions about SAN and clustering

    Hello everyone, I have two general questions about clustered solutions and primary storage with VMware. I'm humbly just getting started with this, and it would be a great help if someone could lend a hand to make things clearer for me.

    Question 1

    I have a DRS + HA scenario with two physical hosts that house a VM with a database app, or a web server, or any service. In my training lab I've tested and configured it all; everything is OK and it works under the ESX cluster. So... why do I still see scenarios that build clusters with Microsoft Cluster Services, when that clustering could be supported by the ESX infrastructure?

    Question 2

    Which is better, and where does each scenario apply?

    Creating a datastore directly on the SAN LUNs, and building the structure for the VMs' data disks there (this is how I've done it in my labs and learning exercises).

    Creating the partitions and volumes from the guest operating system, hitting the SAN LUNs directly (accessing them straight from the guest OS). I'm not very clear on how each situation can help or hurt.

    Thanks in advance, and forgive me if I say something silly; as the title of the post says, I'm still training myself in this type of solution.

    For your second question, note that there are indeed two approaches:

    Create datastores formatted with the VMFS file system, or use raw devices (RDM disks) and format them, with NTFS for example, to access them directly.

    RDM disks are for use in machines that:

    - Run some SAN management application and need direct access to the LUN, without going through a datastore.

    - Host a Microsoft cluster (the quorum disk, for example, must be an RDM).

    - Run certain applications, for example Exchange VMs, that require a little extra in IOPS.

    Using virtual disks on VMFS (via datastores) lets you take advantage of VMware's features, and that disk layer allows managing the datastores across different hosts. It should be the default option, leaving RDM disks for machines with needs like those I listed in the previous point.

    Regards.

    Xavier

    VMware Certified Professional, VCP3 and VCP4.

    -

  • P2V with SAN => VM + VMDK

    I have a question:

    I want to P2V a physical server with data on a SAN to a standalone ESXi virtual machine with VMDKs.

    How can I do this?

    I think, I can:

    1. Hot P2V the physical server with the SAN => copy the entire disk with the VMware P2V tool
    2. Start a virtual machine with the VMDK

    You will be able to use VMware Standalone Converter - it reads the SAN as a classic drive and creates a corresponding VMDK.

  • Slow performance read/write on iSCSI SAN.

    This is a new configuration of ESXi 4.0 running virtual machines off a Cybernetics miSAN D iSCSI SAN.

    Running a read-heavy test on a virtual machine, it took 8 minutes vs. 1.5 minutes for the same VM on a slower VMware Server 1.0 host with its virtual machines on local disk. Looking at my read speed from the SAN, it gets a little more than 3 MB/s max on reads, and disk usage on the virtual machine is under 3 MB/s... horribly slow.

    The SAN and the server are both connected to the same 1 GB switch. I followed this guide, virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html, to get the multipath configuration right, but I still do not get good performance with my VM. I know that the SAN and network should be able to handle more than 100 MB/s, but I am not getting it. I have two GB NICs on the SAN multipathed to two GB NICs on the ESXi host, one NIC per VMkernel. Is there anything else I can check or do to improve my speed? Thanks in advance for advice.

    Another vote for IOMeter.

    Try testing 32K 100% sequential read (and write) with 64 outstanding IOs; this will give you sequential performance. It should be close to 100 MB/s per active GigE path, depending on how much the storage system can put out.

    Then 32K 0% sequential read (and write) with 64 outstanding IOs against a LUN of a good test size (say 4 GB+) will give a value for IOPS, which is the main factor for virtualization. Look at the latency: it usually must remain below approximately 50 ms for things to work if the default of 32 outstanding IOs per host is in effect (say you had six hosts; the array should be able to deliver the random I/O with a latency < 50 ms with 192 outstanding IOs).
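
    As a rough illustration of the two access patterns described above (100% sequential vs. 0% sequential reads at a fixed 32K block size), here is a minimal Python sketch. It is not a substitute for IOMeter: it issues one IO at a time instead of keeping 64 IOs outstanding, runs against a small scratch file rather than a 4 GB+ LUN, and the file name and sizes are made up for the example.

```python
import os
import random
import time

BLOCK = 32 * 1024              # 32K IO size, as suggested above
FILE_SIZE = 64 * 1024 * 1024   # small 64 MB scratch file (IOMeter would use 4 GB+)
PATH = "iotest.bin"            # arbitrary test file name

# Create the scratch file to read back.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def read_test(sequential=True):
    """Read the whole file in BLOCK-sized IOs, in order or shuffled."""
    offsets = list(range(0, FILE_SIZE, BLOCK))
    if not sequential:
        random.shuffle(offsets)        # "0% sequential" = random access
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    elapsed = time.perf_counter() - start
    mb_per_s = FILE_SIZE / (1024 * 1024) / elapsed
    iops = len(offsets) / elapsed
    return mb_per_s, iops

seq_mb, _ = read_test(sequential=True)     # throughput matters here
_, rnd_iops = read_test(sequential=False)  # IOPS matters here
print(f"sequential: {seq_mb:.1f} MB/s, random: {rnd_iops:.0f} IOPS")
os.remove(PATH)
```

    Numbers from a sketch like this mostly reflect the OS page cache; IOMeter with a large test file and a deep outstanding-IO queue is what actually characterizes the SAN.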

    Do not use the "test connection rate" option, as that effectively tests only cached throughput, which we are not so interested in anyway.

    Please give points for any helpful answer.

  • Connecting Fibre and SCSI SAN/DAS directly to a VM (RDM)

    We currently have a very generic 3-host ESX 4 implementation where all virtual machines use VMDK disks or drives mapped to external storage.  We are looking at updating to ESXi 4.1 and potentially virtualizing two of our primary file servers; however, these boxes connect to several SCSI and fibre storage volumes.  These volumes are 4 TB and more.  For example, server A connects to 2 x 4 TB volumes by fibre (switched) and 2 x 2 TB via SCSI DAS.  Server B connects to 1 x 4 TB volume through fibre and 1 x 4 TB volume via SCSI DAS.

    I'm trying to get a handle on connecting storage directly to the VM, but I can't find a resource that speaks to it directly (at least none that I am able to discern from reading).   My understanding is that RDM or fibre NPIV are the possible methods.   NPIV seems to be out of reach, as the switch must be in NPIV mode, which excludes standard SAN usage (i.e. I would need a dedicated switch just for NPIV).  In addition, Brocade switches seem to support NPIV only if they are fully licensed (i.e. the E300 must have 24 active ports... expensive, to say the least).

    RDM seems to require dedicating the whole connector, i.e. one HBA per VM.  If on one ESX host I virtualize servers A and B, plus have 1 VMFS volume via a fibre connection, I need 4 single-port / 2 dual-port HBAs and 3 SCSI channels.

    Basically, I think I know there is no way to use a single HBA or SCSI card to connect multiple external storage units to various VMs directly; it's just not possible (i.e. as NAT is to networking, NPIV is the closest in fibre, and nothing is available for SCSI).

    iSCSI is a solution, but I currently have no iSCSI storage nor the funds to migrate the current storage to such an environment.  I also have some concerns about the performance of iSCSI... but I think that's more ignorance of the technology than reality.  I looked at a few fibre-to-iSCSI routers (e.g. SANbox 6140), but these seem very cost-inefficient as well.

    Do I have any options here, or is remaining physical the best option for these servers for now?

    With RDM, you can present the LUNs to the host's HBA WWN, and from there add each one only to the particular VM it belongs to. So it's multiple LUNs, one RDM each, across many VMs; that would meet your needs.

  • RDM and VCB Performance

    I finally have RDMs backing up via VCB with no problem... other than the performance being terrible. The file-level backups seem fine. But when I do a fullvm backup it seems to back up the whole RDM, without regard to whether there is data on it. I presented a 500 GB LUN as a test, put a few text files on it, launched a backup, and it takes hours to complete. We are talking about a few kilobytes here. Whether I use BEIM or native vcbmounter... same results. It is not the SAN... performance is stellar apart from this activity. Is this expected behavior? Is VCB not smart enough to know whether there is data on the RDM or not? I've seen some other posts on this topic, but never saw a resolution. Just curious if you have any ideas. There has got to be a better way.

    Ok. By default, VCB will back up a sparse file as long as you do not pass the -M or -F export-mode flags:

    Export flags:

    -M: if set, the disk is exported as a single (monolithic) file.

    When it is off (default), the disk is split into several 2 GB files.

    -F: if set, the disk is exported as a "flat" disk, with no optimization.

    When it is off (default), the exported disk files will be more compact, as unused space in the disk image is not included in the exported file.

    The best you will get with SAN-mode FullVM backups is about 1 GB/minute. It is even slower with RDMs, because they must be converted to VMDK when doing a FullVM save. You do know that when you practice a full recovery of the virtual machine (you HAVE done this, right?), an RDM saved using VCB comes back as a VMDK.

    Since RDMs are LUNs in the native OS file-system format (NTFS, ext3, etc.), you may be better off using an agent or performing file-level backups. Think about why you use an RDM. If you recover it as a VMDK, what are the consequences? If there are none, convert to a VMDK and be done with it. If there ARE implications, use file- or agent-based backups. If this RDM is for a file server, either method is fine. If these are mailbox or database servers, you had better make agent-based backups unless you REALLY know how to recover a potentially inconsistent mail store or database. The sync driver and VSS providers are not a guarantee that you will be able to recover.

    Dave Convery

    VMware vExpert 2009

    http://www.dailyhypervisor.com

    "Careful. We don't want to learn from this."

    Bill Watterson, "Calvin and Hobbes"

  • How can I restore to VMFS format a physical disk with a virtual VMDK disk on it

    Hello, good day.

    I have a VMware ESXi 4.1 server whose network card broke.

    So I could not access the data I had on a virtual VMDK disk.

    Testing with an Ubuntu 10.04 desktop I managed to mount the disk somehow; then, going into Ubuntu's disk administrator, I saw the label the disk had, and I took it upon myself to change the label to VMFS format (0xfb). Big mistake; that's where my problems started.

    Afterwards I tried to return the disk to its original state, but that option wasn't there. I couldn't recover anything.

    A little later I got a new card and was able to start ESXi 4.1 without any problem. But now, when I add that physical disk to VMware, the virtual VMDK disk file does not appear.

    Is there anything I can do to return the disk to its original format and recover the data?

    Many thanks.

    Hello,

    Before anything else, make sure you take a backup of the disk. From there you can follow the document.

    Regards.

    -

  • Converting an RDM to a thin VMDK: can I interrupt the command once the VMDK stops growing?

    Hello

    Working in the ESX 4.1 CLI, I am converting a 1 TB virtual RDM to a thin VMDK (vmkfstools -i <path to rdm> -d thin <path to vmdk>).

    I know that only 120 GB of data are used inside the RDM. Right now the command is at 55% (after 4 hours), and I can see in the datastore browser that the destination VMDK is no longer growing. Can I safely interrupt the command (Ctrl-C) and then safely use the new thin VMDK?

    Thank you

    Guido

    I suggest not to interrupt the process.

    Unless you are 100% sure that all the data sit in the first part of the RDM disk.

    André
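
    To see why interrupting the clone is risky, here is a toy Python model (not vmkfstools' actual implementation) of a thin, block-by-block copy that gets interrupted partway through: any real data located past the interruption point simply never reach the destination.

```python
# Toy model of a block-by-block "thin" clone (NOT vmkfstools itself):
# zero-filled blocks stand in for unallocated space and are skipped,
# so the destination stops growing while the copy crosses empty regions.
BLOCK = 4
ZERO = b"\x00" * BLOCK
src = [b"AAAA", ZERO, ZERO, b"BBBB"]   # real data at the front AND the end

def clone(blocks, stop_after=None):
    """Copy blocks in order; stop_after simulates a Ctrl-C partway through."""
    dest = {}
    for i, blk in enumerate(blocks):
        if stop_after is not None and i >= stop_after:
            break                       # interrupted here
        if blk != ZERO:
            dest[i] = blk               # thin: only materialize real data
    return dest

full = clone(src)                   # copies blocks 0 and 3
partial = clone(src, stop_after=2)  # only block 0 survives
print(sorted(full), sorted(partial))  # [0, 3] [0]
```

    A destination that has stopped growing only means the copy is crossing unallocated space; it says nothing about whether more data lie further out, which is exactly the caveat above.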

  • Best way to add disks to VMs

    Good afternoon,

    I've been going around in circles about how to configure new disk units in my VMs (accessing LUNs on my storage).

    I have machines set up in which I create disks (units) via the Microsoft iSCSI initiator, RAW units, units created on the machine itself (virtual disks)...  The problem is I'm not clear on which is the best way.

    (1) The Microsoft iSCSI initiator option I discard outright because of what we discussed in another post I opened: vCPU performance can end up suffering, since the same CPU is used for iSCSI and for the VM.

    (2) The option of creating RAW units (RDM access) I was told was highly recommended. One question comes up with this option: if for some reason I have to move the VM to other storage, can I move it directly with VMotion? I have two EqualLogic arrays; if the first fails I'm lucky enough to have another EqualLogic I could move the disks to temporarily. Could I move a RAW unit the same way I can move a virtual disk, launching a migration that moves everything once I tell it where?

    (3) Adding a virtual disk to the VM. With this option I don't know how it behaves performance-wise, or whether it has any drawbacks.

    Many thanks,

    Regards.

    Hi, how are you?

    This may help you:

    http://www.VMware.com/files/PDF/performance_char_vmfs_rdm.PDF (explains it in detail)

    In the majority of cases the recommendation is to use VMDK virtual disks, unless there is a specific need, such as creating an MSCS cluster, that requires the use of RDM disks.

    • VMDK disks are more portable.
    • VMDK disks are easier to resize.
    • There is less administrative overhead with VMDK disks.
    • VMDK disks are easier to back up and recover.
    • It is simpler to clone a VM with VMDK disks.
    • They support the use of snapshots.

    Both VMDK disks and RDM disks have similar performance characteristics, so this should not be a relevant factor when deciding which type of disk to use.

    In which cases is it better to use RDM disks?

    • To configure an MSCS cluster in a cluster-across-boxes configuration.
    • To configure a VM to use N-Port ID Virtualization (NPIV).
    • To use a SAN storage array's native administration tools, for example to take storage-level snapshots, do storage-level replication, etc.
    • If we need to attach an existing LUN to a VM without it being formatted as a VMFS datastore, for example attaching a file server's LUN.  This can be useful to avoid migrating large volumes of data to a VMDK disk during a P2V conversion.
    • To eliminate potential compatibility problems, or to ensure applications run in a virtualized environment without losing anything.
    • To run SAN administration software from the VM when it has a direct impact on the LUN.

    RDM in physical or virtual compatibility mode?

    If you decide to use RDM disks, you must additionally decide whether to use virtual compatibility mode or physical compatibility mode.

    • Virtual compatibility mode: this mode fully virtualizes the mapped device, which appears to the virtual machine as a virtual disk on a VMFS volume.  This mode gains the benefits of VMFS, such as the data protection it provides and the use of snapshots.
    • Physical compatibility mode: this mode provides access to most of the hardware characteristics of the mapped device.  The VMkernel passes all SCSI commands through to the device, so this mode exposes all the characteristics of the underlying physical hardware.

    Virtual compatibility mode is the recommended one in most cases, unless physical compatibility is explicitly required.  For example, a Microsoft Windows Server 2008 Failover Cluster requires SCSI-3 persistent group reservations, which are only available using RDM in physical compatibility mode.

    When using RDM disks in physical compatibility mode, we must keep in mind that a series of features will not be available:

    • Snapshots.  This can be especially critical, considering that many backup solutions, such as vDR, depend on the use of snapshots.
    • vMotion: it is not possible to move a VM with a physical-compatibility RDM using vMotion.
    • Cloning a VM with a physical-compatibility RDM.
    • It is not possible to convert a VM with RDM disks into a template.

    Using RDM disks in virtual compatibility mode overcomes almost all of these restrictions, permitting the use of vMotion, snapshots and VM cloning.

    For VMDK disks

    When a virtual disk is created in VMware, by default the type used is thick.  A thick disk reserves all the space at disk creation, effectively occupying it in the specified datastore.

    A thin disk does not reserve all the assigned space at disk creation.  Blocks in the VMDK file are not reserved in physical storage; they are written over the course of normal operation.

    Each disk type has clear advantages and disadvantages, and the decision depends on the use we will give the disk.  For example, if the disk will be used by a database with many writes, the recommendation is to use thick-format disks, since in thin format the disk would grow very quickly, making this disk type's use pointless.
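
    The thick/thin distinction can be mimicked with ordinary sparse files. This Python sketch is a loose analogy (not VMware's on-disk format; the file name and sizes are invented) showing a file whose declared size and actually allocated space differ until data are written:

```python
import os

PATH = "thin_demo.bin"        # made-up file name
SIZE = 64 * 1024 * 1024       # a 64 MB "virtual disk"

# "Thin": declare the full size without writing anything. On file systems
# with sparse-file support, no data blocks are allocated yet.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

st = os.stat(PATH)
print("apparent size:", st.st_size)        # what the guest would see
print("allocated:", st.st_blocks * 512)    # what the datastore really uses

# Writes grow the thin disk only where blocks are touched, which is why a
# write-heavy database would inflate it quickly.
with open(PATH, "r+b") as f:
    f.write(b"x" * (1024 * 1024))          # touch the first 1 MB

print("allocated after write:", os.stat(PATH).st_blocks * 512)
os.remove(PATH)
```

    (st_blocks is POSIX-specific, and exact allocation depends on the file system; the point is only the gap between declared and consumed space.)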

    Advantages and disadvantages of thick provisioning:

    • Simpler storage configuration and monitoring
    • Support for VMware Fault Tolerance
    • Less efficient use of space
    • Higher storage costs

    Advantages and disadvantages of thin provisioning:

    • Reduces the cost of storage
    • Eliminates the need to dedicate, up front, all the capacity an application requires.
    • Disk fragmentation can affect disk performance.
    • Thin provisioning may not be the best choice in environments where the data disks grow at a rapid rate.
    • Thin provisioning increases the risk of running out of space in a datastore.  This risk can be mitigated using storage alarms.
    • Appropriate alarms must be created and monitored.
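
    A minimal sketch, with invented numbers, of the over-commit arithmetic behind that last risk and of the kind of usage threshold an alarm would watch:

```python
# Invented numbers: a 500 GB datastore whose thin disks are provisioned
# beyond physical capacity -- the over-commit situation an alarm should catch.
datastore_capacity_gb = 500
provisioned_gb = [200, 150, 150, 100]   # declared sizes of the thin disks
used_gb = [180, 90, 95, 20]             # space actually written so far

provisioned = sum(provisioned_gb)       # 600 GB promised on 500 GB of disk
used = sum(used_gb)                     # 385 GB really consumed
overcommit = provisioned / datastore_capacity_gb
usage = used / datastore_capacity_gb

# An alarm (e.g. a datastore-usage alarm in vCenter) should fire well before 100%:
WARN_AT = 0.75
if usage >= WARN_AT:
    print(f"WARNING: datastore at {usage:.0%}, {overcommit:.1f}x over-committed")
```
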
  • Storage vMotion - with RDMs? - Or will they be ignored most of the time?

    Hi all - I have a storage vMotion to do; 5 hosts in the cluster. Five or 6 virtual machines with RDMs are pinned to specific hosts for the sake of DRS, but I want to move the OS drives, which are VMDKs, from one SAN to another. Each virtual machine has its C:\ drive on the datastore; all the other disks on the virtual machine are physical RDMs. ALL I need to move is the C:\ (VMDK) in the storage vMotion process.

    The LUNs are connected and working today on the ESX hosts in the cluster; all hosts are zoned to see the RDMs.

    My question is, when we go to perform the storage vMotion, how do we avoid failure? The RDMs are my concern; in the past I think I remember a machine failing to migrate when it had RDMs attached.

    any comments or thoughts would be really useful.

    Nick

    You can also move the RDM pointers with the "Change datastore" wizard. Just be sure to select "Same as source" for the target disk format. If you choose thick or thin, the RDM will be converted into a VMDK, but with "Same as source" only the RDM pointer is moved to the new datastore.

    -Andreas

  • VMotion VMs With RDMs

    People,

    Can you vMotion a VM with an RDM LUN attached from one node to another in a cluster on the fly?

    Also, I would like to know whether, if I put an ESXi host in maintenance mode, all virtual machines, including those with RDMs connected, get vMotioned to the other nodes in the cluster automatically.

    Rahul-

    Hi Rahul

    It is stated in the documentation that you can use vSphere vMotion with VMs that have RDMs.

    vSphere 5.5 Documentation Center - About Raw Device Mapping

    With RDM, you can:

    - Use vMotion to migrate virtual machines using raw LUNs.


    If you use vMotion to migrate virtual machines with RDMs, be sure to maintain consistent LUN IDs for the RDMs across all participating ESXi hosts.

    RDM considerations & limitations are listed here: vSphere 5.5 Documentation Library - RDM Considerations and Limitations

    FYI, links related to RDM:

    VMware KB: Microsoft Clustering on VMware vSphere: guidelines for supported configurations

    VMware KB: Difference between RDMs physical compatibility and virtual RDM compatibility

    VMware KB: Migration of virtual machines with Raw Device mapping (RDM)

    VMware KB: Frequently asked Questions on VMware vSphere 5.x for VMFS-5

    VMware KB: Cannot use snapshots or perform a backup on virtual machines configured with bus sharing

    vSphere 5.1 - VMDK versus RDM | VMware vSphere Blog - VMware Blogs

    Use RDM for practical reasons and not for Performance reasons. VMETC.com

    http://www.VMware.com/files/PDF/performance_char_vmfs_rdm.PDF

    Physical RDM for the functionality of Migration VMDK | VMware vSphere Blog - VMware Blogs

    Migration of RDM and a question for users of RDM. | VMware vSphere Blog - VMware Blogs

  • Help with VMDK migration to RDM using vmkfstools -i

    Hi all

    I need to migrate a 590 GB VMDK file to an RDM in virtual compatibility mode.

    I want to avoid using a file-copy method, as there are over 6 million small files on this volume.

    I came across a blog that said this could be done using the "vmkfstools -i" command, and I saw other posts here on the community forums that tell a similar story.

    I have three questions:

    • Should the virtual machine be turned off when using the "vmkfstools -i" command?

    • Would the following syntax be right, executed from the source VM's directory and assuming that vmhba1:0:2:0 is the path to the RDM LUN on the SAN: vmkfstools -i F-drive.vmdk -d rdm:/vmfs/devices/disks/vmhba1:0:2:0 F-drive-rdm.vmdk

    • Should I create/attach the RDM to the virtual machine before or after I use the "vmkfstools -i" command?

    Thank you!

    The source virtual machine must be turned off.

    Your command syntax is good.

    The command will create the RDM mapping; once it finishes, you just add it to your virtual machine.

  • Sharing NFS on RDM

    Hello

    I've had to settle on an interim design, and I was wondering how to make it able to evolve.

    The idea is that the data be accessible by several virtual machines (web server nodes) if needed; however, one node is fine for now.

    I created a virtual RDM (1 TB on the SAN) and mapped it to my VM. It is formatted with ext3.

    What I might need to do in the future is give other nodes access to this data.

    So what I thought I'd do, when that need arises, is share this RDM as an NFS share and map additional VMs to that share.

    My question is what would be the best solution:

    - Leave the RDM mapped to VM1, create the NFS share on it, and map the other web nodes to it? Is there a problem with one node writing directly and the others over NFS (locking, corruption)?

    - Map the RDM to a dedicated NFS server and share it out to all the web nodes?

    The main reason for choosing RDM is that this must stay flexible, because we don't really know the final design at this point. And the dedicated NFS server could become a hardware solution.

    Any pitfalls regarding VMotion and RDM in this scenario?

    See you soon

    Hello

    What I might need to do in the future is give other nodes access to this data.

    So what I thought I'd do, when that need arises, is share this RDM as an NFS share and map additional VMs to that share.

    This is the best approach, because mapping an RDM to multiple virtual machines requires a clustered file system, and ext3 isn't a clustered file system.

    My question is what would be the best solution:

    - Leave the RDM mapped to VM1, create the NFS share on it, and map the other web nodes to it? Is there a problem with one node writing directly and the others over NFS (locking, corruption)?

    It is a good solution.

    - Map the RDM to a dedicated NFS server and share it out to all the web nodes?

    I think it's a better solution, because you can tune the NFS server independently of the web servers. Web servers and NFS servers often need things tuned in different ways for optimal performance. In addition, you can eventually present the LUN to a physical NFS server as well.

    The main reason for choosing RDM is that this must stay flexible, because we don't really know the final design at this point. And the dedicated NFS server could become a hardware solution.

    RDM is a great solution for this.

    Any pitfalls regarding VMotion and RDM in this scenario?

    Make sure that the RDM is a "virtual RDM" and you will have no problems. Physical might work, but virtual is better.

    Best regards

    Edward L. Haletky

    VMware communities user moderator

    ====

    Author of the book "VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers", Copyright 2008 Pearson Education.

    Blue gears and SearchVMware Pro Articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

    Security Virtualization top of page links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

  • Remove the vmdk files

    I have a situation here: more than 40 GB of vmdk files.

    So, I would like to know whether it is possible to remove these files and how I can do this.

    Thanks in advance for any help

    Captura de Tela 2015-10-05 às 10.32.30.png

    renatob, read my previous comment.  You should not delete these files; you will break your virtual machine.  The reason you have so many VMDK files is that they are currently split into 2 GB files.  Follow the steps I described in the previous post to convert all these split files into a single VMDK disk file.  Then I recommend performing a disk clean-up to compact the VMDK and reclaim space.

    You do this by going to the VM's settings, clicking General, and then Clean Up Virtual Machine.  I hope this helps!
