EMC Clariion CX3-80

Hello

We're going to put our vSphere environment on an EMC CX3-80 with 600 GB 15K drives in RAID 5. Our plan is to have about 30 disks, so two EMC DAEs.

What we're not sure about is the best way to configure the LUNs and RAID groups for optimal performance, e.g. use metaLUNs, or simply LUNs within the same DAE?

Also, what size should the VMFS datastores be, and would it be a good idea to use VMFS extents? Typically, our virtual machines have a 70 GB OS drive, and the D:\ drive can vary from 200 GB up to 400-600 GB.

Thank you very much

The reason I was thinking of a metaLUN striped across the two DAEs with the 600 GB drives was to make sure we utilize both SPs, rather than having a LUN on a single DAE owned by one SP.

This could be a good reason.

But you can achieve much the same result by using multiple LUNs owned by different SPs.
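For example, with the Navisphere CLI you could bind one LUN per RAID group and alternate the default owner between the SPs - a sketch only; the host address, credentials, LUN numbers and RAID group numbers here are made up:

    naviseccli -h spa-ip -user admin -password xxx -scope 0 bind r5 10 -rg 10 -sp a -cap 800 -sq gb
    naviseccli -h spa-ip -user admin -password xxx -scope 0 bind r5 11 -rg 11 -sp b -cap 800 -sq gb

With half the LUNs default-owned by SP A and half by SP B, both storage processors share the load without needing metaLUNs.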

We'll probably use RAID 5 with 4-5 disks per RAID group and a global hot spare.

If you have enough disks, also consider building a RAID 10 group.

But what about the VMFS block size - is there no performance impact?

None.

But alignment could help. See: disk partition alignment
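For the guest OS partitions (e.g. the Windows data drives), alignment can be set when the partition is created - a sketch with diskpart, assuming Windows Server 2003, where new partitions are not aligned by default:

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=64

VMFS volumes created through the VI client are aligned on a 64 KB boundary automatically, so it is mainly the guest partitions that need attention.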

In our case, we will most probably use 800 GB LUNs, but that also means each VMFS must be mapped to one LUN, i.e. 1 VMFS : 1 LUN.

800 GB is IMHO a good size, and it's better to have a 1:1 mapping between VMFS, LUN, and RAID group.
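One thing to keep in mind with VMFS-3 is that the block size sets the maximum file size (1 MB blocks allow 256 GB files, 2 MB allow 512 GB, 4 MB allow 1 TB, 8 MB allow 2 TB), so 400-600 GB D:\ VMDKs need at least a 4 MB block size. Creating the datastore from the service console would look something like this - a sketch, with a made-up device path and label; the VI client does the same thing:

    vmkfstools -C vmfs3 -b 4m -S DS_RG10 /vmfs/devices/disks/naa.60060160123456789:1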

But should VMFS extents be used?

I'd rather not use extents.

And if some VMs have D:\, E:\, F:\ drives that can grow to more than 800 GB, what would be the best approach?

Consider using RDM disks in this case.
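Creating the RDM mapping file from the service console would look something like this - a sketch, the device and paths are made up (-r creates a virtual compatibility mapping, -z a physical one):

    vmkfstools -r /vmfs/devices/disks/naa.60060160abcdef01 /vmfs/volumes/DS_RG10/myvm/myvm_data.vmdk

The resulting .vmdk is then attached to the VM like any existing disk, which lets a single guest disk be larger than your 800 GB VMFS LUNs.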

André

Tags: VMware

Similar Questions

  • EMC Clariion CX3 not supported in 5.1, but will it work?

    Hi all.  We are about to move from ESXi 4.1U2 to 5.1 in the coming weeks.  We have an old EMC Clariion CX3-20c that I've used as a test box for some time.  It is not our main array (those are EqualLogics) and it is used only for testing and other completely non-critical purposes.

    The HCL for 5.1 shows that nothing in the CX3 family is supported.  My questions are: does it still work, and if so, are there any required settings?  I'm fine with it being unsupported since it serves such a non-critical role, as long as I can get it connected.  Just trying to squeeze a bit more life out of this old array.

    We are connecting via the software iSCSI initiator, no Fibre Channel involved.

    Thank you!

    We have one and use it with ESXi 5.1... it is stable with MRU and failover mode 1; no Round Robin is available on the CX3 series.

    It doesn't perform particularly well, but like you, we aren't using it for production stuff.

  • SRM with different DELL/EMC CLARiiON systems

    I have a client who currently owns a Dell Clariion CX300. They're looking to replace it with a CX4-120 and would send the CX300 to an offsite location for use in an SRM-based DR solution.

    According to my research, I believe this solution is possible using EMC RecoverPoint/SE to mirror between the two over an IP network, but what is not clear is whether I also need SnapView and MirrorView (I don't think I do).

    Has anyone done something similar to this before and care to share their experiences?

    Hi Sean

    I am writing as an employee of the EMC UK VMware affinity team. Please see my responses inline below.

    "I have a customer who currently has a Dell Clariion CX300. They seek to replace it with a CX4-120 and I would send the CX300 to an offsite location for use as a solution using MRS DR. »

    > > > > It is possible - as you suggest, since the CX300 does not support MirrorView-based SRM (support for this begins with the CX3 family), the only way to achieve SRM with the CX300 is through the use of RecoverPoint.

    "According to my research, I believe this solution is possible using EMC RecoverPoint/SE to mirror between the two over an IP network, but what is not clear is whether I also need SnapView and MirrorView (I don't think I do)."

    > > > > RecoverPoint uses its own capabilities, in conjunction with a host-based, fabric-based, or CLARiiON-based write splitter, to log all writes to a designated journal volume on the SAN. Neither SnapView nor MirrorView is required for this.

    > > > > The CX3/CX4 generation of CLARiiON supports the CLARiiON RecoverPoint splitter, which allows these arrays to split writes both from VMs with RDMs (a limitation when using the host splitter with ESX - the host splitter only supports Microsoft VMs using RDMs, so no SRM support) and from VMFS volumes, which is what SRM needs.

    > > > > Fabric-based splitting and CLARiiON-based splitting are fully integrated with ESX, including support for Site Recovery Manager (per the VMware SRM compatibility matrix). Since the CX300 does not support the CLARiiON splitter, this leaves you with fabric-based splitting on the DR side, using something like the Brocade 7600 or the Cisco SSM blade or 9222i fabric switch. My concern is that the cost of implementing an intelligent-fabric-based RecoverPoint solution on the DR side may exceed what the client will budget for their DR site.

    > > > > Depending on the profile of the ESX data replicated to the DR site, it may be more cost-effective to upgrade the DR array to another CX4-120 and use MirrorView together with SRM - or, if RecoverPoint remains the preferred solution, to have CLARiiONs at both the production and DR sites that can support the CLARiiON RecoverPoint splitter.

    Let me know your thoughts

    Kind regards

    Alex Tanner

  • Expanding EMC Clariion LUN

    I would like to use Navisphere Manager's LUN migration to move a small LUN to a larger one.  How can I get ESX to see it as the larger LUN?  In Windows, you just rescan in Disk Management and use diskpart/extend to claim the new space.

    Is this possible in ESX?

    No, you can't grow a VMFS volume that way. Your only options:

    (1) create new larger LUN, create new VMFS, migrate all virtual machines to new LUN, delete the old VMFS.

    (2) first save all virtual machines, remove VMFS, expand LUN, create VMFS on larger LUN, restore virtual machines

    (3) create a new LUN of just the additional size you want, and add that LUN as an extent (see the sketch below).
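    For option 3, adding the LUN as an extent from the service console looks something like this - a sketch with made-up partition names; the same can be done in the VI client under the datastore's properties:

        vmkfstools -Z /vmfs/devices/disks/vmhba1:0:2:1 /vmfs/devices/disks/vmhba1:0:1:1

    The second argument is the head partition of the existing VMFS volume. Keep in mind the datastore then depends on every LUN in the span.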

    ---

    VMware vExpert 2009

    http://blog.vadmin.ru

  • EMC LUNs: move to another ESXi host

    We have an ESXi server connected to an EMC Clariion CX3-10c (iSCSI, 1 LUN). We are going fully ESXi, and this server needs to be rebuilt for that. I have built a new ESXi server and want to keep the virtual machines residing on the LUN. Is there a way to present the LUN to the new ESXi server? I see a migrate option in Navisphere, but don't know what it does. I am a rookie!

    You should be able to present the existing LUN with your VMFS datastore to the new ESXi host by adding the new host to the storage group on the CLARiiON. The new server will then see the VMFS datastore, and as long as the network settings (vSwitches) are the same on the new ESXi host, you can migrate the virtual machines to it and then decommission the old host.
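    Once the new host is in the storage group, a rescan should make the datastore appear - for example (the adapter name here is made up, and the same rescan can be done from the vSphere client):

        esxcfg-rescan vmhba33
        esxcfg-volume -l

    If the volume shows up in the esxcfg-volume list as a snapshot/unresolved copy, it can be mounted persistently from there.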

  • High ping latency even when pinging localhost

    Hello

    For a few days, maybe a week, we have been facing strange ping latency.

    At first, we thought it had something to do with our network equipment, but after a few hours of investigation we arrived at the following conclusions:

    (1) Pinging VM to VM (both hosted on the same ESXi host) leads to an average of >1.5 ms with spikes up to 50 ms, over 100 pings at the default interval.

    (2) We created two virtual machines on a separate vSwitch with no dedicated NIC and tried a ping between them. The result was better but still not good - average >0.5 ms with spikes up to 3 ms and even 10 ms.

    (3) Pinging the ESXi management interfaces from other devices on the same LAN showed good ping times - average latency around 0.2 ms with spikes up to 1.7 ms.

    (4) Pinging network devices from the ESXi console itself (via SSH) showed higher latency than expected - average >0.6 ms with spikes up to 5 ms.

    (5) The interesting part: pinging localhost from the ESXi console - average >0.3 ms with spikes up to 2-3 ms.
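    For what it's worth, tests like these can be repeated from the ESXi shell with plain ping (the IP below is made up), and vmkping can be used to test a specific VMkernel interface:

        ping -c 100 127.0.0.1
        vmkping -c 100 10.0.0.1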

    We thought it might be contention/a bottleneck somewhere on the ESXi host, but could not confirm that, not yet at least. CPU usage is about 65-80% with spikes up to 85% in esxtop. Could this be the cause of our problem? Here's the esxtop output:


    PCPU USED (%): 59 61 52 63 74 59 68 74 AVG: 64

    PCPU UTIL (%): 60 61 53 63 75 60 69 74 AVG: 64

    ID      GID     NAME            NWLD %USED  %RUN   %SYS %WAIT   %VMWAIT %RDY   %IDLE  %OVRLP %CSTP %MLMTD %SWPWT
    1       1       idle            8    275.55 546.00 0.01 0.00    -       224.18 7.03   0.00   0.00  0.00   0.00
    8       8       helper          86   97.66  99.76  0.00 8101.96 -       51.99  0.00   2.24   0.00  0.00   0.00
    1786346 1786346 FreeBSD9_037    10   71.96  71.78  2.37 771.33  1.69    58.69  196.71 2.49   53.55 0.00   0.00
    6218425 6218425 FreeBSD9_152    8    69.71  70.18  2.55 657.31  0.43    35.38  85.34  3.17   0.36  0.00   0.00
    4825332 4825332 webhosting01.wh 12   41.80  39.86  3.44 1070.75 0.43    36.05  305.25 2.12   1.45  0.00   0.00
    6363251 6363251 esxtop.36586035 1    20.00  19.33  0.00 75.09   -       0.11   0.00   0.04   0.00  0.00   0.00
    5587218 5587218 CentOS5_148     10   17.43  15.48  1.89 907.99  1.08    32.95  333.24 0.62   0.00  0.00   0.00
    1528430 1528430 FreeBSD9_116    8    17.00  16.97  0.81 707.03  0.13    39.60  134.28 0.19   0.95  0.00   0.00
    4108400 4108400 FreeBSD9_140    8    13.54  13.67  0.38 725.60  4.52    22.63  146.88 4.08   0.57  0.00   0.00
    1884461 1884461 FreeBSD9_134    8    12.79  12.49  0.67 738.98  0.18    13.53  165.37 0.53   0.00  0.00   0.00
    6112231 6112231 FreeBSD9_143    7    12.24  11.99  0.96 647.13  0.00    7.75   75.78  0.79   0.00  0.00   0.00
    4409984 4409984 Win7_128        8    9.06   9.20   0.04 742.67  0.02    3.87   176.57 0.25   0.00  0.00   0.00
    6285951 6285951 Unattended_Depl 9    8.73   8.01   0.84 835.92  0.04    6.54   174.64 0.48   0.00  0.00   0.00

    The helper process uses a lot of CPU, but I have no idea how to debug that process further, or whether it could be the cause of this.

    There is no throttling/contention bottleneck on the network side.

    Our installation is quite simple:

    An ESXi 5.1.0 build 1065491 host running on an old HP DL585 G2 with 8 Opteron 8218 cores and 64 GB of RAM. The host links to the rest of the infrastructure through two switches: a gigabit switch for production, accessible from the outside (public IP addresses), and a gigabit/100 Mbps switch for internal management using private IP addresses. One NIC is connected to each of these two switches, and we use a vDS. There are two management/VMkernel interfaces - one on the public network and one on the internal network. The customers' virtual machines are in the same network/LAN as the public management interface, no VLANs.

    For storage we use a SAN - an EMC Clariion CX3-20 - connected to the ESXi server through 2 Brocade switches at 4 Gbps.

    If anyone has had similar problems, or has any idea what could cause these latencies, I would appreciate a little help :)

    Kind regards

    Raul

    As mentioned by Jon before, your CPU %RDY and %CSTP are far too high. I suspect that this could be one of (if not) the main cause(s). From what I can tell from your esxtop paste, you seem to be running (at least) about 24 vCPUs on an old server with 8 physical CPU cores. As a general rule, you should aim for %RDY below 5 per vCPU.

    Unless you can upgrade the hardware or throw in another server to balance the load, you should reduce the number of vCPUs on your VMs. Check what your VMs really need and adjust the number of vCPUs accordingly.
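    To check this over time rather than from a single snapshot, esxtop can be run in batch mode and the CSV reviewed afterwards - a sketch only; the sampling interval and count are arbitrary:

        esxtop -b -d 10 -n 60 > /tmp/esxtop-cpu.csv

    Dividing a VM group's %RDY by its vCPU count gives the per-vCPU figure to compare against the ~5% guideline.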

    Of course, an update to at least a recent 5.1 release wouldn't hurt, but I doubt it will get rid of the high CPU ready time values if you keep oversubscribing vCPUs on this host.

  • Add ESXi 4.1 to the same datastores as ESX 4.0

    Dear community

    I want to connect two ESXi 4.1 hosts to the three datastores that are currently connected to four ESX 4.0 hosts running production VMs, and then migrate the whole infrastructure to ESXi without stopping the virtual machines.
    Will I have a problem? All of this is on an EMC Clariion CX3 SAN.

    Thank you

    Fabio

    You can have 4.1 and 4.0 hosts connected to the same datastore without any problem.

  • Upgrade suggestions to replace equipment in a VMware ESX 3.5 cluster

    Hello everyone. I have a VMware cluster coming off a 3-year lease and I'm looking for some upgrade suggestions for the following leased equipment:

    • VMware ESX 3.5 cluster

    • 3 x HP ProLiant BL460c G1 blades acting as hosts (2 x Xeon E5345, 12 GB of RAM)

    • 1 x EMC Clariion CX3-10c iSCSI SAN with 4 TB raw (15 x 268.4 GB SAS)

    • Currently hosts 12 Windows Server 2K3 VMs

    We are currently buying out the blades, since the fair-market-value buyout was less than $200. I am currently exploring the following options to present to the owners of the company, along with how each gets us to the best DR situation.

    1. Stay with current equipment

      • Upgrade to vSphere 4.1

      • Upgrade all the VMs to Windows Server 2K8 R2

      • Buy out the SAN and blades

      • Re-up support for at least one year, revisit the upgrade next year

      • Increase SAN capacity

      • Look into Amazon EC2 or Windows Azure for possible failover

    2. Stay with blades, buy new SAN with additional capacity

      • Upgrade to vSphere 4.1

      • Upgrade all the VMs to Windows Server 2K8 R2

      • Buy out the blades

      • For the new SAN, buy either:

        • Celerra NX4

        • HP Lefthand SAN

        • HP Storage Works Smart Array

      • Look into Amazon EC2 or Windows Azure for possible failover

    3. Buy new blades and a new SAN, send the old blades to a failover data center

      • Upgrade to vSphere 4.1

      • Upgrade all the VMs to Windows Server 2K8 R2

      • Purchase 3 new HP blades, with either:

        • Intel Xeon E55XX family

        • AMD Opteron 41XX family

      • For the new SAN, buy either:

        • Celerra NX4

        • HP Lefthand SAN

        • HP Storage Works Smart Array

      • Send the old blades off site with a cheap SAN (Drobo maybe?)

        • Work on a failover system

    Sorry for the long details, I can clarify anything if necessary. Thank you!

    I would also reach out to International Computer Inc. to see what they can do for you... I have a friend who works there, and they cover the PA area... They are among the major VMware partner companies in the northeastern United States, so you'd get quality work from them (regardless of what you buy)... I would also reach out to GreenPages to see if they could work with you... I don't know if they cover PA or Philly, but it's worth a phone call/email to find out... Both companies are major players in the VMware partner space, so I don't think you could go wrong either way... I've worked with GreenPages folks on some projects at a couple of different companies and have always had good experiences with their engineers and sales people...

    Most quality partner reps will work with their engineers on the solutions before presenting them to you. I know GP does this, and I would expect IC does as well...

    VMware VCP4

    Please consider awarding points for "helpful" or "correct" answers.

  • Moving virtual machines with RDMs between clusters

    People,

    We have two two-node clusters, with all nodes running ESX 3.5 U1. The clusters have different CPU types, which prevents VMotion between them. All nodes are FC-attached and share storage on an EMC CLARiiON CX3-80. Currently the VMFS datastores, which mostly hold OS disks, are shared by all clusters, but the RDMs (data drives) are presented specifically to one cluster or the other. We are planning a major reorganization of our infrastructure, which effectively means swapping clusters for the majority of our virtual machines. Is there a more efficient way to migrate than detaching the RDMs, cold-migrating the virtual machines, and re-adding the RDMs on the other side?

    As you can tell, my experience of VMware internals is currently quite limited, but I'm learning - sometimes the hard way!

    Kind regards

    Paul Esson

    Hi Paul:

    Everyone here has some excellent suggestions. However, cold-migrating a virtual machine that has RDMs will actually convert or clone the content of the RDMs into virtual disks. The original content of the RDMs will be left intact, and the only way to avoid the conversion with a cold migrate is to remove the RDMs from your virtual machine's configuration first (as you mentioned). Storage VMotion will only move the RDM pointer files and does nothing with the RDMs themselves. You'd still have to re-present them on your storage array to the other cluster.

    See the following article for a bit more detail:

    http://KB.VMware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1005241&sliceId=1&docTypeID=DT_KB_1_1&dialogID=42936094&StateID=0%200%2041484096

    -Dave

  • Recovery plan test failed - "file not found" error

    Hi all

    I finally got my SRM environment installed a few days ago and successfully ran a small recovery plan test with 2 VMs. Today we ran a new test with 18 low-priority virtual machines, and everything went smoothly until, after 15 minutes, the process failed with a "file not found" error while powering on the virtual machines at the remote site.

    Basically, we have one EMC Clariion at each site with MirrorView, configured following the "VMware Site Recovery Manager with EMC CLARiiON CX3 and MirrorView/S" implementation guide for SRM 1.0. We noticed that the snapshot the recovery Clariion creates goes into the Active state right after the test starts, but about 15 minutes later it returned to Inactive.

    Further research showed me that the error the recovery plan test gives is to be expected, given that the snapshot is no longer active and all the virtual machine files (VMX and VMDK) that SRM tries to power on are on that snapshot.

    Have any of you seen something like that?

    Thanks in advance.

    Saludos/Regards

    Nicolas Solop

    Buenos Aires, Argentina

    -


    My company

    My LinkedIn profile

    Spanish-language Virtualization Group on LinkedIn

    -


    Hello

    Our best practice for configuring CLARiiON SnapView - which performs a copy-on-first-write into the reserved LUN pool each time a snapshot of the production volume is taken - is to allocate between 20-30% of the production LUN capacity for the RLP LUNs.

    Regarding your 300 GB volume and its reserved LUN pool (and you might need more than one RLP LUN - that is, if you have a consistent snapshot that includes multiple LUNs, it will add to the space required) - you should provision at least 30 GB, if not 60 GB, LUNs in your Reserved LUN Pool.

    I would try increasing the RLP LUNs to that kind of size and running the test again.

    Regards

    Alex Tanner

  • NaviAgent in ESX 4?

    I upgraded one of our ESX 3.5 hosts to ESX 4 using Update Manager, and everything went well!

    Now, as expected, the newly upgraded ESX 4 host doesn't have the naviagent for our EMC Clariion CX3-20 installed or running. No problem, I thought, I'll just reinstall it. We have version 6.26, and I ran the .sh script and it seemed to go OK. I started the service and again, everything seemed OK. But Navisphere does not see the agent, and I don't know where to go from here. PowerLink is one of the worst web sites I can imagine for trying to find anything, and I couldn't find a reference to ESX 4 anywhere. Is it supported? Is there a different installation procedure? Anyone know?

    For naviagent on ESX, use the RPM version as usual.

    You have to configure the agent.config file and open the firewall ports.
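    On classic ESX the ports can be opened from the service console - a sketch, assuming ESX 4's esxcfg-firewall and the standard naviagent port 6389/tcp (verify against the links below):

        esxcfg-firewall -o 6389,tcp,in,naviagent
        service naviagent restart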

    http://KB.VMware.com/selfservice/viewContent.do?externalId=1007938&sliceId=1

    http://seandobsonsitblog.blogspot.com/2009/02/VMware-Navisphere-Agent-Install.html

    André

  • Multipathing management

    Hello-

    I have a question about the multipathing settings on ESX 3.5.0 158874.  I have four PowerEdge 2950s connecting to an EMC Clariion CX3-20 (FLARE 26, I think) through two fibre switches.  Each server has two QLogic HBAs connecting over separate paths to the redundant storage processors.  I'm trying to use vRangerPro (Vizioncore backup software) on my VCB backup server and am unable to run VM backups.  The vendor told me it's because I'm running MRU multipathing and need to change to Fixed multipathing.  VCB works fine on the backup server when I don't use the vRanger software.

    I have another ESX 3.5 cluster connected to an IBM SAN, which uses Fixed multipathing, and there the vRanger/VCB setup runs VM backups just fine.  My question is: has anyone seen this problem before with vRangerPro, where you have to change the multipathing policy to run VM backups?  If you can't comment on the vRanger product, can someone comment on using Fixed paths with a Clariion CX3-20 in conjunction with ESX, and how moving from MRU to Fixed could affect my production environment?

    For what it's worth, EMC describes my SAN as having active/active processors, but VMware treats it as active/passive because both SPs cannot access the same LUN at the same time.  At this point, I'm confused about what the effects of moving from MRU to Fixed would be in my environment.  Any thoughts are appreciated.

    Kind regards

    Erik

    PowerPath will use more than one path at a time, and VCB loses the connection.

    I usually install PowerPath without a license key, to make it work in simple failover.

    Alternatively, you can try to uninstall PowerPath.
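    Whichever way you go, it's worth listing the current paths and policy per LUN from the service console first (ESX 3.5 syntax), so you can confirm the effect of any change:

        esxcfg-mpath -l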

    André

    * If you found this or any other answer useful, please consider awarding points for correct or helpful answers.

  • VM NICs won't stay connected

    Powered down the SAN environment over the weekend.  Normal shutdown, using the prescribed IPL/shutdown procedures.  Powered back on, and all but two VMs work.  The VM NICs won't stay connected, even after ticking the box and applying the change to the VM.  The two VMs reside on the same ESX host.  Have reinstalled the NIC, changed the VLAN, and enabled/disabled the network card, all without success.  All three ESX hosts are operational.  All VMs on the host (except those two) are operational.

    SAN: EMC Clariion CX3-10; hosts: Dell PE 6850/R900; ESX 3.5.0 build 123630; VMware Tools v123630; guest OS: W2K3 R2 SP2 Std Ed

    No problem.

    Don't forget to leave some points for useful/correct messages.

    -KjB

    VMware vExpert

  • Looking for a SAN solution

    We have a small infrastructure we use for software development purposes. We have 4 ESX servers, all currently running ESX 3.5 U3, and about 30 VMs. We store the virtual machines on local ESX storage. We want to move to centralized storage for the virtual machines because we want to use VMotion and HA. Is there a low-cost SAN solution for a setup like ours?

    A Dell MD3000i would be a good choice for a 4-ESX-server environment, very cheap compared to full-blown SAN solutions (even the Dell EqualLogic stuff is at least 5 times the cost). We use them for small customer installs (internally we use Dell|EMC Clariion CX3s and CX4s, but those would be overkill for you). Make sure you budget for a couple of decent network switches for iSCSI too (we use Cisco ones, but the Dell ones are probably fine if you want to buy everything from one vendor to simplify support).

  • CLARiiON/ESX question

    Maybe this is more of an EMC problem, but I thought I'd run it by anyone using a CLARiiON here.

    We have 3 ESX 4 hosts connected to a Clariion CX3-10c. Last night we received a few errors on one SP.

    - Storage DAE faulted (SPE enclosure): wiring information differs between SPs; may indicate disconnected cabinets.

    - Disk Processor Enclosure (enclosure SPE) is faulted. Servers may have lost access to the hard drives in this storage system. See Navisphere Manager alerts for details.

    - Navisphere can no longer manage (SP-A). It is not serving I/O to the storage system. See Navisphere Manager alerts for details.

    Nothing has changed physically on our system. We have 3 ESX hosts connected to this array. Following these errors, the other SP got the error below.

    '10.150.1.106' cannot be managed.  All attempts to manage the system failed. The agent may not be running.

    The IP referenced is one of our ESX hosts. I don't even know if we are running Clariion agents on our hosts. Can anyone shed some light on these? It appears we never lost connectivity to our ESX hosts, physical or virtual.

    Thank you

    Scott

    Are you running Navisphere Agents on your ESX hosts?  It looks like you had an SP fail briefly.  Do you have any trespassed LUNs visible in the Navisphere console?  You should definitely talk to EMC about this particular series of events.

    VCP 3, 4

    www.vstable.com
