SAN migration - FC SAN to SAS SAN

Hello to all,

I have a client who currently has a configuration with a single server and a dual-controller FC storage array connected point-to-point to the server.

He would like a completely new solution:

2 host nuovi

1 SAN in SAS

This would allow him to use HA; he already has an Essentials Plus license.

My question: in your experience, what is the best way to migrate the datastores from an FC SAN to a new SAS SAN?

Thanks in advance for your replies.

One possibility could be this:

Install a SAS HBA in the old host.

Connect it to the new storage and present the new LUNs.

Migrate the VMs with Storage vMotion. To do it hot you will need at least a Standard license; you can use an Eval license.

Or move them cold.

Then connect the new hosts to the new storage and migrate the VMs with vMotion.

Finally, decommission the old server.

If you can't get an extra HBA, recycle one of the new hosts and build a temporary 2-node cluster.
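To make the capacity side of this plan concrete, here is a minimal Python sketch of the pre-flight check (all VM names, sizes and the 15% headroom figure are hypothetical, not a VMware requirement):

```python
# Hypothetical pre-flight check before a Storage vMotion batch: verify
# the new SAS LUN can hold the VMs, keeping some free-space headroom
# for VMFS metadata, swap files and snapshots.

def plan_migration(vm_sizes_gb, target_free_gb, headroom=0.15):
    """Return (batch, remaining_gb): the VMs that fit on the target
    datastore while keeping `headroom` (fraction) of it free."""
    budget = target_free_gb * (1 - headroom)
    batch, used = [], 0.0
    # migrate the smallest VMs first so each pass frees source space
    for name, size in sorted(vm_sizes_gb.items(), key=lambda kv: kv[1]):
        if used + size <= budget:
            batch.append(name)
            used += size
    return batch, budget - used

vms = {"dc01": 80, "sql01": 400, "file01": 900}   # illustrative sizes in GB
batch, left = plan_migration(vms, target_free_gb=1200)
print(batch)
```

Run one pass, Storage vMotion the returned batch, then re-run with the remaining VMs until everything has moved.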

I hope that was clear.

Ciao

Tags: VMware

Similar Questions

  • Backup issue - SAN/SAS/Disk-to-Disk

    We are currently planning to introduce VMware with a Dell SAS-connected SAN and a couple of physical hosts. A separate physical host, with no VMware installed, is expected to run only backups; it has a SAS card connected to the SAN and to the backup tape device.

    The backup application will be CA ARCserve. With this setup, will we be able to back up individual files/folders on each virtual machine via integration with VMware? What part of VMware would need to be installed on the backup server to enable this? A separate physical host was chosen so that a disk-to-disk-to-tape strategy could be used to reduce the backup window.

    Is there a better method?

    Or must CA ARCserve be installed on a virtual machine instead?

    What you should look at is the vStorage APIs for Data Protection (VADP). You can read more here, and you will also need to reach out to CA on how to activate it and make it work: http://KB.VMware.com/kb/1021175

  • Storage - iSCSI SAN 1Gbit or 6Gbit SAS HBA

    Ciao,

    sorry guys, I need help finding the right path...
    I already know iSCSI and SAS, more or less...
    The proposals I've received are roughly the same configuration, with two nodes, but each one foresees a different way of interconnecting to the storage.
    Dell offers me the PS4100 SAN.
    IBM offers me the DS5300 as DAS (SAN).
    Now, IOPS considerations aside (I've already done those, and they only affect the disk configuration, i.e. RAID type and number of disks)... I wanted to concentrate on the interconnect side.
    My feeling is that going down the Dell road I'd end up locked in... locked in by having an entry-level SAN with only 2 redundant 1 Gbit network connections (4 total + 2 management).
    Going down the IBM road, instead, I have the impression of a nice, fast channel: 2 SAS HBA connections of 6 Gbit per node (4 SAS HBA + 2 management NICs).
    I don't know whether these numbers should impress me; I'd like to read your point of view.
    I can't choose.
    THX

    Luca Dell'Oca wrote:

    I'd stick with what Francesco said: SAS is much easier to configure, but with direct attach you usually can't go beyond 4 servers; iSCSI has the pros and cons inverted.

    Ciao, Luca

    Replying to everyone:

    I'd rather feel free to add servers than worry about configuration difficulties... those can be dealt with in some way.

    It could be much harder to overcome a "physical" limit.

    Right?

    Maybe we didn't understand each other: SAS in direct attach is muuuch simpler to configure. You cable the HBAs to the controllers in the proper way, set up the LUN masking, and you're done. You can connect a maximum of 4 hosts.

    With iSCSI you have to plan for at least 2 Ethernet switches of a certain level, and configure the switches correctly: VLANs, LACP, trunking, jumbo frames, etc. etc.

    You'll be able to connect as many servers as you want, within the limits of the switches.

    As the number of hosts grows, though, you'll probably hit the storage's performance limits first.

    How many ESXi hosts do you want to deploy? Do you have other kinds of hosts to connect too, e.g. Linux or Windows?

    Do you expect growth over the next 4-5 years?

    Ciao
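    To put rough numbers on the interconnect comparison above (line-rate arithmetic only; the per-port figures are theoretical maxima, not benchmark results):

```python
# Raw line-rate comparison between the two proposals (illustrative):
# 1 GbE iSCSI vs 6 Gb/s SAS. SAS uses 8b/10b encoding, so a 6 Gb/s
# lane carries 600 MB/s of payload.

def gbe_mbps(gigabits):
    """Usable MB/s of an Ethernet link at line rate (decimal MB)."""
    return gigabits * 1000 / 8      # 8 bits per byte

def sas_mbps(gigabits):
    """Usable MB/s of a SAS lane after 8b/10b encoding."""
    return gigabits * 1000 / 10     # 10 line bits per payload byte

print(gbe_mbps(1))   # per 1 GbE port, before TCP/IP + iSCSI overhead
print(sas_mbps(6))   # per 6 Gb/s SAS lane
```

    In practice a 1 GbE iSCSI path delivers more like 100-120 MB/s after TCP/IP and iSCSI overhead, while multipathing over several ports or SAS lanes raises the aggregate for both options.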

  • HP SAS HBAs with IBM storage and SAS SAN vs iSCSI SAN

    Hello, I am working on a project and I have to add a SAN solution to my client's infrastructure. My client already has an HP server and we wanted to add another server, but an IBM one. I thought I'd add an IBM storage (DS3200) that has two SAS interfaces. The problem is that the IBM distributor says he does not know if the HP server will be able to communicate with the IBM SAS storage. So one possibility was to put an HP SAS HBA controller in the HP server. The second option is to put in an IBM SAS HBA controller. I think that since SAS is a standard, it shouldn't matter what brand of HBA I use. Shouldn't it work anyway?

    There is another option: adding an iSCSI storage instead of a SAS one. The problem is that I do not know the performance of the iSCSI storage. I want to put on it a VM with Exchange Server (50 mailboxes), an ERP (for 50 users) with its SQL Server, a virtual file server, and a virtual machine with Active Directory.

    So, the questions are:

    1. Does anyone have experience with iSCSI storage: is its performance good enough (I intend to use a dedicated gigabit switch for the iSCSI network)?

    2. Does anyone know if I can use different brands of servers (in this case an IBM and an HP) with an IBM SAS storage? Has anyone already done it? Are HP SAS HBAs compatible with the IBM storage? Are IBM SAS HBAs compatible with HP servers?

    3. If what I describe in question 2 is possible, what do you suggest: putting an HP SAS HBA in the HP server, or an IBM SAS HBA in the HP server?

    Of course the option I want to go with is adding the IBM SAS storage instead of using an iSCSI one, but I'm not sure about the compatibility between the HP servers and the IBM storage.

    Thanks in advance.

    In this case, the reason why I wouldn't use SAS is that your application does not require that level of performance and the cost is not justified. You can configure iSCSI and redirect the savings into more important features such as snapshots, replication and dedup. If you went with the DS3200 you would spend 10K and get none of these features. Performance is very good; you will find that the system is barely warmed up by this load. The SAS protocol can support high I/O, but it is not the protocol that provides the I/O: it is the number of disks and the RAID mode that define your I/O capability. I have pushed 40K small random IOPS against a single iSCSI target with 1 GB of cache, so it can deliver the I/O. Throughput is not as high, but do you really need it? In the future you can cost-effectively move to 10GbE if you find the bandwidth insufficient, which would be faster than the 3 Gb SAS limit that is uneconomical to upgrade 3+ times.
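    Mike's point that disk count and RAID mode, not the transport protocol, set the I/O ceiling can be sketched with the usual write-penalty arithmetic (the per-disk IOPS figure is a rule of thumb, not vendor data):

```python
# Back-of-the-envelope front-end IOPS for a disk group, accounting for
# the RAID write penalty (RAID10 = 2, RAID5 = 4, RAID6 = 6).

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def frontend_iops(disks, iops_per_disk, raid, read_fraction):
    """Approximate host-visible IOPS for a given read/write mix."""
    raw = disks * iops_per_disk
    penalty = RAID_WRITE_PENALTY[raid]
    # each logical write costs `penalty` back-end operations
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# e.g. 14 x 15k SAS disks (~175 IOPS each) in RAID5, 70% read workload:
print(round(frontend_iops(14, 175, "raid5", 0.70)))
```

    Note how the same spindles in RAID10 (penalty 2) would yield noticeably more front-end IOPS for the same 70/30 mix.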

    Kind regards

    Mike

    http://blog.laspina.ca/

    vExpert 2009

  • Comparison of local SATA VMFS vs SAS SAN VMFS deployment

    Hi all

    According to my understanding, running a virtual machine on top of a SAN with VMFS is supposed to be the industry best practice for VM performance.

    However, with my current setup, I am very confused as to why the performance is slower than on the local hard disk.

    Local disk: 4 x 500 GB SATA 7200 RPM RAID-5

    C:\SQLTEST> sqlio.exe

    SQLIO v1.5.SG

    1 thread reading for 30 seconds from file testfile.dat

    using 2 KB IOs over 128 KB stripes with 64 IOs per run

    initialization done

    AGGREGATED DATA:

    throughput metrics:

    IOs/s: 8826.73

    MBs/s: 17.23

    While on the SAN: 14 x 300 GB SAS 15000 RPM RAID-5

    C:\SQLTEST> sqlio.exe

    SQLIO v1.5.SG

    1 thread reading for 30 seconds from file testfile.dat

    using 2 KB IOs over 128 KB stripes with 64 IOs per run

    initialization done

    AGGREGATED DATA:

    throughput metrics:

    IOs/s: 2314.03

    MBs/s: 4.51

    Any idea how this could happen, please?

    Kind regards

    AWT

    iSCSI is what it is...   You won't get wow numbers like you do on DAS.

    On the MD3000i?  We found it important to keep really hard-hitting servers/VMs/guests on dedicated LUNs and virtual disks.  The ~160 MB/s iSCSI limit is per target, where each LUN is considered a separate target.  On the dual-controller MD3000i, load balancing is done per virtual disk.  If you match a dedicated LUN to a virtual disk and split the virtual disks across the controllers, server performance holds up well in production.  Still, you won't get the wow numbers in tests that you get on DAS devices.

    If you want really good performance inside the VM?  Use the MS iSCSI initiator; even though we don't like it, tests showed much better IO performance.  In production?  You won't really see a lot of difference, in my experience.
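    As a sanity check, the SQLIO figures in the question are internally consistent: MB/s is just IOPS times the 2 KB block size.

```python
# SQLIO reports both IOPS and MB/s for the same run; with 2 KB IOs the
# two figures are linked by: MB/s = IOPS * 2 KB / 1024.

def mbs_from_iops(iops, io_kb=2):
    """Throughput in MB/s implied by an IOPS figure at a given IO size."""
    return iops * io_kb / 1024

print(round(mbs_from_iops(8826.73), 2))  # local SATA run (reported 17.23)
print(round(mbs_from_iops(2314.03), 2))  # SAN SAS run (reported 4.51)
```

    At ~4.5 MB/s the SAN run is nowhere near the ~160 MB/s per-target ceiling mentioned above, so the gap is per-command latency on the iSCSI path rather than bandwidth.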

  • VMware ESXi 3.5 U4 suffering slow performance on an iSCSI SAN with 15k RPM SAS RAID-5

    Hi all

    I suffer from very slow performance on my virtual machines deployed on the iSCSI SAN VMFS datastore. The following attachment shows the deployment scheme, which I believe already follows the best practices found around the net of separating the SAN network from the server network.

    According to: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iSCSI-customers-using-VMware.html

    It seems that no matter how fast the disks are, using an iSCSI SAN in a VMware environment it will always be limited to around 160 MB/s (correct me if I'm wrong):

    "Usually, this means that customers connecting to a single iSCSI target (regardless of how many LUNs may be behind that target, 1 or more) cannot drive more than 120-160 MBps."

    What can I do to boost performance?

    Thank you

    Kind regards

    AWT

    > Shall I connect that to my production network, as well as on vSwitch0?

    Yes, just leave the default teaming options and it works perfectly.

    > I also tried to add two more network cards

    Do not team the NICs at the virtual machine level; it is not useful. Do it at the vSwitch level instead.

    André

    * If you found this or any other answer useful please consider awarding points for correct or helpful answers

  • Acceptable DELL PS4100xv and SAN HQ disk performance

    Hello

    I have a Dell PS4100xv with 12 x 600 GB 15K SAS drives configured in RAID 6.

    Three R420 servers are connected via 2 x PowerConnect 6224 switches (stacked).

    I'm under ESXi 5.1 U1 with a mixture of SQL virtual computers, file servers, and Proxy servers.

    What are acceptable values for a healthy EQL in terms of latency, disk queue depth, IOPS, etc.?

    Just recently, I moved a virtual machine to the PS4100XV. The virtual machine behaves very slowly when using the Dell EQL. It is a SQL Server, and a lot of users are complaining about the sluggish behavior.

    I need to determine whether I have a performance problem on the PS4100xv before I start troubleshooting the VMs and ESXi hosts.

    These are the values measured by SAN HQ during the last eight hours.

    If I got this right, this is my own analysis:

    • The disk queue is OK, since it is constantly at 0. The EQL can handle the load and nothing is pending.
    • IOPS are OK, because based on the RAID type my array should support about 1500 IOPS. The current reading is only 742 IOPS.
    • Write latency is OK because it is 13 ms and does not even approach 20 ms.
    • Read latency: do I have a problem? It sits above 20 ms, roughly flat between 25 ms and 40 ms, but never more than 40 ms.
    • For the IO rate & IO size, I don't know whether they are good indicators. What are acceptable values?
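    The thresholds used in this analysis can be codified as a small triage helper (the 20 ms / 50 ms boundaries are common rules of thumb for EqualLogic arrays, not official limits):

```python
# Rough latency triage for a SAN volume, using rule-of-thumb boundaries:
# under 20 ms is generally healthy, 20-50 ms deserves investigation,
# over 50 ms usually indicates a real problem.

def triage_latency(ms):
    """Classify an average volume latency reading in milliseconds."""
    if ms < 20:
        return "ok"
    if ms <= 50:
        return "investigate"
    return "problem"

print(triage_latency(13))   # write latency from the question
print(triage_latency(30))   # average read latency from the question
```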

    Disk queue depth: zero (0); the graph shows it flat at zero.

    Average READ IOPS: 625

    Average WRITE IOPS: 117

    Total average IOPS: 742

    Average read latency: 30 ms (20 ms to 40 ms)

    Average write latency: 13 ms (straight line @ 13 ms)

    Weighted average latency: 28 ms

    Average read IO rate: 26.41 MB/s

    Average write IO rate: 2.26 MB/s

    Total: 28.67 MB/s

    Average READ I/O size: 43.30 KB

    Average WRITE I/O size: 19.91 KB

    Weighted average I/O size: 39.62 KB

    There are no packet retransmissions and the network usage is only 3%.

    Would really appreciate the input and recommendations.

    What are acceptable values for a healthy EQL in terms of latency, IOPS, disk queue depth, etc.?

    Thank you

    Paul

    I strongly suggest you open a support case for any performance-related issue.  However, with ESXi, especially when I see low I/O plus high latency, the first thing I usually find is that DelayedACK is enabled.  Sometimes Large Receive Offload (LRO) can cause that as well.  Another common problem is non-optimized MPIO: not using Round Robin with 3 IOs per path instead of the default of 1000 IOs, or, if you have an Enterprise or Enterprise Plus license, not installing the Dell EQL MEM.  Sharing several VMDKs on a single virtual SCSI adapter inside each virtual machine is another cause of latency.  Each virtual machine can have up to four (4) virtual SCSI adapters.  With a single adapter, only one VMDK (disk) can be active at a given time, so an I/O operation on the "C:" drive will block your SQL disks.
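    The advice about spreading VMDKs across virtual SCSI adapters can be sketched as a simple round-robin assignment (illustrative only; in a real VM, SCSI unit 7 on each adapter is reserved for the adapter itself and would need to be skipped):

```python
# Spread a VM's disks across its (up to) four virtual SCSI adapters so
# that I/O on one VMDK does not serialize behind another on the same
# adapter. SCSI IDs are shown as (adapter, unit) pairs.

MAX_ADAPTERS = 4

def assign_scsi_ids(vmdk_names):
    """Round-robin each VMDK onto a separate virtual SCSI adapter."""
    layout = {}
    for i, name in enumerate(vmdk_names):
        adapter = i % MAX_ADAPTERS   # cycle through adapters 0..3
        unit = i // MAX_ADAPTERS     # next unit once all adapters used
        layout[name] = (adapter, unit)
    return layout

disks = ["os.vmdk", "sql_data.vmdk", "sql_log.vmdk", "tempdb.vmdk"]
print(assign_scsi_ids(disks))
```

    With this layout the OS disk, data, log and tempdb each get their own adapter, so a burst on one cannot block the others.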

    This link is a PDF that will show you how to fix common incorrect configurations with ESXi + EQL.

    en.Community.Dell.com/.../20434601.aspx

    HOWEVER: if you find Delayed ACK is enabled, the process described there probably won't be enough to fix the current connections.  With the node in maintenance mode, remove the EQL discovery address and remove the "Static discovery" entries for the EQL volumes.  Restart the node.  Add the discovery addresses back but do NOT rescan yet.  Make sure that login_timeout is 60 seconds and DelayedACK is disabled.  Then do the rescan.

    Kind regards

  • SAN Performance issues

    Hello

    Background: I have a PS6100XS hybrid.  I have SAN HQ installed on a virtual machine.

    I was wondering if there is a way to monitor the I/O statistics of the volumes, such as queue depth, read/write statistics, etc.  Also, is there a way to see where a volume currently resides (SSD or 10k SAS disks)?  And finally, is there a way to capture the I/O statistics in a spreadsheet?

    Thank you.

    Hello

    In SANHQ you can drill down to the volume level.  Disk queue depth can be seen at the member or pool level.

    Re: SSD/SAS.  No, there is no way to see what is on the SSD RAIDset vs the SAS RAIDset in the hybrid.

    Re: I/O stats.  You can export a variety of reports in PDF, CSV or XLS formats.

    Kind regards

  • PowerVault MD3220i SAN upgrade

    I am trying to take our PowerVault MD3220i SAN down the SSD road, and the documentation for the SAN says that the only supported SSDs are 2.5", 150 GB, 3 Gb/s SAS SSDs.
    My question is: does this mean I can get a 240 GB 6 Gbps SSD and it will simply not address more than 150 GB of storage and fall back to a 3 Gb/s interface speed, or does it actually mean that I literally cannot use an SSD that exceeds 150 GB and 3 Gb/s?

    Thank you

    Hello hyperAdmin,
    I don't know which document you are looking at - it might be a bit old - but you can add bigger SSD drives to the MD3220i.  Here is a link to the support matrix for the MD3220i; if you look at pages 11-14 it lists all supported hard drives. FTP://FTP.Dell.com/manuals/common/PowerVault-md3200_Concept%20Guide_en-us.PDF
    Please let us know if you have any other questions.

  • B200 M4 SAN Boot with attached local disk

    Hello

    I have a B200 M4 blade that I want to SAN-boot from the storage array. The blade has a local disk. I need to keep the local disk but would like to bypass it for the SAN boot. When I set up a Local Disk Configuration Policy using "No Local Storage" as shown in the screenshot, and associate the blade with a service profile, I get the error: "server does not meet the local drive requirements of the service profile". I did some research and tried a BIOS policy that can disable the SAS RAID config, but it did not work; I still get the same error. See the screenshots of the error and the BIOS policy. Any advice on solving this problem?

    I use UCSM 2.2 (3f).

    Appreciate the pointers.

    Thank you

    Braven

    If you just want to bypass the drive during startup, simply don't include it in your boot policy.

    No need to disable the onboard RAID controller.

    -Kenny

  • SAN - RAID set

    Hello

    I am a newbie to vSphere 5 and am in the process of setting up a test lab. I use 2 x ESXi5 hosts (HP 380 G5) and an iSCSI SAN (also a 380 G5).  I'm not sure of the best RAID configuration for the SAN. I would appreciate some recommendations on the best way to configure this small lab SAN.

    Should I create 1 x 1.4 TB RAID 5 set or 2 x 700 GB RAID 5 sets?  Is one configuration better than the other?

    The idea is to create 2 LUNs, one per RAID array, and one VMFS per LUN. I may be running 5-7 VMs per LUN, none of them I/O intensive. The hosts do not boot from SAN either.

    SAN Configuration (HP 380 G5)

    2 x dual-core Xeon CPUs

    16 GB OF RAM

    1 X P400 controller

    2 x 72 GB - RAID 1 (OS - ESXi5 + Openfiler)

    6 x 300 GB 10k SAS = 1.4 TB in RAID 5

    Regards

    Bob

    I would create a single RAID5 set with all 6 disks (or 5 disks plus a spare), and use the ACU (Array Configuration Utility) to create at least two logical volumes on this RAID set. Although it is not necessary to create multiple volumes from a technical point of view, this configuration will allow you to 'play' with features like storage migration/Storage vMotion in the lab environment. BTW, you will get much better disk performance with a BBWC (512 MB or more) on the RAID controller.

    André
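    For reference, the capacity figures in this question follow from standard RAID arithmetic (a minimal sketch; hot spares would reduce the disk count first):

```python
# Usable capacity of the lab RAID sets: RAID1 keeps half the disks,
# RAID5 sacrifices one disk's worth of space for parity.

def raid1_usable_gb(disks, size_gb):
    """Mirrored pairs: half the raw capacity is usable."""
    return disks // 2 * size_gb

def raid5_usable_gb(disks, size_gb):
    """One disk's worth of capacity goes to distributed parity."""
    return (disks - 1) * size_gb

print(raid1_usable_gb(2, 72))    # the 2 x 72 GB OS mirror
print(raid5_usable_gb(6, 300))   # the 6 x 300 GB data set
```

    The 1500 GB (decimal) result is the "1.4 TB" in the question once expressed in binary terabytes.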

  • SAN boot storage configuration

    I am replacing my VMware hosts and have a question about best practices for configuring local storage. Currently I have three Dell servers with embedded VMware 4.1. My new host has 4 x 300 GB 15k SAS drives. I currently boot directly from an EMC SAN using iSCSI. My plan is to continue to boot directly from a new EMC e3300 SAN using iSCSI. What is the best way to use the local storage?

    1. Create a RAID 5, but only use it for the VMware installation?

    2. Create a smaller RAID for VMware, then another for usable space?

    3. Another option altogether?

    Thanks in advance for your help,

    Don

    If you install your ESXi on that RAID group, the remaining space will be available as a local datastore.

  • Crash/hang - SAN related?

    I have a strange problem with one of our customers. We're using a server (with the specifications below) connected to a Dell MD3220 (via redundant SAS cables). Sometimes VMware hangs/crashes, requiring a hard reboot.

    This doesn't seem to occur at a specific time (it happens both during periods of high load and at night, when I/O is reduced to a minimum). In my view it is unrelated to server load and has a specific cause.

    Could you please help me find out why this happens?

    Please note:

    -I'm using VMware ESXi 5.1. However, it also happened with VMware ESXi 5.0.

    Material:

    -Asus RS724Q-E6/RS12

    -32 GB OF RAM

    -2 x Intel Xeon 5675

    -Dell 6Gbps HBA (supplied with the SAN)

    -SAN: Dell MD3220

    Software:

    -ESXi (free) 5.1

    -No VCenter used

    -VMs running CentOS 6.3 with VMware Tools installed and updated.

    Thank you

    Hello

    Here is the KB that explains this issue: http://kb.vmware.com/kb/2040583

    It is a problem with the mpt2sas driver.

    Regards

    Mohammed

  • vSphere 5 Storage DRS across 2 separate SAN storages

    Hi, would it be possible to set up Storage DRS across 2 SAN storages?

    To add to that: yes, it shouldn't be a problem. You can use Storage DRS across multiple SANs, as long as all hosts can see the LUNs. Keep in mind, however, that Storage DRS is not particularly aware of the disks underneath, so ideally you want to keep them all quite similar - I would not, for example, use Storage DRS in a cluster with both SATA and SAS disks - there would be a significant performance impact depending on which LUN the servers' disks ended up on...

    Storage DRS will separate disks if it must - if you have several drives assigned to a server, they could potentially end up on different LUNs in the Storage DRS cluster. Really, this isn't a bad thing if all of the LUNs are at the same performance level, just something to be aware of...

  • Extended RAC: 2-site cluster without SAN interconnect?

    Hello:

    First of all thank you for your time and your support.

    We intend to deploy an extended RAC cluster with 2 nodes (Dell PowerEdge) and 1 SAS box (Dell PowerVault MD3220) on site A, and the same on site B (plus a third quorum site with an NFS voting disk).

    The two sites will be interconnected with redundant high-speed links (the cluster interconnect) and a redundant connection to the public network.

    I know that usually a SAN interconnection between site A and site B is needed too, but the question is:

    Would it be possible to deploy a 2-site cluster without a SAN interconnection between site A and site B?

    Is it possible to deploy the cluster without a dedicated storage connection (SAN link) between the sites?

    Can ASM mirror storage from site A to site B and vice versa over the cluster interconnect network?

    Best regards.

    Hello

    As far as I know, we cannot build the cluster without a SAN interconnection between site A and site B.

    No; ASM provides mirroring across the SAN.
