Marking a SAN SSD as SSD in vSphere 5.1

Hello


We have four ESXi 5.1 hosts attached to an IBM V3700 storage unit via iSCSI, which serves as our SAN. Recently we added some SSDs to the SAN and created a volume, and the ESXi hosts can see everything. However, the datastore is still reported as Non-SSD in the vCenter storage view.

[screenshot: Capture.JPG]

I understand that I can use the CLI to force the hosts to mark drives as SSD, but most of the articles I have come across deal with local SSDs.

I was wondering what I can do for an iSCSI SAN? My thinking is that the storage subsystem already handles all the SSD features, such as caching, itself. I'm just not sure whether I also need to do this on the ESXi hosts.

Thank you


Edmond.

This also applies to SAN LUNs.
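
For anyone landing here, a rough sketch of that CLI procedure on ESXi 5.x (the naa.xxx device ID is a placeholder, and VMW_SATP_ALUA is just an example SATP; check what your own V3700 LUN actually reports before adding the rule):

    # Find the LUN's naa ID and its current SATP (Storage Array Type)
    esxcli storage nmp device list

    # Add a claim rule that tags the LUN as SSD
    # (use the SATP reported for your LUN, not necessarily this one)
    esxcli storage nmp satp rule add -s VMW_SATP_ALUA -d naa.xxx -o enable_ssd

    # Re-claim the device so the rule takes effect
    esxcli storage core claiming reclaim -d naa.xxx

    # Verify: the device should now report "Is SSD: true"
    esxcli storage core device list -d naa.xxx | grep -i ssd

Note that the tag is cosmetic as far as the array is concerned: the V3700 keeps doing its own SSD caching/tiering either way. The flag mainly matters for host-side features that key off the SSD attribute.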

Tags: VMware

Similar Questions

  • Anyone using SAN SSD for View? (evaluating a Whiptail XLR8R)

    Hello

    We are looking at running our View environment on a SAN - specifically a Whiptail XLR8R appliance - 1.5 TB of SSD, and it dedupes, so we should easily be able to fit our environment on it at great speed. However, it is not certified by VMware (even though they can show me customers using it successfully). It is Citrix certified, though, and the results were impressive.

    So, is anyone using a Whiptail, or does anyone have general comments on SSD SANs for View?

    Cheers

    We have about 250 workstations running on replicas stored on EqualLogic SSDs. We chose SSD to eliminate any possibility of boot storms caused by disk bottlenecks. So far we have seen no problems with them, and in fact full desktops, not linked clones, seem to boot nearly instantly. I have no experience with the Whiptail appliance, though it sounds interesting with the dedupe functionality.

    Do you have specific questions about SSDs?

    Ken

  • Portege Z930 - external adapter needed for internal SSD drive

    The board of my TOSHIBA Portege Z930 (ex-laptop) was destroyed by hot water. I removed the small SSD and now need to transfer the data to another PC via USB.

    I borrowed a SATA-USB adapter with two interfaces from a friend, and even the smaller SATA connector was not small enough for the TOSHIBA SSD. Apparently I need a special SATA-USB adapter that fits the mSATA SSD installed in the TOSHIBA Portege.

    The reason I am writing here is that there are many SATA-USB adapters advertised on the internet, but they seem to have larger connectors. TOSHIBA labels the SSD itself as mSATA, but several people use "mSATA" for a larger SSD size.

    I don't want to order something that doesn't fit again. Has anyone here already removed the SSD from a Portege?
    Could you give me a link to such an adapter?

    The laptop is equipped with an Intel HM77 Panther Point chipset, and the launch of this Intel chipset allowed laptop manufacturers to integrate SSDs in the mSATA (mini-SATA) form factor.

    There is also a difference between mSATA form factors.
    Because mSATA is limited to only four NAND chips, Intel developed a new mSATA standard called M.2, or mSATA2.

    In many cases this is also called NGFF (Next Generation Form Factor).
    The M.2 standard is longer and has a lower profile.

    I'm not sure whether this Portege Z930 uses mSATA or mSATA2, but check the internal SSD itself, where this information should be available.

  • Following the upgrade to 34.0.5, I can no longer access our SAN, airco, vSphere Web Client pages, etc. How can I fix this?

    After upgrading to 34.0.5, I can no longer access the web pages of the SAN, the air conditioning system, the vSphere Web Client management interface, etc. They all still work in Chrome.

    The air conditioning system gives an error:

       Firefox cannot guarantee the safety of your data on 10.32.16.50 because it uses SSLv3, a broken security protocol.
       Advanced info: ssl_error_no_cypher_overlap
    

    The SAN management interface and the vSphere Web Client just display an empty page. I'm not sure whether their problems are due to SSLv3 as well.

    I understand that you want to be careful, but I seriously doubt I am dealing with an attack on our SAN. Please at least give us an option to connect anyway.

    Unfortunately, it is not possible to set this preference selectively; with the internal Firefox preferences mentioned previously, you can only enable or disable SSLv3 globally.
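
    If it helps, the global preference in question is security.tls.version.min in about:config (a documented Firefox 34 setting; there is no per-site override):

        security.tls.version.min = 0   (allow SSL 3.0 again, for every site)
        security.tls.version.min = 1   (Firefox 34 default: TLS 1.0 or higher)

    Setting it to 0 gets you back into SSLv3-only devices at the cost of re-enabling the broken protocol everywhere, so consider reverting it once the SAN/airco firmware is updated.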

  • Newbie Q: Boot from SAN image via Ethernet?

    Hi all

    No experience with this, hence the question:

    Not sure if this is possible - I've been told it is, but I'm skeptical...

    We have an IBM DS4400 FC SAN (FAStT700) implemented with fibre connections to two host controller boxes running Windows 2008 with Emulex PL1000 cards.

    We want to try to boot virtual machine images from a partition on the SAN. Can we get vSphere (ESXi 4.1) on x3650 servers to connect to the LUN via Ethernet 'through' these FC-connected host machines?

    I have seen documentation on how to configure a connection to a SAN where the host hardware has FC cards installed. Is there middleware (iSCSI) software that will allow me to do this through the FC-connected 2008 boxes?

    I suspect this can only be done using FC HBAs in the x3650s, connected directly via an FC switch (zoning?) to the FAStT700 controller?

    Let me know if this is not clearly explained. I'm trying to work it out in my head as I go along and learn.

    Thank you

    ESXi 4.1 can boot from SAN over Fibre Channel, iSCSI and FCoE. If the Windows servers can be configured as an iSCSI target, it should be possible. You must add iSCSI target software; StarWind, iSCSI Cake and others offer this feature. If the OS is Windows Storage Server, the capability is built into the OS.
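
    In case it helps, a minimal sketch of pointing the ESXi software iSCSI initiator at such a target (ESXi 5.x esxcli syntax; on 4.1 the equivalent commands live in the older esxcli swiscsi namespace, and the vmhba name and IP below are placeholders):

        # Enable the software iSCSI adapter
        esxcli iscsi software set --enabled=true

        # Find the adapter name (typically vmhba3x)
        esxcli iscsi adapter list

        # Point it at the Windows iSCSI target and rescan
        esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.10:3260
        esxcli storage core adapter rescan -A vmhba33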

  • Possibility of setting up a new vSphere 4 environment and migrating the ESX 3.5 hosts?

    The upgrade path makes me nervous (am I too paranoid?).

    In any case, I have enough hardware and SAN storage to set up a new vSphere environment.

    Is it possible to install a new, separate vSphere 4 environment and then migrate the hosts from the 3.5 environment into it? The docs always start with upgrading VC to version 4 and being careful not to break things...

    I actually finished a project along these lines a few days ago. I removed the old virtual machines from the inventory, moved the LUN with the old VMs to the storage group for the new environment, and added them to the inventory again. Of course, if you are not reusing the same storage it will be more involved, but it certainly can be done. The plan was to do what weinstein said in his post, adding the old hosts to the new VC, but I had to move the LUN first and finish the way I did. It went well, and I have had no problems with the virtual machines since.

  • HA and SAN traffic on dedicated NIC port

    Hello

    We have a small VMware network: 2 HP servers and a SAN. We are buying vSphere Essentials with HA.

    If I understand correctly, it is considered best practice to put HA and SAN traffic on dedicated NIC ports.

    What I need is help with how to implement that.

    We will have 2 spare NIC ports per server to dedicate to this, one for HA and one for SAN traffic.

    Currently we use 2 teamed ports per server connected to one subnet and two more ports connected to another subnet, for a total of 4 ports per server. We will be buying a 6-port HP network card very soon; it is the biggest one they make for our HP DL365. This will give us the 2 extra ports per server to dedicate to SAN and HA traffic.

    OR should we just use the 2 spare ports teamed for the SAN and forget about dedicating one to HA?

    Thank you!!!

    Peter

    HA traffic runs over the service console / management network. Separating it is advisable, but it is not a must. I will say that if you don't have enough NICs, using NIC teaming to introduce redundancy is more important than anything else when you run HA.

    Duncan

    VMware communities user moderator | VCP | VCDX

    -

  • VSAN - performance/licensing mystery

    Howdy,

    I have a few questions about VSAN and SSDs. I hope some of you can help me out, or have already answered them.

    In my setup, I have 4 servers configured in a cluster. Each server has 2 SAS SSDs, where one SSD is used as cache flash and the other SSD is used for capacity. ESXi is installed on a separate SATA DOM.

    1. When I first set up these servers using PuTTY and SSH, both SSDs showed up as all-flash SSDs in the vSphere Web Client. Moreover, they were configured as "IsSSD = 1" and "IsCapacityFlash = 0".

    [screenshots: bilde1.jpg, bilde2.jpg]

    Both show as drive type Flash in the vSphere Web Client.

    [screenshot: bilde3.jpg]

    2. When I set up the SSDs now, using either the vSphere Web Client or SSH, one SSD displays as Flash and the other shows as HDD. There should be no real difference between either of these approaches and what I did before. However, I can't get both SSDs to display as 'Flash' under drive type again.

    So my question here is basically whether there is a significant difference between using either of these approaches to mark the SSDs.

    One thing I wonder about, for example, is performance: if I tag an SSD as a regular HDD using the vSphere Web Client, will performance on that specific server be reduced to that of a hybrid solution? Or will there be no associated performance difference, i.e. it essentially just shows as an HDD, but performance remains similar to an SSD/all-flash solution? That is the conclusion I had settled on for some time, but now I'm not so sure.

    When will "peripheral storage" show two SSD Flash under "Type of drive. This is one of the reasons why I ignored if they show that the HARD drive under vSandatastore-> manage-> settings-> device backup like the photo above.

    [screenshot: bilde4.jpg]


    Note! After putting the server into maintenance mode during setup, restarting, and exiting maintenance mode, both SSDs show as 'Flash' again.


    Also: is the additional all-flash license required to use VSAN and to get all-flash performance? Or could it be dropped, with the ordinary VSAN license alone still giving all-flash performance?

    Best regards

    A slightly confused new member's first post

    (a) It's not entirely hybrid performance, because the device is still faster than a normal HDD. But you will not get full all-flash performance either. So, something in between.

    (b) SSH + CLI or RVC, yes. You cannot do it from the user interface in the initial all-flash release (at the time of writing, July 2015).
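
    For what it's worth, a sketch of the CLI route on a vSAN 6.x host (the naa.xxx device ID is a placeholder):

        # Check how the host currently classifies the device
        # (look at the IsSSD and IsCapacityFlash fields)
        vdq -q

        # Tag the device for the capacity tier of an all-flash disk group
        esxcli vsan storage tag add -d naa.xxx -t capacityFlash

        # Undo the tag if needed
        esxcli vsan storage tag remove -d naa.xxx -t capacityFlash

    A capacity-tagged flash device may show as 'HDD' in some Web Client views, which would match what you are seeing; the drive-type column there reflects the claimed role rather than the physical media.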

  • New ESX environment design

    Community,

    I could use some help with my thinking. There are currently a few projects concerning upgrades of storage/network components etc., but I have been weighing a few different approaches.

    Our installation of 175 ESXi 5.1 hosts currently looks roughly like this:

    CPU/memory:

    Depending on lifecycle stage, we have about 4 different models active (same HW vendor); every one has the storage/NIC configuration below:

    Storage:

    2 x 4 Gb Fibre Channel (connected to different fabrics)

    Network:

    8 x 1 Gbps NICs

    -2 x vMotion (on different switches)

    -2 x management (on different switches)

    -4 x VM traffic (on different switches)

    So what to do in the future? Currently the network is mostly stuck at 1 Gbit/s because the specialist teams have not upgraded it yet.

    There are no plans for converged infrastructure etc. yet, but I have thought about it.

    We have no real problem pushing us in one direction. Because of the fragmentation of disciplines, each team has a different point of view on how to do things.

    Case 1

    Keep the current design, but upgrade the network and storage components as needed in the future. This is how we work now.

    Case 2

    Look at a converged infrastructure solution? BladeCenter / InfiniBand?

    Case 3

    What about network virtualization (VMware NSX), with a converged solution (or not)?

    A few things I want to achieve (not operational requirements (yet)):

    - Increase/scale performance

    - Lower latency

    - Lower costs (cables, DC space, power, cooling and administration) in the future

    What are your experiences, problems, things to think about, what not to do?

    I don't necessarily need a full explanation (it would be nice, of course), but if I can get some new thoughts or directions from your answers, I can dig into it a little deeper myself. Based on your experiences, I'm wondering whether it's worth putting more effort into this.

    Don't worry about the investment; if it's worth building a business case that can achieve savings on management etc., it might be useful.

    Hope to hear from you guys

    Hey Wh33ly,

    Let me break down each section a bit here:

    Server hardware (CPU/memory, reducing cables etc.)

    - From a CPU/memory standpoint, support is much the same with whatever vendor you have access to.  However, given the requirement to reduce latency and cabling, you may want to look toward converged compute, something like a Cisco UCS chassis.  The Cisco UCS chassis has two Fabric Interconnects (FIs) sitting at the top of the rack that act as the brains of the operation.  At the rear of the chassis you have two IOMs (input/output modules); these are the external ports for the whole blade chassis.  Most configurations I see usually have 20 Gbps coming out of each IOM up to the FI (aka 2 cables per IOM), or 40 Gbps (aka 4 cables up to the FI).  Either way, the Cisco UCS chassis greatly reduces cabling, as the only cables coming out of the chassis are those from your IOM modules.  The only cables leaving the rack will be those from the FIs to your core network infrastructure.  With this in mind, any blade center will usually reduce your cable management, and it will also reduce East/West traffic on your network since all the blades live in the same chassis.

    Because compute is much the same with any vendor you go with, the question you have to ask is: what problems does this chassis solve compared to my current infrastructure?  Solving them has to be worth the extra time and resources it will take to install the new infrastructure and migrate onto it.

    Networking

    - When it comes to networking, there is nothing wrong with 1 Gb, but 10 Gb prices have dropped so quickly that it is becoming the default.  Also, with 40 Gb coming down the pipe before long, this should lower the cost of 10 Gb further, which can make a gradual migration plan even more attractive.

    - 10 Gb also reduces cabling, since you get ten 1 Gb cables' worth of bandwidth in a single cable.

    - If you decide to go all 10 Gb, you will want to consider migrating to VDS switches.  vSphere Distributed Switches offer many benefits and also let you use NIOC (Network I/O Control) to control how each port group shares the 10 Gb uplinks.  You can then define how much traffic each port group gets and how many shares it has.  This gives a lot of flexibility.

    - NSX: NSX is ideal if you want to be able to control your VLANs, firewalls and network separation through VMware/vSphere.  It gives you a huge amount of automation, flexibility and self-service capability, but it does NOT increase speed.  Even with NSX, your underlying connections are always going to be your performance bottleneck or benchmark.  So if you're running an all 1 Gb network, even if you move to NSX you will still be clocked at 1 Gb.  Yes, NSX will give you more flexibility and may also reduce some East/West traffic, but you're still limited by your physical hardware.  With this in mind, I would say NSX's biggest assets come into play if you're a service provider, run a multi-tenancy environment, or require the ability to create VLANs etc. through automation, scripts or self-service portals.  Like anything else, I view NSX as a tool: if it's the right tool for the job then I use it, and if not then I don't.

    Long story short, networking is quickly moving to 10 Gb.  There is no immediate NEED to jump ship as soon as possible, but it's something you might want to budget for over the next year or two to set yourself up for an easier transition.  10 Gb also lets virtual machine backups run a LOT faster, which can help with RTO/RPO requirements.

    Storage

    It all boils down to: how much space do you need and how much performance do you need?   One thing is certain: SSD is here to stay and will be the future of the SAN.  Almost every SAN vendor now has a tiered solution, a hybrid solution or an all-SSD solution.  The nice thing about all-SSD SAN solutions is that you no longer have to spend administrative cycles building storage pools and tweaking them with bronze, silver and gold groups for performance.  That all goes away with all-SSD, since everything is super fast.  That said, there are also many converged storage solutions coming: local host caching and VSAN-type stuff.  This all puts pressure on the SAN companies to find ways to be more innovative about storage and to lower prices, as it's much cheaper to buy 3 hosts full of drives and put some kind of storage virtualization layer on top to share the LUNs/storage.  In the next year or two you will start to see the SAN space saturated with many small businesses doing converged storage, VSAN and local caching solutions at duly lower price points.

    When your storage decision moves forward, you want to go with the SAN/vendor that offers the best performance for your money and gives you enough headroom for future growth in IOPS and space.  Typically when I size a SAN, I build in 2-3 years' worth of buffer for IOPS and space consumption, as the SAN purchase is one of the biggest an IT organization makes.  If you get the IOPS or space sizing badly wrong and run out within a year, it's hard to go back to the money pot asking for more again.  The downside of all-SSD SANs is that they are still a bit expensive and not always within people's budgets.
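
    To make that 2-3 year buffer concrete, a purely illustrative sizing calculation (all numbers made up):

        Current workload:   8,000 IOPS and 20 TB used
        Assumed growth:     25% per year
        3-year target:      8,000 x 1.25^3  = ~15,600 IOPS
                            20 TB x 1.25^3  = ~39 TB

    In other words, under that growth assumption you size the array for roughly double the day-one numbers, rather than for what you consume today.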

    I hope this helped,

    If you have any questions please let me know.

  • How to get redundancy for vMotion with multiple VMkernels?

    Hello

    I have an EqualLogic SAN on a vSphere host. The connection is set up as follows:

    vSwitch1

    *************************

    -Service Console 2

    -iSCSI4

    -iSCSI3

    -iSCSI2

    -iSCSI1

    -


    vmnic4

    -


    vmnic5

    **************************

    The iSCSI1...4 VMKs are each bound to one NIC (either vmnic4 or vmnic5). If I can only enable vMotion on one VMK, how do I get redundancy?

    I guess a dedicated NIC for vMotion is an option (?), but that seems like an inefficient approach...

    Hello Gheywood,

    In fact, dedicating two teamed NICs to vMotion is the best choice for redundancy, if possible. If you can't do that, you still should not put your vMotion traffic on the same NIC team as your iSCSI traffic. You should have dedicated NICs/networking (physically separated if possible) for iSCSI.

    If you do combine the vMotion port group with another port group, the Service Console (or, in the case of ESXi, the management network) is usually a good choice.
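
    If you do end up with two spare ports for it, here is a rough sketch of a dedicated, teamed vMotion setup from the CLI (ESXi 5.x syntax; vSwitch2, vmnic6/vmnic7, vmk5 and the IP are all placeholders for whatever is free on your host):

        # vSwitch with two teamed uplinks
        esxcli network vswitch standard add -v vSwitch2
        esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic6
        esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic7
        esxcli network vswitch standard policy failover set -v vSwitch2 -a vmnic6,vmnic7

        # Port group plus VMkernel interface for vMotion
        esxcli network vswitch standard portgroup add -v vSwitch2 -p vMotion
        esxcli network ip interface add -i vmk5 -p vMotion
        esxcli network ip interface ipv4 set -i vmk5 -t static -I 192.168.50.11 -N 255.255.255.0

        # Enable vMotion on the new interface
        vim-cmd hostsvc/vmotion/vnic_set vmk5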

    Unless you are using converged network adapters (CNAs) and 10 GbE, it isn't uncommon to see 10 NICs in an ESX host that uses IP storage. I hope this helps.

    Don't forget to mark this answer 'correct' or 'helpful' if you found it useful (you'll get points, too).

    Kind regards

    Harley Stagner

    VCP3, VCP4

  • Migrating virtual machines from ESXi 4.1 to 5.5 on new hardware

    I'm new to VMware, and our company bought 4 new servers and a SAN to replace our 12-year-old vSphere 4.1 setup.  The old configuration consists of a physical Windows 2003 vCenter 4.1 server, 2 Dell 2900 ESXi hosts and 1 MD3000i SAN.  The new setup is 4 HP DL360 Gen9 servers (3 hosts, 1 vCenter Server) and an MSA 2040.  Can someone please explain the best way to go about moving our current VMs over to the new hardware? I found the following in a previous post here, but it does not cover moving the virtual machines from one SAN to another.

    Migrating VMs from ESXi 4.x to 5.5

    "If I understand your question, continue to build the new environment of 5.5 with your 3 hosts from scratch 'ignoring' your old hosts. Once your new environment up & running, connect 4.1 guests to your new server vCenter Server and vMotion to move virtual machines while they run to the new cluster (live migration without interruption).

    VMware Tools upgrades do not take place automatically during migration.

    Once your virtual machines are on the new hosts, update the Tools as usual, and you can then also upgrade the virtual hardware of your virtual machines.

    Upgrading the virtual HW is optional, though; if you don't have VMs that need the new features, you can skip this last step and keep them as they are."

    TYIA.

    The previous answer is pretty good, but I would like to add a few points.

    You can 'disconnect' your 4.1 hosts from your old vCenter with the virtual machines running, and then, when you connect the hosts to your 5.5 environment, it will import your running VMs as well.  As long as you have properly configured the virtual networks, you can live-migrate off the old host and the old datastore simultaneously via the web client, provided your ESX license allows it.  Otherwise, shut your virtual machines down one at a time to migrate them over.

    Alternatively, you can consider running your vCenter as a virtual machine inside the cluster it manages.  This gives you DRS and HA for vCenter, plus a 4th host for virtual machines.

    If you don't have additional licenses, you can also install your new 5.5 hosts and run them for 60 days in evaluation mode before you upgrade your 4.1 licenses and apply them to the new hosts.

  • Sharing a drive between several virtual machines

    Hi all

    We have a configuration with a SAN (Dell Compellent) and use vSphere 6 to host a number of Windows servers. We currently have one giant Windows machine with a large number of small files for a particular application.

    Now we would like to create a cluster with a load balancer in front of it, so we can handle more load. To do this, we want to create a disk that is shared between 2 (or more) Windows machines.

    One of the problems is that NTFS is not a clustered file system. So I did a lot of research on Google to see what my options are. As far as I can tell, they are the following:

    1. Set up a Microsoft Cluster Shared Volume (CSV) (which uses SMB, if I read the documentation correctly)

    2. Use Windows shares on a separate file server

    3. Use NFS (deprecated on Windows)

    4. Switch to Linux and use NFS

    The problem I have is that they all go over the network. For example, option 2 is ridiculously slow. NFS on Linux is also way slower than local-disk access (backed by VMware iSCSI), and NFS on Windows does not appear to be supported very well.

    I know there are cluster-aware file systems such as VMFS etc. Is there a way to access one directly from my Windows VM, or are there SAN devices that are directly accessible from the Windows virtual machine?

    Or maybe there are other solutions to set up a shared drive?

    I know there are cluster-aware file systems such as VMFS etc. Is there a way to access one directly from my Windows VM, or are there SAN devices that are directly accessible from the Windows virtual machine?

    VMFS is indeed a clustered file system, and it allows multiple virtual machines to access the same storage (datastore). That is different from creating one virtual disk (VMDK) and presenting it to multiple virtual machines. You can do that, BUT the guest virtual machines must then handle the concurrent access to the disk themselves; on Windows, you must enable the Failover Clustering feature... without it, the data may get corrupted.
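
    To illustrate the "present one VMDK to several VMs" mechanism: the generic option is the multi-writer flag, which is suitable for cluster-aware guest software (Windows Failover Clustering has its own stricter requirements around SCSI bus sharing/RDMs). The datastore and file name below are placeholders:

        # Per-VM .vmx entries attaching one shared, eager-zeroed thick disk
        scsi1:0.present = "TRUE"
        scsi1:0.fileName = "/vmfs/volumes/shared-ds/shared-disk.vmdk"
        scsi1:0.sharing = "multi-writer"

    Without cluster-aware software in the guests coordinating the writes, this is exactly the corruption scenario described above.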

    I think the best and supported solution for you is to create another cluster with the Failover Clustering functionality and create a file share resource that your NLB nodes will access.

  • Cannot remove the disk group

    Unable to find Virtual SAN performance statistics in the vSphere Web Client UI.

    Use the VSAN Observer in RVC to monitor and collect in-depth Virtual SAN performance metrics. The vSphere Web Client is currently not able to display performance counters for Virtual SAN.

    VSAN Observer

    Launch VSAN Observer to start accumulating performance metrics. Run the command "vsan.observer ~/computers/<cluster> --run-webserver --force" to start the tool, then use a modern web browser to access the metrics portal. VSAN Observer provides in-depth analysis of physical disk layer performance in Virtual SAN, cache hit rates, latencies, etc.
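
    A typical session looks something like this (credentials, cluster name and vCenter host are placeholders; the Observer's web UI defaults to port 8010):

        # Start RVC on/against the vCenter Server
        rvc administrator@vsphere.local@vcenter.example.com

        # Inside RVC: run the observer with its live web server
        vsan.observer ~/computers/<cluster> --run-webserver --force

        # Then browse to the stats portal:
        https://vcenter.example.com:8010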

  • New ESXi host added to vSAN is put in a new network partition group

    I'm adding a new Dell R730xd to my active vSAN cluster (also all Dell R730xds, same specs), but when doing so, the new host is placed into a different network partition group.  The switch configuration is the same, and the new host is plugged in just like the others.  I can verify connectivity between each host (furthermore, I've confirmed I can ping the vSAN interface from each of the four hosts), yet it still says there is a communication error.  Any ideas?

    Almost certainly a multicast problem; I strongly recommend reading these:

    Virtual SAN Troubleshooting: Multicast - VMware vSphere Blog

    http://www.yellow-bricks.com/2014/03/31/VSAN-misconfiguration-detected-2/
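
    A quick way to confirm it from the hosts themselves (assuming vmk2 is your vSAN-tagged VMkernel interface; substitute your own):

        # Watch for vSAN multicast traffic arriving on the vSAN vmknic
        tcpdump-uw -i vmk2 -n -s0 udp port 23451

        # For reference, the default vSAN multicast groups are
        # 224.1.2.3 port 12345 and 224.2.3.4 port 23451

    If the three healthy hosts show this traffic and the new one doesn't, look at the IGMP snooping/querier settings on its switch ports.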

  • Where can I download the vSAN trial from?

    Hello

    I selected a VMware Virtual SAN trial for download, but on the download page all I see is what is in the attached screenshot, not vSAN itself. Could someone let me know where to download the product trial from?

    The page says:

    Virtual SAN is hyper-converged and built into the core of vSphere. To evaluate Virtual SAN, you download the vSphere stack, which consists of the following:

    • The hypervisor, VMware vSphere ESXi Installable, which is installed on each physical server to host virtual machines.
    • A single instance of a management server called VMware vCenter Server, which allows centralized management of multiple vSphere hosts.

    But I already have this?

    Thank you

    There is no separate download.

    Virtual SAN is part of the vSphere installation.

    If you have the trial version of vSphere, just go to the cluster properties and enable VSAN.
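
    If you prefer the CLI, the same thing can be done from RVC (the cluster path is a placeholder):

        vsan.enable_vsan_on_cluster ~/computers/<cluster>

        # Or leave disk claiming manual:
        vsan.enable_vsan_on_cluster ~/computers/<cluster> --disable-storage-auto-claim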

    See the following links for help:

    VMware KB: Enabling or disabling a Virtual SAN cluster

    vSphere 5.5 Documentation Center
