vSphere on iSCSI SAN disk size

Hello

I installed version 5.1 and will also use VMware View.

The documentation always talks about a 2 TB disk limit, but I have also seen conflicting reports on that. I tested it, and it looks like the host can see more than 2 TB.

My question is: is 2 TB the standard limit, or can we go beyond 2 TB, and if so, what is the maximum size now?

Thank you.


Gary

Hello

If you're talking about a VMFS datastore, we now support up to 64 TB: http://www.vmware.com/pdf/vsphere5/r50/vsphere-50-configuration-maximums.pdf

If you're talking about a virtual disk, the maximum virtual disk size is 2 TB minus 512 bytes.

Regards,

Mohammed
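As a sanity check, the two limits above can be written out in bytes (a quick sketch; the 64 TB VMFS figure and the 2 TB minus 512 bytes virtual disk figure are from the answer above):

```python
# vSphere 5.x storage maximums quoted above, in bytes.
TB = 2 ** 40

vmfs5_max_datastore = 64 * TB   # VMFS-5 datastore maximum
vmdk_max = 2 * TB - 512         # virtual disk maximum (2 TB minus 512 bytes)

print(vmfs5_max_datastore)  # -> 70368744177664
print(vmdk_max)             # -> 2199023255040
```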

Tags: VMware

Similar Questions

  • Question about networking the vSphere iSCSI SAN infrastructure

    Hi all

    We are a small company with a small virtualized environment (3 ESX servers) and are about to buy an EMC AX-5 SAN (the Ethernet model, not FC) to implement some of the high availability features of vSphere. My question is related to the networking of the SAN: we have dual Cisco 2960G Gigabit switches and dual Cisco ASA 5510 firewalls in a redundant configuration.

    I understand that the best practice is to put iSCSI traffic on a switch separate from all other LAN traffic. However, I do not have the knowledge and experience to determine what real difference a separate switch would make vs. the plan of creating a separate VLAN on the Cisco switches, dedicated to iSCSI traffic only. I would make sure the iSCSI traffic is on a dedicated physical VLAN (not just a logical one, with other logical VLANs as subinterfaces on the same trunk). It is difficult for me to understand how a gigabit port on an isolated VLAN on Cisco kit would perform much worse than one on a dedicated Cisco switch. But then again, I don't know what I don't know...

    Thoughts and input would be appreciated here: I'm trying (very) hard not to drop another $6K+ on another pair of Cisco switches, unless skipping them would significantly compromise iSCSI SAN performance.

    Thanks for your time,

    Rob

    You have 2 SPs, each with 2 iSCSI ports, for example:

    SPA: 10.0.101.1 on VLAN 101 and 10.0.102.1 on VLAN 102

    SPB: 10.0.101.2 on VLAN 101 and 10.0.102.2 on VLAN 102

    On your ESX hosts, create 2 vSwitches, each with a VMkernel port on one of the iSCSI networks and each with a single physical NIC.

    See also:

    http://www.DG.com/microsites/CLARiiON-support/PDF/300-003-807.PDF

    André
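    A quick way to sanity-check that layout is to confirm each VMkernel port shares a subnet with exactly one port on each SP (a sketch assuming /24 subnets; the SP addresses are from the answer above, while the vmkernel addresses 10.0.101.10 and 10.0.102.10 are hypothetical):

    ```python
    import ipaddress

    sp_ports = ["10.0.101.1", "10.0.102.1", "10.0.101.2", "10.0.102.2"]
    vmk_ports = ["10.0.101.10", "10.0.102.10"]  # one per vSwitch (hypothetical)
    subnets = [ipaddress.ip_network("10.0.101.0/24"),
               ipaddress.ip_network("10.0.102.0/24")]

    for vmk in vmk_ports:
        vmk_ip = ipaddress.ip_address(vmk)
        # Each VMkernel port should reach only the SP ports on its own subnet.
        reachable = [sp for sp in sp_ports
                     if any(vmk_ip in net and ipaddress.ip_address(sp) in net
                            for net in subnets)]
        print(vmk, "->", reachable)
    ```

    Each VMkernel port ends up with one path to SPA and one to SPB, which is exactly the redundancy the two-vSwitch setup is after.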

  • Slow read/write performance on iSCSI SAN

    This is a new setup of ESXi 4.0 running virtual machines off a Cybernetics miSAN D iSCSI SAN.

    Running a data-heavy read test on a virtual machine took 8 minutes, vs. 1.5 minutes for the same VM on a slower VMware Server 1.0 host with virtual machines on local disk. I watched the read speed on the SAN, and it peaks at a little more than 3 MB/s on reads; disk throughput inside the virtual machine is a horribly slow 3 MB/s.

    The SAN and the server are both connected to the same 1 Gb switch. I followed this guide

    virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

    to get multipathing configured correctly, but I still do not get good performance with my VMs. I know the SAN and network should be able to handle more than 100 MB/s, but I am not getting it. I have two GbE NICs on the SAN multipathed to two GbE NICs on the ESXi host, one NIC per VMkernel port. Is there anything else I can check or do to improve my speed? Thanks in advance for the advice.

    Another vote for IOMeter.

    Try testing 32K 100% sequential read (and write) with 64 outstanding IOs; this will give you sequential performance. It should be close to 100 MB/s per active GigE path, depending on how much the storage system can put out.

    Then 32K 0% sequential (i.e. random) read (and write) with 64 outstanding IOs against a decent-sized LUN (say 4 GB+) will give a value for IOPS, which is the main factor for virtualization. Watch the latency: it usually needs to stay below approximately 50 ms. That tells you whether the default of 32 outstanding IOs per host is OK (say you had six hosts - the array should be able to deliver that random I/O at <50 ms latency with 192 outstanding IOs).
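    The arithmetic behind those numbers is easy to check (a quick sketch; the ~100 MB/s per GigE path and the 32-outstanding-IOs-per-host default are from the text above):

    ```python
    # Rough ceiling for one GigE iSCSI path.
    gige_bits_per_sec = 10 ** 9
    # ~10 wire bits per payload byte (8b/10b encoding), ignoring TCP/iSCSI headers.
    gige_bytes_per_sec = gige_bits_per_sec / 10
    print(gige_bytes_per_sec / 10 ** 6)  # -> 100.0 (MB/s per active path)

    # Default 32 outstanding IOs per host, six hosts sharing the array:
    hosts, ios_per_host = 6, 32
    print(hosts * ios_per_host)  # -> 192 outstanding IOs the array must absorb
    ```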

    Do not use the "test connection rate" feature, as it effectively tests only cached throughput, which we are not really interested in anyway.

    Please award points for any helpful answer.

  • iSCSI SAN question

    Hello everyone,

    I just created my first iSCSI SAN solution with Open-E software.

    My situation: my VMs run on local disk.

    I created 4 iSCSI datastores:

    1 will be used to back up my VMs and is reachable from vSphere.

    The other 3 serve as backup targets for my 3 VMs. I connected to them through the iSCSI initiator inside the virtual machines and formatted them, and now they appear as additional drives in the virtual machines. I can see them in vSphere, but no longer under datastores - only under devices (because they are formatted as NTFS).

    Now the question:

    Is it possible to see what data is on the datastores that are formatted as NTFS when you are not connected to the virtual machine?

    If a virtual machine should crash, the backed-up data should still be safe on the iSCSI disk, but how can I get at it?

    Thanks in advance, onkt

    NTFS is only accessible from Windows. You can mount the backup iSCSI drive on another virtual machine.

  • My SAN disk will not work - what do I do?

    My SAN disk does not work.

    There could be a million reasons: maybe the disk is too large and is not recognized, it is faulty, it is not formatted, the card reader is faulty, the cable is defective, the integrated card reader is faulty, the drivers for the card reader are not installed properly...

    Give us more details about the PC and what happens: do you get as far as booting XP, are there any exclamation marks in Device Manager, any error messages, what is the size of the disk, etc.

  • Simple vSphere Replication NAS/SAN question

    We have two remote sites running vSphere, and our main office also runs vSphere.  We are thinking of using vSphere Replication to provide a DR solution for two of our critical applications at these remote sites.  One of these applications runs on a virtual machine at a remote site and is connected to a NetApp NAS.  The second application uses an EMC SAN at the other remote site.  Our main office uses an EMC SAN for our vSphere environment.

    I have a few questions about it.

    1. Can we use vSphere Replication at the site that uses the NetApp storage?  Will data on the NetApp storage be replicated to the EMC SAN disks at our main site?

    2. Do we only need 1 vSphere Replication appliance at the main office site, and then another at each remote site?  Or do we need a separate vSphere Replication appliance at headquarters for each remote site?

    Thank you.  Sorry for the newb questions.

    Hello

    1. vSphere Replication is independent of the storage device type. Changes are tracked at the virtual disk level. You can use different storage arrays at different sites.

    2. If the remote sites have their own vCenter Server, then you need a VR appliance at each remote site. If all sites are under the same vCenter Server, then you need one VR appliance. VRMS (the VR management server) is an extension of vCenter Server, and only one VRMS is deployed per vCenter Server. The combined VR appliance contains VRMS + an embedded DB for VRMS + a VR server. There is also an additional VR-server-only appliance, for the case where you have hosts within the same vCenter Server but at a larger distance, where replication traffic can benefit from going directly to a VR server close to the target datastore.

    Kind regards

    Martin

  • Unable to mount multiple targets simultaneously from a single iSCSI SAN

    Hi guys,

    I recently deployed FreeNAS 8.0.3 as a practice-lab iSCSI SAN on one of my HP ProLiant MicroServers.  The first target is a physical disk extent, since it is noted that device extents are faster than file extents.  I installed the software initiator, used dynamic discovery, and found it, and built a datastore on it in vCenter.  It was pretty slow going, but then it was also on a 100 Mb network, so I decided to buy a gigabit switch and a 240 GB SSD.  I added the SSD as another target.  However, I can't get both datastores mounted at the same time.  Am I missing something?  The two are listed under the iSCSI software initiator...

    Connected targets: 2

    Devices: 1

    Paths: 2

    When I actually look at the paths, I see two targets, but only one can be Active (I/O) while the other is merely Active.  To switch between datastores, I had to disable the currently Active (I/O) one and then re-enable it to fail over to the other.  I would like both targets mounted at the same time.  How can I do this?  Thanks for any help.

    Mark

    My apologies in advance if I misunderstand something in the problem description, but it seems that you have only 1 LUN (a block-level device) presented through two separate iSCSI targets (two separate IQN/IP combinations). You can't have more than one VMFS datastore on a single LUN, so I am a bit confused by your statement "I can't mount the two datastores at the same time"...

    Peter D

  • ESXi 4.1 or ESXi 5.0 with iSCSI SAN

    Hello

    I'm trying to establish an environment that is / will be entirely backed by an iSCSI SAN.  I know from a lot of reading that the best practice is to configure the ESXi host with:

    VMkernel port > vmnic > physical NIC > switch port > port on SAN (1:1 config - no vmnic teaming)

    Environment:

    ESXi 4.1 & 5.0

    Force10 switches

    EqualLogic iSCSI SAN (PS4000 & 6000)

    Here's my real question (playing the Devil's advocate):

    Why shouldn't I team 3-4 vmnics on one vSwitch with multiple VMkernel ports?

    Given my environment, can someone point me to a technical document that explains what will / could happen if I set up the ESXi environment that way?

    Thank you

    BDAboy22

    So, basically, we want the storage stack to take full responsibility for failover, instead of the network (or the network and storage arguing over who should resolve the failure, which could lead to longer session recovery times).   Do I understand that correctly?

    Yes, that's right, we want the vSphere storage stack to manage path selection and failover - and by connecting each vmknic directly to a physical vmnic we sort of "simulate" how FC cards work and also bypass the network failover mechanisms. As you probably know, you also need to bind the software iSCSI adapter to the two vmknics (in the GUI in 5.0, esxcli in 4.x).
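    The 1:1 binding described above can be pictured as a simple pairing (an illustrative sketch only, not an ESXi API; the vmk/vmnic names are hypothetical examples):

    ```python
    # One VMkernel port bound to exactly one physical NIC - no teaming.
    vmknics = ["vmk1", "vmk2"]
    vmnics = ["vmnic2", "vmnic3"]

    bindings = dict(zip(vmknics, vmnics))
    print(bindings)  # {'vmk1': 'vmnic2', 'vmk2': 'vmnic3'}
    ```

    With this layout the storage stack sees each path independently and handles failover itself, instead of vSwitch NIC teaming masking a dead path.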

  • iSCSI SAN Connectivity

    I'm kinda new to this, so yes, please be gentle.

    We are migrating some of our less critical servers to vSphere. Because the servers are at a colo, we'll install ESXi on our server and use colo-hosted storage via iSCSI SAN. The colo provided us with a block of IP addresses that we can use for SAN connectivity.

    SAN connectivity is on its own dedicated NIC (1 for testing purposes) and physical switch.

    Management network
    192.168.72.0/24

    SAN IP block
    172.26.11.0/26
    172.26.11.1 - GW

    The target IPs
    172.31.3.105
    172.31.3.109

    I created a virtual switch for iSCSI and tied a physical NIC to it. I then added the software iSCSI adapter, entered the target in dynamic discovery, and bound the NIC to the software iSCSI adapter.

    I then added a route for 172.31.3.0/24 via 172.26.11.1.

    When I rescan for new storage, I just get nothing. If I go back into the software adapter, the targets are now listed on the static discovery tab. The colo says their HDS isn't seeing any requests at all.

    So I built a Windows virtual machine on this host (stored on an Openfiler iSCSI datastore on the management network) and installed the Microsoft iSCSI initiator. Using this software, I am able to connect to the colo SAN network from inside the virtual machine.

    What am I missing? Why can't the host connect to the SAN network? Any help will be much appreciated.

    Bob

    http://pubs.VMware.com/vSphere-50/topic/com.VMware.vSphere.storage.doc_50/GUID-0D31125F-DC9D-475B-BC3D-A3E131251642.html

    (Physical network adapters must be on the same subnet as the iSCSI storage system they connect to.)

    / Rubeck
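    That rule matches the symptom above: the vmkernel port sits in 172.26.11.0/26 while the targets are in 172.31.3.0/24, so the bound software initiator will not use the routed path. A quick check with Python's `ipaddress` module (addresses taken from the question):

    ```python
    import ipaddress

    # SAN IP block assigned by the colo (where the vmkernel port lives).
    san_subnet = ipaddress.ip_network("172.26.11.0/26")
    # Target IPs from the question.
    targets = ["172.31.3.105", "172.31.3.109"]

    for t in targets:
        same_subnet = ipaddress.ip_address(t) in san_subnet
        print(t, "on vmkernel subnet:", same_subnet)  # both print False
    ```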

  • Making an iSCSI SAN accessible to a VM via the Windows iSCSI initiator

    I have a "total newbie" question that I hope can be answered quickly enough.

    I added a new Dell MD3000i SAN to my storage network in order to use this SAN exclusively for Windows virtual machines. I have (2) large vdisks of 4 TB and 7 TB each (yes, I REALLY need single large volumes) defined on the MD3000i. The (4) MD3000i ports are connected to my iSCSI VLANs and have the default host port IPs 192.168.130.101/102 and 192.168.131.101/102.

    I have an MD3220i installed in the network and working with my (2) ESXi 4.1 hosts (on the 192.168.230.101/102 and 192.168.231.101/102 subnets). I am quite familiar with how to make storage available to the host via the iSCSI initiator, but I don't know how to make storage accessible to virtual machines WITHOUT using the host to connect to the iSCSI SAN, create a datastore, and then add a new virtual disk to the virtual machine.

    Only the vmnics dedicated to the iSCSI initiator have physical links to the iSCSI VLANs (vSwitch01). The virtual machine network has (2) NICs connected to the "inside" network via vSwitch0.

    Any ideas on the best way to "get there from here"?

    Hello.

    You will need to create a virtual machine port group on the same vSwitch as your iSCSI port group.  Give the virtual machine a 2nd NIC and then assign it to the virtual machine port group you created.

    Good luck!

  • Live production iSCSI SAN upgrade (Dell MD3000i and PowerConnect & ESXi 4)

    Hi all

    For the moment, I have my Dell MD3000i iSCSI SAN directly connected to 2 x ESXi 4 hosts with no switch between them; all of the virtual machines live on the SAN and run on both hosts.

    If I want to put a switch between them and add one more iSCSI SAN, so that the two SANs can be accessed / shared by 3 ESXi 4 hosts in total, how should I approach it?

    Any idea or guideline would be greatly appreciated.

    Do I have to plan downtime, or can I re-cable and change the old IPs on the existing iSCSI SAN manually, one by one, without damaging/disturbing the operation of the virtual machines on the SAN? Or do I have to stop the virtual machines, terminate the iSCSI sessions manually, and then restart the hosts and SAN?

    Thank you

    AWT

    The standard setup on the MD3000i is for one iSCSI port on each controller to be on one subnet and one on each controller to be on another.

    There have been people running successfully on one subnet, and reported problems too.   I wish we were further along on our vSphere upgrade; I'm not yet at a point where I can test things like vMotion.

  • iSCSI SAN, ESX4 datastore sizing recommendations

    Hi all.  I am in the final stages of our vSphere build and I'm looking for best practices for sizing datastores/LUNs on an iSCSI SAN for several ESX4 hosts.

    What we have:

    Two HP c7000 blade enclosures with three BL460c blades in each.

    All servers will run ESX4 and use an iSCSI SAN (Software initiator).

    I'm not sure of the best way to configure these.  One large LUN holding one large datastore?  A few small LUNs, each holding a datastore?  One LUN/datastore per ESX host?  I do want to use vMotion, which could help narrow down the answer.

    Any help you experienced VMware people can give this noob is very much appreciated.

    -Dave

    Dave

    These are my golden rules.

    Mix virtual machines across datastores to optimize both storage efficiency and I/O efficiency - put low-I/O virtual machines together with high-I/O virtual machines on the same datastore.

    * 12-16 VMs per VMFS datastore maximum.

    * 16 VMDK files per VMFS datastore maximum.

    * One VMFS volume per LUN.

    * At least 15% to 20% of a VMFS datastore should be left as free space to accommodate things like virtual machine swap files and snapshots.

    Based on your needs - you will initially have 30 virtual machines, so you should look at 2 datastores to start - but the number of VMDK files or the I/O needs of a given virtual machine can change that.
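    Those golden rules translate directly into a quick sizing calculation (a minimal sketch; the 16-VM cap and 20% free-space figures are from the rules above, and the 500 GB LUN is a hypothetical example):

    ```python
    import math

    def datastores_needed(vm_count, max_vms_per_datastore=16):
        """Number of VMFS datastores under the 12-16 VMs per datastore rule."""
        return math.ceil(vm_count / max_vms_per_datastore)

    def usable_capacity_gb(lun_size_gb, free_fraction=0.20):
        """Capacity left for VMs after reserving free space for swap and snapshots."""
        return lun_size_gb * (1 - free_fraction)

    print(datastores_needed(30))    # -> 2, matching the 30-VM example above
    print(usable_capacity_gb(500))  # -> 400.0
    ```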

  • iSCSI SAN Solution

    I'm looking into buying an iSCSI SAN solution and seeking feedback on which vendors work best with vSphere. I have looked at EqualLogic and LeftHand and want input from people using these or other solutions.

    Not at all. It won't make any difference; I prefer blades, and LeftHand will work fine with either blades or rack servers.

    ----

  • iSCSI SAN blocking the ability to attach an RDM to two VMs?

    Hello

    We have an iSCSI SAN and I wanted to test clustering on it, but I can't get an RDM attached to more than one virtual machine at a time.

    I know that MSCS is not supported over iSCSI, but is it merely unsupported, or is it actually impossible to attach an RDM to more than one virtual machine at a time?

    Thank you!

    Kenneth

    The solution should work either way: with RDMs, or with a software initiator inside each cluster node VM.

    If you use RDMs, you must configure the SCSI controller for physical bus sharing.

    On the second node, do not add a new disk; instead, point it at the existing RDM disk (or disable RDM filtering in vCenter).

    André

  • Best practices for Exchange 2003 with VMware ESXi 3.5 and iSCSI SAN

    Hello guys,

    Here's the question: we have 1 physical Exchange 2003 server, 4 hosts, and 1 iSCSI SAN with 3 LUNs: 1 for data, 1 for VMware, and 1 for SQL. If we're going to virtualize it, I don't know where to put the Exchange data and logs. I don't think it's good practice to put the data and logs together, but I don't have another SAN. So, what can I do?

    Thank you.

    We have 813 mailboxes.

    I agree with cainics, start with an average size and go from there.  I know it's a production mail server and you can't exactly "play" with the settings because that takes time, but if you make the VM too big, you'd have nothing left for other virtual machines.

    I would go with 2 vCPUs and at least 4 GB of RAM, maybe 8 GB.  There must be sizing guidance for 813 Exchange mailboxes and X concurrent users that would give you an idea of how much RAM you'll need... 4 GB seems minimal to me, but 8 GB would probably be better.
