Connecting an iSCSI SAN to ESXi servers

Hi all

I am trying to work through the iSCSI SAN Config Guide and get our three ESXi servers connected to our iSCSI SAN, but I'm confused about a few points.

Background - Our production network is 172.16.10.x, our management network is 172.16.64.x and the SAN is on the 10.1.1.x network.

The Guide says that the service console and VMkernel need to access the iSCSI storage - but in ESXi, there is no service console. Our network administrator (who has used ESX 2) said to just set up two vSwitches and configure each of them for VMkernel access, one on the 10 network and the other on the management network. Every time I try this, I lose connectivity to the management console. It seems the VMkernel wants a default gateway set, and when I set it to 172.16.64.1, the SAN does not connect, but when I set it to 10.1.1.1, vCenter loses the connection. Our network administrator says he won't route 172.16.64.x traffic to the 10.1.1.x network, but I think we have to set it up this way.

Is there something I'm missing? There actually seem to be three issues. (1) When the Guide talks about the service console and the VMkernel, is it possible to distinguish between the two in ESXi? (2) If we set the VMkernel network to 172.16.64.x, can we create a second VMkernel on the 10.1.1.x network? And (3) should we just bite the bullet and set up routing between the VMkernel and the SAN?

Thank you. I know it was long and complicated, but that reflects my confusion! Any help would be appreciated.

The answer to the first question is no, there is no distinction between a console port and a VMkernel port in ESXi; network services such as management run on VMkernel port groups.

You should be able to create a second VMkernel interface, and you should not change the default gateway.  As you have noticed, you can have only one default gateway - that is why it's called "default".  Leave that alone, and you should have two VMkernel ports, one on the management network and the other on the storage network.  Since the VMkernel port is on the same segment as the storage, traffic goes out through it and not through the default gateway, so no routing needs to take place.
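A minimal sketch of that second interface on ESXi 5.x, assuming vSwitch1 is the vSwitch facing the SAN and 10.1.1.10 is a free address on the storage subnet (both are placeholders; on 4.x the esxcfg-vmknic command is the equivalent):

```shell
# Create a port group for storage traffic on the vSwitch facing the SAN
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1

# Add a VMkernel NIC on the storage subnet. Note that no gateway is set here:
# the host keeps its single default gateway on the management network.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.1.1.10 --netmask=255.255.255.0 --type=static
```

Because 10.1.1.0/24 is then directly connected, storage traffic leaves via vmk1 without ever touching the default gateway.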

-KjB

VMware vExpert

Tags: VMware

Similar Questions

  • FC to iSCSI SAN using ESXi 5.1

    Hi all

    We have 4 hosts with ESXi 5.1 installed and use vCenter to manage them with an Enterprise license. The hosts began life as 3.5 and we have upgraded them since then. They are due to be replaced in 6 months.

    A key element is that the VMkernel management port uses two physical NICs for redundancy.

    All servers are multipathed with dual FC HBAs on an FC fabric. We are replacing the FC SAN with an iSCSI-only SAN.

    My question is what is the best way to implement this? I created a vSwitch with two VMkernel ports on it. The vSwitch is set up with the two NICs kept entirely separate, one active and one failover per VMkernel port. It is all set up and working on all hosts.

    However, I was planning a dedicated iSCSI VLAN with just the SAN on it. This seems to be a problem, because I cannot put a specific default gateway on the port groups: they are VMkernel ports and will only use the default gateway I set on the management network, which will not work!

    vSwitch0

    Management

    vmk1: 192.168.200.102

    vSwitch6

    iSCSI2

    vmk3: 192.168.30.162

    iSCSI

    vmk2: 192.168.30.161

    Is there only a single routing table shared by all VMkernel ports? If so, how can I work around this? I'd rather not have iSCSI on my management network, but following the online guides I get this far and then cannot route from the iSCSI interfaces outside their own subnet. Is that normal?

    Any advice much appreciated!

    Thanks - Steve

    The highly recommended best practice is to configure a separate VLAN with an IP network used only for iSCSI, possibly even with dedicated layer 2 switches used only for iSCSI, depending on the type and quality of the switches.

    stevehoot wrote:

    Just to be 100% sure: what I have done is correct, and ESXi does not allow an iSCSI VMkernel port to have its own default gateway?

    It is not really that the storage network cannot have a default gateway, but rather that the ESXi operating system (the VMkernel) has a single IP stack with one routing table shared across the different functions such as management, vMotion, storage, and others.

    This means that you have one common default gateway per host, not one per function, and in general that "common" default gateway will be on the management network.
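    If traffic really must cross subnets, ESXi 5.1 and later can add per-network static routes as a partial workaround for the single default gateway. A sketch, with placeholder addresses:

    ```shell
    # Show the single routing table shared by all VMkernel ports
    esxcli network ip route ipv4 list

    # Add a static route to a remote storage subnet via a gateway on the
    # local storage segment (ESXi 5.1+; older builds use esxcfg-route -a)
    esxcli network ip route ipv4 add --network=192.168.40.0/24 --gateway=192.168.30.1
    ```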

  • Live iSCSI SAN upgrade on a production network (Dell MD3000i and PowerConnect & ESXi 4)

    Hi all

    At the moment, I have my Dell MD3000i iSCSI SAN directly connected to 2 ESXi 4 hosts with no switch between them; all of the virtual machines are on the SAN and running on both hosts.

    If I want to put a switch between them and add one more iSCSI SAN so that both SANs can be accessed/shared by 3 ESXi 4 hosts in total, how should I approach this?

    any idea or guideline would be greatly appreciated.

    Do I have to plan downtime, or can I change the old IPs on the existing iSCSI SAN and re-cable it manually one port at a time without damaging/altering the operation of the virtual machines running on the SAN? Or do I have to stop the virtual machines, terminate the iSCSI sessions manually, and then reconnect the hosts to the SAN?

    Thank you

    AWT

    The standard setup on the MD3000i is for one iSCSI port on each controller to be on one subnet and the other port on each controller to be on another.

    There have been people running successfully on a single subnet, but problems have been reported too.   I wish we were further along on our vSphere upgrade; I'm not yet at the point where I can test things like vMotion.

  • iSCSI SAN Connectivity

    I'm kinda new to this, so be gentle.

    We are migrating some of our less critical servers to vSphere. Because the servers are at a colo facility, we'll install ESXi on our server and use colo-hosted storage via an iSCSI SAN. The colo provided us with a block of IP addresses we can use for SAN connectivity.

    SAN connectivity is on its own dedicated NIC (1 for testing purposes) and physical switch.

    Management network
    192.168.72.0/24

    SAN IP block
    172.26.11.0/26
    172.26.11.1 - GW

    The target IPs
    172.31.3.105
    172.31.3.109

    I created a virtual switch for iSCSI and tied a physical NIC to it. I then added the software iSCSI adapter, entered the target under dynamic discovery, and bound the NIC to the software iSCSI adapter.

    I then added a route for 172.31.3.0/24 to go via 172.26.11.1.

    When I rescan for the new storage, I just get a blank. If I go back into the software adapter, the targets are now listed on the static discovery tab. The colo says their HDS is not seeing any requests at all.

    So I built a Windows virtual machine on this host (using an Openfiler iSCSI target on the management network) and installed the Microsoft iSCSI initiator. Using this software, I am able to connect to the colo SAN network from inside the virtual machine.

    What am I missing? Why can I not connect the host to the SAN network? Any help will be much appreciated.

    Bob

    http://pubs.VMware.com/vSphere-50/topic/com.VMware.vSphere.storage.doc_50/GUID-0D31125F-DC9D-475B-BC3D-A3E131251642.html

    (Physical network adapters must be on the same subnet as the iSCSI storage system that they connect to.)

    / Rubeck

  • ESXi 4.1 or ESXi 5.0 with iSCSI SAN

    Hello

    I'm trying to set up an environment whose backend is / will be entirely iSCSI SAN.  I know from a lot of reading that the best practice is to configure the ESXi host with:

    VMkernel port > vmnic > physical network adapter > switch port > port on SAN (1:1 config - no teaming of vmnics)

    Environment:

    ESXi 4.1 & 5.0

    Force10 switches

    EqualLogic iSCSI SAN (PS4000 & 6000)

    Here's my real question (playing the Devil's advocate):

    Why shouldn't I team 3-4 vmnics on a vSwitch with multiple VMkernel ports?

    Given my environment, can someone point me to a technical document that explains what will / could happen if I set up the ESXi environment this way?

    Thank you

    BDAboy22

    So, basically, we want the SAN stack to take full responsibility for failover, rather than leaving it ambiguous between the network and the SAN as to who should resolve the failure (which could lead to longer session recovery times).   Do I understand that correctly?

    Yes, that's right: we want the vSphere storage stack to manage path selection and failover - and by connecting each vmknic directly to a physical vmnic we sort of "simulate" FC HBAs and also bypass the network failover mechanisms. As you probably know, you also need to bind the software iSCSI adapter to the two vmknics (in the GUI in 5.0, via esxcli in 4.x).
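    For reference, that binding looks roughly like this from the command line (vmhba33 is a placeholder for whatever name the software iSCSI adapter has on your hosts):

    ```shell
    # ESXi 4.x: bind each VMkernel NIC to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic add -n vmk3 -d vmhba33
    esxcli swiscsi nic list -d vmhba33

    # ESXi 5.x equivalent (also available in the vSphere Client GUI)
    esxcli iscsi networkportal add --nic=vmk2 --adapter=vmhba33
    esxcli iscsi networkportal list --adapter=vmhba33
    ```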

  • How to safely reconnect an iSCSI SAN after a fresh ESXi 4u1 reinstall on USB?

    Hi all

    Can anyone here guide me on how to re-add the iSCSI SAN connection to the current ESXi 4 host?

    I don't think exporting the existing ESXi profile using vMA will help much, because the version number is different, so the exported ESXi profile is useless.

    Note: during the installation of ESXi 4u1 I unplugged the host's LAN cable.

    Thank you.

    Kind regards

    AWT

    Is host access configured on the MD3000i?

    Because the host will have a different IQN, you must reconfigure the access.

    André

  • Problem upgrading ESXi 3.5 u4 to 4.0 with existing VMFS partition on iSCSI SAN

    Hi all

    I've got 2 x Dell PowerEdge 2950 III directly connected to a Dell MD3000i iSCSI SAN, and today I had a hard time upgrading my production ESXi 3.5 u4 to ESXi 4.0.

    The first issue is that the upgrade failed miserably after following this thread: http://communities.vmware.com/thread/211413?tstart=0 (VMware communities: error on upgrading the host: this host is not compatible with this upgrade.)

    with the following error message: ERROR: unsupported boot disk

    The layout of boot device on the host computer does not support the upgrade.

    That's why I did a clean reinstall of ESXi 4 on top of ESXi 3.5 u4 and lost all the unique settings on my ESXi host. I am now stuck at mapping the iSCSI SAN partition, after following the guide at: http://support.dell.com/support/edocs/software/eslvmwre/AdditionalDocs/cnfigrtnguide/storage_guide.pdf

    See the following attachment.

    I wonder how people do their upgrades to ESXi 4.0 on production systems running ESXi 3.5 u4 and filled with hundreds of VMs? The reason I use 2 x ESXi servers connected to a single SAN server is that I can just start the virtual machines on the SAN from the other ESXi host -> that's fine and tested, but when I rebuild the failed server, it looks as if I need to delete all the data inside the shared partition that contains my VMs.

    So in conclusion: after one of the ESXi servers fails, would remapping the iSCSI storage eventually destroy all data in the partition?

    CMIIW, any idea and suggestion would be gladly appreciated

    Kind regards

    AWT

    On the Configuration -> Storage Adapters tab, you should be able to rescan and find the existing VMFS volumes.  You shouldn't need to add storage.

    Andy

  • How to connect to iSCSI SAN without compromising security

    Hello:

    How do we enable server OSes (VMs or physical hosts) to connect to and mount iSCSI LUNs without compromising the security of our ESX hosts?  We have a few Microsoft servers that need to use iSCSI initiators to mount LUNs for MSCS.   We cannot use the ESX initiators because VMware doesn't support iSCSI virtual storage with MSCS.  We have already read all the documentation and spoken with VMware support, so we know that our only option is to use the iSCSI initiators in the Microsoft servers to connect to the LUNs.

    Our concern is security.  If we let the servers use their iSCSI initiators to connect to the SAN, won't they also have access to our service consoles and VMkernels via the iSCSI network?  ESX requires that you have a service console port and a VMkernel port on the iSCSI network for each ESX box that you want to use the ESX initiator on.  We struggle to understand how to connect any machine (virtual or physical) to the iSCSI network to mount LUNs without exposing our service consoles and VMkernels.  I know the best practice is to keep VMs off this network for this exact reason, but of course many organizations also have physical servers (UNIX, Windows) that need to access the iSCSI SAN.  How do people handle this?  How much of a security problem is it?  Is there a way to secure the service console and VMkernel ports while still allowing non-ESX hosts access to the SAN?  I know that many of you face this exact situation in your organizations, so please help.  Surely it can't be assumed that nobody uses their iSCSI SAN for anything except ESX hosts.  Thank you very much.

    James

    Hello

    Check out this blog

    Using a firewall is certainly a step in the right direction. If you can't have separate iSCSI networks, then you will need to isolate the non-ESX/VCB iSCSI nodes using other mechanisms. I would certainly opt for firewalls, or reduce the redundancy to just 2 NICs per network instead of 4 on a single network.

    Does anyone have any other suggestions? Surely many ESX users share their iSCSI SAN with a lot of different systems and operating systems. Thanks again.

    They do, but they do not secure their iSCSI networks between their ESX VMs and other physical systems. You have asked a very important question, which is how to connect to an iSCSI SAN without compromising security. The options currently are:

    1. Physically isolate

    2. Isolate using firewall

    Given that ESX speaks iSCSI in clear text and does not support IPsec for iSCSI, you have very limited options available. The firewall you use and the iSCSI load you send through it will determine whether there is any added latency. Yes, it's an extra cost, but so is an independent network of switches/ports/etc.

    Best regards
    Edward L. Haletky
    VMware communities user moderator, VMware vExpert 2009
    ====
    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro articles - Top virtualization security links - Virtualization Security Round Table Podcast

  • Problem with ESXi 3.5 host or our iSCSI SAN

    Hello

    Last week, twice, we came into work and noticed that some virtual machines on a host had somehow bombed and dropped their connections, or stopped working and were powered off.  It happens with multiple errors, but we are not sure if it's the host itself or the iSCSI SAN it is connected to, a NETGEAR ReadyNAS.  All the VM vmdk and vmx files are located on the SAN storage targets.  Could someone help determine what the issue may be? Thank you.

    Welcome to the forums

    I edited your images and deleted the business credentials. That is a security risk you should not expose.

    Take a look at the following.

    http://communities.VMware.com/message/1333118#1333118

  • How to attach an iSCSI SAN to an ESXi 3.5 host?

    Hello guys,

    We have a new 4.5 TB iSCSI SAN already set up with 3 logical partitions: 2 TB, 2 TB, and 500 GB for the last one. From Windows, I can see the 2 TB partition, but from the ESXi host I can't. I have tried to add storage / disk / LUN, but nothing. Did I miss something?

    Thank you.

    Have you given access to all of your hosts? Judging by your picture, you have not.

    You must give the same access to each host's initiator, not only vmhost1.

    On an HP MSA SAN you can also make it accessible to everyone, though I would not recommend that.

    Currently, this is not the case so keep it like that.

    Please consider marking my answer as helpful or correct.

  • How to get iSCSI SAN connected to ESX when the SAN is on a different subnet?

    Hi all

    I have two ESX machines which are directly connected to a SAN that is on a different subnet from the other network cards.  The other network cards are on 10.x.x.x and the SAN is on 172.16.x.x; however, whenever I try to add the iSCSI storage, the system never finds the LUNs.  I set the vmkernel to 172.16.x.x but it still does not find them.  I know the LUNs are configured correctly; ESX is just having problems with the other subnet.

    You need to enable the Service Console to see the iSCSI SAN, too. This is because you are using the software initiator. For example, you can add a second SC to the vSwitch that your VMkernel port is connected to and put it on the same subnet.
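    On classic ESX, a sketch of that second service console from the command line (the port group name and IP are example values on the 172.16.x.x storage subnet):

    ```shell
    # Create a port group on the vSwitch that already carries the VMkernel port
    esxcfg-vswitch -A "Service Console 2" vSwitch1

    # Add a second service console interface on the iSCSI subnet
    esxcfg-vswif --add vswif1 --portgroup "Service Console 2" --ip=172.16.1.10 --netmask=255.255.255.0
    ```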

    Kind regards

    Gerrit Lehr

    If you have found this or other information useful, please consider awarding points for 'Correct' or 'Helpful'.

  • Best practices for Exchange 2003 with VMware ESXi 3.5 and iSCSI SAN

    Hello guys,

    Here's the question: we have 1 physical Exchange 2003 server, 4 hosts, and 1 iSCSI SAN with 3 LUNs: 1 for data, 1 for VMware, and 1 for SQL. If we're going to virtualize it, I don't know where to put the Exchange data and logs. I don't think it's good practice to put the data together, but I don't have another SAN. So, what can I do?

    Thank you.

    We have 813 mailboxes.

    I agree with cainics, start with an average size and go from there.  I know it's a production mail server and you can't exactly 'play' with the settings because that takes time, but if you make the VM too big, you would have nothing left for other virtual machines.

    I would go with 2 vCPUs and at least 4 GB of RAM, maybe 8 GB.  There must be sizing guidance for 813 Exchange mailboxes and X concurrent users that you can use to plan your environment and get an idea of the amount of RAM needed... 4 GB seems minimal to me, but 8 GB would probably be better.

  • iSCSI storage problem ESXi 5 u1

    Hello

    We have six ESXi 5 u1 servers connected to a Dell EqualLogic 65xx series storage unit. On the ESXi systems we use the software iSCSI adapter and the PSP is set to Round Robin. We are investigating an issue where, randomly throughout the day, we receive the following warnings:

    VCenter:

    Failed to write command to write-quiesced partition naa.6090a08890b27eeab8cee499fb01a0f6:1

    VMKernel:

    WARNING: ScsiDeviceIO: 3075: Failed write command to write-quiesced partition naa.6090a08890b27eeab8cee499fb01a0f6:1

    Fil3: 13359: Max retries (10) exceeded for caller Fil3_FileIO (status 'IO was aborted by VMFS via a virt-reset on the device')

    BC: 1858: Failed to write (uncached) object '.iormstats.sf': Maximum kernel-level retries exceeded

    Also, we see a lot of messages indicating a loss of connectivity to the iSCSI volumes:

    Lost access to volume 4f58ac5f-ef525322-XXXXXXXXXXXXXXX (LUN_NAME) due to connectivity issues. Recovery attempt is in progress and the outcome will be
    reported shortly.

    This event is generated by all six hosts at random intervals and for different iSCSI target LUNs. At the same time, we notice a very high CPU usage spike to 100% on the host that generates the message.

    One thing we found through vmkping tests is that the ESXi hosts are configured for jumbo frames (9000 MTU on the vmk, vSwitch, and adapters), but the storage network does not allow jumbo frames.

    Could this be an indication of severe IP fragmentation on iSCSI? How can we measure this?

    conraddel wrote:

    One thing we found through vmkping tests is that the ESXi hosts are configured for jumbo frames (9000 MTU on the vmk, vSwitch, and adapters), but the storage network does not allow jumbo frames.

    Could this be an indication of severe IP fragmentation on iSCSI? How can we measure this?

    Do you know if the SAN side is configured for jumbo frames? If not, then it should probably not be a problem, given that the TCP connection will negotiate the MSS between the two ends and your hosts should fall back to default-size frames.

    The worst situation would be if both the ESXi hosts and the SAN support jumbo frames, but not the switches between them. Because the switches are 'invisible', problems don't appear until you actually start sending larger frames, and those will be silently dropped by the switches, i.e. there is no IP fragmentation.

    Have you also checked with vmkping what kind of end-to-end connectivity you have? The -d and -s options are very important to get right: http://rickardnobel.se/archives/992
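    A sketch of that test, assuming a target portal at 192.168.30.10 (substitute your own address): -d sets don't-fragment and -s the ICMP payload size, which for a 9000-byte MTU is 9000 minus 28 bytes of IP/ICMP headers.

    ```shell
    # Succeeds only if every hop honours the 9000-byte MTU
    vmkping -d -s 8972 192.168.30.10

    # Should always succeed on a standard 1500-byte MTU path
    vmkping -d -s 1472 192.168.30.10
    ```

    If the 8972-byte ping fails while the 1472-byte one succeeds, something in the path is silently dropping jumbo frames.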

  • Upgrading ESXi 4.1 with software iSCSI storage to ESXi 5.5

    I intend to upgrade ESXi 4.1 hosts, which have software iSCSI storage attached, to ESXi 5.5.

    I've already updated all my hosts that were on SAN storage with no problems.

    I would like to know whether there's anything I should take care of before I update the iSCSI-connected host to ESXi 5.5; for the hosts with SAN-attached storage I removed the SAN cables before upgrading the host.

    Also, if there are any known problems to watch for during the upgrade from ESXi 4.1 to ESXi 5.5 with software iSCSI storage, please let me know.

    Is there a different upgrade process? In ESXi 4.1 I do not see the binding of VMkernel ports to the software iSCSI adapter, but I know that in ESXi 5.5 we have to do this. Or do I just launch a standard upgrade procedure via Update Manager and everything will be taken care of?

    Thanks in advance

    With ESXi prior to version 5, port binding had to be done via the command line. However, if it was configured correctly, you should be able to upgrade to ESXi 5.5 (assuming the host has not been upgraded all the way from version 3). BTW, the last time I disconnected a host from storage for an upgrade was with ESX 3.0.

    André

  • Making an iSCSI SAN accessible to a VM via the Windows iSCSI initiator

    I have a "total newbie" question that I hope can be answered quickly enough.

    I added a new Dell MD3000i SAN to my storage network in order to use this SAN exclusively for Windows virtual machines. I have defined (2) large vdisks of 4 TB and 7 TB (yes, I REALLY need single large volumes) on the MD3000i. The MD3000i's (4) ports are connected to my iSCSI VLANs and have the default host port IDs 192.168.130.101/102 and 192.168.131.101/102.

    I have an MD3220i installed in the network and working with (2) ESXi 4.1 hosts (on the 192.168.230.101/102 and 192.168.231.101/102 subnets). I am quite familiar with how to make the storage available to the host via the iSCSI initiator, but I do not know how to make storage accessible to virtual machines WITHOUT using the host to connect to the iSCSI SAN, create a datastore, and then add a new virtual disk to the virtual machine.

    Only the vmnics dedicated to the iSCSI initiator have physical links to the iSCSI VLANs (vSwitch01). Two other network adapters are connected to the "inside" network via vSwitch0.

    Any ideas on the best way to "get there from here"?

    Hello.

    You will need to create a virtual machine port group on the same vSwitch where your iSCSI port groups were created.  Give the virtual machine a second NIC and then assign it to the virtual machine port group you just created.
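    A sketch of the port group side from the CLI (the port group name and VLAN ID are assumptions based on the setup described; the second vNIC itself is added through the vSphere Client and then addressed inside the guest on the 192.168.130.x/131.x networks):

    ```shell
    # Create a VM port group on the vSwitch carrying the iSCSI VLANs
    esxcli network vswitch standard portgroup add --portgroup-name=VM-iSCSI --vswitch-name=vSwitch01

    # Tag it with the iSCSI VLAN if the physical uplinks are trunked
    esxcli network vswitch standard portgroup set --portgroup-name=VM-iSCSI --vlan-id=130
    ```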

    Good luck!
