EqualLogic question

I am preparing a physical server for conversion to a VMware VM on ESXi 4.1.  It is a file server, and its data disks currently sit on a Fibre Channel SAN.  My plan is to move the 3 TB of data onto an iSCSI SAN volume before I virtualize the machine, and then use the iSCSI initiator to access the 3 TB volume.  To get the 3 TB of files over, I was thinking of using Windows 2003 dynamic disks to create a mirrored volume, so I can simply mirror the FC SAN disk onto the iSCSI SAN disk, then break the mirror and carry on with the mirrored copy located on the iSCSI SAN.  All of this relies on converting the disks to "dynamic disks" in Windows.  My big question: will a dynamic disk cause problems for VMware or for my EqualLogic SAN?

I (and others) have found Robocopy to behave oddly on anything newer than Windows Server 2003.  Since it's raw data, a simple restore from a tape backup is perhaps more appropriate.  You would get pretty decent backup speeds down to temporary disk (or tape) and then do one restore to the other target.  That way it can do its consistency checks and preserve permissions, etc.
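If you do end up copying at the file level instead of restoring from backup, a minimal robocopy sketch along these lines is commonly used; the drive letters and log path below are placeholders, not from the original post:

rem Mirror the data from the FC volume (F:) to the new iSCSI volume (I:),
rem preserving NTFS security and logging the run:
robocopy F:\ I:\ /MIR /COPYALL /R:1 /W:1 /LOG:C:\Temp\eql-migrate.log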

Tags: VMware

Similar Questions

  • EqualLogic SAN question

    I'm working on implementing a virtual infrastructure.  I did cross-training for vSphere 4 months ago, so I've forgotten a lot.  Also, I'm not a switching or SAN expert at all; I am a plain and simple Windows administrator.  In any case, here is where I am.  I have two ESXi hosts, each with 2 NICs that have 4 Gigabit ports on them.  They are Dell PowerEdge R710s.  We have an EqualLogic 6000-series SAN (I forget the exact model).  It has 2 NICs with 4 Gigabit ports each.  All 8 of its ports are connected to 2 stacked PowerConnect 6224 switches for redundancy.  The 2 ESXi hosts have three virtual switches: 1 for management, 1 for IP traffic to clients, and 1 for iSCSI traffic.  Each virtual switch has redundant connections to each network adapter.  The iSCSI virtual switch has 4 of the 8 ports assigned to it (it just seemed to me that iSCSI would need the most; I may be wrong).  For now, the management and client-IP NICs are attached to a single switch (a PowerConnect 6248); this will change in the future.

    Now, my question relates to the EqualLogic SAN.  It seems that only one of its NICs can be active at a time, which gives us 4 Ethernet ports.  My boss wants all the ports in a link aggregation group configuration.  I think he is confused; my understanding was that link aggregation is a way to connect switches together.  I think he is actually talking about NIC teaming, but I don't see any way to do that on the SAN.  Each port on the NIC receives its own IP address, so I'm quite confused.  Are all connections on the SAN shared so as to increase bandwidth?  The SAN also has a management IP address; that's the address I pointed the iSCSI initiator to.  As I said, I'm not a switch or SAN guy.  Any help would be greatly appreciated.

    You are both partly right, really.  LAG cannot be used with ESX, but you can configure load balancing so that it uses all the paths.  You should first update your hosts to the latest ESX build, since there are some important fixes available for use with the EqualLogic iSCSI initiator.
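    As a rough sketch of the load-balancing side (not from the original thread): on ESXi you can make Round Robin the default path selection policy for EqualLogic devices, and the EqualLogic Multipathing Extension Module adds its own routed PSP on top of that.  The syntax below is the ESXi 5.x form; 4.1 uses the older "esxcli nmp" namespace, so adjust for the release you are on:

    # Claim EqualLogic devices with Round Robin by default (ESXi 5.x syntax):
    esxcli storage nmp satp set --satp VMW_SATP_EQL --default-psp VMW_PSP_RR
    # Check which PSP a given device ended up with (device ID is a placeholder):
    esxcli storage nmp device list -d <naa.id>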

    There are a lot of details on this forum in this and this thread.

    Please give points for any helpful answer.

  • Question about Equallogic snapshots.

    Hello forum

    I could not confirm this, but do snapshots ever need to commit anything back to the base volume?

    Let's say I have 10 snapshots and delete the oldest one. Will there be a load of I/O to commit that snapshot's changes?

    Also, if someone could point me to any kind of in-depth explanation of this snapshot technique, I would be interested in reading it.

    Thank you!

    No.  The current data is already on the base volume, so there is nothing to commit.   Only if you completely restore a volume from a snapshot will there be writes to the base volume.

    From the EqualLogic Support site KB:

    Solution title: How snapshot reserve space is allocated and used

    Solution details: Snapshot Reserve Allocation and Use

    -------------------------------------

    In a PS Series group, snapshots can protect volume data against errors, viruses, or database corruption. A snapshot represents the contents of a volume at the time you created the snapshot. Creating a snapshot does not disrupt access to the volume, and the snapshot is instantly available to authorized iSCSI initiators.

    Before you can create snapshots of a volume, you must allocate space (called snapshot reserve) for snapshots. You set the snapshot reserve value when you create or modify a volume. Snapshot reserve is drawn from the same pool as the volume data.

    When snapshot data consume the entire snapshot reserve, the group either deletes the oldest snapshots to free up space for new snapshots or sets the volume and its snapshots offline, according to the snapshot space recovery policy you selected.

    The functionality used to create snapshots is called hybrid allocate-on-write.

    Page Sharing and Snapshot Operation

    -----------------------------------

    A PS Series group organizes physical storage into logical segments called pages. Each page is composed of several logical blocks. This is similar to the way file systems combine areas of physical disk into 'clusters' or 'chunks'. Each page has a reference count, which records the number of volumes and snapshots that share the page.

    When you create a volume, the group creates an internal table, called the volume table, which contains pointers to the pages the volume uses. When you create a snapshot of a volume, the group creates a snapshot table by making a copy of the volume table, which is usually an in-memory operation that the group runs in a transactional manner. For each volume page in use, the group increases the reference count to indicate that the volume and the snapshot share the page. Because the group does not move or copy data, and does not allocate new pages, snapshots are fast and efficient.

    Snapshot reserve stores the differences between the volume data and the snapshot data (in addition to the differences between the data of multiple snapshots). When you first create a snapshot, the page tables for the volume and the snapshot are identical copies, and the snapshot consumes no snapshot reserve. An application read of the same logical block from the volume or from the snapshot returns the same data, because the application is reading the same page. However, if you write to a page that a volume and a snapshot share, snapshot reserve is consumed.

    Here is a simplified example of a snapshot operation. In general, no additional I/O operations are needed to manage the volume or snapshot data. However, other internal operations can occur due to virtualization and data balancing on PS Series arrays.

    If an application performs an 8 KB write to a volume that has a snapshot, the group:

    1. Determines which page is modified by the write operation.

    2. If the page is not shared, writes the data to the page.

    3. If the page is shared:

    (a) Allocates a new page of disk space and reduces the volume's snapshot reserve by a single page.

    (b) Updates the volume page table to point to the newly allocated page.

    (c) Marks the newly allocated page as holding the new volume data, and references the original page for the unchanged data.

    (d) Writes the data to the new page.

    When the write is complete, if you read the data from the volume, you access the new page and the new data. However, if you read the same logical block from the snapshot, you get the original data, because the snapshot still points to the original page. Similarly, if you set a snapshot online and write to the snapshot, the hybrid allocate-on-write functionality protects the original volume data by allocating a new page for the new snapshot data.

    Only the first write to a shared volume (or snapshot) page consumes additional snapshot reserve. Each subsequent write is treated like a write to a non-shared page, because the original data is already protected.

    Functionality similar to hybrid allocate-on-write is used in cloning operations. However, unlike when you create a snapshot, cloning a volume immediately consumes additional group space. If a clone is moved to another pool, the data is copied during the pool move operation.

    Restoring a Volume from a snapshot

    ----------------------------------

    Because of the page table layout, restoring a volume from a snapshot is very quick. First, the group automatically creates a snapshot of the volume by copying the volume table to a new snapshot table. Then the group swaps the page tables of the volume and the snapshot you selected for the restore operation. No additional space is required, and no data is moved.

    Deletion of Volumes and Snapshots

    ------------------------------

    Because volumes and snapshots share pages, if you delete a volume, you automatically delete all the snapshots associated with the volume.

    You can manually delete snapshots (for example, if you no longer need the snapshot data). In addition, the group can delete snapshots automatically in the following situations:

    1. Snapshot reserve is exhausted. If the snapshot data consume the snapshot reserve, the group either deletes the oldest snapshots to free up space for new snapshots or sets the volume and snapshots offline, according to the snapshot space recovery policy you have chosen.

    2. The maximum number of snapshots created from a schedule is reached. If you set up a schedule for creating snapshots, you can specify the maximum number of snapshots you want to keep. Once the schedule has created the maximum number of snapshots, the group deletes the oldest scheduled snapshot to allow the schedule to create a new one.

    Snapshots are deleted by a background task. The group walks the snapshot page table and decrements the reference count on every shared page. Any page whose reference count reaches zero is released to free space. Pages with a non-zero reference count are not released, because they are still shared with the volume or with other snapshots.

    Because snapshots can be automatically deleted, if you want to keep the data in a snapshot, you can clone the snapshot.

    Understanding Snapshot Reserve Usage

    ------------------------------------

    Snapshot reserve is consumed only when you write to a shared volume or snapshot page. However, it is difficult to correlate the amount of data written to a volume (or snapshot) with the amount of snapshot reserve consumed as a result, especially if you have multiple snapshots.

    Because pages consist of several logical blocks, the I/O size and its distribution across the volume affect snapshot reserve usage in addition to overall I/O performance.

    For example, doing many writes to a narrow range of logical blocks in a volume consumes a relatively small amount of snapshot reserve. This is because writing the same logical block more than once does not require additional snapshot reserve.

    However, doing random writes across a wide range of logical blocks in a volume can consume a large amount of snapshot reserve, because many more pages are affected.

    In general, snapshot reserve usage depends on the following:

    1. The number of writes that occur to the volume (or snapshot) while at least one snapshot exists. In general, more writes tend to use more snapshot reserve, although multiple writes to the same logical block do not require additional space.

    2. The range of logical blocks over which the writes occur. Writes across a wide range of logical blocks tend to use more snapshot reserve than writes to a narrow range, because more of the writes land on different pages.

    3. The number of snapshots of the volume and the timing of the write operations. The more snapshots you create, the more snapshot reserve is needed, unless there are few writes to the volume or snapshots.

    4. The age of the snapshots. Older snapshots tend to consume more snapshot reserve than recent ones, because the group must retain the original data for a longer time.

    Sizing the Snapshot Reserve

    -----------------------

    You cannot create snapshots until you reserve snapshot space. Snapshot reserve is set as a percentage of the volume reserve (the space allocated for the volume).

    When you create a volume, by default, the volume uses the group-wide snapshot reserve setting. (The group-wide default setting is 100%. You can change this default value.) You can change the snapshot reserve setting when you create a volume or later, even if the volume is in use.
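    For illustration only, the same setting can be applied from the Group Manager CLI. The keyword below is from memory and is an assumption on my part, so verify it against the CLI Reference for your firmware release (the volume name and percentage are placeholders):

    volume select myvol snap-reserve 100

    The value is the reserve as a percentage of the volume reserve, the same number you would enter in the GUI.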

    Although the snapshot reserve is not used until volume or snapshot writes occur, it immediately consumes free pool space. For example, if you create a fully allocated (not thin-provisioned) 200 GB volume and specify a 50% snapshot reserve, free pool space is immediately reduced by 300 GB (200 GB for the volume reserve and 100 GB for the snapshot reserve), even if no pages are in use yet.

    Therefore, before you create a volume, you should consider how much snapshot reserve, if any, to assign to the volume. Set the snapshot reserve to zero (0) if you do not want to create snapshots.

    The optimal size of the snapshot reserve depends on the amount and type of changes made to the volume and on the number of snapshots you want to keep.

    For example, if you set the snapshot reserve to 100% and then create a snapshot, you can write to every byte of the volume without running out of snapshot reserve. However, if you create another snapshot and then write to every byte of the volume, the first snapshot is deleted to free space for the new snapshot. If you set the snapshot reserve to 200%, there would be sufficient snapshot reserve for both snapshots.

    A very conservative snapshot reserve sizing strategy is to set the snapshot reserve value to 100% times the number of snapshots you want to keep (for example, 500% to keep five snapshots). This guarantees that you keep at least that number of snapshots, regardless of the number of writes to the volume. However, this strategy generally allocates an excessive amount of snapshot reserve, because you rarely overwrite all the data in a volume during the lifetime of a snapshot.

    The best way to size the snapshot reserve is to assign an initial value and then watch how many snapshots you can keep over a specified period of time under a normal workload. If you use schedules to create snapshots, allow the schedule to run for several days.

    To get an initial value for a volume's snapshot reserve, estimate the amount and type of writes to the volume and the number of snapshots you want to keep. For example:

    - If you expect few writes, or writes that are concentrated in a narrow range of logical blocks, and you want to keep only a few snapshots, start with a value of 30%.

    - If you expect many writes, or writes that are random across a wide range of logical blocks, and you want to keep more than a few snapshots, start with a value of 100%.

    If snapshots are deleted before you reach the desired number of snapshots, increase the snapshot reserve percentage. If you reach the desired number of snapshots without consuming much of the free snapshot reserve, decrease the snapshot reserve percentage. Continue to monitor snapshot reserve usage and make adjustments as needed.

    How Thin Provisioning Affects Snapshot Reserve

    ----------------------------------------------

    The snapshot reserve is based on a percentage of the volume reserve (the space allocated to a volume). For a fully provisioned volume, the volume reserve equals the reported volume size. For a thin-provisioned volume, however, the volume reserve is initially much smaller than the reported size (the default minimum volume reserve is 10% of the reported size) and grows as writes to the volume occur.

    If you change a thin-provisioned volume to a fully provisioned volume, the amount of snapshot reserve increases automatically, because the volume reserve increases to the reported volume size.

    If you change a fully provisioned volume to thin-provisioned, the amount of snapshot reserve decreases automatically, because the volume reserve decreases. However, if the resulting snapshot reserve would be too small to store all the existing snapshots, the group automatically increases the snapshot reserve percentage to a value that preserves all existing snapshots.

    Reducing Snapshot Reserve Usage

    -------------------------------

    Over time, you can reduce snapshot reserve usage by periodically (for example, once a month) defragmenting databases and file systems. Defragmentation operations try to consolidate file segments in a volume and, consequently, reduce the range of logical block addresses used by the volume's pages.

    Defragmentation reads data from one location and then writes it to a new location. Snapshot reserve usage therefore increases during and immediately after defragmentation, because the existing snapshots will use more than the usual amount of snapshot reserve. However, snapshots created after the defragmentation operation should use less snapshot reserve, because the data on the volume is more contiguous.

    When a volume is highly fragmented, the potential reduction in snapshot reserve usage can be dramatic, especially after you remove the large snapshots taken before the defragmentation. Defragmentation can also reduce the I/O load on the group, because contiguous data makes I/O operations more efficient, improving overall I/O performance.

    Recent defragmenters are good at reducing fragmentation without being overly thorough. Some defragmenters also try to consolidate inactive data, further reducing the likelihood of changes to shared pages. However, do not defragment too frequently.

    Sector alignment can also affect snapshot space usage, especially on larger volumes. File systems must be correctly aligned to sector boundaries. This is described in the technical reports for VMware and Windows environments.
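    As a concrete illustration of the alignment point: on older Windows guests that do not align partitions automatically, the partition is usually created on a 1 MB boundary by hand. A hedged diskpart sketch, where the disk number, label, and 64 KB allocation unit are placeholders to adjust for your workload:

    rem Run diskpart from an elevated prompt, then:
    select disk 2
    create partition primary align=1024
    format fs=ntfs unit=64k quick label="EQL-DATA"
    assign letter=E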

  • Dell EqualLogic Multipathing Extension Module v1.4 missing setup.pl

    The new v1.4 Dell EqualLogic Multipathing Extension Module is missing the setup.pl script.

    Hello

    Thank you.  A new package is being created.  In the meantime, you can try the v1.3 setup.pl, or use alternative methods to install it.
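    As one of those alternative methods, the MEM offline bundle can usually be installed directly with esxcli instead of setup.pl. A hedged sketch; the bundle path and filename are placeholders, and the host should be in maintenance mode first:

    # Copy the offline bundle to a datastore the host can see, then:
    esxcli software vib install -d /vmfs/volumes/datastore1/dell-eql-mem-bundle.zip
    # After a reboot, confirm the EQL modules are listed:
    esxcli software vib list | grep -i eql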

    Feel free to open a support case on this as well.   They can add you to the issue that is open with engineering.

    Kind regards

    Don

  • EqualLogic compatibility

    Maybe this isn't the best place for this, but I wasn't sure whether the storage or the EqualLogic forum would be able to answer this question. The latest EqualLogic compatibility matrix for the PS4100XV and PowerConnect 6224 says to use firmware 3.3.7.2 for the switches, but I can't find it.

    That version (3.3.7.2) is no longer available and has been replaced by 3.3.7.3, 3.3.8.2, and the newer v3.3.9.1. The most recent version, v3.3.9.1, is recommended for this switch.

    http://www.Dell.com/support/drivers/us/en/rc1009777/DriverDetails/product/PowerConnect-6224?driverId=MMXWW&osCode=NAA&FILEID=3336417050&LanguageCode=en&CategoryID=NI

    Note: further down the page is a drop-down list to see the previous versions.

    Since the version listed in the compatibility matrix is no longer available, you can use any version newer than the specified one; it should continue to work as tested and will be supported.

    -joe

  • Clean shutdown for EqualLogic PS6100 without stopping the host/iSCSI initiator

    I already know the procedure for shutting down EqualLogic PS storage, but I have a question about the procedure as well.

    Do you really need to stop the iSCSI initiator or the host connected to the storage before you shut down the storage controller?

    We are moving the EqualLogic box from one rack to another, and I just want to shut down the EqualLogic without stopping the hosts.

    Hello

    Are the hosts connected to other storage that you want to keep up while you move the EQL array?  If you shut down the storage while they are still connected, it is similar to pulling a hard drive while it is running.  Most of the time nothing bad will happen, but any writes that did not reach the storage are lost.   For applications such as SQL or Exchange that is not something you want.

    If you need to keep the servers up, stop all applications accessing the volumes, then disconnect all EQL volumes first, and then shut down the EQL array.  That way any pending writes will have been serviced.
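    For what it's worth, the array-side halt itself is done from a CLI session to the group. A minimal sketch, assuming SSH or serial access as grpadmin; confirm the procedure against the Group Administration guide for your firmware before relying on it:

    # After applications are stopped and every host has logged out of the volumes:
    ssh grpadmin@<group-or-member-IP>
    > shutdown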

    Kind regards

    Don

  • Link aggregation for EqualLogic

    A few questions on EqualLogic and link aggregation:

    How many physical cables should I use to connect my EqualLogic iSCSI array to my physical Cisco Nexus 7K switch? 2 or 4?

    What kind of link aggregation can I use: LACP?

    Thank you!

    Hello

    EqualLogic doesn't support teaming/bonding/LACP, etc...   You must configure MPIO on the hosts instead.

    Kind regards

  • EqualLogic PS4000 discs

    Hello

    We have an EqualLogic PS4000 SAN for our Veeam backups and the warranty is over. My problem is that a drive has failed and needs to be replaced. The thing is, we have a 'spare' PS4000 that we don't use. The question is whether it is possible to use the disks from the spare SAN in the one we use for backups? Has anyone tried this before?

    Thank you

    Magnus

    Hello Magnus,

    Assuming that the units have identical disk types AND the other array has been reset, yes, you could swap the drives.   If the other array is still configured, the RAID information on the disk will confuse the production array into thinking there is a failed logical unit, so it won't work as a spare.

    Kind regards

  • Dell ASEAN free EqualLogic in-depth video Masterclass series available to view

    You can register here for the 12-part, very in-depth masterclass on EqualLogic.

    Join us for a deep-dive technical session on how to unlock the full potential of the Dell EqualLogic storage solution.

    The purpose of this video Master Class series is to help you gain a deeper understanding of the advanced features of the EqualLogic systems, which, in turn, should help you maximize your investment in this excellent system.

    The Masterclass series is structured to cover all the important aspects of the EqualLogic system. You can also choose to view selected topics, tailored to your specific area of interest.

    We know this is not a specific question, but we felt the content would be very useful for people looking on this forum; there are 12 videos, each 20 to 40 minutes long, that could answer a lot of the questions that come up here. We hope you find this useful.

  • Updating firmware on an EqualLogic SAN in a cluster

    I'm new to EqualLogic and need to update the firmware on a PS6000 SAN that is in a cluster with two members; the firmware needs to go from v6.0.7 to v7.0.5.

    My question: I was told that I need to evacuate the SAN (migrate everything off it) to the second SAN in the cluster, so there is no risk of anything going wrong with the virtual machines while the firmware update runs.

    Where am I supposed to do that? I've checked the EqualLogic SAN Group Manager and the VMware vSphere GUI, and I don't see a place where I can do it.

    Any advice will be greatly appreciated.

    I assume your EQLs have two controllers each, right?  EQL uses an active and a standby controller, and in the event of a controller failure the failover takes about 10 seconds. To survive this event you need a proper configuration of your physical switches, hosts, and virtual machines. Just follow the installation and best-practice guides.

    A firmware update then performs the same type of failover. A FW update flashes the standby controller first, then asks for a 'restart', which performs a failover. After that the 'new' standby gets flashed as well.

    All of our EQL + VMware configurations survive the roughly 10-second (4 missed pings) interruption.

    If you want to take a member out of a pool for maintenance, it goes something like this:

    - Both of your members are in the same pool.

    - You create a new pool and "move the member" from the existing pool into the new one. If there is enough space left, all volumes are automatically migrated to the member remaining in the existing pool.

    - Once the member is in your maintenance pool, you can perform the FW update.

    It is fully transparent and non-disruptive for hosts/VMs, but it takes a looooonnnnng time to finish.

    If your members are already in different pools, and assuming there is enough space:

    - Use "move volume" to transfer a volume from one pool to another. You need to check whether there is enough space left.

    - Another method would be to activate SyncRep for all volumes and perform a switchover. Once one member holds all the SyncAlternates, you can pause SyncRep and perform the FW update.

    In the first 7 months of this year we performed 3 (or 4?) FW upgrades on our 11 EQLs, which currently run 7.0.6. The time-consuming evacuation process has never been an option for us.

    The FW comes with a PDF on how to perform the upgrade and contains some notes, so please take a look inside.

    Kind regards

    Joerg

  • EqualLogic management through SCVMM

    Hello

    Now that the question about physical interfaces is solved, I'm trying to add volume management in SCVMM.

    So far we have used CHAP usernames and passwords to access the iSCSI volumes.

    As far as I understand (and have tried) from the TR1094 white paper, when I use VMM to assign the "logical units" to the hosts, the SMP creates an ACL entry on the volume with the initiator name.

    Is there a way to avoid this and still use CHAP username/password combinations?

    Do any of you go with iSCSI initiator names only as the ACL entry?

    I don't have that many hosts, so my fallback would be to do it the old way, making the connection in the iSCSI initiator.

    Hello...

    In your scenario, I think that SCVMM will default to using the IQN for access to volumes.  I'm not aware of a way to use CHAP instead at this point.  You can, of course, later use the Dell EqualLogic management tools to modify the volume access control list to use IQN, IP, or CHAP, although it is not necessary.
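    For reference, that per-volume ACL change can also be scripted from the Group Manager CLI. The syntax below is from memory and should be treated as an assumption (the volume name and IQN are placeholders), so verify it against the CLI Reference for your firmware; a CHAP-based entry is added the same way with a CHAP account parameter instead of the initiator name:

    > volume select VMVol01 access create initiator iqn.1991-05.com.microsoft:hyperv01.example.local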

  • VLAN questions on MXL blade switches

    Hello

    I have a new M1000e chassis with a PS4110XS SAN, a few blade servers, and two Force10 MXL switches (the switches and the SAN are on the same fabric).

    I have it all set up and working fine, but using the default VLAN for everything.

    I want to connect the switches to a Juniper EX3300 via the SFP+ ports and send the SAN subnet out so that I can connect remotely to the IP address of the EqualLogic for VMM integration management.

    However, every time I try to put the SAN ports in a separate VLAN - even before I get to the trunks to the Juniper - I lose all connectivity on the SAN.

    I created a new VLAN (VLAN 120), but as soon as I put the ports inside it they cannot ping each other.

    int vlan120

    tagged te0/1 (this is the SAN port)

    tagged te0/12 (this is one server)

    tagged te0/16 (this is the other server)

    As soon as I do that I lose all connectivity to the SAN, and the servers cannot ping each other over the iSCSI network.

    I tried using 'untagged' instead of 'tagged' with the same effect.

    The VLAN is active (not shut down).

    I am at a loss as to what the issue is.

    Any help gratefully received.


    I figured out the issue - I had 2 trunk links to the Juniper in VLAN 120 - as soon as I disabled the second interface everything sprang to life.

    Now I need to find a way to implement a LAG between the two MXLs in the stack and the Juniper EX3300 - I managed to get the connectivity I need, but with only a single link, and as it is a remote site I want redundancy :-)
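    For the LAG, the MXL side would look roughly like the FTOS sketch below; the interface numbers are placeholders and this is untested here, so double-check it against the MXL configuration guide, and remember the EX3300 side needs a matching LACP-enabled ae interface with the same VLAN tagged:

    interface Port-channel 1
     description uplink-to-EX3300
     switchport
     no shutdown
    !
    interface TenGigabitEthernet 0/51
     no ip address
     port-channel-protocol LACP
      port-channel 1 mode active
     no shutdown
    !
    interface TenGigabitEthernet 1/51
     no ip address
     port-channel-protocol LACP
      port-channel 1 mode active
     no shutdown
    !
    interface vlan 120
     tagged Port-channel 1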

  • EqualLogic & large virtual file shares. 4 TB +.

    Hello

    We have accumulated a large amount of raw video, and it is growing at 2 TB per year.

    We currently have a virtual Windows file server with an OS-level RDM to an EqualLogic volume.

    Is that the best way to continue to 10 GB+, or should we be thinking differently?

    vVols?

    Thank you

    Damien

    Hello Damien,

    I'm sorry but I'm not sure what you're asking.

    EQL supports volumes up to 15 TB, and the GPT partition type supports that size. Depending on how your data is stored, you can use several volumes inside the VM with RDMs (different drive letters or mount points).   That would avoid the use of dynamic disks and extended volumes.  Each virtual machine can have four virtual SCSI adapters with 15 drives on each adapter, which is about 900 TB per server. Instead of using RDMs, you can use the MS iSCSI initiator inside the guest to connect to EQL volumes, which removes that 900 TB limit.   That would also let you use the Host Integration Toolkit for Microsoft: improved MPIO and consistent snapshots/replication.   It also adds SCSI UNMAP support to W2K8 R2; it is native in 2012 & R2.
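    As a small illustration of the mount-point approach (a sketch; the disk number, label, and path are placeholders): each additional EQL volume can be mounted into an empty NTFS folder instead of taking a drive letter, which avoids dynamic disks entirely.

    rem Run diskpart inside the file-server VM, then:
    rem (the target folder must already exist and be empty)
    select disk 4
    create partition primary
    format fs=ntfs quick label="Video2"
    assign mount="D:\Shares\Video2"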

    Re: 10 GB+, I'm assuming you mean 10 Gb Ethernet?   10GbE doesn't have much impact on the size of the storage if you are only growing at 2 TB per year.  There are benefits to 10GbE, of course, if you intend to go that way anyway.

    vVols works much the same as RDM, so no real huge advantage there.

    The questions I would have are: what IO load do you currently see, from SAN HQ?  What is your current EQL hardware and capacity?   The latest addition to the EQL family, the 6610, has up to 84 drives per array and you can have 16 of them in a single group.  That is a huge amount of space; with 6 TB drives, 1.3 PB of raw capacity.

    Kind regards

    Don

  • Getting EqualLogic to work with oVirt

    If anyone has any questions getting oVirt to work with EqualLogic, the following link may be useful:

    https://sites.Google.com/a/Keele.AC.UK/partlycloudy/ovirt/gettingovirttoworkwithdellequallogic

    oVirt is a very good free alternative to many commercial VM management solutions.

    One thing to change is the path_checker in /etc/multipath.conf.   There is a bug in multipathd, which shows up with EQL firmware 7.0 because of its 4K sectors.  When you use readsector0, it causes a protocol error just after the connection.  Change readsector0 to "tur" (or another supported checker).

    This is what I generally use for EQL in /etc/multipath.conf:

    defaults {
        udev_dir                /dev
        polling_interval        10
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_checker            tur
        rr_min_io               10
        max_fds                 8192
        rr_weight               priorities
        failback                immediate
        no_path_retry           fail
        user_friendly_names     yes
    }

    Also make sure the iface file sets both iscsi_ifacename and net_ifacename:

    cat /var/lib/iscsi/ifaces/eth0

    # BEGIN RECORD 6.2.0-873.2.el6
    iface.iscsi_ifacename = eth0
    iface.net_ifacename = eth0
    iface.transport_name = tcp
    iface.vlan_id = 0
    iface.vlan_priority = 0
    iface.iface_num = 0
    iface.MTU = 0
    iface.port = 0
    # END RECORD

    A couple of current settings from /etc/iscsi/iscsid.conf:

    ################################
    # session and device queue depth
    ################################

    # To control how many commands the session will queue, set
    # node.session.cmds_max to an integer between 2 and 2048 that is also
    # a power of 2. The default is 128.
    node.session.cmds_max = 1024

    # To control the device's queue depth, set node.session.queue_depth
    # to a value between 1 and 1024. The default is 32.
    node.session.queue_depth = 128

    node.session.iscsi.FastAbort = No
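    Once you are logged in to the volumes, a quick way to confirm that every path is up and that the EQL devices are claimed as expected (standard open-iscsi and multipath tools):

    # Show the iSCSI sessions and which interface each one uses:
    iscsiadm -m session -P 3
    # Show the multipath maps and per-path state:
    multipath -ll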

    Kind regards

  • ODX accelerates Live migration on EqualLogic?

    • EqualLogic PS6100XS, 6.0.1 firmware
    • Server 2012 Datacenter Edition, fully patched

    Should I be seeing the benefits of ODX when I live migrate VMs? Because I'm not...

    Eureka! I finally figured it out. It was Veeam's fault - I posted on their forums with all the details:

    forums.Veeam.com/viewtopic.php

    I am sure that I never actually saw ODX transfers until I uninstalled Veeam. This raises the question of how I was seeing incredible speeds anyway, but I would guess that my SAN, combined with MPIO, really flies even without ODX. It seems that the real advantage of ODX is how incredibly efficient it is at not using the network. With ODX active, my iSCSI traffic is almost non-existent during these file transfers. Speeds hold at around 200 MB/s, which is certainly not shabby, but it isn't the 700 MB/s I expected, either.
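    For anyone else chasing this, one quick check is the FilterSupportedFeaturesMode registry value, which is how a filter driver (such as a backup agent) turns ODX off at the host level; 0 means ODX is allowed, 1 means it is disabled. A PowerShell sketch:

    # Query the ODX switch on the Server 2012 host:
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode"
    # Set it back to 0 to allow ODX again:
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode" -Value 0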
