Best practices to follow before an EqualLogic disk firmware upgrade

Hi team,

I have two EqualLogic arrays running in the same group, using the single default storage pool. The older EQL is configured RAID 5 and the newer one RAID 6. Both have 12 x 600 GB disks. Last week we joined the two EQLs into the same group. Everything went well. Current firmware on both EQLs is 6.0.9.

We added the group to SAN HQ. Now SAN HQ is showing a constant alert that the drive firmware is out of date on the old EQL.

Please let me know the best practices to follow before any EQL disk firmware upgrade.

Is downtime necessary?

No downtime is needed. We do it on our production vSphere cluster with 300 VMs.
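As a rough pre-flight sketch of the routine (the kit filename and checksum file here are hypothetical; EqualLogic publishes an MD5 sum next to each kit on the support site):

```shell
# Hypothetical kit name; substitute the file you actually downloaded.
KIT="kit_V6.0.11-R123456.tgz"

# Stand-in bytes so the verification step below can be demonstrated end to end.
printf 'firmware-image-bytes' > "$KIT"

# The support site lists an MD5 sum next to each kit; save it as KIT.md5 ...
md5sum "$KIT" > "$KIT.md5"

# ... and confirm the download is intact before pushing it to the array.
md5sum -c "$KIT.md5"
```

Only after the checksum reports OK would we copy the kit to each member and run the firmware update from the group CLI; the controllers restart one at a time, which is why the vSphere cluster never notices.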

Kind regards

Joerg

Tags: Dell Products

Similar Questions

  • Best practices for moving a large virtual disk from one Windows Server VM to another?

    Environment
    3 HP BL460 G1 blades using an EMC NX4 for shared storage in a vSphere ESXi 4.1 cluster

    Description
    We currently have two important Windows Server 2003 file servers to upgrade to Windows Server 2008 R2. Both have a thick secondary virtual disk that is used exclusively for the file service. The first server's secondary disk is 500 GB and the second's is 1.4 TB. I currently do not have space on our SAN to bring up two new virtual machines with the same storage requirements. What I thought to do is bring up two new W2K8R2 VMs, then disconnect the secondary drives from the W2K3 VMs and reconnect them to the W2K8R2 VMs. I imagine the process would go like this:

    1. Stop sharing all files and remove the secondary hard drives in Windows on both W2K3 virtual machines
    2. Shut the W2K3 VMs down and remove only the secondary hard drives in vSphere
    3. Move each W2K3 VM's secondary VMDK file into the folder of the W2K8R2 virtual machine that will replace it
    4. Attach the respective secondary virtual drive to each W2K8R2 VM. Check that it appears in the VM's OS and that the file data and permissions are intact
    5. Power on the W2K3 VMs, rename them and remove them from the domain. When finished, power both off
    6. Rename the two W2K8R2 VMs to match the names of the W2K3 VMs they replaced, then restart for the name change to take effect
    7. After the restart, recreate the shares and test access to the shared folders

    Does this sound like the right way to go about this kind of upgrade? Is there a simpler way, or a step I'm missing? Thanks in advance!

    The process seems good. I assume these separate data VMDK disks are NOT stored with the current VM? It seems you are saying that they are simply located on separate volumes; if that's the case, then yes, this process will work.

    The one piece that CAN be useful for you: I don't know how many Windows shares you have on this data volume (by the sound of the size, probably quite a few), but you can export just the LanmanServer Shares registry key from the original host and import that registry key on the new virtual machine. Just make sure you MERGE it, and that the disk locations are the same. I have done this several times and it worked perfectly. Much easier than re-creating the shares, permissions and all that.
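    For reference, the registry trick looks roughly like this (run from an elevated prompt on each host; the key path is the standard LanmanServer location, the filename is arbitrary):

```
rem On the old W2K3 server: export the share definitions.
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" shares.reg

rem On the new W2K8R2 server, after the disks are attached with the same letters:
reg import shares.reg

rem Restart the Server service (or reboot) so the imported shares are published.
net stop lanmanserver && net start lanmanserver
```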

    Hope this helps.

    Jonathan

  • Win2008R2 DC disk types and best practices...

    Hello

    I am trying to upgrade my AD environment to Windows 2008R2, and I am asking for feedback on best practices for the selection of the disk type and the virtual machine parameters.  I intend to install fresh from scratch and transfer the AD roles and much more afterwards.

    I intend to create a 50 GB volume for the OS itself and a 200 GB volume for home directories and other user data.

    In a virtual environment, is it advisable to locate the sysvol and AD database in a separate location other than the default system drive?

    For my 50 GB OS disk (domain controller), is it normal / good practice to store it with its virtual machine?

    My 200 GB user data drive should ideally be reassignable to another VM intact, to keep the data in case of failure.

    Which disk 'provisioning' would it be normal to use with these types of disks?

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    Once such a virtual disk is created, is it possible to increase / decrease its size?

    Thank you very much for the comments on these issues.

    Best regards

    Tor

    Hello.

    In a virtual environment, is it advisable to locate the sysvol and AD database in a separate location other than the default system drive?

    Yes, follow the same best practices as you would in the physical environment here.

    For my 50 GB OS disk (domain controller), is it normal / good practice to store it with its virtual machine?

    Probably yes.  Why would you consider storing it elsewhere?  Are you doing something with SAN replication, or is there another reason?

    Which disk 'provisioning' would it be normal to use with these types of disks?

    I tend to use thin for OS volumes.  For the absolute best performance, you can go with thick eager-zeroed disks.  If you're talking about standard file-sharing data for users, then any disk type would probably be fine.  Check out "Performance Study of VMware vStorage Thin Provisioning" (http://www.VMware.com/PDF/vsp_4_thinprov_perf.pdf) for more information.

    Which disk 'mode' (independent / persistent) would it be normal to specify in this scenario?

    This can depend on how you back up the server, but I suspect you wouldn't use independent.  If you do, make sure you use persistent; you do not want to lose user data.

    Once such a virtual disk is created, is it possible to increase / decrease its size?

    Increasing is very easy with vSphere and 2008.  Very easy, and it can be done without downtime.  Shrinking is a more convoluted process and will most likely require manual intervention and downtime on your part.
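    A sketch of the grow path (the disk name, size and volume number here are hypothetical): grow the VMDK from vSphere (edit the disk size in the VM's settings, or `vmkfstools -X 60G guest.vmdk` against a powered-off VM), then extend the NTFS volume inside 2008 with a diskpart script:

```shell
# Write the diskpart script that would be run inside the Windows guest.
cat <<'EOF' > extend.txt
rescan
select volume 2
extend
EOF

# Inside the guest you would then run: diskpart /s extend.txt
cat extend.txt
```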

    Good luck!

  • PS6500e Firmware 4.2 upgrade path and best practices

    I recently took on the administration of a PS6500E (1 tray, 42 disks), an M1000 and a PC6224; these units have sat unused for several years (~5-8), I don't think they were ever in use!  Time to undertake this task is limited; however, I have downloaded and read through a lot of documentation as time allows.

    Checking basic hardware functionality in small steps was my first priority.

    SIMPLE hardware overview:

    > PE R710

    > PS6500E

    > PC6224

    > M1000 w / 16 blades

    <START TESTING>

    --> Created an account on the EQL support site

    --> created an account here!

    --> Documented the original wiring config and the network topology.

    --> Documented all the firmware on all units (don't have it on hand as I write this, aaarg!)

    --> Re-cabled, stripping everything down and restarting with a known topology. (KISS)

    --> Rebuilt the R710 head node with RHEL 6.8.

    --> Rebuilt all the blades (16) with RHEL 6.8

    --> Created user accounts for testing on all systems

    --> Installed required SW for development use

    --> Explained to the users this is the testing stage ONLY

    --> Users started to test functionality (adding more requirements along the way!)

    --> Explained to the users this is the testing stage ONLY. lol

    --> User functionality testing (adding more requirements, you get the picture I'm sure)

    --> Added 144 GB RAM to the head node (8 GB originally).

    --> Added 8 TB of USB disk data storage (user demand... aaarg!)

    --> NFS share (8 TB USB) from head node to BS1-16 (more added requirements)

    --> Test user account created on the NFS share /users/home and accessible across all systems

    At this point, I need to get the PS6500E online and tested before this monster grows out of control.

    I was able to access the unit via a serial connection, reset the grpadmin password and run the setup through the command line.

    Now I have access to the GUI and have tested as far as creating a new volume and connecting to it via iSCSI from the head node.

    --> Network config overview:

    --> MGMT == 192.168.1.0/24 (IPMI / Dell mgmt? / CMC)

    --> Compute == 192.168.2.0/24, comm. network for the head node and 16 blades

    --> PS6500/R710 iSCSI == 10.0.50.0/24, comm. network for the head node and PS6500E

    --> PC6224 == no access to this device yet.  Looks like a flat-configured switch

    --> 2 x CMC and 2 x switch modules in the M1000, connected to each other, and 1 cable to the PC6224

    --> KVM interconnect works to all nodes (never used, however)

    What is the best way to get the firmware up to date on the current version?  I'm not worried about data loss at this stage.  Don't forget that we are TESTING!

    Thank you for taking the time and I appreciate any help, advice, comments.

    Hello

    Since it has not been used, I suspect it is no longer under contract?   The service contract is what maintains your entitlement to the software and firmware that Dell provides for the PS Series SAN.

    If that's the case, you will not be able to update the firmware.  If it is somehow still covered, then you download the firmware from the eqlsupport.dell.com site (login required).  The upgrade takes a few steps.  4.2.x -> 5.0.x -> 5.2.x -> 6.0.x -> 7.1.x -> 8.1.x -> 9.0 is the common path.   There is a firmware Updating guide with each firmware release that includes a table with the supported upgrade paths.

    Note, a firmware update never requires deleting data or resetting the array.
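    The staged nature of the path is the key point, sketched here as a loop (the hop list is illustrative; always confirm each hop against the Updating Firmware guide for your array):

```shell
# Illustrative hop list, oldest first; exact point releases vary by model.
hops="4.2.x 5.0.x 5.2.x 6.0.x 7.1.x 8.1.x 9.0.x"

# Print each required intermediate upgrade; you cannot jump straight to the end.
prev=""
for v in $hops; do
  [ -n "$prev" ] && echo "update: $prev -> $v"
  prev="$v"
done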

    Once you get access to the switch, you'll want to upgrade it as well.  There are many iSCSI-related patches for the switch.

  • Best practices for large virtual disk

    I would like to add a large virtual disk (16 TB) to use as backup storage, ideally thin provisioned to allow, at least initially, a lot of free space.  The headache is the 2 TB limit for virtual disks in ESXi 5.

    I know that you can span 2 TB virtual disks in the operating system (Win 2008 R2, in this case).  But is it a good idea?  The underlying array is RAID 6, so I guess it's safe, but spanning across 8 disks sounds at least potentially worrisome to me.  Not a concern?

    Performance?

    RDM is less flexible and might be impossible (the RAID array is local).

    Anyone have any suggestions on the best practices here?  What would you do?

    I do not recommend spanning the volume across 2 TB VMDK files; one bad disk and it's all gone (I have seen it fail several ways: snapshot problems, corruption, etc.)

    Because they are local drives anyway, you lose the flexibility either way, so I recommend using RDM.

    One huge RDM for this virtual machine.

    http://blog.davidwarburton.NET/2010/10/25/RDM-mapping-of-local-SATA-storage-for-ESXi/ is a good starting point for configuring it.

    As for what I would do: I've done this before and I used RDM, granted only for 8 TB, but still.

  • Best practices for firmware updates on 40+ switches

    Hello.

    I need to update the firmware (and boot code) on over 40 PowerConnect switches (mainly 5324s).

    What is the best practice on that?

    I tried to download and install Dell OpenManage Network Manager, but it seems like a 'messy' application for someone who doesn't know it. I had a look at Dell's demo of the application, but the quality is very bad, and it is a different version than the one I downloaded (V3.0.1.15), so you cannot use the guides directly.

    Are there other options than manually opening a session and then TFTPing the new firmware and boot code to each switch? Or can someone link me to a valid demo or documentation that can help me with the job?
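    For scale, the manual routine being repeated 40+ times is essentially this (a sketch that only prints the per-switch CLI commands; the switch names, TFTP server and image filenames are made up, and the 5324's exact `copy` syntax should be checked against its CLI reference):

```shell
TFTP=10.0.0.250   # hypothetical TFTP server holding the images

for SW in sw01 sw02 sw03; do   # ... and so on through all 40+ switches
  echo "# --- $SW ---"
  echo "copy tftp://$TFTP/pc5324.ros image"      # firmware image
  echo "copy tftp://$TFTP/pc5324_boot.rfb boot"  # boot code
  echo "reload"
done
```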

    Thanks in advance.


  • New to ESXi: install ESXi on USB or local disks? Best practices?

    I'm new to VMware and run a small shop. What is the best practice or best method to install the ESXi OS?  I currently have a few hosts that I have installed on a USB stick on the server board.  After some research, it seems it would be better to have two small SSD drives that I can RAID for the operating system, then another RAID for the VM datastore.  USB is a single point of failure.

    Thank you

    Mike

    Hello

    Having the internal hard drives in RAID 1 for the OS will certainly avoid a single point of failure, as you correctly pointed out. At present, if your USB key dies, your host is in trouble quite quickly, and you will need to get a new one and re-install. You could back up your host configuration, though, and realistically it does not take too long to rebuild a host if it dies. Losing other stuff like network configurations and so on would, however, be a pain!

    I think having two internal SSD drives in RAID 1 for the OS is probably overkill. You would get some boot-speed advantage, but realistically most servers are not restarted all that often, and once ESXi is up and operational there is very little activity on the disks, a config update every so often and so on. I'd be inclined to use an SSD to create a host cache for swap instead; that way you actually use the SSD and get more performance for your money.

    Many manufacturers (like Dell) use internal SD cards in RAID 1. While SD cards are not known to be very robust, because of the small footprint of ESXi and the minimal writes needed once it is initially installed, they make a less expensive alternative to enterprise-class disks for the OS.

    In regards to your datastores, having an internal RAID on your local disks is best if you use a stand-alone host with no network-attached storage. You still have the problem of host failure, though.

    Cheers,

    Ryan

  • Best practices for VM disk partitioning

    I want to create a Windows 2012 server with C:\ and D:\ partitions; by the way, it is an SQL database server. Should I create one hard disk C: and then add another hard disk D:\, all on the same datastore? I read that partitioning a single disk with tools inside the OS is more for real bare-metal setups, and that for a virtual machine creating two separate drives would be just as easy and better for recovery or future expansion of the drive.

    Thank you

    Mike

    Note: Discussion successfully moved to Virtual Machine & Guest OS from vSphere SDK comments
    Yes, create a separate virtual disk (VMDK) for each partition or volume under Windows. If the workload is light to begin with, you can leave all the VMDKs in the virtual machine's working directory, and then if you need to split the disks across different datastores you can do it later.
    Can you tell us about your workload and the virtual infrastructure you have? Which version of vSphere are you licensed for?
  • Best practices - Raw device mapping vs virtual disk

    Hello

    A small question about configuration. What would be described as the best practice to configure a guest server that requires a large amount of disk space? Which is the preferred option:

    1 - create a virtual machine with a raw device mapping to the LUN (on a SAN)

    or

    2 - create a virtual machine with an additional large virtual disk that sits in a datastore on the LUN (on a SAN)

    Thank you

    Dave

    Either way you are limited to 2 TB (except by using software iSCSI from within the guest operating system), so the only arguments are:

    (1) Use RDM if it is possible that you will connect this LUN to a physical machine, or if you already have the LUN on a physical machine, such as a large file server.

    (2) Use VMFS in all other situations - VMDKs give you flexibility; you can migrate the virtual machine wherever you want.

    ---

    MCSA, MCTS, VCP, VMware vExpert 2009

    http://blog.vadmin.ru

  • MMO best practices: download music and heavy files to users' hard disks?


    I just downloaded a Hello Kitty MMO application for research (for my kids, of course).

    I am developing my English-teaching application with LOADS of classical music mp3s and heavy background BGs and phrases. The best idea would be for the client to download them to their hard drive, meaning I wouldn't need to stream them and could therefore save a wealth of bandwidth charges from my ISP?

    Cheers

    You will need to test and see what works best for you... as I said, the files are cached and used from the cache whenever possible. However, there is a limit to the size of the cache, and if your files are huge then yes, it probably makes more sense to download them. Or get better hosting where bandwidth is not a problem...

  • OEM 12c best practices for monitoring an 11.1 DB RAC env. + Data Guard

    OEM 12c release 5 is the only monitoring in our environment... is there a best-practice model for which aspects need to be monitored and their default values?

    This doc for 12c DB is very useful; it covers best practices for high availability. They talk a lot about RAC/DG monitoring.  https://docs.Oracle.com/database/121/HABPT/monitor.htm#HABPT003

    The 11.2 version is here: https://docs.oracle.com/cd/E11882_01/server.112/e10803/monitor.htm#g1011041. I would read them both, though, as there are some new features in 12c that may still apply to 11g.

    As far as metrics/templates go, if you create a monitoring template from the target type (create a template, then select the category and target type), it will include all the metrics for that target type and is as close as you'll get to Oracle 'default thresholds'.  It is the best-effort baseline the product teams have produced.   Of course, they will not be perfect for everyone, but it's a starting point!

  • NetApp best practices and independent disks

    Hi. NetApp's best practices for VMware recommend that transient and temporary data, such as the guest operating system pagefile, temporary files and swap files, be moved to a separate virtual disk on a different datastore, as snapshots of this type of data can consume a large amount of storage in a very short time due to the high rate of change (that is, create a dedicated datastore for the transient and temporary data of all VMs, with no other types of data or VMDKs residing on it).

    NetApp also recommends configuring the VMDKs residing in these datastores as "Independent persistent" disks in vCenter. Once configured, the transient and temporary data VMDKs will be excluded from VMware vCenter snapshots and from the NetApp snapshot copies initiated by SnapManager for Virtual Infrastructure.

    I would like to understand the impact of this best practice - can anyone advise on the following:

    • If the above is implemented:

      • Will snapshots work via vCenter?

      • Will snapshots work via the NetApp SnapManager tool?

      • Does the snapshot include all of the VM's disks? If not, what is the consequence of not having the whole picture of the VM?

      • Can the vCenter snapshot be restored OK?

      • Can the NetApp snapshot be restored OK?

    • What impact does the foregoing have on the restore process if using a backup product that relies on snapshot technology?

    Thank you





    Hi Joe

    These recommendations are purely to save storage space when replicating or backing up.

    For example, you can move your *.vswp file (the VM swap file) to a different datastore. NetBackup can snapshot entire datastores, and with this configuration you can exclude that particular datastore.

    The same holds if you create a dedicated datastore for OS swap files; mark those disks independent so that vCenter does not snapshot these VMDKs.
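    A related per-VM knob worth knowing (the datastore name below is made up, and you should verify the option against your vSphere version's documentation): the `sched.swap.dir` VMX setting relocates just the swap file, without marking anything independent:

```
sched.swap.dir = "/vmfs/volumes/swap_datastore/myvm/"
```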

    I did a project with NetApp on production SAP boxes.

    We moved all the *.vswp files to newly created, dedicated datastores, and used RDMs for the OS swap locations.

    We actually used SnapDrive, a NetApp technology, to quiesce the SQL DB on the RDM before the RDM snapshot was taken, but I won't go into too much detail.

    To answer your questions (see the comments in the quote):

    joeflint wrote:

    • If the above is implemented:
      • Will snapshots work via vCenter? - Yes it will; the independent disk gets ignored
      • Will snapshots work via the NetApp SnapManager tool? - Yes it will; it snaps the entire datastore/LUN
      • Does the snapshot include all of the VM's disks? If not, what is the consequence? - No; the *.vswp file is created when the VM is started (no need to back it up)

    - The OS swap location VMDK must be re-created in the case of a restore. Windows will still boot if the swap disk is missing, and you can then specify the new swap location.

    • What impact does the foregoing have on the restore process if using a backup product that relies on snapshot technology? - Those backup products use vCenter snapshots, and because the vCenter snapshots work 100%, it shouldn't be a problem.

    Hope it helps.

    Please award points if it does.

  • TDMS & DIAdem best practices: what happens if my signal has breaks/gaps?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    t0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending the channel values.  And if the size of the TDMS file goes beyond 1 GB, I create a new file and continue there.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow pause/resume of the data acquisition.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as XY chart data (value & timestamp).  But I am opposed to this in principle because, as I see it, it fills your hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed with DIAdem.

    My question: are there best practices for storing signals that break/pause like that?  Could I just start a new record with a new start time (t0), and have DIAdem somehow "link" these signals... so that, for example, it knows it is a continuation of the same signal?

    Of course, I should install DIAdem and play with it.  But I thought I would ask the experts on best practices first, as I have no knowledge of DIAdem.

    Hi josborne;

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or would you rather store multiple power-up sections in the same TDMS file?  The best way to manage the date/time shift is to store one waveform per channel per power-up section and use the "wf_start_time" channel property, which comes automatically with waveform TDMS data if you are wiring an orange floating-point array or a brown waveform to the TDMS Write.vi.  DIAdem 2011 has the ability to easily access the time offset when it is stored in this channel property (assuming it is stored as a date/time and not as a DBL or a string).  If you have only one power-up section per TDMS file, I would certainly also add a "DateTime" property at the file level.  If you want to store several power-up sections in a single TDMS file, I would recommend using a separate group for each power-up section.  Make sure you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    'wf_xname'
    'wf_xunit_string'
    'wf_start_time'
    'wf_start_offset'
    'wf_increment'

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Coding "best practices" question

    I'm about to add several command objects to my project, and the source code will grow accordingly. I would be interested in advice on good ways to break the code into multiple files.

    I thought I would have a source file (and a header file) for each command object. Does this cause problems when editing and saving the .uir file? When I run the Code -> Set Target File... command, it seems that it changes the target file for all objects, not only the one I am currently working on.

    At the very least, I would like to have all my callback routines in a file other than the file that contains main(). Is that a good/bad idea, or does it not matter? Is there something special I need to know about this?

    I guess what I'm asking is, how much freedom do I have when it comes to placing code in locations other than what the .uir editor seems to impose? Before I go down that road, I want to make sure I'm not opening a can of worms here.

    Thank you.

    I'm not so comfortable pronouncing on "best practices", maybe because I am partially a self-taught programmer.

    Nevertheless, some concepts are clear to me: you are not limited in any way in how you divide your code into separate files. Personally, I have the habit of grouping panels that are used for a consistent set of functions (e.g. all the panels for test configuration, all the panels for test execution/monitoring...) in a single UIR file, and the related callbacks in a single source file, but it is not a rigid rule.

    I have a few common callback functions that live in a separate source file; some of them, very commonly used across all of my programs, are included in my own instrument driver and attached to controls either in code or in the UIR editor.

    When you use the UIR editor, you can use the Code > Set Target File... menu feature to set the source file where generated code will go. This option can be changed at any time while developing, so you could, for example, place a button on a panel, set a callback routine for it, set the target file and then generate the code for that control only (Ctrl+G or the Code > Generate > Control Callbacks menu function). Until you change the target file, all generated code will go to the original target file, but you can move it to another source file after that.

  • When trying to turn on my computer: "Windows did not start because of the following ARC firmware boot configuration problem: the 'osloadpartition' parameter setting is invalid"

    Original title: ARC configuration

    Hi, I got this message when trying to turn on my computer:

    "Windows did not start because of the following ARC firmware boot configuration problem: the 'osloadpartition' parameter setting is invalid. Please check the Windows documentation about ARC configuration options and your hardware reference manuals for additional information."

    can someone help me?

    Hi sugenghariyono,

    ·         Did you make any changes to the computer before the issue appeared?

    ·         What is the brand and model of the computer?

    ·         Are you able to boot to the desktop?

    Follow the steps in the article.

    Error message: "Windows did not start because of a computer disk configuration problem"

    For reference:

    Advanced Troubleshooting for General startup problems in Windows XP
