No hardware status tab in free ESXi 4?

Hello

I'm just starting with ESXi 4 (free version) and the vSphere Client, and for some reason I do not have the Hardware Status tab.  The server is a ProLiant DL380 G5.  I can see some info but no hardware status, so I have no way of knowing if, say, a disk breaks down.

Is this by design, or is something else going on here?

You should have a Health Status selection in the Configuration tab. You must install the HP version of ESXi (available in the VMware ESXi download area or on the HP web site) and you will see a full list of the hardware and its health. There is no alarm function in the free version of ESXi. You can download the free Veeam Monitor, but it does not have the ability to send an email.

Tags: VMware

Similar Questions

  • Lost hardware status tab

    Hello

Since yesterday my Hardware Status tab in Virtual Center has disappeared. I had network problems yesterday (a switch went down); might that be the reason?

How can I make it visible again?

    Thank you

Have you tried restarting the vCenter WebServices? If that doesn't do the trick, try restarting the vCenter Server service.

If this post was useful/solved your problem, please consider marking the thread answered and awarding points as you see fit. Thank you!

DELL PE R710 missing sensors on the vCenter ESX 'Hardware Status' tab after latest patch update!

Last month I updated all my ESX hosts with the latest ESX patches from 28/04/2011 (ESX410-201104001) via vCenter Update Manager. Earlier this month I happened to be doing some checks in the server room and noticed a hard drive failure on one of my DELL R710 rack servers. I connected with the VirtualCenter client, and when I clicked on the 'Hardware Status' tab (where the alarms about the server's hardware configuration usually appear), I noticed that my storage sensors are missing, and the network sensors as well!?? I recall that in March I had another server hard disk failure, but then I saw an alarm (for predictive failure) in my vCenter and called Dell to replace the drive!

I also noticed that the number of sensors on my servers is now between 73 and 88; before, I remember there were a couple hundred of them!??

After some research on the net, I did a firmware/BIOS upgrade of all hardware on the servers, and I installed OMSA 6.5 and the Dell management plug-in for vCenter, and all seems to work just fine, but the missing storage sensors and the sensor count under the 'Hardware Status' tab were not resolved. I think there is a problem with vCenter, because when I connect directly to the host through the vSphere Client, all the hardware appears correctly in the "Health Status" tab. I found threads with similar problems, but they were with HP servers and were not answered or solved!

Is there anyone with a similar problem, or an answer to my question?

    Thanks to all in advance!

    godjak

PS: If needed, I can post some screenshots!

    Confirmed.  Without patch ESX410-201104401-SG, my other R710 has a full complement of sensors.

How to get the IP/MAC information of the iLO board as shown in the hardware status tab

    Hello

I know there are HP scripts to collect the IP/MAC information of the iLO board (hpconfg get_network.xml) and then use a VMware PowerCLI IPMI script to feed DPM,

    as published on http://www.vpeeling.com/?tag=scripting

    Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

    Connect-VIServer -Server your.vcenter.server

    $VMHosts = @(Import-Csv "C:\scripts\host-info.csv")

    $IPMIUser = "dpmuser"
    $IPMIPass = "dpmpass"

    foreach ($VMHost in $VMHosts) {

        $esxMoRef = Get-VMHost $VMHost.Hostname | % {Get-View $_.Id}
        $IpmiInfo = New-Object VMware.Vim.HostIpmiInfo
        $IpmiInfo.BmcIpAddress = $VMHost.iLOIP
        $IpmiInfo.BmcMacAddress = $VMHost.iLOMAC
        $IpmiInfo.Login = $IPMIUser
        $IpmiInfo.Password = $IPMIPass
        $esxMoRef.UpdateIpmi($IpmiInfo)

    }
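    For reference, the script above reads C:\scripts\host-info.csv and expects one row per host, with the column names used inside the loop (Hostname, iLOIP, iLOMAC). A minimal example file, with hypothetical values:

```csv
Hostname,iLOIP,iLOMAC
esx01.example.com,192.168.10.21,00:17:a4:77:00:01
esx02.example.com,192.168.10.22,00:17:a4:77:00:02
```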

But here is the question I got recently: how can we get this info out via vCenter? The vSphere Client has the Hardware Status tab, and we can see this info there.

    hw_status.PNG

Has anyone tried it this way?

PowerCLI or the SDK (C#), either way is fine.

    Thanks in advance

    A.S.

You may have found a solution by now...

    In any case, I found this function (Get-VMHostWSManInstance) which works fine:

    http://blogs.VMware.com/vipowershell/2009/03/monitoring-ESX-hardware-with-PowerShell.html

    The most difficult part is to identify the CIM class containing the BMC MAC/IP address (in my case I need just an IP address). After digging in this doc:

    http://www.VMware.com/support/developer/CIM-SDK/smash/U3/GA/apirefdoc/OMC_IPMIIPProtocolEndpoint.html

I managed to locate it: OMC_IPMIIPProtocolEndpoint

    The Get-VMHostWSManInstance call:

    Get-VMHostWSManInstance -VMHost (Get-VMHost 'vmhost1') -class OMC_IPMIIPProtocolEndpoint -ignoreCertFailures | Select IPv4Address, MACAddress

will give the IP/MAC address of the BMC.

BTW, I'm using PowerCLI 5.0.1 on Windows 7, and the host is ESX 4.x.
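    If PowerCLI is not at hand, the same class can be read with any CIM client and the two properties picked out of the returned instances. A minimal Python sketch of just the extraction step, using fabricated stand-in data shaped like OMC_IPMIIPProtocolEndpoint results (a real query would come from a CIM library such as pywbem, which is not shown here):

```python
# Each CIM instance comes back as a map of properties; only two matter here.
# The dicts below are fabricated stand-ins for OMC_IPMIIPProtocolEndpoint
# instances as returned by a CIM enumeration against an ESX host; the
# property names follow the class documentation linked above.
def bmc_addresses(instances):
    """Return (IPv4Address, MACAddress) pairs from CIM instance dicts."""
    return [(i.get("IPv4Address"), i.get("MACAddress")) for i in instances]

fake_instances = [
    {"IPv4Address": "10.0.0.50", "MACAddress": "00:17:a4:77:00:01", "Name": "IPMI"},
]

print(bmc_addresses(fake_instances))
```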

VCB 1.5 with free licensed ESXi 4

    Hello

Can you tell me if VCB 1.5 is compatible with free licensed ESXi 4.0?

    Thanks in advance

    Ki

    Hello

    VCB is compatible with ESXi 4.0.

But ESXi must be licensed: with free ESXi 4.0, VCB is not supported.

    Best wishes / Saludos.

    Pablo

Please consider awarding points for any useful answer. Thank you!!

    Virtually noob blog

  • Temperature for the host hardware status

    Hello to all,

    My question, as per the subject, is how to set the values in the alarm definitions for the temperature check.

    Where do I have to set these values?

    I cannot understand what the thresholds are that make the notification fire or not.

    Thanks to everyone

    The health of the system is determined automatically, based on the settings of the corresponding CIM provider.

    When something is out of range, an alert is triggered.

vCenter 4.1/ESXi 4.1 hardware status: "Communication error with the server"

I have a brand new installation of vCenter with two new installs of ESXi 4.1 connected to it. I am using Dell PowerEdge 2950s as the ESXi servers. I tried both the Dell customized version of ESXi and the vanilla VMware version. My vCenter host is Windows 2008 R2.

In vCenter, on both ESXi servers, whichever version of ESXi I run, I get a "Communication error with the server" error when I click the Hardware Status tab. No data appears in the hardware status tab.

Dell's only suggestion was to update the firmware/BIOS on the server. I did, and it did not help.

Does anyone have any suggestions or ideas about what could cause this?

I had exactly the same problem as everyone else on this thread.  I finally called VMware support.  It is definitely a problem with Tomcat.  Apparently, during my initial installation of the vCenter server, Tomcat (on a vanilla, unpatched Win2k8R2 server) did not create a database file.  The tech eventually uninstalled vCenter Server and went through the setup again.  For some reason Tomcat created the db just fine the second time.  After installation, it takes about 5 minutes for the db to be created.

    On Windows 2k8R2 it is located here: C:\Program Files\VMware\Infrastructure\tomcat\lib\xhiveConfig\data

The name of the db is VcCache-default-0.XhiveDatabase.DB

My server didn't have one at all.  The reinstallation created it; at first it is 0 KB in size.  After 5 minutes, it grows to 25,600 KB.

Everything in my vCenter now works correctly.  I'm on 4.1.320137.

    I hope this helps.

    Dave

vSphere Client errors after upgrade to 4.1

Since I upgraded our infrastructure from 4.0 to 4.1, the vSphere Client throws an error any time it is left open for an extended time.

If I open the client and let it sit for a few hours or overnight, I will come back to it and there will be an error popup with the following details.

    An internal error has occurred in the vSphere Client.

Details: Not enough storage is available to process this command.

    Contact VMware support if necessary.

Once this bug has occurred, the error dialog box cannot be dismissed: if you click Close, it just pops back up.  The client is unusable after this point, as the error dialog is modal and you cannot touch the main window.  You must kill the client from Task Manager.   Then it will restart and work perfectly for a certain period.  Occasionally, dismissing the error dialog will return you to the client, but most of the tabs do not work and clicking on almost anything causes the error again.  It is not a show-stopper, but it is a time-wasting, irritating bug.  I've never seen the bug pop up while I was actively working in the client, so I don't know what triggers it.

The client runs on Windows 7 64-bit, but I have confirmed the same behavior with the client under Server 2008 32-bit.  I have not seen the error yet when the client is on Server 2003 32-bit, but I haven't left it open long enough for the error to present itself.  Attached are the log files that the 'Error report to VMware' dialog box wants to send.  The client is connecting to a vCenter Server on Windows 2008 64-bit, not to the ESXi hosts.  The smallest attached datastore has more than 40 GB of free space.

I do not have active support, so the report-to-VMware dialog suggested I post here.  There doesn't seem to be a community specifically for the client, and this started only after the upgrade, so I posted it in this community.

    Thank you
    Joe

I think I've tracked it down to the hardware status tab. If I use the hardware status tab and let it sit there for a while, memory begins to get chewed up.

I opened an SR with VMware, and they confirmed. Their words:

"After a webex and looking through our internal KBs, I was led to a bug and a KB that both speak of the hardware status page causing a memory leak to do with CIM and Java.  It looks like this issue will be resolved in a future version, but it does not say clearly which version they will address it in.  4.0U1 should be out soon, I think within 30-90 days.  The workaround at this time is to not leave the vSphere Client open for long periods of time."

How to check whether the HP management agents are working or not?

    Hi all

I managed to install the HP management agents on ESXi 4.0 by following the steps mentioned in http://h20000.www2.HP.com/bizsupport/techsupport/SoftwareDescription.jsp?swItem=MTX-25f06077ad5541f5a962dd2a69&lang=to&cc=us&idx=1&mode=4&. Can you please suggest a simple method to make sure the agents are running? For example, checking some system info, or seeing the details in a user interface.

    C:\Program Files\VMware\VMware vSphere CLI\bin> vihostupdate.pl --server test --username root --password test --bundle ftp://testftp/pub/Depot/HP-esxi4.0uX-bundle-1.1.zip --install --bulletin bulletin1,bulletin2

    Please wait, patch installation is in progress ...

    The update has been completed, but the system must be restarted for the changes to be effective.

    C:\Program Files\VMware\VMware vSphere CLI\bin> vihostupdate.pl --server test --username root --password test --bundle ftp://PDP-BLR-Suite/pub/Depot/HP-esxi4.0uX-bundle-1.1.zip --query

    ---------Bulletin ID---------  -----Installed-----  ----------Summary----------
    HPQ-esxi4.0uX-bundle-1.1       2010-01-29T15:00:13  HP ESXi Bundle 1.1

    Kind regards

    Oumou khairy

    : +: EMCPA, RHCE, VCP4, VCP3.

The easiest option:

In the vCenter Hardware Status tab, look for the HP Smart Array storage entry. The HP SA is not shown by default; the HP CIM providers are needed for this option to appear.

Another method is to install wbemcli on a Linux box (on Ubuntu, you can get it using the #apt-get install wbemcli command, or download a build of the sblim package; Red Hat has it on their CDs, or you can get it from sourceforge.net) and try the following query:

    wbemcli ecn -nl -noverify 'https://root:[email protected]/root/hpq'

The above query should list a bunch of classes.
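    With the -nl flag, wbemcli prints one class per line as a host/namespace:ClassName path, so the class names themselves are easy to pull out with a script. A small Python sketch; the sample lines below are hypothetical output for illustration, not captured from a real host:

```python
# wbemcli ecn -nl output: one "host:port/namespace:ClassName" path per line.
# These sample lines are hypothetical, for illustration only.
sample_output = """\
10.0.0.5:5989/root/hpq:HPQ_DiskDrive
10.0.0.5:5989/root/hpq:HPQ_ArrayController
"""

def class_names(wbemcli_output):
    """Extract the trailing class name from each enumerated class path."""
    return [line.rsplit(":", 1)[-1] for line in wbemcli_output.splitlines() if line]

print(class_names(sample_output))
```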

HP ESXi offline bundle not working?

I am running 6 x ESXi 5.0 hosts on HP BL460c blades (a mixture of G6 and G7).

I am trying to get better hardware status info by installing the HP ESXi management bundle using Update Manager.

I have configured Update Manager to download the VIBs directly from the HP depot. Reviewing the Patch Repository shows that this succeeded.

I created a dynamic host patch baseline to include patches from the vendor "Hewlett-Packard Company" (product, severity, category: everything; currently showing 23 patches in the baseline). I attached it at the datacenter that contains the hosts. I scanned and remediated. Re-scanning gives me a nice green checkmark. So far so good, however...

If I look in the Hardware Status tab, I see no additional information. (I think I should see additional sensors related to power and storage? Not sure.)

I'm also implementing HP Insight Manager for vCenter. In the vSphere Client, when I select a host and then the HP Insight Management tab, I get "this program cannot display this webpage". Maybe it's a separate issue :-(

Is there a way to verify that the installed HP agents are returning info?

    Thank you

I created a dynamic host patch baseline

You need a host extension baseline, not a regular host patch baseline. While you can add extensions to host patch baselines, they will have no effect during remediation. Make sure you handle them with a host extension baseline instead.

Some tips on creating custom baselines can be found here (shameless self plug):

    http://alpacapowered.WordPress.com/2012/06/05/ESXi-HP-updates/

    http://alpacapowered.WordPress.com/2012/09/06/September-ESXi-HP-updates/

How to find SCSI tape drive card information on an ESX 4.1 host...

    Dear team,

Last night we installed a SCSI tape drive card in one of the ESX 4.1 hosts. I just want to know how to find the details of the SCSI card on the ESX 4.1 host; is it possible to find them via the CLI or the hardware status tab?

    need help on the same.

Regards,

    Mr. VMware

    Hello

    You should find the information you are looking for by entering the following command.

    less/proc/scsi/aic79xx/6

    Kind regards

    Ralf

  • ESXi 4.0 and VMs extremely slow

    Hey there.

I installed an ESXi 4.0.0 Build 208167 (German localization) a few months ago, and it has been running with normal performance.

Well, last week the virtual machines developed performance issues. As an example, Windows Explorer takes quite a few moments to appear, as do other tasks such as closing windows on the Windows systems or creating xterm windows on SLES11.

I tried to restart my system, but with this I got new problems: some virtual machines cannot start or have become invalid ("(invalid)" appears appended to the VM name).

When I try to start other systems, I press play and the machine seems to start, but it remains at 95% for a minute and then a message appears: "The attempted operation cannot be performed in the current state (Powered off)."

My log in /var/log (last lines):

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:07 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:22:15 vmkernel: 0:15:19:26.677 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:15 vmkernel: 0:15:19:26.677 cpu6:4324) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:15 vmkernel: 0:15:19:26.815 cpu6:4324) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu1:5139) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu8:4339) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu1:5139) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu14:4334) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu14:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:17 vmkernel: 0:15:19:28.790 cpu1:5139) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:17 vmkernel: 0:15:19:28.790 cpu6:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:17 vmkernel: 0:15:19:28.945 cpu6:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:18 vmkernel: 0:15:19:29.069 cpu1:5139) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:26 vmkernel: 0:15:19:37.319 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:26 vmkernel: 0:15:19:37.319 cpu2:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:26 vmkernel: 0:15:19:37.456 cpu2:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu2:4329) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu2:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu1:4363) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu11:4338) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:26 vmkernel: 0:15:19:37.579 cpu11:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:28 vmkernel: 0:15:19:39.437 cpu1:9738) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:28 vmkernel: 0:15:19:39.437 cpu14:4333) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:28 vmkernel: 0:15:19:39.581 cpu14:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:28 vmkernel: 0:15:19:39.704 cpu1:9738) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:35 vmkernel: 0:15:19:46.078 cpu1:9738) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:35 vmkernel: 0:15:19:46.078 cpu14:4328) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:35 vmkernel: 0:15:19:46.078 cpu14:4328) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:36 vmkernel: 0:15:19:47.936 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:36 vmkernel: 0:15:19:47.936 cpu14:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:37 vmkernel: 0:15:19:48.085 cpu14:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:37 vmkernel: 0:15:19:48.208 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:37 vmkernel: 0:15:19:48.208 cpu2:4330) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:22:37 vmkernel: 0:15:19:48.209 cpu2:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:37 vmkernel: 0:15:19:48.209 cpu1:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:37 vmkernel: 0:15:19:48.209 cpu14:4333) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:37 vmkernel: 0:15:19:48.209 cpu14:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:39 vmkernel: 0:15:19:50.066 cpu1:4097) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:39 vmkernel: 0:15:19:50.066 cpu4:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:39 vmkernel: 0:15:19:50.216 cpu4:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:39 vmkernel: 0:15:19:50.339 cpu1:4097) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:47 vmkernel: 0:15:19:58.547 cpu1:4097) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:47 vmkernel: 0:15:19:58.547 cpu4:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:47 vmkernel: 0:15:19:58.739 cpu4:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:47 vmkernel: 0:15:19:58.862 cpu1:4097) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:47 vmkernel: 0:15:19:58.862 cpu4:4337) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:22:47 vmkernel: 0:15:19:58.862 cpu4:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:47 vmkernel: 0:15:19:58.862 cpu1:4367) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:47 vmkernel: 0:15:19:58.862 cpu8:4339) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:47 vmkernel: 0:15:19:58.863 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:49 vmkernel: 0:15:20:00.719 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:49 vmkernel: 0:15:20:00.720 cpu0:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:49 vmkernel: 0:15:20:00.869 cpu0:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:49 vmkernel: 0:15:20:00.992 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:58 vmkernel: 0:15:20:09.242 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:58 vmkernel: 0:15:20:09.242 cpu4:4331) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:22:58 vmkernel: 0:15:20:09.385 cpu4:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu2:4329) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu2:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu0:4337) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:22:58 vmkernel: 0:15:20:09.509 cpu2:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23 vmkernel: 0:15:20:11.367 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23 vmkernel: 0:15:20:11.367 cpu8:4332) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23 vmkernel: 0:15:20:11.516 cpu8:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23 vmkernel: 0:15:20:11.639 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:08 vmkernel: 0:15:20:19.877 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:08 vmkernel: 0:15:20:19.877 cpu8:4333) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:09 vmkernel: 0:15:20:20.026 cpu8:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu8:4332) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu8:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu4:4336) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:23:09 vmkernel: 0:15:20:20.150 cpu4:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:11 vmkernel: 0:15:20:22.008 cpu14:4984) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:11 vmkernel: 0:15:20:22.008 cpu2:4335) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:11 vmkernel: 0:15:20:22.157 cpu2:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:11 vmkernel: 0:15:20:22.281 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:19 vmkernel: 0:15:20:30.512 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:19 vmkernel: 0:15:20:30.512 cpu2:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:19 vmkernel: 0:15:20:30.650 cpu2:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:19 vmkernel: 0:15:20:30.773 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:19 vmkernel: 0:15:20:30.773 cpu0:4325) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:23:19 vmkernel: 0:15:20:30.773 cpu0:4325) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:19 vmkernel: 0:15:20:30.774 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:19 vmkernel: 0:15:20:30.774 cpu6:4330) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:23:19 vmkernel: 0:15:20:30.774 cpu6:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:21 vmkernel: 0:15:20:32.625 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:21 vmkernel: 0:15:20:32.625 cpu2:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:21 vmkernel: 0:15:20:32.780 cpu2:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:21 vmkernel: 0:15:20:32.904 cpu14:5140) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:30 vmkernel: 0:15:20:41.148 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:30 vmkernel: 0:15:20:41.148 cpu1:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:30 vmkernel: 0:15:20:41.285 cpu1:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu8:4338) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu8:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu0:4329) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:23:30 vmkernel: 0:15:20:41.409 cpu0:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:32 vmkernel: 0:15:20:43.260 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:32 vmkernel: 0:15:20:43.260 cpu0:4330) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:32 vmkernel: 0:15:20:43.410 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:32 vmkernel: 0:15:20:43.533 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:40 vmkernel: 0:15:20:51.759 cpu14:4984) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:40 vmkernel: 0:15:20:51.759 cpu1:4325) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:40 vmkernel: 0:15:20:51.908 cpu1:4325) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu14:5009) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu0:4330) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu14:5009) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu6:4326) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:23:41 vmkernel: 0:15:20:52.032 cpu6:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:42 vmkernel: 0:15:20:53.883 cpu14:5040) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:42 vmkernel: 0:15:20:53.883 cpu6:4324) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:43 vmkernel: 0:15:20:54.027 cpu6:4324) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:43 vmkernel: 0:15:20:54.150 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:51 vmkernel: 0:15:21:02.370 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:51 vmkernel: 0:15:21:02.370 cpu4:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:51 vmkernel: 0:15:21:02.513 cpu4:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu0:4331) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu0:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu0:4324) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:23:51 vmkernel: 0:15:21:02.637 cpu0:4324) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:53 vmkernel: 0:15:21:04.489 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:23:53 vmkernel: 0:15:21:04.489 cpu12:4332) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:23:53 vmkernel: 0:15:21:04.638 cpu12:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:23:53 vmkernel: 0:15:21:04.762 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:02 vmkernel: 0:15:21:12.999 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:02 vmkernel: 0:15:21:12.999 cpu0:4326) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:02 vmkernel: 0:15:21:13.131 cpu0:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:02 vmkernel: 0:15:21:13.254 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:02 vmkernel: 0:15:21:13.254 cpu12:4332) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:24:02 vmkernel: 0:15:21:13.254 cpu12:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:02 vmkernel: 0:15:21:13.255 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:02 vmkernel: 0:15:21:13.255 cpu8:4333) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:24:02 vmkernel: 0:15:21:13.255 cpu8:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:04 vmkernel: 0:15:21:15.112 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:04 vmkernel: 0:15:21:15.112 cpu6:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:04 vmkernel: 0:15:21:15.249 cpu6:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:04 vmkernel: 0:15:21:15.373 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    Feb 23 07:24:11 pass: Activation N5Vmomi10ActivationE:0x5b3c7ea8 : Invoke done on vmodl.query.PropertyCollector:ha-property-collector

    Feb 23 07:24:11 pass: Arg version: "111"

    Feb 23 07:24:11 pass: Throw vmodl.fault.RequestCanceled

    Feb 23 07:24:11 pass: Result: (vmodl.fault.RequestCanceled) { dynamicType = <unset>, faultCause = (vmodl.MethodFault) null, msg = "", }

    Feb 23 07:24:11 pass: PendingRequest: HTTP Transaction failed, closing connection: N7Vmacore15SystemExceptionE(Connection reset by peer)

    Feb 23 07:24:12 vmkernel: 0:15:21:23.622 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:12 vmkernel: 0:15:21:23.622 cpu0:4331)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:12 vmkernel: 0:15:21:23.820 cpu0:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:12 vmkernel: 0:15:21:23.943 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:12 vmkernel: 0:15:21:23.944 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:24:21 vmkernel: 0:15:21:32.199 cpu14:6105)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:21 vmkernel: 0:15:21:32.199 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:21 vmkernel: 0:15:21:32.342 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu14:6105)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu6:4324)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu6:4324)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu14:6105)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu2:4331)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:24:21 vmkernel: 0:15:21:32.466 cpu2:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:23 vmkernel: 0:15:21:34.323 cpu14:6090)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:23 vmkernel: 0:15:21:34.323 cpu8:4332)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:23 vmkernel: 0:15:21:34.467 cpu8:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:23 vmkernel: 0:15:21:34.590 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:31 vmkernel: 0:15:21:42.822 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:31 vmkernel: 0:15:21:42.822 cpu8:4328)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:31 vmkernel: 0:15:21:42.971 cpu8:4328)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:32 vmkernel: 0:15:21:43.095 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:32 vmkernel: 0:15:21:43.095 cpu8:4332)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:24:32 vmkernel: 0:15:21:43.095 cpu8:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:32 vmkernel: 0:15:21:43.095 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:32 vmkernel: 0:15:21:43.095 cpu2:4327)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:24:32 vmkernel: 0:15:21:43.096 cpu2:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:33 vmkernel: 0:15:21:44.953 cpu14:6103)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:33 vmkernel: 0:15:21:44.953 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:34 vmkernel: 0:15:21:45.078 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:34 vmkernel: 0:15:21:45.202 cpu14:6103)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:42 vmkernel: 0:15:21:53.433 cpu14:5009)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:42 vmkernel: 0:15:21:53.433 cpu2:4337)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:42 vmkernel: 0:15:21:53.583 cpu2:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:42 vmkernel: 0:15:21:53.706 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:42 vmkernel: 0:15:21:53.706 cpu0:4331)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:42 vmkernel: 0:15:21:53.707 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:44 vmkernel: 0:15:21:55.558 cpu14:6105)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:44 vmkernel: 0:15:21:55.558 cpu2:4329)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:44 vmkernel: 0:15:21:55.695 cpu2:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:44 vmkernel: 0:15:21:55.819 cpu14:6105)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:53 vmkernel: 0:15:22:04.128 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:53 vmkernel: 0:15:22:04.128 cpu0:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:53 vmkernel: 0:15:22:04.284 cpu0:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:53 vmkernel: 0:15:22:04.407 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:53 vmkernel: 0:15:22:04.407 cpu2:4329)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:24:53 vmkernel: 0:15:22:04.408 cpu6:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:53 vmkernel: 0:15:22:04.408 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:53 vmkernel: 0:15:22:04.408 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:24:53 vmkernel: 0:15:22:04.408 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:55 vmkernel: 0:15:22:06.265 cpu14:5009)<6>megasas_service_aen[6]: aen received

    Feb 23 07:24:55 vmkernel: 0:15:22:06.265 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:24:55 vmkernel: 0:15:22:06.414 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:24:55 vmkernel: 0:15:22:06.538 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:01 vmkernel: 0:15:22:12.645 cpu14:5039)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:01 vmkernel: 0:15:22:12.645 cpu8:4333)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:01 vmkernel: 0:15:22:12.794 cpu8:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu14:5192)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu2:4327)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu2:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu14:5192)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:01 vmkernel: 0:15:22:12.918 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:03 vmkernel: 0:15:22:14.769 cpu14:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:03 vmkernel: 0:15:22:14.770 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:03 vmkernel: 0:15:22:14.907 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:04 vmkernel: 0:15:22:15.031 cpu14:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:12 vmkernel: 0:15:22:23.268 cpu14:5138)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:12 vmkernel: 0:15:22:23.268 cpu3:4331)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:12 vmkernel: 0:15:22:23.411 cpu3:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:12 vmkernel: 0:15:22:23.535 cpu14:4984)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:12 vmkernel: 0:15:22:23.535 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:12 vmkernel: 0:15:22:23.536 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:12 vmkernel: 0:15:22:23.536 cpu14:4984)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:12 vmkernel: 0:15:22:23.536 cpu4:4325)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:12 vmkernel: 0:15:22:23.536 cpu4:4325)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:14 vmkernel: 0:15:22:25.387 cpu14:5040)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:14 vmkernel: 0:15:22:25.387 cpu3:4329)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:14 vmkernel: 0:15:22:25.536 cpu3:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:14 vmkernel: 0:15:22:25.660 cpu14:5040)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:22 vmkernel: 0:15:22:33.873 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:22 vmkernel: 0:15:22:33.873 cpu1:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:23 vmkernel: 0:15:22:34.023 cpu1:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:23 vmkernel: 0:15:22:34.147 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:23 vmkernel: 0:15:22:34.147 cpu3:4329)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:23 vmkernel: 0:15:22:34.147 cpu3:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:23 vmkernel: 0:15:22:34.147 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:23 vmkernel: 0:15:22:34.147 cpu0:4336)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:23 vmkernel: 0:15:22:34.148 cpu0:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:24 vmkernel: 0:15:22:35.998 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:24 vmkernel: 0:15:22:35.998 cpu8:4328)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:25 vmkernel: 0:15:22:36.154 cpu8:4328)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:25 vmkernel: 0:15:22:36.278 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:31 vmkernel: 0:15:22:42.402 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:31 vmkernel: 0:15:22:42.402 cpu1:4326)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:31 vmkernel: 0:15:22:42.545 cpu1:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:31 vmkernel: 0:15:22:42.669 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:31 vmkernel: 0:15:22:42.669 cpu8:4333)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:31 vmkernel: 0:15:22:42.669 cpu8:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:31 vmkernel: 0:15:22:42.670 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:31 vmkernel: 0:15:22:42.670 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:31 vmkernel: 0:15:22:42.670 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:33 vmkernel: 0:15:22:44.520 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:33 vmkernel: 0:15:22:44.520 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:33 vmkernel: 0:15:22:44.670 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:33 vmkernel: 0:15:22:44.794 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:42 vmkernel: 0:15:22:53.037 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:42 vmkernel: 0:15:22:53.037 cpu0:4331)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:42 vmkernel: 0:15:22:53.180 cpu0:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:42 vmkernel: 0:15:22:53.304 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:42 vmkernel: 0:15:22:53.304 cpu2:4335)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:42 vmkernel: 0:15:22:53.305 cpu14:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4330)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:44 vmkernel: 0:15:22:55.156 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:44 vmkernel: 0:15:22:55.156 cpu2:4336)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:44 vmkernel: 0:15:22:55.305 cpu2:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:44 vmkernel: 0:15:22:55.429 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:52 vmkernel: 0:15:23:03.660 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:52 vmkernel: 0:15:23:03.660 cpu1:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:52 vmkernel: 0:15:23:03.804 cpu1:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu2:4336)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu2:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu14:4110)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu8:4339)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:25:52 vmkernel: 0:15:23:03.928 cpu8:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:54 vmkernel: 0:15:23:05.779 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:25:54 vmkernel: 0:15:23:05.779 cpu8:4332)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:25:54 vmkernel: 0:15:23:05.928 cpu8:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:25:55 vmkernel: 0:15:23:06.052 cpu14:188206)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:01 vmkernel: 0:15:23:12.117 cpu14:6090)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:01 vmkernel: 0:15:23:12.117 cpu8:4333)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:01 vmkernel: 0:15:23:12.254 cpu8:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu14:6090)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu14:6090)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu1:4327)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:01 vmkernel: 0:15:23:12.379 cpu1:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:03 vmkernel: 0:15:23:14.229 cpu14:5009)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:03 vmkernel: 0:15:23:14.230 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:03 vmkernel: 0:15:23:14.409 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:03 vmkernel: 0:15:23:14.533 cpu14:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:11 vmkernel: 0:15:23:22.776 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:11 vmkernel: 0:15:23:22.776 cpu2:4325)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:11 vmkernel: 0:15:23:22.931 cpu2:4325)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:13 vmkernel: 0:15:23:24.913 cpu5:185904)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:13 vmkernel: 0:15:23:24.913 cpu8:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:14 vmkernel: 0:15:23:25.062 cpu8:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:14 vmkernel: 0:15:23:25.186 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:22 vmkernel: 0:15:23:33.423 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:22 vmkernel: 0:15:23:33.423 cpu6:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:22 vmkernel: 0:15:23:33.560 cpu6:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu5:6088)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4337)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu5:6088)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4336)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:24 vmkernel: 0:15:23:35.542 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:24 vmkernel: 0:15:23:35.542 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:24 vmkernel: 0:15:23:35.685 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:24 vmkernel: 0:15:23:35.809 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:30 vmkernel: 0:15:23:41.922 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:30 vmkernel: 0:15:23:41.922 cpu6:4329)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:31 vmkernel: 0:15:23:42.059 cpu6:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:31 vmkernel: 0:15:23:42.183 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:31 vmkernel: 0:15:23:42.183 cpu0:4327)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:31 vmkernel: 0:15:23:42.183 cpu0:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:31 vmkernel: 0:15:23:42.184 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:31 vmkernel: 0:15:23:42.184 cpu0:4336)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:31 vmkernel: 0:15:23:42.184 cpu4:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:33 vmkernel: 0:15:23:44.040 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:33 vmkernel: 0:15:23:44.040 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:33 vmkernel: 0:15:23:44.190 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:33 vmkernel: 0:15:23:44.314 cpu5:6087)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:41 vmkernel: 0:15:23:52.569 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:41 vmkernel: 0:15:23:52.569 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:41 vmkernel: 0:15:23:52.718 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:41 vmkernel: 0:15:23:52.843 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:41 vmkernel: 0:15:23:52.843 cpu0:4325)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:41 vmkernel: 0:15:23:52.843 cpu0:4325)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:41 vmkernel: 0:15:23:52.843 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:41 vmkernel: 0:15:23:52.843 cpu2:4335)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:41 vmkernel: 0:15:23:52.844 cpu2:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:43 vmkernel: 0:15:23:54.699 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:43 vmkernel: 0:15:23:54.699 cpu10:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:43 vmkernel: 0:15:23:54.903 cpu10:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:44 vmkernel: 0:15:23:55.027 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:52 vmkernel: 0:15:24:03.258 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:52 vmkernel: 0:15:24:03.258 cpu0:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:52 vmkernel: 0:15:24:03.395 cpu0:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu14:4339)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu14:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu0:4337)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:26:52 vmkernel: 0:15:24:03.520 cpu2:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:54 vmkernel: 0:15:24:05.377 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:26:54 vmkernel: 0:15:24:05.377 cpu10:4332)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:26:54 vmkernel: 0:15:24:05.514 cpu10:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:26:54 vmkernel: 0:15:24:05.638 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27 vmkernel: 0:15:24:11.733 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27 vmkernel: 0:15:24:11.733 cpu0:4329)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27 vmkernel: 0:15:24:11.888 cpu0:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu5:4367)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4327)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu5:4367)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4331)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:02 vmkernel: 0:15:24:13.869 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:02 vmkernel: 0:15:24:13.869 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:03 vmkernel: 0:15:24:14.013 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:03 vmkernel: 0:15:24:14.137 cpu5:4367)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:11 vmkernel: 0:15:24:22.392 cpu5:186428)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:11 vmkernel: 0:15:24:22.392 cpu8:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:11 vmkernel: 0:15:24:22.541 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu5:186428)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu5:186428)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:13 vmkernel: 0:15:24:24.516 cpu5:186428)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:13 vmkernel: 0:15:24:24.516 cpu10:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:13 vmkernel: 0:15:24:24.660 cpu10:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:13 vmkernel: 0:15:24:24.784 cpu5:186428)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:22 vmkernel: 0:15:24:33.015 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:22 vmkernel: 0:15:24:33.015 cpu0:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:22 vmkernel: 0:15:24:33.158 cpu0:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu10:4339)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu10:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu5:4101)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu2:4324)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:22 vmkernel: 0:15:24:33.283 cpu2:4324)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:24 vmkernel: 0:15:24:35.134 cpu5:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:24 vmkernel: 0:15:24:35.134 cpu1:4336)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:24 vmkernel: 0:15:24:35.295 cpu1:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:24 vmkernel: 0:15:24:35.419 cpu5:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:30 vmkernel: 0:15:24:41.507 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:30 vmkernel: 0:15:24:41.507 cpu0:4329)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:30 vmkernel: 0:15:24:41.675 cpu0:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:30 vmkernel: 0:15:24:41.799 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:30 vmkernel: 0:15:24:41.799 cpu6:4331)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:30 vmkernel: 0:15:24:41.799 cpu6:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:30 vmkernel: 0:15:24:41.800 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:30 vmkernel: 0:15:24:41.800 cpu0:4337)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:30 vmkernel: 0:15:24:41.800 cpu0:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:32 vmkernel: 0:15:24:43.650 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:32 vmkernel: 0:15:24:43.650 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:32 vmkernel: 0:15:24:43.793 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:32 vmkernel: 0:15:24:43.918 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:41 vmkernel: 0:15:24:52.137 cpu11:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:41 vmkernel: 0:15:24:52.137 cpu6:4335)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:41 vmkernel: 0:15:24:52.280 cpu6:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu11:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu6:4326)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu6:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu11:11155)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu14:4333)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:41 vmkernel: 0:15:24:52.405 cpu14:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:42 pass: Task created: haTask-304-vim.VirtualMachine.powerOn-306

    Feb 23 07:27:42 pass: 2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Power on request received

    Feb 23 07:27:51 vmkernel: 0:15:25:02.742 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:51 vmkernel: 0:15:25:02.742 cpu0:4337)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:51 vmkernel: 0:15:25:02.891 cpu0:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:52 vmkernel: 0:15:25:03.016 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:52 vmkernel: 0:15:25:03.016 cpu6:4335)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:27:52 vmkernel: 0:15:25:03.016 cpu6:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:52 vmkernel: 0:15:25:03.017 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:52 vmkernel: 0:15:25:03.017 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:27:52 vmkernel: 0:15:25:03.017 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:53 vmkernel: 0:15:25:04.873 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:27:53 vmkernel: 0:15:25:04.873 cpu0:4324)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:27:54 vmkernel: 0:15:25:05.022 cpu0:4324)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:27:54 vmkernel: 0:15:25:05.147 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28 vmkernel: 0:15:25:11.246 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28 vmkernel: 0:15:25:11.246 cpu0:4337)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28 vmkernel: 0:15:25:11.390 cpu0:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu2:4330)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu2:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu6:4335)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu6:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:02 vmkernel: 0:15:25:13.365 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:02 vmkernel: 0:15:25:13.365 cpu8:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:02 vmkernel: 0:15:25:13.556 cpu8:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:02 vmkernel: 0:15:25:13.681 cpu11:4107)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    Feb 23 07:28:09 sfcb[6070]: storelib physical device id: 0xD

    23 Feb 07:28:10 vmkernel: 0:15:25:21.924 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:10 vmkernel: 0:15:25:21.924 cpu0:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:28:11 vmkernel: 0:15:25:22.067 cpu0:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu8:4339) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu14:4328) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:28:11 vmkernel: 0:15:25:22.192 cpu14:4328) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:13 vmkernel: 0:15:25:24.042 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:13 vmkernel: 0:15:25:24.042 cpu0:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:28:13 vmkernel: 0:15:25:24.186 cpu0:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:13 vmkernel: 0:15:25:24.310 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    Feb 23 07:28:20 Hostd: HostCtl Exception while collecting stats: SysinfoException: Node (VSI_NODE_sched_cpuClients_numVcpus); Status = Failure (Bad0001); Message = Instance(1): Input(0) 0

    Feb 23 07:28:21 vmkernel: 0:15:25:32.529 cpu11:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.529 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:21 vmkernel: 0:15:25:32.672 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu11:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu11:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu14:4333)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu14:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:23 vmkernel: 0:15:25:34.653 cpu11:6089)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:23 vmkernel: 0:15:25:34.653 cpu2:4325)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:23 vmkernel: 0:15:25:34.815 cpu2:4325)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.045 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.045 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:30 vmkernel: 0:15:25:41.195 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.319 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.319 cpu0:4330)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu0:4330)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu6:4326)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu6:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:32 vmkernel: 0:15:25:43.176 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:32 vmkernel: 0:15:25:43.176 cpu9:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:32 vmkernel: 0:15:25:43.325 cpu9:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:32 vmkernel: 0:15:25:43.450 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu8:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu2:4337)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu2:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.710 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.711 cpu6:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:40 vmkernel: 0:15:25:51.848 cpu6:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu9:4339)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu9:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu6:4325)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu6:4325)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:42 vmkernel: 0:15:25:53.829 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:42 vmkernel: 0:15:25:53.829 cpu6:4336)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:43 vmkernel: 0:15:25:54.044 cpu6:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:43 vmkernel: 0:15:25:54.169 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.400 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.400 cpu0:4326)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:28:51 vmkernel: 0:15:26:02.531 cpu0:4326)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu9:4338)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu9:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu6:4336)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu6:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:28:52 vmkernel: 0:15:26:03.467 cpu4:68711) megasas: ABORT sn 352358 cmd=0x28 retries=0 tmo=0

    Feb 23 07:28:52 vmkernel: 0:15:26:03.467 cpu4:68711)<5>0: megasas: RESET sn 352358 cmd=28 retries=0

    Feb 23 07:28:53 vmkernel: 0:15:26:04.512 cpu5:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:28:54 vmkernel: 0:15:26:05.491 cpu0:68711)<5>megasas: reset successful

    Feb 23 07:29:02 vmkernel: 0:15:26:12.999 cpu7:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:02 vmkernel: 0:15:26:12.999 cpu14:4339)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:02 vmkernel: 0:15:26:13.136 cpu14:4339)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu7:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu12:4334)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu12:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu7:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu0:4335)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu0:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:04 vmkernel: 0:15:26:15.117 cpu7:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:04 vmkernel: 0:15:26:15.117 cpu0:4327)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:04 vmkernel: 0:15:26:15.261 cpu0:4327)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:04 vmkernel: 0:15:26:15.386 cpu7:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.616 cpu7:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.616 cpu0:4336)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:12 vmkernel: 0:15:26:23.754 cpu0:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu7:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu0:4329)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu0:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu7:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu8:4332)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu8:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.995 cpu0:190779) megasas: ABORT sn 352531 cmd=0x2a retries=0 tmo=0

    Feb 23 07:29:12 vmkernel: 0:15:26:23.995 cpu0:190779)<5>0: megasas: RESET sn 352531 cmd=2a retries=0

    Feb 23 07:29:14 vmkernel: 0:15:26:25.735 cpu7:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.293 cpu4:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.293 cpu12:4334)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:23 vmkernel: 0:15:26:34.442 cpu12:4334)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:23 vmkernel: 0:15:26:34.567 cpu4:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.567 cpu1:4337)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu1:4337)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu4:4363)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu8:4333)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu8:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:25 vmkernel: 0:15:26:36.424 cpu4:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:25 vmkernel: 0:15:26:36.424 cpu2:4331)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:25 vmkernel: 0:15:26:36.574 cpu2:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:25 vmkernel: 0:15:26:36.699 cpu4:6389)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.555 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.562 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.568 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:34 vmkernel: 0:15:26:45.030 cpu4:5039)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:34 vmkernel: 0:15:26:45.030 cpu12:4332)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:34 vmkernel: 0:15:26:45.161 cpu12:4332)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:34 Hostd: Task Created : haTask-80-vim.VirtualMachine.powerOn-315

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Power on request received

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Reconfiguring ethernet backing if necessary

    Feb 23 07:29:34 Hostd: Event 257 : petas01 on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.173 1680BB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' State transition (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    Feb 23 07:29:34 vmkernel: 0:15:26:45.286 cpu4:5039)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.635 cpu4:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.635 cpu8:4333)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:44 vmkernel: 0:15:26:55.785 cpu8:4333)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu4:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu8:4338)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu8:4338)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu4:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu6:4329)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu6:4329)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:46 vmkernel: 0:15:26:57.766 cpu4:6104)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:46 vmkernel: 0:15:26:57.766 cpu0:4324)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:46 vmkernel: 0:15:26:57.915 cpu0:4324)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:47 vmkernel: 0:15:26:58.040 cpu4:4363)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.276 cpu1:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.276 cpu6:4331)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:55 vmkernel: 0:15:27:06.414 cpu6:4331)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu1:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu2:4324)<6>megasas_hotplug_work[6]: event code 0x006e

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu2:4324)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu1:9738)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu4:4335)<6>megasas_hotplug_work[6]: event code 0x005d

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu4:4335)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:57 vmkernel: 0:15:27:08.395 cpu1:190710)<6>megasas_service_aen[6]: aen received

    Feb 23 07:29:57 vmkernel: 0:15:27:08.395 cpu3:4336)<6>megasas_hotplug_work[6]: event code 0x0071

    Feb 23 07:29:57 vmkernel: 0:15:27:08.538 cpu3:4336)<6>megasas_hotplug_work[6]: aen registered

    Feb 23 07:29:57 vmkernel: 0:15:27:08.664 cpu1:190710)<6>megasas_service_aen[6]: aen received
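    The vmkernel excerpt above is dominated by repeating AEN notifications, but the lines that matter are the occasional command ABORT/RESET pairs from the megasas controller. A quick way to gauge how often the controller is actually resetting is to count those lines; a minimal sketch, assuming you pipe the log in (on ESX 4 that would typically be /var/log/vmkernel, which is an assumption about your setup):

    ```shell
    # Count megasas ABORT/RESET lines from a vmkernel log fed on stdin,
    # e.g.:  cat /var/log/vmkernel | grep -cE 'megasas: (ABORT|RESET)'
    grep -cE 'megasas: (ABORT|RESET)'
    ```

    A steadily growing count points at the RAID controller (or a dying disk behind it) rather than at the management agents.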

    And here is my hostd.log:

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : open successful (21) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : closed.

    Foundry callback registered on 4

    Foundry callback registered on 17

    Foundry callback registered on 5

    Foundry callback registered on 11

    Foundry callback registered on 12

    Foundry callback registered on 13

    Foundry callback registered on 14

    Foundry callback registered on 15

    Foundry callback registered on 26

    Foundry callback registered on 16

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Fault Tolerance state callback received

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Record/Replay state callback received

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 5, 2

    2010-02-23 06:52:24.922 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' State transition (VM_STATE_INITIALIZING -> VM_STATE_OFF)

    Cannot find mirrored content in extended config xml.

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : open successful (21) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : closed.

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : open successful (23) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : "/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk" : closed.

    2010-02-23 06:52:35.532 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Initial tools version: 3:guestToolsNotInstalled

    2010-02-23 06:52:35.532 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Predicted VM overhead: 118587392 bytes

    2010-02-23 06:52:35.532 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Initialized virtual machine.

    ModeMgr::End: op = normal, current = normal, count = 2

    Task Completed : haTask-ha-folder-vm-vim.Folder.createVm-243 Status success

    Event 244 : Created virtual machine on host RTHUS7002.rintra.ruag.com in ha-datacenter

    2010-02-23 06:52:35.533 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Create worker thread completed successfully

    2010-02-23 06:52:35.539 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsNotInstalled

    Task Created : haTask-320-vim.VirtualMachine.powerOn-246

    2010-02-23 06:52:53.868 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Power on request received

    2010-02-23 06:52:53.868 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Reconfiguring ethernet backing if necessary

    Event 245 : app-back-05-neu on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 06:52:53.868 5B168B90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' State transition (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = normal, current = normal, count = 1

    Load: Loading existing file: /etc/vmware/license.cfg

    2010-02-23 06:52:53.899 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' PowerOn request queued

    2010-02-23 06:52:53.913 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    /vm/#1dea389a74a8b231/: VMHSVMCbPower: state of VM powerOn with option soft

    2010-02-23 06:52:53.913 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' VMHS: Exec()'ing /bin/vmx

    VMHS: VMKernel_ForkExec(/bin/vmx, flags = 1): rc = 0 pid = 190710

    2010-02-23 06:52:53.965 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Connection established.

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    2010-02-23 06:53:20.200 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Mounting VM paths on connection: /db/connection/#14bc.

    2010-02-23 06:53:20.248 5B1EAB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    2010-02-23 06:53:20.253 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Completed Mount VM for vm.

    2010-02-23 06:53:20.254 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Mount VM Complete: OK

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 06:54:45.556 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' MKS ready for connections: true

    2010-02-23 06:54:45.556 1688CB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Question info: Insufficient video RAM. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video memory allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.

    Id: 0 : Type: 2, Default: 0, Number of options: 1

    2010-02-23 06:54:45.556 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:54:45.556 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    2010-02-23 06:54:45.556 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    VixVM_AnswerMessage returned 0

    2010-02-23 06:54:45.562 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:54:45.562 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    Event 246 : Message on app-back-05-neu on RTHUS7002.rintra.ruag.com in ha-Datacenter: Insufficient video RAM. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video memory allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.

    2010-02-23 06:54:45.562 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Auto-answered Insufficient video RAM question. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video memory allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.
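    Incidentally, the insufficient-video-RAM warning above is benign (hostd auto-answers it), and the log itself spells out the permanent fix: with the VM powered off, add the value it suggests to the VM's .vmx configuration file:

    ```
    svga.vramSize = "16708800"
    ```

    The exact value comes straight from the message; it varies with the maximum resolution configured for that VM.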

    2010-02-23 06:55:26.096 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Tools version status: noTools

    2010-02-23 06:55:26.096 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' VMX state has been set.

    2010-02-23 06:55:26.097 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:55:26.097 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Imgcust event read vmdb tree values: state = 0, errorCode = 0, errorMsgSize = 0

    2010-02-23 06:55:26.097 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:55:26.097 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Tools running status changed to: keep

    2010-02-23 06:55:41.030 1698FB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Connected to testAutomation-fd, remote end pid: 190710

    2010-02-23 06:55:41.032 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 4, 8

    2010-02-23 06:55:41.034 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    Event 247 : app-back-05-neu on RTHUS7002.rintra.ruag.com in ha-datacenter is powered on

    2010-02-23 06:55:41.036 5B0E6B90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' State transition (VM_STATE_POWERING_ON -> VM_STATE_ON)

    State change received for VM '320'

    2010-02-23 06:55:41.065 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 88780800 bytes

    2010-02-23 06:55:41.139 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Time to gather config: 73 (ms)

    Task Completed : haTask-320-vim.VirtualMachine.powerOn-246 Status success

    Added vm 320 to poweredOnVms list

    2010-02-23 06:55:43.211 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Time to gather config: 67 (ms)

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using English.

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using C.

    Event 248 : User [email protected]

    2010-02-23 06:55:47.873 5B0E6B90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Ticket issued for mks connections to user: rzhmi_ope

    2010-02-23 06:55:55.843 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' New number of MKS connections: 1

    2010-02-23 06:56:00.279 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 89305088 bytes

    RefreshVms updating overhead for 1 VMs

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 06:56:40.293 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 88829952 bytes

    RefreshVms updating overhead for 1 VMs

    Root pool capacity changed from 18302 MHz / 43910MB to 18302 MHz / 43909MB

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-224" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-224" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-240" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-240" failed.

    Ticket issued for CIMOM version 1.0, user root

    FetchSwitches: added 0 items

    FetchDVPortgroups: added 0 items

    Ticket issued for CIMOM version 1.0, user root

    Root pool capacity changed from 18302 MHz / 43909MB to 18302 MHz / 43908MB

    2010-02-23 07:01:46.927 168CDB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:47.586 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petaw02/petaw02.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:47.593 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petaw01/petaw01.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.412 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.418 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.425 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown, unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:50.707 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' New number of MKS connections: 0

    Task Created : haTask-304-vim.VirtualMachine.powerOn-268

    2010-02-23 07:01:52.340 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Power on request received

    2010-02-23 07:01:52.340 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Reconfiguring ethernet backing if necessary

    Event 249 : PETA-Router on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 07:01:52.341 info 1680BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" The State of Transition (VM_STATE_OFF-> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = current, normal = normal, count = 2

    Load: Existing file loading: /etc/vmware/license.cfg

    2010-02-23 07:01:59.118 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' PowerOn request queue

    2010-02-23 07:02:00.513 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown, unexpected toolsVersionStatus use guestToolsCurrent of the offline state

    Airfare to CIMOM version 1.0, root user

    2010-02-23 07:02:07.643 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Current state recovered VM Foundry 7, 6

    /VM/ #c27a9ee2a691ac1a /: VMHSVMCbPower: status of the VM powerOn with option soft

    CloseSession called for the session id = 5206e96f - 88-2e3f-9187-60932847859f 0c

    Rising with name = "haTask-304 - vim.VirtualMachine.powerOn - 129 ' failed.

    GetPropertyProvider failed for haTask-304 - vim.VirtualMachine.powerOn - 129

    Rising with name = "haTask-256 - vim.VirtualMachine.reset - 124 ' failed.

    GetPropertyProvider failed for haTask-256 - vim.VirtualMachine.reset - 124

    Event 250: Rzhmi_ope disconnected user

    2010-02-23 07:02:24.668 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" VMHS: Exec () ING/bin/vmx

    VMHS: VMKernel_ForkExec (/ bin/vmx, flag = 1): rc = 0 pid = 192683

    Rising with name = "haTask-ha-folder-vm-vim.Folder.createVm-243" failed.

    Rising with name = "haTask-ha-folder-vm-vim.Folder.createVm-243" failed.

    2010-02-23 07:02:58.683 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" An established connection.

    PersistAllDvsInfo called

    Ability to root pool changed 18302 MHz / 43908MB at 18302 MHz / 43907MB

    Ticket issued for CIMOM version 1.0, user root

    Ability to root pool changed 18302 MHz / 43907MB in 18302 MHz / 43909MB

    Ticket issued for CIMOM version 1.0, user root

    Rising with name = "haTask-320 - vim.VirtualMachine.powerOn - 246" failed.

    Rising with name = "haTask-320 - vim.VirtualMachine.powerOn - 246" failed.

    FoundryVMDBPowerOpCallback: VMDB reports power op failed for VM /vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx with error msg = "The operation timed out" and error code -41.

    2010-02-23 07:06:02.208 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Current state recovered VM Foundry 5, 2

    Failed to do Power Op: Error: (3006) The virtual machine needs to be powered on

    2010-02-23 07:06:02.209 5B22BB90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" Failed operation

    Event 251: Cannot power on PETA-router on RTHUS7002.rintra.ruag.com ha-data center. A general error occurred:

    2010-02-23 07:06:02.209 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" The State of Transition (VM_STATE_POWERING_ON-> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 3

    Task completed: error status haTask-304 - vim.VirtualMachine.powerOn - 268

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    Ability to root pool changed 18302 MHz / 43909MB at 18302 MHz / 43906MB

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    Ability to root pool changed 18302 MHz / 43906MB at 18302 MHz / 43908MB

    Activation N5Vmomi10ActivationE:0x5b68a8b0 : invoke made on vmodl.query.PropertyCollector:ha - property-collector

    ARG version:

    "93"

    Launch vmodl.fault.RequestCanceled

    Result:

    {(vmodl.fault.RequestCanceled)

    dynamicType = < unset >

    faultCause = (vmodl. NULL in MethodFault),

    MSG = ""

    }

    PendingRequest: HTTP Transaction failed, closes the connection: N7Vmacore15SystemExceptionE (connection reset by peer)

    Ability to root pool changed 18302 MHz / 43908MB at 18302 MHz / 43909MB

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using English.

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using C.

    Event 252: [email protected] user

    2010-02-23 07:13:13.852 info 5B1EAB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections for user: rzhmi_ope

    PersistAllDvsInfo called

    2010-02-23 07:13:19.794 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Number of connections new MKS: 1

    2010-02-23 07:14:18.185 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Number of connections new MKS: 0

    Activation N5Vmomi10ActivationE:0x5c0f4dc0 : invoke made on vmodl.query.PropertyCollector:ha - property-collector

    ARG version:

    "3"

    CloseSession called for the session id = 528c7c0f-5bce-6ea5-CE22-e4927fab2c89

    Launch vmodl.fault.RequestCanceled

    Result:

    {(vmodl.fault.RequestCanceled)

    dynamicType = < unset >

    faultCause = (vmodl. NULL in MethodFault),

    MSG = ""

    }

    Event 253: Rzhmi_ope disconnected user

    Error reading the client while you wait header: N7Vmacore15SystemExceptionE (connection reset by peer)

    Error reading the client while you wait header: N7Vmacore15SystemExceptionE (connection reset by peer)

    Error reading the client while you wait header: N7Vmacore15SystemExceptionE (connection reset by peer)

    2010-02-23 07:14:20.843 info 1680BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx" Ticket issued for mks connections for user: rzhmi_ope

    2010-02-23 07:14:26.122 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Number of connections new MKS: 1

    Ticket issued for CIMOM version 1.0, user root

    Rising with name = "haTask-304 - vim.VirtualMachine.powerOn - 268 ' failed.

    Rising with name = "haTask-304 - vim.VirtualMachine.powerOn - 268 ' failed.

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    CloseSession called for the session id = 520edca0-62aa - 19 c 8-8c9d-bd710bd6c882

    Event in 254: Rzhmi_ope disconnected user

    2010-02-23 07:19:01.520 5B0A5B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Number of connections new MKS: 0

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using English.

    Requested locale '' and/or derived locale 'en_US.utf-8' is not supported by the operating system. Using C.

    Event 255: [email protected] user

    2010-02-23 07:19:05.616 info 5B0E6B90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections for user: rzhmi_ope

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 07:19:11.260 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Number of connections new MKS: 1

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    Ticket issued for CIMOM version 1.0, user root

    Activation N5Vmomi10ActivationE:0x5b3c7ea8 : invoke made on vmodl.query.PropertyCollector:ha - property-collector

    ARG version:

    "111".

    Launch vmodl.fault.RequestCanceled

    Result:

    {(vmodl.fault.RequestCanceled)

    dynamicType = < unset >

    faultCause = (vmodl. NULL in MethodFault),

    MSG = ""

    }

    PendingRequest: HTTP Transaction failed, closes the connection: N7Vmacore15SystemExceptionE (connection reset by peer)

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    The task was created: haTask-304 - vim.VirtualMachine.powerOn - 306

    2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Power on request received

    2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Reconfigure the ethernet support if necessary

    Event 256: PETA-route on the RTHUS7002.rintra.ruag.com ha-Data Center host starts

    2010-02-23 07:27:42.210 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" The State of Transition (VM_STATE_OFF-> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = current, normal = normal, count = 2

    Load: Existing file loading: /etc/vmware/license.cfg

    2010-02-23 07:27:49.686 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' PowerOn request queue

    2010-02-23 07:27:58.217 1690EB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Current state recovered VM Foundry 7, 6

    /VM/ #c27a9ee2a691ac1a /: VMHSVMCbPower: status of the VM powerOn with option soft

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    2010-02-23 07:28:15.230 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" VMHS: Exec () ING/bin/vmx

    VMHS: VMKernel_ForkExec (/ bin/vmx, flag = 1): rc = 0 pid = 191945

    HostCtl Exception while collecting statistics: SysinfoException: node (VSI_NODE_sched_cpuClients_numVcpus); (Bad0001) status = failure; Message = Instance (1): 0 Input (0)

    2010-02-23 07:28:34.614 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' The Tools version state: ok

    2010-02-23 07:28:34.616 168CDB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' MKS ready for connections: false

    2010-02-23 07:28:34.660 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Number of connections new MKS: 0

    2010-02-23 07:28:34.782 info 5AF25DC0 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections for user: rzhmi_ope

    2010-02-23 07:28:49.390 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" An established connection.

    2010-02-23 07:29:04.550 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Overhead VM predicted: 143904768 bytes

    RefreshVms overhead to 1 VM update

    2010-02-23 07:29:29.573 verbose 1690EB90 ' vm: / vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_Radius + Jg - CA/Test_Radius + Jg - CA.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    2010-02-23 07:29:32.555 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown, unexpected toolsVersionStatus use guestToolsCurrent of the offline state

    2010-02-23 07:29:32.562 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown, unexpected toolsVersionStatus use guestToolsCurrent of the offline state

    2010-02-23 07:29:32.568 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown, unexpected toolsVersionStatus use guestToolsCurrent of the offline state

    The task was created: haTask-80 - vim.VirtualMachine.powerOn - 315

    2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Power on request received

    2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Reconfigure the ethernet support if necessary

    Event 257: petas01 on the RTHUS7002.rintra.ruag.com ha-Data Center host begins

    2010-02-23 07:29:34.173 1680BB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" The State of Transition (VM_STATE_OFF-> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = current, normal = normal, count = 3

    Load: Existing file loading: /etc/vmware/license.cfg

    2010-02-23 07:29:34.327 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' PowerOn request queue

    Ticket issued for CIMOM version 1.0, user root

    Ability to root pool changed 18302 MHz / 43909MB at 18302 MHz / 43910MB

    Ticket issued for CIMOM version 1.0, user root

    Ability to root pool changed 18302 MHz / 43910MB at 18302 MHz / 43909MB

    HostCtl Exception while collecting statistics of network for vm 256: SysinfoException: node (VSI_NODE_net_openPorts_type); (Bad0001) status = failure; Message = cannot get Int

    2010-02-23 07:31:52.964 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Current state recovered VM Foundry 7, 6

    /VM/ #dd20eb79a599aa61 /: VMHSVMCbPower: status of the VM powerOn with option soft

    2010-02-23 07:31:52.984 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" VMHS: Exec () ING/bin/vmx

    VMHS: VMKernel_ForkExec (/ bin/vmx, flag = 1): rc = 0 pid = 200813

    FoundryVMDBPowerOpCallback: VMDB reports power op failed for VM /vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx with error msg = "The operation timed out" and error code -41.

    2010-02-23 07:31:52.985 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Current state recovered VM Foundry 5, 2

    Failed to do Power Op: Error: (3006) The virtual machine needs to be powered on

    2010-02-23 07:31:52.985 5B22BB90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" Failed operation

    Event 258: Cannot power on PETA-router on RTHUS7002.rintra.ruag.com ha-data center. A general error occurred:

    2010-02-23 07:31:52.986 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" The State of Transition (VM_STATE_POWERING_ON-> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 4

    Task completed: error status haTask-304 - vim.VirtualMachine.powerOn - 306

    vmdbPipe_Streams could not read: OVL_STATUS_EOF

    2010-02-23 07:31:52.988 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' No upgrade required

    2010-02-23 07:31:52.989 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Disconnect the current control.

    2010-02-23 07:31:52.989 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Disassembly of the virtual machine.

    2010-02-23 07:31:52.990 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VMDB disassemble insiders.

    2010-02-23 07:31:52.990 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VM complete disassembly.

    2010-02-23 07:31:53.000 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Mt. state values have changed.

    2010-02-23 07:31:53.000 1680BB90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Failed to get resource for a motor home on VM settings

    There is no worldId 4294967295 world

    2010-02-23 07:31:53.001 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Mt. state values have changed.

    2010-02-23 07:31:53.002 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Reloading configuration state.

    2010-02-23 07:31:53.106 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" An established connection.

    2010-02-23 07:32:24.650 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Paths on the virtual machine of mounting connection: /db/connection/#158 d.

    SOCKET 2 (13) recv detected client closed connection

    Intake automation detected ready for VM (/ vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx)

    2010-02-23 07:32:24.709 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' No upgrade required

    2010-02-23 07:32:24.720 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Completion of Mount VM for vm.

    2010-02-23 07:32:24.721 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Mount VM Complete: OK

    2010-02-23 07:32:24.777 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to pick up some config: 31779 (MS)

    2010-02-23 07:32:24.782 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Current state recovered VM Foundry 5, 2

    2010-02-23 07:32:24.782 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Current state recovered VM Foundry 5, 2

    2010-02-23 07:32:24.849 1690EB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    2010-02-23 07:32:24.850 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Receipt Foundry power state update 5

    2010-02-23 07:32:24.850 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" The State of Transition (VM_STATE_ON-> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 3

    Change of State received for VM ' 256'

    2010-02-23 07:32:24.850 5B22BB90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Could not find activation record, user unknown event.

    Event 259: Std-Server on RTHUS7002.rintra.ruag.com ha-data center is turned off

    2010-02-23 07:32:24.851 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Lack of recall tolerance status received

    2010-02-23 07:32:24.851 5B0E6B90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Received a double transition of Foundry: 2, 0

    2010-02-23 07:32:24.919 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to pick up some config: 66 (MS)

    Remove the vm 256 poweredOnVms list

    2010-02-23 07:32:24.990 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to pick up some config: 65 (MS)

    2010-02-23 07:32:25.061 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to pick up some config: 65 (MS)

    2010-02-23 07:32:25.065 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' The event of Imgcust read vmdb tree values State = 0, errorCode = 0, errorMsgSize = 0

    2010-02-23 07:32:25.066 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Tools running status changed to: keep

    2010-02-23 07:32:25.066 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' State of the Tools version: unknown

    2010-02-23 07:32:25.066 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' No upgrade required

    2010-02-23 07:32:25.081 16A11B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    2010-02-23 07:32:25.088 1698FB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    2010-02-23 07:32:25.090 16A11B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    2010-02-23 07:32:25.090 1698FB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown, unexpected toolsVersionStatus use guestToolsNotInstalled of the offline state

    Ticket issued for CIMOM version 1.0, user root

    My server includes:

    2 x Intel Xeon X5550, 4C/8T, 2.66 GHz, 8 MB cache

    48 GB DDR3 RAM

    12 x 146 GB SAS 3G 10K hot-plug HDDs

    - 11 HDDs in RAID 5

    - 1 global hot spare

    Does anyone know more about these symptoms?

    If it is a supported server (Dell, HP, IBM, etc.), the health status of the host will be visible from the vSphere Client (the Hardware Status tab if managed by vCenter, or the Configuration tab > Health Status if not).

    Otherwise, you will need to get a diagnostic utility from the manufacturer and use it.

    Please award points for any helpful answer.

  • Location of the API call for the information displayed in the Hardware Status tab summary

    I'm trying to find where the information displayed in the Hardware Status tab comes from, mainly the summary at the top. I can find most of the information through the Managed Object Browser, but what I need are the serial numbers and the tags that are listed. I need to update a machine database with this information. A screenshot is attached.

    Screen Shot 2012-06-01 at 4.49.38 PM.png

    This can be done simply by using the CIM API:

    A remote CIM client can issue the following. I use wbemcli (apt-get install wbemcli on Ubuntu):

    wbemcli ei -nl -noverify 'https://root:@:5989/root/cimv2:CIM_Chassis' Tag,SerialNumber
    Result
    =====
    :5989/root/cimv2:OMC_Chassis.CreationClassName="OMC_Chassis",Tag="23.0"
    -Tag="23.0"
    -SerialNumber="XXXXXXX"
  • Storage sensor has disappeared in vSphere Client - Hardware Status tab

    We run vSphere 5.1 on an HP DL580 G7 server with 2 SSDs (RAID 1+0).

    After replacing 2 HDDs on our ESX host and reinstalling vSphere, the storage sensor disappeared from the Hardware Status tab.

    Please help me solve this. Thank you very much. (attachment: esx.jpg)

    Visit this link... http://www.jonmunday.NET/2013/04/16/fix-HP-ESXi-5-x-management-bundles/

  • Hardware monitoring service on this host is not responding

    Hello,

    I have 3 servers running ESXi 5.1.0 build 1157734 (updated from build 799733), managed by vSphere.

    On one of my DL380 G6 servers I get the error: hardware monitoring service on this host is not responding

    Here is what I did to try to resolve the issue:

    • restarted /etc/init.d/sfcbd-watchdog
    • went back to the Hardware Status tab in vCenter Server and clicked the Update link (it may take up to 5 minutes to refresh)
    • stopped the service and restarted it
    • reset the sensors
    • refreshed the data
    • checked the firewall settings; they are all the same
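    The service-restart steps above can be done from an SSH session on the host. A sketch of the commands, assuming ESXi 5.x paths (run on the host itself, not on the vCenter machine):

```shell
# Restart the SFCB CIM broker watchdog, which feeds the Hardware Status sensors
/etc/init.d/sfcbd-watchdog restart

# Confirm the broker processes came back up
/etc/init.d/sfcbd-watchdog status

# If the tab is still empty, restart all management agents
# (this briefly disconnects the host from vCenter)
services.sh restart
```

    After the restart, go back to the Hardware Status tab and click Update; it can take a few minutes for sensors to repopulate.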

    the issue is still the same

    Does anyone have an idea how to solve this problem?

    After I reinstalled the host, the hardware status was good (I used a newer build ISO, #1065491).

    After that, I configured my host, then patched it to build #1157734.

    Then I checked whether the hardware status was OK; all the sensors were normal, but the HP ProLiant DL380 G6 entry showed Unknown, and after one minute it changed to Normal.

    So not a real problem. I then installed the Dell multipathing module; afterwards the sensors were still good. Before adding the host to the cluster, I rebooted the server again.

    Still no problem.

    Then I removed the old host from the cluster and added the new host.

    Now the Hardware Status tab is always OK.

    The problem seems to be resolved for now.
