NPIVs and VMs

Hi, I'm trying to model workloads running on multiple virtual machines on a vSphere 5 / ESXi host that uses Fibre Channel storage. I'm not 100% certain whether unique FCIDs and WWPNs are used for virtual machines. Here is my understanding; if you know the answer, I'd appreciate it if you could clarify for me.

I think I understand:

-If a virtual machine simply uses a virtual disk, then no unique FCID / WWPN is assigned to that virtual machine.

-If a virtual machine is configured to use FC NPIV (under VM Edit Settings -> Options -> Fibre Channel NPIV), then a unique FCID / WWPN is assigned to that virtual machine. The host performs the fabric login on its behalf using NPIV.

-Example 1: If a host has 40 virtual machines and all of them use only virtual disks (i.e. no FC NPIV enabled), then the host uses a single FCID (the HBA's N_Port FCID) and WWPN to send I/O down the storage infrastructure for all virtual machines.

-Example 2: If a host has 40 virtual machines and each of them has FC NPIV enabled, then 40 unique FCIDs / WWPNs will be observed coming out of the host's HBA.

Is the above correct? I would trace it myself if I had an FC analyzer, but I don't have one at this stage, so I have to ask the question here.

Thank you

-Henry

Yes, you are right! Spot on.
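The two examples can be restated as a tiny model (plain Python, purely illustrative; the counts follow from how NPIV fabric logins work, not from any VMware API):

```python
def vm_wwpns_on_fabric(num_vms: int, npiv_enabled: bool) -> int:
    """WWPNs attributable to VMs, as seen leaving the host's HBA.

    Without NPIV, every VM's I/O is funneled through the host's single
    physical N_Port login, so no per-VM WWPN ever reaches the fabric.
    With NPIV, each VM gets its own virtual N_Port login and therefore
    shows up with its own WWPN and FCID.
    """
    return num_vms if npiv_enabled else 0

# Example 1: 40 VMs with plain virtual disks -> 0 per-VM WWPNs;
# all I/O uses the HBA's own WWPN/FCID.
print(vm_wwpns_on_fabric(40, False))  # 0
# Example 2: 40 VMs with FC NPIV enabled -> 40 unique per-VM WWPNs/FCIDs.
print(vm_wwpns_on_fabric(40, True))   # 40
```

(The physical HBA port keeps its own WWPN in both cases; the model counts only the VM-attributable logins.)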

Tags: VMware

Similar Questions

  • Please suggest a tool to monitor and control host and VM CPU and memory resources

    Please suggest a tool that can monitor and control host and VM CPU and memory resources. What are the parameters to be monitored, and what threshold levels should be set for them?

    What about TOP / ESXTOP?

    Hi friend

    Here's a blog post that covers ESXTOP in detail:

    http://www.yellow-bricks.com/ESXTOP/

    There is also the vC Ops product (vCenter Operations Manager) that can help you understand many of these parameters. You can try the Foundation edition license, which is free of cost. There is a lot more intelligence if you go for the paid vC Ops editions.

    The vCenter performance graphs/stats can help you as well.
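    To make the threshold question concrete, here is a minimal sketch (Python, with commonly cited rule-of-thumb values; treat the exact numbers as assumptions to validate against your own baseline, not as official limits):

```python
# Hypothetical threshold table for a few esxtop counters.
THRESHOLDS = {
    "CPU %RDY": 10.0,       # ready time per vCPU; sustained values above ~10% suggest CPU contention
    "MEM MCTLSZ": 0.0,      # balloon size in MB; any ballooning means memory pressure
    "DISK DAVG/cmd": 25.0,  # device latency in ms; sustained high values point at the array or path
}

def breaches(sample: dict) -> list:
    """Return the counters in `sample` that exceed their threshold."""
    return sorted(k for k, v in sample.items()
                  if v > THRESHOLDS.get(k, float("inf")))

print(breaches({"CPU %RDY": 14.2, "MEM MCTLSZ": 0.0, "DISK DAVG/cmd": 5.1}))
# ['CPU %RDY']
```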

  • Hosts and VMs missing in vC Ops 5.8.0

    Hello

    Our vC Ops web GUI no longer displays any ESXi hosts, VMs, or datastores.  Both of the Linux vC Ops appliances (Analytics and UI) have been restarted, as well as our vCenter server.  The hierarchy only builds down as far as our clusters.  But I still get e-mail alerts from vC Ops about various memory alerts on the virtual machines, so it still seems to be working.

    Can someone advise what went wrong?  I also tried connecting to the web GUI as root and with different browsers.

    Thank you

    Stuart

    [attached screenshot]

    Thank you. It is possible that the host systems are flagged as being in maintenance mode in vC Ops. Please follow this KB to check your host system resources within the custom UI (follow the instructions for the custom user interface, not the Postgres DB part):

    VMware KB: ESXi/ESX host is no longer present in the VMware vCenter Operations Manager inventory

  • EVC hosts and switching VMs

    So I have a cluster, and one or several hosts with EVC, and when starting VMs that previously ran without EVC, I guess Windows will "notice" a hardware change and force a reboot? Is that all I need to fear?

    As I wrote in the other thread, the cluster should be left at the 54xx baseline rather than 56xx, since the old hosts with the 54xx CPUs will still be used. I need to be prepared to calculate the downtime and know what can happen during the migration.

    Yes, if you have a downtime window, a cold migration would work.  When the guest is powered on, the CPUID mask will be amended to reflect the EVC mode in which you operate.

    There are ways to do it warm, but it is not 100% reliable, so have a look at the article above.

    However, if you can afford a total downtime of all virtual machines, just shut them down, disconnect the ESX(i) host from the old cluster, then drag it into the new EVC cluster and reconnect it.  From there, you can power on your guests.

  • SRM 4.0, HDS9990, and VM consistency

    Hi all

    This message is an attempt to better understand the workings of SRM 4.0 with respect to VM consistency (which is our main goal).

    Our farm will be upgraded from ESX 3.5 Update 3 to vSphere, and we now need to implement a DR solution that will be entirely based on HDS9990 storage.

    What is not clear to us is whether or not SRM 4.0 is *THE* best product we can integrate into our environment in order to ensure the consistency of the virtual machines.

    From the SRM 4.0 documentation, it seems that an SRM 4.0-based DR solution should suit our needs.

    Our question is: does SRM always ensure the consistency of the virtual machines?

    Is it based on the hardware storage replication mechanisms? What is the role of the vendor's storage replication adapter?

    Would it not be better to rely on a snapshot-based solution (i.e. one built on VMware Consolidated Backup technology) instead?

    Thank you very much in advance for your help.

    Best regards

    Salvatore

    SRM is part of a disaster recovery solution, which looks like the kind of project you are working on.

    In an HDS / SRM solution, the data replication element is managed at the array level. The consistency of the data is determined by the replication schedules you apply to the volumes on the array (sync / async), and it all works at the block level.

    Many clients ask about VM consistency when looking at DR / SRM solutions, but you need to think about the scenarios in which you would use SRM/HDS. If you have a sudden disastrous event, you won't have time to make things consistent; almost all the scenarios you can consider are sudden and usually catastrophic. So if SRM took quiesce points, it would not give you a realistic view of how things would happen in real life.

    One of the highlights of SRM is that it allows you to conduct non-disruptive DR tests without affecting your production environment. For this, SRM uses the storage replication adapters (SRAs) to communicate with the array at the recovery site and to present a snapshot of the storage (containing the protected virtual machines), taken at the array level using the array's own snapshot feature. These array snapshots are presented by SRM to the ESX hosts at the recovery site, and the virtual machines are recovered and connected to the defined 'test' networks.

    If SRM hooked into something like VSS and ensured that all data was file-system consistent rather than crash consistent at the recovery site, that would give you false confidence that your environment could be recovered in the case of a sudden DR event, for the simple reason that if a plane or a meteor suddenly lands on your datacenter at 03:30 on a Sunday, how would you know? How would you have time to quiesce anything?

    For this reason, SRM does NOT, and it simply uses the most recent copy of the data on the storage array for tests (and also for failovers). This is massively useful because you can now prove to the business that, in the case of a sudden unexpected failure, you can recover the environment cold and that all your operating systems and applications can crash-recover themselves to the last known state. RDBMS systems are a good example. In a previous life, I was involved with a well-known RDBMS for a long time. A feature it had (like most of them) was the ability to roll forward through transaction logs in the case of a system failure. Even though you could always tell the business that such protection was in place via the application's transaction logs, it was very difficult to let the business test your claims, as it meant pulling the plug on the server. Now with SRM you can test this scenario VERY easily by simply running a recovery plan in test mode, bringing up the RDBMS at the recovery site, and then proving that the database recovers as its configuration says it should.

    Remember that consistency with regard to storage replication updates can be applied logically at the array level, but the data blocks during any recovery, with any array, will always be the latest copies. This means that if you have multi-tier applications, or applications that use a lot of log files, then these are usually grouped on the array to ensure consistency during the replication update. During recovery, the OS and applications crash-recover, and in fact most if not all modern OSes we know and love are more than capable of doing so.

    Your point about VCB is relevant: even with a solution like SRM on top of HDS replication, as in your example, there is always a place for VCB. Just because you have DR in place does not mean that you don't need backups. Backups give you version control and should go hand in hand with any DR solution. Most clients will replicate their backup vaults to their DR sites as well, so that not only do they have the most recent data (provided by storage replication) but they also have the backups available if they ever need to perform a version restore. Don't forget that VCB is a backup solution, NOT DR. In a DR situation, SRM will recover your environment following a recovery plan you created to make sure that everything happens in the right order, and it requires minimal operator input to do so (a mouse click). If you base your DR on backup images, you would face a long period of restoration, activation, and sequencing, and in the majority of cases you won't get anywhere close to the RTO/RPO objectives you may have set for yourself.

    Hope this helps,

    Lee Dilworth

  • Time synchronization error between ESX and VMs despite an NTP server

    Recently the ESX server's system time was wrong, and some virtual machines (Windows and Linux) took the ESX time as their own.

    The weird part is that these virtual machines were synchronized with a physical server, and the VMware Tools time synchronization option was disabled!

    Any idea why this happens?

    Thanks in advance

    ... and the VMware Tools option to synchronize with the ESX server was disabled

    I understand that. However, per the KB article, there is a difference between 'disabled' and 'completely disabled'.
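    For reference: even with periodic sync disabled, VMware Tools still performs one-off time syncs after events such as vMotion, suspend/resume, snapshot operations, and Tools startup. 'Completely disabled' means also turning these off via .vmx options. The keys below are quoted from memory of the VMware KB on disabling time synchronization; verify the exact names against the KB:

```
tools.syncTime = "0"
time.synchronize.continue = "0"
time.synchronize.restore = "0"
time.synchronize.resume.disk = "0"
time.synchronize.shrink = "0"
time.synchronize.tools.startup = "0"
```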

    André

  • Automatically start vCenter and vms

    Hello

    We have Windows Server 2008 R2 running vCenter 4.1, connected to hosts running ESXi 4.0 Update 1.  Every time we restart the server, the VMware vCenter and VMware vCenter WebServices services fail to start, even though they are set to start automatically.  Do they need to be set to delayed start?  I checked the event logs to see what is happening.  Is there a VMware log I could look at?

    In addition, we want our virtual machines, such as the domain controller, to start automatically, but I can't find a way to do it.

    Thank you

    Mike

    Thank you very much. We are using SQL Express, despite having 2 virtual machines running SQL Server 2008 :-)  I'm going to set the delayed start value and see.

    If that does not help, there are some registry entries you can set:

    http://KB.VMware.com/kb/1007669

    In addition, we have vSphere Essentials.  How can I tell whether it is configured for high availability, and where do I see these priorities?

    Essentials does not come with HA.  You can configure the start/stop options for your guests by going to the host's Configuration tab: Virtual Machine Startup/Shutdown -> Properties (upper right corner)

  • SAN replication and VMs - newbie question

    We do not currently have shared storage, but it will be installed next month. I'm still trying to grasp a basic understanding of how it works with ESXi. My problem is that much of what I have learned about VMware has been without the luxury of a SAN.

    Say I have SITE A and SITE B.

    Does storage replication replicate the datastore from SITE A to SITE B? (In general)

    And what happens if VM1's E: drive is mapped to a LUN on the same storage array? Will VM1 still have that mapping at SITE B?

    I would appreciate some help understanding this.

    Does storage replication replicate the datastore from SITE A to SITE B? (In general)

    In general, yes. Storage replication actually doesn't care about the data on a LUN; it replicates the LUN on a block basis.

    And what happens if VM1's E: drive is mapped to a LUN on the same storage array? Will VM1 still have that mapping at SITE B?

    Basically, yes, if all the LUNs are replicated/mirrored. However, it depends on your storage system whether the failover works transparently (for example with HP/LeftHand storage) or whether manual work is required for the failover.
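    The point that the array replicates blocks without interpreting them can be sketched like this (illustrative Python only, not array firmware):

```python
# A LUN modeled as a flat map of block address -> raw bytes. The array
# copies blocks by address; whether a block belongs to a VMFS datastore
# or to VM1's E: drive is invisible to it, so the mappings travel along.
site_a_lun = {
    0: b"VMFS metadata",
    1: b"VM1.vmdk data",
    2: b"blocks backing VM1's E: drive",
}

def replicate(lun: dict) -> dict:
    """Block-level replication: copy every (address, data) pair verbatim."""
    return {addr: data for addr, data in lun.items()}

site_b_lun = replicate(site_a_lun)
print(site_b_lun == site_a_lun)  # True: SITE B sees an identical LUN
```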

    André

  • How to use PowerCLI to activate the VM NPIV function and configure its settings?

    Hi all:

    I'm having a problem with PowerCLI.

    Can I use PowerCLI to configure NPIV for a VM?

    OK, I see. This is possible with the "set" operation.

    Note that the script must convert the hexadecimal WWN strings into 64-bit decimal numbers.

    $vmName = "MyVM"
    $nodeWWN = "28:2b:00:0c:29:00:00:07"
    $portWWN = "28:2b:00:0c:29:00:00:08"
    
    # Convert the colon-separated hex WWN strings to 64-bit numbers
    $nodeWWN64 = [int64]("0x" + $nodeWWN.Replace(":",""))
    $portWWN64 = [int64]("0x" + $portWWN.Replace(":",""))
    
    # Build a reconfigure spec that activates NPIV with explicit WWNs
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.NpivDesiredNodeWwns = 0
    $spec.NpivDesiredPortWwns = 0
    $spec.NpivTemporaryDisabled = $false
    $spec.NpivWorldWideNameOp = "set"
    $spec.NpivPortWorldWideName = @($portWWN64)
    $spec.NpivNodeWorldWideName = @($nodeWWN64)
    
    # Apply the spec to the VM
    $vm = Get-VM -Name $vmName
    $vm.ExtensionData.ReconfigVM($spec)

    You can use the New-HardDisk cmdlet to attach an RDM disk. Make sure to specify a raw type ("RawPhysical" or "RawVirtual") with the -DiskType parameter.

    You identify the LUN with the -DeviceName parameter.

    ____________

    Blog: LucD notes

    Twitter: lucd22

  • ESXi 4.0 and VMs extremely slow

    Hey there.

    I installed ESXi 4.0.0 Build 208167 (German locale) a few months ago, and until now it had been working with normal performance.

    Well, since last week the virtual machines have had performance issues. As an example, Windows Explorer takes quite a few moments to appear, as do other tasks such as shutting down Windows systems or opening an xterm window on SLES11.

    I tried restarting my system, but with this I got new problems: some virtual machines cannot start or became invalid (they appear with "(invalid)" appended to the VM name).

    When I try to start other systems, I press play and the machine seems to start, but it stays at 95% for a minute and then a message appears: "The attempted operation cannot be performed in the current state (Powered off)."

    The last lines of my /var/log messages:

    23 Feb 07:22:07 sfcb[6070]: storelib physical device ID: 0xD   (this line appears 14 times)
    23 Feb 07:22:15 vmkernel: 0:15:19:26.677 cpu1:186428) <6>megasas_service_aen[6]: AEN received
    23 Feb 07:22:15 vmkernel: 0:15:19:26.677 cpu6:4324) <6>megasas_hotplug_work[6]: event code 0x0071
    23 Feb 07:22:15 vmkernel: 0:15:19:26.815 cpu6:4324) <6>megasas_hotplug_work[6]: AEN registered
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu1:5139) <6>megasas_service_aen[6]: AEN received
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu8:4339) <6>megasas_hotplug_work[6]: event code 0x006e
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu8:4339) <6>megasas_hotplug_work[6]: AEN registered
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu1:5139) <6>megasas_service_aen[6]: AEN received
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu14:4334) <6>megasas_hotplug_work[6]: event code 0x005d
    23 Feb 07:22:15 vmkernel: 0:15:19:26.938 cpu14:4334) <6>megasas_hotplug_work[6]: AEN registered

    [... the same megasas AEN / event code 0x0071 / 0x006e / 0x005d cycle repeats roughly every 10 seconds from 07:22:17 through 07:24:04 ...]

    23 Feb 07:24:11 pass: Activation N5Vmomi10ActivationE:0x5b3c7ea8 : invoke made on vmodl.query.PropertyCollector:ha-property-collector
    23 Feb 07:24:11 pass: Arg version: '111'
    23 Feb 07:24:11 pass: Throw: vmodl.fault.RequestCanceled
    23 Feb 07:24:11 pass: Result: (vmodl.fault.RequestCanceled) { dynamicType = <unset>, faultCause = (vmodl.MethodFault) null, msg = "" }
    23 Feb 07:24:11 pass: PendingRequest: HTTP transaction failed, closing connection: N7Vmacore15SystemExceptionE (Connection reset by peer)

    [... the megasas cycle then continues ...]

    23 Feb 07:24:32 vmkernel: 0:15:21:43.095 cpu14:11155) <6>megasas_service_aen[6]: AEN received
    23 Feb 07:24:32 vmkernel: 0:15:21:43.095 cpu2:4327) <6>megasas_hotplug_work[6]: event code 0x005d

    23 Feb 07:24:32 vmkernel: 0:15:21:43.096 cpu2:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:33 vmkernel: 0:15:21:44.953 cpu14:6103) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:33 vmkernel: 0:15:21:44.953 cpu8:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:34 vmkernel: 0:15:21:45.078 cpu8:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:34 vmkernel: 0:15:21:45.202 cpu14:6103) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:42 vmkernel: 0:15:21:53.433 cpu14:5009) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:42 vmkernel: 0:15:21:53.433 cpu2:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:42 vmkernel: 0:15:21:53.583 cpu2:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:42 vmkernel: 0:15:21:53.706 cpu14:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:42 vmkernel: 0:15:21:53.706 cpu0:4331) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:42 vmkernel: 0:15:21:53.707 cpu14:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4335) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:24:42 vmkernel: 0:15:21:53.707 cpu0:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:44 vmkernel: 0:15:21:55.558 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:44 vmkernel: 0:15:21:55.558 cpu2:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:44 vmkernel: 0:15:21:55.695 cpu2:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:44 vmkernel: 0:15:21:55.819 cpu14:6105) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:53 vmkernel: 0:15:22:04.128 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:53 vmkernel: 0:15:22:04.128 cpu0:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:53 vmkernel: 0:15:22:04.284 cpu0:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:53 vmkernel: 0:15:22:04.407 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:53 vmkernel: 0:15:22:04.407 cpu2:4329) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:24:53 vmkernel: 0:15:22:04.408 cpu6:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:53 vmkernel: 0:15:22:04.408 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:53 vmkernel: 0:15:22:04.408 cpu8:4338) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:24:53 vmkernel: 0:15:22:04.408 cpu8:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:55 vmkernel: 0:15:22:06.265 cpu14:5009) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:24:55 vmkernel: 0:15:22:06.265 cpu0:4330) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:24:55 vmkernel: 0:15:22:06.414 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:24:55 vmkernel: 0:15:22:06.538 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:01 vmkernel: 0:15:22:12.645 cpu14:5039) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:01 vmkernel: 0:15:22:12.645 cpu8:4333) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:01 vmkernel: 0:15:22:12.794 cpu8:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu14:5192) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu2:4327) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu2:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu14:5192) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu8:4338) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:01 vmkernel: 0:15:22:12.918 cpu8:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:03 vmkernel: 0:15:22:14.769 cpu14:6389) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:03 vmkernel: 0:15:22:14.770 cpu8:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:03 vmkernel: 0:15:22:14.907 cpu8:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:04 vmkernel: 0:15:22:15.031 cpu14:6389) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:12 vmkernel: 0:15:22:23.268 cpu14:5138) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:12 vmkernel: 0:15:22:23.268 cpu3:4331) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:12 vmkernel: 0:15:22:23.411 cpu3:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:12 vmkernel: 0:15:22:23.535 cpu14:4984) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:12 vmkernel: 0:15:22:23.535 cpu0:4335) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:12 vmkernel: 0:15:22:23.536 cpu0:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:12 vmkernel: 0:15:22:23.536 cpu14:4984) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:12 vmkernel: 0:15:22:23.536 cpu4:4325) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:12 vmkernel: 0:15:22:23.536 cpu4:4325) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:14 vmkernel: 0:15:22:25.387 cpu14:5040) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:14 vmkernel: 0:15:22:25.387 cpu3:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:14 vmkernel: 0:15:22:25.536 cpu3:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:14 vmkernel: 0:15:22:25.660 cpu14:5040) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:22 vmkernel: 0:15:22:33.873 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:22 vmkernel: 0:15:22:33.873 cpu1:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:23 vmkernel: 0:15:22:34.023 cpu1:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:23 vmkernel: 0:15:22:34.147 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:23 vmkernel: 0:15:22:34.147 cpu3:4329) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:23 vmkernel: 0:15:22:34.147 cpu3:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:23 vmkernel: 0:15:22:34.147 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:23 vmkernel: 0:15:22:34.147 cpu0:4336) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:23 vmkernel: 0:15:22:34.148 cpu0:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:24 vmkernel: 0:15:22:35.998 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:24 vmkernel: 0:15:22:35.998 cpu8:4328) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:25 vmkernel: 0:15:22:36.154 cpu8:4328) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:25 vmkernel: 0:15:22:36.278 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:31 vmkernel: 0:15:22:42.402 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:31 vmkernel: 0:15:22:42.402 cpu1:4326) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:31 vmkernel: 0:15:22:42.545 cpu1:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:31 vmkernel: 0:15:22:42.669 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:31 vmkernel: 0:15:22:42.669 cpu8:4333) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:31 vmkernel: 0:15:22:42.669 cpu8:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:31 vmkernel: 0:15:22:42.670 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:31 vmkernel: 0:15:22:42.670 cpu8:4338) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:31 vmkernel: 0:15:22:42.670 cpu8:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:33 vmkernel: 0:15:22:44.520 cpu14:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:33 vmkernel: 0:15:22:44.520 cpu8:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:33 vmkernel: 0:15:22:44.670 cpu8:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:33 vmkernel: 0:15:22:44.794 cpu14:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:42 vmkernel: 0:15:22:53.037 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:42 vmkernel: 0:15:22:53.037 cpu0:4331) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:42 vmkernel: 0:15:22:53.180 cpu0:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:42 vmkernel: 0:15:22:53.304 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:42 vmkernel: 0:15:22:53.304 cpu2:4335) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:42 vmkernel: 0:15:22:53.305 cpu14:6089) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4330) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:42 vmkernel: 0:15:22:53.305 cpu2:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:44 vmkernel: 0:15:22:55.156 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:44 vmkernel: 0:15:22:55.156 cpu2:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:44 vmkernel: 0:15:22:55.305 cpu2:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:44 vmkernel: 0:15:22:55.429 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:52 vmkernel: 0:15:23:03.660 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:52 vmkernel: 0:15:23:03.660 cpu1:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:52 vmkernel: 0:15:23:03.804 cpu1:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu2:4336) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu2:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu14:4110) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu8:4339) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:25:52 vmkernel: 0:15:23:03.928 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:54 vmkernel: 0:15:23:05.779 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:25:54 vmkernel: 0:15:23:05.779 cpu8:4332) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:25:54 vmkernel: 0:15:23:05.928 cpu8:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:25:55 vmkernel: 0:15:23:06.052 cpu14:188206) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:01 vmkernel: 0:15:23:12.117 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:01 vmkernel: 0:15:23:12.117 cpu8:4333) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:01 vmkernel: 0:15:23:12.254 cpu8:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu8:4338) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu8:4338) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu14:6090) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu1:4327) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:01 vmkernel: 0:15:23:12.379 cpu1:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:03 vmkernel: 0:15:23:14.229 cpu14:5009) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:03 vmkernel: 0:15:23:14.230 cpu0:4335) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:03 vmkernel: 0:15:23:14.409 cpu0:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:03 vmkernel: 0:15:23:14.533 cpu14:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:11 vmkernel: 0:15:23:22.776 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:11 vmkernel: 0:15:23:22.776 cpu2:4325) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:11 vmkernel: 0:15:23:22.931 cpu2:4325) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4335) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4330) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:12 vmkernel: 0:15:23:23.056 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:13 vmkernel: 0:15:23:24.913 cpu5:185904) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:13 vmkernel: 0:15:23:24.913 cpu8:4339) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:14 vmkernel: 0:15:23:25.062 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:14 vmkernel: 0:15:23:25.186 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:22 vmkernel: 0:15:23:33.423 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:22 vmkernel: 0:15:23:33.423 cpu6:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:22 vmkernel: 0:15:23:33.560 cpu6:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu5:6088) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4337) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu5:6088) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4336) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:22 vmkernel: 0:15:23:33.685 cpu0:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:24 vmkernel: 0:15:23:35.542 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:24 vmkernel: 0:15:23:35.542 cpu0:4326) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:24 vmkernel: 0:15:23:35.685 cpu0:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:24 vmkernel: 0:15:23:35.809 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:30 vmkernel: 0:15:23:41.922 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:30 vmkernel: 0:15:23:41.922 cpu6:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:31 vmkernel: 0:15:23:42.059 cpu6:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:31 vmkernel: 0:15:23:42.183 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:31 vmkernel: 0:15:23:42.183 cpu0:4327) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:31 vmkernel: 0:15:23:42.183 cpu0:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:31 vmkernel: 0:15:23:42.184 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:31 vmkernel: 0:15:23:42.184 cpu0:4336) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:31 vmkernel: 0:15:23:42.184 cpu4:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:33 vmkernel: 0:15:23:44.040 cpu5:190710) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:33 vmkernel: 0:15:23:44.040 cpu0:4330) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:33 vmkernel: 0:15:23:44.190 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:33 vmkernel: 0:15:23:44.314 cpu5:6087) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:41 vmkernel: 0:15:23:52.569 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:41 vmkernel: 0:15:23:52.569 cpu8:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:41 vmkernel: 0:15:23:52.718 cpu8:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:41 vmkernel: 0:15:23:52.843 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:41 vmkernel: 0:15:23:52.843 cpu0:4325) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:41 vmkernel: 0:15:23:52.843 cpu0:4325) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:41 vmkernel: 0:15:23:52.843 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:41 vmkernel: 0:15:23:52.843 cpu2:4335) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:41 vmkernel: 0:15:23:52.844 cpu2:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:43 vmkernel: 0:15:23:54.699 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:43 vmkernel: 0:15:23:54.699 cpu10:4339) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:43 vmkernel: 0:15:23:54.903 cpu10:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:44 vmkernel: 0:15:23:55.027 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:52 vmkernel: 0:15:24:03.258 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:52 vmkernel: 0:15:24:03.258 cpu0:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:52 vmkernel: 0:15:24:03.395 cpu0:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu14:4339) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu14:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu0:4337) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:26:52 vmkernel: 0:15:24:03.520 cpu2:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:54 vmkernel: 0:15:24:05.377 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:26:54 vmkernel: 0:15:24:05.377 cpu10:4332) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:26:54 vmkernel: 0:15:24:05.514 cpu10:4332) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:26:54 vmkernel: 0:15:24:05.638 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27 vmkernel: 0:15:24:11.733 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27 vmkernel: 0:15:24:11.733 cpu0:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27 vmkernel: 0:15:24:11.888 cpu0:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu5:4367) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4327) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu5:4367) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4331) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:01 vmkernel: 0:15:24:12.012 cpu2:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:02 vmkernel: 0:15:24:13.869 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:02 vmkernel: 0:15:24:13.869 cpu0:4330) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:03 vmkernel: 0:15:24:14.013 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:03 vmkernel: 0:15:24:14.137 cpu5:4367) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:11 vmkernel: 0:15:24:22.392 cpu5:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:11 vmkernel: 0:15:24:22.392 cpu8:4334) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:11 vmkernel: 0:15:24:22.541 cpu8:4334) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu5:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4326) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu5:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4330) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:11 vmkernel: 0:15:24:22.666 cpu0:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:13 vmkernel: 0:15:24:24.516 cpu5:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:13 vmkernel: 0:15:24:24.516 cpu10:4339) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:13 vmkernel: 0:15:24:24.660 cpu10:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:13 vmkernel: 0:15:24:24.784 cpu5:186428) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:22 vmkernel: 0:15:24:33.015 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:22 vmkernel: 0:15:24:33.015 cpu0:4327) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:22 vmkernel: 0:15:24:33.158 cpu0:4327) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu10:4339) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu10:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu5:4101) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu2:4324) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:22 vmkernel: 0:15:24:33.283 cpu2:4324) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:24 vmkernel: 0:15:24:35.134 cpu5:9738) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:24 vmkernel: 0:15:24:35.134 cpu1:4336) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:24 vmkernel: 0:15:24:35.295 cpu1:4336) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:24 vmkernel: 0:15:24:35.419 cpu5:9738) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:30 vmkernel: 0:15:24:41.507 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:30 vmkernel: 0:15:24:41.507 cpu0:4329) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:30 vmkernel: 0:15:24:41.675 cpu0:4329) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:30 vmkernel: 0:15:24:41.799 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:30 vmkernel: 0:15:24:41.799 cpu6:4331) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:30 vmkernel: 0:15:24:41.799 cpu6:4331) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:30 vmkernel: 0:15:24:41.800 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:30 vmkernel: 0:15:24:41.800 cpu0:4337) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:30 vmkernel: 0:15:24:41.800 cpu0:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:32 vmkernel: 0:15:24:43.650 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:32 vmkernel: 0:15:24:43.650 cpu0:4326) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:32 vmkernel: 0:15:24:43.793 cpu0:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:32 vmkernel: 0:15:24:43.918 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:41 vmkernel: 0:15:24:52.137 cpu11:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:41 vmkernel: 0:15:24:52.137 cpu6:4335) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:41 vmkernel: 0:15:24:52.280 cpu6:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu11:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu6:4326) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu6:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu11:11155) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu14:4333) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:41 vmkernel: 0:15:24:52.405 cpu14:4333) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:42 pass: created task: haTask-304 - vim.VirtualMachine.powerOn - 306

    23 Feb 07:27:42 pass: 2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' power on request received

    23 Feb 07:27:51 vmkernel: 0:15:25:02.742 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:51 vmkernel: 0:15:25:02.742 cpu0:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:51 vmkernel: 0:15:25:02.891 cpu0:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:52 vmkernel: 0:15:25:03.016 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:52 vmkernel: 0:15:25:03.016 cpu6:4335) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:27:52 vmkernel: 0:15:25:03.016 cpu6:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:52 vmkernel: 0:15:25:03.017 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:52 vmkernel: 0:15:25:03.017 cpu0:4326) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:27:52 vmkernel: 0:15:25:03.017 cpu0:4326) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:53 vmkernel: 0:15:25:04.873 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:27:53 vmkernel: 0:15:25:04.873 cpu0:4324) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:27:54 vmkernel: 0:15:25:05.022 cpu0:4324) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:27:54 vmkernel: 0:15:25:05.147 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28 vmkernel: 0:15:25:11.246 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28 vmkernel: 0:15:25:11.246 cpu0:4337) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:28 vmkernel: 0:15:25:11.390 cpu0:4337) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu2:4330) megasas_hotplug_work < 6 > [6]: 0x006e event code

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu2:4330) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu6:4335) megasas_hotplug_work < 6 > [6]: 0x005d event code

    23 Feb 07:28 vmkernel: 0:15:25:11.515 cpu6:4335) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:02 vmkernel: 0:15:25:13.365 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:02 vmkernel: 0:15:25:13.365 cpu8:4339) megasas_hotplug_work < 6 > [6]: code event 0 x 0071

    23 Feb 07:28:02 vmkernel: 0:15:25:13.556 cpu8:4339) megasas_hotplug_work < 6 > [6]: registered AIA

    23 Feb 07:28:02 vmkernel: 0:15:25:13.681 cpu11:4107) megasas_service_aen < 6 > [6]: NEA received

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    23 Feb 07:28:09 sfcb [6070]: peripheral device ID storelib physical: 0xD

    Feb 23 07:28:10 vmkernel: 0:15:25:21.924 cpu11:4107) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:10 vmkernel: 0:15:25:21.924 cpu0:4329) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:11 vmkernel: 0:15:25:22.067 cpu0:4329) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu11:4107) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu8:4339) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu8:4339) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu11:4107) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu14:4328) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:11 vmkernel: 0:15:25:22.192 cpu14:4328) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:13 vmkernel: 0:15:25:24.042 cpu11:4107) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:13 vmkernel: 0:15:25:24.042 cpu0:4336) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:13 vmkernel: 0:15:25:24.186 cpu0:4336) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:13 vmkernel: 0:15:25:24.310 cpu11:4107) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:20 Hostd: HostCtl Exception while collecting statistics: SysinfoException: node (VSI_NODE_sched_cpuClients_numVcpus); Status(bad0001) = Failure; Message = Instance(1): Input(0) 0

    Feb 23 07:28:21 vmkernel: 0:15:25:32.529 cpu11:6089) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.529 cpu0:4330) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:21 vmkernel: 0:15:25:32.672 cpu0:4330) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu11:6089) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu0:4326) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu0:4326) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu11:6089) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu14:4333) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:21 vmkernel: 0:15:25:32.797 cpu14:4333) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:23 vmkernel: 0:15:25:34.653 cpu11:6089) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:23 vmkernel: 0:15:25:34.653 cpu2:4325) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:23 vmkernel: 0:15:25:34.815 cpu2:4325) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.045 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.045 cpu0:4335) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:30 vmkernel: 0:15:25:41.195 cpu0:4335) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.319 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.319 cpu0:4330) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu0:4330) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu6:4326) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:30 vmkernel: 0:15:25:41.320 cpu6:4326) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:32 vmkernel: 0:15:25:43.176 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:32 vmkernel: 0:15:25:43.176 cpu9:4339) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:32 vmkernel: 0:15:25:43.325 cpu9:4339) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:32 vmkernel: 0:15:25:43.450 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu8:4334) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu2:4337) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:38 vmkernel: 0:15:25:49.854 cpu2:4337) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.710 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.711 cpu6:4327) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:40 vmkernel: 0:15:25:51.848 cpu6:4327) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu9:4339) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu9:4339) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu6:4325) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:40 vmkernel: 0:15:25:51.973 cpu6:4325) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:42 vmkernel: 0:15:25:53.829 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:42 vmkernel: 0:15:25:53.829 cpu6:4336) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:43 vmkernel: 0:15:25:54.044 cpu6:4336) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:43 vmkernel: 0:15:25:54.169 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.400 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.400 cpu0:4326) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:28:51 vmkernel: 0:15:26:02.531 cpu0:4326) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu9:4338) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu9:4338) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu6:4336) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:28:51 vmkernel: 0:15:26:02.656 cpu6:4336) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:28:52 vmkernel: 0:15:26:03.467 cpu4:68711) megasas: ABORT sn 352358 cmd = 0x28 retries = 0 tmo = 0

    Feb 23 07:28:52 vmkernel: 0:15:26:03.467 cpu4:68711) <5>0: megasas: RESET sn 352358 cmd = 28 retries = 0

    Feb 23 07:28:53 vmkernel: 0:15:26:04.512 cpu5:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:28:54 vmkernel: 0:15:26:05.491 cpu0:68711) megasas<5>: reset successful

    Feb 23 07:29:02 vmkernel: 0:15:26:12.999 cpu7:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:02 vmkernel: 0:15:26:12.999 cpu14:4339) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:02 vmkernel: 0:15:26:13.136 cpu14:4339) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu7:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu12:4334) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu12:4334) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu7:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu0:4335) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:29:02 vmkernel: 0:15:26:13.261 cpu0:4335) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:04 vmkernel: 0:15:26:15.117 cpu7:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:04 vmkernel: 0:15:26:15.117 cpu0:4327) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:04 vmkernel: 0:15:26:15.261 cpu0:4327) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:04 vmkernel: 0:15:26:15.386 cpu7:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.616 cpu7:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.616 cpu0:4336) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:12 vmkernel: 0:15:26:23.754 cpu0:4336) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu7:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu0:4329) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu0:4329) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu7:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu8:4332) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:29:12 vmkernel: 0:15:26:23.879 cpu8:4332) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:12 vmkernel: 0:15:26:23.995 cpu0:190779) megasas: ABORT sn 352531 cmd = 0x2a retries = 0 tmo = 0

    Feb 23 07:29:12 vmkernel: 0:15:26:23.995 cpu0:190779) <5>0: megasas: RESET sn 352531 cmd = 2a retries = 0

    Feb 23 07:29:14 vmkernel: 0:15:26:25.735 cpu7:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.293 cpu4:6389) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.293 cpu12:4334) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:23 vmkernel: 0:15:26:34.442 cpu12:4334) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:23 vmkernel: 0:15:26:34.567 cpu4:6389) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.567 cpu1:4337) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu1:4337) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu4:4363) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu8:4333) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:29:23 vmkernel: 0:15:26:34.568 cpu8:4333) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:25 vmkernel: 0:15:26:36.424 cpu4:6389) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:25 vmkernel: 0:15:26:36.424 cpu2:4331) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:25 vmkernel: 0:15:26:36.574 cpu2:4331) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:25 vmkernel: 0:15:26:36.699 cpu4:6389) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.555 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.562 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:32 Hostd: 2010-02-23 07:29:32.568 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Feb 23 07:29:34 vmkernel: 0:15:26:45.030 cpu4:5039) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:34 vmkernel: 0:15:26:45.030 cpu12:4332) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:34 vmkernel: 0:15:26:45.161 cpu12:4332) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:34 Hostd: Task Created : haTask-80-vim.VirtualMachine.powerOn-315

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Power on request received

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Reconfiguring ethernet backings if needed

    Feb 23 07:29:34 Hostd: Event 257 : petas01 on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    Feb 23 07:29:34 Hostd: 2010-02-23 07:29:34.173 1680BB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Transitioned state (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    Feb 23 07:29:34 vmkernel: 0:15:26:45.286 cpu4:5039) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.635 cpu4:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.635 cpu8:4333) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:44 vmkernel: 0:15:26:55.785 cpu8:4333) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu4:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu8:4338) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu8:4338) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu4:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu6:4329) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:29:44 vmkernel: 0:15:26:55.910 cpu6:4329) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:46 vmkernel: 0:15:26:57.766 cpu4:6104) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:46 vmkernel: 0:15:26:57.766 cpu0:4324) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:46 vmkernel: 0:15:26:57.915 cpu0:4324) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:47 vmkernel: 0:15:26:58.040 cpu4:4363) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.276 cpu1:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.276 cpu6:4331) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:55 vmkernel: 0:15:27:06.414 cpu6:4331) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu1:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu2:4324) megasas_hotplug_work<6>[6]: event code 0x006e

    Feb 23 07:29:55 vmkernel: 0:15:27:06.539 cpu2:4324) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu1:9738) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu4:4335) megasas_hotplug_work<6>[6]: event code 0x005d

    Feb 23 07:29:55 vmkernel: 0:15:27:06.540 cpu4:4335) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:57 vmkernel: 0:15:27:08.395 cpu1:190710) megasas_service_aen<6>[6]: AEN received

    Feb 23 07:29:57 vmkernel: 0:15:27:08.395 cpu3:4336) megasas_hotplug_work<6>[6]: event code 0x0071

    Feb 23 07:29:57 vmkernel: 0:15:27:08.538 cpu3:4336) megasas_hotplug_work<6>[6]: AEN registered

    Feb 23 07:29:57 vmkernel: 0:15:27:08.664 cpu1:190710) megasas_service_aen<6>[6]: AEN received
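If it helps to see what the megasas controller is actually reporting, one quick way to digest a vmkernel log like the one above is to count how often each event code appears. This is only a sketch: the regex and the sample lines are assumptions based on the line format shown in this post, not an official VMware parser, and on a real host you would feed it the lines of /var/log/vmkernel instead of the inline sample.

```python
import re
from collections import Counter

# Matches the "megasas_hotplug_work<6>[6]: event code 0x0071" style lines above.
EVENT_RE = re.compile(r"megasas_hotplug_work<6>\[6\]: event code (0x[0-9a-fA-F]+)")

def count_event_codes(lines):
    """Count how often each megasas hotplug event code appears in the log."""
    counts = Counter()
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            counts[m.group(1).lower()] += 1
    return counts

# Hypothetical sample lines in the same shape as the vmkernel log above.
sample = [
    "Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu2:4330) megasas_hotplug_work<6>[6]: event code 0x006e",
    "Feb 23 07:28 vmkernel: 0:15:25:11.515 cpu6:4335) megasas_hotplug_work<6>[6]: event code 0x005d",
    "Feb 23 07:28:02 vmkernel: 0:15:25:13.365 cpu8:4339) megasas_hotplug_work<6>[6]: event code 0x0071",
    "Feb 23 07:28:02 vmkernel: 0:15:25:13.365 cpu11:4107) megasas_service_aen<6>[6]: AEN received",
]

print(count_event_codes(sample))
```

A heavy, steady stream of the same few codes (as in this log) usually points at the RAID firmware repeatedly raising the same notification rather than at many distinct hardware events.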

    And here is my hostd.log:

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : open successful (21) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : closed.

    Foundry callback registered on 4

    Foundry callback registered on 17

    Foundry callback registered on 5

    Foundry callback registered on 11

    Foundry callback registered on 12

    Foundry callback registered on 13

    Foundry callback registered on 14

    Foundry callback registered on 15

    Foundry callback registered on 26

    Foundry callback registered on 16

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Fault tolerance state callback received

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Record/replay state callback received

    2010-02-23 06:52:24.922 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 5, 2

    2010-02-23 06:52:24.922 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Transitioned state (VM_STATE_INITIALIZING -> VM_STATE_OFF)

    Cannot find mirrored content in extended config XML.

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : open successful (21) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : closed.

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : open successful (23) size = 53687091200, hd = 0. Type 3

    DISKLIB-VMFS : '/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05/app-back-05-flat.vmdk' : closed.

    2010-02-23 06:52:35.532 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Initial tools version: 3:guestToolsNotInstalled

    2010-02-23 06:52:35.532 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Predicted VM overhead: 118587392 bytes

    2010-02-23 06:52:35.532 5AF25DC0 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Initialized virtual machine.

    ModeMgr::End: op = normal, current = normal, count = 2

    Task Completed : haTask-ha-folder-vm-vim.Folder.createVm-243 Status success

    Event 244 : Created virtual machine on RTHUS7002.rintra.ruag.com in ha-datacenter

    2010-02-23 06:52:35.533 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Create worker thread completed successfully

    2010-02-23 06:52:35.539 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsNotInstalled

    Task Created : haTask-320-vim.VirtualMachine.powerOn-246

    2010-02-23 06:52:53.868 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Power on request received

    2010-02-23 06:52:53.868 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Reconfiguring ethernet backings if needed

    Event 245 : app-back-05-neu on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 06:52:53.868 5B168B90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Transitioned state (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = normal, current = normal, count = 1

    Load: Loading existing file: /etc/vmware/license.cfg

    2010-02-23 06:52:53.899 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' PowerOn request queued

    2010-02-23 06:52:53.913 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    /vm/#1dea389a74a8b231/: VMHSVMCbPower: Setting VM state to powerOn with option soft

    2010-02-23 06:52:53.913 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' VMHS: Exec()'ing /bin/vmx

    VMHS: VMKernel_ForkExec(/bin/vmx, flags = 1): rc = 0 pid = 190710

    2010-02-23 06:52:53.965 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Connection established.

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    2010-02-23 06:53:20.200 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Mounting VM connection paths: /db/connection/#14bc.

    2010-02-23 06:53:20.248 5B1EAB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    2010-02-23 06:53:20.253 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Completing Mount VM for vm.

    2010-02-23 06:53:20.254 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Mount VM Complete: OK

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 06:54:45.556 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' MKS ready for connections: true

    2010-02-23 06:54:45.556 1688CB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Question info: Insufficient video RAM. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video RAM allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.

    Id: 0, Type: 2, Default: 0, Number of options: 1

    2010-02-23 06:54:45.556 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:54:45.556 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    2010-02-23 06:54:45.556 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    VixVM_AnswerMessage returned 0

    2010-02-23 06:54:45.562 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:54:45.562 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 7, 6

    Event 246 : Message on app-back-05-neu on RTHUS7002.rintra.ruag.com in ha-datacenter: Insufficient video RAM. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video RAM allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.

    2010-02-23 06:54:45.562 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Auto-answered insufficient video RAM question. The maximum resolution of the virtual machine will be limited to 1176x885 at 16 bits per pixel. To use the configured maximum resolution of 2360x1770 at 16 bits per pixel, increase the amount of video RAM allocated to this virtual machine by setting svga.vramSize = "16708800" in the virtual machine's configuration file.

    2010-02-23 06:55:26.096 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Tools version status: noTools

    2010-02-23 06:55:26.096 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' VMX status was set.

    2010-02-23 06:55:26.097 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:55:26.097 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Imgcust event read vmdb tree values: state = 0, errorCode = 0, errorMsgSize = 0

    2010-02-23 06:55:26.097 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Disconnecting current control.

    2010-02-23 06:55:26.097 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Tools running status changed to: keep

    2010-02-23 06:55:41.030 1698FB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Connected to testAutomation-fd, sent remote end pid: 190710

    2010-02-23 06:55:41.032 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Retrieved current Foundry VM state: 4, 8

    2010-02-23 06:55:41.034 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' No upgrade required

    Event 247 : app-back-05-neu on RTHUS7002.rintra.ruag.com in ha-datacenter is powered on

    2010-02-23 06:55:41.036 5B0E6B90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Transitioned state (VM_STATE_POWERING_ON -> VM_STATE_ON)

    State change received for VM '320'

    2010-02-23 06:55:41.065 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 88780800 bytes

    2010-02-23 06:55:41.139 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Time to gather config: 73 (ms)

    Task Completed : haTask-320-vim.VirtualMachine.powerOn-246 Status success

    Add vm 320 to poweredOnVms list

    2010-02-23 06:55:43.211 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Time to gather config: 67 (ms)

    Requested locale "and/or derived from locale ' en_US.utf - 8', is not compatible with the operating system. Using English.

    Requested locale "and/or derived from locale ' en_US.utf - 8', is not compatible with the operating system. With the help of C.

    Event 248: [email protected] user

    2010-02-23 06:55:47.873 info 5B0E6B90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx" Airfare for mks to user connections: rzhmi_ope

    2010-02-23 06:55:55.843 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Number of connections new MKS: 1

    2010-02-23 06:56:00.279 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 89305088 bytes

    RefreshVms overhead to 1 VM update

    Airfare to CIMOM version 1.0, root user

    2010-02-23 06:56:40.293 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' Actual VM overhead: 88829952 bytes

    RefreshVms overhead to 1 VM update

    Ability to root pool changed 18302 MHz / 43910MB at 18302 MHz / 43909MB

    Airfare to CIMOM version 1.0, root user

    PersistAllDvsInfo called

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-224" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-224" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-240" failed.

    Lookup with name = "haTask-vim.SearchIndex.findByInventoryPath-240" failed.

    Ticket issued for CIMOM version 1.0, user root

    FetchSwitches: added 0 items

    FetchDVPortgroups: added 0 items

    Ticket issued for CIMOM version 1.0, user root

    Root pool capacity changed from 18302 MHz / 43909MB to 18302 MHz / 43908MB

    2010-02-23 07:01:46.927 168CDB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:47.586 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petaw02/petaw02.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:47.593 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petaw01/petaw01.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.412 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.418 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:48.425 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    2010-02-23 07:01:50.707 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' New number of MKS connections: 0

    Task Created : haTask-304-vim.VirtualMachine.powerOn-268

    2010-02-23 07:01:52.340 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Power on request received

    2010-02-23 07:01:52.340 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Reconfiguring ethernet backings if needed

    Event 249 : PETA-Router on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 07:01:52.341 1680BB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Transitioned state (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = normal, current = normal, count = 2

    Load: Loading existing file: /etc/vmware/license.cfg

    2010-02-23 07:01:59.118 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' PowerOn request queued

    2010-02-23 07:02:00.513 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown/unexpected toolsVersionStatus, using offline state guestToolsCurrent

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 07:02:07.643 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Retrieved current Foundry VM state: 7, 6

    /vm/#c27a9ee2a691ac1a/: VMHSVMCbPower: Setting VM state to powerOn with option soft

    CloseSession called for session id = 5206e96f-0c88-2e3f-9187-60932847859f

    Lookup with name = "haTask-304-vim.VirtualMachine.powerOn-129" failed.

    GetPropertyProvider failed for haTask-304-vim.VirtualMachine.powerOn-129

    Lookup with name = "haTask-256-vim.VirtualMachine.reset-124" failed.

    GetPropertyProvider failed for haTask-256-vim.VirtualMachine.reset-124

    Event 250 : User rzhmi_ope logged out

    2010-02-23 07:02:24.668 1676AB90 info 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' VMHS: Exec()'ing /bin/vmx

    VMHS: VMKernel_ForkExec(/bin/vmx, flags = 1): rc = 0 pid = 192683

    Rising with name = "haTask-ha-folder-vm-vim.Folder.createVm-243" failed.

    Rising with name = "haTask-ha-folder-vm-vim.Folder.createVm-243" failed.

    2010-02-23 07:02:58.683 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" An established connection.

    PersistAllDvsInfo called

    Ability to root pool changed 18302 MHz / 43908MB at 18302 MHz / 43907MB

    Airfare to CIMOM version 1.0, root user

    Ability to root pool changed 18302 MHz / 43907MB in 18302 MHz / 43909MB

    Airfare to CIMOM version 1.0, root user

    Rising with name = "haTask-320 - vim.VirtualMachine.powerOn - 246" failed.

    Rising with name = "haTask-320 - vim.VirtualMachine.powerOn - 246" failed.

    FoundryVMDBPowerOpCallback: Op to VMDB reports power failed for VM /vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx with error msg = "the operation has timed out" and error code-41.

    2010-02-23 07:06:02.208 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Current state recovered VM Foundry 5, 2

    Have not power Op: error: (3006) the virtual machine must be turned on

    2010-02-23 07:06:02.209 5B22BB90 WARNING "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" Failed operation

    Event 251: Cannot power on PETA-router on RTHUS7002.rintra.ruag.com ha-data center. A general error occurred:

    2010-02-23 07:06:02.209 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" The State of Transition (VM_STATE_POWERING_ON-> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 3

    Task Completed : haTask-304-vim.VirtualMachine.powerOn-268 Status error

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    Root pool capacity changed from 18302 MHz / 43909MB to 18302 MHz / 43906MB

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    Root pool capacity changed from 18302 MHz / 43906MB to 18302 MHz / 43908MB

    Activation N5Vmomi10ActivationE:0x5b68a8b0 : Invoke done on vmodl.query.PropertyCollector:ha-property-collector

    Arg version:

    "93"

    Throw vmodl.fault.RequestCanceled

    Result:

    (vmodl.fault.RequestCanceled) {

    dynamicType = <unset>,

    faultCause = (vmodl.MethodFault) null,

    msg = ""

    }

    PendingRequest: HTTP transaction failed, closing connection: N7Vmacore15SystemExceptionE(Connection reset by peer)

    Root pool capacity changed from 18302 MHz / 43908MB to 18302 MHz / 43909MB

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    Requested locale '' and/or derived locale 'en_US.utf-8' not usable with the operating system. Using English.

    Requested locale '' and/or derived locale 'en_US.utf-8' not usable with the operating system. Using C.

    Event 252 : User [email protected] logged in

    2010-02-23 07:13:13.852 info 5B1EAB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections to user: rzhmi_ope

    PersistAllDvsInfo called

    2010-02-23 07:13:19.794 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' New MKS connection count: 1

    2010-02-23 07:14:18.185 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' New MKS connection count: 0

    Activation N5Vmomi10ActivationE:0x5c0f4dc0 : Invoke done on vmodl.query.PropertyCollector:ha-property-collector

    Arg version:

    "3"

    CloseSession called for session id = 528c7c0f-5bce-6ea5-ce22-e4927fab2c89

    Throw vmodl.fault.RequestCanceled

    Result:

    (vmodl.fault.RequestCanceled) {

    dynamicType = <unset>,

    faultCause = (vmodl.MethodFault) null,

    msg = ""

    }

    Event 253 : User rzhmi_ope logged out

    Error reading from client while waiting for header: N7Vmacore15SystemExceptionE(Connection reset by peer)

    Error reading from client while waiting for header: N7Vmacore15SystemExceptionE(Connection reset by peer)

    Error reading from client while waiting for header: N7Vmacore15SystemExceptionE(Connection reset by peer)

    2010-02-23 07:14:20.843 info 1680BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx" Ticket issued for mks connections to user: rzhmi_ope

    2010-02-23 07:14:26.122 5AF25DC0 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' New MKS connection count: 1

    Ticket issued for CIMOM version 1.0, user root

    Lookup with name = "haTask-304-vim.VirtualMachine.powerOn-268" failed.

    Lookup with name = "haTask-304-vim.VirtualMachine.powerOn-268" failed.

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    CloseSession called for session id = 520edca0-62aa-19c8-8c9d-bd710bd6c882

    Event 254 : User rzhmi_ope logged out

    2010-02-23 07:19:01.520 5B0A5B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/app-back-05-neu/app-back-05-neu.vmx' New MKS connection count: 0

    Requested locale '' and/or derived locale 'en_US.utf-8' not usable with the operating system. Using English.

    Requested locale '' and/or derived locale 'en_US.utf-8' not usable with the operating system. Using C.

    Event 255 : User [email protected] logged in

    2010-02-23 07:19:05.616 info 5B0E6B90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections to user: rzhmi_ope

    Ticket issued for CIMOM version 1.0, user root

    2010-02-23 07:19:11.260 5B127B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' New MKS connection count: 1

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    Ticket issued for CIMOM version 1.0, user root

    Activation N5Vmomi10ActivationE:0x5b3c7ea8 : Invoke done on vmodl.query.PropertyCollector:ha-property-collector

    Arg version:

    "111"

    Throw vmodl.fault.RequestCanceled

    Result:

    (vmodl.fault.RequestCanceled) {

    dynamicType = <unset>,

    faultCause = (vmodl.MethodFault) null,

    msg = ""

    }

    PendingRequest: HTTP transaction failed, closing connection: N7Vmacore15SystemExceptionE(Connection reset by peer)

    Ticket issued for CIMOM version 1.0, user root

    Ticket issued for CIMOM version 1.0, user root

    Task Created : haTask-304-vim.VirtualMachine.powerOn-306

    2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' PowerOn request received

    2010-02-23 07:27:42.209 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Reconfiguring ethernet backing if necessary

    Event 256 : PETA-Router on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 07:27:42.210 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" State Transition (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = current, normal = normal, count = 2

    Load: Loading existing file: /etc/vmware/license.cfg

    2010-02-23 07:27:49.686 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' PowerOn request queued

    2010-02-23 07:27:58.217 1690EB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Retrieved current VM state from foundry: 7, 6

    /vm/#c27a9ee2a691ac1a/: VMHSVMCbPower: VM powerOn with option soft

    Ticket issued for CIMOM version 1.0, user root

    PersistAllDvsInfo called

    2010-02-23 07:28:15.230 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" VMHS: Exec()'ing /bin/vmx

    VMHS: VMKernel_ForkExec(/bin/vmx, flag = 1): rc = 0, pid = 191945

    HostCtl exception while collecting stats: SysinfoException: node (VSI_NODE_sched_cpuClients_numVcpus); Status (bad0001) = Failure; Message = Instance(1): Input(0) 0

    2010-02-23 07:28:34.614 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Tools version status: ok

    2010-02-23 07:28:34.616 168CDB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' MKS ready for connections: false

    2010-02-23 07:28:34.660 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' New MKS connection count: 0

    2010-02-23 07:28:34.782 info 5AF25DC0 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Ticket issued for mks connections to user: rzhmi_ope

    2010-02-23 07:28:49.390 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" Connection established.

    2010-02-23 07:29:04.550 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Predicted VM overhead: 143904768 bytes

    RefreshVms updated overhead for 1 VM

    2010-02-23 07:29:29.573 1690EB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_Radius+Jg-CA/Test_Radius+Jg-CA.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    2010-02-23 07:29:32.555 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas03/petas03.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsCurrent for offline state

    2010-02-23 07:29:32.562 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsCurrent for offline state

    2010-02-23 07:29:32.568 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/petas02/petas02.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsCurrent for offline state

    Task Created : haTask-80-vim.VirtualMachine.powerOn-315

    2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' PowerOn request received

    2010-02-23 07:29:34.172 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Reconfiguring ethernet backing if necessary

    Event 257 : petas01 on host RTHUS7002.rintra.ruag.com in ha-datacenter is starting

    2010-02-23 07:29:34.173 1680BB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" State Transition (VM_STATE_OFF -> VM_STATE_POWERING_ON)

    ModeMgr::Begin: op = current, normal = normal, count = 3

    Load: Loading existing file: /etc/vmware/license.cfg

    2010-02-23 07:29:34.327 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' PowerOn request queued

    Ticket issued for CIMOM version 1.0, user root

    Root pool capacity changed from 18302 MHz / 43909MB to 18302 MHz / 43910MB

    Ticket issued for CIMOM version 1.0, user root

    Root pool capacity changed from 18302 MHz / 43910MB to 18302 MHz / 43909MB

    HostCtl exception while collecting network stats for vm 256: SysinfoException: node (VSI_NODE_net_openPorts_type); Status (bad0001) = Failure; Message = Unable to Get Int

    2010-02-23 07:31:52.964 1688CB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' Retrieved current VM state from foundry: 7, 6

    /vm/#dd20eb79a599aa61/: VMHSVMCbPower: VM powerOn with option soft

    2010-02-23 07:31:52.984 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" VMHS: Exec()'ing /bin/vmx

    VMHS: VMKernel_ForkExec(/bin/vmx, flag = 1): rc = 0, pid = 200813

    FoundryVMDBPowerOpCallback: VMDB reports power op failed for VM /vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx with error msg = "The operation has timed out" and error code -41.

    2010-02-23 07:31:52.985 5B168B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx' Retrieved current VM state from foundry: 5, 2

    Failed to do Power Op: Error: (3006) The virtual machine needs to be powered on

    2010-02-23 07:31:52.985 5B22BB90 warning "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" Operation failed

    Event 258 : Failed to power on PETA-Router on RTHUS7002.rintra.ruag.com in ha-datacenter. A general system error occurred:

    2010-02-23 07:31:52.986 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/PETA-Router/PETA-Router.vmx" State Transition (VM_STATE_POWERING_ON -> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 4

    Task Completed : haTask-304-vim.VirtualMachine.powerOn-306 Status error

    vmdbPipe_Streams Couldn't read: OVL_STATUS_EOF

    2010-02-23 07:31:52.988 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' No upgrade required

    2010-02-23 07:31:52.989 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Disconnecting the current control connection.

    2010-02-23 07:31:52.989 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Unmounting the VM.

    2010-02-23 07:31:52.990 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VMDB unmount initiated.

    2010-02-23 07:31:52.990 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VM unmount complete.

    2010-02-23 07:31:53.000 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VM state values have changed.

    2010-02-23 07:31:53.000 1680BB90 warning "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Failed to get resource settings for powered-on VM

    There is no world with worldId 4294967295

    2010-02-23 07:31:53.001 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" VM state values have changed.

    2010-02-23 07:31:53.002 info 1676AB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Reloading config state.

    2010-02-23 07:31:53.106 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Connection established.

    2010-02-23 07:32:24.650 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Mounting VM paths on connection: /db/connection/#158d.

    SOCKET 2 (13)

    recv detected client closed connection

    Detected automation ready on VM (/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx)

    2010-02-23 07:32:24.709 5B1A9B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx' No upgrade required

    2010-02-23 07:32:24.720 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Completing Mount VM for vm.

    2010-02-23 07:32:24.721 1676AB90 info "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Test_DHCP/Test_DHCP.vmx" Mount VM Complete: OK

    2010-02-23 07:32:24.777 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to gather config: 31779 (ms)

    2010-02-23 07:32:24.782 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Retrieved current VM state from foundry: 5, 2

    2010-02-23 07:32:24.782 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Retrieved current VM state from foundry: 5, 2

    2010-02-23 07:32:24.849 1690EB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    2010-02-23 07:32:24.850 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Received foundry power state update: 5

    2010-02-23 07:32:24.850 info 5B22BB90 "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" State Transition (VM_STATE_ON -> VM_STATE_OFF)

    ModeMgr::End: op = current, normal = normal, count = 3

    Received state change for VM '256'

    2010-02-23 07:32:24.850 5B22BB90 warning "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Failed to find activation record, event user unknown.

    Event 259 : Std-Server on RTHUS7002.rintra.ruag.com in ha-datacenter is powered off

    2010-02-23 07:32:24.851 5B22BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Received fault tolerance state callback

    2010-02-23 07:32:24.851 5B0E6B90 warning "vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx" Received a duplicate foundry transition: 2, 0

    2010-02-23 07:32:24.919 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to gather config: 66 (ms)

    Removing vm 256 from poweredOnVms list

    2010-02-23 07:32:24.990 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to gather config: 65 (ms)

    2010-02-23 07:32:25.061 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Time to gather config: 65 (ms)

    2010-02-23 07:32:25.065 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Imgcust event read from vmdb tree; values: state = 0, errorCode = 0, errorMsgSize = 0

    2010-02-23 07:32:25.066 1680BB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Tools running status changed to: keep

    2010-02-23 07:32:25.066 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Tools version status: unknown

    2010-02-23 07:32:25.066 5B0E6B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' No upgrade required

    2010-02-23 07:32:25.081 16A11B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    2010-02-23 07:32:25.088 1698FB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    2010-02-23 07:32:25.090 16A11B90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    2010-02-23 07:32:25.090 1698FB90 verbose 'vm:/vmfs/volumes/4b0a69a8-3934ecfc-866f-001999742fbb/Std-Server/Std-Server.vmx' Unknown/unexpected toolsVersionStatus, using guestToolsNotInstalled for offline state

    Ticket issued for CIMOM version 1.0, user root

    My server comprises:

    2 x Intel Xeon X5550, 4C/8T, 2.66 GHz, 8 MB cache

    48 GB DDR3 RAM

    12 x 146 GB SAS 3G 10k hot-plug HDDs

    - 11 HDDs as RAID 5

    - 1 global hot spare

    Anyone know more about these symptoms?

    If it is a supported server (Dell, HP, IBM, etc.), the host's health status will be visible from the vSphere Client (the Hardware Status tab if managed by vCenter, or the Configuration > Health Status tab if not).

    Otherwise, you will need to get a diagnostic utility from the manufacturer and run it.
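Before reaching for hardware diagnostics, it can also help to grep the hostd log for the failed power ops themselves. A minimal sketch is below; the regex and the sample line are illustrative and may need adjusting to the exact message wording in your log version:

```python
import re

# Matches hostd power-op failure lines of the general shape:
#   FoundryVMDBPowerOpCallback: ... power op failed for VM <path>.vmx
#   with error msg = "The operation has timed out" and error code -41.
PATTERN = re.compile(
    r'power op failed for VM (?P<vmx>\S+\.vmx).*?'
    r'error msg = "(?P<msg>[^"]+)".*?error code (?P<code>-?\d+)'
)

def failed_power_ops(log_lines):
    """Return (vmx path, error message, error code) for each failed power op."""
    hits = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            hits.append((m.group("vmx"), m.group("msg"), int(m.group("code"))))
    return hits

# Illustrative sample; in practice, iterate over open("/var/log/vmware/hostd.log").
sample = [
    'FoundryVMDBPowerOpCallback: VMDB reports power op failed for VM '
    '/vmfs/volumes/4b0a69a8/PETA-Router/PETA-Router.vmx '
    'with error msg = "The operation has timed out" and error code -41.',
    'PersistAllDvsInfo called',
]
print(failed_power_ops(sample))
```

Seeing the same VM fail repeatedly with code -41 (timeout), as in the log above, points at that VM's power-on path rather than random hardware faults.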

    Please award points for any helpful answer.

  • Question about w32time and VMs

    In our disaster recovery site, we have multiple ESX hosts configured to synchronize time with our Linux time server (which is set to synchronize with an external time server). VMs on these hosts are set to get their time from the ESX host using VMware Tools. We also have a virtual domain controller set to synchronize with the same external time server (with the nosync option specified).

    Time on the virtual machines seems to stay in sync fine. However, when a server boots, we see the following event in our event logs:

    "The time provider NtpClient could not find a domain controller to use as a time source. NtpClient will try again in XXX minutes."

    I think the reason the server cannot find a domain controller to use as a time source is the nosync option I specified on my domain controller. Therefore, I don't think this is really a problem we need to worry about. Is this a fair assumption? If possible, I would still like to get rid of these events. Should I simply disable the w32time service on each virtual machine, since I am using VMware Tools to synchronize time with the ESX host, or is there a better way to handle this error?

    Thank you.

    Take a look at:

    Re: Time synchronization for guests

    The w32time service is tied to Windows time synchronization.

    You should not see this error at all on the DC... (or it means that your external source is not valid).

    On VMs using VMware Tools time synchronization, this service is not required, so you can set it to manual or disable it.

    André

  • NPIV virtual ports and VM WWNs

    Hi guys,

    Looking at the NPIV docs for VCAP-DCA. A little confusion about virtual ports, NPIV, and VM WWNs.

    Some study guides talk about up to 4 virtual ports per virtual machine. I understand a virtual port is a WWN pair assigned to the virtual machine that can be used as a vHBA into the FC fabric. In my 5.5 lab, I can assign up to 16 WWN pairs to a virtual machine (HW version 8).

    I can see the 4-virtual-port limit in the v4.1 documentation, but couldn't find it in v5 and v5.5. Was this limit removed in v5 and later?

    Thank you.

    VMware vSphere 4 - ESX and vCenter Server

    Each virtual machine can have up to 4 virtual ports. NPIV-enabled VMs are assigned exactly 4 NPIV-related WWNs, which are used to communicate with physical HBAs through virtual ports. As a result, virtual machines can use up to 4 physical HBAs for NPIV purposes.

    vSphere 5.5 Documentation Center

    Actually, it is given in the 5.5 documentation. You might have missed it. Here is the link:

    vSphere 5.5 Documentation Center

    You can have up to 16 WWNs. The documentation states: "You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical FC HBAs on the host."
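To make the "WWN pairs" wording concrete, here is a small illustrative model of that rule. The WWN values, the helper name, and the wrap-around policy for more pairs than HBAs are all invented for the example; this is not a vSphere API:

```python
# Illustrative model of the vSphere 5.5 rule: a VM may carry 1 to 16
# (node WWN, port WWN) pairs, mapped to the first 1 to 16 physical FC
# HBAs on the host. All names and values here are made up.
MAX_WWN_PAIRS = 16

def map_wwn_pairs_to_hbas(wwn_pairs, physical_hbas):
    """Assign each (node WWN, port WWN) pair to a physical HBA in order.

    Pairs beyond the number of physical HBAs wrap around, sketching how
    16 NPIV pairs can share fewer physical ports (an assumed policy,
    not documented vSphere behavior).
    """
    if not 1 <= len(wwn_pairs) <= MAX_WWN_PAIRS:
        raise ValueError("a VM may have 1 to 16 WWN pairs")
    return {
        pair: physical_hbas[i % len(physical_hbas)]
        for i, pair in enumerate(wwn_pairs)
    }

pairs = [
    ("20:00:00:25:b5:00:00:01", "20:00:00:25:b5:00:00:02"),
    ("20:00:00:25:b5:00:00:03", "20:00:00:25:b5:00:00:04"),
    ("20:00:00:25:b5:00:00:05", "20:00:00:25:b5:00:00:06"),
]
hbas = ["vmhba2", "vmhba3"]
mapping = map_wwn_pairs_to_hbas(pairs, hbas)
print(mapping)  # third pair wraps back to vmhba2
```

The point of the sketch is simply that the per-VM limit is on WWN pairs (16), while the number of usable physical HBAs is a separate, smaller bound.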

  • vROPs and SRM - protected machines and placeholder VMs show different guest OS free space

    Hi all

    Question: the Home - Health - "One or more virtual machine guest file systems are running out of disk space" alert shows different values for SRM-protected VMs and their placeholder VMs.

    Has anyone encountered a discrepancy where SRM-protected machines report a different guest OS free-space value than the placeholder VM? I've been enforcing a cleanup of guests with < 5% free space. When we clean up a guest, the alert clears almost immediately on the protected (powered-on) VM, but the placeholder VM never presents an up-to-date value according to vROPs.

    Is this a known issue? Is the problem with SRM or vROPs? Are the vCenters linked with SRM supported?

    I'm running vROps 6.2.0.3528905 (build 3528905). We use array-based replication and have a GB link between sites. Replication completes in less than 5 minutes, so this shouldn't be a problem. I still see the issue once a day, and the placeholder VMs' space values never synchronize or even get close to their protected-VM counterparts.

    Thank you

    Luis C.

    So after a discussion with the SRM and vRealize Operations Manager teams, it was identified that vRealize treats placeholder VMs as ordinary VMs. The interesting thing is that when a failover test is run, the hard disks are attached to the placeholder VM, and all the disk information is ingested at the recovery-site vCenter. This turned out to be my problem. I had to open a feature request; if any of you are in the same boat, please do the same so they'll add logic into SRM. For the moment, my workaround was to create a dynamic group for powered-off VMs and attach the dynamic group to a policy with alarms disabled.

  • RDM and NPIV support in 5.1?

    I was wondering if anyone knows whether RDM and NPIV are supported in 5.1? I am pretty sure they were not supported in 5.0, but please correct me if I'm wrong. We really need some insight in these areas for a recent deployment. Thanks in advance!

    5.1 supports both NPIV and RDM.

  • VMs that were partially copied to a new datastore and now cannot start!

    Hi - my team had implemented a vSphere ESXi 4.1 host with the VM files kept locally on the host's disks.

    A NAS was implemented and a storage area made available to the host.

    A team member began to move virtual machines from the host's local drive to the NAS storage area, but the process bombed out, and the VMs now attempt to start using files on the NAS even though the files are still on local storage (unfortunately this also included the vCenter server).

    Can someone advise a way to solve this problem?

    Thank you

    Chris

    Hello
    That would be the expected behavior if the VMDKs use absolute paths in their vmdk descriptors.

    You can check this easily: connect via WinSCP and download the VMDK descriptor files (the small .vmdk files, not the -flat.vmdk or -delta.vmdk ones).
    Then open them in a text editor and look for absolute path entries.
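Once the descriptors are downloaded, the check can be scripted. A rough sketch follows; the extent-line format mirrors a typical descriptor, but the sample content is illustrative:

```python
import re

# A VMDK descriptor's extent lines normally reference the flat file by a
# relative name, e.g.:   RW 16777216 VMFS "Std-Server-flat.vmdk"
# After a botched move they may instead carry an absolute path, e.g.:
#   RW 16777216 VMFS "/vmfs/volumes/old-datastore/Std-Server-flat.vmdk"
EXTENT = re.compile(r'^(RW|RDONLY|NOACCESS)\s+\d+\s+\w+\s+"(?P<path>[^"]+)"', re.M)

def absolute_extent_paths(descriptor_text):
    """Return extent paths that are absolute (start with '/')."""
    return [
        m.group("path")
        for m in EXTENT.finditer(descriptor_text)
        if m.group("path").startswith("/")
    ]

# Illustrative descriptor snippet: one absolute extent, one relative.
sample = '''# Disk DescriptorFile
RW 16777216 VMFS "/vmfs/volumes/old-datastore/Std-Server-flat.vmdk"
RW 16777216 VMFS "Std-Server_1-flat.vmdk"
'''
print(absolute_extent_paths(sample))
```

Any path the function reports is an extent pinned to the old location; editing it back to a relative name (with the VM powered off, and with backups) restores the usual layout-independent behavior.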
