T2P scripts fail on NFS mounts
Hello IDM gurus!
I'm working on a multi data center (MDC) implementation of OAM 11gR2PS3. One data center is in place and functioning as expected. We tried to use the T2P scripts described in the OAM MDC documentation to clone the binaries (copyBinary.sh), but we see the error message below.
Error message:
November 17, 2015 18:46:48 - ERROR - CLONE-20435 Some Oracle homes were excluded during the copy operation.
November 17, 2015 18:46:48 - CAUSE - CLONE-20435 The following Oracle homes were excluded during the copy operation:
[/data/Oracle/OAM/.snapshot/daily.2015-11-11_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-11_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-11_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-12_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-12_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-12_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-13_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-13_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-13_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-14_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-14_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-14_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-15_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-15_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-15_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-16_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-16_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-16_0010/Oracle_IDM1,
/data/Oracle/OAM/.snapshot/daily.2015-11-17_0010/oracle_common,
/data/Oracle/OAM/.snapshot/daily.2015-11-17_0010/oracle_bip,
/data/Oracle/OAM/.snapshot/daily.2015-11-17_0010/Oracle_IDM1]
The possible causes are:
1. Not all Oracle homes were registered with an OraInventory.
2. If any Oracle homes were registered with a custom OraInventory, the corresponding inventory pointer file was not supplied to the T2P operation.
3. The canonical path of an Oracle home registered with the OraInventory was not a child of the Middleware home.
November 17, 2015 18:46:48 - ACTION - CLONE-20435 Make sure that none of the possible causes listed in the CAUSE section apply.
The .snapshot directories in the error message are a NetApp/NFS feature, because the mounts are in fact NFS volumes.
Are the T2P scripts supported on NFS mounts? If so, how can we solve this problem? We would appreciate your comments.
Regards,
Shiva
The issue was due to wrong settings on the storage mounts, and has been fixed by the storage team. The T2P scripts had failed from the start; after the storage mounts were corrected, they ran successfully. A restore was necessary, and the RCU schemas had to be recreated on the clone.
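For anyone hitting the same symptom, a quick way to confirm it is to check whether NetApp .snapshot directories are visible under the Middleware home, since copyBinary.sh walks the whole tree and picks up the Oracle homes it finds inside them. A minimal sketch (the helper function and the demo directory are illustrative, not part of the T2P tooling):

```shell
# Check a Middleware home for NetApp .snapshot directories; any hit means
# copyBinary.sh will also see the Oracle homes inside them and exclude them.
check_snapshots() {
    find "$1" -maxdepth 2 -type d -name '.snapshot' 2>/dev/null
}

# Demo against a scratch directory standing in for /data/Oracle/OAM:
mw_home=$(mktemp -d)
mkdir -p "$mw_home/.snapshot/daily.2015-11-17_0010/oracle_common"
found=$(check_snapshots "$mw_home")
echo "$found"
rm -rf "$mw_home"
```

If the check prints any paths, the snapshot directories are visible to tree walks and will be swept up by the copy, which matches the excluded-homes list in the error above.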
Tags: Fusion Middleware
Similar Questions
-
Excerpts from run.log
*****************
2014-09-10 13:21:51 UTC...
2014-09-10 13:21:51 UTC [VMSTAF] [0] INFO: VM: [PID 13775] command (perl /iozone-duration.pl -d 3600 -f 128m -r 1024k -t /tmp/iozone.dat 1 > /iozone.2.log 2>&1) on 192.168.1.25 has completed.
2014-09-11 01:36:16 UTC Test script failed after 300 s at /opt/vmware/VTAF/Certification/Workbench/VTAFShim line 279, <STDIN> line 309, during cleanup.
2014-09-11 01:36:16 UTC Test interrupted
One thing noticed here is that no logs were generated during the roughly 12 hours between the completion of the test on one of the guest OSes and the "Test script failed" message.
Any recommendations would help...
Do not wait more than 2 hours if you think the test case is stuck. Clean up and restart the test.
-
Hi all,
I've upgraded to Win 7 Enterprise for NFS support.
I was able to map two NFS shares successfully.
However, upon restart, the mappings are lost. Is there a way (I don't see one in the NFS MMC snap-in) to have them reconnected at logon? I could use a logon script in the local GPO to do this, but I was wondering if there was a "better way" (tm).
A tip for those who struggle with the NFS client: use showmount to see if Windows can even see the shares. I had to fix some firewall issues first; then showmount showed the shares and I was able to mount and browse them:
C:\Users\owner>showmount -e server
Exports list on server:
/nfs4exports              192.168.x.0/24
/nfs4exports/raid         192.168.x.0/24
/nfs4exports/mythstorage  192.168.x.0/24

C:\Users\owner>mount server:/nfs4exports/raid z:
z: is now successfully connected to server:/nfs4exports/raid
The command completed successfully.

C:\Users\owner>
Found the answer on the TechNet site (sort of: a persistent system-wide mount is not possible, but using a user logon script is).
Basically, NFS connections are stored on a per-user basis.
"NFS mounted drive letters are session specific, so running this in a startup script most likely will not work. The service would need to mount its own drive letters from a script that runs in the service's own context. We have specific guidelines that this is not recommended: http://support.microsoft.com/kb/180362."
-
ESXi 4.1 - HP blades and NFS mounts
Hello
I'm having a strange problem with NFS access.
We currently use HP BL680c G5 blade servers in C7000 enclosures. The Cisco switch configuration (3120X) is the same for all ESX ports.
I use VMNIC0 and VMNIC1 in an etherchannel with IP hash to access the NFS datastores on NetApp filers.
I now have two servers, side by side, that I want to swap for testing reasons. The two servers are moved into each other's slots.
Once restarted, those two hosts no longer see any datastores; the datastores appear as inactive. I can't reconnect them (VI client or esxcfg-nas -r).
vmkping to the NetApp filers no longer works either.
As soon as I put these 2 servers back into their original slots, everything returns to normal.
On the Cisco side, the switch ports are still configured identically, and none of them are shut down.
I'm missing something, but I don't know where to look.
Thanks in advance
Hi Hernandez,
If you're repatching the hosts, are they still using the same network cables as before? (Perhaps there is a change of IP address, or a host rule on the NFS export.)
Don't forget that the export policies created on the NFS mounts restrict access to specific host names or IPs; if you're repatching and the IP changes, that can invalidate the export.
-
I have two questions about how NFS mount points actually work from the NFS server's point of view.
If the mount point as seen from the "outside" is, for example, /iso, does this mean:
1. that the directory is actually named "iso", or could it have a different name on the NAS server's local file system? (As with SMB, where the share name can differ from the folder name.)
2. that the folder is actually located directly in the root of the server's local file system, i.e. /iso on the server, or could it for example be /dir1/dir2/iso and still appear to the outside as /iso? (As with SMB, where a "share" creates a new root somewhere on the filesystem that is exposed to the outside.)
ricnob wrote:
Thank you! Would that be for a typical Unix/Linux NFS server?
Yes.
So perhaps it is not defined in the standard how this should be done, and it can vary between implementations? (*nix being the most common, of course.)
Possibly, but does MS ever follow standards? Even their own?
RFC1094/NFS:
"Different operating systems may have restrictions on the depth of the tree or used names, but also using a syntax different tor represent the"path", which is the concatenation of all the"components"(file and directory names) in the name...". Some operating systems provide a 'mount' operation to make all file systems appear as a single tree, while others maintain a 'forest' of file systems... NFS looking for a component of a path at the same time. It may not be obvious why he did not only take the name of way together, stroll in the directories and return a file handle when it is carried out. There are many good reasons not to do so. First, paths need separators between items in the directory, and different operating systems use different separators. We could define a representation of network Standard access path, but then each path must be analyzed and converted at each end. "
AWo
VCP 3 & 4
\[:o]===\[o:]
= You want to have this ad as a ringtone on your mobile phone? =
= Send 'Assignment' to 911 for only $999999,99! =
-
maximum size of the NFS mounts
Hello
I would like to know the maximum size for an NFS mount in ESX(i) 4. With iSCSI, if I need more than 2 TB I have to use volume extents. Is it the same with NFS, or can I simply use, for example, a 4 TB NFS share?
Thank you
Marcus
NFS mounts have no size limit on the ESX side; they can be as large as you need. ESX just mounts the NFS share, that's all. The 2 TB limitation applies only to VMFS, because ESX creates and manages VMFS volumes itself. NFS mounts are managed by other systems, which have their own limits.
NFS is just a huge hard disk as far as ESX is concerned.
-
How to start VMware after the NFS mounts
OK, so I have now solved the mounting mystery. I'm asking the Linux gurus how to change the boot sequence so that the NFS mounts are made before the VMware service starts.
I use Centos 5.3 on the host.
Phil
Hello
You must configure vmware to start after nfs and network.
A simple example makes it easier to understand:
ls /etc/rc3.d/S*network
/etc/rc3.d/S10network
ls /etc/rc3.d/S*nfs
/etc/rc3.d/S11nfs
ls /etc/rc3.d/S*vmware
/etc/rc3.d/S05vmware
In this case, the fix could look like this:
rm /etc/rc3.d/S05vmware
ln -s /etc/init.d/vmware /etc/rc3.d/S20vmware
Now vmware will always start after the NFS mounts.
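The renumbering works because SysV init runs the S* scripts in a runlevel directory in lexical order, so S20vmware sorts after S11nfs. This can be seen without touching a real system (scratch directory and file names are illustrative):

```shell
# SysV init starts S* scripts in lexical order; simulate the renamed links.
rcdir=$(mktemp -d)
touch "$rcdir/S10network" "$rcdir/S11nfs" "$rcdir/S20vmware"
order=$(ls "$rcdir")      # ls sorts lexically, like init does
echo "$order"
rm -rf "$rcdir"
```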
Un saludo / best regards,
Pablo
Please consider rewarding any helpful answer. Thank you!
-
I just bought a Qnap NAS box that I want to share between 2 VM hosts. I did a little research on NFS vs iSCSI, and it seems that NFS has some pretty good advantages over iSCSI, especially on a shared network.
The hosts are both CentOS 5.3.
I wonder what the optimal settings (options) for the NFS mount would be. I found a couple on some old sites, but I wonder what people here use. Here's an option string that I found:
nfsvers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,intr,noatime
I'm heading off to look at the nfs man page now to see what these all mean...
Thank you
Phil
Here's a good explanation of this option:
The nolock option prevents the sharing of file-locking information between the NFS server and the NFS client. The server is unaware of file locks held on this client, and vice versa.
Using the nolock option is required if the NFS server has its NFS file-locking feature broken or unimplemented, but it works across all versions of NFS.
If another host is accessing the same files as your app on this host, there may be problems when file-locking information sharing is disabled.
Failure to maintain proper locking between a write operation on one host and a read operation on another may cause the reader to see incomplete or inconsistent data (reading a data structure/record/row that is only partially written).
A locking failure between two writers is likely to cause data loss or corruption, as a later write replaces an earlier one; the changes made by the earlier write operation may be lost.
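For reference, the option string quoted in the question would typically be applied via /etc/fstab; the server name and paths below are placeholders, not a recommendation:

```shell
# /etc/fstab entry (hypothetical host and paths):
nas01:/vmstore  /mnt/vmstore  nfs  nfsvers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,intr,noatime  0 0
```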
-
Trying to add another NFS mount to a 3.5 server. I have an existing one on a Windows 2003 server that works very well. The error I get when creating it is:
> Error configuring host: unable to open volume: /vmfs/volumes/f33bc09a-59b57f41 <
Any ideas? It is also a 2003 server, but a different box.
Hello
It looks like the ESX host's IP or subnet does not have access to the NFS share.
Check the properties of the NFS server's export.
vExpert 2009
-
VCB snapshot creation failed: custom pre-freeze script failed - how to fix?
On one VM I started getting this error from VCB:
Error: Other error encountered: Failed to create snapshot: custom pre-freeze script failed.
I found several other discussions about this, with resolutions of disabling filesystem sync and reinstalling the VMware Tools.
The guest is a CentOS 5 VM. It has lots of free space, and the LUN has enough free space. I tried reinstalling the VMware Tools. I tried creating the pre-freeze script, chmodding it to 755, and even tried making it a simple "exit 0" script so that it actually returned a correct exit status.
The VM snapshots fine in Virtual Center. I can't turn off the quiesce setting, as it is set automatically by Backup Exec and I don't see an option to disable it. Adding it to the config.js appears to be ignored.
The full command that I use for testing, which generates the error, is:
vcbMounter.exe -h vc-01 -u Administrator -p "mypass" -a uuid:564d7acf-0ecb-d867-8e4f-bbcb1aea6508 -r "C:\TEMP\monitor-22-06-408" -t fullvm -m nbd -M 0 -F 0
I have several other CentOS 5 VMs that work very well with VCB.
Any ideas or other things I can try?
StarWind Software R & D
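For anyone reproducing the "exit 0" test mentioned above, this is the kind of minimal pre-freeze script being described; the install path and filename for pre-freeze scripts vary by VMware Tools version, so treat those as assumptions and check your Tools documentation. The sketch below only shows creating such a script and verifying that it really returns exit status 0:

```shell
# Create a minimal pre-freeze script that just reports success, then verify
# that it actually returns exit status 0 when run.
script=$(mktemp)
printf '#!/bin/sh\nexit 0\n' > "$script"
chmod 755 "$script"
sh "$script"
status=$?
echo "pre-freeze exit status: $status"
rm -f "$script"
```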
-
Hello
I am very new to LiveCycle Designer, and I get the error message: "Script failed (language is formcalc; context is xfa... Error: syntax error near token '(' on line 3, column 0.)"
I just added some numeric fields. The calculation works fine, but how do I get rid of this error message?
Here is the code I have in the Script Editor:
Form1.Page2.TotalEnrollment::calculate - (FormCalc, client)
TotalEnrollment.rawValue = NumericField1.rawValue + NumericField2.rawValue
This doesn't make any sense; I don't even have a "(" in my syntax...
Thank you!
Hello
With FormCalc, you don't need .rawValue.
Also, since the script is on the object whose value you want to set, you can use $ instead of the object's name.
Try:
$ = NumericField1 + NumericField2
Make sure you haven't made a typo, as I suspect that could be the problem.
Niall
-
VERY simple calculation - "Script Failed", unknown value
Hello people. I'm not new to LCD (I've used the product for some time to create graphical documents); however, I am new to using LCD to create documents that do mathematical calculations, which is exactly why I'm having this problem. Sorry, grrrrrrr...
I have a simple document with a few tables/rows. Each row has a simple conditional value (if something is checked there is a numeric value); that works fine. What I'm trying to do is just total the values of all the rows in each table, and that bombs (errors) with "Script Failed (language is formcalc; context is ..)" and then identifies all the field names as "unknown". I've attempted to test it a little differently and I still get this error; in fact I can't get this form to just add two simple numbers without a similar error.
I know it's probably me being less-than-smart; I don't believe there is an error in LCD itself.
Any direction would be most appreciated. Thank you very much!
http://www.chnwi.info/test/wellness_scorecard_final.PDF
A few things:
1. The Sum command has a capital S: Sum(), not sum().
2. There must be no space between Sum and the parenthesis.
3. When you reference fields that are in a different subform than the total, you have to give the path to the fields. The easiest way to do this is in the Script Editor: place the cursor where you want the reference to appear, then hold Ctrl and move the mouse over the desired field. The cursor turns into a "V"; click while you're over the field and the correct reference is written into the Script Editor.
4. You cannot split the command across two lines; keep it all on a single line.
I have included my modified version. I only did 1 table.
Paul
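Putting those points together, the total field's calculate script ends up as a single line like the following (FormCalc; the table, row, and field names here are hypothetical and must match your own form hierarchy):

```
$ = Sum(Table1.Row1[*].NumericField1)
```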
-
Adding an NFS data store - zmvolume failed at ./mount-nfs-store.pl
I have Workspace 1.5.
I am trying to add an NFSv3 share to it. The NFS server is Windows 2008.
I used the instructions in the document. What's happening is that at the "mount -a" line in the mount-nfs-store.pl script, the owner and group of the zimbra directory change to 4294967294, and it therefore gives:
Error occurred: directory does not exist or is not writable
zmvolume failed at line 67 of ./mount-nfs-store.pl
As soon as I unmount, the owner and group go back to zimbra.
Any ideas what's happening here?
I also can't chown it back to zimbra while it shows 4294967294, unless and until I remove the disk first.
Thank you
In case anyone needs the answer... make sure you enable anonymous access and grant the anonymous logon rights to the share.
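A side note on the mysterious owner: 4294967294 is just -2 interpreted as an unsigned 32-bit integer, the conventional anonymous/"nobody" uid that Windows Services for NFS falls back to when it cannot map the client's user, which is why enabling anonymous access and granting it rights fixes the mount. Quick arithmetic check:

```shell
# -2 wrapped into an unsigned 32-bit integer gives the uid seen on the mount.
anon_uid=$(( 4294967296 - 2 ))   # 2^32 - 2
echo "$anon_uid"
```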
-
SQL Server 2005 unattended install using a script fails on a Windows 2003 cluster
We are trying to perform an unattended SQL Server 2005 installation via script, but the installation fails on a Windows 2003 cluster. We use Windows 2008 with Hyper-V running a DC and two nodes (all Win 2003). The script is:
start /wait setup.exe /qb VS=SQLTEST INSTALLVS=SQL_Engine INSTANCENAME=SQL123 ADDLOCAL=SQL_Engine ADDNODE=sqlnode-1,sqlnode-2 GROUP=SQL1 IP=192.168.1.85,SQLNetwork ADMINPASSWORD=Windows8! SAPWD=Windows8! INSTALLSQLDIR="%ProgramFiles%\Microsoft SQL Server\" INSTALLSQLDATADIR=G:\SQLDATADIR\ SQLACCOUNT=lab\sql_svc SQLPASSWORD=Windows8! AGTACCOUNT=lab\sql_svc AGTPASSWORD=Windows8! SQLBROWSERACCOUNT=lab\sql_svc SQLBROWSERPASSWORD=Windows8! SQLCLUSTERGROUP="lab\sql_grp" AGTCLUSTERGROUP="lab\sql_grp" FTSCLUSTERGROUP="lab\sql_grp" ERRORREPORTING=1 SQMREPORTING=1 SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS
Errors on the nodes:
SQLSetup0008_SQLNODE1_Core (Local)
Running: InstallSqlAction at: 4/14/2012 22:56:39
Installing: sql on target: SQLNODE1
Waiting for remote setup(s) to prepare
Remote Setup (s) is ready
Problem determining the virtual server state for package: '1' due to a datastore exception:
Source File Name: datastore\cachedpropertycollection.cpp
Compiler Timestamp: Fri Jul 29 01:13:49 2005
Function Name: CachedPropertyCollection::findProperty
Source Line Number: 63
----------------------------------------------------------
Failed to find property 'VsDataPath' {'PackageIdStateScope', '', '1'} in cache
Source File Name: datastore\packageidstatescopecollector.cpp
Compiler Timestamp: Wed Aug 24 13:40:04 2005
Function Name: PackageIdStateScopeCollector::collectProperty
Source Line Number: 115
----------------------------------------------------------
dataPathValue is empty
Cluster feature detected: SQL_Engine
Loaded DLL: C:\Program SQL Server\90\Setup Bootstrap\sqlsval.dll Version: 2005.90.1399.0
Source File Name: datastore\cachedpropertycollection.cpp
Compiler Timestamp: Fri Jul 29 01:13:49 2005
Function Name: CachedPropertyCollection::findProperty
Source Line Number: 130
----------------------------------------------------------
Failed to find property 'IPResources' {'VirtualServerInfo', '', 'SQLTEST'} in cache
Source File Name: datastore\cachedpropertycollection.cpp
Compiler Timestamp: Fri Jul 29 01:13:49 2005
Function Name: VirtualServerInfo.IPResources
Source Line Number: 113
----------------------------------------------------------
Failed to collect property 'IPResources' {'VirtualServerInfo', '', 'SQLTEST'}
Package transaction threw an exception.
Error Code: 0x8007000d (13)
Windows Error Text: The data is invalid.
Source File Name: sqlchaining\highlyavailablepackage.cpp
Compiler Timestamp: Mon Aug 29 01:18:42 2005
Function Name: sqls::HighlyAvailablePackage::manageVsResources
Source Line Number: 490
---- Context -----------------------------------------------
sqls::HighlyAvailablePackage::preInstall
sqls::HighlyAvailablePackage::manageVsResources
m_dataDirPath is empty
Cluster API threw an exception during virtualization operations.
Package first notify: 13
Error Code: 0x8007000d (13)
Windows Error Text: The data is invalid.
Source File Name: sqlchaining\highlyavailablepackage.cpp
Compiler Timestamp: Mon Aug 29 01:18:42 2005
Function Name: sqls::HighlyAvailablePackage::manageVsResources
Source Line Number: 490
---- Context -----------------------------------------------
sqls::HighlyAvailablePackage::preInstall
sqls::HighlyAvailablePackage::manageVsResources
m_dataDirPath is empty
Cluster API threw an exception during virtualization operations.
SQLSetup0008_SQLNODE2_Core (Local)
Local setups can complete
Running: InstallSqlAction at: 4/14/2012 23:23:57
Installing: sql on target: SQLNODE2
Notifies package is ready to start: 0
Waiting for start installation notification
Local setups can begin installation
Error Code: 0x8007000d (13)
Windows Error Text: The data is invalid.
Source File Name: remotepackageengine\remotepackageinstallersynch.cpp
Compiler Timestamp: Wed Aug 24 13:40:17 2005
Function Name: sqls::RemotePackageInstallerSynch::preInstall
Source Line Number: 128
---- Context -----------------------------------------------
sqls::InstallPackageAction::perform
sqls::RemotePackageInstallerSynch::preInstall
Abandoning package: 'sql' due to a host setup error: 13
Notify all ready to commit: 13
Notify all ready to terminate: 13
Waiting for complete installation notification
Local setups can complete
Package return code: 13
Complete: InstallSqlAction at: 4/14/2012 23:23:59, returned false
Error: Action 'InstallSqlAction' failed during execution. Error information reported during run:
Target collection includes the local machine.
-----------------------------------------------
Error: Action 'UninstallForRS2000Action' failed during execution. Error information reported during run:
Action: 'UninstallForRS2000Action' will be marked as failed due to the following condition:
Condition 'rs was successfully upgraded.' returned false.
Installation of package: 'patchRS2000' failed due to a precondition.
Running: ReportChainingResults at: 4/14/2012 23:24:01
Error: Action 'ReportChainingResults' threw an exception during execution.
One or more packages failed to install. Refer to the logs for error details. : 13
Error Code: 0x8007000d (13)
Windows Error Text: The data is invalid.
Source File Name: sqlchaining\sqlchainingactions.cpp
Compiler Timestamp: Thu Sep 1 22:23:05 2005
Function Name: sqls::ReportChainingResults::perform
Source Line Number: 3097
Please advise.
Assani
Hello,
The question you posted would be better suited to the TechNet forums. I would recommend posting your query there; you can follow this link to post your question:
http://TechNet.Microsoft.com/en-us/WindowsServer/bb512923
Hope this information helps.
-
I have modified most of our rules through several Wssf and use a number of rule-level variables. I set these variables through groovy scripts.
Recently, I had to change a variable that appears in all the rules.
This is the script that I have written so far:
import com.quest.nitro.service.sl.interfaces.rule.*;

def ruleInfo = "";
def ruleSvc = server.get("RuleService");
def ruleList = ["DBSS - ADH Service status"];
def varExp = "INST_NAME";
def varText = "scope.parent_node.mon_instance_name";
def allRules = ruleSvc.getAllRules();
for (rule in allRules) {
    def ruleName = rule.getName();
    def ruleCart = rule.getCartridgeName();
    if (ruleList.contains(ruleName)) {
        if (ruleCart.equals("DB_SQL_Server_UI")) {
            def expressionSet = rule.getExpressions();
            for (expression in expressionSet) {
                if (expression.getName().equals("INST_NAME")) {
                    expressionSet.remove(expression);
                    ruleInfo += "Removed variable $varExp from $ruleName\n";
                }
            }
            // Add the new expression
            expressionSet.add(rule.createExpression("INST_NAME", varText));
            rule.setExpressions(expressionSet);
            ruleSvc.saveRule(rule);
            ruleInfo += "Added variable $varExp to $ruleName\n";
        }
    }
}
return ruleInfo;
The part of the script that adds the variable works consistently. The part of the script that removes the rule-level expression fails most of the time with the following error:
com.quest.nitro.service.sl.interfaces.scripting.ScriptAbortException: script1001968: java.util.ConcurrentModificationException
I'm still fairly new to Groovy and Java, but I gather that when I iterate over a collection that is modified, the iterator throws a java.util.ConcurrentModificationException. I've read that I should look at some kind of collection synchronization. Before I go down that rabbit hole, I thought I would just ask here: how should I write this code so it works consistently?
This is due to the collection being modified while you iterate over it. Try breaking your code into pieces: first get all the rules that match your criteria into a list (call it refinedRuleList), then iterate through refinedRuleList to find the expressions and remove them. Give this a try!
Thank you
#AJ Aslam
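For what it's worth, the exception does not require a second thread: a HashSet throws ConcurrentModificationException when it is structurally modified during its own iteration, which is exactly what the remove() inside the for loop does. Besides the collect-then-remove approach suggested above, the idiomatic fix is to remove through the iterator itself. A minimal plain-Java illustration (the variable names echo the script above but are made up, not the rule-service API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class RemoveWhileIterating {
    public static void main(String[] args) {
        Set<String> expressions =
                new HashSet<>(Arrays.asList("INST_NAME", "OTHER_VAR", "THIRD_VAR"));

        // Removing via the Iterator (not via the Set) is safe during iteration
        // and avoids java.util.ConcurrentModificationException.
        for (Iterator<String> it = expressions.iterator(); it.hasNext(); ) {
            if (it.next().equals("INST_NAME")) {
                it.remove();
            }
        }
        System.out.println(expressions.contains("INST_NAME")); // prints "false"
    }
}
```

In Groovy the same pattern applies, so replacing the for-in loop with an explicit iterator (or with removeIf on a Java 8+ runtime) should make the removal work consistently.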