HP ProLiant DL160 - cannot create datastore on local storage (controller, no RAID)

Hi all. I appreciate your time and comments.

First time using VMware ESXi. The installation (VMware ESXi 5.5.0, build 1331820) went very well (installed to USB). I'm able to log in to the console and the vSphere Client, and everything seems normal. Any attempt to use the local storage, however, fails with this error message: http://i.imgur.com/40ySsOS.png

"It was not correct to specified parameters. Vim.Host.DiskPartitionInfo.spec"-" 'HostStorageSystem.ComputerDiskPartitionInfo' call for object 'storage system' on ESXi "< ip address >" failed. "

I tried both the base (VMware) installation disc and the HP "Custom" installation disc. Both have the same problem: everything installs fine, but the local disks - although they are detected - cannot be used for local storage. The hardware worked very well before (its previous incarnation was as a Server 2003 file server, with no reported/noticed issues). It's out of warranty and is going to be used as a lab server. Otherwise everything seems to work fine.

Specifics:

HP DL160 G6

Storage controller: Intel ICH10 6-port SATA AHCI Controller

Disks: 500 GB 3G MDL SATA, 7200 RPM

The controller and the disks are detected by the system: http://i.imgur.com/3PAKQHL.png

Can someone point me in the right direction on where to go from here? Searching on the error messages hasn't turned up much help. I can dig up more details if needed. I would be grateful.

This could be a problem with the existing data on the disks. I suggest you clean the partition info using dd as explained in the VMware KB article "Troubleshooting the Add Storage Wizard error: Unable to read partition information from this disk" to see whether that solves the problem.
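Roughly, the approach in that KB comes down to something like the following from the ESXi shell. This is only a sketch - the device name below is a placeholder, and you should double-check you are pointing at the right (empty) disk before wiping anything:

    # List the local disks to find the device name (placeholder used below)
    ls /vmfs/devices/disks/

    # Option 1: re-initialize the partition table with partedUtil (ESXi 5.x)
    partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx msdos

    # Option 2: zero out the start of the disk with dd (the sector count here is illustrative)
    dd if=/dev/zero of=/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx bs=512 count=34

Afterwards, rescan the storage adapter and retry the Add Storage wizard.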

André

Tags: VMware

Similar Questions

  • Cannot create datastore in vSphere 4.1

    Hi all

    I can't create a datastore from my vSphere 4.1 client.

    When I start to create the datastore, I get this error; notice how it also says the disk is empty - it also misreports the size as 250 GB, when the drive should be 3.4 TB.

    review_disk.PNG

    I can get past this by pressing Return and clicking Next, but after I've entered a datastore name, this error pops up:

    formatting.PNG

    I don't know why this is. I've also tried it on several other client machines, and all show the same errors - I've attached two error logs in the hope that someone will be able to shed some light on this.

    Thank you

    K

    As said, LUNs presented to an ESXi 4.1 host need to be <= 2 TB minus 512 bytes in order to work as expected. For the reason behind this limitation see http://kb.vmware.com/kb/3371739

    Alternatively, you may consider upgrading to ESXi 5.0, which supports LUNs of up to 64 TB.

    André

  • Cannot create datastore

    Hello!

    I installed ESX 3.5 Update 4 successfully, but I've run into the following problem: I cannot create a datastore!

    The VI Client can't see my hard drives!

    After connecting directly to the ESX server over SSH, I see:

    # esxcfg-vmhbadevs
    #            # empty!!!
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hde2             3.5G  1.4G  2.0G  41% /
    /dev/hde6              84M   19M   61M  24% /boot
    none                  124M     0  124M   0% /dev/shm
    /dev/hde7             525M   20M  479M   4% /var/log
    # vdf -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hde2             3.5G  1.4G  2.0G  41% /
    /dev/hde6              84M   19M   61M  24% /boot
    none                  124M     0  124M   0% /dev/shm
    /dev/hde7             525M   20M  479M   4% /var/log
    /vmfs/devices          33M   33M     0   0% /vmfs/devices
    # fdisk -l

    Disk /dev/hdk: 200.0 GB, 200049647616 bytes
    255 heads, 63 sectors/track, 24321 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/hdk1               1       16709   134215011   fb  Unknown
    /dev/hdk2           16710       24321    61143328+  fb  Unknown

    Disk /dev/hdi: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/hdi1   *           1      182401  1465135968+  fb  Unknown

    Disk /dev/hdg: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/hdg1   *           1      182401  1465135968+  fb  Unknown

    Disk /dev/hde: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/hde1               1          96      763904    5  Extended
    Partition 1 does not end on cylinder boundary.
    /dev/hde2              97         548     3630690   83  Linux
    /dev/hde3             618       60802   483425304   fb  Unknown
    /dev/hde4             549         617      554242+  82  Linux swap
    /dev/hde5              13          27      112624   fc  Unknown
    /dev/hde6   *           2          12       88326   83  Linux
    /dev/hde7              28          95      546178+  83  Linux
    Partition table entries are not in disk order

    Disk /dev/hdc: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/hdc1           43932      182401  1112260275    7  HPFS/NTFS
    /dev/hdc2   *           1       11252    90381658+   7  HPFS/NTFS
    /dev/hdc3           11253       43931   262494061+  fb  Unknown
    Partition table entries are not in disk order
    #

    The partitions are there, but VMware won't scan them.

    Is there a way to understand what happened?

    My setup

    motherboard: Tyan Thunder n3600m S2932GRN

    disks: SATA seagate and WD.

    Nick

    I looked at the barebone's specifications, and it seems that your controller is a "soft" RAID.

    Take a look at the configuration of the SATA controller.

    Once your drives are recognized as /dev/sdXX, you will be able to create your datastore.
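    For example (just a sketch - device names will differ on your system), after switching the onboard SATA mode in the BIOS you can check from the service console whether the disks are now presented as SCSI devices:

    # Should now list vmhba-to-device mappings instead of coming back empty
    esxcfg-vmhbadevs

    # The data disks should show up as /dev/sdX rather than /dev/hdX
    fdisk -l | grep "^Disk /dev/sd"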

    André

    * If you found this or any other answer useful, please consider awarding points for correct or helpful answers.

  • Cannot create data source in WLS console (Linux) to XE db (Windows)

    Hi all
    I use Linux 5 for SOA 11.1.1.1.4 deployments. I'm unable to create a data source in WLS for an XE database that runs on Win7; I'm getting the error below.
    Did I do it properly, or is it not possible? When connecting to an XE db that runs on Linux, I am able to connect.


    - Error: The connection test failed.
    - Error: IO Error: The Network Adapter could not establish the connection
    oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:443)
    oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:670)
    oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:230)
    oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:34)
    oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:567)
    oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:404)
    oracle.jdbc.xa.client.OracleXADataSource.getPooledConnection(OracleXADataSource.java:694)
    oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:267)
    oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:134)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:745)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:458)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...

    Please suggest a resolution.

    Regards,
    Shankar

    I noticed that this was a problem with network communication between the Linux box and the VMware Workstation VM.
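    In case it helps anyone else, a quick way to confirm basic reachability from the Linux box before touching the WLS data source (the host name and port below are placeholders; 1521 is only the default listener port):

    # Replace win7-vm with the Windows VM's IP address or host name
    ping -c 3 win7-vm
    # Check that the Oracle listener port is reachable through the VM's firewall
    telnet win7-vm 1521

    If the port does not answer, look at the VMware Workstation network mode (NAT vs. bridged) and the Windows firewall rather than at WebLogic.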

    Thanks
    Siva

  • Cannot create VMFS datastore on SAN

    Recently I wiped two Dell 2950s that had been running 3.0 with no problems. I installed 3.5 Update 4, and since then I cannot create a VMFS datastore from the GUI. I get an error that says "Error during the configuration of the host: Cannot open volume: /vmfs/volumes/xxxxx".

    When I connect to the console as root, I see that the volume has been created, and I can even write to it. I also noticed that the datastore link does not work either.

    The storage is EMC CX3 and CX4.

    I tried this on two different servers and two different storage arrays.

    Ah yes... the CLARiiON storage system must use the MRU failover policy.

    Glad you got it sorted out.

  • Cannot create datastore: Current license or ESXi version prohibits execution of the requested operation.

    Using vicfg-nas I am unable to create/remove an NFS datastore on my home lab running ESXi 6. Is this not supported? Listing them works fine. I can create/delete NFS datastores through the vCenter client just fine, but it's a little annoying.

    I assume the host is licensed with the free Hypervisor edition! One of the few restrictions of the free Hypervisor version is API access. In order to have "write access" (i.e. change things rather than just list them) you must run the host in evaluation mode, or license it (Essentials or better).

    André

  • Cannot create data source for SQL Server

    Hi people,

    I am running IIS, Windows XP SP3, SQL Express 2005, Trial Version of ColdFusion 9 (no patches).

    In the ColdFusion Administrator, when I try to create a data source for SQL Express 2005 (SQL Server Express) using the SQL Server driver, I get the following error:

    Connection verification failed for data source: AMT
    java.sql.SQLException: [Macromedia] [SQLServer JDBC Driver] the requested instance is not valid or is not running.
    The root cause was that: java.sql.SQLException: [Macromedia] [SQLServer JDBC Driver] the requested instance is not valid or is not running.

    The "instance", which I interpret as meaning the database instance, is "machinename\SQLExpress" (it is a so-called "named instance").  That's what I enter in the "Server" field of the display (data & Services-> sources-> Microsoft SQL Server).

    However, I am able to create an ODBC data source name in Windows using the driver Microsoft SQL Server Native Client Version 09.00.3042 and the same instance, "machinename\SQLExpress".

    Does anyone have any ideas as to what is wrong?

    Try using the TCP/IP host name (or IP address) and port instead of the Windows-style connectivity info. You may also need to enable TCP/IP as a network protocol on the DB server (I think it is disabled by default on SQL Express editions).
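    As a rough check (the server name, port, and credentials below are placeholders; 1433 is only the default for a default instance, and a named SQLExpress instance may sit on a dynamic port that you can pin in SQL Server Configuration Manager), you can first confirm the server is listening on TCP:

    telnet machinename 1433
    sqlcmd -S machinename,1433 -U someuser -P somepassword -Q "SELECT @@VERSION"

    If that connects, enter the same host and port in the ColdFusion data source form instead of machinename\SQLExpress (sqlcmd requires the SQL client tools to be installed).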

    --

    Adam

  • Cannot retrieve data with the default binary XML storage in 11g

    Oracle 11.2

    All,

    As someone mentioned in another post, I gathered that in 11g I should use binary XMLType storage rather than CLOB because it is more efficient. When I create the XMLType column as binary I can't retrieve the value of the column, but when I use CLOB I am able to extract the data.

    -- Create table with XMLTYPE column. Since this is 11.2, storage for the column is binary XML by default
    CREATE TABLE HR.XMLTABLESTORE (key_id VARCHAR2(10) PRIMARY KEY, xmlloaddate DATE, xml_column XMLTYPE);

    -- Insert the XML into the XML column
    INSERT INTO HR.XMLTABLESTORE VALUES (HR.XMLSEQUENCE.NEXTVAL, SYSDATE, XMLType(bfilename('XMLDIRX', 'PROD_20110725_211550427_220b.xml'),
    nls_charset_id('AL32UTF8'))); COMMIT;

    -- When I do a SELECT I see the full XML in the xml_column column
    SELECT * FROM HR.XMLTABLESTORE;

    -- When I run the following query I get the following:
    SELECT extract(xml_column, '//MapItem/@ProductNum') ProductNum FROM HR.XMLTABLESTORE;

    ProductNum
    -------------
    XMLType

    -- When I run the following query without the @, I get the following:
    SELECT extract(xml_column, '//MapItem/ProductNum') ProductNum FROM HR.XMLTABLESTORE;

    ProductNum
    -------------
    Null value

    When I run the same query, SELECT extract(xml_column, '//MapItem/@ProductNum') ProductNum FROM HR.XMLTABLESTORE, against the table created with CLOB storage, I get the expected value from the XML file.

    How can I get the query to retrieve the data when the storage is binary XML?

    I appreciate any help in advance.

    Thank you
    Shawn

    Edited by: 886184 on Sep 20, 2011 15:42

    Probably a problem with your client tool.

    It works for me:

    SQL*Plus: Release 11.2.0.2.0 Beta on Wed Sep 21 19:39:55 2011
    
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    
    Connected to:
    Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta
    
    SQL> CREATE TABLE xmltablestore (
      2    key_id VARCHAR2(10) PRIMARY KEY
      3  , xmlloaddate DATE
      4  , xml_column XMLTYPE
      5  );
    
    Table created.
    
    SQL> INSERT INTO xmltablestore
      2  VALUES ('1', sysdate, XMLType(bfilename('TEST_DIR', 'PROD_20110725_211550427_220b.xml'),nls_charset_id('AL32UTF8')))
      3  ;
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> SELECT extract(xml_column, '//MapItem/@ProductNum') ProductNum
      2  FROM xmltablestore
      3  ;
    
    PRODUCTNUM
    --------------------------------------------------------------------------------
    63481062975
    
    SQL> SELECT extractValue(xml_column, '//MapItem/@ProductNum') ProductNum
      2  FROM xmltablestore
      3  ;
    
    PRODUCTNUM
    --------------------------------------------------------------------------------
    63481062975
    
    SQL> SELECT xmlcast(
      2          xmlquery('/Entries/Category/MapItem/@ProductNum'
      3           passing t.xml_column
      4           returning content
      5          )
      6          as number
      7         ) ProductNum
      8  FROM xmltablestore t
      9  ;
    
    PRODUCTNUM
    ----------
    6,3481E+10
    
    SQL> SELECT xmlcast(
      2          xmlquery('/Entries/Category/MapItem/@ProductNum'
      3           passing t.xml_column
      4           returning content
      5          )
      6          as varchar2(30)
      7         ) ProductNum
      8  FROM xmltablestore t
      9  ;
    
    PRODUCTNUM
    ------------------------------
    63481062975
    

    BTW, extract and extractvalue functions are deprecated in version 11.2.
    Oracle now recommends using XMLCast/XMLQuery.

  • How to create several datastores on local SCSI storage

    Hi all

    If you have an ESX 3.5 host with local SCSI storage, how do you create multiple datastores? I would like to create 2 datastores: one for my virtual machines and one for my ISOs. I tried creating 2 VMFS3 partitions at ESX installation time, but when I remove the storage to rename it, it only lets me create 1 datastore. Is this possible?

    Is there a downside to simply creating a folder called ISO in the one datastore and dumping my ISO files into it?

    I would also like to point out that I only have 1 RAID controller.

    You would need to manually partition the free space from the Service Console using fdisk.

    If it's only for an ISO store, I wouldn't bother.

    Just create a folder like you said yourself. There's no downside, except that you have to watch out that you don't fill your datastore up with ISOs and other stuff.
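    A minimal sketch of that from the Service Console (the datastore name and paths are placeholders):

    # Create an ISO folder on the existing VMFS datastore
    mkdir /vmfs/volumes/datastore1/ISO
    # Copy ISO images into it, e.g. with scp from another machine:
    # scp ubuntu.iso root@esx-host:/vmfs/volumes/datastore1/ISO/

    The VI Client can then mount those ISOs to VMs straight from the datastore browser.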

  • Cannot create .cod file with Eclipse plugin

    Hello

    I don't know how to describe it. I installed the Eclipse plugin with JDE component pack 4.5.0_4.5.0.16 and then updated to the JDE 4.7.0_4.7.0.46 plugin (because I need the latest JDE packages so I can use ApplicationIndicator).

    But now when I try to compile or create a .cod file (Eclipse: Project / Build Active BlackBerry Configuration) I get this error message:

    Executing rapc for the project BirthdayReminder at Mon Jun 08 09:28:07 CEST 2009.
    C:\Programme\eclipse\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\bin\launcher.exe C:\Programme\eclipse\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\bin\rapc.exe  -quiet import=..\..\..\..\Programme\eclipse\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\lib\net_rim_api.jar;..\..\..\..\Programme\eclipse\plugins\net.rim.eide.componentpack4.7.0_4.7.0.46\components\lib\net_rim_api.jar codename=..\BirthdayReminder\BirthdayReminder ..\BirthdayReminder\BirthdayReminder.rapc -sourceroot="C:\Dokumente und Einstellungen\xxx\workspace\BirthdayReminder\src" "C:\Dokumente und Einstellungen\xxx\workspace\BirthdayReminder\bin"
    ..\..\..\..\Programme\eclipse\plugins\net.rim.eide.componentpack4.7.0_4.7.0.46\components\lib\net_rim_api.jar(net_rim_bb_addressbook.cod): Error!: Duplicate definition for 'net.rim.device.apps.api.addressbook.AddToAddressBookContext' found in: ..\..\..\..\Programme\eclipse\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\lib\net_rim_api.jar(net_rim_bb_addressbook.cod)
    rapc failed for the project BirthdayReminder
    

    I don't know how to fix this... Why is there a duplicate definition in the RIM packages?

    Sorry - I solved this problem myself.

    After updating Eclipse to 4.7.0.46, you need to uninstall the 4.5.0.16 component pack. After restarting Eclipse, the new 4.7.0.46 package will be selected for compilation and no error will be thrown.

  • Cannot establish VPN with partner

    Our Organization is trying to upgrade the equipment we use to establish a VPN connection with our partner.

    The old hardware is a Cisco 2811 router (OldCore) and the new one is a Cisco 4431 (NewCore).

    The partner uses a SonicWall device at the other end of the VPN. The VPN between the OldCore and the SonicWall device works fine. However, when we try to replace the OldCore with the NewCore, the VPN connection does not come up. I checked the settings and they are all the same on OldCore and NewCore. The partner says they have not configured anything on their end that could cause this problem.

    result of "sh cry isa his"the NewCore wrote.

    IPv4 Crypto ISAKMP SA
    dst             src             state           conn-id status
    XX.xx.xx.xx     yy.yy.yy.yy     MM_NO_STATE     0       ACTIVE
    XX.xx.xx.xx     yy.yy.yy.yy     MM_NO_STATE     0       ACTIVE (deleted)

    When I disconnect NewCore and replace it with the OldCore, the vpn connection comes back up without any problem.

    A strange thing is that I can ping the public IP of the partner's device from the OldCore (public interface) but not from the NewCore (public interface). However, I can ping the public IP address of the partner's device from the inside interface of the NewCore.

    Has anyone had this problem? How did you solve it?

    Hello

    You might want to check the NAT configuration on the new core router. You can also run the following debugs while the new core router is trying to bring the tunnel up, by sending interesting traffic into the VPN.

    debug crypto condition peer ipv4 <peer address>
    debug crypto isakmp
    debug crypto ipsec

    Once the debug output has been collected, type "undebug all".

    HTH

    Averroès.

  • ORA-19504: failed to create file '+DATA'

    Hello everyone.

    This is the scenario:

    We have a test server that is used to restore daily backups of the production database. We restore the database with the same SID as the production one.

    For specific reasons, we need to create a second database (with a different SID) on this server from an older backup of the production one. To accomplish that, I'm trying to follow one of the "DUPLICATE without connection to the target" tutorials on the web.

    I followed the simple guide that I found, which is:

    (1) copy the cold backup files to /somedirectory

    (2) start the OLDER database in nomount

    (3) connect RMAN to OLDER as the auxiliary

    (4) run: DUPLICATE DATABASE ProdDB TO OlderDB BACKUP LOCATION '/somedirectory' NOFILENAMECHECK;

    Here's the output (I deleted some lines because of its size):

    ----------------------

    RMAN> DUPLICATE DATABASE ProdDB TO OlderDB
    2> BACKUP LOCATION '/home/oracle/OlderBackupFiles'
    3> NOFILENAMECHECK;

    Starting Duplicate Db at 01-OCT-14

    contents of Memory Script:
    {
       sql clone "alter system set control_files =
      ''+DATA/OlderDB/controlfile/current.829.859839217'' comment=
     ''Set by RMAN'' scope=spfile";
       sql clone "alter system set db_name =
     ''ProdDB'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       sql clone "alter system set db_unique_name =
     ''OlderDB'' comment=
     ''Modified by RMAN duplicate'' scope=spfile";
       shutdown clone immediate;
       startup clone force nomount
       restore clone primary controlfile from '/home/oracle/OlderDB/controlfile_ProdDB_20141001_4159.bkp';
       alter clone database mount;
    }
    executing Memory Script

    sql statement: alter system set control_files = ''+DATA/OlderDB/controlfile/current.829.859839217'' comment= ''Set by RMAN'' scope=spfile

    sql statement: alter system set db_name = ''ProdDB'' comment= ''Modified by RMAN duplicate'' scope=spfile

    sql statement: alter system set db_unique_name = ''OlderDB'' comment= ''Modified by RMAN duplicate'' scope=spfile

    (...)

    Starting restore at 01-OCT-14

    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=191 device type=DISK

    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
    output file name=+DATA/OlderDB/controlfile/current.829.859839217
    Finished restore at 01-OCT-14

    database mounted
    released channel: ORA_AUX_DISK_1
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=191 device type=DISK

    contents of Memory Script:
    {
       set until scn 274262921;
       set newname for datafile 1 to new;
       set newname for datafile 2 to new;
       set newname for datafile 3 to new;
       set newname for datafile 4 to new;
       set newname for datafile 5 to new;
       set newname for datafile 6 to new;
       set newname for datafile 7 to new;
       restore
       clone database;
    }

    (...)

    Starting restore at 01-OCT-14
    using channel ORA_AUX_DISK_1

    channel ORA_AUX_DISK_1: starting datafile backup set restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00002 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
    channel ORA_AUX_DISK_1: restoring datafile 00007 to +DATA
    channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/OlderDB/database_ProdDB_20141001_4157.bkp
    channel ORA_AUX_DISK_1: ORA-19870: error while restoring backup piece /home/oracle/OlderDB/database_ProdDB_20141001_4157.bkp
    ORA-19504: failed to create file "+DATA"
    ORA-17502: ksfdcre:4 Failed to create file +DATA
    ORA-15041: diskgroup "DATA" space exhausted

    failover to previous backup

    Oracle instance started

    (...)

    contents of Memory Script:
    {
       sql clone "alter system set db_name =
     ''OlderDB'' comment=
     ''Reset to original value by RMAN'' scope=spfile";
       sql clone "alter system reset db_unique_name scope=spfile";
       shutdown clone immediate;
    }
    executing Memory Script

    errors in Memory Script

    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06136: ORACLE error from auxiliary database: ORA-01507: database not mounted
    ORA-06512: at "SYS.X$DBMS_RCVMAN", line 13466
    ORA-06512: at line 1
    RMAN-05556: not all datafiles have backups that can be recovered to SCN 274262921
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 10/01/2014 15:39:11
    RMAN-05501: aborting duplication of target database

    Recovery Manager complete.

    ------------------------------------------------------------------

    The first error in the stack was ORA-19504, raised while trying to restore the database backup.

    The first thing I glanced at was ASM space usage, but it has enough available space.

    The second was a permissions problem, but that doesn't seem to be the case, because RMAN can correctly write the controlfile to ASM.

    Does anyone have advice on what I should look for?

    Thanks in advance, and sorry for my English.


    Hello.

    Thanks, but that is not the case. As I said, the DATA diskgroup has enough space; it uses only a single disk in a RAID.

    But I solved my problem... For the auxiliary database, I added the following to the spfile:

    DB_FILE_NAME_CONVERT = '+DATA/ProdDB', '+DATA/OlderDB'

    LOG_FILE_NAME_CONVERT = '+DATA/ProdDB', '+DATA/OlderDB'

    I don't know why, but with these two parameters it worked fine. Perhaps RMAN was trying to restore to the wrong place?

  • Cannot create p12 certificate with existing distribution certificate

    Hello

    I have published apps before using my employer's Apple developer account, which is straightforward enough if you follow the instructions. I have been asked to create an application for distribution through the Apple store using the client's developer account. I need to create the final application .zip and pass it to the client, who will submit the actual application themselves.

    My problem is that I can't create a p12 certificate to attach to the app in DPS App Builder. The client has three distribution certificates in their account, but had a third party create them on their behalf. To my knowledge, the best way to handle this would be to revoke the three old distribution certificates in their Apple developer account and create a new one linked to my machine, which would allow me to create all the necessary certificates. It seems that 3 distribution certificates is the limit they may have at any given time.

    My questions are:

    1. Is my solution right?

    2. If so, would revoking a distribution certificate cause problems for any of their other applications that are published or in progress?

    The only other option I could see is having the creator of one of the distribution certificates create a p12 certificate that I would attach to the app in DPS App Builder.

    Any help, advice, or pointing me in the right direction would be greatly appreciated. I've never had a publishing problem when following the directions, so this is a new issue for me.

    Kind regards

    C

    If you have access to the Apple dev account, you can create certificates on your machine, but since they have existing certificates in their account it could cause confusion, and you don't want to mess with their existing certificates.

    I'd advise asking your client to create the cert and pass it to you, then you create the app with their certificates and pass the zip back for them to download.

  • error barsigner: cannot create application data directory

    Hello, I get this error when I try to sign my application:

    error barsigner: cannot create application data directory

    I tried everything: running as an administrator, changing the directory security settings, and so on.

    I get this error with the command line and from Flash Builder.

    I have v.0.9.1 SDK

    My application is ready to be submitted, but I cannot figure out this error. I would be very disappointed to miss the March 31 deadline :-/

    Any clue? Thank you

    Really, 0.9.1? Were you able to sign your application with that version? Have you tried SDK 0.9.4?

  • Error retrieving HFM metadata: Cannot create the file

    In our HFM 11.1.2.3.503 environment, we get an error when we try to extract metadata from any HFM application. The error message states that it failed to create the file.

    We are on HFM .501, but I also tried connecting with .503 to see if that solved the problem; it did not.

    {796CEDB3-FC01-4A9C-856F-06FEDBF035F2}

    Cannot create the file.

    Error reference number: {796CEDB3-FC01-4A9C-856F-06FEDBF035F2}; User name: BI-DEMOS5$
    Num: 0x80040230; Type: 1; DTime: 16/12/2015 21:27:04; Svr: BI-DEMOS5; File: ; Line: 0; Ver: ;

    There is no indication of where it is trying to put the files, so I cannot check whether there is a setting somewhere pointing to a folder that does not exist, or something similar.

    Try reviewing the following document:

    • Hyperion Financial Management (HFM) error "unexpected error: 80040230 has occurred" trying to to Scan, load or extract meta-data (Doc ID 1282089.1)
