Performance: Bulk Insert Oracle 10g

The situation: we have a Visual Basic 6 application (I know... VB6...), an XML file with data, and an Oracle 10g server. The XML file must be imported into the database.

So far the application (via ADO) parses the XML, creates INSERT and UPDATE statements, and sends them to the DB. The statements are processed within a transaction, and the application sends each INSERT and UPDATE to the database separately.

But the performance is, as expected, a disaster... :-) Importing takes several hours...

Now my task is to increase performance, but how...

I've tried several things, but without real success, for example...

I did some tests with SQL*Loader. The insert is very fast, but I can't do an update, so I had to remove the existing data first. And because of SQL*Loader I can't run those two steps within a single transaction.

I tried to write a stored procedure that accepts an ADO Recordset as an input parameter and then creates the INSERT and UPDATE statements inside the DB to reduce network traffic, but I have not found a way to handle an ADO Recordset as an input parameter to a stored procedure.

Does anyone have an idea how I can import the XML quickly into the existing DB (and maybe replace existing records while importing...) within a transaction, without changing the structure of the DB? (Oracle packages? A C++ interface integrated into Visual Basic 6...?) Is there a way to import the XML file directly into the DB?

Thanks in advance for any idea :-))

I tried to write a stored procedure that accepts an ADO Recordset as input param..., but I have not found a way to handle an ADO Recordset as an input parameter
for a stored procedure.

Use a SYS_REFCURSOR type parameter. BULK COLLECT it into a PL/SQL collection. FORALL allows bulk-binding INSERT and UPDATE statements.
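A minimal sketch of that approach (procedure, table, and column names are placeholders, not from this thread), assuming the rows to load have been staged so that a query over them can be opened as a REF CURSOR:

    CREATE OR REPLACE PROCEDURE load_from_cursor (p_cur IN SYS_REFCURSOR) IS
        TYPE row_tab IS TABLE OF target_tab%ROWTYPE;
        l_rows row_tab;
    BEGIN
        LOOP
            -- fetch in batches so PGA memory stays bounded
            FETCH p_cur BULK COLLECT INTO l_rows LIMIT 1000;
            EXIT WHEN l_rows.COUNT = 0;
            -- one context switch per batch instead of one per row
            FORALL i IN 1 .. l_rows.COUNT
                INSERT INTO target_tab VALUES l_rows(i);
        END LOOP;
        CLOSE p_cur;
    END;
    /

An UPDATE can be bulk-bound with FORALL in the same way, and the insert-or-update case can also be handled with a single set-based MERGE once the data is staged.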

Cheers, APC

blog: http://radiofreetooting.blogspot.com

Tags: Database

Similar Questions

  • Performance issue Bulk Insert PL/SQL table type

    Hi all

    I am implementing a batch job to fill a table with a large number of records (>3,000,000). To reduce the execution time, I use PL/SQL tables to temporarily store the data that must be written to the destination table. Once all the records are accumulated in the PL/SQL table, I use a FORALL statement to bulk insert them into the physical table.

    Currently I have two approaches to implement the process described above (please see the code segments below). I need to choose the better-performing of the two. I would really appreciate expert comments about the runtime of the two approaches.

    (I don't see much difference in elapsed time in my test environment, which has a limited data set. The process will involve building a large, complex set of product structures once deployed in the production environment.)


    Approach I:
    DECLARE
        TYPE test_type IS TABLE OF test_tab%ROWTYPE INDEX BY BINARY_INTEGER;
        test_type_ test_type;
        ins_idx_   NUMBER;
    BEGIN
        ins_idx_ := 1;
        LOOP
            test_type_(ins_idx_).column1 := value1;
            test_type_(ins_idx_).column2 := value2;
            test_type_(ins_idx_).column3 := value3;
            ins_idx_ := ins_idx_ + 1;
        END LOOP;

        FORALL i_ IN 1 .. test_type_.COUNT
            INSERT INTO test_tab VALUES test_type_(i_);
    END;
    /


    Approach II:
    DECLARE
        TYPE column1 IS TABLE OF test_tab.column1%TYPE INDEX BY BINARY_INTEGER;
        TYPE column2 IS TABLE OF test_tab.column2%TYPE INDEX BY BINARY_INTEGER;
        TYPE column3 IS TABLE OF test_tab.column3%TYPE INDEX BY BINARY_INTEGER;
        column1_ column1;
        column2_ column2;
        column3_ column3;
        ins_idx_ NUMBER;
    BEGIN
        ins_idx_ := 1;
        LOOP
            column1_(ins_idx_) := value1;
            column2_(ins_idx_) := value2;
            column3_(ins_idx_) := value3;
            ins_idx_ := ins_idx_ + 1;
        END LOOP;

        FORALL idx_ IN 1 .. column1_.COUNT
            INSERT INTO n_part_cost_bucket_tab (
                column1,
                column2,
                column3)
            VALUES (
                column1_(idx_),
                column2_(idx_),
                column3_(idx_));
    END;
    /

    Best regards
    Lorenzo

    Published by: nipuna86 on January 3, 2013 22:23

    nipuna86 wrote:

    I am implementing a batch job to fill a table with a large number of records (>3,000,000). To reduce the execution time, I use PL/SQL tables to temporarily store the data that must be written to the destination table. Once all the records are accumulated in the PL/SQL table, I use a FORALL statement to bulk insert them into the physical table.

    Performance is more than just reducing the execution time.

    Just as stopping a car involves more than bringing it to a halt in the fastest possible time.

    If it were (simply stopping a car), then a reinforced-concrete brick wall would be the perfect way to stop any type of motor vehicle at any speed.

    The only problem (well, more than one actually) is that stopping a vehicle that way is bad for the car, the engine, the driver, the passengers, and anything else inside.

    And pushing 3 million records into a PL/SQL "table" (by the way, that is the WRONG terminology - there is no PL/SQL table structure) in order to run a SQL INSERT cursor 3 million times, to reduce the execution time, is no different from using a brick wall to stop a car.

    Both approaches are flawed. Both place an unreasonable demand on PGA memory. Both are still row-by-row (a.k.a. slow-by-slow) processing.
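    A minimal sketch of the set-based alternative this is pointing toward (the source table name is a placeholder; the optional APPEND hint requests a direct-path insert and can be dropped):

    INSERT /*+ APPEND */ INTO test_tab (column1, column2, column3)
    SELECT column1, column2, column3
    FROM   source_tab;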

  • SELECT from Bulk INSERT - Performance Clarification

    I have 2 tables - emp_new & emp_old. I need to load all the data from emp_old into emp_new. There is a transaction_id column in emp_new whose value should be fetched from a main_transaction table that includes a region_code column. Something like -

    TRANSACTION_ID   REGION_CODE
    --------------   -----------
    100              US
    101              AMER
    102              APAC

    My bulk insert query looks like this-

    INSERT INTO emp_new
    (col1,
     col2,
     ...,
     ...,
     ...,
     transaction_id)
    SELECT
     col1,
     col2,
     ...,
     ...,
     ...,
     (SELECT transaction_id FROM main_transaction WHERE region_code = 'US')
    FROM emp_old

    There are millions of rows that need to be loaded this way. I would like to know whether the subselect fetching the transaction_id would be re-executed for each row, which would be very expensive, and I'm looking for a way to avoid that. The main_transaction table is pre-loaded and its values will not change. Is there a way (via some HINT) to indicate that the subselect should not be re-run for each row?
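    A hedged sketch of a join-based rewrite that avoids the per-row subselect entirely (column names other than transaction_id are placeholders):

    INSERT INTO emp_new (col1, col2, transaction_id)
    SELECT e.col1, e.col2, m.transaction_id
    FROM   emp_old e
           CROSS JOIN (SELECT transaction_id
                       FROM   main_transaction
                       WHERE  region_code = 'US') m;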

    On a different note, the execution plan of the whole INSERT above looks like -

    -------------------------------------------------------------------------------
    | Id | Operation             | Name         | Rows | Bytes | Cost (%CPU) |
    -------------------------------------------------------------------------------
    |  0 | INSERT STATEMENT      |              |  11M |   54M |    6124 (4) |
    |  1 |  INDEX FAST FULL SCAN | EMPO_IE2_IDX |  11M |   54M |    6124 (4) |
    -------------------------------------------------------------------------------
    EMPO_IE2_IDX -> index on emp_old

    I'm surprised to see that the main_transaction table does not appear in the execution plan at all. Does this mean that the subselect is not executed for each row? Even so, at least for the first read, I would expect the table to appear in the plan.

    Can someone help me understand this?

    Why does the explain plan include no information about the main_transaction table?
    Can someone please clarify?

    As I said originally (and repeated in a later post) - probably because your PLAN_TABLE is an older version.
    More recent versions of PLAN_TABLE are required to correctly report execution plans for the "most recent" features.
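    If that is the case, a minimal sketch of the fix (assuming SQL*Plus and the script shipped under ORACLE_HOME): recreate PLAN_TABLE from the current release's script, then re-run EXPLAIN PLAN and display it with DBMS_XPLAN:

    DROP TABLE plan_table;
    @?/rdbms/admin/utlxplan.sql     -- "?" expands to ORACLE_HOME in SQL*Plus
    EXPLAIN PLAN FOR <the INSERT statement above>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);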

  • Bulk Insert using a database link in Oracle 10g (10.2)

    Bulk insert over a database link gives an error like "cannot insert NULL into" the primary key column. The code is as follows:

    FORALL i IN <type name>.first .. <type name>.last
        INSERT INTO <table_name>@databaselink VALUES ...;

    If I use the code above, it throws the "cannot insert NULL" error. But if I use the same collection with a simple insert, it works.
    The code is as follows:

    FOR i IN <type name>.first .. <type name>.last
    LOOP
        INSERT INTO <table_name>@databaselink VALUES ...;
    END LOOP;

    It worked. ?!!!!!

    I was wondering what the solution would be and where the problem lies. Please throw some light on this.
    Thanks in advance

    Bulk operations are used to minimize the context switching between the PL/SQL and SQL engines.
    For remote database operations there is no such context-switching benefit: each DB link call has to go over the connection to the remote database.
    If you can run the procedure on the remote DB itself, you can still use the BULK COLLECT features there.
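    Where a single set-based statement is possible, pushing one INSERT ... SELECT across the link avoids the per-row remote binding altogether; a minimal sketch with placeholder table names:

    INSERT INTO target_tab@databaselink (col1, col2)
    SELECT col1, col2
    FROM   local_source_tab;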

    Thank you
    Fahad Aziz Khan

  • Bulk insert from an external table

    Hi, I get errors ORA-29913 and ORA-01410 trying to do a bulk insert from an external table:

    INSERT INTO CORE_TB_LOG
    (SELECT 'MODEL', 'ARCH_BH_MODEL', ROWID, 'MODEL-D-000000001', -45, 'A', SYSDATE, 'R'
     FROM ARCH_BH_MODEL1
     WHERE LENGTH(MOD_XCODIGO) > 10)

    INSERT
    *
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-01410: invalid ROWID

    ARCH_BH_MODEL1 is the external table.


    What's wrong?


    Thank you.

    Hello

    There is no ROWID in external tables.

    It makes sense: ROWID identifies where a row is stored in the database; it encodes the data file and the block number within that file.

    External tables are not stored in the database.  They exist independently of any data file in the database.  The concept of an Oracle block does not apply to them.

    Why would you copy the ROWID, even if you could?

    Apart from ROWID and SYSDATE, you are selecting only literals.  Don't you want to select the real data from the external table?

    What is the big picture here?  Post a small example of data (a CREATE TABLE statement and a small data file for the external table) and the desired results from that sample data (in other words, what core_tb_log should contain after the INSERT completes).  Explain how you get those results from the data provided.

    Check out the Forum FAQ: Re: 2. How can I ask a question on the forums?

  • Performance when INSERTING

    Hi all

    I have a question about the performance of an insert.
    1. INSERT INTO tbl1 SELECT * FROM tbl2;
    
    2. Bulk insert.
    DECLARE
        CURSOR c IS SELECT * FROM tbl1;
        TYPE tbl1_tab IS TABLE OF tbl1%ROWTYPE;
        plsql_tbl tbl1_tab;
    BEGIN
        OPEN c;
        LOOP
            FETCH c BULK COLLECT INTO plsql_tbl LIMIT 1000;
            FORALL i IN 1 .. plsql_tbl.COUNT
                INSERT INTO tbl2 VALUES plsql_tbl(i);
            EXIT WHEN c%NOTFOUND;
        END LOOP;
        CLOSE c;
    END;
    Which of the 2 options above is faster and more optimal?
    In option 2, we open the cursor, fetch the cursor data into a PL/SQL table, and then insert the PL/SQL data into the base table. Doesn't that take longer than option 1, which is just a single INSERT INTO ... SELECT *?

    Thank you
    Rod.

    The SQL INSERT cursor is executed once per 1000 rows. The PL/SQL data is sent via a single context switch.

    Replace the FORALL with a FOR loop and the SQL INSERT is executed 1000 times. The PL/SQL data is sent via 1000 context switches.

    This is the fundamental difference. Using FORALL does not make the INSERT cursor run faster or do any less work or I/O - you are simply creating a larger pipe for transferring data to the SQL engine.

    This lets the SQL engine do the looping on your behalf. It receives the collection/array containing the 1000 items and can then execute the cursor in a loop over that shipment of PL/SQL data.
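    For contrast, the row-by-row variant described above (reusing plsql_tbl and tbl2 from the question's code) would look like this:

    FOR i IN 1 .. plsql_tbl.COUNT LOOP
        -- one SQL execution and one context switch per element
        INSERT INTO tbl2 VALUES plsql_tbl(i);
    END LOOP;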

  • How to use the Oracle 10g Consistent (LOGMINER) JKM without DBA privileges

    Hi gurus,

    Implementing CDC, we are evaluating the JKM Oracle 10g Consistent (LOGMINER).
    The only restriction is that we cannot have the DBA privilege granted to the ODI user.

    So far, I did the following

    Used the JKM Oracle 10g Consistent (LOGMINER) with

    ASYNCHRONOUS_MODE = NO
    AUTO_CONFIG = NO
    JOURNAL_TABLE_OPTIONS: USERS


    1. Source schema: test
    2. There are 2 tables (DEPT_CDC_SRC and EMP_CDC_SRC) in the TEST schema; both were added to the CDC process.
    3. Work schema: cdc

    4. From the SYS schema, the following grants were given to the ODI work schema user:

    GRANT CONNECT, RESOURCE TO cdc;
    GRANT SELECT ANY TABLE TO cdc;
    GRANT EXECUTE ON dbms_stats TO cdc;
    GRANT EXECUTE ON dbms_cdc_publish TO cdc;
    GRANT EXECUTE ON dbms_cdc_utility TO cdc;
    GRANT EXECUTE ON dbms_cdc_subscribe TO cdc;
    GRANT EXECUTE ON dbms_streams_cdc_adm TO cdc;
    GRANT EXECUTE ON dbms_cdc_ipublish TO cdc;
    GRANT EXECUTE ON dbms_cdc_sys_ipublish TO cdc;
    GRANT EXECUTE ON dbms_cdc_isubscribe TO cdc;
    GRANT EXECUTE ON dbms_cdc_dputil TO cdc;
    GRANT EXECUTE ON dbms_cdc_expdp TO cdc;
    GRANT EXECUTE ON dbms_cdc_expvdp TO cdc;
    GRANT EXECUTE ON dbms_cdc_impdp TO cdc;

    5. As the SYSDBA user, did the following:
    ====================================
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    ALTER SYSTEM ARCHIVE LOG START;
    ALTER SYSTEM SWITCH LOGFILE;
    ===================================

    6. Start journal... it fails to create the journal,
    with the following error:
    ORA-06512: at "SYS.DBMS_CDC_PUBLISH"

    7. As a test, granted the DBA role to CDC and started the journal; it succeeded.

    8. Revoked DBA from CDC and restarted the journal. It again fails to start the journal,
    with the same error, i.e.
    ORA-06512: at "SYS.DBMS_CDC_PUBLISH"

    How can I solve this problem without granting any DBA privilege to the ODI work schema?

    Kind regards
    Frédéric

    Published by: 804400 on November 24, 2010 02:39

    Hello

    http://www.DBA-Oracle.com/t_insifficient_privileges_create_view_grant.htm said

    You must have object privileges on the tables the view is built from.

    Privileges required to create views

    To create a view, you must meet the following requirements:

    You must have been granted the CREATE VIEW (to create views in your schema) or CREATE ANY VIEW (to create a view of another user schema) system privilege, either explicitly, or by a role.

    You must have been explicitly granted the SELECT, INSERT, UPDATE, or DELETE object privileges on all base objects underlying the view, or the SELECT ANY TABLE, INSERT ANY TABLE, UPDATE ANY TABLE, or DELETE ANY TABLE system privileges. These privileges may not be obtained through roles.
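    A minimal sketch of such direct (non-role) grants, reusing the schema and table names mentioned earlier in this thread:

    GRANT CREATE VIEW TO cdc;
    GRANT SELECT ON test.DEPT_CDC_SRC TO cdc;
    GRANT SELECT ON test.EMP_CDC_SRC TO cdc;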

    I hope this helps.

    Thank you
    Fati

  • Bulk insert table

    Hello
    Version 10g

    I need to pass an array of 100 records to a PL/SQL procedure.
    The array is built of columns.
    Then I need to bulk insert it into the table.

    Question: Is it possible to receive an array in a stored procedure?

    Thank you

    Yes you can get it. Check the code below.

    SQL> create type trec is object (a number, b number);
      2  /
    
    Type created.
    
    SQL> create type tlist is table of trec;
      2  /
    
    Type created.
    
    SQL> create table coltab(col1 number, col2 number, entry_date date);
    
    Table created.
    
    SQL> ed
    Wrote file afiedt.buf
    
      1  create or replace procedure arraytest (p tlist) is
      2  begin
      3  for i in p.first..p.last
      4  loop
      5  insert into coltab values
      6  (p(i).a,p(i).b,sysdate);
      7  end loop;
      8* end;
    SQL> /
    
    Procedure created.
    
    -----------Testing--------------------
    SQL> declare
      2  l tlist := tlist(trec(1,2),trec(4,3));
      3  begin
      4  arraytest(l);
      5  end;
      6  /
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL>
    SQL> select * from coltab;
    
          COL1       COL2 ENTRY_DAT
    ---------- ---------- ---------
             1          2 04-AUG-10
             4          3 04-AUG-10
    

    If you do not want to use the row-by-row loop, try this. It works in a similar way.

     create or replace procedure arraytest (p tlist) is
     begin
     insert into coltab
     select t1.*, sysdate from table(p) t1;
    end;
    /
    
  • Oracle 10g RAC (TAF)

    Hello,
    I'm new to RAC environments.
    How do I implement TAF in Oracle 10g RAC?
    And how do I configure the services (BASIC & PRECONNECT), and how do I know which to use?
    Thank you
    KK

    Published by: user603328 on June 20, 2010 12:02

    Before you configure TAF, you should know about TAF

    If an instance fails, TAF allows applications to automatically reconnect to a different instance. The new connection will be identical to the original, but any uncommitted transactions in flight at the time of the failure will be rolled back.

    TAF can be configured to perform two types of failover: SESSION and SELECT. With SESSION failover, any uncommitted transaction is rolled back and the session is reconnected to another instance. With SELECT failover, uncommitted transactions are rolled back, the session is reconnected to another instance, and any SELECT statement that was running at the time of the failure is re-run by the new session using the same SCN; rows already fetched are discarded up to the point where the original query failed, and the SELECT then continues returning the remaining rows to the client.

    A TAF connection must be configured in the Oracle Net configuration file tnsnames.ora. In Oracle 10.1 and higher, the DBCA Services Management page can generate appropriate TNS entries for TAF.

    To configure TAF, the TNS entry should include a FAILOVER_MODE clause that defines the failover properties as follows:

    TYPE --> defines the failover behavior. Values: SESSION, SELECT, or NONE.

    METHOD --> determines when the failover connection is made. Values: BASIC or PRECONNECT.

    RETRIES --> the number of times a connection is attempted before returning an error.

    DELAY --> the time in seconds between each connection attempt.
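    For reference, a minimal sketch of a TAF-enabled tnsnames.ora entry (host names and the service name are placeholders):

    ORCL_TAF =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
          (LOAD_BALANCE = yes)
        )
        (CONNECT_DATA =
          (SERVICE_NAME = orcl)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 30)
            (DELAY = 5)
          )
        )
      )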

    Read the Oracle documentation:

    http://www.Oracle.com/pls/db102/search?word=configure+TAF&PartNo=

  • Oracle 10g DBA questions

    I am preparing for the Oracle 10g certification. I have several questions that I cannot answer even though I did a lot of research. Please help. Thank you. S.

    1. Which two database users can log into EM to perform batch loading using SQL*Loader?

    2. Which three configurations would you use for automatic management of backup and recovery operations of an Oracle database?
    store data files in the flash recovery area;
    store the archive logs in the flash recovery area;
    back up the control files; run the database in ARCHIVELOG mode;
    configure automatic undo management;
    use the flash recovery area for backup files with Automatic Storage Management

    3. Which three user names, by default, have access to Oracle Enterprise Manager Database Control?
    system, sysman, dbsnmp, sys?

    4. When you create a database using DBCA, why is the block size option not enabled?

    5. Can a table be flashed back if it resides in a locally managed tablespace?

    You are missing my point. There is no need to ask "anyone" on any forum for the answers. You should work them out yourself.

    Think about this one question only. Why the confusion? Florian already gave an answer: you must choose the custom template to get the block size option. So the question is why the block size is unmodifiable, right?

    1. You have chosen file system storage.

    Doesn't make ANY sense!

    2. You did not use a custom database template to create the DB.

    3. You haven't chosen Grid Control.
    It has nothing to do with the block size.

    4. The data block size is set to the maximum block size supported by the operating system.

    Nope, Oracle assigns the default value defined in the template.

    5. The block size can be increased when DBCA is invoked from the Oracle installer.
    Nonsense!

    I left option 2 alone - doesn't that one make sense now?

    Once again, it is not worth worrying about dumped questions, and especially not these. Think about and prepare the concepts; that is where you need to put the effort.

    HTH
    Aman...

  • Oracle 10g RAC - private interconnect on a non-routable private VLAN

    In our data center there is an existing Oracle 10g RAC configured with a private interconnect VLAN, managed by a different DBA group.

    We are creating a new, separate Oracle 10g RAC environment to support our application.

    When we discussed setting up a private VLAN for our RAC interconnect with our data center people, they suggested using the same private VLAN already used by the existing Oracle RAC configurations. In that case the interconnect IPs would be on the same subnet as the other Oracle RAC configurations.

    For example, if
    RAC1 with 2 nodes uses 192.168.1.1 and 192.168.1.2 in VLAN_1 for the interconnect, they want us to use the same VLAN_1 with interconnect IPs 192.168.1.3 and 192.168.1.4 for our 2-node RAC.

    Is sharing the same subnet on the same private VLAN for the interconnects of different RAC configurations supported?
    Will it cause a drop in performance? It means the interconnect IPs of one RAC configuration are pingable from the other RAC configuration.

    Has anyone run into such a setup?

    Could not find any info on it on Metalink.

    Thank you

    Yes.
    It is practically very doable... as you would have only 4 machines in the IP subnet... and that is far less than the public subnet, which we should keep off the interconnect anyway.

  • Hello Sir: I am using Oracle Forms Builder 10g and I want to search between two dates using BETWEEN. How is this possible?

    Hello Sir: I am using Oracle Forms Builder 10g and I want to search between two dates using BETWEEN. How is this possible?

    I guess your text fields are of the DATE data type.
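    If so, a minimal sketch of the kind of query the form would issue (block, item, and table names are placeholders):

    SELECT *
    FROM   emp
    WHERE  hire_date BETWEEN :blk.date_from AND :blk.date_to;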

  • Oracle 10g to MSSQL 2005 database link

    Hello

    We are configuring a database link from Oracle 10g to MSSQL 2005

    using Heterogeneous Connectivity. We get the following error when testing the database link:

    ORA-28500: connection between ORACLE and a non-Oracle system has sent this message:

    ORA-28541: error in the HS on line 1 init file.

    ORA-02063: preceding 2 lines from <SID>

    28500 00000 - "connection between ORACLE and a non-Oracle system has sent this message:

    * Cause: The cause is explained in the forwarded message.

    * Action: See the non-Oracle system's documentation for the forwarded

    message.

    Error on line: column 4:19

    Can someone guide me to the correct configuration of Heterogeneous Services and explain why the error arises?

    Thank you

    Basu,

    The DG4ODBC gateway can be used with a 10.2.0.5 database. It is certified, and no additional license is required for the database-to-ODBC gateway - the license is included in the database license (even when the Oracle database is still version 10.2). You can also install DG4ODBC on a different machine than the Oracle database; there is no need to have DG4ODBC installed on the same computer as the Oracle database.

    - When you install DG4ODBC on the same machine as your Oracle 10.2 database, install it into its own, separate Oracle Home - otherwise you will corrupt your existing database installation.
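    As an aside, ORA-28541 usually points at a problem in the gateway init file (init<SID>.ora under ORACLE_HOME/hs/admin); a minimal sketch of such a file, with a placeholder DSN name:

    # HS init parameters
    HS_FDS_CONNECT_INFO = mssql_dsn
    HS_FDS_TRACE_LEVEL = off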

    -Klaus

    Post edited by: kgronau

  • Is there a way to fill the pattern recursively/dynamically in Oracle 10g

    Hello

    I want to fill in the following pattern and then use the result as a virtual table in Oracle 10g.

    SELECT 2 FROM DUAL
    UNION ALL
    SELECT 4 FROM DUAL
    UNION ALL
    SELECT 8 FROM DUAL
    .
    .
    .
    .
    UNION ALL
    SELECT POWER(2, 61) FROM DUAL;

    Is there a way I can use recursion or a dynamic query with the WITH clause and avoid writing all these statements?

    Thank you

    YG

    Hi YG,

    In 10g, you can use a MODEL clause:

    SELECT nbr
    FROM   dual
    MODEL
        DIMENSION BY (0 AS d)
        MEASURES     (0 AS nbr)
        RULES ITERATE (61)
        ( nbr[ITERATION_NUMBER] = POWER(2, ITERATION_NUMBER + 1) )
    ORDER BY nbr;

    From 11gR2 onward, you can use a recursive WITH clause:

    WITH v (n, lvl) AS (
        SELECT 2 n, 1 lvl FROM dual
        UNION ALL
        SELECT 2 * n, 1 + lvl
        FROM   v
        WHERE  lvl <= 60
    )
    SELECT n FROM v;

  • Oracle 10g with 10.2.0.5 patch as a ThinApp?

    Hello

    I'm working on creating a ThinApp package for Oracle 10g. I am able to create the ThinApp without the patch and everything seems to work normally, but in our situation we must apply the 10.2.0.5 patch. When I patch the program, everything still seems to work OK, so I go through with the postscan, which also seems to work fine until it finishes.

    I don't know if this is related, but I get the following warnings after the postscan:

    Could not copy the file C:\ProgramData\Microsoft\RAC\StateData\RacMetaData.dat --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%Common AppData%\Microsoft\RAC\StateData\RacMetaData.dat
    Could not copy the file C:\ProgramData\Microsoft\RAC\StateData\RacWmiEventData.dat --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%Common AppData%\Microsoft\RAC\StateData\RacWmiEventData.dat
    Could not copy the file C:\ProgramData\Microsoft\Search\Data\Applications\Windows\MSStmp.log --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%Common AppData%\Microsoft\Search\Data\Applications\Windows\MSStmp.log
    Could not copy the file C:\Users\All Users\Microsoft\RAC\StateData\RacMetaData.dat --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%drive_C%\Users\All Users\Microsoft\RAC\StateData\RacMetaData.dat
    Could not copy the file C:\Users\All Users\Microsoft\RAC\StateData\RacWmiEventData.dat --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%drive_C%\Users\All Users\Microsoft\RAC\StateData\RacWmiEventData.dat
    Could not copy the file C:\Users\All Users\Microsoft\Search\Data\Applications\Windows\MSStmp.log --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%drive_C%\Users\All Users\Microsoft\Search\Data\Applications\Windows\MSStmp.log
    Could not copy the file C:\Users\All Users\Microsoft\Windows Defender\IMpService925A3ACA-C353-458A-AC8D-A7E5EB378092.lock --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%drive_C%\Users\All Users\Microsoft\Windows Defender\IMpService925A3ACA-C353-458A-AC8D-A7E5EB378092.lock
    Could not copy the file C:\Users\All Users\Microsoft\Windows Defender\Scans\History\CacheManager\MpSfc.bin --> C:\Program Files (x86)\VMware\VMware ThinApp\Captures\Oracle 10g + patch\%drive_C%\Users\All Users\Microsoft\Windows Defender\Scans\History\CacheManager\MpSfc.bin

    And then finally I get this at the end of the 'Build Project' message:

    C:\Windows\hh.exe: not a valid executable
    Build failed.

    I can't determine why the build fails.

    Any help would be appreciated.

    Hi Mark

    1. The warnings you are seeing after the postscan are not a problem; in my view they are mostly log files that could not be copied into your application's ThinApp project (probably because access was denied while some application process was running and using them).

    2. The error you see regarding hh.exe during the build probably occurs because it is a 64-bit binary (and ThinApp 4.7.3 does not support 64-bit binaries). In any case, please open the Package.ini file in the project folder (the folder containing the ThinApp project created for Oracle 10g by the capture process), find the [hh.exe] section, add "Disabled = 1" under it, and then try the build again by running build.bat.
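    In other words, the Package.ini section should end up looking like this:

    [hh.exe]
    Disabled = 1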

    Thank you.
