Oracle Data Loader On Demand on EHA Pod

The Oracle Data Loader does not work correctly.
I downloaded it from Staging (the EHA pod) and did the following:

1. In the "config" folder, update 'OracleDataLoaderOnDemand.config':
hosturl = https://secure-ausomxeha.crmondemand.com
2. In the "sample" folder, add the Owner_Full_Name to 'account-insert.csv'.

Then I ran the batch file at the command prompt.
It runs successfully, but the records are not inserted on the EHA pod; the records exist on the EGA pod instead.
Here is the log.
Does the Data Loader only work against the EGA pod? Could you please give me some advice?


[2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): Execution started.
[2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of configurations loaded: {sessionkeepchkinterval=300, maxthreadfailure=1, testmode=production, logintimeoutms=180000, csvblocksize=1000, maxsoapsize=10240, impstatchkinterval=30, numofthreads=1, hosturl=https://secure-ausomxeha.crmondemand.com, maxloginattempts=1, routingurl=https://sso.crmondemand.com, manifestfiledir=.\Manifest\}
[2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of all options loaded: {datafilepath=sample/account-insert.csv, waitforcompletion=False, clientlogfiledir=, datetimeformat=usa, operation=insert, username=XXXX/XXXX, help=False, disableimportaudit=False, clientloglevel=detailed, mapfilepath=sample/account.map, duplicatecheckoption=externalid, csvdelimiter=,, importloglevel=errors, recordtype=account}
[2012-09-19 14:49:55,296] DEBUG - [main] BulkOpsClientUtil.getPassword(): Entering.
[2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.getPassword(): Exiting.
[2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Entering.
[2012-09-19 14:49:59,937] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Host lookup request to be sent to: https://sso.crmondemand.com/router/GetTarget
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Lookup returned: <?xml version="1.0" encoding="UTF-8"?>
<HostUrl>https://secure-ausomxega.crmondemand.com</HostUrl>
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Successfully extracted the host URL: https://secure-ausomxega.crmondemand.com
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Exiting.
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Entering.
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from the routing app = https://secure-ausomxega.crmondemand.com
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from the config = https://secure-ausomxeha.crmondemand.com
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Updated the config file: .\config\OracleDataLoaderOnDemand.config
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL set to https://secure-ausomxega.crmondemand.com
[2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Exiting.
[2012-09-19 14:50:03,953] INFO - [main] Attempting to log in...
[2012-09-19 14:50:10,171] INFO - [main] Successfully logged in as: XXXX/XXXX
[2012-09-19 14:50:10,171] DEBUG - [main] BulkOpsClient.doImport(): Execution started.
[2012-09-19 14:50:10,171] INFO - [main] Requesting Oracle Data Loader On Demand import validation...
[2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution started.
[2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution completed.
[2012-09-19 14:50:11,328] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Submitting BulkOpImportGetRequestDetail WS call
[2012-09-19 14:50:11,328] INFO - [main] A SOAP request was sent to the server to create the import request.
[2012-09-19 14:50:13,640] DEBUG - [Thread-3] SOAPImpRequestManager.sendImportGetRequestDetail(): SOAP request sent successfully and a response was received
[2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): BulkOpImportGetRequestDetail WS call completed
[2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): SOAP response status code = OK
[2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Going to sleep for 300 seconds.
[2012-09-19 14:50:20,328] INFO - [main] A response to the SOAP request to create the import request on the server has been received.
[2012-09-19 14:50:20,328] DEBUG - [main] SOAPImpRequestManager.sendImportCreateRequest(): SOAP request sent successfully and a response was received
[2012-09-19 14:50:20,328] INFO - [main] Oracle Data Loader On Demand import request validation PASSED.
[2012-09-19 14:50:20,328] DEBUG - [main] BulkOpsClient.sendValidationRequest(): Execution completed.
[2012-09-19 14:50:20,343] DEBUG - [main] ManifestManager.initManifest(): Creating manifest directory: .\Manifest\
[2012-09-19 14:50:20,343] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution started.
[2012-09-19 14:50:20,390] DEBUG - [main] BulkOpsClient.submitImportRequest(): Sending CSV data segments.
[2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.CSVDataSender(): CSVDataSender will use 1 thread.
[2012-09-19 14:50:20,390] INFO - [main] Submitting Oracle Data Loader On Demand import request with the following request Id: AEGA-FX28VK...
[2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Creating thread 0
[2012-09-19 14:50:20,390] INFO - [main] Import Request Submission Status: Started
[2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Starting thread 0
[2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): There are pending requests. Going to sleep.
[2012-09-19 14:50:20,406] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 submitting CSV data segment: 1 of 1
[2012-09-19 14:50:24,328] INFO - [Thread-5] A response to the import data SOAP request sent to the server has been received.
[2012-09-19 14:50:24,328] DEBUG - [Thread-5] SOAPImpRequestManager.sendImportDataRequest(): SOAP request sent successfully and a response was received
[2012-09-19 14:50:24,328] INFO - [Thread-5] A SOAP request containing the import data was sent to the server: 1 of 1
[2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): There are no more requests waiting to be picked up by Thread 0.
[2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 is finished now.
[2012-09-19 14:50:25,546] INFO - [main] Import Request Submission Status: 100.00%
[2012-09-19 14:50:26,546] INFO - [main] Oracle Data Loader On Demand import request submission completed successfully.
[2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution completed.
[2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.doImport(): Execution completed.
[2012-09-19 14:50:26,546] INFO - [main] Attempting to log off...
[2012-09-19 14:50:31,390] INFO - [main] XXXX/XXXX is now logged off.
[2012-09-19 14:50:31,390] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Interrupted.
[2012-09-19 14:50:31,390] DEBUG - [main] BulkOpsClient.main(): Execution completed.

Hello

The Data Loader points at the production environment by default, regardless of whether you downloaded it from staging or production.
To change the pod, edit the configuration file and set the content below:

hosturl = https://secure-ausomxeha.crmondemand.com
routingurl = https://secure-ausomxeha.crmondemand.com
testmode = debug

Tags: Oracle

Similar Questions

  • Which LKM and IKM for fast data loading between MSSQL 2005 and Oracle 11

    Hello

    Can anyone help me decide which LKMs and IKMs are best for data loading between MSSQL and Oracle?

    The staging area is Oracle. I need to load around 400 million rows from MSSQL to Oracle 11g.

    Best regards
    Muhammad

    "LKM MSSQL to ORACLE (BCP SQLLDR)" may be useful in your case; it uses BCP and SQL*Loader to extract from MSSQL and load into the Oracle database.

    Please see the details on the KMs at http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/ms_sqlserver.htm#BGBJBGCC

  • Incremental data loads from an Oracle data source with FDM/ERPi

    Hello

    I use ERPi 11.1.2.1. In the workspace it is possible to set the data load rule option to snapshot or incremental. However, I use FDM/ERPi to load data from Oracle GL into Essbase. Is it possible for me to set up FDM so that the data load rules run incremental loads? Could there be a setting in the ERPi source adapter?

    Thanks for any information you could provide.

    Yes, in the ERPi source adapter there is an option called "Data Load Method" that lets you define how the DL rule is run. By default it is "FULL REFRESH", but it can be changed.

    (A) Connect to the application via the Workbench and the source system adapters.
    (B) Right-click the ERPI source adapter and choose "Options".

    You will see a Load Method option set to the full-refresh value; choose the value you want from the drop-down and save.

  • Data load: converting a timestamp

    I use APEX 5.0 and the Data Load wizard to transfer data from Excel.

    I defined Update_Date as timestamp(2), and the data in Excel is 2015/06/30 12:21:57.

    NLS_TIMESTAMP_FORMAT is DD/MM/RR HH12:MI:SSXFF AM.

    I created the transformation rule for update_date as a PL/SQL function body, as below:

    declare

    l_input_from_csv varchar2(25) := to_char(:update_date, 'MM/DD/YYYY HH12:MI:SS AM');

    l_date timestamp;

    begin

    l_date := to_timestamp(l_input_from_csv, 'MM/DD/YYYY HH12:MI:SS AM');

    return l_date;

    end;

    I keep getting a transformation rule error. If I don't create a transformation rule, it complains of an invalid month.

    Please help me to fix this.

    Thank you very much in advance!

    Hi DorothySG,

    DorothySG wrote:

    Please test the data load on the demo application.

    I pasted a few examples into the page instructions; you can use copy and paste, and it will give the same result.

    Please change the comma separator ',' to another one, otherwise you will get a "do not load" message.

    Check your application 21919. I changed the transformation rule:

    Data is loading properly.
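    For reference, a transformation rule along these lines usually works here (a sketch only, assuming the CSV value reaches the rule as plain text in YYYY/MM/DD HH24:MI:SS format; it is not necessarily the exact rule used in application 21919):

    ```sql
    -- Hypothetical rule body: parse the raw CSV string directly with a
    -- format mask that matches the data, independent of NLS_TIMESTAMP_FORMAT.
    declare
      l_ts timestamp;
    begin
      -- :UPDATE_DATE is assumed to arrive as text, e.g. '2015/06/30 12:21:57'
      l_ts := to_timestamp(:update_date, 'YYYY/MM/DD HH24:MI:SS');
      return l_ts;
    end;
    ```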

    Kind regards

    Kiran

  • Move rejected records to a table during a data load

    Hi all

    When I run my interfaces I sometimes get errors caused by invalid records; I mean, some field contents are not valid.

    I would then like to move these invalid records to another table while the data load carries on, so as not to interrupt the load.

    How can I do that? Are there any examples I could follow?

    Thanks in advance

    Regards

    Hi Alvaro,

    Here you can find different ways to achieve this goal; choose according to your requirement:

    https://community.oracle.com/thread/3764279?SR=Inbox

  • Data Load Wizard

    In Oracle APEX, is it feasible to change the default functionality as follows?

    1. Can we convert the Data Load wizard from insert-only to insert/update functionality based on the source table?

    2. Validation possibility: if the record count is less than the target table's count, the user should get a choice to continue with the insert or to cancel the data load process.

    I use APEX 5.0.

    Please advise on these 2 points.

    Hi Sudhir,

    I'll answer your questions below:

    (1) Yes, the data load can insert/update.

    It's the default behavior: if you choose the right columns for detecting duplicate records, you will be able to see which records are new and which are updated.

    (2) This one will be a little tricky, but you can get there using the underlying collections. The data load uses several collections to perform its operations; in the first step, we load all of the user's records into the "CLOB_CONTENT" collection. By checking this against the number of records in the underlying table, you can easily add a new validation before moving from step 1 to step 2.
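    A validation of this kind might be sketched as follows (an untested illustration: MY_TARGET_TABLE is a placeholder, and the exact collection that holds the uploaded rows can vary between APEX versions):

    ```sql
    -- Hypothetical "PL/SQL Function Returning Boolean" validation on step 1
    declare
      l_uploaded number;
      l_existing number;
    begin
      -- number of members currently held in the data load collection
      l_uploaded := apex_collection.collection_member_count(
                      p_collection_name => 'CLOB_CONTENT');
      select count(*) into l_existing from my_target_table;
      -- fail the validation when fewer records were uploaded than already
      -- exist, forcing the user to confirm before continuing
      return l_uploaded >= l_existing;
    end;
    ```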

    Kind regards

    Patrick

  • Data load failed with error 10415 when exporting data to an Essbase EPMA app

    Hi Experts,

    Can someone help me solve this issue I am facing in FDM 11.1.2.3?

    I'm trying to export data from FDM to an Essbase EPMA application.

    Import and validate worked fine, but when I click Export it fails.

    I am getting the error below:

    Failed to load data

    10415 - data loading errors

    Essbase API procedure: [EssImport] threw code: 1003029 - 1003029

    Encountered formatting error in spreadsheet file (C:\Oracle\Middleware\User_Projects\epmsystem1\EssbaseServer\essbaseserver1\app\Volv

    I have these dimension members:

    1. Account
    2. Entity
    3. Scenario
    4. Year
    5. Period
    6. Regions
    7. Products
    8. Acquisitions
    9. Servicesline
    10. Functionalunit

    When I click the Export button it fails.

    I checked one thing more: the .DAT file, but this file is empty.

    Thanks in advance

    Hello

    Even I was facing a similar problem.

    In my case I was loading data to a classic Planning application. When all the dimension members are ignored in the mapping for a combination and you try to load the data, then when you click Export you will get the same message, and an empty .DAT file is created.

    You can check this.

    Thank you

    Praveen

  • APEX data load table configuration missing when importing into a new workspace

    Hi, has anyone seen this issue before and found a workaround?

    I export/import my application into a different workspace, and everything works fine except for one data load table.

    In my application I use data load tables, and it seems the import does not properly set up the table object for the data load. This causes my application to fail when the user tries to upload a text file.
    Does anyone have a workaround short of recreating the data load table object?

    Breadcrumb: Shared Components -> Data Load Tables -> Add/Modify data load table

    The app before exporting displays Workspace: OOS
    Unique column 1 - CURRENCY_ID (Number)
    Unique column 2 - MONTH (Date)

    When I import the app into the new workspace (OOS_UAT), the data type is absent:
    Unique column 1 - CURRENCY_ID
    Unique column 2 - MONTH

    When I import the app into the same workspace (OOS), I do not see this problem.

    APEX version: Application Express 4.1.1.00.23

    Hi all

    If you are running 4.1.1, this was bug 13780604 (DATA LOAD WIZARD FAILS IF EXPORTED FROM ANOTHER WORKSPACE), which has been fixed. You can download the patch for 13780604 (support.us.oracle.com) for the associated 4.1.1 release.

    Kind regards
    Patrick

  • Schema name is not displayed in the data load

    Hi all

    I'm trying to load a CSV file using the Oracle APEX data load option, with a new file upload (.csv) and a new table. On the data load page, the schema list does not show my current schema, because of which I cannot upload the CSV file.
    Can someone please help with that?


    I use Oracle APEX 4.1.1.

    Regards,
    Rajendrakumar.P

    Raj,

    If it works on apex.oracle.com (4.2) and not in your case (4.1.1), my suspicion is that this is a bug that has been fixed in APEX 4.2. Apart from upgrading to APEX 4.2, I'm not sure there is really a viable alternative.

    Thank you

    -Scott-

    http://spendolini.blogspot.com
    http://www.enkitec.com

  • FDM event scripts fired twice during data loads

    Here's an interesting one. I added the following script to three different events (one at a time, ensuring only one of them is active at a time) to clear data before loading to Essbase:


    Event script content:
    ' Declare local variables
    Dim objShell
    Dim strCMD
    ' Call MaxL script to perform the data clear calculation.
    Set objShell = CreateObject("WScript.Shell")
    strCMD = "D:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient\bin\startMAXL.cmd D:\Test.mxl"
    API.DataWindow.Utilities.mShellAndWait strCMD, 0


    MaxL script:
    login * identified by * on *;
    execute calculation 'FIX("Member1","Member2") CLEARDATA "Member3"; ENDFIX' on *.***;
    exit;




    However, it seems the clear is performed twice, both before and after the data has been loaded to Essbase. This has been verified at every step by checking the Essbase application log:

    With no event script:
    - No Essbase data clear in the application log.

    After adding the script to the "BefExportToDat" event:
    - The script is executed once when you click Export in the FDM Web Client (before the "target system load" modal popup is displayed). Entries are visible in the Essbase application log.
    - The script is then run a second time when you click the OK button in the "target system load" modal popup. Entries are visible in the Essbase application log.

    After adding the script to the "AftExportToDat" event:
    - The script is executed once when you click Export in the FDM Web Client (before the "target system load" modal popup is displayed). Entries are visible in the Essbase application log.
    - The script is then run a second time when you click the OK button in the "target system load" modal popup. Entries are visible in the Essbase application log.

    After adding the script to the "BefLoad" event:
    - The script runs only after you click Export in the FDM Web Client (before the "target system load" modal popup is displayed).
    - The script is run AFTER the data is loaded to Essbase, when the OK button is clicked in the "target system load" modal popup. Entries are visible in the Essbase application log.

    Some notes on the above:
    1. "BefExportToDat" and "AftExportToDat" are both executed twice, before and after the "target system load" modal popup. :-(
    2. "BefLoad" is executed AFTER the data is loaded to Essbase. :-( :-(

    Does anyone have any idea how we could run a clear on the Essbase database before the data is loaded, and not after we have loaded the up-to-date data? And perhaps why the event scripts above seem to be fired twice? There doesn't seem to be any logic to this!


    BefExportToDat - entries in the Essbase application log:
    + [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +
    +...+

    + [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003037) +
    Data Load Updated [98] cells

    + [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003024) +
    Data Load Elapsed Time : [0.52] seconds
    +...+

    + [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +


    AftExportToDat - entries in the Essbase application log:
    + [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +
    +...+

    + [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003037) +
    Data Load Updated [98] cells

    + [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003024) +
    Data Load Elapsed Time : [0.52] seconds
    +...+

    + [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +


    BefLoad - entries in the Essbase application log:
    + [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +
    +...+

    + [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003037) +
    Data Load Updated [98] cells

    + [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003024) +
    Data Load Elapsed Time : [0.52] seconds
    +...+

    + [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013091) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013162) +
    + Received Command [Calculate] from user [admin@Native Directory] +

    + [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1012555) +
    + Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)] +

    James, the export and load event scripts will fire four times, once for each file type: the .DAT file (the main TB file), the -A.DAT (log file), the -B.DAT, and the -C.DAT.

    To work around this so the clear only runs during the load of the main TB file, add the following (or something similar) at the beginning of your event scripts. This assumes strFile is in the parameter list of the subroutine:

    Select Case LCase(Right(strFile,6))
         Case "-a.dat", "-b.dat", "-c.dat" Exit Sub
    End Select
    
  • HP - data load settings

    I know the settings under "Data Load Settings" are stored in the Planning tables, but what do the below mean, and how do they work to meet a given load requirement?
    1. Data Load Dimension
    2. Driver Dimension
    3. Data Load Dimension Parent
    4. Driver Dimension Unique Identifiers

    This page should give you more details - http://download.oracle.com/docs/cd/E17236_01/epm.1112/hp_admin/dataload.html

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Oracle 10g XE 4 GB data size limit

    Hello,
    I have Oracle 10g XE, and I have to import a dump file. I imported the dump's tables, but on the last table of the dump file import I got an error:
    ORA-12952: The request exceeds the maximum allowed database size of 4 GB.

    I know Oracle 10g XE has a 4 GB data size limit. But now what can I do?
    What should I do? Any ideas, please?

    Yes, it's free. Although it is still in beta.

    http://www.Oracle.com/technetwork/database/Express-Edition/11gxe-beta-download-302519.html

    Not sure what "lot" means.

  • Newbie data-load question (sorry): datafile / extent sizing

    Hi guys,

    Sorry to disturb you - but I have done a lot of reading and am still confused.

    I was asked to create a new tablespace:

    create tablespace xyz datafile '/oradata/corpdata/xyz.dbf' size 2048M extent management local uniform size 1023M;

    alter tablespace xyz add datafile '/oradata/corpdata/xyz.dbf' size 2048M;

    Despite being worried at being given no information about the data to load or why the tablespace had to be sized that way, I was told to just 'do it'.

    Someone then tried to load data - and there was a message in the alert log:

    ORA-1652: unable to extend temp segment by 65472 in tablespace xyz

    We do not use autoextend on data files, even though the person loading the data would like to (they are new to the environment).

    The database is on a nightly cold-backup routine - we are between a rock and a hard place - we have no space on the server to run RMAN and only 10 GB left on the tape for the (Veritas) backup routine, so we manage space by not using autoextend.

    As far as I understand, the above error message means the tablespace is not large enough to hold the loaded data - but I was told by the person importing the data that they had sized it correctly, and that the problem was something I did in the database create statement (although I cut and pasted from their instructions, adapting them to our environment - Windows 2003 SP2, but 32-bit).

    The person called to say I had messed up their data load and was about to report me to my manager for failing to do my job - and they did, and my line manager said that I had failed to create the tablespace correctly.

    When this person asked for the tablespace, I asked why they thought the extents should be 1023M, and they said it was a large data load that had to fit within an extent.

    That sounds good... but I'm confused.

    1023M is a great deal - it means you have only four extents in the tablespace before it reaches capacity.

    It is a GIS data load - I have not been involved in previous GIS data loads other than monitoring and altering tablespaces to support them - and the previous people sized things right, and I never had any comeback. Guess I'm a bit lazy - I just did as they asked.

    However, previous loads only ever used 128K as the extent size, never 1023M.

    Can I ask: is 1023M normal for large data loads, or am I right to question it? It seems excessive unless you really have just one table and one index of 1023M.

    Thanks for any insight or further reading.

    Assuming a block size of 8 KB, 65472 blocks would be 511 MB. However, as it is a GIS database, my guess is that the database block size itself has been set to 16 KB, in which case 65472 blocks is 1023 MB.
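    The arithmetic is easy to check (a throwaway query; 65472 is just the block count from the error message):

    ```sql
    -- 65472 blocks expressed in MB for 8 KB and 16 KB block sizes
    select 65472 * 8  / 1024 as mb_if_8k_blocks,   -- 511.5 MB
           65472 * 16 / 1024 as mb_if_16k_blocks   -- 1023 MB
      from dual;
    ```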

    What data load is being done? An Oracle export dump? Does it include a CREATE INDEX statement?
    An export-import is a CREATE TABLE and INSERT, so you would not normally get an ORA-1652 for a temp segment there once the table is created.
    However, you will get an ORA-1652 on a CREATE INDEX, because the target segment (i.e. the index) for this operation is initially created as a 'temporary' segment until the index build is complete, when it switches from being a 'temporary' segment to being an 'index' segment.

    Also, if parallelism is used, each parallel operation would attempt to allocate extents of 1023 MB. Therefore, even if the final index should have been only, say, 512 MB, a CREATE INDEX with a DEGREE of 4 would begin with 4 extents of 1023 MB each and would not shrink below that!

    An extent size of 1023 MB is, in my opinion, very bad. My guess is that they came up with an estimate of the size of the table and thought the table should fit into 1 extent and, therefore, specified 1023 MB in the script that was provided to you. And that is wrong.

    Even Oracle's AUTOALLOCATE only goes up to 64 MB extents once a segment reaches the 1 GB mark.
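    For comparison, a locally managed tablespace that lets Oracle size the extents would look something like this (a sketch; the datafile name is illustrative):

    ```sql
    -- AUTOALLOCATE lets Oracle grow extent sizes progressively (64 KB up to
    -- 64 MB) instead of forcing one huge uniform extent per allocation
    create tablespace xyz
      datafile '/oradata/corpdata/xyz01.dbf' size 2048M
      extent management local autoallocate;
    ```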

  • Forcing errors when loading non-leaf data into Essbase

    Hi all,

    I was wondering if anyone has experience forcing data load errors when a load rule tries to push data into non-leaf members of an Essbase cube.

    I obviously have error handling at the ETL level to prevent non-leaf members from ever reaching the load, but I would like additional error handling on top of that (so, not only fixing the errors).

    I'd much prefer the errors to be rejected and shown in the log, rather than being silently crushed by aggregation in the background.

    Have you tried creating a security filter, for the user performing the load, that allows write access only at level 0 and nothing greater?
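    In MaxL, such a filter might be sketched like this (application, database, user, and member names are placeholders; each dimension you want to restrict needs its own row, and the exact grammar should be checked against your Essbase version):

    ```
    /* grant write only on level-0 members of the Product dimension */
    create or replace filter Sample.Basic.lev0_write
      write on '@LEVMBRS("Product",0)';

    /* assign the filter to the user that performs the data load */
    grant filter Sample.Basic.lev0_write to load_user;
    ```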

  • The data load has run in ODI but some interfaces still show as running in the Operator tab

    Hi experts,

    I'm working on a customization, and we run the incremental load every day. The data load completes successfully, but in the Operator tab the status icons show some interfaces still running.

    Could you please explain why the interfaces still show as running in the Operator tab? Thanks in advance; your valuable suggestions are much appreciated.

    Kind regards

    REDA

    That is what we call a stale session, and it can be removed by restarting the agent.

    You can also manually clean up the expired session in the Operator.
