Error processing request: ORA-01403: no data found

We had overcome the blank-page issue by applying the patch to APEX 4.2.2.

We have now patched/upgraded to APEX 4.2.3 and no longer have the blank-page issue, but we have one which sounds similar:

I'm asking here, since I do not get any debug messages - the POST goes in and returns immediately with the "no data found" error.

APEX 4.2.3

APEX Listener 2.0.2

GlassFish 4.0

Apps install OK and display the login page, but when trying to log in we immediately get this error:

Error processing the request

ORA-01403: no data found

The debug message display shows no entries - nothing, not even the first Accept line.

Firebug shows the same - the POST (which looks OK) gets an immediate response of the error page and the message.

I don't have access to the APEX Listener logs tonight, but expect to get them, or have someone check them for me, tomorrow morning.

Any ideas or suggestions?

This happens on all applications (15) except one. Still trying to discern the difference between this app and the others.

Thank you-

The issue proved to be an incorrect page template, where two #FORM_OPEN# substitution strings were present. Removing the 2nd #FORM_OPEN# from the page templates (we had them in both the Login page template and the default standard page template) fixed it, and everything works fine.

Using the APEX Listener uncovered an issue that had been there all along.

Deployed via the HTTP server, it was never a visible problem.

This series of applications was built in HTMLDB 2.0 and updated several times, so there is no telling how long this 2nd #FORM_OPEN# has been there. I am just happy that it's fixed now.
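For reference, a page template should emit the form tags exactly once. A minimal sketch of what the Body section of an APEX page template typically looks like (the markup around the substitution strings is illustrative; only the single #FORM_OPEN#/#FORM_CLOSE# pair is the point):

    <body #ONLOAD#>
    #FORM_OPEN#
    #SUCCESS_MESSAGE##NOTIFICATION_MESSAGE##GLOBAL_NOTIFICATION#
    #BOX_BODY#
    #FORM_CLOSE#
    </body>

If #FORM_OPEN# appears twice (for example once in the page template and again in the login page template region), the page renders duplicate/nested forms and the login POST can fail the way described above.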

We also had to move some processes on our login page from Before Header into Before Regions for them to work properly, because of the shift in exactly when APEX 4.1.1+ apps process things before the header versus after it. See the help for the Compatibility Mode item in the Application properties for more information.

Our upgrade was from APEX 4.1.0 to APEX 4.2.3, not a big jump but significant in this respect.

Also see this link for more information on the Login/Logout processing changes to be aware of when moving from APEX 4.1 to APEX 4.1.1 and higher:

Login/Logout Handling in Apex 4.1.1 | Christophs 2 Cents of Oracle

Thank you all for the suggestions which led me to track it down,

Karen

Tags: Database

Similar Questions

  • Receiving error 383 while querying the DataFinder

    Hello

    We have a script (VBS) that runs in DIAdem and queries a DataFinder (Server edition). After adding some error-handling code, we encountered the following error:

    Error description: an error occurred while searching. The query can contain at least one invalid property.        Number: 383 Source: false.*

    * Now, from the message above we can infer that there is a bad property or property value in the query, but please see the next paragraph.

    Once we trap this error, we log and save the queries (to TDQ files) that were in use at the moment the error occurred. When we load those query files again, no error is raised. We still need to solve this problem, but we haven't found the root cause yet. Any guidance, tips or help will be greatly appreciated.

    Hi Manny,

    This looks like a property data type problem to me.  In rare cases saving a DIAdem condition in a *.tdq file can cause the condition to change the property's data type.  Can you try reducing the number of conditions to one and determine which condition is the culprit?  Then check which operator you use in that condition - if it is "=" or "<>" then it supports any data type.  If it is any other operator, it could be running against a string data type.  In addition, you may have a datetime property that is not optimized in the DataFinder, which could be considered an invalid property.

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Error: The Oracle data access Client is not installed

    Hi all
    I was trying to configure Performance Management Architect 11.1.2 when I got the error "The Oracle data access Client is not installed".

    Kindly guide me on how to overcome this problem.

    Thanks and regards
    Alizée

    Install ODP for .NET (Oracle Data Provider for .NET).

    Good reading: http://download.oracle.com/docs/cd/E17236_01/epm.1112/epm_install_start_here.pdf

    a little video:
    https://oracleaw.webex.com/ec0605ld/eventcenter/recording/recordAction.do;jsessionid=hL8FPMKDJzfGYMG1V21K0B9yCH32R07pvRmQmxDgT12ZK6vNKQhH!-901920245?theAction=poprecord&actname=%2Feventcenter%2Fframe%2Fg.do&apiname=lsr.php&renewticket=0&renewticket=0&actappname=ec0605ld&entappname=url0107ld&needFilter=false&&isurlact=true&entactname=%2FnbrRecordingURL.do&rID=63550522&rKey=ec6e556fa18cd368&recordID=63550522&rnd=5208921685&siteurl=oracleaw&SP=EC&AT=pb&format=short

  • Executing a trigger gives error 01403. 00000 - "no data found"

    Hi PL/SQL guys,

    When we try to run the following AFTER INSERT trigger, we get error ORA-01403: no data found.

    The trigger is:

    CREATE OR REPLACE TRIGGER SYNC_OUGR_USER_ADDRESS
    AFTER INSERT ON OUGR_USER_ADDRESS
    FOR EACH ROW
    DECLARE
        P_CD_ADDR_TYPE  OUGR_USER_ADDRESS.CD_ADDR_TYPE%TYPE;
        P_AD_CITY       OUGR_USER_ADDRESS.AD_CITY%TYPE;
        P_NM_NAME       CITY.NM_NAME%TYPE;
        P_FL_OVERSEAS   OUGR_USER_ADDRESS.FL_OVERSEAS%TYPE;
        P_AD_COUNTRY    OUGR_USER_ADDRESS.AD_COUNTRY%TYPE;
        P_TEMP_CITY     VARCHAR2(10);
        P_CD_CODE       REF_COUNTRY_CODE.CD_CODE%TYPE;
    BEGIN
        P_CD_ADDR_TYPE := :NEW.CD_ADDR_TYPE;
        P_FL_OVERSEAS  := :NEW.FL_OVERSEAS;
        P_AD_CITY      := :NEW.AD_CITY;
        P_AD_COUNTRY   := :NEW.AD_COUNTRY;

        SELECT LENGTH(TRIM(TRANSLATE(P_AD_CITY, ' +-.0123456789', ' ')))
          INTO P_TEMP_CITY
          FROM DUAL; -- to check if the value is numeric

        SELECT NM_NAME INTO P_NM_NAME FROM CITY WHERE ID_TOWN = P_AD_CITY;

        SELECT CD_CODE INTO P_CD_CODE FROM REF_COUNTRY_CODE WHERE CD_CODE = P_AD_COUNTRY;

        IF P_CD_ADDR_TYPE IN ('MA', 'PA') THEN
            IF P_TEMP_CITY IS NULL THEN
                P_AD_CITY := P_NM_NAME;
            ELSE
                P_AD_CITY := P_AD_CITY;
            END IF;
        ELSE
            P_AD_CITY := P_NM_NAME;
        END IF;

        IF P_FL_OVERSEAS = 'Y' THEN
            P_AD_COUNTRY := P_CD_CODE;
        ELSE
            P_AD_COUNTRY := P_AD_COUNTRY;
        END IF;

        INSERT INTO OUGR_USER_ADDRESS@TO_GVRS
            (ID_ADDRESS, CD_ADDR_TYPE, AD_UNIT, AD_NUM, AD_STR1, AD_STR2,
             AD_CITY, AD_COUNTY, AD_ST, AD_COUNTRY, AD_ZIP5, AD_ZIP4,
             FL_AD_RURAL, FL_OVERSEAS, TM_STAMP)
        VALUES
            (:NEW.ID_ADDRESS, :NEW.CD_ADDR_TYPE, :NEW.AD_UNIT, :NEW.AD_NUM,
             :NEW.AD_STR1, :NEW.AD_STR2, P_AD_CITY, :NEW.AD_COUNTY, :NEW.AD_ST,
             P_AD_COUNTRY, :NEW.AD_ZIP5, :NEW.AD_ZIP4, :NEW.FL_AD_RURAL,
             :NEW.FL_OVERSEAS, :NEW.TM_STAMP);

    END SYNC_OUGR_USER_ADDRESS;
    /

    Greatly appreciate your help in this regard.

    Thanks in advance.

    Kind regards

    REDA

    Hi, Raj,

    Instead of

    SELECT LENGTH(TRIM(TRANSLATE(P_AD_CITY, ' +-.0123456789', ' '))) INTO P_TEMP_CITY FROM DUAL; -- to check if the value is numeric

    SELECT NM_NAME INTO P_NM_NAME FROM CITY WHERE ID_TOWN = P_AD_CITY;

    SELECT CD_CODE INTO P_CD_CODE FROM REF_COUNTRY_CODE WHERE CD_CODE = P_AD_COUNTRY;

    You can say:

    P_TEMP_CITY := LENGTH(TRIM(TRANSLATE(P_AD_CITY, ' +-.0123456789', ' ')));

    SELECT MIN(NM_NAME)
      INTO P_NM_NAME
      FROM CITY
     WHERE ID_TOWN = P_AD_CITY;

    SELECT MIN(CD_CODE)
      INTO P_CD_CODE
      FROM REF_COUNTRY_CODE
     WHERE CD_CODE = P_AD_COUNTRY;

    You don't need the DUAL table much in PL/SQL.

    When you use an aggregate function (MIN, as above) without a GROUP BY clause, the result set is guaranteed to have exactly 1 row, so NO_DATA_FOUND cannot be raised.  It also avoids the TOO_MANY_ROWS error (ORA-01422), which is probably impossible in this example anyway.
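    Another option, sketched below with the same table and column names as above, is to keep the plain SELECT INTO and handle the missing-row case explicitly:

    BEGIN
        SELECT NM_NAME
          INTO P_NM_NAME
          FROM CITY
         WHERE ID_TOWN = P_AD_CITY;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            P_NM_NAME := NULL;  -- no matching city: decide later what to fall back to
    END;

    That makes the "no matching row" case explicit instead of hiding it behind MIN().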

  • How to query Oracle, MySQL and MSSQL data?

    For an environment with Oracle 11g/12c Enterprise Edition, MySQL 5.7 Community Edition and MSSQL 2008/2012 Standard/Enterprise Edition, is there any problem using DG4ODBC to query data from all 3 platforms?

    Are there other free alternatives?

    If the queried data is mostly in MySQL or MSSQL, would it be more efficient to query from MySQL or MSSQL instead?

    If so, any suggestion on how to do it from those platforms? I know MSSQL can use a linked server, but it is quite slow.

    Hi Ed,

    Ok!  I still think of a gateway "instance" as a partner to an init.ora file, but it is still useful to be aware of how others see these things.

    Your installation is like the one on our test systems - multiple gateways (dg4odbc, dg4msql, dg4drda etc.) in one ORACLE_HOME.
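    For the DG4ODBC side of the question, the general shape is: a gateway init file (init<gateway_sid>.ora with an HS_FDS_CONNECT_INFO entry pointing at the ODBC DSN), a listener/tnsnames entry for that gateway SID, and then a database link in the Oracle database. A rough sketch in SQL (all names here are illustrative):

    -- Database link that routes through the DG4ODBC gateway; 'dg4odbc_mysql'
    -- is a TNS alias pointing at the gateway listener entry.
    CREATE DATABASE LINK mysql_link
      CONNECT TO "mysql_user" IDENTIFIED BY "mysql_password"
      USING 'dg4odbc_mysql';

    -- Remote object names are usually case-sensitive through the gateway,
    -- hence the double quotes.
    SELECT COUNT(*) FROM "customers"@mysql_link;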

    Kind regards

    Mike

  • Oracle foreground (user) processes accessing data files on Linux

    As far as I know, user processes in an Oracle database run the application code, while the database's server processes run the Oracle database server code (a server process parses and executes the SQL statements issued through the application, reads data blocks from the data files, and returns the results to the application), and the background processes archive the redo logs, update the headers of all data files to record checkpoint details, write the contents of the database buffers to the data files, run recovery if necessary, etc.

    However, I can see oracle user processes accessing data files in my Oracle database. When I look for the processes accessing the data files belonging to the "recprov" tablespace, I get the following:

    [root@ymir ~]# lsof | grep recprov
    Oracle Oracle 465 11u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 465 13u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 465 15u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 964 11u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 964 13u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 964 14u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 13364 14u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 13364 76u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 16445 17u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 16445 18u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 20522 82uW REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 20522 122uW REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 20532 44u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 20532 75u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658
    Oracle Oracle 20534 17u REG 8.18 4823457792 /ora2/oradata/essepr3/recprov_tb_02.dbf 48496659
    Oracle Oracle 20534 79u REG 8.18 6291464192 /ora2/oradata/essepr3/recprov_tb_01.dbf 48496658

    As you can see, there are 7 oracle processes accessing these data files: 4 user processes and 3 background processes:

    [root@ymir ~]# ps -ef
    ...
    oracle     465     1  0 Jan27 ?        00:00:23 oracleessepr3 (LOCAL=NO)
    oracle     964     1  0 Jan27 ?        00:00:25 oracleessepr3 (LOCAL=NO)
    oracle   13364     1  0 Jan13 ?        02:45:02 oracleessepr3 (LOCAL=NO)
    oracle   16445     1  0 Jan21 ?        00:00:05 oracleessepr3 (LOCAL=NO)
    oracle   20522     1  0  2009 ?        00:09:59 ora_dbw0_essepr3
    oracle   20532     1  0  2009 ?        00:04:24 ora_ckpt_essepr3
    oracle   20534     1  0  2009 ?        00:04:10 ora_smon_essepr3
    ...

    And I confirmed this information from the database itself:

    SQL> select p.spid, ses.type, ses.username, ses.machine, ses.program
           from v$session ses, v$process p
          where p.spid in (465, 964, 13364, 16445, 20522, 20532, 20534)
            and ses.paddr = p.addr;

    SPID  TYPE        USERNAME   MACHINE     PROGRAM
    ----- ----------- ---------- ----------- ---------------------------------
    20522 BACKGROUND             dbserver    oracle@dbserver (DBW0)
    20532 BACKGROUND             dbserver    oracle@dbserver (CKPT)
    20534 BACKGROUND             dbserver    oracle@dbserver (SMON)
    13364 USER        DBSNMP     dbserver    emagent@dbserver (TNS V1-V3)
    964   USER        RECPROV    appserver   JDBC Thin Client
    16445 USER        SYSTEM     userPC      PlSqlDev.exe
    465   USER        RECPROV    appserver   JDBC Thin Client

    * RECPROV is the application's database user name.

    My question is: why are there user processes accessing the data files? I think this does not match the process architecture defined by Oracle. And, most importantly, doesn't it mean a security hole in the database?

    DB version: 10.2.0.3, dedicated
    The Linux Version: Linux 2.6.18 - 53.el5xen #1 SMP Sat Nov 10 19:46:06 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

    Thanks in advance

    Hello

    There seems to be some confusion, even in the Oracle documentation, in terms of nomenclature.

    Think of it this way. There is the client program that connects to Oracle, such as SQL*Plus, Toad, a custom-written application, etc. This program runs on the client machine, which may or may not be the database server.

    Then there are the Oracle database server processes. They always run on the database server. These processes are of two types: USER and BACKGROUND.

    You can see:

    SQL> select background,count(*) from v$process group by background;
    
    B   COUNT(*)
    - ----------
              65
    1         37
    

    These are copies of the oracle binary that attach directly to the SGA and interact with the database files. The oracle binary runs with several different "personalities", based on the purpose it is currently serving. The BACKGROUND process type is divided into different specific processes for specific purposes.
    Once again, observe:

    SQL> select pname from v$process where background = 1;
    
    PNAME
    -----
    PMON
    VKTM
    GEN0
    DIAG
    DBRM
    PING
    PSP0
    ACMS
    DIA0
    LMON
    LMD0
    
    PNAME
    -----
    LMS0
    LMS1
    RMS0
    LMHB
    MMAN
    DBW0
    DBW1
    LGWR
    CKPT
    SMON
    RECO
    
    PNAME
    -----
    RBAL
    ASMB
    MMON
    MMNL
    MARK
    LCK0
    RSMN
    GTX0
    SMCO
    W000
    RCBG
    
    PNAME
    -----
    QMNC
    Q001
    CJQ0
    Q005
    
    37 rows selected.
    

    Thus, each background process serves a different purpose. For example, DBW0 is concerned only with writing data from the buffer cache to the data files and maintaining the data structures associated with that. Each background process above serves a different purpose; I won't go into all of them. Generally, the background processes start at instance startup time, though some may be started on demand some time after instance startup.

    Now, consider the dedicated server model. When you connect to the database using SQL*Plus, for example, you run the "sqlplus" binary. Assume you are connecting via SQL*Net. In this case, sqlplus connects to the listener, and the listener creates an oracle server process of type 'USER'. This process always runs on the database server. It runs as the 'oracle' OS user, and it provides your real and direct interface to the database. This process can interact with the database and read the database files directly.

    Mainly, your USER server process will read data from the buffer cache and, in the case of a cache miss, will read the data from the data files and load it into the buffer cache. In addition, depending on the code path, it may also read or write the data files directly, bypassing the buffer cache.

    So, the bottom line: all server processes (USER and BACKGROUND) interact with the database and its associated files. The client program code (sqlplus, Toad, etc.) NEVER interacts with the database files directly.
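    If you want to see this for yourself, here is a quick sketch of a query against the standard v$ views that ties each session (client program) to the dedicated server process (OS PID) doing the file I/O on its behalf:

    SELECT s.program  AS client_program,
           s.username,
           s.machine,
           p.spid      AS server_process_os_pid
      FROM v$session s
      JOIN v$process p ON p.addr = s.paddr
     WHERE s.type = 'USER';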

    Hope that clears things up.

    -Mark

  • Error executing database query

    I have a web application written in CF8 with Oracle 11g as the primary database. This application has been used very heavily for over 6 years. Currently I'm moving it to CF10 with only a small CF code change. When I run the new application on my test server, everything seems to work fine except when it calls an Oracle package. That call generates the error "Error executing database query". The strange thing is that everything works before and after the call to the procedure (I tested using cfabort), and the Oracle package still works on the production server (CF8), but not when it is called from CF10. My question is: are there changes in CF10 in how you call a procedure? Or is there some fix that I don't know about? The code is as follows: SELECT CASE TRIM(TO_CHAR(SYSDATE, 'DAY')) WHEN 'MONDAY' THEN '1' ELSE '2' END AS TodaysDate FROM dual

    SELECT Count(other_id) AS NoRecFound FROM gl_dup_ids_ssns WHERE Trim(create_date) =

    SELECT Count(other_id) AS NoRecFound FROM gl_dup_ids_ssns WHERE Trim(create_date) = followed by CF code to stop the process and email the admin

    "Error executing database query" appears when it hits the cfstoredproc call. The code is exactly the same as in CF8; this template has not been changed. Exception 14:03:53.053 - Database Exception: in /home/space/users/www/GL/glproc.cfm: line 93, error executing database query.

    I found the answer! In case someone out there faces the same issue: in the ColdFusion Administrator, under the data source's Advanced settings, scroll down and find "Allowed SQL", where there are checkboxes for Select, Update, Delete, Insert, and one of them is the Stored Procedures checkbox. Mine was not checked, which is why ColdFusion was unable to call the stored procedure. I checked it, saved, and I'm good to go.

  • Can't access websites in Google Chrome error: the requested URL could not be found

    Original title: Error message

    Hello, I get the message below when I try to go to a Web site, but only from Google Chrome; it works from Internet Explorer, and it used to work in Chrome before, so it's weird and annoying!

    any ideas?
    Thank you very much.
    ERROR the requested URL could not be found

    While trying to retrieve the requested URL, the following error was encountered:

    • Invalid request

    Some aspect of the HTTP request is not valid. Possible problems:

    • Missing or unknown request method
    • Missing URL
    • Missing HTTP identifier (HTTP/1.0)
    • Request is too large
    • Content-Length missing for POST request
    • Transfer-Encoding not supported
    • Illegal character in hostname

    Footprint 4.8/FPMCP 

    Hello

    If I understand correctly, you are unable to open Web sites using Google Chrome and the same works fine in Internet Explorer.

    Have you made any recent changes to your computer?

    Please try the suggestions from the following link.

    Clear your cache and other browser data

    http://support.Google.com/chrome/bin/answer.py?hl=en&answer=95582

    If the problem persists, you can contact Google Chrome support.

    http://productforums.Google.com/Forum/#!Forum/chrome

  • Error: Data rows with unmapped dimensions exist for period "1 April 2014"

    Hi Experts,

    I get the error below when I click the Execute button to load data in the Data Load area of the 11.1.2.3 workspace. Actually, I have already set up, under Period Mapping, the Global Mapping tab (added 12 months of records), the Application Mapping tab (added 12 months of records), and the Source Mapping tab (added the month "1 April 2014" as the period name with Type = Explicit mapping). What else should I check to fix this? Thank you.

    2014-04-29 06:10:35,624 [AIF] INFO: FDMEE process start, process ID: 56
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE logging level: 4
    2014-04-29 06:10:35,625 [AIF] INFO: FDMEE log file: null\outbox\logs\AAES_56.log
    2014-04-29 06:10:35,625 [AIF] INFO: User: admin
    2014-04-29 06:10:35,625 [AIF] INFO: Location: AAESLocation (Partitionkey:2)
    2014-04-29 06:10:35,626 [AIF] INFO: Period Name: Apr 1, 2014 (Period Key: 4/1/14 12:00 AM)
    2014-04-29 06:10:35,627 [AIF] INFO: Category Name: AAESGCM (Category Key: 2)
    2014-04-29 06:10:35,627 [AIF] INFO: Rule Name: AAESDLR (Rule ID: 7)
    2014-04-29 06:10:37,504 [AIF] INFO: Jython Version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54) [Oracle JRockit(R) (Oracle Corporation)]
    2014-04-29 06:10:37,504 [AIF] INFO: Java Platform: java1.6.0_37
    2014-04-29 06:10:39,364 [AIF] INFO: - START IMPORT STEP -
    2014-04-29 06:10:45,727 [AIF] INFO: Importing source data for period "1 April 2014"
    2014-04-29 06:10:45,742 [AIF] INFO: Importing source data for ledger "ABC_LEDGER"
    2014-04-29 06:10:45,765 [AIF] INFO: Monetary data rows imported from source: 12
    2014-04-29 06:10:45,783 [AIF] INFO: Total data rows from source: 12
    2014-04-29 06:10:46,270 [AIF] INFO: Mapping data for period "1 April 2014"
    2014-04-29 06:10:46,277 [AIF] INFO: Processing mappings for column 'ACCOUNT'
    2014-04-29 06:10:46,280 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,280 [AIF] INFO: Processing mappings for column 'ENTITY'
    2014-04-29 06:10:46,281 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,281 [AIF] INFO: Processing mappings for column 'UD1'
    2014-04-29 06:10:46,282 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,282 [AIF] INFO: Processing mappings for column 'UD2'
    2014-04-29 06:10:46,283 [AIF] INFO: Data rows updated by EXPLICIT mapping rule: 12
    2014-04-29 06:10:46,312 [AIF] INFO: Staging data for period "1 April 2014"
    2014-04-29 06:10:46,315 [AIF] INFO: Number of rows deleted from TDATAMAPSEG: 171
    2014-04-29 06:10:46,321 [AIF] INFO: Number of rows inserted into TDATAMAPSEG: 171
    2014-04-29 06:10:46,324 [AIF] INFO: Number of rows deleted from TDATAMAP_T: 171
    2014-04-29 06:10:46,325 [AIF] INFO: Number of rows deleted from TDATASEG: 12
    2014-04-29 06:10:46,331 [AIF] INFO: Number of rows inserted into TDATASEG: 12
    2014-04-29 06:10:46,332 [AIF] INFO: Number of rows deleted from TDATASEG_T: 12
    2014-04-29 06:10:46,366 [AIF] INFO: - END IMPORT STEP -
    2014-04-29 06:10:46,408 [AIF] INFO: - START NEXT STEP -
    2014-04-29 06:10:46,462 [AIF] INFO: Validating data maps for period "1 April 2014"
    2014-04-29 06:10:46,473 [AIF] INFO: Data rows marked as invalid: 12
    2014-04-29 06:10:46,473 [AIF] ERROR: Error: Data rows with unmapped dimensions exist for period "1 April 2014"
    2014-04-29 06:10:46,476 [AIF] INFO: Total data rows available for export to target: 0
    2014-04-29 06:10:46,478 [AIF] FATAL: Error in CommMap.validateData
    Traceback (most recent call last):
      File "<string>", line 2348, in validateData
    RuntimeError: [u'Error: Data rows with unmapped dimensions exist for period "1 April 2014"']

    2014-04-29 06:10:46,551 [AIF] FATAL: COMM error validating data
    2014-04-29 06:10:46,556 [AIF] INFO: FDMEE process end, process ID: 56

    Thanks to all you guys

    This problem was solved after I mapped all the dimensions in the data load. I had mapped only Entity, Account, Custom1 and Custom2 at first because there was no source mapping for Custom3, Custom4 and ICP. After doing the mapping for Custom3, Custom4 and ICP, the problem was resolved. This is why all dimensions should be mapped here.

  • ODI LKM Oracle to Oracle (Data Pump) question

    Hi all

    I have a weird problem in ODI.

    I am joining per_all_people_f and fnd_user to load w_user_ds using Oracle Data Integrator. The LKM used is LKM Oracle to Oracle (Data Pump).

    When I run the interface, I am getting the error below:

    ODI-1227: Task USER_DATA_SET (Loading) fails on the source ORACLE connection EBS.

    Caused by: java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)

    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)

    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)

    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)

    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)

    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)

    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)

    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)

    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)

    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3954)

    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)

    at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)

    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)

    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)

    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)

    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)

    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)

    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:577)

    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:468)

    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2128)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)

    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)

    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)

    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)

    at java.lang.Thread.run(Thread.java:662)

    The generated code is

    create table X780021
    (
        C1_FIRST_NAME,
        C2_MID_NAME,
        C3_LAST_NAME,
        C4_FULL_NAME,
        C5_NAME_SUFFIX,
        C6_SEX_MF_CODE,
        C7_SEX_MF_NAME,
        C8_COUNTRY_NAME,
        C9_LOGIN,
        C10_CREATED_BY_ID,
        C11_CHANGED_BY_ID,
        C12_CREATED_ON_DT,
        C13_CHANGED_ON_DT,
        C14_AUX1_CHANGED_ON_DT,
        C15_SRC_EFF_TO_DT,
        C16_INTEGRATION_ID,
        C17_EFFECTIVE_START_DATE
    )
    ORGANIZATION EXTERNAL
    (
        TYPE oracle_datapump
        DEFAULT DIRECTORY DAT_DIR
        LOCATION ('X780021.exp')
    )
    PARALLEL
    AS SELECT
        ALL_PEOPLE_F.FIRST_NAME,
        ALL_PEOPLE_F.MIDDLE_NAMES,
        ALL_PEOPLE_F.LAST_NAME,
        ALL_PEOPLE_F.FULL_NAME,
        ALL_PEOPLE_F.SUFFIX,
        ALL_PEOPLE_F.SEX,
        ALL_PEOPLE_F.SEX,
        ALL_PEOPLE_F.NATIONALITY,
        USER.USER_NAME,
        ALL_PEOPLE_F.CREATED_BY,
        ALL_PEOPLE_F.LAST_UPDATED_BY,
        ALL_PEOPLE_F.CREATION_DATE,
        ALL_PEOPLE_F.LAST_UPDATE_DATE,
        ALL_PEOPLE_F.CREATION_DATE,
        ALL_PEOPLE_F.EFFECTIVE_END_DATE,
        USER.USER_ID,
        ALL_PEOPLE_F.EFFECTIVE_START_DATE
    from APPS.FND_USER USER, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
    where (1 = 1)
    and (ALL_PEOPLE_F.PERSON_ID = USER.EMPLOYEE_ID)

    I don't see what the problem is here.

    Can someone help me?

    Thank you and best regards,

    Krishna Prasad

    I found the problem: it's the way ODI generated the alias for the FND_USER table. By default it produces USER as the alias, which is an Oracle reserved word. We just need to rename it to something else, and it worked.
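    In other words, the generated statement works once the alias is no longer a reserved word. A sketch of the corrected join (FUSER is just an illustrative alias):

    SELECT ALL_PEOPLE_F.FULL_NAME,
           FUSER.USER_NAME,
           FUSER.USER_ID
      FROM APPS.FND_USER         FUSER,
           APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
     WHERE (1 = 1)
       AND (ALL_PEOPLE_F.PERSON_ID = FUSER.EMPLOYEE_ID);

    In ODI itself, the fix is simply to give the FND_USER source datastore a different alias in the interface so the generated code picks it up.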

  • DB ERROR: ORA-01033: ORACLE initialization or shutdown in progress

    Hello

    I want to start the Oracle 11g server, but I get this error message:

    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Process ID: 0
    Session ID: 0 Serial number: 0

    Note: my DB was working fine, but after a Windows update and a restart of the computer I get this error.


    I followed a previous thread that describes what to do, but I am getting the error below:

    SQL*Plus: Release 11.2.0.2.0 Production on Tue Jan 15 15:10:12 2013

    Copyright (c) 1982, 2010, Oracle. All rights reserved.

    SQL> connect sys/admin123 as sysdba
    ERROR:
    ORA-28056: Writing audit records to Windows Event Log failed
    OSD-226114008: Message 226114008 not found; product=RDBMS; facility=SOSD
    O/S-Error: (OS 1502) The event log file is full.
    ORA-28056: Writing audit records to Windows Event Log failed
    OSD-226114008: Message 226114008 not found; product=RDBMS; facility=SOSD
    O/S-Error: (OS 1502) The event log file is full.


    SQL> shutdown immediate
    ORA-01012: not logged on
    SQL> startup
    ORA-01012: not logged on
    SQL> shutdown immediate
    ORA-01012: not logged on
    SQL>

    Can you please tell me what to do?
    Thanks in advance

    Edited by: gaelle Medlery on 15 January 2013 02:16

    Hello
    From the error message it is clear that a database shutdown is in progress. You must wait some time for the shutdown process to finish, and then you can start it up. If you do not want to wait, reboot the machine; once the server has restarted, you can start your database.
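    Once a SYSDBA connection succeeds, a quick sketch of how to check what state the instance is actually in before trying anything else:

    -- STATUS will be STARTED, MOUNTED or OPEN; if the old instance is still
    -- shutting down, connections keep failing until it is gone.
    SELECT instance_name, status FROM v$instance;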

    Kind regards
    Kishore

  • Error when inserting XML Date in the Table

    Hi all

    I am working on Oracle 11g and trying to insert an XML date into a table, but I get the error below.

    Query: insert into TableName (ID, CREATION, CREATEDBY) VALUES (50, '2010-12-15T12:57:19', 'Name')

    Error: java.sql.SQLDataException: ORA-01861: literal does not match format string

    The CREATED column's datatype is DATE.

    When I use SYSDATE instead of hard-coding the XML date, it gets inserted successfully into the table. Please let me know how to pass this XML-format date.

    Thanks in advance.

    Regards
    Nikhil

    I don't see any XML in what you posted. In any case:

    '2010-12-15T12:57:19'

    is a string, not a date. Use:

    to_date('2010-12-15T12:57:19', 'YYYY-MM-DD"T"HH24:MI:SS')

    For example:

    SQL> create table tbl(created date);
    
    Table created.
    
    SQL> insert into tbl values('2010-12-15T12:57:19');
    insert into tbl values('2010-12-15T12:57:19')
                           *
    ERROR at line 1:
    ORA-01861: literal does not match format string
    
    SQL> insert into tbl values(to_date('2010-12-15T12:57:19','YYYY-MM-DD"T"HH24:MI:SS'))
      2  /
    
    1 row created.
    
    SQL> 
    

    SY.

  • Exporting a schema through Oracle Data Pump with Database Vault enabled

    Hello

    I have installed and configured Database Vault on Oracle 11gR2 (11.2.0.3) to protect a specific schema (SCHEMA_NAME) via a realm. I followed the following doc:
    http://www.Oracle.com/technetwork/database/security/TWP-databasevault-DBA-BestPractices-199882.PDF
    to ensure that the SYS and SYSTEM users have sufficient rights to complete an Oracle Data Pump schema export operation.

    I.e. I gave sys and system the following:
    execute dvsys.dbms_macadm.authorize_scheduler_user ('sys', 'SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_scheduler_user ('system', 'SCHEMA_NAME');

    execute dvsys.dbms_macadm.authorize_datapump_user ('sys', 'SCHEMA_NAME');
    execute dvsys.dbms_macadm.authorize_datapump_user ('system', 'SCHEMA_NAME');

    I also created a second realm on the same schema (SCHEMA_NAME) to allow SYS and SYSTEM to manage indexes for the protected tables. This separate realm was created for all index object types (Index, Index Partition and Indextype), and SYS and SYSTEM were authorized as OWNER of this realm.

    However, when I try to complete an Oracle Data Pump export operation on the schema, I get two errors directly after the following line appears in the export log:

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX:
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_NOTIFY_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newBlock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_LOADER_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newBlock)
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 9081

    The export completes, but with this error.

    Any help, pointers, suggestions, etc. will be very welcome at this stage.

    Thank you

    I have moved this thread to the "Database - Security" forum. If the document does not help, please open an SR with Support.

    HTH
    Srini

  • There is no process to read data written to a pipe

    09/11/01 15:01:39.31 html: there is no process to read data written to a pipe.
    09/11/01 15:01:39.31 html: Servlet error
    java.io.IOException: there is no process to read data written to a pipe.
    at sun.nio.ch.FileDispatcher.write0 (Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:132)
    at sun.nio.ch.IOUtil.write(IOUtil.java:103)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:329)
    at java.nio.channels.Channels.write(Channels.java:74)
    at java.nio.channels.Channels.access$000(Channels.java:61)
    at java.nio.channels.Channels$1.write(Channels.java:148)
    at com.evermind.server.http.AJPOutputStream.endRequest(AJPOutputStream.java:117)
    at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:306)
    at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:187)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:810)

    How to solve this problem?

    Thank you
    Jackie

    Jackie,

    Please check the alert_<SID>.log file; its location is given by the background_dump_dest parameter:

    SQL> show parameter background_dump_dest
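    The same location can also be read with a plain query (on 11g the ADR locations are in v$diag_info as well):

    SELECT value
      FROM v$parameter
     WHERE name = 'background_dump_dest';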

    Thank you
    Hussein

  • Error when publishing data to the passive cache

    Hello

    I am using active-passive push replication. When I add data to the active cache, the data is added to the cache, but I get the error below when the data is published to the passive cache.

    03/11/31 22:38:06.014/291.473 Oracle coherence GE 3.6.1.0 < error > (thread = Proxy: ExtendTcpProxyService:TcpAcceptor, Member = 1): could not publish EntryOperation {siteName = site1, NOMCLUSTER = Cluster1, cacheName = dist-contact-cache, operation = insert, publishableEntry = PublishableEntry {key = Binary (length = 12, 0x154E094D65656E616B736869 = value), value = Binary (length = 88, value = 0x12813A15A90F00004E094D65656E61
    {{6B736869014E07506C6F74203137024E074368656E6E6169034E0954616D696C4E616475044E06363030303432401A155B014E0524737263244E0E73697465312D436C757374657231), originalValue = Binary (length = 0, value = 0 x)}} dist-contact-Cache Cache due to
    Java.io.StreamCorruptedException (packed): invalid type: Class: com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher 78
    2011-03-31 Oracle coherence GE 3.6.1.0 22:38:06.014/291.473 < D5 > (thread = Proxy: ExtendTcpProxyService:TcpAcceptor, Member = 1): an exception has occurred during the processing of an InvocationRequest for = Proxy Service: ExtendTcpProxyService:TcpAcceptor: (Wrapped: could not publish a lot with the [active Editor] editor on [dist-contact-cache] cache) java.lang.IllegalStateException: attempted to publish on dist-contact-cache cache

    Here is my Coherence cache configuration XML file:
    _________________________________
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">

    <cache-config xmlns:sync="class:com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler">

        <sync:provider pof-enabled="true">
            <sync:coherence-provider/>
        </sync:provider>

        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>dist-contact-cache</cache-name>
                <scheme-name>distributed-scheme-with-edition-dumps</scheme-name>
                <sync:publisher>
                    <sync:publisher-name>Active Publisher</sync:publisher-name>
                    <sync:publisher-scheme>
                        <sync:remote-cluster-publisher-scheme>
                            <sync:remote-invocation-service-name>remote-site2</sync:remote-invocation-service-name>
                            <sync:remote-publisher-scheme>
                                <sync:local-cache-publisher-scheme>
                                    <sync:target-cache-name>dist-contact-cache</sync:target-cache-name>
                                </sync:local-cache-publisher-scheme>
                            </sync:remote-publisher-scheme>
                            <sync:autostart>true</sync:autostart>
                        </sync:remote-cluster-publisher-scheme>
                    </sync:publisher-scheme>
                </sync:publisher>
            </cache-mapping>
        </caching-scheme-mapping>

        <caching-schemes>
            <proxy-scheme>
                <service-name>ExtendTcpProxyService</service-name>
                <thread-count>5</thread-count>
                <acceptor-config>
                    <tcp-acceptor>
                        <local-address>
                            <address>localhost</address>
                            <port>9099</port>
                        </local-address>
                    </tcp-acceptor>
                </acceptor-config>
                <autostart>true</autostart>
            </proxy-scheme>

            <remote-invocation-scheme>
                <service-name>remote-site2</service-name>
                <initiator-config>
                    <tcp-initiator>
                        <remote-addresses>
                            <socket-address>
                                <address>localhost</address>
                                <port>5000</port>
                            </socket-address>
                        </remote-addresses>
                        <connect-timeout>2s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                        <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
                </initiator-config>
                <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                </serializer>
            </remote-invocation-scheme>
        </caching-schemes>

    </cache-config>

    Thanks for the quick post. Please try adding "-Dtangosol.pof.enabled=true" to both the active and passive server startup,
    and just to be sure add "-Dtangosol.coherence.distributed.localstorage=true" as well.

    Mark J
