External table to read another encoding
Hi all,
I am having some problems with an external table. My database character set is currently AL32UTF8. My external table needs to read a txt file encoded in ZHS16GBK, and the values are displayed incorrectly when the external table reads the file. Is there a way to display the values in the txt file correctly without changing either the database character set or the encoding of the txt file?
You can specify a character set in the external table definition.
See for example: http://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:6611962171229#10978420712036
There is also the {forum:id=732} forum, where you could search or ask as well.
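A minimal sketch of that approach (directory, table, and file names here are made up, not from the thread): the CHARACTERSET access parameter declares the encoding of the data file to the ORACLE_LOADER driver, and the values are converted to the database character set as they are read, so neither the database nor the file has to change.

```sql
-- Sketch: dir_gbk and gbk_data.txt are hypothetical names.
CREATE TABLE gbk_ext
(
  col1 VARCHAR2(100)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dir_gbk
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET ZHS16GBK   -- encoding of the source file, not of the database
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('gbk_data.txt')
)
REJECT LIMIT UNLIMITED;
```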
Tags: Database
Similar Questions
-
Since the data is stored outside the database, and the definition is stored inside, does that mean that the structure of the external table is stored in the database as well (i.e. a virtual table based on the external table definition...)?
I'll be hitting the ORA-DOCS again; I just got back into external tables tonight, reading up on them from 2-3 sources, and it is not quite clear whether actual data with structure exists outside the database, or whether an internally defined (and stored?) table draws its data from an outside source.
Since the data is stored outside the database, and the definition is stored inside, does that mean that the structure of the external table is stored in the database as well (i.e. a virtual table based on the external table definition...)?
The 'definition' you refer to IS the structure of the table; they are one and the same. There is no 'table' stored in the database using space or storage. When a query on an external table is executed, the data source is read.
I'll be hitting the ORA-DOCS again; I just got back into external tables tonight, reading up on them from 2-3 sources, and it is not quite clear whether actual data with structure exists outside the database, or whether an internally defined (and stored?) table draws its data from an outside source.
I suggest that you start with the Oracle documentation, including the link provided above.
Whether any "metadata" is stored outside the database depends on whether the file outside the database was produced by Oracle using the UNLOAD mechanism discussed in this doc:
Unloading data using the ORACLE_DATAPUMP access driver
To unload data, you use the ORACLE_DATAPUMP access driver. The stream that is unloaded is in a proprietary format and contains all of the column data for each row being unloaded. An unload operation also creates a metadata stream that describes the contents of the data stream. The information in the metadata stream is required for loading the stream. Therefore, the metadata stream is written to the data file and placed before the data stream.
If YOU provide the data/files, then you MUST provide them in the format the external table definition expects. You can, if you wish, use a preprocessor to convert ANY file (zip, encrypted, etc.) into the required format.
For YOUR files, you can store metadata in the same file, or elsewhere, if you choose, but Oracle will have NO knowledge of that fact and will NOT be involved in the transformation, nor read any of the metadata that you provide. Your preprocessor must remove all such metadata and provide ONLY data in the format appropriate for Oracle to use.
If the file was produced by Oracle's UNLOAD process, then it will include metadata that Oracle WILL read, use, and strip off, as the doc quote above says. That external metadata is in addition to the real external table definition/metadata stored in the dictionary.
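As a sketch of what such an unload looks like (directory, dump file, and table names here are assumptions; employees is the HR sample schema table), CREATE TABLE ... ORGANIZATION EXTERNAL ... AS SELECT writes both the data stream and its leading metadata stream into the dump file:

```sql
-- Sketch: dump_dir and emp.dmp are hypothetical names.
-- Unload: writes the rows plus a leading metadata stream into emp.dmp.
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dump_dir
  LOCATION ('emp.dmp')
)
AS SELECT employee_id, last_name FROM employees;

-- The file can then be read back (here or on another database)
-- by an external table that lists the columns explicitly:
CREATE TABLE emp_reload
(
  employee_id NUMBER,
  last_name   VARCHAR2(25)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dump_dir
  LOCATION ('emp.dmp')
);
```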
-
Timestamp of reading using the external Table
Hello
I have a data file that looks like
2011-08-15 00:00:00.000000|2011-08-15 23:59:59.999999
I am trying to use an external table. But I always get an error trying to read the fractional part (ie the last 6 digits).
The code is:
create table ext_table_fdw
(
  DW_Open  TIMESTAMP,
  DW_Close TIMESTAMP
)
organization external
(
  type oracle_loader
  default directory ftp_in
  access parameters
  (
    records delimited by newline
    nologfile
    fields terminated by '|'
    missing field values are null
    (
      DW_Open  char date_format timestamp mask 'Mon dd yyyy Hh:Mi:Ss:Ff6',
      DW_Close char date_format timestamp mask 'Mon dd yyyy Hh:Mi:Ss:Ff6'
    )
  )
  location ('fdwextract_copy.txt')
)
reject limit unlimited;
I get this error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
Error opening file /u02/lpremia/ftp-in/EXT_TABLE_FDW_11948.bad
Please help as it is very important for us to be able to read the last part of the timestamp.
Thank you.
Best regards
Brinda
user6361157 wrote:
Hello
I have a data file that looks like
2011-08-15 00:00:00.000000|2011-08-15 23:59:59.999999
DW_Open  char date_format timestamp mask 'Mon dd yyyy Hh:Mi:Ss:Ff6',
DW_Close char date_format timestamp mask 'Mon dd yyyy Hh:Mi:Ss:Ff6'
The MASK does NOT match the data! Use 'YYYY-MM-DD HH24:MI:SS.FF6' instead.
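For reference, a corrected version of the statement from the question might read as follows (a sketch; nothing changes except the masks, which now match the sample row '2011-08-15 00:00:00.000000': ISO date order, 24-hour clock, and a '.' before the six fractional-second digits):

```sql
create table ext_table_fdw
(
  DW_Open  TIMESTAMP,
  DW_Close TIMESTAMP
)
organization external
(
  type oracle_loader
  default directory ftp_in
  access parameters
  (
    records delimited by newline
    nologfile
    fields terminated by '|'
    missing field values are null
    (
      -- corrected masks: YYYY-MM-DD, HH24, and '.' before FF6
      DW_Open  char date_format timestamp mask 'YYYY-MM-DD HH24:MI:SS.FF6',
      DW_Close char date_format timestamp mask 'YYYY-MM-DD HH24:MI:SS.FF6'
    )
  )
  location ('fdwextract_copy.txt')
)
reject limit unlimited;
```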
My question is: is it possible for me to fix this error at the level of the external table definition? Please advise.
Here is the data file I am trying to download...
KSEA|2015-08-10-17.00.00|83.000000|32.000000|5.800000
KBFI|2015-08-06-15.00.00|78.000000|35.000000|0.000000
KSEA|2015-08-10-11.00.00|73.000000|55.000000|5.800000
KSEA|2015-08-08-05.00.00|61.000000|90.000000|5.800000
KBFI|2015-08-06-16.00.00|78.000000|36.000000|5.800000
KSEA|2015-08-07-18.00.00|82.000000|31.000000|10.400000
KSEA|2015-08-10-00.00.00|65.000000|61.000000|4.600000
KBFI|2015-08-08-07.00.00|63.000000|84.000000|4.600000
KSEA|2015-08-10-15.00.00|81.000000|34.000000|8.100000
This is the external table script
CREATE TABLE MWATCH.WEATHER_EXT
(
  LOCATION_SAN     VARCHAR2(120 BYTE),
  WEATHER_DATETIME DATE,
  TEMPERATURE      NUMBER(16),
  MOISTURE         NUMBER(16),
  WIND_SPEED       NUMBER(16)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY METERWATCH
  ACCESS PARAMETERS
  (
    records delimited by newline
    badfile METERWATCH:'weather_bad' logfile METERWATCH:'weather_log'
    fields terminated by '|' missing field values are null
    (
      location_san,
      weather_datetime char date_format DATE mask "YYYY-mm-dd-hh.mi.ss",
      temperature,
      moisture,
      wind_speed
    )
  )
  LOCATION (METERWATCH:'weather.dat')
)
REJECT LIMIT UNLIMITED
PARALLEL (DEGREE 5 INSTANCES 1)
NOMONITORING;
Here are the errors from the weather_bad/weather_log files that are generated...
error processing column WEATHER_DATETIME in row 1 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
error processing column WEATHER_DATETIME in row 2 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
error processing column WEATHER_DATETIME in row 5 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
error processing column WEATHER_DATETIME in row 6 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
error processing column WEATHER_DATETIME in row 7 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
error processing column WEATHER_DATETIME in row 9 for datafile /export/home/camsdocd/meterwatch/weather.dat
ORA-01849: hour must be between 1 and 12
Yes, it is possible. Look at your date mask: you are masking for a 12-hour format while your data is in 24-hour format. Change your date mask to "YYYY-mm-dd-hh24.mi.ss". Notice the change in bold.
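If recreating the table is undesirable, the access parameters of an external table can also be replaced in place with ALTER TABLE; a sketch with only hh changed to hh24, restating the rest of the clause from the script above:

```sql
-- Sketch: the whole access parameters clause must be restated;
-- the only change from the original is hh -> hh24 in the date mask.
ALTER TABLE mwatch.weather_ext
  ACCESS PARAMETERS
  (
    records delimited by newline
    badfile METERWATCH:'weather_bad' logfile METERWATCH:'weather_log'
    fields terminated by '|' missing field values are null
    (
      location_san,
      weather_datetime char date_format DATE mask "YYYY-mm-dd-hh24.mi.ss",
      temperature,
      moisture,
      wind_speed
    )
  );
```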
-
Preprocessing with an external table - table ends up empty
Hello
Oracle Database 11 g Enterprise Edition Release 11.2.0.1.0 - 64 bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Solaris 10 x 86
I'm trying to create an external table that first preprocesses a file and then reads it. The problem is that, in the end, the external table is empty even though the file being read is not empty.
First, the table runs gunzip on the 'employees.csv.gz' file located in /export/home/oracle/archives and then reads it. The file employees.csv.gz exists in the specified location. This is the complete script:
-- This directory keeps the archived files
CREATE OR REPLACE DIRECTORY arch_dir AS '/export/home/oracle/archives';
-- This directory shows where the gunzip command is
CREATE OR REPLACE DIRECTORY bin_dir AS '/usr/bin';
CREATE TABLE emp_exadata
(
  employee_id    NUMBER(22,0),
  first_name     VARCHAR2(20),
  last_name      VARCHAR2(25),
  email          VARCHAR2(25),
  phone_number   VARCHAR2(20),
  hire_date      DATE,
  job_id         VARCHAR2(10),
  salary         NUMBER(22,2),
  commission_pct NUMBER(22,2),
  manager_id     NUMBER(22,0),
  department_id  NUMBER(22,0)
)
ORGANIZATION EXTERNAL
(
  TYPE oracle_loader
  DEFAULT DIRECTORY arch_dir
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR bin_dir:'gunzip'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      employee_id,
      first_name,
      last_name,
      email,
      phone_number,
      hire_date CHAR date_format DATE mask "dd-mm-yyyy hh24:mi:ss",
      job_id,
      salary,
      commission_pct,
      manager_id,
      department_id
    )
  )
  LOCATION ('employees_exp.csv.gz')
)
REJECT LIMIT UNLIMITED;
When I select from emp_exadata, the result set is empty!
SELECT * FROM emp_exadata;
no rows selected
When I look on the db server in the directory /export/home/oracle/archives, I see the unarchived (decompressed) file employees_exp.csv. Here are its first three lines:
bash-3.2$ head -3 employees_exp.csv
198,Donald,OConnell,DOCONNEL,650.507.9833,21/06/2007 00:00:00,SH_CLERK,2600,,124,50,
199,Douglas,Grant,DGRANT,650.507.9844,2008-01-13 00:00:00,SH_CLERK,2600,,124,50,
200,Jennifer,Whalen,JWHALEN,515.123.4444,17/09/2003 00:00:00,AD_ASST,4400,,101,10,
The line endings in the file are LF (Unix style). The encoding is ANSI.
I tried to experiment around, but cannot view records when I select in the external table. Please help me to solve it.
I also include the generated log file:
LOG file opened at 01/06/15 16:40:11
For table EMP_EXADATA field definitions
Record format DELIMITED BY newline
Data in file has same endianness as platform
Rows with all null fields are accepted
Fields in Data Source:
  EMPLOYEE_ID CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  FIRST_NAME CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  LAST_NAME CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  EMAIL CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  PHONE_NUMBER CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  HIRE_DATE CHAR (19)
    Date datatype DATE, date mask dd-mm-yyyy hh24:mi:ss
    Terminated by ","
    Trim whitespace same as SQL Loader
  JOB_ID CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  SALARY CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  COMMISSION_PCT CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  MANAGER_ID CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
  DEPARTMENT_ID CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
You need to carefully examine the description of the PREPROCESSOR option in the external tables chapter of the Utilities manual.
The first point that applies to your question is that the preprocessor must write its transformed data to stdout. To do this with gunzip, you use the -c command-line parameter. The second point that applies to your case, given the answer to the first point, is that you must write a shell script if your preprocessor requires command-line parameters.
Kind regards
Bob
-
External table -> fetch location?
Using Oracle 10.2.0.5
An external table is a construct that gives me SQL access to a file.
Is it possible to know the name of the file somehow inside the select? Like adding a column with the name of the file?
Pseudo example:
CREATE TABLE EXT_DUMMY
(
  "RECORDTYPE" VARCHAR2(100 BYTE),
  "COL1"       VARCHAR2(100 BYTE),
  "COL2"       VARCHAR2(100 BYTE),
  "FILE"       VARCHAR2(100 BYTE)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY "IMPORT_BAD_FILE"
  ACCESS PARAMETERS
  (
    records delimited BY newline
    FIELDS TERMINATED BY ';'
    MISSING FIELD VALUES ARE NULL
    (
      RECORDTYPE CHAR,
      COL1 CHAR,
      COL2 CHAR,
      FILE CHAR FILLER
    )
  )
  LOCATION ('Testfile1.txt', 'Testfile2.txt')
)
reject limit 10;
I would like to know from which file a certain row was read. Maybe I missed an option in the documentation. In this example, I have two different files as the source for the external table. The result might look like this:
RECORDTYPE  COL1    COL2    FILE
SAMPLE      DUMMY   DUMMY   Testfile1.txt
SAMPLE      DUMMY1  DUMMY   Testfile1.txt
SAMPLE      DUMMY2  DUMMY   Testfile1.txt
SAMPLE      DUMMY3  DUMMY   Testfile1.txt
SAMPLE      DUMMY1  DUMMY1  Testfile2.txt
SAMPLE      DUMMY1  DUMMY2  Testfile2.txt
SAMPLE      DUMMY2  DUMMY1  Testfile2.txt
Another use case could be this: suppose I allow a user to switch the external table to a different file with
alter table EXT_DUMMY location ('Testfile3.txt');
How do we know which file is read during a select on the table? When user A selects, user B may just have modified the location before that selection started, so user A reads a different file than expected.
Published by: Sven w. on May 26, 2011 16:48
Published by: Sven w. on May 26, 2011 16:51
Published by: Sven w. on May 26, 2011 17:11
Hi Sven,
I don't know how much we can rely on this, but let's consider the following:
create table test_xt
(
  rec_id  number,
  message varchar2(100)
)
organization external
(
  default directory test_dir
  access parameters
  (
    records delimited by newline
    fields terminated by ';'
  )
  location ('marc5.txt', 'test1.csv', 'test2.csv', 'test3.csv')
);
I always thought that the ROWID doesn't hold much meaning for an external table, but...
SQL> select t.rowid
  2       , dump(t.rowid) as rowid_dump
  3       , regexp_substr(dump(t.rowid,10,9,1),'\d+$') as file#
  4       , t.*
  5  from test_xt t
  6  ;

ROWID              ROWID_DUMP                                              FILE#  REC_ID MESSAGE
------------------ ------------------------------------------------------- ------ ------ ------------------------------
(AADVyAAAAAAAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,0,0,0,0,0,0,0,0,0   0           1 this is a line from marc5.txt
(AADVyAAAAAAAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,0,0,0,0,0,0,0,0,33  0           2 this is a line from marc5.txt
(AADVyAAAAAAAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,0,0,0,0,0,0,0,0,66  0           3 this is a line from marc5.txt
(AADVyAAAAAAAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,0,0,0,0,0,0,0,0,99  0           4 this is a line from marc5.txt
(AADVyAAAAAEAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,1,0,0,0,0,0,0,0,0   1           1 this is a line from test1.csv
(AADVyAAAAAEAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,1,0,0,0,0,0,0,0,33  1           2 this is a line from test1.csv
(AADVyAAAAAEAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,1,0,0,0,0,0,0,0,66  1           3 this is a line from test1.csv
(AADVyAAAAAEAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,1,0,0,0,0,0,0,0,99  1           4 this is a line from test1.csv
(AADVyAAAAAIAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,2,0,0,0,0,0,0,0,0   2           1 this is a line from test2.csv
(AADVyAAAAAIAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,2,0,0,0,0,0,0,0,33  2           2 this is a line from test2.csv
(AADVyAAAAAIAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,2,0,0,0,0,0,0,0,66  2           3 this is a line from test2.csv
(AADVyAAAAAMAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,3,0,0,0,0,0,0,0,0   3           1 this is a line from test3.csv
(AADVyAAAAAMAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,3,0,0,0,0,0,0,0,33  3           2 this is a line from test3.csv
(AADVyAAAAAMAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,3,0,0,0,0,0,0,0,66  3           3 this is a line from test3.csv
(AADVyAAAAAMAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,3,0,0,0,0,0,0,0,99  3           4 this is a line from test3.csv
(AADVyAAAAAMAAAAAA Typ=208 Len=17: 4,0,0,213,200,0,0,0,3,0,0,0,0,0,0,0,132 3           5 this is a line from test3.csv

16 rows selected
Then with a join to EXTERNAL_LOCATION$:
SQL> with ext_loc as (
  2    select position-1 as pos
  3         , name as filename
  4    from sys.external_location$
  5    where obj# = ( select object_id
  6                   from user_objects
  7                   where object_name = 'TEST_XT' )
  8  )
  9  select x.filename,
 10         t.*
 11  from test_xt t
 12  join ext_loc x on x.pos = to_number(regexp_substr(dump(t.rowid,10,9,1),'\d+$'))
 13  ;

FILENAME     REC_ID MESSAGE
------------ ------ --------------------------------
marc5.txt         1 this is a line from marc5.txt
marc5.txt         2 this is a line from marc5.txt
marc5.txt         3 this is a line from marc5.txt
marc5.txt         4 this is a line from marc5.txt
test1.csv         1 this is a line from test1.csv
test1.csv         2 this is a line from test1.csv
test1.csv         3 this is a line from test1.csv
test1.csv         4 this is a line from test1.csv
test2.csv         1 this is a line from test2.csv
test2.csv         2 this is a line from test2.csv
test2.csv         3 this is a line from test2.csv
test3.csv         1 this is a line from test3.csv
test3.csv         2 this is a line from test3.csv
test3.csv         3 this is a line from test3.csv
test3.csv         4 this is a line from test3.csv
test3.csv         5 this is a line from test3.csv
Seems to work... assuming that the files are always read in the order specified by the LOCATION parameter and that the generated ID actually means what I think it means.
-
Hello
I am trying to get data by using an external table as follows:
create table nfs_acq
(
  Participant_ID            char(3),
  Transaction_Type          char(2),
  From_Account_Type         char(2),
  To_Account_Type           char(2),
  Trans_srno                char(12),
  Response_Code             char(2),
  PAN_Number                char(19),
  Member_Number             char(1),
  Approval_Number           char(6),
  System_Trace_Number       char(12),
  Transaction_Date          char(6),
  Transaction_Time          char(6),
  Merchant_Category_Code    char(4),
  Card_Acceptor_S_Date      char(6),
  Card_Acceptor_ID          char(15),
  Card_Acceptor_t_ID        char(8),
  Card_Acceptor_t_Location  char(40),
  Acquirer_ID               char(11),
  Acquirer_Settlement_Date  char(6),
  Transaction_Currency_code char(3),
  Transaction_Amount        char(15),
  Actual_Transaction_Amount char(15),
  Transaction_Acitivity_fee char(15),
  Acquirer_Cur_Code         char(3),
  Acquirer_s_Amount         char(15),
  Acquirer_Settlement_Fee   char(15),
  Acquirer_settl_proc_fee   char(15),
  Tran_Acq_Conv_Rate        char(15)
)
--
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY HRK_NEW
  ACCESS PARAMETERS
  (
    RECORDS fixed 274 skip 1
    FIELDS
    (
      --
      Participant_ID            char(3),
      Transaction_Type          char(2),
      From_Account_Type         char(2),
      To_Account_Type           char(2),
      Trans_srno                char(12),
      Response_Code             char(2),
      PAN_Number                char(19),
      Member_Number             char(1),
      Approval_Number           char(6),
      System_Trace_Number       char(12),
      Transaction_Date          char(6),
      Transaction_Time          char(6),
      Merchant_Category_Code    char(4),
      Card_Acceptor_S_Date      char(6),
      Card_Acceptor_ID          char(15),
      Card_Acceptor_t_ID        char(8),
      Card_Acceptor_t_Location  char(40),
      Acquirer_ID               char(11),
      Acquirer_Settlement_Date  char(6),
      Transaction_Currency_code char(3),
      Transaction_Amount        char(15),
      Actual_Transaction_Amount char(15),
      Transaction_Acitivity_fee char(15),
      Acquirer_Cur_Code         char(3),
      Acquirer_s_Amount         char(15),
      Acquirer_Settlement_Fee   char(15),
      Acquirer_settl_proc_fee   char(15),
      Tran_Acq_Conv_Rate        char(15)
    )
  )
  LOCATION ('ACQRPKJB110116.mkjb')
)
reject limit unlimited
/
The table is created.
However, when I select data from the table, I get the error below:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found 'minussign': expecting one of: 'double-quoted-string, identifier, single-quoted-string'
KUP-01007: at line 4 column 2
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages and take appropriate action.
This is the sequential file I am trying to import:
KJB0403 4643470817234643471601110731336011160111ATM00201 001107001303004213191476009651 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000070000000000000070000000000000000000356000000000070000000000000000000000000000000000000001000000000 CHOWK
KJB0503 0174520817337281611601110956246011160111ATM00201 001109001320004293932011713114 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000000000000000000000000000000000000000356000000000000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 0176940817339062491601110957576011160111ATM00201 001109001321004293932011713114 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000050000000000000050000000000000000000356000000000050000000000000000000000000000000000000001000000000 CHOWK
KJB0402 8653240817340573261601110959156011160111ATM00201 001109001322004722554560104882 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000320000000000000320000000000000000000356000000000320000000000000000000000000000000000000001000000000 CHOWK
KJB0403 4459000817360340861601111015066011160111ATM00201 001110001326005044372611412016 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000001000000000000001000000000000000000000356000000001000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 6468820817392369431601111037426011160111ATM00201 001110001335004214090575031872 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000100000000000000100000000000000000000356000000000100000000000000000000000000000000000000001000000000 CHOWK
KJB0402 9133280817459788951601111119146011160111ATM00201 001111001359006074194560106006 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000600000000000000600000000000000000000356000000000600000000000000000000000000000000000000001000000000 CHOWK
KJB0402 1985030817481985031601111131596011160111ATM00201 001111001362005360160500127597 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000001000000000000001000000000000000000000356000000001000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 4400820817484400821601111133216011160111ATM00201 001111001363004213371215519505 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000200000000000000200000000000000000000356000000000200000000000000000000000000000000000000001000000000 CHOWK
KJB0402 0817491361901601111137166011160111ATM00201 001111001366044135083790012559 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000300000000000000000000000000000000000356000000000000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 6891010817496562841601111140096011160111ATM00201 001111001368004214090322036455 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000050000000000000050000000000000000000356000000000050000000000000000000000000000000000000001000000000 CHOWK
KJB0403 1310430817509973121601111147346011160111ATM00201 001111001371004689680028170636 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000001000000000000001000000000000000000000356000000001000000000000000000000000000000000000000001000000000 CHOWK
KJB0503 0309320817544106971601111206046011160111ATM00201 001112001375005296160001593353 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000000000000000000000000000000000000000356000000000000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 0322130817546216041601111207116011160111ATM00201 001112001376005296160001593353 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000700000000000000700000000000000000000356000000000700000000000000000000000000000000000000001000000000 CHOWK
KJB0502 8851180817548851181601111208376011160111ATM00201 001112001377005087530341448208 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000000000000000000000000000000000000000356000000000000000000000000000000000000000000000000001000000000 CHOWK
KJB0402 0566530817550566531601111209336011160111ATM00201 001112001378005087530341448208 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000420000000000000420000000000000000000356000000000420000000000000000000000000000000000000001000000000 CHOWK
KJB0403 6943770817556943771601111213006011160111ATM00201 001112001379004214920342912709 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000050000000000000050000000000000000000356000000000050000000000000000000000000000000000000001000000000 CHOWK
KJB0402 7864780817559532151601111214256011160111ATM00201 001112001380006220180039900075266 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000330000000000000330000000000000000000356000000000330000000000000000000000000000000000000001000000000 CHOWK
KJB0402 1083880817581341641601111226086011160111ATM00201 001112001382004378990101766440 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000800000000000000800000000000000000000356000000000800000000000000000000000000000000000000001000000000 CHOWK
KJB0403 4830820817582764451601111226546011160111ATM00201 001112001383006071600100041514 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000700000000000000700000000000000000000356000000000700000000000000000000000000000000000000001000000000 CHOWK
KJB0403 6286600817597510771601111234376011160111ATM00201 001112001386005326760302103432 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000100000000000000100000000000000000000356000000000100000000000000000000000000000000000000001000000000 CHOWK
KJB0402 0029180817604353121601111238126011160111ATM00201 001112001389004704560203009224 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000001000000000000001000000000000000000000356000000001000000000000000000000000000000000000000001000000000 CHOWK
KJB0403 3000000817608080441601111240116011160111ATM00201 001112001390005044339052564987192 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000400000000000000400000000000000000000356000000000400000000000000000000000000000000000000001000000000 CHOWK
KJB0403 0139290817626145271601111249456011160111ATM00201 001112001396005346800000069646 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000900000000000000900000000000000000000356000000000900000000000000000000000000000000000000001000000000 CHOWK
KJB0403 1126320817671162161601111313226011160111ATM00201 001113001406005196190081887558 ATM00201TILAK KALYAN THANE MHIN800044 160111356000000000020000000000000020000000000000000000356000000000020000000000000000000000000000000000000001000000000 CHOWK
Help, please
External table reference...
Comments are lines that begin with two dashes followed by text, and they must be placed before all access parameters. From the documentation:
access_parameters Clause
The access parameters clause contains comments, record formatting, and field formatting information. The syntax for the access_parameters clause is shown in the documentation's syntax diagram.
Comments
Comments are lines that begin with two dashes followed by text. Comments must be placed before any access parameters, for example:
--This is a comment --This is another comment RECORDS DELIMITED BY NEWLINE
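Applied to the nfs_acq table from the question, this means the stray '--' lines inside ACCESS PARAMETERS must be removed or moved to the very top of the clause. A trimmed sketch (reduced to a single field, and the table renamed, purely for illustration):

```sql
-- Sketch: nfs_acq_demo is a hypothetical name; only comment placement matters here.
CREATE TABLE nfs_acq_demo
(
  participant_id CHAR(3)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY HRK_NEW
  ACCESS PARAMETERS
  (
    --comments are only legal here, before any other access parameter
    RECORDS fixed 274 skip 1
    FIELDS
    (
      participant_id CHAR(3)
    )
  )
  LOCATION ('ACQRPKJB110116.mkjb')
)
REJECT LIMIT UNLIMITED;
```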
-
External table with mixed columns
Hello everyone. We use Oracle 11R1. We have an external table pointing to a CSV with 7 columns in it. The file always has 7 columns, but the column order differs from time to time. Each column has a header that remains consistent. Is there a way to map the column names to the column headers, so that we don't have to change the definition of the external table every time a new file comes in?
Thank you.
Hello
As John said, you must assign column names when the table is created. I guess you could have a dynamic SQL solution that reads the header you mentioned, uses this information to write a CREATE TABLE, drops the table, and then recreates it with the new column order.
You could also have a view that maps the 7 columns of your table to 7 columns in the view. Which column maps to which could depend on the header.
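A minimal sketch of that view idea (all object and column names here are assumptions): the external table keeps positional generic columns, and a view regenerated after reading the header does the mapping, so the external table definition never changes.

```sql
-- Hypothetical generic external table csv_raw with columns c1..c7.
-- Suppose today's header line says the order is:
--   id, name, amount, currency, trade_date, region, status
CREATE OR REPLACE VIEW csv_mapped AS
SELECT c1 AS id,
       c2 AS name,
       c3 AS amount,
       c4 AS currency,
       c5 AS trade_date,
       c6 AS region,
       c7 AS status
FROM csv_raw;
```

When a new file arrives with a different column order, only this CREATE OR REPLACE VIEW is regenerated (for example by a small PL/SQL procedure that reads line 1 of the file); queries keep selecting from csv_mapped.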
-
Load the XML file into Oracle external Table
I am loading data from an XML file into an Oracle staging table using external tables. Let's say below is my XML file:
<Header>
  <A_CNT>10</A_CNT>
  <E_CNT>10</E_CNT>
  <AF_CNT>10</AF_CNT>
</Header>
<Student>
  <Student-details>
    <Student_info>
      <Single_Info>
        <ID>18</ID>
        <City>New York</City>
        <Country>United States</Country>
        <Name_lst>
          <Student_name>
            <Name>Samuel</Name>
            <Last_name>Paul</Last_name>
            <DOB>19871208</DOB>
            <RecordStatus>Current</RecordStatus>
          </Student_name>
          <Student_name>
            <Name>Samuel</Name>
            <Last_name>Paul</Last_name>
            <DOB>19871208</DOB>
            <TerminationDt>20050812</TerminationDt>
            <RecordStatus>History</RecordStatus>
          </Student_name>
        </Name_lst>
        <Personal_Info>
          <Type>Male</Type>
          <Age>27</Age>
        </Personal_Info>
      </Single_Info>
    </Student_info>
    <Student_Enrol>
      <Class-A>
        <Info>
          <Detail>
            <Student-ID>18</Student-ID>
            <Major>EE</Major>
            <Course-Grades>
              <Course>VLSI</Course>
              <Grade>3.0</Grade>
            </Course-Grades>
            <Course-Grades>
              <Course>Nanotechnology</Course>
              <Grade>4.0</Grade>
            </Course-Grades>
          </Detail>
          <Detail>
            <Student-ID>18</Student-ID>
            <Major>CE</Major>
          </Detail>
        </Info>
      </Class-A>
    </Student_Enrol>
  </Student-details>
</Student>
I load this XML data file into a single table using an external table. Could someone please help me with the coding.
Thank you
Reva
Could you please help me with how to insert my XML content into that?
Same as before, try a plain old INSERT:
insert into xml_pecos
values (
  xmltype( bfilename('XML_DIR', 'test.xml'), nls_charset_id('AL32UTF8') )
);
But you'll probably hit the same limitation as with the binary XMLType table.
In this case, you can use FTP to load the file as a resource in the XML DB repository.
If the XML schema has been registered with the hierarchy enabled then the file will be automatically inserted into the table.
Could you post the exact statement that you used to register the schema?
In the meantime, you can also read this article I wrote a few years ago; it covers the XML DB features that may be useful here, including details on how to load the file via FTP:
https://odieweblog.WordPress.com/2011/11/23/Oracle-XML-DB-a-practical-example/
And of course the documentation: http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb06stt.htm#ADXDB4672
-
Reg: external table question
Hi Experts,
I am trying to create and read from an external table, but it raises an error. Please advise.
Scenario-
I'm uploading a file from my APEX application, which is stored in a BLOB column. Then comes the following:
DBMS_LOB.CREATETEMPORARY(v_clob, true);
-- convert the BLOB to CLOB
DBMS_LOB.CONVERTTOCLOB(
  v_clob, v_blob,
  DBMS_LOB.LOBMAXSIZE,
  v_dest_offset, v_src_offset,
  v_blob_csid, v_lang_context, g_msg
);
-- build a csv file name
v_temp_filename := 'apex_' || TO_CHAR(sysdate, 'yyyymmddhh24miss') || '.csv';
-- write the csv file to the database directory APEX_DIR
dbms_xslprocessor.clob2file(v_clob, 'APEX_DIR', v_temp_filename);
-- create the external table
v_ext_table := q'[create table apex_temp_data_ext (
  c001 varchar2(4000), c002 varchar2(4000), c003 varchar2(4000), c004 varchar2(4000), c005 varchar2(4000),
  c006 varchar2(4000), c007 varchar2(4000), c008 varchar2(4000), c009 varchar2(4000), c010 varchar2(4000),
  c011 varchar2(4000), c012 varchar2(4000), c013 varchar2(4000), c014 varchar2(4000), c015 varchar2(4000),
  c016 varchar2(4000), c017 varchar2(4000), c018 varchar2(4000), c019 varchar2(4000), c020 varchar2(4000),
  c021 varchar2(4000), c022 varchar2(4000), c023 varchar2(4000), c024 varchar2(4000), c025 varchar2(4000),
  c026 varchar2(4000), c027 varchar2(4000), c028 varchar2(4000), c029 varchar2(4000), c030 varchar2(4000),
  c031 varchar2(4000), c032 varchar2(4000), c033 varchar2(4000), c034 varchar2(4000), c035 varchar2(4000),
  c036 varchar2(4000), c037 varchar2(4000), c038 varchar2(4000), c039 varchar2(4000), c040 varchar2(4000),
  c041 varchar2(4000), c042 varchar2(4000), c043 varchar2(4000), c044 varchar2(4000), c045 varchar2(4000),
  c046 varchar2(4000), c047 varchar2(4000), c048 varchar2(4000), c049 varchar2(4000), c050 varchar2(4000)
)
organization external (
  type oracle_loader
  default directory apex_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    optionally enclosed by '"' and '"' NOTRIM
    missing field values are null
    (
      c001 varchar2(4000), c002 varchar2(4000), c003 varchar2(4000), c004 varchar2(4000), c005 varchar2(4000),
      c006 varchar2(4000), c007 varchar2(4000), c008 varchar2(4000), c009 varchar2(4000), c010 varchar2(4000),
      c011 varchar2(4000), c012 varchar2(4000), c013 varchar2(4000), c014 varchar2(4000), c015 varchar2(4000),
      c016 varchar2(4000), c017 varchar2(4000), c018 varchar2(4000), c019 varchar2(4000), c020 varchar2(4000),
      c021 varchar2(4000), c022 varchar2(4000), c023 varchar2(4000), c024 varchar2(4000), c025 varchar2(4000),
      c026 varchar2(4000), c027 varchar2(4000), c028 varchar2(4000), c029 varchar2(4000), c030 varchar2(4000),
      c031 varchar2(4000), c032 varchar2(4000), c033 varchar2(4000), c034 varchar2(4000), c035 varchar2(4000),
      c036 varchar2(4000), c037 varchar2(4000), c038 varchar2(4000), c039 varchar2(4000), c040 varchar2(4000),
      c041 varchar2(4000), c042 varchar2(4000), c043 varchar2(4000), c044 varchar2(4000), c045 varchar2(4000),
      c046 varchar2(4000), c047 varchar2(4000), c048 varchar2(4000), c049 varchar2(4000), c050 varchar2(4000)
    )
  )
  location (']' || v_temp_filename || q'[')
)
parallel 3
reject limit unlimited]';
execute immediate v_ext_table;
It gives me a generic error on the front end. But when I create this external table manually with the generated "v_temp_filename", the table is created; when a SELECT is fired, though, it raises:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "binary_double, binary_float, char, comma, date, defaultif, decimal, double, float, integer, (, nullif, oracle_date, oracle_number, position, raw, recnum, ), unsigned, varchar, varraw, varrawc, varcharc, zoned"
KUP-01008: the bad identifier was: varchar2
KUP-01007: at line 6 column 15
The privilege is already granted: GRANT READ, WRITE ON DIRECTORY APEX_DIR TO APEX_DEV;
But you should check with the DBA on the rwx permissions for the generated 'v_temp_filename'.
Pointers?
Thank you and best regards,
Nordine
(on Oracle 11.2.0.3.0, Apex 4.2.5)
Try this:
. . . etc . . .
-- create an external table
v_ext_table := 'CREATE TABLE apex_temp_data_ext
(
  C001 VARCHAR2(4000), C002 VARCHAR2(4000), C003 VARCHAR2(4000), C004 VARCHAR2(4000), C005 VARCHAR2(4000),
  C006 VARCHAR2(4000), C007 VARCHAR2(4000), C008 VARCHAR2(4000), C009 VARCHAR2(4000), C010 VARCHAR2(4000),
  C011 VARCHAR2(4000), C012 VARCHAR2(4000), C013 VARCHAR2(4000), C014 VARCHAR2(4000), C015 VARCHAR2(4000),
  C016 VARCHAR2(4000), C017 VARCHAR2(4000), C018 VARCHAR2(4000), C019 VARCHAR2(4000), C020 VARCHAR2(4000),
  C021 VARCHAR2(4000), C022 VARCHAR2(4000), C023 VARCHAR2(4000), C024 VARCHAR2(4000), C025 VARCHAR2(4000),
  C026 VARCHAR2(4000), C027 VARCHAR2(4000), C028 VARCHAR2(4000), C029 VARCHAR2(4000), C030 VARCHAR2(4000),
  C031 VARCHAR2(4000), C032 VARCHAR2(4000), C033 VARCHAR2(4000), C034 VARCHAR2(4000), C035 VARCHAR2(4000),
  C036 VARCHAR2(4000), C037 VARCHAR2(4000), C038 VARCHAR2(4000), C039 VARCHAR2(4000), C040 VARCHAR2(4000),
  C041 VARCHAR2(4000), C042 VARCHAR2(4000), C043 VARCHAR2(4000), C044 VARCHAR2(4000), C045 VARCHAR2(4000),
  C046 VARCHAR2(4000), C047 VARCHAR2(4000), C048 VARCHAR2(4000), C049 VARCHAR2(4000), C050 VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY APEX_DIR
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '',''
    OPTIONALLY ENCLOSED BY ''"'' AND ''"'' NOTRIM
    MISSING FIELD VALUES ARE NULL
    (
      C001 CHAR(4000), C002 CHAR(4000), C003 CHAR(4000), C004 CHAR(4000), C005 CHAR(4000),
      C006 CHAR(4000), C007 CHAR(4000), C008 CHAR(4000), C009 CHAR(4000), C010 CHAR(4000),
      C011 CHAR(4000), C012 CHAR(4000), C013 CHAR(4000), C014 CHAR(4000), C015 CHAR(4000),
      C016 CHAR(4000), C017 CHAR(4000), C018 CHAR(4000), C019 CHAR(4000), C020 CHAR(4000),
      C021 CHAR(4000), C022 CHAR(4000), C023 CHAR(4000), C024 CHAR(4000), C025 CHAR(4000),
      C026 CHAR(4000), C027 CHAR(4000), C028 CHAR(4000), C029 CHAR(4000), C030 CHAR(4000),
      C031 CHAR(4000), C032 CHAR(4000), C033 CHAR(4000), C034 CHAR(4000), C035 CHAR(4000),
      C036 CHAR(4000), C037 CHAR(4000), C038 CHAR(4000), C039 CHAR(4000), C040 CHAR(4000),
      C041 CHAR(4000), C042 CHAR(4000), C043 CHAR(4000), C044 CHAR(4000), C045 CHAR(4000),
      C046 CHAR(4000), C047 CHAR(4000), C048 CHAR(4000), C049 CHAR(4000), C050 CHAR(4000)
    )
  )
  LOCATION(''' || v_temp_filename || ''')
)
PARALLEL 3
REJECT LIMIT UNLIMITED';
. . . etc . . .
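The substantive change above is in the ACCESS PARAMETERS field list: the ORACLE_LOADER driver's field datatype is CHAR, not VARCHAR2, which is why the parser reported KUP-01008 "the bad identifier was: varchar2". A minimal sketch of the pattern; the table, directory, and file names here are hypothetical placeholders, not from the thread:

```sql
-- Column list (SQL layer): VARCHAR2 is valid here.
CREATE TABLE demo_ext ( c1 VARCHAR2(4000) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY some_dir          -- hypothetical directory object
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    ( c1 CHAR(4000) )                 -- field list (driver layer): CHAR, not VARCHAR2
  )
  LOCATION ('demo.csv')               -- hypothetical file name
)
REJECT LIMIT UNLIMITED;
```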
-
Does the Oracle 12c In-Memory option work with external tables?
Hello
Does anyone know whether external tables can also benefit from the Oracle 12c In-Memory option? I have read the documentation and white papers and can't find any reference to external tables. If it is possible, we are very interested in knowing all about it, especially any limitations.
Thank you.
This is not the right forum for this question; this forum is for the TimesTen in-memory database, not the Oracle Database In-Memory option. These are completely different things.
But as it happens, I can tell you that no, external tables are not candidates for use with the In-Memory option.
Chris
-
How do I get the number of rejected records when using external tables?
Hi all, I have an external table DEPT.
DEPT.DAT:
20|ELECTRONICS
10|SHOES
30|CAMERA
When I select * from dept, only dept 10 and 30 come back; for 20, the deptdescr is more than 10 characters long, so that record goes into the bad file.
Is there any query to display the rejected records, or to get a count of how many were rejected, rather than having them silently dropped with no way to see how many were rejected?
Table:
CREATE TABLE DEPT
( DEPT NUMBER,
  DEPTDESCR VARCHAR2 (10 CHAR)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY BATCH_INBOX
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY '\r\n'
    BADFILE BATCH_BAD:'UPS_DEPT_LOAD_%p.bad'
    LOGFILE BATCH_LOG:'UPS_DEPT_%p.log'
    NODISCARDFILE
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
    ( DEPT, DEPTDESCR )
  )
  LOCATION (BATCH_INBOX:'DEPT.DAT')
)
REJECT LIMIT UNLIMITED NOPARALLEL NOMONITORING;
You can use the bad file as the data file for another external table, with the entire line in a single field. Please see the demo below.
Scott@orcl12c > CREATE OR REPLACE DIRECTORY batch_inbox AS 'c:\my_oracle_files'
  2  /
Directory created.
Scott@orcl12c > CREATE OR REPLACE DIRECTORY batch_bad AS 'c:\my_oracle_files'
  2  /
Directory created.
Scott@orcl12c > CREATE OR REPLACE DIRECTORY batch_log AS 'c:\my_oracle_files'
  2  /
Directory created.
Scott@orcl12c > CREATE TABLE dept
  2  (
  3    dept       NUMBER,
  4    deptdescr  VARCHAR2 (10 CHAR)
  5  ) ORGANIZATION EXTERNAL
  6  ( TYPE ORACLE_LOADER
  7    DEFAULT DIRECTORY BATCH_INBOX
  8    ACCESS PARAMETERS
  9    ( RECORDS DELIMITED BY '\r\n'
 10      BADFILE BATCH_BAD:'UPS_DEPT_LOAD.bad'
 11      LOGFILE BATCH_LOG:'UPS_DEPT_%p.log'
 12      NODISCARDFILE
 13      FIELDS TERMINATED BY '|'
 14      MISSING FIELD VALUES ARE NULL
 15      (
 16        dept,
 17        deptdescr
 18      )
 19    )
 20    LOCATION (BATCH_INBOX:'DEPT.DAT')
 21  )
 22  REJECT LIMIT UNLIMITED
 23  NOPARALLEL
 24  NOMONITORING;
Table created.
Scott@orcl12c > SELECT * FROM dept
  2  /

      DEPT DEPTDESCR
---------- ----------
        10 SHOES
        30 CAMERA

2 rows selected.
Scott@orcl12c > CREATE TABLE dept_bad
  2  (
  3    the_whole_row VARCHAR2 (4000)
  4  ) ORGANIZATION EXTERNAL
  5  ( TYPE ORACLE_LOADER
  6    DEFAULT DIRECTORY BATCH_INBOX
  7    ACCESS PARAMETERS
  8    ( RECORDS DELIMITED BY '\r\n'
  9      NOLOGFILE
 10      FIELDS TERMINATED BY '\r\n'
 11      MISSING FIELD VALUES ARE NULL
 12      (
 13        the_whole_row CHAR (4000)
 14      )
 15    )
 16    LOCATION (BATCH_BAD:'UPS_DEPT_LOAD.bad')
 17  )
 18  REJECT LIMIT UNLIMITED
 19  NOPARALLEL
 20  NOMONITORING
 21  /
Table created.
Scott@orcl12c > SELECT * FROM dept_bad
  2  /

THE_WHOLE_ROW
--------------------------------------------------------------------------------
20|ELECTRONICS

1 row selected.
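To get the rejected-record count the original question asked for, once dept_bad exists it is an ordinary query (a sketch against the demo table above):

```sql
-- Number of records rejected into the bad file
SELECT COUNT(*) AS rejected_rows FROM dept_bad;
```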
-
Oracle NoSQL external tables via the Tables API
Hello world
I have been experimenting with the Oracle NoSQL database recently and have become a bit stuck with the new Tables API. So far I have successfully built external tables on data entered using 'vanilla' KV storage techniques and Avro schemas (using both generic and specific bindings), but the Tables API seems to be another matter entirely.
My problem arises in the Formatter interface, which takes a KeyValueVersion and a KVStore. I can't translate a KeyValueVersion created with the Tables API into a primary key for retrieval (since I don't know what the key generated by the API actually looks like!), nor map it onto an Avro schema. The problem seems to be that the Tables API writes data in a format that cannot easily be translated into a string or an integer (the external table drops lines due to unknown characters if I try to retrieve all the values in the store to see what they look like), and trying to map it to an Avro map results in the error message 'the data is not Avro'.
Scenario:
I created a very simple table in the KV admin tool, consisting of a personId (integer) column that is the PK, plus firstName, lastName and emailAddr (all strings), and entered 5 rows successfully. What I want to do is create an external table called person that returns just those 5 values (and any new ones I add to the table, of course). This means I first have to understand what the parentKey value must be set to in the .dat file, and how to take that key and turn it into a primary key for retrieving the row.
Faithful old Google could not find information on how to do this (there was only one thread similar to this one, with the answer "we'll add it soon"), so I hope someone here has managed it!
Thank you
Richard
Hi Richard
I understand the issue you are facing. In the current release (12.1.3.0.9) the external tables feature only works with K/V records, not with the Table model; however, in the next release (which will be GA very soon) we will support integration of external tables with Table model data as well. Please make sure that you have signed up for the release announcements so that we can notify you of the release. I apologize for the inconvenience this has caused you.
Best
Anuj
Sign up for NoSQL database announcements so we can notify you of future releases and other NoSQL database product updates.
-
SSH host key issue with the external table preprocessor
I want an external table that runs a df command in a script:
$ more dfh.sh
/bin/df -h
CREATE TABLE xt_df
(
  script_output VARCHAR2 (2000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY datapumpdir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR datapumpdir:'dfh.sh'
    SKIP 1
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION (datapumpdir:'xtdf.dat')
);

SELECT * FROM xt_df;
And it works. I see my df output.
I want to run something similar against multiple hosts from this same host, so I set up another table and call another shell script that runs df remotely over ssh, after setting up user equivalence:
/usr/bin/ssh oracle@remotehost1 'df -h | grep u02'
The shell script works from the command line.
However, when selecting from the external table I get a "host key verification failed" error:
[Error] Execution (1:1): ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /winlogs/dfh.sh encountered error "Host key verification failed."

So what could be the cause, given that it works fine as oracle from the command line? The .ssh key verification happens on the other side (I think).
> PREPROCESSOR datapumpdir:'dfh.sh'
Modify the above script to include the following line as the second line of the script:
env | sort -o /tmp/capture.env
Post the contents of /tmp/capture.env back here after it gets populated.
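For what it's worth, the usual finding from such an env capture is that the preprocessor runs with an almost empty environment: HOME is unset, so ssh cannot locate ~/.ssh/known_hosts and host key verification fails. A hedged sketch of a wrapper; the paths and the hostname are assumptions, not confirmed by this thread:

```shell
#!/bin/sh
# dfh.sh (sketch) - external table preprocessors inherit a minimal
# environment, so set HOME and PATH explicitly before calling ssh.
export HOME=/home/oracle     # assumed path: lets ssh find ~/.ssh/known_hosts
export PATH=/usr/bin:/bin
/usr/bin/ssh oracle@remotehost1 'df -h | grep u02'
```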
-
External table: ORA-29913 only through a TNS alias connection
Hello world
I have a problem on an Oracle 11.2.0.3 database. There is an external table MYTABLE that points to an Oracle directory MYDIR. This directory contains a text file myfile.txt, read by the external table.
If I connect to the database using "sqlplus user/pass" directly on the DB server, the statement "select count(*) from MYTABLE" works fine.
If I connect using a TNS alias, "sqlplus user/pass@DB", the statement fails:
SQL> select count (*) from MYTABLE;
select count (*) from MYTABLE
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile.txt in MYDIR not found
The application user has READ and WRITE on the Oracle directory MYDIR.
The OS user 'oracle' has permission to write to the OS directory.
The problem occurs with the application user, but also with sys and system.
Any idea? What could make the statement work over a direct connection and fail over remote access?
Thanks for your help.
Michael
I finally found the solution! The oracle OS user had recently been added to a group. The database was restarted after this change, but the listener was only reloaded. All processes spawned by the listener were therefore still running with the old group privileges, which is why my local connections could access the file on the file system while remote sessions could not open it.
The following note helped me find the solution: ORA-29283 "invalid file operation" with OS group of which the Oracle user is a member (Doc ID 832424.1).
The note is only about the Oracle database itself, but I tried restarting the listener and it solved my problem.
Thanks for your help
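The distinction that mattered here: a listener reload re-reads the configuration but keeps the existing listener process (and the OS group memberships it started with), while a stop/start replaces the process so its spawned server processes pick up the new group. As a sketch, assuming the default listener name:

```shell
# reload: same process, old OS group memberships retained
lsnrctl reload

# stop/start: new process, picks up the updated group membership
lsnrctl stop
lsnrctl start
```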