Pipelined functions for CSV data parsing?

Hi all

I currently have a PL/SQL procedure that loads and parses a CSV file into a database table from within APEX.

The uploaded CSV files are quite large (nearly 1 million rows or more) and there is a significant wait before the process finishes. I tried both the APEX 4.2 Data Load Wizard, which was very slow, and the excel2collection APEX plugin, which timed out / never finished.

I have heard about pipelined functions and how they can offer great time savings for INSERT statements where the rows have no interdependencies.

My question is: would a pipelined function give me a gain on my INSERT times, and if so, could someone help me implement it? The current procedure is listed below, minus validation code etc. for readability. The CSV is first uploaded as a BLOB into a staging table before being parsed by the procedure.

-- Chunk up the CSV file and split into a line at a time
  rawChunk := dbms_lob.substr(bloContent, numChunkLength, numPosition + numExtra);
  strConversion := strConversion || utl_raw.cast_to_varchar2(rawChunk);

  numLineEnd := instr(strConversion,chr(10),1);  --This will return 0 if there is no chr(10) in the String


  strColumns := replace(substr(strConversion,1,numLineEnd -numTrailChar),CHR(numSpacer),',');

  strLine := substr(strConversion,1,numLineEnd);
  strLine := substr(strLine,1,length(strLine) - numTrailChar);
   
  -- Break each line into columns using the delimiter
  arrData := wwv_flow_utilities.string_to_table (strLine, '|');

    FOR i in 1..arrData.count
    LOOP
  
     --Now we concatenate the Column Values with a Comma
      strValues := strValues || arrData(i) || ','; 

    END LOOP;

     --Remove the trailing comma
      strValues := rtrim(strValues,',');

     -- Insert the values into target table, one row at a time
    BEGIN
      EXECUTE IMMEDIATE 'INSERT INTO ' || strTableName || ' (' || strColumns || ')
                         VALUES (' || strValues ||  ')';
    END;
  
    numRow := numRow + 1; --Keeps track of what row is being converted

    
   -- We set/reset the values for the next LOOP cycle
    strLine := NULL;
    strConversion := null;
    strValues := NULL;
    numPosition := numPosition + numLineEnd;
    numExtra := 0;
    numLineEnd := 0;
  END IF;
END LOOP;

Apex-user wrote:

Hi Chris,

I'm trying to expand your code to use more than the current two columns, but I'm having trouble with the format here...

  while dbms_lob.getlength(l_clob) > l_off and l_off > 0 loop
    l_off_new := instr(l_clob, c_sep, l_off, c_numsep);
    pipe row (csv_split_type(
        substr(l_clob, l_off, instr(l_clob, c_sep, l_off) - l_off)
      , substr(l_clob, instr(l_clob, c_sep, l_off) + 1, l_off_new - instr(l_clob, c_sep, l_off) - 1)
    ));
    l_off := l_off_new + 2; -- skip c_sep and the line end chr(10)

How can I add more columns to this code? I'm getting mixed up with all the substr and instr segments.

I've done a rewrite of it (12 sec for 50,000 lines, 4 columns, ~7 MB; 2.2 sec for 10,000 lines):

create or replace function get_csv_split_cr (p_blob blob)
return csv_table_split_type
pipelined
as
  c_sep      constant varchar2(2) := ';';  -- field separator (value was garbled in the original post; ';' assumed)
  c_line_end constant varchar2(1) := chr(10);
  l_row      varchar2(32767);
  l_len_clob number;
  l_off      number := 1;
  l_clob     clob;
  -- the variables below are used only for the call to dbms_lob.converttoclob
  l_src_off  pls_integer := 1;
  l_dst_off  pls_integer := 1;
  l_ctx      number := dbms_lob.default_lang_ctx;
  l_warn     number := dbms_lob.warn_inconvertible_char;
begin
  dbms_lob.createtemporary(l_clob, true);
  dbms_lob.converttoclob(l_clob, p_blob, dbms_lob.lobmaxsize,
                         l_src_off, l_dst_off, dbms_lob.default_csid, l_ctx, l_warn);

  -- Caveat: assumes there is at least one 'correct' CSV line;
  -- perhaps a better guard condition should be found.
  -- Assumption: the last column also ends with the separator.
  l_len_clob := length(l_clob);

  while l_len_clob > l_off and l_off > 0 loop
    l_row := substr(l_clob, l_off, instr(l_clob, c_line_end, l_off) - l_off);
    pipe row (csv_split_type(
        -- start of line up to the first occurrence of the separator - 1
        substr(l_row, 1, instr(l_row, c_sep) - 1)
        -- after the first occurrence; length = second occurrence - first occurrence - 1
      , substr(l_row, instr(l_row, c_sep, 1, 1) + 1, instr(l_row, c_sep, 1, 2) - instr(l_row, c_sep, 1, 1) - 1)
        -- after the second occurrence; length = third occurrence - second occurrence - 1
      , substr(l_row, instr(l_row, c_sep, 1, 2) + 1, instr(l_row, c_sep, 1, 3) - instr(l_row, c_sep, 1, 2) - 1)
        -- and so on
      , substr(l_row, instr(l_row, c_sep, 1, 3) + 1, instr(l_row, c_sep, 1, 4) - instr(l_row, c_sep, 1, 3) - 1)
    ));
    l_off := l_off + length(l_row) + 1; -- skip the line end chr(10)
  end loop;

  return;
end;

You must change csv_split_type accordingly as well.

Update: I had to make a correction; this is the combined version of the two posts above.
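
For reference, here is a minimal sketch of the supporting object types and of how the function is consumed from SQL. The attribute names and the staging/target table names below are assumptions for illustration, not taken from this thread:

create or replace type csv_split_type as object
( col1 varchar2(4000)
, col2 varchar2(4000)
, col3 varchar2(4000)
, col4 varchar2(4000)
);
/

create or replace type csv_table_split_type as table of csv_split_type;
/

-- one set-based INSERT ... SELECT instead of one INSERT per CSV row
-- (my_target_table and my_csv_stage are hypothetical names)
insert /*+ append */ into my_target_table (c1, c2, c3, c4)
select col1, col2, col3, col4
from   table(get_csv_split_cr((select myblob from my_csv_stage)));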

Tags: Database

Similar Questions

  • How to select the csv data stored in a BLOB column as if it were an external table?

    Hi all

    (Happy to be back after a while!)

    Currently I am working on a site where users should be able to load CSV data (';' as the separator) from their client machines (APEX 3.2 application) into an Oracle 11.2.0.4.0 EE database.

    My problem is:

    I can't use an external table (for the first time in my life) so I'm a little clueless what to do, as the CSV data is stored by the APEX application in a BLOB column, and I'm looking for an elegant way (maximizing SQL, minimizing PL/SQL) to insert the data into the destination table (running the validations via a MERGE would be the most effective way to do the job).

    I found a few examples, but I think they are too heavyweight, and there may be a more elegant way in Oracle DB 11.2.

    Simple unit test:

    drop table src purge;

    drop table dst purge;

    create table src
    ( myblob blob
    );

    create table dst
    ( num number
    , str varchar2(6)
    );

    insert into src
    select utl_raw.cast_to_raw('1;AAAAAA;'||chr(10)||
                               '2;BBB;')
    from dual;

    Desired output (of course) based on the data in table SRC:

    SQL> select * from dst;

           NUM STR
    ---------- ------
             1 AAAAAA
             2 BBB

    Does anybody know a solution for this?

    Any ideas/pointers/links/examples are welcome!

    /* WARNING: I was 'off' for about 3 months, so the Oracle part of my brain has become a bit rusty, and I feel it should not be as complicated as the examples I've found so far */

    Haha, me wondering about regexp is like the blind leading the blind!

    However, it's my mistake: I forgot the starting-position parameter (so the 1, 2, 3, ... was in fact the starting position, not the nth occurrence, duh!)

    So, it should actually be:

    select x.*
    ,      regexp_substr(x.col1, '[^;]+', 1, 1)
    ,      regexp_substr(x.col1, '[^;]+', 1, 2)
    ,      regexp_substr(x.col1, '[^;]+', 1, 3)
    ,      regexp_substr(x.col1, '[^;]+', 1, 4)
    ,      regexp_substr(x.col1, '[^;]+', 1, 5)
    ,      regexp_substr(x.col1, '[^;]+', 1, 6)
    from   src
    ,      xmltable('/a/b'
                  passing xmltype('<a><b>'||replace(conv_to_clob(src.myblob), chr(10), '</b><b>')||'</b></a>')
                  columns
                    col1 varchar2(100) path '.') x;
    

    Note: that's assuming that none of the "columns" passed in the string will be null.

    If one of them might be null, then:

    select x.*
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 1)
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 2)
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 3)
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 4)
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 5)
    ,      regexp_substr(ltrim(x.col1, ';'), '[^;]+', 1, 6)
    from   src
    ,      xmltable('/a/b'
                  passing xmltype(replace('<a><b>;'||replace(conv_to_clob(src.myblob), chr(10), '</b><b>;')||'</b></a>', ';;', '; ;'))
                  columns
                    col1 varchar2(100) path '.') x;
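
    Putting it together, the parsed columns can feed the target table with a single INSERT ... SELECT; a minimal sketch using the dst table and the conv_to_clob helper from above:

    insert into dst (num, str)
    select to_number(regexp_substr(x.col1, '[^;]+', 1, 1))
    ,      regexp_substr(x.col1, '[^;]+', 1, 2)
    from   src
    ,      xmltable('/a/b'
                  passing xmltype('<a><b>'||replace(conv_to_clob(src.myblob), chr(10), '</b><b>')||'</b></a>')
                  columns
                    col1 varchar2(100) path '.') x;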
    
  • Need to create a structure for the target data store?

    Hi Experts,

    If I create a structure for the target data store and then load the data from source to target, it works fine. If I don't, I get errors.

    Is it necessary to create a structure for the target?

    Please help me...

    Thanks in advance.

    A.Kavya.

    I found the answer. There is no need to create the structure for a temporary target data store, but we do need to create the structure for a permanent target data store.

  • Need to retrieve the data for the current date.

    Hello

    I have a table from which I retrieve information using this command:

    select ta_acct, shift, created_on from track_alerts;

    Technicolor              A  24-MAR-14
    Technicolor              A  24-MAR-14
    Technicolor              A  24-MAR-14
    Technicolor              A  24-MAR-14
    Manitoba Telecom System  A  24-MAR-14
    Technicolor              A  24-MAR-14

    I used this statement to retrieve the data for a given date:

    select ta_acct, shift, created_on from track_alerts where created_on = '24-MAR-14';

    It does not retrieve any data.

    Need help.

    Kind regards

    Prasad K T


    Prasad K T wrote:

    Yes, the created_on data type is DATE.

    CREATED_ON DATE

    Thanks Partha, it works now.

    select ta_acct, shift, created_on from track_alerts where shift = :shift and trunc(created_on) = to_date('24-MAR-2014','DD-MON-YYYY');

    Still, I made a small change to my query:

    select ta_acct, shift, created_on from track_alerts where shift = :shft and trunc(created_on) = to_date((select sysdate from dual), 'DD-MON-YYYY');

    This statement does not work.

    of course not...

    first: sysdate already returns a DATE, so there is no need for a conversion here,

    and

    second: SYSDATE includes the time, so your query should look like this:

    select ta_acct, shift, created_on from track_alerts where shift = :shift and trunc(created_on) = trunc(sysdate)

    or

    select ta_acct, shift, created_on from track_alerts where shift = :shft and created_on >= trunc(sysdate) and created_on < trunc(sysdate) + 1

    HTH

  • The number of heartbeat datastores for the host is 0, which is less than required: 2

    Hello

    I'm having trouble creating my DRS cluster + storage DRS. I have 3 ESXi 5.1 hosts for the task.

    First I created the cluster, no problem with that, then storage DRS was set up, and now I see this in the Summary tab:

    "The number of heartbeat datastores for the host is 0, which is less than required: 2".

    I searched the web and there are similar problems when people have only a single datastore (the one that came with ESXi) and need to add another, but in my case... vCenter doesn't detect any...

    In the storage views I can see the (VMFS) datastore, but for some strange reason the cluster can't.

    In order to reach the minimum number of heartbeat datastores (2), can I create an NFS datastore and mount it on all 3 ESXi hosts? Would vCenter consider that a valid config?

    Thank you

    You probably only have local datastores, which are not what HA requires for this feature (datastore heartbeats) to work properly.

    You will need either 2 iSCSI, 2 FC, or 2 NFS volumes, or a combination of any of them, for this feature to work. If you don't want to use the feature, you can also turn it off:

    http://www.yellow-bricks.com/2012/04/05/the-number-of-vSphere-HA-heartbeat-datastores-for-this-host-is-1-which-is-less-than-required-2/

  • Difference in the number of records for the same date - 11gR2

    Guys - 11gR2 on Windows 2005, 64-bit.

    BILLING_RECORD_KPN_ESP is a monthly partitioned table.
    BILLING_RECORD_IDX#DATE is a local index on "charge_date" in the table above.

    SQL> select /*+ index(BILLING_RECORD_KPN_ESP BILLING_RECORD_IDX#DATE) */
                trunc(CHARGE_DATE) CHARGE_DATE,
                count(1) Record_count
         from   "RATOR_CDR"."BILLING_RECORD_KPN_ESP"
         where  CHARGE_DATE = '20-JAN-2013'
         group  by trunc(CHARGE_DATE);

    CHARGE_DATE RECORD_COUNT
    ----------- ------------
    20-JAN-13           2401   ->> some records here

    ->> Here I can see only 2401 records for Jan/20. But in the query below, it shows 192610 for the same date.

    Why this difference in the number of records?

    SQL> select /*+ index(BILLING_RECORD_KPN_ESP BILLING_RECORD_IDX#DATE) */
                trunc(CHARGE_DATE) CHARGE_DATE,
                count(1) Record_count
         from   "RATOR_CDR"."BILLING_RECORD_KPN_ESP"
         where  CHARGE_DATE > '20-JAN-2013'
         group  by trunc(CHARGE_DATE)
         order  by trunc(CHARGE_DATE);

    CHARGE_DATE RECORD_COUNT
    ----------- ------------
    20-JAN-13         192610   ->> more records here
    21-JAN-13         463067
    22-JAN-13         520041
    23-JAN-13         451212
    24-JAN-13         463273
    25-JAN-13         403276
    26-JAN-13         112077
    27-JAN-13          10478
    28-JAN-13          39158

    Thank you!

    Because in the second example you also select rows that have a nonzero time component.

    The first example selects only the rows whose time component is 00:00:00.

    (By the way, you should ask questions like this in the SQL forum.)
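
    As an aside, a range predicate on the raw column returns all of that day's rows without wrapping the indexed column in TRUNC; a sketch against the same table:

    select trunc(CHARGE_DATE) CHARGE_DATE,
           count(1) Record_count
    from   "RATOR_CDR"."BILLING_RECORD_KPN_ESP"
    where  CHARGE_DATE >= to_date('20-JAN-2013', 'DD-MON-YYYY')
    and    CHARGE_DATE <  to_date('21-JAN-2013', 'DD-MON-YYYY')
    group  by trunc(CHARGE_DATE);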

  • ESXi install fails with 'No space on disk for the dump data', any ideas?

    It is an older server.

    Specs:

    VA Linux 2200 series

    Intel P3 700 MHz x 2

    768 MB of RAM

    9.1 GB SCSI x 3 (now configured in RAID 5)

    I have attached a picture of the error message received, and I have typed most of the message below.

    NOT_IMPLEMENTED /build/mts/release/bora-123629/bora/vmkernel/sched/sched.c:5075

    Frame 0x1402ce0 ip = 0x62b084 cr2 = 0x0 cr3 = 0x3000 cr4 = 0x20

    es = 0xffffffff ds = 0xffffffff fs = 0xffffffff gs = 0xffffffff

    eax = 0xffffffff ebx = 0xffffffff ecx = 0xffffffff edx = 0xffffffff

    ebp = 0x1402e3c esi = 0xffffffff edi = 0xffffffff err = -1 eflags = 0xffffffff

    *0:0/<NULL>  1:0/<NULL>

    0x1402e3c: stack: 0x830c3f, 0x1402e58, 0x1402e78

    VMK uptime: 0:00:00:00.026 TSC: 222483259709

    No space on disk for the dump data

    Waiting for debugger... (World 0)

    Debugger is listening on the serial port...

    Press ESC to enter the local debugger

    This could be a simple problem or not, I'm not sure. I have already spent several hours trying to reconfigure the drives to get the installer to recognize them.

    Any help is greatly appreciated.

    I agree with Matt, the hardware may simply be too old.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Recommended value for the BAM data expiration time

    Hello

    Can someone tell me what is the recommended value for the BAM data expiration time?

    The Enterprise Server default is 24 hours, but I would like to be able to report on average instance runtimes even after several months. Is it reasonable to set the expiration time to a high value? Or will it have an impact on BPM/BAM performance?

    Thanks in advance.

    Best regards
    CA

    Normally we keep the BAM data expiration time somewhere in the range of 24 to 72 hours. For the historical reports you are looking for, the Data Mart / Data Warehouse DB is more suitable. That database stores the data forever and takes snapshots at longer intervals, normally every 24 hours. The data there is normally not real-time, because a snapshot is only taken once per day, but it will give you the historical reports you are looking for. The data structure of that database is almost identical to the BAM DB.

  • Not able to start the cache agent for the requested data store

    Hello

    This is my first attempt at TimesTen. I am running TimesTen on the same Linux host (RHES 5.2) that runs Oracle 11gR2. The TimesTen version is:

    TimesTen Release 11.2.1.4.0


    Trying to create a simple cache.

    The DSN entry for ttdemo1 in .odbc.ini is as follows:

    [ttdemo1]
    Driver=/home/oracle/TimesTen/timesten/lib/libtten.so
    DataStore=/work/oracle/TimesTen_store/ttdemo1
    PermSize=128
    TempSize=128
    UID=hr
    OracleId=MYDB
    DatabaseCharacterSet=WE8MSWIN1252
    ConnectionCharacterSet=WE8MSWIN1252

    Using ttIsql I connect:

    Command> connect "dsn=ttdemo1;pwd=oracle;oraclepwd=oracle";
    Connection successful: DSN=ttdemo1;UID=hr;DataStore=/work/oracle/TimesTen_store/ttdemo1;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;DRIVER=/home/oracle/TimesTen/timesten/lib/libtten.so;OracleId=MYDB;PermSize=128;TempSize=128;TypeMode=0;OracleNetServiceName=MYDB;
    (Default setting AutoCommit=1)
    Command> call ttcacheuidpwdset('ttsys','oracle');
    Command> call ttcachestart;
    10024: Could not start cache agent for the requested data store. Could not initialize Oracle Environment Handle.
    The command failed.

    The following text appears in the tterrors.log:

    15:41:21.82 Err : ORA: 9143: ora-9143--1252549744-xxagent03356: Datastore: TTDEMO1 OCIEnvCreate failed. Return code -1
    15:41:21.82 Err : 7140: oraagent says it failed to start: Could not initialize Oracle Environment Handle.
    15:41:22.36 Err : 7140: TT14004: Failed to create TimesTen daemon: couldn't spawn oraagent for '/work/oracle/TimesTen_store/ttdemo1': Could not initialize Oracle Environment Handle

    What could be the reason the daemon cannot spawn the agent? FYI, the environment variables are set as follows:

    ORA_NLS33=/u01/app/oracle/product/11.2.0/db_1/ocommon/nls/admin/data
    ANT_HOME=/home/oracle/TimesTen/ttdemo1/3rdparty/ant
    CLASSPATH=/home/oracle/TimesTen/ttdemo1/lib/ttjdbc5.jar:/home/oracle/TimesTen/ttdemo1/lib/orai18n.jar:/home/oracle/TimesTen/ttdemo1/lib/timestenjmsxla.jar:/home/oracle/TimesTen/ttdemo1/3rdparty/jms1.1/lib/jms.jar:.
    oracle@rhes5:/home/oracle/TimesTen/ttdemo1/info% echo $LD_LIBRARY_PATH
    /home/oracle/TimesTen/ttdemo1/lib:/home/oracle/TimesTen/ttdemo1/ttoracle_home/instantclient_11_1:/u01/app/oracle/product/11.2.0/db_1/lib:/u01/app/oracle/product/11.2.0/db_1/network/lib:/lib:/usr/lib:/usr/ucblib:/usr/local/lib


    Cheers

    I don't see a problem there. The ENOENTs are harmless because it does locate libtten here:

    23302 open ("/ home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/libtten.so", O_RDONLY) = 3

    No doubt it does the same thing when trying to find libttco.so:

    23302 open ("/ home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/ttoracle_home/instantclient_11_1/libttco.so", O_RDONLY) =-1 ENOENT (no such file or directory)

    Thanks for taking the trace. I would really like to have a look at the complete file; could you send it to me?

  • How to raise an error if a date in the CSV is not in the expected format

    Hi Experts,
    I have a situation where I expect the DATE format to be "MON-DD-YYYY", but there are issues when the CSV file is created with a DATE like "MON-DD-YY" (for example 'NOV-15-12'). Such a date is inserted into the DB as November 15, 0012, so I'm looking for a way to not load these records, and to have the rows with incorrect dates written to the .bad and .log files. Here is my SQL:
    DROP TABLE PER_ALL_ASSIGNMENTS_M_XTERN;

    CREATE TABLE PER_ALL_ASSIGNMENTS_M_XTERN (
    PERSON_NUMBER VARCHAR2(30 CHAR),
    ASSIGNMENT_NUMBER VARCHAR2(30 CHAR),
    EFFECTIVE_START_DATE DATE,
    EFFECTIVE_END_DATE DATE,
    EFFECTIVE_SEQUENCE NUMBER(4),
    ATTRIBUTE_CATEGORY VARCHAR2(30 CHAR),
    ATTRIBUTE1 VARCHAR2(150 CHAR),
    ...
    ATTRIBUTE_NUMBER20 NUMBER,
    ATTRIBUTE_DATE1 DATE,
    ATTRIBUTE_DATE2 DATE,
    ATTRIBUTE_DATE3 DATE,
    ATTRIBUTE_DATE4 DATE,
    ATTRIBUTE_DATE5 DATE,
    ATTRIBUTE_DATE6 DATE,
    ATTRIBUTE_DATE7 DATE,
    ATTRIBUTE_DATE8 DATE,
    ATTRIBUTE_DATE9 DATE,
    ATTRIBUTE_DATE10 DATE,
    ATTRIBUTE_DATE11 DATE,
    ATTRIBUTE_DATE12 DATE,
    ATTRIBUTE_DATE13 DATE,
    ATTRIBUTE_DATE14 DATE,
    ATTRIBUTE_DATE15 DATE,
    ASG_INFORMATION_CATEGORY VARCHAR2(30 CHAR),
    ASG_INFORMATION1 VARCHAR2(150 CHAR),
    ...
    ASG_INFORMATION_NUMBER20 NUMBER,
    ASG_INFORMATION_DATE1 DATE,
    ASG_INFORMATION_DATE2 DATE,
    ASG_INFORMATION_DATE3 DATE,
    ASG_INFORMATION_DATE4 DATE,
    ASG_INFORMATION_DATE5 DATE,
    ASG_INFORMATION_DATE6 DATE,
    ASG_INFORMATION_DATE7 DATE,
    ASG_INFORMATION_DATE8 DATE,
    ASG_INFORMATION_DATE9 DATE,
    ASG_INFORMATION_DATE10 DATE,
    ASG_INFORMATION_DATE11 DATE,
    ASG_INFORMATION_DATE12 DATE,
    ASG_INFORMATION_DATE13 DATE,
    ASG_INFORMATION_DATE14 DATE,
    ASG_INFORMATION_DATE15 DATE
    )
    ORGANIZATION EXTERNAL
    ( DEFAULT DIRECTORY APPLCP_FILE_DIR
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE SKIP 1
        BADFILE APPLCP_FILE_DIR:'PER_ALL_ASSIGNMENTS_M_XTERN.bad'
        LOGFILE APPLCP_FILE_DIR:'PER_ALL_ASSIGNMENTS_M_XTERN.log'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        ( PERSON_NUMBER,
          ASSIGNMENT_NUMBER,
          EFFECTIVE_START_DATE CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          EFFECTIVE_END_DATE CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          EFFECTIVE_SEQUENCE,
          ATTRIBUTE_CATEGORY,
          ATTRIBUTE1,
          ...
          ATTRIBUTE_NUMBER20,
          ATTRIBUTE_DATE1 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE2 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE3 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE4 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE5 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE6 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE7 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE8 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE9 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE10 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE11 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE12 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE13 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE14 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ATTRIBUTE_DATE15 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_CATEGORY,
          ASG_INFORMATION1,
          ...
          ASG_INFORMATION_NUMBER20,
          ASG_INFORMATION_DATE1 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE2 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE3 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE4 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE5 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE6 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE7 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE8 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE9 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE10 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE11 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE12 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE13 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE14 CHAR(20) DATE_FORMAT DATE MASK "DD-MON-YYYY",
          ASG_INFORMATION_DATE15 CHAR(20) DATE_FORMAT DATE MASK "MON-DD-YYYY"
        )
      )
      LOCATION ('PER_ALL_ASSIGNMENTS_M.csv')
    )
    REJECT LIMIT UNLIMITED;

    An example CSV record:

    E040101,EE040101,1-Aug-00,31-Dec-12,1,NDVC,YES,SFC,N,STIP Plan - pressure,E040101,2080,5,31113,31113,31113,31113,31113,31113,1-Jan-12,31-Dec-12

    Thank you
    Thai

    Please post details of the OS and database versions.

    Use RRRR instead of YYYY - http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements004.htm#SQLRF00215
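
    The difference is easy to see in isolation; a quick sketch with the 2-digit year from the sample file (session date language assumed to be English):

    select to_date('15-NOV-12', 'DD-MON-RRRR') from dual;  -- 15-NOV-2012
    select to_date('15-NOV-12', 'DD-MON-YYYY') from dual;  -- 15-NOV-0012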

    HTH
    Srini

  • How to generate a DLL with a C/C++-compatible interface for complex data types

    Hello

    I used the LabVIEW FPGA module to develop test equipment. Now I need to write a driver for this equipment in the form of a DLL with a C/C++-compatible interface, so that my client, who is familiar with C/C++, can call the driver without having to learn LabVIEW. But I have a few problems with how to convert complex LabVIEW data types for C/C++. To explain my question clearly, I have attached a simple example (see attachment). I generate a DLL for the attached example VI and get the following function prototype at the head of the header file:

    void OpenFpgaReference(LStrHandle *RIODevice, TD1 *errorIn, LVRefNum *FPGAVIReferenceOut, TD1 *errorOut);

    As you know, the data types "LStrHandle *RIODevice" and "LVRefNum *FPGAVIReferenceOut" are LabVIEW data formats. C/C++ does not have these data types and cannot recognize them. As a result, I can't call the DLL from a C/C++ program. How do I convert these two LabVIEW data types to C/C++-compatible data formats and then build the DLL? Does anyone know?

    Any answer is really appreciated! Thanks in advance.

    Ivan.Chen wrote:

    As I found in the following article:

    http://digital.NI.com/public.nsf/WebSearch/FB001AA027C8998386256AAD006C142D?OpenDocument

    LVRefNum is the LabVIEW VISA resource name or refnum, and "it is impossible to convert a LabVIEW VISA resource name or refnum to a valid VISession ID."

    This means that external code modules cannot access and control the VISA session which was opened by LabVIEW. But for my purposes, I will not attempt to access this VISA session from external code (C/C++). I just hope to save this VISA session in the external code once I have opened it in the LabVIEW DLL, and pass it back to the LabVIEW DLL when needed, so that I don't need to log in again each time I need to control the device. Is it possible to do this?

    An LVRefNum is really just a single int32 value. Its meaning is opaque to any environment other than the one that created it, so you can't do anything with it in the C/C++ caller except pass it back to other functions in your DLL, but that often isn't a problem at all.

    You can take the following declaration from the LabVIEW extcode.h header and add it to your de-LabVIEWed header files to make it work that way:

    #define Private(T)  typedef struct T##_t { void *p; } *T

    Private(LVRefNum);

    For the LStrHandle you should instead use a standard C string in your DLL export, and document what size the string buffer should have if it is an output parameter.

    The TD1 error clusters should also either be broken up into separate C-compatible parameters for each of their elements, or simply left out entirely.

    Rolf Kalbermatter

  • Customizing the KeywordFilterField filter for a tabular data model

    I am currently searching the rows of a table, filtered through a KeywordFilterField. The underlying data is tabular:

    Contact {name, phone, etc etc}

    The KeywordFilterField shows only what I pass to it (Contact.Name) by calling setSourceList(), and it filters only the strings in that list. So if I type numbers, it returns an empty list, because none of the Contacts have numbers in their names.

    However, what I want to do is query the table, like a SQL query, retrieving rows that match part of Contact.Name or Contact.Phone. (Note: this application doesn't use SQLite. I'm using RMS for persistent storage, and I created my database and basic queries by hand.)

    Is there a way I could customize/override the filter so that the KeywordFilterField calls my query functions rather than its default String filter? It's a basic CRUD application with search. I use KeywordFilterField because it's all I need.

    Any help would be useful.

    It is possible with a text field and a list field; that way you can do your own customized search, since the keyword filter field does not search your data the way you want. Besides, I know the keyword filter field is broken; it always returned completely incorrect search results for me.

    Here's an outline of what to do. Some things I can't tell you how to do, for example what happens inside the "searchContacts" function, since it's up to you to write the code that does whatever custom search you want.

    class CustomKeywordFilterScreen extends MainScreen implements FieldChangeListener, ListFieldCallback
    {
        //just a slightly modified edit field for use in entering keywords
        private CustomKeywordField _filterEdit;
        //a standard list field
        private ListField _contactList;
    
        //temp variable to hold the last keyword searched so we don't search for it again
        //you'll see this used later on
        private String _previousFilterValue;
    
        //any other private vars you need to hold search results go here
        //we'll use an a Contact[] for illustration
        private Contact[] _contactResults;
    
        CustomKeywordFilterScreen()
        {
            super(Manager.VERTICAL_SCROLL | Manager.VERTICAL_SCROLLBAR);
    
            //initiaize to empty string
            _previousFilterValue = "";
    
            //searchContacts is whatever function you write to do the customized search you want,
        //in this example passing an empty string returns all contacts
            _contactResults = searchContacts(_previousFilterValue);
    
            //create the edit field and set it as the title of the screen so it's always visible,
        //even when you scroll the list
            _filterEdit = new CustomKeywordField();
            _filterEdit.setLabel("Search: ");
            _filterEdit.setChangeListener(this);
            setTitle(_filterEdit);
    
            //create the list for showing results
            _contactList = new ListField();
            _contactList.setRowHeight(-2);
            _contactList.setSize(_contactResults.length);
            _contactList.setCallback(this);
            add(_contactList);
        }
    
        protected boolean keyChar(char c, int status, int time)
        {
            if (c == Characters.ESCAPE)
            {
            //user pressed escape key, if there's something in the edit field clear it
            //otherwise handle it as closing the screen or whatever else you want
                if (_filterEdit.getTextLength() > 0)
                {
                    _filterEdit.setText("");
                }
                else
                {
                    //close the screen or do something else, it's your call
                    //maybe even do nothing, whatever you want
                }
                return (true);
            }
            else
            {
            //all other keystrokes set focus on the edit field
                _filterEdit.setFocus();
            }
            return (super.keyChar(c, status, time));
        }
    
        public void fieldChanged(Field field, int context)
        {
            if (field == _filterEdit)
            {
            //test the edit field's value against the previously searched value
            //if NOT the same then do a search and refresh results
                if (!_filterEdit.getText().equals(_previousFilterValue))
                {
                //cache the newest search keyword string
                    _previousFilterValue = _filterEdit.getText();
    
                    //search your data
                    _contactResults = searchContacts(_previousFilterValue);
            //update the list size to cause it to redraw
                    _contactList.setSize(_contactResults.length);
                }
            }
        }
    
        public void drawListRow(ListField listField, Graphics graphics, int index, int y, int width)
        {
            if (listField == _contactList && index > -1 && index < _contactResults.length)
            {
                //draw your list field row as you want it to appear
            }
        }
    
        public Object get(ListField listField, int index)
        {
            if (listField == _contactList && index > -1 && index < _contactResults.length)
            {
                return (_contactResults[index]);
            }
            return (null);
        }
    
        public int getPreferredWidth(ListField listField)
        {
            return (Display.getWidth());
        }
    
        public int indexOfList(ListField listField, String prefix, int start)
        {
            return (-1);
        }
    }
    
    class CustomKeywordField extends EditField
    {
        CustomKeywordField()
        {
            super(USE_ALL_WIDTH | NO_LEARNING | NO_NEWLINE);
        }
    
        protected void paint(Graphics graphics)
        {
            super.paint(graphics);
    
            //Draw caret so the edit field always shows a cursor giving the user an indication they can type at anytime,
        //even when the focus is on the list field that is used in conjunction with this edit field to create a
        //keyword filter field replacement
            getFocusRect(new XYRect());
            drawFocus(graphics, true);
        }
    }
    
  • PL/SQL - to_date hard-coded for American date strings regardless of the machine?

    I am writing code to deal with some data that originally comes from Excel... Some fields are strings containing a date value. My code takes the string and turns it into a DATE variable so I can insert it into an Oracle DATE column. The format of the dates in Excel is like this: "DD/MONTH/YYYY", where "MONTH" is written in American English.

    So, if I do something like this:

    select to_date('01/APRIL/2013', 'DD/MONTH/YYYY')
    into   :myDate
    from   dual;

    It works fine when I run it on my own English/American machine.

    But... when I run the same on my client's machine (which is in another country, Brazil)... I get an "invalid month" error.

    Of course, if I instead do this:

    select to_date('01/ABR/2013', 'DD/MON/YYYY')
    into   :myDate
    from   dual;

    Then it works fine.

    And... it seems that even though their machine settings are Brazilian, they provide the date fields in the Excel file (which they supply) in US-English format.

    So... what is a good way to interpret their date field (which is in English)? I'm hoping for something to add to the to_date call that tells Oracle "interpret this 'DD/MONTH/YYYY' format with MONTH written in American English", regardless of the machine it runs on.

    Not sure if this is what you're asking, but have you tried to_date('01/APRIL/2013', 'DD/MONTH/YYYY', 'NLS_DATE_LANGUAGE = AMERICAN') when inserting?
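
    A quick way to see the effect; the session language below is only for illustration:

    alter session set nls_date_language = 'BRAZILIAN PORTUGUESE';

    -- fails with ORA-01843: not a valid month
    select to_date('01/APRIL/2013', 'DD/MONTH/YYYY') from dual;

    -- works regardless of the session date language
    select to_date('01/APRIL/2013', 'DD/MONTH/YYYY', 'NLS_DATE_LANGUAGE = AMERICAN') from dual;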

  • Making INTERVAL date arithmetic work at the end of the month


    I need to be able to get 1 year, 2 years, etc. and also 1 month, 2 months, etc. before each date in the calendar.  I can use INTERVAL arithmetic for the most part, but it does not work when going back 1 month (or year) would produce an invalid date.

    For example, the following generate errors:

    select to_date('29-FEB-2012') - interval '1' year from dual;

    select to_date('31-MAR-2011') - interval '1' month from dual;

    Is there a way to get these to return 28-FEB-2011?

    Thank you for your help.

    Sandy

    Hi, Sandy,

    ADD_MONTHS always returns a valid DATE.

    ADD_MONTHS(TO_DATE('29-FEB-2012', 'DD-MON-YYYY'), 12 * n)

    will return February 29, n years after 2012, if that year is a leap year, and February 28 of that year if it is not a leap year.

    In general

    ADD_MONTHS(d, m)

    returns the DATE m months after DATE d.  If d is near the end of its month and the month m months away has fewer days, then ADD_MONTHS returns the last day of that month.  For example, if d is 31 March and m is 1, then ADD_MONTHS returns the last day of April (1 month after March), because there is no 31st day in April.
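
    A sketch contrasting the two approaches on the examples above:

    -- INTERVAL arithmetic raises ORA-01839 when the resulting day does not exist:
    select date '2012-02-29' - interval '1' year  from dual;  -- ORA-01839
    select date '2011-03-31' - interval '1' month from dual;  -- ORA-01839

    -- ADD_MONTHS snaps back to the last day of the month instead:
    select add_months(date '2012-02-29', -12) from dual;  -- 28-FEB-2011
    select add_months(date '2011-03-31',  -1) from dual;  -- 28-FEB-2011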

  • Logic required for the following data


    Hi all

    I have a procedure in which I use a cursor to retrieve records. The query returns the following data:

    DI      SZ      DII     CWT    top   down
    9 1/2    9.625   8.921   36      18  1602
    13 1/2  13.375  12.515   61      19  1962
    18 1/2  18.625  17.755   87.5    20   503
    26      26      24.75   105      20   103
    9 1/2    9.625   8.835   40    1602  3858
    7       7        6.276   26    1683  6352

    I want to print only those values...

    9 1/2    9.625   8.921   36      18  1602
    9 1/2    9.625   8.835   40    1602  3858
    7       7        6.276   26    1683  6352

    As you can see, the top and down values of these rows overlap/connect.

    I have tried several ways of sorting the query on the fields and applying logic, but I always end up with an extra row that does not overlap.

    Can someone please give me the logic to get the desired result, whether through a condition, procedure, function, or formula?

    Thank you

    929107 wrote:

    Hi Mahir,

    Thanks for the reply, but I'm looking for some generic logic/algorithm; I don't want to play with a fixed set of values. The values can change as the other structures evolve.

    Kind regards

    I showed fixed values only for explanation.

    You can use my select statement on your table:

    select *
    from  (select di, sz, dii, cwt, top, down,
                  lag(down) over (order by top) ldown
           from   <your_table>)
    where down >= nvl(ldown, down);

    Regards,

    Mahir Quluzade
