Joining pipelined table functions to other tables

I'm on DB 11.2.0.2 and have used pipelined table functions sparingly, but I plan to use them for a project that has a pretty big array (many rows). In my tests, selecting from the pipelined table function performs well enough (whether directly from the pipelined function or from the view I created over it). Where I start to see some degradation is when I try to join the pipelined table function to other tables and add WHERE conditions.


SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno
AND B.mgr = '7839';

I've seen articles and blog posts that mention this is a cardinality issue and offer some undocumented methods to try to work around it.

Can someone please give me some tips or tricks on this? Thank you!

I created a simple example using the emp table below to help illustrate what I'm doing.

DROP TYPE EMP_TYPE;

DROP TYPE EMP_SEQ;

CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
(EMPNO NUMBER (10),
ENAME VARCHAR2 (100),
JOB VARCHAR2 (100));

/

CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;

/

CREATE OR REPLACE FUNCTION get_emp RETURN EMP_TYPE PIPELINED AS
BEGIN
  FOR cur IN (SELECT empno,
                     ename,
                     job
                FROM emp)
  LOOP
    PIPE ROW (EMP_SEQ (cur.empno,
                       cur.ename,
                       cur.job));
  END LOOP;
  RETURN;
END get_emp;

/

CREATE OR REPLACE VIEW EMP_VIEW AS SELECT * FROM TABLE (get_emp());

/

SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno
AND B.mgr = '7839';
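For reference, the undocumented workaround most often mentioned for this is the CARDINALITY hint, which tells the optimizer roughly how many rows the pipelined function will return (by default it guesses a block-size-based figure that is usually far too high). A hedged sketch against the example above, assuming emp's classic 14 rows — this is illustrative, not a supported or guaranteed fix:

```sql
-- Hypothetical sketch: tell the CBO the pipelined source returns
-- about 14 rows, so the join to EMP is costed realistically.
-- CARDINALITY is undocumented/unsupported; a DYNAMIC_SAMPLING hint
-- on the table function is another commonly cited alternative.
SELECT /*+ CARDINALITY(a 14) */
       a.empno, a.ename, a.job, b.sal
FROM   emp_view a, emp b
WHERE  a.empno = b.empno
AND    b.mgr   = '7839';
```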

bobmagan wrote:

The ability to join would give me the most flexibility.

Pipelined functions can be joined. But what sits behind them is PL/SQL code - not tables. And no indexes.

Consider a view:

create or replace view sales_personel as select * from emp where job_type = 'SALES';

And you use the view to find the salespeople in department 123:

Select * from sales_personel where dept_id = 123

Oracle recognizes that the filter can logically be pushed into the view, so it treats it like the following SQL statement:

select * from emp where job_type = 'SALES' and dept_id = 123


If the two columns in the filter are indexed, for example, it may well decide to use an index merge to determine which EMP rows are sales staff in department 123.

Now consider the exact same scenario with a pipelined function. The internals of a pipeline are opaque to the SQL engine. It cannot tell the pipeline's internal code "hey, only give me the employees of department 123".

It needs to run the entire pipeline. It must evaluate each piped row and apply the "dept_id = 123" predicate. In essence, it must treat the complete pipeline as a full table scan. And a slow one at that - it takes more than simply reading rows from disk, since the pipeline performs data transformation on top.

So yes - you can apply predicates to pipelined functions, you can join them, use analytic SQL on them and so on - but expecting them to behave like a table in terms of SQL/CBO optimization is not realistic. It points to a somewhat flawed understanding of what a pipeline is and how it should be designed and used.

Tags: Database

Similar Questions

  • Issue updating a table using a pipelined function

    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    OK, here is a simplified example of what is happening.
    I am trying to do updates or inserts on tables, and I use a pipelined function that reads from those same tables to do it.
    And it gives me:
    ORA-04091: table DSAMSTRC.T is mutating, trigger/function may not see it
    ORA-06512: at "DSAMSTRC.GET_T_ARRAY", line 6
    My solution was to use a cursor and update records row by row, but I was wondering if there is a way around this error?


    Here's my example
    DROP TABLE t;
    ------------------------------------------------------------------------------------
    
    CREATE TABLE t
    AS
           SELECT LEVEL id, 'A' txt
             FROM DUAL
       CONNECT BY LEVEL <= 10;
    
    -------------------------------------------------------------------------------------
    
    CREATE OR REPLACE TYPE t_table_type AS OBJECT
                      (id NUMBER, txt VARCHAR2 (200));
    ------------------------------------------------------------------------------------
    
    CREATE OR REPLACE TYPE t_array AS TABLE OF t_table_type;
    -----------------------------------------------------------------------------------
    
    CREATE OR REPLACE FUNCTION get_t_array (v_txt IN t.txt%TYPE)
       RETURN t_array
       PIPELINED
    IS
    BEGIN
       FOR EACH IN (SELECT id, txt
                      FROM t
                     WHERE t.txt = v_txt)
       LOOP
          PIPE ROW (t_table_type (EACH.id, EACH.txt));
       END LOOP;
    
       RETURN;
    END;
    ---------------------------------------------------------------------------------------
    
    UPDATE t
       SET txt = 'B'
     WHERE id IN (SELECT id FROM TABLE (get_t_array ('A')));
     

    You can create a global temporary table with the same columns, use your function to insert into it, and then use it to update t.

    CREATE GLOBAL TEMPORARY TABLE t_temp
    (id    INTEGER,
     txt  VARCHAR2(100))
    ON COMMIT DELETE ROWS;
    
    INSERT INTO t_temp
      (SELECT id, txt FROM TABLE (get_t_array('A')));
    
    UPDATE t
       SET t.txt = (SELECT txt FROM t_temp
                     WHERE t_temp.id = t.id)
     WHERE t.id IN (SELECT id FROM t_temp);
    
  • Parallel-enabled pipelined table functions

    Hi all

    To "enable parallel" on a pipelined table function, do I need to enable parallel query for the session first?

    I have read a few white papers and web pages on map/reduce implemented with Oracle table functions, and I see that they are based on table functions.

    They use a cursor in a loop to get a row and pipe it out - this replaces SQL with PL/SQL.

    What is the cost that must be paid for the table function approach?

    Finally, how can I confirm that Oracle has actually run the table function in parallel?

    Best regards

    Leon

    user12064076 wrote:

    In my application, I wrote stored procedures that return collections of user-defined object types to Java.

    A flawed approach - using expensive private server memory (PL/SQL PGA) to cache SQL data and then push that data to the client, when the client could instead use the more scalable shared buffer cache memory.

    With object types, we can remove most of the redundant data.

    This statement makes no sense, as the same holds for SQL and cursors. And removing redundant data is better done in SQL itself - through partition pruning, predicates, selecting only the needed columns via SQL projection, and so on.

    This OO design reduces the load on the network and makes it easy for Java to parse.

    Incorrect. It does not reduce network load - it can actually increase it. As for Java "parsing" - the approach is wrong from the start if the client is required to parse the data it receives from the database server. Parsing takes CPU time - a lot of general processor muscle - and that will degrade performance.

    Perhaps you are using "parsing" out of context here? I find it hard to believe that one would design an application and use a database server in this way. Parsing means, for example, receiving XML data (text) and then parsing it into an object-like structure for use.

    Data from the database should be received by the client in a structured binary format, not in a free format that requires the client to parse it into a structured format.

    But the problem is that we accumulate all the data in memory first and push it to the client as a whole. If it's too huge, ORA-22813 occurs.
    This is why I intend to use a pipelined table function.

    How will that solve the problem?

    As I follow your logic:
    (1) you do not use a cursor, for some obscure (and probably completely unjustified) reason.
    (2) you cannot return large PL/SQL collection variables without significantly denting the PGA (private process memory) on the server and running into errors (this approach is conceptually wrong anyway)
    (3) you are now using a pipelined function that executes PL/SQL code in order to execute SQL code - which in turn must be executed by the client as SQL, using a cursor

    So why put all these extra moving parts (the pipeline code) in between, when the client
    (a) must use SQL to access it?
    (b) must create a cursor?

    If, as you say, I return a cursor, it would be very difficult for Java to organize the data.

    A pipelined table function must itself be used through a cursor. All DML statements sent from a client to Oracle are parsed as cursors.

    Read the Oracle® Database PL/SQL Language Reference section on "Chaining Pipelined Table Functions for Multiple Transformations".

    The format for using a pipelined function is (from the manual):

    SELECT * FROM TABLE(table_function_name(parameter_list))
    

    The "pipeline" is created by the SQL engine - it needs SQL to execute the PL/SQL code via the SQL TABLE() function.

    You said that your reason for using a pipeline is to transform a relational data structure into an object structure. You don't need a pipeline for that. Plain vanilla SQL can do the same thing, without the overhead of using PL/SQL and moving the data between the PL/SQL and SQL engine contexts inside the pipeline.

    You simply call the object class constructor in the SQL projection, and the SQL cursor returns the instantiated objects. For example:

    create or replace type TEmployee is object(
     .. property definitions ...
    );
    
    create or replace type TEmployeeCollection is table of TEmployee;
    
    select
      TEmployee( col1, col2, .., coln ) as EMP_COLLECTION
    from emp
    where dept_id = :0
    and salary > :1
    and date_employed >= :2
    order by
      salary, date_employed
    

    No need for PL/SQL code. No need for a pipeline. The client opens the cursor and fetches the objects into a collection - the same approach the client would use when fetching from a cursor over a pipelined table function.

    Pipelined functions are best used as a data transformation process where SQL alone cannot perform the transformation. In many years of designing and writing applications, I have never put an Oracle PL/SQL pipeline over a SQL table into production. Simply because SQL itself is capable and powerful enough to do the job - and to do it faster and better.

    Where I have used pipelines is to transform data from external sources into SQL row sets. For example, a pipeline over a web service: the PL/SQL code constructs the SOAP envelope, makes the HTTP call, parses the XML, and returns the content as rows and columns - which allows running a SQL SELECT against a web-service-turned-into-a-SQL-table.

    Not to pick on you, Leon - but this looks to me like a typical Java approach, with very little understanding of database concepts and of Oracle specifically. You cannot treat Oracle as a simple persistence layer. You cannot treat SQL and PL/SQL as a simple I/O interface for extracting data from Oracle and crunching it in Java. Not if you expect your Java system to perform and scale.

    Rethink the Oracle layer and use it properly - and your application will perform and scale. Guaranteed.

    However, in my experience, many J2EE developers choose to treat Oracle as a black box, nothing more than a kind of glorified file system for storing structured data, and try to do everything in the Java layer. And this fails. I have seen it fail - jaw-dropping, epic failures (hitting the national newspapers and media as a result, and impacting ordinary people who have to deal with the government).

    And it's a shame... SQL and PL/SQL are superior to Java in this regard and are far more capable layers for crunching data in the database. A real-world example: the largest table in our busiest database grows by between 350 and 450 million rows per day, and all our number-crunching on the data in that table happens inside the database layer - not in a Java layer. Oracle performs and scales beautifully... when used correctly.

  • Pipelined table vs ref cursor in a function return

    Hi gurus,

    Has anybody found one (see subject) to be faster than the other? The data will be consumed primarily from an external application (.NET). What are the benefits of each? I can't decide which one to use.

    Thank you very much.


    They are two different things.

    A pipelined table function acts like a table, but you must still SELECT from it; so if you are consuming it in .NET, you would still use a ref cursor, I guess, to query the pipelined function (I assume .NET does not query the tables directly, but must use some sort of ref cursor?)

    Pipelined functions can be fast, but it depends on what you need. Is there a reason why you really need a pipelined function? If not, just use a normal query with a ref cursor, so your .NET application simply retrieves the data directly.
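    A minimal sketch of the ref-cursor alternative being suggested (the function name get_emp_rc and the emp columns are illustrative assumptions, not code from this thread):

    ```sql
    -- Hypothetical sketch: return a ref cursor instead of a pipelined
    -- collection; the client (.NET, Java, ...) simply fetches rows from it.
    CREATE OR REPLACE FUNCTION get_emp_rc RETURN SYS_REFCURSOR AS
      rc SYS_REFCURSOR;
    BEGIN
      OPEN rc FOR
        SELECT empno, ename, job FROM emp;
      RETURN rc;  -- no PL/SQL row-by-row piping involved
    END get_emp_rc;
    /
    ```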

  • Pipelined table functions

    I am trying to use pipelined table functions; however, I need help understanding how they work... As I understand it, a pipelined table function does not wait for the function to finish, and begins to return data as soon as it is available. The calling program can continue processing rather than wait. Does this mean the calling program and the table function work in parallel?

    The calling program must still wait for all of the rows to be processed; it just means that it can start viewing rows before all rows have been returned. It does not automatically turn the function into some kind of standalone background process piping rows back.

    Example:

    You may need to return 100,000 rows.

    A pipelined function can return rows 1-5,000 as soon as the first 5,000 are produced.

    If you return a collection without pipelining, it must wait until all 100,000 rows are collected, and then send everything at once.

    A useful case for this is pagination, where you want a user to see the first rows quickly, since 99% of the time they will not view the second page.
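    A hedged illustration of the pagination point, borrowing the get_emp example from the first question above (an assumption, not code from this thread) - the predicate stops the fetch after ten rows, so the pipelined function never has to produce the rest:

    ```sql
    -- Only the first 10 piped rows are fetched; with a pipelined
    -- function the remaining rows need never be produced at all.
    SELECT *
    FROM   TABLE(get_emp())
    WHERE  ROWNUM <= 10;
    ```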

  • Exporting data from one RDBMS table to another RDBMS table using ODI user functions

    Hello
    I am facing a problem while exporting data from one RDBMS table to another RDBMS table using ODI user functions.
    Name: User_Func
    Group: training
    Syntax: User_Func($(SrcField))

    Implementation syntax:

    (CASE
    WHEN $(SrcField) > 40000 THEN 'HIGH'
    WHEN $(SrcField) BETWEEN 30000 AND 40000 THEN 'AVERAGE'
    ELSE 'LOW'
    )
    Technology: Oracle

    To map the grade column of my TARGET_EMPTABLE I write
    User_Func(SRC_TABLENAME.SALARY)
    using the Expression Editor.
    I got the following error:

    ODI-1227: task ODI_FUNC_INTERFACE (export) failed on the source ORACLE connection Source_DataServer.
    Caused by: java.sql.SQLSyntaxErrorException: ORA-00905: missing keyword

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:947)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1283)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1441)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3823)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1671)
    at oracle.odi.query.JDBCTemplate.executeQuery(JDBCTemplate.java:189)
    at oracle.odi.runtime.agent.execution.sql.SQLDataProvider.readData(SQLDataProvider.java:89)
    at oracle.odi.runtime.agent.execution.sql.SQLDataProvider.readData(SQLDataProvider.java:1)
    at oracle.odi.runtime.agent.execution.DataMovementTaskExecutionHandler.handleTask(DataMovementTaskExecutionHandler.java:70)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:558)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:464)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
    at java.lang.Thread.run(Thread.java:619)
    and in the code tab it shows:

    Select
    SRC_FUNC_TABLE.E_NUMBER E_NUMBER,
    SRC_FUNC_TABLE.E_NAME E_NAME,
    SRC_FUNC_TABLE.E_LOC E_LOC,
    (CASE
    WHEN SRC_FUNC_TABLE.E_SAL > 40000 THEN 'HIGH'
    WHEN SRC_FUNC_TABLE.E_SAL BETWEEN 30000 AND 40000 THEN 'AVERAGE'
    ELSE 'LOW'
    ) E_GRADE
    from SOURCE_SCHEMA.SRC_FUNC_TABLE SRC_FUNC_TABLE
    where (1 = 1)


    Help, please

    Anindya Chatterjee wrote:
    Hello
    I am facing a problem while exporting data from one RDBMS table to another RDBMS table using ODI user functions.
    Name: User_Func
    Group: training
    Syntax: User_Func($(SrcField))

    Implementation syntax:

    (CASE
    WHEN $(SrcField) > 40000 THEN 'HIGH'
    WHEN $(SrcField) BETWEEN 30000 AND 40000 THEN 'AVERAGE'
    ELSE 'LOW'
    )

    The syntax of your CASE statement is not correct:
    the END keyword is missing.
    It should be

    (CASE
    WHEN $(SrcField) > 40000 THEN 'HIGH'
    WHEN $(SrcField) BETWEEN 30000 AND 40000 THEN 'AVERAGE'
    ELSE 'LOW'
    END)


  • FDQM write-back to EBS with tables other than GL

    Hello

    Please, I need a confirmation on FDQM: is it possible to write data back from Hyperion Planning (Essbase) into EBS tables other than GL using standard FDQM features (EBS purchasing tables, EBS sourcing tables, etc.)?
    FDQM ERPI write-back is limited to EBS GL.

    Or, to write into EBS, should I use ODI?

    I thank you and appreciate your input!
    Robert

    Hello

    Certainly, you can write additional scripts to do the job you are looking for. Unfortunately, the only out-of-the-box solution is the one mentioned above.

    Thank you

  • Trying to create a table using UNION ALL with multiple SELECT statements

    The query seeks to get a substring of the value, for example:
    if the value is 'ASA 2' then it should come out as ASA,
    whereas if the value is '1.5 TST' it should come out as TST, and likewise for the others.
    I tried to execute the SQL statement written below, but it errors as follows:

    "ORA-00998: must name this expression with a column alias"

    CREATE TABLE TEST_CARE AS
    (
    SELECT row_id, old_care_lvl, SUBSTR(old_care_lvl,3), len FROM test_care_lvl
    WHERE LENGTH (old_care_lvl) = 5
    UNION ALL
    SELECT row_id, old_care_lvl, SUBSTR(old_care_lvl,3), len FROM test_care_lvl
    WHERE LENGTH (old_care_lvl) = 7
    UNION ALL
    SELECT row_id, old_care_lvl, SUBSTR(old_care_lvl,3), len FROM test_care_lvl
    WHERE LENGTH (old_care_lvl) = 14
    UNION ALL
    SELECT row_id, old_care_lvl, SUBSTR(old_care_lvl,3), len FROM test_care_lvl
    WHERE LENGTH (old_care_lvl) = 7 AND old_care_lvl = 'Regular'
    );

    I want to create the table from the multiple SELECTs above using the UNION ALL clause, but running the create query errors with "ORA-00998: must name this expression with a column alias".

    Please guide me on how to approach solving this problem.
    Thanks in advance.

    Try this->

    CREATE TABLE TEST_CARE
    AS
      select *
      from (
              SELECT row_id, old_care_lvl,SUBSTR(old_care_lvl,3), len FROM test_care_lvl
              WHERE LENGTH(old_care_lvl) =5
              UNION ALL
              SELECT row_id, old_care_lvl,SUBSTR(old_care_lvl,3), len FROM test_care_lvl
              WHERE LENGTH(old_care_lvl) =7
              UNION ALL
              SELECT row_id, old_care_lvl,SUBSTR(old_care_lvl,3), len FROM test_care_lvl
              WHERE LENGTH(old_care_lvl) =14
              UNION ALL
              SELECT row_id, old_care_lvl,SUBSTR(old_care_lvl,3),LEN FROM test_care_lvl
              WHERE LENGTH(old_care_lvl) =7 AND old_care_lvl ='Regular'
          );
    

    N.B.: Not tested...

    Kind regards.

    LOULOU.
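    For what it's worth, ORA-00998 complains about the unnamed SUBSTR expression in a CREATE TABLE ... AS SELECT, so a variant that names the expression with a column alias addresses the error directly (the alias care_lvl_short is an assumed name, not from the thread):

    ```sql
    -- Naming the SUBSTR expression satisfies ORA-00998 in a CTAS.
    CREATE TABLE TEST_CARE AS
    SELECT row_id,
           old_care_lvl,
           SUBSTR(old_care_lvl, 3) AS care_lvl_short,
           len
    FROM   test_care_lvl
    WHERE  LENGTH(old_care_lvl) IN (5, 7, 14);
    ```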

  • How to identify whether a table is used only by synonyms and not by any other subprograms in a schema?

    Hi people

    How can I identify whether a table is used only by synonyms and not by any other subprograms within a schema? I can see this in TOAD's Describe > Used By tab, but I would like to check hundreds of tables, so I would like to know if there is any SQL or metadata table for this.


    The ALL_DEPENDENCIES view has hierarchical dependency information for the objects.

    The objects view (ALL_OBJECTS) has the object_type.

    Create a hierarchical query on the first view and join it to the second view.

    Or you can use the utldtree.sql file in the admin folder of the DB installation. The comments at the top show you how to build a hierarchical query based on the object type.
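    A hedged sketch of such a query, assuming the tables live in a schema called MYSCHEMA (adjust owner and names as needed): it lists tables that have at least one dependent object, where every dependent object is a synonym.

    ```sql
    -- Hypothetical sketch: tables in MYSCHEMA whose only dependent
    -- objects (per ALL_DEPENDENCIES) are synonyms.
    SELECT o.object_name
    FROM   all_objects o
    WHERE  o.owner = 'MYSCHEMA'
    AND    o.object_type = 'TABLE'
    AND    EXISTS (SELECT 1
                   FROM   all_dependencies d
                   WHERE  d.referenced_owner = o.owner
                   AND    d.referenced_name  = o.object_name)
    AND    NOT EXISTS (SELECT 1
                       FROM   all_dependencies d
                       WHERE  d.referenced_owner = o.owner
                       AND    d.referenced_name  = o.object_name
                       AND    d.type <> 'SYNONYM');
    ```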

  • Can you create a table of contents with page numbers using bookmarks?

    Can you create a table of contents with page numbers using bookmarks?

    Sometimes a long paper has bookmarks to help navigate it, but no real TOC (table of contents) on the first page. In such a situation, I think it would make reading the printed version easier if you could somehow create a table of contents with page numbers based on the hierarchical bookmarks in the document.

    If this is not possible from Acrobat, is there a third party app?

    Indeed, you have already created a script for this - sorry that I missed it. I should have...

    Acrobat - Create TOC bookmarks

  • Implementing array functions (indexOf, lastIndexOf, removeDuplicates...)

    Hello

    I'm trying to implement some functions to manipulate arrays more easily, as I would in other programming languages (Array.indexOf exists in JavaScript, but seems to be absent from ExtendScript).

    First of all, I tried to add these functions using Array.prototype; here's what I've done for two of them:

    Array.prototype.indexOf = function (value)
            {
              for (var i = 0;i<this.length;i++)
                {
                    if (this[i] == value) return i;
                }
                return -1;
            }
           
    Array.prototype.removeDuplicates = function ()
            {
                var removed = [];
                for (var i = 0;i<this.length-1;i++) {
                        for (var j=i+1;j<this.length;j++) {
                            if (this[i] == this[j]) {               
                                removed.push(this.splice(j,1));
                            }
                        }
                    }
                    return removed;
            }
    

    It seemed to work fine, but I discovered that it breaks this kind of loop:

    for (var i in array) {
         alert(array[i]);
         }
    

    The loop goes through the values in the array and then continues beyond array.length, iterating over the functions added to the prototype.

    As explained here, I found a workaround using Object.defineProperty() rather than implementing the functions directly on Array.prototype:

    Object.defineProperty(Array.prototype, "indexOf", {
      enumerable: false,
      value: function(value) {
          for (var i = 0;i<this.length;i++)
                {
                    if (this[i] == value) return i;
                }
                return -1;
        }
    });
    

    But... Object.defineProperty () is missing in ExtendScript too!

    I don't know what to try next... Any idea?

    Thank you!

    The primary reason some of these functions do not exist is that ExtendScript is based on the ECMA-262 standard - very old JavaScript - and I don't think all of it is implemented.

    Page 3 of the CS6 scripting guide (the only comprehensive written guide Adobe has at the moment), from After Effects Developer Center | Adobe Developer Connection:

    The ExtendScript language

    "After Effects scripts use the Adobe ExtendScript language, which is an extended form of JavaScript used by several Adobe applications, including Photoshop, Illustrator, and InDesign. ExtendScript implements the JavaScript language according to the ECMA-262 specification. The After Effects scripting engine supports the 3rd edition of the ECMA-262 standard, including its notational and lexical conventions, types, objects, expressions, and statements. ExtendScript also implements the E4X ECMA-357 specification, which defines access to data in XML format."

    Even though I know many developers have done prototyping, I personally found it annoying, especially if your code moves off your machine. I just write standalone functions for all my scripts. It has been easier to reuse code that way, and to recreate some (not all) of the missing features that would be nice to have from current-day JavaScript.
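    A minimal sketch of that standalone-function approach (the function names are illustrative; the code is ES3-compatible, so it should also run under ExtendScript):

    ```javascript
    // Standalone helpers instead of Array.prototype extensions, so
    // for-in loops over arrays are not polluted with extra properties.
    function arrayIndexOf(arr, value) {
      for (var i = 0; i < arr.length; i++) {
        if (arr[i] === value) return i;
      }
      return -1;
    }

    // Returns a new array with duplicates removed, leaving the input intact.
    function arrayRemoveDuplicates(arr) {
      var result = [];
      for (var i = 0; i < arr.length; i++) {
        if (arrayIndexOf(result, arr[i]) === -1) result.push(arr[i]);
      }
      return result;
    }
    ```

    Because nothing is added to Array.prototype, a `for (var i in array)` loop keeps enumerating only the array's own indices.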

  • XML data into a table using SQL/PL/SQL

    Hi experts,

    Could you please help with the following requirement? I have the XML below (an .xml file on a server). I need to access this file, read the XML, and insert it into a DB table using SQL and PL/SQL. Is it possible with the CDATA below? And there is a nested table in it.

    Could someone please guide me if you have sample code and an XML file.

    <?xml version="1.0" encoding="UTF-8"?>
    <authxml>
    <generation_date><![CDATA[17/11/2015]]></generation_date>
    <generated_by><![CDATA[Admin Admin]]></generated_by>
    <year><![CDATA[2015]]></year>
    <month><![CDATA[01]]></month>
    <authors>
    <author><![CDATA[user author]]></author>
    <author_firstname><![CDATA[user]]></author_firstname>
    <author_lastname><![CDATA[author]]></author_lastname>
    <author_email><![CDATA[[email protected]]]></author_email>
    <author_data_01><![CDATA[]]></author_data_01>
    <author_data_02><![CDATA[]]></author_data_02>
    <articles>
    <article_item>
    <article_id><![CDATA[123456]]></article_id>
    <publication><![CDATA[Al Bayan]]></publication>
    <section><![CDATA[Local]]></section>
    <issue_date><![CDATA[11/11/2015]]></issue_date>
    <page><![CDATA[2]]></page>
    <article_title><![CDATA[title.]]></article_title>
    <number_of_words><![CDATA[165]]></number_of_words>
    <original_price><![CDATA[200]]></original_price>
    <original_price_currency><![CDATA[DEA]]></original_price_currency>
    <price><![CDATA[250]]></price>
    <price_currency><![CDATA[DEA]]></price_currency>
    </article_item>
    </articles>
    <total_amount><![CDATA[250]]></total_amount>
    <total_amount_currency><![CDATA[DEA]]></total_amount_currency>
    </authors>
    </authxml>

    Thanks in advance,

    Suman

    Using XMLTABLE...

    SQL> ed
    Wrote file afiedt.buf

    with t(xml) as (
      -- the xmltype() literal holds the sample XML document from the question
      -- (elided here for brevity)
      select xmltype('...') from dual
    )
    -- end of sample data
    --
    -- assumptions:
    --  a) the XML may contain several <authors> tags
    --  b) each <authors> may contain several <article_item> entries
    select x.gen_by, x.gen_date, x.mn, x.yr
         , y.author, y.auth_fn, y.auth_ln, y.auth_cnt, y.auth_em, y.auth_d1, y.auth_d2
         , z.id, z.pub, z.sec, z.iss_dt, z.pg, z.art_ttl, z.num_wrds, z.oprice, z.ocurr, z.price, z.curr
    from   t
         , xmltable('/authxml'
                    passing t.xml
                    columns gen_date varchar2(10) path './generation_date'
                          , gen_by   varchar2(15) path './generated_by'
                          , yr       varchar2(4)  path './year'
                          , mn       varchar2(2)  path './month'
                          , authors  xmltype      path '.'
                   ) x
         , xmltable('/authxml/authors'
                    passing x.authors
                    columns author   varchar2(15) path './author'
                          , auth_fn  varchar2(10) path './author_firstname'
                          , auth_ln  varchar2(10) path './author_lastname'
                          , auth_cnt varchar2(3)  path './author_country'
                          , auth_em  varchar2(20) path './author_email'
                          , auth_d1  varchar2(5)  path './author_data_01'
                          , auth_d2  varchar2(5)  path './author_data_02'
                          , articles xmltype      path './articles'
                   ) y
         , xmltable('/articles/article_item'
                    passing y.articles
                    columns id       number       path './article_id'
                          , pub      varchar2(10) path './publication'
                          , sec      varchar2(10) path './section'
                          , iss_dt   varchar2(10) path './issue_date'
                          , pg       varchar2(3)  path './page'
                          , art_ttl  varchar2(20) path './article_title'
                          , num_wrds varchar2(5)  path './number_of_words'
                          , oprice   varchar2(5)  path './original_price'
                          , ocurr    varchar2(3)  path './original_price_currency'
                          , price    varchar2(5)  path './price'
                          , curr     varchar2(3)  path './price_currency'
                   ) z

    SQL> /

    GEN_DATE   GEN_BY      YR   MN AUTHOR      AUTH_FN AUTH_LN CNT AUTH_EM           ID     PUB      SEC   ISS_DT     PG ART_TTL NUM_W OPRIC OCU PRICE CUR
    ---------- ----------- ---- -- ----------- ------- ------- --- ----------------- ------ -------- ----- ---------- -- ------- ----- ----- --- ----- ---
    17/11/2015 Admin Admin 2015 01 user author user    author  UAE [email protected] 123456 Al Bayan Local 11/11/2015 2  title.  165   200   AED 250   AED

    Of course, you'll want to adjust the datatypes etc. as needed.

    I assumed that the XML can contain several <authors> sections and that each section can contain several <article_item> entries.

    Thus the XMLTABLE aliased as 'x' extracts the header information from the XML and feeds the <authors> data to the XMLTABLE aliased as 'y', which handles the multiple authors; that in turn feeds each <articles> section to the XMLTABLE aliased as 'z', which extracts each article_item.

    The CDATA sections are handled automatically by SQL/XML (the XML functionality integrated into Oracle's SQL).
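    Outside the database, the same shredding idea can be sketched with Python's standard library as an illustrative analog (this is not Oracle functionality). The element names follow the sample XML in the question; the root element name authxml is an assumption taken from the XPath used in the query, and the sample is trimmed to a few fields.

```python
# Sketch: shredding the sample XML into flat rows, analogous to the
# chained XMLTABLE calls (header -> authors -> article_item).
# CDATA sections are unwrapped automatically by the parser, just as
# Oracle's SQL/XML does.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<authxml>
  <generation_date><![CDATA[17/11/2015]]></generation_date>
  <generated_by><![CDATA[Admin Admin]]></generated_by>
  <authors>
    <author><![CDATA[user author]]></author>
    <articles>
      <article_item>
        <article_id><![CDATA[123456]]></article_id>
        <publication><![CDATA[Al Bayan]]></publication>
      </article_item>
    </articles>
  </authors>
</authxml>"""

def shred(xml_text):
    """Return one flat dict per article_item, like the joined XMLTABLE row."""
    root = ET.fromstring(xml_text)
    rows = []
    for authors in root.findall("authors"):                    # level 'y'
        for item in authors.findall("articles/article_item"):  # level 'z'
            rows.append({
                "gen_date": root.findtext("generation_date"),  # level 'x'
                "author": authors.findtext("author"),
                "article_id": item.findtext("article_id"),
                "publication": item.findtext("publication"),
            })
    return rows

rows = shred(SAMPLE)
```

    As with the nested XMLTABLEs, each level of the loop handles one repeating section, so several <authors> blocks or several <article_item> entries simply produce more rows.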

  • Normalize the names in a huge table using UTL_MATCH

    Hello

    I have a large table (350 million records) with a "full name" column.

    This column has a few typos, so I have to 'normalise' the data (only for this column) using UTL_MATCH.JARO_WINKLER_SIMILARITY.

    I did some tests with a small table, and it works to show the similar names:

    SELECT b.sid, b.name FROM typotable a, typotable b WHERE utl_match.jaro_winkler_similarity(a.name, b.name) BETWEEN 85 AND 99 AND a.rowid > b.rowid;


    But:


    (1) The test table was small; using this code directly on the 350-million-row table would take ages... What can be done about that?


    (2) This only shows the similar names. How can I update the table so that, after finding the similarities, one of them is chosen as the single value for each name?




    Thank you

    1590733 wrote:

    Yes, I get your point. The thing is that there are no "correct" names available and the original table is huge, so here is what I thought:

    - Create a secondary NAMES table with unique names. These names would be generated by matching the similar values to one of them (always the same one, even if it is not the best fit). This should be equivalent to your "correct" table.

    - Run the cleaning procedure to update the records.

    How can I create this secondary NAMES table? (The 'gender' column does not matter at all; only the 'name' must be fixed.)

    Thanks for your help

    Well, you need to determine the logic that would pick one of the incorrect names over the others.  In its current form, you can easily get two incorrect values having the same match value.  But then you must also consider what constitutes a 'group' of values, so that you can pick the best one in the group.  The match alone is not enough to create groups.

    Example:

    SQL> ed
    Wrote file afiedt.buf

    select a.fname as fname1, b.fname as fname2
         , utl_match.jaro_winkler_similarity(a.fname, b.fname) as match
    from   typotable a
           join typotable b on (a.fname != b.fname)
    where  utl_match.jaro_winkler_similarity(a.fname, b.fname) >= 85
    order by 1, 3 desc

    SQL> /

    FNAME1     FNAME2          MATCH
    ---------- ---------- ----------
    FROCEN     FROZEN             92
    FROCEN     FROZIN             92
    FROZEN     FROZIN             93
    FROZEN     FROCEN             92
    FROZEN     FROZIN             93
    FROZEN     FROCEN             92
    FROZIN     FROZEN             93
    FROZIN     FROCEN             92
    WHIPLASH   WIPLASH            96
    WIPLASH    WHIPLASH           96

    10 rows selected.

    As you can see, FROCEN, for example, has two possible variants, both with a match of 92.  The same goes for the others.

    However, you could start hacking things around (and it really is a hack) to get something like:

    SQL> ed
    Wrote file afiedt.buf

    with t as (
      select a.fname as fname1, b.fname as fname2
           , utl_match.jaro_winkler_similarity(a.fname, b.fname) as match
      from   typotable a
             join typotable b on (a.fname != b.fname)
      where  utl_match.jaro_winkler_similarity(a.fname, b.fname) >= 85
    )
    , c as (
      select fname1, greatest(fname1, fname2) as fname2, match
           , (select count(*)
              from   t t2
              where  t2.fname2 = t.fname2
              and    t2.fname1 != t.fname1
             ) as cnt
      from   t
    )
    , r as (
      select fname1, fname2, match, cnt
           , row_number() over (partition by fname1 order by cnt desc, match desc) as rn
      from   c
    )
    select fname1, fname2
    from   r
    where  rn = 1
    order by 1

    SQL> /

    FNAME1     FNAME2
    ---------- ----------
    FROCEN     FROZEN
    FROZEN     FROZEN
    FROZEN     FROZEN
    FROZIN     FROZIN
    WHIPLASH   WIPLASH
    WIPLASH    WIPLASH

    6 rows selected.

    But then it depends on your data as to whether it will work in all circumstances.
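    The same pick-a-canonical-spelling idea can be sketched outside the database. This is a hedged Python analog, not the Oracle solution: difflib's SequenceMatcher ratio stands in for UTL_MATCH.JARO_WINKLER_SIMILARITY (which is Oracle-only and scores differently, hence a lower threshold of 80), and the sample names come from the thread's example.

```python
# Sketch of the grouping hack in Python. For each name we collect the
# variants above a similarity threshold, then pick one canonical
# spelling: the variant that matches the most other rows, breaking ties
# by higher similarity and then by greatest() -- mirroring the
# row_number() over (partition by fname1 order by cnt desc, match desc)
# trick in the SQL above.
from difflib import SequenceMatcher

def similarity(a, b):
    # 0..100 scale, a stand-in for Jaro-Winkler, not identical to it
    return round(SequenceMatcher(None, a, b).ratio() * 100)

def canonicalize(names, threshold=80):
    pairs = [(a, b, similarity(a, b))
             for a in names for b in names
             if a != b and similarity(a, b) >= threshold]
    # how many rows map onto each candidate spelling (the 'cnt' column)
    votes = {}
    for _, b, _ in pairs:
        votes[b] = votes.get(b, 0) + 1
    result = {}
    for a in names:
        cands = [(votes[b], m, b) for x, b, m in pairs if x == a]
        if cands:
            _, _, best = max(cands)        # most votes, then best match
            result[a] = max(a, best)       # 'greatest' tie-break, as in the SQL
        else:
            result[a] = a                  # no similar variant: keep as-is
    return result

names = ["FROZEN", "FROCEN", "FROZIN", "WHIPLASH", "WIPLASH"]
canon = canonicalize(names)
```

    Note that, exactly as the answer warns, the mapping need not be transitive: two misspellings can elect different winners, so the result still depends on your data.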

  • Cascading two Build Array functions - what happens?

    I found the code shown in the attachment in a very complex VI that I'm rehabilitating.  Could someone explain what is happening with the "cascaded" Build Array functions?

    I see that the first Build Array function builds a 1D array of integers, which it passes to the second Build Array function, which in turn shows a 2D array of integers; but since there is no element or second array input, I don't see how it works.

    This is part of a SubVI, which is not inside a loop.

    All the examples have at least two inputs.  There is no option to concatenate inputs when there is only a single input.  There is an implicit operation here that I don't understand.

    Can someone explain how it works?

    You're taking a scalar value and creating a 1D array with exactly 1 element.

    Then you're taking that 1D array and building it into a 2D array, with exactly 1 element.

    Since you can't concatenate something with nothing, the only logical mode for Build Array is to build an array of the next larger size.

    You could add a third, and then you'd have a 3D array with exactly 1 element.  And so on.
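    In a text language the behavior described above can be sketched as follows. This is only an analog for illustration: LabVIEW wires Build Array graphically, and a single-input Build Array simply wraps its input in an array one dimension larger.

```python
# Analog of cascaded single-input Build Array nodes: each call wraps
# its inputs in a new outer array, so a scalar becomes a 1-element 1D
# array, that becomes a 1-element 2D array, and so on.
def build_array(*elements):
    """Single- or multi-input Build Array in 'build' mode: wrap the
    inputs in a new outer array (lists stand in for LabVIEW arrays)."""
    return list(elements)

scalar = 7
one_d = build_array(scalar)    # 1D array with exactly 1 element
two_d = build_array(one_d)     # 2D array with exactly 1 element
three_d = build_array(two_d)   # a third cascade: 3D, still 1 element
```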

  • How to make the Search 1D Array function case-insensitive for an array of strings

    How do I make the Search 1D Array function case-insensitive for an array of strings?

    Hi Karine,

    convert both (the array and the search string) to lowercase before using this function...
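    The suggested workaround looks like this in a text language (an illustrative analog only; the real Search 1D Array is a LabVIEW node, and the function name here is made up for the sketch):

```python
# Lowercase both the array and the search string before searching,
# which makes the lookup case-insensitive.
def search_1d_array_ci(strings, needle):
    """Return the index of needle in strings, ignoring case; -1 if
    absent (Search 1D Array also returns -1 when nothing is found)."""
    lowered = [s.lower() for s in strings]
    try:
        return lowered.index(needle.lower())
    except ValueError:
        return -1

idx = search_1d_array_ci(["Alpha", "Beta", "Gamma"], "beta")
```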
