Optimizing a given query

Hi all

I have a table, CST_ITEM_COSTS, with the columns MATERIAL_COST, MATERIAL_OVERHEAD_COST, RESOURCE_COST, OUTSIDE_PROCESSING_COST, OVERHEAD_COST, INVENTORY_ITEM_ID,
ORGANIZATION_ID, LAST_UPDATE_DATE and COST_TYPE_ID. I want to take the data in this table and fill a table, items_d, that has the columns COST, COST_ELEMENT, LAST_UPDATE_DATE, INVENTORY_ITEM_ID, COST_TYPE_ID and ORGANIZATION_ID. What is stored as one row in cst_item_costs should be split into 5 rows, each carrying INVENTORY_ITEM_ID,
ORGANIZATION_ID, LAST_UPDATE_DATE and COST_TYPE_ID. In row 1 the cost is the MATERIAL_COST value and the cost element is hardcoded as 'MATERIAL'; row 2 has the MATERIAL_OVERHEAD_COST value with the cost element hardcoded as 'MATERIAL OVERHEAD'; row 3 has the RESOURCE_COST value with cost element 'RESOURCE'; and likewise for rows 4 and 5 (OUTSIDE_PROCESSING_COST / 'OUTSIDE PROCESSING' and OVERHEAD_COST / 'OVERHEAD'). Here's the code: is it possible to optimize it, to reduce its length or improve its performance?

SELECT
  "PIVOT"."INVENTORY_ITEM_ID$1"  "INVENTORY_ITEM_ID",
  "PIVOT"."ORGANIZATION_ID_1$1"  "ORGANIZATION_ID",
  "PIVOT"."COST_TYPE_ID_1$1"     "COST_TYPE_ID_1",
  "PIVOT"."LAST_UPDATE_DATE_1$1" "LAST_UPDATE_DATE",
  "PIVOT"."COST$1"               "COST",
  "PIVOT"."COST_ELEMENT$1"       "COST_ELEMENT"
FROM
  (SELECT
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
     END "INVENTORY_ITEM_ID$1",
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
     END "ORGANIZATION_ID_1$1",
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
     END "COST_TYPE_ID_1$1",
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
     END "LAST_UPDATE_DATE_1$1",
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."MATERIAL_COST"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."MATERIAL_OVERHEAD_COST"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."RESOURCE_COST"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."OUTSIDE_PROCESSING_COST"
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."OVERHEAD_COST"
     END "COST$1",
     CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN 'MATERIAL'
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN 'MATERIAL OVERHEAD'
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN 'RESOURCE'
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN 'OUTSIDE PROCESSING'
          WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN 'OVERHEAD'
     END "COST_ELEMENT$1"
   FROM
     (SELECT 0 "ID" FROM DUAL
      UNION ALL
      SELECT 1 "ID" FROM DUAL
      UNION ALL
      SELECT 2 "ID" FROM DUAL
      UNION ALL
      SELECT 3 "ID" FROM DUAL
      UNION ALL
      SELECT 4 "ID" FROM DUAL) "PIVOT_ROW_GENERATOR",
     (SELECT
        "CST_ITEM_COSTS"."MATERIAL_COST"           "MATERIAL_COST",
        "CST_ITEM_COSTS"."MATERIAL_OVERHEAD_COST"  "MATERIAL_OVERHEAD_COST",
        "CST_ITEM_COSTS"."RESOURCE_COST"           "RESOURCE_COST",
        "CST_ITEM_COSTS"."OUTSIDE_PROCESSING_COST" "OUTSIDE_PROCESSING_COST",
        "CST_ITEM_COSTS"."OVERHEAD_COST"           "OVERHEAD_COST",
        "CST_ITEM_COSTS"."INVENTORY_ITEM_ID"       "INVENTORY_ITEM_ID",
        "CST_ITEM_COSTS"."ORGANIZATION_ID"         "ORGANIZATION_ID",
        "CST_ITEM_COSTS"."COST_TYPE_ID"            "COST_TYPE_ID",
        "CST_ITEM_COSTS"."LAST_UPDATE_DATE"        "LAST_UPDATE_DATE"
      FROM "CST_ITEM_COSTS" "CST_ITEM_COSTS") "PIVOT_SOURCE") "PIVOT"

Thanks for your help.

This is the simplified and formatted code:

SELECT c.inventory_item_id inventory_item_id$1,
       c.organization_id organization_id_1$1,
       c.cost_type_id cost_type_id_1$1,
       c.last_update_date last_update_date_1$1,
       CASE
         WHEN pivot_row_generator.id = 0 THEN c.material_cost
         WHEN pivot_row_generator.id = 1 THEN c.material_overhead_cost
         WHEN pivot_row_generator.id = 2 THEN c.resource_cost
         WHEN pivot_row_generator.id = 3 THEN c.outside_processing_cost
         WHEN pivot_row_generator.id = 4 THEN c.overhead_cost
       END cost$1,
       CASE
         WHEN pivot_row_generator.id = 0 THEN 'MATERIAL'
         WHEN pivot_row_generator.id = 1 THEN 'MATERIAL OVERHEAD'
         WHEN pivot_row_generator.id = 2 THEN 'RESOURCE'
         WHEN pivot_row_generator.id = 3 THEN 'OUTSIDE PROCESSING'
         WHEN pivot_row_generator.id = 4 THEN 'OVERHEAD'
       END cost_element$1
FROM   cst_item_costs c,
      (SELECT level - 1 id
       FROM   dual
       CONNECT BY level <= 5) pivot_row_generator
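
On Oracle 11g and later, the same transformation could also be written with the built-in UNPIVOT clause; a sketch, assuming the element names used above (INCLUDE NULLS keeps rows whose cost column is NULL, matching the CASE version's behavior):

```sql
SELECT inventory_item_id,
       organization_id,
       cost_type_id,
       last_update_date,
       cost,
       cost_element
FROM   cst_item_costs
UNPIVOT INCLUDE NULLS
       (cost FOR cost_element IN
          (material_cost           AS 'MATERIAL',
           material_overhead_cost  AS 'MATERIAL OVERHEAD',
           resource_cost           AS 'RESOURCE',
           outside_processing_cost AS 'OUTSIDE PROCESSING',
           overhead_cost           AS 'OVERHEAD'))
```

This reads the table only once and avoids the cartesian join against the row generator.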

Please put {code} tags around your code when posting, so it keeps its formatting.

Tags: Database

Similar Questions

  • Question about the query optimizer

    This year in my database course, we were given the following table
    CREATE TABLE TASKS
    (
        "ID" NUMBER NOT NULL ENABLE,
        "START_DATE" DATE,
        "END_DATE" DATE,
        "DESCRIPTION" VARCHAR2(50 BYTE)
    ) ;
    with approximately 1.5 million rows. In addition, we were given the following query:
    SELECT START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    And the Index:
    create index blub on Tasks (start_date asc);
    The main exercise was to speed up queries with indexes. Because the query reads all the data, the optimizer ignores the index and just does a full table scan.
    Here the QEP:
    ----------------------------------------------------------------------------
    | Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |       |  9343 | 74744 |  3423   (6)| 00:00:42 |
    |   1 |  SORT GROUP BY     |       |  9343 | 74744 |  3423   (6)| 00:00:42 |
    |   2 |   TABLE ACCESS FULL| TASKS |  1981K|    15M|  3276   (2)| 00:00:40 |
    ----------------------------------------------------------------------------
    Then we tried to force it to use the index with this query:
    ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
    
    SELECT /*+ INDEX(TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    but again it ignored the index. The optimizer guide makes clear that whenever all the data in a table will be used, a full scan must be done.
    So we tricked it into a quick index range scan with this query:
    create or replace function bla
    return date deterministic is
      ret date;
    begin
      select MIN(start_date) into ret from Tasks;
      return ret;
    end bla;
    
    ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
    
    SELECT /*+ INDEX(TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    where start_date >= bla
    GROUP BY START_DATE
    ORDER BY START_DATE; 
    Now we got the following QEP:
    -----------------------------------------------------------------------------
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |      |     1 |     8 |     3   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     8 |     3   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN   | BLUB |     1 |     8 |     3   (0)| 00:00:01 |
    -----------------------------------------------------------------------------
    So it used the index.

    Now to my two questions:

    1. Why does the optimizer always do a full scan (the explanation in the optimizer documentation is a bit unsatisfactory)?
    2. Given the difference between the costs (FS: 3276 vs. IRS: 3) and the time the system needs (FS: 9.6 s vs. IRS: 4.45 s), why did the optimizer reject the clearly better plan?

    Thanks in advance,

    Kai Gödde

    Edited by: Kai Gödde on May 30, 2011 18:54

    Edited by: Kai Gödde on May 30, 2011 18:56

    The reason Oracle is full-scanning the table for this query:

    SELECT START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    

    and using the index for:

    SELECT /*+ INDEX(TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    where start_date >= bla
    GROUP BY START_DATE
    ORDER BY START_DATE;
    

    has to do with the (possible) NULL values in the table. Note that the query with a predicate on start_date would probably have used the index even without the hint.

    The optimizer does not know that there is a start_date value in every row of the table, and the GROUP BY expression will include NULL values; but because you count start_date (meaning a count of the non-null values of the expression), the count itself will be zero for that group. For example:

    SQL> with t as (
      2     select trunc(sysdate) dt from dual union all
      3     select trunc(sysdate) dt from dual union all
      4     select trunc(sysdate-1) dt from dual union all
      5     select trunc(sysdate-1) dt from dual union all
      6     select to_date(null) from dual)
      7  select dt, count(dt) from t
      8  group by dt;
    
    DT           COUNT(DT)
    ----------- ----------
                         0
    29-MAY-2011          2
    30-MAY-2011          2
    

    Because Oracle does not create an index entry for a row whose index-key columns are all NULL, the optimizer is forced to full-scan the table to make sure it returns all rows. In the query with a predicate on start_date, the optimizer knows that anything with start_date >= bla must be non-null.

    To get your first query to use the index, you must either declare start_date as NOT NULL (which implies it is a mandatory field), or, if NULL values are possible but you do not care about them, add a predicate like:

    where start_date is not null
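
    A sketch of both options against the TASKS table from the post:

    -- Option 1: declare the column mandatory, so every row appears in the index
    ALTER TABLE tasks MODIFY (start_date NOT NULL);

    -- Option 2: keep NULLs possible but exclude them explicitly,
    -- which lets the optimizer satisfy the query from the index alone
    SELECT start_date, COUNT(start_date)
    FROM   tasks
    WHERE  start_date IS NOT NULL
    GROUP  BY start_date
    ORDER  BY start_date;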
    

    John

  • How to optimize the select query executed in a cursor for loop?

    Hi friends,

    I run the code below and clocked at the same time for each line of code using DBMS_PROFILER.
    CREATE OR REPLACE PROCEDURE TEST
    AS
       p_file_id              NUMBER                                   := 151;
       v_shipper_ind          ah_item.shipper_ind%TYPE;
       v_sales_reserve_ind    ah_item.special_sales_reserve_ind%TYPE;
       v_location_indicator   ah_item.exe_location_ind%TYPE;
    
       CURSOR activity_c
       IS
          SELECT *
            FROM ah_activity_internal
           WHERE status_id = 30
             AND file_id = p_file_id;
    BEGIN
       DBMS_PROFILER.start_profiler ('TEST');
    
       FOR rec IN activity_c
       LOOP
          SELECT DISTINCT shipper_ind, special_sales_reserve_ind, exe_location_ind
                     INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
                     FROM ah_item --464000 rows in this table
                    WHERE item_id_edw IN (
                             SELECT item_id_edw
                               FROM ah_item_xref --700000 rows in this table
                              WHERE item_code_cust = rec.item_code_cust
                                AND facility_num IN (
                                       SELECT facility_code
                                         FROM ah_chain_div_facility --17 rows in this table
                                        WHERE chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
                                          AND div_id = (SELECT div_id
                                                          FROM ah_div --8 rows in this table 
                                                         WHERE division = rec.division)));
       END LOOP;
    
       DBMS_PROFILER.stop_profiler;
    EXCEPTION
       WHEN NO_DATA_FOUND
       THEN
          NULL;
       WHEN TOO_MANY_ROWS
       THEN
          NULL;
    END TEST;
    The SELECT inside the cursor FOR loop took 773 seconds.
    I tried to use BULK COLLECT instead of a cursor FOR loop, but it did not help.
    When I ran the SELECT on its own with a sample value, it returned results in a fraction of a second.

    All tables have primary key indexes.
    Any ideas what can be done to make this code more efficient?

    Thank you
    Raj.
    DECLARE
      v_chain_id ah_chain_div_facility.chain_id%TYPE := ah_internal_data_pkg.get_chain_id (p_file_id);
    
      CURSOR cur_loop IS
      SELECT * -- better off explicitly specifying columns
      FROM ah_activity_internal aai,
      (SELECT DISTINCT aix.item_code_cust, ad.division, ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
         FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
        WHERE ai.item_id_edw = aix.item_id_edw
          AND aix.facility_num = acdf.facility_code
          AND acdf.chain_id = v_chain_id
          AND acdf.div_id = ad.div_id) d
      WHERE aai.status_id = 30
        AND aai.file_id = p_file_id
        AND d.item_code_cust = aai.item_code_cust
        AND d.division = aai.division;         
    
    BEGIN
      FOR rec IN cur_loop LOOP
        ... DO your stuff ...
      END LOOP;
    END;  
    

    Edited by: Dave Hemming on December 4, 2008 09:17

  • How to do a validation based on the SQL query?

    Hello

    I have a requirement to perform validation on a field (messageTextInput) in my OAF page.

    When I click the Apply button, the value in this field should be validated against a SQL query (for example: the value should be NOT IN (select value1 from table1)).

    Help, please.

    Best regards

    Joe

    1. Create a SQL-query-based VO, XXVO. For example, its SQL query selects from xx_table.

    2. Trap the Apply button's click event in the controller and invoke an AM method, passing the value entered by the user in the field (for example, the value VAL1).

    3. In the AM method, get a handle on the VO, set the whereClause, and execute the query.

    OAViewObjectImpl vo = (OAViewObjectImpl)findViewObject("XXVO1"); // XXVO1 is the instance name of the VO XXVO above

    vo.executeEmptyRowSet ();

    vo.setWhereClause (null);

    vo.setWhereClause("value = '" + VAL1 + "'");

    vo.executeQuery ();

    if (vo.getRowCount() > 0)

    then a record exists with the value VAL1; perform the required action.

    I hope this helps.

  • How to get the timestamp per minute for the given interval

    Hello

    I have a table with start-time and end-time columns. I need to split the given range into one-minute intervals and display the results.

    create table min_data(objectid varchar2(20),starttime timestamp,endtime timestamp,duration number);
    
    
    SET DEFINE OFF;
    Insert into MIN_DATA Values ('U1_B011_P006_InvA', TO_DATE('06/23/2015 02:42:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('06/23/2015 02:46:00', 'MM/DD/YYYY HH24:MI:SS'), 5);
    Insert into MIN_DATA Values ('U1_B011_P006_InvA', TO_DATE('06/23/2015 12:43:00', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('06/23/2015 12:44:00', 'MM/DD/YYYY HH24:MI:SS'), 2);
    COMMIT;
    
    
    
    
    

    My expected output should be something like this, where INT_TIMESTAMP is the timestamp calculated for each minute of the given interval (between the start and end times):

    INT_TIMESTAMP      OBJECTID           STARTTIME          ENDTIME
    23/06/2015 02:42   U1_B011_P006_InvA  23/06/2015 02:42   23/06/2015 02:46
    23/06/2015 02:43   U1_B011_P006_InvA  23/06/2015 02:42   23/06/2015 02:46
    23/06/2015 02:44   U1_B011_P006_InvA  23/06/2015 02:42   23/06/2015 02:46
    23/06/2015 02:45   U1_B011_P006_InvA  23/06/2015 02:42   23/06/2015 02:46
    23/06/2015 02:46   U1_B011_P006_InvA  23/06/2015 02:42   23/06/2015 02:46
    23/06/2015 12:43   U1_B011_P006_InvA  23/06/2015 12:43   23/06/2015 12:44
    23/06/2015 12:44   U1_B011_P006_InvA  23/06/2015 12:43   23/06/2015 12:44

    I wrote a query that works for one set of intervals.

    With get_data AS(
    SELECT   a.*,
             starttime -1/1440 v_s_date,
             endtime v_e_date
    FROM min_data a
    where duration=5)
    SELECT v_s_date + ((1 / 1440) * DECODE(LEVEL, 1, 1, LEVEL)) int_timestamp, objectid,starttime,endtime
              FROM get_data d
             WHERE MOD(LEVEL, 1) = 0
                OR LEVEL = 1
            CONNECT BY LEVEL <= (v_e_date - v_s_date) * 1440;
    
    

    Please suggest a SQL query that gives me the minute timestamps between the intervals.

    Hello

    The following query works for any number of intervals

    SELECT starttime + ((lvl - 1) / 1440) int_timestamp, objectid, starttime, endtime
    FROM   (SELECT LEVEL lvl FROM dual CONNECT BY LEVEL <= 10),
           (SELECT * FROM min_data)
    WHERE  starttime + ((lvl - 1) / 1440) BETWEEN starttime AND endtime
    ORDER  BY starttime, endtime, lvl;
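
    The fixed generator above caps the expansion at 10 minutes per row; if the ranges can be longer, a per-row generator is possible with a hierarchical query. A sketch, assuming (objectid, starttime) uniquely identifies a row in min_data (the PRIOR SYS_GUID() condition restarts the hierarchy for each source row):

    SELECT CAST(starttime AS DATE) + (LEVEL - 1) / 1440 AS int_timestamp,
           objectid, starttime, endtime
    FROM   min_data
    CONNECT BY LEVEL <= (CAST(endtime AS DATE) - CAST(starttime AS DATE)) * 1440 + 1
           AND PRIOR objectid = objectid
           AND PRIOR starttime = starttime
           AND PRIOR SYS_GUID() IS NOT NULL
    ORDER  BY objectid, starttime, int_timestamp;

    The CASTs to DATE are needed because subtracting two TIMESTAMPs yields an INTERVAL, while DATE arithmetic yields a number of days that can be multiplied by 1440 to get minutes.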

    Regards

    Salim

  • Clarification of the SQL query in 2 day + Guide APEX

    I worked through the Oracle Database Express Edition 2 Day + Application Express Developer's Guide, and am trying to decipher the SQL query in Chapter 4 (building your app).

    The code is:

    SELECT d.DEPARTMENT_ID,
           d.DEPARTMENT_NAME,
           (select count(*) from oehr_employees where department_id = d.department_id)
              "Number of Employees",
           substr(e.first_name, 1, 1) || '. ' || e.last_name "Manager Name",
           c.COUNTRY_NAME "Location"
    FROM   OEHR_DEPARTMENTS d,
           OEHR_EMPLOYEES e,
           OEHR_LOCATIONS l,
           OEHR_COUNTRIES c
    WHERE  d.LOCATION_ID = l.LOCATION_ID
    AND    l.COUNTRY_ID = c.COUNTRY_ID
    AND    e.department_id = d.DEPARTMENT_ID
    AND    d.manager_id = e.employee_id
    AND    instr (upper (d.department_name), upper (nvl (:P2_REPORT_SEARCH, d.department_name))) > 0

    I don't know exactly what is happening in the last line. I think I understand what the individual functions do, but I'm not clear on the use of the :P2_REPORT_SEARCH string.

    What does this string do? Is this code simply checking that d.department_name isn't NULL?

    I have SQL experience but am not very familiar with Oracle's PL/SQL implementation. Can someone please give me a brief breakdown of what that check is doing in the context of the overall query? The application seems to work even when the conditional statement is not included.

    Thank you.

    2899145 wrote:

    Thanks for the reply. I apologize if the information I added was incomplete. The code came from the 2 Day + Application Express (version 4.2) Developer's Guide.

    In section 4, 'Building Your Own Application' ( https://docs.oracle.com/cd/E37097_01/doc.42/e35122/build_app.htm#TDPAX04000 ), they describe creating a report

    page that includes the manager_id and location_id. The SQL query I pasted above pulls data from other tables to substitute the real 'Manager Name' and 'Location'

    for the corresponding ID values. That makes sense, and the part of the SQL query that explicitly does it makes sense.

    However, given that the document is a guide for APEX development, I assume the condition:

    AND instr (upper (d.department_name), upper (nvl (:P2_REPORT_SEARCH,d.department_name))) > 0

    does something valuable, and I just don't recognize what the value is.

    From a practical point of view, why would I need to include this conditional statement? What does it add to the application?

    Looking at the guide in question, it is clear that the

    AND instr(upper(d.department_name),upper(nvl(:P2_REPORT_SEARCH,d.department_name)))>0
    

    line is completely unnecessary in the context of this tutorial, and it can be removed. The search on the tutorial app's page is implemented using an interactive report filter rather than a P2_REPORT_SEARCH item, which does not seem to exist at all. (It's a quirk of APEX that bind variable references to non-existent items are silently replaced with NULL rather than raising exceptions.) I thought it might be legacy code from a version of the tutorial prior to the introduction of interactive reports in APEX 3.1, but I can't find explicit instructions to create such a filter item in the 3.0 tutorial. I guess it must have been automatically generated by the application wizard when creating a standard report page.

    If you want to see the effect it would have (as described in the post above), leave it in the report source, add a P2_REPORT_SEARCH text item and a Submit button on page 2, and experiment by entering different values in the item and clicking the Submit button...

  • RETURNING CLAUSE ON AN UPDATE QUERY

    I have a query written in Postgres. It picks up records from the job_information table whose status is supplied by the application (e.g. 'READY_TO_RUN'), limited to a record count supplied by the application (e.g. 100); it then updates those rows to a new status supplied by the application (e.g. 'ACQUIRED') and returns the updated set (that is, the full job_information data for the rows just updated) for the application's use.

    Can someone give me advice on translating this to Oracle? Thank you!!

    Query in Postgres

    UPDATE job_information AS J1
    SET status=?
    FROM
      (SELECT job_name,
        job_group,
        created_date
      FROM job_information
      WHERE status           =?
      AND CURRENT_TIMESTAMP >= scheduled_execution_time
      ORDER BY scheduled_execution_time limit ?
      ) AS J2
    WHERE J1.job_name   = J2.job_name
    AND J1.job_group    = J2.job_group
    AND J1.created_date = J2.created_date RETURNING *;
    

    Example of a query (in postgres):

    UPDATE job_information AS J1
    SET status= 'ACQUIRED'
    FROM
      (SELECT job_name,
        job_group,
        created_date
      FROM job_information
      WHERE status           = 'READY_TO_RUN'
      AND CURRENT_TIMESTAMP >= scheduled_execution_time
      ORDER BY scheduled_execution_time limit 100
      ) AS J2
    WHERE J1.job_name   = J2.job_name
    AND J1.job_group    = J2.job_group
    AND J1.created_date = J2.created_date RETURNING *;
    

    The Oracle SQL query I wrote is not working:

    UPDATE JOB_INFORMATION SET STATUS=
    (
    WITH J2 as (
                            select job_name, job_group, created_date from (SELECT job_name, job_group, created_date FROM job_information WHERE status= :b and current_timestamp >= scheduled_execution_time order by scheduled_execution_time ) where rownum<= :c
                )
    SELECT distinct :a FROM JOB_INFORMATION J1, J2 WHERE J1.job_name = J2.job_name AND J1.job_group = J2.job_group AND J1.created_date = J2.created_date
    )
    RETURNING * FROM JOB_INFORMATION BULK COLLECT INTO SOMETHING ;
    
    create or replace package test_pack
    as
      type r_tab is record (
        job_name test.job_name%type,
        status   test.status%type);
      type t_tab is table of r_tab;
      function test_func (v_status_o VARCHAR, v_status_i VARCHAR) -- USE VARCHAR2
        return t_tab pipelined;
    end;
    /
    create or replace package body test_pack as
      function test_func (v_status_o VARCHAR2, v_status_i VARCHAR2) return t_tab pipelined as
        pragma autonomous_transaction;
        v_tab t_tab;
      begin
        update test
        set status = v_status_o
        where job_name in (select job_name from test where status = :v_status_i)
        returning job_name, status
        bulk collect into v_tab;
        commit;
        for i in 1 .. v_tab.count loop
          pipe row (v_tab(i));
        end loop;
      end;
    end;
    /

    REMOVE the colon before the parameters and use the same data types.
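
    With the colons removed, the pipelined function can then be queried via TABLE(); a hypothetical call, using the status values from the Postgres example:

    SELECT *
    FROM   TABLE(test_pack.test_func('ACQUIRED', 'READY_TO_RUN'));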

  • Problem with the simple query.

    Hi all

    I am facing a problem with the query below:

    Select A.COL1, A.COL2,
           B.COL1, B.COL2
    FROM   TABLE1 A,
           TABLE1 B
    WHERE  A.header = '123'
    AND    B.header = '123'
    AND    NVL(A.COL6, 'ABC') = 'ABC'
    AND    NVL(B.COL6, 'DEF') = 'DEF'

    Basically, my requirement is: I have only one table, TABLE1 here, which has two rows (for the same header): one with 'ABC' in col6 and another with 'DEF'. Table1 has two columns (col1, col2) that should be displayed for both rows.

    When the header has both records in table1, the query above works. But if there is no record for one of them, e.g. for some header there is a record with 'ABC' in col6 only, then my query does not work because there is no record with 'DEF' in col6. I still want the query to fetch output in that case (with b.col1 and b.col2 as null values).

    Could you please suggest how I can get the 4 columns?

    Thanks in advance

    Kind regards

    UVA.

    Try placing the outer-join condition on the column analytical_criterion_code, as in:

    and nvl (AUDIT.analytical_criterion_code (+), 'AUDIT2') = 'AUDIT2'

    .

    .

    and nvl (TRANS.analytical_criterion_code (+), 'TRANS2') = 'TRANS2'

    In the query below, based on the sample data you gave in post #1, although there is no 'DEF' value for col6, the row is still fetched with b.col1, b.col2 and b.col6 as NULL values because of the outer-join condition on b.col6 (+). Try removing the (+) sign from b.col6 and test.

    with t as (
      select 111 col1, 'aaa' col2, 'ABC' col6, 123 header from dual union all
      select 222, 'bbb', 'DEF', 123 from dual
    ),
    q as (select 123 header from dual)
    select a.col1, a.col2, a.col6,
           b.col1, b.col2, b.col6,
           q.header
    from   t a,
           t b,
           q
    where  a.col6 (+) = 'ABC'
    and    b.col6 (+) = 'DEF'
    and    q.header = a.header (+)
    and    q.header = b.header (+)

  • Need help on query optimization

    Hi experts,

    I have the following query, which takes more than 30 minutes to retrieve data. We use Oracle 11g.
    SELECT B.serv_item_id,
      B.document_number,
      DECODE(B.activity_cd,'I','C',B.activity_cd) activity_cd,
      DECODE(B.activity_cd, 'N', 'New', 'I', 'Change', 'C', 'Change', 'D', 'Disconnect', B.activity_cd ) order_activity,
      b.due_date,
      A.order_due_date ,
      A.activity_cd order_activty_cd
    FROM
      (SELECT SRSI2.serv_item_id ,
        NVL(to_date(TO_CHAR(asap.PKG_GMT.sf_gmt_as_local(TASK2.revised_completion_date),'J'),'J'), SR2.desired_due_date) order_due_date ,
        'D' activity_cd
      FROM asap.serv_req_si SRSI2,
        asap.serv_req SR2,
        asap.task TASK2
      WHERE SRSI2.document_number = 10685440
      AND SRSI2.document_number   = SR2.document_number
      AND SRSI2.document_number   = TASK2.document_number (+)
      AND SRSI2.activity_cd       = 'D'
      AND TASK2.task_type (+)     = 'DD'
      ) A ,
      (SELECT SRSI1.serv_item_id,
        SR1.document_number,
        SRSI1.activity_cd,
        NVL(to_date(TO_CHAR(asap.PKG_GMT.sf_gmt_as_local(TASK1.revised_completion_date),'J'),'J'), SR1.desired_due_date) due_date
      FROM asap.serv_req_si SRSI1,
        asap.serv_req SR1,
        asap.task TASK1,
        asap.serv_req_si CURORD
      WHERE CURORD.document_number   = 10685440
      AND SRSI1.document_number      = SR1.document_number
      AND SRSI1.document_number     != CURORD.document_number
      AND SRSI1.serv_item_id         = CURORD.serv_item_id
      AND SRSI1.document_number      = TASK1.document_number (+)
      AND TASK1.task_type (+)        = 'DD'
      AND SR1.type_of_sr             = 'SO'
      AND SR1.service_request_status < 801
      AND SRSI1.activity_cd         IN ('I', 'C', 'N')
      ) B
    WHERE B.serv_item_id = A.serv_item_id;
    If I run the inline views (subqueries) A and B separately, each returns in a few seconds, but when I join the two it sometimes takes close to an hour. In my specific case, query A returns 52 records and query B returns only 120 records.

    To me, it looks like the optimizer fails to determine how much data each subquery will return. I feel I need to fool the optimizer with a workaround to get the result more quickly, but I'm not able to find one. If someone can shed some light on this, it would be really helpful.

    Thank you very much
    GAF

    Edited by: user780504 on August 7, 2012 02:16

    Published by: BluShadow on August 7, 2012 10:17
    addition of {noformat}
    {noformat} tags for readability and replace <> with != to circumvent forum issue. Please read {message:id=9360002}

    Perhaps using the /*+ materialize */ hint? See above.

    Regards,

    Etbin
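
    To make Etbin's suggestion concrete: one common workaround is to factor the two inline views into a WITH clause and materialize each one, so the optimizer joins two small pre-computed row sets instead of merging the views into one large join. This is only a sketch built from the query shape above (the view bodies are elided); it has not been run against this schema.

    ```sql
    -- Sketch: force each inline view to be computed first, then join the results.
    WITH a AS (
      SELECT /*+ materialize */ ...   -- body of inline view A, unchanged
    ),
    b AS (
      SELECT /*+ materialize */ ...   -- body of inline view B, unchanged
    )
    SELECT a.*, b.due_date
    FROM   a, b
    WHERE  b.serv_item_id = a.serv_item_id;
    ```

    With only 52 and 120 rows materialized, the final join should be trivial regardless of the join order the optimizer picks.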

  • A query optimization

    Hi all
    I tried to optimize the following query; what is the best way to tune it?
    ---code truncated
    Best regards
    Val

    Published by: debonair Valerie on December 27, 2011 04:02

    Discussion: HOW TO: Post a SQL statement tuning request - template posting

  • Extraction of a node in an XMLtype table - selection of the previous query

    Hey all,

    I work with an Oracle 11g R2 server and basically need to be able to parse and select nodes from an XMLType table. I spent hours scouring the net and reading the Oracle XML DB manual trying to find an appropriate solution to my problem, but I can't seem to identify the correct way to do it. I have some programming experience, but none with Oracle databases, SQL or XML in general, so forgive me if this is a trivial question.

    OK, so the question:

    I have a very simple XML file saved under catalog.xml and it is as follows:

    <catalog>
    <cd>
    <title>Hide your heart</title>
    <artist>Bonnie Tyler</artist>
    <country>UK</country>
    <company>CBS Records</company>
    <price>9.90</price>
    <year>1988</year>
    </cd>
    <cd>
    <title>Empire Burlesque</title>
    <artist>Bob Dylan</artist>
    <country>USA</country>
    <company>Columbia</company>
    <price>10.90</price>
    <year>1985</year>
    </cd>
    </catalog>

    Now, I want to be able to extract the title of the given cd a certain artist. So, for example, if the artist is "bob dylan", the title should be "empire burlesque".

    Now I created an XMLType table in Oracle as follows:

    CREATE TABLE binary OF XMLType XMLTYPE STORE AS BINARY XML;

    I then load my xml file into oracle by:

    INSERT INTO binary VALUES (XMLType(BFILENAME('XML_DIR', 'catalog.xml'), nls_charset_id('AL32UTF8')));

    So far so good.

    Now for the extraction part:

    First of all I've tried:

    SELECT extract(b.object_value, '/catalog/cd/title')
    FROM binary b
    WHERE existsNode(b.object_value, '/catalog/cd[artist="Bob Dylan"]') = 1;

    EXTRACT(B.OBJECT_VALUE,'/CATALOG/CD/TITLE')
    --------------------------------------------------------------------------------

    <title>Hide your heart</title>
    <title>Empire Burlesque</title>

    1 row selected.



    That did not do what I wanted, because the whole XML file was in one row, so I realized I had to split my XML into separate rows. To do that, I had to convert the cd nodes into a virtual table using the XMLSequence() and table() functions. These functions convert the two cd nodes returned by the extract() function into a virtual table consisting of two XMLType objects, each of which contains a single cd element.

    Second test:

    SELECT value(d)
    FROM binary b,
         table(xmlsequence(extract(b.object_value, '/catalog/cd'))) d
    WHERE existsNode(b.object_value, '/catalog/cd[artist="Bob Dylan"]') = 1;

    VALUE(D)
    --------------------------------------------------------------------------------

    <cd>
    <title>Hide your heart</title>
    <artist>Bonnie Tyler</artist>
    <country>UK</country>
    <company>CBS Records</company>
    <price>9.90</price>
    <year>1988</year>
    </cd>

    <cd>
    <title>Empire Burlesque</title>
    <artist>Bob Dylan</artist>
    <country>USA</country>
    <company>Columbia</company>
    <price>10.90</price>
    <year>1985</year>
    </cd>

    2 rows selected.


    That's better, because the result is now split into two different rows, so I should be able to apply a WHERE condition and then select the title by artist.

    However, this is where I'm stuck. I tried for literally hours, but I can't figure out how to use the results of the query above in my next one. So I tried to use a subquery like this:

    SELECT extract(sub1, '/cd/title')
    FROM
    (
    SELECT value(d)
    FROM binary b,
         table(xmlsequence(extract(b.object_value, '/catalog/cd'))) d
    ) sub1
    WHERE existsNode(sub1, '/cd[artist="Bob Dylan"]') = 1;

    However, sql * plus displays an error:

    ORA-00904: "SUB1": invalid identifier.

    I've tried dozens of variations of subqueries, but I simply can't get it to work.

    I heard you can do also do this using variables or pl/sql, but I don't know where to start.

    Any help would be greatly appreciated I tried everything at my disposal.

    This should help you get started!

    select banner as "Oracle version" from v$version where banner like 'Oracle%';
    
    create table otn5test(
      id number,
      data xmltype
    );
    
    insert into otn5test values (1, xmltype('
    <catalog>
    <cd>
    <title>Hide your heart</title>
    <artist>Bonnie Tyler</artist>
    <country>UK</country>
    <company>CBS Records</company>
    <price>9.90</price>
    <year>1988</year>
    </cd>
    <cd>
    <title>Empire Burlesque</title>
    <artist>Bob Dylan</artist>
    <country>USA</country>
    <company>Columbia</company>
    <price>10.90</price>
    <year>1985</year>
    </cd>
    </catalog>
    '));
    
    select otn5test.id, x.*
    from otn5test,
         xmltable('/catalog/cd[artist/text()="Bob Dylan"]' passing otn5test.data
         columns title varchar2(20) path 'title') x;
    
    select otn5test.id,
           xmlcast(xmlquery('/catalog/cd[artist/text()="Bob Dylan"]/title'
                   passing otn5test.data returning content)
           as varchar2(20)) from otn5test;
    
    drop table otn5test;
    
    sqlplus> @otn-5.sql
    
    Oracle version
    ------------------------------------------------------------------------------
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    
    Table created.
    
    1 row created.
    
         ID TITLE
    ---------- --------------------
          1 Empire Burlesque
    
         ID XMLCAST(XMLQUERY('/C
    ---------- --------------------
          1 Empire Burlesque
    
    Table dropped.
    
  • Count (*) with the nested query

    Hello
    I have a question about the count (*) with the nested query.

    I have a table T1 with these columns:
    C1 NUMBER
    C2 NUMBER
    C3 NUMBER
    C4 NUMBER
    C5 NUMBER

    (The type of each column is not relevant for the example.)

    This query:
    select C1, C2, C3, C4
    from T1
    group by C1, C2

    is not correct, because C3 and C4 are not specified in the GROUP BY expression.

    If you run this query:
    select count(*)
    from (select C1, C2, C3, C4
    from T1
    group by C1, C2)

    I don't get an error message (indeed, the result is the number of records).

    Why?

    Thank you.

    Best regards
    Luca

    because the optimizer rewrites it as

    SELECT     COUNT(*)
                  FROM   T1
              GROUP BY   C1, C2  
    

    G.

    Edited by: g. March 1, 2011 09:19

  • Beginner: query optimization

    Hello

    Given that two queries:

    1)

    select id_msisdn_a, id_msisdn_b from BH_LINK a
    where not exists
    (select 1 from bh_node b where b.id_mes = a.id_mes and b.id_msisdn = a.id_msisdn_a)
    and not exists
    (select 1 from bh_node b where b.id_mes = a.id_mes and b.id_msisdn = a.id_msisdn_b)
    group by id_msisdn_a, id_msisdn_b


    ------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name       | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     | Pstart| Pstop |
    ------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |            |  116K | 4990K |         |  320K (12) | 01:04:09 |       |       |
    |   1 |  HASH GROUP BY               |            |  116K | 4990K |   5928K |  320K (12) | 01:04:09 |       |       |
    |   2 |   NESTED LOOPS ANTI          |            |  116K | 4990K |         |  319K (12) | 01:03:53 |       |       |
    |   3 |    NESTED LOOPS ANTI         |            |   11M |  354M |         |  318K (12) | 01:03:45 |       |       |
    |   4 |     PARTITION RANGE ALL      |            |  556M |   10G |         |  285K  (2) | 00:57:07 |     1 |     6 |
    |   5 |      PARTITION HASH ALL      |            |  556M |   10G |         |  285K  (2) | 00:57:07 |     1 |     4 |
    |   6 |       TABLE ACCESS FULL      | BH_LINK    |  556M |   10G |         |  285K  (2) | 00:57:07 |     1 |    24 |
    |   7 |     PARTITION RANGE ITERATOR |            |   32M |  366M |         |     0  (0) | 00:00:01 |   KEY |   KEY |
    |   8 |      PARTITION HASH ITERATOR |            |   32M |  366M |         |     0  (0) | 00:00:01 |   KEY |   KEY |
    |*  9 |       INDEX UNIQUE SCAN      | PK_BH_NODE |   32M |  366M |         |     0  (0) | 00:00:01 |       |       |
    |  10 |    PARTITION RANGE ITERATOR  |            |   32M |  374M |         |     0  (0) | 00:00:01 |   KEY |   KEY |
    |  11 |     PARTITION HASH ITERATOR  |            |   32M |  374M |         |     0  (0) | 00:00:01 |   KEY |   KEY |
    |* 12 |      INDEX UNIQUE SCAN       | PK_BH_NODE |   32M |  374M |         |     0  (0) | 00:00:01 |       |       |
    ------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    9 - access("B"."ID_MSISDN"="A"."ID_MSISDN_B" AND "B"."ID_MES"="A"."ID_MES")
    12 - access("B"."ID_MSISDN"="A"."ID_MSISDN_A" AND "B"."ID_MES"="A"."ID_MES")










    2)

    select id_msisdn_a, id_msisdn_b from BH_LINK a, BH_NODE b
    where b.id_mes = a.id_mes
    and not (b.id_msisdn = a.id_msisdn_a and b.id_msisdn = a.id_msisdn_b)
    group by id_msisdn_a, id_msisdn_b


    ---------------------------------------------------------------------------------------------------------------
    | Id  | Operation                 | Name       | Rows  | Bytes | TempSpc | Cost (%CPU)| Time      | Pstart| Pstop |
    ---------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT          |            |  580M |   17G |         | 461G (100) | 999:59:59 |       |       |
    |   1 |  HASH GROUP BY            |            |  580M |   17G |    235P | 461G (100) | 999:59:59 |       |       |
    |   2 |   PARTITION RANGE ALL     |            | 5976T |  169P |         |  18G (100) | 999:59:59 |     1 |     6 |
    |*  3 |    HASH JOIN              |            | 5976T |  169P |    124M |  18G (100) | 999:59:59 |       |       |
    |   4 |     PARTITION HASH ALL    |            |   32M |  374M |         | 15163  (1) | 00:03:02  |     1 |     4 |
    |   5 |      INDEX FAST FULL SCAN | PK_BH_NODE |   32M |  374M |         | 15163  (1) | 00:03:02  |     1 |    24 |
    |   6 |     PARTITION HASH ALL    |            |  556M |   10G |         |  285K  (2) | 00:57:07  |     1 |     4 |
    |   7 |      TABLE ACCESS FULL    | BH_LINK    |  556M |   10G |         |  285K  (2) | 00:57:07  |     1 |    24 |
    ---------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    3 - access("B"."ID_MES"="A"."ID_MES")
        filter("B"."ID_MSISDN"<>"A"."ID_MSISDN_A" OR "B"."ID_MSISDN"<>"A"."ID_MSISDN_B")






    /******************
    ALTER TABLE BH_LINK ADD (
      CONSTRAINT PK_BH_LINK
      PRIMARY KEY
      (ID_MSISDN_A, ID_MSISDN_B, ID_MES, ID_SUBTIPO_SERVICIO, ID_DESTINO_CONCRETO_SER)
      USING INDEX LOCAL);

    ALTER TABLE BH_NODE ADD (
      CONSTRAINT PK_BH_NODE
      PRIMARY KEY
      (ID_MSISDN, ID_MES)
      USING INDEX LOCAL);
    ************************/



    Two questions:

    (a) do 1) and 2) produce the same result?

    (b) if the answer is yes, can the query be optimized without using "exists"?


    Thanks in advance,
    Jose Luis

    user455401 wrote:
    Thanks for your help.

    Sorry, the second query has an error; the correct is:

    Select id_msisdn_a, id_msisdn_b from BH_LINK a, BH_NODE b
    where b.id_mes = a.id_mes
    and not (b.id_msisdn = a.id_msisdn_a OR b.id_msisdn = a.id_msisdn_b)
    Id_msisdn_a group, id_msisdn_b

    This way, I gather the result would be the same, wouldn't it?

    No, for the same reason.
    Query 1 excludes any group where any member of the group has the wrong kind of match in bh_node. That is, to decide whether a row should be included in the aggregation, you need to know something about other rows in the same group.
    Your new query, like the original query 2, includes a group where any member of the group has the right kind of match. That is not the same thing. Saying "I drink tea." (which is like query 2 and your new query) is not the same as saying "Nobody in my family drinks beer." (which is like query 1). The two statements may happen to both be true, or they might be equivalent under some very specific circumstances (for example, if everyone in a family drinks the same thing, and nobody drinks more than one thing), but in general they are independent statements. Your new query, like query 2, says nothing about the group as a whole.

    By the way, your query doesn't seem to improve on 1):

    It is an improvement over query 1: it scans bh_node once, while query 1 scans bh_node twice.
    It is not significantly faster than query 2, but who cares? Query 2 produces wrong results; there is no point in asking how fast it is.

    Looking at your problem again, I think the solution I posted needs an outer join.

    SELECT    l.id_msisdn_a, l.id_msisdn_b
    FROM               bh_link      l
    LEFT OUTER JOIN       bh_node      n  ON     l.id_mes  = n.id_mes
    GROUP BY  l.id_msisdn_a, l.id_msisdn_b
    HAVING       COUNT ( CASE
                      WHEN  n.id_msisdn     IN (l.id_msisdn_a, l.id_msisdn_b)
                    THEN  1
                  END
                ) = 0
    OR        NVL (l.id_msisdn_a, l.id_msisdn_b)     IS NULL          -- If needed
    ;
    

    Unless you post some sample data (CREATE TABLE and INSERT statements), I can't test anything, and errors like these are very likely.
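
    For what it's worth, sample data for this problem could be posted in roughly this shape (hypothetical values, with the column lists trimmed to the ones the queries actually use):

    ```sql
    -- Hypothetical test schema: only the columns referenced by the queries.
    CREATE TABLE bh_node (
      id_msisdn NUMBER,
      id_mes    NUMBER
    );

    CREATE TABLE bh_link (
      id_msisdn_a NUMBER,
      id_msisdn_b NUMBER,
      id_mes      NUMBER
    );

    INSERT INTO bh_node VALUES (100, 1);       -- node known in period 1
    INSERT INTO bh_link VALUES (100, 200, 1);  -- one end matches bh_node
    INSERT INTO bh_link VALUES (300, 400, 1);  -- neither end matches bh_node
    COMMIT;
    ```

    With rows like these, query 1) would be expected to return only the (300, 400) pair, which gives a quick sanity check for any rewrite.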

  • Filter % symbol in the given string

    Hi friends,

    I want to remove the '%' symbol from a given string and load the result into another table.

    For example:
    ========

    If the string is '%hgjkdhdfj%djsfhsjk%hjkghf%fdjhjhfsjh'... In this example, I want to remove all the '%' symbols and load the result into another table... The query should apply to the entire...

    Someone help me please...


    Kind regards
    Williams.

    I'm not quite clear on your requirement... are you looking for this:

    SQL> select replace ('fdjhjhfsjh%hgjkdhdfj%djsfhsjk%hjkghf%', '%')
      2    from dual
      3  /
    
    REPLACE('FDJHJHFSJH%HGJKDHDFJ%DJS
    ---------------------------------
    fdjhjhfsjhhgjkdhdfjdjsfhsjkhjkghf
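
    If you ever need to strip several different characters at once, TRANSLATE is a common alternative to REPLACE (a sketch using the same literal; the leading 'A'/'A' pair is just a placeholder so the third argument is non-empty, which Oracle requires):

    ```sql
    -- Characters listed in arg 2 but not mapped in arg 3 are removed,
    -- so '%' is dropped from the string.
    select translate('fdjhjhfsjh%hgjkdhdfj%djsfhsjk%hjkghf%', 'A%', 'A')
    from   dual;
    ```

    This produces the same result as the REPLACE above.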
    
  • What is wrong with the given query?

    I wrote a query to check the dependencies of the indexed columns,
    but it does not return all the data. I checked the columns individually, and they are used. So why don't I get all the data from the query given below?

    Select type, NAME, LINE, TEXT from all_source where upper (text) like
    ('''%'||' SELECT column_name FROM all_ind_columns where table_name in
    ("WSF_PERSON",
    "WSF_CODESET_TYPECODE",
    "GTT_STUDENT_REC_UR",
    "UPLOAD_ROSTER_LOG",
    "WSF_CLASSOFFERING",
    "PHCORE_CLASS_WORKGROUP",
    "WSF_STUDENTENROLLMENTEVENT",
    "WSF_SCHOOLYEAR",
    "WSF_CLASSSTUDENT",
    "WSF_GRADELEVEL_MASTER",
    "NCS4S_ACCOUNT_INFO",
    "WSF_EDUCATION_ORGUNIT",
    "WSF_PERSONROLE",
    WSF_STUDENT ")' | ' %''')

    Hello

    My guess is that you have one quote too many...
    Try this:

    with cols as ( select column_name
                   from   all_ind_columns
                   where  table_name in  ('WSF_PERSON',
                                          'WSF_CODESET_TYPECODE',
                                          'GTT_STUDENT_REC_UR',
                                          'UPLOAD_ROSTER_LOG',
                                          'WSF_CLASSOFFERING',
                                          'PHCORE_CLASS_WORKGROUP',
                                          'WSF_STUDENTENROLLMENTEVENT',
                                          'WSF_SCHOOLYEAR',
                                          'WSF_CLASSSTUDENT',
                                          'WSF_GRADELEVEL_MASTER',
                                          'NCS4S_ACCOUNT_INFO',
                                          'WSF_EDUCATION_ORGUNIT',
                                          'WSF_PERSONROLE',
                                          'WSF_STUDENT')
                 )
    select type
    ,      name
    ,      line
    ,      text
    from   all_source
    ,      cols
    where  upper(text) like '%'||column_name||'%';
    

    (You can also add a restriction on the OWNER and/or use select distinct...)
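
    For instance, the suggested OWNER restriction and DISTINCT could be bolted onto the final select like this (a sketch reusing the cols factored subquery above; 'MY_SCHEMA' is a placeholder owner name):

    ```sql
    -- Same join as above, limited to one schema and de-duplicated.
    select distinct type, name
    from   all_source
    ,      cols
    where  upper(text) like '%'||column_name||'%'
    and    owner = 'MY_SCHEMA';
    ```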
