MVIEWs or PLSQL?

Hey people,

I was just wondering about MVIEWs and thought: can't I just use a MERGE statement instead of creating a refreshing MVIEW? Considering that the refresh method is COMPLETE on one of my MVIEWs, why can't I just use a PL/SQL block with a TRUNCATE (given that atomic refresh is set to false on my MVIEW) and an INSERT into a heap table, instead of using the MVIEW?

If there is no particular reason to create an MVIEW, can we just do it using PL/SQL? Or is there something that makes us go for MVIEWs?

Thank you

Yes, that's what I mean.

If your employees table is 'small', Oracle may decide not to rewrite the query, but that is the general principle.
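For illustration, here is a minimal sketch (table and object names are hypothetical) of the trade-off being discussed: the MVIEW gives you query rewrite and the DBMS_MVIEW bookkeeping, while the hand-rolled PL/SQL block only reproduces the truncate-and-insert part of a non-atomic complete refresh.

    -- Materialized view approach: complete refresh, non-atomic, with query rewrite
    -- available to the optimizer (names are hypothetical).
    CREATE MATERIALIZED VIEW emp_dept_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
      SELECT department_id, COUNT(*) AS emp_cnt
        FROM employees
       GROUP BY department_id;

    BEGIN
      DBMS_MVIEW.REFRESH(list => 'EMP_DEPT_MV', method => 'C', atomic_refresh => FALSE);
    END;
    /

    -- Hand-rolled PL/SQL equivalent of the refresh itself (no query rewrite,
    -- no dependency tracking, no MVIEW metadata).
    BEGIN
      EXECUTE IMMEDIATE 'TRUNCATE TABLE emp_dept_heap';
      INSERT INTO emp_dept_heap (department_id, emp_cnt)
        SELECT department_id, COUNT(*) FROM employees GROUP BY department_id;
      COMMIT;
    END;
    /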

Tags: Database

Similar Questions

  • measure the plsql runtime

    Hello dear teachers,
    I would be grateful for advice.
    I have a materialized view that is refreshed with a DBMS_MVIEW procedure and is scheduled through DBMS_JOB.
    How could I measure the execution time of that refresh, for example by querying a table that records when this job (the refresh) started and when it ended?
    I know SET TIMING ON in SQL*Plus, but is there something like this in PL/SQL, so I can then insert that time into another table?
    I'm on 9.2.0.7.

    So you can just insert SYSDATE into a table before you run the refresh of the mview and insert another date after it is over, then use these two dates to determine the time taken; or simply store SYSDATE in a variable before you run it, determine the difference after execution, and store that value in a table.

    Shouldn't be difficult. ;)
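    For illustration, a minimal sketch of that approach (the log table and the MVIEW name are hypothetical):

    -- assumes:  create table mview_refresh_log (mview_name varchar2(30), start_time date, end_time date);
    declare
      l_start date;
    begin
      l_start := sysdate;
      dbms_mview.refresh('MY_MVIEW');   -- the refresh being timed (hypothetical name)
      insert into mview_refresh_log (mview_name, start_time, end_time)
      values ('MY_MVIEW', l_start, sysdate);
      commit;
    end;
    /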

  • PLSQL collection for holding identifiers

    Hello experts,

    May I ask for your help?

    I have tables like this: COMP_REL (PARENT NUMBER, CHILD NUMBER) and COMP_INFO (CHILD NUMBER, ...).

    In a procedure, I need to run several SELECT statements against COMP_INFO for all children under a given parent (grouping by different columns of COMP_INFO).

    The problem is that the COMP_REL table is huge. How can I use a PL/SQL collection (or table) to store only the children of COMP_REL that I need, and then join this collection with COMP_INFO to get the different data?

    Is it advisable to use PL/SQL collections for this purpose at all?


    Thank you, Atanas.
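    For illustration, a minimal sketch of the collection-join approach being asked about (the SQL type, the parent id and the processing loop are assumptions):

    -- assumes a schema-level collection type:  create type num_tab is table of number;
    declare
      l_parent   constant number := 42;   -- hypothetical parent id
      l_children num_tab;
    begin
      -- collect the children of one parent from the huge COMP_REL table
      select child
        bulk collect into l_children
        from comp_rel
       where parent = l_parent;

      -- the collection can then be joined to COMP_INFO as often as needed
      for r in (select ci.*
                  from comp_info ci
                  join table(l_children) c on ci.child = c.column_value)
      loop
        null;   -- group / aggregate the COMP_INFO columns here
      end loop;
    end;
    /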

    Hello

    I used analytic functions to get the info I need with only one query against the COMP_INFO table (I didn't need to query it several times). So I didn't have to use a global temporary table or a collection to cache intermediate results.

    Thank you very much for your help!

  • MView delta deploy problem between SDDM model and DB schema (dictionary import with Swap Target Model) - DDL generation preview

    Hello

    I'm having a hard time reconciling the differences between my relational model and my database schema.

    The initial goal is simple:

    1 / detect differences in metadata

    2 / have SDDM generate the DDL change code

    (ALTER if possible; if not, recreate and reload - a powerful existing feature BTW)

    3 / deploy

    4 / check/confirm that no delta remains

    I do this:

    * menu File > Import > Data Dictionary

    * Select connection

    * Select the db schema

    * check the "Swap Target Model" option

    * select MY_MVIEW > Next (1 TABLE DB object to import) > Finish (the "Generate Design" job runs)

    * in the model comparison window, I deselect everything except the MY_MVIEW table AND its materialized view

    (as they appear as 2 SDDM objects)

    DDL Preview button

    I see:

    - comments are generated first (even though the MVIEW is going to be recreated anyway),

    which is a minor point but still hurts legibility

    -MY_MVIEW is systematically recreated

    (no matter how many times I deploy)

    I figured out:

    . the SDDM Table object (implemented as a Materialized View) and the physical MVIEW object hold the query independently

    . even if I sync them manually (copy-paste), the deployed DDL code is still not strictly identical

    So it may have to do with a malfunction in the compare?

    SDDM is full of options to desensitize compares (exclude physical properties, storage, etc.), but I found no way to simply compare and align MVIEWs

    (and documentation on the subject is scarce)

    Any clue?

    THX

    Interesting.  Looks like it's partitioning that is causing the problem.

    In a model, partitioning information can be held on the physical model objects for both Tables and Materialized Views.

    In the case where a Table and a Materialized View are linked together (by the "Implemented as Materialized View" property on the physical model Table), it is the partitioning information held on the Table that is relevant.  The information on the Table is used when generating DDL.  And on an import or synchronize, partitioning information is added to the Table object.

    I think what is probably happening in your case is that your model includes some partitioning details maintained on the Materialized View object.

    Synchronization is associating the partitioning details from your database with the Table in the model.

    As it does not associate the partitioning details from your database with the Materialized View object in the model, the comparison shows a difference for the materialized view: not partitioned in the database, but partitioned in your model.  And this difference is causing the drop and re-create of the materialized view in the DDL.

    There are various options to work around this:

    1. You can remove the unnecessary partitioning details maintained on the Materialized View object in your model.

    2. You can clear the check box for the Materialized Views entry in the tree of the Compare Models dialog before doing the DDL preview.  (But this also means that no DDL will be generated for any other differences in those materialized views.)

    3. You can use the property filter to filter out the relevant properties (e.g., Partitioned, Partition Columns and Subpartition Columns for Materialized View objects), and then click the Refresh Trees button before performing the DDL preview.  (See the screenshot below.)

    David

  • opt_estimate hints in the MERGE of mview fast refresh operations

    Hello

    in a system I am trying to optimize (Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production), there are materialized views with "fast refresh on commit" and there are many queries of the following pattern:

    /* MV_REFRESH (MRG) */ MERGE INTO "XXX"."YYY" "SNA$" USING (SELECT /*+ OPT_ESTIMATE (QUERY_BLOCK MAX=485387) */ ...)

    "As far as I can see the queries show the structure explained in Alberto Dell's Oracle blog'Era ' quick refreshment of only materialized aggregate views with SUM - algorithm (in the section" Refresh for mixed-DML TMPDLT") - the best resource on refresh mview algorithms I know. But I could not find information on the setting of the OPT_ESTIMATE indicator. In the database, I see that the values in the indicator are changing:

    select st.sql_id
         , substr(st.sql_text, instr(st.sql_text, 'OPT_ESTIMATE'), 40) sql_text
      from dba_hist_sqltext st
     where st.sql_text like '/* MV_REFRESH (MRG) */ MERGE INTO "XXX"."YYY"%'
       and ...


    SQL_ID SQL_TEXT

    ------------- ----------------------------------------

    6by5cwg0v6zaf OPT_ESTIMATE (QUERY_BLOCK MAX = 485387) *.

    2b5rth5uxmaa2 OPT_ESTIMATE (QUERY_BLOCK MAX = 485387) *.

    4kqc15tb2hvut OPT_ESTIMATE (QUERY_BLOCK MAX = 490174) *.

    fyp1rn4qvxcdb OPT_ESTIMATE (QUERY_BLOCK MAX = 490174) *.

    a5drp0m9wt53k OPT_ESTIMATE (QUERY_BLOCK MAX = 407399) *.

    2dcmwg992pjaz OPT_ESTIMATE (QUERY_BLOCK MAX = 485272) *.

    971zzvq5bdkx6 OPT_ESTIMATE (QUERY_BLOCK MAX = 493572) *.

    46434kbmudkq7 OPT_ESTIMATE (QUERY_BLOCK MAX = 493572) *.

    4ukc8yj73a3h3 OPT_ESTIMATE (QUERY_BLOCK MAX = 491807) *.

    8k46kpy4zvy96 OPT_ESTIMATE (QUERY_BLOCK MAX = 491807) *.

    3h1n5db3vdugt OPT_ESTIMATE (QUERY_BLOCK MAX = 493547) *.

    5340ukdznyqr6 OPT_ESTIMATE (QUERY_BLOCK MAX = 493547) *.

    7fxhdph8ymyz8 OPT_ESTIMATE (QUERY_BLOCK MAX = 407399) *.

    15f3st5gdvwp3 OPT_ESTIMATE (QUERY_BLOCK MAX = 491007) *.

    083ntxzh8wnhg OPT_ESTIMATE (QUERY_BLOCK MAX = 491007) *.

    cg17yjx3qay5z OPT_ESTIMATE (QUERY_BLOCK MAX = 491452) *.

    5qt37uzwrwkgw OPT_ESTIMATE (QUERY_BLOCK MAX = 491452) *.

    byzfcg7vvj859 OPT_ESTIMATE (QUERY_BLOCK MAX = 485272) *.

    aqtdpak3636y5 OPT_ESTIMATE (QUERY_BLOCK MAX = 493572) *.

    dcrkruvsgpz3u OPT_ESTIMATE (QUERY_BLOCK MAX = 492226) *.

    7mmt5px6sd7xg OPT_ESTIMATE (QUERY_BLOCK MAX = 492226) *.

    9c6v714pbjvc0 OPT_ESTIMATE (QUERY_BLOCK MAX = 485336) *.

    fbpsz02yq2qxv OPT_ESTIMATE (QUERY_BLOCK MAX = 485336) *.

    0q04g2rh9j84y OPT_ESTIMATE (QUERY_BLOCK MAX = 491217) *.

    gp3u5d5702dpb OPT_ESTIMATE (QUERY_BLOCK MAX = 491638) *.

    9f35swtju24aa OPT_ESTIMATE (QUERY_BLOCK MAX = 491638) *.

    a70jwxnrxtfjn OPT_ESTIMATE (QUERY_BLOCK MAX = 491217) *.

    93mbf02cjq2ny OPT_ESTIMATE (QUERY_BLOCK MAX = 491217) *.

    So of course the cardinalities in the OPT_ESTIMATE hint are not static here, and the sql_id changes as a result. And this change prevents me from using SQL plan baselines to guarantee a stable plan (essentially to avoid parallel operations for the refresh, while not preventing parallel access for the queries). I did a quick check with 11.2.0.1 and see the same pattern there:

    drop materialized view t_mv;

    drop table t;

    create table t
    as
    select rownum id
         , mod(rownum, 50) col1
         , mod(rownum, 10) col2
         , lpad('*', 50, '*') col3
      from dual
    connect by level <= 100000;

    exec dbms_stats.gather_table_stats(user, 't')

    create materialized view log on t with rowid (id, col1, col2, col3) including new values;

    create materialized view t_mv
    refresh fast on commit
    as
    select col1
         , sum(col2) sum_col2
         , count(*) cnt
         , count(col2) cnt_col2
      from t
     group by col1;

    update t set col2 = 0 where col1 = 1;

    commit;

    SQL_ID  4gnafjwyvs79v, child number 0
    -------------------------------------
    /* MV_REFRESH (MRG) */ MERGE INTO "TEST"."T_MV" "SNA$" USING (SELECT
    /*+ OPT_ESTIMATE (QUERY_BLOCK MAX=1000) */ "DLT$0"."COL1" "GB0",
    SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1) * DECODE(("DLT$0"."COL2"),
    NULL, 0, 1)) "D0", SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1)) "D1",
    NVL(SUM(DECODE("DLT$0"."DML$$", 'I', 1, -1) * ("DLT$0"."COL2")), 0)
    "D2" FROM (SELECT CHARTOROWID("MAS$"."M_ROW$$") RID$,
    "MAS$"."COL1", "MAS$"."COL2", DECODE("MAS$".OLD_NEW$$, 'N', 'I', 'D')
    DML$$, "MAS$"."DMLTYPE$$" "DMLTYPE$$" FROM "TEST"."MLOG$_T" "MAS$"
    WHERE "MAS$".XID$$ = :1) "DLT$0" GROUP BY "DLT$0"."COL1") "AV$" ON
    (SYS_OP_MAP_NONNULL("SNA$"."COL1")=SYS_OP_MAP_NONNULL("AV$"."GB0"))
    WHEN MATCHED THEN UPDATE SET "SNA$"."CNT_COL2"="SNA$"."CNT_COL2"+"AV$".
    "D0", "SNA$"."CNT"="SNA$"."CNT"+"AV$"."D1",
    "SNA$"."SUM_COL2"=DECODE("SNA$"."CNT_COL2"+"AV$"."D0", 0, NULL, NVL("SNA$".
    "SUM_COL2", 0)+"AV$"."D2") DELETE WHERE ("SNA$"."CNT" = 0) WHEN NOT
    MATCHED THEN INSERT ("SNA$"."COL1", "SNA$"."CNT_COL2", "SNA$"."CNT",

    Plan hash value: 2085662248

    -----------------------------------------------------------------------------------
    | Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------
    |   0 | MERGE STATEMENT         |          |       |       |    24 (100)|          |
    |   1 |  MERGE                  | T_MV     |       |       |            |          |
    |   2 |   VIEW                  |          |       |       |            |          |
    |*  3 |    HASH JOIN OUTER      |          |    40 |  4640 |    24   (9)| 00:00:01 |
    |   4 |     VIEW                |          |    40 |  2080 |    20   (5)| 00:00:01 |
    |   5 |      SORT GROUP BY      |          |    40 |  1640 |    20   (5)| 00:00:01 |
    |*  6 |       TABLE ACCESS FULL | MLOG$_T  |    40 |  1640 |    19   (0)| 00:00:01 |
    |   7 |     MAT_VIEW ACCESS FULL| T_MV     |    50 |  3200 |     3   (0)| 00:00:01 |
    -----------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       3 - access(SYS_OP_MAP_NONNULL("SNA$"."COL1")=SYS_OP_MAP_NONNULL("AV$"."GB0"))

       6 - filter("MAS$"."XID$$"=:1)

    Note
    -----
       - dynamic sampling used for this statement (level=2)

    So I see an OPT_ESTIMATE (QUERY_BLOCK MAX=1000), and this estimate seems to be fairly stable when I change the number of rows, the number of blocks, use partitioning, etc. I checked a trace with event 10046 but could not find the source of the value 1000. I also disabled cardinality feedback ("_optimizer_use_feedback" = false) and there is no SQL profile in my system (according to dba_sql_profiles) - but the OPT_ESTIMATE is still there.

    So the question is: is anything known about the values used in the OPT_ESTIMATE hint for materialized view fast refresh operations? Thanks in advance for your input.

    Regards

    Martin Preiss

    Martin Preiss wrote:

    Regarding point 1: I created my test table T initially with 1,000 rows, then 100K, then 10K, and also as a partitioned table (using my old blog example on materialized view fast refresh): in all cases I got OPT_ESTIMATE (QUERY_BLOCK MAX = 1000). My first problem in this context is that I have not come up with a test case showing different OPT_ESTIMATE values - as I see them in the prod system.

    Using a SQL profile with the force_match option looks promising - I'll check if I can use that here.

    Regards

    Martin
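    For reference, a minimal sketch of accepting a SQL profile with force_match (the task and profile names are hypothetical, and it assumes a SQL Tuning Advisor task already exists for one of the MV_REFRESH statements); force_match makes the profile match statements that differ only in their literals:

    declare
      l_name varchar2(128);
    begin
      l_name := dbms_sqltune.accept_sql_profile(
                  task_name   => 'mv_refresh_task',      -- hypothetical tuning task
                  name        => 'mv_refresh_profile',   -- hypothetical profile name
                  force_match => true);
      dbms_output.put_line('accepted profile: ' || l_name);
    end;
    /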

    Hi Martin,

    but perhaps the OPT_ESTIMATE hint is based on the stats of the MLOG or MV table, hence my question. Since this is the 'MAX' option of OPT_ESTIMATE (limiting the maximum number of rows for this query block), doesn't the 1000 look like a 'low' default value that is used if stats are missing or the MLOG / MV table has fewer than 1000 rows?

    Randolf

  • VARRAY in plsql

    Hi Experts,

    What is the best way to use a VARRAY / PL/SQL loop to achieve this?

    account1  date1        account2  balance_1
    a1        1-jan-2016   s1    100
    a1        1-jan-2016   s2    200
    a1        2-jan-2016   s1    300
    a1        2-jan-2016   s2    400
    a1        3-jan-2016   s1    500
    a1        3-jan-2016   s2    600
    

    expected results

    account1  date1        account2  balance_1
    a1        1-jan-2016   s1    100
    a1        2-jan-2016   s1    100+300   
    a1        3-jan-2016   s1    100+300+500
    

    a1        1-jan-2016   s2    200
    a1        2-jan-2016   s2    200+400
    a1        3-jan-2016   s2    200+400+600
    

    SQL> ed
    Wrote file afiedt.buf

      1  with testdata (account1, date1, account2, balance) as (
      2         select 'a1', date '2016-01-01', 's1', 100 from dual union all
      3         select 'a1', date '2016-01-01', 's2', 200 from dual union all
      4         select 'a1', date '2016-01-02', 's1', 300 from dual union all
      5         select 'a1', date '2016-01-02', 's2', 400 from dual union all
      6         select 'a1', date '2016-01-03', 's1', 500 from dual union all
      7         select 'a1', date '2016-01-03', 's2', 600 from dual
      8       )
      9  --
     10  select account1, date1, account2, balance_string
     11       , to_number(x.column_value) as balance
     12  from (
     13        select account1
     14             , date1
     15             , account2
     16             , trim('+' from sys_connect_by_path(balance, '+')) as balance_string
     17        from testdata
     18        connect by account1 = prior account1
     19               and account2 = prior account2
     20               and date1 = prior date1 + 1
     21        start with (account1, account2, date1) in (select account1, account2, min(date1)
     22                                                   from testdata
     23                                                   group by account1, account2)
     24       )
     25*    , xmltable(balance_string) x
    SQL> /

    AC DATE1       AC BALANCE_STRING          BALANCE
    -- ----------- -- -------------------- ----------
    a1 1-jan-2016  s1 100                         100
    a1 2-jan-2016  s1 100+300                     400
    a1 3-jan-2016  s1 100+300+500                 900
    a1 1-jan-2016  s2 200                         200
    a1 2-jan-2016  s2 200+400                     600
    a1 3-jan-2016  s2 200+400+600                1200

    6 rows selected.

    Or more simply (I don't think I've tried analytic functions in a CONNECT BY query before, but it seems to work OK)...

    SQL> ed
    Wrote file afiedt.buf

      1  with testdata (account1, date1, account2, balance) as (
      2         select 'a1', date '2016-01-01', 's1', 100 from dual union all
      3         select 'a1', date '2016-01-01', 's2', 200 from dual union all
      4         select 'a1', date '2016-01-02', 's1', 300 from dual union all
      5         select 'a1', date '2016-01-02', 's2', 400 from dual union all
      6         select 'a1', date '2016-01-03', 's1', 500 from dual union all
      7         select 'a1', date '2016-01-03', 's2', 600 from dual
      8       )
      9  --
     10  select account1
     11       , date1
     12       , account2
     13       , trim('+' from sys_connect_by_path(balance, '+')) as balance_string
     14       , sum(balance) over (partition by account1, account2 order by date1) as balance
     15  from testdata
     16  connect by account1 = prior account1
     17         and account2 = prior account2
     18         and date1 = prior date1 + 1
     19  start with (account1, account2, date1) in (select account1, account2, min(date1)
     20                                             from testdata
     21*                                            group by account1, account2)
    SQL> /

    AC DATE1       AC BALANCE_STRING          BALANCE
    -- ----------- -- -------------------- ----------
    a1 1-jan-2016  s1 100                         100
    a1 2-jan-2016  s1 100+300                     400
    a1 3-jan-2016  s1 100+300+500                 900
    a1 1-jan-2016  s2 200                         200
    a1 2-jan-2016  s2 200+400                     600
    a1 3-jan-2016  s2 200+400+600                1200

    6 rows selected.

  • PLSQL block does not run

    Hi all

    My PL/SQL block is not executing; it shows a "reference out of scope" error.


    create or replace procedure sma
    is
      cursor c4 (p_no number) is select * from employee where department_id = p_no;
      i employee%rowtype;
    begin
      open c4(90);
      loop
        fetch c4 into i;
        exit when c4%notfound;
        dbms_output.put_line(i.employee_id || i.first_name);
      end loop;
      close c4;
    end;

    /

    begin
      SMA.C4;
    end;

    image.png

    Capture.PNG

    And how can it run if the procedure name is SMA when you call ADM.C4? It should just be:

    BEGIN

    ADM;

    END;

    /

    PYM
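    For reference, a minimal sketch of an equivalent procedure written with a cursor FOR loop (the name is hypothetical; it assumes the same employee table and columns as in the post), which avoids the explicit OPEN / FETCH / CLOSE handling:

    create or replace procedure sma_for_loop   -- hypothetical name
    is
    begin
      -- the cursor FOR loop opens, fetches and closes the cursor implicitly
      for r in (select employee_id, first_name
                  from employee
                 where department_id = 90)
      loop
        dbms_output.put_line(r.employee_id || ' ' || r.first_name);
      end loop;
    end;
    /

    -- then, with serveroutput enabled, call it simply as:  begin sma_for_loop; end;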

  • MVIEW log truncation

    Hi all

    I have a question about mview logs.

    While analyzing my AWR reports I found that most of the top queries are against my MView logs. How do I optimize these queries?

    I also intend to truncate the MLOG$ tables; after truncating the MLOG$ tables, should I do a complete refresh, or will my regular fast refresh still be correct?

    We are using Oracle Server 11gR2.

    Let me know if any more information is required for the same.

    Thank you

    AJ

    You can follow this doc for your reference:

    236233.1
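    For illustration, a minimal sketch (the MVIEW name is hypothetical): after an MVIEW log has been truncated, the dependent MVIEW generally needs a complete refresh before fast refreshes can be trusted again.

    begin
      dbms_mview.refresh(list           => 'MY_MVIEW',
                         method         => 'C',       -- complete refresh
                         atomic_refresh => false);    -- truncate + insert instead of delete
    end;
    /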

  • XML data in the table using sql/plsql

    Hi experts,

    Could you please help with the following requirement? I have the XML tags below (a .xml file on a server). I need to access this file, read the XML, and insert it into a DB table using SQL and PL/SQL. Is this possible with the CDATA below? There is also a nested table in it.

    Could someone please guide me if you have sample code for reading the file and the XML?

    <?xml version="1.0" encoding="UTF-8"?>
    <generation_date><![CDATA[17/11/2015]]></generation_date>
    <generated_by><![CDATA[Admin Admin]]></generated_by>
    <year><![CDATA[2015]]></year>
    <month><![CDATA[01]]></month>
    <author>
      <author><![CDATA[user author]]></author>
      <author_firstname><![CDATA[user]]></author_firstname>
      <author_lastname><![CDATA[author]]></author_lastname>
      <author_country><![CDATA[UAE]]></author_country>
      <author_email><![CDATA[[email protected]]]></author_email>
      <author_data_01><![CDATA[]]></author_data_01>
      <author_data_02><![CDATA[]]></author_data_02>
      <articles>
        <article_item>
          <article_id><![CDATA[123456]]></article_id>
          <publication><![CDATA[Al Bayan]]></publication>
          <section><![CDATA[Local]]></section>
          <issue_date><![CDATA[11/11/2015]]></issue_date>
          <page><![CDATA[2]]></page>
          <article_title><![CDATA[title.]]></article_title>
          <number_of_words><![CDATA[165]]></number_of_words>
          <original_price><![CDATA[200]]></original_price>
          <original_price_currency><![CDATA[AED]]></original_price_currency>
          <price><![CDATA[250]]></price>
          <price_currency><![CDATA[AED]]></price_currency>
        </article_item>
      </articles>
      <total_amount><![CDATA[250]]></total_amount>
      <total_amount_currency><![CDATA[AED]]></total_amount_currency>
    </author>
    </xml>

    Thanks in advance,

    Suman

    Using XMLTABLE...

    SQL> ed
    Wrote file afiedt.buf

      1  with t (xml) as (select xmltype('
         [ lines 2 - 32: the sample XML document from the question went here, but was
           stripped when the post was rendered ]
     33  ') from dual)
     34  --
     35  -- end of sample data
     36  --
     37  -- assumptions:
     38  -- a) the XML may have several <authors> tags
     39  -- b) each <authors> may contain several <article_item> entries
     40  --
     41  select x.gen_by, x.gen_date, x.mn, x.yr
     42       , y.author, y.auth_fn, y.auth_ln, y.auth_cnt, y.auth_em, y.auth_d1, y.auth_d2
     43       , z.id, z.pub, z.sec, z.iss_dt, z.pg, z.art_ttl, z.num_wrds, z.oprice, z.ocurr, z.price, z.curr
     44  from t
     45     , xmltable('/authxml'
     46                passing t.xml
     47                columns gen_date varchar2(10) path './generation_date'
     48                      , gen_by   varchar2(15) path './generated_by'
     49                      , yr       varchar2(4)  path './year'
     50                      , mn       varchar2(2)  path './month'
     51                      , authors  xmltype      path '.'
     52               ) x
     53     , xmltable('/authxml/authors'
     54                passing x.authors
     55                columns author   varchar2(15) path './author'
     56                      , auth_fn  varchar2(10) path './author_firstname'
     57                      , auth_ln  varchar2(10) path './author_lastname'
     58                      , auth_cnt varchar2(3)  path './author_country'
     59                      , auth_em  varchar2(20) path './author_email'
     60                      , auth_d1  varchar2(5)  path './author_data_01'
     61                      , auth_d2  varchar2(5)  path './author_data_02'
     62                      , articles xmltype      path './Articles'
     63               ) y
     64     , xmltable('/Articles/article_item'
     65                passing y.articles
     66                columns id       number       path './article_id'
     67                      , pub      varchar2(10) path './publication'
     68                      , sec      varchar2(10) path './section'
     69                      , iss_dt   varchar2(10) path './issue_date'
     70                      , pg       varchar2(3)  path './page'
     71                      , art_ttl  varchar2(20) path './article_title'
     72                      , num_wrds varchar2(5)  path './number_of_words'
     73                      , oprice   varchar2(5)  path './original_price'
     74                      , ocurr    varchar2(3)  path './original_price_currency'
     75                      , price    varchar2(5)  path './price'
     76                      , curr     varchar2(3)  path './price_currency'
     77*              ) z
    SQL> /

    GEN_DATE GEN_BY YEAR MN AUTHOR AUTH_FN AUTH_LN AUT AUTH_EM AUTH_ AUTH_ ID PUB DRY ISS_DT PG ART_TTL NUM_W OPRIC HEARTS PRICE OCU
    ---------- --------------- ---- -- --------------- ---------- ---------- --- -------------------- ----- ----- ---------- ---------- ---------- ---------- --- -------------------- ----- ----- --- ----- ---
    17/11/2015 Admin Admin 2015 01 user author user author [email protected] 123456 UAE Al Bayan Local 11/11/2015 2 is the title.   165 200 AED AED 250

    Of course, you'll want to change the data types etc. as needed.

    I assumed that the XML can contain several <authors> sections and that each section can contain several <article_item> entries.

    So the XMLTABLE aliased 'x' gets the header information from the XML and supplies the authors data to the XMLTABLE aliased 'y', which gets the multiple authors sections and in turn feeds the XMLTABLE aliased 'z' for each of the article_item entries.

    The CDATA stuff is handled automatically by SQL/XML (the XML functionality integrated into Oracle's SQL).
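    As a complement, a minimal sketch of getting the file itself into an XMLTYPE (it assumes an Oracle directory object XML_DIR pointing at the server path, a placeholder file name, and a hypothetical staging table xml_staging); the result can then be shredded with the XMLTABLE query above:

    declare
      l_xml xmltype;
    begin
      l_xml := xmltype(bfilename('XML_DIR', 'abc.xml'),   -- assumed directory object / file name
                       nls_charset_id('AL32UTF8'));

      insert into xml_staging (doc) values (l_xml);       -- hypothetical table (doc xmltype)
      commit;
    end;
    /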

  • Displaying an image in an external PLSQL procedure that renders HTML and is called from an APEX region

    I have an external PLSQL procedure that dynamically creates a report out of HTML tags, which I then call from an anonymous PL/SQL block in APEX.  I'm getting broken images with the following code
    in the external procedure.  How do you get around that? (NOTE: the procedure is too big to store directly in APEX.)

    ....

    '<td style="width: auto; padding: 0px 5px 0px">' || spc_rec.SPC_VIABILITY_STATUS || '</td>' ||
    '<td style="width: auto; text-align: left; padding: 0px 5px 0px">' || spc_rec.SPC_VIABILITY_REASON || '</td>' ||
    '<td><img src="#IMAGE_PREFIX#check2.gif" alt="" /></td>'

    ....

    Thanks in advance

    PaulP

    Hi Paul,

    You can use the APEX package global variable below to get the image prefix in PL/SQL. Of course, your procedure should be in the application's parsing schema.

    APEX_APPLICATION.G_IMAGE_PREFIX

    Kind regards

    Hari
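    For illustration, a minimal sketch of using that global in the concatenation from the original snippet (l_html is a hypothetical variable holding the generated markup), instead of the #IMAGE_PREFIX# substitution string, which is only resolved inside APEX itself:

    l_html := l_html ||
              '<td><img src="' || APEX_APPLICATION.G_IMAGE_PREFIX ||
              'check2.gif" alt="" /></td>';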

  • Fetch and loop in plsql

    Hello

    In a PL/SQL fetch loop, what is the difference between these 2 versions? Does it make a difference which of the two lines below comes before the other?

    OPEN email_details_cur(p_transactionid);
    LOOP
      EXIT WHEN email_details_cur%NOTFOUND;
      FETCH email_details_cur INTO email_details_cur_rec;
    END LOOP;

    vs

    OPEN email_details_cur(p_transactionid);
    LOOP
      FETCH email_details_cur INTO email_details_cur_rec;
      EXIT WHEN email_details_cur%NOTFOUND;
    END LOOP;


    Hello

    Take a look at this demo and think what is different and why.

    set serveroutput on
    
    prompt Exit when is before fetch
    declare
      cursor c is select dummy from dual;
      l_dummy dual.dummy%type;
    begin
      open c;
      loop
        EXIT WHEN C%NOTFOUND;
        fetch c into l_dummy;
        --
        dbms_output.put('c%notfound is ');
        if c%notfound then
          dbms_output.put('true');
        else
          dbms_output.put('not true');
        end if;
        dbms_output.put_line('');
        --
         end loop;
      close c;
    end;
    /
    
    prompt Exit when is after fetch
    declare
      cursor c is select dummy from dual;
      l_dummy dual.dummy%type;
    begin
      open c;
      loop
        fetch c into l_dummy;
        EXIT WHEN C%NOTFOUND;
        --
        dbms_output.put('c%notfound is ');
        if c%notfound then
          dbms_output.put('true');
        else
          dbms_output.put('not true');
        end if;
        dbms_output.put_line('');
        --
         end loop;
      close c;
    end;
    
    Exit when is before fetch
    
    PL/SQL procedure successfully completed.
    
    c%notfound is not true
    c%notfound is true
    
    Exit when is after fetch
    
    PL/SQL procedure successfully completed.
    
    c%notfound is not true
    
  • NLS_DATE_FORMAT in PLSQL Developer differs from sqlplus

    Hello

    When you set nls_date_format in SQL*Plus at the session level, the results are as expected. Capture.JPG

    Whereas when you set nls_date_format at the session level in PL/SQL Developer, the output has the same format in both cases.

    Capture1.JPG

    Kindly help with this...

    Thanks for making that clear... just asking out of curiosity: is it possible that I changed NLS_DATE_FORMAT for my session and the output still reflects the NLS_DATE_FORMAT substituted by the client software?

    I mean, for my session, how is it possible that the client software replaces the format without any special privileges?

    Why do you think that the output you see in a GUI such as PL/SQL Developer is determined by the NLS session parameters? When the result set contains a date, the maker of the product is free to decide whether it is displayed according to the session settings or according to some settings in the tool preferences.

    SELECT SYSDATE FROM dual;

    ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';

    SELECT SYSDATE FROM dual;

    When I run this in Toad, I get two different formats for the last statement, depending on how I run it:

    F9 (run the statement): 10:26:08 09/01/2015

    F5 (Execute as script): 2015-09-01

    So Toad ignores the session parameter in one case, where its developer thought he knew better than me what I really want.

    The same statements in SQL Developer show 2015-09-01 both times. Here the designer either thought "hey, the user is a developer, he or she will know what to do", or did not think about the problem at all and simply used the session settings.

    Regards

    Marcus

  • Where are PLSQL variables stored?

    Hello Oracle community,

    My question - in what part of the PGA / SGA are PL/SQL variables stored?

    I mean, if I declare a bunch of 'heavy' variables (for example collections holding millions of CLOBs) - could they cause memory to overflow? Or will they spill to the temp tablespace? (I have never heard of a programming language where variables are not stored in memory.)

    Thank you very much

    Ilya Golosovsky

    They are stored in the PGA. They will not be spilled to temp; you will get a "cannot allocate memory" error.
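    For illustration, a minimal sketch of watching your own session's PGA consumption grow while PL/SQL collections are being filled (the statistic names come from v$statname):

    select sn.name, round(ms.value / 1024 / 1024, 1) as mb
      from v$mystat   ms
      join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name in ('session pga memory', 'session pga memory max');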

  • How to create the xml file in oracle plsql

    Hello

    I need the XML below (abc.xml) written to an out_directory file on a Unix server (the out_directory path is /u01/apps/xml/).

    select sivauser, sivapwd from sivainformations;  -- it will return multiple records

    select telepone from sivatelepone;  -- it will return multiple records

    select versionid from xyzverion;  -- it will return multiple records

    Based on the above information, I need the XML file below, using an Oracle PL/SQL procedure or block.

    Example: suppose there is one record

    <?xml version="1.0" encoding="UTF-8"?>
    <sivaService version="2.0" xmlns="http://www.siva.ab/siva/4.0/test">
      <data language="DEU">
        <action sivauser="siva3" sivapwd="siva123">    -- I need records from the sivainformations table (sivauser, sivapwd)
          <sivatelepone>phone</sivatelepone>           -- I need records from the sivatelepone table (phone), based on the sivauser column
          <abcversion version="1.0">
            <Productinfo>
              <xyzverion>versionID</xyzverion>         -- I need records from the xyzverion table (versionid), based on the sivauser column
            </Productinfo>
          </abcversion>
        </action>
      </data>
    </sivaService>


    Example: assume there are multiple records

    <?xml version="1.0" encoding="UTF-8"?>
    <sivaService version="2.0" xmlns="http://www.siva.ab/siva/4.0/test">
      <data language="DEU">
        <action sivauser="siva3" sivapwd="siva123">    -- I need records from the sivainformations table (sivauser, sivapwd)
          <sivatelepone>'345678'</sivatelepone>        -- I need records from the sivatelepone table (phone), based on the sivauser column
          <abcversion version="1.0">
            <Productinfo>
              <xyzverion>'1.1'</xyzverion>             -- I need records from the xyzverion table (versionid), based on the sivauser column
            </Productinfo>
          </abcversion>
        </action>

        <action sivauser="siva4" sivapwd="siva123">
          <sivatelepone>'123456'</sivatelepone>
          <abcversion version="1.0">
            <Productinfo>
              <xyzverion>'1.2'</xyzverion>
            </Productinfo>
          </abcversion>
        </action>
      </data>
    </sivaService>

    Please help me

    Thank you
    Siva

    I added an ID column to match the rows between the three tables.

    SQL> with sivainformations
      2  as
      3  (
      4     select 1 id, 'karthick' sivauser, 'karthick' sivapwd from dual union all
      5     select 2 id, 'ram', 'ram' from dual
      6  )
      7  , sivatelepone
      8  as
      9  (
     10     select 1 id, 1234567890 telepone from dual union all
     11     select 2 id, 1234512345 from dual
     12  )
     13  , versionid
     14  as
     15  (
     16     select 1 id, 1.1 versionid from dual union all
     17     select 2, 1.2 from dual
     18  )
     19  select xmlelement
     20         (
     21             "shivaService"
     22           , xmlattributes('2.0' as "version", 'http://www.siva.ab/siva/4.0/test' as "xmlns")
     23           , xmlelement
     24             (
     25                 "Data"
     26               , xmlattributes('DEU' as "language")
     27               , xmlagg
     28                 (
     29                     xmlelement
     30                     (
     31                          "Action"
     32                        , xmlattributes(s.sivauser as "sivauser", s.sivapwd as "sivapwd")
     33                        , xmlelement("shivatelepone", t.telepone)
     34                        , xmlelement
     35                          (
     36                              "abcversion"
     37                            , xmlattributes('1.0' as "version")
     38                            , xmlelement
     39                              (
     40                                   "ProductInfo"
     41                                 , xmlelement("xyzversion", v.versionid)
     42                              )
     43                          )
     44                      )
     45                 )
     46             )
     47         ).EXTRACT('*') xml_output
     48    from sivainformations s
     49    join sivatelepone t
     50      on s.id = t.id
     51    join versionid v
     52      on s.id = v.id;
    
    XML_OUTPUT
    -------------------------------------------------------------------------------------------------------------------
    <shivaService version="2.0" xmlns="http://www.siva.ab/siva/4.0/test">
      <Data language="DEU">
        <Action sivauser="karthick" sivapwd="karthick">
          <shivatelepone>1234567890</shivatelepone>
          <abcversion version="1.0">
            <ProductInfo>
              <xyzversion>1.1</xyzversion>
            </ProductInfo>
          </abcversion>
        </Action>
        <Action sivauser="ram" sivapwd="ram">
          <shivatelepone>1234512345</shivatelepone>
          <abcversion version="1.0">
            <ProductInfo>
              <xyzversion>1.2</xyzversion>
            </ProductInfo>
          </abcversion>
        </Action>
      </Data>
    </shivaService>
    
    
    SQL>
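    And to write the result to the requested Unix directory, a minimal sketch (it assumes an Oracle directory object OUT_DIRECTORY created on /u01/apps/xml; a trivial document stands in for the XMLELEMENT/XMLAGG query above):

    declare
      l_xml clob;
    begin
      -- in practice l_xml would be the result of the query above, via .getClobVal()
      l_xml := xmltype('<sivaService version="2.0"/>').getClobVal();

      dbms_xslprocessor.clob2file(l_xml, 'OUT_DIRECTORY', 'abc.xml');
    end;
    /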
    
  • Contains only, contains any and contains not - how in PLSQL

    Hi all

    I'm writing some PL/SQL code and I need some advice.

    How can I implement contains-only / contains-any / contains-not checks efficiently in PL/SQL?

    For Ex:

    If l_string1 ContainsOnly "ABCDEFGHIJKL" (a set of values)

    If l_string2 ContainsAny ' + -/'

    If l_string3 ContainsNot "1234567890"

    Thank you

    Aman

    Hello

    If you are really determined not to post sample data, then post a question involving sample data that the people who want to help you already have, or that is readily available, such as the tables in the SCOTT or HR schema.

    For example, consider the ename column of the table scott.emp, which has these values:

    ENAME

    ----------

    ADAMS

    ALLEN

    BLAKE

    CLARK

    FORD

    JAMES

    JONES

    KING

    MARTIN

    MILLER

    SCOTT

    SMITH

    TURNER

    WARD

    Say you're interested in finding the enames that consist exclusively of the letters 'A' through 'N', i.e.:

    ENAME

    ----------

    ALLEN

    BLAKE

    KING

    Here are 3 ways to do it.  (I produced these results

    ENAME NR L R

    ---------- -- -- --

    ADAMS

    ALLEN OK OK OK

    BLAKE OK OK OK

    CLARK

    FORD

    JAMES

    JONES

    KING OK OK OK

    MARTIN

    MILLER

    SCOTT

    SMITH

    TURNER

    WARD

    just to test the 3 ways at the same time and to ensure that they are equivalent.  You might want to use the conditions in a WHERE clause rather than in a CASE expression.)

    This is the query that produced the above results:

    SELECT ename
         , CASE
               WHEN NOT REGEXP_LIKE ( ename
                                    , '[^A-N]'
                                    )
               THEN 'OK'
           END AS nr
         , CASE
               WHEN LTRIM ( ename
                          , 'ABCDEFGHIJKLMN'
                          ) IS NULL
               THEN 'OK'
           END AS l
         , CASE
               WHEN REGEXP_LIKE ( ename
                                , '^[A-N]+$'
                                )
               THEN 'OK'
           END AS r
      FROM scott.emp
     ORDER BY ename
    ;

    In reply #5, you have shown that you already know how to find the rows that contain any letter OTHER than 'A' through 'N'.  To find the enames containing ONLY 'A' through 'N', you can simply use the NOT operator.  In other words, an ename contains ONLY A-N if (and only if) it does not contain any letter OTHER than A-N.  That is the approach used in the nr column above.  However, I think the other two ways (columns l and r) are better; l is the most efficient.
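    For completeness, a minimal sketch of the three checks as PL/SQL boolean expressions, applied to the hypothetical strings from the question:

    declare
      l_string1 varchar2(100) := 'ABCABD';
      l_string2 varchar2(100) := 'A+B';
      l_string3 varchar2(100) := 'ABC';
      b_only    boolean;
      b_any     boolean;
      b_not     boolean;
    begin
      -- ContainsOnly: nothing is left after trimming away the allowed characters
      b_only := ltrim(l_string1, 'ABCDEFGHIJKL') is null;

      -- ContainsAny: at least one of the characters + - / occurs somewhere
      b_any  := regexp_like(l_string2, '[+/-]');

      -- ContainsNot: none of the digits 0-9 occurs anywhere
      b_not  := not regexp_like(l_string3, '[0-9]');

      dbms_output.put_line('only: ' || case when b_only then 'yes' else 'no' end ||
                           ', any: ' || case when b_any  then 'yes' else 'no' end ||
                           ', not: ' || case when b_not  then 'yes' else 'no' end);
    end;
    /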
