Generate hierarchical table entries by splitting path strings

Hello

We use 11gR2.

I would like to generate a hierarchical structure from a set of lines

containing data like this:

Assume that these are the input rows

create table my_input (inp_id integer, name varchar2(2000), constraint my_pk primary key (inp_id));

insert into my_input values (1, '/a/b/c/d');

insert into my_input values (2, '/x1/f123');

insert into my_input values (3, '/a/b/c/e');

insert into my_input values (4, '/f/b');

This should go in a table like this

create table my_output (id integer, name varchar2(2000), prior_id integer, inp_id integer, constraint my_pk_id primary key (id));

such that if I do a "select * from my_output;" the output looks like:

ID  NAME  PRIOR_ID  INP_ID
1   a     <null>    <null>
2   b     1         <null>
3   c     2         <null>
4   d     3         1
5   x1    <null>    <null>
6   f123  5         2
7   e     3         3
8   f     <null>    <null>
9   b     8         4

That is, the 'inp_id' of my_input is stored in my_output if the corresponding item in my_input is a leaf; otherwise inp_id is null in my_output.

Is there a simple way to solve this problem?

Thank you, Hannes

Not sure that I fully understand your needs:

with t1 as (
            select  inp_id,
                    column_value rn,
                    regexp_count(name,'/') cnt,
                    regexp_substr(name,'[^/]+',1,column_value) name,
                    substr(name,1,instr(name || '/','/',1,column_value) - 1) path
              from  my_input,
                    table(
                          cast(
                               multiset(
                                        select  level
                                          from  dual
                                          connect by level <= regexp_count(name,'/')
                                       )
                               as sys.OdciNumberList
                              )
                         )
           ),
     t2 as (
            select  min(inp_id) inp_id,
                    min(rn) keep(dense_rank first order by inp_id) rn,
                    min(cnt) keep(dense_rank first order by inp_id) cnt,
                    name
              from  t1
              group by path,
                       name
           )
select  row_number() over(order by inp_id,rn) id,
        name,
        case rn - 1
          when 0 then null
          else row_number() over(order by inp_id,rn) - 1
        end prior_id,
        case cnt
          when rn then inp_id
        end inp_id
  from  t2
/

ID NAME PRIOR_ID INP_ID
---------- ---- ---------- ----------
1 a
2 b              1
3 c              2
4 d              3          1
5 x1
6 f123           5          2
7 e              6          3
8 f

ID NAME PRIOR_ID INP_ID
---------- ---- ---------- ----------
9 b              8          4

9 rows selected.

SQL >

SY.
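
For reference, a minimal verification sketch (assuming my_output has been populated as above) that rebuilds each row's ancestry chain with a hierarchical query; rows with a non-null inp_id are the leaves:

select id,
       sys_connect_by_path(name,'/') path,
       inp_id
  from my_output
 start with prior_id is null
 connect by prior id = prior_id
 order by id;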

Tags: Database

Similar Questions

  • Unable to uninstall a program - error E9030: unable to generate the uninstaller for CA Antivirus command line

    Original title: impossible to uninstall the program error msg

    I'm unable to uninstall a program; an error message keeps popping up saying "error E9030: cannot generate the uninstaller for CA antivirus command line."
    Can anyone help?


    Try this... no guarantee

    Start > Computer > open the C: drive > open Program Files > is there a folder for CA Antivirus?
    If so, open it > is there an uninstaller.exe file there?
    If so, run it and let it go about uninstalling.

    Or,

    Start > All Programs > CA Antivirus > is there an option to uninstall?

    Hope one of these helps.

  • java.sql.SQLException: JDBC LLR table verify failed for table 'WL_LLR_ADMI

    Hello

    I am trying to install OSB in a different domain. I already have a suite of soa running.

    This is the directory structure
    Middleware/user_projects/domains

    (a) soa_domain
    (b) osb_domain

    When starting the administration server for the OSB, I get the error below.


    Error:
    < Server failed. Reason: [JTAExceptions:119002] The last logging resource failed during initialization. The server cannot start unless all configured Logging Last Resources (LLRs) initialize. Failure reason:
    javax.transaction.SystemException: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'soa_domain//AdminServer', expected 'osb_domain//AdminServer'. *ONLY the original domain and server that created an LLR table can access it*.



    I saw the solution in https://blogs.oracle.com/epc/entry/technical_table_verify_failed_for but I still have a doubt here.

    When I run

    select RECORDSTR from WL_LLR_ADMINSERVER where
    XIDSTR = 'JDBC LLR Domain//Server';

    I get the result as -> soa_domain//AdminServer


    If I change it to osb_domain//AdminServer, will this affect my soa_domain server...? Please advise.

    Published by: user10720442 on December 11, 2012 11:54

    Hello

    There are two possible solutions to this problem:

    Solution 1:

    To solve this problem, reconfigure the PointBase database information differently for each domain if you have more than one domain. That is, change the database port and name in the two files below within the domain.

    In the setDomainEnv.cmd (or .sh) file inside the DOMAIN_HOME/bin directory, change the PointBase port number and database name:

    set POINTBASE_PORT=9094
    set POINTBASE_DBNAME=weblogic_eval2

    jdbc:pointbase:server://localhost:9094/weblogic_eval2

    In the wlsbjmsrpDataSource-jdbc.xml file inside the DOMAIN_HOME/config/jdbc directory, update the entries with the new PointBase database port and name (this will be in two places in the file).

    Solution 2:

    If the domain name has been changed and you do not want to change the database properties, then an update to the WL_LLR_ADMINSERVER table is possible:

    i.e.:
    update SCHEMA_SAMPLE.WL_LLR_ADMINSERVER set RECORDSTR = 'base_domain//AdminServer' where XIDSTR = 'JDBC LLR Domain//Server';

    Kind regards
    Kal
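
    For Solution 2, a quick before-and-after sanity check (a sketch reusing the table and XIDSTR value from earlier in this thread) is to look at the stored record:

    select XIDSTR, RECORDSTR
      from SCHEMA_SAMPLE.WL_LLR_ADMINSERVER
     where XIDSTR = 'JDBC LLR Domain//Server';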

  • How to find the child level for each table in a relational model?

    Earthlings,

    I need your help, and I know that "Yes, we can" change this thread to an answered question.

    So: How to find the child level for each table in a relational model?

    I have a relational database (9.2), all right?
    .
         O /* This is a child who makes N references to each of the follow N parent tables (here: three), and so on. */
        /↑\ Fks
       O"O O" <-- level 2 for first table (circle)
      /↑\ Fks
    "o"o"o" <-- level 1 for middle table (circle)
       ↑ Fk
      "º"
    Tips:
    -Each circle represents a table;
    -Tables in red have no foreign keys;
    -the picture shows a tree with, for example, 3 levels, but what when 3 becomes N? How do I find N? That is the question.

    I started to think about the following:

    First of all, I need to know how to take the kids:
    select distinct child.table_name child
      from all_cons_columns father
      join all_cons_columns child
     using (owner, position)
      join (select child.owner,
                   child.constraint_name fk,
                   child.table_name child,
                   child.r_constraint_name pk,
                   father.table_name father
              from all_constraints father, all_constraints child
             where child.r_owner = father.owner
               and child.r_constraint_name = father.constraint_name
               and father.constraint_type in ('P', 'U')
               and child.constraint_type = 'R'
               and child.owner = 'OWNER') aux
     using (owner)
     where child.constraint_name = aux.fk
       and child.table_name = aux.child
       and father.constraint_name = aux.pk
       and father.table_name = aux.father;
    Thought...
    We will share!

    Thanks in advance,
    Philips

    Published by: BluShadow on April 1st, 2011 15:08
    formatted code and hierarchy for readability

    Have you looked to see whether there is a cycle in the dependency graph? Is there a table A that has a foreign key to B while B has a foreign key back to A?

    SQL> create table my_emp (
      2    emp_id number primary key,
      3    emp_name varchar2(10),
      4    manager_id number
      5  );
    
    Table created.
    
    SQL> ed
    Wrote file afiedt.buf
    
      1  create table my_mgr (
      2    manager_id number primary key,
      3    employee_id number references my_emp( emp_id ),
      4    purchasing_authority number
      5* )
    SQL> /
    
    Table created.
    
    SQL> alter table my_emp
      2    add constraint fk_emp_mgr foreign key( manager_id )
      3         references my_mgr( manager_id );
    
    Table altered.
    
    SQL> ed
    Wrote file afiedt.buf
    
      1   select level lvl,
      2          child_table_name,
      3          sys_connect_by_path( child_table_name, '/' ) path
      4     from (select parent.table_name      parent_table_name,
      5                  parent.constraint_name parent_constraint_name,
      6                  child.table_name        child_table_name,
      7                  child.constraint_name   child_constraint_name
      8             from user_constraints parent,
      9                  user_constraints child
     10            where child.constraint_type = 'R'
     11              and parent.constraint_type = 'P'
     12              and child.r_constraint_name = parent.constraint_name
     13           union all
     14           select null,
     15                  null,
     16                  table_name,
     17                  constraint_name
     18             from user_constraints
     19            where constraint_type = 'P')
     20    start with child_table_name = 'MY_EMP'
     21*  connect by prior child_table_name = parent_table_name
    SQL> /
    ERROR:
    ORA-01436: CONNECT BY loop in user data
    

    If you have a cycle, you have some problems.

    (1) There is a NOCYCLE keyword that avoids the error, but it probably requires an Oracle version that is not so far out of support. I don't think it was available back in 9.2, but I don't have anything old enough to test on.

    SQL> ed
    Wrote file afiedt.buf
    
      1   select level lvl,
      2          child_table_name,
      3          sys_connect_by_path( child_table_name, '/' ) path
      4     from (select parent.table_name      parent_table_name,
      5                  parent.constraint_name parent_constraint_name,
      6                  child.table_name        child_table_name,
      7                  child.constraint_name   child_constraint_name
      8             from user_constraints parent,
      9                  user_constraints child
     10            where child.constraint_type = 'R'
     11              and parent.constraint_type = 'P'
     12              and child.r_constraint_name = parent.constraint_name
     13           union all
     14           select null,
     15                  null,
     16                  table_name,
     17                  constraint_name
     18             from user_constraints
     19            where constraint_type = 'P')
     20    start with child_table_name = 'MY_EMP'
     21*  connect by nocycle prior child_table_name = parent_table_name
    SQL> /
    
           LVL CHILD_TABLE_NAME               PATH
    ---------- ------------------------------ --------------------
             1 MY_EMP                         /MY_EMP
             2 MY_MGR                         /MY_EMP/MY_MGR
             1 MY_EMP                         /MY_EMP
             2 MY_MGR                         /MY_EMP/MY_MGR
    

    (2) If you are trying to write out a table and all of its constraints to a file, and to do so in a valid order, the whole approach is probably wrong. It is impossible, for example, to generate the DDL for MY_EMP and MY_MGR such that all the statements for one table come first and all the statements for the other come second. So even if NOCYCLE avoids the error, you would end up with an invalid DDL script. If that's the problem, I would rethink the approach:

    -Generate the DDL for all tables without constraints
    -Then generate the DDL for all primary key constraints
    -Then generate the DDL for all unique key constraints
    -Then generate the DDL for all foreign key constraints

    This does not keep all the DDL for a given object together in the file. But the SQL will be radically simpler to write - there will be no need to even look at the dependency graph.

    Justin
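
    If the goal really is a single depth number per table, the same idea can be aggregated. A minimal sketch along the lines of the query above (NOCYCLE as discussed; only tables that have a primary key are considered, and the roots are the seed rows with a null parent):

    select child_table_name,
           max(lvl) max_level
      from (select level lvl,
                   child_table_name
              from (select parent.table_name parent_table_name,
                           child.table_name  child_table_name
                      from user_constraints parent,
                           user_constraints child
                     where child.constraint_type = 'R'
                       and parent.constraint_type = 'P'
                       and child.r_constraint_name = parent.constraint_name
                    union all
                    select null,
                           table_name
                      from user_constraints
                     where constraint_type = 'P')
             start with parent_table_name is null
           connect by nocycle prior child_table_name = parent_table_name)
     group by child_table_name
     order by max_level, child_table_name;

    A max_level of 1 means the table has no foreign key to any other table; higher values show how many foreign-key hops separate it from such a root table.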

  • PK PL/SQL generated for a table

    Hi all

    How do we generate a PK for a new row from an item which is set with the value for the new PK?

    In an old thread, Re: form of PL/SQL generated Primary Key?, someone added links explaining that, but unfortunately they no longer work.

    Can someone explain to me how this could be done?

    Thanks in advance


    Kind regards
    Cleo

    Published by: Cleopatra on March 7, 2011 05:42

    Can you not set the "*Source Type*" of the primary key to "*Custom PL/SQL function*" and use the function:

    BEGIN
      return v('ITEM_NAME');
    END;
    
  • Unable to display data for the date where there is no entry in the table

    Hello

    I need urgent help with the issue described below:

    I have a table named 'sale', consisting of three columns: empno, sale_amt and sale_date.
    (Please refer to the table-with-data script shown below.)

    Now, if I run the query:
    "select trunc(sale_date) sale_date, sum(sale_amt) total_sale from sale group by trunc(sale_date) order by 1;"
    it displays data for the dates for which there is an entry in the table, but it displays no data for the
    dates for which there is no entry in the table.

    If you run the table script with data in your schema, you will see that there is no entry for 28 November 2009 in the
    sale table. The above query displays data for all the other dates present in the sale table, with the exception of 28 November 2009.
    But I need it present in the result of the query, with the value of "sale_date" as '28 November 2009' and that of "total_sale" as
    "0".

    Is there any way to get the result I need?

    Please help as soon as POSSIBLE.

    Thanks in advance.

    Create the table script that contains data:
    ------------------------------------------

    CREATE TABLE SALE
    (
    EMPNO NUMBER,
    SALE_AMT NUMBER,
    SALE_DATE DATE
    );
    SET DEFINE OFF;
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (100, 1000, TO_DATE('01/12/2009 10:20:10', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (100, 1000, TO_DATE('30/11/2009 10:21:04', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (100, 1000, TO_DATE('29/11/2009 10:21:05', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (100, 1000, TO_DATE('26/11/2009 10:21:06', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (100, 1000, TO_DATE('25/11/2009 10:21:07', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (200, 5000, TO_DATE('27/11/2009 10:23:06', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (200, 4000, TO_DATE('29/11/2009 10:23:08', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (200, 3000, TO_DATE('24/11/2009 10:23:09', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (200, 2000, TO_DATE('30/11/2009 10:23:10', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (300, 7000, TO_DATE('24/11/2009 10:24:19', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (300, 5000, TO_DATE('25/11/2009 10:24:20', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (300, 3000, TO_DATE('27/11/2009 10:24:21', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (300, 2000, TO_DATE('29/11/2009 10:24:22', 'DD/MM/YYYY HH24:MI:SS'));
    Insert into SALE
    (EMPNO, SALE_AMT, SALE_DATE)
    Values
    (300, 1000, TO_DATE('30/11/2009 10:24:22', 'DD/MM/YYYY HH24:MI:SS'));
    COMMIT;

    Any help will be much appreciated.


    Kind regards
    WITH tab AS
      (SELECT TRUNC(sale_date) sale_date,
        SUM(sale_amt) total_sale
         FROM sale
       GROUP BY TRUNC(sale_date)
       ORDER BY 1
      )
     SELECT sale_date,
      NVL(total_sale,0) total_sale
       FROM tab
       model
       REFERENCE refmodel ON (SELECT 1 indx, MAX(sale_date)-MIN(sale_date) AS daysdiff , MIN(sale_date) minsaledate FROM tab)
         dimension BY (indx)
         measures(daysdiff,minsaledate)
       main main_model
       dimension BY (sale_date)
       measures(total_sale)
       RULES upsert SEQUENTIAL ORDER ITERATE(1000) until (iteration_number>refmodel.daysdiff[1]-1)
       ( total_sale[refmodel.minsaledate[1]+iteration_number]=total_sale[cv()] )
    ORDER BY sale_date
    

    using a model clause

    Ravi Kumar
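
    For reference, an alternative sketch that avoids the MODEL clause (it assumes the same sale table; the calendar is generated with a CONNECT BY row generator and outer-joined to the aggregated sales):

    with bounds as (
      select min(trunc(sale_date)) min_d,
             max(trunc(sale_date)) max_d
        from sale
    ),
    days as (
      select min_d + level - 1 sale_date
        from bounds
     connect by level <= max_d - min_d + 1
    )
    select d.sale_date,
           nvl(sum(s.sale_amt), 0) total_sale
      from days d
      left join sale s
        on trunc(s.sale_date) = d.sale_date
     group by d.sale_date
     order by d.sale_date;

    Dates with no sales, such as 28 November 2009 above, come back with total_sale = 0.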

  • How to block a SELECT statement for a specified row of a table until commit

    Hello

    How can I block a SELECT statement for a specified row in a table until a commit occurs?

    My procedure is part of a stock application, and if a user (X) takes 1 piece of stock, another user (Y) must wait for user X to complete his transaction.

    So, let's say my stock has 10 pencils.

    When user X starts the Stock_PLS procedure, the row in table Stock (e.g. R1) that user X works with must be locked until the commit / rollback occurs.

    procedure Stock_PLS...

    Begin
    ..
    pencils := pencils - 1;
    ..
    End

    Observation -> pencils := 9;


    This means that if another user runs SELECT * FROM stock WHERE rows_id = R1, the SELECT should wait until the Stock_PLS started by user X has completed (with commit or rollback), and his SELECT should then return the value 9.

    What I need is something like Oracle's ROW EXCLUSIVE TABLE LOCK, but in my situation the SELECT statement should be blocked on the specified rows until the end of the procedure.

    Kind regards
    Michael

    Hello
    You can achieve this using the FOR UPDATE clause of SELECT.
    You can write your select statement like this:
    SELECT * FROM stock WHERE rows_id = R1 FOR UPDATE;
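
    A minimal sketch of Stock_PLS using that approach (the column names rows_id and pencils are taken from the question; the stock table definition itself is assumed):

    create or replace procedure Stock_PLS (p_rows_id in stock.rows_id%type) is
      v_pencils stock.pencils%type;
    begin
      -- the FOR UPDATE lock makes any other session that selects this row FOR UPDATE,
      -- or tries to update it, wait until this transaction commits or rolls back
      select pencils
        into v_pencils
        from stock
       where rows_id = p_rows_id
         for update;

      update stock
         set pencils = v_pencils - 1
       where rows_id = p_rows_id;

      commit;
    end Stock_PLS;
    /

    Note that only other FOR UPDATE readers and writers are blocked; a plain SELECT is never blocked in Oracle and simply sees the value as of before the uncommitted change.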

  • Gray and white table rows for alternating records

    I have the page that queries records db and outputs to a page to display (html)
    I want to know if I can get an output that shows a slight difference for every other row (record), e.g. the first row would be a light gray color and the second row would be white, and this repeats.
    Thank you
    What should I write?
    Is it in HTML, CF?
    Like the cells below:
    like the ColdFusion forums page (for clarity).








    Something like that;


    column headings


    stuff of detail
    closing tags

  • Parsing XML and storing the details in hierarchical tables

    Hi all

    I'm trying to parse XML and store the details in hierarchical tables; however, I am unable to parse the child attributes of the tags and store the details in relational format.

    Oracle version - 11.2.0.4

    My XML looks like in below:

    <Root>
    <ParentTag name="JobName" attrib1="Text" attrib2="SomeOtherText">
      <ChildTag childAttrib1="SomeValue1" childAttrib2="SomeValue2" />
      <ChildTag childAttrib1="SomeValue3" childAttrib2="SomeValue4" />
      <ChildTag childAttrib1="SomeValue5" childAttrib2="SomeValue6" />
    
      <OtherChildTag childAttrib1="SomeValue1" childAttrib2="SomeValue2" />
    </ParentTag>
    </Root>
    
    

    The table structure is as follows:

    create table parent_details
    (
    job_id number primary key,
    job_name varchar2 (100),
    job_attrib1 varchar2 (100),
    job_attrib2 varchar2 (100)
    );

    create table child_details
    (
    child_id number primary key,
    parent_job_id number,
    child_attrib1 varchar2 (100),
    child_attrib2 varchar2 (100),
    constraint fk_child_details foreign key (parent_job_id) references parent_details (job_id)
    );

    After parsing, I would expect the data to be stored in the format below:

    Table Name:-
    parent_details
    ID Name     Attribute1  Attribute2
    1  JobName  Text        SomeOtherText
    
    
    ChildTable (Store Child Tag details)
    ID Parent ID Attribute1  Attribute2
    1  1         SomeValue1  SomeValue2
    2  1         SomeValue3  SomeValue4
    3  1         SomeValue5  SomeValue6
    
    

    I tried the following SQL, but it does not work. Please suggest whether the same SQL can be improved to get the details in the expected format, or whether I should use another solution.

    select job_details.*
      from test_xml_table t,
           xmltable ('/Root/ParentTag'
                      passing t.col 
                      columns 
                        job_name varchar2(2000) path '@name',
                        attribute1 varchar2(2000) path '@attrib1',
                        attribute2 varchar2(2000) path '@attrib2',
                        childAttribute1 varchar2(2000) path '/ChildTag/@childAttrib1'
                    ) job_details;
    

    I'm not forced to have a SQL solution, but it would be nice if it can be done in SQL.

    Kind regards

    Laureline.

    Post edited by: Jen K - added the SQL, I tried to build.

    Well, the XML contains hierarchical data, and SQL is a "flat" view of data, so it's up to you to treat the rows that come out in flat style and determine how to get them into separate tables.

    Suppose that we have several nodes of ParentTag each containing several nodes of ChildTag...

    SQL> ed
    Wrote file afiedt.buf

    with t(xml) as (select xmltype('
    ...
    ') from dual)
    -- end of test data
    select x.p
         , x.name, x.attrib1, x.attrib2
         , y.c
         , y.childattrib1, y.childattrib2
      from t
         , xmltable('/Root/ParentTag'
                    passing t.xml
                    columns p for ordinality
                          , name     varchar2(10) path './@name'
                          , attrib1  varchar2(10) path './@attrib1'
                          , attrib2  varchar2(10) path './@attrib2'
                          , children xmltype      path '.'
                   ) x
         , xmltable('/ParentTag/ChildTag'
                    passing x.children
                    columns c for ordinality
                          , childattrib1 varchar2(10) path './@childAttrib1'
                          , childattrib2 varchar2(10) path './@childAttrib2'
                   ) y
    /

             P NAME       ATTRIB1    ATTRIB2             C CHILDATTRI CHILDATTRI
    ---------- ---------- ---------- ---------- ---------- ---------- ----------
             1 JobName    Text       SomeOtherT          1 SomeValue1 SomeValue2
             1 JobName    Text       SomeOtherT          2 SomeValue3 SomeValue4
             1 JobName    Text       SomeOtherT          3 SomeValue5 SomeValue6
             2 JobName2   TextX      SomeOtherT          1 SomeValue6 SomeValue8
             2 JobName2   TextX      SomeOtherT          2 SomeValue7 SomeValue9

    Using 'for ordinality' gives us the row number of that node in the XML, so that you can identify each parent as well as tell which row is the first record of that parent (because it will have a child with ordinality 1).

    An INSERT ALL statement lets us insert into two different tables at the same time to keep the related data together... for example

    SQL> create table tbl1 (pk number, name varchar2 (10), attrib1 varchar2 (10), attrib2 varchar2 (10))
      2  /

    Table created.

    SQL> create table tbl2 (parent_pk number, attrib1 varchar2 (10), attrib2 varchar2 (10))
      2  /

    Table created.

    SQL> insert all
           when c = 1 then
             into tbl1 (pk, name, attrib1, attrib2)
             values (p, name, attrib1, attrib2)
           when 1 = 1 then
             into tbl2 (parent_pk, attrib1, attrib2)
             values (p, childattrib1, childattrib2)
         with t(xml) as (select xmltype('
         ...
         ') from dual)
         select x.p
              , x.name, x.attrib1, x.attrib2
              , y.c
              , y.childattrib1, y.childattrib2
           from t
              , xmltable('/Root/ParentTag'
                         passing t.xml
                         columns p for ordinality
                               , name     varchar2(10) path './@name'
                               , attrib1  varchar2(10) path './@attrib1'
                               , attrib2  varchar2(10) path './@attrib2'
                               , children xmltype      path '.'
                        ) x
              , xmltable('/ParentTag/ChildTag'
                         passing x.children
                         columns c for ordinality
                               , childattrib1 varchar2(10) path './@childAttrib1'
                               , childattrib2 varchar2(10) path './@childAttrib2'
                        ) y
    /

    7 rows created.

    SQL> select * from tbl1;

            PK NAME       ATTRIB1    ATTRIB2
    ---------- ---------- ---------- ----------
             1 JobName    Text       SomeOtherT
             2 JobName2   TextX      SomeOtherT

    SQL> select * from tbl2;

     PARENT_PK ATTRIB1    ATTRIB2
    ---------- ---------- ----------
             1 SomeValue1 SomeValue2
             1 SomeValue3 SomeValue4
             1 SomeValue5 SomeValue6
             2 SomeValue6 SomeValue8
             2 SomeValue7 SomeValue9
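
    Applied to the original parent_details/child_details tables, a sketch along the same lines could look like this (test_xml_table and its col column are taken from the question; the child_id expression p * 1000 + c is just a hypothetical surrogate key for illustration - normally sequences would be used):

    insert all
      when c = 1 then
        into parent_details (job_id, job_name, job_attrib1, job_attrib2)
        values (p, name, attrib1, attrib2)
      when 1 = 1 then
        into child_details (child_id, parent_job_id, child_attrib1, child_attrib2)
        values (p * 1000 + c, p, childattrib1, childattrib2)
    select x.p, x.name, x.attrib1, x.attrib2,
           y.c, y.childattrib1, y.childattrib2
      from test_xml_table t,
           xmltable('/Root/ParentTag'
                    passing t.col
                    columns p for ordinality,
                            name     varchar2(100) path '@name',
                            attrib1  varchar2(100) path '@attrib1',
                            attrib2  varchar2(100) path '@attrib2',
                            children xmltype       path '.'
                   ) x,
           xmltable('/ParentTag/ChildTag'
                    passing x.children
                    columns c for ordinality,
                            childattrib1 varchar2(100) path '@childAttrib1',
                            childattrib2 varchar2(100) path '@childAttrib2'
                   ) y;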

  • How to get a magic number for any table that returns more than 32 k?

    I'm in a unique situation where I try to extract values from multiple tables and publish them as XML output. The problem is that, depending on conditions, some tables can return more than 32 KB of data and some less than 32 KB. Less than 32 KB is not a problem, as the XML generation is smooth. The minute it reaches more than 32 KB, a runtime error is generated. I was wondering if there is a way to make sure that the minute a query result is greater than 32 KB it gets broken up - say, if the result is 35 KB, I should break it into 32 KB and 3 KB - and then pass that data on to appear as XML output. Again, this is not just for one table, but for all the tables that are called in the function.

    Is this possible? I'm unable to come up with ideas - or am I making something too complex from a production support point of view? I'd be grateful if someone can guide me on this.

    The way it is, is the following:
    I have a table named ctn_pub_cntl


    CREATE TABLE CTNAPP.ctn_pub_cntl 
    (ctn_pub_cntl_id          NUMBER(18)
    ,table_name                  VARCHAR2(50)
    ,last_pub_tms              DATE
    ,queue_name               VARCHAR2(50)
    ,dest_system              VARCHAR2(50)
    ,frequency                  NUMBER(6)
    ,status                      VARCHAR2(8)
    ,record_create_tms          DATE
    ,create_user_id                VARCHAR2(8)
    ,record_update_tms          DATE
    ,update_user_id             VARCHAR2(8)
    ,CONSTRAINT ctn_pub_cntl_id_pk PRIMARY KEY(ctn_pub_cntl_id)
    );
    
    

    The data for this table are:


    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms  
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_SBDVSN'
    ,TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.TSZ601.UNP'
    ,'SAP'
    ,15
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms  
     ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_TRACK_SGMNT_DN'
    ,TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.WRKORD.UNP'
    ,'SAP'
    ,30
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms  
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_FXPLA_TRACK_LCTN_DN'
    ,TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.YRDPLN.INPUT'
    ,'SAP'
    ,30
    ); 
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms  
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_FXPLA_TRACK_LCTN2_DN'
    ,TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.TSZ601.UNP'
    ,'SAP'
    ,120
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms 
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_FXPLA_TRACK_LCTN2_DN'
    ,TO_DATE('04/23/2015 11:50:00PM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.YRDPLN.INPUT'
    ,'SAP'
    ,10
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms 
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_FIXED_PLANT_ASSET'
    ,TO_DATE('04/23/2015 11:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.WRKORD.UNP'
    ,'SAP'
    ,10
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms 
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_OPRLMT'
    ,TO_DATE('03/26/2015 7:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.WRKORD.UNP'
    ,'SAP'
    ,30
    );
    
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    
     ,table_name         
     ,last_pub_tms
    ,queue_name  
     ,dest_system        
     ,frequency          
    )
    VALUES
    (CTNAPP_SQNC.nextval
    ,'TRKFCG_OPRLMT_SGMNT_DN'
    ,TO_DATE('03/28/2015 12:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
    ,'UT.TSD.WRKORD.UNP'
    ,'SAP'
    ,30
    );
    
    /
    
    COMMIT;
    
    

    Once the above data are inserted and committed, I created a function in a package:


    CREATE OR REPLACE PACKAGE CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
    
    TYPE tNameTyp IS TABLE OF ctn_pub_cntl.table_name%TYPE INDEX BY BINARY_INTEGER;
    g_tName tNameTyp;
    
    TYPE tClobTyp IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
    g_tClob tClobTyp;
    
    
    FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_count OUT NUMBER ) RETURN tClobTyp;
    
    
    END CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH;
    
    
    --Package body
    
    CREATE OR REPLACE PACKAGE BODY CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
    
         doc           xmldom.DOMDocument;
         main_node     xmldom.DOMNode;
         root_node     xmldom.DOMNode;
         root_elmt     xmldom.DOMElement;
         child_node    xmldom.DOMNode;
         child_elmt    xmldom.DOMElement;
         leaf_node     xmldom.DOMNode;
         elmt_value    xmldom.DOMText;
         tbl_node      xmldom.DOMNode;
         table_data    XMLDOM.DOMDOCUMENTFRAGMENT;
      
         l_ctx         DBMS_XMLGEN.CTXHANDLE;
         vStrSqlQuery  VARCHAR2(32767);
         l_clob        tClobTyp;
         --
         l_xmltype     XMLTYPE;
         --
    --Local Procedure to build XML header     
    PROCEDURE BuildCPRHeader IS
    
      BEGIN
        child_elmt := xmldom.createElement(doc, 'PUBLISH_HEADER');
        child_node  := xmldom.appendChild (root_node, xmldom.makeNode (child_elmt));
    
        child_elmt := xmldom.createElement (doc, 'SOURCE_APLCTN_ID');
        elmt_value := xmldom.createTextNode (doc, 'CTN');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        
        child_elmt := xmldom.createElement (doc, 'SOURCE_PRGRM_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
    
        child_elmt := xmldom.createElement (doc, 'SOURCE_CMPNT_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
    
        child_elmt := xmldom.createElement (doc, 'PUBLISH_TMS');
        elmt_value := xmldom.createTextNode (doc, TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        
    END BuildCPRHeader;
    
    --Get table data based on table name
    FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_Count OUT NUMBER) RETURN tClobTyp IS
        
        vTblName      ctn_pub_cntl.table_name%TYPE;
        vLastPubTms   ctn_pub_cntl.last_pub_tms%TYPE;
         
    BEGIN
                g_vProcedureName:='GetCtnData';    
                g_vTableName:='CTN_PUB_CNTL';
                
            SELECT table_name,last_pub_tms
            INTO   vTblName, vLastPubTms
            FROM   CTN_PUB_CNTL
            WHERE  ctn_pub_cntl_id=p_nInCtnPubCntlID;
        
        -- Start the XML Message generation
            doc := xmldom.newDOMDocument;
            main_node := xmldom.makeNode(doc);
            root_elmt := xmldom.createElement(doc, 'PUBLISH');
            root_node := xmldom.appendChild(main_node, xmldom.makeNode(root_elmt));
            
          --Append Table Data as Publish Header
            BuildCPRHeader;
            
          --Append Table Data as Publish Body
          
           child_elmt := xmldom.createElement(doc, 'PUBLISH_BODY');
           leaf_node  := xmldom.appendChild (root_node, xmldom.makeNode(child_elmt)); 
           
           DBMS_SESSION.SET_NLS('NLS_DATE_FORMAT','''YYYY:MM:DD HH24:MI:SS''');
           
           vStrSqlQuery := 'SELECT * FROM ' || vTblName 
                          || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
                        --  ||  ' AND rownum < 16'
                          --;
          DBMS_OUTPUT.PUT_LINE(vStrSqlQuery);
           
           l_ctx  := DBMS_XMLGEN.NEWCONTEXT(vStrSqlQuery);
          DBMS_XMLGEN.SETNULLHANDLING(l_ctx, 0);
          DBMS_XMLGEN.SETROWSETTAG(l_ctx, vTblName); 
           
          -- Append Table Data as XML Fragment
          l_clob(1):=DBMS_XMLGEN.GETXML(l_ctx);  
          elmt_value := xmldom.createTextNode (doc, l_clob(1)); 
         leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value)); 
         
         xmldom.writeToBuffer (doc, l_clob(1));
         l_clob(1):=REPLACE(l_clob(1),'&lt;?xml version=&quot;1.0&quot;?&gt;', NULL);
         l_clob(1):=REPLACE(l_clob(1),'&lt;', '<');
         l_clob(1):=REPLACE(l_clob(1),'&gt;', '>');
         
         RETURN l_clob;
         
         DBMS_OUTPUT.put_line('Answer is' ||l_clob(1));
         
         EXCEPTION
            
            WHEN NO_DATA_FOUND THEN
            
            DBMS_OUTPUT.put_line('There is no data with' || SQLERRM);
            g_vProcedureName:='GetCtnData';
            g_vTableName:='CTN_PUB_CNTL';
            g_vErrorMessage:=SQLERRM|| g_vErrorMessage;
            g_nSqlCd:=SQLCODE;
            ctn_log_error('ERROR',g_vErrorMessage,'SELECT',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
            
            
            WHEN OTHERS THEN
           
           DBMS_OUTPUT.PUT_LINE('ERROR : ' || SQLERRM);
           ctn_log_error('ERROR',g_vErrorMessage,'OTHERS',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
           
    END GetCtnData;
    
    
    PROCEDURE printClob (result IN OUT NOCOPY CLOB) IS
        xmlstr   VARCHAR2 (32767);
        line     VARCHAR2 (2000);
    BEGIN
        xmlstr := DBMS_LOB.SUBSTR (result, 32767);
    
        LOOP
           EXIT WHEN xmlstr IS NULL;
           line := SUBSTR (xmlstr, 1, INSTR (xmlstr, CHR (10)) - 1);
           DBMS_OUTPUT.put_line (line);
           xmlstr := SUBSTR (xmlstr, INSTR (xmlstr, CHR (10)) + 1);
        END LOOP;
    END printClob;
    
    END CTN_PUB_CNTL_EXTRACT_PUBLISH;
    
     
    

    If you notice my query:


    vStrSqlQuery := 'SELECT * FROM ' || vTblName 
                          || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
                         ||  ' AND rownum < 16'
                        ;
    

    The minute I comment out

    ||  ' AND rownum < 16' ;


    It generates an error, because the query then returns about 600 rows and all of them must be published in XML format. The tragedy is that there is a C program in between, i.e. C calls my packaged function and then does all the processing, and the results are returned to the C program. Obviously C does not recognize the CLOB, so somewhere in the process I have to convert the CLOB to VARCHAR and use VARCHAR as the return type. That's my challenge.


    Can someone help me find the required magic number, and also briefly explain how it works so that I understand it? Thanks in advance.

    Not that I would use it myself but your package can be simplified down, like this:

    create or replace package ctn_pub_cntl_extract_publish is
    
      C_DTFORMAT  constant varchar2(30) := 'YYYY-MM-DD HH24:MI:SS';
    
      function getXMLData (p_table_name in varchar2, p_pub_tms in date) return xmltype;
      function getCTNData (p_id in number) return clob;
    
    end ctn_pub_cntl_extract_publish;
    /
    
    create or replace package body ctn_pub_cntl_extract_publish is
    
      function getXMLData (p_table_name in varchar2, p_pub_tms in date)
      return xmltype
      is
    
        v_query  varchar2(32767) :=
                 q'{select * from $$TABLE_NAME where record_update_tms <= to_date(:1, 'YYYYMMDDHH24MISS')}';
    
        ctx      dbms_xmlgen.ctxHandle;
        doc      xmltype;
    
      begin
    
        execute immediate 'alter session set nls_date_format = "'||C_DTFORMAT||'"';
        v_query := replace(v_query, '$$TABLE_NAME', dbms_assert.simple_sql_name(p_table_name)); 
    
        ctx := dbms_xmlgen.newContext(v_query);
        dbms_xmlgen.setBindValue(ctx, '1', to_char(p_pub_tms, 'YYYYMMDDHH24MISS'));
        dbms_xmlgen.setRowSetTag(ctx, p_table_name);
        dbms_xmlgen.setNullHandling(ctx, dbms_xmlgen.DROP_NULLS);
        doc := dbms_xmlgen.getXMLType(ctx);
        dbms_xmlgen.closeContext(ctx);
    
        return doc; 
    
      end;
    
      function getCTNData (p_id in number)
      return clob
      is
    
        doc  clob;
    
      begin
    
        select xmlserialize(document
                 xmlelement("PUBLISH"
                 , xmlelement("PUBLISH_HEADER"
                   , xmlforest(
                       'CNT' as "SOURCE_APLCTN_ID"
                     , 'VALUE' as "SOURCE_PRGRM_ID"
                     , 'VALUE' as "SOURCE_CMPNT_ID"
                     , to_char(sysdate, C_DTFORMAT) as "PUBLISH_TMS"
                     )
                   )
                 , xmlelement("PUBLISH_BODY"
                   , getXMLData(t.table_name, t.last_pub_tms)
                   )
                 )
               )
        into doc
        from ctn_pub_cntl t
        where t.ctn_pub_cntl_id = p_id;
    
        return doc;
    
      end;
    
    end ctn_pub_cntl_extract_publish;
    

    Function getXMLData() generates a canonical XML document out of the table that is passed as a parameter.

    Function getCTNData() builds the "PUBLISH" XML document using SQL/XML functions, calling getXMLData() in the process, and returns the content serialized as a CLOB.
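
    As a hypothetical quick test of the simplified package (the names are exactly as defined above; it assumes a row with ctn_pub_cntl_id = 1 exists, which depends on the sequence values used by the inserts):

    set long 1000000
    select ctn_pub_cntl_extract_publish.getCTNData(p_id => 1) publish_doc
      from dual;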

  • DML over DB links generates redo entries/archive logs

    From the SOURCEDB production database, we use database links to connect to the TARGETDB Oracle database.

    We want to know if DML statements (insert, update, delete) issued from SOURCEDB against TARGETDB over the database link generate archive logs on SOURCEDB.

    I have been looking through the documentation but couldn't find an answer to this. I would appreciate it if you could provide any link that answers this question.

    Hello

    Easy to test:

    SQL > connect DEMO/demo@//localhost/pdb1.demo.pachot.net

    Connected.

    SQL > drop database link DEMO;

    Database link dropped.

    SQL > create database link DEMO connect to DEMO identified by demo using '&_CONNECT_IDENTIFIER';

    old   1: create database link DEMO connect to DEMO identified by demo using '&_CONNECT_IDENTIFIER'

    new   1: create database link DEMO connect to DEMO identified by demo using '//localhost/pdb1.demo.pachot.net'

    Database link created.

    SQL > create table DEMO as select lpad('x',1000,'x') x from (select * from dual connect by level <= 1000), (select * from dual connect by level <= 100);

    Table created.

    SQL > connect DEMO/demo@//localhost/pdb1.demo.pachot.net

    Connected.

    SQL > set autotrace trace

    SQL > update DEMO@DEMO set x = upper (x);

    100000 rows updated.

    Execution plan

    ----------------------------------------------------------

    Hash value of plan: 1805557832

    ----------------------------------------------------------------------------------------

    | Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst  |

    ----------------------------------------------------------------------------------------

    |   0 | UPDATE STATEMENT (REMOTE) |      |   100K|    95M|  3919   (1)| 00:00:01 |       |

    |   1 |  UPDATE                   | DEMO |       |       |            |          | PDB1  |

    |   2 |   TABLE ACCESS FULL       | DEMO |   100K|    95M|  3919   (1)| 00:00:01 | PDB1  |

    ----------------------------------------------------------------------------------------

    Note

    -----

    - fully remote statement

    Statistics

    ----------------------------------------------------------

    4 recursive calls

    1 db block Gets

    2 consistent gets

    0 physical reads

    280 redo size

    854 bytes sent via SQL * Net to client

    835 bytes received via SQL * Net from client

    3 SQL*Net roundtrips to/from client

    1 sorts (memory)

    0 sorts (disk)

    100000 rows processed

    SQL > set autotrace off

    SQL > select name, value from v$mystat join v$statname using(statistic#) where name like 'redo size';

    NAME                                                                  VALUE

    ---------------------------------------------------------------- ----------

    redo size                                                              3132

    SQL > select name, value from v$mystat@DEMO join v$statname using(statistic#) where name like 'redo size';

    NAME                                                                  VALUE

    ---------------------------------------------------------------- ----------

    redo size                                                         235313596

    For a table that has about 100 MB of data, you see that the remote update generates 200 MB of redo on the remote site (which is 100 MB of new values and 100 MB of undo), and very little on the local session.

    Kind regards

    Franck.
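
    To repeat the measurement in your own environment, a minimal sketch (it assumes a database link named DEMO, as in the test above) is to read the 'redo size' session statistic locally, run the remote DML, and then read the same statistic through the link:

    select sn.name, ms.value
      from v$mystat ms
      join v$statname sn using (statistic#)
     where sn.name = 'redo size';

    update DEMO@DEMO set x = upper(x);

    select sn.name, ms.value
      from v$mystat@DEMO ms
      join v$statname sn using (statistic#)
     where sn.name = 'redo size';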

  • VPD: column-level masking - error: ORA-28104: input value for sec_relevant_cols is not valid

    Hi gurus,

    I am trying to mask a column of a secured table for a specified user; here are the details of the DB and the code that I use to apply the security:

    Version of DB: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0

    Security features:

    create or replace function kr_sec_function_papf (p_object_schema IN VARCHAR2,
                                                     p_object_name   IN VARCHAR2)
    return varchar2
    as
      p_nid  varchar2 (200);
      whoami varchar2 (100);
    begin
      if SYS_CONTEXT ('USERENV', 'SESSION_USER') = 'VPDTEST'
      then
        p_nid := 'national_identifier = national_identifier';
        return (p_nid);
      else
        p_nid := '1 = 2';
        return (p_nid);
      end if;
    end kr_sec_function_papf;
    /

    Code to add the policy:

    BEGIN
      DBMS_RLS.ADD_POLICY (
        object_schema         => 'APPS',
        object_name           => 'PER_ALL_PEOPLE_F',
        policy_name           => 'secure_emp',
        policy_function       => 'kr_sec_function_papf',
        statement_types       => 'SELECT',
        sec_relevant_cols     => 'NATIONAL_IDENTIFIER',
        sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
    END;
    /

    I get the following error when executing the above PL/SQL block:

    ORA-28104: input value for sec_relevant_cols is not valid

    ORA-06512: at "SYS.DBMS_RLS", line 20

    ORA-06512: at line 2

    Someone please help me solve the problem.

    Thanks in advance.

    ~ Krishna Nand Singh

    Hi all,

    I got this problem solved.

    The problem is with the object_schema => 'APPS' setting; the schema that owns the object is 'HR', and APPS has an object with the same name.

    The Correct code should be:

    BEGIN
      DBMS_RLS.ADD_POLICY (
        object_schema         => 'HR',
        object_name           => 'PER_ALL_PEOPLE_F',
        policy_name           => 'secure_emp',
        policy_function       => 'kr_sec_function_papf',
        statement_types       => 'SELECT',
        sec_relevant_cols     => 'NATIONAL_IDENTIFIER',
        sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
    END;
    /

    Thank you

    Krishna Nand Singh
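
    A hypothetical quick check of the masking behaviour (person_id is assumed here as a convenient column to display next to the masked one; with DBMS_RLS.ALL_ROWS the policy does column masking, so every row is still returned):

    -- connected as VPDTEST: national_identifier values are visible
    -- connected as any other user: the rows come back, but national_identifier is NULL
    select person_id, national_identifier
      from HR.PER_ALL_PEOPLE_F
     where rownum <= 5;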

  • How can I exclude a level from the table of contents but still include it as a bookmark?

    I know how to exclude a level from the table of contents (or rather, include only the levels I want included) and also how to create bookmarks from the table of contents entries, but how can I exclude a level from the table of contents yet still include it as a bookmark?

    Make 2 TOC styles: one for the entries in the Bookmarks panel - you can generate that table of contents on the pasteboard of any page, and you must check the option to create bookmarks in its definition - and another that you use to build the table of contents for the visible, printed part of your document. The latter should have the option to create bookmarks unchecked in its definition.

  • Trace file not generated for RDF report run through Oracle Apps

    Hi all


    Actually, our aim is to generate a trace file for reports and convert it to a text file using tkprof, via a Unix shell script concurrent program submitted with fnd_request.submit_request from another concurrent program. All the reports that are built using PL/SQL generate the trace file, but the RDF report does not generate a trace file.

    Oracle Reports 6i is used

    Oracle application is 11i

    The steps being taken to get the trace file are:
    1. SRW.USER_EXIT('FND SRWINIT'); in the before report trigger
    2. SRW.USER_EXIT('FND SRWEXIT'); in the after report trigger

    Other steps that are followed:
    SRW.DO_SQL('alter session set SQL_TRACE = TRUE'); in the before report trigger
    SRW.DO_SQL('alter session set SQL_TRACE = FALSE'); in the after report trigger

    The above steps are done, but still no trace file is produced.

    Even oracle_process_id is null:

    select oracle_process_id from fnd_concurrent_requests where request_id = ...

    The Oracle process ID for this RDF report is not generated.


    Please help me in this issue

    Thank you

    Published by: 797525 on October 12, 2012 12:43 AM

    Add the following line in the before report trigger:
    SRW.DO_SQL('alter session set tracefile_identifier=''REPORT'' events=''10046 trace name context forever, level 4''');

    Trace stops automatically when the report closes.

    In addition, which submits fnd_request.submit_request... the shell script or a PL/SQL procedure?

    Do you initialize the Apps context with FND_GLOBAL.APPS_INITIALIZE before firing submit_request?

    Set the profile 'FND: Debug Log Enabled' = Yes and check the fnd_log_messages table

    See the following MOS docs:
    Tracing Oracle Reports 6i [ID 111311.1]
    Cheers,
    ND
    Use the 'helpful' or 'correct' buttons to award points to replies.

  • Conversion tables and the entries in the table

    I'm working on a conversion table for converting our legacy unstructured FM documents to DITA. I understand the basic concepts, but I'm having a problem with the table cells.

    I have P:CellBody in my conversion table, in the first row, mapped to entry with a cellbody qualifier.

    I also have TC: mapped to entry.

    The same applies to P:CellHeading, and Th.

    Therefore, my text is wrapped in two entry elements. The element context display for the item says:

    entry

    entry

    row

    TBODY

    tgroup

    Table

    body

    NoName

    NoName

    I'm sure it should be the same as above but with only one entry element (and of course with the NoNames fixed, which I think I know how to do; I just haven't gotten there yet).

    How do I avoid getting my cells wrapped in two entry elements?

    Thanks in advance,

    Marsha

    Marsha,

    I don't understand exactly what you're trying to do, or what exactly is in your conversion table. Be aware, however, that FrameMaker will always create elements for the structural parts that occur in your tables. The conversion table you write gives some control over how these items will be tagged, but no say in whether the elements will exist.

    If your conversion table contains lines such as:

    P:CellBody entry cellbody
    TC: entry

    You will get the nested entry elements. The outer one is the table cell itself and the inner one is the paragraph. FrameMaker does not consider it valid to use the same element tag for both a cell and a container inside it, so aside from the results not being what you wanted, they are not correct in FrameMaker.

    If your table cells contain simple paragraphs and you don't want separate elements for cells and paragraphs, your conversion table doesn't even need to mention the CellBody and CellHeading paragraph tags. Indeed, if your table formats use CellBody as the paragraph format for cells in the body of a table and CellHeading as the paragraph format for the heading cells, your EDD doesn't even need to apply the paragraph formats.

    Another alternative is to include a paragraph tag in a conversion-table row for a table cell, combining TC: and P: to match table cells that contain such paragraphs. For example:

    TC: P:CellBody entry

    creates elements named entry from table cells containing paragraphs tagged CellBody. The paragraph in such a cell is not wrapped in an extra element.

    One final note: TH: in a conversion table refers to the heading portion of the entire table; its children are heading rows. The table-body analogue of TH: is TB:, not TC:.

    -Lynne
