Repeating nested groups in a single table - RTF template

Hi all

I have a small problem with RTF templates. I am trying to use two repeating groups within a single table, but either I get no data at all for the fields of the outer loop, or I get only one record from the inner loop.

It is a cash requirements report template. My outer repeating group is G_VENDOR and the inner one is G_INVOICE. I am at the stage where I have pasted the table with the G_INVOICE details inside another table (with the vendor NAME in the first field). This has a drawback, however: the vendor name is not repeated if there is more than one G_INVOICE in a G_VENDOR. I don't want a separate table for each vendor; I want one table with all the data.

I had an Oracle SR open, but they don't seem to be very helpful, which makes me think it is a bug that will never be fixed. I know that flattening the XML would be an option, but I don't want to have to redevelop all the report templates I need.

Someone has an idea?

Regards
Piotr

Hi Piotr,

Ideally you would flatten the data, but if you are inside the invoice loop you can still access the fields of the outer loop by editing the form field and prefixing the element name with ../.

For example, <?NAME?> becomes <?../NAME?>.
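As a sketch (G_VENDOR, G_INVOICE and NAME come from your template; INVOICE_NUM is just an illustrative detail field), the nested loops in the RTF form fields would look like:

```
<?for-each:G_VENDOR?>
<?for-each:G_INVOICE?>
<?../NAME?>    <?INVOICE_NUM?>
<?end for-each?>
<?end for-each?>
```

Inside the G_INVOICE loop, <?../NAME?> steps up to the current G_VENDOR, so the vendor name can be printed on every invoice row of the single table.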

Kind regards

Robert

Tags: Business Intelligence

Similar Questions

  • Merge cells in a table (RTF template)

    Hello, everyone! Is it possible to merge cells at run time when certain conditions are met?

    Edited by: user11367404 on 24.08.2009 06:10

    Look at this.

    http://winrichman.blogspot.com/search/label/row%20Spanning

  • "not a single-group group function" with nested XMLAGG

    I get the error "not a single-group group function" when using XMLAGG inside another XMLAGG. The table structure and sample data are below.
     CREATE TABLE "TEST_TABLE" 
       ("KEY" NUMBER(20,0), 
        "NAME" VARCHAR2(50 ), 
        "DESCRIPTION" VARCHAR2(100 )
       );
    
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (1,'sam','desc1');
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (2,'max','desc2');
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (3,'peter',null);
       Insert into TEST_TABLE (KEY,NAME,DESCRIPTION) values (4,'andrew',null);
    
    select 
            XMLSerialize(document
            xmlelement("root",
             xmlagg(
               xmlelement("emp"           
               , xmlforest(Key as "ID")           
               , xmlforest(name as "ename")
               , xmlelement("Descriptions",  
               xmlagg(
                  xmlforest(description as "Desc")
                  )
                )
               )
              )
           ) as clob indent
           ) as t    
          from test_table;
    Then I removed the inner XMLAGG from the query above and used just XMLELEMENT:
      select 
            XMLSerialize(document
            xmlelement("root",
             xmlagg(
               xmlelement("emp"           
               , xmlforest(Key as "ID")           
               , xmlforest(name as "ename")
               , xmlelement("Descriptions",             
                  xmlforest(description as "Desc")
                  )           
               )
              )
           ) as clob indent
           ) as t    
          from test_table;
    It works fine, but the generated XML contains empty Descriptions elements for keys 3 and 4, which have null values. I don't want a Descriptions element in the XML when the description is null. Please help me solve this problem.

    Expected behavior, it is not a bug.

    The real question is why you put the XMLAGG there in the first place.
    Do you expect several DESCRIPTION values per employee?

    If yes, then please provide some sample data for that situation.
    If not, then a simple CASE expression should be enough:

    SQL> set long 1000
    SQL>
    SQL> select XMLSerialize(document
      2          xmlelement("root",
      3           xmlagg(
      4             xmlelement("emp"
      5             , xmlforest(Key as "ID")
      6             , xmlforest(name as "ename")
      7             , case when description is not null then
      8                xmlelement("Descriptions",
      9                  xmlforest(description as "Desc")
     10                )
     11               end
     12             )
     13           )
     14          ) as clob indent
     15         ) as t
     16  from test_table;
    
    T
    --------------------------------------------------------------------------------
    <root>
      <emp>
        <ID>1</ID>
        <ename>sam</ename>
        <Descriptions>
          <Desc>desc1</Desc>
        </Descriptions>
      </emp>
      <emp>
        <ID>2</ID>
        <ename>max</ename>
        <Descriptions>
          <Desc>desc2</Desc>
        </Descriptions>
      </emp>
      <emp>
        <ID>3</ID>
        <ename>peter</ename>
      </emp>
      <emp>
        <ID>4</ID>
        <ename>andrew</ename>
      </emp>
    </root>
    
  • How to combine a large number of key/value pair tables into a single table?

    I have 250+ key/value pair tables with the following characteristics:

    (1) keys are unique within a table but may or may not be unique across the set of tables
    (2) each table has about 2 million rows

    What is the best way to create a single table with all the unique key/value pairs from all these tables? The following two queries work for up to about 150 tables:
    with
      t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select coalesce(t1.key, t2.key, t3.key) as key
    ,      max(t1.val) as val1
    ,      max(t2.val) as val2
    ,      max(t3.val) as val3
    from t1
    full join t2 on ( t1.key = t2.key )
    full join t3 on ( t2.key = t3.key )
    group by coalesce(t1.key, t2.key, t3.key)
    /
    
    with
      master as ( select rownum as key from dual connect by level <= 5 )
    , t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select m.key as key
    ,      t1.val as val1
    ,      t2.val as val2
    ,      t3.val as val3
    from master m
    left join t1 on ( t1.key = m.key )
    left join t2 on ( t2.key = m.key )
    left join t3 on ( t3.key = m.key )
    /

    A couple of questions, then a possible solution.

    Why on earth do you have 250+ key/value pair tables?

    Why on earth do you want to combine them into one table containing one row per key?

    You could pivot all of the tables at once, not just some of them. Something like:

    with
      t1 as ( select 1 as key, 'a1' as val from dual union all
              select 2 as key, 'a1' as val from dual union all
              select 3 as key, 'a2' as val from dual )
    , t2 as ( select 2 as key, 'b1' as val from dual union all
              select 3 as key, 'b2' as val from dual union all
              select 4 as key, 'b3' as val from dual )
    , t3 as ( select 1 as key, 'c1' as val from dual union all
              select 3 as key, 'c1' as val from dual union all
              select 5 as key, 'c2' as val from dual )
    select key, max(t1val), max(t2val), max(t3val)
    FROM (select key, val t1val, null t2val, null t3val
          from t1
          union all
          select key, null, val, null
          from t2
          union all
          select key, null, null, val
          from t3)
    group by key
    

    If you can do it in a single query, with a UNION ALL over all 250+ tables, you don't need to worry about chaining or migration. It may be necessary to do this in a few passes, depending on the resources available on your server. If so, I would be inclined to create the table first, with a larger than normal PCTFREE, doing the first pass as a direct insert and the other pass or passes as merges.

    Another solution might be to use the approach above, but limit the range of keys in each pass. Pass 1 would have a predicate like key between 1 and 10 in every branch of the union, pass 2 would have key between 11 and 20, etc. That way, everything would be straight inserts.
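    As a sketch of one such pass (t1-t3 and the val columns are from the examples above; COMBINED_KV is a hypothetical target table):

    ```sql
    -- Pass 1: keys 1 to 10 only, loaded with a direct-path insert;
    -- later passes repeat the same statement with the next key range.
    insert /*+ append */ into combined_kv (key, val1, val2, val3)
    select key, max(t1val), max(t2val), max(t3val)
    from (select key, val t1val, null t2val, null t3val from t1 where key between 1 and 10
          union all
          select key, null, val, null from t2 where key between 1 and 10
          union all
          select key, null, null, val from t3 where key between 1 and 10)
    group by key;
    ```

    Because each key appears in exactly one pass, every pass is a plain insert and no merge is needed.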

    That said, I'm going back to my second question above: why on earth do you want or need to do this? What is the business problem you are trying to solve? There could be a much better way to meet the requirement.

    John

  • Dreamweaver: several graphics in a single table cell

    Dreamweaver is giving me problems when I try to combine multiple graphics in a single table cell. Some are separated by stray line-break entries, while others do not accept these separators. Please show me the right way to handle this.

    Jack,

    • Tables should not be used for web page layouts.
    • Tables are for tabular data, such as spreadsheets and charts, only.
    • Today, we use an external CSS file for the layout, typography and other styles.

    Please show us what you are trying to do by copying and pasting your code in a reply on the web forum.

    Nancy O.

  • How to restore a single table from a Data Pump export into a different schema?

    Environment:

    Oracle 11.2.0.3 EE on Solaris

    I was looking at the Data Pump import documentation, trying to find the correct syntax to import a single table from a Data Pump export into a different schema.

    So I want to load the table USER1.TABLE1 into USER2.TABLE1 from a Data Pump export.

    Looking at the REMAP_TABLE options:
    REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
    
    OR
    
    REMAP_TABLE=[schema.]old_tablename[:partition]:new_tablename
    I can't see where to specify the name of the target schema. The examples all have the new table residing in the same schema with just a new name.

    I looked at REMAP_SCHEMA, but the docs say it imports the entire schema into the new schema, and I want only the one table.

    All suggestions are welcome!

    -gary

    I thought I had tried this combination, and it seemed to me that REMAP_SCHEMA somehow overrode the TABLES= parameter and started to load all the objects.

    If it fails (and it shouldn't), then please post the details here and I'll see what is happening.

    Let me get back into the sandbox and try again. I admit I was in a bit of a rush when I did it the first time.

    We are all in a hurry, no worries. If it fails, please post the details and the log file.

    Does it make any sense that one parameter would override another?

    No, this should never happen. We have tons of checks to ensure that a job cannot be ambiguous. For example, you cannot say:

    full=y schemas=foo - do you want a full export, or a schema-list export, etc.

    Your suggestion was the first thing that I thought would work.

    It should work. If it doesn't, send the log file along with the command and the results.
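    For the record, the combination being discussed would look something like this (the directory, dump file and log file names are placeholders):

    ```
    impdp system DIRECTORY=dp_dir DUMPFILE=expdp.dmp LOGFILE=imp_table1.log TABLES=user1.table1 REMAP_SCHEMA=user1:user2
    ```

    TABLES= restricts the import to the one table, and REMAP_SCHEMA moves it into the target schema.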

    Dean
    Thanks again for the help and stay tuned for my new attempt.

    -gary

  • single table hash clusters

    I created a single-table hash cluster like this:

    create tablespace mssm datafile 'c:\app\mssm01.dbf' size 100m
    segment space management manual;


    create cluster hash_cluster_4k
    ( id number(2) )
    size 8192 single table hash is id hashkeys 4 tablespace mssm;

    -- Also created a table in the cluster, with a row size such that a single record corresponds to one block, and inserted 5 records, each with a distinct key value


    CREATE TABLE hash_cluster_tab_8k
    ( id   number(2),
      txt1 char(2000),
      txt2 char(2000),
      txt3 char(2000)
    )
    CLUSTER hash_cluster_8k (id);


    begin
      for i in 1..5 loop
        insert into hash_cluster_tab_8k values (i, 'x', 'x', 'x');
      end loop;
    end;
    /
    exec dbms_stats.gather_table_stats(user, 'HASH_CLUSTER_TAB_8K', cascade => true);


    Now, if I try to access the row with id = 1, it shows 2 I/Os (cr = 2) instead of the single I/O expected from a hash cluster.



    Rows     Row Source Operation
    -------  ---------------------------------------------------
          1  TABLE ACCESS HASH HASH_CLUSTER_TAB_8K (cr=2 pr=0 pw=0 time=0 us)


    If I run the same query after creating a unique index on hash_cluster_tab(id), the execution plan shows hash access and a single I/O (cr = 1).

    Does this mean that to get a single I/O from a single-table hash cluster, we have to create a unique index? Won't that create the additional burden of maintaining an index?

    What is the second I/O needed for when a unique index is absent?

    I would be very grateful if gurus could explain this behavior.

    Thanks in advance...

    user12288492 wrote:
    I ran the query with all 5 id values and the results were even more confusing.

    On the first run, I had cr = 2 for two key values and cr = 1 for the rest.
    On the second run, I had cr = 2 for one key value and cr = 1 for the rest.
    On the third run, I had cr = 1 for all key values.

    The effects vary depending on the number of previous runs and the number of times you reconnect.
    The extra CR is a block cleanout effect. If you trace the buffer accesses (events 10200-10203), you can see the details. Simplistically, if you create your data, then reconnect and query one of the rows (but not id = 5, because that one will be cleaned out during the statistics collection), you should see the following figures:

    cleanouts only - consistent read gets                                        1
    immediate (CR) block cleanout applications                                   1
    commit txn count during cleanout                                             1
    cleanout - number of ktugct calls                                            1
    Commit SCN cached                                                            1
    redo entries                                                                 1
    redo size                                                                   80
    

    On the first block visited, Oracle made a buffer visit to derive the commit SCN (ktugct - get the commit time). This cleans out the block and caches the commit SCN it acquired. The remaining blocks visited in the same session need not be cleaned out, because the session can use the cached commit SCN to avoid the cleanout operation. Eventually all the blocks will have been cleaned out (meaning the cleaned versions get written to disk) and the extra CR stops happening.

    There is a little quirk - the commit cleanout seems to apply to the block format calls - and I do not understand why it did not happen for each row inserted thereafter.

    Regards
    Jonathan Lewis

  • How to select a nested group in PS?

    When I use Auto-Select: Group, it selects only the outermost group containing the layer I clicked; but I use a lot of groups nested inside other groups. Is there a way to switch to selecting higher or lower level groups?

    Selecting the lower-level group by clicking in the Layers panel is tedious, because when I click on a layer with Auto-Select set to 'Group', it automatically expands the groups containing the layer and I have to scroll up to find the layer's group.

    Are you aware of the pop-up you get when you Ctrl-click with the Move tool?

  • multiple to single table replication

    Hello

    Is there a way we can replicate multiple tables at the source into a single table at the target?

    Example: join tables A and B at the source and put the result into table C.

    Thank you!

    Yes, it is possible. OGG can be used to capture the data from the source tables and send it into a single target table. But keep in mind that if you are merging data, PK conflicts should be avoided; otherwise you may get data problems.
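    In the Replicat parameter file, that is simply two MAP statements pointing at the same target (the schema and table names here are illustrative):

    ```
    MAP src.a, TARGET tgt.c;
    MAP src.b, TARGET tgt.c;
    ```

    Note this interleaves rows from both sources into C; an actual join of A and B would have to be handled another way (for example with SQLEXEC lookups, or a view on the source side).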

    -Koko

  • Shared services (group nesting)

    Hello

    If group A is added (or nested) as a member of group B, does the provisioning of group B also apply to group A? Or is group A provisioned separately?

    When I right-click on a group and go to the Group Members tab, the group names I see there (Group1, Group2, etc.) are the nested/child groups, and GroupA is the parent group. Please correct me if I'm not right.

    That is right. Happy, I was able to help.

    See you soon,
    Mehmet

  • MERGE on a single table

    Hi all

    I am using the MERGE command on a single table. I want to check some values in the table: if they already exist, I just update; otherwise, I want to insert.

    For this I use the following code:



    MERGE INTO my_table OLD_VAL
    USING (SELECT L_field1, L_field2, L_field3, L_field4 FROM DUAL) NEW_VAL
    ON (    OLD_VAL.field1 = NEW_VAL.L_field1
        AND OLD_VAL.field2 = NEW_VAL.L_field2
        AND OLD_VAL.field3 = NEW_VAL.L_field3
       )
    WHEN MATCHED THEN
      UPDATE SET OLD_VAL.field4 = NEW_VAL.L_field4
    WHEN NOT MATCHED THEN
      INSERT (field1, field2, field3, field4, field5)
      VALUES (NEW_VAL.L_field1, NEW_VAL.L_field2, NEW_VAL.L_field3, NEW_VAL.L_field4, SYSDATE);

    The fields starting with L_ are my local variables inside my procedure.

    It is giving the error ORA-00904: "NEW_VAL"."L_FIELD3": invalid identifier

    Thank you all.

    SELECT L_field1, L_field2, L_field3, L_field4 FROM DUAL

    1. Are you selecting all the values here?
    2. Try giving an alias to each column.
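    In other words, give the columns in the USING subquery proper aliases and reference those aliases; a sketch along those lines (the f1-f4 aliases are illustrative):

    ```sql
    MERGE INTO my_table old_val
    USING (SELECT l_field1 f1, l_field2 f2, l_field3 f3, l_field4 f4
           FROM dual) new_val
    ON (    old_val.field1 = new_val.f1
        AND old_val.field2 = new_val.f2
        AND old_val.field3 = new_val.f3)
    WHEN MATCHED THEN
      UPDATE SET old_val.field4 = new_val.f4
    WHEN NOT MATCHED THEN
      INSERT (field1, field2, field3, field4, field5)
      VALUES (new_val.f1, new_val.f2, new_val.f3, new_val.f4, SYSDATE);
    ```

    The subquery's column names come from the selected expressions, not from your PL/SQL variable names, which is why NEW_VAL.L_FIELD3 was an invalid identifier.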

  • Import failed for a single table...

    Hello
    On the production server, I have a database user, let's call it PRD_USER, with default tablespace PRD_TBL.
    On the development server, I have a database user, also called PRD_USER, with default tablespace DEV_TBL.
    On the production server, I use the imp utility to import like this:
    imp system/manager fromuser=PRD_USER touser=PRD_USER ignore=y file='...' log='...'
    The import succeeds for about 25 tables with their indexes and constraints, but it fails for a single table with the error {I don't remember the ORA- number and I do not have access currently}: tablespace DEV_TBL does not exist.

    Of course this tablespace does not exist in the production environment. But how does this problem arise, given that the default tablespace for the user is not DEV_TBL but PRD_TBL...?

    Do you have any idea what the cause can be and how I can overcome this problem when importing...? {Note: I applied a temporary workaround... I took the table creation SQL script and left out the reference to the "DEV_TBL" tablespace}.

    Both servers run exactly the same DB version...
    Note: I use DB 10g Release 2

    Thank you
    SIM

    If the table has partitions, import tries to create the partitions (in the CREATE TABLE statement) in the original tablespace.

    Or, if there is a LOB segment in the table, import tries to create it in the original tablespace.

    Hemant K Collette

  • Calculation problem with a fact table measure used in a bridge table model

    Hi all

    I am having problems with the calculation of a fact table measure since I used it as part of a calculation in a bridge table relationship.

    In a fact table, PROJECT_FACT, I have a column (PROJECT_COST) whose default aggregation is SUM. Whenever PROJECT_COST was used with any dimension, the appropriate aggregation was done at the appropriate levels. But not any more. One of the dimensions related to PROJECT_FACT is called PROJECT.

    PROJECT_FACT contains one row per employee per day worked on a project. So for one day, employee Joe could have a PROJECT_COST of $80 for project 123; the next day, Joe might have a $40 PROJECT_COST for the same project.

    Dimension table, PROJECT, contains details of the project.

    A new feature has been added to the software - several customers can now be billed for a PROJECT, whereas before only a single client was billed.
    The percentage breakdown of the billing is in a new table - PROJECT_BRIDGE. PROJECT_BRIDGE has PROJECT, CUSTOMER_ID and BILL_PCT. The BILL_PCT values always add up to 1.

    Thus, the bridge table might look like...

    PROJECT  CUSTOMER_ID  BILL_PCT
    123      100          .20
    123      200          .30
    123      300          .50
    456      400          1.00
    678      400          1.00

    where project 123 is broken down across multiple clients (.20, .30, .50).

    Let's say that in PROJECT_FACT, if you sum up all the PROJECT_COST rows for project = 123, you get $1000.


    Here are the steps I followed:

    - In the physical layer, PROJECT_FACT has a 1:M to PROJECT_BRIDGE and PROJECT has a 1:M to PROJECT_BRIDGE:
    PROJECT_FACT => PROJECT_BRIDGE <= PROJECT

    -In the logical layer, PROJECT has a 1:M with PROJECT_FACT.
    PROJECT = > PROJECT_FACT

    - The logical fact table source is mapped to the bridge table, PROJECT_BRIDGE, so it now has several physical tables mapped (PROJECT_FACT & PROJECT_BRIDGE). They are defined with an INNER join.
    - I created a calculated measure, MULT_CUST_COST, using the physical columns, which computes the sum of PROJECT_COST times the percentage in the bridge table. It looks like: SUM(PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT)
    - I put MULT_CUST_COST in the presentation layer.

    We still want the old PROJECT_COST around until the new one is phased in, so it is in the presentation layer as well.


    Then I ran a query with just PROJECT, MULT_CUST_COST (the new calculation) and PROJECT_COST (the original). I expected:

    PROJECT  MULT_CUST_COST  PROJECT_COST
    123      $1000           $1000

    I am getting that for MULT_CUST_COST; however, PROJECT_COST comes back at triple the value (perhaps because there are 3 percentage rows?)...

    PROJECT  MULT_CUST_COST   PROJECT_COST
    123      $1000 (correct)  $3000 (incorrect, it has been tripled)

    If I look at the generated SQL, it is essentially:

    SELECT SUM(PROJECT_COST),
           SUM(PROJECT_FACT.PROJECT_COST * PROJECT_BRIDGE.BILL_PCT),
           PROJECT
    FROM ...
    GROUP BY PROJECT


    PROJECT_COST used to aggregate correctly before the bridge table modeling.
    Any ideas on what I got wrong?

    Thank you!

    Hello

    Phew, what a long question!

    If I understand correctly, I think the problem is with your old cost measure, or rather with combining it with the new one in the same request. If you think about it, your request as explained above will bring back 3 rows from the database, and that is why your old cost measure is multiplied. I think that if you took it out of the query, your bridge table would work properly for the new measure alone.

    I would consider migrating your historical data into the bridge table model so that you have only one type of query. Each historical project would get a single row in the bridge with a BILL_PCT of 1.0.
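    A sketch of that backfill, assuming the legacy single customer is still recorded on the PROJECT dimension (the column names are guesses from the thread):

    ```sql
    -- Give each project that has no bridge rows a single row at 100%
    INSERT INTO project_bridge (project, customer_id, bill_pct)
    SELECT p.project, p.customer_id, 1.0
    FROM   project p
    WHERE  NOT EXISTS (SELECT 1
                       FROM   project_bridge b
                       WHERE  b.project = p.project);
    ```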

    Good luck

    Paul
    http://total-bi.com

  • Header/footer columns with the table template transformation

    Hello @all,
    I'm new to this forum and just starting to use Oracle SQL Data Modeler 3.1.0.700.

    I found the "table template" transformation. I use this transformation to model my header and footer columns. How can I create the columns for my header at the first position of my table? Is there another way to create a header or footer?

    I hope you understand my question.

    Thank you for the help,

    Max

    You can use

    table.moveToIndex(column,index);
    

    to move the column that you created to the desired position - the following moves the column to the beginning of the column list:

    table.moveToIndex(column,0);
    

    Philippe

  • Multiple values in a single Table cell

    Hello

    I have a customer requirement: I need to show multiple values within a single table cell.

    Example:

    City       Shop Location
    NorthCity  ASH-200, ASH-210, ASH-310
    SouthCity  BSH-100, BSH-341
    EastCity   CSH-20

    But my table shows repeating cells as follows:

    City       Shop Location
    NorthCity  ASH-200
    NorthCity  ASH-210
    NorthCity  ASH-310
    SouthCity  BSH-100
    SouthCity  BSH-341
    EastCity   CSH-20

    So I need your help to show the repeated STORE names in a single table cell.

    Thank you

    Try this

    EVALUATE_AGGR('LISTAGG(%1, %2) WITHIN GROUP (ORDER BY %3 DESC)', TableName.ColumnName, ',', TableName.ColumnName)
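    If you can change the underlying query instead, plain LISTAGG gives the same result (the table and column names here are assumed):

    ```sql
    SELECT city,
           LISTAGG(shop, ', ') WITHIN GROUP (ORDER BY shop) AS shops
    FROM   shop_locations
    GROUP  BY city;
    ```

    LISTAGG collapses the repeating shop rows into one comma-separated value per city, so the report table gets a single cell per city.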
