Grouping data with a dynamic number of levels

Is it possible to do something like:

<cfloop ...>
<cfoutput group="...">
...
</cfloop>

<cfoutput>
...
</cfoutput>

<cfloop ...>
</cfoutput>
</cfloop>

I tried, but it doesn't seem to work. I would like to consolidate data several levels deep, with a dynamic number of levels that I don't know in advance. Is this possible?

Thank you

Roman

Hello

You cannot interleave the <cfloop> and <cfoutput> tags like that; it will throw an error...

Why don't you do the grouping at the query level?...
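
For what it's worth, nested <cfoutput group="..."> only works when the query is sorted by every grouping column, outermost level first. A minimal sketch of query-level grouping (the table and column names here are hypothetical):

SELECT region, country, city, amount
  FROM sales
 ORDER BY region, country, city;   -- one ORDER BY column per <cfoutput group="..."> level

With a fixed ORDER BY like this, each nested <cfoutput group="..."> consumes one column; for a truly dynamic depth it is usually easier to aggregate in SQL (GROUP BY) before the page renders.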

Tags: ColdFusion

Similar Questions

  • What is an effective way to logarithmically bin data with a constant number of points per decade?

    Hi all

    I would like to clean up a logarithmic PSD trace by binning and averaging so that I have a constant number of points per decade (say 10, just for the sake of argument). The simplest and cleanest way I can think of to get this is to search the array for all points between frequencies A and B, average those points, and assign the new averaged bin the frequency (A + B) / 2. However, I cannot find how to access the frequency information I need to achieve this. To be more clear, I can imagine that if I had two arrays, one holding the frequencies calculated from my incoming data stream and the other holding the amplitude at each corresponding frequency, then I could look up the indices in the frequency array with values between A and B, average the values in the amplitude array lying between the returned indices, and put them in a new array with a new array of corresponding frequencies. The process is a little more general than just averaging every ten points, say, since the number of points per decade keeps growing. My main obstacle at the moment, however, is that the voltage amplitudes are an array of values that I receive from the PSD operation, while the frequency part of the waveform seems to be a single-valued, continuous DBL. I hope I've explained that well enough for someone to shed some light on my problem. Also, if anyone has a suggestion for a better way to approach the problem, please let me know - there must be a pretty simple answer here, but it's eluding me right now. Thanks in advance for the help.

    -Alex

    Hello

    If I get you right, you have:

    an array with the frequencies

    an array with the corresponding amplitude values

    Then you want to merge parts of the data by averaging over specific frequency ranges. I think there is no single-VI solution; you will need to write this on your own:

    I'd start by getting the min/max of the frequencies and then interpolating a scale to your needs (e.g. logarithmic) with the number of bins you want. This should again be an array.

    The next step is to loop over the frequency array and find (the first and) the last value inside each wanted range (stop the loop, return the index). You should end up with an array of indices. [I guess this is where you can save the most computation time by being smart]

    Finally, use these indices to loop over the amplitude values and take your averages. This should return an array with the length of your array of ranges.

    Dress it up in fancy colors and enjoy.

    Is that what you intend to do?
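
    For what it's worth, the bin-and-average procedure outlined above can also be sketched in SQL (the PSD_DATA table, its FREQ/AMPL columns, and the frequency range and bin count below are all hypothetical):

    -- Bucket the frequencies on a log scale, then average the amplitudes per bucket.
    SELECT bucket,
           POWER(10, (MIN(lg) + MAX(lg)) / 2) AS bin_freq,   -- representative centre frequency
           AVG(ampl)                          AS bin_ampl
      FROM (SELECT LOG(10, freq) AS lg,
                   ampl,
                   WIDTH_BUCKET(LOG(10, freq), LOG(10, 1), LOG(10, 100000), 50) AS bucket
              FROM psd_data)
     GROUP BY bucket
     ORDER BY bucket;

    The same three steps apply in LabVIEW: build the bin edges, find the index range per bin, then average the amplitudes over each range.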

  • Grouping data by the name of a derived field

    Hello..

    As per my requirement, I want to group the table by 'Date', but in the provided XML there is no separate 'Date' column. I have derived it from a timestamp column, StartTime. So I used xdoxslt:left(StartTime,10), which gives me the date. But now I want to do the grouping using this value. I tried <?for-each-group:...;./xdoxslt:left(StartTime,10)?> <?sort:current-group()/StartTime;'ascending';data-type="text"?> but it gives the error: 'Error in expression: "./xdoxslt:left(StartTime,10)"'.

    Please can someone help me with this? If anyone has any info about it, please share.

    Thanks in advance

    Try it this way:

    <?for-each-group:...;xdoxslt:left(./StartTime,10)?> <?sort:xdoxslt:left(current-group()/StartTime,10);'ascending';data-type="date"?>

  • Problem grouping data with MAX and MIN dates

    Ladies and gentlemen,

    I have a problem that I have attacked from different angles. It's probably very easy for some of you, or some of you may have met it before, so any help on this is greatly appreciated. I will describe it below.

    I have the following data:

    Site     User      Station    Code          Dstamp                                        Ref     Qty
    -------- --------- ---------- ------------- --------------------------------------------- ------- -------
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.43.06.566193000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.49.31.364224000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.49.47.413252000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.51.48.906793000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.51.56.947312000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.54.29.396052000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.54.37.444307000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.57.00.237546000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.57.04.285148000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.59.24.745162000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 13.59.44.774318000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 14.01.22.434940000 ref_1 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 14.01.51.291059000 ref_1 1125
    Site_1 user_1 RPT104 Activity_2 16 May 11 14.05.23.572211000 ref_2 1125
    Site_1 user_1 RPT104 Activity_1 16 May 11 14.06.01.058978000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.41.341612000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.375972000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.388699000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.401287000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.413361000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.425675000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.437360000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.449079000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.460697000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.472606000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.484031000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.495551000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.513645000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 16 May 11 14.06.49.530405000 ref_1 1125


    and I'm looking for it in this format:


    Site     User      Station    Code          Start                                         End                                           Ref     Qty
    -------- --------- ---------- ------------- --------------------------------------------- --------------------------------------------- ------- -------
    Site_1   user_1    RPT104     Activity_1    16 May 11 13.43.06.566193000                  16 May 11 14.05.23.572211000                  ref_1   1125
    Site_1   user_1    RPT104     Activity_2    16 May 11 14.05.23.572211000                  16 May 11 14.06.01.058978000                  ref_2   1125
    Site_1   user_1    RPT104     Activity_1    16 May 11 14.06.01.058978000                  16 May 11 14.06.41.341612000                  ref_1   1125
    Site_1   user_1    RPT104     Activity_3    16 May 11 14.06.41.341612000                  16 May 11 14.06.49.530405000 + 4 secs         ref_1   1125
                                                                                              (i.e. 16 May 11 14.06.53.530405000)


    I can get the start and end times without a problem by using the initial data twice, offset by a rownum, but using the MAX and MIN functions on that data I get:

    Site     User      Station    Code          Start                                         End                                           Ref     Qty
    -------- --------- ---------- ------------- --------------------------------------------- --------------------------------------------- ------- -------
    Site_1   user_1    RPT104     Activity_1    16 May 11 13.43.06.566193000                  *16 May 11 14.06.41.341612000*                ref_1   1125
    Site_1   user_1    RPT104     Activity_2    16 May 11 14.05.23.572211000                  16 May 11 14.06.01.058978000                  ref_2   1125
    Site_1   user_1    RPT104     Activity_3    *16 May 11 14.06.41.341612000*                16 May 11 14.06.49.530405000 + 4 secs         ref_1   1125
                                                                                              (i.e. 16 May 11 14.06.53.530405000)

    which is missing the 3rd row of the previous dataset and assigns the wrong end times (the wrong values are marked with asterisks above - bold, if the formatting survives).

    I think the solution may have something to do with the DENSE_RANK() function (with some ORDER BY code, start), but I'm not too familiar with it, and I suspect the fact that the data in the Start column is unique affects how it works.

    If anyone can offer any help or point me in the right direction, I'll offer eternal gratitude and owe you a drink should we ever meet!

    Cheers

    Published by: MickyMick on June 7, 2011 03:21

    BobLilly wrote:
    Aketi's Tabibitosan method can be applied here (see thread 1005478)

    Site_1 user_1 RPT104 Activity_1 2011-05-16 13.43.06.566193000 2011-05-16 14.05.23.572211000 ref_1 1125
    Site_1 user_1 RPT104 Activity_2 2011-05-16 14.05.23.572211000 2011-05-16 14.06.01.058978000 ref_2 1125
    Site_1 user_1 RPT104 Activity_1 2011-05-16 14.06.01.058978000 2011-05-16 14.06.41.341612000 ref_1 1125
    Site_1 user_1 RPT104 Activity_3 2011-05-16 14.06.41.341612000 2011-05-16 14.06.45.341612000 ref_1 1125

    According to the OP we want 16 May 11 14.06.49.530405000 + 4 secs. In any case, using the start_of_group method:

    With t as (
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.43.06.566193000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.49.31.364224000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.49.47.413252000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.51.48.906793000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.51.56.947312000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.54.29.396052000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.54.37.444307000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.57.00.237546000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.57.04.285148000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.59.24.745162000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 13.59.44.774318000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 14.01.22.434940000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 14.01.51.291059000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_2' as Code
    , to_timestamp('16-MAY-11 14.05.23.572211000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_2' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_1' as Code
    , to_timestamp('16-MAY-11 14.06.01.058978000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.41.341612000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.375972000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.388699000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.401287000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.413361000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.425675000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.437360000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.449079000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.460697000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.472606000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.484031000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.495551000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.513645000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual union all
    Select 'Site_1' as Site, 'user_1' as Usr, 'RPT104' as Station, 'Activity_3' as Code
    , to_timestamp('16-MAY-11 14.06.49.530405000', 'DD-MON-RR HH24:MI:SS.FF') as DTStamp, 'ref_1' as Ref, 1125 as Qty from dual
    ),
    t1 as (
           select  t.*,
                   lead(DTStamp,1,DTStamp + interval '4' second) over(order by DTStamp) ENDTS,
                   case
                     when     lag(Site) over(order by DTStamp)  = Site
                          and
                              lag(Usr) over(order by DTStamp)  = Usr
                          and
                              lag(Station) over(order by DTStamp)  = Station
                          and
                              lag(Code) over(order by DTStamp)  = Code
                          and
                              lag(Ref) over(order by DTStamp)  = Ref
                          and
                              lag(Qty) over(order by DTStamp)  = Qty
                       then 0
                     else 1
                   end start_of_group
             from  t
          ),
    t2 as (
           select  t1.*,
                   sum(start_of_group) over(order by DTStamp) grp
             from  t1
          )
    select  Site,
            Usr,
            Station,
            Code,
            min(DTStamp) STARTTS,
            max(ENDTS) ENDTS,
            Ref,
            Qty
      from  t2
      group by grp,
               Site,
               Usr,
               Station,
               Code,
               Ref,
               Qty
      order by STARTTS
    /
    
    SITE   USR    STATIO CODE       STARTTS                             ENDTS                               REF          QTY
    ------ ------ ------ ---------- ----------------------------------- ----------------------------------- ----- ----------
    Site_1 user_1 RPT104 Activity_1 16-MAY-11 01.43.06.566193000 PM     16-MAY-11 02.05.23.572211000 PM     ref_1       1125
    Site_1 user_1 RPT104 Activity_2 16-MAY-11 02.05.23.572211000 PM     16-MAY-11 02.06.01.058978000 PM     ref_2       1125
    Site_1 user_1 RPT104 Activity_1 16-MAY-11 02.06.01.058978000 PM     16-MAY-11 02.06.41.341612000 PM     ref_1       1125
    Site_1 user_1 RPT104 Activity_3 16-MAY-11 02.06.41.341612000 PM     16-MAY-11 02.06.53.530405000 PM     ref_1       1125
    
    SQL> 
    

    SY.

  • Dynamic number of columns in the table

    Hello

    I'm using JDev 10.1.3.3.0 with ADF. I just want to create a table with a dynamic number of columns. The background is that a user of my web application can create and submit a SQL query, and I have to show him the results. My idea was to save the result in a managed bean (ResultTable), which is stored in the session context and mapped to the table in my page.

    I searched the forum and found only one useful thread (thread 971888), but I don't understand it exactly. What is the CollectionModel? Do I need it?

    Here is what I have so far:

    ResultTable
    public class ResultTable {
    
        public static final String SESSION_NAME = "ResultTable";
        private ArrayList<ResultColumn> columnList; 
        private CollectionModel collectionModel;
    
        public ResultTable() {
        }
    
        public ArrayList<ResultColumn> getColumnList() {
            return columnList;
        }
    
        public void setColumnList(ArrayList<ResultColumn> columnList) {
            this.columnList = columnList;
        }
    }
    ResultColumn
    public class ResultColumn {
        
        private ArrayList<ResultRow> rowList;
        private String name;
    
        public ResultColumn() {
        }
    
        public ArrayList<ResultRow> getRowList() {
            return rowList;
        }
    
        public void setRowList(ArrayList<ResultRow> rowList) {
            this.rowList = rowList;
        }
    
        public String getName() {
            return name;
        }
    
        public void setName(String name) {
            this.name = name;
        }
    }
    ResultRow
    public class ResultRow {
        
        private String value;
    
        public ResultRow() {
        }
    
        public String getValue() {
            return value;
        }
    
        public void setValue(String value) {
            this.value = value;
        }
    }
    My showResult.jspx
    <af:table emptyText="No items were found"
              value="ResultTable.columnList"
              var="column"
              varStatus="colStatus"
              id="table1">
      <af:forEach items="#{column.rowList}" var="row" varStatus="rowStatus">
        <af:column sortable="false" headerText="#{column.name}" 
                   id="column#{colStatus.index}">
          <af:outputText value="#{row.value}"
                         id="outputText#{rowStatus.index}"/>
        </af:column>
      </af:forEach>
    </af:table>
    The ResultTable was filled with data, but the table is not populated. So I think it must be related to the data binding.

    I also get warnings and errors at runtime, but I don't know whether they are the result or the cause of my problem.
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.renderkit.core.xhtml.TableRenderer renderTableWithoutColumns
    WARNUNG: Table with id: form1:table1 has no visible columns!
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.io.HtmlResponseWriter endElement
    SCHWERWIEGEND: Element End name:span does not match start name:div
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.io.HtmlResponseWriter endElement
    SCHWERWIEGEND: Element End name:span does not match start name:div
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.io.HtmlResponseWriter endElement
    SCHWERWIEGEND: Element End name:form does not match start name:span
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.io.HtmlResponseWriter endElement
    SCHWERWIEGEND: Element End name:body does not match start name:form
    27.10.2009 10:15:41 oracle.adfinternal.view.faces.io.HtmlResponseWriter endElement
    SCHWERWIEGEND: Element End name:html does not match start name:body
    Concerning

    Majo

    Hi Majo,
    Note that your JSPX snippet above has serious shortcomings:

  • 'ResultTable.columnList' is not an EL expression, but the value attribute of af:table must refer to an EL expression
  • af:forEach items="#{column.rowList}": you shouldn't store the rows of every column this way; moreover, it won't work, as the af:forEach tag cannot evaluate an EL expression created by a component, such as #{column}. See the tagdoc here: http://www.oracle.com/technology/products/jdev/htdocs/partners/addins/exchange/jsf/doc/tagdoc/core/forEach.html
  • id="column#{colStatus.index}" and id="outputText#{rowStatus.index}" are invalid and won't even compile, as id attributes cannot contain EL expressions.


    I think that to solve your problem you need three things:

  • a list of columns (e.g. a List<String> if you need to store only the column names, or a List<ResultColumn> if you need additional information),
  • a list of rows,
  • a row, which can be a map (with a column-name-to-cell-value mapping, e.g. Map<String, String>) or a list (with the columns indexed, e.g. List<String>).

    Example with map-based rows:

    JSPX snippet:

    <af:table emptyText="No items were found"
              value="#{ResultTable.rowList}"
              var="row"
              id="table1">
      <af:forEach items="#{ResultTable.columnList}" var="col">
        <af:column sortable="false" headerText="#{col.name}">
          <af:outputText value="#{row[col.name]}"/>
        </af:column>
      </af:forEach>
    </af:table>

    The ResultTable bean:

    public class ResultTable {
    
        private List<ResultColumn> columnList;
        private List<Map<String, String>> rowList; 
    
        public ResultTable() {
        }
    
        public List<ResultColumn> getColumnList() {
            return columnList;
        }
    
        public void setColumnList(List<ResultColumn> columnList) {
            this.columnList = columnList;
        }
    
        public List<Map<String, String>> getRowList() {
            return rowList;
        }
    
        public void setRowList(List<Map<String, String>> rowList) {
            this.rowList = rowList;
        }
    
    }
    

    Type ResultColumn:

    public class ResultColumn {
    
        // additional fields if needed...
        private String name;
    
        public ResultColumn() {
        }
    
        public String getName() {
            return name;
        }
    
        public void setName(String name) {
            this.name = name;
        }
    }
    

    The af:table will display correctly once the values in your ResultTable bean are properly initialized (e.g. the rowList filled with the rows).

    Hope this helps,
    Patrik

  • How to create a personalized stamp with a dynamic date

    I tried to create a customized stamp with a dynamic date, but the date is not dynamic. Help, please.

    There are a number of things that need to be properly configured for a stamp to work as a dynamic stamp, from the template name to the calculation script. If you provide the details of how you set up the PDF stamp, that would be helpful, as would the following book, which has all the information you need: www.amazon.com/About-Stamps-Acrobat-Paperless-Workflows/dp/0985614706/

  • Dynamic action for date validation with the notification message plugin

    Hi all

    Could someone please help me with a dynamic action for date validation with the notification message plugin? I have a form with two date picker items and the notification message plugin.

    The requirement: the user first selects the date the exam was completed and then selects another date. If the selected date is more than 2 years after the exam-completed date, the notification message plugin should fire. I tried creating a dynamic action on the date picker that triggers the notification message, but I want to make it conditional; I mean, display the message only if the selected date is greater than the exam-completed date plus 2 years.

    In simple terms, the notification is displayed only if the supplied date is greater than (exam-completed date + 2 years).

    I'm using Oracle APEX version 4.0 and an Oracle 10g R2 database. I've reproduced the requirement in my personal workspace. Here are the details; please take a look.

    Workspace: raghu_workspace

    username: orton607

    password: orton607

    APP # 72193

    PG # 1

    Any help is appreciated.

    Thanks in advance.

    Orton.

    You can get the value of a date picker item with:

    $(ele).datepicker('getDate');

    Then add a function such as:

    function validateNotification(d1, d2) {
      var date1 = $('#' + d1).datepicker('getDate');
      var date2 = $('#' + d2).datepicker('getDate');
      if (date1 && date2) {
        // difference in days must exceed two 365-day years
        return ((date2.getTime() - date1.getTime()) / (1000 * 60 * 60 * 24)) > (365 * 2);
      } else {
        return false;
      }
    }

    Adjust the logic to your needs (I counted the two years as two 365-day years).

    Then, in the dynamic action, specify a JavaScript expression such as:

    validateNotification ('P2_REVIEW_COMPLETED', this.triggeringElement.id)

    Refer to page 2 for example.

  • How to export data to Excel from 2 tables with the same number of columns and the same column names?

    Hi everyone, once again I've ended up with a problem.

    After trying many things myself, I finally decided to post here...

    I created a form in Form Builder 6i in which, on clicking a button, the data gets exported to an Excel sheet.

    It works very well with a single table. The problem now is that I cannot do the same with 2 tables, because the tables have the same number of columns and the same column names.

    Here are the 2 tables with column names:

    Table-1 (MONTHLY_PART_1)    Table-2 (MONTHLY_PART_2)
    SL_NO                       SL_NO
    MODEL                       MODEL
    END_DATE                    END_DATE
    U-1                         U-1
    U-2                         U-2
    U-4                         U-4
    ...                         ...
    ...                         ...
    U-20                        U-20
    U-25                        U-25

    Given that the tables have the same column names, I get the following error:

    Error 402 at line 103, column 4:
    aliases required in the SELECT list of the cursor to avoid duplicate column names.

    So how do I export to Excel data from 2 tables with the same number of columns and the same column names?

    Should I paste the code? Should I post this question in the 'SQL and PL/SQL' forum?

    Help me with this please.

    Thank you.

    Wait a second... is this some kind of home-grown partitioning? Shouldn't this be a union of the two tables instead of a join?
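
    A minimal sketch of both routes, using the SL_NO / MODEL / END_DATE columns from the post (the U-columns are omitted and the join key is a guess):

    -- Option 1: alias the duplicate columns in the cursor's SELECT list
    SELECT p1.sl_no AS sl_no_1, p2.sl_no AS sl_no_2,
           p1.model AS model_1, p2.model AS model_2
      FROM monthly_part_1 p1, monthly_part_2 p2
     WHERE p1.sl_no = p2.sl_no;

    -- Option 2: if the two tables are halves of one data set, union them
    SELECT sl_no, model, end_date FROM monthly_part_1
    UNION ALL
    SELECT sl_no, model, end_date FROM monthly_part_2;

    With the union, no aliases are needed, because each column name appears only once in the SELECT list.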

    Cheers

  • my up-to-date iMac can't read the Lightroom 5 installation DVD I received for Christmas; the DVD drive works fine with other DVDs. Is it possible to download the program and use the serial number from the DVD?

    my up-to-date iMac can't read the Lightroom 5 installation DVD I received for Christmas; the DVD drive works fine with other DVDs. Is it possible to download the program and use the serial number from the DVD?

    No need to worry about the disc version; it would just install an update anyway once installed. Download from the link provided below and use the serial number when you need to activate.

    Updates

  • Group records with time

    Hi all

    This is our requirement.

    We need to combine records by time period.

    for example: period = 3
    TABLE: XX_SALES
    ---------------------------------------------
    XDATE          XQTY
    ---------------------------------------------
    5/1/2012       10
    5/2/2012       20
    5/3/2012       30
    5/4/2012       60
    5/7/2012       12
    5/8/2012       23
    5/8/2012       45
    5/12/2012      100
    5/13/2012      55
    5/15/2012      99

    ==>
    ---------------------------------------------
    XDATE          XQTY
    ---------------------------------------------
    5/1/2012       10    -> Group 5/1/2012  (5/1/2012 ~ 5/3/2012)
    5/2/2012       20    -> Group 5/1/2012  (5/1/2012 ~ 5/3/2012)
    5/3/2012       30    -> Group 5/1/2012  (5/1/2012 ~ 5/3/2012)
    5/4/2012       60    -> Group 5/4/2012  (5/4/2012 ~ 5/6/2012)
    5/7/2012       12    -> Group 5/7/2012  (5/7/2012 ~ 5/9/2012)
    5/8/2012       23    -> Group 5/7/2012  (5/7/2012 ~ 5/9/2012)
    5/8/2012       45    -> Group 5/7/2012  (5/7/2012 ~ 5/9/2012)
    5/12/2012      100   -> Group 5/12/2012 (5/12/2012 ~ 5/14/2012)
    5/13/2012      55    -> Group 5/12/2012 (5/12/2012 ~ 5/14/2012)
    5/15/2012      99    -> Group 5/15/2012 (5/15/2012 ~ 5/17/2012)

    After combining amounts with period = 3, the result will be:
    ---------------------------------------------
    XDATE_G        XQTY_G
    ---------------------------------------------
    5/1/2012       60
    5/4/2012       60
    5/7/2012       80
    5/12/2012      155
    5/15/2012      99


    Here's the example script
     
    create table XX_SALES(XDATE DATE, XQTY Number);
    insert into XX_SALES VALUES(to_date('20120501','YYYYMMDD'),10);
    insert into XX_SALES VALUES(to_date('20120502','YYYYMMDD'),20);
    insert into XX_SALES VALUES(to_date('20120503','YYYYMMDD'),30);
    insert into XX_SALES VALUES(to_date('20120504','YYYYMMDD'),60);
    insert into XX_SALES VALUES(to_date('20120507','YYYYMMDD'),12);
    insert into XX_SALES VALUES(to_date('20120508','YYYYMMDD'),23);
    insert into XX_SALES VALUES(to_date('20120508','YYYYMMDD'),45);
    insert into XX_SALES VALUES(to_date('20120512','YYYYMMDD'),100);
    insert into XX_SALES VALUES(to_date('20120513','YYYYMMDD'),55);
    insert into XX_SALES VALUES(to_date('20120515','YYYYMMDD'),99);
     
    We can currently solve this by using a loop:
    find each XDATE_G and its range in the loop, and sum the XQTY over that range.
    DECLARE
      V_DATE_FROM DATE := NULL;
      V_DATE_TO   DATE := NULL;
      V_QTY_SUM   NUMBER := 0;
      CURSOR CUR_DATE IS
        SELECT DISTINCT XDATE FROM XX_SALES ORDER BY XDATE;
    BEGIN
      FOR REC IN CUR_DATE LOOP
        IF V_DATE_TO IS NULL OR REC.XDATE > V_DATE_TO THEN
          V_DATE_FROM := REC.XDATE;
          V_DATE_TO   := REC.XDATE + 3 - 1;
          SELECT SUM(XQTY)
            INTO V_QTY_SUM
            FROM XX_SALES
           WHERE XDATE >= V_DATE_FROM
             AND XDATE <= V_DATE_TO;
          DBMS_OUTPUT.PUT_LINE(TO_CHAR(V_DATE_FROM, 'YYYYMMDD') ||
                               '-----qty: ' || TO_CHAR(V_QTY_SUM));
        END IF;
      END LOOP;
    END;
    Is it possible to solve this problem using analytic SQL?


    Thanks in advance,
    Best regards
    Zhxiang

    Edited by: zhxiangxie on April 26, 2012 14:41 - fixed the expected grouping data

    There was an article about a similar problem in Oracle Magazine recently:

    http://www.Oracle.com/technetwork/issue-archive/2012/12-Mar/o22asktom-1518271.html

    See the section on 'grouping ranges'. They needed a running total that restarted once the total reached a certain amount.

    You need a running total that restarts when the date moves to a new group, and the start and end dates of each group must be determined dynamically.

    This can be done with analytic functions.

    Here is a solution based on 'code listing 5', the MODEL solution, which is the one recommended in the article.

    SELECT FIRST_DATE, SUM(XQTY) SUM_XQTY FROM (
      SELECT * FROM xx_sales
      MODEL DIMENSION BY(ROW_NUMBER() OVER(ORDER BY XDATE) RN)
      MEASURES(XDATE, XDATE FIRST_DATE, XQTY)
      RULES(
        FIRST_DATE[RN > 1] =
          CASE WHEN XDATE[CV()] - FIRST_DATE[CV() - 1] >= 3
          THEN xdate[cv()]
          ELSE FIRST_DATE[CV() - 1]
          END
      )
    )
    GROUP BY first_date ORDER BY first_date;
    
    FIRST_DATE            SUM_XQTY
    --------------------- --------
    2012/05/01 00:00:00         60
    2012/05/04 00:00:00         60
    2012/05/07 00:00:00         80
    2012/05/12 00:00:00        155
    2012/05/15 00:00:00         99
    

    If you are on 9i, there is no MODEL clause. In that case I can give you a solution using START WITH / CONNECT BY, but it does not work as well.

  • Calculate business hours between 2 dates with negative numbers

    Hi all

    I use a function to calculate the business-hours difference between 2 dates. The function was taken and adapted from this thread:

    Calculate business hours between two dates in a cursor

    CREATE OR REPLACE
      FUNCTION get_bus_minutes_between(
                                       p_start_date DATE,
                                       p_end_date DATE
                                      )
        RETURN NUMBER
        IS
            v_return NUMBER;
        BEGIN
            with t as (
                       select  case level
                                 when 1 then greatest(p_start_date,trunc(p_start_date) + 9 / 24)
                                 else trunc(p_start_date) + level - 15 / 24
                               end start_dt,
                               case connect_by_isleaf
                                 when 1 then least(p_end_date,trunc(p_end_date) + 20 / 24)
                                 else trunc(p_start_date) + level - 4 / 24
                               end end_dt
                         from  dual
                         connect by level <= trunc(p_end_date) - trunc(p_start_date) + 1
                      )
            select  sum(greatest(end_dt - start_dt,0)) * 24 * 60 work_minutes
              into  v_return
              from  t
              where trunc(start_dt) - trunc(start_dt,'iw') < 5; -- exclude weekends
            RETURN v_return;
        END;
        /

    I am running into an issue: when p_end_date is before p_start_date, the function returns the full time between the dates without regard for business hours. If the end date is after the start date, I get the expected result, i.e. the difference between the dates in business hours.

    Using example dates of 19 October 2012 13:00:00 (start) and 22 October 2012 13:21:00 (end), the business minutes are 681. However, if I reverse the dates I get -4341.


    The correct working code follows; I've annotated it so that it can help someone else looking for a solution like this:

    CREATE OR REPLACE
      FUNCTION get_bus_minutes_between(
                                       p_start_date DATE,  -- the 2 dates passed into the function
                                       p_end_date DATE
                                      )
        RETURN NUMBER
        IS
            v_return NUMBER;
            l_start  DATE;
            l_end    DATE;
            l_factor NUMBER;
        BEGIN
            IF p_start_date <= p_end_date THEN  -- this IF checks which of the 2 dates is the greater
               l_start  := p_start_date;        -- and assigns the earlier date to l_start
               l_end    := p_end_date;
               l_factor := 1;
            ELSE                                -- swap the dates around, and multiply the result by -1
               l_start  := p_end_date;
               l_end    := p_start_date;
               l_factor := -1;
            END IF;

            with t as (
                       select  case level
                                 when 1 then greatest(l_start,trunc(l_start) + 9 / 24)  -- this sets the start of working hours
                                 else trunc(l_start) + level - 15 / 24  -- if the start time is adjusted this value must also be adjusted
                               end start_dt,
                               case connect_by_isleaf
                                 when 1 then least(l_end,trunc(l_end) + 20 / 24)  -- this sets the end of working hours
                                 else trunc(l_start) + level - 4 / 24  -- if the end time is adjusted this value must also be adjusted
                               end end_dt
                         from  dual
                         connect by level <= trunc(l_end) - trunc(l_start) + 1
                      )
            select  sum(end_dt - start_dt) * 24 * 60 work_minutes  -- subtract the dates and convert days to minutes
              into  v_return
              from  t
              where trunc(start_dt) - trunc(start_dt,'iw') < 5; -- exclude weekends
            RETURN v_return * l_factor;  -- the sign shows whether it was completed before or after the required date
        END;
        /
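
    A quick sanity check of the fixed function against the dates from the post (an illustrative call, not part of the original thread; expected values taken from the example above):

    SELECT get_bus_minutes_between(
             TO_DATE('19-OCT-2012 13:00:00', 'DD-MON-YYYY HH24:MI:SS'),
             TO_DATE('22-OCT-2012 13:21:00', 'DD-MON-YYYY HH24:MI:SS')) AS fwd,  -- expect  681
           get_bus_minutes_between(
             TO_DATE('22-OCT-2012 13:21:00', 'DD-MON-YYYY HH24:MI:SS'),
             TO_DATE('19-OCT-2012 13:00:00', 'DD-MON-YYYY HH24:MI:SS')) AS rev   -- expect -681
      FROM dual;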

  • How to create dummy data with 1 TB

    Hi Expert,

    May I know how to create 1 TB of dummy data for performance testing?

    Regards,

    Liang

    Which columns do you want to include? Basically, you can generate as many rows as you like with CONNECT BY queries - something like:

    create table big_t
    as
    with
    generator1 as (
      select rownum id
        from dual
      connect by level <= 1000   -- adjust the level limit to scale the row count
    ),
    generator2 as (
      select lpad('*', 1000, '*') col1
        from dual
      connect by level <= 1000   -- adjust the level limit to scale the row count
    )
    select g1.id,
           g2.col1
      from generator1 g1,
           generator2 g2;

    Of course, this is not 1 TB, but you can change the number of levels. That said, I think you need to settle on a sensible table structure first.
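
    For a rough sense of scale (a back-of-the-envelope sketch: col1 above carries roughly 1 KB per row, and the cartesian join yields rows(generator1) x rows(generator2) rows, so about a billion 1 KB rows approaches 1 TB):

    -- e.g. 1e3 x 1e6 rows at ~1 KB each, expressed in decimal TB
    SELECT 1000 * 1000000 * 1024 / POWER(10, 12) AS approx_tb FROM dual;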

  • Error: Data rows with unmapped dimensions exist for period "1 April 2014"

    Hi Experts,

    I get the error below when I click the Execute button to load data in the Data Load area of the 11.1.2.3 workspace. I have already set up, in the Period Mapping section, the Global Mapping tab (added 12 monthly records), the Application Mapping tab (added 12 monthly records) and the Source Mapping tab (added one month, '1 April 2014', as the period name with Type = Explicit mapping). What else should I check to fix this? Thank you.

    2014-04-29 06:10:35, 624 [AIF] INFO: beginning of the process FDMEE, process ID: 56
    2014-04-29 06:10:35, 625 [AIF] INFO: recording of the FDMEE level: 4
    2014-04-29 06:10:35, 625 [AIF] INFO: FDMEE log file: null\outbox\logs\AAES_56.log
    2014-04-29 06:10:35, 625 [AIF] INFO: user: admin
    2014-04-29 06:10:35, 625 [AIF] INFO: place: AAESLocation (Partitionkey:2)
    2014-04-29 06:10:35, 626 [AIF] INFO: period name: Apr 1, 2014 (period key: 4/1/14-12:00 AM)
    2014-04-29 06:10:35, 627 [AIF] INFO: category name: AAESGCM (category key: 2)
    2014-04-29 06:10:35, 627 [AIF] INFO: name rule: AAESDLR (rule ID:7)
    2014-04-29 06:10:37, 504 [AIF] INFO: Jython Version: 2.5.1 (Release_2_5_1:6813, September 26 2009, 13:47:54)
    [JRockit (R) Oracle (Oracle Corporation)]
    2014-04-29 06:10:37, 504 [AIF] INFO: Java platform: java1.6.0_37
    2014-04-29 06:10:39, 364 INFO [AIF]: - START IMPORT STEP -
    2014-04-29 06:10:45, 727 INFO [AIF]:
    Import of Source data for the period "1 April 2014".
    2014-04-29 06:10:45, 742 INFO [AIF]:
    Import data from Source for the book "ABC_LEDGER".
    2014-04-29 06:10:45, 765 INFO [AIF]: monetary data lines imported from Source: 12
    2014-04-29 06:10:45, 783 [AIF] INFO: Total of lines of data from the Source: 12
    2014-04-29 06:10:46, 270 INFO [AIF]:
    Map data for period "1 April 2014".
    2014-04-29 06:10:46, 277 [AIF] INFO:
    Treatment of the column mappings 'ACCOUNT '.
    2014-04-29 06:10:46, 280 INFO [AIF]: data rows updated EXPLICIT mapping rule: 12
    2014-04-29 06:10:46, 280 INFO [AIF]:
    Treatment of the "ENTITY" column mappings
    2014-04-29 06:10:46, 281 [AIF] INFO: rows of data updates to EXPLICIT mapping rule: 12
    2014-04-29 06:10:46, 281 [AIF] INFO:
    Treatment of the column mappings "UD1.
    2014-04-29 06:10:46, 282 [AIF] INFO: rows of data updates to EXPLICIT mapping rule: 12
    2014-04-29 06:10:46, 282 [AIF] INFO:
    Treatment of the column mappings "node2".
    2014-04-29 06:10:46, 283 [AIF] INFO: rows of data updates to EXPLICIT mapping rule: 12
    2014-04-29 06:10:46, 312 [AIF] INFO:
    Scene for period data "1 April 2014".
    2014-04-29 06:10:46, 315 [AIF] INFO: number of deleted lines of TDATAMAPSEG: 171
    2014-04-29 06:10:46, 321 [AIF] INFO: number of lines inserted in TDATAMAPSEG: 171
    2014-04-29 06:10:46, INFO 324 [AIF]: number of deleted lines of TDATAMAP_T: 171
    2014-04-29 06:10:46, 325 [AIF] INFO: number of deleted lines of TDATASEG: 12
    2014-04-29 06:10:46, 331 [AIF] INFO: number of lines inserted in TDATASEG: 12
    2014-04-29 06:10:46, 332 [AIF] INFO: number of deleted lines of TDATASEG_T: 12
    2014-04-29 06:10:46, 366 [AIF] INFO: - END IMPORT STEP -
    2014-04-29 06:10:46, 408 [AIF] INFO: - START NEXT STEP -
    2014-04-29 06:10:46, 462 [AIF] INFO:
    Validate the data maps for the period "1 April 2014".
    2014-04-29 06:10:46, 473 INFO [AIF]: data rows marked as invalid: 12
    2014-04-29 06:10:46, ERROR 473 [AIF]: error: the lines of data with unmapped dimensions exist for period "1 April 2014".
    2014-04-29 06:10:46, 476 [AIF] INFO: Total lines of data available for export to the target: 0
    2014-04-29 06:10:46, 478 FATAL [AIF]: error in CommMap.validateData
    Traceback (most recent call last):
    Folder "< string >", line 2348 in validateData
    RuntimeError: [u "error: the lines of data with unmapped dimensions exist for period" 1 April 2014' ""]

    2014-04-29 06:10:46, 551 FATAL [AIF]: COMM error validating data
    2014-04-29 06:10:46, 556 INFO [AIF]: end process FDMEE, process ID: 56

    Thanks to all you guys

    This problem was solved after I mapped all the dimensions in the data load. At first I mapped only Entity, Account, Custom1 and Custom2, because there were no source mappings for Custom3, Custom4 and ICP. After adding the mappings for Custom3, Custom4 and ICP, the problem was resolved. So all dimensions must be mapped here.

  • Is there a way to get the list of hosts and the clusters they belong to at the vCenter folder level in vSphere 5.5 web client plugin development?

    Hello

    I need to get the list of all hosts and the clusters they belong to at the vCenter folder level.

    1. I created a view for the extension point vsphere.core.folder.monitorViews.

    2. After this step, I wrote the following constraint in my mediator class:

    var listConstraint:Constraint =
        QuerySpecUtil.createConstraintForRelationship(_contextObject, 'childEntity');

    I was expecting a list of all child entities, such as hosts, datacenters, clusters... but I got only the immediate child object, which is just the Datacenter, as my result.

    Is it possible to get all hosts and clusters at the vCenter folder level? I need the entire list at the vCenter (topmost) level.

    Other info:

    The Folder object has only two properties:

    1. childEntity - a list of child entities

    2. childType - the kinds of child the folder can hold ('VirtualMachine', 'Datacenter'...)

    Is it possible to write a constraint specifying which childEntities I need, using childType?

    Example: give me the childEntities that have a 'Host' or 'Cluster' childType - but childType doesn't actually contain these two types.

    In addition, at this level I can see the 'Related Objects' tab, which has all the information I need, such as clusters in the Clusters tab and hosts in the Hosts tab respectively.

    So I think it should be possible to get this list at the vCenter folder level.

    I have attached a screenshot illustrating the need. Kindly ignore the naming conventions in there, since I edited the example that comes with the SDK.


    Query:

    1. How can I get the list of hosts and clusters (a relationship array) at the vCenter folder level, or even at the vise.global.view level?

    2. Once I get this list, is it possible for me to manipulate it and send the new list to the UI?

    3. Is there another way to do the same thing without the help of the model and mediator classes?


    Pointers to this will be very useful.

    It is not possible to obtain all hosts of a specific vCenter folder from a single Data Manager query.  You need to get the list of datacenters first and then get each datacenter's list of hosts.

    It is best to make these repeated requests at the Java level and return only the list you want to the user interface.

    You can get all the HostSystem objects in the system with a simple query using a constraint with targetType = 'HostSystem', but you will need to eliminate those from other vCenter servers.  See how this chassis example queries all hosts at the Java level in the getHosts() method: samples/chassis-app/chassisRackVSphere-service/src/main/java/com/vmware/samples/chassisRackVSphere/ChassisRackVSphereDataAdapter.java

    Another option is to use the vSphere Web Services SDK to browse vCenter. See the vSphere management forum for help on those APIs.  See this sample plugin that uses this SDK:

    samples/vsphereviews/vsphere-wssdk-provider/src/main/java/com/vmware/samples/wssdkprovider/VmDataProviderImpl.java

  • Transferring data to a block with %ROWTYPE

    Hello

    I'm using Oracle Forms 11g (11.1.2.1.0 on 64-bit Windows).

    I have a data block with a lot of columns.

    I need to populate the data block in a WHEN-NEW-FORM-INSTANCE trigger with a query.

    My problem is that I need to fill the block via %ROWTYPE (and not each item separately), since I have a lot of items (= columns) in this data block.

    Here is my code (in a WHEN-NEW-FORM-INSTANCE trigger):

    DECLARE
      CURSOR pop_blck_cur IS
        SELECT *
          FROM alerts.crdx_alerts_process_types
         WHERE (process_type_id, status_date) IN
               (SELECT process_type_id, MAX(status_date)  -- the 'MOST UPDATED' row per type
                  FROM alerts.crdx_alerts_process_types
                 GROUP BY process_type_id)
         ORDER BY process_type_id;

      l_row       pop_blck_cur%ROWTYPE;
      l_first_rec NUMBER;
      l_last_rec  NUMBER;
      cur_rec     NUMBER;
    BEGIN
      LAST_RECORD;
      l_last_rec := :SYSTEM.TRIGGER_RECORD;
      FIRST_RECORD;
      l_first_rec := :SYSTEM.TRIGGER_RECORD;
      cur_rec := l_first_rec;

      FOR rec IN pop_blck_cur LOOP
        GO_BLOCK('ALERTS_PROCESS_TYPES_BLOCK');
        cur_rec := GET_BLOCK_PROPERTY('ALERTS_PROCESS_TYPES_BLOCK', CURRENT_RECORD);
        GO_RECORD(cur_rec);
        ALERTS_PROCESS_TYPES_BLOCK%ROWTYPE := l_row;  -- here I need the cursor row assigned to my data block
        EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
        NEXT_RECORD;
      END LOOP;
    END;

    How to achieve this?

    Thanks in advance,

    Elad

    This isn't the way Forms works.

    Base your block on your CRDX_ALERTS_PROCESS_TYPES table.

    Put your WHERE condition in the block's DEFAULT WHERE property.

    In WHEN-NEW-FORM-INSTANCE, just issue an EXECUTE_QUERY;

    If you want to fill the block "by hand", you will need to assign each item separately.
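
    With that approach, the WHEN-NEW-FORM-INSTANCE trigger shrinks to a few lines. A sketch using the block and table names from the thread (adjust to your form; the WHERE clause mirrors the cursor above):

    -- WHEN-NEW-FORM-INSTANCE
    BEGIN
      SET_BLOCK_PROPERTY('ALERTS_PROCESS_TYPES_BLOCK', DEFAULT_WHERE,
        '(PROCESS_TYPE_ID, STATUS_DATE) IN
           (SELECT PROCESS_TYPE_ID, MAX(STATUS_DATE)
              FROM ALERTS.CRDX_ALERTS_PROCESS_TYPES
             GROUP BY PROCESS_TYPE_ID)');
      GO_BLOCK('ALERTS_PROCESS_TYPES_BLOCK');
      EXECUTE_QUERY;
    END;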
