[8i] grouping with tricky conditions

Still stuck on an old database for a little while longer; here's my version information...

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
PL/SQL Release 8.1.7.2.0 - Production
CORE 8.1.7.0.0 Production
TNS for HPUX: Version 8.1.7.2.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

Simplified sample data:
CREATE TABLE     test_data
(     item_id     CHAR(25)
,     ord_id     CHAR(10)
,     step_id     CHAR(4)
,     station     CHAR(5)
,     act_hrs     NUMBER(11,8)
,     q_comp     NUMBER(13,4)
,     q_scrap     NUMBER(13,4)
);

INSERT INTO test_data
VALUES ('hardware-1','0001234567','0010','ROUGH',1.2,14,1);
INSERT INTO test_data
VALUES ('hardware-1','0001234567','0020','TREAT',1.4,14,0);
INSERT INTO test_data
VALUES ('hardware-1','0001234567','0030','OUTPR',2,13,1);
INSERT INTO test_data
VALUES ('hardware-1','0001234567','0040','FINSH',0.3,13,0);

INSERT INTO test_data
VALUES ('abc-123','0007654321','0005','ROUGH',1.2,10,0);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0010','TREAT',1.4,10,0);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0011','OUTPR',4.5,10,0);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0012','OUTPR',2.3,9,1);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0013','OUTPR',2.2,9,0);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0017','OUTPR',3.9,9,0);
INSERT INTO test_data
VALUES ('abc-123','0007654321','0020','FINSH',3.3,9,0);

INSERT INTO test_data
VALUES ('987-x-123','0001234321','0010','ROUGH',1.2,15,0);
INSERT INTO test_data
VALUES ('987-x-123','0001234321','0015','ROUGH',1.6,15,0);
INSERT INTO test_data
VALUES ('987-x-123','0001234321','0020','TREAT',1.55,15,0);
INSERT INTO test_data
VALUES ('987-x-123','0001234321','0030','OUTPR',2.1,15,0);
INSERT INTO test_data
VALUES ('987-x-123','0001234321','0040','FINSH',0.4,15,0);

INSERT INTO test_data
VALUES ('xyz-321','0007654567','0005','ROUGH',1.2,10,0);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0010','TREAT',1.4,10,0);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0011','OUTPR',4.5,9,1);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0012','INSPC',4.1,9,0);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0013','OUTPR',2.2,9,0);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0017','OUTPR',3.9,9,0);
INSERT INTO test_data
VALUES ('xyz-321','0007654567','0020','FINSH',3.3,9,0);
Before you complain about the data types: I didn't choose them, and I have no control over them. These data represent orders (similar to work instructions) to make some items. Each order (ord_id) is for a quantity of a single item (item_id), and there are several steps (step_id), performed in order, each at a specified station (station). Each step takes some time, and that gets recorded (act_hrs). Also, at every step a certain quantity of items is completed (q_comp), and it's possible that someone ruins a piece at a step and it gets scrapped, so that gets recorded too (q_scrap). My real data has many other columns, and my real data set includes thousands of orders, but this sample set should (hopefully) contain one example of each different situation I have to deal with.

What I want to do is condense SEQUENTIAL steps at the 'OUTPR' station into one record (grouped by item_id and ord_id).

In those cases I want to keep the MIN step_id as the step_id, the SUM of act_hrs (so I get the real total hours for all the steps being condensed), the MIN of q_comp (for two reasons: first, I don't want the SUM, since I'm working on the same physical pieces in each step of the order; and second, I don't want the MAX, because that would hide how many pieces have been scrapped along the way), and the SUM of q_scrap (because I want the total number of scrapped pieces)...

The closest I've gotten is with this query:
SELECT     item_id
,     ord_id
,     MIN(step_id)     AS step_id
,     station
,     SUM(act_hrs)     AS act_hrs
,     MIN(q_comp)     AS q_comp
,     SUM(q_scrap)     AS q_scrap
FROM     test_data
GROUP BY item_id
,      ord_id
,      station
,     CASE
          WHEN     station     = 'OUTPR'
          THEN     'OUTPR'
          ELSE     step_id || station
     END
ORDER BY item_id
,      ord_id
,      step_id
;
This query gives these results:
ITEM_ID                   ORD_ID     STEP STATI         ACT_HRS          Q_COMP         Q_SCRAP
------------------------- ---------- ---- ----- --------------- --------------- ---------------
987-x-123                 0001234321 0010 ROUGH           1.200          15.000            .000
987-x-123                 0001234321 0015 ROUGH           1.600          15.000            .000
987-x-123                 0001234321 0020 TREAT           1.550          15.000            .000
987-x-123                 0001234321 0030 OUTPR           2.100          15.000            .000
987-x-123                 0001234321 0040 FINSH            .400          15.000            .000
abc-123                   0007654321 0005 ROUGH           1.200          10.000            .000
abc-123                   0007654321 0010 TREAT           1.400          10.000            .000
abc-123                   0007654321 0011 OUTPR          12.900           9.000           1.000
abc-123                   0007654321 0020 FINSH           3.300           9.000            .000
hardware-1                0001234567 0010 ROUGH           1.200          14.000           1.000
hardware-1                0001234567 0020 TREAT           1.400          14.000            .000
hardware-1                0001234567 0030 OUTPR           2.000          13.000           1.000
hardware-1                0001234567 0040 FINSH            .300          13.000            .000
xyz-321                   0007654567 0005 ROUGH           1.200          10.000            .000
xyz-321                   0007654567 0010 TREAT           1.400          10.000            .000
xyz-321                   0007654567 0011 OUTPR          10.600           9.000           1.000 --*(should not be condensed, should only be step 0011)
xyz-321                   0007654567 0012 INSPC           4.100           9.000            .000
--*(should be steps 0013 & 0017 condensed here)
xyz-321                   0007654567 0020 FINSH           3.300           9.000            .000
However, it condenses ALL of an order's steps at the 'OUTPR' station into one record, and I want to condense only the ones that are sequential. I've marked with asterisks in the results above where the data is bad.

Here are the results I really want to see:
ITEM_ID                   ORD_ID     STEP STATI         ACT_HRS          Q_COMP         Q_SCRAP
------------------------- ---------- ---- ----- --------------- --------------- ---------------
987-x-123                 0001234321 0010 ROUGH           1.200          15.000            .000
987-x-123                 0001234321 0015 ROUGH           1.600          15.000            .000
987-x-123                 0001234321 0020 TREAT           1.550          15.000            .000
987-x-123                 0001234321 0030 OUTPR           2.100          15.000            .000
987-x-123                 0001234321 0040 FINSH            .400          15.000            .000
abc-123                   0007654321 0005 ROUGH           1.200          10.000            .000
abc-123                   0007654321 0010 TREAT           1.400          10.000            .000
abc-123                   0007654321 0011 OUTPR          12.900           9.000           1.000
abc-123                   0007654321 0020 FINSH           3.300           9.000            .000
hardware-1                0001234567 0010 ROUGH           1.200          14.000           1.000
hardware-1                0001234567 0020 TREAT           1.400          14.000            .000
hardware-1                0001234567 0030 OUTPR           2.000          13.000           1.000
hardware-1                0001234567 0040 FINSH            .300          13.000            .000
xyz-321                   0007654567 0005 ROUGH           1.200          10.000            .000
xyz-321                   0007654567 0010 TREAT           1.400          10.000            .000
xyz-321                   0007654567 0011 OUTPR           4.500           9.000           1.000 --*
xyz-321                   0007654567 0012 INSPC           4.100           9.000            .000
xyz-321                   0007654567 0013 OUTPR           6.100           9.000            .000 --*
xyz-321                   0007654567 0020 FINSH           3.300           9.000            .000
I'm not sure how to include the "sequential" condition... perhaps using LAG/LEAD? Any help would be appreciated.

Hello

Here's a way to do it:

SELECT    item_id
,         ord_id
,         MIN (step_id)     AS step_id
,         station
,         SUM (act_hrs)     AS act_hrs
,         MIN (q_comp)      AS q_comp
,         SUM (q_scrap)     AS q_scrap
FROM     (          -- Begin in-line view to compute grp_id
          SELECT  test_data.*
          ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                       ORDER BY      step_id
                                     )
                - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                       ,             CASE
                                                         WHEN  station = 'OUTPR'
                                                         THEN  NULL
                                                         ELSE  step_id
                                                     END
                                       ORDER BY      step_id
                                     )     AS grp_id
          FROM    test_data
         )          -- End in-line view to compute grp_id
GROUP BY  item_id
,         ord_id
,         station
,         grp_id
ORDER BY  item_id
,         step_id
;

This is close to your revised query, though I don't think that one is quite right. Your version puts every 'OUTPR' row that has an 'OUTPR' neighbor (with the same item_id and ord_id, of course) into the same group, which happens to work for these sample data. But what happens if there is a streak of 2 or more OUTPRs, then another station, and then another streak of 2 or more consecutive OUTPRs? To see it, change the sample data by changing only the station on this one row:

--INSERT INTO test_data
--VALUES ('xyz-321','0007654567','0010','TREAT',1.4,10,0);
-- Original row (above) replaced by new row (below)
INSERT INTO test_data
  VALUES ('xyz-321','0007654567','0010','OUTPR',1.4,10,0);

Now we have 2 groups of 2 OUTPRs, separated by an INSPC, but your query generates:

ITEM_ID         ORD_ID     STEP STATI    ACT_HRS     Q_COMP    Q_SCRAP
--------------- ---------- ---- ----- ---------- ---------- ----------
xyz-321         0007654567 0005 ROUGH        1.2         10          0
xyz-321         0007654567 0010 OUTPR         12          9          1
xyz-321         0007654567 0012 INSPC        4.1          9          0
xyz-321         0007654567 0020 FINSH        3.3          9          0

for that item_id: all 4 OUTPRs fall into one group. Don't you really want output like this:

ITEM_ID         ORD_ID     STEP STATI    ACT_HRS     Q_COMP    Q_SCRAP
--------------- ---------- ---- ----- ---------- ---------- ----------
xyz-321         0007654567 0005 ROUGH        1.2         10          0
xyz-321         0007654567 0010 OUTPR        5.9          9          1
xyz-321         0007654567 0012 INSPC        4.1          9          0
xyz-321         0007654567 0013 OUTPR        6.1          9          0
xyz-321         0007654567 0020 FINSH        3.3          9          0

? That's what my query produces.

Sorry, I know that's not much of an explanation, but I'm out of time right now. I'll post more about the Fixed Difference technique later if I have time. For now, you can run the in-line view by itself, displaying the 2 ROW_NUMBER expressions and their difference (grp_id).
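
For example, running just the in-line view by itself looks like this (a sketch against the same test_data; grp_id is simply r_num1 - r_num2, repeated in full because an alias can't be referenced at the same level of the SELECT):

SELECT    item_id
,         ord_id
,         step_id
,         station
,         ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                               ORDER BY      step_id
                             )     AS r_num1
,         ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                               ,             CASE
                                                 WHEN  station = 'OUTPR'
                                                 THEN  NULL
                                                 ELSE  step_id
                                             END
                               ORDER BY      step_id
                             )     AS r_num2
,         ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                               ORDER BY      step_id
                             )
        - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                               ,             CASE
                                                 WHEN  station = 'OUTPR'
                                                 THEN  NULL
                                                 ELSE  step_id
                                             END
                               ORDER BY      step_id
                             )     AS grp_id
FROM      test_data
ORDER BY  item_id
,         ord_id
,         step_id
;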

Edited by: Frank Kulash, 1 February 2012 14:34

Edited by: Frank Kulash, 1 February 2012 14:48

Tags: Database

Similar Questions

  • [8i] grouping with tricky conditions (follow-up)

    I am posting this as a follow-up question to:
    [8i] grouping with tricky conditions

    This is a repeat of my version information:
    Still stuck on an old database for a little while longer; here's my version information...

    BANNER

    --------------------------------------------------------------------------------
    Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
    PL/SQL Release 8.1.7.2.0 - Production
    CORE 8.1.7.0.0 Production
    TNS for HPUX: Version 8.1.7.2.0 - Production
    NLSRTL Version 3.4.1.0.0 - Production

    Now for the sample data. I took one order from my real data set and cut a few columns to illustrate how the previous solution didn't quite work. My real data set still has thousands of orders similar to this one.
    CREATE TABLE     test_data
    (     item_id     CHAR(25)
    ,     ord_id     CHAR(10)
    ,     step_id     CHAR(4)
    ,     station     CHAR(5)
    ,     act_hrs     NUMBER(11,8)
    ,     q_comp     NUMBER(13,4)
    ,     q_scrap     NUMBER(13,4)
    );
    
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0030','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0040','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0117','S903',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0118','S900',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0119','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0120','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0140','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0145','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0150','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0160','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0170','S900',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0220','S902',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0230','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0240','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0480','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0540','S923',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0600','S510',0,9,0);
    Instead of grouping all sequential steps at the 'OUTPR' station, here I am grouping all sequential steps at 'S9%' stations, so this is the solution changed accordingly:
    SELECT    item_id
    ,         ord_id
    ,         MIN (step_id)     AS step_id
    ,         station
    ,         SUM (act_hrs)     AS act_hrs
    ,         MIN (q_comp)      AS q_comp
    ,         SUM (q_scrap)     AS q_scrap
    FROM     (          -- Begin in-line view to compute grp_id
              SELECT  test_data.*
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )
                    - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ,             CASE
                                                             WHEN  station LIKE 'S9%'
                                                             THEN  NULL
                                                             ELSE  step_id
                                                         END
                                           ORDER BY      step_id
                                         )     AS grp_id
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )     AS r_num1
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ,             CASE
                                                             WHEN  station LIKE 'S9%'
                                                             THEN  NULL
                                                             ELSE  step_id
                                                         END
                                           ORDER BY      step_id
                                         )     AS r_num2
              FROM    test_data
             )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,         ord_id
    ,         station
    ,         grp_id
    ORDER BY  item_id
    ,         step_id
    ;
    If you run just the subquery that calculates grp_id, you can see that it sometimes assigns the same group number to two steps that are not side by side. For example, steps 0285 and 0480 are both assigned group 32...

    I don't know if it's because my orders have many more steps than the orders in the sample I provided, or what...

    I tried this version too (replacing all the 'S9%' station names with 'OUTPR'):
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0030','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0040','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0117','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0118','OUTPR',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0119','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0120','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0140','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0145','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0150','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0160','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0170','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0220','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0230','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0240','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0480','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0540','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0600','S510',0,9,0);
    
    
    SELECT    item_id
    ,         ord_id
    ,         MIN (step_id)     AS step_id
    ,         station
    ,         SUM (act_hrs)     AS act_hrs
    ,         MIN (q_comp)      AS q_comp
    ,         SUM (q_scrap)     AS q_scrap
    FROM     (          -- Begin in-line view to compute grp_id
              SELECT  test_data.*
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )
                    - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ,             CASE
                                                             WHEN  station = 'OUTPR'
                                                             THEN  NULL
                                                             ELSE  step_id
                                                         END
                                           ORDER BY      step_id
                                         )     AS grp_id
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )     AS r_num1
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ,             CASE
                                                             WHEN  station = 'OUTPR'
                                                             THEN  NULL
                                                             ELSE  step_id
                                                         END
                                           ORDER BY      step_id
                                         )     AS r_num2
              FROM    test_data
             )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,         ord_id
    ,         station
    ,         grp_id
    ORDER BY  item_id
    ,         step_id
    ;
    and it shows the same problem.

    Help?

    Hello

    I'm glad that you understood the problem.

    Here's a little explanation of the Fixed Difference approach. I may want to refer to this page later, so I will explain some things you obviously already understand, but which I hope you will find helpful.
    Your problem has the additional feature that, depending on the station, some rows can never be combined into larger groups. For now, let's simplify the problem greatly. Given the CREATE TABLE statement you posted, and this data:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0010', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0011', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0012', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0170', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0175', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0205', 'S906');
    

    Let's say that we want this output:

                           FIRST LAST
                           _STEP _STEP
    ITEM_ID ORD_ID     _ID   _ID   STATION  CNT
    ------- ---------- ----- ----- ------- ----
    abc-123 0001715683 0010  0010  Z417       1
    abc-123 0001715683 0011  0140  S906       3
    abc-123 0001715683 0170  0175  Z417       2
    abc-123 0001715683 0200  0205  S906       2
    

    Each row of output represents a contiguous set of rows with the same item_id, ord_id and station. "Contiguous" is determined by step_id: the rows with step_id = '0200' and step_id = '0205' are contiguous in this sample data because there are no step_ids between '0200' and '0205'.
    The expected results include the lowest and highest step_id in each group, and the total number of original rows in the group.

    GROUP BY (usually) collapses the results of a query into fewer rows. One output row may correspond to 1, 2, 3, or any number of original rows. This is obviously a GROUP BY problem: we sometimes want several original rows to be combined into one output row.

    GROUP BY assumes that, just by looking at a row, you can tell which group it belongs to. Looking at any 2 rows, you can always tell whether or not they belong to the same group. That isn't quite the case in this problem. For example, take these rows:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    

    Do these 2 rows belong to the same group or not? We cannot tell. Looking at just these 2 rows, all we can say is that they could belong to the same group, since they have the same item_id, ord_id and station. It is true that members of the same group will always have the same item_id, ord_id and station; if any of those columns differ from one row to the other, we can be sure they belong to different groups. But if they are identical, we cannot be certain they are in the same group, because item_id, ord_id and station tell only part of the story. A group is not just a bunch of rows that have the same item_id, ord_id and station: a group is defined as a sequence of adjacent rows that have those columns in common. Before we can do the GROUP BY, we need to use analytic functions to see whether two rows are in the same contiguous streak. Once we know that, we can store that information in a new column (which I called grp_id), and then GROUP BY all 4 columns: item_id, ord_id, station and grp_id.

    First of all, let's recognize a basic difference among the 3 columns of the table that will be included in the GROUP BY clause: item_id, ord_id and station.
    Item_id and ord_id identify completely separate worlds. There is never any point in comparing rows with different item_ids or ord_ids to each other. Different item_ids never interact; different ord_ids have nothing to do with each other. We'll call item_id and ord_id the 'separate world' columns. Separate worlds do not touch each other.
    Station is different. Sometimes it makes sense to compare rows with different stations. For example, this problem is based on questions like "do these adjacent rows have the same station or not?" We'll call station a 'separate country' column. There are certainly boundaries between separate countries, but countries do affect each other.

    The most intuitive way to identify groups of contiguous rows with the same station is to use LAG or LEAD to look at adjacent rows. That would certainly do the job, but there happens to be a better way, using ROW_NUMBER.
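
    (For comparison, here is a minimal, untested sketch of the LAG approach to this simplified problem: flag each row whose station differs from the previous row's, then turn the flags into group numbers with a running analytic SUM. It needs two levels of in-line views instead of one.)

    SELECT    item_id
    ,         ord_id
    ,         MIN (step_id)     AS first_step_id
    ,         MAX (step_id)     AS last_step_id
    ,         station
    ,         COUNT (*)         AS cnt
    FROM     (          -- Running total of streak-start flags = grp_id
              SELECT  f.*
              ,       SUM (new_grp) OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )     AS grp_id
              FROM   (          -- Flag rows that start a new streak
                      SELECT  test_data.*
                      ,       CASE
                                  WHEN  station = LAG (station)
                                                      OVER ( PARTITION BY  item_id, ord_id
                                                             ORDER BY      step_id )
                                  THEN  0
                                  ELSE  1
                              END     AS new_grp
                      FROM    test_data
                     ) f
             )
    GROUP BY  item_id
    ,         ord_id
    ,         station
    ,         grp_id
    ORDER BY  item_id
    ,         ord_id
    ,         first_step_id
    ;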
    Using ROW_NUMBER, we can take the irregular ordering you have in step_id and turn it into a nice, regular counter, as shown in the r_num1 column below:

                                          R_             R_ GRP
    ITEM_ID ORD_ID     STEP STATION NUM1 S906 Z417 NUM2 _ID
    ------- ---------- ---- ------- ---- ---- ---- ---- ---
    abc-123 0001715683 0010 Z417       1         1    1   0
    abc-123 0001715683 0011 S906       2    1         1   1
    abc-123 0001715683 0012 S906       3    2         2   1
    abc-123 0001715683 0140 S906       4    3         3   1
    abc-123 0001715683 0170 Z417       5         2    2   3
    abc-123 0001715683 0175 Z417       6         3    3   3
    abc-123 0001715683 0200 S906       7    4         4   3
    abc-123 0001715683 0205 S906       8    5         5   3
    

    We can also assign consecutive integers to the rows within each station, as shown in the two columns I called S906 and Z417.
    Notice how r_num1 increases by 1 from each row to the next.
    When there is a streak of several consecutive S906 rows (for example, step_ids '0011' through '0140'), the S906 number also increases by 1 from each row to the next. Therefore, for the duration of a streak, the difference between r_num1 and s906 is constant. For the 3 rows of the first streak, this difference happens to be 1. Another streak of contiguous S906s starts at step_id = '0200'; the difference between r_num1 and s906 for that whole streak happens to be 3. This difference is what I called grp_id.
    The actual numbers mean little, and, as you noticed, streaks at different stations can happen to get the same grp_id. (There don't happen to be any examples of that in this small sample data set.) However, two rows have the same grp_id and station if and only if they belong to the same streak.

    Here is the query that produced the output immediately above:

    SELECT    item_id
    ,         ord_id
    ,         step_id
    ,         station
    ,         r_num1
    ,         CASE WHEN station = 'S906' THEN r_num2 END     AS s906
    ,         CASE WHEN station = 'Z417' THEN r_num2 END     AS z417
    ,         r_num2
    ,         grp_id
    FROM     (          -- Begin in-line view to compute grp_id
              SELECT  test_data.*
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )
                    - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                           ORDER BY      step_id
                                         )     AS grp_id
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )     AS r_num1
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                           ORDER BY      step_id
                                         )     AS r_num2
              FROM    test_data
             )          -- End in-line view to compute grp_id
    ORDER BY  item_id
    ,         ord_id
    ,         step_id
    ;
    

    Here are a few things to note:
    All the analytic ORDER BY clauses are the same. In most problems, there is only one ordering scheme that matters.
    The analytic PARTITION BY clauses include the 'separate world' columns, item_id and ord_id.
    The analytic PARTITION BY clauses also include the 'separate country' column, station.

    To get the results we finally want, we add a GROUP BY clause to the main query. Once again, this includes the 'separate world' columns, the 'separate country' column, and the fixed difference column, grp_id.
    Eliminating the columns that were included just to make the output easier to understand, we get:

    SELECT    item_id
    ,         ord_id
    ,         MIN (step_id)     AS first_step_id
    ,         MAX (step_id)     AS last_step_id
    ,         station
    ,         COUNT (*)         AS cnt
    FROM     (          -- Begin in-line view to compute grp_id
              SELECT  test_data.*
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )
                    - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                           ORDER BY      step_id
                                         )     AS grp_id
              FROM    test_data
             )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,         ord_id
    ,         station
    ,         grp_id
    ORDER BY  item_id
    ,         ord_id
    ,         first_step_id
    ;
    

    This produces the output displayed much earlier in this message.

    This example shows the plain Fixed Difference technique. Your specific problem is complicated a little in that you should group by a CASE expression based on station, rather than by station itself.
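
    For example, a hedged sketch of that adaptation (untested; station_grp is just a name I made up): the tie-breaker inside the second ROW_NUMBER and the GROUP BY both use the CASE expression, so all of the 'S9%' stations are treated as one 'country':

    SELECT    item_id
    ,         ord_id
    ,         MIN (step_id)     AS step_id
    ,         MIN (station)     AS station     -- an 'S9%' streak can mix stations
    ,         SUM (act_hrs)     AS act_hrs
    ,         MIN (q_comp)      AS q_comp
    ,         SUM (q_scrap)     AS q_scrap
    FROM     (          -- Begin in-line view to compute station_grp and grp_id
              SELECT  test_data.*
              ,       CASE
                          WHEN  station LIKE 'S9%'
                          THEN  'S9%'
                          ELSE  station
                      END     AS station_grp
              ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ORDER BY      step_id
                                         )
                    - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                           ,             CASE
                                                             WHEN  station LIKE 'S9%'
                                                             THEN  NULL
                                                             ELSE  step_id
                                                         END
                                           ORDER BY      step_id
                                         )     AS grp_id
              FROM    test_data
             )          -- End in-line view
    GROUP BY  item_id
    ,         ord_id
    ,         station_grp
    ,         grp_id
    ORDER BY  item_id
    ,         step_id
    ;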

  • Cannot create a smart group with three conditions

    When I create or edit a smart group, I can specify one or two conditions. But if I try to add a third condition, no third row is displayed, even though the OK button dims as if a third row had appeared containing a partially specified condition.

    Can any of you create a smart group with three or more conditions? I can't.

    You need to scroll to the bottom of the list of criteria to see them one after another. The window is not resized.

  • Is it possible to rank with a condition on a group?

    Is it possible to rank with a condition on a group? For example, I have the following dataset, produced with:
     
    RANK() OVER (partition BY PC.POLICY_NUMBER, PC.TRANSACTION_TYPE, COV_CHG_EFF_DATE order by PC.POLICY_NUMBER, COV_CHG_EFF_DATE, PC.TIMESTAMP_ENTERED) AS RNK,
    
    POLICY_NUMBER  TRANSACTION_TYPE  COV_CHG_EFF_DATE  TIMESTAMP_ENTERED               Rank
    10531075PQ     01                01/FEB/2009       15/SEP/2009 01:16:09.356663 AM  1
    10531075PQ     01                01/FEB/2009       15/SEP/2009 01:16:09.387784 AM  2
    10531075PQ     02                15/OCT/2009       16/OCT/2009 04:40:24.564928 PM  1
    10531075PQ     02                15/OCT/2009       16/OCT/2009 04:40:24.678118 PM  2
    10531075PQ     10                15/OCT/2009       16/OCT/2009 04:45:20.290117 PM  1
    10531075PQ     10                15/OCT/2009       16/OCT/2009 04:40:29.088737 PM  2
    10531075PQ     09                15/OCT/2009       16/OCT/2009 04:40:29.088737 PM  1  (expected 3)
    10531075PQ     06                17/OCT/2009       17/OCT/2009 04:45:20.290117 PM  1
    10531075PQ     07                17/OCT/2009       17/OCT/2009 04:40:29.088737 PM  1  (expected 2)
    I want to group by related transaction types: for example, '09' and '10' as one set and '06' and '07' as another set. Instead of the rank restarting, I want the rank to continue over any occurrence of '09' or '10'. In the example above, for the marked row I expect rank 3, because 2 transactions of type '10' already exist for the same COV_CHG_EFF_DATE.

    10531075PQ     09                15/OCT/2009       16/OCT/2009 04:40:29.088737 PM  1  (expected 3)

    I wonder if this is possible with RANK or another analytic function. I'm not looking for exact working code; I'd appreciate it if someone could give me an idea or advice. Example table and test data below, if someone wants to experiment:
     
    drop table PC_COVKEY_PD;
    
    create table PC_COVKEY_PD (
    POLICY_NUMBER varchar(30),
    TERM_IDENT varchar(3),
    COVERAGE_NUMBER varchar(3),
    TRANSACTION_TYPE varchar(3),
    COV_CHG_EFF_DATE date,
    TIMESTAMP_ENTERED timestamp
    );
    
    delete from PC_COVKEY_PD;
    
    commit;
    
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '002', '01', to_date('01/FEB/2009','DD/MON/YYYY'), cast('15/SEP/2009 01:16:09.356663 AM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '001', '01', to_date('01/FEB/2009','DD/MON/YYYY'), cast('15/SEP/2009 01:16:09.387784 AM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '004', '02', to_date('15/OCT/2009','DD/MON/YYYY'), cast('16/OCT/2009 04:40:24.164928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '004', '02', to_date('15/OCT/2009','DD/MON/YYYY'), cast('16/OCT/2009 04:40:24.264928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '005', '10', to_date('15/OCT/2009','DD/MON/YYYY'), cast('16/OCT/2009 04:40:24.364928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '002', '10', to_date('15/OCT/2009','DD/MON/YYYY'), cast('16/OCT/2009 04:40:24.464928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '004', '09', to_date('15/OCT/2009','DD/MON/YYYY'), cast('16/OCT/2009 04:40:24.564928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '004', '06', to_date('22/NOV/2011','DD/MON/YYYY'), cast('17/OCT/2009 04:40:24.564928 PM' as timestamp));
    insert into PC_COVKEY_PD values ('10531075PQ', '021', '004', '07', to_date('22/NOV/2011','DD/MON/YYYY'), cast('17/OCT/2009 04:40:24.664928 PM' as timestamp));
    
    commit;
    
    SELECT POLICY_NUMBER,
           TERM_IDENT,
           COVERAGE_NUMBER,
           TRANSACTION_TYPE,
           COV_CHG_EFF_DATE,
           TIMESTAMP_ENTERED,
            RANK() OVER (partition BY PC.POLICY_NUMBER, PC.TERM_IDENT, PC.TRANSACTION_TYPE, PC.COV_CHG_EFF_DATE 
                    order by PC.POLICY_NUMBER, PC.TERM_IDENT, PC.COV_CHG_EFF_DATE, PC.TIMESTAMP_ENTERED) AS RNK
    FROM PC_COVKEY_PD PC
    ORDER BY PC.POLICY_NUMBER, PC.TERM_IDENT, PC.COV_CHG_EFF_DATE, PC.TIMESTAMP_ENTERED ;
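
    One idea (a hedged sketch of my own, not from the thread): derive a 'set' key from TRANSACTION_TYPE, for example with DECODE, and partition the RANK by that key instead of by the raw transaction type:

    SELECT pc.*
    ,      RANK () OVER ( PARTITION BY  pc.policy_number
                          ,             pc.term_ident
                          ,             DECODE ( pc.transaction_type
                                               , '09', '09-10'
                                               , '10', '09-10'
                                               , '06', '06-07'
                                               , '07', '06-07'
                                               ,       pc.transaction_type
                                               )
                          ,             pc.cov_chg_eff_date
                          ORDER BY      pc.timestamp_entered
                        )     AS rnk
    FROM   pc_covkey_pd  pc
    ORDER BY  pc.policy_number
    ,         pc.term_ident
    ,         pc.cov_chg_eff_date
    ,         pc.timestamp_entered
    ;

    With the sample data above, the '09' row then gets rank 3, behind the two earlier '10' rows on the same COV_CHG_EFF_DATE, and the '07' row gets rank 2, behind the '06' row.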
    Edited by: 966820 on October 30, 2012 19:26
  • Photos constantly changes the order of photos in a slideshow, no matter how many times the user moves the photo back to the right place in the order. Example: bathroom plans eventually grouped with pictures of the kitchen! I have found no way to stop this o

    Apple Photos 1.3 serious problems - how can I SOLVE all these problems?

    (1) It crashes without rhyme or reason, no matter where I am in the workflow.

    (2) Photos will not shut down every time, even after several days of waiting.

    (3) Photos frequently picks the wrong picture in the EDIT picture option; I get a different picture than the one I clicked on, one that is about 100 pictures away in the row.

    (4) Photos constantly changes the order of photos in a slideshow, no matter how many times the user moves a photo back to the right place in the order. Example: bathroom plans eventually get grouped with pictures of the kitchen! I have found no way to stop this weird behavior! Is there a way to stop it? If I drag a photo back past some 7 additional photos in the slideshow, after a minute or less it pops back to where it was. !@#$%$#

    (5) If you make any CHANGES to a photo, it often changes the appearance of that picture throughout your complete slideshow from then on, so you lose all the work of fixing your slideshow's configuration, even the order of photos I had already put back where they should be. !@#$$#@

    (6) Photos often identifies lamp shades and long door handles as people's faces.

    (7) Photos makes bad decisions when it comes to brightness, contrast and colors, with no workaround other than using other software, whereas with iPhoto there were a lot of workarounds. I could go on, but I'll spare whoever might be reading this.

    I am up to date on all updates for my Mac. If anyone has REAL answers then please spill the beans, but as far as I can tell, the only truth is that Apple has rolled out an inferior product to replace an exceptional product called iPhoto, and it does not work well on my new 27" 5K iMac. If I had known, I would have chosen another computer; I use iPhoto to prepare fifty to sixty thousand photos in a given year, and I use iPhoto to make hundreds of slideshows from them. Are there plugins for Photos 1.3? I ask because I see where there could be add-ons, but I can't find them.

    Apple has made a serious mistake by turning its back on iPhoto and tens of millions of loyal users.

    Thanks in advance to anyone brave enough to tackle this job.

    James

    First, back up your Photos library, then hold down the Command and Option keys while launching Photos and repair your database - you have a corrupted database.

    LN

  • Creating security groups with specific permissions in Active Directory - Server 2003

    Hello

    I need to create several different security groups for about 7 users, with different access rights granted to each, but all users will access the same main folder and some of the same subfolders. I created a group with some of the users, but they appear to have access to all the folders in a particular subfolder, and I only want them to have access to some of the folders in the selected subfolder.

    I guess what I'm asking is: how do I create different security groups with permissions granted per group, ensuring that users in these groups only have access and permissions to certain folders?

    I don't know if I've explained myself properly - I've certainly confused myself - but I hope someone can point me in the right direction to solve this problem.

    Thanks in advance

    Jah

    Jah,

    For assistance, please ask for help in the appropriate Microsoft TechNet Windows Server Forum.

    Thank you.

  • I can't send an e-mail as a group, with or without an attachment; I always get error 0x800CCC0B. I have Outlook Express.

    cannot send to group - error 0x800CCC0B

    I can't send an e-mail as a group, with or without an attachment; I always get error 0x800CCC0B. I have Outlook Express (not sure which version) under XP. I used to be able to send to a group with 500+ addresses in it; I tried narrowing the group down to 200, but it makes no difference. Can anyone help?
    Check out this link. Apparently there is a max of 100 recipients simultaneously and they also will disable your account temporarily if you try many times.
     
     
  • ERROR - 1051414 - cannot set the role of group with shared services [30:1101:JNDI error] error.

    Hi all

    I tried provisioning filter access for the group, both in Shared Services and through a MaxL command, but I still get the error below. Has anyone experienced this problem? If yes, please let me know how you solved it.


    ERROR - 1051414 - cannot set the role of group with shared services [30:1101:JNDI error] error.


    Thanks in advance!

    Krishna

    Read the support document I posted the link to; before making changes to OpenLDAP, make sure it is backed up.

  • Scrolling group problem: not all of the group content scrolls?

    Helloooow peoplez script wise.

    I was biting my nails on this problem, the last two days and still have not found a solution. Go here:

    I'm trying to make a window (or panel actually, because ultimately it must be run from AE > Window > scriptname.jsx) with a scrollbar that can scroll the content of a group next to it. I can get the scrollbar and the group and all, but the problem right now is that of the x elements in the group, the last ones get cut off. Like this (it's supposed to be 500 buttons):

    Screen Shot 2015-10-13 at 21.40.05.png

    The only way I was able to get the 'inner' group to grow is by not using any align properties and setting the .size manually. I know the content is there, because the last element shows up when I move the group's location[1] upwards until it reaches the top of the group. I tried refreshing the layout (layout.layout(true); layout.resize()) on each call of the scrollbar's onChanging() function, but without success. I've read a lot of fine forum posts and discussions by @Marc Autret and other ScriptUI/ExtendScript users, so far without success.

    TL;DR: I'm trying to make a group with a lot of content that I can scroll with a scrollbar.

    Here is the code snippet, hopefully fairly well commented:

    {
    //scroller test
    // uncomment the temp path (replace with some image file path) and the lines inside the populateGrid() function to reproduce my problem better
    // I'm using an image of 512x288 pixels
    
    
    //var tempPath = "/Volumes/Verbinski/02_SCRIPTING/After_Effects/stockholm/ROOT/EXPLOSIONS/Fireball_side_01/Thumbs/Fireball_Side_01_024.jpg";
    
    
    // create window
    var mWin = new Window('palette');
      mWin.size = [500,500];
      mWin.orientation = 'row';
    
    
    // If you like it, then you better put a scroller on it.
    var scroller = mWin.add('ScrollBar');
      scroller.size = [20,mWin.size[1]-40]
      scroller.minvalue = -5;
      scroller.value = scroller.minvalue;
      scroller.maxvalue = 10000; // tried changing this to all sorts of interesting numbers.
    
    
    //This should move the group, created further down.
    scroller.onChanging = function(){
      grid.location = [grid.location[0],-scroller.value];
    }
    
    
    // "Boundary" for grid (see below)
    var gridArea = mWin.add('panel',undefined,'gridArea');
      gridArea.size = [mWin.size[0]-40,mWin.size[1]-40];
    
    
    // The grid... a digital frontier... and also a container of stuff
    var grid = gridArea.add('panel',undefined,'grid');
      grid.size = [gridArea.size[0]-20,9000000000] // no matter how high I put this, it doesn't change a thing
    
    
    // Just an array for all the images to go
    var clips = [];
    // Total height gets calculated in the populateGrid function.
    var totalHeight = 0;
    
    
    function populateGrid(rows){
      var img;
      for(i=0;i<rows;i++){
      // img = grid.add('image',undefined,tempPath);
      // clips.push(img);
      grid.add('button',undefined,i);
      }
      for(i in clips){
      clips[i].location = [0,(clips[i].image.size[1]*i)]
      }
      // totalHeight = (img.image.size[1]+grid.spacing)*rows;
      // grid.size = [grid.size[0],totalHeight]
      // scroller.maxvalue = totalHeight/2;
    
    
    }
    
    
    // put x number of buttons/images into the grid
    populateGrid(500);
    
    
    // show the window
    mWin.show();
    mWin.center();
    }
    

    Really hope someone here sees this and can help me out.

    Cheers, Fynn.

    My setup:

    retina 5K, 4 GHz Intel Core i7 iMac

    32 GB of RAM, 512 GB SSD

    OSX Yosemite: 10.10.4

    AE: CS6 |  CC 2014: 13.1.1.3

    Aaalrighty, guys. It seems I have cracked it... sort of...

    David, your version worked quite well; I just modified it a bit to get the right calculation.

    The wheel now works as expected, and scroller.maxvalue is calculated as (number of items) * (height of the first item).

    Everything worked fine until I started using the automatic layout manager. The 'fill' option at least makes it really hard to figure out the final height of the inner objects, so their sizes must be set accordingly.

    Anyway, here is my modified code (sorry @David; depending on your version you may simply copy the scroller.onChanging() and populateGrid() parts):

    {
    //scroller test
    // I'm using an image of around 512x288 pixels
    
    var tempPath = "YOUR IMAGE HERE";
    
    // create window
    var mWin = new Window('palette');
      mWin.size = [500,500];
      mWin.orientation = 'row';
    
    // If you like it, then you better put a scroller on it.
    var scroller = mWin.add('ScrollBar');
      scroller.size = [20,mWin.size[1]-40]
      scroller.minvalue = 0;
      scroller.value = scroller.minvalue;
      scroller.maxvalue = 3000; // tried changing this to all sorts of interesting numbers.
    
    //This should move the group, created further down.
    var scrollDiary = 0;
    scroller.onChanging = function(){
      var scrollVal = Math.abs(scroller.value)-scrollDiary;
      for(i=0;i		   
  • How to sum a text_field or column with a where condition?

    Hi all

    In Oracle Forms 6i, I created a form with 5 text_items (Number of Records Displayed = 20), namely FD ACCOUNT NO, AMOUNT, INTEREST RATE, STATUS and INTEREST_YEAR.

    FD ACCOUNT NO.   AMOUNT   INTEREST RATE   STATUS   INTEREST_YEAR
    47665            50000    11.5            E        5750
    37463            60000    12              D        7200
    47651            100000   12.5            D        12500
    34766            70000    11              E        7700

    I want to take the sum of INTEREST_YEAR where STATUS = 'E'.

    I created a field named TOTAL_INTEREST_YEAR in which I want to display the sum.

    How do I sum with a where condition?

    Thank you.

    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production

    Oracle form Builder 6i.

    Michael,

    When you write the formula for a calculated item, you cannot use PL/SQL statements (including the DECODE built-in) in Forms 6i.  If there were no conditional control over your calculation, you could simply make your item a summary item and perform the summation over the interest_rate column.  However, because your calculation depends on the value in the STATUS column, you will need to use a combination of a calculated item and a summary item, since you can't use an IF, DECODE or any other PL/SQL statement in the formula of a calculated item.  Therefore, you need to create a function in the Program Units node and call that function in your formula.  I tested this using the following code and it worked correctly.

    First, create the following function in the Program Units node of the Object Navigator.

    FUNCTION calc_interest RETURN NUMBER IS
         n_ret_val  NUMBER := 0;
     BEGIN
       IF ( :YOUR_BLOCK.STATUS = 'E' ) THEN
           n_ret_val := :YOUR_BLOCK.interest_rate;
       END IF;
       RETURN n_ret_val;
    END calc_interest;
    

    First, you must set the block property Query All Records = Yes.

    Then, open the property palette of your calculated item and set the following properties:

    1. Calculation Mode = Formula

    2. Formula = CALC_INTEREST

    3. Database Item = No

    Now create a second, non-database item in your block that will display the summed interest amount.  Open the property palette for this item and set the following properties:

    1. Data Type = Number

    2. Calculation Mode = Summary

    3. Summary Function = Sum

    4. Summarized Item = <the name of your calculated item>

    5. Database Item = No

    6. Canvas = <your canvas>

    When you query your block, you should see the sum of all records where STATUS = 'E'.
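
    (For reference, the equivalent conditional sum in plain SQL would be something like this sketch, where fd_accounts is a made-up table name standing in for the block's base table:)

    SELECT  SUM (DECODE (status, 'E', interest_year, 0))     AS total_interest_year
    FROM    fd_accounts;     -- fd_accounts is hypothetical; use your block's base table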

    It worked in the example form I created, so it should work for you.

    Craig...

  • replace groups with symbols

    I have a customer file that uses duplicated groups instead of symbols. Is there a script that will replace all the "groups" with a given symbol? I'm sure this has been asked before, but I can't seem to find the thread.

    Thank you.

    IIRC, one of the scripts on kelsocartography.com should do exactly that.

    But you would need to place a symbol instance and run the script.

  • Some built-in panels are grouped with my plugin panel in InDesign CC

    My panel list settings look like this:

    resource LocaleIndex (kSDKDefPanelResourceID)
    {
        kViewRsrcType,
        { kWildFS, k_Wild, kSDKDefPanelResourceID + index_enUS }
    };

    /* Definition of the panel list.
    */
    resource PanelList (kSDKDefPanelResourceID)
    {
        {
            // 1 panel in the list
            kSDKDefPanelResourceID,      // resource ID for this panel (use SDK default rsrc ID)
            kMyPluginID,                 // ID of the plugin that holds this panel
            kIsResizable,
            kMTPanelWidgetActionID,      // Action ID to show/hide the panel
            "MyPlugin:Test",             // appears in the Window list
            "",                          // alternate menu path of the form "Main:Foo" if you want your palette placed in a second menu
            0.0,                         // alternate menu position, to determine menu order
            kMyImageRsrcID, kMyPluginID, // Rsrc ID, Plugin ID for a PNG icon resource to use for this palette (when not active)
            c_Panel
        }
    };

    resource MTPanelWidget (kSDKDefPanelResourceID + index_enUS)
    {
        __FILE__, __LINE__,              // location macro
        kMTPanelWidgetID,                // WidgetID
        kPMRsrcID_None,                  // RsrcID
        kBindAll,                        // Binding (0 = none)
        0, 0, 320, 386,                  // frame: left, top, right, bottom
        kTrue, kTrue,                    // Visible, Enabled
        kTrue,                           // Erase background
        kInterfacePaletteFill,           // Erase-to color
        kMTPanelTitleKey,                // Panel name
        {
        }
        "MyPlugin"                       // internal popup menu name
    };

    The problem is that when I click Window > MyPlugin > Test, the built-in Liquid Layout, Articles and Cross-References panels also open, grouped with my panel. I don't understand what I'm doing wrong. The situation is the same with all the examples in the SDK that have panels. Help, please. 10x in advance.

    My previous post said that panelMgr->GetPanelFromWidgetID(kMyPanelWidgetID) does not find the panel when it is called in the initializer.

    I'm now deep-searching the palette trees from the GetRootPaletteNode paletteRef, using PaletteRefUtils methods. For kTabPanelContainerType, I use panelMgr->GetPanelFromPaletteContainer() to check the widget ID; fortunately that already works. For other types, I just descend into the children.

    Three more steps to move the found paletteRef into a new floating window:

    columnRef = PaletteRefUtils::NewFloatingTabGroupContainerPalette(); // kTabPaneType

    groupRef = PaletteRefUtils::NewTabGroupPalette(columnRef); // kTabGroupType

    PaletteRefUtils::ReparentPalette(myPanelRef,groupRef,PaletteRef());

    If you have several panels, iterate steps 2 + 3 to create one group (row) per panel, or just step 3 if they should all end up stacked.

    Edit: I found out why GetPanelFromWidgetID did not work: apparently initializer priority is considered per plugin, so one of my panels in a different plugin just was not registered yet. Forget all those deep-search tips.

  • Dividing groups with gradient fills along paths

    Hi guys,

    I would like to distribute the green leaves in the image below along the green object on the right, so that the leaves create the shape of its lines. At the moment they just fill the space of the green object in a similar form. Do you understand what I mean?

    Schermafbeelding 2013-12-30 om 15.00.38.png

    I tried blending, but since the leaves are grouped objects with gradient fills, this is the result:

    Schermafbeelding 2013-12-30 om 15.00.32.png

    So my question: I need to divide the leaves along the green shape so that they follow its form. How can I get this done?

    Best regards

    Bob

    You have 4 gradients (expanded to blends with clipping masks) that you have to cut. After I found 3 of them, there was one more in the stem.

    You also need to reapply your blending modes after cutting, so take note of your selection.

  • How to use the 'LIKE' operator/substr/instr with an if condition?

    Hello

    How do I use the 'LIKE' operator / substr / instr with an if condition?

    I can use the following function:
    <?xdofx:InStr('ssStatus','Open',1) = '0'?>
    which returns true or false depending on whether ssStatus is like Open*.

    But when I try to use <?if:xdofx:instr('ssStatus','Open',1) = '0'?> calculation <?end if?>

    It gives an error.

    Any suggestion? OR a solution?
    Thank you.

    Edited by: user12427117 on March 10, 2011 20:42

    Edited by: user12427117 on March 10, 2011 20:46

    You can try to use

    0? >

    Use contains instead of LIKE.

  • Count records in different tables with a condition

    Hello

    I would like to ask for your help. I want to count the records from multiple tables at the same time. These tables have a common column, with a DATE data type. The output I want is the count of records in each table with a condition of column_date < (specified date).


    Something like:

    select count(*) from (the_tables) where (column_date) < (specific_date)


    Your help would be much appreciated.

    Hello

    || ' WHERE ITEMDATE <= 8/1/2008;';
    

    Along with Frank's remark about that mistake, another good practice would be to use a bind variable:

    script_sql := 'SELECT COUNT (*) FROM '
    || r_1.owner
    || '.'
    || r_1.table_name
    || ' WHERE ITEMDATE <= :1';
    
    ...
    
    EXECUTE IMMEDIATE script_sql
    INTO this_cnt
    USING to_date('08/01/2008','DD/MM/YYYY');
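
    A fuller sketch of the loop those fragments come from (my guess at the surrounding code, untested; it uses ALL_TAB_COLUMNS to find every table that has the common ITEMDATE column):

    DECLARE
        this_cnt    NUMBER;
        script_sql  VARCHAR2 (500);
    BEGIN
        FOR r_1 IN (  SELECT  owner, table_name
                      FROM    all_tab_columns
                      WHERE   column_name = 'ITEMDATE'  )
        LOOP
            script_sql := 'SELECT COUNT (*) FROM '
                       || r_1.owner
                       || '.'
                       || r_1.table_name
                       || ' WHERE ITEMDATE <= :1';

            EXECUTE IMMEDIATE script_sql
            INTO    this_cnt
            USING   TO_DATE ('08/01/2008', 'DD/MM/YYYY');

            -- Report the count per table
            DBMS_OUTPUT.PUT_LINE (r_1.owner || '.' || r_1.table_name
                                  || ': ' || this_cnt || ' rows');
        END LOOP;
    END;
    /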
    
