Two equal consecutive integers

In this VI, if two consecutive numbers are equal, the Boolean value stays true even though it should turn off afterwards.

Is there a way to do this without using local variables?

After writing each Boolean, you must clear it before moving on to the next iteration. This is most easily done with an inner FOR loop containing two iterations.

Here is a quick sketch. See if it works for you.

(I've also simplified the rest of your code, which was extremely complicated; no doubt it could be simplified further. Note that I used a transparent cluster for the LEDs, eliminating all those comparisons.)

Tags: NI Software

Similar Questions

  • Compare the content of two equal nested tables

    I'm working on a black-box test where I compare the contents of two structurally equal tables before and after executing a script of some kind. My two tables, MDQ_OLD and MDQ_NEW, are filled with data in two separate operations.

    The two tables I'll compare are nested, as you can see in the CREATE TABLE scripts below.

    I tried to use the MINUS operator, but without success.

    I also tried to select the data into %ROWTYPE variables for my nested tables, but that does not work either (see the script further down in this post).

    Can you please help me with this problem of how to compare the content of two nested tables?

    Run the scripts below to reproduce the problem, and I'll be sure to update this post if more information is required.

    -- The scripts below --

    select * from v$version;

    Oracle Database 11 g Enterprise Edition Release 11.2.0.4.0 - 64 bit Production

    PL/SQL Release 11.2.0.4.0 - Production

    CORE 11.2.0.4.0 Production

    TNS for Linux: Version 11.2.0.4.0 - Production

    NLSRTL Version 11.2.0.4.0 - Production

    -- First of all, I create my types

    CREATE OR REPLACE TYPE MDQ_DETAIL FORCE AS OBJECT (MDQ_DETAIL_ID NUMBER, MDQ_DETAIL_DESC VARCHAR2 (100));

    CREATE OR REPLACE TYPE T_MDQ_DETAIL AS TABLE OF MDQ_DETAIL;

    -- Note that this type contains the nested table type T_MDQ_DETAIL:

    CREATE OR REPLACE TYPE MDQ_PARENT FORCE AS OBJECT (MDQ_ID NUMBER, MDQ_DETAILS T_MDQ_DETAIL);

    -- Then I create the two equal nested tables

    CREATE TABLE MDQ_OLD OF MDQ_PARENT NESTED TABLE MDQ_DETAILS STORE AS MDQ_PR_OLD;

    CREATE TABLE MDQ_NEW OF MDQ_PARENT NESTED TABLE MDQ_DETAILS STORE AS MDQ_PR_NEW;

    -- Insert test data into the nested tables

    Insert into MDQ_OLD (MDQ_ID, MDQ_DETAILS) Values (1, T_MDQ_DETAIL (MDQ_DETAIL(1,'desc1')));

    Insert into MDQ_NEW (MDQ_ID, MDQ_DETAILS) Values (2, T_MDQ_DETAIL (MDQ_DETAIL(1,'desc1')));

    -- Try to use the MINUS operator to compare the contents of the two nested tables, but it gives this error:

    -- ORA-00932: inconsistent datatypes: expected - got DISPATCH.T_MDQ_DETAIL

    Select * from MDQ_NEW
    MINUS
    Select * from MDQ_OLD;

    -- Try to select into %ROWTYPE variables, but it fails

    declare

    myTypeOld MDQ_OLD%ROWTYPE;

    myTypeNew MDQ_NEW%ROWTYPE;

    myTypeDiff MDQ_NEW%ROWTYPE;

    begin

    -- The selects give: PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list

    select * bulk collect into myTypeOld from mdq_old;

    select * bulk collect into myTypeNew from mdq_new;

    -- Would need a 'map member function' on the types for MULTISET EXCEPT to work, but as far as I can tell

    -- I'm not able to bulk collect into myTypeOld or myTypeNew, so this won't help me out.

    myTypeDiff := myTypeOld multiset except myTypeNew;

    end;

    -- Clean up:

    drop table MDQ_OLD;

    drop table MDQ_NEW;

    drop type MDQ_PARENT;

    drop type T_MDQ_DETAIL;

    drop type MDQ_DETAIL;

    > the queries you provided don't catch that.

    You asked how to compare the content of nested tables.

    I suspected that what you asked for wasn't what you actually want; that's why I asked you to specify the comparison in more detail.

    > Do you have a query that grabs this difference as well?

    SELECT o.mdq_id, od.*
    FROM mdq_old o, TABLE (o.mdq_details) od
    MINUS
    SELECT n.mdq_id, nd.*
    FROM mdq_new n, TABLE (n.mdq_details) nd;

    > Also, if possible, do you have a sample of a BULK COLLECT statement, please?

    Actually, you raise an interesting point about using %ROWTYPE; in my view, that should work. Like this...

    DECLARE
    TYPE rt_mdq_new IS RECORD (
    mdq_id NUMBER,
    mdq_details t_mdq_detail);

    TYPE tt_mdq_new IS TABLE OF rt_mdq_new;

    t_mdq_new tt_mdq_new;
    BEGIN
    SELECT mdq_id, mdq_details
    BULK COLLECT INTO t_mdq_new
    FROM mdq_new mn;
    END;
    /

    DECLARE
    CURSOR c_mdq_new
    IS
    SELECT mn.*
    FROM mdq_new mn;

    TYPE tt_mdq_new IS TABLE OF c_mdq_new%ROWTYPE;

    t_mdq_new tt_mdq_new;
    BEGIN
    OPEN c_mdq_new;
    FETCH c_mdq_new BULK COLLECT INTO t_mdq_new;
    CLOSE c_mdq_new;
    END;
    /
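
    As the comment in the OP's PL/SQL block hints, both the ORA-00932 and the failed MULTISET EXCEPT come down to MDQ_DETAIL having no comparison method. A minimal sketch of that idea, assuming the types can be (re)created before the dependent tables (the sort_key name and key format here are made up for illustration):

    -- Hypothetical sketch: give MDQ_DETAIL a MAP member function so Oracle
    -- can order and compare its values (define it before creating the tables).
    CREATE OR REPLACE TYPE MDQ_DETAIL FORCE AS OBJECT
    ( MDQ_DETAIL_ID   NUMBER
    , MDQ_DETAIL_DESC VARCHAR2 (100)
    , MAP MEMBER FUNCTION sort_key RETURN VARCHAR2
    );
    /
    CREATE OR REPLACE TYPE BODY MDQ_DETAIL AS
      MAP MEMBER FUNCTION sort_key RETURN VARCHAR2 IS
      BEGIN
        -- Fixed-width id, then the description, so the ordering is deterministic.
        RETURN LPAD (MDQ_DETAIL_ID, 10, '0') || '|' || MDQ_DETAIL_DESC;
      END;
    END;
    /

    With something like that in place, MULTISET EXCEPT (and comparisons between T_MDQ_DETAIL values generally) should work.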

  • Looking for two consecutive spaces in a Pages document

    Pages 5.6.1

    OS X 10.11.3

    27" iMac, late 2013

    When I search a Pages 5.6.1 document for two consecutive spaces (so that I can do a Replace All), Pages presents MathType equations as instances. Can someone explain what's up with this?

    I recently, reluctantly, updated from 4.3. Things got better in 5.6.1, but this seems like a real bug.

    Thank you

    Rick

    There may be more than one space preceding or following a MathType equation in your document. Nothing wrong with that at all.

    In your View menu, show invisible characters. If there are two spaces adjacent to a MathType equation, you can then use the right arrow in the Find/Replace dialog box to step past those occurrences, or replace them.

    Pages '09 v4.3 and Pages v5.6.1 are entirely different applications, the latter being a dismal, inconsistent subset of the former. Use Pages v5 when you like frustration and lost productivity, and Pages '09 when you want to get things done.

  • GREP: find two identical consecutive strings

    Hello

    I need to find all occurrences of two identical words (or strings) that occur consecutively, like the ones at the beginning of this sentence. Is this possible in InDesign?

    I would have thought that

    (\w+) $1

    would do it, but it doesn't.

    The syntax of a search string is different from that of a replacement string. Use the following syntax:

    (\w+) \1

    - but it will find one or more identical consecutive characters separated by a space. For example, it will find your "find find" but also "t t" in "at the".  If you want to find whole words only, a good first attempt would be:

    (\b\w+) \1

    but it will fail for a few fairly common phrases. It will find "for foreign countries", "in international relations", and "be better". To find doubled words only, use this:

    (\b\w+) \1\b

    or even this one, a little better because it will find triple (or longer) repetitions as well:

    (\b\w+)( \1)+\b

    Edit: Hi Eugene.
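
    As an aside, the backreference idea is easy to test outside InDesign as well; here is a hypothetical check using Oracle's regex functions (which support \w and \1 from 10gR2 on, though not \b, so whole-word matching needs a different guard there):

    -- Made-up test string; REGEXP_SUBSTR returns the first doubled word.
    SELECT REGEXP_SUBSTR ('you can find find doubled words', '(\w+) \1') AS hit
    FROM   dual;   -- returns 'find find'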

  • ViewCriteria comparing two columns of the same table

    Does anyone know how I can create a ViewCriteria where clause that compares two columns of the same table?

    For example, suppose I had two integer columns (MINSAL and MAXSAL) and wanted to see if they are equal. Normally I would do it with SQL like the following:

    SELECT * FROM EMPL
    WHERE MINSAL = MAXSAL

    Try binding one of them to a Groovy expression.

    Timo

  • Align the two signals and measure the Phase Shift

    Hello

    I am doing an experiment in which I use the NI USB-6221 DAQ card. The board is able to take 250k samples/second. I want to measure two voltages in a circuit and find the phase shift between them at frequencies between 1 and 10000. First I output a sine wave of variable frequency through the board and applied it to a test circuit. Then I used the board to measure the two voltages consecutively (thus reducing the maximum sampling frequency to 125k). I used the Align Signals VI, measured the two phases, and then calculated the phase shift (VI attached as Phase 1). It worked well for the test circuit I built, in which the phase shift went logarithmically from ~0.20 degrees to ~84.5 degrees and then stabilized. At frequencies above 5,000 Hz the phase shift should have remained constant, but it varied by plus or minus 1 degree. When the phase shift is 84.5 degrees, a variation of one degree is not particularly significant. When I ran my program on the circuit that I really wanted to measure, though, the phase shift went from -0.5 degrees up to about 1.2 degrees. The variation in the phase-shift values at high frequencies (> 3000) was about 0.2 degrees. Given the small phase shift, this variation is unacceptable. I then tried to use a sequence to sample each signal individually (raising the maximum sampling frequency to 250k) and then align the two signals and measure the phase shift of each. When I use the Align and Resample Express VI to realign the two signals, I get the message "Error 20333, Analysis: Cannot align two waveforms with the same dt if their samples are not clocked in phase." Is it possible to align the two signals I describe here? I attach the new VI as Phase 2.

    Matthew,

    I think I have an idea for at least part of the problem.

    I took your program's data and deleted the DAQ stuff.  I converted the signal to a chart control and then looked at what was going on with the signal analysis.

    The output of the Align Waveforms VI has two waveforms, like the input.  However, the Y arrays in both waveforms are empty!  It does not generate an error. After some head scratching, reading the help files, and trying things out, here is what I think is happening: the t0 times of the two input signals are 1.031 seconds apart. Since the waveforms contain 1.000 seconds of data, there is no overlap and it cannot align them.

    I changed the t0 of the two waveforms to be the same, and it lines up.  The number of elements in the arrays is reduced by one. Then I increased the t0 of the first element by 0.1 seconds. The output had both t0 values greater than the input t0 by dt, and the size of the arrays was 224998.  Reversing the t0 of the two elements shifts the phase in the opposite direction.

    What that tells me is that you cannot reliably align two waveforms which do not overlap.

    I suggest that you go to 2-channel data acquisition and accept the reduced sample rate.  You won't get the resolution you want, but you should be able to tell if something important happens.

    You may be able to improve the equivalent resolution by taking multiple passes with a slight phase shift. This is similar to the way old (analog) sampling oscilloscopes worked. Take a series of measurements with the signal you are currently using.  Average enough of them to minimize variations due to noise. Then shift the phase of the excitation signal by an amount that is smaller than the phase resolution of the sampling rate and repeat the measurements.  Recall that I calculated that for a 5 kHz signal sampled at 125 kHz, you get a sample every 14.4 degrees. If you shift the phase by 1 degree (at least in a mathematical simulation), you get a different set of samples of the excitation.  They are still separated by 14.4 degrees.  Take another series of measurements. Shift the phase another degree and repeat.  As long as your sampling clocks are stable enough that the frequency does not drift significantly (and it shouldn't with your equipment), you should be able to get resolution near what you need.  The trade-off is that you need to perform more measurements and may need to keep track of the phase shifts between the various measurements.

    Lynn
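
    For reference, the 14.4-degree spacing quoted above is just the phase swept in one sample period:

    \Delta\phi \;=\; 360^\circ \times \frac{f_{signal}}{f_{sample}} \;=\; 360^\circ \times \frac{5\,\mathrm{kHz}}{125\,\mathrm{kHz}} \;=\; 14.4^\circ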

  • Count consecutive appearances

    Hello
    I have a requirement to count the number of consecutive appearances of a status over a period, dynamically.


    with x as (
    select 101 as "ID", '01' as "STATUS", '01-01-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-02-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-03-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-04-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-05-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-06-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-07-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-08-11' as "DT" from dual
    union
    select 101 as "ID", '04' as "STATUS", '01-09-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-10-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-11-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-12-11' as "DT" from dual)
    select x.ID, x.STATUS, x.DT from x order by x.DT

    Output must be (for ID 101; STATUS '01' is not needed):
    ----------------------------------------------
    STATUS   1Time   2Time   3&4Time   5&above
    ----------------------------------------------
    02       1       2
    03       2               1
    04       1
    ----------------------------------------------

    The columns 1Time, 2Time, 3&4Time, and 5&above hold the number of runs of consecutive appearances of the status (you have to check the status of the previous month and the next month).

    That is to say, for example, if we take status '02':
    after checking the previous and next months, status '02' appeared in a run of one on '01-02-11' (Feb),
    and so 1 should be in the '1Time' column.

    Similarly, after checking the previous and next months, runs of 2 consecutive appearances of status '02' occur twice, on ('01-04-11', '01-05-11') and ('01-07-11', '01-08-11'), so the count under the '2Time' column will be 2.

    For status '03':
    there are two single appearances of the status, i.e. on '01-03-11' and '01-06-11', so the count is 2 under the '1Time' column for status '03';
    and status '03' has a single run of 3 consecutive appearances, i.e. on '01-10-11', '01-11-11', and '01-12-11', so the count is 1 under the '3&4Time' column.

    Regards

    Like this?

    -- Data:
    with x as (
    select 101 as "ID", '01' as "STATUS", '01-01-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-02-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-03-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-04-11' as "DT" from dual
    union
    select 101 as "ID", '02'as "STATUS", '01-05-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-06-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-07-11' as "DT" from dual
    union
    select 101 as "ID", '02' as "STATUS", '01-08-11' as "DT" from dual
    union
    select 101 as "ID", '04' as "STATUS", '01-09-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-10-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-11-11' as "DT" from dual
    union
    select 101 as "ID", '03' as "STATUS", '01-12-11' as "DT" from dual)
    -- Query:
    select status, sum(decode(c,1,1,0)) "1 Time",
                   sum(decode(c,2,1,0)) "2 Time",
                   sum(decode(c,3,1,4,1,0)) "3 or 4 Time",
                   sum(case when c>=5 then 1 else 0 end) "5 or above"
    from
    (
      select status, d, count(*) c
      from
        (
         select x.ID,
                  x.STATUS,
                  to_date(x.DT,'MM-DD-RR')-row_number() over (partition by status order by dt) d
         from x order by x.DT
        )
      group by status, d
    )
    group by status
    order by status;
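
    To see why this works, trace the inner expression for status '02' (a sketch; dates use the MM-DD-RR format from the query above). Subtracting ROW_NUMBER() from the date gives a value d that is constant for exactly the length of each consecutive run:

    -- Status '02' rows:
    -- DT          rn   d = DT - rn   run length
    -- 01-02-11    1    01-01-11      1  -> counted under "1 Time"
    -- 01-04-11    2    01-02-11      2  -> counted under "2 Time"
    -- 01-05-11    3    01-02-11
    -- 01-07-11    4    01-03-11      2  -> counted under "2 Time"
    -- 01-08-11    5    01-03-11

    Grouping by (status, d) then turns each run into one row whose count is c.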
    

    If you want, you can add the condition

    where status!='01'
    

    Edited by: hm on 11.01.2012 22:40

  • Need an XQuery function that compares two nodes

    Is there any XQuery function for OSB 10g which compares two nodes with correct handling of redundant namespaces?
    There is a function "fn:deep-equal" that compares two nodes, but it fails when one of the nodes has redundant namespaces as an attribute.
    I need an XQuery function that compares two nodes for the same content, the same child nodes and their content, and the same attributes of the child nodes, ignoring the xmlns attribute on a node that has redundant namespaces (namespaces that are already defined in the parent nodes, or any unnecessary namespaces).

    For example, it should treat the following two nodes as equal:

    <a t="1">
    <b xmlns:ns1="ns1">
    2
    </b>
    </a>


    <a t="1">
    <b>
    2
    </b>
    </a>


    The two nodes above have the same attributes and content, except for the xmlns attribute on the 1st node. Business-wise the namespace is redundant but not wrong; such cases can occur, and I need to take care of these scenarios.
    Please let me know of a function or logic to handle this.

    in fact, the 2 XMLs you posted differ only around the '2'.

    I tried this:

    XQuery version "1.0" encoding "Cp1252";
    (: pragma type = "xs: anyType" ::))

    declare namespace xf = "http://tempuri.org/OSB%20Project%201/compare/";

    declare function:compare() xf
    {element (*)}

    Let $a: = (t = '3')(b xmlns:ns="ns") 2 (/ b) (/a)
    Let $b: = (a t = "3") (b) 2 / (b) (/a)

    return {fn:deep - equal ($a, $b)}
    };

    XF:Compare()

    and it does not work as expected; that is, the additional xmlns:ns="ns" is ignored...
    I have real

  • Numbering rows based on two columns

    Hello world

    I'd like to know if there is a way to achieve the numbering shown in the second table below using only native Oracle functions, like ROW_NUMBER() with a partition, etc.

    I'm using Oracle 10g.

    The logic used is:
    starting from 1, increment by one each time the ORIGIN is identical to the FIRST ORIGIN of the row's group (ID).
    ID    ORIGIN    DESTINATION    ORDER
    ------------------------------------
    1     A         B              1
    1     B         A              2
    1     A         B              3
    1     B         C              4
    1     C         A              5
     
    ID     ORIGIN    DESTINATION    ORDER    NUMBERING
    --------------------------------------------------
    1      A         B              1        1
    1      B         A              2        1
    1      A         B              3        2
    1      B         C              4        2
    1      C         A              5        2
    In order to compare the ORIGIN of each row with the FIRST ORIGIN of its group, I used the LAG function to create a column holding the FIRST ORIGIN value of the group.

    However, I was not able to number the rows as shown above (the NUMBERING column).

    Any help will be much appreciated.

    --------------------------------------------------------------------------------------------------------------

    Test query:
    WITH T AS
    (
      SELECT 1 ID, 'A' ORIGIN, 'B' DESTINATION, 1 ORDERING FROM DUAL UNION ALL
      SELECT 1 ID, 'B' ORIGIN, 'A' DESTINATION, 2 ORDERING FROM DUAL UNION ALL
      SELECT 1 ID, 'A' ORIGIN, 'B' DESTINATION, 3 ORDERING FROM DUAL UNION ALL
      SELECT 1 ID, 'B' ORIGIN, 'C' DESTINATION, 4 ORDERING FROM DUAL UNION ALL
      SELECT 1 ID, 'C' ORIGIN, 'A' DESTINATION, 5 ORDERING FROM DUAL
    )
      SELECT T.ID
           , T.ORIGIN
           , T.DESTINATION
           , T.ORDERING
           , LAG (T.ORIGIN, T.ORDERING -1, 0) OVER (PARTITION BY T.ID
                                                        ORDER BY T.ID
                                                               , T.ORDERING) FIRST_ORIGIN_OF_GROUP
        FROM T
    ORDER BY T.ID
           , T.ORDERING

    Hello

    Here's one way:

    WITH     got_first_origin     AS
    (
         SELECT     id, origin, destination, ordering
         ,     FIRST_VALUE (origin) OVER ( PARTITION BY  id
                                          ORDER BY         ordering
                                     ) AS first_origin
         FROM    t
    )
    SELECT     id, origin, destination, ordering
    ,     COUNT ( CASE
                        WHEN  origin = first_origin
                  THEN  1
                    END
               )     OVER ( PARTITION BY  id
                           ORDER BY      ordering
                   ) AS numbering
    FROM     got_first_origin
    ;
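
    Traced against the sample data (a comment sketch), the conditional COUNT advances only on rows where origin matches the group's first origin, which is exactly the requested NUMBERING column:

    -- ordering  origin  first_origin  matches?  running COUNT (numbering)
    --    1        A          A          yes          1
    --    2        B          A          no           1
    --    3        A          A          yes          2
    --    4        B          A          no           2
    --    5        C          A          no           2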
    

    This assumes that the combination of id and ordering is unique. Within an id, the ordering values do not have to be consecutive integers, or anything like that.

    Analytic functions cannot be nested (the argument of the analytic COUNT function cannot call the analytic FIRST_VALUE function); the subquery is necessary.
    You could do something with LAG, as you tried, rather than FIRST_VALUE, but you would still need a subquery, for the same reason.

  • Check a box if a numeric field is equal to a value

    Newbie needs help!

    I am writing a script to check a box, but only after a numeric field adds up to "4".

    If you look at my attachment, "Student 1" has four check boxes (skills before the course, courses, etc.).  Each time one of these boxes is checked or unchecked, the numeric field adds or subtracts 1 from its rawValue.  The script that I'm having problems with is on the check box next to Student 1...

    Here's the 'calculate' script attached to that check box:

    if (stu1_results.rawValue == 4) {
        CheckBox_stu1.rawValue = true;  // select the check box
    } else {
        CheckBox_stu1.rawValue = false; // clear the check box
    }

    Any advice would be great!

    Mike Schaefer

    (blinkyguy)

    There were two things wrong with your code. Logically it was fine, but you were mixing up assignment statements and comparisons. In an if statement, if you want to compare two values, you need two equal signs. If you want to assign one value to another, you need a single equal sign. That was number one. The second is how you assign the value of the checkbox. The possible values are 1/0, not true/false. So you can either modify your code to assign a value of 1/0 instead of true/false, or modify the checkbox to use true/false as its values instead of 1/0.

    The code should look like this:

    if (stu1_results.rawValue == 4) {
        CheckBox_stu1.rawValue = 1;
    } else {
        CheckBox_stu1.rawValue = 0;
    }

    Paul

  • Self-referential LAG

    I was wondering if anyone has any ideas on a fairly interesting query. Ideally, I'd have tried to solve it with the help of a LAG function, but its referring to its own previous result complicates things a bit.

    I have a table of sports results, and I would like to calculate a 'champion' column.

    Take the following data table:

    Rownum - team - opponent - winner
    1 A - B - A
    2 C - D - C
    3 E - F - F
    4 A - C - A
    5 A - D - D
    6 D - F - D

    The Champion column should work as follows:
    - Where the row's team equals the previous champion, the value equals the winner; otherwise, it should carry over from the previous row.

    Which would produce the following table:

    Rownum - team - opponent - winner - Champion
    1 A - B - A - A
    2 C - D - C - A
    3 E - F - F - A
    4 A - C - A - A
    5 A - D - D - D
    6 D - F - D - D

    Any thoughts?

    Hello

    If you have Oracle 11.2, then a recursive WITH clause is the best way to solve this problem. Here's how I'd do it:

    WITH     got_r_num   AS
    (
         SELECT     games.*
         ,     ROW_NUMBER () OVER (ORDER BY  game_num)     AS r_num
         FROM     games
    )
    ,     got_champion (game_num , team, opponent, winner, champion ) AS
    (
         SELECT  game_num, team, opponent, winner
         ,     winner                    AS champion
         FROM     got_r_num
         WHERE      game_num     = 1
               --
        UNION ALL
               --
         SELECT  rn.game_num , rn.team, rn.opponent, rn.winner
         ,      CASE
                   WHEN  c.champion     IN (rn.team, rn.opponent)
                   THEN  rn.winner
                   ELSE  c.champion
               END                      AS champion
         FROM      got_champion  c
         JOIN      got_r_num     rn     ON     rn.game_num     = c.game_num + 1
    )
    SELECT  game_num , team, opponent, winner, champion
    FROM     got_champion
    ;
    

    This is very similar to the recursive WITH suggestion in the previous post, with these changes:
    (a) ROWNUM isn't a good column name. I used game_num instead.
    (b) The recursive join condition assumes that the game_num values are consecutive integers, starting with 1. In real life, that's too hard to guarantee. The query above assumes only that game_num is unique, nothing more. It uses the ROW_NUMBER analytic function to generate consecutive integers, starting at 1, in the same order as game_num.
    (c) I assume that, when the champion changes, the new champion could be in either the team or the opponent column (as well as the winner column, of course). In other words, if you add this row to the sample data:

    INSERT INTO games (game_num, team, opponent, winner) VALUES (7, 'A', 'D', 'A');
    

    then you get these results:

    GAME
    _NUM TEAM OPPONENT   WINNER CHAMPION
    ---- ---- ---------- ------ ----------
       1 A    B          A      A
       2 C    D          C      A
       3 E    F          F      A
       4 A    C          A      A
       5 A    D          D      D
       6 D    F          D      D
       7 A    D          A      A  
    

    which is what the two queries I proposed produce. The other recursive WITH clause solution has champion = 'A' on the last row, where game_num = 7.

    Once again, recursive WITH clauses work in Oracle 11.2. The query in my first message works in Oracle 9.1 (and up) and needs only minor changes to work in 8.1.
    It starts by determining, for each row, where the champion would change if the current champion were the winner on that row. In other words, it looks for the most recent earlier game (lower game_num than the current game_num) where today's winner played but did not win.
    The next subquery, change_points, produces the rows where the champion actually changed, including the first row. (I assume that the winner of the first game is the first champion.) This assumes that the first game_num is 1. If you can't hardcode the first game number, then use a scalar subquery (SELECT MIN (game_num) FROM games) in the START WITH clause where I've hardcoded the number 1.
    In the main query, I joined change_points to games. It's an outer join, because we want to include all the rows of games, and there will generally be fewer rows in change_points. (With the original 6 rows of sample data, change_points had only 2 rows, for game_nums 1 and 5; with row 7, which I suggested, change_points has 3 rows.) The analytic LAST_VALUE function finds the champion as of the last point where the champion changed.

    Intuitively, you would think that the analytic LAG function might help with this problem, but the value we want to LAG is itself the result of the previous LAG, so to use LAG we would need one self-join for every result row (except the first). That is essentially what the recursive WITH solution does, except that you do not have to hard-code the self-joins; the recursive branch of the UNION ALL does exactly as many as we need.
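
    The 9.1-compatible query itself isn't shown in this excerpt, but on Oracle 10g or later the carry-forward step described above can be sketched with LAST_VALUE ... IGNORE NULLS; change_points is assumed to exist as described (one row per game where the champion changed, exposing game_num and the new champion):

    -- Hypothetical sketch only, not the original 9.1-compatible query.
    SELECT  g.game_num, g.team, g.opponent, g.winner
    ,       LAST_VALUE (cp.champion IGNORE NULLS)
                OVER (ORDER BY g.game_num)   AS champion
    FROM    games          g
    LEFT OUTER JOIN  change_points  cp  ON  cp.game_num = g.game_num;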

  • Query (no, day, Date)

    Hello

    This query shows the days of the week:

    select level as dow,
           to_char (trunc (:P_date, 'day') + level - 1, 'day', 'NLS_DATE_LANGUAGE = ARABIC') as day,
           to_date (:P_date, 'dd-mm-rrrr') + rownum - 1 as t2d
    from dual
    connect by level <= 7;

    I want to change the query to start like this:

    7 Saturday Saturday's date

    1 Sunday Sunday's date

    2 Monday Monday's date

    3 Tuesday ...

    and so on.

    Amatu Allah.

    Hello

    I see; you want to pass 2 DATE parameters (a start date and an end date) and generate a result set containing one row for each day in that range.

    In reply #5, Kendeeny showed how to generate the 7 dates starting from a given date.  You can change that to generate however many you need, based on 2 parameters, as follows:

    WITH parameters AS
    (
        SELECT TO_DATE ('19-12-2015', 'DD-MM-YYYY') AS first_date
        ,      TO_DATE ('27-12-2015', 'DD-MM-YYYY') AS last_date
        FROM   dual
    )
    SELECT first_date + LEVEL - 1 AS day_name
    FROM   parameters
    CONNECT BY LEVEL <= 1 + last_date - first_date
    ;

    It produces just the DATEs:

    DAY_NAME

    ----------

    19/12/2015

    20/12/2015

    21/12/2015

    22/12/2015

    23/12/2015

    24/12/2015

    25/12/2015

    26/12/2015

    27/12/2015

    Now, we must add the other 2 columns we want.

    DAY_ID DAY_DATE DAY_NAME

    ---------- --------- ----------

    7 Saturday, December 19, 2015

    1 Sunday, December 20, 2015

    2 Monday, December 21, 2015

    3 Tuesday, December 22, 2015

    4 Wednesday, December 23, 2015

    5 Thursday, December 24, 2015

    6 Friday, December 25, 2015

    7 Saturday, December 26, 2015

    1 Sunday, December 27, 2015

    The day_date column is easy using TO_CHAR.

    Day_id is more complicated.  Here's a way to do it, using TRUNC:

    WITH parameters AS
    (
        SELECT TO_DATE ('19-12-2015', 'DD-MM-YYYY') AS first_date
        ,      TO_DATE ('27-12-2015', 'DD-MM-YYYY') AS last_date
        FROM   dual
    )
    , got_d AS
    (
        SELECT first_date + LEVEL - 1 AS d
        FROM   parameters
        CONNECT BY LEVEL <= 1 + last_date - first_date
    )
    SELECT 2 + d - TRUNC (d + 1, 'IW') AS day_id
    ,      TO_CHAR (d
                   , 'Day'
                -- , 'NLS_DATE_LANGUAGE = ARABIC'
                   ) AS day_date
    ,      TO_CHAR (d, 'dd-mm-yyyy') AS day_name
    FROM   got_d
    ORDER BY d
    ;

    TRUNC (d, 'IW') returns the latest Monday less than or equal to d.  This is a handy function, and it is independent of the NLS_DATE_LANGUAGE and NLS_TERRITORY settings.

    d - TRUNC (d, 'IW') maps the days of the week to consecutive integers, but it assigns the lowest value to Monday.  We want Sunday to have the lowest number, so I used

    d - TRUNC (d + 1, 'IW') instead.  The 'magic number' 1 reflects the fact that Monday is one day after Sunday.  It returns a number in the range -1 to 5, but we want numbers in the range 1 to 7, which is exactly 2 more, so add 2.
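
    A quick sanity check of the formula on a known date ('20-12-2015' is the Sunday in the output above), which should return day_id = 1:

    SELECT 2 + TO_DATE ('20-12-2015', 'DD-MM-YYYY')
             - TRUNC (TO_DATE ('20-12-2015', 'DD-MM-YYYY') + 1, 'IW') AS day_id
    FROM   dual;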

  • [8i] grouping with tricky conditions (follow-up)

    I am posting this as a follow-up question to:
    [8i] grouping with tricky conditions

    This is a repeat of my version information:
    Still stuck on an old database for a little while longer, and I'm trying to get some information out of it...

    BANNER

    --------------------------------------------------------------------------------
    Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
    PL/SQL Release 8.1.7.2.0 - Production
    CORE 8.1.7.0.0-Production
    TNS for HP-UX: Version 8.1.7.2.0 - Production
    NLSRTL Version 3.4.1.0.0 - Production

    Now for the sample data. I took one order from my real data set and cut out a few columns, to illustrate how the previous solution didn't quite work. My real data set has thousands of orders similar to this one.
    CREATE TABLE     test_data
    (     item_id     CHAR(25)
    ,     ord_id     CHAR(10)
    ,     step_id     CHAR(4)
    ,     station     CHAR(5)
    ,     act_hrs     NUMBER(11,8)
    ,     q_comp     NUMBER(13,4)
    ,     q_scrap     NUMBER(13,4)
    );
    
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0030','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0040','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0117','S903',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0118','S900',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0119','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0120','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0140','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0145','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0150','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0160','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0170','S900',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0220','S902',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0230','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0240','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0480','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0540','S923',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0600','S510',0,9,0);
    Instead of grouping all sequential steps whose station is 'OUTPR', I am grouping all sequential steps whose station is LIKE 'S9%', so here is the solution changed accordingly:
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    If you run just the subquery that computes grp_id, you can see that it sometimes assigns the same group number to two steps that are not side by side. For example, steps 0285 and 0480 are both assigned group 32...

    I don't know if it's because my orders have many more steps than the orders in the sample I provided, or what...

    I tried this version too (replacing all the 'S9%' station names with 'OUTPR'):
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0030','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0040','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0117','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0118','OUTPR',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0119','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0120','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0140','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0145','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0150','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0160','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0170','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0220','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0230','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0240','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0480','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0540','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0600','S510',0,9,0);
    
    
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    and it shows the same problem.

    Help?

    Hello

    I'm glad that you understood the problem.

    Here's a little explanation of the Fixed Difference approach. I may refer people to this page later, so I will explain some things you obviously already understand, but which I hope you will find helpful.
    Your problem has the additional feature that, depending on the station, some rows can never combine into larger groups. For now, let's simplify the problem considerably. Given the CREATE TABLE statement you posted and this data:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0010', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0011', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0012', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0170', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0175', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0205', 'S906');
    

    Let's say that we want this output:

    `                  FIRST LAST
                       _STEP _STEP
    ITEM_ID ORD_ID     _ID   _ID   STATION  CNT
    ------- ---------- ----- ----- ------- ----
    abc-123 0001715683 0010  0010  Z417       1
    abc-123 0001715683 0011  0140  S906       3
    abc-123 0001715683 0170  0175  Z417       2
    abc-123 0001715683 0200  0205  S906       2
    

    Where each row of output represents a contiguous set of rows with the same item_id, ord_id, and station. "Contiguous" is determined by step_id: the rows with step_id = '0200' and step_id = '0205' are contiguous in this sample data because there are no step_ids between '0200' and '0205'.
    The expected results include the highest and lowest step_id in each group, and the total number of original rows in the group.

    GROUP BY (usually) collapses the results of a query into fewer rows. One output row can represent 1, 2, 3, or any number of rows of the original. This is obviously a GROUP BY problem: we sometimes want several rows of the original to be combined into one row of output.

    GROUP BY assumes that, just by looking at a row, you can tell which group it belongs to; looking at any 2 rows, you can always tell whether or not they belong to the same group. That isn't quite the case in this problem. For example, these rows:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    

    Do these 2 rows belong to the same group or not? We cannot tell. Looking at just these 2 rows, all we can say is that they could belong to the same group, since they have the same item_id, ord_id, and station. It is true that members of the same group will always have the same item_id, ord_id, and station; if any of these columns differ between two rows, we can be sure they belong to different groups, but if they are identical, we cannot be certain they are in the same group, because item_id, ord_id, and station tell only part of the story. A group is not just a bunch of rows that have the same item_id, ord_id, and station: a group is defined as a sequence of adjacent rows that have these columns in common. Before we can do the GROUP BY, we need to use analytic functions to see which rows are in the same contiguous streak. Once we know that, we can store this data in a new column (which I called grp_id), and then GROUP BY all 4 columns: item_id, ord_id, station, and grp_id.

    First of all, let's recognize a basic difference among the 3 columns of the table that will be included in the GROUP BY clause: item_id, ord_id, and station.
    Item_id and ord_id identify separate worlds. There is never any point comparing rows with different item_ids or ord_ids to each other. Different item_ids never interact; different ord_ids have nothing to do with each other. We'll call item_id and ord_id 'separate world' columns. Separate planets do not touch each other.
    Station is different. Sometimes it makes sense to compare rows with different stations. For example, this problem is based on questions such as "do these adjacent rows have the same station or not?" We will call station a 'separate country' column. There is certainly a difference between separate countries, but countries do affect each other.

    The most intuitive way to identify groups of contiguous rows with the same station is to use LAG or LEAD to look at adjacent rows. That can certainly do the job, but there happens to be a better way, using ROW_NUMBER.
    Using ROW_NUMBER, we can take the irregular step_id ordering and turn it into a nice, regular count, as shown in the r_num1 column below:

    `                                 R_             R_ GRP
    ITEM_ID ORD_ID     STEP STATION NUM1 S906 Z417 NUM2 _ID
    ------- ---------- ---- ------- ---- ---- ---- ---- ---
    abc-123 0001715683 0010 Z417       1         1    1   0
    abc-123 0001715683 0011 S906       2    1         1   1
    abc-123 0001715683 0012 S906       3    2         2   1
    abc-123 0001715683 0140 S906       4    3         3   1
    abc-123 0001715683 0170 Z417       5         2    2   3
    abc-123 0001715683 0175 Z417       6         3    3   3
    abc-123 0001715683 0200 S906       7    4         4   3
    abc-123 0001715683 0205 S906       8    5         5   3
    

    We could also assign consecutive integers to the rows within each station, as shown in the two columns I called S906 and Z417.
    Notice how r_num1 increases by 1 from each row to the next.
    When there is a streak of several consecutive S906 rows (for example, step_ids '0011' through '0140'), the S906 count also increases by 1 from each row to the next. Therefore, for the duration of a streak, the difference between r_num1 and s906 will be constant. For the 3 rows of the first streak, this difference happens to be 1. Another streak of contiguous S906s starts at step_id = '0200'; the difference between r_num1 and s906 for that whole streak happens to be 3. This difference is what I called grp_id.
    The actual numbers have little meaning, and, as you have noticed, streaks for different stations can happen by chance to have the same grp_id. (There happen to be no examples of that in this small set of sample data.) However, two rows have the same grp_id and station if and only if they belong to the same streak.

    Here is the query that produced the results immediately above:

    SELECT    item_id
    ,        ord_id
    ,        step_id
    ,       station
    ,       r_num1
    ,       CASE WHEN station = 'S906' THEN r_num2 END     AS s906
    ,       CASE WHEN station = 'Z417' THEN r_num2 END     AS Z417
    ,       r_num2
    ,       grp_id
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       step_id
    ;
    

    Here are a few things to note:
    All the analytic ORDER BY clauses are the same. In most problems, there will be only one ordering scheme that matters.
    The analytic PARTITION BY clauses include the 'separate world' columns, item_id and ord_id.
    Some of the analytic PARTITION BY clauses also include the 'separate country' column, station.

    To get the results we ultimately want, we add a GROUP BY clause to the main query. Once again, this includes the 'separate world' columns, the 'separate country' column, and the 'fixed difference' column, grp_id.
    Eliminating the columns that were included just to make the output easier to understand, we get:

    SELECT    item_id
    ,        ord_id
    ,        MIN (step_id)          AS first_step_id
    ,       MAX (step_id)          AS last_step_id
    ,       station
    ,       COUNT (*)          AS cnt
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       first_step_id
    ;
    

    This produces the output displayed much earlier in this message.

    This example shows the plain fixed-difference technique. Your specific problem is complicated a little by the fact that you should use a CASE expression based on station, rather than station itself.

  • Create a fake data table from DUAL?

    I was just curious: is it not possible to generate a fake 'table' of, let's say, two rows from the DUAL table?

    For example I could do
    SELECT '1' PRIM, 'CHILLY' COL2, 'ANIMAL' ANIMAL_TYPE
    FROM DUAL;
    That could give me a table row of
    PRIM     COL2   ANIMAL_TYPE 
    ----       ------       ----------- 
    1         CHILLY  ANIMAL     
    I was wondering how I could do something similar but with more than 1 row, such as
    PRIM     COL2     ANIMAL_TYPE 
    ----       ------       ----------- 
    1         CHILLY    ANIMAL 
    2         FUDGE    ANIMAL
    I know another way is that I could write something like this:
    SELECT 'John Doe', '555-1212' FROM DUAL
    UNION ALL
    SELECT 'Peter Doe','555-2323' FROM DUAL
    But I was looking for something more 'efficient', possibly, to generate let's say 10 rows.

    Thank you

    Edited by: rodneyc8063 on October 2, 2011 18:12

    Hello

    Another approach is:

    SELECT     LEVEL          AS prim
    ,     CASE  LEVEL
             WHEN  1  THEN  'CHILLY'
             WHEN  2  THEN  'FUDGE'
         END          AS col2
    ,     'ANIMAL'     AS animal_type
    FROM     dual
    CONNECT BY     LEVEL     <= 2
    ;
    

    This works well if you have patterns in the data to generate. For example, the above query assumes that there is a pattern to the prim column: the values are consecutive integers. It assumes an even easier pattern for animal_type: all the values are the same. Adding more rows is just a matter of changing the number 2 in the CONNECT BY clause to 10 (or whatever), then adding more WHEN clauses to (or changing) the CASE expression.
    If you do not have patterns, then you might need CASE expressions for several columns, perhaps even all of the columns, and you might find that the UNION approach you posted is easier.
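
    For instance, here is a sketch of the 10-row case with a patterned col2 (the 'NAME_' prefix is just a made-up pattern for illustration):

    SELECT  LEVEL               AS prim
    ,       'NAME_' || LEVEL    AS col2          -- any patterned value
    ,       'ANIMAL'            AS animal_type
    FROM    dual
    CONNECT BY  LEVEL <= 10;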

  • Chop the text off a string

    &var= I want to be able to eliminate the first 13 characters of this string, so that I have just these two sentences. Is there a way to identify the first n characters and then set the string equal to what is left over?

    Taken from the AS3 manual:

    Finding a substring by character position

    The substr() and substring() methods are similar. They both return a substring of a string,
    and both take two parameters. In both methods, the first parameter is the position of the
    starting character in the given string. However, in the substr() method, the second parameter is the
    length of the substring to return, and in the substring() method, the second parameter is
    the position of the character at the end of the substring (which is not included in the returned
    string). This example shows the difference between these two methods:

    var str:String = "Hello from Paris, Texas!"
    trace (Str.substr (11,15)); Paris, Texas!
    trace (Str.Substring (11,15)); output: bet

    The slice() method works similarly to the substring() method. When given two positive
    integers as parameters, it works exactly the same. However, the slice() method can
    take negative integers as parameters, in which case the character position is measured from the end
    of the string, as shown in the following example:

    var str:String = "Hello from Paris, Texas!"
    trace (Str.Slice (11,15)); Bet
    trace (Str.Slice (-3, -1)); // !!
    trace (Str.Slice (-3, 26)); // !!!
    trace (Str.Slice (-3, Str.Length)); // !!!
    trace (Str.Slice (-8, -3)); Texas

    You can combine positive and negative integers as parameters of the slice()
    method.
    Finding the character position of a matching substring
    You can use the indexOf() and lastIndexOf() methods to locate matching substrings
    in the string, as shown in the following example:

    var str:String = "the Moon, the stars, the sea and Earth;
    trace (Str.IndexOf ("The")); 10

    Note that the indexOf() method is case-sensitive.
    You can specify a second parameter to indicate the index position in the string where
    the search should start, as follows:

    var str:String = "the Moon, the stars, sea and Earth.
    trace (str.indexOf ("the", 11)); 21
    The lastIndexOf() method finds the last occurrence of a substring in the string:
    var str:String = "the Moon, the stars, sea and Earth.
    trace (Str.LastIndexOf ("The")); 30

    If you include a second parameter to the lastIndexOf() method, the search is performed
    from that index position in the string, working backward (from right to left):

    var str:String = "The moon, the stars, the sea, the land";
    trace(str.lastIndexOf("the", 29)); // output: 21
