Cannot create a smart group with three conditions

When I create or edit a smart group, I can specify one or two conditions. But if I try to add a third condition, no third line is displayed, even though the OK button goes grey as if a third line had appeared containing a partially specified condition.

Can any of you create a smart group with three or more conditions? I can't.

You need to scroll to the bottom of the list of criteria to see the conditions one after another. The window is not resized.

Tags: Mac OS & System Software

Similar Questions

  • [8i] grouping with tricky conditions (follow-up)

    I am posting this as a follow-up question to:
    [8i] grouping with tricky conditions

    Here is my version information again:
    I'm still stuck on an old database for a little while longer, and I'm trying out some information...

    BANNER

    --------------------------------------------------------------------------------
    Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
    PL/SQL Release 8.1.7.2.0 - Production
    CORE 8.1.7.0.0 Production
    TNS for HP-UX: Version 8.1.7.2.0 - Production
    NLSRTL Version 3.4.1.0.0 - Production

    Now for the sample data. I took one order from my real data set and cut a few columns to illustrate how the previous solution didn't quite work. My real data set has thousands of orders similar to this one.
    CREATE TABLE     test_data
    (     item_id     CHAR(25)
    ,     ord_id     CHAR(10)
    ,     step_id     CHAR(4)
    ,     station     CHAR(5)
    ,     act_hrs     NUMBER(11,8)
    ,     q_comp     NUMBER(13,4)
    ,     q_scrap     NUMBER(13,4)
    );
    
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0030','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0040','S970',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0117','S903',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0118','S900',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0119','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0120','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0140','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0145','S950',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0150','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0160','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0170','S900',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0220','S902',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0230','S906',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0240','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0480','S903',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0540','S923',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0001715683','0600','S510',0,9,0);
    Instead of grouping all the sequential steps at the 'OUTPR' station, I am grouping all the sequential steps at "S9%" stations, so here is the solution changed to this:
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station LIKE 'S9%'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    If you just run the subquery that computes grp_id, you can see that it sometimes assigns the same group number to two streaks that are not side by side. For example, steps 0285 and 0480 are both assigned group 32...

    I don't know if it's because my orders have many more steps than the orders in the sample I provided, or what...

    I also tried this version (replacing all the "S9%" station names with "OUTPR"):
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0005','S509',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0010','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0020','A501',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0026','S011',0.58,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0030','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0040','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0050','S003',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0055','S600',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0060','Z108',6.94,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0070','Z108',7,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0080','Z310',4.02,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0085','Z409',2.17,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0090','S500',0.85,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0095','S502',1.63,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0110','S006',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0112','S011',0.15,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0117','OUTPR',0,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0118','OUTPR',0,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0119','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0120','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0140','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0145','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0150','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0160','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0170','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0220','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0230','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0240','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0250','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0260','S006',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0270','S012',0.95,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0280','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0285','Z417',0.68,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0290','Z426',1.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0300','Z426',2.07,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0305','Z426',1.23,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0310','Z402',3.97,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0315','Z308',8.09,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0410','Z409',4.83,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0430','S500',3.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0435','S502',0.43,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0450','S002',0.35,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0460','S001',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0470','Z000',2.6,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0476','S011',1,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0478','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0480','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0490','S003',1.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0500','S500',1.37,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0530','B000',0.28,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0536','S011',0.65,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0538','S510',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0540','OUTPR',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0560','S003',0,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0565','S001',0.85,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0570','S012',2.15,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0575','S509',0,0,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0580','B000',3.78,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0590','S011',0.27,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0009999999','0600','S510',0,9,0);
    
    
    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    and it shows the same problem.

    Help?

    Hello

    I'm glad that you understood the problem.

    Here's a little explanation of the fixed-difference approach. I may refer people to this page later, so I will explain some things you obviously already understand, but which I hope you will find helpful.
    Your problem has the additional feature that, depending on the station, some rows can never be combined into larger groups. For now, let's greatly simplify the problem. Suppose we have the CREATE TABLE statement you posted and this data:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0010', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0011', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0012', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0170', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0175', 'Z417');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0205', 'S906');
    

    Let's say that we want this output:

                       FIRST LAST
                       _STEP _STEP
    ITEM_ID ORD_ID     _ID   _ID   STATION  CNT
    ------- ---------- ----- ----- ------- ----
    abc-123 0001715683 0010  0010  Z417       1
    abc-123 0001715683 0011  0140  S906       3
    abc-123 0001715683 0170  0175  Z417       2
    abc-123 0001715683 0200  0205  S906       2
    

    where each row of output represents a contiguous set of rows with the same item_id, ord_id and station. "Contiguous" is determined by step_id: the rows with step_id '0200' and step_id '0205' are contiguous in this sample data because there are no step_ids between '0200' and '0205'.
    The expected results include the highest and lowest step_id in each group, and the total number of original rows in the group.

    GROUP BY (usually) collapses the results of a query into fewer rows. One output row can represent 1, 2, 3, or any number of original rows. This is clearly a GROUP BY problem: we sometimes want several original rows to be combined into one row of output.

    GROUP BY assumes that, just by looking at a row, you can tell which group it belongs to. Looking at any 2 rows, you can always tell whether or not they belong to the same group. That isn't quite the case in this problem. For example, take these rows:

    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0140', 'S906');
    INSERT INTO test_data (item_id, ord_id, step_id, station)  VALUES ('abc-123', '0001715683', '0200', 'S906');
    

    Do these 2 rows belong to the same group or not? We cannot tell. Looking at just these 2 rows, all we can say is that they could belong to the same group, since they have the same item_id, ord_id and station. It is true that members of the same group will always have the same item_id, ord_id and station; if any of these columns differs from one row to the other, we can be sure the rows belong to different groups. But if the columns are identical, we cannot be certain the rows are in the same group, because item_id, ord_id and station tell only part of the story. A group is not just a bunch of rows that have the same item_id, ord_id and station: a group is defined as a sequence of adjacent rows that have those columns in common. Before we can do the GROUP BY, we need to use analytic functions to see whether two rows are in the same contiguous streak. Once we know that, we can store the answer in a new column (which I called grp_id), and then GROUP BY all 4 columns: item_id, ord_id, station and grp_id.

    First of all, let's recognize a basic difference among the 3 columns of the table that will be included in the GROUP BY clause: item_id, ord_id and station.
    item_id and ord_id always identify separate worlds. There is never any point in comparing rows with different item_ids or ord_ids to each other. Different item_ids never interact; different ord_ids have nothing to do with each other. We'll call item_id and ord_id the 'separate world' columns. Separate planets do not touch each other.
    station is different. Sometimes it makes sense to compare rows with different stations. For example, this problem turns on questions such as "do these adjacent rows have the same station or not?" We'll call station a 'separate country' column. There are certainly boundaries between separate countries, but countries do affect each other.

    The most intuitive way to identify groups of contiguous rows with the same station is to use LAG or LEAD to look at adjacent rows. That can certainly do the job, but there happens to be a better way, using ROW_NUMBER.
    With ROW_NUMBER, we can take the irregular step_id ordering and turn it into a nice, regular sequence, as shown in the r_num1 column below:

                                      R_             R_ GRP
    ITEM_ID ORD_ID     STEP STATION NUM1 S906 Z417 NUM2 _ID
    ------- ---------- ---- ------- ---- ---- ---- ---- ---
    abc-123 0001715683 0010 Z417       1         1    1   0
    abc-123 0001715683 0011 S906       2    1         1   1
    abc-123 0001715683 0012 S906       3    2         2   1
    abc-123 0001715683 0140 S906       4    3         3   1
    abc-123 0001715683 0170 Z417       5         2    2   3
    abc-123 0001715683 0175 Z417       6         3    3   3
    abc-123 0001715683 0200 S906       7    4         4   3
    abc-123 0001715683 0205 S906       8    5         5   3
    

    We can also assign consecutive integers to the rows within each station, as shown in the two columns I called s906 and z417.
    Notice how r_num1 increases by 1 from each row to the next.
    When there is a streak of several consecutive S906 rows (for example, step_ids '0011' through '0140'), the s906 number also increases by 1 from each row to the next. Therefore, for the duration of a streak, the difference between r_num1 and s906 is constant. For the 3 rows of the first streak, that difference happens to be 1. Another streak of contiguous S906s starts at step_id = '0200'; the difference between r_num1 and s906 for that whole streak is 3. This difference is what I called grp_id.
    The actual numbers have little meaning and, as you noticed, streaks at different stations can happen to have the same grp_id by chance. (There don't happen to be any examples of that in this small sample data set.) However, two rows have the same grp_id and station if and only if they belong to the same streak.

    Here is the query that produced the result immediately before:

    SELECT    item_id
    ,        ord_id
    ,        step_id
    ,       station
    ,       r_num1
    ,       CASE WHEN station = 'S906' THEN r_num2 END     AS s906
    ,       CASE WHEN station = 'Z417' THEN r_num2 END     AS Z417
    ,       r_num2
    ,       grp_id
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )                    AS r_num1
               ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )                    AS r_num2
               FROM    test_data
           )          -- End in-line view to compute grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       step_id
    ;
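    The arithmetic behind grp_id is independent of SQL. As a rough illustration (not part of the original thread), here is a minimal Python sketch that reproduces the r_num1, r_num2 and grp_id values shown in the table above for the eight sample rows:

```python
# Fixed-difference (grp_id) computation for one item_id/ord_id,
# with rows already ordered by step_id.
rows = [
    ("0010", "Z417"), ("0011", "S906"), ("0012", "S906"), ("0140", "S906"),
    ("0170", "Z417"), ("0175", "Z417"), ("0200", "S906"), ("0205", "S906"),
]

per_station = {}   # running ROW_NUMBER within each station (r_num2)
result = []
for r_num1, (step_id, station) in enumerate(rows, start=1):
    per_station[station] = per_station.get(station, 0) + 1
    r_num2 = per_station[station]
    grp_id = r_num1 - r_num2   # constant for the duration of a streak
    result.append((step_id, station, r_num1, r_num2, grp_id))

for row in result:
    print(row)
```

    The grp_id values computed this way (0, 1, 1, 1, 3, 3, 3, 3) match the GRP_ID column in the table above.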
    

    Here are a few things to note:
    All the analytic ORDER BY clauses are the same. In most problems, there is only one ordering scheme that matters.
    The analytic PARTITION BY clauses include the 'separate world' columns, item_id and ord_id.
    The PARTITION BY clause that computes r_num2 (and therefore grp_id) also includes the 'separate country' column, station.

    To get the results we want in the end, we add a GROUP BY clause to the main query. Once again, this includes the 'separate world' columns, the 'separate country' column, and the 'fixed difference' column, grp_id.
    Eliminating the columns that were included just to make the output easier to understand, we get:

    SELECT    item_id
    ,        ord_id
    ,        MIN (step_id)          AS first_step_id
    ,       MAX (step_id)          AS last_step_id
    ,       station
    ,       COUNT (*)          AS cnt
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id, station
                                ORDER BY      step_id
                              )     AS grp_id
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            ord_id
    ,       first_step_id
    ;
    

    This produces the output displayed much earlier in this message.
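    In procedural terms, the GROUP BY step corresponds to collapsing the grp_id-tagged rows by (station, grp_id). A hypothetical Python sketch with itertools.groupby (grouping consecutive rows is enough here, because the rows are ordered by step_id and two rows share (station, grp_id) exactly when they are in the same streak):

```python
from itertools import groupby

# (step_id, station, grp_id) for the 8 sample rows, ordered by step_id.
rows = [
    ("0010", "Z417", 0), ("0011", "S906", 1), ("0012", "S906", 1),
    ("0140", "S906", 1), ("0170", "Z417", 3), ("0175", "Z417", 3),
    ("0200", "S906", 3), ("0205", "S906", 3),
]

summary = []
for (station, _), streak in groupby(rows, key=lambda r: (r[1], r[2])):
    streak = list(streak)
    # first_step_id, last_step_id, station, cnt: one output row per streak
    summary.append((streak[0][0], streak[-1][0], station, len(streak)))

for line in summary:
    print(line)
```

    This reproduces the four output rows shown near the beginning of this explanation.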

    This example shows the plain fixed-difference technique. Your specific problem is complicated a little by the fact that you need to use a CASE expression based on station, rather than station itself.
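    As a rough sketch (again, not part of the original thread) of what that CASE expression does: every 'S9%' row falls into one shared partition, while each other row becomes its own single-row partition keyed by step_id, so only streaks of 'S9%' rows get collapsed. In Python terms, with made-up sample rows:

```python
# CASE-based variant: mirrors
#   CASE WHEN station LIKE 'S9%' THEN NULL ELSE step_id END
# used as the extra PARTITION BY key when computing r_num2.
rows = [
    ("0005", "S509"), ("0010", "S906"), ("0020", "S950"),
    ("0030", "A501"), ("0040", "S906"),
]

def part_key(step_id, station):
    # All 'S9%' stations share one bucket (None); other rows are singletons.
    return None if station.startswith("S9") else step_id

counts = {}
tagged = []
for r_num1, (step_id, station) in enumerate(rows, start=1):
    key = part_key(step_id, station)
    counts[key] = counts.get(key, 0) + 1          # r_num2 within the partition
    tagged.append((step_id, station, r_num1 - counts[key]))   # grp_id

for t in tagged:
    print(t)
```

    Steps 0010 and 0020 get the same grp_id even though their stations differ (S906 vs. S950), while the later S906 row at step 0040 starts a new streak with a different grp_id.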

  • Creating a smart list with "remarks".

    Hello everyone. I want to create a smart list containing only the titles that have remarks added. However, there is no option like "remarks is true". Anyone have any suggestions?

    Thank you!

    Here are all of the rules:

    Remarks is not <empty>

    Playlist is music

    TT2

  • Can I create a distribution group with non-Microsoft email accounts?

    I want to be able to communicate with a set of independent consultants and put them in touch with each other as a group. I tried to create a distribution group, but I am not able to add e-mail addresses outside the domain. Is there any way I can do this?

    Thanks,

    Ivan.

    Hello

    This thread was created in the Microsoft Answers Site Feedback forum. The Microsoft moderation team has moved this thread to the Networking, Mail, and Getting Online forum, Other/Unknown section.

  • Cannot create a new startup disk with my data

    Hi guys, here's the problem: when I try to create a new startup disk with my data on it, I can't boot from that drive.

    The startup progress bar takes forever and then I see a white circle with a line through it. ("Prohibited" / "Ghostbusters" sign.)

    In verbose mode, it hangs on "break pci: SDXC" for a while, then it clears the screen and shows the circle + line with "Still waiting for root device" at the bottom left.

    I can't boot into single-user mode; it simply does the same thing as verbose mode. Even in safe mode. So I have few options for debugging. I did an SMC reset; no change. Is there anything else I should try?

    Additional information:

    - I'm trying to boot off a new SSD that I bought, currently in an external USB 3.0 enclosure. I know all is well with the enclosure, because I have booted off of it several times in the past. I know all is well with the drive, because I tried the same process with another drive and it fails in the same way.

    - I cloned my current boot drive to the new drive with SuperDuper! and Carbon Copy Cloner, and I also did a clean install of El Capitan using recovery mode. The new install worked well until I migrated my old data.

    - So there must be something in my data that is carried through this whole process and breaks booting. I don't know what it might be or how to track it down.

    Any thoughts are appreciated!

    Have you tried a PRAM reset? Restart while holding Option + Command + P + R until you hear the startup chime at least twice.

  • Cannot create a datafile with a size of less than 1000 MB

    Hello

    I have a problem that I can't solve. Whenever I try to create a datafile with a size of less than 1000 MB, I get an ORA-03214.

    Example:

    SELECT * FROM v$version;

    Oracle Database 11 g Enterprise Edition Release 11.2.0.4.0 - 64 bit Production

    PL/SQL Release 11.2.0.4.0 - Production

    "CORE 11.2.0.4.0 Production."

    TNS for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production

    NLSRTL Version 11.2.0.4.0 - Production

    -----

    ALTER TABLESPACE "IDX_PRETR1"

    ADD DATAFILE '/appli/oracle/PR3/data4/TEST.dbf'

    SIZE 500M;

    Error at line 1 of command:

    ALTER TABLESPACE "IDX_PRETR1"

    ADD DATAFILE '/appli/oracle/PR3/data4/TEST.dbf'

    SIZE 500M

    Error report:

    SQL error: ORA-03214: File Size specified is smaller than minimum required

    03214. 00000 - "File Size specified is smaller than minimum required"

    * Cause: The file size specified for add/resize datafile/tempfile does not

    allow for the minimum required of one allocation unit.

    * Action: Increase the size specified for the file

    Thanks for the help and sorry for my English

    Hello

    The answer may lie in the INITIAL_EXTENT setting of your tablespace; check the DBA_TABLESPACES view.

    Also, try using AUTOEXTEND ON with MAXSIZE when creating your datafile, but as the error's Action suggests, the surest fix is to retry with a larger size.

    Kind regards

    Juan M

  • In the Contacts file, cannot create new contact groups

    The instructions in the Help menu state the following: on the toolbar, click New Contact Group, type a name in the Group Name box, and then fill in the boxes on the Contact Group and Contact Group Details tabs. You do not have to fill in all the boxes; just type as much information as you want about the new contact group you are creating.

    Everything would be easier if "New Contact Group" appeared on the toolbar.  It isn't there.  It was in the past, because I have a number of contact groups in the file.  What happened to it?

    Please repost your question in the Programs forum: http://social.answers.microsoft.com/Forums/en-US/vistaprograms/threads where people who specialize in programs other than Vista and IE (like Windows Contacts) will be more than happy to help you with your question.

    Good luck! Lorien - a - MCSE/MCSA/network + / A +.

  • Cannot create a virtual machine with a vmdk file copied from another location; please see the attached error

    Hi all

    I copied a vmdk file from another location and tried to make a new virtual machine with that vmdk file. But when I power it on after creating the VM, an error comes up. The error is in the text and the image below:

    Power on virtual machine: Cannot open disk scsi0:0: Unsupported or invalid disk type 7. Ensure that the disk has been imported.

    See the error of the stack for more details on the cause of this problem.

    Time: 31/03/2015-14:40:05

    Target: DBServer

    vCenter Server: vcsa

    Error stack

    An error was received from the ESX host while powering on VM DBServer.

    Unable to start the virtual machine.

    Module DevicePowerOn power on failed.

    Failed to create virtual SCSI device for scsi0:0, '/vmfs/volumes/543d140b-feb33d52-7640-90b11c9796c3/vmdk/kapuatdb.vmdk'

    Failed to open disk scsi0:0: Unsupported or invalid disk type 7. Ensure that the disk has been imported.

    This error message usually appears if the .vmdk files were copied from a hosted product such as VMware Workstation, which uses a sparse file format that is not supported on an ESXi host. Instead of copying the .vmdk, you can use VMware Converter, or - if you prefer - you can convert the .vmdk using vmware-vdiskmanager (before uploading) or vmkfstools (after uploading). In case you use vmkfstools, you will need to load the multiextent module (see for example "Clone or migration operations involving non-VMFS virtual disks on ESXi fail with an error" in the vSphere 5.1 Release Notes).

    André

  • Cannot create the p12 certificate with existing certificate of distribution

    Hello

    I have published apps before using my employer's Apple developer account, which is straightforward enough if you follow the instructions. I have been asked to create an application for distribution on the App Store using the client's developer account. I need to create the final application .zip and pass it to the client, who will submit the actual application themselves.

    My problem is that I can't create a p12 certificate to attach to the DPS App Builder app. The client has three distribution certificates in their account, but had a third party create them on their behalf. To my knowledge, the best way to handle this would be to revoke the three old distribution certificates in their Apple developer account and create a new one linked to my machine, which would allow me to create all the necessary certificates. It seems that three distribution certificates is the limit they may have at any given time.

    My questions are:

    1. Is my solution right?

    2. If so, would revoking a distribution certificate cause problems for any of their other applications that are published or in progress?

    The only other option I could see was to have the creator of one of the distribution certificates create a p12 certificate that I could attach to the DPS App Builder app.

    Any help, advice, or pointing me in the right direction would be greatly appreciated. I've never had a problem publishing by following the directions, so this is a new issue for me.

    Kind regards

    C

    If you have access to the client's Apple dev account, you can create certificates on your machine, but since they have existing certificates in their account, that could cause confusion, and you don't want to mess with their existing certificates.

    I'd advise asking your client to create and pass you the cert; then create the app with their certificates and pass the zip back to them to upload.

  • [8i] grouping with tricky conditions

    Still stuck on an old database a little longer, and I'm trying out some information...

    BANNER
    ----------------------------------------------------------------
    Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
    PL/SQL Release 8.1.7.2.0 - Production
    CORE 8.1.7.0.0-Production
    AMT for HP - UX: 8.1.7.2.0 - Production Version
    NLSRTL Version 3.4.1.0.0 - Production

    Simplified data examples:
    CREATE TABLE     test_data
    (     item_id     CHAR(25)
    ,     ord_id     CHAR(10)
    ,     step_id     CHAR(4)
    ,     station     CHAR(5)
    ,     act_hrs     NUMBER(11,8)
    ,     q_comp     NUMBER(13,4)
    ,     q_scrap     NUMBER(13,4)
    );
    
    INSERT INTO test_data
    VALUES ('hardware-1','0001234567','0010','ROUGH',1.2,14,1);
    INSERT INTO test_data
    VALUES ('hardware-1','0001234567','0020','TREAT',1.4,14,0);
    INSERT INTO test_data
    VALUES ('hardware-1','0001234567','0030','OUTPR',2,13,1);
    INSERT INTO test_data
    VALUES ('hardware-1','0001234567','0040','FINSH',0.3,13,0);
    
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0005','ROUGH',1.2,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0010','TREAT',1.4,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0011','OUTPR',4.5,10,0);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0012','OUTPR',2.3,9,1);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0013','OUTPR',2.2,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0017','OUTPR',3.9,9,0);
    INSERT INTO test_data
    VALUES ('abc-123','0007654321','0020','FINSH',3.3,9,0);
    
    INSERT INTO test_data
    VALUES ('987-x-123','0001234321','0010','ROUGH',1.2,15,0);
    INSERT INTO test_data
    VALUES ('987-x-123','0001234321','0015','ROUGH',1.6,15,0);
    INSERT INTO test_data
    VALUES ('987-x-123','0001234321','0020','TREAT',1.55,15,0);
    INSERT INTO test_data
    VALUES ('987-x-123','0001234321','0030','OUTPR',2.1,15,0);
    INSERT INTO test_data
    VALUES ('987-x-123','0001234321','0040','FINSH',0.4,15,0);
    
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0005','ROUGH',1.2,10,0);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0010','TREAT',1.4,10,0);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0011','OUTPR',4.5,9,1);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0012','INSPC',4.1,9,0);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0013','OUTPR',2.2,9,0);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0017','OUTPR',3.9,9,0);
    INSERT INTO test_data
    VALUES ('xyz-321','0007654567','0020','FINSH',3.3,9,0);
    Before you complain about the data types: I did not choose them, nor do I have control over them. These data represent orders (similar to work instructions) to make some items. Each order (ord_id) is for a quantity of a single item (item_id), and there are a number of steps (step_id), performed in order, each at a specified station (station). Each step takes some time, and that gets recorded (act_hrs). In addition, at every step a certain quantity of items is completed (q_comp), and it is possible that someone will ruin a step and something will get scrapped, so that gets recorded too (q_scrap). There are many other columns in my real data, and my real data set includes thousands of orders, but this sample set should (hopefully) contain one example of each different case I'm going to have to deal with.

    What I want to do is condense the SEQUENTIAL steps at the 'OUTPR' station into one record (grouped by item_id and ord_id).

    In these cases I want to keep the MIN step_id as the step_id, the SUM of act_hrs (so I get the total actual hours for all the steps being condensed), the MIN of q_comp (for 2 reasons: first, I don't want the sum, since I am working on the same physical pieces in each step of the order; and I don't want the max, because that could mislead if too many pieces were scrapped), and the SUM of q_scrap (because I want the total number of scrapped pieces)...

    The closest I've gotten is with this query:
    SELECT     item_id
    ,     ord_id
    ,     MIN(step_id)     AS step_id
    ,     station
    ,     SUM(act_hrs)     AS act_hrs
    ,     MIN(q_comp)     AS q_comp
    ,     SUM(q_scrap)     AS q_scrap
    FROM     test_data
    GROUP BY item_id
    ,      ord_id
    ,      station
    ,     CASE
              WHEN     station     = 'OUTPR'
              THEN     'OUTPR'
              ELSE     step_id || station
         END
    ORDER BY item_id
    ,      ord_id
    ,      step_id
    ;
    This query gives these results:
    ITEM_ID                   ORD_ID     STEP STATI         ACT_HRS          Q_COMP         Q_SCRAP
    ------------------------- ---------- ---- ----- --------------- --------------- ---------------
    987-x-123                 0001234321 0010 ROUGH           1.200          15.000            .000
    987-x-123                 0001234321 0015 ROUGH           1.600          15.000            .000
    987-x-123                 0001234321 0020 TREAT           1.550          15.000            .000
    987-x-123                 0001234321 0030 OUTPR           2.100          15.000            .000
    987-x-123                 0001234321 0040 FINSH            .400          15.000            .000
    abc-123                   0007654321 0005 ROUGH           1.200          10.000            .000
    abc-123                   0007654321 0010 TREAT           1.400          10.000            .000
    abc-123                   0007654321 0011 OUTPR          12.900           9.000           1.000
    abc-123                   0007654321 0020 FINSH           3.300           9.000            .000
    hardware-1                0001234567 0010 ROUGH           1.200          14.000           1.000
    hardware-1                0001234567 0020 TREAT           1.400          14.000            .000
    hardware-1                0001234567 0030 OUTPR           2.000          13.000           1.000
    hardware-1                0001234567 0040 FINSH            .300          13.000            .000
    xyz-321                   0007654567 0005 ROUGH           1.200          10.000            .000
    xyz-321                   0007654567 0010 TREAT           1.400          10.000            .000
    xyz-321                   0007654567 0011 OUTPR          10.600           9.000           1.000 --*(should not be condensed, should only be step 0011)
    xyz-321                   0007654567 0012 INSPC           4.100           9.000            .000
    --*(should be steps 0013 & 0017 condensed here)
    xyz-321                   0007654567 0020 FINSH           3.300           9.000            .000
    However, it condenses all the steps in the order with the 'OUTPR' station into one record, and I only want to condense those that are sequential. I have noted with asterisks in the result set above where the data is bad.

    Here are the real results I want to see:
    ITEM_ID                   ORD_ID     STEP STATI         ACT_HRS          Q_COMP         Q_SCRAP
    ------------------------- ---------- ---- ----- --------------- --------------- ---------------
    987-x-123                 0001234321 0010 ROUGH           1.200          15.000            .000
    987-x-123                 0001234321 0015 ROUGH           1.600          15.000            .000
    987-x-123                 0001234321 0020 TREAT           1.550          15.000            .000
    987-x-123                 0001234321 0030 OUTPR           2.100          15.000            .000
    987-x-123                 0001234321 0040 FINSH            .400          15.000            .000
    abc-123                   0007654321 0005 ROUGH           1.200          10.000            .000
    abc-123                   0007654321 0010 TREAT           1.400          10.000            .000
    abc-123                   0007654321 0011 OUTPR          12.900           9.000           1.000
    abc-123                   0007654321 0020 FINSH           3.300           9.000            .000
    hardware-1                0001234567 0010 ROUGH           1.200          14.000           1.000
    hardware-1                0001234567 0020 TREAT           1.400          14.000            .000
    hardware-1                0001234567 0030 OUTPR           2.000          13.000           1.000
    hardware-1                0001234567 0040 FINSH            .300          13.000            .000
    xyz-321                   0007654567 0005 ROUGH           1.200          10.000            .000
    xyz-321                   0007654567 0010 TREAT           1.400          10.000            .000
    xyz-321                   0007654567 0011 OUTPR           4.500           9.000           1.000 --*
    xyz-321                   0007654567 0012 INSPC           4.100           9.000            .000
    xyz-321                   0007654567 0013 OUTPR           6.100           9.000            .000 --*
    xyz-321                   0007654567 0020 FINSH           3.300           9.000            .000
    I'm not sure how to include the sequential condition... perhaps using LAG/LEAD? Any help would be appreciated.

    Hello

    Here's a way to do it:

    SELECT       item_id
    ,        ord_id
    ,       MIN (step_id)          AS step_id
    ,       station
    ,       SUM (act_hrs)          AS act_hrs
    ,       MIN (q_comp)          AS q_comp
    ,       SUM (q_scrap)          AS q_scrap
    FROM       (          -- Begin in-line view to compute grp_id
               SELECT  test_data.*
               ,           ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                                ORDER BY        step_id
                              )
                  - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                          ,                CASE
                                            WHEN  station = 'OUTPR'
                                            THEN  NULL
                                            ELSE  step_id
                                           END
                                ORDER BY      step_id
                              )     AS grp_id
               FROM    test_data
           )          -- End in-line view to compute grp_id
    GROUP BY  item_id
    ,            ord_id
    ,       station
    ,       grp_id
    ORDER BY  item_id
    ,            step_id
    ;
    

    This is close to your query, which I'm not sure is quite right. Your version puts all the "OUTPR" rows that have an "OUTPR" neighbor (with the same item_id and ord_id, of course) into the same group, which happens to be fine for this sample data. But what happens if there is a run of 2 or more consecutive OUTPRs, then another station, and then another run of 2 or more consecutive OUTPRs? To see this, let's change the sample data, altering only the station on one row:

    --INSERT INTO test_data
    --VALUES ('xyz-321','0007654567','0010','TREAT',1.4,10,0);
    -- Original row (above) replaced by new row (below)
    INSERT INTO test_data
      VALUES ('xyz-321','0007654567','0010','OUTPR',1.4,10,0);
    

    Now we have 2 groups of 2 OUTPRs, separated by an INSPC, but your query produces:

    ITEM_ID         ORD_ID     STEP STATI    ACT_HRS     Q_COMP    Q_SCRAP
    --------------- ---------- ---- ----- ---------- ---------- ----------
    xyz-321         0007654567 0005 ROUGH        1.2         10          0
    xyz-321         0007654567 0010 OUTPR         12          9          1
    xyz-321         0007654567 0012 INSPC        4.1          9          0
    xyz-321         0007654567 0020 FINSH        3.3          9          0
    

    for that item_id. All 4 OUTPRs fall into one group. Wouldn't you really want output like this:

    ITEM_ID         ORD_ID     STEP STATI    ACT_HRS     Q_COMP    Q_SCRAP
    --------------- ---------- ---- ----- ---------- ---------- ----------
    xyz-321         0007654567 0005 ROUGH        1.2         10          0
    xyz-321         0007654567 0010 OUTPR        5.9          9          1
    xyz-321         0007654567 0012 INSPC        4.1          9          0
    xyz-321         0007654567 0013 OUTPR        6.1          9          0
    xyz-321         0007654567 0020 FINSH        3.3          9          0
    

    ? That's what my query produces.

    Sorry, I know that's not much explanation, but I'm out of time now. I will post more on the Fixed Difference technique later if I have time. For now, you can run the in-line view by itself, showing the 2 ROW_NUMBER expressions and their difference (grp_id).

    Published by: Frank Kulash, 1 February 2012 14:34

    Published by: Frank Kulash, 1 February 2012 14:48
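    As a sketch of running that in-line view on its own (the aliases rn_all and rn_grp are illustrative, not from the post above), the difference grp_id is constant within each run of consecutive 'OUTPR' rows, while every non-'OUTPR' row falls in its own partition:

    ```sql
    -- Inspect the two ROW_NUMBER values and their difference (grp_id).
    SELECT  test_data.*
    ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                 ORDER BY      step_id
                               )                    AS rn_all
    ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                 ,             CASE
                                                   WHEN  station = 'OUTPR'
                                                   THEN  NULL
                                                   ELSE  step_id
                                               END
                                 ORDER BY      step_id
                               )                    AS rn_grp
    ,       ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                 ORDER BY      step_id
                               )
          - ROW_NUMBER () OVER ( PARTITION BY  item_id, ord_id
                                 ,             CASE
                                                   WHEN  station = 'OUTPR'
                                                   THEN  NULL
                                                   ELSE  step_id
                                               END
                                 ORDER BY      step_id
                               )                    AS grp_id
    FROM    test_data
    ORDER BY item_id, ord_id, step_id;
    ```

    Grouping by (item_id, ord_id, station, grp_id) then collapses exactly the consecutive 'OUTPR' runs and leaves every other step in its own group.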

  • Satellite U400 cannot create DVDs of recovery with the recovery disk creator

    Hello

    I have a Satellite U400 (a few years old), and I never created the recovery DVDs.
    Now I want to do so, but for some reason, when I run the Toshiba Recovery Disc Creator, nothing happens...
    The partition containing the recovery files is there, as well as the HDDRecovery folder. I have added a few files to this partition; could that be the problem?

    Thanks for your help.

    > I have added a few files to this partition; could that be the problem?
    No. If you didn't change the partition structure, that should not be the problem.

    The question is: does the Toshiba Recovery Disc Creator start or not? If you cannot start this tool, it has nothing to do with the recovery files.
    Can you find this file: C:\Program Files\Toshiba\Toshiba Recovery Disc Creator\TRORDCLauncher.exe?

  • Cannot create the contact group

    I have Windows Vista. On my Windows Mail toolbar, there is no pull-down for "New Contact Group". It goes to File/New/Contact, and that's all! The New Contact Group option seems to have disappeared.

    Hello, Mickey Ray

    Please use the Forum for answers!

    If you are referring to the Windows Contacts list, you should be able to restore this functionality by right-clicking in an empty area and choosing Properties. In the Properties screen, go to the Customize tab and change "Use this folder type as a template" to Contacts.

    Let us know if it helps or if you need further assistance.

    David
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think.

  • Cannot create a clean column with links

    Dear friends of the APEX,

    I'm developing an interactive report and want to have in it my own column with links instead of the standard "pencil" column.

    I'll explain it using the example at https://apex.oracle.com/pls/apex/f?p=81683:1:

    I want the "Deptno" column to have links to the single row viewer just as the "pencil" column does.

    My idea was to change the type of the "Deptno" column from Plain Text to Link.

    But, unfortunately, when I did this in Page Designer and saved the page, the modified column type reverted to "Plain Text".

    Does anyone here know what I did wrong?

    Thank you very much!

    LHOST

    Franck wrote:

    I'm developing an interactive report and want to have in it my own column with links instead of the standard "pencil" column.

    I'll explain it using the example at https://apex.oracle.com/pls/apex/f?p=81683:1:

    I want the "Deptno" column to have links to the single row viewer just as the "pencil" column does.

    Why would you do that?

    My idea was to change the type of the "Deptno" column from Plain Text to Link.

    If unique rows are not identified by a PK column, add ROWID to the report query's projection and set that column's Type to Hidden Column.

    Set the Type of the DEPTNO column to Link, the link Target to #, the link Text to #DEPTNO#, and the Link Attributes to class="a-IRR-detail-row" data-row-id="#ROWID#". Replace "ROWID" with the PK column name when unique rows are identified by a single column value. Save the changes.
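    As a sketch (the DEPT table and its columns are assumed from the usual sample schema, not stated in the post), adding ROWID to the report query projection might look like:

    ```sql
    -- Illustrative: expose ROWID alongside the report columns so the
    -- link attributes can reference #ROWID#.
    SELECT ROWID
    ,      deptno
    ,      dname
    ,      loc
    FROM   dept
    ```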

    But, unfortunately, when I did this in Page Designer and saved the page, the modified column type reverted to "Plain Text".

    Does anyone here know what I did wrong?

    You changed the column type to "Link" but did not enter the other properties needed to make it work as a link.

  • Cannot create a virtual machine with more than 256 GB

    I thought the maximum VM disk capacity was 2 TB, but I can't create a virtual disk larger than 256 GB. What am I doing wrong?

    Thank you

    You are probably using a block size of 1 MB, which limits a single virtual disk to 256 GB.  You will need a larger block size:

    1 MB = 256 GB
    2 MB = 512 GB
    4 MB = 1024 GB
    8 MB = 2048 GB

  • Cannot create the new VM with OS customization

    Hello

    I am trying to execute the following:

    New-VM -VMHost $esxHost -Name vmamqa124 -ResourcePool $resPool -Location $fName -Datastore $lunName -Template $templateName -OSCustomizationSpec $customSpec

    When the clone finishes, I get this error message about the OS customization:

    New-VM : 2008-12-16 09:25:33 New-VM F6175716-EC1B-42B2-A6F6-37949D8690B8 The operation for entity vm-47729 failed with the following message: "Customization failed".

    At line:1 char:7

    + New-VM <<<< -VMHost $esxHost -Name vmamqa124 -ResourcePool $resPool -Location $fName -Datastore $lunName -Template $templateName -OSCustomizationSpec $customSpec

    Running the same command without the -OSCustomizationSpec parameter works fine...

    Do you have any idea?

    Thank you.

    Yes, you can.

    The CustomizationIpGenerator object is extended by several other objects, one of them being CustomizationDhcpIpGenerator.

    CustomizationFixedIp is another of these extensions.

    If you use CustomizationDhcpIpGenerator, the customization will configure the adapter to use DHCP.

    As a side note, be aware that this is only valid for Windows guests.

    For Linux guests, another approach is needed.

Maybe you are looking for

  • Satellite M200 - password HARD drive in the BIOS

    My M200 HARD drive doesn't have to (if Windows does not start) and I am replaced with a new HARD drive.I do not have the recovery discs so I will try to install windows from CD,. When installing it it does not find new HARD drive.When I go into the B

  • OfficeJet 6600: Failure of printer, no error code

    Down printer, no code error.  Tried troubleshooting tips; Reset no has not worked and cleaning of copper on the ink cartridges, not working anymore.   Printer 3 years so out of warranty.   This printer is in a holiday home and I remove the cartridges

  • Newbie... of the problems printing coupons, recipes, etc. of the laptop.

    Newbie... I try to print the recipes, coupons, printable and digital from my laptop to my printer wireless HP 8610.  He wants to 'save' docment, and then won't print.  Previously, had many 'PF' files recorded on the document in adobe reader. However,

  • Impression page engine total counts for 930c Series Deskjet printers

    I have a Deskjet 932C printer.  Once told me how to print a page that gives me the serial number of my printer, the service ID, a review of FW and the number of engine total pages.  I forgot how to do this and can't find the original email that told

  • S Windows XP of HP ProBook 4530 graphics drivers problem

    Hey I just had some of these 4530 s and installed XP on them, but won't take the graphics driver. I get the error "this computer does not meet minimum requirements." It has Service Pack 3 and Windows XP Pro, which is the only requirements that I coul