average for each contragentid

Hi all! This code works fine: it returns the average value for contragentid = 1. The correct result is LUAHPER = 14996837,94

But when I add the values for contragentid = 2 (see the commented-out rows), my statement returns the incorrect result LUAHPER = 5497801,81

How can I improve my code in order to calculate the average for each contragentid?
with t as ( 

select to_date('12.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('14.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('15.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('16.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15274795.50 as luah, 215274795.50 as lusd from dual 
union all
select to_date('17.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15431807.40 as luah, 215431807.40 as lusd from dual 
union all
select to_date('18.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15480730.00 as luah, 215480730 as lusd from dual 
union all
select to_date('21.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15480730.00 as luah, 215480730 as lusd from dual

/*union all
select to_date('12.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('14.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('15.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual 
union all
select to_date('16.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215274795.50 as luah, 215274795.50 as lusd from dual 
union all
select to_date('17.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215431807.40 as luah, 215431807.40 as lusd from dual 
union all
select to_date('18.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215480730 as luah, 215480730 as lusd from dual 
union all
select to_date('21.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215480730 as luah, 215480730 as lusd from dual*/
) 

select contragentid, sum(luahper) / cnt as luahper 
from ( 
 select contragentid, (lead(arcdate,1,date '2011-03-20' + 1) over(order by arcdate) - arcdate) * luah luahper, 
date '2011-03-20' - date '2011-03-13' + 1 cnt 
from ( 
select arcdate, contragentid, luah, lusd 
from t 
where arcdate > date '2011-03-13' 
and arcdate <= date '2011-03-20' 
union all 
select greatest(arcdate,date '2011-03-13'), 
contragentid, luah, lusd 
from t 
where arcdate = (select max(arcdate) from t 
where arcdate <= date '2011-03-13') 
) ) 
where contragentid = 1
group by contragentid, cnt
Edited by: 858774 14/06/2011 02:29

Hello

858774 wrote:
But when I add the values for contragentid = 2 (see commented strings), my statement returns incorrect result LUAHPER = 5497801,81

I guess that's because you don't join and partition on contragentid.
This appears to do it:

Scott@my10g SQL>/

CONTRAGENTID    LUAHPER
------------ ----------
           1 14996837.9

Scott@my10g SQL>l
  1  with t as (
  2  select to_date('12.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual
  3  union all
  4  select to_date('14.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual
  5  union all
  6  select to_date('15.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,14275303.54 as luah, 214275303.54 as lusd from dual
  7  union all
  8  select to_date('16.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15274795.50 as luah, 215274795.50 as lusd from dual
  9  union all
 10  select to_date('17.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15431807.40 as luah, 215431807.40 as lusd from dual
 11  union all
 12  select to_date('18.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15480730.00 as luah, 215480730 as lusd from dual
 13  union all
 14  select to_date('21.03.2011','dd.mm.yyyy') as arcdate, 1 as contragentid,15480730.00 as luah, 215480730 as lusd from dual
 15  --
 16  union all
 17  select to_date('12.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual
 18  union all
 19  select to_date('14.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual
 20  union all
 21  select to_date('15.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,214275303.54 as luah, 214275303.54 as lusd from dual
 22  union all
 23  select to_date('16.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215274795.50 as luah, 215274795.50 as lusd from dual
 24  union all
 25  select to_date('17.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215431807.40 as luah, 215431807.40 as lusd from dual
 26  union all
 27  select to_date('18.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215480730 as luah, 215480730 as lusd from dual
 28  union all
 29  select to_date('21.03.2011','dd.mm.yyyy') as arcdate, 2 as contragentid,215480730 as luah, 215480730 as lusd from dual
 30  --
 31  )
 32  select contragentid, sum(luahper) / cnt as luahper
 33  from (
 34   select contragentid, (lead(arcdate,1,date '2011-03-20' + 1) over(partition by contragentid order by arcdate) - arcdate) * luah luahper,
 35  date '2011-03-20' - date '2011-03-13' + 1 cnt
 36  from (
 37  select arcdate, contragentid, luah, lusd
 38  from
 39  t
 40  where arcdate > date '2011-03-13'
 41  and arcdate <= date '2011-03-20'
 42  union all
 43  select greatest(arcdate,date '2011-03-13'),
 44  contragentid, luah, lusd
 45  from
 46  t
 47  where arcdate = (select max(arcdate) from
 48  t t2
 49  where arcdate <= date '2011-03-13'
 50  and t.contragentid=t2.contragentid)
 51  ) )
 52  where contragentid = 1
 53* group by contragentid, cnt
Scott@my10g SQL>/

CONTRAGENTID    LUAHPER
------------ ----------
           1 14996837.9

858774 wrote: How can I improve my code in order to calculate the average for each contragentid?

When I remove the 'where contragentid = 1', the above query seems to do it:

Scott@my10g SQL>/

CONTRAGENTID    LUAHPER
------------ ----------
           2  214996838
           1 14996837.9
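
For reference, the two changes relative to the original query, isolated from the listing above:

-- change 1: partition the analytic function per contragentid
lead(arcdate, 1, date '2011-03-20' + 1)
    over (partition by contragentid order by arcdate)

-- change 2: correlate the "last row before the window" lookup per contragentid
where arcdate = (select max(arcdate)
                   from t t2
                  where arcdate <= date '2011-03-13'
                    and t.contragentid = t2.contragentid)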

Tags: Database

Similar Questions

  • How to calculate an average for each quarter in a year

    Hello all


    I have a table that stores all the unit test records. I am trying to compute the average number of unit tests recorded per quarter in a year.

    I tried with AVG and COUNT, but I'm sure the results returned are not right.

    Help me in this scenario.

    Thank you
    Sriks

    user10699584 wrote:

    so if there are 10 unit tests then the avg is (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10) / 10

    That is a weird way of defining an average. Anyway, why:

    2005-2Q     5     2.5

    According to your definition:

    (1 + 2 + 3 + 4 + 5) / 5 = 3

    not 2.5. I'll assume it's a typo. Now, based on mathematics:

    1 + 2 + 3 + ... + n = (1 + n) * n / 2

    From the foregoing, your 'average' would be:

    (count + 1) / 2
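
    As a quick sanity check of that identity, here is a throwaway query (generating 1..10 with CONNECT BY):

    select avg(level)         as avg_n     -- 5.5
         , (count(*) + 1) / 2 as formula   -- also 5.5
      from dual
    connect by level <= 10;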

    Also, I suspect tc_vts column data type is DATE, if so:

    select  to_char(tc_vts,'YYYY-Q"Q"') "year-Quarter",
            count(*) "Total cases",
            (1 + count(*)) / 2 "Average of total cases"
      from  Testcycl
      group by to_char(tc_vts,'YYYY-Q"Q"')
      order by "year-Quarter"
    /
    

    SY.

  • calculate an average for each record

    Hello

    What I want is this: I have a table

    SCHOOLBOY (id, first_name, last_name,...)

    and another table

    MARKS (id, schoolboy_id, discipline_id, mark, mark_type).

    For one discipline (math, for example), a schoolboy may have several marks. I want to calculate the average mark, but for each schoolboy. Then, for example, I want to generate a report that contains the first name and last name of the schoolboy and, next to them, the average of their marks.
    How can I calculate the average of all marks for each schoolboy?

    Thank you!

    Edited by: Roger22 06.06.2009 at 01:43

    Maybe something like:

     SELECT   first_name, last_name, b1.avg_mark
       FROM   schoolboys a1, (  SELECT   schoolboy_id, AVG (mark) avg_mark
                                  FROM   marks
                                  WHERE   discipline_id = 4 -- specify whatever discipline_id you want
                              GROUP BY   schoolboy_id) b1
      WHERE   a1.id = b1.schoolboy_id
     /
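
    If the average should cover all disciplines rather than a single discipline_id, a minimal variant of the same query (a sketch against the same tables) simply drops the WHERE clause:

     SELECT   first_name, last_name, b1.avg_mark
       FROM   schoolboys a1, (  SELECT   schoolboy_id, AVG (mark) avg_mark
                                  FROM   marks
                              GROUP BY   schoolboy_id) b1
      WHERE   a1.id = b1.schoolboy_id
     /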
    

    Kind regards
    JO

    Edit: Corrected the Code tags

  • Case: View the average customer balance for each area code.

    Hi again, all. As a C++ / VB programmer, I am used to using control structures like for loops and while loops, and variables I define myself, to solve my problems. I am having a little trouble adapting to the SQL paradigm, which really contains only a few keywords and built-in functions.

    My tutor at university gave me some exercises to do, and I am having quite a problem with this one. I need to retrieve the average customer balance from a 'customer' entity, but I have to do it for each area code, not for the table as a whole. When I look at this problem I immediately think of loops, conditional checks, an integer variable to control the loop, and possibly an array or vector type to store the results. SQL is rather simple, and its simplicity is, ironically, what is causing me problems.

    Here is the "Customer" entity I am working against:
    CREATE TABLE CUSTOMER 
    (
                CUS_CODE            NUMERIC(6) 
                CONSTRAINT CUSTOMER_PK PRIMARY KEY,
                CUS_LNAME       varchar(15) NOT NULL,
                CUS_FNAME       varchar(15) NOT NULL,
                CUS_INITIAL     CHAR(1),
                CUS_AREACODE      VARCHAR(3) DEFAULT '02' NOT NULL CHECK(CUS_AREACODE IN ('03','07','08')),
                CUS_PHONE       VARCHAR(8) NOT NULL, 
                CUS_BALANCE     NUMERIC(9,2) DEFAULT 0.00
    );
    I was able to order the customer balances by area code and to calculate the average customer balance for the table as a whole, but as I said, I am having difficulty calculating the average of customer balances per area code, for all the codes present.

    Any help / example code would be much appreciated.

    OK, so you already understand how to get the average over all customers in SQL, right? Something like

    SELECT AVG(cus_balance) avg_balance
      FROM customer
    

    If you want to get the average balance by another column, you simply group by this column, i.e.

    SELECT cus_areacode, AVG(cus_balance) avg_balance
      FROM customer
     GROUP BY cus_areacode
    

    This essentially tells Oracle to group all the data by area code and then run the aggregate function AVG on cus_balance for each of those groups.
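
    If you ever need each customer's row shown next to its area-code average (without collapsing the rows), the analytic form of the same aggregate is a handy sketch:

    SELECT cus_code,
           cus_areacode,
           cus_balance,
           AVG(cus_balance) OVER (PARTITION BY cus_areacode) avg_balance
      FROM customer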

    Justin

  • Averages for number of loops

    Hi all

    I have code (attached) in which a triangular wave is supposed to ramp a magnetic field up and down at very low frequency (0.1 Hz or lower). A field reading (X) vs. Keithley reading (Y), represented here by two random number generators, must be taken in synchronization with the triangular wave, point by point. I would be very grateful if you could help me achieve the following objectives:

    1. Take the average over a few cycles. Basically, I would define the number of loops in an outer for loop and the number of samples in an inner for loop, and then take the average across the loops.

    (data of loop1 + data of loop2 + ... + data of loop n) / n

    2. The averages should be written to a text file, and at the same time I would like the full date to always be included in the file as well.

    Thank you

    Hadi

    Assuming that you want to monitor the data as it arrives, I would initialize several arrays, both to keep the X and Y data and to keep the count for each bundle.

    Now simply add the new data to the existing elements while incrementing the corresponding count. Dividing by the count gives you the mean to be graphed. Here's a simple rewrite. The number of samples must match the size of the field ramp, so it shouldn't be a separate control, I think. Currently you graph the two random channels; would you rather graph the two channels vs. the field instead? Do you want forward and backward scans averaged individually, or should they go into the same average with a half-size data array?

    Your save operation still seems a little convoluted. How do you actually want the data to be organized in the final file?

    Here is a quick project to help you get started.

  • The problem with the calculation of the average for the recurring value

    My task is to perform an analysis on a table where each ID has 10 measurements (numbered 1-10) assigned to it, each with its own value. I need their average, which I get with AVG. However, as soon as I add a second ID, I have two sets of measurements 1-10, i.e. 20 rows in total, and I get a single overall average where I want two separate averages (one per set of 10 measurements). Is there any way to split it up, with a function or some kind of grouping?


    It looks like this:


    column 1 | column 2 | column 3 | column 4
    ID       | Nr.      | result   | AVG?
    x        | 1.       | 200      | AVG x?
             | 2.       | 210      |
             | 3.       | 210      |
    y        | 1.       | 210      | AVG y?
             | 2.       | 208      |
             | 3.       | 200      |

    In column 4 I want the average, but one value for id x and another for id y.

    Is this the result you are looking for?

    If yes... then the AVG column formula is AVG(result BY id)
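
    AVG(result BY id) is the BI column-formula syntax; in plain Oracle SQL the analytic equivalent would be (a sketch, assuming a hypothetical table measurements with columns id, nr and result):

    select id
         , nr
         , result
         , avg(result) over (partition by id) as avg_per_id
      from measurements;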

  • Average for a quarter and the grand total amount

    Hi all

    I have a report (pivot table) in which I show sales information. I have category and subcategory in the rows, and quarter of the year in the columns. Sales are loaded at the day level in the fact table. My report must show average quarterly sales; that is, sales should be summed up by month, and the 3 months should then be averaged to get the quarterly sales average.  We must show the total sales for each subcategory. We must also show the % of sales for each subcategory within a category.

    I did the following,

    1. The aggregation of the Sales column in the RPD is the default

    2. In the report, I pulled Month into the criteria and excluded it from the pivot

    3. I changed the aggregation (for the grand total) to SUM via Criteria -> column formula

    4. I changed the aggregation to AVG in the pivot view's measures

    5. I duplicated the measure to display its percentage of sales

    6. I enabled totals on the rows.

    Everything works well except the sales subtotal value, which gives me the average of the subcategories at the category level; instead, it must be a sum.  The % of total is correct, since it gives 100% for each category.

    Please help me get a sum at the subtotal level.

    Thank you!

    Regards

    Deepak

    Hey, I just fixed the problem myself.  The solution was to write the formula as AVG(sales by quarter, category, subcategory), then change the aggregation rule to Sum.

  • Stats for each data store

    I am writing a script to determine the average read and write rates, the average number of read and write requests, and the average read and write latency for each datastore.

    I have tried to do it two different ways, and my numbers keep coming out 0.

    $startdate = (Get-Date -Hour 7 -Minute 0 -Second 0).AddDays(-1)

    $enddate = (Get-Date)

    $datastores = Get-Datastore

    foreach ($DS in $datastores) {

        #1
        $dswriterate = [string]([Math]::Round((($DS.datastore.write.average | Measure-Object -Average).Average), 2))

        #2
        $dswriterate = [string]([Math]::Round((($DS.write.average | Measure-Object -Average).Average), 2))

    }

    I understand that Get-Stat does not work with a datastore entity.  It would be great if I could get help with this!

    That is right.

    You can use my Get-Stat2 function to retrieve available data store counters.

    See monitor the size of your vDisks

    With the QueryMetrics switch you can see which parameters are available.

    See the usage statistics for the data store

  • IOPS & latency for each virtual machine

    It's my first time writing a script to pull metrics out of the virtual environment, and I'm trying to get the total disk IOPS and disk latency for each virtual machine in the environment.

    Here are the relevant excerpts I have at the moment:

    # Get powered-on virtual machines
    $VMs = Get-VM | ?{$_.PowerState -eq "PoweredOn"}

    # Loop through each virtual machine
    foreach ($vm in $VMs) {

        $dskreadlatency  = Get-Stat -Entity $vm -Stat "disk.totalreadlatency.average" -Start $start -Finish $end

        $dskwritelatency = Get-Stat -Entity $vm -Stat "disk.totalwritelatency.average" -Start $start -Finish $end

        $dsknumberwrites = Get-Stat -Entity $vm -Stat "virtualdisk.numberwriteaveraged.average" -Start $start -Finish $end

        $dsknumberreads  = Get-Stat -Entity $vm -Stat "virtualdisk.numberreadaveraged.average" -Start $start -Finish $end

    }

    # Setting fields to the averages of the stats (I have 4 of them)
    $fieldX = [string]([Math]::Round((($dskYYY | Measure-Object -Property Value -Average).Average), 2))

    Unfortunately, I get 0 for all these statistics.  My statistics level settings are all set to 2.  It would be awesome if I could get help on this.

    Just to make sure the performance data collection is correctly configured on vCenter: do you see data for these counters on the Performance tab for the same time interval?

  • Get the IOPS and throughput for each vmhba adapter

    Hi, I am having a few difficulties getting the average, max, and min of the IOPS and throughput stats for each vmhba adapter on an ESXi host.

    I can get some network parameters like net.usage.average, net.received.average and net.transmitted.average, as those metrics are consolidated across all the network cards.

    Here is the relevant part of the script I usually use:

    $metrics = "net.received.average", "net.transmitted.average"

    Get-Stat -Entity $esx -Start $start -Finish $stop -MaxSamples 10000 -IntervalMins 30 -Stat $metrics |
    Group-Object -Property {$_.Entity.Name} | %{
        $esxname = $_.Group[0].Entity.Name
        $hard = (Get-VMHost -Name $esxname).ExtensionData.Summary.Hardware
        $netreceived = $_.Group | where {$_.MetricId -eq "net.received.average" -and $_.Instance -eq ""} | Measure-Object -Property Value -Average -Maximum -Minimum
        $nettransmit = $_.Group | where {$_.MetricId -eq "net.transmitted.average" -and $_.Instance -eq ""} | Measure-Object -Property Value -Average -Maximum -Minimum

    and so on, but for the storage adapters those consolidated metrics do not exist...

    Thank you very much!

    Try something like this

    $esx = Get-VMHost MyEsx
    $stat = "storageAdapter.numberReadAveraged.average","storageAdapter.numberWriteAveraged.average"
    $start = (Get-Date).AddMinutes(-5)

    Get-Stat -Entity $esx -Stat $stat -Start $start |
    Group-Object -Property Instance,Timestamp | %{
        $_ | Select @{N="Timestamp";E={$_.Values[1]}},
                    @{N="Instance";E={$_.Values[0]}},
                    @{N="Read IOPS";E={$_.Group | where {$_.MetricId -eq "storageAdapter.numberReadAveraged.average"} | %{$_.Value/$_.IntervalSecs}}},
                    @{N="Write IOPS";E={$_.Group | where {$_.MetricId -eq "storageAdapter.numberWriteAveraged.average"} | %{$_.Value/$_.IntervalSecs}}}
    }
    

    It uses more or less the same concept that I used in my post on getting the maximum IOPS.

  • Select the last value for each day of the table

    Hello!

    I have a table that contains several measures for each day. I need two queries on this table, and I'm not sure how to write them.

    The table stores rows like these (sample data):
    *DateCol1                 Value       Database*
    27.09.2009 12:00:00       100           DB1
    27.09.2009 20:00:00       150           DB1
    27.09.2009 12:00:00       1000          DB2
    27.09.2009 20:00:00       1100          DB2
    28.09.2009 12:00:00       200           DB1
    28.09.2009 20:00:00       220           DB1
    28.09.2009 12:00:00       1500          DB2
    28.09.2009 20:00:00       2000          DB2
    Explanation of the data in the sample table:
    We measure the size of the data files belonging to each database one or more times a day. The Value column indicates the size of the database files for each database at some point in time (dates in DateCol1 are in European format).


    What I need:
    Query 1:
    The query must return the last measurement for each day and each database. Like this:
    *DateCol1       Value      Database*
    27.09.2009        150          DB1
    27.09.2009       1100          DB2
    28.09.2009        220          DB1
    28.09.2009       2000          DB2
    Query 2:
    The query should return the average measurement for each day and each database. Like this:
    *DateCol1       Value      Database*
    27.09.2009       125          DB1
    27.09.2009      1050          DB2
    28.09.2009       210          DB1
    28.09.2009      1750          DB2
    Could someone please help me to write these two queries?

    Please let me know if you need further information.

    Published by: user7066552 on September 29, 2009 10:17

    Why two queries when one will suffice ;)

    SQL> select dt
      2       , db
      3       , val
      4       , avg_val
      5    from (
      6  select dt
      7       , val
      8       , db
      9       , row_number () over (partition by db, trunc (dt)
     10                                 order by dt desc
     11                            ) rn
     12       , avg (val) over (partition by db, trunc (dt)) avg_val
     13    from test)
     14   where rn = 1
     15  order by dt
     16  /
    
    DT        DB           VAL    AVG_VAL
    --------- ----- ---------- ----------
    27-SEP-09 DB2         1100       1050
    27-SEP-09 DB1          150        125
    28-SEP-09 DB2         2000       1750
    28-SEP-09 DB1          220        210
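
    An equivalent without the inline view, assuming the same test table, uses the KEEP (DENSE_RANK LAST) idiom: it takes each group's value from the latest row while a plain AVG computes the daily average (a sketch):

    select trunc(dt) dt
         , db
         , max(val) keep (dense_rank last order by dt) val
         , avg(val) avg_val
      from test
     group by db, trunc(dt)
     order by dt, db
    /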
    
  • Calculation for every 3 records

    I am working with a table of 10000 rows;
    for this example I have added only 10 rows.
    I need to fill in the R-square value for every 3 rows (by column A), and so on.

    Select * from TABLEAA

     A   BEG   END   PROF   AVERAGE
    --------------------------------
     1   0     0.1   159    159
     2   0.1   0.2   159    168
     3   0.2   0.3   179    159
     4   0.1   0.2   250    300
     5   0.2   0.3   320    250
     6   0.3   0.4   250    380
     7   0.2   0.3   388    379
     8   0.3   0.4   379    388
     9   0.4   0.5   388    400
    10   1.5   0.6   499    500

    R-square -> REGR_R2(AVERAGE, PROF)

     A   BEG   END   PROF   AVERAGE   R_SQUARE
    -------------------------------------------
     1   0     0.1   159    159
     2   0.1   0.2   159    168
     3   0.2   0.3   179    159       0.25
     4   0.1   0.2   250    300
     5   0.2   0.3   320    250
     6   0.3   0.4   250    380       0.627906977
     7   0.2   0.3   388    379
     8   0.3   0.4   379    389
     9   0.4   0.5   388    400       0.000755287
    10   1.5   0.6   499    500



    create table TABLEAA
    (
      A       NUMBER,
      BEG     NUMBER,
      END     NUMBER,
      PROF    NUMBER,
      AVERAGE NUMBER
    );


    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (10, 1.5, .6, 499, 500);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (1, 0, .1, 159, 159);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (2, .1, .2, 159, 168);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (3, .2, .3, 179, 159);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (4, .1, .2, 250, 300);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (5, .2, .3, 320, 250);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (6, .3, .4, 250, 380);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (7, .2, .3, 388, 379);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (8, .3, .4, 379, 388);
    insert into TABLEAA (A, BEG, END, PROF, AVERAGE) values (9, .4, .5, 388, 400);
    commit;


    Thanks in advance

    Published by: user1849 on August 19, 2009 09:48

    First of all, thanks for providing the table create and inserts.

    There is an analytic function, regr_r2, that can do this for you: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions132.htm#i85922. Note that I had to change the AVERAGE value from 388 to 389 on the A = 8 row to get your exact numbers.

    SQL> select a
      2       , beg
      3       , end
      4       , prof
      5       , average
      6       , case mod(a,3) when 0 then regr_r2(average,prof) over (order by a rows between 2 preceding and current row) end r_square
      7    from tableaa
      8  /
    
             A        BEG        END       PROF    AVERAGE   R_SQUARE
    ---------- ---------- ---------- ---------- ---------- ----------
             1          0         ,1        159        159
             2         ,1         ,2        159        168
             3         ,2         ,3        179        159        ,25
             4         ,1         ,2        250        300
             5         ,2         ,3        320        250
             6         ,3         ,4        250        380 ,627906977
             7         ,2         ,3        388        379
             8         ,3         ,4        379        389
             9         ,4         ,5        388        400 ,000755287
            10        1,5         ,6        499        500
    
    10 rows selected.
    

    Kind regards
    Rob.

  • Show counts and averages for the 3 categories side by side

    The following query was initially developed for calculating GPAs (thread id 920965). I changed it to perform the same type of summary on SAT scores.
    The problem is that there are 3 score types, as opposed to a single type of GPA. When I run the query for one score type, the results for
    that score type come out fine; however, when I tried to add in the other score types (they are commented out in the query below), the results
    no longer come out right once more than one is in the query. Can this query be modified to handle several score types, or will I
    need separate runs for each type?

    Here is what the result should look like. The numbers in the body are the counts of students who have a score within each range,
    and the overall average score for each category is at the bottom.

    EXAMPLE OF SUMMARY:
      Score Range     Read     Math     Write
                   
     001 - 299     18     12     25
     300 - 349     60     50     50
     350 - 399     235     150     207
     400 - 449     523     400     463
     450 - 499     840     870     857
     500 - 549     1300     1189     1314
     550 - 599     1321     1400     1425
     600 - 649     1298     1280     1262
     650 - 699     605     940     737
     700 - 749     200     265     330
     750 - 800     109     119     102
     NO SCORES     1450     1284     1187
                   
     TOTAL     7959     7959     7959
                   
     AVERAGE     553     563     559
    Details of the sample:
    ID     READ     MATH     WRITE
    121212     570     520     550
    112121     650     570     600
    121121     
    111221     
    111122     600     625     610
    The way the query runs now, if I run it only for Reading, the Reading numbers come out fine; if I run it only for Math, the Math
    numbers come out fine, etc. When I put all three into the query, the numbers came out wrong.


    -- This part creates the range table.  Usually just created once
    CREATE TABLE    Score_Range
    AS
    SELECT  000 AS low_score, 300 AS high_score, '001 - 299' AS display_txt, 1  AS range_sort    FROM dual    UNION ALL
    SELECT  300 AS low_score, 350 AS high_score, '300 - 349' AS display_txt, 2  AS range_sort    FROM dual    UNION ALL
    SELECT  350 AS low_score, 400 AS high_score, '350 - 399' AS display_txt, 3  AS range_sort    FROM dual    UNION ALL
    SELECT  400 AS low_score, 450 AS high_score, '400 - 449' AS display_txt, 4  AS range_sort    FROM dual    UNION ALL
    SELECT  450 AS low_score, 500 AS high_score, '450 - 499' AS display_txt, 5  AS range_sort    FROM dual    UNION ALL
    SELECT  500 AS low_score, 550 AS high_score, '500 - 549' AS display_txt, 6  AS range_sort    FROM dual    UNION ALL
    SELECT  550 AS low_score, 600 AS high_score, '550 - 599' AS display_txt, 7  AS range_sort    FROM dual    UNION ALL
    SELECT  600 AS low_score, 650 AS high_score, '600 - 649' AS display_txt, 8  AS range_sort    FROM dual    UNION ALL
    SELECT  650 AS low_score, 700 AS high_score, '650 - 699' AS display_txt, 9  AS range_sort    FROM dual    UNION ALL
    SELECT  700 AS low_score, 750 AS high_score, '700 - 749' AS display_txt, 10 AS range_sort    FROM dual    UNION ALL
    SELECT  750 AS low_score, 999 AS high_score, '750 - 800' AS display_txt, 11 AS range_sort    FROM dual    UNION ALL
    SELECT  NULL,          NULL,           'No Scores',                      13           FROM dual;
    ------------------------------------------------------------------------------------------------------------------------

    -- This part is the actual query used to see the summary
    WITH interesting_score_stat AS
    (
        SELECT stu_population, '1Applied' Status, college,
               sat_read, sat_math, sat_write
        FROM   gpa_stat
        WHERE  stu_population  in ('F','T')
        AND    academic_period = '200940'
    UNION ALL
        SELECT stu_population, '2Accepted' Status, college,
               sat_read, sat_math, sat_write
        FROM   gpa_stat
        WHERE  stu_population  in ('F','T')
        AND    academic_period = '200940'
        AND    accepted = 1
    UNION ALL
        SELECT stu_population, '3Deposit' Status, college,
               sat_read, sat_math, sat_write
        FROM   gpa_stat
        WHERE  stu_population  in ('F','T')
        AND    academic_period = '200940'
        AND    accepted = 1
        AND    deposit  = 1
    ),       all_colleges      AS
    (
        SELECT DISTINCT stu_population, Status, college
        FROM   interesting_score_stat
        
    )
    SELECT c.stu_population, 
           c.Status,
           c.college,
           r.display_txt           AS scorerange,
           COUNT (s.college)       AS count,
           round(NVL(AVG(s.sat_read),0),0)  AS avgRead
    --       round(NVL(AVG(s.sat_math),0),0)  AS avgMath,
    --       round(NVL(AVG(s.sat_write),0),0) AS avgWrite
    FROM  all_colleges           c
    CROSS JOIN score_range         r
    LEFT OUTER JOIN interesting_score_stat s    
      ON ( c.stu_population   = s.stu_population
     AND   c.status    = s.status
     AND   c.college   = s.college
     AND   r.low_score  <= s.sat_read
     --AND   r.low_score  <= s.sat_math
     --AND   r.low_score  <= s.sat_write
     AND   r.high_score  > s.sat_read
    -- AND   r.high_score  > s.sat_math
    -- AND   r.high_score  > s.sat_write
         )
      OR ( c.stu_population  =  s.stu_population 
     AND   c.status   =  s.status
     AND   c.college  =  s.college
     AND   r.low_score     IS NULL
     AND   s.sat_read      IS NULL
    -- AND   s.sat_math      IS NULL
    -- AND   s.sat_write     IS NULL
         )
    GROUP BY c.stu_population, 
             c.status, 
             cube ( c.college, 
                    r.display_txt
                  )
    ORDER BY c.stu_population, 
             c.status, 
             c.college, 
             r.display_txt
     
    ;

    Hello

    To make the sample data easier to manage, I cut score_range down to four rows:

    SELECT  450 AS low_score, 500 AS high_score, '450 - 499' AS display_txt, 5  AS range_sort    FROM dual    UNION ALL
    SELECT  500 AS low_score, 550 AS high_score, '500 - 549' AS display_txt, 6  AS range_sort    FROM dual    UNION ALL
    SELECT  550 AS low_score, 600 AS high_score, '550 - 599' AS display_txt, 7  AS range_sort    FROM dual    UNION ALL
    SELECT  NULL,          NULL,           'No Scores',                      13           FROM dual;
    

    and used this data for gpa_stat:

    select '12345678' ID, '200940' Academic_period, 'Freshmen' Stu_Pop, 'F' Stu_population, 'LA' College, 1 Applied, 1 Accepted, 1 Deposit, 560 SAT_READ, 590 SAT_MATH, 510 SAT_WRITE from dual union all
    select '23456789',     '200940',    'Transfer',    'T',     'LA',    1,    1,    0,  null, null, null    from dual union all
    select '34567890',    '200940',    'Freshmen',    'F',    'BN',    1,    1,    1,    500,    510,    540 from dual union all
    select '45678901',    '200940',    'Freshmen',    'F',    'BN',    1,    1,    1,    530,    520,    630 from dual union all
    select '56789012',    '200940',    'Freshmen',    'F',    'BN',    1,    1,    1,    550,    520,    540 from dual union all
    select '67890123',    '200940',    'Freshmen',    'F',    'LA',    1,    1,    1,    null,    null,  null from dual 
    

    This query:

    WITH   cntr        AS
    (
         SELECT     LEVEL     test_code
         FROM     dual
         CONNECT BY     LEVEL <= 3     -- # of measure columns (sat_read, sat_math and sat_write)
    )
    ,     unpivoted_data         AS
    (
         SELECT     s.stu_population
         ,     CASE
                   WHEN  accepted = 1
                   AND   deposit  = 1
                               THEN     3
                   WHEN  accepted = 1
                               THEN     2
                               ELSE     1
              END     AS status_lvl
         ,     s.college
         ,     c.test_code
         ,     CASE     c.test_code
                   WHEN  1          THEN  sat_read
                   WHEN  2          THEN  sat_math
                   WHEN  3          THEN  sat_write
              END     AS score
         FROM          gpa_stat     s
         CROSS JOIN     cntr          c
         WHERE   stu_population          IN ('F', 'T')     -- Do all filtering here
         AND     academic_period          = '200940'
    )
    ,     all_colleges     AS
    (
         SELECT DISTINCT     college
         FROM             unpivoted_data
    )
    ,     all_populations     AS
    (     SELECT DISTINCT     stu_population
         FROM             unpivoted_data
    )
    SELECT       p.stu_population
    ,       a.display_txt                              AS status
    ,       c.college
    ,       NVL ( r.display_txt
               , ' (Total)'
               )                                   AS scorerange
    ,       COUNT     (CASE WHEN u.test_code = 1 THEN 1     END)     AS read
    ,       AVG      (CASE WHEN u.test_code = 1 THEN score END)     AS avgread
    ,       COUNT (CASE WHEN u.test_code = 2 THEN 1     END)     AS math
    ,       AVG      (CASE WHEN u.test_code = 2 THEN score END)     AS avgmath
    ,       COUNT (CASE WHEN u.test_code = 3 THEN 1     END)     AS write
    ,       AVG      (CASE WHEN u.test_code = 3 THEN score END)     AS avgwrite
    FROM                    all_populations p
    CROSS JOIN         all_status         a
    CROSS JOIN         all_colleges    c
    CROSS JOIN         score_range     r
    LEFT OUTER JOIN     unpivoted_data  u     ON       u.stu_population  =  p.stu_population
                                             AND     u.status_lvl       >= a.lvl_id
                                             AND      (     (       u.score      >=  r.low_score
                                                          AND       u.score      <   r.high_score
                                          )
                                  OR     (       u.score      IS NULL
                                          AND       r.low_score  IS NULL
                                          )
                                      )
                             AND     u.college       =  c.college
    GROUP BY  p.stu_population
    ,            a.display_txt
    ,       c.college
    ,       ROLLUP ((r.display_txt))
    ORDER BY  p.stu_population
    ,            a.display_txt
    ,         c.college
    ,       GROUPING (r.display_txt)
    ,            MIN (r.range_sort)
    ;
    

    produces this output:

    STU STATUS    CO SCORERANG READ READ MATH MATH WRITE WRITE
    --- --------- -- --------- ---- ---- ---- ---- ----- -----
    F   1Applied  BN 450 - 499    0         0          0
    F   1Applied  BN 500 - 549    2  515    3  517     2   540
    F   1Applied  BN 550 - 599    1  550    0          0
    F   1Applied  BN No Scores    0         0          0
    F   1Applied  BN  (Total)     3  527    3  517     2   540
    F   1Applied  LA 450 - 499    0         0          0
    F   1Applied  LA 500 - 549    0         0          1   510
    F   1Applied  LA 550 - 599    1  560    1  590     0
    F   1Applied  LA No Scores    1         1          1
    F   1Applied  LA  (Total)     2  560    2  590     2   510
    F   2Accepted BN 450 - 499    0         0          0
    F   2Accepted BN 500 - 549    2  515    3  517     2   540
    F   2Accepted BN 550 - 599    1  550    0          0
    F   2Accepted BN No Scores    0         0          0
    F   2Accepted BN  (Total)     3  527    3  517     2   540
    F   2Accepted LA 450 - 499    0         0          0
    F   2Accepted LA 500 - 549    0         0          1   510
    F   2Accepted LA 550 - 599    1  560    1  590     0
    F   2Accepted LA No Scores    1         1          1
    F   2Accepted LA  (Total)     2  560    2  590     2   510
    F   3Deposit  BN 450 - 499    0         0          0
    F   3Deposit  BN 500 - 549    2  515    3  517     2   540
    F   3Deposit  BN 550 - 599    1  550    0          0
    F   3Deposit  BN No Scores    0         0          0
    F   3Deposit  BN  (Total)     3  527    3  517     2   540
    F   3Deposit  LA 450 - 499    0         0          0
    F   3Deposit  LA 500 - 549    0         0          1   510
    F   3Deposit  LA 550 - 599    1  560    1  590     0
    F   3Deposit  LA No Scores    1         1          1
    F   3Deposit  LA  (Total)     2  560    2  590     2   510
    
                                     AVG       AVG         AVG
    STU STATUS    CO SCORERANG READ READ MATH MATH WRITE WRITE
    --- --------- -- --------- ---- ---- ---- ---- ----- -----
    T   1Applied  BN 450 - 499    0         0          0
    T   1Applied  BN 500 - 549    0         0          0
    T   1Applied  BN 550 - 599    0         0          0
    T   1Applied  BN No Scores    0         0          0
    T   1Applied  BN  (Total)     0         0          0
    T   1Applied  LA 450 - 499    0         0          0
    T   1Applied  LA 500 - 549    0         0          0
    T   1Applied  LA 550 - 599    0         0          0
    T   1Applied  LA No Scores    1         1          1
    T   1Applied  LA  (Total)     1         1          1
    T   2Accepted BN 450 - 499    0         0          0
    T   2Accepted BN 500 - 549    0         0          0
    T   2Accepted BN 550 - 599    0         0          0
    T   2Accepted BN No Scores    0         0          0
    T   2Accepted BN  (Total)     0         0          0
    T   2Accepted LA 450 - 499    0         0          0
    T   2Accepted LA 500 - 549    0         0          0
    T   2Accepted LA 550 - 599    0         0          0
    T   2Accepted LA No Scores    1         1          1
    T   2Accepted LA  (Total)     1         1          1
    T   3Deposit  BN 450 - 499    0         0          0
    T   3Deposit  BN 500 - 549    0         0          0
    T   3Deposit  BN 550 - 599    0         0          0
    T   3Deposit  BN No Scores    0         0          0
    T   3Deposit  BN  (Total)     0         0          0
    T   3Deposit  LA 450 - 499    0         0          0
    T   3Deposit  LA 500 - 549    0         0          0
    T   3Deposit  LA 550 - 599    0         0          0
    T   3Deposit  LA No Scores    0         0          0
    T   3Deposit  LA  (Total)     0         0          0
    
    60 rows selected.
    

    Among the assumptions I made were some about status.
    The different status values seem to be subsets of each other. In other words, any row that qualifies as "3Deposit" is also counted as "2Accepted", and any row counted as "2Accepted" is also counted as "1Applied".
    For the same reason that you should have a score_range table, you should also have a table for these statuses, like this:

    CREATE TABLE  all_status
    AS
    SELECT     1 as lvl_id, '1Applied' AS display_txt     FROM dual     UNION ALL
    SELECT     2,             '2Accepted'             FROM dual     UNION ALL
    SELECT     3,          '3Deposit'               FROM dual;
    

    where the higher lvl_ids are supposed to include the lower levels (for example, lvl_id 1 is a subset of 2 and 3).

    When there are no rows in a group, the average columns in the output above remain NULL.
    If you want 0 in those places, use NVL(AVG(...), 0) instead of AVG(...).

  • Order a different number of prints for each image

    Is there a way to order different numbers of prints for different images?

    Say I want to order a 4 x 6 print of every image in a collection; simple enough.

    Then I want to order some of them in 8 x 10 as well; also quite simple.

    But what I can't figure out is how to order, say, 3 copies of image #4 in 8 x 10, 2 copies of image #5 in 8 x 10, and 10 copies of image #14 in 8 x 10.

    Is there something I'm missing?

    Any help is appreciated.

    Thanks in advance.

    • The options button lets you change the number of prints for each selected photo.
    • The 'add pictures and change print sizes' button allows you to add additional formats and you can even select different amounts for each size.

    See this help page: https://help.apple.com/photos/mac/1.0/?lang=en#/pht6e15ea68

  • Why do I have to re-enter my password for each song I buy on iTunes

    At some point, iTunes began requiring me to enter my password for each song I buy.  Usually I search and buy on my Windows 7 desktop.  I scoured my account looking for an explanation of why this changed, but I cannot find any setting I might have changed.  In the past, once I signed in to my account, I could buy as many songs as I wanted without entering my password again.   This change took place a few months ago.

    Thank you

    Barb

    This article may help:

    Manage your iTunes Store and App Store password preferences - Apple Support

    Read it very carefully to see which of the password options you can set (especially with Touch ID).
