SQL query using GROUP BY and aggregate functions

Hi all

I need your help in writing a SQL query to achieve the following objectives.

Scenario:

I have a table with 3 columns. There are 3 possible values for col3: success, failure, and error.
I need a query that gives me summary counts of each distinct col3 value for every combination of col1 and col2 (GROUP BY col1, col2). When there are no rows for a given col3 value, it should return a ZERO count.

The example data:

Col1 Col2 Col3

ABC  01   success
ABC  02   success
ABC  01   success
ABC  01   failure
ABC  01   error
ABC  02   failure
ABC  03   error
XYZ  07   failure

Output required:

C1   C2   s_cnt  F_cnt  E_cnt  (column headings)
ABC  01   2      1      1
ABC  02   1      1      0
ABC  03   0      0      1
XYZ  07   0      1      0

s_cnt = number of successes; F_cnt = number of failures; E_cnt = number of errors

Please note that the output should have 5 columns: col1, col2, the count of 'success' for each (col1, col2) group, the count of 'failure' for each group, and the count of 'error' for each group;
and wherever there are no matching rows, it should return ZERO.

Thanks in advance.

Kind regards
Shiva

Hi, Shiva,

Welcome to the forum!

Here's one way:

SELECT    col1
,         col2
,         COUNT ( CASE
                      WHEN  col3 = 'success'
                      THEN  1
                  END
                )     AS s_cnt
,         COUNT ( CASE
                      WHEN  col3 = 'failure'
                      THEN  1
                  END
                )     AS f_cnt
,         COUNT ( CASE
                      WHEN  col3 = 'error'
                      THEN  1
                  END
                )     AS e_cnt
FROM      table_x
GROUP BY  col1
,         col2
;
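
COUNT ignores NULLs, so each CASE expression counts only the rows that match its WHEN condition; a group with no matching rows gets 0, not NULL, which gives you the ZERO counts you asked for. If you prefer SUM, here is an equivalent sketch (against the same hypothetical table_x):

SELECT    col1
,         col2
,         SUM (CASE WHEN col3 = 'success' THEN 1 ELSE 0 END)     AS s_cnt
,         SUM (CASE WHEN col3 = 'failure' THEN 1 ELSE 0 END)     AS f_cnt
,         SUM (CASE WHEN col3 = 'error'   THEN 1 ELSE 0 END)     AS e_cnt
FROM      table_x
GROUP BY  col1
,         col2
;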

Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, with only the relevant columns). If you don't, then you can't expect answers that people have been able to test.
Also post the results you want from that data.
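
For this question, that might look something like the following (hypothetical names and data types, since you didn't post yours):

CREATE TABLE table_x
(   col1    VARCHAR2 (10)
,   col2    VARCHAR2 (10)
,   col3    VARCHAR2 (10)
);

INSERT INTO table_x (col1, col2, col3) VALUES ('ABC', '01', 'success');
INSERT INTO table_x (col1, col2, col3) VALUES ('ABC', '01', 'failure');
INSERT INTO table_x (col1, col2, col3) VALUES ('XYZ', '07', 'failure');
-- ... and so on for the rest of the sample rows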

Tags: Database

Similar Questions

  • Analytic functions and aggregate functions

    What are analytic functions and aggregate functions? What is the difference between them?

    Hello

    Analytic Functions:

    Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
    Analytic functions are the last set of operations performed in a query, except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or the ORDER BY clause.
    Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.

    Aggregate Functions:

    Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
    If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
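
    A minimal sketch of the difference, using the standard scott.emp demo table (assuming it is installed):

    -- Aggregate: GROUP BY collapses the data to one result row per department
    SELECT    deptno
    ,         SUM (sal)     AS dept_sal
    FROM      scott.emp
    GROUP BY  deptno;

    -- Analytic: every input row comes back, each carrying its department's total
    SELECT    ename
    ,         deptno
    ,         SUM (sal) OVER (PARTITION BY deptno)     AS dept_sal
    FROM      scott.emp;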

    Let me know if you have any problem understanding this.
    Thank you.

    Published by: varun4dba on January 27, 2011 15:32

  • How to use GROUP BY with an analytic function

    I need to return, in one row, the department that has the minimum salary. It must be done with an analytic function, but I have a problem with GROUP BY. I can't use MIN() without GROUP BY.

    Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) RN from emp group by deptno) WHERE RN < 20 order by deptno;

    Published by: senza on 6.11.2009 16:09

    Hello

    senza wrote:
    I need to return, in one row, the department that has the minimum salary. It must be done with an analytic function

    It has to be done with an analytic function? This sounds like a homework assignment.

    The best way to get these results is with an aggregate function, not an analytic one:

    SELECT      MIN (deptno) KEEP (DENSE_RANK FIRST ORDER BY sal)     AS dept_with_lowest_sal
    FROM      scott.emp
    ;
    

    Note that you do not need a subquery.
    This can be modified if, for example, you want the department with the lowest sal for each job.
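
    For example, a sketch of that per-job variation:

    SELECT    job
    ,         MIN (deptno) KEEP (DENSE_RANK FIRST ORDER BY sal)     AS dept_with_lowest_sal
    FROM      scott.emp
    GROUP BY  job
    ;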

    But if your assignment is to use an analytic function, then that's what you have to do.

    but I have a problem with GROUP BY. I can't use MIN() without GROUP BY.

    Of course you can use MIN without GROUP BY. Almost all aggregate functions (including MIN) have analytic equivalents.
    However, for this problem, you don't need to. The best analytic approach uses only RANK, not MIN. If you ORDER BY sal, the rows with rank = 1 will have the minimum salary.

    Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) RN from emp group by deptno) WHERE RN < 20 order by deptno;

    Try selecting plain old sal instead of MIN (sal), and get rid of the GROUP BY clause.

    Adding ROWNUM to the ORDER BY clause makes RANK return the same results as ROW_NUMBER: whenever there is a tie on sal, the output will still be distinct numbers. Which row gets the lower number is quite arbitrary, and not necessarily the same every time you run the query. For example, MARTIN and WARD have exactly the same salary, 1250. The query you posted would assign rn = 4 to one of them and rn = 5 to the other. Who gets 4? It's a toss-up. It could be MARTIN the first time you try it, and WARD the next. (In fact, in a very small table like scott.emp, it probably will be consistent, but it is still arbitrary.) If this is what you want, it would be clearer and simpler just to use ROW_NUMBER instead of RANK.
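
    Putting that advice together, a sketch of the analytic version (rn = 1 marks the row, or rows, with the minimum salary):

    SELECT  deptno
    ,       sal     AS min_salary
    FROM   (
            SELECT  deptno
            ,       sal
            ,       RANK () OVER (ORDER BY sal)     AS rn
            FROM    scott.emp
           )
    WHERE   rn = 1
    ;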

  • String and the aggregate function

    Hi all
    Suppose that, in the HR example schema, I want to count all the first names in the employees table that start with 'S'. The following query lists the employees with such names:
    SQL> select first_name,job_id,count(first_name) from employees where first_name
    like 'S%' group by first_name, job_id;
    
    FIRST_NAME           JOB_ID     COUNT(FIRST_NAME)
    -------------------- ---------- -----------------
    Sundita              SA_REP                     1
    Samuel               SH_CLERK                   1
    Shelli               PU_CLERK                   1
    Sigal                PU_CLERK                   1
    Shelley              AC_MGR                     1
    Steven               AD_PRES                    1
    Susan                HR_REP                     1
    Sarath               SA_REP                     1
    Shanta               ST_MAN                     1
    Steven               ST_CLERK                   1
    Stephen              ST_CLERK                   1
    
    FIRST_NAME           JOB_ID     COUNT(FIRST_NAME)
    -------------------- ---------- -----------------
    Sundar               SA_REP                     1
    Sarah                SH_CLERK                   1
    
    13 rows selected.
    How do I query the number of employees whose first names begin with 'S'?

    Best regards
    Valerie
    Also, I want to query the first name along with its count; is this possible?
    

    I think you want:

    SELECT   first_name
    ,        COUNT (*)
    FROM     employees
    WHERE    first_name LIKE 'S%'
    GROUP BY ROLLUP (first_name);
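
    ROLLUP produces one row per first_name, plus a grand-total row on which first_name is NULL. With the sample data above, the output should look something like this sketch (rows in no particular order without an ORDER BY; Steven appears twice in the data, and the grand total is 13):

    FIRST_NAME           COUNT(*)
    -------------------- --------
    Samuel                      1
    Sarah                       1
    ...
    Steven                      2
                               13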

  • Bug with the aggregate function and no group

    When I run the following query:
    with the_table as
    (
      select 1 as id, 100 as cost from dual
      union all select 2 as id, 200 as cost from dual
      union all select 3 as id, 300 as cost from dual
      union all select 4 as id, 400 as cost from dual
      union all select 5 as id, 500 as cost from dual
    )
    select id, cost
    from
    (
      select id, cost
      from the_table
      --
      union all
      --
      select 0 as id, sum(cost) as cost
      from the_table
      where 0 = 1
      -- group by 1
    )
    order by id;
    I get this result:
    ID    COST
    --  ------
     0  <null>     
     1     100
     2     200
     3     300
     4     400
     5     500
    However, when I "uncomment" the line "group by 1", the query works as expected (without the id = 0 row).

    This occurs even when "the_table" is an actual table.

    Has anyone else come across this (and if so, how long has it been a problem)?

    The database is 11.2.0.2.0 64-bit.

    EDIT: It also happens without a UNION - the following returns a single row (with id 0 and a NULL cost) without the GROUP BY, and no rows with it:
    select id, cost
    from
    (
      select 0 as id, sum(cost) as cost
      from 
      (
        select 1 as id, 100 as cost from dual
        union all select 2 as id, 200 as cost from dual
        union all select 3 as id, 300 as cost from dual
        union all select 4 as id, 400 as cost from dual
        union all select 5 as id, 500 as cost from dual
      )
      where 0 = 1
      -- group by 1
    )
    order by id
    Edited by: Donbot February 15, 2012 10:29

    Donbot wrote:
    Has anyone else come across this (and if so, how long has it been a problem)?

    The database is 11.2.0.2.0 64-bit.

    This is a documented behavior.

    http://docs.Oracle.com/CD/E11882_01/server.112/e26088/functions003.htm#SQLRF20035

    "
    All except COUNT (*) GROUPING and GROUPING_ID aggregate functions ignore NULL values. You can use the NVL function in the argument of an aggregation function to substitute a value for a null value. COUNTY and REGR_COUNT never return null, but return a number or zero. For all remaining functions of aggregation, * if the DataSet contains no line, * or if it contains only the rows with NULL values as arguments to the aggregate function, * then the function returns null.*
    "

  • How to get a handle to the screen window using just the group id and the window id?

    Hello

    We are trying to develop a video application, and we use ForeignWindowControl to display the video on the screen.

    We have these ForeignWindowControls declared and defined in a QML file, and they are displayed as needed.

    Using the Camera API, we are able to get the local video displayed correctly on the local video ForeignWindowControl. For this we just pass the window group id and the window id to the camera API, and it automatically configures the window newly created by the camera to the position and size defined in the QML file.

    However, this isn't the case with the remote video. Since there is no method/function similar to the camera API's createViewfinder, we need to create a new window ourselves that joins the group shared by the remote ForeignWindowControl, using screen_create_window_type and setting all the required parameters.

    To display the video, we manually hard-code the position and size of the newly created window so that it matches the position and size of the remote ForeignWindowControl in the QML file, and it displays correctly.

    My question is: how do I find the ForeignWindowControl using just the group id and the window id, the way the camera API does it internally?

    Any help would be appreciated.

    Thank you.

    There are 3 ways to associate a window with a ForeignWindowControl.

    1. Tell the ForeignWindowControl to expect a window with a given windowId.  Then, after you use libscreen to create your window with the same windowId, use screen_join_window_group() to join the same group that the FWC uses.  The FWC should automatically catch this event and associate the window with itself.  This is how the camera does it.

    2. Call the ForeignWindowControl's bindToWindow() method to associate the FWC with a particular windowId/groupId.  Similar to #1, but perhaps more useful if you create the FWC after the screen window has already been created.

    3. Call the ForeignWindowControl's setWindowHandle() method.  Similar to #2, but instead of using windowId/groupId, you pass the actual window handle.

    See the FWC docs:

    https://developer.BlackBerry.com/Cascades/reference/bb__cascades__foreignwindowcontrol.html

    If you created the window yourself, you can use any of the 3 approaches.  If the window was created in a different process (e.g. the mm rendering engine, or the photo service), then generally #1 is the approach you would use.

    Cheers,

    Sean

  • Hello my name is jose and I would like to know how I can use Adobe Camera Raw from Bridge, because when I use Bridge and want to open a camera raw image, I get a message that says: BRIDGE'S PARENT APPLICATION IS NOT ACTIVE. BRIDGE REQUIRES THAT A QUALIFYING PRODUCT HAS BEEN LAUNCHED AT LEAST ONCE TO ENABLE THIS FEATURE

    Hello, my name is Jose and I would like to know how I can use Adobe Camera Raw from Bridge, because when I use Bridge and want to open a camera raw image, I get a message that says: BRIDGE'S PARENT APPLICATION IS NOT ACTIVE. BRIDGE REQUIRES THAT A QUALIFYING PRODUCT HAS BEEN LAUNCHED AT LEAST ONCE TO ENABLE THIS FEATURE. I wonder what that means?

    I use a laptop with a 64-bit operating system (Windows 7).

    Thank you

    My email is [email protected] if you'd like to send me the answer to my query.

    You must activate Photoshop.

    Mylenium

  • While using my phone, the battery sometimes goes down even while it is on the charger

    While using my phone, the battery sometimes goes down even while it is on the charger.

    What charger do you use? That shouldn't happen with the Apple adapter, but it might with adapters that can't provide the current that the phone needs.

  • Apple Music stops after playing a song! I use Windows 10 and the new iTunes. It stops playing after one, sometimes two songs... radio too... Please help. I reinstalled and authorized everything anew...

    Apple music stops after playing a song!

    I use Windows 10 and the latest iTunes on my laptop. It stops playing after one, sometimes two songs... radio too... Please help.

    I reinstalled and authorized everything anew...

    Thank you...

    Markus

    Same thing here on all my Macs and Windows PCs... they are still "working on it" 8 months later... unreal

  • Required: a portable charger for an iPhone 4 that can be used in Germany and the United States

    Required: a portable charger for an iPhone 4 that can be used in Germany and the United States

    Does anyone have recommendations?

    The original charger is international; all you need is a plug adapter.

  • I want to send data from LabVIEW to an Arduino using VISA Write, process it, and take action on the Arduino. A

    I want to send data from LabVIEW to an Arduino using VISA Write, have the Arduino process it and take action. After that, I want the Arduino to send the necessary output back over the serial port to LabVIEW, which should read it using VISA Read and store it in a string. While I am able to write or read individually, I can't do both consecutively. I used the advanced read and write VIs to check my code, but nothing is helping. The error reads "timeout expired before operation completed." Please let me know where I am going wrong. Also, is it possible to write code for the HX711 using LabVIEW?

    1. You don't need "\n" in your println() commands.  That command already appends an end-of-line character to the message.

    2. You get the error because you have a loop around your read.  After the first read (well, technically the second, because you add an extra end-of-line character), there is nothing left in the port.  As a result, you get the timeout.

    3. You should really consider using an Event Structure.  That way you only write and read when you press the Write button, and you can also use the Event Structure to stop the loop.  I would also move closing the port inside the Stop -> Value Change event.

  • Is it possible to watch video remotely using an iPad and the HDR-CX330 camcorder wirelessly?

    Is it possible to watch video remotely using an iPad and the HDR-CX330 camcorder wirelessly? I can see the photos, but I really want to see videos.  I use the PlayMemories app for remote control. Any suggestions?

    Hi bill54107,

    Welcome to the community of Sony!

    Unfortunately, you cannot view recorded videos wirelessly using your mobile device. You can transfer videos to your iPad, but only if they are saved in the MP4 format. AVCHD files cannot be transferred wirelessly to a tablet or a smartphone. Enable the Dual Video REC setting to shoot AVCHD and MP4 at the same time.

    You can only transfer photos and AVCHD videos wirelessly to a computer, using Wireless Auto Import (for Mac) or PlayMemories Home.

    If my post answers your question, please mark it as "Accept as Solution". Thanks! Mitch

  • For now I use PSE 8 and Adobe Bridge for classification, star ratings, and keywords. Can I keep all my work from Bridge if I change to PSE 14 and the Organizer?

    Hello

    For now I use PSE 8 and Adobe Bridge for classification, star ratings, and keywords.

    Can I keep all my work from Bridge if I change to PSE 14 and the Organizer?

    Thank you

    Quit Bridge and try relaunching it while holding down the three keys Cmd + Shift + Optn (Ctrl + Shift + Alt on Windows).

    Choose to reset the prefs.

  • Hi folks, I'm ready to go with the CC versions of LR and PS. I currently use Apple products and iCloud for storing photos; do they stay on iCloud or do they need to go to a CC-related version?  Also, I guess the CC versions of LR

    Hi folks, I'm ready to go with the CC versions of LR and PS. I currently use Apple products and iCloud for storing photos; do the photos stay on iCloud, or do they need to go to a CC-related version?  Also, I guess the CC versions of LR are Mac compatible.  Can they go on several devices, including the cloud itself and mobile devices like the iPad?  Thanks a lot in advance, Dave.

    You can have your photos on iCloud without any problem.  Lightroom CC 2015 is Mac compatible; please check the system requirements of Lightroom CC 2015. You can install Creative Cloud on 2 machines; however, you can't use both at the same time. Check point 1.1: http://wwwimages.adobe.com/content/dam/Adobe/en/legal/servicetou/Software_Terms-en_US-20150407_2200.pdf

    You can use Lightroom mobile on your iPad and iPhone. Please check the Adobe Lightroom mobile FAQ.

    I hope this helps.

  • Max and sum in a query using Group by and dense_rank

    Hi all

    I am running Oracle 10g on Windows 2003.

    I have two tables, RT_DY_ZONE1EDYCONS and MV_ZONE_HOURLY. I need a query that will give me the SUM of MR_OL01_VT from RT_DY_ZONE1EDYCONS for each month, and the maximum values of MR_OL01_FI_0S and MR_OL02_FI_0S from MV_ZONE_HOURLY, together with the time of the maximum value, for each group for the month. I can't combine the two queries I came up with from these forums into a single query.

    I need the following result, any help would be appreciated.
    datetime, SUM of MR_OL01_VT , max value MR_OL01_FI_0S ,max_time MR_OL01_FI_0S , max value MR_OL02_FI_0S ,max_time MR_OL02_FI_0S
    January 2010,8.373765,4.96935,2010-01-15:01,5.96835,2010-01-15:17
    I used the following query to obtain the SUM of the MR_OL01_VT
    SELECT 
        TRUNC(VOL.TIMESTAMP, 'MM') datetime, 
        SUM(VOL.MR_OL01_VT) 
    FROM 
        RT_DY_ZONE1EDYCONS VOL 
    GROUP BY 
        TRUNC(VOL.TIMESTAMP, 'MM')
    ORDER BY
        TRUNC(VOL.TIMESTAMP, 'MM')
    and this query for the maximum value/time MR_OL01_FI_0S and MR_OL02_FI_0S
    select TAG_NAME,
           max(tag_avg) keep (dense_rank last order by tag_avg) over (partition by tag_name) Max_Value,
           max(datetime) keep (dense_rank last order by tag_avg) over (partition by tag_name) AS MAX_DATE
    from mv_zone_hourly
    WHERE tag_name LIKE 'MR_OL0%_FI_0S'
    first table
    CREATE TABLE RT_DY_ZONE1EDYCONS 
       (     TIMESTAMP DATE NOT NULL ENABLE, 
         HB_OL00_VT NUMBER(12,6), 
         OR_RES0_VT NUMBER(12,6), 
         OP_OL01_VT NUMBER(12,6), 
         FP_OL01_VT NUMBER(12,6), 
         BD_OL01_VT NUMBER(12,6), 
         MR_OL01_VT NUMBER(12,6), 
         Z1E_VT NUMBER(12,6)
    )
    with the sample data
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:00','YYYY-MM-DD:HH24'),4.443056,1.088,1.224927,0.663266,0,0.387499,1.079364);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:01','YYYY-MM-DD:HH24'),4.352083,null,0.692914,0.044029,0,0.373536,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:02','YYYY-MM-DD:HH24'),null,null,null,null,0,null,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:03','YYYY-MM-DD:HH24'),4.300694,null,0.662924,0,0,0.380275,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:04','YYYY-MM-DD:HH24'),null,null,null,null,0,null,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:05','YYYY-MM-DD:HH24'),0.025694,null,0.650406,0,0,0.342299,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:06','YYYY-MM-DD:HH24'),null,null,null,null,0,null,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:07','YYYY-MM-DD:HH24'),0.122917,-2.992,0.673062,0,0,0.423474,2.018381);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:08','YYYY-MM-DD:HH24'),0.106944,null,1.27403,0.768119,0,0.449303,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:09','YYYY-MM-DD:HH24'),null,null,null,null,0,null,null);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:10','YYYY-MM-DD:HH24'),0.122917,-2.448,0.637977,0,0,0.418056,1.514884);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:11','YYYY-MM-DD:HH24'),0.183333,-2.992,0.649855,0,0,0.401947,2.123531);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:12','YYYY-MM-DD:HH24'),1.157639,-2.992,1.039931,0.463684,0,0.43389,2.212134);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:13','YYYY-MM-DD:HH24'),4.536111,1.36,0.972226,0.381604,0,0.461941,1.36034);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:14','YYYY-MM-DD:HH24'),4.496528,2.176,0.647979,0,0,0.45611,1.216439);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:15','YYYY-MM-DD:HH24'),4.409028,2.72,0.665355,0,0,0.440141,0.583532);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:16','YYYY-MM-DD:HH24'),4.380556,1.36,0.886389,0.256387,0,0.429446,1.448334);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:17','YYYY-MM-DD:HH24'),4.433333,0.272,1.21716,0.656324,0,0.434169,1.85368);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:18','YYYY-MM-DD:HH24'),4.409722,2.176,0.653266,0,0,0.436253,1.144203);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:19','YYYY-MM-DD:HH24'),4.44375,2.448,0.67917,0,0,0.436947,0.879633);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:20','YYYY-MM-DD:HH24'),4.420833,0,1.273057,0.733813,0,0.428474,1.985489);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:21','YYYY-MM-DD:HH24'),4.390278,2.176,0.895212,0.280419,0,0.418195,0.620452);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:22','YYYY-MM-DD:HH24'),4.336806,1.904,0.670843,0,0,0.412711,1.349252);
    Insert into RT_DY_ZONE1EDYCONS (TIMESTAMP,HB_OL00_VT,OR_RES0_VT,OP_OL01_VT,FP_OL01_VT,BD_OL01_VT,MR_OL01_VT,Z1E_VT) values (to_date('2010-01-15:23','YYYY-MM-DD:HH24'),4.305556,2.448,0.689441,0,0,0.409099,0.759016);
    and the second table
    CREATE TABLE MV_ZONE_HOURLY
    ( TAG_NAME VARCHAR2(30),
      TAG_DESCRIP VARCHAR(100),
      DATETIME DATE,
      TAG_AVG NUMBER(12,6)
    )
    with the sample data
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:00','YYYY-MM-DD:HH24'),4.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:01','YYYY-MM-DD:HH24'),4.96935);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:02','YYYY-MM-DD:HH24'),4.367);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:03','YYYY-MM-DD:HH24'),4.67788);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:04','YYYY-MM-DD:HH24'),4.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:05','YYYY-MM-DD:HH24'),3.23456);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:06','YYYY-MM-DD:HH24'),4.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:07','YYYY-MM-DD:HH24'),4.5890);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:08','YYYY-MM-DD:HH24'),4.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:09','YYYY-MM-DD:HH24'),4.96735);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:10','YYYY-MM-DD:HH24'),4.8456);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:11','YYYY-MM-DD:HH24'),4.2468);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:12','YYYY-MM-DD:HH24'),4.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:13','YYYY-MM-DD:HH24'),3.9746);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:14','YYYY-MM-DD:HH24'),4.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:15','YYYY-MM-DD:HH24'),4.47);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:16','YYYY-MM-DD:HH24'),4.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:17','YYYY-MM-DD:HH24'),4.96835);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:18','YYYY-MM-DD:HH24'),4.6890);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:19','YYYY-MM-DD:HH24'),4.42345);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:20','YYYY-MM-DD:HH24'),4.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:21','YYYY-MM-DD:HH24'),3.4579);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:22','YYYY-MM-DD:HH24'),4.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL01_FI_0S','Montreal Rd Disch Flow 1',to_date('2010-01-15:23','YYYY-MM-DD:HH24'),4.45789);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:00','YYYY-MM-DD:HH24'),5.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:01','YYYY-MM-DD:HH24'),5.97835);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:02','YYYY-MM-DD:HH24'),5.367);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:03','YYYY-MM-DD:HH24'),5.67788);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:04','YYYY-MM-DD:HH24'),5.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:05','YYYY-MM-DD:HH24'),4.23456);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:06','YYYY-MM-DD:HH24'),5.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:07','YYYY-MM-DD:HH24'),5.5890);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:08','YYYY-MM-DD:HH24'),5.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:09','YYYY-MM-DD:HH24'),5.95635);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:10','YYYY-MM-DD:HH24'),5.8456);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:11','YYYY-MM-DD:HH24'),5.2468);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:12','YYYY-MM-DD:HH24'),5.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:13','YYYY-MM-DD:HH24'),4.9746);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:14','YYYY-MM-DD:HH24'),5.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:15','YYYY-MM-DD:HH24'),5.47);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:16','YYYY-MM-DD:HH24'),5.166712);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:17','YYYY-MM-DD:HH24'),5.96835);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:18','YYYY-MM-DD:HH24'),5.6890);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:19','YYYY-MM-DD:HH24'),5.42345);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:20','YYYY-MM-DD:HH24'),5.06335);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:21','YYYY-MM-DD:HH24'),4.4579);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:22','YYYY-MM-DD:HH24'),5.2333555);
    Insert into MV_ZONE_HOURLY (TAG_NAME,TAG_DESCRIP,DATETIME,TAG_AVG) values ('MR_OL02_FI_0S','Montreal Rd Disch Flow 2',to_date('2010-01-15:23','YYYY-MM-DD:HH24'),5.45789);

    Hello

    Thanks for posting the CREATE TABLE and INSERT statements; this is really useful!

    Your voluminous sample data are all for a single month. Since your first query groups by month, I assume that this is not always the case.
    In that case, you can do the GROUP BY queries on the two tables separately, in two subqueries, and then join the results.
    To get separate columns for the values from mv_zone_hourly, I GROUP BY the tag name in the subquery, then pivot them into two columns in the main query. (Note that the query below refers to the date column of rt_dy_zone1edycons as tmstmp; your CREATE TABLE calls it TIMESTAMP, which is an Oracle data type name and is best avoided as a column name, so rename one or the other to match.)

    WITH     edycons_summary       AS
    (
         SELECT       TRUNC (tmstmp, 'MM')     AS mnth
         ,       SUM (mr_ol01_vt)     AS sum_mr_ol01_vt
         FROM       rt_dy_zone1edycons
         GROUP BY  TRUNC (tmstmp, 'MM')
    )
    ,     hourly_summary       AS
    (
         SELECT       tag_name
         ,       TRUNC (datetime, 'MM')     AS mnth
         ,       MAX (tag_avg)                  AS max_avg
         ,       MAX (datetime) KEEP (DENSE_RANK LAST ORDER BY tag_avg NULLS FIRST)
                                              AS max_avg_datetime
         FROM       mv_zone_hourly
         WHERE       tag_name     IN ( 'MR_OL01_FI_0S'
                             , 'MR_OL02_FI_0S'
                           )
         GROUP BY  tag_name
         ,            TRUNC (datetime, 'MM')
    )
    SELECT       TO_CHAR (v.mnth, 'fmMonth YYYY')     AS datetime
    ,       v.sum_mr_ol01_vt
    ,       MAX ( CASE
                     WHEN  h.tag_name = 'MR_OL01_FI_0S'
                  THEN  h.max_avg
                 END
               )                         AS max_avg_01
    ,       MAX ( CASE
                     WHEN  h.tag_name = 'MR_OL01_FI_0S'
                  THEN  h.max_avg_datetime
                 END
               )                         AS datetime_01
    ,       MAX ( CASE
                     WHEN  h.tag_name = 'MR_OL02_FI_0S'
                  THEN  h.max_avg
                 END
               )                         AS max_avg_02
    ,       MAX ( CASE
                     WHEN  h.tag_name = 'MR_OL02_FI_0S'
                  THEN  h.max_avg_datetime
                 END
               )                         AS datetime_02
    FROM       edycons_summary     v
    JOIN       hourly_summary     h     ON     h.mnth     = v.mnth
    GROUP BY  v.mnth
    ,            v.sum_mr_ol01_vt
    ;
    
