Query sorting based on log level

Hello.  I have a query like this:

SELECT DISTINCT
       LOAD_PROF2,
       V_TIME,
       SUBSTATION_CODE,
       CIRCUIT_CODE,
       PROFILE_DAY,
       DECODE (UPPER (PROFILE_DAY),
               'MONDAY', 1,
               'TUESDAY', 2,
               'WEDNESDAY', 3,
               'THURSDAY', 4,
               'FRIDAY', 5,
               'SATURDAY', 6,
               'SUNDAY', 7,
               'HOLIDAY', 8,
               'H_THURSDAY', 9,
               'H_FRIDAY', 10) ORDERBY
FROM LOAD_PROFILE_TEST
WHERE SUBSTATION_CODE = V_SUBSTATION_CODE
  AND CIRCUIT_CODE = V_CIRCUIT_CODE
  AND UPPER (PROFILE_DAY) IN (SELECT UPPER (TO_CHAR (T_DATE + (LEVEL - 1), 'fmDAY'))
                              FROM DUAL
                              CONNECT BY LEVEL <= V_DATE_IN - T_DATE + 1)

ORDER BY ORDERBY, V_TIME;

If I use T_DATE = 10/10/2013 and V_DATE_IN = 13/10/2013, I get output sorted on the ORDERBY column I created, which is

Thursday, Friday, Saturday and Sunday. However, when I use 19/10/2013 as my V_DATE_IN, I get a result sorted Monday, Tuesday, ..., Sunday. How can I sort it so that the first day in the output is always the day of my T_DATE? In this case, with 10/10/2013 as my T_DATE and 19/10/2013 as V_DATE_IN, I should get output sorted Thursday, Friday, ..., Wednesday.

What should I replace my ORDERBY with to achieve this kind of sorting? Thank you
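The rotation being asked for can be expressed with modular arithmetic: subtract T_DATE's weekday number from each day's number and wrap around 7. A minimal sketch of the idea in Python (day numbering 1-7 matching the DECODE above; the corresponding SQL, e.g. something like MOD(ORDERBY - start_day + 7, 7), is only a suggestion and untested):

```python
# Rotate the weekday sort key so that the week starts at T_DATE's weekday.
# day_num: 1=Monday .. 7=Sunday, matching the DECODE in the question.

def rotated_key(day_num, start_day_num):
    """Sort key that places start_day_num first, wrapping around the week."""
    return (day_num - start_day_num) % 7

days = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY",
        "FRIDAY", "SATURDAY", "SUNDAY"]

# T_DATE = 10/10/2013 is a Thursday (day 4), so the week should start there.
ordered = sorted(days, key=lambda d: rotated_key(days.index(d) + 1, 4))
print(ordered)
# ['THURSDAY', 'FRIDAY', 'SATURDAY', 'SUNDAY', 'MONDAY', 'TUESDAY', 'WEDNESDAY']
```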

Can you check if this query is what you wanted and meets your expectations?

Not TESTED on my side...

----------------------------------------

SELECT LOAD_PROF2,
       V_TIME,
       SUBSTATION_CODE,
       CIRCUIT_CODE,
       PROFILE_DAY
FROM LOAD_PROFILE_TEST a,
     (SELECT T_DATE + (LEVEL - 1) dt,
             UPPER (TO_CHAR (T_DATE + (LEVEL - 1), 'fmDAY')) days
      FROM DUAL
      CONNECT BY LEVEL <= V_DATE_IN - T_DATE + 1) b
WHERE SUBSTATION_CODE = V_SUBSTATION_CODE
  AND CIRCUIT_CODE = V_CIRCUIT_CODE
  AND UPPER (a.PROFILE_DAY) = b.days
GROUP BY LOAD_PROF2,
         V_TIME,
         SUBSTATION_CODE,
         CIRCUIT_CODE,
         PROFILE_DAY,
         b.dt
ORDER BY b.dt, v_time;

-------------------------------------------

Regards,

Manik.

Tags: Database

Similar Questions

  • DBMS_SCHEDULER logging level oddity - why?

    I am trying to set up a new 11g database to migrate our 10g databases across to, and I noticed that dbms_scheduler jobs have a logging_level of 'RUNS' on 10g but, despite being created with the same code, on 11g the logging_level is 'OFF'. I've checked everything I can think of, but I do not understand why the logging level is different on 11g.

    Here is my test scenario:

    10g:
    select version
    from   v$instance;
    
    VERSION          
    -----------------
    10.2.0.4.0     
    
    select job_class_name, logging_level
    from   dba_scheduler_job_classes
    where  job_class_name = 'DEFAULT_JOB_CLASS';
    
    JOB_CLASS_NAME                 LOGGING_LEVEL
    ------------------------------ -------------
    DEFAULT_JOB_CLASS              RUNS           
    
    select dbms_scheduler.get_default_value('LOGGING_LEVEL') from dual;
    
    DBMS_SCHEDULER.GET_DEFAULT_VALUE('LOGGING_LEVEL')                               
    --------------------------------------------------------------------------------
    64             
    
    declare
      db_name varchar2(2);
    begin
       dbms_scheduler.create_job( 
        job_name=>'test_job', 
        job_type => 'PLSQL_BLOCK',
        job_action=> 'begin 
      null;
    end; ', 
      start_date => to_timestamp('01/16/2013 12:00:00', 'mm/dd/yyyy hh24:mi:ss'), 
      repeat_interval => 'SYSDATE + 12/24', 
      enabled => true, auto_drop=> false, 
      comments => 'test'
    );
    end;
    /
    
    select job_name, job_type, job_class, logging_level
    from   user_scheduler_jobs
    where  job_name = 'TEST_JOB';
    
    JOB_NAME                       JOB_TYPE         JOB_CLASS                      LOGGING_LEVEL
    ------------------------------ ---------------- ------------------------------ -------------
    TEST_JOB                       PLSQL_BLOCK      DEFAULT_JOB_CLASS              RUNS
    11g:
    select version
    from   v$instance;
    
    VERSION          
    -----------------
    11.2.0.3.0  
    
    select job_class_name, logging_level
    from   dba_scheduler_job_classes
    where  job_class_name = 'DEFAULT_JOB_CLASS';
    
    JOB_CLASS_NAME                 LOGGING_LEVEL
    ------------------------------ -------------
    DEFAULT_JOB_CLASS              RUNS        
    
    select dbms_scheduler.get_default_value('LOGGING_LEVEL') from dual;
    
    DBMS_SCHEDULER.GET_DEFAULT_VALUE('LOGGING_LEVEL')                               
    --------------------------------------------------------------------------------
    64           
    
    declare
      db_name varchar2(2);
    begin
       dbms_scheduler.create_job( 
        job_name=>'test_job', 
        job_type => 'PLSQL_BLOCK',
        job_action=> 'begin 
      null;
    end; ', 
      start_date => to_timestamp('01/16/2013 12:00:00', 'mm/dd/yyyy hh24:mi:ss'), 
      repeat_interval => 'SYSDATE + 12/24', 
      enabled => true, auto_drop=> false, 
      comments => 'test'
    );
    end;
    /
    
    select job_name, job_style, job_type, job_class, logging_level
    from   user_scheduler_jobs
    where  job_name = 'TEST_JOB';
    
    JOB_NAME                       JOB_STYLE   JOB_TYPE         JOB_CLASS                      LOGGING_LEVEL
    ------------------------------ ----------- ---------------- ------------------------------ -------------
    TEST_JOB                       REGULAR     PLSQL_BLOCK      DEFAULT_JOB_CLASS              OFF
    Does anyone have any ideas as to what could be causing the logging level to be set to OFF on the 11g database? I know that I can adjust the logging level manually via the SET_ATTRIBUTE procedure, but nothing I have googled/read in the documentation explains why the level is not set to RUNS on 11g.

    Am I missing something completely obvious?

    Hi D,

    I think that's probably a bug... When you create a job, Oracle inserts the value 164400 into the flags column of sys.scheduler$_job. Looking at the decode used for it, which is

    DECODE (BITAND (j.flags, 32 + 64 + 128 + 256),  32, 'OFF',  64, 'RUNS',  128, 'FAILED RUNS',  256, 'FULL',  NULL)
    

    which gives 32, so it displays as OFF.

    Based on the trace, I think it does not pick up the logging level from sys.scheduler$_class when inserting new jobs. I suspect that this is a bug.
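    The flag arithmetic can be verified directly: Oracle's BITAND is just bitwise AND, so the DECODE above maps 164400 to the OFF flag. A quick check in Python:

    ```python
    # sys.scheduler$_job stores logging options as bit flags (per the DECODE):
    # 32 = OFF, 64 = RUNS, 128 = FAILED RUNS, 256 = FULL
    FLAGS = 164400
    MASK = 32 + 64 + 128 + 256  # the BITAND mask from the DECODE

    result = FLAGS & MASK
    print(result)  # 32 -> decoded as 'OFF', not the expected 64 ('RUNS')
    ```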

    Hope this helps
    Raj J

  • X7.2 VCS log levels - changed?

    Before the upgrade to X7.2 I could set log levels ranging from 1 (least verbose) to 4 (most verbose), where "4" would give me all the details of calls - which is what I need - but after the X7.2 upgrade all I get is the equivalent of '1' - which is not very helpful at all.

    I missed something - or is this a bug?

    Edit: looks like it's a bug, so I guess I'll downgrade to X7.1

    Cheers, Jens

    Hi Jens,

    I reproduced it and raised a bug. Once it finds its way into CDETS I'll make the number available. I also had a brief discussion with the developers on this subject.

    Thank you

    Guy

  • Can we turn off several features in a panel collection, and how do we turn off query-by-example filtering?

    Mr President.

    Can we turn off several features in a panel collection, and how do we turn off query-by-example filtering?

    Concerning

    Once again, no JDev version?

    It must be really hard to remember which version you are working in.

    A quick glance in the docs would have given you:

    featuresOff java.util.Set Yes - a space-separated list of default features to disable on the panelCollection. Supported values are

    That is the answer to one part of the question.

    The second answer is that QBE is the filtering on tables. If you render the table without a filter you don't get the QBE.

    Timo

  • Re: Log level and Cache

    Hi Experts,

    Usually in production we have no query logging here, but for one report I want to set the logging level on the back-end application, and I want to make this report hit the DB instead of hitting the cache. How do I set this up?

    When I set it like this: SET VARIABLE LOGLEVEL = 5; SET VARIABLE DISABLE_CACHE_HIT = 1; it displays an ODBC error.

    Please advise how to achieve this.

    Kind regards

    SET VARIABLE LOGLEVEL = 5, DISABLE_CACHE_HIT = 1;


    Pls mark if it is correct.

  • SFTP Log levels

    I see this in the error logs... Is there a way to change the logging level so that this won't keep filling the logs?

    < failed validation of FTP adapter TEST_ProcessOutbFile SFTP Channel >

    = Security Notice =
    = This system is restricted to authorized Oracle users.      =
    = Unauthorized access may lead to disciplinary action and/or =
    = civil or criminal sanctions. To the extent permitted by law =
    = use of this system may be monitored in accordance with the =
    = terms of Oracle's Acceptable Use Policy.                   =
    = Security Notice =

    <23 June 2014 2:45:53 PM CDT> <Notice> <Stdout> <BEA-000000> <Please log in with your user ID and password.

    = Security Notice =
    = This system is restricted to authorized Oracle users.      =
    = Unauthorized access may lead to disciplinary action and/or =
    = civil or criminal sanctions. To the extent permitted by law =
    = use of this system may be monitored in accordance with the =
    = terms of Oracle's Acceptable Use Policy.                   =
    = Security Notice =

    Please sign in with your user ID and password.

    = Security Notice =
    = This system is restricted to authorized Oracle users.      =
    = Unauthorized access may lead to disciplinary action and/or =
    = civil or criminal sanctions. To the extent permitted by law =
    = use of this system may be monitored in accordance with the =
    = terms of Oracle's Acceptable Use Policy.                   =
    = Security Notice =

    What version of the SOA Suite do you use? Can you give us more excerpts from the log file, with the prefix, say 20-30 more lines on each side?

    It would seem that the messages arrive via stdout, which is probably the third-party Maverick (sshtools) library we use for SFTP.

  • When I try to install Lr CC (2nd install, on a laptop) the procedure goes into an infinite loop of log in/sign in screens, etc. It never actually downloads.

    When I try to install Lr CC (2nd install, on a laptop) the procedure goes into an infinite loop of log in/sign in screens, etc. It never actually downloads.

    I have no problems with the first installation on a desktop.

    You can download it from the link below:

    https://helpx.Adobe.com/Lightroom/KB/Lightroom-downloads.html

    If you had purchased a subscription, you will need to install via Adobe Creative Cloud app.

  • Disable the logging level for individual users

    Hello

    We want to stop logging for individual users. Usually we go to Identity, click on the user and set the log level to '0', but our security setup uses LDAP, so we have no idea how to do this.

    Any ideas?

    Thxs

    SYK

    LDAP?

    You can still see the LDAP users in the RPD via

    Identity Manager -> Action -> Set Online User Filter, then filter on the specific user

  • Log level obiee 11g for application roles

    Hi Experts,

    Is it possible to activate the logging level for an entire application role? For example, I have 20+ users who are BIAdministrators, and rather than enabling the log level for each user (since users will be added to/removed from the BIAdministrator role over time), is it possible to activate the logging level for the BIAdministrator application role itself?
    All of the documentation I have read points to 'no'. It seems that the only options are:


    1) enable the log level for the BISystemUser through the Manage Repository tab in the RPD
    2) set the log level in the reports by prefixing
    3) grant log level rights to individual users

    It looks like you can't turn on the logging level in nqsconfig or at the application role level. Has anyone been able to do this?

    Thank you

    Create a new init block that checks for the specific role and sets the LOGLEVEL number to what you want,
    e.g.: CASE WHEN role = ... THEN 2 ELSE 0 END
    and use LOGLEVEL in the variable section.

    Please mark as helpful if it is.

  • THE LOG LEVEL

    Hello world

    I am using external table authentication, but we do not have a logging level column in the table.
    (Initially we plan to keep the same log level (2) for everyone.)
    My question is how I can get the log level value (2) into a session variable.
    I tried writing SELECT 2 FROM DUAL and giving the default value 2. But no luck...

    Can someone please help me with this issue?
    Thanks in advance

    Hello

    Have you tried selecting from the SYSIBM.SYSDUMMY1 table? It seems that SYSIBM.SYSDUMMY1 is DB2's equivalent of "DUAL": SELECT 2 FROM SYSIBM.SYSDUMMY1

    -Joe

  • The ODI log level

    Hello

    I'm looking for a description of the ODI log levels.
    I know that the logging level controls the detail of the log output in ODI Operator, but is there more detailed documentation?
    I need information such as which details are displayed at which log level.

    Thanks in advance!

    Cheers,
    H.

    Hello

    There are five levels of logging:
    1. shows the beginning and the end of each session
    2. shows level 1 plus the beginning and the end of each step
    3. shows level 2 plus each task executed
    4. shows level 3 plus the SQL queries executed
    5. a full trace, often requested during a support call

    Thank you
    Fati

  • Operator of ODI log - level

    Hello

    What is the significance of the logging level in ODI Operator? I have worked in ODI but never paid attention to this until now... by default it is always *5*...
    How does a change in the log level (up or down) affect the detail of the logs?

    What is the highest log level in ODI Operator?

    Thank you

    SG

    Hi SG,

    The default logging level for Operator is 5.
    This is the highest level of logging available in ODI.

    It gives you all the steps involved in your interface/package/procedure etc.
    If you decrease the logging level, the information shown in Operator will also decrease.

    Thank you
    Fati

  • Oracle query sort by case-sensitivity

    Hi all

    I use the oracle 11g database.

    My use case is that I have a table with the following values

    Name table - test

    product id     productsortdescription
    H58098        ACETAMIDOHYDROXYPHENYLTHIAZOLE
    043994         Alloy .MM.INTHICK
    

    My query is

    select * from test order by productsortdescription;
    

    This query returns the rows in their stored order, i.e.:

    product id     productsortdescription
    H58098        ACETAMIDOHYDROXYPHENYLTHIAZOLE
    043994         Alloy .MM.INTHICK
    

    but the desired output should be as below:

    product id     productsortdescription
    043994         Alloy .MM.INTHICK
    H58098        ACETAMIDOHYDROXYPHENYLTHIAZOLE
    

    since, comparing 'Alloy' and 'ACETAMIDO...' in productsortdescription, the 'l' is lowercase and should sort before the uppercase 'C'.
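    The behaviour can be reproduced outside the database: a plain character-code (binary/ASCII-style) sort puts uppercase 'C' before lowercase 'l', while a collation that ranks lowercase letters first gives the order the poster wants. A small illustration in Python (the key function below is just for demonstration, not Oracle's NLSSORT):

    ```python
    values = ["ACETAMIDOHYDROXYPHENYLTHIAZOLE", "Alloy .MM.INTHICK"]

    # Default sort compares character codes: 'C' (67) < 'l' (108),
    # so the all-uppercase string comes first -- like NLS_SORT = BINARY.
    binary_order = sorted(values)
    print(binary_order[0])  # ACETAMIDOHYDROXYPHENYLTHIAZOLE

    # Ranking lowercase before uppercase (as EBCDIC collation does for letters)
    # flips the comparison at the second character.
    def lower_first(s):
        return [(0, c) if c.islower() else (1, c) for c in s]

    ebcdic_like = sorted(values, key=lower_first)
    print(ebcdic_like[0])  # Alloy .MM.INTHICK
    ```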

    The NLS Session parameters are as follows

    SELECT * FROM NLS_SESSION_PARAMETERS;

    NLS_LANGUAGE             AMERICAN
    NLS_TERRITORY            AMERICA
    NLS_CURRENCY             $
    NLS_ISO_CURRENCY         AMERICA
    NLS_NUMERIC_CHARACTERS   .,
    NLS_CALENDAR             GREGORIAN
    NLS_DATE_FORMAT          DD-MON-RR
    NLS_DATE_LANGUAGE        AMERICAN
    NLS_SORT                 BINARY
    NLS_TIME_FORMAT          HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY        $
    NLS_COMP                 BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP      FALSE

    Please help me with this scenario.

    One option is to use the NLSSORT function. ASCII-based character sets sort uppercase before lowercase, but EBCDIC sorts lowercase before uppercase. So you can use

    with t as (
      select 'H58098' id, 'ACETAMIDOHYDROXYPHENYLTHIAZOLE' str from dual union all
      select '043994' id, 'Alloy .MM.INTHICK' str from dual
    )
    select id, str
    from t
    order by nlssort(str, 'NLS_SORT = EBCDIC');

    ID     STR
    ------ ------------------------------
    043994 Alloy .MM.INTHICK
    H58098 ACETAMIDOHYDROXYPHENYLTHIAZOLE

    Of course, if your strings can contain non-alphanumeric characters, you should check that the EBCDIC sort order is acceptable for them as well. To check this, you can use something like

    with t (str, ascii) as (
      select chr(level + 32), level + 32 from dual connect by level <= 94
    )
    select str, ascii from t order by nlssort(str, 'NLS_SORT = EBCDIC');

    or simply do an internet search on EBCDIC. You can also substitute other linguistic sorts for EBCDIC and see if any of them meet your needs. See Appendix A of the Globalization Support Guide for the list of valid values for NLS_SORT.

    You say that you are on 11g - if you mean 11.2.x, then you can use the LISTAGG function to get a more compact view of the sort order:

    with t (str, ascii) as (
      select chr(level + 32), level + 32 from dual connect by level <= 94
    )
    select listagg(str) within group (order by nlssort(str, 'NLS_SORT = EBCDIC')) as EBCDIC_order
    from t;

    EBCDIC_order


    ----------------------------------------------------------------------------------------------

    . <(+|&!$*);- %_="">?': #@'="abcdefghijklmnopqr~stuvwxyz[^]{ABCDEFGHI}JKLMNOPQR\STUVWXYZ0123456789

    Kind regards

    Bob

  • query rewrite enable cube + skip-level hierarchy problem

    Hello

    can someone help me on the following scenario, please?

    (I use OWB 11.2 client on WIN XP)

    I just created a dimension, i.e. dim_1, with three levels (L1, L2 and L3), using the 'ROLAP: with Cube MV' storage option. The business identifier is a NUMBER column named 'key'. The standard hierarchy of the dimension is
    the default, except that I have defined level L1 as the "skip level" of level L3. As I had hoped, this modified the dimension table so that a key added at level L3 can have an L1 key as its parent.

    Then I designed an ETL map to load the following values:
    Hierarchy (default): L1 -> L2 -> L3
    Key attr.: 1 -> 2 -> 3
    and
    hierarchy (using skip level): L1 -> L3
    Key attr.: 1 -> 4



    This means that L1 key '1' is the parent of keys '3' and '4' of L3.
    Then I created a single-dimension cube with the 'ROLAP: with Cube MV' storage option. The only measure of the cube is named 'N', of type NUMBER. The dimension level of the cube is L3 by default. Then I designed an ETL mapping to load the following values into this measure:
    'N' - 'L3'.'Key'
    __________________
    1 - 3
    2 - 4

    When I refresh the materialized view related to the cube, it returns no rows.

    *1. Why is this happening?*

    But when I first refresh the MV associated with the dimension, and then refresh the cube MV, it returns the following incorrect result:
    Level.Key - N - Count
    ___________________________
    L2.2 - 2 - 1
    L1.1 - 2 - 1

    The correct result should be:

    Level.Key - N - Count
    ___________________________
    L2.2 - 1 - 1
    L1.1 - 3 - 2

    *2. Why is this happening?*

    When I disable query rewrite on the cube, the dimension MV no longer needs to be refreshed first. Refreshing the cube MV then returns roughly the following results:

    Level.Key - N - Count
    ___________________________
    L2.2 - 1 - 1
    L1.1 - 3 - 2
    L2. - 2 - 1

    *3. Is this tied to the ENABLE QUERY REWRITE property of the cube MV?*

    Thanks in advance,

    SMSK.

    Skip-level hierarchies are not supported in cube MVs; see the OLAP documentation:
    http://docs.Oracle.com/CD/E11882_01/OLAP.112/e17123/cubes.htm#CHDHBGGB

    A cube must conform to these requirements before it can be designated a cube materialized view:
    * All dimensions of the cube have at least one level and one level-based hierarchy. Ragged and skip-level hierarchies are not supported. The dimensions must be mapped.
    * All dimensions of the cube use the same aggregation operator, which is SUM, MIN, or MAX.
    * The cube has one or more dimensions and one or more measures.
    * The cube is fully defined and mapped. For example, if the cube has five measures, then all five are mapped to the source tables.
    * The data type of the cube is NUMBER, VARCHAR2, NVARCHAR2, or DATE.
    * The source detail tables support dimension and fact constraints. If these have not been defined, then use the Relational Schema Advisor to generate a script that defines them on the detail tables.
    * The cube is compressed.
    * The cube can include calculated measures, but it cannot support more advanced analytics in a cube script.

    Cheers
    David

  • the 0-7 syslog logging level

    Hello Sir,

    I want to set up a syslog server, and the switches will send their log messages to the syslog server for analysis.

    Please share with me the meanings of level 0 (emergency) through level 7 (debug).

    What level should I set so that I can trace username and configuration changes on the switch?

    Or is there any configuration that can track this and send it to the syslog server?

    Hello

    Would the following be something like what you are looking for?

    http://www.Cisco.com/en/us/docs/iOS/12_3t/12_3t4/feature/guide/gtconlog.html
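    For reference, the standard syslog severity levels (RFC 5424, which Cisco IOS follows) run from 0 (most severe) to 7 (least severe); setting a trap level captures that severity and everything more severe. Configuration-change messages on IOS (e.g. %SYS-5-CONFIG_I) are typically logged at severity 5, so the trap level generally needs to be at least "notifications" to see them. A small lookup sketch:

    ```python
    # Standard syslog severity levels (RFC 5424); Cisco IOS uses the same scale.
    SEVERITIES = {
        0: "emergencies",    # system is unusable
        1: "alerts",         # immediate action needed
        2: "critical",
        3: "errors",
        4: "warnings",
        5: "notifications",  # e.g. IOS %SYS-5-CONFIG_I on a config change
        6: "informational",
        7: "debugging",
    }

    def captured_levels(trap_level):
        """Levels a device sends when the trap level is set: the given
        severity and everything numerically lower (i.e. more severe)."""
        return [lvl for lvl in SEVERITIES if lvl <= trap_level]

    # A trap level of 5 (notifications) captures config-change messages:
    print(captured_levels(5))  # [0, 1, 2, 3, 4, 5]
    ```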
