Update query using CASE sets unmatched rows to NULL
I need to update a column according to the conditions I've used below, but the UPDATE query I wrote also sets non-matching rows to NULL. How can I stop this and keep the old values when no CASE condition matches?
create table sample (name varchar2(10), eno number(10), salary number(10));
insert into sample (name,eno,salary) values ('emp1',1,100);
insert into sample (name,eno,salary) values ('emp2',2,200);
insert into sample (name,eno,salary) values ('emp3',3,300);
select * from sample;
update sample
set salary =
case when salary = 100 then 10000 else
case when salary = 150 then 15000 else
case when salary = 200 then 20000 end end end
where name is not null;
Actual o/p:
emp1 1 10000
emp2 2 20000
emp3 3
Required o/p:
emp1 1 10000
emp2 2 20000
emp3 3 300
Hello,
The WHERE clause controls which rows get updated.
If you do not have a WHERE clause, the UPDATE changes all rows in the table.
update sample
set salary = case
when salary = 100 then 10000
when salary = 150 then 15000
when salary = 200 then 20000
end
where salary IN (100, 150, 200)
;
Note that you do not need to nest CASE expressions here (or almost anywhere else). If the 'salary = 100' condition is true, its corresponding value is returned and the remaining WHEN conditions are not evaluated. Only if the first condition is false will the 'salary = 150' condition be evaluated. (The conditions are mutually exclusive in this example anyway, so it does not matter.)
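Outside Oracle, the fixed statement can be sanity-checked with SQLite from Python; this is just a sketch mirroring the example above, not Oracle code:

```python
import sqlite3

# In-memory database mirroring the sample table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (name TEXT, eno INTEGER, salary INTEGER)")
conn.executemany("INSERT INTO sample VALUES (?, ?, ?)",
                 [("emp1", 1, 100), ("emp2", 2, 200), ("emp3", 3, 300)])

# The WHERE clause restricts the update to rows the CASE can match,
# so the unmatched row (emp3, salary 300) keeps its old value.
conn.execute("""
    UPDATE sample
    SET salary = CASE
                     WHEN salary = 100 THEN 10000
                     WHEN salary = 150 THEN 15000
                     WHEN salary = 200 THEN 20000
                 END
    WHERE salary IN (100, 150, 200)
""")

rows = conn.execute("SELECT name, eno, salary FROM sample ORDER BY eno").fetchall()
print(rows)  # [('emp1', 1, 10000), ('emp2', 2, 20000), ('emp3', 3, 300)]
```

Without the WHERE clause, emp3's salary would become NULL, exactly as in the question.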
Published by: Frank Kulash, June 5, 2012 13:09
Tags: Database
Similar Questions
-
Change / stop a query that is using a bad plan
I use 11.2.0.3. I'm running a script with multiple INSERT INTO ... SELECT statements. One of the inserts has been running for hours because it is using a bad plan due to stale statistics. I've now updated the statistics. Is there any way I can make Oracle restart this insert, or skip it and continue with the other inserts in my script? (I don't want to kill the session; I want the other SQLs to run.)
Also, for the future, is there a way to make Oracle use dynamic sampling rather than stale statistics?
I was able to cancel the query from another session with:
exec
DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS (sid, serial#, 'CANCEL_SQL');
-
Hello Experts
I can't select the record_sequence in the output. Please see the desired output below.
Please help solve this problem.
This is the Oracle version I'm working on:
Oracle Database 11 g Enterprise Edition Release 11.1.0.7.0 - 64 bit Production
With partitioning, OLAP, Data Mining and Real Application Testing options
Thank you
RB
WITH table1 AS
(
SELECT '28' EXAM_CD1, '29' EXAM_CD2, '10' EXAM_CD3, '111' CAND_ID FROM DUAL UNION ALL
SELECT '21', '39', '20', '112' FROM DUAL UNION ALL
SELECT '22', '49', '30', '113' FROM DUAL UNION ALL
SELECT '23', '59', '40', '114' FROM DUAL UNION ALL
SELECT '24', '69', '50', '115' FROM DUAL
)
, table2 AS
(
SELECT '28' EXAM_CD, '111' CANDID, 1 RECORD_SEQ FROM DUAL UNION ALL
SELECT '30', '113', 2 FROM DUAL UNION ALL
SELECT '94', '111', 3 FROM DUAL UNION ALL
SELECT '69', '115', 4 FROM DUAL
)
(
SELECT EXAM_CD, CANDID FROM table2
MINUS
SELECT MAX(CASE L WHEN 1 THEN EXAM_CD1 WHEN 2 THEN EXAM_CD2 ELSE EXAM_CD3 END) exam_code,
CAND_ID
FROM table1,
(SELECT LEVEL L FROM DUAL CONNECT BY LEVEL <= 3)
GROUP BY CAND_ID, L
)
The desired output is:
CAND_ID, EXAM_CD, RECORD_SEQ
111, 94, 3
Hello,
Rb2000rb65 wrote:
The solution does not have to use MINUS; as long as I get my results using the latest features, it's fine with me.
Good idea!
MINUS is not the best tool for this task. The query you posted gets the exam_cd and candid you want, but it can't find the record_seq, because there is nothing like record_seq in table1. You can do it this way:
SELECT * FROM table2 m WHERE NOT EXISTS ( SELECT 1 FROM table1 WHERE m.candid = candid AND m.exam_cd IN ( exam_cd1 , exam_cd2 , exam_cd3 ) ) ;
I guess you could use MINUS, like this:
SELECT * FROM table2 WHERE (exam_cd, candid) IN ( SELECT ... -- The MINUS query you posted goes here ) ;
but it is unnecessarily complicated.
-
Help with a query using SUM OVER (PARTITION BY)
I think I'm missing something small here. I need to compute an ending inventory.
The formula is the following, for store 501:
  closing inventory               7292.19
  - supplies closing stock          30.64
  - closing inventory for buns    1002.34
                                -----------
  = ending inventory              6259.21
I can get the closing inventory with analytics, but not the ending inventory.
My dollar variance is also inflated. It should be 780.55 for store 501.
-= = So I separated it and still get incorrect resultCREATE TABLE "SUBQUERY_CATEGORIES" ( "STOREID" NUMBER NOT NULL ENABLE, "WEEK_NBR" NUMBER, "DESCRIPTION" VARCHAR2(100 BYTE) NOT NULL ENABLE, "OPEN_INVENTORY" NUMBER, "CLOSING_INVENTORY" NUMBER, "TRANSFER_IN_COST" NUMBER, "TRANSFER_OUT_COST" NUMBER, "DELV_COST" NUMBER, "PREV_DELV_COST" NUMBER, "TOTAL_COST" NUMBER, "DOLLAR_VARIANCE" NUMBER ) ; Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Other Foods-I',880.04,837.16,17.32,0,491.92,880.044,837.158,35124.92); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Shortening-I',199.7,200.32,0,0,99.85,199.7,200.324,390.45); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Meat-I',154.69,464.06,168.75,42.188,1406.25,154.688,464.063,1239.85); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Bacon-I',74.99,62.63,19.405,0,154.16,74.992,62.628,1239.85); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Repairs & Maint',0,0,0,0,195,0,0,780.55); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Supplies',25.92,30.64,0,0,139.43,25.923,30.637,37466.58); Insert into SUBQUERY_CATEGORIES 
(STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Drinks-I',585.06,750.36,0,0,715.87,585.058,750.358,8678.93); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Chili Ingridients-I',177.99,214.47,5.918,5.918,302.88,177.995,214.466,4683.32); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Meat-I',540,264.38,14.063,28.125,1181.25,540,264.375,780.89); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Paper-I',955.71,839.86,0,15.308,600.54,955.71,839.859,19131.9); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Supplies',11.78,9.43,0,0,158.85,11.783,9.427,17570.11); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Produce-I',180.5,98.64,0,0,206,180.498,98.638,3904.47); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Condiments-I',170.14,153.46,8.668,0,164.86,170.14,153.456,6819.16); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Repairs 
& Maint',0,0,0,0,500,0,0,390.45); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Fries-I',78.22,120.33,54.15,18.05,631.75,78.217,120.333,619.92); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Paper-I',1113.09,1093.63,50.884,14.07,846.22,1113.089,1093.633,43711.01); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Cheese-I',63.78,58.7,0,0,197.07,63.783,58.704,1171.34); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Produce-I',201.56,304.85,0,0,554.85,201.56,304.847,7805.54); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Buns-I',1064.44,793.73,0,0,191.36,1064.44,793.73,780.89); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Bacon-I',251.95,155.9,0,0,115.62,251.95,155.902,780.89); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Paper-I',806.87,871.74,12.113,8.448,674.56,806.871,871.741,30376.25); Insert into SUBQUERY_CATEGORIES 
(STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Chicken-I',561.16,570.93,94.457,0,1568.81,561.156,570.929,5463.88); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Chicken-I',285.86,534.67,73.007,35.97,1402.86,285.858,534.67,4339.46); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Drinks-I',1061.6,1040.97,0,0,584.59,1061.597,1040.971,5466.26); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Breakfast-I',437.9,376.44,0,0,272.42,437.904,376.438,12488.86); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Condiments-I',173.67,159.72,0,3.251,187.55,173.671,159.721,4294.92); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Chili Ingridients-I',93.59,149.49,2.445,0,253.85,93.594,149.489,3719.54); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Buns-I',873.54,914.48,0,0,441.6,873.54,914.48,1239.85); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) 
values (501,1,'Fries-I',39.11,126.35,36.1,36.1,884.45,39.108,126.35,780.55); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Supplies',4.71,4.71,0,0,195.53,4.713,4.713,27896.56); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Other Foods-I',615.63,627.42,1.701,0,374.4,615.63,627.419,26656.71); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Chicken-I',702.5,471.66,0,65.64,1120.39,702.502,471.664,2733.13); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Dairy-I',176.9,128.3,0,0,332.14,176.904,128.304,5463.88); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Dairy-I',171.78,85.3,0,0,109.38,171.783,85.297,2342.68); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Dairy-I',122.71,89.86,0,0,288.98,122.706,89.856,3719.54); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Cheese-I',140.03,109.46,0,0,131.38,140.028,109.461,1859.77); Insert into SUBQUERY_CATEGORIES 
(STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Produce-I',151.15,169.85,0,3.44,270.85,151.148,169.852,6199.23); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Shortening-I',249.63,259.61,0,0,259.61,249.625,259.61,780.55); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Bacon-I',116.58,115.62,0,0,308.32,116.581,115.62,1561.11); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Repairs & Maint',0,0,0,0,200,0,0,619.92); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Cheese-I',130.9,174.52,0,0,328.45,130.897,174.516,2341.66); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Condiments-I',225.65,207.39,0,0,247.81,225.645,207.394,8586.09); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Chili Ingridients-I',159.45,109.79,5.918,0,159.6,159.45,109.788,2342.68); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values 
(501,1,'Meat-I',752.34,696.09,0,28.125,1800,752.344,696.094,1561.11); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Buns-I',1090.2,1002.34,0,0,524.4,1090.2,1002.34,1561.11); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Fries-I',147.41,88.75,0,0,595.65,147.408,88.746,390.45); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (27,1,'Other Foods-I',658.37,645.92,0,1.701,373.34,658.375,645.925,16789.22); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (501,1,'Drinks-I',1290.03,1153.85,0,0,689.84,1290.032,1153.848,12488.86); Insert into SUBQUERY_CATEGORIES (STOREID,WEEK_NBR,DESCRIPTION,OPEN_INVENTORY,CLOSING_INVENTORY,TRANSFER_IN_COST,TRANSFER_OUT_COST,DELV_COST,PREV_DELV_COST,TOTAL_COST,DOLLAR_VARIANCE) values (25,1,'Shortening-I',159.76,179.73,0,0,199.7,159.76,179.73,619.92); SET DEFINE OFF with categorycosts as ( SELECT storeid , week_nbr , UPPER(TRIM(description)) description , NVL(prev_delv_cost + transfer_in_cost + delv_cost - transfer_out_cost - total_cost, 0) AS cost , open_inventory , dollar_variance , sum(closing_inventory ) over (partition by storeid, week_nbr ) closing_inventory -- , closing_inventory from subquery_categories where storeid = 501 ) , pivoted_cat_costs AS (SELECT storeid , week_nbr , MAX(DECODE(closing_inventory, 0, 0, closing_inventory)) - NVL(MAX(DECODE(UPPER(TRIM(description)), 'SUPPLIES', cost)), 0) - NVL(MAX(DECODE(UPPER(TRIM(description)), 'BUNS-I', cost)), 0) as 
closing_inventory , MAX(DECODE(dollar_variance, 0, 0, dollar_variance)) as dollar_variance , NVL(MAX(DECODE(UPPER(TRIM(description)), 'SUPPLIES', cost)), 0) supplies , NVL(MAX(DECODE(UPPER(TRIM(description)), 'REPAIRS & MAINT', cost)), 0) repairs_and_maint , NVL(MAX(DECODE(UPPER(TRIM(description)), 'BUNS-I', cost)), 0) as buns FROM categorycosts GROUP BY storeid, week_nbr ) select * from pivoted_cat_costs;
The results should be:
SELECT storeid , week_nbr , UPPER(TRIM(description)) description , NVL(prev_delv_cost + transfer_in_cost + delv_cost - transfer_out_cost - total_cost, 0) AS cost , open_inventory , dollar_variance -- , sum(closing_inventory ) over (partition by storeid, week_nbr ) closing_inventory , closing_inventory from subquery_categories;
There is an anomaly in the data, but it should still produce the correct variance; however, the inventory variance my query returns is inflated. Store 501: ending inventory = 6259.21, buns s/b 1002.34, inv variance 780.55. Store 27: ending inventory = 4220, buns s/b 793.73, inv variance 390.45. Store 25: ending inventory = 4283, buns 914.48, inv variance 619.92.
Can someone tell me what I am doing wrong?
Published by: TheHTMLDJ on December 9, 2009 06:42
Added SET DEFINE OFF and removed the schema name from subquery_categories.
Well, I asked for the logic as a specification, not necessarily a query. :)
Here's the query that gets your desired ending inventory:
with categorycosts as ( SELECT storeid , week_nbr , UPPER(TRIM(description)) description , NVL(prev_delv_cost + transfer_in_cost + delv_cost - transfer_out_cost - total_cost, 0) AS cost , open_inventory , dollar_variance , sum(closing_inventory ) over (partition by storeid, week_nbr ) closing_inventory , DECODE(UPPER(TRIM(description)), 'SUPPLIES', closing_inventory) supp_cls_inv , DECODE(UPPER(TRIM(description)), 'BUNS-I', closing_inventory) bun_cls_inv from subquery_categories where storeid = 501 ) , pivoted_cat_costs AS (SELECT storeid , week_nbr , MAX(DECODE(closing_inventory, 0, 0, closing_inventory)) - NVL(MAX(supp_cls_inv), 0) - NVL(MAX(bun_cls_inv), 0) as closing_inventory , MAX(DECODE(dollar_variance, 0, 0, dollar_variance)) as dollar_variance , NVL(MAX(DECODE(UPPER(TRIM(description)), 'SUPPLIES', cost)), 0) supplies , NVL(MAX(DECODE(UPPER(TRIM(description)), 'REPAIRS & MAINT', cost)), 0) repairs_and_maint , NVL(MAX(DECODE(UPPER(TRIM(description)), 'BUNS-I', cost)), 0) as buns FROM categorycosts GROUP BY storeid, week_nbr ) select * from pivoted_cat_costs;
I still don't know the logic (or specification) needed to derive your expected dollar variance.
-
Need help with a query using aggregation
If I have a table, defined as follows:
CREATE TABLE range_test
(
range_id NUMBER(20) NOT NULL,
grade CHAR(1) NOT NULL,
lower_bound_of_range NUMBER(5,2) NOT NULL,
upper_bound_of_range NUMBER(5,2) NOT NULL,
received_date_time_stamp TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
);
And I wanted to query the table to find the range associated with the last row inserted for each 'grade' (for example 'A', 'B', 'C', etc.). How would I go about this?
I want something like the following, but I know that it will not work right:
SELECT
grade,
lower_bound_of_range,
upper_bound_of_range,
MAX(received_date_time_stamp)
FROM
range_test GROUP BY received_date_time_stamp;
Thanks for your help... I am frustrated with this one, and I think it should be possible without PL/SQL (i.e. SQL aggregate functions or subqueries should work).
Perhaps something along the lines of...
SQL> ed Wrote file afiedt.buf 1 select deptno, empno, ename, hiredate 2 from emp 3* order by deptno, empno SQL> / DEPTNO EMPNO ENAME HIREDATE ---------- ---------- ---------- -------------------- 10 7782 CLARK 09-JUN-1981 00:00:00 10 7839 KING 17-NOV-1981 00:00:00 10 7934 MILLER 23-JAN-1982 00:00:00 20 7369 SMITH 17-DEC-1980 00:00:00 20 7566 JONES 02-APR-1981 00:00:00 20 7788 SCOTT 19-APR-1987 00:00:00 20 7876 ADAMS 23-MAY-1987 00:00:00 20 7902 FORD 03-DEC-1981 00:00:00 30 7499 ALLEN 20-FEB-1981 00:00:00 30 7521 WARD 22-FEB-1981 00:00:00 30 7654 MARTIN 28-SEP-1981 00:00:00 30 7698 BLAKE 01-MAY-1981 00:00:00 30 7844 TURNER 08-SEP-1981 00:00:00 30 7900 JAMES 03-DEC-1981 00:00:00 14 rows selected. SQL> ed Wrote file afiedt.buf 1 select deptno, empno, ename, hiredate 2 from ( 3 select deptno, empno, ename, hiredate 4 ,row_number() over (partition by deptno order by hiredate desc) as rn 5 from emp 6 ) 7 where rn = 1 8* order by deptno, empno SQL> / DEPTNO EMPNO ENAME HIREDATE ---------- ---------- ---------- -------------------- 10 7934 MILLER 23-JAN-1982 00:00:00 20 7876 ADAMS 23-MAY-1987 00:00:00 30 7900 JAMES 03-DEC-1981 00:00:00 SQL>
-
Applying formatting to only the query values using CFSPREADSHEET
I'm trying to figure out how to apply my custom border formatting to only my query values in an automated report. Currently I have it hard-coded, but long term that won't work, because new values will be added to the database and the report's query values would be generated without the border formatting unless I change the row numbers manually every time.
Here's the formatting that I use:
format4.topBorder = "Thin";
format4.topBorderColor = "grey_40_percent";
format4.bottomBorder = "Thin";
format4.bottomBorderColor = "grey_40_percent";
format4.leftBorder = "Thin";
format4.leftBorderColor = "grey_40_percent";
format4.rightBorder = "Thin";
format4.rightBorderColor = "grey_40_percent";
SpreadsheetFormatCellRange (report, format4, 2,1,26,8);
Have you tried assigning a variable equal to the number of records you pull from the database? I've done that in several cfspreadsheets and it works, as does defining a counter to count the number of loops through the data.
-
Optimize the query using SUBSTR
Hi, I wrote the following to get the string 'abc' from input strings such as
'2010/abc' and
'2010/inv/abc'.
I have to get the string ('abc') that comes after the last '/'.
In the input strings there can be one or two '/' characters.
So I tried the following:
select substr(substr('2010/abc',(instr('2010/abc','/',1,1)+1),length('2010/abc inst')), (instr(substr('2010/abc',(instr('2010/abc','/',1,1)+1),length('2010/abc inst')), '/',1,1)+1),length (substr('2010/abc',(instr('2010/abc','/',1,1)+1),length('2010/abc inst')))) str from dual
The SELECT above only works up to the 2nd '/'.
Could you please simplify the above query so that it also works for a 3rd or 4th '/'?
Thank you
Hello,
Alternatively, you can use a regular expression if you want:
Scott@my10g SQL>l 1 with data as ( 2 select '2010/abc' str from dual 3 union all select 'inv/2010/abc' from dual 4 union all select 'inv/2010/inv/2010/abc' from dual 5 union all select 'abc' from dual 6 ) 7* select str, regexp_substr(str,'[^/]+$') sstr from data Scott@my10g SQL>/ STR SSTR --------------------- ---------- 2010/abc abc inv/2010/abc abc inv/2010/inv/2010/abc abc abc abc
It takes everything from the last '/' to the end of the string (or from the beginning, if there is no '/' in the source string).
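The '[^/]+$' pattern behaves the same way in most regex engines, so it is easy to test outside the database; here is a Python sketch (not Oracle code):

```python
import re

def last_segment(s):
    # [^/]+$ matches the run of non-slash characters anchored at the end of
    # the string, i.e. everything after the last '/' (or the whole string
    # when there is no '/' at all).
    m = re.search(r"[^/]+$", s)
    return m.group(0) if m else None

for s in ("2010/abc", "inv/2010/abc", "inv/2010/inv/2010/abc", "abc"):
    print(s, "->", last_segment(s))  # every example yields 'abc'
```

The number of '/' characters does not matter, which is exactly what the question asked for.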
-
Need help with a query using the AVG function
First post here.
I am a student taking a SQL class and I'm stuck on a query.
I think I'm close to getting it, but I can't quite get all the way there.
Three tables are involved in this problem. Here is a list of the tables and the fields involved:
orders table: order#, shipstate
orderitems table: order#, isbn, quantity (how many copies of a book were purchased on that order)
books table: isbn, retail (retail price of the book)
I want to get an average of "total amount" by shipstate.
For example, in these tables, there are 8 records of the State of Florida.
However, there are only 5 unique order # for this State.
The amount of retail * quantity for these 8 records (or 5 orders) is $345.10.
Now, to get my average, $345.10 should be divided by 5 (the number of unique orders).
The following query divides that $345.10 by 8 (the number of records).
How can I make this query divide by the number of unique order#s rather than the number of records?
I believe that once I get this part down, I can figure out the rest of the problem.
SELECT shipstate, AVG(quantity * retail) FROM orders JOIN orderitems USING (order#) JOIN books USING (isbn) GROUP BY shipstate HAVING SUM(quantity * retail) =ANY (SELECT SUM(quantity * retail) FROM books JOIN orderitems USING (isbn) JOIN orders USING (order#) GROUP BY shipstate)
The end result I need is to find all the individual orders that have a "total amount due" greater than the "average amount due" for that customer's state.
Any help, suggestions or comments welcome.
Matt
Your AVG takes all 8 records per shipstate into account; that won't give the expected results.
With some sample input data it would be easier, but here's a try:
SELECT shipstate, sum(quantity * retail)/count(distinct order#) FROM orders JOIN orderitems USING (order#) JOIN books USING (isbn) GROUP BY shipstate;
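The divide-by-distinct-orders arithmetic can be checked with a tiny made-up data set; the tables and values below are hypothetical, and order# is renamed order_no since '#' needs quoting outside Oracle (SQLite from Python, just a sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders     (order_no INTEGER, shipstate TEXT);
    CREATE TABLE orderitems (order_no INTEGER, isbn TEXT, quantity INTEGER);
    CREATE TABLE books      (isbn TEXT, retail REAL);
    -- Two FL orders; order 1 has two line items, so the join yields 3 rows.
    INSERT INTO orders     VALUES (1, 'FL'), (2, 'FL');
    INSERT INTO orderitems VALUES (1, 'a', 1), (1, 'b', 2), (2, 'a', 1);
    INSERT INTO books      VALUES ('a', 10.0), ('b', 20.0);
""")

# Dividing by COUNT(DISTINCT order_no) gives the average per order (2 here),
# not per joined row (3 here) -- the same idea as the answer above.
row = conn.execute("""
    SELECT o.shipstate,
           SUM(oi.quantity * b.retail) / COUNT(DISTINCT o.order_no) AS avg_per_order
    FROM orders o
    JOIN orderitems oi ON oi.order_no = o.order_no
    JOIN books b       ON b.isbn = oi.isbn
    GROUP BY o.shipstate
""").fetchone()
print(row)  # total 1*10 + 2*20 + 1*10 = 60.0 over 2 distinct orders -> ('FL', 30.0)
```

Plain AVG(quantity * retail) over the same join would divide by 3 rows instead of 2 orders.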
Nicolas.
delete the alias
Edited by: N. Gasparotto on October 3, 2008 19:28 -
Help with a query using CASE and COUNT
Hello!
I'm having trouble with a small part of the query below, which ultimately calculates the invoice price.
(count(distinct spm97.sample_id) * 36.1) as "PROFILE",
case count(distinct spm33.sample_id)
when (count(distinct spm97.sample_id) = 0) then (count(distinct spm33.sample_id) * 26)
else (count(distinct spm33.sample_id) * 4.75)
end as "CC",
The first line works; 0 or 36.10 is returned depending on whether or not a profile has been ordered for the sample.
The cost of CC is 4.75 if a profile was also ordered; if it wasn't, the cost is 26. The CASE statement is supposed to check this and return the correct amount, but I can't make it work. I get an error message that a closing parenthesis is missing somewhere in the middle of line 3, if that helps.
Any advice would be much appreciated! Of course, this isn't the entire query; the joins and everything else work fine. It's just this little section with the COUNT that I can't get right.
Thank you
JO
The piece of code below seems incorrect to me:
CASE COUNT(DISTINCT spm33.sample_id) WHEN ( COUNT(DISTINCT spm97.sample_id) = 0 ) THEN (COUNT(DISTINCT spm33.sample_id) * 26) ELSE (COUNT(DISTINCT spm33.sample_id) * 4.75) END AS "CC"
It seems that the requirement here is: if COUNT(DISTINCT spm97.sample_id) = 0, then "CC" should be COUNT(DISTINCT spm33.sample_id) * 26; otherwise it should be COUNT(DISTINCT spm33.sample_id) * 4.75.
If that's the case, the statement should be:
CASE COUNT(DISTINCT spm97.sample_id) WHEN 0 THEN (COUNT(DISTINCT spm33.sample_id) * 26) ELSE (COUNT(DISTINCT spm33.sample_id) * 4.75) END AS "CC"
-
Hello.
I currently have a datablock which, once queried, brings back a list of records. I'm looking to refine this list by specifying a from_date and to_date so that it retrieves only records >= from_date and <= to_date. The field that must fall between these dates is reg_date.
In my pre-query trigger I have:
:car.reg_date := *don't know what to put here*
Any help on this would be great, thanks.
Published by: user13390506 on 02-Sep-2010 07:10
I suggest you explicitly cast the values:
Declare LC$Where Varchar2(1000); Begin LC$Where := 'date between TO_DATE(''' || TO_CHAR(:blk.date_start, 'DD.MM.YYYY') || ''',''DD.MM.YYYY'') and TO_DATE(''' || TO_CHAR(:blk.date_end, 'DD.MM.YYYY') || ''',''DD.MM.YYYY'')' ; Set_Block_Property( 'BLK', DEFAULT_WHERE, LC$Where ) ; Go_Block ('BLK' ) ; Execute_query ; End;
-
Forming a query using XMLForest
Hello
I need the code for the following. I tried, but I'm getting extra tags.
Create table patient (pat_mrn varchar2(100)) ; Create table encount (pat_mrn varchar2(100), encounter_id varchar2(1000)); Create table oper (encounter_id varchar2(1000), comp_name varchar2(1000)); Insert into patient values ('63280'); Insert into encount values ('63280', '42'); Insert into oper values (42, 'sugar'); Insert into oper values (42, 'sbp'); Insert into oper values (42, 'dbp'); CREATE OR REPLACE TYPE COMPONENT AS OBJECT ( "ID" VARCHAR2(1000)); CREATE OR REPLACE TYPE component_list_t AS TABLE OF COMPONENT; CREATE OR REPLACE TYPE cm_results_o_t AS OBJECT (RES_LIST component_list_t); O/p required : <Patient> <pat_mrn> 63280 </pat_mrn> <Results> <Component> <ID>sugar</ID> </Component> <Component> <ID>sbp</ID> </Component> <Component> <ID>dbp</ID> </Component> </Results> </patient>
The code I wrote:
Select P.PAT_MRN, XMLELEMENT("Patient", (XMLELEMENT("pat_mrn", P.pat_mrn)), (XMLELEMENT("Results", XMLForest(cm_results_o_t(CAST(MULTISET (SELECT O.COMP_NAME AS "ID" FROM oper O WHERE O.ENCOUNTER_ID = E.ENCOUNTER_ID) AS component_list_t)) AS "Results")))) AS Orderxml FROM PATIENT P JOIN ENCOUNT E ON P.PAT_MRN = E.PAT_MRN AND P.PAT_MRN = '63280' AND E.ENCOUNTER_ID = 42
As you can clearly see, there are a lot of extra tags. The output I get:
<Patient> <pat_mrn>63280</pat_mrn> <Results> <Results> <RES_LIST> <COMPONENT> <ID>sugar</ID> </COMPONENT> <COMPONENT> <ID>sbp</ID> </COMPONENT> <COMPONENT> <ID>dbp</ID> </COMPONENT> </RES_LIST> </Results> </Results> </Patient>
I'm new to XML, so any help is appreciated.
Thank you.
The following is one variant that produces the desired result:
SELECT p.pat_mrn, XMLElement("Patient", XMLElement("pat_mrn", P.pat_mrn), (SELECT XMLAgg(XMLElement("Component", XMLElement("ID",o.comp_name))) FROM oper o WHERE e.encounter_id = o.encounter_id)) FROM patient p INNER JOIN encount e ON (P.PAT_MRN = E.PAT_MRN AND P.PAT_MRN = '63280' AND E.ENCOUNTER_ID = 42);
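To see the nesting that the XMLAgg approach produces, the same shape can be built procedurally; here is a Python sketch with the component names hard-coded from the example (not database code):

```python
import xml.etree.ElementTree as ET

# Rows as they would come back from the oper table for encounter 42.
components = ["sugar", "sbp", "dbp"]

patient = ET.Element("Patient")
ET.SubElement(patient, "pat_mrn").text = "63280"
results = ET.SubElement(patient, "Results")
# One <Component><ID>..</ID></Component> per row -- the XMLAgg equivalent:
# aggregate the per-row elements directly under <Results>, with no
# intermediate wrapper elements.
for name in components:
    comp = ET.SubElement(results, "Component")
    ET.SubElement(comp, "ID").text = name

xml = ET.tostring(patient, encoding="unicode")
print(xml)
```

The extra <Results>/<RES_LIST>/<COMPONENT> layers in the original output came from serializing the collection type itself rather than aggregating plain elements.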
-
Hello
I'm new to Oracle development.
Oracle 10g Release 2
My original query:
SELECT app, COUNT(*)
FROM transaction
WHERE type IN ('ShipmentConfirmPR', 'ShipmentConfirm', 'ShipConfirm')
  AND app IN ('SAPPI', 'SAPPI', 'SAPR3')
  AND step IN ('9320', '9332', '1208')
GROUP BY app, step
ORDER BY app
the result of this query:
SAPPI 100
SAPPI 600
SAPR3 440
My requirement
I want the output to be something like:
LDCS 100
TSW 600
PDC 440
i.e. the APP and STEP combinations must return the specified values:
SAPPI & 9320 -> LDCS (here the APP, SAPPI, happens to be the same for two combinations, but that's a coincidence; the APP values can differ too)
SAPPI & 9332 -> TSW
SAPR3 & 1208 -> PDC
Options I tried:
A query provided by one of the forum members:
SELECT CASE
         WHEN app = 'SAPP1' THEN DECODE(step, '9320', 'LSW', '9332', 'TSW')
         WHEN app = 'SAPR3' AND step = '1208' THEN 'PDC'
       END app,
       COUNT(*)
FROM transaction
WHERE type IN ('ShipmentConfirmPR', 'ShipmentConfirm', 'ShipConfirm')
  AND app IN ('SAPPI', 'SAPPI', 'SAPR3')
  AND step IN ('9320', '9332', '1208')
GROUP BY app, step
ORDER BY app;
EXECUTION PLAN
-----------------------------------------------------------
| Id | Operation                      | Name            |
-----------------------------------------------------------
|  0 | SELECT STATEMENT               |                 |
|  1 |  SORT GROUP BY                 |                 |
|  2 |   INLIST ITERATOR              |                 |
|  3 |    TABLE ACCESS BY INDEX ROWID | TRANSACTION     |
|  4 |     INDEX RANGE SCAN           | TRANSACTION_IDX |
-----------------------------------------------------------
The output of the query above must partially match the following query (for one particular combination):
SELECT COUNT(1)
FROM tibco.transaction_history
WHERE type = 'ShipmentConfirm'
  AND app IN ('SAPPI')
  AND step IN ('9332')
My Questions:
1. There are indexes on all 3 columns, i.e. APP, STEP and TYPE. I don't want a full table scan (I would rather use the indexes). Can we change the query / use the indexes, etc. to make it faster?
2. Is this the right approach? Would using the concatenation operator inside the DECODE function work better for my needs?
Something like:
select decode(APP || STEP, 'SAPP9332', 'X') from TRANSACTION_HISTORY where <COND>
If yes, can you please provide the query?
3. Any other approach / query for my requirement?
Thanks in advance.

Hello,
user13517642 wrote:
1. There are indexes on all 3 columns, i.e. APP, STEP and TYPE. I don't want a full table scan (I would rather use the indexes). Can we change the query / use the indexes, etc. to make it faster?
A full table scan might be the fastest way to get the results. Do you have any reason to think it would be faster to go through the indexes? How selective are the indexes? In other words, what percentage of rows in the table matches each of the values in the WHERE clause?
2. Is this the right approach?
It depends on what you're trying to do, which is not at all clear to me.
Would using the concatenation operator inside DECODE work better for my needs?
Something like: select decode(APP || STEP, 'SAPP9332', 'X') from TRANSACTION_HISTORY where <COND>
If you use this approach, watch out for the following problem. For example, if you have these 4 rows and 2 columns:
str1    str2
----    ----
(NULL)  FOO
F       OO
FO      O
FOO     (NULL)
There are 4 distinct values of str1 (counting NULL) and 4 distinct values of str2, but str1 || str2 is the same for all of them. There is no way to tell, just by looking at the concatenated string, where str1 ends and str2 begins. Maybe that's not the case for your specific data (for example, if APP is always exactly 5 characters long); otherwise, you may need to add some kind of delimiter, like this:
app || '+' || step
where you know that '+' never occurs in either of these columns.
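The ambiguity described above is easy to verify outside the database. Here is a small Python sketch (the helper function and sample data are hypothetical, mimicking Oracle's || operator, where NULL concatenates like an empty string):

```python
# The 4 sample rows from the answer above.
rows = [(None, "FOO"), ("F", "OO"), ("FO", "O"), ("FOO", None)]

def concat(a, b, sep=""):
    # Mimic Oracle's || operator: NULL behaves like an empty string.
    return (a or "") + sep + (b or "")

# Without a delimiter, all 4 distinct (str1, str2) pairs collapse to 'FOO'.
plain = {concat(a, b) for a, b in rows}

# With a delimiter that never occurs in the data, each pair stays distinct.
delimited = {concat(a, b, "+") for a, b in rows}

print(len(plain), len(delimited))   # prints: 1 4
```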
3. Any other approach / query for my requirement?
A CASE expression, as I mentioned in your previous thread:
Help with the DECODE function
and as you used above. In that thread, you said "I have to use the decode function." Why? Is this a school exercise where DECODE is explicitly required? Why don't you want the best way, whatever that turns out to be?
Your WHERE clause:
AND APP IN ('SAPPI', 'SAPPI', 'SAPR3') AND STEP IN ('9320', '9332', '1208')
allows 6 possible combinations of APP and STEP:
app    step
-----  ----
SAPPI  9320
SAPPI  9332
SAPPI  1208
SAPR3  9320
SAPR3  9332
SAPR3  1208
but you are looking for only 3 of these combinations in your DECODE or CASE expression. (Having 2 copies of 'SAPPI' in the list does no good, but no real harm either.)
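The effect of mapping only 3 of the 6 allowed combinations can be reproduced with SQLite via Python's `sqlite3` module (a stand-in for Oracle here; the LDCS/TSW/PDC labels come from the question, and the extra ('SAPPI', '1208') row is invented to show the problem):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transaction_history (app TEXT, step TEXT)")
conn.executemany(
    "INSERT INTO transaction_history VALUES (?, ?)",
    [("SAPPI", "9320"), ("SAPPI", "9332"), ("SAPR3", "1208"),
     ("SAPPI", "1208")])   # this pair passes the WHERE clause but is unmapped

rows = conn.execute("""
    SELECT CASE
             WHEN app = 'SAPPI' AND step = '9320' THEN 'LDCS'
             WHEN app = 'SAPPI' AND step = '9332' THEN 'TSW'
             WHEN app = 'SAPR3' AND step = '1208' THEN 'PDC'
           END AS label,
           COUNT(*)
    FROM transaction_history
    WHERE app IN ('SAPPI', 'SAPR3')
      AND step IN ('9320', '9332', '1208')
    GROUP BY label
""").fetchall()
print(rows)   # the unmapped row comes back with a NULL (None) label
```

Restricting the WHERE clause to the exact pairs (or adding an ELSE branch to the CASE) is what keeps NULL labels out of the result.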
By the way, is the app 'SAPPI', with the letter 'I' at the end, or 'SAPP1', with the number '1' at the end?
Published by: Frank Kulash, March 24, 2011 19:44
-
How to handle the Go button event in a query region?
Hello
I created a page using a query region; the Go and Clear buttons are created automatically in the query page.
I need to perform a check on this button.
How can I capture the Go button's event?
Please help!

Hi, use the code below in the controller:
OAQueryBean queryBean = (OAQueryBean) webBean.findChildRecursive("");
String goBtnName = queryBean.getGoButtonName();
if (pageContext.getParameter(goBtnName) != null)
{
    // your logic here
}
Thank you
Pratap -
Query performance is not stable.
Hello
I created an index on a field, then executed a query operation that took 155 seconds through the table API.
A few minutes later, I ran the same operation with the same query conditions, and it took 6 seconds!
If I keep running the same query operation, the time spent stays stable at 6 seconds.
The same thing also happens when I run the query using the CLI.
What causes this? I find it strange and interesting.
Servers such as database servers are long-running, so the cache is normally warm. Testing query performance when the cache is cold is not the normal case; testing with a warm cache is usually more relevant.
I suggest you take a step back and ask what your goal is with these tests.
-mark
-
Sorting a query according to the data type within a single field.
Hello
I have a varchar field in a table which contains both numeric and alphanumeric data. I need to sort the query using this field.
Numeric data should sort as numbers; other data should sort as varchar.
Is this possible in Oracle? If so, please help me get it.

Hello,
I would do something like this:
[11.2] Scott @ My11g > !cat t.sql
with t(n) as (
  select ' 123'     from dual union all
  select '123CAD'   from dual union all
  select '123TAD'   from dual union all
  select '123'      from dual union all
  select '1234 '    from dual union all
  select '11111'    from dual union all
  select 'zzrytarz' from dual
)
------ end of sample data ------
select n
-- uncomment the 2 following lines to ease understanding:
--,case when regexp_like(trim(n),'^\d+$') then 1 else 2 end
--,case when regexp_like(trim(n),'^\d+$') then to_char(to_number(n),'fm00000000000000000000') else n end
from t
order by case when regexp_like(trim(n),'^\d+$') then 1 else 2 end
        ,case when regexp_like(trim(n),'^\d+$') then to_char(to_number(n),'fm00000000000000000000') else n end
/
[11.2] Scott @ My11g > @t

N
-------------
 123
123
1234
11111
123CAD
123TAD
zzrytarz

7 rows selected.
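The same two-part sort key translates directly into Python, which makes the trick easy to test on your own data. This is just an illustration of the ordering logic, not the Oracle solution itself (the sample values are copied from the answer above; the key function is made up):

```python
import re

data = [' 123', '123CAD', '123TAD', '123', '1234 ', '11111', 'zzrytarz']

def sort_key(s):
    # Mirror the two CASE keys: numeric strings first, ordered by their
    # numeric value; everything else afterwards, ordered as plain text.
    if re.fullmatch(r'\d+', s.strip()):
        return (1, int(s), '')
    return (2, 0, s)

print(sorted(data, key=sort_key))
```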