Duplicate line Macro information
Hello
I'm having some trouble creating a button, or a macro, in this PDF file.
I currently have a row of input cells that I would like to reproduce with the click of a button.
The reason I do not simply repeat these rows up front is, first, to keep the file size down, and second, to let users build their own application form with exactly the number of items they need.
I am running Adobe LiveCycle Designer ES 8.2.1
Thank you
If you want to publish the form to [email protected] I will take a look. Please include a description of what you're trying to do.
Paul
Tags: Adobe LiveCycle
Similar Questions
-
Change the size of the table and remove duplicate lines
Hello
I run a program with a for loop that executes 3 times. On each iteration, x and y data points are collected (10 per iteration in this example). How can I output the combined data as a single 2D array (size 2 x 30)?
Also, from that array, how can I delete duplicate rows, i.e. rows that contain the same value of x?
Thank you
Hi Ni.
It would probably have helped to attach a file that contains consecutive duplicate lines.
Here is a simple solution.
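Since the attached solution is not available here, a minimal sketch of the idea in Python (the data values are made up: merge the 3 loop iterations into one list of 30 points, then drop any row whose x value already appeared):

```python
# three iterations of 10 (x, y) points each (hypothetical sample data)
chunks = [[(i, i * 2) for i in range(10)] for _ in range(3)]

# flatten into one 30-point list (the "2 x 30 array")
points = [p for chunk in chunks for p in chunk]

# keep only the first row seen for each x value
seen, deduped = set(), []
for x, y in points:
    if x not in seen:
        seen.add(x)
        deduped.append((x, y))
```

In a dataflow language like LabVIEW the same effect comes from concatenating the per-iteration arrays at the loop boundary and then filtering on previously seen x values.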
-
Oracle Answers and duplicate lines
Hello
I have a SQL view called "DUP" which shows all rows that are the same
on the 4 main fields INCNUM, TAKE, SAMP, CORR.
Now, some rows are duplicates only with respect to these 4 key columns, BUT
there are also rows that are 100% duplicates.
Oracle Answers seems to hide the rows that are 100% duplicates (identical in every column, not just the 4), even though there is no DISTINCT in the SQL.
How can I avoid this behavior?
Also, how can I create a whole new second report showing only the 100% duplicate rows, i.e.
those that are identical in every single column?
Thank you
metalray
You just have to do the reverse then. Uncheck the DISTINCT supported feature in the RPD (database features) and there will be no DISTINCT in your SQL.
Thank you
Prash
-
Can I create an index on a column containing duplicate rows?
Hello
I tried to create a unique index on an existing table whose column contains duplicate rows.
I want to create an index of some type on this column of duplicate rows.
If it is possible, please suggest how.
Kind regards
Vincent.

Yes.
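The one-word answer presumably means an ordinary (non-unique) index is possible; a UNIQUE index cannot be created while the column still contains duplicates. A quick illustration using SQLite from Python (the table and index names here are invented for the demo; Oracle raises ORA-01452 in the same situation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (2,)])

# a UNIQUE index fails on a column that already contains duplicates...
try:
    con.execute("CREATE UNIQUE INDEX t_uq ON t (col)")
    unique_ok = True
except sqlite3.IntegrityError:
    unique_ok = False

# ...but an ordinary (non-unique) index is created without complaint
con.execute("CREATE INDEX t_ix ON t (col)")
nonunique_ok = True
```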
-
Hello
I have a report that shows some duplicate records, and I need to eliminate the duplicate rows in the report. How can I hide the duplicated rows in the report layout so that only one row appears on the report, instead of the same row appearing more than once? I am using BI Publisher 11.1.1.7. Thank you.
It is advisable to eliminate the duplicate rows in the data model; however, if you do not want to do it in the model, see below.
Assuming you are using an RTF template,
change your for-each tag to something like this.
If you don't succeed, please post your template and some sample XML data.
-
Hello
We need help removing duplicate rows from a table please.
We get the following table:
We discovered that some person_ids are repeated for a particular event (3):

SSD@ermd> desc person_pos_history
 Name                        Null?    Type
 --------------------------- -------- ------------
 PERSON_POSITION_HISTORY_ID  NOT NULL NUMBER(10)
 POSITION_TYPE_ID            NOT NULL NUMBER(10)
 PERSON_ID                   NOT NULL NUMBER(10)
 EVENT_ID                    NOT NULL NUMBER(10)
 USER_INFO_ID                         NUMBER(10)
 TIMESTAMP                   NOT NULL DATE
If we look at person_id "217045", we can see that it repeats 356 times for event id 3.

select PERSON_ID, count(*)
from person_pos_history
group by PERSON_ID, EVENT_ID
having event_id = 3 and count(*) > 1
order by 2;

 PERSON_ID   COUNT(*)
---------- ----------
    217045        356
    216993        356
    226198        356
    217248        364
    118879        364
    204471        380
    163943        384
    119347        384
    208884        384
    119328        384
    218442        384
    141676        480
    214679        495
    219149        522
    217021        636
It is prudent to assume that, per person and event, the row with the earliest timestamp is the one that was entered first; that is the one we want to keep, and the rest should be deleted.

SSD@ermd> select PERSON_POSITION_HISTORY_ID, POSITION_TYPE_ID, PERSON_ID, EVENT_ID,
                 to_char(timestamp, 'YYYY-MM-DD HH24:MI:SS')
          from person_pos_history
          where EVENT_ID = 3
          and person_id = 217045
          order by timestamp;

PERSON_POSITION_HISTORY_ID POSITION_TYPE_ID  PERSON_ID   EVENT_ID TO_CHAR(TIMESTAMP,'
-------------------------- ---------------- ---------- ---------- -------------------
                    222775               38     217045         03 2012-05-07 10:29:49
                    222774               18     217045         03 2012-05-07 10:29:49
                    222773                8     217045         03 2012-05-07 10:29:49
                    222776                2     217045         03 2012-05-07 10:29:49
                    300469               18     217045         03 2012-05-07 10:32:05
...
...
                   4350816               38     217045         03 2012-05-08 11:12:43
                   4367970                2     217045         03 2012-05-08 11:13:19
                   4367973                8     217045         03 2012-05-08 11:13:19
                   4367971               18     217045         03 2012-05-08 11:13:19
                   4367972               38     217045         03 2012-05-08 11:13:19

356 rows selected.
Could you kindly help us with the SQL for removing the duplicates, please.
Regards

Hello
rsar001 wrote:
... We must keep one row of data for each unique position_type_id. The query you kindly provided deletes rows, but it keeps only one row for person_id 119129 regardless of the position_type_id.

In fact, the statement I posted earlier keeps one row for each distinct combination of person_id and event_id; that is what the analytic PARTITION BY clause means. That would have been clearer if you had posted a different set of sample values. You posted a rather large sample set, about 35 rows, but every row has the same person_id, and every row has the same event_id. In your real table, you probably have different values in these columns. If so, you should test with data that looks more like your real table, with different values in those columns.
How can we change the query so that it takes the position_type_id into account before deleting the duplicate rows, please?
You can include position_type in the analytic PARTITION BY clause. Maybe this is what you want:
...
, ROW_NUMBER () OVER ( PARTITION BY  person_id
                                  ,  event_id
                                  ,  position_type   -- ***** NEW *****
                       ORDER BY      tmstmp
                     )  AS r_num
...
Without CREATE TABLE and INSERT statements for the sample data, and the desired results from that data, I'm only guessing.
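A sketch of the suggested fix, run against SQLite through Python's sqlite3 module (window functions need SQLite 3.25+). The table below is a cut-down, hypothetical version of person_pos_history; the DELETE keeps the earliest-timestamped row per (person_id, event_id, position_type_id):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
con.execute("""CREATE TABLE person_pos_history
               (id INTEGER, position_type_id INTEGER, person_id INTEGER,
                event_id INTEGER, ts TEXT)""")
rows = [
    (222775, 38, 217045, 3, '2012-05-07 10:29:49'),
    (222773,  8, 217045, 3, '2012-05-07 10:29:49'),
    (300469, 38, 217045, 3, '2012-05-07 10:32:05'),   # later duplicate, type 38
    (4367973, 8, 217045, 3, '2012-05-08 11:13:19'),   # later duplicate, type 8
]
con.executemany("INSERT INTO person_pos_history VALUES (?,?,?,?,?)", rows)

# keep rn = 1 (earliest ts) in each (person_id, event_id, position_type_id)
con.execute("""
    DELETE FROM person_pos_history
    WHERE rowid NOT IN (
        SELECT rid FROM (
            SELECT rowid AS rid,
                   ROW_NUMBER() OVER (
                       PARTITION BY person_id, event_id, position_type_id
                       ORDER BY ts) AS rn
            FROM person_pos_history)
        WHERE rn = 1)""")

kept = sorted(r[0] for r in con.execute("SELECT id FROM person_pos_history"))
```

Only the two 10:29:49 rows survive; the later duplicates of position types 8 and 38 are removed.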
-
SQL - remove duplicate lines based on a date or other column
Hello people, is it possible to select only the latest row per key column in a query?
say that my query returns:
book_id, date, price
===========
19, 10-JAN-2012, 40
19, 16-MAY-2012, 35
18, 10-MAY-2012, 21
And I want it to return:
book_id, date, price
===========
19, 16-MAY-2012, 35
18, 10-MAY-2012, 21
In other words, book_id 19 appears twice at the start, but I want to get back only the latest version of it (based on the date column), along with all the other rows that have no duplicates.
Is it possible to do this?
Thank you very much

Hello
Here's one way:
WITH got_r_num AS
(
    SELECT  table_x.*
    ,       ROW_NUMBER () OVER ( PARTITION BY  book_id
                                 ORDER BY      dt DESC NULLS LAST
                               ) AS r_num
    FROM    table_x
)
SELECT  *      -- or list all columns except r_num
FROM    got_r_num
WHERE   r_num = 1
;
If the combination (book_id, dt) is unique, then you can also use the LAST aggregate function.
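For anyone who wants to try it, here is the same ROW_NUMBER pattern run against SQLite via Python's sqlite3 module (window functions need SQLite 3.25+; NULLS LAST is omitted because the sample dates are never null, and the dates are stored in ISO format so string ordering works):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
con.execute("CREATE TABLE table_x (book_id INTEGER, dt TEXT, price INTEGER)")
con.executemany("INSERT INTO table_x VALUES (?,?,?)", [
    (19, '2012-01-10', 40),
    (19, '2012-05-16', 35),
    (18, '2012-05-10', 21),
])

# rank each book's rows newest-first, then keep only rank 1
rows = con.execute("""
    WITH got_r_num AS (
        SELECT table_x.*,
               ROW_NUMBER() OVER (PARTITION BY book_id
                                  ORDER BY dt DESC) AS r_num
        FROM table_x)
    SELECT book_id, dt, price
    FROM got_r_num
    WHERE r_num = 1
    ORDER BY book_id DESC""").fetchall()
```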
-
Need to delete duplicate lines
I have two tables, MEMBER_ADRESS and MEMBER_ORDER. MEMBER_ADRESS has rows that are duplicated based on MEMBER_ID and the computed MATCH_VALUE. These duplicate ADDRESS_IDs have been used in MEMBER_ORDER. I need to remove the duplicates from MEMBER_ADRESS, making sure I keep the one with PRIMARY_FLAG = 'Y', and update MEMBER_ORDER.ADDRESS_ID to the MEMBER_ADRESS.ADDRESS_ID I kept.
I'm on 11gR1.
Thanks for the help.
CREATE TABLE MEMBER_ADRESS
(
  ADDRESS_ID              NUMBER,
  MEMBER_ID               NUMBER,
  ADDRESS_1               VARCHAR2(30 BYTE),
  ADDRESS_2               VARCHAR2(30 BYTE),
  CITY                    VARCHAR2(25 BYTE),
  STATE                   VARCHAR2(2 BYTE),
  ZIPCODE                 VARCHAR2(10 BYTE),
  CREATION_DATE           DATE,
  LAST_UPDATE_DATE        DATE,
  PRIMARY_FLAG            CHAR(1 BYTE),
  ADDITIONAL_COMPANY_INFO VARCHAR2(40 BYTE),
  MATCH_VALUE             NUMBER(38) GENERATED ALWAYS AS
    (TO_NUMBER(REGEXP_REPLACE(
       "ADDITIONAL_COMPANY_INFO"||"ADDRESS_1"||"ADDRESS_2"||"CITY"||"STATE"||"ZIPCODE",
       '[^[:digit:]]')))
);

Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (200, 30, '11 Hourse Rd.', '58754', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:10', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 1158754);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (230, 12, '101 Banks St', '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:35:42', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (232, 12, '101 Banks Street', '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:41:15', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (228, 12, '101 Banks St.', '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:38:19', 'MM/DD/YYYY HH24:MI:SS'), 'N', 10158487);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (221, 25, '881 Green Road', '58887', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:18', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 88158887);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (278, 28, '2811 Brown St.', '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:36:11', 'MM/DD/YYYY HH24:MI:SS'), 'N', 281158224);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (280, 28, '2811 Brown Street', '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:45:00', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 281158224);
Insert into MEMBER_ADRESS (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ADDRESS_2, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG, MATCH_VALUE)
  Values (300, 12, '3421 West North Street', 'x', '58488', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:42:04', 'MM/DD/YYYY HH24:MI:SS'), 'Y', 342158488);
COMMIT;

CREATE TABLE MEMBER_ORDER
(
  ORDER_ID   NUMBER,
  ADDRESS_ID NUMBER,
  MEMBER_ID  NUMBER
);

Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (3, 200, 30);
Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (4, 230, 12);
Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (5, 228, 12);
Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (6, 278, 28);
Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (8, 278, 28);
Insert into MEMBER_ORDER (ORDER_ID, ADDRESS_ID, MEMBER_ID) Values (10, 230, 12);
COMMIT;
Chris:
Actually, it's a little messier than it seems at first glance, but it seems to work, at least on your sample data.
First the update, with before and after views of the sample data.
SQL> SELECT * FROM member_adress;

ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
       228         12 101 Banks St.          08/11/2000 10:56:25 am 08/12/2005 10:38:19 am N    10158487
       230         12 101 Banks St           08/11/2000 10:56:25 am 08/12/2006 10:35:42 am N    10158487
       232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
       300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
       221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
       278         28 2811 Brown St.         08/11/2000 10:56:25 am 08/12/2006 10:36:11 am N   281158224
       280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
       200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754

SQL> SELECT * FROM member_order
     ORDER BY member_id, order_id;

  ORDER_ID ADDRESS_ID  MEMBER_ID
---------- ---------- ----------
         4        228         12
         5        230         12
        10        230         12
        11        232         12
        12        300         12
         6        278         28
         8        278         28
         3        200         30

SQL> UPDATE member_order mo
        SET address_id =
            (SELECT address_id
             FROM (SELECT address_id, member_id, match_value
                   FROM (SELECT address_id, member_id, match_value,
                                ROW_NUMBER () OVER
                                    (PARTITION BY member_id, match_value
                                     ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
                                                   ELSE 2 END,
                                              GREATEST(NVL(creation_date, TO_DATE('1/1/0001','MM/DD/YYYY')),
                                                       NVL(last_update_date, TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
                         FROM member_adress)
                   WHERE rn = 1) ma
             WHERE mo.member_id = ma.member_id and
                   ma.match_value = (SELECT match_value from member_adress maa
                                     WHERE maa.address_id = mo.address_id));

SQL> SELECT * FROM member_order
     ORDER BY member_id, order_id;

  ORDER_ID ADDRESS_ID  MEMBER_ID
---------- ---------- ----------
         4        232         12
         5        232         12
        10        232         12
        11        232         12
        12        300         12
         6        280         28
         8        280         28
         3        200         30
Then, to remove the duplicates, something like:
SQL> DELETE FROM member_adress
     WHERE rowid NOT IN
           (SELECT rowid
            FROM (SELECT rowid,
                         ROW_NUMBER () OVER
                             (PARTITION BY member_id, match_value
                              ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
                                            ELSE 2 END,
                                       GREATEST(NVL(creation_date, TO_DATE('1/1/0001','MM/DD/YYYY')),
                                                NVL(last_update_date, TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
                  FROM member_adress)
            WHERE rn = 1);

SQL> SELECT * FROM member_adress;

ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
       232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
       300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
       221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
       280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
       200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754
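The update-then-delete idea can be sketched in miniature with SQLite via Python's sqlite3 module. The schema here is heavily simplified and hypothetical, and unlike the posted solution the keeper is picked purely by PRIMARY_FLAG = 'Y' with no date tie-breaking:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE member_adress (address_id INT, member_id INT,
                            match_value INT, primary_flag TEXT);
CREATE TABLE member_order (order_id INT, address_id INT, member_id INT);
INSERT INTO member_adress VALUES (230, 12, 10158487, 'N'),
                                 (232, 12, 10158487, 'Y'),
                                 (300, 12, 342158488, 'Y');
INSERT INTO member_order VALUES (4, 230, 12), (10, 230, 12), (12, 300, 12);
""")

# 1) repoint each order at the surviving ('Y') address of its duplicate group
con.execute("""
    UPDATE member_order
    SET address_id = (
        SELECT ma.address_id
        FROM member_adress ma
        WHERE ma.member_id = member_order.member_id
          AND ma.primary_flag = 'Y'
          AND ma.match_value = (SELECT match_value FROM member_adress
                                WHERE address_id = member_order.address_id))""")

# 2) delete every non-primary duplicate that has a 'Y' sibling
con.execute("""
    DELETE FROM member_adress
    WHERE primary_flag = 'N'
      AND EXISTS (SELECT 1 FROM member_adress x
                  WHERE x.member_id = member_adress.member_id
                    AND x.match_value = member_adress.match_value
                    AND x.primary_flag = 'Y')""")

orders = sorted(con.execute("SELECT order_id, address_id FROM member_order"))
addresses = sorted(r[0] for r in con.execute("SELECT address_id FROM member_adress"))
```

Running the UPDATE before the DELETE matters: once the duplicate addresses are gone, the orders would have nothing left to match against.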
John
-
How to remove duplicate lines in a table
This query removes duplicate rows from a table:
DELETE FROM our_table
WHERE  rowid NOT IN
       (SELECT MIN(rowid)
        FROM   our_table
        GROUP BY column1, column2, column3, ...);
-
Hello
I have a delicate situation.
my query is like this:
This query returns 700 records with count(*) = 3:

select count(*), source, ttype, tdate, qty, rate, cost, charge, impstatus,
       errorcode, proj_code, res_code, role_code, job_id, extl_id, taskid,
       rate_curr, cost_curr, ext_source_id, chg_code, inp_type
from   transimp
where  ttype = 'X'
and    res_code = 'NLAB'
and    tdate >= '01-MAR-2009'
group by source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode,
         proj_code, res_code, role_code, job_id, extl_id, taskid, rate_curr,
         cost_curr, ext_source_id, chg_code, inp_type
having count(*) > 1
which means there are duplicate records in the table (please note we do have a primary key, but these records are duplicated on these columns per the needs of the business).
I want to remove these duplicates, i.e. after the deletion, if I run the same query again, count(*) must be 1.
any thoughts?
Thanks in advance.

Dear Caitanya,
Here is a DELETE statement that deletes duplicate rows:

DELETE FROM our_table
WHERE  rowid NOT IN (SELECT MIN(rowid)
                     FROM   our_table
                     GROUP BY column1, column2, column3, ...);
Here column1, column2, column3 form the key for each record.
Be sure to replace our_table with the name of the table from which you want to remove duplicate rows. GROUP BY is applied to the columns that make up the key of the table. This statement deletes every row in each group except the first.

I hope this is of help to you!
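The same MIN(rowid) pattern can be checked with SQLite, which also exposes a rowid, via Python's sqlite3 module (the table and column names below are placeholders):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE our_table (column1 TEXT, column2 INTEGER)")
con.executemany("INSERT INTO our_table VALUES (?,?)",
                [('a', 1), ('a', 1), ('a', 1), ('b', 2)])

# keep only the first physical row (MIN(rowid)) of each duplicate group
con.execute("""
    DELETE FROM our_table
    WHERE rowid NOT IN (SELECT MIN(rowid)
                        FROM our_table
                        GROUP BY column1, column2)""")

remaining = con.execute(
    "SELECT column1, column2, COUNT(*) FROM our_table "
    "GROUP BY column1, column2").fetchall()
```

After the delete, every group's count is back to 1, which is exactly the acceptance test from the question.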
Yours sincerely
Florian W.
-
I play games that let me send gifts to my Facebook friends. When I click "send gift", a screen with my friends' names is displayed so that I can select the recipients. But since yesterday morning, the pop-up screen in Firefox has become so small that it shows only one row and does not let me scroll more than 5 times, while the IE pop-up window shows 27 rows at once and I can scroll to the next 27 and so on. I have screenshots if you need them.
You should be able to resize the pop-up window to enlarge it.
Do you allow sites to resize the pop-up window?
-
Help to identify duplicate lines.
I have a table that looks like the following.
create table Testing(
    inv_num  varchar2(100),
    po_num   varchar2(100),
    line_num varchar2(100)
)
with the following data:

Insert into Testing (INV_NUM, PO_num, line_num) values ('19782594', 'P0254836', 1);
Insert into Testing (INV_NUM, PO_num, line_num) values ('19782594', 'P0254836', 1);
Insert into Testing (INV_NUM, PO_num, line_num) values ('19968276', 'P0254836', 1);
Insert into Testing (INV_NUM, PO_num, line_num) values ('19968276', 'P0254836', 1);
What I'm trying to do is identify the multiple distinct inv_num values within the table that share the same PO_num.
I have tried this:

SELECT T1.inv_num, T1.Po_num, T1.LINE_num,
       count(*) over (partition by T1.inv_num) myRecords
FROM   testing T1
where  T1.Po_num = 'P0254836'
group by T1.inv_num, T1.Po_num, T1.LINE_num
order by t1.inv_num
but I end up with a result similar to the following.
"INV_NUM" "PO_NUM" "LINE_NUM" "MYRECORDS" '19782594 '. 'P0254836 '. « 1 » « 1 » "19968276" "P0254836" "1" "1" I really want to end up with a result similar to the following.
"INV_NUM" "PO_NUM" "LINE_NUM" "MYRECORDS" '19782594 '. 'P0254836 '. « 1 » « 1 » '19968276 '. 'P0254836 '. « 1 » « 2 » in substance, I wish that the County invoices in purchase order and I also want to include the row id so that I can pass the record with more than 1 at a separate table.
Please note that this is part of a much larger project, and I have extracted only a small subset to show here.
Thanks for the help in advance.
Hello
mlov83 wrote:
Hi Etbin
Thanks for the reply.
I really need the result to be the following.
INV_NUM    PO_NUM     LINE_NUM  RN
19782594   P0254836   1         1
19782594   P0254836   1         1
19968276   P0254836   1         2
19968276   P0254836   1         2

Let me explain a little better. I want to remove the second instance of inv_num 19782594 for this po_num: although the two rows are duplicates, it is the first dup set, so I want to keep one of them. Then 19968276 for this po_num I want to identify separately, as a distinct set, and finally remove from my table.
Hope that makes sense.
That's what you asked for:
SELECT    inv_num, po_num, line_num
,         DENSE_RANK () OVER ( PARTITION BY  po_num
                               ORDER BY      inv_num
                             )  AS rn
FROM      testing
ORDER BY  po_num, inv_num
;
Output:
INV_NUM    PO_NUM     LINE_NUM    RN
---------- ---------- ---------- -----
19782594   P0254836            1     1
19782594   P0254836            1     1
19968276   P0254836            1     2
19968276   P0254836            1     2
But I do not see how this will help eliminate the duplicates. It merely adds another column that is itself duplicated for records that are already duplicates. For distinguishing the first of a series from the other duplicates, Etbin's query is much more useful.
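For reference, the DENSE_RANK query above can be reproduced with SQLite through Python's sqlite3 module (window functions need SQLite 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
con.execute("CREATE TABLE testing (inv_num TEXT, po_num TEXT, line_num INTEGER)")
con.executemany("INSERT INTO testing VALUES (?,?,?)", [
    ('19782594', 'P0254836', 1),
    ('19782594', 'P0254836', 1),
    ('19968276', 'P0254836', 1),
    ('19968276', 'P0254836', 1),
])

# DENSE_RANK numbers each distinct inv_num within a po_num: 1, 1, 2, 2
rows = con.execute("""
    SELECT inv_num, po_num, line_num,
           DENSE_RANK() OVER (PARTITION BY po_num ORDER BY inv_num) AS rn
    FROM   testing
    ORDER BY po_num, inv_num""").fetchall()
```

Note how every row of the same inv_num gets the same rank, which is exactly the objection raised above: the rank distinguishes invoice sets, not the first row within a set.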
-
Command-line management information file
Hello
I've just been playing with JRMC, and I'm wondering whether I can set up a cron job on Linux to collect recordings and later open them in JRMC. Is there a way to do this?

Oh...
No, that won't work. You will need to use another tool to make recordings you can read later.
If you have some time to spare, what you can do is add the -Xmanagement option when starting your application on JRockit. Then, to record, instead of using jrcmd, you can use
java -jar JraRecordingStarter_15.jar
and then, to display the resulting recordings, you use the Runtime Analyzer tool:
java -jar RuntimeAnalyzer.jar
Hope this helps,
K.
-
How not to show duplicate lines, based on a field
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
Hello
I have a query that lists the scripts that did not run on a given date compared with those that ran on the same day some weeks earlier. We want to include start_datetime and end_datetime, but when I add them to the select list, the query brings back every instance of jobs that run several times during the day. Is it possible to exclude the extra rows based on the script_name column?
SELECT instances.script_name,
       instances.instance_name,
       REGEXP_REPLACE(master.description,
                      chr(49814),   -- em-dash
                      '-') description
       --instances.start_datetime
FROM   xxcar.xxcar_abat_instances Instances,
       xxcar.xxcar_abatch_master  Master
WHERE  1 = 1
AND    TRUNC(start_datetime) = TRUNC(TO_DATE(:p_StartDate, 'YYYY/MM/DD HH24:MI:SS'))
                               - (:p_NumOfWeeks * 7)
AND    Instances.SCRIPT_NAME = Master.SCRIPT_NAME (+)
MINUS
SELECT script_name,
       instance_name,
       NULL
FROM   xxcar.xxcar_abat_instances
WHERE  1 = 1
AND    TRUNC(start_datetime) = TRUNC(TO_DATE(:p_StartDate, 'YYYY/MM/DD HH24:MI:SS'))
MINUS performs a set operation: it removes from the first set the rows that exactly match rows in the second set.
When you add columns to the first set, you want a narrower filter instead; try a multi-column NOT IN:
To remove the multiple runs, group and take min/max:

SELECT instances.script_name,
       instances.instance_name,
       REGEXP_REPLACE(master.description,
                      chr(49814),   -- em-dash
                      '-') description,
       min(instances.start_datetime) start_datetime,
       min(instances.end_datetime)   end_datetime
FROM   xxcar.xxcar_abat_instances Instances,
       xxcar.xxcar_abatch_master  Master
WHERE  1 = 1
AND    TRUNC(start_datetime) = TRUNC(TO_DATE(:p_StartDate, 'YYYY/MM/DD HH24:MI:SS'))
                               - (:p_NumOfWeeks * 7)
AND    Instances.SCRIPT_NAME = Master.SCRIPT_NAME (+)
AND    (script_name, instance_name) NOT IN
       (SELECT script_name, instance_name
        FROM   xxcar.xxcar_abat_instances
        WHERE  1 = 1
        AND    TRUNC(start_datetime) = TRUNC(TO_DATE(:p_StartDate, 'YYYY/MM/DD HH24:MI:SS')))
GROUP BY instances.script_name, instances.instance_name, master.description
You did not give the table definitions or sample data for the query, so I have not tested it.
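A cut-down illustration of the multi-column NOT IN, using SQLite row values (SQLite 3.15+) from Python; the runs table and its values are invented stand-ins for xxcar_abat_instances:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # row-value comparisons need SQLite 3.15+
con.execute("CREATE TABLE runs (script_name TEXT, instance_name TEXT, run_day TEXT)")
con.executemany("INSERT INTO runs VALUES (?,?,?)", [
    ('job_a', 'i1', 'last_week'),
    ('job_b', 'i1', 'last_week'),
    ('job_a', 'i1', 'this_week'),   # job_a ran again this week; job_b did not
])

# scripts that ran last week but have no run this week
missing = con.execute("""
    SELECT script_name, instance_name
    FROM   runs
    WHERE  run_day = 'last_week'
    AND    (script_name, instance_name) NOT IN
           (SELECT script_name, instance_name
            FROM   runs
            WHERE  run_day = 'this_week')""").fetchall()
```

Unlike MINUS, the NOT IN filter compares only the listed key columns, so extra columns in the select list no longer resurrect rows.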
Kind regards
David
-
Hello
I have the following structure
I need to come up with a query that will remove all but one of the duplicates, as follows:

ID  txndate     ItemNumber  batch  gain  locator
--  ----------  ----------  -----  ----  -------
 1  15/05/2011  710         230    20    L123
 2  15/05/2011  710         230    20    L123
 3  19/05/2011  410         450    70    L456
 4  05/05/2011  120         345    60    L721
 5  05/05/2011  120         345    60    L721
 6  05/05/2011  120         345    60    L721
 7  15/06/2012  840         231    40    L435
 8  15/05/2011  710         230    20    L123
 9  15/05/2011  710         230    20    L123

Thank you

Desired result:

ID  txndate     ItemNumber  batch  gain  locator
--  ----------  ----------  -----  ----  -------
 1  15/05/2011  710         230    20    L123
 3  19/05/2011  410         450    70    L456
 4  05/05/2011  120         345    60    L721
 7  15/06/2012  840         231    40    L435
 8  15/05/2011  710         230    20    L123   <== Note this - it needs to appear in 2 places
PS: My apologies for not being able to format the table structure nicely, but I think it conveys the idea.
Published by: user3214090 on January 28, 2013 07:22

Based on the sample, worked out with paper and pencil (no database at hand):
ID  txndate     ItemNumber  batch  gain  locator  x  y  z
 4  05/05/2011  120         345    60    L721     1  3  1
 5  05/05/2011  120         345    60    L721     2  3  2
 6  05/05/2011  120         345    60    L721     3  3  3
 1  15/05/2011  710         230    20    L123     1  0  1
 2  15/05/2011  710         230    20    L123     2  0  2
 8  15/05/2011  710         230    20    L123     3  5  1
 9  15/05/2011  710         230    20    L123     4  5  2
 3  19/05/2011  410         450    70    L456     1  2  1
 7  15/06/2012  840         231    40    L435     1  6  1
The Tabibitosan technique {message:id=9535978}, a.k.a. the fixed-difference method, should be used (the groups, the y values above, must be generated first). NOT TESTED!
select id, txndate, itemnumber, batch, gain, locator
from  (select id, txndate, itemnumber, batch, gain, locator,
              row_number() over (partition by y order by id) z
       from  (select id, txndate, itemnumber, batch, gain, locator,
                     id - row_number() over (partition by txndate, itemnumber,
                                                          batch, gain, locator
                                             order by id) y
              from the_table
             )
      )
where z = 1
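Since the posted query is untested, here is a quick check of the Tabibitosan method using Python's sqlite3 module (window functions need SQLite 3.25+). One deviation, flagged as my own: this sketch also adds the key columns to the outer PARTITION BY, so that two different key groups that happen to produce the same y value are never merged:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
con.execute("""CREATE TABLE the_table
               (id INT, txndate TEXT, itemnumber INT, batch INT,
                gain INT, locator TEXT)""")
con.executemany("INSERT INTO the_table VALUES (?,?,?,?,?,?)", [
    (1, '15/05/2011', 710, 230, 20, 'L123'),
    (2, '15/05/2011', 710, 230, 20, 'L123'),
    (3, '19/05/2011', 410, 450, 70, 'L456'),
    (4, '05/05/2011', 120, 345, 60, 'L721'),
    (5, '05/05/2011', 120, 345, 60, 'L721'),
    (6, '05/05/2011', 120, 345, 60, 'L721'),
    (7, '15/06/2012', 840, 231, 40, 'L435'),
    (8, '15/05/2011', 710, 230, 20, 'L123'),
    (9, '15/05/2011', 710, 230, 20, 'L123'),
])

# y = id - row_number() is constant within each consecutive run of duplicates;
# keep only the first row (z = 1) of each run
kept = [r[0] for r in con.execute("""
    SELECT id FROM (
        SELECT id,
               ROW_NUMBER() OVER (
                   PARTITION BY txndate, itemnumber, batch, gain, locator, y
                   ORDER BY id) AS z
        FROM (SELECT id, txndate, itemnumber, batch, gain, locator,
                     id - ROW_NUMBER() OVER (
                         PARTITION BY txndate, itemnumber, batch, gain, locator
                         ORDER BY id) AS y
              FROM the_table))
    WHERE z = 1
    ORDER BY id""")]
```

This reproduces the desired output above: ids 1, 3, 4 and 7 survive, and id 8 survives as well because it starts a second, separate run of the L123 duplicates.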
Regards
Etbin