Deleting duplicate rows

Hello,
We need help removing duplicate rows from a table, please.

We have the following table:
SSD@ermd> desc person_pos_history
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- ------------------------

 PERSON_POSITION_HISTORY_ID                                               NOT NULL NUMBER(10)
 POSITION_TYPE_ID                                                         NOT NULL NUMBER(10)
 PERSON_ID                                                                NOT NULL NUMBER(10)
 EVENT_ID                                                                 NOT NULL NUMBER(10)
 USER_INFO_ID                                                                      NUMBER(10)
 TIMESTAMP                                                                NOT NULL DATE
We discovered that a few person_ids are repeated for a particular event (event_id 3):
select PERSON_ID, count(*)
from person_pos_history
group by PERSON_ID, EVENT_ID
having event_id=3
     and count(*) > 1
order by 2

 PERSON_ID   COUNT(*)
---------- ----------
    217045        356
    216993        356
    226198        356
    217248        364
    118879        364
    204471        380
    163943        384
    119347        384
    208884        384
    119328        384
    218442        384
    141676        480
    214679        495
    219149        522
    217021        636
If we look at the first person_id in the list, 217045, we can see that it is repeated 356 times for event_id 3.
SSD@ermd> select PERSON_POSITION_HISTORY_ID, POSITION_TYPE_ID, PERSON_ID, EVENT_ID, to_char(timestamp, 'YYYY-MM-DD HH24:MI:SS')
  2  from person_pos_history
  3  where EVENT_ID=3
  4  and person_id=217045
  5  order by timestamp;

   PERSON_POSITION_HISTORY_ID  POSITION_TYPE_ID  PERSON_ID   EVENT_ID TO_CHAR(TIMESTAMP,'
------------------------------ ---------------- ---------- ---------- -------------------
                        222775               38     217045         03 2012-05-07 10:29:49
                        222774               18     217045         03 2012-05-07 10:29:49
                        222773                8     217045         03 2012-05-07 10:29:49
                        222776                2     217045         03 2012-05-07 10:29:49
                        300469               18     217045         03 2012-05-07 10:32:05
...
...
                       4350816               38     217045         03 2012-05-08 11:12:43
                       4367970                2     217045         03 2012-05-08 11:13:19
                       4367973                8     217045         03 2012-05-08 11:13:19
                       4367971               18     217045         03 2012-05-08 11:13:19
                       4367972               38     217045         03 2012-05-08 11:13:19

356 rows selected.
It is safe to assume that, for each person and event, the row with the earliest timestamp is the one that was loaded first; that is the row we want to keep, and the rest should be deleted.

Could you kindly help us with the SQL to remove these duplicates, please?

Regards

Hello

rsar001 wrote:
... We need to keep one row of data for each unique position_type_id. The query you kindly provided keeps only one row for person_id 119129 and deletes the rest, regardless of the position_type_id.

In fact, the statement that I posted earlier keeps one row for each distinct combination of person_id and event_id; that is what the analytic PARTITION BY clause does. That would have been clearer if you had posted different sample values. You posted a rather large sample, about 35 rows, but every row has the same person_id and the same event_id. In your real table, you probably have different values in these columns. If so, you should test with data that looks more like your real table, with different values in these columns.

How can we change the query so that it takes the position_type_id into account before deleting the duplicate rows, please?

You can include position_type_id in the analytic PARTITION BY clause. Maybe this is what you want:

...     ,     ROW_NUMBER () OVER ( PARTITION BY  person_id
                                    ,             event_id
                                    ,             position_type_id   -- *****  NEW  *****
                                    ORDER BY      tmstmp
                                  )       AS r_num
...

Without CREATE TABLE and INSERT statements for the sample data, and the desired results from that data, I'm only guessing.
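
Putting it together against the person_pos_history columns from your DESCRIBE above, a sketch of the full DELETE might look like this. It is untested (I don't have your data), and it assumes you want to keep the earliest row per (person_id, event_id, position_type_id) and that person_position_history_id uniquely identifies each row:

-- Untested sketch: keeps the earliest row per (person_id, event_id, position_type_id)
-- and deletes the rest.  Assumes person_position_history_id uniquely identifies a row.
DELETE FROM person_pos_history
WHERE  person_position_history_id IN
       ( SELECT person_position_history_id          -- rows ranked 2nd or later in their group
         FROM   ( SELECT person_position_history_id
                  ,      ROW_NUMBER () OVER ( PARTITION BY person_id
                                              ,            event_id
                                              ,            position_type_id
                                              ORDER BY     timestamp
                                            ) AS r_num
                  FROM   person_pos_history
                )
         WHERE  r_num > 1
       );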

Tags: Database

Similar Questions

  • Deleting duplicate line

    Hello

    I have a delicate situation.

    my query is like this:
    select count(*), source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode, proj_code, res_code, role_code, job_id, extl_id, taskid, rate_curr, cost_curr,ext_source_id, chg_code, inp_type
    from transimp
    where ttype = 'X'
    and res_code = 'NLAB'
    and tdate >= '01-MAR-2009'
    group by source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode, proj_code, res_code, role_code, job_id, extl_id, taskid, rate_curr, cost_curr,ext_source_id, chg_code, inp_type
    having count(*) > 1
    This query returns 700 records with count(*) = 3,
    which means there are duplicate records in the table (please note we have a primary key, but these records are duplicated according to the needs of the business).

    I want to remove these duplicates, i.e. after the deletion if I run the same query again, count (*) must be 1.

    any thoughts?

    Thanks in advance.

    Dear Caitanya!

    Here is a DELETE statement that deletes duplicate rows:

    
    DELETE FROM our_table
    WHERE  rowid NOT IN
           (SELECT MIN(rowid)
            FROM   our_table
            GROUP BY column1, column2, column3...);
    

    Here column1, column2, column3 are the columns that identify a duplicate record.
    Be sure to replace our_table with the name of the table from which you want to remove duplicate rows. GROUP BY is used on the columns that make up the logical key of the table (the columns that define a duplicate). This statement deletes every row in each group except the first one.
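
    Applied to your transimp example, a sketch might look like the following. It is untested, and it assumes the columns in your GROUP BY are exactly the ones that define a duplicate and that you only want to touch the rows matching your filter:

    -- Untested sketch: keeps MIN(rowid) per duplicate group and deletes the other copies.
    -- Only filtered rows are touched; since ttype, res_code and tdate are all in the
    -- GROUP BY, every row of a group shares the same filter values.
    DELETE FROM transimp t
    WHERE  t.rowid NOT IN
           (SELECT MIN(rowid)
            FROM   transimp
            GROUP BY source, ttype, tdate, qty, rate, cost, charge, impstatus, errorcode,
                     proj_code, res_code, role_code, job_id, extl_id, taskid,
                     rate_curr, cost_curr, ext_source_id, chg_code, inp_type)
    AND    t.ttype = 'X'
    AND    t.res_code = 'NLAB'
    AND    t.tdate >= DATE '2009-03-01';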

    I hope this could be of help to you!

    Yours sincerely

    Florian W.

  • Need to delete duplicate lines

    I have two tables, MEMBER_ADRESS and MEMBER_ORDER. MEMBER_ADRESS has rows that are duplicated based on MEMBER_ID and the computed MATCH_VALUE. These duplicate ADDRESS_IDs have been used in MEMBER_ORDER. I need to remove the duplicates from MEMBER_ADRESS, making sure I keep the one with PRIMARY_FLAG = 'Y', and update MEMBER_ORDER.ADDRESS_ID to the MEMBER_ADRESS.ADDRESS_ID that I kept.

    I'm on 11gR1.

    Thanks for the help.
    CREATE TABLE MEMBER_ADRESS
    (
      ADDRESS_ID               NUMBER,
      MEMBER_ID                NUMBER,
      ADDRESS_1                VARCHAR2(30 BYTE),
      ADDRESS_2                VARCHAR2(30 BYTE),
      CITY                     VARCHAR2(25 BYTE),
      STATE                    VARCHAR2(2 BYTE),
      ZIPCODE                  VARCHAR2(10 BYTE),
      CREATION_DATE            DATE,
      LAST_UPDATE_DATE         DATE,
      PRIMARY_FLAG             CHAR(1 BYTE),
      ADDITIONAL_COMPANY_INFO  VARCHAR2(40 BYTE),
      MATCH_VALUE              NUMBER(38) GENERATED ALWAYS AS (TO_NUMBER( REGEXP_REPLACE ("ADDITIONAL_COMPANY_INFO"||"ADDRESS_1"||"ADDRESS_2"||"CITY"||"STATE"||"ZIPCODE",'[^[:digit:]]')))
    );
    
    
    -- MATCH_VALUE is a virtual (GENERATED ALWAYS AS) column, so it cannot be inserted into explicitly.
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (200, 30, '11 Hourse Rd.', 
        '58754', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:10', 'MM/DD/YYYY HH24:MI:SS'), 'Y');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (230, 12, '101 Banks St', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:35:42', 'MM/DD/YYYY HH24:MI:SS'), 'N');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (232, 12, '101 Banks Street', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:41:15', 'MM/DD/YYYY HH24:MI:SS'), 'N');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (228, 12, '101 Banks St.', 
        '58487', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:38:19', 'MM/DD/YYYY HH24:MI:SS'), 'N');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (221, 25, '881 Green Road', 
        '58887', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:34:18', 'MM/DD/YYYY HH24:MI:SS'), 'Y');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (278, 28, '2811 Brown St.', 
        '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:36:11', 'MM/DD/YYYY HH24:MI:SS'), 'N');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (280, 28, '2811 Brown Street', 
        '58224', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:45:00', 'MM/DD/YYYY HH24:MI:SS'), 'Y');
    Insert into MEMBER_ADRESS
       (ADDRESS_ID, MEMBER_ID, ADDRESS_1, ADDRESS_2, ZIPCODE, CREATION_DATE, LAST_UPDATE_DATE, PRIMARY_FLAG)
     Values
       (300, 12, '3421 West North Street', 'x', 
        '58488', TO_DATE('08/11/2011 10:56:25', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('08/12/2011 10:42:04', 'MM/DD/YYYY HH24:MI:SS'), 'Y');
    COMMIT;
    
    
    CREATE TABLE MEMBER_ORDER
    (
      ORDER_ID    NUMBER,
      ADDRESS_ID  NUMBER,
      MEMBER_ID   NUMBER
    );
    
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (3, 200, 30);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (4, 230, 12);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (5, 228, 12);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (6, 278, 28);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (8, 278, 28);
    Insert into MEMBER_ORDER
       (ORDER_ID, ADDRESS_ID, MEMBER_ID)
     Values
       (10, 230, 12);
    COMMIT;

    Chris:

    Actually, it's a little trickier than it seems at first glance, but the following seems to work, at least on your sample data.

    First the update, shown with the sample data before and after:

    SQL> SELECT * FROM member_adress;
    
    ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
    ---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
           228         12 101 Banks St.          08/11/2000 10:56:25 am 08/12/2005 10:38:19 am N    10158487
           230         12 101 Banks St           08/11/2000 10:56:25 am 08/12/2006 10:35:42 am N    10158487
           232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
           300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
           221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
           278         28 2811 Brown St.         08/11/2000 10:56:25 am 08/12/2006 10:36:11 am N   281158224
           280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
           200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754
    
    SQL> SELECT * FROM member_order
      2  ORDER BY member_id, order_id;
    
      ORDER_ID ADDRESS_ID  MEMBER_ID
    ---------- ---------- ----------
             4        228         12
             5        230         12
            10        230         12
            11        232         12
            12        300         12
             6        278         28
             8        278         28
             3        200         30
    
    SQL> UPDATE member_order mo
      2  SET address_id = (SELECT address_id
      3                    FROM (SELECT address_id, member_id, match_value
      4                          FROM (SELECT address_id, member_id, match_value,
      5                                        ROW_NUMBER () OVER (PARTITION BY member_id, match_value
      6                                                            ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
      7                                                                          ELSE 2 END,
      8                                                                    GREATEST(NVL(creation_date,TO_DATE('1/1/0001','MM/DD/YYYY')),
      9                                                                             NVL(last_update_date,TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
     10                                FROM member_adress)
     11                          WHERE rn = 1) ma
     12                    WHERE mo.member_id = ma.member_id and
     13                          ma.match_value = (SELECT match_value from member_adress maa
     14                                            WHERE maa.address_id = mo.address_id));
    
    SQL> SELECT * FROM member_order
      2  ORDER BY member_id, order_id;
    
      ORDER_ID ADDRESS_ID  MEMBER_ID
    ---------- ---------- ----------
             4        232         12
             5        232         12
            10        232         12
            11        232         12
            12        300         12
             6        280         28
             8        280         28
             3        200         30
    

    Then, to remove the duplicate addresses, something like:

    SQL> DELETE FROM member_adress
      2  WHERE rowid NOT IN (SELECT rowid
      3                      FROM (SELECT rowid,
      4                                   ROW_NUMBER () OVER (PARTITION BY member_id, match_value
      5                                                       ORDER BY CASE WHEN primary_flag = 'Y' THEN 1
      6                                                                     ELSE 2 END,
      7                                                                GREATEST(NVL(creation_date,TO_DATE('1/1/0001','MM/DD/YYYY')),
      8                                                                         NVL(last_update_date,TO_DATE('1/1/0001','MM/DD/YYYY'))) desc) rn
      9                            FROM member_adress)
     10                          WHERE rn = 1);
    
    SQL> SELECT * FROM member_adress;
    
    ADDRESS_ID  MEMBER_ID ADDRESS_1              CREATION_DATE          LAST_UPDATE_DATE       P MATCH_VALUE
    ---------- ---------- ---------------------- ---------------------- ---------------------- - -----------
           232         12 101 Banks Street       08/11/2000 10:56:25 am 08/12/2007 10:41:15 am N    10158487
           300         12 3421 West North Street 08/11/2000 10:56:25 am 08/12/2011 10:42:04 am Y   342158488
           221         25 881 Green Road         08/11/2000 10:56:25 am 08/12/2011 10:34:18 am Y    88158887
           280         28 2811 Brown Street      08/11/2000 10:56:25 am 08/12/2011 10:45:00 am Y   281158224
           200         30 11 Hourse Rd.          08/11/2000 10:56:25 am 08/12/2005 10:34:10 am Y     1158754
    

    John

  • Delete duplicate lines

    Hello

    I have the following structure
    ID     txndate         ItemNumber batch     gain  locator
    --     ------------    --------   --------  ----- ---------
    1      15/05/2011      710        230       20    L123
    2      15/05/2011      710        230       20    L123
    3      19/05/2011      410        450       70    L456
    4      05/05/2011      120        345       60    L721
    5      05/05/2011      120        345       60    L721
    6      05/05/2011      120        345       60    L721
    7      15/06/2012      840        231       40    L435
    8      15/05/2011      710        230       20    L123
    9      15/05/2011      710        230       20    L123
    I need to come up with a query that will remove all but one of the duplicates, leaving the following:
    ID txndate    ItemNumber batch gain locator
    -- -------    ---------- ----- ---- -------
    1  15/05/2011 710        230   20   L123
    3  19/05/2011 410        450   70   L456
    4  05/05/2011 120        345   60   L721
    7  15/06/2012 840        231   40   L435
    8  15/05/2011 710        230   20   L123 <== Note this - it needs to appear in 2 places
    Thank you

    PS: My apologies for not being able to format the table structure properly, but I guess it conveys the idea.

    Published by: user3214090 on January 28, 2013 07:22

    Based on the sample, worked out with paper & pencil - no database at hand :(

    ID     txndate         ItemNumber batch     gain  locator x    y    z
    4      05/05/2011      120        345       60    L721    1    3    1
    5      05/05/2011      120        345       60    L721    2    3    2
    6      05/05/2011      120        345       60    L721    3    3    3
    1      15/05/2011      710        230       20    L123    1    0    1
    2      15/05/2011      710        230       20    L123    2    0    2
    8      15/05/2011      710        230       20    L123    3    5    1
    9      15/05/2011      710        230       20    L123    4    5    2
    3      19/05/2011      410        450       70    L456    1    2    1
    7      15/06/2012      840        231       40    L435    1    6    1
    

    Tabibitosan {message:id=9535978}, aka the fixed difference method, can be used (the groups - y here - must be generated first). NOT TESTED!

    select id,txndate,itemnumber,batch,gain,locator
      from (select id,txndate,itemnumber,batch,gain,locator,
                   row_number() over (partition by txndate,itemnumber,batch,gain,locator,y
                                          order by id) z
              from (select id,txndate,itemnumber,batch,gain,locator,
                           id - row_number() over (partition by txndate,itemnumber,batch,gain,locator
                                                       order by id
                                                  ) y
                      from the_table
                   )
           )
     where z = 1
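
    To actually delete the extra rows rather than just select the survivors, a sketch along the same lines (again not tested, and assuming id is unique) could be:

    delete from the_table
     where id in (select id
                    from (select id,
                                 row_number() over (partition by txndate,itemnumber,batch,gain,locator,y
                                                        order by id) z
                            from (select id,txndate,itemnumber,batch,gain,locator,
                                         id - row_number() over (partition by txndate,itemnumber,batch,gain,locator
                                                                     order by id
                                                                ) y
                                    from the_table
                                 )
                         )
                   where z > 1)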
    

    Regards

    Etbin

  • change the size of the table and remove the duplicate line

    Hello

    I run a program with a for loop that executes 3 times. In each iteration, data are collected for x and y points (10 in this example). How can I output all 30 points as a single 2D array (size 2 x 30)?

    Also, from that array, how do I delete duplicate rows that contain the same value of x?

    Thank you

    Hi Ni.

    It would probably have helped if you had attached a file that has some consecutive duplicate lines in it.

    Here is a simple solution.

  • deleting duplicate records

    I'm converting all the records to lower case to eliminate case-sensitivity in the data. For example:

    (1) update email set email = lower(email); which gives a unique constraint error.

    (2) I found the duplicate emails with the following query:
    select email
    from email
    where lower(email) in
          (select lower(email)
           from email
           group by lower(email)
           having count(*) > 1)
    order by email asc;

    (3) I want to delete the duplicate entries found in (2) with a SQL statement, because there are over 500 of them and removing them manually would take too much of my time.

    Thank you and best regards
    SQL> create table t(email varchar2(100))
      2  /
    
    Table created.
    
    SQL> create unique index t_idx on t(email)
      2  /
    
    Index created.
    
    SQL> insert into t values('[email protected]')
      2  /
    
    1 row created.
    
    SQL> insert into t values('[email protected]')
      2  /
    
    1 row created.
    
    SQL> insert into t values('[email protected]')
      2  /
    
    1 row created.
    
    SQL> insert into t values('[email protected]')
      2  /
    
    1 row created.
    
    SQL> insert into t values('[email protected]')
      2  /
    
    1 row created.
    
    SQL> commit
      2  /
    
    Commit complete.
    
    SQL> select * from t
      2  /
    
    EMAIL
    ----------------------------------------------------------------------------------------------------
    [email protected]
    [email protected]
    [email protected]
    [email protected]
    [email protected]
    
    SQL> update t set email = lower(email)
      2  /
    update t set email = lower(email)
    *
    ERROR at line 1:
    ORA-00001: unique constraint (SYSADM.T_IDX) violated
    
    SQL> delete
      2    from t
      3   where rowid not in (select rowid
      4                    from (select rowid,row_number() over(partition by lower(email) order by lower(email)) rno
      5                            from t)
      6                   where rno = 1)
      7  /
    
    3 rows deleted.
    
    SQL> update t set email = lower(email)
      2  /
    
    2 rows updated.
    
    SQL> select * from t
      2  /
    
    EMAIL
    ----------------------------------------------------------------------------------------------------
    [email protected]
    [email protected]
    
    SQL>
    
  • I have vertical colored lines - from the bottom to 1/2 in. up the screen - I can see through them to read - I need to know how to delete these lines

    I have vertical colored lines - from the bottom to 1/2 in. up the screen - I can see through them to read - I need to know how to delete these lines - I removed them once, a few years ago, but they reappeared - not through any fault of mine - this is my old computer - a Compaq - with Windows 95 as the operating system

    Hello

    You should check with Compaq Support, their online documentation and drivers,
    and ask in their forums about known issues.

    Microsoft product support for Windows 95 ended a decade ago.

    Compaq - support, drivers and documentation online
    http://www.Compaq.com/country/cpq_support.html

    Compaq (HP) - Forums
    http://h30434.www3.HP.com/PSG/|

    I hope this helps.

    Rob Brown - Microsoft MVP - Windows Expert - Consumer

  • Can't delete duplicate contacts

    Can't delete duplicate contacts in Windows Live Mail... any suggestions?... I don't have "Windows Contacts" listed under "All Programs" from the Start button.

    Really frustrating!

    The Windows Live products are handled at the Windows Live Solution Center.

    For WLM issues, the forum is Mail Forums - the threads in the Mail section:

    http://windowslivehelp.com/forums.aspx?ProductID=15

    Meanwhile, since I use the 2011 version of WLM, maybe I can help.

    Just five minutes ago I deleted a contact's triplicate addresses.

    Open WLM > in the folder list column (left column), at the bottom, click the Contacts icon > right-click the duplicate contact > click Delete.

    Can you explain at what stage you were not able to remove duplicates?

  • I want to delete duplicate pictures and or files

    I just wanted to know if there is a program in Windows 7 that finds duplicate files or images. I'm pretty much a rookie at this computer thing, so any help would be wonderful.

    There are many programs that can do this. Here are a few free ones you can use:

    http://download.CNET.com/1770-20_4-0.html?query=delete+duplicate&SearchType=downloads&filter=licenseName=free | OS = 133 | platform = Windows & filterName = licenseName free = | OS = Windows % 207 | platform = Windows & tag = Lieutenant Colonel

    Questions about installing Windows 7?
    FAQ - Frequently Asked Questions & Answers about installing Windows 7

  • deleting several lines in a report?

    db11gxe, apex 4.0, firefox 24,

    How do I delete several rows in a report at the same time?

    Thank you

    Hi newbi_egy,

    Here's a demo with a few steps that can be used in your scenario:

    (1) This is the query for a report on the emp table, which has the column empno with unique values:

    SELECT APEX_ITEM.CHECKBOX(1,empno) " ", ename, job FROM   emp ORDER  by 1
    

    (2) A Delete button is created that submits the page.

    (3) An on-submit process is created with the process point "On Submit - After Computations and Validations" and a condition so that the process only runs when the Delete button is clicked.

    BEGIN
    FOR i in 1..APEX_APPLICATION.G_F01.count
    LOOP
    delete from emp where empno=APEX_APPLICATION.G_F01(i);
    END LOOP;
    END;
    

    I hope this could help you.

    Kind regards

    Das123

  • I use Dreamweaver CC 2014 and after styling my fluid grid page layout with CSS I lose the resize, delete, duplicate and move up / down ability.

    I use Dreamweaver CC 2014 and after styling my fluid grid page layout with CSS I lose the resize, delete, duplicate and move up / down ability.

    For this reason I can't build new pages by copying a page to create a new one.

    I have a third style sheet that I use for the navigation styles and the h1 - h6 etc. tags, and I'm also using a CSS dropdown menu; could one of these be the problem?

    The CSS menu that I use comes with the following script, which I put at the bottom of the HTML page, before the closing body tag:

    <script>
    $(function () {
      $("#nav").tinyNav();
    });
    </script>

    I also use the following for an image - I have put it at the top of the fluid grid style sheet.

    * {
      box-sizing: border-box;         /* Opera/IE 8+ */
      -moz-box-sizing: border-box;    /* Firefox, other Gecko */
      -webkit-box-sizing: border-box; /* Safari/Chrome, other WebKit */
    }

    Can you please help?

    My experience is that you should not touch the style sheets that are created by the FGL (fluid grid layout) system. Also, do not copy and paste into them, as that can disrupt the layout. If you need to apply your own styles, put them in a second stylesheet; that way, if something goes wrong, you can always revert to the original.

  • How to delete duplicate records

    Suppose I have a table TAB with N columns, of which K (K < N) should not be duplicated (in other words, those K columns could be a candidate primary key).
    Can you give me a SQL query to delete the duplicate records of TAB over those K columns?
    Thank you very much!

    delete from TAB where rowid not in (select min(rowid) from TAB group by k)
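
    For example, with a hypothetical table TAB(k1, k2, c3, c4) that should be unique over (k1, k2), a quick sketch would be:

    -- Hypothetical example: k1 and k2 are the K key columns, c3 and c4 are the other columns.
    -- Keeps the row with the lowest rowid in each (k1, k2) group and deletes the rest.
    DELETE FROM TAB
    WHERE  rowid NOT IN (SELECT MIN(rowid)
                         FROM   TAB
                         GROUP BY k1, k2);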

  • Row deletion forbidden

    Hello

    I created a set of tables (see the SQL code below) and a few foreign keys between them.
    I filled only the S, R, U and V tables, leaving A empty. When I want to delete a row from U, I get the following
    error: 'ORA-01460: unimplemented or unreasonable conversion requested.'

    If I disable the A_U_FK1 constraint, I can delete a row from U. Notice that A is still empty.

    Sorry for my stupid question, but I do not understand why I have to disable the A_U_FK1 constraint in order to delete a row from U.
    Is this a bug, or did I miss something?

    Stephan

    Oracle XE 10.2.1.0
    Windows XP SP3
    --------------------
    SQL code for creating tables
    CREATE TABLE S
      (
        S_ID  NUMBER NOT NULL ENABLE,
        S_NOM VARCHAR2(1024 BYTE) NOT NULL ENABLE,
        CONSTRAINT S_PK PRIMARY KEY (S_ID),
        CONSTRAINT S_UK1 UNIQUE (S_NOM)
      );
    
    CREATE TABLE R
      (
        R_URL VARCHAR2(2048 BYTE) NOT NULL ENABLE,
        S_NOM VARCHAR2(1024 BYTE),
        CONSTRAINT R_PK PRIMARY KEY (R_URL),
        CONSTRAINT R_S_FK1 FOREIGN KEY (S_NOM) REFERENCES S (S_NOM) ON
      DELETE CASCADE ENABLE
      );
    
    CREATE TABLE U
      (
        U_URL VARCHAR2(2048 BYTE) NOT NULL ENABLE,
        U_DATE_AJOUT DATE DEFAULT sysdate NOT NULL ENABLE,
        U_CONTENU BLOB,
        S_NOM VARCHAR2(1024 BYTE) NOT NULL ENABLE,
        CONSTRAINT U_PK PRIMARY KEY (U_URL, U_DATE_AJOUT),
        CONSTRAINT U_S_FK1 FOREIGN KEY (S_NOM) REFERENCES S (S_NOM) ON
      DELETE CASCADE ENABLE
      );
    
    CREATE TABLE V
      (
        V_ID       NUMBER NOT NULL ENABLE,
        V_NOM      VARCHAR2(1024 BYTE) NOT NULL ENABLE,
        V_ID_ALIAS NUMBER,   -- referenced by V_V_FK1 below
        CONSTRAINT V_PK PRIMARY KEY (V_ID),
        CONSTRAINT V_V_FK1 FOREIGN KEY (V_ID_ALIAS) REFERENCES V (V_ID) ON
      DELETE CASCADE ENABLE
      );
    
    CREATE TABLE A
      (
        A_ID NUMBER NOT NULL ENABLE,
        U_DATE_AJOUT DATE,
        U_URL VARCHAR2(20 BYTE),
        V_ID  NUMBER,
        CONSTRAINT A_PK PRIMARY KEY (A_ID),
        CONSTRAINT A_V_FK1 FOREIGN KEY (V_ID) REFERENCES V (V_ID) ON
      DELETE CASCADE ENABLE,
        CONSTRAINT A_U_FK1 FOREIGN KEY (U_URL, U_DATE_AJOUT) REFERENCES U (U_URL, U_DATE_AJOUT) ON
      DELETE CASCADE ENABLE
      );

    Not sure exactly why, but try synchronizing the size of your U_URL column between tables A and U: in U it is 2048, in A it is 20.
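
    For example, a minimal sketch of one way to line them up (untested; adjust to whichever size you actually want):

    -- Untested sketch: widen A.U_URL so that it matches U.U_URL.
    ALTER TABLE A MODIFY (U_URL VARCHAR2(2048 BYTE));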

  • Deleting blank lines doesn't jump to the other container

    Here is a weird problem that I have. Essentially, I'm working on a Flash editor. I start with a container, and the user can write text. Then, when the height of the text in the container reaches the last line, I create a new container. When the first container is full, the text correctly flows to the next container. The same goes for when a user deletes text in the second container: it goes back to the first container. These containers represent pages.

    Now, the problem is deleting lines in the 2nd container and jumping back to the first one. It all works well IF there are characters on the last line of container 1. If, however, I have 5 empty lines (just line breaks on them) at the end of the first container, and I then delete back to the first character in the 2nd container, pressing backspace does nothing. The flow composer correctly reduces the number of lines, but the cursor only moves into the 1st container when it encounters a line that has characters on it. Hitting backspace 3 times, for example, keeps the cursor on the first line of the 2nd container. If I then click on the last line in the 1st container, the cursor moves to the 3rd-from-last line (where the cursor should have ended up after pressing backspace 3 times). Trying to click in the 2nd container, I can't put the cursor there any more since there are no longer any lines in it.

    So my question is, why doesn't the cursor jump to the 1st container when the composer clearly knows that there are no lines in the 2nd container?

    Are you using the same TLF version as what I mentioned? I use the Flex SDK 4.5.0.20135 with TLF 2.0.0.232.

  • Duplicate line Macro information

    Hello

    I'm having some trouble creating a button or a macro for this PDF file.

    I currently have a row of input cells that I wish to duplicate with the click of a button.

    The reason I do not simply repeat these rows in advance is, first of all, to reduce the size of the file, and to allow users to create their own request form with the number of parts they are requesting.

    I am running Adobe LiveCycle Designer ES 8.2.1

    Thank you

    Re: Duplicate line Macro information

    If you want to send the form to [email protected] I will take a look. Please include a description of what you're trying to do.

    Paul
