Duplicate leads

Hello again, another process question:

Let's say we have a scenario where a company requests a catalog from our Internet site, which automatically generates a lead, and then submits a sample request, which automatically generates a second lead. Marketing gets the two leads, verifies their accuracy, associates them with a contact/account if possible, etc., and follows up on the sample request lead, which goes to sales to manage. But what should happen to the catalog lead? Clearly they also associate it with an account/contact for historical reference - but do they then archive it?

Thanks for all the help I get on these forums!

Hi Derrick,

You have a few options.
(1) Siebel has a merge function for leads. You can merge the two before you assign them.
(2) We get leads from our web site a little differently. The interface is not real-time; it runs every hour and validates the data before creating the lead, with no user intervention. It took time to perfect the validation, but it now works like a charm.
(3) If the requests are related, I would check whether it was the same contact who requested the catalog and the sample. If it wasn't, these may well be 2 separate optys. If it is the same contact, then I would most likely archive the catalog lead, since the other lead has more specific information about the prospect's needs.

Good luck.
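Option (3) is a simple decision rule, so here is a minimal sketch of it in Python (the `triage` function and the dict fields are purely illustrative, not Siebel's API): same contact means archive the catalog lead and keep the sample-request lead; different contacts means keep both as potential opportunities.

```python
def triage(catalog_lead, sample_lead):
    """Option (3) as a rule: if the same contact produced both leads,
    archive the catalog lead (the sample request carries the more
    specific information about the prospect's needs); otherwise treat
    them as two potentially separate opportunities."""
    if catalog_lead['contact'] == sample_lead['contact']:
        catalog_lead['status'] = 'Archived'
        return [sample_lead]
    return [catalog_lead, sample_lead]

cat = {'contact': 'derrick@example.com', 'source': 'catalog', 'status': 'Open'}
smp = {'contact': 'derrick@example.com', 'source': 'sample', 'status': 'Open'}
print([l['source'] for l in triage(cat, smp)])   # → ['sample']
```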

Tags: Oracle

Similar Questions

  • Desktop sync with two Macs leads to duplicate files/folders

    Hi all

    I'm using macOS Sierra's Desktop (and Documents) sync. Two Macs (a MacBook Air and a MacBook Pro) are logged into my account. Now I find in my iCloud Drive a Documents folder (that's fine, it holds the MacBook Air's documents) and also a "Dokumente - MacBook Pro mg" folder, which is the Documents folder of my MacBook Pro (not so good). This isn't what I expected; I expected to have a Documents folder that is identical on both machines. Does anyone know what I need to adjust to get this sorted out?

    Greetings from Germany

    Mike

    I posted an article that explains all this to http://www.quecheesoftware.com/iCloud.html . This is a downloadable PDF file.

    Basically, files you add to the Documents folder after turning on file syncing will be synchronized as you expect. But the files that were already in your Documents folders on several Macs when you first turn on syncing are placed in device-specific subfolders inside your new iCloud Drive Documents folder. Apple apparently felt it could not predict how each user would want their old files organized in the iCloud Drive Documents folder, so it left it to each user to reorganize the files later.

    I have now moved all my own files to the top level of my iCloud Drive Documents folder and trashed the duplicates in the other device-specific folder. Now everything syncs between computers as expected. I left the two device-specific subfolders in place, both empty for now, in case I later want to keep some files associated with only one of the devices - for example, alias files that do not work on the other Mac. They will still sync into the other subfolder, but at least I know to use them only on their own device.

    Apple certainly could have documented it better.

  • Mac question? - Duplicate file names appear once I copy over MP3s

    It seems that although the box said Mac compatible, it was a far stretch from easy to use or set up for a non-techie.

    I figured out how to import a CD into iTunes and convert it to an MP3, then copy it to the "rocket".

    But my problem is that when I copy a song or an entire album and then turn on the "rocket", it shows duplicate songs, and the duplicates are in the Unrecognizable folder and do not play.

    This can get very confusing. Anyone have any ideas on how to import MP3s on a Mac without getting these ghost/duplicate files?

    If I can't figure it out, I guess I'll try to return it and get an iPod. It's no use to me like this.

    Help, please. I want to keep that player.

    thegaspar wrote:

    I figured out how to import a CD into iTunes and convert it to an MP3, then copy it to the "rocket".

    But my problem is that when I copy a song or an entire album and then turn on the "rocket", it shows duplicate songs, and the duplicates are in the Unrecognizable folder and do not play.

    Your problem is that you used the Finder to copy files to the player.  That creates the "duplicate" files, which really contain Finder metadata.  They drive the players nuts.

    I wrote a web page describing this problem and how to get an MP3 player like the "rocket" to work with the Mac.  See http://guyscharf.wordpress.com/mp3-on-macintosh/ for more details.
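    The metadata files the Finder leaves behind are AppleDouble files whose names start with "._". A small Python sketch that clears them from a folder tree (an alternative to deleting them by hand; run it against the player's mount point at your own risk):

```python
import os

def remove_finder_metadata(root):
    """Recursively delete AppleDouble '._*' files - the Finder
    metadata that shows up as ghost duplicates on the player."""
    removed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.startswith('._'):
                os.remove(os.path.join(dirpath, name))
                removed.append(name)
    return sorted(removed)
```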

  • Duplicate with prev and next record

    I want to find only the ids with duplicate disease entries (i.e. 103, 105) from the sample below, where the previous disease date > the next disease date.

    Can we write a message with the record:

    ID: previous illness date (*) is greater than the next illness date (*)

    -- dates that were blank in the original post appear here as NULL
    WITH abc AS
         (SELECT 101 AS id,
                 1 AS indx,
                 'fever' AS disease,
                 DATE '2014-11-23' AS disease_start
            FROM dual
          UNION
          SELECT 101, 4, 'viral fever', DATE '2014-12-29' FROM dual
          UNION
          SELECT 101, 3, 'Asthma', NULL FROM dual
          UNION
          SELECT 101, 2, 'Blood pressure', DATE '2015-11-01' FROM dual
          UNION
          SELECT 102, 2, 'cold fever', NULL FROM dual
          UNION
          SELECT 102, 1, 'bronchial asthma', DATE '2013-10-05' FROM dual
          UNION
          SELECT 103, 1, 'Hay fever', DATE '2005-05-19' FROM dual
          UNION
          SELECT 103, 3, 'allergic asthma', NULL FROM dual
          UNION
          SELECT 103, 4, 'Hay fever', NULL FROM dual
          UNION
          SELECT 103, 2, 'Hay fever', DATE '2002-01-08' FROM dual
          UNION
          SELECT 104, 1, 'Creatinine', NULL FROM dual
          UNION
          SELECT 104, 2, 'Creatinine', DATE '2006-11-08' FROM dual
          UNION
          SELECT 104, 3, 'Creatinine', DATE '2007-11-08' FROM dual
          UNION
          SELECT 105, 1, 'high blood sugar', DATE '2010-11-08' FROM dual
          UNION
          SELECT 105, 2, 'high blood sugar', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 1, 'low blood sugar', DATE '2010-11-08' FROM dual
          UNION
          SELECT 106, 2, 'blood serum', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 3, 'low blood glucose', NULL FROM dual
          UNION
          SELECT 106, 4, 'low sugar', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 5, 'low blood sugar', DATE '2005-11-30' FROM dual)
    SELECT *
      FROM (SELECT id,
                   disease,
                   disease_start,
                   LAG (disease_start, 1, disease_start)
                      OVER (PARTITION BY id ORDER BY indx) AS prev_disease_date,
                   LEAD (disease_start, 1, disease_start)
                      OVER (PARTITION BY id ORDER BY indx) AS next_disease_date
              FROM abc)
     WHERE prev_disease_date > next_disease_date

    Try the code below; it should work:

    -- dates that were blank in the original post appear here as NULL
    WITH abc AS
         (SELECT 101 AS id,
                 1 AS indx,
                 'fever' AS disease,
                 DATE '2014-11-23' AS disease_start
            FROM dual
          UNION
          SELECT 101, 4, 'viral fever', DATE '2014-12-29' FROM dual
          UNION
          SELECT 101, 3, 'Asthma', NULL FROM dual
          UNION
          SELECT 101, 2, 'Blood pressure', DATE '2015-11-01' FROM dual
          UNION
          SELECT 102, 2, 'cold fever', NULL FROM dual
          UNION
          SELECT 102, 1, 'bronchial asthma', DATE '2013-10-05' FROM dual
          UNION
          SELECT 103, 1, 'Hay fever', DATE '2005-05-19' FROM dual
          UNION
          SELECT 103, 3, 'allergic asthma', NULL FROM dual
          UNION
          SELECT 103, 4, 'Hay fever', NULL FROM dual
          UNION
          SELECT 103, 2, 'Hay fever', DATE '2002-01-08' FROM dual
          UNION
          SELECT 104, 1, 'Creatinine', NULL FROM dual
          UNION
          SELECT 104, 2, 'Creatinine', DATE '2006-11-08' FROM dual
          UNION
          SELECT 104, 3, 'Creatinine', DATE '2007-11-08' FROM dual
          UNION
          SELECT 105, 1, 'high blood sugar', DATE '2010-11-08' FROM dual
          UNION
          SELECT 105, 2, 'high blood sugar', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 1, 'low blood sugar', DATE '2010-11-08' FROM dual
          UNION
          SELECT 106, 2, 'blood serum', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 3, 'low blood glucose', NULL FROM dual
          UNION
          SELECT 106, 4, 'low sugar', DATE '2009-11-08' FROM dual
          UNION
          SELECT 106, 5, 'low blood sugar', DATE '2005-11-30' FROM dual)
    SELECT a.id,
              'id: previous illness date '
           || a.prev_disease_date
           || ' is greater than the next illness date '
           || a.next_disease_date AS message
      FROM (SELECT id,
                   disease,
                   disease_start,
                   LAG (disease_start, 1, disease_start)
                      OVER (PARTITION BY id ORDER BY indx) AS prev_disease_date,
                   LEAD (disease_start, 1, disease_start)
                      OVER (PARTITION BY id ORDER BY indx) AS next_disease_date
              FROM abc
             WHERE id IN (SELECT id
                            FROM abc
                           GROUP BY id, disease
                          HAVING COUNT (*) > 1)) a
     WHERE a.prev_disease_date > a.next_disease_date

    BR,

    Patrick
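    As a cross-check, the two filters in Patrick's query can be mimicked outside the database. A Python sketch of the same logic (simplified data with the NULL-dated rows left out; `flagged_ids` is an illustration, not part of the thread): first keep only ids that repeat a disease, then flag those where a LAG value exceeds the LEAD value.

```python
from collections import Counter, defaultdict
from datetime import date

# (id, indx, disease, disease_start) - sample rows with recoverable dates
rows = [
    (101, 1, 'fever',            date(2014, 11, 23)),
    (101, 2, 'Blood pressure',   date(2015, 11, 1)),
    (101, 4, 'viral fever',      date(2014, 12, 29)),
    (103, 1, 'Hay fever',        date(2005, 5, 19)),
    (103, 2, 'Hay fever',        date(2002, 1, 8)),
    (104, 2, 'Creatinine',       date(2006, 11, 8)),
    (104, 3, 'Creatinine',       date(2007, 11, 8)),
    (105, 1, 'high blood sugar', date(2010, 11, 8)),
    (105, 2, 'high blood sugar', date(2009, 11, 8)),
]

def flagged_ids(rows):
    """Ids that repeat a disease (GROUP BY id, disease HAVING COUNT(*) > 1)
    AND where, ordered by indx, LAG(disease_start) > LEAD(disease_start)."""
    dup_ids = {k[0] for k, n in Counter((r[0], r[2]) for r in rows).items() if n > 1}
    by_id = defaultdict(list)
    for id_, indx, _, start in sorted(rows):
        if id_ in dup_ids:
            by_id[id_].append(start)
    hits = set()
    for id_, starts in by_id.items():
        for i in range(len(starts)):
            prev = starts[i - 1] if i > 0 else starts[i]                # LAG default
            nxt = starts[i + 1] if i + 1 < len(starts) else starts[i]   # LEAD default
            if prev > nxt:
                hits.add(id_)
    return sorted(hits)

print(flagged_ids(rows))   # → [103, 105]
```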

  • Eliminate duplicates while counting dates

    Hi all

    Version: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0.


    How do we count the days and eliminate duplicate dates?

    For example:

    Name   From Date    To Date
    AAA    05/31/2014   06/13/2014
    AAA    06/14/2014   06/27/2014
    AAA    06/21/2014   07/20/2014
    AAA    07/21/2014   08/20/2014

    We want the output as:

    Name   No. of days
    AAA    14
    AAA    14
    AAA    23   (here 7 days must be eliminated from '06/21/2014 - 07/20/2014' because those dates were already counted in the previous range, i.e. '06/14/2014 - 06/27/2014'; hence we want 23)
    AAA    31

    Thank you

    Pradeep D.

    Not tested carefully... but it may give you an idea:

    -----

    WITH t
         AS (SELECT 'AAA' nm,
                    TO_DATE ('05/31/2014', 'MM/DD/YYYY') from_dt,
                    TO_DATE ('06/13/2014', 'MM/DD/YYYY') TO_DT
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('06/14/2014', 'MM/DD/YYYY'),
                    TO_DATE ('06/27/2014', 'MM/DD/YYYY')
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('06/21/2014', 'MM/DD/YYYY'),
                    TO_DATE ('07/20/2014', 'MM/DD/YYYY')
               FROM DUAL
             UNION ALL
             SELECT 'AAA',
                    TO_DATE ('07/21/2014', 'MM/DD/YYYY'),
                    TO_DATE ('08/20/2014', 'MM/DD/YYYY')
               FROM DUAL),
         tt
         AS (SELECT nm,
                    from_dt,
                    to_dt,
                    (to_dt - from_dt) total_diff,
                    CASE
                       WHEN   LEAD (from_dt, 1)
                                 OVER (PARTITION BY nm ORDER BY from_dt)
                            - to_dt < 0
                       THEN
                            LEAD (from_dt, 1)
                               OVER (PARTITION BY nm ORDER BY from_dt)
                          - to_dt
                          - 1
                       ELSE
                          0
                    END
                       diff
               FROM t)
    SELECT nm,
           from_dt,
           to_dt,
             1
           + total_diff
           + LAG (diff, 1, 0) OVER (PARTITION BY nm ORDER BY from_dt) final_diff
      FROM tt
    

    Output:

    ------------

    NM    FROM_DT     TO_DT       FINAL_DIFF
    AAA   5/31/2014   6/13/2014   14
    AAA   6/14/2014   6/27/2014   14
    AAA   6/21/2014   7/20/2014   23
    AAA   7/21/2014   8/20/2014   31

    Cheers,

    Manik.
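    Manik's LEAD/LAG trick can be verified with a few lines of Python (a sketch of the same arithmetic only, not a replacement for the SQL): LEAD measures how far the next range reaches back into the current one, and LAG charges that overlap to the following row's count.

```python
from datetime import date

# (from_dt, to_dt) ranges for one name, ordered by from_dt
ranges = [
    (date(2014, 5, 31), date(2014, 6, 13)),
    (date(2014, 6, 14), date(2014, 6, 27)),
    (date(2014, 6, 21), date(2014, 7, 20)),
    (date(2014, 7, 21), date(2014, 8, 20)),
]

def day_counts(ranges):
    """For each row, the inclusive day count minus any overlap with the
    next row: the 'diff' of row i is how far range i+1 reaches back into
    range i (LEAD), and it is applied one row later (LAG)."""
    diffs = []
    for i in range(len(ranges)):
        if i + 1 < len(ranges):
            gap = (ranges[i + 1][0] - ranges[i][1]).days
            diffs.append(gap - 1 if gap < 0 else 0)
        else:
            diffs.append(0)     # no LEAD on the last row -> 0, as in the CASE
    return [
        (to_dt - from_dt).days + 1 + (diffs[i - 1] if i > 0 else 0)  # LAG(diff,1,0)
        for i, (from_dt, to_dt) in enumerate(ranges)
    ]

print(day_counts(ranges))   # → [14, 14, 23, 31]
```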

  • Why are there duplicate TNS names in the connection drop-down list

    Why are there duplicate TNS names in the connection drop-down list?

    Also, when is the list updated if I make an addition to my tnsnames.ora?

    Probably because you have multiple tnsnames files in your directory.

    Most people don't realize this, but SQL*Plus has the same behavior. If you have a .BAK copy/version, rename it with a leading 1 or something in front of the "tnsnames" part of the file name.
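    One quick way to confirm the diagnosis is to list every candidate file in the directory. A sketch (assuming, as the answer describes, that the tools pick up any file with "tnsnames" in its name; `candidate_tns_files` is illustrative):

```python
import glob
import os

def candidate_tns_files(tns_admin):
    """List every file that name-matching tools could treat as a
    tnsnames file; more than one match is the usual cause of the
    duplicated entries in the drop-down."""
    return sorted(
        os.path.basename(p)
        for p in glob.glob(os.path.join(tns_admin, '*tnsnames*'))
    )
```

    If this returns both tnsnames.ora and tnsnames.ora.BAK, rename the backup so "tnsnames" no longer appears in its name.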

  • Lead and lead state details status

    This is a two-part question on lead status and lead status detail.  We are about to revise our current model in SFDC.

    First, if you track MQL, SAL, SQL, SQO, do you rely on your CRM's lead status and lead status detail to differentiate prospects that are sales-accepted or sales-qualified via a resulting status such as Open, Working, Engaged?  If "no", how do you do it?

    Next, here are our revised lead statuses (in blue) and lead status details.  Are there any obvious missing statuses?

    Working

    Committed

    Enrich (nurture)

    Partners

    1st attempt to contact

    BANT appeal at the request

    Future calendar (contacted but not a SAL)

    Partner request accepted

    2nd attempt to contact

    In the Conversation

    Future needs (contacted but not a SAL)

    Partner request denied

    3rd attempt to contact

    Planning meeting

    Future Budget (contacted but not SAL)

    No attempt to contact

    Awaiting response from the customer

    Lack of Contact Info

    The Contact Search

    Awaiting response from the customer

    No answer

    Work of PH

    Disqualified

    Purge

    Convert

    Prospecting

    Competitor

    Bad Contact Information

    Convert to partner

    Account name

    Best practices survey

    Invalid company information

    Converted to Contact (existing opportunity)

    Enriched Ph

    No Contact Center

    Duplicate record

    Converted to opportunity

    REP selected

    No decision maker / influencer

    No longer works

    Outside Service area

    Out of business

    Recent ACD / IVR purchase

    Name of the vendor

    Too small

    WFO only

    Hi, Kim.  Thank you for your thoughtful response.  It sounds like you've been down this road before, and your insight is appreciated.  A few answers/clarifications follow:

    • Open status - I didn't call this out above, but there is one.  Open (with no detail) is the default value. Our plan is to implement something similar to what you did: after XX days untouched, "recycle" the lead and return it to a nurture flow.
    • Working status - this status lets reps hold leads in their queue without worrying about "recycling". We have SLAs in place that limit how long a record can sit in Working, but those SLAs are not enforced systematically; rather, they are monitored through rep aging reports.  If some reps sit on leads too long, they will get a call from their boss. No automated "recycle" in the case of Working.
    • Excellent point on the dependency needed on lead status detail when lead status = Enrich.  I will make sure to require a status detail for the Engaged status.
    • Engaged status - it is effectively a SAL, and the teams agreed.  In our version of a pipeline reach and waterfall report, anything before Engaged is considered an MQL, suspect, or inquiry.  Changing a lead's status from "Open" or "Working" to "Engaged" indicates that it is sales-accepted.

    I am happy to know that you (and probably others) rely on lead status and lead status detail to differentiate what is sales-accepted from what is simply collecting dust.  I wasn't sure if there was a better way.  Now I am more confident in what we are doing.

    Thank you and have a great Friday.

    J

  • DataGuard duplicate sequence numbers?

    Hi all

    11.2.0.3.10

    AIX 6

    We have setup dataguard.

    I checked our primary db using:

    SELECT sequence#, first_time, next_time
      FROM v$archived_log
     ORDER BY sequence#;

    SEQUENCE # FIRST_TIME NEXT_TIME

    ---------- -------------------- --------------------

    2140 14 JULY 2014 09:04:16 14 JULY 2014 09:15:37

    2140 14 JULY 2014 09:04:16 14 JULY 2014 09:15:37

    2141 14 JULY 2014 09:15:37 14 JULY 2014 09:26:40

    2141 14 JULY 2014 09:15:37 14 JULY 2014 09:26:40

    2142 14 JULY 2014 09:26:40 14 JULY 2014 09:37:49

    2142 14 JULY 2014 09:26:40 14 JULY 2014 09:37:49

    2143 14 JULY 2014 09:37:49 14 JULY 2014 09:48:59

    2143 14 JULY 2014 09:37:49 14 JULY 2014 09:48:59

    2144 14 JULY 2014 09:48:59 14 JULY 2014 10:00:17

    2144 14 JULY 2014 09:48:59 14 JULY 2014 10:00:17

    I noticed that the SEQUENCE# values are duplicated on the primary.

    On the physical standby db, however, I get unique SEQUENCE# values only.

    SELECT sequence#, first_time, next_time, applied
      FROM v$archived_log
     ORDER BY sequence#;

    SEQUENCE# FIRST_TIME            NEXT_TIME             APPLIED
    --------- --------------------- --------------------- ---------
    2129      14 JULY 2014 07:11    14 JULY 2014 07:23:36 YES
    2130      14 JULY 2014 07:23:36 14 JULY 2014 07:36:06 YES
    2131      14 JULY 2014 07:36:06 14 JULY 2014 07:48:42 YES
    2132      14 JULY 2014 07:48:42 14 JULY 2014 08:00:13 YES
    2133      14 JULY 2014 08:00:13 14 JULY 2014 08:05:44 YES
    2134      14 JULY 2014 08:05:44 14 JULY 2014 08:06:55 YES
    2135      14 JULY 2014 08:06:55 14 JULY 2014 08:18:46 YES
    2136      14 JULY 2014 08:18:46 14 JULY 2014 08:30:37 YES
    2137      14 JULY 2014 08:30:37 14 JULY 2014 08:42:22 YES
    2138      14 JULY 2014 08:42:22 14 JULY 2014 08:54:16 YES
    2139      14 JULY 2014 08:54:16 14 JULY 2014 09:04:16 YES
    2140      14 JULY 2014 09:04:16 14 JULY 2014 09:15:37 YES
    2141      14 JULY 2014 09:15:37 14 JULY 2014 09:26:40 YES
    2142      14 JULY 2014 09:26:40 14 JULY 2014 09:37:49 YES
    2143      14 JULY 2014 09:37:49 14 JULY 2014 09:48:59 YES
    2144      14 JULY 2014 09:48:59 14 JULY 2014 10:00:17 IN-MEMORY

    Please help - how can I resolve this error?

    Thank you

    MK

    You need the primary's archivelogs in order to restore and recover the primary, and also to be able to ship them to the standby if a lag / delay / outage leaves the standby trailing the primary.

    You need the redo to be shipped to the standby to keep the standby synchronized.

    So you do *not* have a single log destination if you have a standby - and v$archived_log on the primary records one row per destination, which is why each sequence# appears twice there. It is not an error.

    Hemant K

  • How to limit the data download duplicate via WebADI

    Hello Experts,

    The issue concerns WebADI data upload. Currently the system allows the user to upload the same set of records repeatedly, even if they may already have been pushed to the project.

    This system behavior leads to users double-uploading documents. A valid case could be when the user uploads the data sheet for the first time, the sheet appears unresponsive, but the transfer has actually completed in the background.

    Now, when the user retries the upload, the system does not validate whether the records have already been processed and sent to Oracle Projects.

    Is it possible to flag the rows during processing, in order to avoid duplicate uploads?

    Is there any provision via custom extensions?

    Kind regards

    Shan

    Hi Shan,

    Here are the details on using your Transaction Source. Try to use a Unique Trx ID...

    WebADI imports data into Oracle Projects because this is information entered by the user that comes from outside of Projects.

    But if the business case doesn't want duplicates to come in, you can write a Transaction Import client extension (pre-import) where you can put your custom logic to check whether the record already exists in the system. If it already exists, you can stop the upload.

    Allow Duplicate Reference

    Enabling this option allows multiple transactions with the same transaction source to use the same original system reference. If you enable this option, you cannot uniquely identify the item by transaction source and original system reference.

    Regards,

    Christopher K
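    The pre-import check Christopher describes boils down to "skip any row whose unique transaction reference has already been imported". A sketch of that logic in Python (the names `existing` and `accept` are illustrative; the real implementation would be a PL/SQL client extension querying the imported transactions):

```python
# ids already imported into the project (illustrative values;
# the real check would query the imported transactions)
existing = {'TRX-001', 'TRX-002'}

def accept(batch):
    """Pre-import filter: keep only rows whose unique trx id has not
    been seen before; duplicates are dropped instead of re-imported."""
    fresh = []
    for trx_id, payload in batch:
        if trx_id in existing:
            continue            # already processed: stop this row's upload
        existing.add(trx_id)
        fresh.append((trx_id, payload))
    return fresh

print(accept([('TRX-002', 'dup'), ('TRX-003', 'new'), ('TRX-003', 'again')]))
# → [('TRX-003', 'new')]
```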

  • Report of guarantee with several duplicate entries

    I seem to have a problem with the warranty status report in my Dell OME installation.  Previously I was running version 1.2.0, and I started noticing multiple entries for servers with fewer than 90 days of warranty remaining.  Initially there was one entry per service contract per server, so I'm used to seeing 2 entries for each server.  However, the problem gradually snowballed to the point where the 90-day warranty report now lists 1345 entries, when OME only communicates with 94 servers, of which only 9 have reached the 90-day expiry window.

    I performed the 1.2.1 update this morning, but the duplicate entries still appear in the warranty report.  Using the troubleshooting tool, I am able to query the correct information from the Dell warranty API, which leads me to believe there may be some database entries that are not being cleaned up.

    Any ideas on how to clean up these duplicates?  Is there anything I can try from the OpenManage Essentials web console?

    If this topic has been covered in another thread, can someone point me in the right direction?

    If you are comfortable performing queries on your database directly (AFTER making a backup), you can use SQL Server Management Studio (free) and delete the entries in the warranty table. The table is re-populated when the report is run. Use the following SQL to clean the table:

    DELETE FROM dbo.warranty

    If you are not comfortable doing so, you can instead open a ticket and have the support guys review it (800-945-3355).

    PPrabhu

  • Syncing Outlook 07 with the Palm Treo 650 sync conduit produces DUPLICATE Contacts in Outlook... Help!

    I downloaded the Palm sync conduit and linked a Treo 650 to sync with Outlook 2007 (Vista 32), and it created duplicates for most of my CONTACTS in Outlook 07 - not all, but most. Weird.

    Treo 650 (Cingular) - (Palm Desktop 6.2) - Outlook 2007 (Office 2007) - PC Dual Core Vista Edition Home Premium

    Can someone help get Palm sync to not create dups? I deleted everything and tried twice. There are no duplicates on the Treo or in the desktop software; I never had a problem syncing with Palm Desktop, only Outlook (I'm converting from Windows Mail to Outlook).

    Thank you

    Smitty 631-345-0000

    The HotSync Manager created the duplicates for you.

    To clean up the contacts database in Outlook, you need to hard reset your Treo, set the desktop HotSync conduit to "Desktop overwrites handheld", and then synchronize.

    Click the following link for the kb.palm.com article about reset procedures.

    http://www.Palm.com/cgi-bin/cso_kbURL.cgi?id=887

    There is a 3rd party app that will remove duplicates in Outlook.

    Here is the link to the site; http://www.MAPILab.com/Outlook/remove_duplicates/

    One thing you may not be aware of is that Palm Desktop 6.2 already installs the Outlook 2007 conduits, so there is no need to install the conduit update for Outlook 2007.

    You can remove the 2007 conduit update using the Control Panel.

    For reference purposes, click the following link to the support page for your device on the kb.palm.com Web page.
    http://KB.Palm.com/wps/portal/KB/na/Treo/650/unlocked/home/page_en.html

    There are links on that page to the user Troubleshooting Guide, how-tos, downloads, etc.

  • Update using lead/rank

    Good morning,

    I'm having a little trouble trying to update using LEAD/RANK.

    Summary of the issue: my table contains duplicate data (the only differences being the two fields viewer_org and interest_reason), and rather than processing multiple rows I want to assign the viewer orgs of the duplicate records to one row, so I can keep 1 row that carries the list of orgs that can view it.

    I have taken a few of the fields of interest here:
    create table copy_test 
    (
     OWNER_ORG varchar(10),
     GEN_REC NUMBER(10),
     VIEWER_ORG varchar(10),
     INTEREST_REASON varchar(10),
     col_1 varchar(10),
     col_2 varchar(10),
     col_3 varchar(10),
     col_4 varchar(10)
    );
    Sample data:
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('5AA' ,12345 ,'5AA' ,'6543' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('5AA' ,12345 ,'5BB' ,'5430' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('5BB' ,32165 ,'5CC' ,'430' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('5BB' ,32165 ,'5AA' ,'5430' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('5BB' ,32165 ,'5BB' ,'6543' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('YAA' ,98765 ,'5AA' ,'0' ,'' ,'' ,'' ,''  );
    INSERT INTO COPY_TEST (OWNER_ORG ,GEN_REC ,VIEWER_ORG ,INTEREST_REASON ,COL_1 ,COL_2 ,COL_3 ,COL_4 ) VALUES ('YAA' ,98765 ,'5BB' ,'543' ,'' ,'' ,'' ,''  );
    The data looks like this:
     select * from copy_test;
    
    OWNER_ORG     GEN_REC VIEWER_ORG INTEREST_R COL_1      COL_2      COL_3      COL_4
    ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
    5AA             12345 5AA        6543
    5AA             12345 5BB        5430
    5BB             32165 5CC        430
    5BB             32165 5AA        5430
    5BB             32165 5BB        6543
    YAA             98765 5AA        0
    YAA             98765 5BB        543
    Essentially, we have 3 examples above (keyed on gen_rec). In the 1st example, owner 5AA has a record that organizations 5AA and 5BB are allowed to see. That's why it exists twice: viewer_org 5AA on one row and 5BB on the other. I need to assign those two organizations against one of the rows. I have started this, which goes some way towards identifying them:
    SET LINESIZE 250;
    
    select GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG,VIEWER_ORG CL_1,
    LEAD (VIEWER_ORG,1,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) as CL_2,
    LEAD (VIEWER_ORG,2,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) as CL_3,
    LEAD (VIEWER_ORG,3,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) as CL_4,
    RANK() OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) rank
    from COPY_TEST 
    order by GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG;
    Which gives these results:
       GEN_REC VIEWER_ORG INTEREST_R OWNER_ORG  CL_1       CL_2       CL_3       CL_4             RANK
    ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
         12345 5AA        6543       5AA        5AA        5BB        0          0                   1
         12345 5BB        5430       5AA        5BB        0          0          0                   2
         32165 5AA        5430       5BB        5AA        5BB        5CC        0                   1
         32165 5BB        6543       5BB        5BB        5CC        0          0                   2
         32165 5CC        430        5BB        5CC        0          0          0                   3
         98765 5AA        0          YAA        5AA        5BB        0          0                   1
         98765 5BB        543        YAA        5BB        0          0          0                   2
    This is the result I need:
    GEN_REC     VIEWER_ORG     INTEREST_R     OWNER_ORG     CL_1     CL_2     CL_3     CL_4
    12345     5AA     6543     5AA     5AA     5BB          
    12345     5BB     5430     5AA                    
    32165     5AA     5430     5BB                    
    32165     5BB     6543     5BB     5BB     5AA     5CC     
    32165     5CC     430     5BB                    
    98765     5AA     0     YAA                    
    98765     5BB     543     YAA     5AA     5BB          
    I need the information in the viewer_org field to be placed into CL_1, CL_2, CL_3 or CL_4 depending on how many duplicate rows there are (there are never more than 4). The choice of which row to update is driven by whichever interest_reason is highest, even though this isn't a numeric column; note that the '0' may be significant.


    Any ideas guys?

    One way would be to use CASE (with scalar subqueries) in the SELECT:

    with t_data as
    (
    SELECT
         GEN_REC ,
         VIEWER_ORG ,
         INTEREST_REASON,
         MAX(INTEREST_REASON) OVER (PARTITION BY GEN_REC) as max_int_reason,
         OWNER_ORG,
         VIEWER_ORG CL_1,
         LEAD (VIEWER_ORG,1,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) AS CL_2,
         LEAD (VIEWER_ORG,2,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) AS CL_3,
         LEAD (VIEWER_ORG,3,0) OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) AS CL_4,
         RANK() OVER (PARTITION BY GEN_REC ORDER BY GEN_REC ,VIEWER_ORG ,INTEREST_REASON,OWNER_ORG) rank
    FROM
         COPY_TEST
    ORDER BY
         GEN_REC ,
         VIEWER_ORG ,
         INTEREST_REASON,
         OWNER_ORG
    )
    SELECT
         t1.GEN_REC,
         VIEWER_ORG ,
         INTEREST_REASON,
         CASE
              WHEN INTEREST_REASON = max_int_reason
              THEN
              (SELECT
                   decode(cl_1,'0',null,cl_1)
              FROM
                   t_data t2
              WHERE
                   rank          = 1
              AND t1.gen_rec = t2.gen_rec)
              ELSE
              null
         END cl_1,
         CASE
              WHEN INTEREST_REASON = max_int_reason
              THEN
              (SELECT
                   decode(cl_2,'0',null,cl_2)
              FROM
                   t_data t2
              WHERE
                   rank          = 1
              AND t1.gen_rec = t2.gen_rec)
              ELSE
              null
         END cl_2,
         CASE
              WHEN INTEREST_REASON = max_int_reason
              THEN
              (SELECT
                   decode(cl_3,'0',null,cl_3)
              FROM
                   t_data t2
              WHERE
                   rank          = 1
              AND t1.gen_rec = t2.gen_rec)
              ELSE
              null
         END cl_3,
         CASE
              WHEN INTEREST_REASON = max_int_reason
              THEN
              (SELECT
                   decode(cl_4,'0',null,cl_4)
              FROM
                   t_data t2
              WHERE
                   rank          = 1
              AND t1.gen_rec = t2.gen_rec)
              ELSE
              null
         END cl_4
    FROM
         t_data t1 ;
    
    GEN_REC                VIEWER_ORG INTEREST_REASON CL_1       CL_2       CL_3       CL_4
    ---------------------- ---------- --------------- ---------- ---------- ---------- ----------
    12345                  5AA        6543            5AA        5BB
    12345                  5BB        5430
    32165                  5AA        5430
    32165                  5BB        6543            5AA        5BB        5CC
    32165                  5CC        430
    98765                  5AA        0
    98765                  5BB        543             5AA        5BB                              
    
     7 rows selected 
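    The effect of the CASE/scalar-subquery answer can be mimicked in a few lines of Python (a sketch of the logic only; `collapse` and its data layout are illustrative): all viewer orgs of a gen_rec land in CL_1..CL_4, ordered by viewer_org, on the row whose interest_reason is the string-maximum (mirroring MAX on the VARCHAR column), and every other row gets blanks.

```python
from collections import defaultdict

# (gen_rec, viewer_org, interest_reason) taken from the sample inserts
rows = [
    (12345, '5AA', '6543'), (12345, '5BB', '5430'),
    (32165, '5CC', '430'),  (32165, '5AA', '5430'), (32165, '5BB', '6543'),
    (98765, '5AA', '0'),    (98765, '5BB', '543'),
]

def collapse(rows):
    """Put every viewer_org of a gen_rec into CL_1..CL_4 (ordered by
    viewer_org) on the row whose interest_reason is the string-maximum;
    every other row of that gen_rec gets four blanks."""
    groups = defaultdict(list)
    for gen_rec, org, reason in rows:
        groups[gen_rec].append((org, reason))
    out = {}
    for gen_rec, members in groups.items():
        orgs = sorted(org for org, _ in members)
        keeper = max(members, key=lambda m: m[1])   # MAX(interest_reason)
        filled = orgs + [''] * (4 - len(orgs))
        for member in members:
            out[(gen_rec, member[0])] = filled if member == keeper else [''] * 4
    return out

res = collapse(rows)
print(res[(32165, '5BB')])   # → ['5AA', '5BB', '5CC', '']
```

    Note the string comparison: as the question warns, interest_reason is not numeric, so '6543' ranks above '543' lexically here just as it does in the SQL.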
    
  • duplicate key was found for object

    Hello
    in Tools 8.49 (HRMS90) on MSSQL 2005, running Data Mover, I got the following error:
    Import  PORTAL_CSS_RUN  11
     Building required indexes for PORTAL_CSS_RUN 
     - SQL Error. Error Position: 0  Return: 8601 - [Microsoft][SQL Native Client][SQL Server]The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.PS_PORTAL_CSS_RUN' and the index name 'PS_PORTAL_CSS_RUN'. The duplicate key value is (PSEM, permsync).
    [Mic
     CREATE UNIQUE CLUSTERED INDEX PS_PORTAL_CSS_RUN ON PS_PORTAL_CSS_RUN (OPRID,    RUN_CNTL_ID)
    Error: Unable to process create statement for PORTAL_CSS_RUN 
     SQL Spaces: 0  Tables: 5743  Triggers: 0 Indexes: 6841  Views: 0
    Any solution?

    Thank you.

    I guess the process is an App Engine process.
    The last run probably led to the error. Next time, before you run it again, you must add the job and erase the rows from the previous run. App Engine programs should never end in error; they cause blocking problems on the data tables.
    That is the error you see in the log file. Unlock/delete the rows in the respective table, and then try again. You should succeed.
    Thanks!
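    The error message names the offending key, (PSEM, permsync). Before re-running, every key that would block the unique index can be found in SQL with GROUP BY OPRID, RUN_CNTL_ID HAVING COUNT(*) > 1; the same check, sketched in Python over hypothetical rows:

```python
from collections import Counter

# (OPRID, RUN_CNTL_ID) pairs as they might sit in the offending table
# (hypothetical rows; the error reported the key (PSEM, permsync))
rows = [('PSEM', 'permsync'), ('PSEM', 'permsync'), ('PS', 'nightly')]

# keys that block CREATE UNIQUE INDEX: any pair that occurs more than once
dupes = sorted(k for k, n in Counter(rows).items() if n > 1)
print(dupes)   # → [('PSEM', 'permsync')]
```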

  • How can I have two desktops on iCloud sync at the same time, without having duplicate files?

    Hi all

    I have two computers, a MacBook Pro and an iMac. I use the new macOS Sierra feature that continuously synchronizes my Desktop to iCloud. I like that the Desktop on my iMac is synchronized to the cloud. Now I would like to have the exact same Desktop on my MacBook Pro. I have tried many ways, including dragging files to iCloud Drive from my desktop, but the synchronization does not work. And if I enable syncing on my MacBook Pro, I worry that I will get duplicate files.

    With this method, I would like the following: whenever I add a file to my iMac's Desktop, it shows up on my MacBook Pro's Desktop without me needing to keep opening iCloud Drive.

    Thanks for reading this.

    Ethan

    If you enable sync on the MacBook Pro, there should not be duplicates.  What makes you think there would be?

  • Various system processes driving CPU usage wild

    I don't know where to begin - several system processes spin up and drive my CPU wild, to the point where my 2014 rMBP lasts maybe 3 hours off the charger and keeps the fans running high.

    The processes include:

    • Google Chrome Helper
    • CalendarAgent
    • CalNCService
    • PhotoAnalysis
    • Processes associated with Sophos (a commercial antivirus application that I am required to have on my computer by work/school)

    Things I have tried, without lasting success:

    • Resetting my keychain - which caused a slew of other problems
    • Restoring my computer from a Time Machine backup
    • Removing all 3 Google calendars I had linked
    • Clearing caches

    Ideas? I'm a student, so I'm facing the embarrassment of fans screaming out of control... or burning my wrists in the middle of class. I just noticed that running my MBP with an external screen calms it down, but the processes still spike intermittently.

    Crazy to believe, but turning off iCloud Reminders did the trick...
