Maximum matching

I want the highest UNI for each ID across the two tables.
An ID does not necessarily occur in both tables.
ID 1's highest UNI is therefore 9, from table 2;
ID 2's is 10, from table 2;
ID 3's is 7, from table 2;
and ID 4's is 11.


TABLE 1
ID     UNI
1     3
1     4
1     6
2     3
2     9
4       11
TABLE 2
ID     UNI
1     9
1     5
2     10
3     4
3     7
WANTED:
ID     UNI
1     9
2     10
3     7
ALSO

How would I count the rows per ID in each table individually
to get:
ID      CNT_T1     CNT_T2
1     3     2
2     2     1
3     NULL     2
4     1     NULL
Published by: Chloe_19 on 05/07/2012 17:45 (edited 17:50 and 17:54)

Hello

Chloe_19 wrote:
>

I want the highest UNI for each ID across the two tables.
An ID does not necessarily occur in both tables.
ID 1's highest UNI is therefore 9, from table 2;
ID 2's is 10, from table 2;
ID 3's is 7, from table 2;
and ID 4's is 11.

TABLE 1

ID     UNI
1     3
1     4
1     6
2     3
2     9
4       11

TABLE 2

ID     UNI
1     9
1     5
2     10
3     4
3     7

WANTED:

ID     UNI
1     9
2     10
3     7

Sorry, I don't understand this problem.
You said you wanted the maximum UNI from the two tables. Isn't that 11?
In addition, 7 and 9 are not the maximum UNI in either table. Why do you want them?
Could it be that you don't want the overall maximum UNI, but the maximum associated with each ID?

Is something like this more useful?

WITH t1 AS
(
SELECT 1 id,  5 uni FROM dual UNION ALL
SELECT 1 id,  6 uni FROM dual UNION ALL
SELECT 1 id,  7 uni FROM dual UNION ALL
SELECT 2 id,  3 uni FROM dual UNION ALL
SELECT 3 id,  3 uni FROM dual UNION ALL
SELECT 4 id, 11 uni FROM dual
)
, t2 AS
(
SELECT 1 id,  9 uni FROM dual UNION ALL
SELECT 1 id,  5 uni FROM dual UNION ALL
SELECT 2 id, 10 uni FROM dual UNION ALL
SELECT 3 id,  4 uni FROM dual UNION ALL
SELECT 3 id,  7 uni FROM dual
)
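To actually produce the WANTED output from sample data like the CTEs above, one approach is to UNION ALL the two tables and take MAX(uni) per id. Here is a minimal sketch of that logic, run through SQLite from Python so it can be tested outside Oracle; in Oracle the inner SELECT would simply be appended to the WITH clause. Table and column names are taken from the post; the data is the original poster's.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INTEGER, uni INTEGER);
CREATE TABLE t2 (id INTEGER, uni INTEGER);
INSERT INTO t1 VALUES (1,3),(1,4),(1,6),(2,3),(2,9),(4,11);
INSERT INTO t2 VALUES (1,9),(1,5),(2,10),(3,4),(3,7);
""")

# Stack both tables with UNION ALL, then take the maximum UNI per ID.
rows = con.execute("""
    SELECT id, MAX(uni) AS uni
    FROM (SELECT id, uni FROM t1
          UNION ALL
          SELECT id, uni FROM t2)
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 9), (2, 10), (3, 7), (4, 11)]
```

Note that this also returns ID 4 (UNI 11), which the WANTED output omits; as the reply points out, that part of the requirement is ambiguous.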

AND

ALSO
How would I count the rows per ID in each table individually
to get:

ID      CNT_T1     CNT_T2
1     3     2
2     2     1
3     NULL     2
4     1     NULL

Why do you want CNT_T1 to be NULL on the row where ID = 3?

You can use a FULL OUTER JOIN or a UNION.

WITH     got_t1_cnt     AS
(
     SELECT       id          AS t1_id
     ,       COUNT (*)     AS t1_cnt
     FROM       t1
     GROUP BY  id
)
,     got_t2_cnt     AS
(
     SELECT       id          AS t2_id
     ,       COUNT (*)     AS t2_cnt
     FROM       t2
     GROUP BY  id
)
SELECT       NVL (c1.t1_id, c2.t2_id)     AS id
,       c1.t1_cnt
,       c2.t2_cnt
FROM           got_t1_cnt  c1
FULL OUTER JOIN      got_t2_cnt  c2  ON  c1.t1_id  = c2.t2_id
ORDER BY  id
;
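The same counting logic can be sketched in SQLite from Python. Since older SQLite versions lack FULL OUTER JOIN, this sketch uses the UNION alternative mentioned in the reply (a LEFT JOIN in one direction, plus the ids found only on the other side); the data is the original poster's.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INTEGER, uni INTEGER);
CREATE TABLE t2 (id INTEGER, uni INTEGER);
INSERT INTO t1 VALUES (1,3),(1,4),(1,6),(2,3),(2,9),(4,11);
INSERT INTO t2 VALUES (1,9),(1,5),(2,10),(3,4),(3,7);
""")

# Emulate a FULL OUTER JOIN of the two per-id counts:
# LEFT JOIN in one direction, then UNION ALL the ids missing from t1.
rows = con.execute("""
    WITH c1 AS (SELECT id, COUNT(*) AS cnt_t1 FROM t1 GROUP BY id),
         c2 AS (SELECT id, COUNT(*) AS cnt_t2 FROM t2 GROUP BY id)
    SELECT c1.id, c1.cnt_t1, c2.cnt_t2
    FROM c1 LEFT JOIN c2 ON c2.id = c1.id
    UNION ALL
    SELECT c2.id, NULL, c2.cnt_t2
    FROM c2
    WHERE c2.id NOT IN (SELECT id FROM c1)
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 3, 2), (2, 2, 1), (3, None, 2), (4, 1, None)]
```

This matches the wanted CNT_T1/CNT_T2 output, with None standing in for SQL NULL.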

Published by: Frank Kulash, May 7, 2012 22:29

Tags: Database

Similar Questions

  • VM use Audit comments

    I was wondering if someone could give me a hand with an easy one (I looked through other posts, but no joy).

    I've been playing with the VI Toolkit, trying to export some simple information to a CSV file. I'm looking for a list of all the VM guests, including which cluster they are part of, the maximum disk.usage.average for the last 30 days, and the maximum average memory usage for the last 30 days.

    To keep it simple (I just started playing with PowerShell), I took a modular approach, just trying to output a CSV file with the guest VM name and the maximum disk.usage.average value for the last 30 days, which resulted in the following:

    $vmguest = Get-VM server_1
    $vmguestMAX = Get-Stat -Entity ($vmguest) -Stat disk.usage.average -Start ([DateTime]::Now.AddDays(-1)) -Finish ([DateTime]::Now) |
        Measure-Object -Property Value -Maximum |
        Select-Object Maximum


    I then found that the resulting data is indeed what I'm looking for, but I'm not familiar enough with PowerShell to find a way to output the results to a CSV file.

    Ultimately, I would like to output a complete audit of VM guest usage to a CSV file as described above, so any help in that direction would be great.

    Thanks for any help,

    DMA

    My previous code was indeed missing the first line (i.e. the initialization of the array).

    The simplest solution is to use a simple array and export it to a CSV file.

    Something like this:

    $stats = @()
    Get-VM | % {
        $row = "" | Select VMname, DiskUsgAvg
         $row.VMname = $_.Name
         $row.DiskUsgAvg = (Get-Stat -Entity ($_) -stat disk.usage.average -Start (Get-Date).AddDays(-30) -Finish (Get-Date) -IntervalMins 30  | `
                Measure-Object -property Value -maximum).Maximum
        $stats += $row
    }
    $stats | Export-Csv "c:\diskstats.csv" -NoTypeInformation
    

    I'm not sure the maximum will match what you see in the VI Client for guests that have less than 30 days of statistical data.

    But you can omit the -IntervalMins parameter if you want.

  • Maximum number of channels in "Write Data"

    What is the maximum number of channels in the "Write Data" module? It seems to be only 16 inputs. I have more than 16 parameters in my application. What do you suggest? I would like to have all of the data collected in a single file.

    The Multiplex module lets you combine several channels into one. In general, use the "by block" setting; the output will be a single wire carrying all the multiplexed channels in blocks. You can multiplex at most 16:1, so use several Multiplex modules. They do not have to be symmetrical, but do keep track of how many channels are multiplexed onto each wire.

    In the Write Data module, open the file format Options (ASCII or DASYLab); at the bottom of the dialog box, configure it to match how you multiplexed - by block or by value, for example. Then, on the right, set the number of channels multiplexed into each of the input channels.

    Done this way, DASYLab will demultiplex the data in the file. You will lose the channel names, but you can configure names for the multiplexed channels using strings: at the bottom of the dialog box you can assign a global string for each channel. Tedious, but it can be done.

  • Cannot create alias name for DF PROFI II card; cannot right-click in MAX

    I cannot create an alias name for the DF PROFI II card and cannot right-click it in MAX; the example in the MAX installation instructions does not match what I see. See attached picture.

    At the suggestion of Ryan from technical support, I upgraded VISA to 5.4.1 and the problem was solved.

    Thank you.

  • Maximum value as a number instead of a string

    Hallo,

    I calculated the maximum value of a channel. The next step is to use this value for further computation. My problem is that I can't use this value as a number; it is just a string. I tried

    ChnPropGet, and I got the number, but I can't use it for calculations. Does anyone have an idea how I can fix this problem?

    I've attached a screenshot of my script. I use the German version of DIAdem 12.

    Thanks for the help...

    Hello LePot,

    the Properties function returns the property in its native type. If you read the maximum property of a channel, it will return the value as a floating-point number. You can check this by running the following script.

    (The script assumes the example dataset is loaded, which has a channel group 3 with a channel named "Res_Schall_1". Your channels should give you the same result, with the message box showing "Double".)

    Dim Maximum
    Maximum = Data.Root.ChannelGroups(3).Channels("Res_Schall_1").Properties("maximum").Value
    MsgBox(TypeName(Maximum))

    If you ever need to convert a number represented as a string, be careful with "CDbl". This VBScript function assumes the number is formatted according to the language of your OS, especially the decimal separator ('.' or ','). If the OS setting does not match the way the number was converted to a string, you have a problem. That is why DIAdem offers Str() to convert a number to a string and Val() to go in the opposite direction. Using CDbl can create really nasty errors that are difficult to detect.

    Andreas
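    The decimal-separator pitfall Andreas describes is easy to reproduce outside VBScript. Below is a small illustrative Python sketch (the `parse_decimal` helper is hypothetical, not a DIAdem API): a value written with a German decimal comma parses correctly only when the separator is stated explicitly.

```python
def parse_decimal(s: str, decimal_sep: str = ".") -> float:
    """Parse a number string whose decimal separator is known explicitly,
    instead of trusting whatever the OS locale happens to be."""
    return float(s.replace(decimal_sep, "."))

# A maximum of 1.5 written by a German-locale OS as "1,5":
german = "1,5"
print(parse_decimal(german, decimal_sep=","))  # -> 1.5 (correct)

# Naively assuming a '.' separator fails loudly here; with CDbl-style
# locale guessing it could instead fail silently with a wrong value.
try:
    float(german)
except ValueError as e:
    print("naive parse failed:", e)
```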

  • How can I get good results from shape detection / pattern matching for this image?

    I need to be able to identify all nine squares in the first image (0000094718...) and others like it.  I have tried shape detection, filtering, and geometric pattern matching.  I have tried varying all the parameters within these functions.  I have tried preprocessing the image (and the template, if any: usually created from just one of the boxes) with thresholding and/or a multiplier before using the detection or matching function, and I still cannot get all nine boxes and only those nine boxes.

    Can someone offer some help?  Ideally I would use shape detection, because it returns the corner coordinates of the rectangles, and those are particularly useful to me.  What confuses me about this function is that after setting the minimum/maximum width/height of the rectangles to find, the program seems to ignore these limits and returns matches outside them.

    It would be even better to have a script which identifies all nine boxes in the two attached images.  The boxes in the second image (X820A32) are more square.

    Thank you

    Holly

    Try the attached script.  I think the threshold values you are using may not be optimal.  You may also want to apply a smoothing filter before thresholding; that should help with nuisance particles as well.

  • What is the maximum RAM allocation allowed by an HP Pavilion zv5000 laptop?

    The specifications for an HP Pavilion Notebook ZV5160US (DZ330U) on the HP web site give the maximum eligible RAM allocation as (1x256MB, 1x1GB). Meanwhile, Crucial's downloadable memory scanner suggests that 2x1GB is a legitimate configuration for this laptop's two SODIMM slots. Who should I believe? And, in addition, is there any truth to certain versions of Windows only benefiting from extra RAM when the modules "match" in size and configuration? This laptop runs Windows XP Home Edition.

    Thank you, Paul!

    I did not hold out much hope of attention to a question about this dated laptop - but I am pleasantly surprised to find that someone was interested in addressing an issue that may find application even with the latest computer models.

    Given your answer, several nuggets of info available on the web (and on HP's web site), and the results of Crucial's memory scan, I will probably go with the 1 GB memory modules. For sure, current applications can use (and will most certainly end up using) all the RAM you can throw at them!

    Dana (ChumpDNS)

  • ACS 5.6 maximum sessions per user

    Hi all,

    I have a client that has installed ACS 5.6 joined to his AD. He wants to limit the maximum number of sessions per user, but these users are authenticated against AD. How can I limit the maximum sessions?

    I tried to do it in the ACS configuration, under Access Policies > Max User Session Policy > Max User Session Settings, but it does not apply this policy. Do I apply this policy in an authorization policy? How can I do this?

    If you can point me to a link where this is explained, many thanks.

    Kind regards.

    David.

    Hi David. In my opinion, those settings apply to internal ACS groups. So in order to take advantage of them, you need to map your AD groups to internal ACS groups and then apply the "Max User Session Policy" to those. Here is a link with more information:

    http://www.Cisco.com/c/en/us/TD/docs/net_mgmt/cisco_secure_access_control_system/5-3/user/guide/acsuserguide/access_policies.html#pgfId-1162308

    I hope this helps!

    Thank you for rating helpful posts!

  • Maximum traffic for a vmxnet3

    Assuming that there are no bottlenecks elsewhere, what is the maximum inbound network traffic that a single vmxnet3 adapter on a virtual machine can receive? Is it 10 GB/s (gigabytes per second) or 10 Gbps (gigabits per second)?

    Thank you!

    In theory, and in the physical world, the maximum data rate would be 10 Gbit/s, since vmxnet3 emulates a 10GBASE-T physical link.

    That rate is governed by the physical and signalling limitations of the standard, but those do not apply in a purely virtual configuration (two virtual machines on the same vSwitch and port group on the same host).

    Guests on the same host, vSwitch, and port group can exceed 10 Gbit/s. One might think that, for example, the e1000, which presents a 1 Gbps link to the guest, is limited to 1 Gbps, or that vmxnet3 is limited to 10 Gbps. But that isn't the case: they can easily exceed their "virtual link speed". Test it with a network throughput tool such as iperf and see for yourself.

    This is because the physically imposed restrictions do not apply in a virtualized environment between two virtual machines on the same host and port group. Operating systems don't artificially restrict traffic to match the negotiated line speed unless it is physically required.

    For reference, I am able to reach 25+ Gbps with the iperf network throughput tool between two virtual Linux machines, each with a single vmxnet3 vNIC, on the same host and port group. (Yes, 25 Gbps: even though vmxnet3 emulates a 10 Gbps link, throughput is not artificially capped in the absence of a physical signalling limitation.)

    Once you get to external communication outside a host, you are limited by the physical link limitations of your ESXi host.

  • Getting maximum virtual machine read IOPS via PowerCLI

    Hello

    I use this script to report maximum and average IOPS (write & read), based mainly on this script by VMware Communities user LucD:

    $datastores = "datastore.*|EMC-LUN.*|SAN-vm.*"
    $MinIOPSWriteMax = 10
    $MinIOPSReadMax = 10
    $metrics = "disk.numberwrite.summation","disk.numberread.summation"
    $start = (Get-Date).AddDays(-1)
    $report = @()
    $vms = Get-VM | where {$_.PowerState -eq "PoweredOn"}
    $stats = Get-Stat -Realtime -Stat $metrics -Entity $vms -Start $start
    $interval = $stats[0].IntervalSecs
    $lunTab = @{}
    foreach ($ds in (Get-Datastore -VM $vms | where {($_.Type -eq "VMFS") -and ($_.Name -match $datastores)})) {
        $ds.ExtensionData.Info.Vmfs.Extent | %{
            $lunTab[$_.DiskName] = $ds.Name
        }
    }
    $report = $stats | Group-Object -Property {$_.Entity.Name}, Instance | %{
        New-Object PSObject -Property @{
            VM = $_.Values[0]
            Disk = $_.Values[1]
            IOPSWriteMax = ($_.Group |
                where {$_.MetricId -eq "disk.numberwrite.summation"} |
                Measure-Object -Property Value -Maximum).Maximum / $interval
            IOPSWriteAvg = ($_.Group |
                where {$_.MetricId -eq "disk.numberwrite.summation"} |
                Measure-Object -Property Value -Average).Average / $interval
            IOPSReadMax = ($_.Group |
                where {$_.MetricId -eq "disk.numbertread.summation"} |
                Measure-Object -Property Value -Maximum).Maximum / $interval
            IOPSReadAvg = ($_.Group |
                where {$_.MetricId -eq "disk.numberread.summation"} |
                Measure-Object -Property Value -Average).Average / $interval
            Datastore = $lunTab[$_.Values[1]]
        }
    }
    $report | Sort-Object IOPSWriteMax -Descending |
        Where-Object {($_.IOPSWriteMax -ge $MinIOPSWriteMax) -or ($_.IOPSReadMax -ge $MinIOPSReadMax)} |
        Select VM, Disk, Datastore, IOPSWriteMax, IOPSWriteAvg, IOPSReadMax, IOPSReadAvg | more

    Is this normal behavior, that I always get 0 in IOPSReadMax for every VM?

    Thanks in advance.

    Best regards

    Pablo

    You just have a typo in the name of the metric here ("numbertread" instead of "numberread"):

    where {$_.MetricId -eq "disk.numbertread.summation"} | `

  • When loading, error: field in the data file exceeds the maximum length

    Oracle Database 11 g Enterprise Edition Release 11.2.0.3.0 - 64 bit Production

    PL/SQL Release 11.2.0.3.0 - Production

    CORE Production 11.2.0.3.0

    AMT for Solaris: 11.2.0.3.0 - Production Version

    NLSRTL Version 11.2.0.3.0 - Production

    I am trying to load a small table (110 rows, 6 columns).  One of the columns, called NOTES, raises an error when I run the load, saying that the field size exceeds the maximum length.  As you can see here, the table column is 4000 bytes:

    CREATE TABLE NRIS.NRN_REPORT_NOTES
    (
        NOTES_CN      VARCHAR2(40 BYTE)   DEFAULT sys_guid()    NOT NULL,
        REPORT_GROUP  VARCHAR2(100 BYTE)                        NOT NULL,
        POSTCODE      VARCHAR2(50 BYTE)                         NOT NULL,
        ROUND         NUMBER(3)                                 NOT NULL,
        NOTES         VARCHAR2(4000 BYTE),
        LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE  DEFAULT systimestamp  NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
        INITIAL 80K
        NEXT 1M
        MINEXTENTS 1
        MAXEXTENTS UNLIMITED
        PCTINCREASE 0
        BUFFER_POOL DEFAULT
        FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;

    I did a little investigating, and it does not add up.

    When I run

    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES

    I get a return of

    643

    which tells me that the largest value in this column is only 643 bytes.  But EVERY insert fails.

    Here is the header of the loader control file and the first couple of inserts:

    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|'
    (
        NOTES_CN,
        REPORT_GROUP,
        POSTCODE,
        ROUND NULLIF (R = 'NULL'),
        NOTES,
        LAST_UPDATE TIMESTAMP WITH TIME ZONE 'MM/DD/YYYY HH24:MI:SS.FF9 TZR' NULLIF (LAST_UPDATE = 'NULL')
    )
    BEGINDATA

    | E2ACF256F01F46A7E0440003BA0F14C2; | | DEMOGRAPHIC DATA |; A01003; | 3 ; | demographic results show that 46% of visits are made by women.  Among racial and ethnic minorities, the most often encountered are native American (4%) and Hispanic / Latino (2%).  The breakdown by age shows that the Bitterroot has a relatively low of children under 16 (14%) proportion in the population of visit.  People over 60 represent about 22% of visits.   Most of the visitation comes from the region.  More than 85% of the visits come from people who live within 50 miles. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    | E2ACF256F02046A7E0440003BA0F14C2; | | DESCRIPTION OF THE VISIT; | | A01003; | 3 ; | most visits to the Bitterroot are relatively short.  More than half of the visits last less than 3 hours.  The median duration of visiting sites for the night is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of these visits are shorter than the duration of 3 hours.   Most of the visits come from people who are frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times a year.  Another 8% of visits from people who say they visit more than 100 times a year. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    | E2ACF256F02146A7E0440003BA0F14C2; | | ACTIVITIES |. A01003; | 3 ; | most often reported the main activity is hiking (42%), followed by alpine skiing (12%) and hunting (8%).  More than half of the report visits participating in the relaxation and the display landscape. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    Here's the start of the loader log, ending after the first rejected row.  (They ALL give the same error.)

    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013

    Copyright (c) 1982, 2007, Oracle.  All rights reserved.

    Control File:   NRIS.NRN_REPORT_NOTES.CTL
    Data File:      NRIS.NRN_REPORT_NOTES.CTL
    Bad File:     ./NRIS.NRN_REPORT_NOTES.BAD
    Discard File: ./NRIS.NRN_REPORT_NOTES.DSC
    (Allow all discards)

    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional

    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND

       Column Name                  Position   Len  Term Encl Datatype
    ------------------------------ ---------- ----- ---- ---- ---------------------
    NOTES_CN                            FIRST     *   ;  O(|) CHARACTER
    REPORT_GROUP                         NEXT     *   ;  O(|) CHARACTER
    POSTCODE                             NEXT     *   ;  O(|) CHARACTER
    ROUND                                NEXT     *   ;  O(|) CHARACTER
        NULL if R = 0X4e554c4c (character 'NULL')
    NOTES                                NEXT     *   ;  O(|) CHARACTER
    LAST_UPDATE                          NEXT     *   ;  O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c (character 'NULL')

    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length

    I don't see why this should fail.

    Hello,

    the problem is that sqlldr reads data fields as CHAR(255) by default... Very helpful, I know...

    You need to tell sqlldr that the data is longer than that.

    So change NOTES to NOTES CHAR(4000) in your control file and it should work.

    Cheers,

    Harry
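    For illustration, a sketch of how the field list in the posted control file might then read; only the NOTES line changes, everything else stays as posted:

```sql
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|'
(
    NOTES_CN,
    REPORT_GROUP,
    POSTCODE,
    ROUND NULLIF (R = 'NULL'),
    NOTES CHAR(4000),
    LAST_UPDATE TIMESTAMP WITH TIME ZONE 'MM/DD/YYYY HH24:MI:SS.FF9 TZR' NULLIF (LAST_UPDATE = 'NULL')
)
```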

  • SmartView: OLAP _error (1020011): Maximum number of lines [65000] exceeded

    My client is unable to retrieve rows in Smart View ad hoc analysis; they receive the following error message:

    OLAP _error (1020011): Maximum number of lines [65000] exceeded

    Excel version: 2007
    SmartView release: 11.1.1.3
    APS 11.1.1.3
    Essbase 11.1.1.3

    The essbase.properties file has the following parameter/value:
    service.olap.dataQuery.grid.maxRows = 1000000

    The essbase.cfg file has the following parameter/value:
    SSPROCROWLIMIT 500000

    In the Provider Services console: Servers -> right-click the server -> Properties -> "Maximum number of rows" text box: 65000

    Can you suggest what to increase to solve this problem?

    Thanks in advance.

    ~ KK

    The number in the Provider Services console must match the value of the service.olap.dataQuery.grid.maxRows parameter. You have probably edited the wrong essbase.properties file.

    At this point, I would change the setting in APS to the desired limit, or 0 for unlimited, and then restart APS.

  • Cascade of matches

    Oracle 10.2.0.4

    I have a table (USER_ATT) that tracks five attributes for each employee over time: (USERID, ASOFDATE, ATT1, ATT2, ATT3, ATT4, ATT5). All fields are NOT NULL; except for ASOFDATE, all fields are VARCHAR2 with a maximum length of 15.

    My requirement is a parameter table (RULE_ATT) that drives a decision based on the 5 attribute fields. Users want to set values for ATT[1-5] and then an instruction for all the users who meet those criteria. They also want catch-all criteria (for example, enter values for ATT[1-4] and leave ATT5 empty): the idea is that if the program cannot find a match on all 5 attributes, it searches the instructions for the first 4 attributes. If it does not find a match on 4 attributes, then it should look for a match on 3, and so on.

    Thus, for example, in USER_ATT I have

    USER01, JANUARY 1, 2005, A, B, C, D, E
    USER02, JANUARY 1, 2005, A, B, C, D, G
    USER03, JANUARY 1, 2005, A, B, C, Q, Q
    USER04, JANUARY 1, 2005, Q, B, C, D, F


    in the table of parameters (RULE_ATT), I have the following rules

    A, B, C, D, <space>, '4 percent salary increase'
    A, B, C, D, G, '5 percent salary increase'
    A, B, C, <space>, <space>, '3 percent salary increase'

    I need to write some SQL so that I get USER01-03 returned and USER04 not returned, because there is no matching rule:
    USER01, 4% pay increase
    USER02, 5% pay increase
    USER03, 3% pay increase

    At the moment I'm looking at a set of 5 SELECTs combined with UNION and a NOT EXISTS clause to manage the cascading hierarchy.

    I suspect that there is a smarter way to do it, possibly using analytics (MAX(DECODE()) perhaps?),
    but I'm not sure of the best approach.

    As a guide to volume, USER_ATT should have about 130,000 rows (with say 65,000 distinct USERID values). RULE_ATT should have about 15,000 rows. The tables have archive equivalents, so each year data is deleted from the main tables and shipped to the archive tables, which lets us keep the volume relatively static.

    Any suggestions / traps / code snippets gratefully received.

    Hello

    Paula Scorchio wrote:
    It is possible that a user will change their attributes halfway through the year. For example, the user may have a revision on January 1, 2005 and another revision as of July 1, 2005. The rules would be run as of a date parameter (so the HR department can say to execute this process for August 1, 2005). I need to identify the "current" row as of August 1, 2005 for each USERID. In general, I would do this by using a statement such as:

    SELECT A.*
    FROM USER_ATT A
    WHERE A.ASOFDATE = (SELECT MAX(A1.ASOFDATE)
                        FROM USER_ATT A1
                        WHERE A1.USERID = A.USERID
                        AND A1.ASOFDATE <= TO_DATE(:1, 'DD-MON-YYYY'))

    where :1 is the as-of date.

    The columns are indeed a parent-child relationship: ATT1 can be any business unit within the organization, ATT2 any department within the ATT1 business unit, and ATT3 any cost center attached to the ATT1:ATT2 BusinessUnit:Department combination.

    My SQL at the moment is

    CREATE VIEW USER_CURRENT_ATT AS
    SELECT A.USERID, B.ASOFDATE, A.ATT1, A.ATT2, A.ATT3, A.ATT4, A.ATT5
    FROM USER_ATT A,
         (SELECT TO_DATE('01-JAN-2000') + ROWNUM ASOFDATE FROM OBJECT) B
    WHERE A.ASOFDATE = (SELECT MAX(A1.ASOFDATE)
                        FROM USER_ATT A1
                        WHERE A1.USERID = A.USERID
                        AND A1.ASOFDATE <= B.ASOFDATE);

    with the query below doing the actual matching:

    SELECT A.USERID, B.RULE_INST
    FROM USER_CURRENT_ATT A, RULE_ATT B
    WHERE A.ASOFDATE = TO_DATE(:1, 'DD-MON-YYYY')
    AND A.ATT1 = B.ATT1
    AND A.ATT2 = B.ATT2
    AND A.ATT3 = B.ATT3
    AND A.ATT4 = B.ATT4
    AND A.ATT5 = B.ATT5
    UNION
    SELECT A.USERID, B.RULE_INST
    FROM USER_CURRENT_ATT A, RULE_ATT B
    WHERE A.ASOFDATE = TO_DATE(:1, 'DD-MON-YYYY')
    AND A.ATT1 = B.ATT1
    AND A.ATT2 = B.ATT2
    AND A.ATT3 = B.ATT3
    AND A.ATT4 = B.ATT4
    AND B.ATT5 = ' '
    AND NOT EXISTS (SELECT 1 FROM RULE_ATT C WHERE C.ATT1 = A.ATT1 AND C.ATT2 = A.ATT2 AND C.ATT3 = A.ATT3 AND C.ATT4 = A.ATT4 AND C.ATT5 = A.ATT5)
    UNION
    SELECT A.USERID, B.RULE_INST
    FROM USER_CURRENT_ATT A, RULE_ATT B
    WHERE A.ASOFDATE = TO_DATE(:1, 'DD-MON-YYYY')
    AND A.ATT1 = B.ATT1
    AND A.ATT2 = B.ATT2
    AND A.ATT3 = B.ATT3
    AND B.ATT4 = ' '
    AND B.ATT5 = ' '
    AND NOT EXISTS (SELECT 1 FROM RULE_ATT C WHERE C.ATT1 = A.ATT1 AND C.ATT2 = A.ATT2 AND C.ATT3 = A.ATT3 AND C.ATT4 = A.ATT4)
    UNION
    SELECT A.USERID, B.RULE_INST
    FROM USER_CURRENT_ATT A, RULE_ATT B
    WHERE A.ASOFDATE = TO_DATE(:1, 'DD-MON-YYYY')
    AND A.ATT1 = B.ATT1
    AND A.ATT2 = B.ATT2
    AND B.ATT3 = ' '
    AND B.ATT4 = ' '
    AND B.ATT5 = ' '
    AND NOT EXISTS (SELECT 1 FROM RULE_ATT C WHERE C.ATT1 = A.ATT1 AND C.ATT2 = A.ATT2 AND C.ATT3 = A.ATT3)
    UNION
    SELECT A.USERID, B.RULE_INST
    FROM USER_CURRENT_ATT A, RULE_ATT B
    WHERE A.ASOFDATE = TO_DATE(:1, 'DD-MON-YYYY')
    AND A.ATT1 = B.ATT1
    AND B.ATT2 = ' '
    AND B.ATT3 = ' '
    AND B.ATT4 = ' '
    AND B.ATT5 = ' '
    AND NOT EXISTS (SELECT 1 FROM RULE_ATT C WHERE C.ATT1 = A.ATT1 AND C.ATT2 = A.ATT2)

    Will the query above produce the desired results?

    Without answers to my questions, all I can do is guess.
    My best guess right now is:

    WITH     got_d_num     AS
    (
         SELECT     user_att.*
         ,     ROW_NUMBER () OVER ( PARTITION BY  userid
                             ORDER BY        asofdate     DESC
                           )     AS d_num
         FROM     user_att
         WHERE     asofdate     <= TO_DATE ( :1
                                , 'DD-MON-YYYY'
                                )
    )
    ,     got_r_num     AS
    (
         SELECT     u.*
         ,     r.rule_inst
         ,     ROW_NUMBER () OVER ( PARTITION BY  u.userid
                             ORDER BY        CASE
                                       WHEN  u.att2 = r.att2
                                        AND  u.att3 = r.att3
                                        AND  u.att4 = r.att4
                                        AND  u.att5 = r.att5
                                            THEN 1
                                       WHEN  u.att2 = r.att2
                                        AND  u.att3 = r.att3
                                        AND  u.att4 = r.att4
                                            THEN 2
                                       WHEN  u.att2 = r.att2
                                        AND  u.att3 = r.att3
                                            THEN 3
                                       WHEN  u.att2 = r.att2
                                            THEN 4
                                     END
                            ,             CASE
                                       WHEN  r.att2 != ' '
                                        AND  r.att3 != ' '
                                        AND  r.att4 != ' '
                                        AND  r.att5 != ' '
                                            THEN 4
                                       WHEN  r.att2 != ' '
                                        AND  r.att3 != ' '
                                        AND  r.att4 != ' '
                                            THEN 3
                                       WHEN  r.att2 != ' '
                                        AND  r.att3 != ' '
                                            THEN 2
                                       WHEN  r.att2 != ' '
                                            THEN 1
                                     END
                           )     AS r_num
         FROM     got_d_num  u
         JOIN     rule_att   r  ON  u.att1 = r.att1
         WHERE     u.d_num     = 1
    )
    SELECT     userid
    ,     rule_inst
    FROM     got_r_num
    WHERE     r_num     = 1
    ;
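    As a cross-check of the cascading fallback described in the question, here is a small Python sketch of the same matching rule (try an exact match on all 5 attributes, then the first 4 plus a blank, and so on down to 1), using the sample users and rules from the post:

```python
# Rules: (att1..att5) -> instruction; ' ' marks a blank (catch-all) attribute.
rules = {
    ("A", "B", "C", "D", " "): "4 percent salary increase",
    ("A", "B", "C", "D", "G"): "5 percent salary increase",
    ("A", "B", "C", " ", " "): "3 percent salary increase",
}

users = {
    "USER01": ("A", "B", "C", "D", "E"),
    "USER02": ("A", "B", "C", "D", "G"),
    "USER03": ("A", "B", "C", "Q", "Q"),
    "USER04": ("Q", "B", "C", "D", "F"),
}

def match_rule(atts):
    """Cascade: try all 5 attributes, then the first 4 with ATT5 blank,
    then the first 3, ... down to ATT1 alone."""
    for n in range(5, 0, -1):
        key = atts[:n] + (" ",) * (5 - n)
        if key in rules:
            return rules[key]
    return None

result = {u: match_rule(a) for u, a in users.items() if match_rule(a) is not None}
print(result)
```

USER04 matches no rule at any level, so it drops out, matching the wanted output (USER01 gets 4%, USER02 gets 5%, USER03 gets 3%).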
    
  • DV export using "Match Sequence Settings"

    I'm capturing VHS and Hi8 video using a Canopus ADVC-300 and Premiere Pro CS6.  The output is DV, approx. 14 GB/hour.  Most of the captures are unattended (I have a 4 TB drive, backed up of course, so space isn't a big concern at the moment).  I want to get everything imported before any serious editing, so I capture each tape twice: the first time, the video itself; then I rewind, turn on the timestamp overlay indicator, and capture again with the camera running at 2x speed.  This way I can later find a particular segment quickly without winding the tape back and forth in the camera looking for the scene, and the timestamp is not superimposed on the main capture.

    I have two questions:

    (1) given that the catches are unattended, I need to cut the blue screen at the end of each captured files to save disk space.  I want to confirm that if I cut a sequence and export with game "Match sequence settings", this first reencode the DV except the parts that I've actually changed (I sometimes use the dip to black around damaged segment of tape).  Based on the speed of export, I don't think it's recompression and quality looks the same but I want to check a second time.

    Does Premiere Pro CS6 always mark the timeline with a colored bar when rendering is required?  The "dip to black" parts get a yellow bar, but the rest has no bar.  That would be a clue.

    Also, the export dialog box has "Use Maximum Render Quality" unchecked by default.  Should I check it - does it matter?

    (2) What is the best way to permanently associate a date/time with the captured video?  I don't see any way to bake it into the DV for each segment.  I use comments in sequence markers, set at the beginning of each clip, but the sequence markers do not move if I ripple-delete, say, 10 seconds of noise in the middle of a track.  I can't figure out how to attach the markers to the video track instead of the sequence.  I found a few videos showing how to do this, but they do not seem to apply to CS6.  I found that if I export the sequence, the markers can be carried over into the DV.  But I need them to survive ripple edits.  Is this possible?

    (3) My plan is to add a 3- or 5-second title overlay at the beginning of each segment saying what it is and where it was captured, but only for the final export.  Is there something I could use to create the titles from a template?  This isn't a big deal, but it seems like a useful feature.

    Sorry if the answer to these questions is obvious - thanks for your patience.

    1. On a Windows machine, PP will do a straight file copy for frames that were not altered when you export DV from DV, with no transcoding.  You do not need (and should not use) Match Sequence Settings for that to happen.

    2. Clip markers are set in the Source Monitor panel, not in the sequence.

    3. Copy/paste is about it.

  • Default maximum packet size in 3.7.0.1

    Hello

    It looks like there is another reason to do a full cluster restart when applying the first patch for 3.7 (3.7.0.1). On a machine with MTU = 1500, it seems that the default maximum packet size has changed from 1452 to 65535, and this change prevents the cluster from forming during a rolling upgrade.

    May 16, 2011 14:08:01,376 ERROR Logger@813856187 3.7.0.1: 2011-05-16 14:08:01.376/1.539 Oracle Coherence GE 3.7.0.1 <Error> (thread=Cluster, member=n/a): This member could not join the cluster because of an incompatibility between this member's configuration and the configuration in use by the rest of the cluster. The maximum packet size (65535) for this member does not match the maximum packet size in use by the running cluster. Rejected by Member(Id=1, Timestamp=2011-05-16 14:05:49.303, Address=XXX2:11500, MachineId=38601, Location=machine:YYY2,process:14945,member:cacheserver:1, Role=cacheserver).


    May 16, 2011 14:24:27,675 ERROR Logger@98371294 3.7.0.0: 2011-05-16 14:24:27.674/1.539 Oracle Coherence GE 3.7.0.0 <Error> (thread=Cluster, member=n/a): This member could not join the cluster because of an incompatibility between this member's configuration and the configuration in use by the rest of the cluster. The maximum packet size (1452) for this member does not match the maximum packet size in use by the running cluster. Rejected by Member(Id=1, Timestamp=2011-05-16 14:11:11.927, Address=XXX1:11500, MachineId=38600, Location=machine:YYY1,process:18509,member:cacheserver:1, Role=cacheserver).


    Can you please confirm?


    Cheers,
    Alexey

    Hi Alexey,

    Yes, the default maximum packet size for 3.7 was intended to be 65535, but a bug prevented this from working. 3.7.0.1 fixes that problem and therefore "introduces" the new, higher default. A rolling restart is still possible; however, you need to manually configure the maximum to be 1468. The new 65535 maximum allows the default Coherence configuration to support large clusters (greater than ~450 JVMs), where the old default would prevent new members from joining and would require a reconfiguration and a full restart.
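    To illustrate the workaround above, here is a minimal sketch of an operational override that pins the maximum packet size back to 1468 on each node before its rolling restart. It assumes the standard tangosol-coherence-override.xml mechanism and the packet-publisher/packet-size elements from the Coherence operational configuration; check the exact element names against the docs for your release.

```xml
<!-- tangosol-coherence-override.xml (sketch, not a verified config):
     pin the maximum packet size to 1468 so a 3.7.0.1 member can join
     a running cluster that still uses the old 1452/1468 packet size -->
<coherence>
  <cluster-config>
    <packet-publisher>
      <packet-size>
        <!-- largest packet the publisher may send -->
        <maximum-length>1468</maximum-length>
        <!-- preferred packet size; kept equal to the maximum here -->
        <preferred-length>1468</preferred-length>
      </packet-size>
    </packet-publisher>
  </cluster-config>
</coherence>
```

    Once every member has been upgraded this way, the override can be removed (and the cluster fully restarted) to pick up the new 65535 default.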

    Thank you

    Mark
    The Oracle Coherence team
