What is an effective way to handle this

Scenario 1

a. If the target table has any record later than a specific date for a given ID (SELECT INTO)

b. If count > 0 then

Delete all rows using the same WHERE clause

further processing

end if

Scenario 2

a. Delete the rows where the condition in the WHERE clause matches (the same WHERE clause used in the SELECT above)

b. If SQL%ROWCOUNT > 0 then

perform further processing

end if

Scenario 1 always runs a SELECT statement, but the DELETE is executed only conditionally.

Scenario 2 always runs the DELETE statement.
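
For illustration, here is a minimal PL/SQL sketch of the two scenarios. The table, columns and bind variables (my_table, id_col, date_col, :p_id, :p_date) are hypothetical placeholders, not names from the actual system, and the "further processing" is left as a stub:

    -- Scenario 1: count matching rows first, delete only if any exist
    declare
      l_count number;
    begin
      select count(*)
        into l_count
        from my_table                    -- hypothetical table name
       where id_col = :p_id              -- hypothetical ID column / bind
         and date_col > :p_date;         -- hypothetical date column / bind

      if l_count > 0 then
        delete from my_table
         where id_col = :p_id
           and date_col > :p_date;
        null;                            -- further processing goes here
      end if;
    end;
    /

    -- Scenario 2: delete unconditionally, then check SQL%ROWCOUNT
    begin
      delete from my_table
       where id_col = :p_id
         and date_col > :p_date;

      if sql%rowcount > 0 then
        null;                            -- further processing goes here
      end if;
    end;
    /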

I wonder which of the two scenarios above is more efficient. My table has about 200 million rows.

Thank you

Scenario 1

a requires I/O.

b requires the same I/O, as Oracle will evaluate the WHERE clause again.

So, if count > 0, you do everything twice.

If count = 0, it doesn't matter that the second statement is skipped, because the I/O has already been spent anyway.

Scenario 2 is the most effective.

-----------

Sybrand Bakker

Senior Oracle DBA

Tags: Database

Similar Questions

  • I have an After Effects project and I want to redo it in Edge Animate. What is the best way to go about this?

    Hello

    Sorry, but you will probably need to rebuild everything in Animate. Although their timelines are similar in some ways, there is no interoperability between After Effects and Edge Animate.

    Kind regards

    Joe

  • What is the best way to read this binary file?

    I wrote a program that acquires data from a DAQmx card and writes it to a binary file (file and photo attached). The data I'm acquiring comes in every 2.5 ms, on 8 channels, for at least 5 seconds. What is the best way to read this binary file, knowing that:

    - I will also need to display it on a graph (after acquisition)

    - I also need to see these values and use them later in Matlab.

    I tried 'Array To Spreadsheet String', but LabVIEW runs out of memory (even if I don't use all 8 channels, but only 1).

    LabView 8.6

    I think that data access is just as fast. What happens with a TDMS file is that an index is generated for the TDMS file that says 'at byte position xxxx, data yyyy is written', which is the only overhead for TDMS files as far as I know.

    We have never had issues with data storage. With data acquisition, analysis and storage at > 500 kS/s, the problems you get are, most of the time, the result of bad programming style.

    Tone

  • I'm moving my files from Workspaces to cloud.adobe.  What is the best way to achieve this?

    Hi Traci,

    It's great that you're getting ready for the transition to cloud.acrobat.com! Please see the linked article for more detailed information on downloading your files.

    When you sign in to Workspaces, however, you will see a big red box at the bottom left. Click this box, and your files will be packaged up for you. You will receive an email with instructions when the packaged files are ready for you.

    Best,

    Sara

  • What is an effective way to adjust the position of all the clips on the timeline

    In one of my projects, I have cuts laid out on a timeline, and it seems that the position of a few clips was accidentally changed. I need all the clips to have the same position. Is there a way to set the position of all the clips globally - what is the best way to tackle this? Thanks in advance. -renized

    Press the A key for the Track Selection tool. Click the first clip that you want to adjust and every item after it will also be selected. You can move them as a group.

    If you need clips selected on more than one track, hold Ctrl when you click.

  • Is there a more effective way to interrogate this cache?

    I have an interesting dilemma that I don't know how to fix. I have a cache of CustomerGroup objects (see below). I'm trying to find the CustomerGroups that contain a given key in the customerValues map and that also have a CustomerValue.value between a low and a high value. In other words, give me all the CustomerGroups that have a customer code of "101" and that contain a CustomerValue between 50 and 250. I have this working fine using a query with a custom filter - but the problem I have is that the cache contains approximately 500,000 items. The filter must be applied to all objects, and because most of the customers contain a CustomerValue, we end up deserializing the objects in the cache to perform the value comparison, which fills the eden space so quickly (in a multi-threaded env) that objects spill over into the tenured space. This causes performance issues because it triggers large full GCs. We use Coherence 3.5.2 and POF is enabled everywhere. The current query works - I'm looking for the most effective way to do this with respect to time and heap usage. I am willing to trade increased memory use for better performance - meaning adding indexes / similar improvements is fine. The big problem is filling the eden space so quickly. Any ideas?
    class CustomerGroup
    {
        Set<Customer> customers ;
    }
     
    class Customer
    {
        Map<Integer, CustomerValue> customerValues;
    }
    
    class CustomerValue
    {
      private int value ;
      private boolean isNew ;
    }

    Simi74 wrote:
    Robert - appreciate the help. The issue I have with this extractor is that the value is tied to a specific customer key. For example:

    The CustomerGroup contains 1 Customer.

    This customer has 2 CustomerValues in its map.

    These entries resemble the following:

    Key: 100
    Value: CustomerValue.value = 10
    
    Key: 200
    Value: CustomerValue.value = 50
    

    If I'm following your logic, the extractor that you recommended would apply to the values (that is to say, 10 and 50). So, if I searched for CustomerGroups that have values between 5 and 15, it would return the CustomerGroup since one of the values is 10. However, I also need to tie this to a specific customer code (in this case either 100 or 200 - the keys of the matching values). Thus, the query is really more like 'give me all the CustomerGroup objects with customer code 100 where the value is between 5 and 15'. I should only get this object if the customer code is 100. I can't do a ContainsFilter on the customer code combined with a BetweenFilter in an AllFilter, since that logic would produce a 'false' positive for 200. The logic in this case would be 'the CustomerGroup contains a customer 200', which is true, and 'does the CustomerGroup contain a value between 5 and 15', which is also true, because the value list has 10 and 50. But it must return this object ONLY if the customer key passed in is 100. The example below is a false positive.

    /* following code returns the object, but should not! */
    Filter[] filterArray = { new ContainsFilter( new CustomerKeyExtractor(), new Integer( 200 ) ),
    new BetweenFilter( new CustomerValueExtractor(), 5, 15) };
    
    Filter filter = new AllFilter( filterArray ) ; 
    

    So my question is how to apply an index on the CustomerKey/CustomerValue pair as a whole without deserializing all the objects for each search? Or is it a problem with how the object is structured? If that's the problem, recommendations on changes to fix this filtering are welcome as well.

    Ah, ok, got it now...

    What is the customer code?

    Is it just a key generated with an arbitrary value, or is it an enum-like thing (a key to a metadata property value with only a finite number of different property names, etc.) with only a small number of different codes across the whole cache?

    If it's an enum-like thing, then instead of extracting only the 'value's as integers, you can extract 'code:value' pairs represented as a long with the code in the high 32 bits, which would still be sortable and would allow querying: instead of valueMin and valueMax, you would use code:valueMin and code:valueMax as the range.

    If it's an arbitrarily generated value, then it is a bit more problematic (POF extraction does not support drilling down into individual map entries, so it can't be indexed without deserializing the entire Customer objects), but I wouldn't address that if it is not necessary, so please indicate if the previous approach fits your needs :). In case it does not, you would probably want the extracted value to be structured the other way round (swapping which of the code and the value occupies the high 32 bits). You would also have to write a custom index-aware filter that is able to use this index along the lines of the BetweenFilter but with additional checks for the code... In addition, you must check whether the memory consumption of the filter might be too high...

    Best regards

    Robert

  • I'll do a clean install upgrade on a blank hard drive but want to keep my Firefox settings - what is the best way to do this?

    I'll do an upgrade from Windows XP to Windows 7. I will be installing Windows 7 on a new, empty hard drive. I want to keep my Firefox bookmarks and add-ons. What is the best way to do it? Thank you for your help.

    Hello

    The best thing for you to do is to make a backup of your Firefox profile. It is a folder that stores bookmarks and Add-ons that you can then add to the reinstalled Firefox on your new operating system.

    You can learn more about the Firefox profile folder, and how to back it up and restore it, here.

    I hope this helps, but if not, please come back here and we can look at another option for you.

  • What is the best way to detect whether text fits into the ContainerControllers without scrolling?

    Hello.

    Question

    What is the best way to detect that the text typed by the user (or added programmatically) exceeds the available container space, and to find where the truncated part begins? Is there an easier way than the one described below to detect the overflow, or to prevent the controllers from receiving more characters than can be displayed in the given editing area?

    My partial attempt (simplified)

    For example, let's say I have an editable TextFlow attached to two instances of ContainerController.


    var flow:TextFlow = createSomeFlowFromGivenString(sampleText),
        firstController = new ContainerController(firstSprite, 100, 30),
        lastController = new ContainerController(secondSprite, 600, 30);
    
    
    flow.interactionManager = new EditManager(new UndoManager());
    flow.flowComposer.addController(firstController);
    flow.flowComposer.addController(lastController);
    
    flow.flowComposer.updateAllControllers();
    

    With the vertical scroll policy enabled, I can compare the composition height of the last controller with the height of the content:

    var bounds:Rectangle = lastController.getContentBounds(),
        overflow:Boolean =  lastController.compositionHeight < bounds.height;
    
    trace('Content does not fit into given area?', overflow)
    
    

    But when I turn the vertical scroll policy off (lastController.verticalScrollPolicy = ScrollPolicy.OFF), unfortunately this does not work anymore... (In my case scrolling must be disabled, since the text boxes can have only one line with a narrow width.)

    Use cases

    I want to create a fill-out form. A field can have one or more lines. A field could start in the middle of a page, continue on the following line where it spans the whole page width, and end on a third line that is a quarter of the page width long. Text typed by the user may not exceed the given region, since it could cover static text located just after or below the field.

    Something like the ASCII image below:

    --------------------------------------------
    |                <PAGE>                    |
    |                                          |
    |                                          |
    |                                          |
    |               [Field starts here........ |  
    | ........................................ |
    | ........................................ |
    | Ends here..]                             |
    |                                          |
    |                                          |
    | [Another field] xxxx  xxxx xxxxxxxx x xx |
    | xxxxxxxxxxxxxxxxxxx                      |
    |                                          |
    |                              [One more.. |
    | .....]                                   |
    |                                          |
    |                                          |
    |                                          |
    |                                          |
    |                                          |
    |                                          |
    |                                          |
    |                                          |
    --------------------------------------------
    

    Info:

     [......]  <-- form fields starts with '[' character, and ends with ']'
     xxx       <-- sample, static text
     | and _   <-- page borders
    

    If you want to detect the overflow in the final container, there is another thread where this was discussed before.

    http://forums.Adobe.com/thread/795264

    You can detect it by checking whether lastContainerController.absoluteStart + lastContainerController.textLength is less than the total textLength of the TextFlow.

  • What is the best way to perform this batch export operation

    Hello

    I could use some pointers on the following task before I start wasting a lot of time going in the wrong direction:

    I have about 1000 records, each of them containing a .psd file from which I need to export a single group to .png in 4 different resolutions.

    The 4 different resolutions must be named correctly from the name of the group in Photoshop; for example the group 'teacup' will result in exports at 4 resolutions named 'teacup1.png', 'teacup2.png', 'teacup3.png', 'teacup4.png'.

    Any indication of how to proceed in the most effective way is much appreciated.

    Cheers, Moritz

    You might look into Dr. Russell Brown's Image Processor Pro.

    You can also try combining your own Action with the Batch command or the Image Processor scripts.

  • What is the best way to create this image?

    Hi, I use Illustrator CS5 and I'm sure there is a pretty easy way to create this image in just a few copy/paste steps... However, I can't figure it out.

    Any suggestions?

    Thank you!

    wheel.jpg

    If you use a brush, you can make a reusable pattern with adjustable teeth. For example, you can apply it to any shape's path and adjust the number of teeth by changing the stroke weight.

    JET

  • What is the best way to get this layout of the chapter?

    I have a document which is now a large flow of pages.

    However, there are "chapters" in this "book". A "chapter" actually starts on a right-hand page. But I want the left-hand page to participate in the design of the chapter-opening spread. This means that the left-hand page before the real "chapter" start page is reserved as part of the layout, and any "chapter" before it that would end on a left-hand page needs an empty right-hand page inserted, after which the left-hand page of the chapter-opening spread follows.

    Now, I can make the left-hand page before the "chapter" start begin a section and force an even page number so that it is positioned on the left side. But when I add pages to the previous section, this means I have to change the starting page numbers of all the sections that follow by hand.

    Isn't there a better way to do this? Some way where I don't have to worry about the page numbering myself while still getting the page before the "chapter" included in that "chapter"'s section?

    (I write "chapter" because these aren't chpaters Id in an Id book, but a few sections in a single document Id).

    If the previous "chapter ends with a page of pairs (left hand), you add two pages, if it ends on the right add you a..

    Another way to do this automatically is to set a Keep Option on the paragraph style that you use for the first paragraph of the chapter-opening page, so that it starts the section on the next odd-numbered page.

  • What is the best way to achieve this?

    What is the best way to do the following?

    Form1 lists items that have a cost value

    Form2 is a form of invoice

    What is the best way to export items with a cost value from form1 to form2?

    A user would make item selections in form1 and save the form, and then send it or export the data.

    Then form2 would retrieve the item selections from form1 and display the cost values, which I can sum up at the end of form2.

    Thanks in advance for any advice!

    Hello

    If both forms have the same objects for the items and the same names, then exporting form1 to XML and importing the XML into form2 should work.

    Data in the XML file that does not match the names of the objects in form2 will be ignored.

    In both forms, the objects that you want to export/import must have their binding set to Normal (the name of the object) in the Object palette > Binding tab.

    Niall

  • What is the most effective way to run this sql

    I have a SQL query like this:
    select 
    ((select q_avg24 from view1 where years=2009 and qtr=1) - (select q_avg24 from view1 where years=2010 and qtr=1)) as sub_s1_avg24,
    ((select q_avg24 from view2 where years=2009 and qtr=1) - (select q_avg24 from view2 where years=2010 and qtr=1)) as sub_s2_avg24,
    ((select q_avg24 from view3 where years=2009 and qtr=1) - (select q_avg24 from view3 where years=2010 and qtr=1)) as sub_s3_avg24,
    ((select q_avg24 from view4 where years=2009 and qtr=1) - (select q_avg24 from view4 where years=2010 and qtr=1)) as sub_s4_avg24
    from dual
    As you can see, the SQL calculates a field value from each of the views (view1, view2, view3, view4).
    It returns the difference between the 2 years.

    In this SQL, you can see it will query the views 2 x 4 = 8 times to get the result.
    Each SQL query (select statement) returns a result within 5 seconds, so in total = 8 x 5 = 40 secs...

    40 secs is not such a big problem... but in fact, I have at least 5 sets of queries (select statements) with other parameters as input for this calculation...
    That results in a lot of time spent in SQL: (8 x 5 secs) x 5 = 200 secs...

    I would like to know whether there is a better solution when we face such problems.
    Thank you!!!

    You can use something like this... not tested...
    But if you provide additional information on the data, I think we can come up with a better solution.

    select sum(decode(id,1,decode(years,2009,av,0),0))-sum(decode(id,1,decode(years,2010,av,0),0)) av1,
           sum(decode(id,2,decode(years,2009,av,0),0))-sum(decode(id,2,decode(years,2010,av,0),0)) av2,
           sum(decode(id,3,decode(years,2009,av,0),0))-sum(decode(id,3,decode(years,2010,av,0),0)) av3,
           sum(decode(id,4,decode(years,2009,av,0),0))-sum(decode(id,4,decode(years,2010,av,0),0)) av4
    from(
      select 1 id,q_avg24 av,years
      from view1
      where years in (2009,2010)
      and qtr = 1
      union all
      select 2 id,q_avg24,years
      from view2
      where years in (2009,2010)
      and qtr = 1
      union all
      select 3 id,q_avg24,years
      from view3
      where years in (2009,2010)
      and qtr = 1
      union all
      select 4 id,q_avg24,years
      from view4
      where years in (2009,2010)
      and qtr = 1)
    

    Published by: JAC on March 30, 2012 13:31
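
    If DECODE feels hard to read, the same single-pass consolidation can also be written with standard CASE expressions. This is only a sketch assuming the same view names and columns as above, and like the answer above it is not tested:

    select sum(case when id = 1 and years = 2009 then av else 0 end)
         - sum(case when id = 1 and years = 2010 then av else 0 end) as av1,
           sum(case when id = 2 and years = 2009 then av else 0 end)
         - sum(case when id = 2 and years = 2010 then av else 0 end) as av2,
           sum(case when id = 3 and years = 2009 then av else 0 end)
         - sum(case when id = 3 and years = 2010 then av else 0 end) as av3,
           sum(case when id = 4 and years = 2009 then av else 0 end)
         - sum(case when id = 4 and years = 2010 then av else 0 end) as av4
      from (
        -- one pass over each view, tagged with an id so the outer query can tell them apart
        select 1 id, q_avg24 av, years from view1 where years in (2009, 2010) and qtr = 1
        union all
        select 2 id, q_avg24 av, years from view2 where years in (2009, 2010) and qtr = 1
        union all
        select 3 id, q_avg24 av, years from view3 where years in (2009, 2010) and qtr = 1
        union all
        select 4 id, q_avg24 av, years from view4 where years in (2009, 2010) and qtr = 1
      )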

  • What is an effective way to logarithmically bin data with a constant number of points per decade?

    Hi all

    I would like to clean up a PSD on a logarithmic frequency axis by binning and averaging so that I have a constant number of points per decade (say 10, just for the sake of argument). Generally, the simplest and cleanest way I can think of to get this is to search the array for all points between frequencies A and B, average those points, and assign the new averaged bin a frequency of (A + B) / 2. However, I cannot figure out how to access the frequency information I need to achieve this. To be more clear, I can imagine that if I had two arrays, one holding the frequencies calculated from my incoming data stream and the other holding the amplitude at each corresponding frequency, I could look for the indices in the frequency array with values between A and B, then average the values in the amplitude array that lie between those indices and put them in a new array with a new corresponding frequency array. The process is a little more general than just averaging every ten points, say, since the number of points per decade keeps growing. My main obstacle at the moment, however, is that the voltage amplitudes are an array of values that I receive from the PSD operation, while the frequency part of the waveform seems to be a single-valued continuous DBL. I hope I've explained this well enough for someone to shed some light on my problem. Also, if anyone has a suggestion for a better way to approach the problem please let me know - there must be a pretty simple answer out there, but it's eluding me right now. Thanks in advance for the help.

    -Alex

    Hello

    If I understand you correctly, you have:

    a table with the frequencies

    a table with the corresponding values of amplitude

    Then you want to merge parts of the data by averaging over specific frequency ranges. I think there is no single-VI solution; you will need to write this on your own:

    I would start by getting the min/max of the frequencies and then building a scale that fits your needs (like logarithmic) with the number of bins you want. This should again be an array.

    The next step is to go through the frequency array and find (the first and) the last value within the wanted range (stop the loop, return the index). You should end up with an array of indices. [I guess that's where you can save the most computation time by being smart.]

    Finally, use these indices to go through the amplitude values and compute your averages. This should return an array with the length of your array of bin locations.

    Plot it in fancy colors and enjoy.

    Is that what you intend to do?

  • What is the best way to change this program so that it reads 1 sample from each channel?

    The initial program was written with NI Traditional DAQ. I changed it over to DAQmx as best I could. The program already applies the voltage to generate the code (DAQmx Write.vi). But I have problems with acquiring the voltages; it gives me strange readings (DAQmx Read.vi). I don't know if I have to make a (DAQmx Start Task.vi) for each channel in the program or if I can make it work with a single one. Note that I have not changed very much, because this program is already running in another lab and they sent us the program, so we did not have many problems; but rather than the BNC-2090 they had, we got the BNC-2090A, which uses DAQmx instead of Traditional DAQ. Can anyone help?

    A BNC-2090 is just a connector block. It has no effect on whether you should use Traditional DAQ or DAQmx. That is determined by the DAQ card that the terminal block is connected to.

    You can refer to this document, Differences between the BNC-2090 and BNC-2090A connector blocks, but it basically just says to change the label of the terminal board to reflect the new DAQ cards.

    What problems are you having with the new VI you just posted? Do you get an error message? I don't know what "strange readings" means.

    You really should look at some DAQmx examples in the Example Finder. Part of the problem you are experiencing is that your data acquisition blocks are all kinds of disconnected. Generally, you should connect the purple task wire from your Create Task function, through the Start, Read or Write, and on to the Clear Task. Many of your data acquisition functions are sitting there on little islands right now. You should also connect your error wires.

    With DAQmx, you should be combining all your analog channels into a single task. It should resemble Dev0/AI0...AI7. Then use a 1 Sample, N Channels DAQmx Read to get an array of the readings, which you can then break apart with Index Array.

    Other things you should do are to replace the stacked sequence structures with flat sequence structures and turn on auto-grow for some of your structures such as loops. In the end, you might find you can eliminate some of the sequence structures.
