Based on the Performance

Hello

Does a function perform better than a nested DECODE/CASE when handling huge volumes of data?

example:
select decode('6', '1', '0', decode('6', '2', '3', decode('6', '4', '5', '6'))) from dual;
select get_code('6') from dual;

create or replace function get_code (m_code varchar2)
return varchar2
is
begin
   if m_code = '1' then return '0';
   elsif m_code = '2' then return '3';
   elsif m_code = '4' then return '5';
   else return '6';
   end if;
end;
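
For comparison, the same mapping written as a single flat CASE expression (equivalent to the nested DECODE above):

select case '6'
         when '1' then '0'
         when '2' then '3'
         when '4' then '5'
         else '6'
       end as code
  from dual;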

Calling PL/SQL from inside SQL is never quicker. Check this:

SQL> create or replace function my_fn(pInput1 integer, pInput2 integer) return integer
  2  as
  3  begin
  4     if pInput1>pInput2
  5     then
  6             return 1;
  7     else
  8             return 0;
  9     end if;
 10  end;
 11  /

Function created.

SQL> create table t
  2  as
  3  select dbms_random.value(1,9999) no1, dbms_random.value(1,9999) no2
  4    from dual
  5  connect by level <= 100000
  6  /

Table created.

SQL> exec dbms_stats.gather_table_stats(user,'T')

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly
SQL> set timing on
SQL>
SQL> select no1,
  2         no2,
  3         case when no1>no2 then 1 else 0 end val
  4    from t
  5  /

100000 rows selected.

*Elapsed: 00:00:46.03*

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=159 Card=100000 By
          tes=4300000)

   1    0   TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=159 Card=100000 B
          ytes=4300000)

Statistics
----------------------------------------------------------
          *1  recursive calls*
          0  db block gets
       7355  consistent gets
          0  physical reads
          0  redo size
    5830943  bytes sent via SQL*Net to client
      73826  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

SQL> select no1,
  2         no2,
  3         my_fn(no1,no2)
  4    from t
  5  /

100000 rows selected.

*Elapsed: 00:00:49.04*

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=159 Card=100000 By
          tes=4300000)

   1    0   TABLE ACCESS (FULL) OF 'T' (TABLE) (Cost=159 Card=100000 B
          ytes=4300000)

Statistics
----------------------------------------------------------
         *18  recursive calls*
          0  db block gets
       7374  consistent gets
          0  physical reads
          0  redo size
    5830954  bytes sent via SQL*Net to client
      73826  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
     100000  rows processed
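
A hedged aside: if the function must stay, wrapping the call in a scalar subquery lets Oracle cache results for repeated arguments. It would not help with the near-unique random values above, but for a low-cardinality lookup like get_code it can cut the SQL-to-PL/SQL context switches substantially. A sketch:

select no1,
       no2,
       (select my_fn(no1, no2) from dual) val
  from t;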

Thank you
Knani.

Tags: Database

Similar Questions

  • How to perform a count on a hierarchical Oracle table based on the parent link

    Hello


    I have the following hierarchical table definition on Oracle 11g R2:


    Table Name: TECH_VALUES:
      ID,
      GROUP_ID,
      LINK_ID,
      PARENT_GROUP_ID,
      TECH_TYPE

    Given the hierarchical table definition above, some sample data might look like this:


    ID      GROUP_ID      LINK_ID      PARENT_GROUP_ID      TECH_TYPE 
    ------- ------------- ------------ -------------------- --------------
    1       100           LETTER_A     0
    2       200           LETTER_B     0
    3       300           LETTER_C     0
    4       400           LETTER_A1    100                  A 
    5       500           LETTER_A2    100                  A 
    6       600           LETTER_A3    100                  A 
    7       700           LETTER_AA1   400                  B 
    8       800           LETTER_AAA1  700                  C 
    9       900           LETTER_B2    200                  B 
    10      1000          LETTER_BB5   900                  B 
    12      1200          LETTER_CC1   300                  C
    13      1300          LETTER_CC2   300                  C
    14      1400          LETTER_CC3   300                  A
    15      1500          LETTER_CCC5  1400                 A
    16      1600          LETTER_CCC6  1500                 C
    17      1700          LETTER_BBB8  900                  B
    18      1800          LETTER_B     0
    19      1900          LETTER_B2    1800                 B 
    20      2000          LETTER_BB5   1900                 B 
    21      2100          LETTER_BBB8  1900                 B


    Keeping in mind that there are only three tech types, i.e. A, B and C, which can span different LINK_IDs, how can I do a count of these three different TECH_TYPEs based solely on the ID of the parent link, where the parent group id is 0 and there are children below it?

    NOTE: It is also possible to have parents with duplicate link IDs, such as LETTER_B, with all the same child values but a different group ID.

    I'm basically after a table/report query that looks like this:

    Link ID        Tech Type A         Tech Type B          Tech Type C
    -------------- ------------------- -------------------- -------------------
    LETTER_A       3                   1                    1
    LETTER_B       0                   3                    0
    LETTER_C       2                   0                    3
    LETTER_B       0                   3                    0

    Since my table is hierarchical and can contain more than 30,000 rows, I must also ensure that the query producing the report shown above is fast.

    Obviously, in order to produce the report above, I need to gather all the necessary count breakdowns by TECH_TYPE for all parent link IDs where PARENT_GROUP_ID = 0, and store them in a table following this report layout.

    Hope someone can help with a combined query that performs the counts and stores the information in a new table called LINK_COUNTS, based on this report. The columns of this table will be:

    ID,
    LINK_ID,
    TECH_TYPE_A,
    TECH_TYPE_B,
    TECH_TYPE_C

    At the end of this entire requirement, I want to be able to update the LINK_COUNTS table, based on the results returned for the sample data above, in a SQL UPDATE statement, since the top-level parent link IDs already exist in my LINK_COUNTS table; I just need to supply the count breakdown values for each parent link node, i.e.

    LETTER_A
    LETTER_B
    LETTER_C
    LETTER_B

    using something like:

    UPDATE link_counts
    SET (TECH_TYPE_A, TECH_TYPE_B, TECH_TYPE_C) =
       (with xyz  where link_id = LINK_COUNTS.link_id .... etc

    Which must match exactly the above table/report

    Thank you.

    Tony.

    Hi, John,

    Thanks for posting the sample data.

    John Spencer wrote:

    ...  If you need to hide the ID column, then you could simply wrap another outer query around it. ...

    Or simply not display the id column:

    SELECT    link_id          -- , id
    ,         COUNT (CASE WHEN tech_type = 'A' THEN 1 END)  AS tech_a
    ,         COUNT (CASE WHEN tech_type = 'B' THEN 1 END)  AS tech_b
    ,         COUNT (CASE WHEN tech_type = 'C' THEN 1 END)  AS tech_c
    FROM     (
                 SELECT  CONNECT_BY_ROOT link_id  AS link_id
                 ,       CONNECT_BY_ROOT id       AS id
                 ,       tech_type
                 FROM    sample_data
                 START WITH parent_group_id = 0
                 CONNECT BY PRIOR group_id  = parent_group_id
             )
    GROUP BY  link_id, id
    ORDER BY  link_id, id;

    Same results, using SELECT... PIVOT

    WITH got_roots AS
    (
        SELECT  CONNECT_BY_ROOT link_id  AS link_id
        ,       CONNECT_BY_ROOT id       AS id
        ,       tech_type
        FROM    sample_data
        START WITH parent_group_id = 0
        CONNECT BY PRIOR group_id  = parent_group_id
    )
    SELECT    link_id, tech_a, tech_b, tech_c
    FROM      got_roots
    PIVOT     ( COUNT (*)
                FOR tech_type IN ( 'A' AS tech_a
                                 , 'B' AS tech_b
                                 , 'C' AS tech_c
                                 )
              )
    ORDER BY  link_id, id
    ;
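
    As for the UPDATE step asked about above, a minimal sketch along the same lines (untested; note that it keys on link_id alone, so the duplicate LETTER_B roots would receive identical totals unless LINK_COUNTS also carries the root id):

    UPDATE  link_counts  lc
    SET     (tech_type_a, tech_type_b, tech_type_c) =
            ( SELECT  COUNT (CASE WHEN tech_type = 'A' THEN 1 END)
              ,       COUNT (CASE WHEN tech_type = 'B' THEN 1 END)
              ,       COUNT (CASE WHEN tech_type = 'C' THEN 1 END)
              FROM    ( SELECT  CONNECT_BY_ROOT link_id  AS link_id
                        ,       tech_type
                        FROM    sample_data
                        START WITH parent_group_id = 0
                        CONNECT BY PRIOR group_id  = parent_group_id
                      )
              WHERE   link_id = lc.link_id
            );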

  • OBIEE performance based on the customer machine specifications

    Hello

    Recently, I was testing OBIEE dashboard reports using two types of laptops, and strangely, I get different performance on the two machines.
    The first machine I used is a Dell Latitude D430 with 1.33 GHz and 1.99 GB of RAM. With a cache hit, the reports take almost 20~25 seconds, and sometimes a little more, to return to the user interface.
    The second machine I used is a Dell Latitude E4300 with 2.26 GHz and 1.95 GB of RAM. On that machine, cache-hit reports rendered in 5 seconds.
    I'm really confused. Can the speed of the machine make such a big difference in performance? I use a LAN connection in my office to test the reports.
    I'd be very grateful if someone could shed some light on this.


    Regards

    Using IE6 is bad practice. Anyway, there are a lot of configuration options in the browser. Check the security and proxy configs.

  • VI to convert NI 9402 input signals into an RPM value, based on the frequency of the pulses

    Hello

    I'm looking for a VI to convert an NI 9402 input signal into an RPM value, based on the frequency of the pulses. Does such a thing exist in the National Instruments library?

    I run LabVIEW 2014 Embedded Control and Monitoring on a cRIO 9802 high-performance integrated system with an NI 9402 4-channel LVTTL ultra-high-speed digital I/O module for cRIO.

    Any help would be greatly appreciated.

    The easiest way is to use the FPGA to get the time between the rising edges of your pulse (shift registers to hold the current state and time will be needed).  This will give you the period.  If it's a single pulse per revolution, then the RPM is just 60/T, where T is the period in seconds.
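
    For example (numbers are illustrative): with one pulse per revolution and a measured period of T = 0.005 s between rising edges, that is 60 / 0.005 = 12,000 RPM; with 4 pulses per revolution you would also divide by 4, giving 3,000 RPM.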

  • create an order spectrum from scratch (i.e. take a position-based FFT of simultaneously sampled data)

    Hello people,

    WHAT THE QUESTION PERTAINS TO:

    I sample two parameters of a system against time: rotary position and vibration (accelerometer, in g).  I want to take a position-based FFT to create an amplitude-phase order spectrum of velocity in in/s.  To do this, I perform the following:

    1. integrate (and scale) the g vibration signal to in/s (SVT Integration.vi)

    2. resample the time-sampled vibration signal against the simultaneously sampled angle signal (MA-Resample Unevenly Sampled Input (Linear Interpolation).vi)

    THE QUESTION:

    In which order should the operations be carried out: integrate then resample, or vice versa?  I didn't think the order would matter, but using the same data set, the results are radically different.

    NI ORDER ANALYSIS 2.0 TOOLSET:

    I have the NI Order Analysis Toolset 2.0, but I could not find a way to get the live speed-profile generation to work with quadrature position encoder signals from DAQmx (via a PXI-6602).  In addition, it seems I have to specify all the orders I'm interested in watching, which I don't really know at this point (I want to see all available orders), so I decided to do my own position-based FFT to get an order spectrum.

    Any help is greatly appreciated.

    Chris

    The right order is to integrate in the time domain first, creating a velocity channel.  You now have a new channel of data.  In general I would put this in the same waveform array as the acceleration time waveforms.

    Then resample your acceleration and/or velocity signals, and then you can calculate the order spectrum.

  • Change the structure of a program based on the input file

    I have a program that takes parameters from an input file and then executes a vision acquisition, using IMAQ, controlling some other hardware at the same time.  The parameters control the duration of the various stages of this process.

    There is a stacked sequence structure to control the reading of the input file, the initialization of the hardware, and then a while loop on the last frame for the actual acquisition.

    The users of the program now wish to have several acquisition sets in the same run, possibly with different time settings.  This would mean different iterations of the final loop, based on the parameters of the input file.  There could be 5 acquisition sets on one occasion, 3 on another, etc., in a single run of the program.

    The VI structure already seems a little baroque, and I don't want to make it even more complicated.

    I would appreciate advice on how best to proceed, because I'm really a LabVIEW novice.

    If you use a "State Machine" architecture, you can do exactly that.

    Instead of a stacked sequence structure, it is essentially a case structure within a while loop.

    Each case represents a State.

    So in your case, you would have a State for:

    -reading the input file

    -initialization of hardware

    -data acquisition

    You can have the user control the number of iterations of the data-acquisition state to run.

  • MSR Maps - address-based search no longer works.

    Original title: MSR Maps

    Has Microsoft stopped supporting MSR Maps?  The address-based search seems to no longer work.  I use this site frequently to retrieve USGS maps.

    Hi Mark,

    What exactly happens when you perform a search by address? Do you receive any error messages?

    You can read the following article:

    About Microsoft Research Maps

  • UCS C200 M2 and 1980s-era performance...

    Has anyone deployed UCS C200 M2 boxes and tried teaming the onboard gigabit interfaces? I'm seeing sub-Mbps performance on a CUCM 8.5.1 VM deployed using the prescribed Cisco 1000-user model. I've run the NICs standalone and teamed and get the same result: seriously bad performance. I have a Dell 2970 on the same switch running ESXi and am pulling almost gigabit wireline from it. Both are running ESXi 4.1. Both are teamed using route based on IP hash. Even downloads from the ESXi host IP address are dead slow.

    Please open a TAC service request because we need more information about this configuration.

    Just a quick check: do you have any traffic shaping configured on the output ports of the virtual switch on the ESXi host?

    HTH

    Padma

  • How much free space should we leave on the C drive before it affects performance and speed?

    Hello!

    My new PC's hard drive (C:) is 1 TB (Samsung, i7 processor, Windows 8.1). It doesn't have any other disk/partition. How much of this space is safe to use to store large files (videos especially, stored in the native video library folder) without harming performance and speed?

    And if I create another partition (D:) and just use it for storage, will it make much difference compared to the above?

    Thank you!

    Anna

    On Sunday, March 1, 2015 10:59 +0000, AnaFilipaLopes wrote:

    And if I create another partition (D:) and just use it for storage, will it make much difference compared to the above?

    Planning your Partitions

    The Question

    How many partitions should I have on my hard drive, what should I use
    each of them for, and what size should each one be?

    It's a common question, but unfortunately it doesn't have a single
    simple answer that's right for everyone. A lot of people will respond
    with the way they do it themselves, but their answer isn't necessarily
    best for the person asking (in many cases it isn't even right for the
    person answering).

    Terminology

    First, let's rethink the terminology. Some people ask "should I
    partition my drive?" That's the wrong question, because the
    terminology is a little strange. Some people think that the word
    "partition" means dividing the drive into two or more partitions.
    That's not correct: to partition a drive is to create one or more
    partitions on it. You must have at least one partition to use it.
    Anyone who thinks they have an unpartitioned disk actually has a
    drive with only one partition on it, and it's normally called C:.
    The choice you have is whether to have more than one partition,
    not whether to partition at all.

    A bit of history

    Back before Windows 95 OEM Service Release 2 (also known as Windows
    95b) was released in 1996, all MS-DOS and Windows hard drives were set
    up using the FAT16 file system (except for very tiny ones using
    FAT12). Because only 16 bits were used for addressing, FAT16 has a
    maximum partition size of 2 GB.

    Hard drives larger than 2 GB were rare at the time, but if you had
    one, you had to have multiple partitions to use all the available
    space. But even if your drive was no larger than 2 GB, FAT16 created
    another serious problem for many people - the cluster size got
    bigger the larger the partition was. Cluster sizes ranged from 512
    bytes for a partition no greater than 32 MB all the way up to 32 KB
    for a partition of 1 GB or more.

    The bigger the cluster size, the more space is wasted on a hard drive.
    That's because space for all files is allocated in whole clusters
    only. If you have 32 KB clusters, a 1-byte file takes 32 KB, a file
    one byte greater than 32 KB takes 64 KB, and so on. On average, each
    file wastes about half of its last cluster.

    So large partitions create a lot of waste (called "slack"). With a 2 GB
    FAT16 drive in a single partition, if you have 10,000 files, each
    wasting half of a 32 KB cluster, you lose about 160 MB to slack. That's
    a significant part of a drive that probably cost more than $400 back
    in 1996 - around $32 worth.

    So what did people do? They divided their 2 GB drive into two,
    three, or more logical drives. Each of these logical drives was
    smaller than the real physical disk, had smaller clusters, and
    therefore less waste. If, for example, you could keep all the
    partitions under 512 MB, the cluster size was only 8 KB, and the
    waste was cut to a quarter of what it would be otherwise.

    People partitioned for other reasons too, but back in the days of
    FAT16, that was the main reason to do so.

    The present

    Three things have changed radically since 1996:

    1. The FAT32 and NTFS file systems came along, allowing larger
    partitions with smaller clusters and therefore much less waste. With
    NTFS, cluster sizes are 4 KB, regardless of the size of the partition.

    2. Hard drives have become much larger, often more than 1 TB (1000 GB)
    in size.

    3. Hard drives have become much cheaper. For example, a 500 GB drive
    can be bought today for about $50. That's 250 times the size of that
    typical 2 GB drive of 1996, at about one-eighth of the price.

    Together, these things mean that the old reason to have multiple
    partitions - avoiding the considerable waste of disk space from slack -
    is gone. The amount of waste is much less than it used to be, and the
    cost of that waste is much less. For all practical purposes, almost
    nobody needs to be concerned about slack any more, and it should no
    longer be a consideration when planning your partition structure.

    What Partitions are used for today

    There are a variety of different ways people set up multiple
    partitions these days. Some of these uses are reasonable, some are
    debatable, some are downright bad. I'll discuss a number of common
    partition types in the following:

    1. a partition for Windows only

    Most of the people who create such a partition do so because they
    believe that if they ever have to reinstall Windows, at least they
    won't lose their data and won't have to reinstall their applications,
    because both are safe on other partitions.

    The first of these thoughts is a false comfort and the second one
    is downright wrong. See the discussion of partition types 2 and 4
    below to find out why.

    Also note that over the years, a lot of people find that a Windows
    partition that started out the right size turns out to be too
    small. For example, if you have such a partition for Windows and later
    upgrade to a newer version of Windows, you may find that your Windows
    partition is too small.

    2. a partition for installed programs

    This normally goes hand in hand with partition type 1, a partition for
    just Windows. The thought that if you reinstall Windows, your
    installed application programs are safe if they are on another
    partition is simply not true. That's because all installed programs
    (with the exception of an occasional trivial one) have pointers inside
    Windows, in the registry and elsewhere, as well as associated files
    buried in the Windows folder. So if Windows goes, the pointers and
    files go with it. Since the programs need to be reinstalled if Windows
    is, this rationale for a separate partition for programs doesn't
    work. In fact, there is almost never a good reason to separate
    Windows and application software into separate partitions.

    3. a partition for the pagefile.

    Some people mistakenly think that putting the pagefile on another
    partition will improve performance. That is also false; it doesn't
    help, and often hurts, performance, because it increases head movement
    to get back and forth between the pagefile and the other frequently
    used data on the disk. For best performance, the pagefile should
    normally be on the most-used partition of the least-used physical
    drive. For almost everyone with a single physical drive, that's the
    same drive Windows is on: C:.

    4. a partition for backups of other partitions.

    Some people make a separate partition to store backups of their other
    partition(s). People who rely on such a "backup" are fooling
    themselves. It is only very slightly better than no backup at all,
    because it leaves you likely to lose the original and the backup
    simultaneously to many of the most common dangers: head crashes and
    other types of drive failure, serious power glitches, nearby lightning
    strikes, virus attacks, even a stolen computer.  In my view,
    secure backup must be on removable media and not stored inside the
    computer.

    5. a partition for data files

    Above, when I discussed separating Windows onto its own partition,
    I pointed out that separating data from Windows is a false comfort if
    it's done with the idea that the data will be safe if Windows ever
    has to be reinstalled. I call it a false comfort because
    I'm afraid many people will rely on that separation, think that their
    data is safe there, and therefore not take measures to
    back it up. In truth the data is not safe there. Having to reinstall
    Windows is just one of the dangers to someone's hard disk, and
    probably not even the most likely one. This kind of "backup" falls
    into the same category as a partition for backing up other partitions;
    it leaves you vulnerable to the simultaneous loss of the original and
    the backup to many of the most common dangers, which affect the entire
    physical drive, not just a particular partition. Security comes from a
    solid backup regimen, not from how you partition.

    However, for some people, it may be a good idea to separate Windows and
    programs on the one hand from data on the other, putting each of the
    two types into separate partitions. I think most people's
    partitioning scheme should be based on their backup scheme, and backup
    schemes are generally of two types: imaging the whole hard drive,
    or backing up data only. If you back up data only, the backup is
    usually made easier by having a separate data-only partition;
    you can back up just that partition easily, without having to
    collect pieces from here and there. However, for
    those who back up by creating an image of the entire drive, there is
    usually little, if any, benefit to separating the data into a partition
    of its own.

    Furthermore, in all fairness, I should point out that there are many
    well-respected people who recommend a separate partition for Windows,
    whatever your backup plan.  Their arguments haven't convinced
    me, but there are clearly two different views here.

    6. a partition for image files

    Some people like to treat images and videos as something separate from
    other data files and create a separate partition for them. To my
    mind, an image is simply another type of data, and there is no
    advantage in doing so.

    7. a partition for music files.

    The comments above about image files also apply to music
    files. They are just another type of data and should be treated the
    same way as other data.

    8. a partition for a second operating system to dual-boot to.

    For those who run multiple operating systems (Windows Vista, Windows
    XP, Windows 98, Linux, etc.), a separate partition for each operating
    system is essential. The issues here are beyond the scope of this
    discussion, but suffice it to say that I have no objection at all to
    these partitions.

    Performance

    Some people have multiple partitions because they believe it
    somehow improves performance. That's not correct. The effect is
    probably small on modern computers with modern hard drives, but if
    there is any, the opposite is true: more partitions mean poorer
    performance. That's because normally no partition is full, and there
    are therefore gaps between them. It takes time for the drive's
    read/write heads to cross those gaps. The closer together all the
    files are, the faster access to them will be.

    Organization

    I think a lot of people over-partition because they use partitions as
    an organizational structure. They have a keen sense of order and want
    to separate the apples from the oranges on their drives.

    Yes, separating different types of files onto partitions is an
    organizational technique, but so is separating different types of
    files into folders. The difference is that partitions are static and
    fixed in size, while folders are dynamic, changing size automatically
    as needed to meet your changing needs. That usually makes folders
    a much better way to organize, in my view.

    Certainly, partitions can be resized when necessary, but except with
    the latest versions of Windows, that requires third-party software
    (and even the ability to do so in Windows is primitive compared to
    the third-party solutions). That third-party software normally costs
    money, and no matter how stable it is, it affects the entire drive,
    with the risk of losing everything. Plan your partitions correctly in
    the first place, and repartitioning will never be necessary.  The need
    to repartition usually arises as a result of over-partitioning in
    the first place.

    What often happens when people organize with partitions instead of
    folders is that they miscalculate how much room they need on each
    partition, and then when they run out of space on the partition
    where a file logically belongs, while having plenty of space
    on another, they simply save the file to the "wrong" partition.
    Paradoxically, this kind of partition structure therefore results in
    less organization rather than more.

    So how should I partition my drive?

    If you've read what came before, my conclusions will not come as a
    surprise:

    1. If your backup scheme is imaging the entire drive, have just one
    single partition (usually C:);

    2. If you back up just data, have two partitions: one for Windows and
    installed application programs (usually C:), the other for data
    (usually D:).

    With the exception of multiple operating systems, there is rarely
    any advantage to having more than two partitions.

  • Multicolor stroke where the color is based on the color of the pixel at the edge

    Hi, and first of all sorry for the terrible title. I've been looking for a way to create something like a regular stroke effect, but the color should be based on the color of the edge pixel. I need something like this because when I make textures for my models and put them in a game engine, I have a problem where white lines show at a certain distance from my model. The solution to this problem is to add color around the UV islands that is similar to the color of their edges. What I've done until now is put almost every UV island on a separate layer behind the main texture, so that when I want, resizing this copied layer does the trick. But this is a long process for more than 50 models, so I thought that maybe someone has a better idea of how to approach this problem. Thank you

    Not sure if this fits your needs, but you can try, after making a selection of the layer's pixels, Select > Modify > Border, choose a couple or a few pixels, and then Edit > Fill > Content-Aware Fill. You could also simply expand the selection with Select > Modify > Expand and then intersect that selection with the original selection, and then Content-Aware Fill, but I don't know if the additional step will buy you anything. If that works, creating an Action and running it in batches on all the files could speed things up.

  • Improving OLAP performance in Oracle Database 11g

    Hi all

    First of all, sorry for a very generic question; lately I have been faced with some performance problems during cube builds.

    For example, we had a problem with low SGA memory; we have now been referred to Doc ID 1464064.1.

    To cut it short: is there a list of things to check in advance so that OLAP performance can be improved,

    similar to the one mentioned in a blog post (Improving OLAP performance in Oracle Database 10g Release 1), which is for 10g?

    We use the ORDM DW, which has a TIME dimension with about 10k records; sometimes it takes about 20 minutes to build, and recently it's not building at all, throwing the error mentioned in Doc ID 1464064.1. I have a WIP SR on this as well, SR 3-9620792761.

    Similarly, our product dimension with approximately 9 million records sometimes finishes in 4 hours, and sometimes doesn't finish at all.

    I have twice dropped & recreated the analytic workspace, which helps a little, but the behavior remains inconsistent.

    Pointers would be really useful

    Thank you

    Prateek

    You can take the following anonymous PL/SQL block and substitute your specific cube name and measure names. You can add more steps as you like, following the same XML convention.

    DECLARE
        xmlClob CLOB;
        tmp     VARCHAR2(4000);
    BEGIN
        DBMS_LOB.CREATETEMPORARY(xmlClob, TRUE);
        DBMS_LOB.OPEN(xmlClob, DBMS_LOB.LOB_READWRITE);
        tmp := '
        ';
        DBMS_LOB.WRITEAPPEND(xmlClob, LENGTH(tmp), tmp);
        DBMS_CUBE.IMPORT_XML(xmlClob);
        DBMS_LOB.CLOSE(xmlClob);
    END;
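
    For simple rebuilds there is also an alternative worth sketching that avoids hand-built XML: DBMS_CUBE.BUILD takes a comma-delimited load list (the object names below are illustrative only):

    BEGIN
        -- Build the TIME dimension first, then the cube (illustrative names).
        DBMS_CUBE.BUILD('TIME_DIM, SALES_CUBE');
    END;
    /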

  • Hi, I'm trying to download the free trial version. I have Internet Explorer, and when I click Run a window opens showing the download, but it stays at 0%

    Hi, I am trying to download the free trial version. I have Internet Explorer, and when I click Run a window opens showing the download, but it always stays at 0% and no download happens. Can anyone help?

    Use another browser.

    Mylenium

  • How 'diminished' is the performance of Windows 7 on ESXi 5.1?

    I've never used ESXi. I am considering setting up a lab box, because soon I will have to deal with ESX at a customer site, and also because it looks like very cool software.

    I'm considering running my software development desktop, which is Windows 7, on this ESXi box. I don't think my needs are very intense as software goes, though I use it all day. The ESXi box would have a latest-generation i7 processor with more power than the standalone box I use now.

    The question is: how much performance would I lose running Windows 7 on ESXi compared to a normal installation? The ESXi server would have 32 GB of RAM and I would probably dedicate 16 GB to this workstation VM, which is what it has now on the dedicated box.

    I would like to see this 'plan' as viable, as long as I don't see a drop-off in performance.

    It runs very well.  You won't see any significant performance decrease.

    In fact, you will probably see a performance increase thanks to the new hardware.  * Make sure your new ESXi server is on the HCL.

  • Web-based app problem with processes

    Hi all

    AIX 6.1

    11.2.0.3

    We have put a new web-based application into production, replacing the old system.

    We have only 100 users who can be simultaneously connected, and our db PROCESSES parameter is set to 120.

    But we often hit the 'too many processes' error, which we never hit in the previous system.

    It seems that processes are left behind when users disconnect or are disconnected from the new web-based application.

    Is there a way to check which processes are orphaned? And how do I automatically clean them up/kill them?

    Does Oracle 11g have a stored procedure to manage this kind of issue?

    Thank you very much

    zxy

    You say you have a web-based application.  Which implies that it is a three-tier application?  If so, normally, the middle-tier servers maintain pools of connections.  If this is the case, you wouldn't expect a session to be created each time a user connects to the application, or a session to end whenever a user logs out of the application.  At best, killing the connections the middle tier has opened in its connection pool would cause application performance problems.  At worst, you'd make the application unusable.  You could certainly do things like create profiles that limit how many sessions a user can have, how long those sessions can be idle, etc.  But doing so for a three-tier application would generally be strongly discouraged.
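
    For reference, a minimal sketch of the profile approach mentioned above (names and limits are illustrative, and as noted this is generally discouraged for three-tier applications):

    -- Limit each user to 2 concurrent sessions and 30 minutes of idle time.
    CREATE PROFILE web_app_profile LIMIT
        SESSIONS_PER_USER  2
        IDLE_TIME          30;

    ALTER USER app_user PROFILE web_app_profile;

    -- RESOURCE_LIMIT must be enabled for these limits to be enforced.
    ALTER SYSTEM SET resource_limit = TRUE;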

    With regard to the question of whether you can change the PROCESSES setting dynamically...

    Oracle documentation is available at

    http://Tahiti.Oracle.com

    Once you're there, choose the documentation for your version of Oracle

    Online Oracle Database Documentation 11g Release 2 (11.2)

    Go to the reference

    http://docs.Oracle.com/CD/E11882_01/server.112/e25513/TOC.htm

    Now, either search for PROCESSES or go to the initialization parameters section to find the parameter's page

    http://docs.Oracle.com/CD/E11882_01/server.112/e25513/initparams200.htm#i1132608

    As with all the other initialization parameters, the Modifiable attribute will be listed.
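
    As a quick sketch of that check from SQL*Plus (PROCESSES is a static parameter, so a change needs SCOPE=SPFILE and an instance restart):

    SELECT name, value, issys_modifiable
    FROM   v$parameter
    WHERE  name = 'processes';

    ALTER SYSTEM SET processes = 200 SCOPE = SPFILE;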

    Justin

  • Text field value based on the numeric value of a number field

    I'm trying to make a PDF form to be completed online for a performance assessment.  I have three radio buttons that put a value in a total box.  For example, Not Good = 15, Good = 30, Excellent = 45.  So depending on the radio button you select, the value will be 15, 30, or 45.  I have 6 different categories, so the score can be anywhere between 100-300.  Based on these results, a ranking is determined, so what I'm trying to do is make a text box show their classification according to their total score.  Here's what I tried, without success.

    SalesPA.SalesPA.OverallScore::calculate - (JavaScript, client)

    if TotalPoints > 234
    {
        this = 'Exceeds expectations';
    }
    if TotalPoints > 179
    {
        this = 'Meets expectations';
    }
    if TotalPoints > 99
    {
        this = 'Does not meet expectations';
    }

    Your JavaScript syntax was incorrect. Use the syntax-checking tool in the future to avoid this. Here is a corrected version.

    if (TotalPoints > 234)
    {
        this.rawValue = "Exceeds expectations";
    }
    else if (TotalPoints > 179)
    {
        this.rawValue = "Meets expectations";
    }
    else if (TotalPoints > 99)
    {
        this.rawValue = "Does not meet expectations";
    }
