Weird scenario - full table scan on TEST db kills performance in Production

Hello
We have a number of Linux servers connected to HP XP SAN storage.
We have been observing strange behaviour since last week... every time the sys.aud$ table is full-table scanned in the TEST database on another server (also connected to the same SAN), the number of active sessions in PROD shoots through the roof and the load grinds the online banking services to a halt.

The HP storage engineers came in, opened their consoles and showed performance graphs side by side... the array didn't break a sweat, apart from a 15-second peak during the FTS in TEST. They swore blind that the storage is not the cause.

We are also engaging Oracle Support on this, but they are not being very creative about it and just point at storage as the problem based on the AWR reports (what a surprise!).

Has anyone experienced a similar situation? If so, please let us know how you resolved it.
Linux 5.6, Oracle 11.2.0.3, using ASM (Grid Infrastructure version 11.2.0.3)

Thank you

932957 wrote:
Thanks Mark, EXACTLY during the problem, with no CPU spikes... ASM iostat shows increased activity and the OS vmstat shows the blocked-process queue spiking (2nd column below):

How big is the aud$ table, and what kind of I/O calls are made to read it?

The 'in principle' explanation is simple - the scan is so large, and runs as such a large number of (relatively small) asynchronous I/O requests, that it floods the I/O queue to the SAN. ANY sharing of a SAN by I/O-intensive applications is likely to result in one application slowing down during the I/O peaks of the others.

If your banking application runs 3-tier with connection pooling in the middle tier, it is probably configured with dynamic connection pools; when database response time increases (due to the TEST I/O) the middle tier runs short of free connections and creates a few more - which are probably slow to start because the I/O response time is poor, so the middle tier creates yet more...

It was even weirder when this problem first arose... every 30 minutes PROD Active Sessions would spike and, for the life of us, we could not understand why... it took days to work out that the Grid Control metric "FAILED CONNECTION ATTEMPTS" runs a full scan on sys.aud$ in the TEST database every 30 minutes; after that, whenever we ran a SELECT COUNT(*) FROM SYS.AUD$ on TEST, we could reproduce it.
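
A minimal sketch for checking the size of the audit trail and reproducing the symptom on demand (assuming SYSDBA-level access on the TEST database; adjust owner/segment names if your audit trail has been relocated):

    -- how big has the audit trail grown? (a never-purged aud$ makes the
    -- metric's periodic full scan correspondingly expensive)
    select segment_name, bytes/1024/1024 as mb
    from   dba_segments
    where  segment_name in ('AUD$', 'I_AUD1');

    -- the statement that reproduced the PROD symptom in this thread
    select count(*) from sys.aud$;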

Not one of Oracle's best bits of implementation ;) http://jonathanlewis.wordpress.com/2010/04/05/failed-login/

Regards
Jonathan Lewis

Tags: Database

Similar Questions

  • Trend ServerProtect Real Time Scan kills Performance

    I've just virtualized a Windows 2003/Citrix Presentation Server 4.5 server onto a vSphere host, with a NetApp FAS2020 providing NAS as the datastore where the virtual machine is stored.  There are no other VM guests on the host for the moment and the NetApp is not yet used for anything else (i.e. nothing should be taxing the hardware).  I found that the ServerProtect V5.58 real-time scan running on the Citrix server pins the CPU at a constant 100% once about 8 users are connected.  If I disable real-time scanning, everything goes back to normal.

    Clearly, I still need to protect users' Citrix sessions from malware.  What is the best way to achieve this with Citrix/Terminal Server on VMware?

    Does anyone have a more recent version of ServerProtect, or even OfficeScan, running successfully within Citrix/Terminal Services hosted on VMware?

    Thank you

    D.

    Hello

    Moved to the Security Forum.

    Reinstalling Trend after a P2V could help, but maybe not.

    Trend also makes products specifically for virtualization and vSphere that do A/V scans using the vStorage API and therefore do not drastically affect performance; that is something to look into. It may not be 'real time', but it will allow better overall scanning.

    Best regards
    Edward L. Haletky VMware communities user moderator, VMware vExpert 2009, 2010

    Now available: 'VMware vSphere(TM) and Virtual Infrastructure Security' - http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security

    Also available: 'VMWare ESX Server in the Enterprise' - http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise

    Blogs: The Virtualization Practice (http://www.virtualizationpractice.com) | Blue Gears (http://www.astroarch.com/blog) | TechTarget (http://itknowledgeexchange.techtarget.com/virtualization-pro/) | Network World (http://www.networkworld.com/community/haletky)

    Podcast: Virtualization Security Round Table Podcast (http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast) | Twitter: Texiwill (http://www.twitter.com/Texiwill)

  • FULL TABLE SCAN even with the index, but why?

    Could someone please explain why I get FULL TABLE SCAN results in the explain plan when joining 2 tables on columns that already have indexes on them? For example,
    consider this fictional scenario:

    employee table with columns:
      employee# (primary key column)
      name

    address table with columns:
      employee# (foreign key to employee.employee#)
      address_type
      address

    select employee.name, address.address_type, address.address
    from   employee, address
    where  employee.employee# = address.employee#;

    This query shows a full table scan in the explain plan.

    A full table scan is not necessarily slow, and index access is not necessarily fast.

    You will presumably be retrieving most, if not all, of the rows from both tables. The fastest way to retrieve every row in a table is to do a table scan. Using an index, and doing a single-block read for each row of the table, is far less efficient than doing a table scan.
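
    As an illustration of that point, the two access paths can be compared side by side with hints (a sketch, assuming the employee/address tables above and an index on address.employee#; the hinted alias usage is standard, but the index itself is hypothetical):

    explain plan for
      select /*+ index(a) */ e.name, a.address_type, a.address
      from   employee e, address a
      where  e.employee# = a.employee#;
    select * from table(dbms_xplan.display);

    explain plan for
      select /*+ full(a) */ e.name, a.address_type, a.address
      from   employee e, address a
      where  e.employee# = a.employee#;
    select * from table(dbms_xplan.display);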

    Justin

  • How does a full table scan work

    Hi all

    I have a table which is accessed by the application every 5 seconds. This table has many concurrent inserts, updates and deletes. The table size is approx 200 MB (high water mark), but there are only about 5 rows in it, which amounts to maybe 20 to 30 KB of data. My SGA is about 2 GB. The stats are not stale, and there is no index on this table. Now I see a full table scan as its wait event, and I want to know the following.

    When a full table scan happens, does Oracle load the entire 200 MB of data into the SGA and then scan it, or only the space actually used by the table, i.e. the 20 to 30 KB?

    Thank you

    A

    Hello

    The high water mark is precisely the limit up to which Oracle must read to be sure that all the data has been seen. So even if you have only about 30 KB of data in the table, and even if that data sits in the first few blocks, a full scan must read the whole 200 MB (which is not good, as it takes much longer than reading a few blocks). The reason is that data was once written up to that block, and that is what raised the HWM.

    You can reorganize the table (alter table mytable move - or use DBMS_REDEFINITION so that the application can keep using the table while you do it) to reset the HWM. If the current small "used size" is transient and you expect the table to grow back to 200 MB or more, there is no need to reorg; do it only if you are confident that the table will stay very small.
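
    A minimal sketch of that reorganization (assuming an offline window is acceptable; note that a move marks the table's indexes UNUSABLE, so they need rebuilding, and fresh stats are worth gathering afterwards):

    alter table mytable move;                              -- rebuilds the segment and resets the HWM
    -- alter index <each_index_on_mytable> rebuild;        -- indexes go UNUSABLE after the move
    exec dbms_stats.gather_table_stats(user, 'MYTABLE');   -- refresh optimizer statistics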

    Best regards

    Bruno Vroman.

  • Access to the XML index path table is a full table scan

    Hi all

    I have an Oracle database, version 11.2.0.4.6.

    I am trying to implement partitioning on XML indexes.

    I created a table and index partitioned by timestamp as below.

    Whenever I query it, the path table access is a full table scan.

    I have applied the fix indicated in Doc ID 13522189.8.

    So retrieval is quite slow and partition pruning does not happen on the XML indexes.

    Wondering if anyone has experienced the same problem?

    CREATE TABLE INCIDENT
    (
      INCIDENT_PK          NUMBER(14,5),
      INCIDENTGROUPING_PK  NUMBER(14,5),
      INCIDENTTYPE_PK      NUMBER(14,5),
      SECURITYCLASS_PK     NUMBER(14,5),
      INCIDENT_DATE        TIMESTAMP,
      INCIDENT_DETAIL      SYS.XMLTYPE
    )
    TABLESPACE DATA_TBS_INCIDENT
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE (
      INITIAL     64K
      MINEXTENTS  1
      MAXEXTENTS  UNLIMITED
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS
    PARTITION BY RANGE (INCIDENT_DATE)
    (
      PARTITION SEP2013_WEEK1 VALUES LESS THAN (TO_TIMESTAMP('2013-09-08 00:00:00.00','YYYY-MM-DD HH24:MI:SS.FF2')),
      PARTITION SEP2013_WEEK2 VALUES LESS THAN (TO_TIMESTAMP('2013-09-15 00:00:00.00','YYYY-MM-DD HH24:MI:SS.FF2')),
      PARTITION SEP2013_WEEK3 VALUES LESS THAN (TO_TIMESTAMP('2013-09-22 00:00:00.00','YYYY-MM-DD HH24:MI:SS.FF2')),
      ..........);

    CREATE INDEX INCIDENTxdb_idx
    ON corpaudlive.INCIDENT (INCIDENT_DETAIL) INDEXTYPE IS XDB.XMLINDEX LOCAL PARALLEL 10
    PARAMETERS ('PATH TABLE INCIDENT_PATHTABLE (TABLESPACE DATA_TBS_INCIDENT)
                 PIKEY INDEX INCIDENT_PATHTABLE_PIKEY_IX (TABLESPACE IDX_TBS_INCIDENT)
                 PATH ID INDEX INCIDENT_PATHTABLE_ID_IX (TABLESPACE IDX_TBS_INCIDENT)
                 VALUE INDEX INCIDENT_PATHTABLE_VALUE_IX (TABLESPACE IDX_TBS_INCIDENT)
                 ORDER KEY INDEX INCIDENT_PATHTABLE_KEY_IX (TABLESPACE IDX_TBS_INCIDENT)
                 PATHS (INCLUDE (//forename //surname //postcode //dateofbirth //street //town))');

    SQL> explain plan for
      2  select a.INCIDENT_pk from INCIDENT a where XMLEXISTS('//forename[text()="john"]' passing INCIDENT_detail)
      3  and XMLEXISTS('//surname[text()="clark"]' passing INCIDENT_detail)
      4  and a.INCIDENT_date between TO_TIMESTAMP('01/10/2014','DD/MM/YYYY')
      5  and TO_TIMESTAMP('09/10/2014','DD/MM/YYYY');

    Explained.

    Elapsed: 00:00:02.77

    SQL> select * from table(dbms_xplan.display);

    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Plan hash value: 123057549

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                                         | Name                           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                                  |                                |     1 |    70 |  1803   (5)| 00:00:22 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                                   |                                |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                             | :TQ10003                       |     1 |    70 |  1803   (5)| 00:00:22 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    NESTED LOOPS SEMI                              |                                |     1 |    70 |  1803   (5)| 00:00:22 |       |       |  Q1,03 | PCWP |            |
    |   4 |     NESTED LOOPS                                  |                                |     1 |    57 |  1800   (5)| 00:00:22 |       |       |  Q1,03 | PCWP |            |
    |   5 |      VIEW                                         | VW_SQ_1                        |   239 |  5975 |  1773   (5)| 00:00:22 |       |       |  Q1,03 | PCWP |            |
    |   6 |       HASH UNIQUE                                 |                                |   239 | 25334 |            |          |       |       |  Q1,03 | PCWP |            |
    |   7 |        PX RECEIVE                                 |                                |   239 | 25334 |            |          |       |       |  Q1,03 | PCWP |            |
    |   8 |         PX SEND HASH                              | :TQ10002                       |   239 | 25334 |            |          |       |       |  Q1,02 | P->P | HASH       |
    |   9 |          HASH UNIQUE                              |                                |   239 | 25334 |            |          |       |       |  Q1,02 | PCWP |            |
    |* 10 |           HASH JOIN                               |                                |   239 | 25334 |  1773   (5)| 00:00:22 |       |       |  Q1,02 | PCWP |            |
    |  11 |            BUFFER SORT                            |                                |       |       |            |          |       |       |  Q1,02 | PCWC |            |
    |  12 |             PX RECEIVE                            |                                |     1 |    22 |     3   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |  13 |              PX SEND BROADCAST                    | :TQ10000                       |     1 |    22 |     3   (0)| 00:00:01 |       |       |        | S->P | BROADCAST  |
    |  14 |               TABLE ACCESS BY INDEX ROWID         | X$PT74MSS0WBH028JE0GUCLBK0LHM4 |     1 |    22 |     3   (0)| 00:00:01 |       |       |        |      |            |
    |* 15 |                INDEX RANGE SCAN                   | X$PR74MSS0WBH028JE0GUCLBK0LHM4 |     1 |       |     2   (0)| 00:00:01 |       |       |        |      |            |
    |* 16 |            HASH JOIN                              |                                | 12077 |   990K|  1770   (5)| 00:00:22 |       |       |  Q1,02 | PCWP |            |
    |  17 |             PX RECEIVE                            |                                |   250K|    10M|    39   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |  18 |              PX SEND BROADCAST                    | :TQ10001                       |   250K|    10M|    39   (0)| 00:00:01 |       |       |  Q1,01 | P->P | BROADCAST  |
    |  19 |               PARTITION SYSTEM ALL                |                                |   250K|    10M|    39   (0)| 00:00:01 |     1 |   112 |  Q1,01 | PCWC |            |
    |* 20 |                TABLE ACCESS BY LOCAL INDEX ROWID  | INCIDENT_PATHTABLE             |   250K|    10M|    39   (0)| 00:00:01 |     1 |   112 |  Q1,01 | PCWP |            |
    |* 21 |                 INDEX RANGE SCAN                  | INCIDENT_PATHTABLE_VALUE_IX    |   161 |       |    25   (0)| 00:00:01 |     1 |   112 |  Q1,01 | PCWP |            |
    |  22 |             PX BLOCK ITERATOR                     |                                |   221M|  8865M|  1671   (1)| 00:00:21 |    53 |    54 |  Q1,02 | PCWC |            |
    |* 23 |              TABLE ACCESS FULL                    | INCIDENT_PATHTABLE             |   221M|  8865M|  1671   (1)| 00:00:21 |    53 |    54 |  Q1,02 | PCWP |            |
    |* 24 |      TABLE ACCESS BY USER ROWID                   | INCIDENT                       |     1 |    32 |     1   (0)| 00:00:01 | ROWID | ROWID |  Q1,03 | PCWP |            |
    |* 25 |     VIEW PUSHED PREDICATE                         | VW_SQ_2                        |     1 |    13 |    20   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |  26 |      NESTED LOOPS                                 |                                |     1 |   106 |    20   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |  27 |       NESTED LOOPS                                |                                |     4 |   106 |    20   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |  28 |        NESTED LOOPS                               |                                |     4 |   256 |     8   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |  29 |         TABLE ACCESS BY INDEX ROWID               | X$PT74MSS0WBH028JE0GUCLBK0LHM4 |     1 |    22 |     3   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |* 30 |          INDEX RANGE SCAN                         | X$PR74MSS0WBH028JE0GUCLBK0LHM4 |     1 |       |     2   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
    |  31 |         PARTITION SYSTEM ITERATOR                 |                                |     4 |   168 |     5   (0)| 00:00:01 |    53 |    54 |  Q1,03 | PCWP |            |
    |* 32 |          TABLE ACCESS BY LOCAL INDEX ROWID        | INCIDENT_PATHTABLE             |     4 |   168 |     5   (0)| 00:00:01 |    53 |    54 |  Q1,03 | PCWP |            |
    |* 33 |           INDEX RANGE SCAN                        | INCIDENT_PATHTABLE_PIKEY_IX    |     4 |       |     4   (0)| 00:00:01 |    53 |    54 |  Q1,03 | PCWP |            |
    |  34 |        PARTITION SYSTEM ITERATOR                  |                                |     1 |       |     2   (0)| 00:00:01 |   KEY |   KEY |  Q1,03 | PCWP |            |
    |* 35 |         INDEX RANGE SCAN                          | INCIDENT_PATHTABLE_KEY_IX      |     1 |       |     2   (0)| 00:00:01 |   KEY |   KEY |  Q1,03 | PCWP |            |
    |* 36 |       TABLE ACCESS BY LOCAL INDEX ROWID           | INCIDENT_PATHTABLE             |     1 |    42 |     3   (0)| 00:00:01 |     1 |     1 |  Q1,03 | PCWP |            |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

      10 - access("SYS_P9"."PATHID"="ID")
      15 - access(SYS_PATH_REVERSE("PATH")>=HEXTORAW('02582E') AND SYS_PATH_REVERSE("PATH")<HEXTORAW('02582EFF'))
      16 - access("SYS_P11"."RID"="SYS_P9"."RID" AND TBL$OR$IDX$PART$NUM("CORPAUDLIVE"."INCIDENT",0,7,65535,"SYS_P9"."RID")=TBL$OR$IDX$PART$NUM("CORPAUDLIVE"."INCIDENT_PATHTABLE",0,7,65535,ROWID))
           filter("SYS_P9"."ORDER_KEY"<="SYS_P11"."ORDER_KEY" AND "SYS_P11"."ORDER_KEY"<SYS_ORDERKEY_MAXCHILD("SYS_P9"."ORDER_KEY"))
      20 - filter(SYS_XMLI_LOC_ISTEXT("SYS_P11"."LOCATOR","SYS_P11"."PATHID")=1)
      21 - access("SYS_P11"."VALUE"='john')
      23 - filter(SYS_XMLI_LOC_ISNODE("SYS_P9"."LOCATOR")=1 AND SYS_OP_BLOOM_FILTER(:BF0000,"SYS_P9"."PATHID"))
      24 - filter("A"."INCIDENT_DATE">=TIMESTAMP' 2014-10-01 00:00:00.000000000' AND "A"."INCIDENT_DATE"<=TIMESTAMP' 2014-10-09 00:00:00.000000000' AND "ITEM_2"=TBL$OR$IDX$PART$NUM("INCIDENT",0,7,65535,"A".ROWID))
      25 - filter("ITEM_4"=TBL$OR$IDX$PART$NUM("INCIDENT",0,7,65535,"A".ROWID))
      30 - access(SYS_PATH_REVERSE("PATH")>=HEXTORAW('027FF9') AND SYS_PATH_REVERSE("PATH")<HEXTORAW('027FF9FF'))
      32 - filter(SYS_XMLI_LOC_ISNODE("SYS_P2"."LOCATOR")=1)
      33 - access("SYS_P2"."RID"="A".ROWID AND "SYS_P2"."PATHID"="ID")
      35 - access("SYS_P4"."RID"="A".ROWID AND "SYS_P2"."ORDER_KEY"<="SYS_P4"."ORDER_KEY" AND "SYS_P4"."ORDER_KEY"<SYS_ORDERKEY_MAXCHILD("SYS_P2"."ORDER_KEY"))
           filter("SYS_P4"."RID"="SYS_P2"."RID" AND TBL$OR$IDX$PART$NUM("INCIDENT",0,7,65535,"SYS_P2"."RID")=TBL$OR$IDX$PART$NUM("INCIDENT_PATHTABLE",0,7,65535,ROWID))
      36 - filter("SYS_P4"."VALUE"='clark' AND SYS_XMLI_LOC_ISTEXT("SYS_P4"."LOCATOR","SYS_P4"."PATHID")=1)

    Note
    -----
       - dynamic sampling used for this statement (level=6)

    69 rows selected.

    Elapsed: 00:00:00.47

    SQL> spool off

    Thank you

    CenterB

    You need to create an XMLIndex with two groups:

    create table actionnew (
      action_pk     number
    , action_date   timestamp
    , action_detail xmltype
    )
    partition by range (action_date)
    ( partition before_2015 values less than (timestamp '2015-01-01 00:00:00')
    , partition jan_2015    values less than (timestamp '2015-02-01 00:00:00')
    , partition feb_2015    values less than (timestamp '2015-03-01 00:00:00')
    );

    create index actionnew_sxi on actionnew (action_detail)
    indextype is xdb.xmlindex
    local
    parameters (q'~
      group my_group_1
        xmltable actionnew_xt1
        '/audit/action_details/screen_data/tables/table/row'
        columns forename varchar2(100) path 'forename'
              , surname  varchar2(100) path 'surname'
      group my_group_2
        xmltable actionnew_xt2
        '/audit/action_details/fields'
        columns forename varchar2(100) path 'forename'
              , surname  varchar2(100) path 'surname'
    ~'
    );

    select x.*
    from actionnew t
       , xmltable(
           '/audit/action_details/screen_data/tables/table/row'
           passing t.action_detail
           columns forename varchar2(100) path 'forename'
                 , surname  varchar2(100) path 'surname'
         ) x
    where t.action_date between timestamp '2015-02-01 00:00:00'
                            and timestamp '2015-03-01 00:00:00'
    and x.forename = 'anwardo'
    and x.surname = 'gram';

  • Full Table Scan: logical reads are the same as the number of blocks

    Hi people,

    Please see the following execution plan:

    Plan hash value: 1148783227

    ---------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation           | Name                          | Starts | E-Rows | A-Rows |   A-Time    | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    ---------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT    |                               |      1 |        |      0 | 00:01:20.23 |    481K |   481K |       |       |          |
    |*  1 |  HASH JOIN          |                               |      1 |  50351 |      0 | 00:01:20.23 |    481K |   481K |  7902K|  2074K| 7997K (0)|
    |*  2 |   HASH JOIN         |                               |      1 |  50351 |  31333 | 00:00:01.45 |    3138 |   3134 |    17M|  2295K|   18M (0)|
    |*  3 |    TABLE ACCESS FULL| INS_DCT_BUSINESS_FOLDER       |      1 |  50351 |    122K| 00:00:00.82 |    2262 |   2260 |       |       |          |
    |   4 |    TABLE ACCESS FULL| INS_DCT_CLAIM_DECEASED_FOLDER |      1 |  73533 |  76656 | 00:00:00.34 |     876 |    874 |       |       |          |
    |*  5 |   TABLE ACCESS FULL | INS_COMMON_PARTY              |      1 |   616K |      0 | 00:01:18.71 |    478K |   478K |       |       |          |
    ---------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - access("THIS_"."PARTY_PK"="PARTY1_"."PK")
       2 - access("THIS_"."FOLDER_ID"="THIS_1_"."FOLDER_ID")
       3 - filter(("THIS_1_"."STATUS"<>'10' AND "THIS_1_"."STATUS"<>'07' AND "THIS_1_"."STATUS"<>'08'))
       5 - filter(("PARTY1_"."CUSTOMER_ID" LIKE '%#CHAMP290501C#00000' AND "PARTY1_"."CUSTOMER_ID" IS NOT NULL))

    The full table scan on INS_COMMON_PARTY generated 478K physical reads.

    But the table contains 479K blocks:

    SQL> select blocks from dba_segments where segment_name = 'INS_COMMON_PARTY';

    BLOCKS

    ----------

    479488

    The 10046 trace file shows that most of the time each I/O request reads 16 blocks:

    WAIT #11529215045786738576: nam='direct path read' ela= 5619 file number=27 first dba=1695088 block cnt=16 obj#=19115 tim=4076488005225
    WAIT #11529215045786738576: nam='direct path read' ela= 33322 file number=26 first dba=758658 block cnt=14 obj#=19115 tim=4076488044875
    WAIT #11529215045786738576: nam='direct path read' ela= 2140 file number=26 first dba=758672 block cnt=16 obj#=19115 tim=4076488053342
    WAIT #11529215045786738576: nam='direct path read' ela= 205 file number=26 first dba=758688 block cnt=16 obj#=19115 tim=4076488054012
    WAIT #11529215045786738576: nam='direct path read' ela= 2057 file number=26 first dba=758704 block cnt=16 obj#=19115 tim=4076488056622
    WAIT #11529215045786738576: nam='direct path read' ela= 22034 file number=26 first dba=758720 block cnt=16 obj#=19115 tim=4076488079117
    WAIT #11529215045786738576: nam='direct path read' ela= 5516 file number=26 first dba=758736 block cnt=16 obj#=19115 tim=4076488085001
    WAIT #11529215045786738576: nam='direct path read' ela= 4914 file number=26 first dba=758752 block cnt=16 obj#=19115 tim=4076488090434
    WAIT #11529215045786738576: nam='direct path read' ela= 7748 file number=26 first dba=758768 block cnt=16 obj#=19115 tim=4076488098836
    WAIT #11529215045786738576: nam='direct path read' ela= 1046 file number=9 first dba=1411 block cnt=5 obj#=19076 tim=4076488101527
    WAIT #11529215045786738576: nam='direct path read' ela= 3882 file number=9 first dba=1424 block cnt=8 obj#=19076 tim=4076488105439
    WAIT #11529215045786738576: nam='direct path read' ela= 1736 file number=9 first dba=1433 block cnt=15 obj#=19076 tim=4076488107310
    WAIT #11529215045786738576: nam='direct path read' ela= 123 file number=9 first dba=1449 block cnt=15 obj#=19076 tim=4076488107616
    WAIT #11529215045786738576: nam='direct path read' ela= 876 file number=9 first dba=1465 block cnt=15 obj#=19076 tim=4076488108814
    WAIT #11529215045786738576: nam='direct path read' ela= 11326 file number=9 first dba=1481 block cnt=15 obj#=19076 tim=4076488120464
    WAIT #11529215045786738576: nam='direct path read' ela= 2497 file number=9 first dba=1497 block cnt=15 obj#=19076 tim=4076488123305
    WAIT #11529215045786738576: nam='direct path read' ela= 1382 file number=9 first dba=1513 block cnt=15 obj#=19076 tim=4076488125037
    WAIT #11529215045786738576: nam='direct path read' ela= 799 file number=9 first dba=1529 block cnt=7 obj#=19076 tim=4076488126162
    WAIT #11529215045786738576: nam='direct path read' ela= 45 file number=17 first dba=1920 block cnt=8 obj#=19076 tim=4076488126533
    WAIT #11529215045786738576: nam='direct path read' ela= 2593 file number=18 first dba=1794 block cnt=14 obj#=19076 tim=4076488129290
    WAIT #11529215045786738576: nam='direct path read' ela= 1727 file number=18 first dba=1808 block cnt=16 obj#=19076 tim=4076488131202
    WAIT #11529215045786738576: nam='direct path read' ela= 7308 file number=18 first dba=1824 block cnt=16 obj#=19076 tim=4076488138872
    WAIT #11529215045786738576: nam='direct path read' ela= 514 file number=18 first dba=1840 block cnt=16 obj#=19076 tim=4076488139735
    WAIT #11529215045786738576: nam='direct path read' ela= 110 file number=18 first dba=1856 block cnt=16 obj#=19076 tim=4076488140232
    WAIT #11529215045786738576: nam='direct path read' ela= 114 file number=18 first dba=1872 block cnt=16 obj#=19076 tim=4076488140689
    WAIT #11529215045786738576: nam='direct path read' ela= 114 file number=18 first dba=1888 block cnt=16 obj#=19076 tim=4076488141146
    WAIT #11529215045786738576: nam='direct path read' ela= 113 file number=18 first dba=1904 block cnt=16 obj#=19076 tim=4076488141603
    WAIT #11529215045786738576: nam='direct path read' ela= 695 file number=19 first dba=1794 block cnt=14 obj#=19076 tim=4076488142645
    WAIT #11529215045786738576: nam='direct path read' ela= 549 file number=19 first dba=1808 block cnt=16 obj#=19076 tim=4076488143540
    WAIT #11529215045786738576: nam='direct path read' ela= 1742 file number=19 first dba=1824 block cnt=16 obj#=19076 tim=4076488145588
    WAIT #11529215045786738576: nam='direct path read' ela= 1834 file number=19 first dba=1840 block cnt=16 obj#=19076 tim=4076488147769

    ................................

    WAIT #11529215045786738576: nam='direct path read' ela= 113966 file number=19 first dba=52960 block cnt=16 obj#=19076 tim=4076492053842
    WAIT #11529215045786738576: nam='direct path read' ela= 3173 file number=19 first dba=52976 block cnt=16 obj#=19076 tim=4076492057550
    WAIT #11529215045786738576: nam='direct path read' ela= 3486 file number=19 first dba=52992 block cnt=16 obj#=19076 tim=4076492061390
    WAIT #11529215045786738576: nam='direct path read' ela= 2288 file number=19 first dba=53008 block cnt=16 obj#=19076 tim=4076492064029
    WAIT #11529215045786738576: nam='direct path read' ela= 4692 file number=19 first dba=53024 block cnt=16 obj#=19076 tim=4076492069069
    WAIT #11529215045786738576: nam='direct path read' ela= 1239 file number=19 first dba=53040 block cnt=16 obj#=19076 tim=4076492070657
    WAIT #11529215045786738576: nam='direct path read' ela= 2365 file number=19 first dba=53056 block cnt=16 obj#=19076 tim=4076492073373
    WAIT #11529215045786738576: nam='direct path read' ela= 227 file number=19 first dba=53072 block cnt=16 obj#=19076 tim=4076492073970
    WAIT #11529215045786738576: nam='direct path read' ela= 215 file number=19 first dba=53088 block cnt=16 obj#=19076 tim=4076492074531
    WAIT #11529215045786738576: nam='direct path read' ela= 204 file number=19 first dba=53104 block cnt=16 obj#=19076 tim=4076492075082
    WAIT #11529215045786738576: nam='direct path read' ela= 198 file number=19 first dba=53120 block cnt=16 obj#=19076 tim=4076492075626
    WAIT #11529215045786738576: nam='direct path read' ela= 217 file number=19 first dba=53136 block cnt=16 obj#=19076 tim=4076492076191
    WAIT #11529215045786738576: nam='direct path read' ela= 216 file number=19 first dba=53152 block cnt=16 obj#=19076 tim=4076492076755
    WAIT #11529215045786738576: nam='direct path read' ela= 1199 file number=19 first dba=53168 block cnt=16 obj#=19076 tim=4076492078302

    .......................................................

    STAT #11529215045786738576 id=5 cnt=0 pid=1 pos=2 obj=19076 op='TABLE ACCESS FULL INS_COMMON_PARTY (cr=478541 pr=478534 pw=0 time=98541439 us cost=141729 size=132638015 card=616921)'

    To me, that means the number of I/O requests should be about 479488/16 = 29968.

    So why is the reported number of reads so close to the number of blocks?

    Am I missing something here?

    Thanks for your help

    The column headed "Reads" is the number of blocks read, not the number of read requests.
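
    To see the distinction directly, one could compare the request counter with the block counter (a sketch, assuming access to the v$ views; v$sesstat exposes the same statistics per session):

    select name, value
    from   v$sysstat
    where  name in ('physical read total IO requests', 'physical reads');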

    Regards

    Jonathan Lewis

  • How to avoid the full table scan?

    Hello

    I'm new to SQL tuning. When I run the following query, a full table scan happens and it does not use the index.

    SELECT /*+ FIRST_ROWS(2) */ a0.t$ttyp, a0.t$amnt FROM forest112 a0 WHERE a0.t$amnt <> :1 AND a0.t$dapr = :2 AND a0.t$tapr = :3;

    When I searched the net, I found that by changing the '<>' operator to 'NOT IN' we can make the query use the index, but won't that change the result? Is this true? What other changes can be made to this query?

    I think that creating the index below may solve your problem, because in that case the query will not hit the table at all and will get all the desired data from the index itself:

    create index ind_1 on forest112 (t$tapr, t$amnt, t$dapr, t$ttyp) compute statistics;
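
    To check whether the new index is actually picked up, one could (assuming the bind values are representative) look at the plan:

    explain plan for
      select /*+ first_rows(2) */ a0.t$ttyp, a0.t$amnt
      from   forest112 a0
      where  a0.t$amnt <> :1 and a0.t$dapr = :2 and a0.t$tapr = :3;

    select * from table(dbms_xplan.display);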

    Thank you

    Harman

  • Tuning a SQL insert that inserts 1 million rows and does a full table scan

    Hi Experts,

    I'm on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from a main application table, based on a date. The application table has 3 million rows, and all the rows more than 6 months old must go into the history/archive table. This was decided recently, and we have 1 million rows that meet this criterion. This insert into the archive table takes about 3 minutes. The explain plan shows a full table scan on the main table - which is the right thing, because we are pulling 1 million rows out of the main table into the history table.

    My question is: is there any way I can make this SQL go faster?

    Here's the query plan (I have changed the table names etc.):

       INSERT INTO EMP_ARCH
       SELECT *
    FROM EMP M
    where HIRE_date < (sysdate - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    
    
    

    I heard that there is a way to use the opt_param hint to increase the multiblock read count, but it did not work for me... I would be grateful for suggestions on this. Could switching to collections and doing this in PL/SQL also make it faster?

    Thank you

    OrauserN

    (1) create an index on hire_date

    (2) use the 'append' hint in the 'insert ... select' statement

    (3) run 'alter session enable parallel dml;' before you run the whole statement
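
    Putting those suggestions together, a sketch of what the archive run might look like (assuming the EMP/EMP_ARCH names from the trace above, that direct-path and parallel DML are acceptable for this batch job, and a made-up index name):

    create index emp_hire_date_ix on emp (hire_date);

    alter session enable parallel dml;

    insert /*+ append */ into emp_arch
    select * from emp
    where  hire_date < (sysdate - :v_num_days);

    commit;  -- a direct-path insert must be committed before the table can be queried again in this session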

  • Finding the SIDs doing full table scans in a db

    Guys,

    10.2.0.5 / 2-node RAC / RHEL 3

    Can anyone give me a SQL statement to find all SIDs doing full table scans in a database?

    Thank you!
    Hari

    Do you mean all the sessions doing an FTS right now, or all sessions that have ever done an FTS? For the latter, you can run something like this:

    select sid,name,value from v$statname natural join v$sesstat
    where name like 'table scans%tables%' order by sid;
    
  • The query makes a full table scan?

    I have a simple select query that filters on the last 10 or 11 days of data in a table. In the first case, it runs in 1 second. In the second case it takes 15 minutes and still isn't done.

    I can tell that the second query (11 days) does a full table scan.
    - Why is this happening? ... I guess some kind of threshold?
    - Is there a way to avoid this? ... or to encourage Oracle to play nice?

    I find it confusing, from the front-end/query point of view, to get such different performance.

    Jason
    Oracle 10g
    Quest Toad 10.6

    CREATE TABLE delme10 AS 
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-10,'D');
    
    Plan hash value: 915912709
    
    --------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT       |                   |  4799 |  5534K|  4951   (1)| 00:01:00 |
    |   1 |  LOAD AS SELECT              | DELME10           |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| ED_VISITS         |  4799 |  5534K|  4796   (1)| 00:00:58 |
    |*  3 |    INDEX RANGE SCAN          | NDX_ED_VISITS_020 |  4799 |       |    15   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - access("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-10,'fmd'))
    
    
    CREATE TABLE delme11 AS 
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-11,'D');
    Plan hash value: 1113251513
    
    -----------------------------------------------------------------------------------------------------------------
    | Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    -----------------------------------------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT |           | 25157 |    28M| 14580   (1)| 00:02:55 |        |      |            |
    |   1 |  LOAD AS SELECT        | DELME11   |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR       |           |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM) | :TQ10000  | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR  |           | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWC |            |
    |*  5 |      TABLE ACCESS FULL | ED_VISITS | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWP |            |
    -----------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       5 - filter("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-11,'fmd'))

    This seems to change the explain plan...

    alter session set optimizer_index_cost_adj=10;
    
  • trunc() causing full table scans

    I have a situation here where my query is this:

    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5 = 'HD' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);

    COUNT (1)
    ----------
    6

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Plan hash value: 3951750498

    ---------------------------------------------------------------------------------------------------------------
    | Id | Operation                | Name                 | Rows | Bytes | Cost (%CPU)| Time     | Pstart | Pstop |
    ---------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT         |                      |    1 |    10 | 13904   (1)| 00:02:47 |        |       |
    |  1 |  SORT AGGREGATE          |                      |    1 |    10 |            |          |        |       |
    |  2 |   PARTITION LIST SINGLE  |                      |    1 |    10 | 13904   (1)| 00:02:47 |     12 |    12 |
    |* 3 |    TABLE ACCESS FULL     | HBSM_SM_ACCOUNT_INFO |    1 |    10 | 13904   (1)| 00:02:47 |     12 |    12 |
    ---------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       3 - filter(("CUST_STATUS"='UP' OR "CUST_STATUS"='UUP') AND
                  TO_DATE(INTERNAL_FUNCTION("FIRST_ACTVN_DATE"))=TO_DATE(TO_CHAR(SYSDATE@!)))

    16 rows selected.


    If I remove the trunc() from the query, performance improves significantly, but the results are wrong.

    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5 = 'HD' and CUST_STATUS in ('UP','UUP') and FIRST_ACTVN_DATE = trunc(sysdate);

    COUNT (1)
    ----------
    0


    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Plan hash value: 454529511

    ---------------------------------------------------------------------------------------------------------------------------
    | Id | Operation                   | Name                 | Rows | Bytes | Cost (%CPU)| Time     | Pstart | Pstop |
    ---------------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT            |                      |    1 |    40 |    47   (0)| 00:00:01 |        |       |
    |* 1 |  TABLE ACCESS BY INDEX ROWID| HBSM_SM_ACCOUNT_INFO |    1 |    40 |    47   (0)| 00:00:01 |     12 |    12 |
    |* 2 |   INDEX RANGE SCAN          | IND_FIRST_ACTVN_DATE |   51 |       |     4   (0)| 00:00:01 |        |       |
    ---------------------------------------------------------------------------------------------------------------------------


    Can anyone please help me get the correct data while also preventing such full table scans?

    Unless you use a function-based index, applying any function to an indexed column prevents the use of the index.

    The way around it in your case is to realize that

    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
    

    is really asking whether FIRST_ACTVN_DATE is some time today. You can rewrite it as

    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
    and FIRST_ACTVN_DATE >= trunc(sysdate)
    and FIRST_ACTVN_DATE < trunc(sysdate) + 1
    

    Note that this may not always use the index, depending on how many rows fall on today's date compared with how many fall outside it.
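
    The function-based index alternative mentioned above would look something like this (a sketch only; the index name is made up, and the range-predicate rewrite is usually the better first choice):

    create index hbsm_sm_acct_trunc_fad_ix
      on HBSM_SM_ACCOUNT_INFO (trunc(FIRST_ACTVN_DATE));

    -- the original trunc() predicate can then use this index
    select count(1) from HBSM_SM_ACCOUNT_INFO
    where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
    and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);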

    Also, when you post, don't forget to put your code between code tags, and to post create table scripts and sample data inserts.
    
  • Full Table Scan confusion

    Hello experts,

    I am on 11g R2 on RHEL5. I have a general question here: Oracle says that for a full table scan, scattered reads are slower than sequential reads. As far as I know, a sequential read is a single-block read into the buffer cache, and a scattered read is a multiblock read that can occur for an index fast full scan or a full table scan. My question is: what exactly is a scattered read, and how is it different from a sequential read, on the technical side? Please shed some light on these terms so that I can work on tuning. There is also mention of RANDOM I/O.
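
    For reference, these two read types surface as the wait events 'db file sequential read' (single-block) and 'db file scattered read' (multiblock). A quick sketch to see how much time a given session spends on each (assuming access to the v$ views and a known SID):

    select event, total_waits, time_waited
    from   v$session_event
    where  sid = :sid
    and    event in ('db file sequential read', 'db file scattered read');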

    Very briefly, the choice would be something like this: if you ask for employees with the name 'Aman' (there wouldn't be many with that name), access using an index would be a better choice than it would be when asking for the name 'John' (perhaps not a great example for a column containing real names; the example is just for the sake of discussion).

    Aman...

  • Index on non-unique values in order to avoid the full table scan

    I have a table with over 100k records. The table is updated only during the nightly batch run. All columns except one have non-unique values, and I am querying the table with this query.

    COL3 - non-unique values - only 40 distinct values
    COL4 - non-unique values - 1000 distinct values
    last_column - 100k unique values

    select last_column from table_name where col3 in (...) or col4 in (...)

    I tried creating a bitmap index on col3 and col4 individually and also combined. However, in both cases it performs a full table scan.

    Please help me optimize this query, as it is used throughout the system and its cost is very high, around 650.

    I don't have much experience with indexes, so any pointers are welcome.

    Thank you
    Sensey

    Published by: user13312817 on November 7, 2011 11:32

    An alternative might be to use a UNION instead, with these 2 indexes:

    create index my_index1 on my_table (col3, last_column) compress 1;
    create index my_index2 on my_table (col4, last_column) compress 1;

    Select last_column from my_table
    where col3 in (...)
    Union
    Select last_column from my_table
    where col4 in (...)

    Mind you, the UNION would only apply here if duplicate values of last_column are meant to be eliminated.

  • Full table scan without scanning a particular column (CLOB data type column)

    I want to select from the online_bank table. This table has a column of CLOB data type. I want to select all the columns in the table except the CLOB column, but the Oracle server full-scans the table, including the CLOB column, when the query is run. It takes a long time to complete. Please give me a solution for doing the full table scan without reading the CLOB data type column. How can I avoid the time spent scanning the CLOB column?

    878728 wrote:
    I want to select from the online_bank table. This table has a column of CLOB data type. I want to select all the columns in the table except the CLOB column, but the Oracle server full-scans the table, including the CLOB column, when the query is run. It takes a long time to complete. Please give me a solution for doing the full table scan without reading the CLOB data type column. How can I avoid the time spent scanning the CLOB column?

    We do not have your table.
    We do not have your data.
    Therefore, we have no answer to your apparent mystery.

  • Query doing a full table scan

    Hi all,

    10.2.0.4 on solaris 10

    SELECT sum(RechargeForPrepaid/10000), to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD')
    FROM medt.crm_t  WHERE to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD') >= trunc(sysdate)-1 and tradetype != '0' group by to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD');
    The explain plan shows that it performs a full table scan on crm_t. I created indexes on the TIMESTAMP
    and tradetype columns, and collected stats too, but it still does a full table scan.


    Please guide

    Thank you
    Kai

    Obviously.

    The column is named incorrectly--> timestamp is a reserved word.
    It's also the wrong type--> dates should not be stored as varchar2.

    Also, applying a function to the indexed column in the WHERE clause
    always disables the index.

    The design of this table is a complete mess.
    Drop and redesign.

    You could hack around it yourself and put a function-based index on the expression, but the table will still be a disaster.
    I hope this code does not belong to a commercial application.
    It would make me cry, and want to hire a lawyer to sue the vendor.
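
    For completeness, since storing the date as a proper DATE column is the real fix: with a hypothetical tx_date DATE column replacing the varchar2 TIMESTAMP column, the query could use a plain range predicate and an ordinary index, along these lines (a sketch under that assumption only):

    create index crm_t_tx_date_ix on medt.crm_t (tx_date);

    select sum(RechargeForPrepaid/10000), trunc(tx_date)
    from   medt.crm_t
    where  tx_date >= trunc(sysdate) - 1
    and    tradetype != '0'
    group by trunc(tx_date);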

    -----------
    Sybrand Bakker
    Senior Oracle DBA
