Wrong alphabetical order in the index

Dear community,

I use InDesign CS6.

I have discovered a reproducible error in the order of names in the index.

When I add index markers for the names

  • Anton Huh
  • Martin Huh
  • Johannes Heindl

they are sorted like this:

  1. Huh, Anton
  2. Heindl, Johannes
  3. Huh, Martin

which is obviously wrong. The order should be 1, 3, 2.

I think the problem is that the space between last name and first name is ignored:

The name "Smith, John" is treated as "Smithjohn".

Of course, I can force InDesign to sort correctly by using the "Sort By" field. But why is that necessary?

Names are generally sorted by LAST NAME, and only if those are identical do you look at the FIRST NAME: word by word rather than letter by letter.

Is there really no feature that provides this word-by-word sort order without manual intervention?

Thank you in advance!

You are welcome to use those scripts, of course, but they serve a particular purpose and they interfere with the document considerably. Since all that Martin's index needs is setting/updating the sort-order fields, it is easier to target the index directly: that is quicker and safer.

You are partly right that removing the space after a comma fixes the sorting problem, but using the $ sign is tricky because you might want to do more things later, and symbols that have a special meaning in regular expressions are best avoided if possible, just to make life easier. So I use the digit 5 instead. The advantage is that I can use different digits to fine-tune the sort order. Here's the script:

(function () {
  if (app.documents[0].indexes.length > 0) {
    var order;
    var topics = app.documents[0].indexes[0].allTopics;
    for (var i = topics.length-1; i >= 0; i--) {
      order = topics[i].sortOrder;
      // If the topic already has a sort order defined, change it
      if (order !== '' && order.indexOf(', ') > -1) {
        topics[i].sortOrder = order.replace (', ', ',5');
      // Otherwise use the topic's name
      } else if (topics[i].name.indexOf(', ') > -1) {
        topics[i].sortOrder = topics[i].name.replace (', ', ',5');
      }
    }
  }
}());

Peter

Tags: InDesign

Similar Questions

  • Swedish entries in the index always sorted in the order of the English alphabet

    My InDesign is in English but the text is in Swedish. I can set the index section headers to the Swedish alphabet, so that the letters Å, Ä and Ö correctly come last, but the entries under the headers are still sorted in English order. For example, in Swedish a subheading starting with Fö should come after one starting with Fr, but Adobe's indexing reads the ö like an o and sorts it before the r.

    Is there any help out there, or will I have to cut and paste the index entries? Thank you, Linnea


    Linnea,

    You must set the default language for the document as well. What matters is not the language applied to the text, but the document's default language. To set it, make sure nothing is selected (press Shift+Ctrl+A or its Mac equivalent), open the Character panel, and select Swedish. Your index will then sort correctly, with no need to touch the sort-order fields.

    InDesign's behaviour here is strange and should clearly be fixed. It comes down to this:

    = To get words sorted correctly by their first letter, you must set the header type in the Sort Options window

    = To get words sorted correctly by the rest of the letters, you must set the default language for the document.

    Peter

  • The Indexing Service does not appear on Windows Server 2012 R2 (Server Manager)

    In Windows 7 Enterprise or Windows 8

    I activated the Indexing Service from the Turn Windows features on or off screen.

    After installing it, I restarted the machine and opened Computer Management. The installed Indexing Service appears under Services and Applications.

    In Windows Server 2012 R2, I turned on the "Windows Search Service".

    In the Computer Management screen, the Indexing Service does not appear. Why?

    Please help me on this.

    If I have done something wrong, please point it out.

    Thanks :)

    This issue is beyond the scope of this site (which is for consumers). To be sure you get the best (and fastest) reply, please ask either on TechNet (for IT pros) or MSDN (for developers).

  • Query takes too much time after adding a new column to the table and creating an index on it

    I added a new column to a table that contains thousands of records, and created a composite index on three columns (the newly added column plus two existing columns).

    To be specific: table TBL has two columns, col1 and col2.

    I added the new column col3 to TBL and created the composite index (col1, col2, col3).

    Right now col3 is NULL for all records. When I select from this table, it takes too long...

    Any idea what I am doing wrong? I have checked the query plan; it is using the index.

    It was solved by gathering statistics with

    DBMS_STATS.GATHER_TABLE_STATS
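
    A minimal sketch of such a call (gathering on the current schema; the cascade option, which also refreshes index statistics, is an assumption about what was run):

    BEGIN
      -- refresh optimizer statistics for the modified table;
      -- cascade => TRUE also gathers statistics on its indexes
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                    tabname => 'TBL',
                                    cascade => TRUE);
    END;
    /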

    @Top.Gun, thanks for your reply...

  • Index range scan

    Hello

    I was reading about the differences between index range scan, index unique scan, and index skip scan.

    According to the docs on how the CBO evaluates IN-list iterators, http://docs.oracle.com/cd/B10500_01/server.920/a96533/opt_ops.htm

    I can see that

    "The IN -list iterator is used when a query contains a IN clause with values." The execution plan is the same which would result for a statement of equality clause instead of IN with the exception of an extra step. This step occurs when the IN -list iterator feeds section of equality with the unique values of the IN -list. »

    Admittedly, that doc is for Oracle9i Database (I cannot find the passage in the 11g docs).

    And Example 2-1, the initial IN-list iterators statement, shows that an INDEX RANGE SCAN is used.


    On my Oracle 11gR2 database, if I issue a statement similar to the example in the doc, i.e. select * from employees where employee_id in (7076, 7009, 7902), I see that it uses a UNIQUE SCAN.


    In Oracle Performance Tuning: Index access methods: Oracle Tuning Tip #11: Unique Index Scan, I read that

    for Oracle to follow the Index Unique Scan path, the equality operator (=) must be used in the SQL. If any operator other than equality is used, then Oracle cannot use the Unique Index Scan.

    (and I think this sentence is somewhere in the docs also).

    So, when using an IN-list predicate, why did Oracle use the unique scan on the primary key column index in my case, since it was not an equality predicate?

    Thank you.

    It is the Internet... you find a lot of information but you don't know whom to trust.

    Exactly! That is the thought you should ALWAYS have in the back of your mind when you visit ANY site (no matter who the author is), read a book or document, listen to ANY presentation, or read forum replies (me included).

    All sources of information can and will have errors, omissions and inaccuracies. An example used to illustrate one point can imply/suggest that it applies to related points as well. It's just not possible to cover everything.

    The 9i doc you posted is a good example. The older docs (even the 7.3 docs are still available online) often have a LOT better explanations and examples of basic concepts. One of the reasons is that there were not nearly as many advanced concepts that needed explaining; they did not exist yet.

    michaelrozar17 just posted a link to a 12c doc to refute my statement that the article you used was bad. No problem. Maybe that doc was posted because of these lines:

    The database performs a unique scan when the following conditions apply:

    • A query predicate references all of the columns in a unique index key using an equality operator, such as WHERE prod_id=10.
    • A SQL statement contains an equality predicate on a column referenced in an index that was created with the CREATE UNIQUE INDEX statement.

    Did the authors mean that a unique scan is ONLY performed under these conditions? We do not know. There could be several reasons why an INLIST ITERATOR was not included in that list:

    1. an IN-list is simply NOT considered for this use case (which may be what michaelrozar is suggesting)

    2. the authors were not aware that the CBO may also consider a unique scan for an INLIST predicate

    3. the authors WERE aware but forgot to include INLIST in the document

    4. the authors simply provided the most common conditions under which a unique scan would be considered

    We have no way of knowing the real reason. That does not mean the documentation is unreliable.

    In the other thread, I posted about the hard parse steps from Burleson's site, and Jonathan contradicted me. If even Burleson isn't reliable, I don't know which author has sufficient credibility... of course, both Burleson and Jonathan can say anything; it's true that I can say anything too, of course.

    If site X is wrong, site Y is wrong, site Z is wrong... should everyone read only the documentation and no other sites?

    That is the BEST statement about the reality of finding information that I have seen posted.

    No matter who the author is, and no matter what credibility their past articles may have earned them, you should ALWAYS keep those statements of yours in mind.

    That means you need to 'trust and verify'. You 'trusted', then you 'verified', and now you have a conflict between the WORDS and REALITY.

    Work out which one is correct. If your reality is correct, the documentation is wrong; OK. If your reality is wrong, then find out why.

    Except that nobody has posted ANY REALITY that shows your reality is wrong. IMHO, the reason is that the CBO probably does a LOT of things that are not documented and that never get explored, because there is never any reason to spend time exploring them other than curiosity.

    You have not presented ANY reason to think you should actually be concerned that a unique scan is used.

    Back to your original question:

    So, when using an IN-list predicate, why did Oracle use the unique scan on the primary key column index in my case, since it was not an equality predicate?

    1. why not use a unique scan?

    2. what would you want Oracle to use instead? A full table scan? An index range scan? An index skip scan? An index full scan? An index fast full scan?

    A full table scan?  For three key values? When there is a unique index? I hope not.

    An index range scan? Look at what the 12c doc provided says about those other scan types.

    How Index Range Scans Work

    In general, the process is as follows:

    1. Read the root block.
    2. Read the branch block.
    3. Alternate the following steps until all data is retrieved:
      1. Read a leaf block to obtain a rowid.

      2. Read a table block to retrieve a row.

    . . .
    For example, to scan the index, the database moves backward or forward through the leaf blocks. For example, a scan of IDs between 20 and 40 locates the first leaf block that has the lowest key value that is 20 or greater. The scan proceeds horizontally through the linked list of leaf nodes until it finds a value greater than 40, and then stops.

    If that '20' were the FIRST index value and the '40' were the LAST one, that would read ALL of the leaf nodes. That doesn't look good to me.

    How Index Full Scans Work

    The database reads the root block and then navigates down the left hand side of the index (or the right hand side for a descending full scan) until it reaches a leaf block. The database then reads across the bottom of the index, one block at a time, in sorted order. The scan uses single-block I/O rather than multiblock I/O.

    Which is about as bad as the previous one, isn't it?

    How Index Fast Full Scans Work

    The database uses multiblock I/O to read the root block and all of the leaf and branch blocks. The database ignores the branch and root blocks and reads the index entries on the leaf blocks.

    That seems no better than the previous one for your use case.

    Index Skip Scans

    An index skip scan occurs when the first column of a composite index is "skipped" or not specified in the query.

    . . .

    How Index Skip Scans Work

    An index skip scan logically splits a composite index into smaller subindexes. The number of distinct values in the leading columns of the index determines the number of logical subindexes. The lower that number, the fewer logical subindexes the optimizer must create, and the more efficient the scan becomes. The scan reads each logical index separately and "skips" index blocks that do not meet the filter condition on the non-leading column.

    Which does not apply to your use case: you do not have a composite index, and there is nothing to skip. If Oracle were to 'skip' between the values of the IN-list, it would still have to read the 'in-between' blocks in order to skip them.

    Which brings us back to using a unique scan, one at a time, for each of the values in the IN-list. The root index block will be in the cache after the first value is looked up, so it only needs to be read once. After that, Oracle just locates the single index entry it needs for each value. That sounds better than any of the other variants to me if you are only dealing with a small number of values in the IN clause.
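
    To see the behaviour for yourself, a quick check along the lines of the poster's example (the EMPLOYEES table and its EMPLOYEE_ID primary key, as in the HR sample schema) might look like this:

    EXPLAIN PLAN FOR
      SELECT *
        FROM employees
       WHERE employee_id IN (7076, 7009, 7902);

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    On a database like the poster's 11gR2, this typically shows an INLIST ITERATOR driving an INDEX UNIQUE SCAN, i.e. one unique probe per value, which is the behaviour discussed above.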

  • Is it possible to get code hints in the CSS page for a class I created in the HTML page?

    Hi all!

    First of all, I'm really sorry if this is a silly question, but I found this option in other HTML tools (like PhpStorm) and tried many options in CS6 without success.

    What I do: I just started learning HTML5 with an online course (no teacher, just a few videos), so I'm really a "rookie" user.

    I am trying to place an image in HTML, using CSS for formatting, like so:


    1 - In index.html, I wrote the following code:

    <figure class="foto-legenda">
      <img src="_imagens/image.jpg">
      <figcaption>
        <h3>Test image</h3>
        <p>Legend of the Image Test</p>
      </figcaption>
    </figure>


    2 - Then I need to define properties for "foto-legenda" in "file.css", like a border etc., and this is what I've done:

    figure.foto-legenda {
      border: 8px #F00 solid;
    }

    So far it works fine, but the question is:

    After I have written <figure class="foto-legenda" ...> in the HTML code, I want to write a bit of code in the CSS page and get a code hint for the names I started on the HTML page.

    For example, in the CSS page I start typing 'figure' and get a hint for 'figure' or 'figure class' (everything starting with "figure"); after selecting the desired option I start typing "foto..." and get a "foto-legenda" hint.

    I hope it's clear.

    Thank you in advance.

    Sorry, it does not work like that.

    First of all, define the CSS class selector. Save your CSS file.

    .foto-legenda {border: 1px solid #CCC; padding: 0.5em}

    Now, when you type class="foto... in your HTML, code hints will show you the class in a drop-down list. See screenshot.

    As far as I KNOW, CSS code hints in DW have never picked up HTML classes.

    Nancy O.

  • The index is not used in the GROUP BY

    Here's the scenario with examples. The big table has 333 to 500 million rows. Statistics are collected. There are histograms. The index is nevertheless not used. Why?
      CREATE TABLE "XXFOCUS"."some_huge_data_table" 
       (  "ORG_ID" NUMBER NOT NULL ENABLE, 
      "PARTNERID" VARCHAR2(30) NOT NULL ENABLE, 
      "EDI_END_DATE" DATE NOT NULL ENABLE, 
      "CUSTOMER_ITEM_NUMBER" VARCHAR2(50) NOT NULL ENABLE, 
      "STORE_NUMBER" VARCHAR2(10) NOT NULL ENABLE, 
      "EDI_START_DATE" DATE, 
      "QTY_SOLD_UNIT" NUMBER(7,0), 
      "QTY_ON_ORDER_UNIT" NUMBER(7,0), 
      "QTY_ON_ORDER_AMT" NUMBER(10,2), 
      "QTY_ON_HAND_AMT" NUMBER(10,2), 
      "QTY_ON_HAND_UNIT" NUMBER(7,0), 
      "QTY_SOLD_AMT" NUMBER(10,2), 
      "QTY_RECEIVED_UNIT" NUMBER(7,0), 
      "QTY_RECEIVED_AMT" NUMBER(10,2), 
      "QTY_REQUISITION_RDC_UNIT" NUMBER(7,0), 
         "QTY_REQUISITION_RDC_AMT" NUMBER(10,2), 
         "QTY_REQUISITION_RCVD_UNIT" NUMBER(7,0), 
         "QTY_REQUISITION_RCVD_AMT" NUMBER(10,2), 
         "INSERTED_DATE" DATE, 
         "UPDATED_DATE" DATE, 
         "CUSTOMER_WEEK" NUMBER, 
         "CUSTOMER_MONTH" NUMBER, 
         "CUSTOMER_QUARTER" NUMBER, 
         "CUSTOMER_YEAR" NUMBER, 
         "CUSTOMER_ID" NUMBER, 
         "MONTH_NAME" VARCHAR2(3), 
         "ORG_WEEK" NUMBER, 
         "ORG_MONTH" NUMBER, 
         "ORG_QUARTER" NUMBER, 
         "ORG_YEAR" NUMBER, 
         "SITE_ID" NUMBER, 
         "ITEM_ID" NUMBER, 
         "ITEM_COST" NUMBER, 
         "UNIT_PRICE" NUMBER, 
          CONSTRAINT "some_huge_data_table_PK" PRIMARY KEY ("ORG_ID", "PARTNERID", "EDI_END_DATE", "CUSTOMER_ITEM_NUMBER", "STORE_NUMBER")
      USING INDEX TABLESPACE "xxxxx"  ENABLE, 
          CONSTRAINT "some_huge_data_table_CK_START_DATE" CHECK (edi_end_date - edi_start_date = 6) ENABLE
       );
    
    SQL*Plus: Release 11.2.0.2.0 Production on Fri Sep 14 12:11:16 2012
    
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> SELECT num_rows FROM user_tables s WHERE s.table_name = 'some_huge_data_table';
    
      NUM_ROWS                                                                      
    ----------                                                                      
     333338434                                                                      
    
    SQL> SELECT MAX(edi_end_date)
      2    FROM some_huge_data_table p
      3   WHERE p.org_id = some_number
      4     AND p.partnerid = 'some_string';
    
    MAX(EDI_E                                                                       
    ---------                                                                       
    13-MAY-12                                                                       
    
    Elapsed: 00:00:00.00
    
    
    SQL> explain plan for
      2  SELECT MAX(edi_end_date)
      3    FROM some_huge_data_table p
      4   WHERE p.org_id = some_number
      5     AND p.partnerid = 'some_string';
    
    Explained.
    
    SQL> /
    
    PLAN_TABLE_OUTPUT                                                                                   
    ----------------------------------------------------------------------------------------------------
    Plan hash value: 2104157595                                                                         
                                                                                                        
    --------------------------------------------------------------------------------------------        
    | Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |        
    --------------------------------------------------------------------------------------------        
    |   0 | SELECT STATEMENT             |             |     1 |    22 |     4   (0)| 00:00:01 |        
    |   1 |  SORT AGGREGATE              |             |     1 |    22 |            |          |        
    |   2 |   FIRST ROW                  |             |     1 |    22 |     4   (0)| 00:00:01 |        
    |*  3 |    INDEX RANGE SCAN (MIN/MAX)| some_huge_data_table_PK |     1 |    22 |     4   (0)| 00:00:01 |        
    --------------------------------------------------------------------------------------------        
    
    SQL> explain plan for
      2  SELECT MAX(edi_end_date),
      3         org_id,
      4         partnerid
      5    FROM some_huge_data_table
      6   GROUP BY org_id,
      7            partnerid;
    
    Explained.
    
    PLAN_TABLE_OUTPUT                                                                                   
    ----------------------------------------------------------------------------------------------------
    Plan hash value: 3950336305                                                                         
                                                                                                        
    -------------------------------------------------------------------------------                     
    | Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |                     
    -------------------------------------------------------------------------------                     
    |   0 | SELECT STATEMENT   |          |     2 |    44 |  1605K  (1)| 05:21:03 |                     
    |   1 |  HASH GROUP BY     |          |     2 |    44 |  1605K  (1)| 05:21:03 |                     
    |   2 |   TABLE ACCESS FULL| some_huge_data_table |   333M|  6993M|  1592K  (1)| 05:18:33 |                     
    -------------------------------------------------------------------------------                     
    Why would it not use the index for the GROUP BY? If I write a loop over the distinct partnerid values (there are only three), the whole thing takes less than a second. Any help is appreciated.

    BTW, I also tried the index hint. It did not work. The version is shown in the example above.

    Edited by: RPuttagunta on September 14, 2012 11:24

    Edited by: RPuttagunta on September 14, 2012 11:26

    The actual names have been 'cleaned' for obvious reasons. Don't worry, I don't have table names in mixed case.

    RPuttagunta wrote:

    Looks like either I specified the index_ss hint Jonathan suggested incorrectly, or, I don't know, it did not use the 'skip scan'.

    You didn't specify it correctly; it should be: index_ss(table_alias), or index_ss(table_alias index_name), or index_ss(table_alias (list of index columns)).
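
    For example, using the alias p and the primary key index name from the DDL posted above, the second form could be written like this (a syntax sketch only; as noted below, it still may not produce the plan you want):

    SELECT /*+ index_ss(p some_huge_data_table_PK) */
           MAX(edi_end_date), org_id, partnerid
      FROM some_huge_data_table p
     GROUP BY org_id, partnerid;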

    But I just tried a quick test on 11.2.0.3, and it does not do what we would really like it to do.

    Regards
    Jonathan Lewis

  • Associative arrays... items in the index

    I can control the index of an associative array like that...
     DECLARE
       TYPE aat_id IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
      aa_ids aat_id;
    
    BEGIN
    
         aa_ids(1) := 3;
         aa_ids(2) := 8;
         aa_ids(3) := 10;
         
    aa_ids.delete(2);
    
    dbms_output.put_line(aa_ids.count);
    
    END;
    How can I control the index in a BULK COLLECT statement like the following, so that I can delete records using that index?
     SELECT decode(mod(rownum,2),1,1,2), object_name BULK COLLECT INTO aa_recs
         FROM   all_objects WHERE  ROWNUM <= 6;
    Thank you
    HESH.

    HESH wrote:

    but I have to play with the collected values; I want to delete values in bulk without looping through the list. Is there a trick for doing this?

    By not using PL/SQL and PL/SQL arrays at all; just use plain native SQL.

    Rather than bulk collecting and building an associative array in order to use the values in subsequent SQL operations, I would store the collection of values in a GTT (global temporary table) instead. A GTT can be indexed, and can thus provide much better access to the data than an associative array. It performs better. And it can be used natively via SQL.

    The best place for data is in the database, not in the PL/SQL layer. That means in SQL tables, not in PL/SQL tables or collections.
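
    A minimal sketch of that approach (the GTT name and column here are made up for illustration):

    CREATE GLOBAL TEMPORARY TABLE gtt_ids (
      id  NUMBER PRIMARY KEY      -- the primary key gives the GTT an index
    ) ON COMMIT DELETE ROWS;

    -- load the working set with plain SQL instead of BULK COLLECT
    INSERT INTO gtt_ids (id)
    SELECT object_id FROM all_objects WHERE ROWNUM <= 6;

    -- later operations, such as deletes, stay in native SQL
    DELETE FROM gtt_ids WHERE id = (SELECT MIN(id) FROM gtt_ids);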

    There are very few situations in PL/SQL that require the use of associative arrays. So few, in fact, that the majority of PL/SQL code using associative arrays does so badly, IMO. And in what you've posted so far, I do not see a specific requirement for associative arrays.

    So make sure you use the right feature, and make sure it is also put to good use in your code.

    PS. HESH is a strange name. I'm used to HESH meaning High Explosive Squash Head (used against armour plating). :-)

  • Poor resource lookup performance using the domain index XDBHI_IDX

    Hi all

    We have a home-grown XML DB application that performs schema validation of incoming messages. The application issues several queries like the one below. I know it hard parses, which is very bad, but that is not where the real problem is at the moment. The problem is with the domain index XDBHI_IDX on the XDB$RESOURCE table. We query for a very specific path that returns 0 or 1 rows, but as you can see the domain index has an intermediate result of 48041 rows, so the index is not effective here. Statistics are up to date and the index has been rebuilt. Does anyone know what we can do to make the domain index do what it should do: identify an XML document by its path?
    SELECT 0 
    FROM
     resource_view WHERE any_path=
      '/home/app/incoming/ARCH_IN/2012/01/592174'
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.02       0.02          0       1041          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2     81.26      85.43          0     202556     576158           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        6     81.28      85.45          0     203597     576158           0
    
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 67  (app)   (recursive depth: 3)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  TABLE ACCESS BY INDEX ROWID XDB$RESOURCE (cr=341531 pr=0 pw=0 time=60883725 us)
      48041   DOMAIN INDEX  XDBHI_IDX (cr=311030 pr=0 pw=0 time=1548774 us)
    Kind regards
    Rob.

    PS: The database version is 10.2.0.3.0

    Hi Rob,

    See EQUALS_PATH (and UNDER_PATH) conditions: http://docs.oracle.com/cd/B19306_01/server.102/b14200/conditions010.htm#i1051094

    SQL> select *
      2  from resource_view v
      3  where equals_path(v.res, '/office/excel/docs') = 1
      4  ;
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 3007404872
    
    --------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |              |    81 | 17172 |    32   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| XDB$RESOURCE |    81 | 17172 |    32   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | XDBHI_IDX    |       |       |            |          |
    --------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("XDB"."EQUALS_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734
                  ,"XMLEXTRA","XMLDATA"),'/office/excel/docs',8888)=1)
    
  • Help topics not in alphabetical order when the user clicks a keyword in the Index

    When I test my index in a generated WebHelp project, the topics associated with the keyword I click are not arranged in alphabetical order, but randomly.

    When I click the same keyword in the RoboHelp project (in the Index pod, before generation), the topics are arranged correctly.

    Is there a setting in RoboHelp or Internet Explorer that would straighten this out, or is it just a RoboHelp bug?

    Using RoboHelp 8 and IE 8.

    Thank you

    John

    Hi, John. This is John D.

    You can manually rearrange the index file, as you describe, to achieve the desired order. Some clarifying notes are added below:

    1. I doubt your index file is corrupt (although all things are possible). The index file is an XML text file and easily edited with Notepad.
    2. Of course, you will want to back up the original before editing.
    3. You cannot delete the Index in the designer and recreate it on the fly as simply as with the automatic table of contents that uses the folder structure. The only way to create an "automatic" index is to use the Smart Index Wizard, but most people don't find it useful because it takes more time to use it properly than to create the index from scratch. Besides, that would not solve your alphabetical-order question.
    4. I pasted a test below where I added an alpha character to each topic title directly in the XML, as shown. (In other words, I did not add the alpha character to the actual title tag in the topic.) It works as expected in the WebHelp output.
    5. Interestingly, it does not work in the MS HTML Help .chm version. That's because Microsoft uses the topic title to form the text of the displayed entry.
    6. It's not a bug, because much of the process is governed by Microsoft's original index file format. But it would be an interesting feature request (see Rick's signature above).

    Here's how the WebHelp output appears for the two index entries:

    And, the MS HTML Help CHM:

    Happy alphabetical order!

    John Daigle

    Adobe Certified RoboHelp and Captivate instructor

    Evergreen, Colorado

    www.showmethedemo.com

  • Strange - inserts slow at first, then fast after the index drop and recreate

    Hello

    I have a table with more than 1,250,000,000 rows on Oracle 11.1.0.7, Linux. It had 4 global, non-partitioned indexes. Inserts into this table were very slow: lots of db file sequential reads, each taking 0.009 seconds on average (from tkprof), which is not bad in itself, but the overall performance was poor. So I dropped and re-created the primary key index (3 columns in that index) and permanently dropped the other 3 indexes. As a result, the total number of db file sequential reads decreased by about a factor of 4 (I was expecting that: now there is only 1 index, not 4), but not only that: the average db file sequential read time fell to just 0.0014 seconds!

    Investigating further, I found from the traces that BEFORE the drop & recreate, each db file sequential read was hitting completely different ("random") blocks, and AFTER the drop & recreate, the blocks accessed by db file sequential reads are almost consecutively ordered (which lets the storage array cache help, and I think that's why I get 0.0014 instead of 0.009)! My question is: HOW did this HAPPEN? Why did the index rebuild help so much? Was the index fragmented? Or did it help that I recreated the index with PCTFREE 10%, so that no index block splits happen now (though they will appear in the future)?

    Important notes: the result set that I insert is, and always has been, ordered by the columns of the table's PK index. The FILESYSTEMIO_OPTIONS parameter is set to SETALL, so there is no OS cache (I presume) that could make my reads faster (because I have direct I/O).

    Here is an excerpt from the trace file (waits for a single insert operation):

    -->> BEFORE:
    WAITING #12: nam = 'db file sequential read' ela = 35089 file #= 15 block #= blocks 20534014 = 1 obj #= tim 64560 = 1294827907110090
    WAITING #12: nam = 'db file sequential read' ela = 6434 file #= 15 block #= blocks 61512424 = 1 obj #= tim 64560 = 1294827907116799
    WAITING #12: nam = 'db file sequential read' ela = 7961 file #= 15 block #= blocks 33775666 = 1 obj #= tim 64560 = 1294827907124874
    WAITING #12: nam = 'db file sequential read' ela = 16681 file #= 15 block #= blocks 60785827 = 1 obj #= tim 64560 = 1294827907143821
    WAITING #12: nam = 'db file sequential read' ela = 2380 file #= 15 block #= blocks 60785891 = 1 obj #= tim 64560 = 1294827907147000
    WAITING #12: nam = 'db file sequential read' ela = 4219 file #= 15 block #= blocks 33775730 = 1 obj #= tim 64560 = 1294827907151553
    WAITING #12: nam = 'db file sequential read' ela = 7218 file #= 15 block #= blocks 58351090 = 1 obj #= tim 64560 = 1294827907158922
    WAITING #12: nam = 'db file sequential read' ela = 6140 file #= 15 block #= blocks 20919908 = 1 obj #= tim 64560 = 1294827907165194
    WAITING #12: nam = ela 'db file sequential read' = 542 file #= 15 block #= blocks 60637720 = 1 obj #= tim 64560 = 1294827907165975
    WAITING #12: nam = 'db file sequential read' ela = 13736 file #= 15 block #= blocks 33350753 = 1 obj #= tim 64560 = 1294827907179807
    WAITING #12: nam = 'db file sequential read' ela = 57465 file #= 15 block #= blocks 59840995 = 1 obj #= tim 64560 = 1294827907237569
    WAITING #12: nam = 'db file sequential read' ela = file No. 20077 = 15 block #= blocks 11266833 = 1 obj #= tim 64560 = 1294827907257879
    WAITING #12: nam = 'db file sequential read' ela = 10642 file #= 15 block #= blocks 34506477 = 1 obj #= tim 64560 = 1294827907268867
    WAITING #12: nam = 'db file sequential read' ela = 5393 file #= 15 block #= blocks 20919972 = 1 obj #= tim 64560 = 1294827907275227
    WAITING #12: nam = 'db file sequential read' ela = 15308 file #= 15 block #= blocks 61602921 = 1 obj #= tim 64560 = 1294827907291203
    WAITING #12: nam = 'db file sequential read' ela = 11228 file #= 15 block #= blocks 34032720 = 1 obj #= tim 64560 = 1294827907303261
    WAITING #12: nam = 'db file sequential read' ela = 7885 file #= 15 block #= blocks 60785955 = 1 obj #= tim 64560 = 1294827907311867
    WAITING #12: nam = 'db file sequential read' ela = 6652 file #= 15 block #= blocks 19778448 = 1 obj #= tim 64560 = 1294827907319158
    WAITING #12: nam = 'db file sequential read' ela = 8735 file #= 15 block #= blocks 34634855 = 1 obj #= tim 64560 = 1294827907328770
    WAITING #12: nam = 'db file sequential read' ela = 14235 file #= 15 block #= blocks 61411940 = 1 obj #= tim 64560 = 1294827907343804
    WAITING #12: nam = 'db file sequential read' ela = 7173 file #= 15 block #= blocks 33350808 = 1 obj #= tim 64560 = 1294827907351214
    WAITING #12: nam = 'db file sequential read' ela = 8033 file #= 15 block #= blocks 60493866 = 1 obj #= tim 64560 = 1294827907359424
    WAITING #12: nam = 'db file sequential read' ela = 14654 file #= 15 block #= blocks 19004731 = 1 obj #= tim 64560 = 1294827907374257
    WAITING #12: nam = 'db file sequential read' ela = 6116 file #= 15 block #= blocks 34565376 = 1 obj #= tim 64560 = 1294827907380647
    WAITING #12: nam = 'db file sequential read' ela = 6203 file #= 15 block #= blocks 20920100 = 1 obj #= tim 64560 = 1294827907387054
    WAITING #12: nam = 'db file sequential read' ela = 50627 file #= 15 block #= blocks 61602985 = 1 obj #= tim 64560 = 1294827907437838
    WAITING #12: nam = 'db file sequential read' ela = 13752 file #= 15 block #= blocks 33351193 = 1 obj #= tim 64560 = 1294827907451875
    WAITING #12: nam = 'db file sequential read' ela = 6883 file #= 15 block #= blocks 58686059 = 1 obj #= tim 64560 = 1294827907459551
    WAITING #12: nam = 'db file sequential read' ela = file No. 13284 = 15 block #= blocks 19778511 = 1 obj #= tim 64560 = 1294827907473558
    WAITING #12: nam = 'db file sequential read' ela = 16678 file #= 15 block #= blocks 34226211 = 1 obj #= tim 64560 = 1294827907493010
    WAITING #12: nam = 'db file sequential read' ela = 9565 file #= 15 block #= blocks 61123267 = 1 obj #= tim 64560 = 1294827907507419
    WAITING #12: nam = 'db file sequential read' ela = 6893 file #= 15 block #= blocks 20920164 = 1 obj #= tim 64560 = 1294827907515073
    WAITING #12: nam = 'db file sequential read' ela = 9817 file #= 15 block #= blocks 61603049 = 1 obj #= tim 64560 = 1294827907525598
    WAITING #12: nam = 'db file sequential read' ela = 4691 file #= 15 block #= blocks 33351248 = 1 obj #= tim 64560 = 1294827907530960
    WAITING #12: nam = 'db file sequential read' ela = file No. 25983 = 15 block #= blocks 58351154 = 1 obj #= tim 64560 = 1294827907557661
    WAITING #12: nam = 'db file sequential read' ela = 7402 file #= 15 block #= blocks 5096358 = 1 obj #= tim 64560 = 1294827907565927
    WAITING #12: nam = 'db file sequential read' ela = 7964 file #= 15 block #= blocks 61603113 = 1 obj #= tim 64560 = 1294827907574570
    WAITING #12: nam = 'db file sequential read' ela = 32776 file #= 15 block #= blocks 33549538 = 1 obj #= tim 64560 = 1294827907608063
    WAITING #12: nam = 'db file sequential read' ela = 5674 file #= 15 block #= blocks 60493930 = 1 obj #= tim 64560 = 1294827907614596
    WAITING #12: nam = 'db file sequential read' ela = 9525 file #= 15 block #= blocks 61512488 = 1 obj #= tim 64560 = 1294827907625007
    WAITING #12: nam = 'db file sequential read' ela = 15729 file #= 15 block #= blocks 33549602 = 1 obj #= tim 64560 = 1294827907641538
    WAITING #12: nam = 'db file sequential read' ela = file No. 11510 = 15 block #= blocks 60902458 = 1 obj #= tim 64560 = 1294827907653819
    WAITING #12: nam = 'db file sequential read' ela = 26431 files #= 15 block #= blocks 59841058 = 1 obj #= tim 64560 = 1294827907680940
    WAITING #12: nam = 'db file sequential read' ela = 9196 file #= 15 block #= blocks 33350809 = 1 obj #= tim 64560 = 1294827907690434
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 60296291 = 1 obj #= tim 64560 = 1294827907698353
    WAITING #12: nam = 'db file sequential read' ela = 429 file #= 15 block #= blocks 61603177 = 1 obj #= tim 64560 = 1294827907698953
    WAITING #12: nam = 'db file sequential read' ela = 8459 file #= 15 block #= blocks 33351194 = 1 obj #= tim 64560 = 1294827907707695
    WAITING #12: nam = 'db file sequential read' ela = 25998 file #= 15 block #= blocks 49598412 = 1 obj #= tim 64560 = 1294827907733890

    2011-01-12 11:25:07.742
    WAITING #12: nam = 'db file sequential read' ela = 7988 file #= 15 block #= blocks 11357900 = 1 obj #= tim 64560 = 1294827907742683
    WAITING #12: nam = 'db file sequential read' ela = 10066 file #= 15 block #= blocks 61512552 = 1 obj #= tim 64560 = 1294827907753540
    WAITING #12: nam = 'db file sequential read' ela = 8400 file #= 15 block #= blocks 33775858 = 1 obj #= tim 64560 = 1294827907762668
    WAITING #12: nam = 'db file sequential read' ela = 11750 file #= 15 block #= blocks 60636761 = 1 obj #= tim 64560 = 1294827907774667
    WAITING #12: nam = 'db file sequential read' ela = 16933 file #= 15 block #= blocks 20533183 = 1 obj #= tim 64560 = 1294827907791839
    WAITING #12: nam = 'db file sequential read' ela = 8895 file #= 15 block #= blocks 61603241 = 1 obj #= tim 64560 = 1294827907801047
    WAITING #12: nam = 'db file sequential read' ela = file No. 12685 = 15 block #= blocks 33775922 = 1 obj #= tim 64560 = 1294827907813913
    WAITING #12: nam = 'db file sequential read' ela = file No. 12664 = 15 block #= blocks 60493994 = 1 obj #= tim 64560 = 1294827907827379
    WAITING #12: nam = 'db file sequential read' ela = 8271 file #= 15 block #= blocks 19372356 = 1 obj #= tim 64560 = 1294827907835881
    WAITING #12: nam = 'db file sequential read' ela = file No. 10825 = 15 block #= blocks 59338524 = 1 obj #= tim 64560 = 1294827907847439
    WAITING #12: nam = 'db file sequential read' ela = 13086 file #= 15 block #= blocks 49440992 = 1 obj #= tim 64793 = 1294827907862022
    WAITING #12: nam = 'db file sequential read' ela = file No. 16491 = 15 block #= blocks 32853984 = 1 obj #= tim 64793 = 1294827907879282
    WAITING #12: nam = 'db file sequential read' ela = 9349 file #= 15 block #= blocks 60133021 = 1 obj #= tim 64793 = 1294827907888849
    WAITING #12: nam = 'db file sequential read' ela = 5680 files #= 15 block #= blocks 20370585 = 1 obj #= tim 64793 = 1294827907895281
    WAITING #12: nam = 'db file sequential read' ela = 34021 file #= 15 block #= blocks 58183834 = 1 obj #= tim 64793 = 1294827907930014
    WAITING #12: nam = 'db file sequential read' ela = 8574 file #= 15 block #= blocks 32179028 = 1 obj #= tim 64793 = 1294827907938813
    WAITING #12: nam = 'db file sequential read' ela = file No. 10862 = 15 block #= blocks 49402735 = 1 obj #= tim 64793 = 1294827907949821
    WAITING #12: nam = 'db file sequential read' ela = 4501 file #= 15 block #= blocks 11270933 = 1 obj #= tim 64793 = 1294827907954533
    WAITING #12: nam = 'db file sequential read' ela = 9936 file #= 15 block #= blocks 61007523 = 1 obj #= tim 64793 = 1294827907964616
    WAITING #12: nam = 'db file sequential read' ela = 7631 file #= 15 block #= blocks 34399970 = 1 obj #= tim 64793 = 1294827907972457
    WAITING #12: nam = 'db file sequential read' ela = 6162 file #= 15 block #= blocks 60305187 = 1 obj #= tim 64793 = 1294827907978797
    WAITING #12: nam = 'db file sequential read' ela = 8555 file #= 15 block #= blocks 20912586 = 1 obj #= tim 64793 = 1294827907987532
    WAITING #12: nam = 'db file sequential read' ela = 9499 file #= 15 block #= blocks 61007587 = 1 obj #= tim 64793 = 1294827907997296
    WAITING #12: nam = 'db file sequential read' ela = 23690 file #= 15 block #= blocks 19769014 = 1 obj #= tim 64793 = 1294827908024105
    WAITING #12: nam = 'db file sequential read' ela = 7081 file #= 15 block #= blocks 61314072 = 1 obj #= tim 64793 = 1294827908031968
    WAITING #12: nam = 'db file sequential read' ela = 31727 file #= 15 block #= blocks 34026602 = 1 obj #= tim 64793 = 1294827908063914
    WAITING #12: nam = 'db file sequential read' ela = 4932 file #= 15 block #= blocks 60905313 = 1 obj #= tim 64793 = 1294827908069052
    WAITING #12: nam = 'db file sequential read' ela = 6616 file #= 15 block #= blocks 20912650 = 1 obj #= tim 64793 = 1294827908075835
    WAITING #12: nam = 'db file sequential read' ela = 8443 file #= 15 block #= blocks 33781968 = 1 obj #= tim 64793 = 1294827908084594
    WAITING #12: nam = 'db file sequential read' ela = 22291 file #= 15 block #= blocks 60641967 = 1 obj #= tim 64793 = 1294827908107052
    WAITING #12: nam = 'db file sequential read' ela = 6610 file #= 15 block #= blocks 18991774 = 1 obj #= tim 64793 = 1294827908113879
    WAITING #12: nam = 'db file sequential read' ela = 6493 file #= 15 block #= blocks 34622382 = 1 obj #= tim 64793 = 1294827908120535
    WAITING #12: nam = 'db file sequential read' ela = 5028 file #= 15 block #= blocks 20912714 = 1 obj #= tim 64793 = 1294827908125861
    WAITING #12: nam = 'db file sequential read' ela = file No. 11834 = 15 block #= blocks 61679845 = 1 obj #= tim 64793 = 1294827908137858
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 34498166 = 1 obj #= tim 64793 = 1294827908142305
    WAITING #12: nam = 'db file sequential read' ela = 19267 file #= 15 block #= blocks 60905377 = 1 obj #= tim 64793 = 1294827908161695
    WAITING #12: nam = 'db file sequential read' ela = file No. 14108 = 15 block #= blocks 19769078 = 1 obj #= tim 64793 = 1294827908176046
    WAITING #12: nam = 'db file sequential read' ela = 4128 file #= 15 block #= blocks 33781465 = 1 obj #= tim 64793 = 1294827908180396
    WAITING #12: nam = 'db file sequential read' ela = 9986 file #= 15 block #= blocks 61007651 = 1 obj #= tim 64793 = 1294827908190535
    WAITING #12: nam = 'db file sequential read' ela = 8907 file #= 15 block #= blocks 20912778 = 1 obj #= tim 64793 = 1294827908199614
    WAITING #12: nam = 'db file sequential read' ela = 12023 file #= 15 block #= blocks 34230838 = 1 obj #= tim 64793 = 1294827908211852
    WAITING #12: nam = 'db file sequential read' ela = 29837 file #= 15 block #= blocks 60905441 = 1 obj #= tim 64793 = 1294827908241853
    WAITING #12: nam = 'db file sequential read' ela = 5989 file #= 15 block #= blocks 60133085 = 1 obj #= tim 64793 = 1294827908248065
    WAITING #12: nam = 'db file sequential read' ela = 74172 file #= 15 block #= blocks 33357369 = 1 obj #= tim 64793 = 1294827908322391
    WAITING #12: nam = 'db file sequential read' ela = 5443 file #= 15 block #= blocks 60498917 = 1 obj #= tim 64793 = 1294827908328064
    WAITING #12: nam = 'db file sequential read' ela = 4645 file #= 15 block #= blocks 20912842 = 1 obj #= tim 64793 = 1294827908332912
    WAITING #12: nam = 'db file sequential read' ela = file No. 13595 = 15 block #= blocks 61679909 = 1 obj #= tim 64793 = 1294827908346618
    WAITING #12: nam = 'db file sequential read' ela = 9120 file #= 15 block #= blocks 58356376 = 1 obj #= tim 64793 = 1294827908355975
    WAITING #12: nam = 'db file sequential read' ela = 3186 file #= 15 block #= blocks 19385867 = 1 obj #= tim 64793 = 1294827908359374
    WAITING #12: nam = 'db file sequential read' ela = 5114 file #= 15 block #= blocks 61589533 = 1 obj #= tim 64793 = 1294827908364630
    WAITING #12: nam = 'db file sequential read' ela = 42263 file #= 15 block #= blocks 33356474 = 1 obj #= tim 64793 = 1294827908407045
    WAITING #12: nam = 'db file sequential read' ela = 10683 file #= 15 block #= blocks 58183898 = 1 obj #= tim 64793 = 1294827908417994
    WAITING #12: nam = 'db file sequential read' ela = file No. 10284 = 15 block #= blocks 20529486 = 1 obj #= tim 64793 = 1294827908429134
    WAITING #12: nam = 'db file sequential read' ela = file No. 12544 = 15 block #= blocks 60498981 = 1 obj #= tim 64793 = 1294827908441945
    WAITING #12: nam = 'db file sequential read' ela = 8311 file #= 15 block #= blocks 33191548 = 1 obj #= tim 64793 = 1294827908451011
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 59083610 = 1 obj #= tim 64793 = 1294827908455902
    WAITING #12: nam = 'db file sequential read' ela = 4653 file #= 15 block #= blocks 18991837 = 1 obj #= tim 64793 = 1294827908461264
    WAITING #12: nam = 'db file sequential read' ela = 4905 file #= 15 block #= blocks 34685472 = 1 obj #= tim 64793 = 1294827908466897
    WAITING #12: nam = 'db file sequential read' ela = file No. 12360 = 15 block #= blocks 61775403 = 1 obj #= tim 64793 = 1294827908480080
    WAITING #12: nam = 'db file sequential read' ela = 6956 file #= 15 block #= blocks 58921225 = 1 obj #= tim 64793 = 1294827908487704
    WAITING #12: nam = 'db file sequential read' ela = 6068 file #= 15 block #= blocks 19769142 = 1 obj #= tim 64793 = 1294827908494608
    WAITING #12: nam = 'db file sequential read' ela = 5249 file #= 15 block #= blocks 33781528 = 1 obj #= tim 64793 = 1294827908500666
    WAITING #12: nam = 'db file sequential read' ela = 6013 file #= 15 block #= blocks 60905505 = 1 obj #= tim 64793 = 1294827908507366
    WAITING #12: nam = 'db file sequential read' ela = 3014 file #= 15 block #= blocks 20912970 = 1 obj #= tim 64793 = 1294827908511019
    WAITING #12: nam = 'db file sequential read' ela = 3636 file #= 15 block #= blocks 33781591 = 1 obj #= tim 64793 = 1294827908515425
    WAITING #12: nam = 'db file sequential read' ela = file No. 12226 = 15 block #= blocks 58183962 = 1 obj #= tim 64793 = 1294827908528268
    WAITING #12: nam = 'db file sequential read' ela = 7635 file #= 15 block #= blocks 60499173 = 1 obj #= tim 64793 = 1294827908536613
    WAITING #12: nam = 'db file sequential read' ela = 7364 file #= 15 block #= blocks 11270996 = 1 obj #= tim 64793 = 1294827908544203
    WAITING #12: nam = 'db file sequential read' ela = 5452 file #= 15 block #= blocks 34622446 = 1 obj #= tim 64793 = 1294827908550475
    WAITING #12: nam = 'db file sequential read' ela = 9734 file #= 15 block #= blocks 20913034 = 1 obj #= tim 64793 = 1294827908561029
    WAITING #12: nam = 'db file sequential read' ela = 14077 file #= 15 block #= blocks 61679973 = 1 obj #= tim 64793 = 1294827908575440
    WAITING #12: nam = 'db file sequential read' ela = 9694 file #= 15 block #= blocks 34550681 = 1 obj #= tim 64793 = 1294827908585311
    WAITING #12: nam = 'db file sequential read' ela = 6753 file #= 15 block #= blocks 61007715 = 1 obj #= tim 64793 = 1294827908592228
    WAITING #12: nam = 'db file sequential read' ela = 12577 file #= 15 block #= blocks 19769206 = 1 obj #= tim 64793 = 1294827908604943
    WAITING #12: nam = 'db file sequential read' ela = file No. 609 = 15 block #= blocks 61589534 = 1 obj #= tim 64793 = 1294827908605735
    WAITING #12: nam = 'db file sequential read' ela = 6267 file #= 15 block #= blocks 33356538 = 1 obj #= tim 64793 = 1294827908612148
    WAITING #12: nam = 'db file sequential read' ela = 7876 file #= 15 block #= blocks 58184026 = 1 obj #= tim 64793 = 1294827908620164
    WAITING #12: nam = 'db file sequential read' ela = file No. 14058 = 15 block #= blocks 32767835 = 1 obj #= tim 80883 = 1294827908634546
    WAITING #12: nam = 'db file sequential read' ela = 9798 file #= 15 block #= blocks 58504373 = 1 obj #= tim 80883 = 1294827908644575
    WAITING #12: nam = 'db file sequential read' ela = 11081 file #= 15 block #= blocks 11118811 = 1 obj #= tim 80883 = 1294827908655908
    WAITING #12: nam = 'db file sequential read' ela = 6249 file #= 15 block #= blocks 58087798 = 1 obj #= tim 80883 = 1294827908662451
    WAITING #12: nam = 'db file sequential read' ela = 9513 file #= 15 block #= blocks 33331129 = 1 obj #= tim 80883 = 1294827908672904
    WAITING #12: nam = 'db file sequential read' ela = 4648 file #= 15 block #= blocks 60301818 = 1 obj #= tim 80883 = 1294827908677736
    WAITING #12: nam = 'db file sequential read' ela = 6147 file #= 15 block #= blocks 20523119 = 1 obj #= tim 80883 = 1294827908684075
    WAITING #12: nam = 'db file sequential read' ela = file No. 59531 = 15 block #= blocks 61016570 = 1 obj #= tim 80883 = 1294827908743795

    2011-01-12 11:25:08.752
    WAITING #12: nam = 'db file sequential read' ela = 8787 file #= 15 block #= blocks 33770842 = 1 obj #= tim 80883 = 1294827908752846
    WAITING #12: nam = 'db file sequential read' ela = 9858 file #= 15 block #= blocks 60895354 = 1 obj #= tim 80883 = 1294827908762960
    WAITING #12: nam = 'db file sequential read' ela = 11237 file #= 15 block #= blocks 19369506 = 1 obj #= tim 80883 = 1294827908775138
    WAITING #12: nam = 'db file sequential read' ela = 5838 file #= 15 block #= blocks 34229712 = 1 obj #= tim 80883 = 1294827908782100
    WAITING #12: nam = 'db file sequential read' ela = 6518 file #= 15 block #= blocks 61221772 = 1 obj #= tim 80883 = 1294827908789403
    WAITING #12: nam = 'db file sequential read' ela = 9946 file #= 15 block #= blocks 20523183 = 1 obj #= tim 80883 = 1294827908800089
    WAITING #12: nam = 'db file sequential read' ela = 16699 file #= 15 block #= blocks 61016634 = 1 obj #= tim 80883 = 1294827908817077
    WAITING #12: nam = 'db file sequential read' ela = file No. 15215 = 15 block #= blocks 33770900 = 1 obj #= tim 80883 = 1294827908832934
    WAITING #12: nam = 'db file sequential read' ela = 8403 file #= 15 block #= blocks 60895418 = 1 obj #= tim 80883 = 1294827908842317
    WAITING #12: nam = 'db file sequential read' ela = 8927 file #= 15 block #= blocks 18950791 = 1 obj #= tim 80883 = 1294827908852190
    WAITING #12: nam = 'db file sequential read' ela = 4382 files #= 15 block #= blocks 34493493 = 1 obj #= tim 80883 = 1294827908856821
    WAITING #12: nam = 'db file sequential read' ela = 9356 file #= 15 block #= blocks 61324964 = 1 obj #= tim 80883 = 1294827908866337
    WAITING #12: nam = 'db file sequential read' ela = 10575 file #= 15 block #= blocks 20883018 = 1 obj #= tim 80883 = 1294827908877102
    WAITING #12: nam = 'db file sequential read' ela = 16601 file #= 15 block #= blocks 60502307 = 1 obj #= tim 80883 = 1294827908893926
    WAITING #12: nam = 'db file sequential read' ela = 5236 file #= 15 block #= blocks 33331193 = 1 obj #= tim 80883 = 1294827908899387
    WAITING #12: nam = 'db file sequential read' ela = 9981 file #= 15 block #= blocks 59830076 = 1 obj #= tim 80883 = 1294827908910427
    WAITING #12: nam = 'db file sequential read' ela = 8100 file #= 15 block #= blocks 19767805 = 1 obj #= tim 80883 = 1294827908918751
    WAITING #12: nam = 'db file sequential read' ela = 12492 file #= 15 block #= blocks 67133332 = 1 obj #= tim 80883 = 1294827908931732
    WAITING #12: nam = 'db file sequential read' ela = 5876 file #= 15 block #= blocks 34229775 = 1 obj #= tim 80883 = 1294827908937859
    WAITING #12: nam = 'db file sequential read' ela = 8741 file #= 15 block #= blocks 61408244 = 1 obj #= tim 80883 = 1294827908948439
    WAITING #12: nam = 'db file sequential read' ela = 8477 file #= 15 block #= blocks 20523247 = 1 obj #= tim 80883 = 1294827908957099
    WAITING #12: nam = 'db file sequential read' ela = 7947 file #= 15 block #= blocks 61016698 = 1 obj #= tim 80883 = 1294827908965210
    WAITING #12: nam = 'db file sequential read' ela = 2384 file #= 15 block #= blocks 33331257 = 1 obj #= tim 80883 = 1294827908967773
    WAITING #12: nam = 'db file sequential read' ela = 3585 file #= 15 block #= blocks 59571985 = 1 obj #= tim 80883 = 1294827908971564
    WAITING #12: nam = 'db file sequential read' ela = 7753 file #= 15 block #= blocks 5099571 = 1 obj #= tim 80883 = 1294827908979647
    WAITING #12: nam = 'db file sequential read' ela = 8205 file #= 15 block #= blocks 61408308 = 1 obj #= tim 80883 = 1294827908988200
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 34229335 = 1 obj #= tim 80883 = 1294827908996129
    WAITING #12: nam = 'db file sequential read' ela = file No. 10942 = 15 block #= blocks 61325028 = 1 obj #= tim 80883 = 1294827909007244
    WAITING #12: nam = 'db file sequential read' ela = 6247 file #= 15 block #= blocks 20523311 = 1 obj #= tim 80883 = 1294827909013706
    WAITING #12: nam = 'db file sequential read' ela = file No. 16188 = 15 block #= blocks 60777362 = 1 obj #= tim 80883 = 1294827909030088
    WAITING #12: nam = 'db file sequential read' ela = file No. 16642 = 15 block #= blocks 33528224 = 1 obj #= tim 80883 = 1294827909046971
    WAITING #12: nam = 'db file sequential read' ela = file No. 10118 = 15 block #= blocks 60128498 = 1 obj #= tim 80883 = 1294827909057402
    WAITING #12: nam = 'db file sequential read' ela = file No. 10747 = 15 block #= blocks 802317 = 1 obj #= tim 64495 = 1294827909069165
    WAITING #12: nam = 'db file sequential read' ela = 4795 file number = 15 block #= blocks 33079541 = 1 obj #= tim 64560 = 1294827909074367
    WAITING #12: nam = 'db file sequential read' ela = 6822 file #= 15 block #= blocks 20913098 = 1 obj #= tim 64793 = 1294827909081436
    WAITING #12: nam = 'db file sequential read' ela = file No. 10932 = 15 block #= blocks 19369570 = 1 obj #= tim 80883 = 1294827909092607



    -->> AFTER:
    WAIT #23: nam='db file sequential read' ela= 16367 file#=15 block#=70434065 blocks=1 obj#=115059 tim=1295342220878947
    WAIT #23: nam='db file sequential read' ela= 1141 file#=15 block#=70434066 blocks=1 obj#=115059 tim=1295342220880549
    WAIT #23: nam='db file sequential read' ela= 456 file#=15 block#=70434067 blocks=1 obj#=115059 tim=1295342220881615
    WAIT #23: nam='db file sequential read' ela= 689 file#=15 block#=70434068 blocks=1 obj#=115059 tim=1295342220882617
    WAIT #23: nam='db file sequential read' ela= 495 file#=15 block#=70434069 blocks=1 obj#=115059 tim=1295342220883482
    WAIT #23: nam='db file sequential read' ela= 419 file#=15 block#=70434070 blocks=1 obj#=115059 tim=1295342220884195
    WAIT #23: nam='db file sequential read' ela= 149 file#=15 block#=70434071 blocks=1 obj#=115059 tim=1295342220884629
    WAIT #23: nam='db file sequential read' ela= 161 file#=15 block#=70434072 blocks=1 obj#=115059 tim=1295342220885085
    WAIT #23: nam='db file sequential read' ela= 146 file#=15 block#=70434073 blocks=1 obj#=115059 tim=1295342220885533
    WAIT #23: nam='db file sequential read' ela= 188 file#=15 block#=70434074 blocks=1 obj#=115059 tim=1295342220886026
    WAIT #23: nam='db file sequential read' ela= 181 file#=15 block#=70434075 blocks=1 obj#=115059 tim=1295342220886498
    WAIT #23: nam='db file sequential read' ela= 303 file#=15 block#=70434076 blocks=1 obj#=115059 tim=1295342220887082
    WAIT #23: nam='db file sequential read' ela= 550 file#=15 block#=70434077 blocks=1 obj#=115059 tim=1295342220887916
    WAIT #23: nam='db file sequential read' ela= 163 file#=15 block#=70434078 blocks=1 obj#=115059 tim=1295342220888402
    WAIT #23: nam='db file sequential read' ela= 200 file#=15 block#=70434079 blocks=1 obj#=115059 tim=1295342220888980
    WAIT #23: nam='db file sequential read' ela= 134 file#=15 block#=70434080 blocks=1 obj#=115059 tim=1295342220889409
    WAIT #23: nam='db file sequential read' ela= 157 file#=15 block#=70434081 blocks=1 obj#=115059 tim=1295342220889850
    WAIT #23: nam='db file sequential read' ela= 5112 file#=15 block#=70434540 blocks=1 obj#=115059 tim=1295342220895272
    WAIT #23: nam='db file sequential read' ela= 276 file#=15 block#=70434082 blocks=1 obj#=115059 tim=1295342220895640

    2011-01-18 10:17:00.898
    WAIT #23: nam='db file sequential read' ela= 2936 file#=15 block#=70434084 blocks=1 obj#=115059 tim=1295342220898921
    WAIT #23: nam='db file sequential read' ela= 1843 file#=15 block#=70434085 blocks=1 obj#=115059 tim=1295342220901233
    WAIT #23: nam='db file sequential read' ela= 452 file#=15 block#=70434086 blocks=1 obj#=115059 tim=1295342220902050
    WAIT #23: nam='db file sequential read' ela= 686 file#=15 block#=70434087 blocks=1 obj#=115059 tim=1295342220903031
    WAIT #23: nam='db file sequential read' ela= 1582 file#=15 block#=70434088 blocks=1 obj#=115059 tim=1295342220904933
    WAIT #23: nam='db file sequential read' ela= 179 file#=15 block#=70434089 blocks=1 obj#=115059 tim=1295342220905544
    WAIT #23: nam='db file sequential read' ela= 426 file#=15 block#=70434090 blocks=1 obj#=115059 tim=1295342220906303
    WAIT #23: nam='db file sequential read' ela= 138 file#=15 block#=70434091 blocks=1 obj#=115059 tim=1295342220906723
    WAIT #23: nam='db file sequential read' ela= 3004 file#=15 block#=70434092 blocks=1 obj#=115059 tim=1295342220910053
    WAIT #23: nam='db file sequential read' ela= 331 file#=15 block#=70434093 blocks=1 obj#=115059 tim=1295342220910765
    WAIT #23: nam='db file sequential read' ela= 148 file#=15 block#=70434094 blocks=1 obj#=115059 tim=1295342220911236
    WAIT #23: nam='db file sequential read' ela= 296 file#=15 block#=70434095 blocks=1 obj#=115059 tim=1295342220911836
    WAIT #23: nam='db file sequential read' ela= 441 file#=15 block#=70434096 blocks=1 obj#=115059 tim=1295342220912581
    WAIT #23: nam='db file sequential read' ela= 157 file#=15 block#=70434097 blocks=1 obj#=115059 tim=1295342220913038
    WAIT #23: nam='db file sequential read' ela= 281 file#=15 block#=70434098 blocks=1 obj#=115059 tim=1295342220913603
    WAIT #23: nam='db file sequential read' ela= 150 file#=15 block#=70434099 blocks=1 obj#=115059 tim=1295342220914048
    WAIT #23: nam='db file sequential read' ela= 143 file#=15 block#=70434100 blocks=1 obj#=115059 tim=1295342220914498
    WAIT #23: nam='db file sequential read' ela= 384 file#=15 block#=70434101 blocks=1 obj#=115059 tim=1295342220916907
    WAIT #23: nam='db file sequential read' ela= 164 file#=15 block#=70434102 blocks=1 obj#=115059 tim=1295342220917458
    WAIT #23: nam='db file sequential read' ela= 218 file#=15 block#=70434103 blocks=1 obj#=115059 tim=1295342220917962
    WAIT #23: nam='db file sequential read' ela= 450 file#=15 block#=70434104 blocks=1 obj#=115059 tim=1295342220918698
    WAIT #23: nam='db file sequential read' ela= 164 file#=15 block#=70434105 blocks=1 obj#=115059 tim=1295342220919159
    WAIT #23: nam='db file sequential read' ela= 136 file#=15 block#=70434106 blocks=1 obj#=115059 tim=1295342220919598
    WAIT #23: nam='db file sequential read' ela= 143 file#=15 block#=70434107 blocks=1 obj#=115059 tim=1295342220920041
    WAIT #23: nam='db file sequential read' ela= 3091 file#=15 block#=70434108 blocks=1 obj#=115059 tim=1295342220925409

    user12196647 wrote:
    Hemant, Jonathan - thanks for the comprehensive replies. To summarize:

    It's an 11.1 (11.1.0.7) database on 64-bit Linux. No compression is used anywhere, all blocks are 16k, and the tablespaces are BIGFILE tablespaces created with EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO (separate tablespaces for the tables and for the indexes). I did not rebuild the PK index; I dropped all the indexes and then recreated the PK (but that amounts to the same thing - a rebuild is still just a drop & create, isn't it?).
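    For reference, a tablespace created with those attributes would look roughly like this (the tablespace name and datafile path are made up, and this assumes db_block_size is already 16k):

    CREATE BIGFILE TABLESPACE data_ts
      DATAFILE '/u01/oradata/DB/data_ts01.dbf' SIZE 10G AUTOEXTEND ON
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE
      SEGMENT SPACE MANAGEMENT AUTO;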

    The traces were taken while the PL/SQL FORALL loop was actually inserting. Each FORALL iteration inserts 100 rows "at once" (we use FETCH BULK COLLECT LIMIT 100), but the whole loading process inserts a few million records (the FORALL loop is enclosed in an outer loop :)). I pasted that part of the traces - the wait events for one set of 100 "before" inserts and 100 "after" inserts. The wait events for the other inserts look exactly the same - always "random" blocks in the "before the index recreate" inserts, and almost ordered blocks in the "after the index recreate" inserts. The object ids were definitely the indexes.
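    A minimal sketch of the kind of load loop described above (the table, column and cursor names are made up for illustration):

    DECLARE
      CURSOR c_src IS
        SELECT x, y, load_date FROM staging_table ORDER BY x, y, load_date;
      TYPE t_rows IS TABLE OF c_src%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      LOOP
        -- Fetch the next 100 rows into a collection...
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 100;
        EXIT WHEN l_rows.COUNT = 0;
        -- ...and insert them with a single FORALL statement.
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO target_table (x, y, load_date)
          VALUES (l_rows(i).x, l_rows(i).y, l_rows(i).load_date);
      END LOOP;
      CLOSE c_src;
      COMMIT;
    END;
    /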

    I think Hemant's explanation is correct. The point is that my index structure is (X, Y, DATE), where the X and Y values are 99% repeating on each data load and DATE increases by one for each data load. Because I always insert in PK order, when loading a new DATE I visit every index leaf block, in order, much like a full index scan. And because I insert into every leaf block on every load, I get many index block splits, which caused my leaf blocks to become physically non-contiguous after a while. Rebuilding was my cure...

    You got your answer before I had time to finish building a model of your data set - but I think your description is fairly accurate.
    A few thoughts, though:

    WAIT #12: nam='db file sequential read' ela= 10747 file#=15 block#=802317 blocks=1 obj#=64495 tim=1294827909069165
    WAIT #12: nam='db file sequential read' ela= 4795 file#=15 block#=33079541 blocks=1 obj#=64560 tim=1294827909074367
    WAIT #12: nam='db file sequential read' ela= 6822 file#=15 block#=20913098 blocks=1 obj#=64793 tim=1294827909081436
    WAIT #12: nam='db file sequential read' ela= 10932 file#=15 block#=19369570 blocks=1 obj#=80883 tim=1294827909092607

    Unless you missed a bit when copying your trace file, the index with object id 64495 was not subject to disk reads the way the other indexes were. It would be nice to know why. There are two "obvious" possibilities - (a) it is very well buffered, or (b) it is an index in which almost all the entries are null, so it is rarely changed. Because it has the lowest object id of all the indexes it may be the primary key index (I guess that only because I tend to create the primary key of a table before I create any other index), and if that is correct then your wasted time was not on the primary key index, and perhaps it tends to be very well buffered because of the nature of the popular queries.
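    If you want to confirm which segment each obj# in the trace belongs to, a query along these lines (run with DBA privileges) should do it:

    SELECT object_id, object_name, object_type
    FROM   dba_objects
    WHERE  object_id IN (64495, 64560, 64793, 80883, 115059);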

    Changes in performance when inserting millions of rows tend to be non-linear as the number of indexes grows.

    Since you insert data in primary key order, you get the maximum caching benefit for insertions into the PK index. And since you insert a very large number of rows - of the order of 0.5% - 1% of the current rows, in light of your comment about "millions" - you are likely to insert two or three rows into each leaf block of the PK index (which, by the way, ought to compress well on the first two columns), allowing Oracle to optimise its work in several ways.

    But for the other indexes you are probably jumping around very randomly as you insert rows, and this leads to several different effects:


      you have to keep N times as many blocks in the buffer cache to get similar caching benefits
      each insertion into the non-PK index blocks is likely to be a single-row insert - which maximises the undo and redo overheads, so more undo and redo is written
      each insertion into an index block eventually requires the block to be written to disc - which means more I/O, which slows down the reads
      as you read blocks (and you have to read a lot of them) you may force Oracle to write out other index blocks that will need to be read again later

    It is perfectly possible that almost all of your performance gain came from dropping the three indexes, and only a relatively small fraction came from rebuilding the primary key.

    One final thought about block ordering. I think you are probably getting some benefit from (non-Oracle) readahead when the index blocks are physically ordered, so you have a trade-off between how often you rebuild to keep this benefit and finding a time when you can afford the resources to rebuild. If you want the best compromise: (a) don't forget the compression option - it looks appropriate, (b) consider the benefits of date-range partitioning - it looks very appropriate in your case, and (c) by varying the PCTFREE when you rebuild the index you can adjust the number of insert cycles that pass before the effects of leaf block splits have a significant impact on the randomness of the I/O.
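    As a sketch of options (a) and (c) - the index name is hypothetical - a rebuild along these lines combines prefix compression with extra free space in each leaf block:

    -- COMPRESS 2 de-duplicates the repeating (X, Y) prefix; a higher PCTFREE
    -- leaves room for more load cycles before 50/50 leaf block splits start.
    ALTER INDEX t_xy_date_ix REBUILD COMPRESS 2 PCTFREE 20;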

    I have an idea - if I changed the index structure to (DATE, X, Y) then I would always be inserting into the right-most leaf block, I'd get 90-10 splits instead of 50/50 splits, and the leaf blocks would stay physically contiguous, so no rebuilds would be needed - am I right?

    You can't change the index column order until you have checked how the index is used. If the most important and most frequent queries are of the form "select ... from table where colx = X and coly = Y and date_col between A and B", you have to keep the index that way round. (In fact, you could look at the possibility of using a range-partitioned index-organized table for the data.)
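    A rough sketch of that last suggestion (all names, column types and partition boundaries are hypothetical):

    CREATE TABLE t_iot (
      x         NUMBER,
      y         NUMBER,
      load_date DATE,
      payload   VARCHAR2(100),
      CONSTRAINT t_iot_pk PRIMARY KEY (x, y, load_date)
    )
    ORGANIZATION INDEX
    PARTITION BY RANGE (load_date) (
      PARTITION p2011_q1 VALUES LESS THAN (DATE '2011-04-01'),
      PARTITION p2011_q2 VALUES LESS THAN (DATE '2011-07-01')
    );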

    Regards
    Jonathan Lewis

  • create index privilege

    I was reading

    http://www.DBA-Oracle.com/concepts/grant_user_privileges.htm

    and it says there

    grant create index

    but I thought there was no such privilege for creating an index. I tried it and got:

    ERROR on line 1:
    ORA-00990: missing or not valid privilege

    So is the site wrong?

    Thank you

    Oracleguy,
    I'm not sure of the context of the cited site, but in the Oracle docs there is a short passage that says this:
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14231/indexes.htm#sthref2475
    >
    Creating indexes

    This section describes how to create indexes. To create an index in your own schema, at least one of the following conditions must be true:

    The table or cluster to be indexed is in your own schema.

    You have the INDEX privilege on the table to be indexed.

    You have the CREATE ANY INDEX system privilege. >
    The irony is that there is no privilege as such called a "create index" privilege. Once you have the CREATE TABLE privilege, you automatically get the ability to create indexes on your own tables. As others have mentioned, there is a CREATE ANY INDEX privilege, but that is something entirely different.
    To answer the question you asked: there is no such privilege, AFAIK.
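    A quick illustration (the user name is made up):

    GRANT CREATE INDEX TO scott;      -- fails: ORA-00990: missing or invalid privilege
    GRANT CREATE ANY INDEX TO scott;  -- works: lets scott create indexes in any schema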
    HTH
    Aman...

  • I have followed these steps several times, but it still won't rebuild the index.  Is there something getting in the way of this working?

    I have tried to rebuild the Spotlight index several times, but it didn't work. I followed the steps through System Preferences, but the index never gets rebuilt.  Is there another way to do it, or is there another problem preventing it from working?

    Do you mean the following steps:

    Rebuild the Spotlight index on your Mac - Apple Support

  • "The document is not valid. The index.xml file is missing.

    I suddenly can't open one of my Numbers documents. I get the message "The document 'name of the doc' is not valid. The index.xml file is missing."  I literally had the document open this morning without problem, closed it, tried to reopen it, and now I get this message... all in the same computer session. It didn't start after an update or anything like that; it happened right in the middle of a session.  It seems to affect all my Numbers documents - I can't open any of them now.  I am running OS X El Capitan Version 10.11 and Numbers '09, Version 2.1.

    Hi jg,

    The usual cause of this message is that the document has been opened in Numbers version 3.x, which converts it to a new file format that no longer uses the internal index.xml file required by Numbers '09.

    Do you have Numbers 3 installed on your machine? Has this file ever been opened in that application?

    Was the file saved to iCloud, or opened by the iOS version of Numbers or by Numbers for iCloud?

    The recommended "cure" is to open the file using Numbers 3, then Save As... or Export to Numbers '09 format. Afterwards, remember to quit Numbers (v3) - menu Numbers > Quit Numbers; clicking the red light closes the file but does not quit Numbers. Also, avoid opening Numbers files by double-clicking the file itself.

    Instead, start Numbers '09 (v2.3) and open the file from within the application.

    Kind regards

    Barry

  • Qosmio F20: Bad sound when using MSN Messenger with video

    Hi all

    I recently bought a Qosmio F20 laptop, which is currently connected to the Internet via a Netgear WiFi router/modem (DG834v2).
    I tried to use MSN Messenger v7.5 (7.5.0311 and 7.5.0324) with a Creative NX webcam to make a video call.
    But the sound of the person I'm talking to is so distorted that it is impossible to have a conversation (I tried with several of my friends, who all get good sound on their end).
    However, if I have an audio-only (no video) conversation, the sound is perfect.
    Despite my best efforts (updating drivers, reinstalling Messenger, allowing Messenger through the Windows firewall, etc.) I have had no luck finding the problem so far. If anyone has any suggestions I would be very grateful.

    Thanks in advance.
    François

    Hello

    You mentioned that the sound works properly without the video connection.
    In my opinion the problem is caused by the webcam; perhaps its software has a bad influence on the sound.
    It is not a Toshiba product, and Toshiba is not responsible for 3rd-party devices.
    I recommend you check the webcam manufacturer's site; maybe you will find a solution there.

Maybe you are looking for