Globally reclassifying Index markers as Cross-Ref markers

Is this possible? If so, how?

In FrameMaker, via MIF.

Open the MIF in a text editor.

Globally change every:

TO:

Save the MIF and re-open it in FrameMaker.

It might even work, but I've never tried it. It seems that real cross-references (the <> tags in the MIF) do not declare a marker type, so they can be updated automatically.

The usual problem is that you're happily marking your way through the text for indexing, then accidentally select an existing marker of another type. This silently resets Special > Marker > Marker Type: from [Index] to [whatever that marker's type is], for the next and all subsequent new markers.

I normally fix this by going back to the last correct Index marker, then searching forward through the markers and fixing them one at a time.

Tags: Adobe FrameMaker

Similar Questions

  • INDD 6.0.6 crashes when I try to delete a cross-reference in the index

    See also http://forums.Adobe.com/message/4263687 for a discussion of the same problem. I'm asking here to start a separate discussion of the same issue as it affects someone else (me).

    I am running INDD 6.0.6 on OS X 10.6.8 on a MacBook Pro (2.53 GHz Intel Core 2 Duo). I have an .indd document that is part of a 50-page book. The document includes an index, originally imported from MS Word (from another user, version unknown). I'm reworking the index, editing and reorganizing it heavily. I recently discovered that the index contains a cross-reference without a target. To be clear, I don't mean a cross-reference in the text of the document: it is an index entry, created with the New Cross-reference... menu item in the Index panel. There is no target, because when you look at it in the cross-reference options, there is a first-level topic, but the Referenced field (below the Type, See also) is empty.

    I can't delete this entry. When I try to delete it, INDD crashes. When INDD recovers, it warns that the document may be damaged, and if I try to delete the entry again, INDD crashes again. If I try to change the Referenced field, INDD accepts the edit but ignores it (the dialog closes, but the index remains unchanged).

    My current workaround is to change the topic to ZZZ and add a new index entry (ZZZ: corrupt entry: delete this before shipping), but that is unsatisfactory. What can I do to remove the corrupted index cross-reference?

    The bigger problem is that even if you could get someone at Adobe to understand what the problem is and confirm that it is, in fact, a bug, there's nothing they can do to help you. There won't be any more bug fixes for CS4. My recollection is that there were some indexing bugs in the first release of CS4 and at least some of them were fixed, so you should make sure the patches are installed...

  • Using cross-references in the index?

    I was wondering if there are any suggestions out there for using cross-references in the index.

    I don't think the 'See also' option is what I'm looking for; what I want is to create index entries with a cross-reference next to them.

    I tried this by creating an index entry and typing (see "other topic name") next to it, but when I compiled the project this index entry was not included, because it did not have a topic assigned to it.

    Is it possible to create a reference-only index entry without a topic assignment? Or are there other workarounds you use?

    Index cross-references are designed to let you have keywords in the index that, when clicked, redirect the user to another keyword. For example, I use them for acronyms, to redirect users to the long form. Are the cross-reference targets you want within the project? If so, could you not use index cross-references, by indexing the target topics the same way? If the cross-reference you want points outside the project, the index is not the way I would do it - why not use a link in a topic to a URL, file, etc.? You might be able to do what you want with a redirect topic, but I haven't looked into that. Come back with more details if I'm off the mark.

  • Cross-reference marker text differs from the text on the page

    The text of a cross-reference marker is different from the text written on the page. For example, I have a numbered heading written as "3.1 Cats and Dogs", but the cross-reference marker text at that heading is "55227: Title 1: 2.1 Pigs and Horses". The cross-reference looks and works fine on the page, but I can't understand why the marker text is different, or why it isn't updated to match the text on the page.

    I use Windows XP and FM9. Thanks in advance.

    Ron,

    The target of a cross-reference marker must have a unique string that differentiates it from every other cross-reference marker in the file... that is how Frame knows where to point, so it can maintain the link and make updates. As you have probably guessed, this string is a concatenation of a (I think) random integer, the target's paragraph format, and the target's paragraph text at the time the xref was created. With all of those ingredients, it is easy to achieve the goal of a unique string.

    It might seem appropriate for this text to be updated during an xref update if something changes at the target. However, this would create two problems:

    - All xrefs that are not updated at the moment the marker is modified would be broken. For example, suppose you have xrefs in file A pointing at a target and you run an update, which modifies the target marker because the target has changed. Any xrefs in file B (or any other file, for that matter) would become broken, because they are still configured to look for the previous marker string.

    - An xref update would then always require saving the changes to the target file, which is an inconvenience at best. For example, if you ran a book-wide xref update and target markers were changed, you'd have to make sure the affected files were saved afterwards, otherwise your xrefs would turn up broken later.

    So, in summary, the marker text is just a unique identifier that is defined when the xref is created and necessarily must stay the same for its entire life, unless you want to go around fixing broken xrefs all the time. Moral of the story... don't put swear words or slanders of your boss into your headings as temporary jokes if you ever create xrefs to them.

    Russ

  • Cross-references in PDF output from FrameMaker: how to center the pages?

    I think I understand fairly well how cross-references work in FrameMaker, but one thing bothers me: when you generate a PDF from a FrameMaker file and click a cross-reference, the resulting page view puts the cross-reference marker at the top of the page.

    It's very annoying and confusing when, say, the reference points to a figure caption that appears at the bottom of a page. The page view that results from clicking such a reference in the PDF is then centered on the text of the _following_ page, with the text of the figure caption barely visible at the top of the view.

    My question: is it possible to build cross-references so that the page view shown when you click them in a PDF is centered on the page that contains the cross-referenced material?

    Thank you very much.

    JMW

    Not AFAIK in FrameMaker.

    But in Acrobat, set the initial view to Single Page instead of Continuous.

    File > Properties > Initial View.

    Two-Up (Facing) or Two-Up (Cover Page) would also work.

  • Erratic behavior: book, primary text frames, and cross-references

    I'm on InDesign 2014.2. Mac.

    I'm working on a fairly large book: about 14 chapters/sections, about 400 pages in total. There are also hundreds of cross-references, almost all of them to text anchors.

    The design of the book is as simple as possible. There are no graphics, only text frames.

    I use a single master page, with facing pages and a primary text frame. I used it throughout the production of the book, and the primary text flow worked well, chapter after chapter. Pages were created automatically, and text flowed into them automatically.

    I may need to change the page size of the book, and I also have a lot of editing to do that will shorten many of the chapters. I wanted to make sure all pages use the primary text flow so that this goes as smoothly as possible.

    To my dismay, I discovered that although I used primary text flows consistently throughout the book, all but three of my chapters have inexplicably lost their primary text flow. All the frames in those chapters have been converted into ordinary text frames. As I edit the book, this is playing havoc with pagination and cross-references.

    I tried to restore the primary text flow in one of the first chapters. In theory, it's easy to do:

    • Move the first page's frame off to the side
    • Delete all pages except the first page
    • Command-Shift-click the frame on the first page to restore the primary text flow
    • Copy and paste the contents of the old frame into the restored primary flow.


    When I do this, the pages reflow properly, and I'm back to having a chapter with a primary text frame on each page. Unfortunately, this breaks all cross-references that point into the newly reflowed chapter, and all forward references in this chapter.


    I tried everything I could think of: deleting only the frames and not the pages, trying with the book closed. Nothing works. I tried the Update Cross-References command in the book menu. Nothing.


    It seems impossible to reflow the text into a new primary text frame from the master without completely breaking the cross-references.

    I finally found something that seems to work:

    • Create a new page, based on the master, at the end of the document.
    • Go to the last page of the chapter's story and thread the last story frame to the primary text frame on the page you just created.
    • In the Pages palette, delete all pages in the document except the page you just created.

    All the text reflows into the master's primary text frames, and the cross-references (both into and out of the chapter) seem to stay intact.

  • Cross-references and pulling exactly the text you need

    Hello, my question today concerns cross-references.

    My problem with the cross-reference system is that, in its present form, it can be quite rigid. Essentially, I need only a small amount of text to be pulled by the cross-reference, and that text isn't in any fixed place within the source paragraph.

    My first idea was to build a cross-reference format that would pull ONLY the part of the paragraph tagged with a character style called (in my case) CrossRefNo, which is just an empty character style created purely for this purpose. However, when you use the

    <cs name=""></cs> building block, its definition makes it plain enough that it does not in fact extract text that is in a certain character style, but rather applies a character style to whatever text the format produces (which seems a little redundant next to the button below it, which also lets you apply a character style to the reference). So if you use only this building block and don't type anything between the tags, it produces no actual text.

    What I'm looking for is essentially more of the building blocks used to construct the cross-reference formats. Is the set of roughly 10 items already provided all that is available, or is it really that restrictive? Is there anything like <foundText characterStyle="CrossRefNo"></foundText> that would return the text within the paragraph that is formatted with that character style?

    Thank you! And please let me know if this belongs in a different forum.

    chughes133 wrote:

    My problem with the cross-reference system is that, in its present form, it can be quite rigid. Essentially, I need only a small amount of text to be pulled by the cross-reference, and that text isn't in any fixed place within the source paragraph. [...] Is there a building block that extracts the text within the source paragraph that is formatted with a given character style?

    You are right that this is impossible with InDesign CS4 cross-references. However, the commercial Cross-References plug-in from dtptools.com can extract only the text that is tagged with a particular character style in the source paragraph.

    HTH

    Kind regards

    Peter

    _______________________

    Peter Gold

    KnowHow ProServices

  • Global partitioned index on a range-partitioned table, but the partitioned index is not used

    Hello:

    I created a partitioned index on a partitioned table, and the partitioned index is not being used.

    create table table_range (
      CUST_FIRST_NAME    VARCHAR2(20),
      CUST_GENDER        CHAR(1),
      CUST_CITY          VARCHAR2(30),
      COUNTRY_ISO_CODE   CHAR(2),
      COUNTRY_NAME       VARCHAR2(40),
      COUNTRY_SUBREGION  VARCHAR2(30),
      PROD_ID            NUMBER NOT NULL,
      CUST_ID            NUMBER NOT NULL,
      TIME_ID            DATE NOT NULL,
      CHANNEL_ID         NUMBER NOT NULL,
      PROMO_ID           NUMBER NOT NULL,
      QUANTITY_SOLD      NUMBER(10,2) NOT NULL,
      AMOUNT_SOLD        NUMBER(10,2) NOT NULL
    )
    partition by range (time_id)
    (
      partition p1 values less than (to_date('2001/01/01','YYYY/MM/DD')) tablespace u01,
      partition p2 values less than (to_date('2002/01/01','YYYY/MM/DD')) tablespace u02
    );

    create index ind_table_range on table_range (prod_id)
    global partition by range (prod_id)
    (
      partition p1 values less than (100),
      partition p2 values less than (maxvalue)
    );

    SQL> select table_name, partition_name, subpartition_count, high_value, num_rows
         from user_tab_partitions;

    TABLE_NAME   PARTITION_NAME  SUBPARTITION_COUNT  HIGH_VALUE                                                                         NUM_ROWS
    -----------  --------------  ------------------  ---------------------------------------------------------------------------------  --------
    TABLE_RANGE  P2                               0  TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA      259418
    TABLE_RANGE  P1                               0  TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA      659425

    SQL> select index_name, partition_name, high_value, num_rows
         from user_ind_partitions;

    INDEX_NAME       PARTITION_NAME  HIGH_VALUE  NUM_ROWS
    ---------------  --------------  ----------  --------
    IND_TABLE_RANGE  P1              100           479520
    IND_TABLE_RANGE  P2              MAXVALUE      439323

    SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE_RANGE');

    SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE_RANGE', GRANULARITY => 'PARTITION');

    SQL> EXECUTE DBMS_STATS.GATHER_INDEX_STATS(USER, 'IND_TABLE_RANGE');

    SQL> EXECUTE DBMS_STATS.GATHER_INDEX_STATS(USER, 'IND_TABLE_RANGE', GRANULARITY => 'PARTITION');

    SQL> set autotrace traceonly

    SQL> alter system flush shared_pool;

    SQL> alter system flush buffer_cache;

    SQL> select * from table_range
         where prod_id = 127;

    ---------------------------------------------------------------------------------------------------
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    ---------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |             | 16469 |  1334K|  3579   (1)| 00:00:43 |       |       |
    |   1 |  PARTITION RANGE ALL |             | 16469 |  1334K|  3579   (1)| 00:00:43 |     1 |     2 |
    |*  2 |   TABLE ACCESS FULL  | TABLE_RANGE | 16469 |  1334K|  3579   (1)| 00:00:43 |     1 |     2 |
    ---------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - filter("PROD_ID"=127)

    Statistics
    ----------------------------------------------------------
            320  recursive calls
              2  db block gets
          13352  consistent gets
          11820  physical reads
              0  redo size
         855198  bytes sent via SQL*Net to client
          12135  bytes received via SQL*Net from client
           1067  SQL*Net roundtrips to/from client
             61  sorts (memory)
              0  sorts (disk)
          15984  rows processed

    In one sentence you say it 'does not work', and then you go on to paste plans that seem to show that it 'works'.

    What gives?

    In fact, if you look at the plans - Oracle thinks you have 16K rows in the table and that it will get back 12K rows for your select statement. In that case, Oracle is picking the right plan - full-scanning 16K rows is a lot less work than scanning 12K index entries followed by 12K lookups by rowid.
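    If you want to confirm whether the global partitioned index can be used at all, one option (a sketch using the object names from the post, not part of the original reply) is to force it with a hint and compare the plans; pruning of the index partitions then shows up in the Pstart/Pstop columns:

        SQL> explain plan for
             select /*+ index(t ind_table_range) */ *
             from   table_range t
             where  prod_id = 127;

        SQL> select * from table(dbms_xplan.display);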

  • How to find whether an index is local or global in the Oracle data dictionary

    Is there a way to know from the Oracle data dictionary whether a partitioned index is global or local?

    select index_name, locality from all_part_indexes
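    A couple of related dictionary queries (a sketch, not part of the original answer) also show whether an index is partitioned at all, and how it is partitioned:

        select index_name, partitioned
        from   user_indexes;

        select index_name, partitioning_type, locality
        from   user_part_indexes;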
    
  • Associative arrays... items in the index

    I can control the index values of an associative array like this...
     DECLARE
       TYPE aat_id IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
       aa_ids aat_id;
     BEGIN
       aa_ids(1) := 3;
       aa_ids(2) := 8;
       aa_ids(3) := 10;

       aa_ids.delete(2);                     -- remove the element at index 2

       dbms_output.put_line(aa_ids.count);   -- prints 2
     END;
     /
    How can I control the index in a bulk-collect statement like the following... so that I can delete records using that index?
     SELECT decode(mod(rownum,2),1,1,2), object_name BULK COLLECT INTO aa_recs
         FROM   all_objects WHERE  ROWNUM <= 6;
    Thank you
    HESH.

    HESH wrote:

    but I have to work with the collected values; I want to delete values in bulk without looping through the list - is there a trick to do this?

    By not using PL/SQL and PL/SQL arrays at all - just plain native SQL.

    Rather than bulk collecting into an associative array in order to use the values in later SQL operations, I would store the collected values in a GTT (global temporary table) instead. A GTT can be indexed - and can thus provide much better access to the data than an associative array. It performs better. And it can be used directly from native SQL.
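    A minimal sketch of that GTT approach (the table and column names here are placeholders, not taken from the thread):

        -- create once; rows vanish automatically at commit
        create global temporary table gtt_ids (
          id  number primary key
        ) on commit delete rows;

        -- stage the values with plain SQL instead of BULK COLLECT
        insert into gtt_ids (id)
        select object_id from all_objects where rownum <= 6;

        -- set-based "delete by key": no PL/SQL loop needed
        delete from gtt_ids where mod(id, 2) = 0;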

    The best place for data is in the database, not in the PL/SQL layer. That means in SQL tables, not in PL/SQL tables or collections.

    There are very few situations in PL/SQL that require associative arrays. So few, in fact, that the majority of PL/SQL code using associative arrays does so badly, IMO. And in what you've posted so far, I do not see a specific requirement for associative arrays.

    So make sure you are using the correct feature - and make sure it is also put to good use in your code.

    PS. HESH is a strange name. I'm used to HESH meaning High Explosive Squash Head (used for defeating armour plating). :-)

  • How to find and replace text in the index entries?

    Hello

    Recently, I copied a manual to create a new one for another product. After a global search-and-replace of the product names and other text, I needed to replace a specific word that appears in the index entries.

    I noticed the Find dialog would let me search marker text, but I had to replace the word by hand - cut and paste the new word, click Edit Marker to save it, and then find the next occurrence.

    This is with FrameMaker 7.2. Does anyone know a faster way to do this?

    Yours,

    Michael F.

    =========

    Peter,

    Thanks for the lead.

    I downloaded the zip file and all it contains is a DLL file, but no instructions.

    What should I do to make it work and how do I know that it works?

    Yours,

    Michael F

    ========

  • Strange - inserts slow at first, then fast after dropping and recreating the index

    Hello

    I have a table with more than 1,250,000,000 rows on Oracle 11.1.0.7, Linux. It had 4 global, non-partitioned indexes. Inserts into this table were very slow - lots of db file sequential reads, each taking 0.009 seconds on average (from tkprof) - not terrible on its own, but the overall performance was poor - so I dropped & re-created the primary key index (3 columns in this index) and permanently dropped the other 3 indexes. As a result, the total number of db file sequential reads decreased by about a factor of 4 (I was expecting that - there is now only 1 index, not 4), but not only that - the average db file sequential read time fell to just 0.0014 seconds!

    Investigating further, I found from the traces that BEFORE the drop & recreate, each db file sequential read was hitting completely different ("random") blocks, while AFTER the drop & recreate, the blocks accessed by db file sequential reads are almost sequentially ordered (which lets the storage array cache ahead, and I think that's why I get 0.0014 instead of 0.009)! My question is - HOW did this happen? Why did the index rebuild help so much? Was the index fragmented? And did the PCTFREE of 10%, which I recreated the index with, perhaps help, so that no index blocks are splitting now (though they will in the future)?

    Important notes - the result set that I insert is, and always has been, ordered by the table's PK index columns. The FILESYSTEMIO_OPTIONS parameter is set to SETALL, so (I presume) there is no OS caching speeding up my reads, because I have direct I/O.
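    For context, here is a minimal sketch of the kind of ordered bulk-insert loop described in this thread (the cursor, table and column names are assumptions, not the poster's actual code):

        declare
          cursor c is
            select colx, coly, date_col, payload
            from   staging_table
            order  by colx, coly, date_col;            -- ordered by the target's PK columns
          type t_rows is table of c%rowtype index by pls_integer;
          l_rows t_rows;
        begin
          open c;
          loop
            fetch c bulk collect into l_rows limit 100;  -- 100 rows "at once"
            exit when l_rows.count = 0;
            forall i in 1 .. l_rows.count
              -- big_table is assumed to have the same column layout as the cursor
              insert into big_table values l_rows(i);
          end loop;
          close c;
          commit;
        end;
        /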

    Here is an excerpt from the trace file (the waits for a single insert operation):

    -->> BEFORE:
    WAITING #12: nam = 'db file sequential read' ela = 35089 file #= 15 block #= blocks 20534014 = 1 obj #= tim 64560 = 1294827907110090
    WAITING #12: nam = 'db file sequential read' ela = 6434 file #= 15 block #= blocks 61512424 = 1 obj #= tim 64560 = 1294827907116799
    WAITING #12: nam = 'db file sequential read' ela = 7961 file #= 15 block #= blocks 33775666 = 1 obj #= tim 64560 = 1294827907124874
    WAITING #12: nam = 'db file sequential read' ela = 16681 file #= 15 block #= blocks 60785827 = 1 obj #= tim 64560 = 1294827907143821
    WAITING #12: nam = 'db file sequential read' ela = 2380 file #= 15 block #= blocks 60785891 = 1 obj #= tim 64560 = 1294827907147000
    WAITING #12: nam = 'db file sequential read' ela = 4219 file #= 15 block #= blocks 33775730 = 1 obj #= tim 64560 = 1294827907151553
    WAITING #12: nam = 'db file sequential read' ela = 7218 file #= 15 block #= blocks 58351090 = 1 obj #= tim 64560 = 1294827907158922
    WAITING #12: nam = 'db file sequential read' ela = 6140 file #= 15 block #= blocks 20919908 = 1 obj #= tim 64560 = 1294827907165194
    WAITING #12: nam = ela 'db file sequential read' = 542 file #= 15 block #= blocks 60637720 = 1 obj #= tim 64560 = 1294827907165975
    WAITING #12: nam = 'db file sequential read' ela = 13736 file #= 15 block #= blocks 33350753 = 1 obj #= tim 64560 = 1294827907179807
    WAITING #12: nam = 'db file sequential read' ela = 57465 file #= 15 block #= blocks 59840995 = 1 obj #= tim 64560 = 1294827907237569
    WAITING #12: nam = 'db file sequential read' ela = file No. 20077 = 15 block #= blocks 11266833 = 1 obj #= tim 64560 = 1294827907257879
    WAITING #12: nam = 'db file sequential read' ela = 10642 file #= 15 block #= blocks 34506477 = 1 obj #= tim 64560 = 1294827907268867
    WAITING #12: nam = 'db file sequential read' ela = 5393 file #= 15 block #= blocks 20919972 = 1 obj #= tim 64560 = 1294827907275227
    WAITING #12: nam = 'db file sequential read' ela = 15308 file #= 15 block #= blocks 61602921 = 1 obj #= tim 64560 = 1294827907291203
    WAITING #12: nam = 'db file sequential read' ela = 11228 file #= 15 block #= blocks 34032720 = 1 obj #= tim 64560 = 1294827907303261
    WAITING #12: nam = 'db file sequential read' ela = 7885 file #= 15 block #= blocks 60785955 = 1 obj #= tim 64560 = 1294827907311867
    WAITING #12: nam = 'db file sequential read' ela = 6652 file #= 15 block #= blocks 19778448 = 1 obj #= tim 64560 = 1294827907319158
    WAITING #12: nam = 'db file sequential read' ela = 8735 file #= 15 block #= blocks 34634855 = 1 obj #= tim 64560 = 1294827907328770
    WAITING #12: nam = 'db file sequential read' ela = 14235 file #= 15 block #= blocks 61411940 = 1 obj #= tim 64560 = 1294827907343804
    WAITING #12: nam = 'db file sequential read' ela = 7173 file #= 15 block #= blocks 33350808 = 1 obj #= tim 64560 = 1294827907351214
    WAITING #12: nam = 'db file sequential read' ela = 8033 file #= 15 block #= blocks 60493866 = 1 obj #= tim 64560 = 1294827907359424
    WAITING #12: nam = 'db file sequential read' ela = 14654 file #= 15 block #= blocks 19004731 = 1 obj #= tim 64560 = 1294827907374257
    WAITING #12: nam = 'db file sequential read' ela = 6116 file #= 15 block #= blocks 34565376 = 1 obj #= tim 64560 = 1294827907380647
    WAITING #12: nam = 'db file sequential read' ela = 6203 file #= 15 block #= blocks 20920100 = 1 obj #= tim 64560 = 1294827907387054
    WAITING #12: nam = 'db file sequential read' ela = 50627 file #= 15 block #= blocks 61602985 = 1 obj #= tim 64560 = 1294827907437838
    WAITING #12: nam = 'db file sequential read' ela = 13752 file #= 15 block #= blocks 33351193 = 1 obj #= tim 64560 = 1294827907451875
    WAITING #12: nam = 'db file sequential read' ela = 6883 file #= 15 block #= blocks 58686059 = 1 obj #= tim 64560 = 1294827907459551
    WAITING #12: nam = 'db file sequential read' ela = file No. 13284 = 15 block #= blocks 19778511 = 1 obj #= tim 64560 = 1294827907473558
    WAITING #12: nam = 'db file sequential read' ela = 16678 file #= 15 block #= blocks 34226211 = 1 obj #= tim 64560 = 1294827907493010
    WAITING #12: nam = 'db file sequential read' ela = 9565 file #= 15 block #= blocks 61123267 = 1 obj #= tim 64560 = 1294827907507419
    WAITING #12: nam = 'db file sequential read' ela = 6893 file #= 15 block #= blocks 20920164 = 1 obj #= tim 64560 = 1294827907515073
    WAITING #12: nam = 'db file sequential read' ela = 9817 file #= 15 block #= blocks 61603049 = 1 obj #= tim 64560 = 1294827907525598
    WAITING #12: nam = 'db file sequential read' ela = 4691 file #= 15 block #= blocks 33351248 = 1 obj #= tim 64560 = 1294827907530960
    WAITING #12: nam = 'db file sequential read' ela = file No. 25983 = 15 block #= blocks 58351154 = 1 obj #= tim 64560 = 1294827907557661
    WAITING #12: nam = 'db file sequential read' ela = 7402 file #= 15 block #= blocks 5096358 = 1 obj #= tim 64560 = 1294827907565927
    WAITING #12: nam = 'db file sequential read' ela = 7964 file #= 15 block #= blocks 61603113 = 1 obj #= tim 64560 = 1294827907574570
    WAITING #12: nam = 'db file sequential read' ela = 32776 file #= 15 block #= blocks 33549538 = 1 obj #= tim 64560 = 1294827907608063
    WAITING #12: nam = 'db file sequential read' ela = 5674 file #= 15 block #= blocks 60493930 = 1 obj #= tim 64560 = 1294827907614596
    WAITING #12: nam = 'db file sequential read' ela = 9525 file #= 15 block #= blocks 61512488 = 1 obj #= tim 64560 = 1294827907625007
    WAITING #12: nam = 'db file sequential read' ela = 15729 file #= 15 block #= blocks 33549602 = 1 obj #= tim 64560 = 1294827907641538
    WAITING #12: nam = 'db file sequential read' ela = file No. 11510 = 15 block #= blocks 60902458 = 1 obj #= tim 64560 = 1294827907653819
    WAITING #12: nam = 'db file sequential read' ela = 26431 files #= 15 block #= blocks 59841058 = 1 obj #= tim 64560 = 1294827907680940
    WAITING #12: nam = 'db file sequential read' ela = 9196 file #= 15 block #= blocks 33350809 = 1 obj #= tim 64560 = 1294827907690434
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 60296291 = 1 obj #= tim 64560 = 1294827907698353
    WAITING #12: nam = 'db file sequential read' ela = 429 file #= 15 block #= blocks 61603177 = 1 obj #= tim 64560 = 1294827907698953
    WAITING #12: nam = 'db file sequential read' ela = 8459 file #= 15 block #= blocks 33351194 = 1 obj #= tim 64560 = 1294827907707695
    WAITING #12: nam = 'db file sequential read' ela = 25998 file #= 15 block #= blocks 49598412 = 1 obj #= tim 64560 = 1294827907733890

    2011-01-12 11:25:07.742
    WAITING #12: nam = 'db file sequential read' ela = 7988 file #= 15 block #= blocks 11357900 = 1 obj #= tim 64560 = 1294827907742683
    WAITING #12: nam = 'db file sequential read' ela = 10066 file #= 15 block #= blocks 61512552 = 1 obj #= tim 64560 = 1294827907753540
    WAITING #12: nam = 'db file sequential read' ela = 8400 file #= 15 block #= blocks 33775858 = 1 obj #= tim 64560 = 1294827907762668
    WAITING #12: nam = 'db file sequential read' ela = 11750 file #= 15 block #= blocks 60636761 = 1 obj #= tim 64560 = 1294827907774667
    WAITING #12: nam = 'db file sequential read' ela = 16933 file #= 15 block #= blocks 20533183 = 1 obj #= tim 64560 = 1294827907791839
    WAITING #12: nam = 'db file sequential read' ela = 8895 file #= 15 block #= blocks 61603241 = 1 obj #= tim 64560 = 1294827907801047
    WAITING #12: nam = 'db file sequential read' ela = file No. 12685 = 15 block #= blocks 33775922 = 1 obj #= tim 64560 = 1294827907813913
    WAITING #12: nam = 'db file sequential read' ela = file No. 12664 = 15 block #= blocks 60493994 = 1 obj #= tim 64560 = 1294827907827379
    WAITING #12: nam = 'db file sequential read' ela = 8271 file #= 15 block #= blocks 19372356 = 1 obj #= tim 64560 = 1294827907835881
    WAITING #12: nam = 'db file sequential read' ela = file No. 10825 = 15 block #= blocks 59338524 = 1 obj #= tim 64560 = 1294827907847439
    WAITING #12: nam = 'db file sequential read' ela = 13086 file #= 15 block #= blocks 49440992 = 1 obj #= tim 64793 = 1294827907862022
    WAITING #12: nam = 'db file sequential read' ela = file No. 16491 = 15 block #= blocks 32853984 = 1 obj #= tim 64793 = 1294827907879282
    WAITING #12: nam = 'db file sequential read' ela = 9349 file #= 15 block #= blocks 60133021 = 1 obj #= tim 64793 = 1294827907888849
    WAITING #12: nam = 'db file sequential read' ela = 5680 files #= 15 block #= blocks 20370585 = 1 obj #= tim 64793 = 1294827907895281
    WAITING #12: nam = 'db file sequential read' ela = 34021 file #= 15 block #= blocks 58183834 = 1 obj #= tim 64793 = 1294827907930014
    WAITING #12: nam = 'db file sequential read' ela = 8574 file #= 15 block #= blocks 32179028 = 1 obj #= tim 64793 = 1294827907938813
    WAITING #12: nam = 'db file sequential read' ela = file No. 10862 = 15 block #= blocks 49402735 = 1 obj #= tim 64793 = 1294827907949821
    WAITING #12: nam = 'db file sequential read' ela = 4501 file #= 15 block #= blocks 11270933 = 1 obj #= tim 64793 = 1294827907954533
    WAITING #12: nam = 'db file sequential read' ela = 9936 file #= 15 block #= blocks 61007523 = 1 obj #= tim 64793 = 1294827907964616
    WAITING #12: nam = 'db file sequential read' ela = 7631 file #= 15 block #= blocks 34399970 = 1 obj #= tim 64793 = 1294827907972457
    WAITING #12: nam = 'db file sequential read' ela = 6162 file #= 15 block #= blocks 60305187 = 1 obj #= tim 64793 = 1294827907978797
    WAITING #12: nam = 'db file sequential read' ela = 8555 file #= 15 block #= blocks 20912586 = 1 obj #= tim 64793 = 1294827907987532
    WAITING #12: nam = 'db file sequential read' ela = 9499 file #= 15 block #= blocks 61007587 = 1 obj #= tim 64793 = 1294827907997296
    WAITING #12: nam = 'db file sequential read' ela = 23690 file #= 15 block #= blocks 19769014 = 1 obj #= tim 64793 = 1294827908024105
    WAITING #12: nam = 'db file sequential read' ela = 7081 file #= 15 block #= blocks 61314072 = 1 obj #= tim 64793 = 1294827908031968
    WAITING #12: nam = 'db file sequential read' ela = 31727 file #= 15 block #= blocks 34026602 = 1 obj #= tim 64793 = 1294827908063914
    WAITING #12: nam = 'db file sequential read' ela = 4932 file #= 15 block #= blocks 60905313 = 1 obj #= tim 64793 = 1294827908069052
    WAITING #12: nam = 'db file sequential read' ela = 6616 file #= 15 block #= blocks 20912650 = 1 obj #= tim 64793 = 1294827908075835
    WAITING #12: nam = 'db file sequential read' ela = 8443 file #= 15 block #= blocks 33781968 = 1 obj #= tim 64793 = 1294827908084594
    WAITING #12: nam = 'db file sequential read' ela = 22291 file #= 15 block #= blocks 60641967 = 1 obj #= tim 64793 = 1294827908107052
    WAITING #12: nam = 'db file sequential read' ela = 6610 file #= 15 block #= blocks 18991774 = 1 obj #= tim 64793 = 1294827908113879
    WAITING #12: nam = 'db file sequential read' ela = 6493 file #= 15 block #= blocks 34622382 = 1 obj #= tim 64793 = 1294827908120535
    WAITING #12: nam = 'db file sequential read' ela = 5028 file #= 15 block #= blocks 20912714 = 1 obj #= tim 64793 = 1294827908125861
    WAITING #12: nam = 'db file sequential read' ela = file No. 11834 = 15 block #= blocks 61679845 = 1 obj #= tim 64793 = 1294827908137858
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 34498166 = 1 obj #= tim 64793 = 1294827908142305
    WAITING #12: nam = 'db file sequential read' ela = 19267 file #= 15 block #= blocks 60905377 = 1 obj #= tim 64793 = 1294827908161695
    WAITING #12: nam = 'db file sequential read' ela = file No. 14108 = 15 block #= blocks 19769078 = 1 obj #= tim 64793 = 1294827908176046
    WAITING #12: nam = 'db file sequential read' ela = 4128 file #= 15 block #= blocks 33781465 = 1 obj #= tim 64793 = 1294827908180396
    WAITING #12: nam = 'db file sequential read' ela = 9986 file #= 15 block #= blocks 61007651 = 1 obj #= tim 64793 = 1294827908190535
    WAITING #12: nam = 'db file sequential read' ela = 8907 file #= 15 block #= blocks 20912778 = 1 obj #= tim 64793 = 1294827908199614
    WAITING #12: nam = 'db file sequential read' ela = 12023 file #= 15 block #= blocks 34230838 = 1 obj #= tim 64793 = 1294827908211852
    WAITING #12: nam = 'db file sequential read' ela = 29837 file #= 15 block #= blocks 60905441 = 1 obj #= tim 64793 = 1294827908241853
    WAITING #12: nam = 'db file sequential read' ela = 5989 file #= 15 block #= blocks 60133085 = 1 obj #= tim 64793 = 1294827908248065
    WAITING #12: nam = 'db file sequential read' ela = 74172 file #= 15 block #= blocks 33357369 = 1 obj #= tim 64793 = 1294827908322391
    WAITING #12: nam = 'db file sequential read' ela = 5443 file #= 15 block #= blocks 60498917 = 1 obj #= tim 64793 = 1294827908328064
    WAITING #12: nam = 'db file sequential read' ela = 4645 file #= 15 block #= blocks 20912842 = 1 obj #= tim 64793 = 1294827908332912
    WAITING #12: nam = 'db file sequential read' ela = file No. 13595 = 15 block #= blocks 61679909 = 1 obj #= tim 64793 = 1294827908346618
    WAITING #12: nam = 'db file sequential read' ela = 9120 file #= 15 block #= blocks 58356376 = 1 obj #= tim 64793 = 1294827908355975
    WAITING #12: nam = 'db file sequential read' ela = 3186 file #= 15 block #= blocks 19385867 = 1 obj #= tim 64793 = 1294827908359374
    WAITING #12: nam = 'db file sequential read' ela = 5114 file #= 15 block #= blocks 61589533 = 1 obj #= tim 64793 = 1294827908364630
    WAITING #12: nam = 'db file sequential read' ela = 42263 file #= 15 block #= blocks 33356474 = 1 obj #= tim 64793 = 1294827908407045
    WAITING #12: nam = 'db file sequential read' ela = 10683 file #= 15 block #= blocks 58183898 = 1 obj #= tim 64793 = 1294827908417994
    WAITING #12: nam = 'db file sequential read' ela = file No. 10284 = 15 block #= blocks 20529486 = 1 obj #= tim 64793 = 1294827908429134
    WAITING #12: nam = 'db file sequential read' ela = file No. 12544 = 15 block #= blocks 60498981 = 1 obj #= tim 64793 = 1294827908441945
    WAITING #12: nam = 'db file sequential read' ela = 8311 file #= 15 block #= blocks 33191548 = 1 obj #= tim 64793 = 1294827908451011
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 59083610 = 1 obj #= tim 64793 = 1294827908455902
    WAITING #12: nam = 'db file sequential read' ela = 4653 file #= 15 block #= blocks 18991837 = 1 obj #= tim 64793 = 1294827908461264
    WAITING #12: nam = 'db file sequential read' ela = 4905 file #= 15 block #= blocks 34685472 = 1 obj #= tim 64793 = 1294827908466897
    WAITING #12: nam = 'db file sequential read' ela = file No. 12360 = 15 block #= blocks 61775403 = 1 obj #= tim 64793 = 1294827908480080
    WAITING #12: nam = 'db file sequential read' ela = 6956 file #= 15 block #= blocks 58921225 = 1 obj #= tim 64793 = 1294827908487704
    WAITING #12: nam = 'db file sequential read' ela = 6068 file #= 15 block #= blocks 19769142 = 1 obj #= tim 64793 = 1294827908494608
    WAITING #12: nam = 'db file sequential read' ela = 5249 file #= 15 block #= blocks 33781528 = 1 obj #= tim 64793 = 1294827908500666
    WAITING #12: nam = 'db file sequential read' ela = 6013 file #= 15 block #= blocks 60905505 = 1 obj #= tim 64793 = 1294827908507366
    WAITING #12: nam = 'db file sequential read' ela = 3014 file #= 15 block #= blocks 20912970 = 1 obj #= tim 64793 = 1294827908511019
    WAITING #12: nam = 'db file sequential read' ela = 3636 file #= 15 block #= blocks 33781591 = 1 obj #= tim 64793 = 1294827908515425
    WAITING #12: nam = 'db file sequential read' ela = file No. 12226 = 15 block #= blocks 58183962 = 1 obj #= tim 64793 = 1294827908528268
    WAITING #12: nam = 'db file sequential read' ela = 7635 file #= 15 block #= blocks 60499173 = 1 obj #= tim 64793 = 1294827908536613
    WAITING #12: nam = 'db file sequential read' ela = 7364 file #= 15 block #= blocks 11270996 = 1 obj #= tim 64793 = 1294827908544203
    WAITING #12: nam = 'db file sequential read' ela = 5452 file #= 15 block #= blocks 34622446 = 1 obj #= tim 64793 = 1294827908550475
    WAITING #12: nam = 'db file sequential read' ela = 9734 file #= 15 block #= blocks 20913034 = 1 obj #= tim 64793 = 1294827908561029
    WAITING #12: nam = 'db file sequential read' ela = 14077 file #= 15 block #= blocks 61679973 = 1 obj #= tim 64793 = 1294827908575440
    WAITING #12: nam = 'db file sequential read' ela = 9694 file #= 15 block #= blocks 34550681 = 1 obj #= tim 64793 = 1294827908585311
    WAITING #12: nam = 'db file sequential read' ela = 6753 file #= 15 block #= blocks 61007715 = 1 obj #= tim 64793 = 1294827908592228
    WAITING #12: nam = 'db file sequential read' ela = 12577 file #= 15 block #= blocks 19769206 = 1 obj #= tim 64793 = 1294827908604943
    WAITING #12: nam = 'db file sequential read' ela = file No. 609 = 15 block #= blocks 61589534 = 1 obj #= tim 64793 = 1294827908605735
    WAITING #12: nam = 'db file sequential read' ela = 6267 file #= 15 block #= blocks 33356538 = 1 obj #= tim 64793 = 1294827908612148
    WAITING #12: nam = 'db file sequential read' ela = 7876 file #= 15 block #= blocks 58184026 = 1 obj #= tim 64793 = 1294827908620164
    WAITING #12: nam = 'db file sequential read' ela = file No. 14058 = 15 block #= blocks 32767835 = 1 obj #= tim 80883 = 1294827908634546
    WAITING #12: nam = 'db file sequential read' ela = 9798 file #= 15 block #= blocks 58504373 = 1 obj #= tim 80883 = 1294827908644575
    WAITING #12: nam = 'db file sequential read' ela = 11081 file #= 15 block #= blocks 11118811 = 1 obj #= tim 80883 = 1294827908655908
    WAITING #12: nam = 'db file sequential read' ela = 6249 file #= 15 block #= blocks 58087798 = 1 obj #= tim 80883 = 1294827908662451
    WAITING #12: nam = 'db file sequential read' ela = 9513 file #= 15 block #= blocks 33331129 = 1 obj #= tim 80883 = 1294827908672904
    WAITING #12: nam = 'db file sequential read' ela = 4648 file #= 15 block #= blocks 60301818 = 1 obj #= tim 80883 = 1294827908677736
    WAITING #12: nam = 'db file sequential read' ela = 6147 file #= 15 block #= blocks 20523119 = 1 obj #= tim 80883 = 1294827908684075
    WAITING #12: nam = 'db file sequential read' ela = file No. 59531 = 15 block #= blocks 61016570 = 1 obj #= tim 80883 = 1294827908743795

    2011-01-12 11:25:08.752
    WAITING #12: nam = 'db file sequential read' ela = 8787 file #= 15 block #= blocks 33770842 = 1 obj #= tim 80883 = 1294827908752846
    WAITING #12: nam = 'db file sequential read' ela = 9858 file #= 15 block #= blocks 60895354 = 1 obj #= tim 80883 = 1294827908762960
    WAITING #12: nam = 'db file sequential read' ela = 11237 file #= 15 block #= blocks 19369506 = 1 obj #= tim 80883 = 1294827908775138
    WAITING #12: nam = 'db file sequential read' ela = 5838 file #= 15 block #= blocks 34229712 = 1 obj #= tim 80883 = 1294827908782100
    WAITING #12: nam = 'db file sequential read' ela = 6518 file #= 15 block #= blocks 61221772 = 1 obj #= tim 80883 = 1294827908789403
    WAITING #12: nam = 'db file sequential read' ela = 9946 file #= 15 block #= blocks 20523183 = 1 obj #= tim 80883 = 1294827908800089
    WAITING #12: nam = 'db file sequential read' ela = 16699 file #= 15 block #= blocks 61016634 = 1 obj #= tim 80883 = 1294827908817077
    WAITING #12: nam = 'db file sequential read' ela = file No. 15215 = 15 block #= blocks 33770900 = 1 obj #= tim 80883 = 1294827908832934
    WAITING #12: nam = 'db file sequential read' ela = 8403 file #= 15 block #= blocks 60895418 = 1 obj #= tim 80883 = 1294827908842317
    WAITING #12: nam = 'db file sequential read' ela = 8927 file #= 15 block #= blocks 18950791 = 1 obj #= tim 80883 = 1294827908852190
    WAITING #12: nam = 'db file sequential read' ela = 4382 files #= 15 block #= blocks 34493493 = 1 obj #= tim 80883 = 1294827908856821
    WAITING #12: nam = 'db file sequential read' ela = 9356 file #= 15 block #= blocks 61324964 = 1 obj #= tim 80883 = 1294827908866337
    WAITING #12: nam = 'db file sequential read' ela = 10575 file #= 15 block #= blocks 20883018 = 1 obj #= tim 80883 = 1294827908877102
    WAITING #12: nam = 'db file sequential read' ela = 16601 file #= 15 block #= blocks 60502307 = 1 obj #= tim 80883 = 1294827908893926
    WAITING #12: nam = 'db file sequential read' ela = 5236 file #= 15 block #= blocks 33331193 = 1 obj #= tim 80883 = 1294827908899387
    WAITING #12: nam = 'db file sequential read' ela = 9981 file #= 15 block #= blocks 59830076 = 1 obj #= tim 80883 = 1294827908910427
    WAITING #12: nam = 'db file sequential read' ela = 8100 file #= 15 block #= blocks 19767805 = 1 obj #= tim 80883 = 1294827908918751
    WAITING #12: nam = 'db file sequential read' ela = 12492 file #= 15 block #= blocks 67133332 = 1 obj #= tim 80883 = 1294827908931732
    WAITING #12: nam = 'db file sequential read' ela = 5876 file #= 15 block #= blocks 34229775 = 1 obj #= tim 80883 = 1294827908937859
    WAITING #12: nam = 'db file sequential read' ela = 8741 file #= 15 block #= blocks 61408244 = 1 obj #= tim 80883 = 1294827908948439
    WAITING #12: nam = 'db file sequential read' ela = 8477 file #= 15 block #= blocks 20523247 = 1 obj #= tim 80883 = 1294827908957099
    WAITING #12: nam = 'db file sequential read' ela = 7947 file #= 15 block #= blocks 61016698 = 1 obj #= tim 80883 = 1294827908965210
    WAITING #12: nam = 'db file sequential read' ela = 2384 file #= 15 block #= blocks 33331257 = 1 obj #= tim 80883 = 1294827908967773
    WAITING #12: nam = 'db file sequential read' ela = 3585 file #= 15 block #= blocks 59571985 = 1 obj #= tim 80883 = 1294827908971564
    WAITING #12: nam = 'db file sequential read' ela = 7753 file #= 15 block #= blocks 5099571 = 1 obj #= tim 80883 = 1294827908979647
    WAITING #12: nam = 'db file sequential read' ela = 8205 file #= 15 block #= blocks 61408308 = 1 obj #= tim 80883 = 1294827908988200
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 34229335 = 1 obj #= tim 80883 = 1294827908996129
    WAITING #12: nam = 'db file sequential read' ela = file No. 10942 = 15 block #= blocks 61325028 = 1 obj #= tim 80883 = 1294827909007244
    WAITING #12: nam = 'db file sequential read' ela = 6247 file #= 15 block #= blocks 20523311 = 1 obj #= tim 80883 = 1294827909013706
    WAITING #12: nam = 'db file sequential read' ela = file No. 16188 = 15 block #= blocks 60777362 = 1 obj #= tim 80883 = 1294827909030088
    WAITING #12: nam = 'db file sequential read' ela = file No. 16642 = 15 block #= blocks 33528224 = 1 obj #= tim 80883 = 1294827909046971
    WAITING #12: nam = 'db file sequential read' ela = file No. 10118 = 15 block #= blocks 60128498 = 1 obj #= tim 80883 = 1294827909057402
    WAITING #12: nam = 'db file sequential read' ela = file No. 10747 = 15 block #= blocks 802317 = 1 obj #= tim 64495 = 1294827909069165
    WAITING #12: nam = 'db file sequential read' ela = 4795 file number = 15 block #= blocks 33079541 = 1 obj #= tim 64560 = 1294827909074367
    WAITING #12: nam = 'db file sequential read' ela = 6822 file #= 15 block #= blocks 20913098 = 1 obj #= tim 64793 = 1294827909081436
    WAITING #12: nam = 'db file sequential read' ela = file No. 10932 = 15 block #= blocks 19369570 = 1 obj #= tim 80883 = 1294827909092607



    -->> AFTER:
    WAITING #23: nam = 'db file sequential read' ela = 16367 file #= 15 block #= blocks 70434065 = 1 obj #= tim 115059 = 1295342220878947
    WAITING #23: nam = 'db file sequential read' ela = 1141 file #= 15 block #= blocks 70434066 = 1 obj #= tim 115059 = 1295342220880549
    WAITING #23: nam = 'db file sequential read' ela = 456 file #= 15 block #= blocks 70434067 = 1 obj #= tim 115059 = 1295342220881615
    WAITING #23: nam = 'db file sequential read' ela = 689 file #= 15 block #= blocks 70434068 = 1 obj #= tim 115059 = 1295342220882617
    WAITING #23: nam = 'db file sequential read' ela = 495 file #= 15 block #= blocks 70434069 = 1 obj #= tim 115059 = 1295342220883482
    WAITING #23: nam = 'db file sequential read' ela = 419 file #= 15 block #= blocks 70434070 = 1 obj #= tim 115059 = 1295342220884195
    WAITING #23: nam = 'db file sequential read' ela = 149 file #= 15 block #= blocks 70434071 = 1 obj #= tim 115059 = 1295342220884629
    WAITING #23: nam = 'db file sequential read' ela = 161 file #= 15 block #= blocks 70434072 = 1 obj #= tim 115059 = 1295342220885085
    WAITING #23: nam = 'db file sequential read' ela = 146 file #= 15 block #= blocks 70434073 = 1 obj #= tim 115059 = 1295342220885533
    WAITING #23: nam = ela 'db file sequential read' = 188 file #= 15 block #= blocks 70434074 = 1 obj #= tim 115059 = 1295342220886026
    WAITING #23: nam = 'db file sequential read' ela = 181 file #= 15 block #= blocks 70434075 = 1 obj #= tim 115059 = 1295342220886498
    WAITING #23: nam = 'db file sequential read' ela = 303 file #= 15 block #= blocks 70434076 = 1 obj #= tim 115059 = 1295342220887082
    WAITING #23: nam = 'db file sequential read' ela = file No. 550 = 15 block #= blocks 70434077 = 1 obj #= tim 115059 = 1295342220887916
    WAITING #23: nam = 'db file sequential read' ela = 163 file #= 15 block #= blocks 70434078 = 1 obj #= tim 115059 = 1295342220888402
    WAITING #23: nam = 'db file sequential read' ela = 200 file #= 15 block #= blocks 70434079 = 1 obj #= tim 115059 = 1295342220888980
    WAITING #23: nam = 'db file sequential read' ela = 134 file #= 15 block #= blocks 70434080 = 1 obj #= tim 115059 = 1295342220889409
    WAITING #23: nam = 'db file sequential read' ela = 157 file #= 15 block #= blocks 70434081 = 1 obj #= tim 115059 = 1295342220889850
    WAITING #23: nam = 'db file sequential read' ela = 5112 file #= 15 block #= blocks 70434540 = 1 obj #= tim 115059 = 1295342220895272
    WAITING #23: nam = 'db file sequential read' ela = 276 file #= 15 block #= blocks 70434082 = 1 obj #= tim 115059 = 1295342220895640

    2011-01-18 10:17:00.898
    WAITING #23: nam = 'db file sequential read' ela = 2936 file #= 15 block #= blocks 70434084 = 1 obj #= tim 115059 = 1295342220898921
    WAITING #23: nam = 'db file sequential read' ela = 1843 file number = 15 block #= blocks 70434085 = 1 obj #= tim 115059 = 1295342220901233
    WAITING #23: nam = 'db file sequential read' ela = 452 file #= 15 block #= blocks 70434086 = 1 obj #= tim 115059 = 1295342220902050
    WAITING #23: nam = 'db file sequential read' ela = 686 file #= 15 block #= blocks 70434087 = 1 obj #= tim 115059 = 1295342220903031
    WAITING #23: nam = 'db file sequential read' ela = 1582 file #= 15 block #= blocks 70434088 = 1 obj #= tim 115059 = 1295342220904933
    WAITING #23: nam = 'db file sequential read' ela = 179 file #= 15 block #= blocks 70434089 = 1 obj #= tim 115059 = 1295342220905544
    WAITING #23: nam = 'db file sequential read' ela = 426 file #= 15 block #= blocks 70434090 = 1 obj #= tim 115059 = 1295342220906303
    WAITING #23: nam = 'db file sequential read' ela = 138 file #= 15 block #= blocks 70434091 = 1 obj #= tim 115059 = 1295342220906723
    WAITING #23: nam = 'db file sequential read' ela = 3004 file #= 15 block #= blocks 70434092 = 1 obj #= tim 115059 = 1295342220910053
    WAITING #23: nam = 'db file sequential read' ela = 331 file #= 15 block #= blocks 70434093 = 1 obj #= tim 115059 = 1295342220910765
    WAITING #23: nam = 'db file sequential read' ela = 148 file #= 15 block #= blocks 70434094 = 1 obj #= tim 115059 = 1295342220911236
    WAITING #23: nam = 'db file sequential read' ela = 296 file #= 15 block #= blocks 70434095 = 1 obj #= tim 115059 = 1295342220911836
    WAITING #23: nam = 'db file sequential read' ela = 441 file #= 15 block #= blocks 70434096 = 1 obj #= tim 115059 = 1295342220912581
    WAITING #23: nam = 'db file sequential read' ela = 157 file #= 15 block #= blocks 70434097 = 1 obj #= tim 115059 = 1295342220913038
    WAITING #23: nam = 'db file sequential read' ela = 281 file #= 15 block #= blocks 70434098 = 1 obj #= tim 115059 = 1295342220913603
    WAITING #23: nam = 'db file sequential read' ela = file No. 150 = 15 block #= blocks 70434099 = 1 obj #= tim 115059 = 1295342220914048
    WAITING #23: nam = 'db file sequential read' ela = 143 file #= 15 block #= blocks 70434100 = 1 obj #= tim 115059 = 1295342220914498
    WAITING #23: nam = 'db file sequential read' ela = 384 file #= 15 block #= blocks 70434101 = 1 obj #= tim 115059 = 1295342220916907
    WAITING #23: nam = 'db file sequential read' ela = file No. 164 = 15 block #= blocks 70434102 = 1 obj #= tim 115059 = 1295342220917458
    WAITING #23: nam = 'db file sequential read' ela = 218 file #= 15 block #= blocks 70434103 = 1 obj #= tim 115059 = 1295342220917962
    WAITING #23: nam = 'db file sequential read' ela = file No. 450 = 15 block #= blocks 70434104 = 1 obj #= tim 115059 = 1295342220918698
    WAITING #23: nam = 'db file sequential read' ela = file No. 164 = 15 block #= blocks 70434105 = 1 obj #= tim 115059 = 1295342220919159
    WAITING #23: nam = 'db file sequential read' ela = 136 file #= 15 block #= blocks 70434106 = 1 obj #= tim 115059 = 1295342220919598
    WAITING #23: nam = 'db file sequential read' ela = 143 file #= 15 block #= blocks 70434107 = 1 obj #= tim 115059 = 1295342220920041
    WAITING #23: nam = 'db file sequential read' ela = 3091 file #= 15 block #= blocks 70434108 = 1 obj #= tim 115059 = 1295342220925409

    user12196647 wrote:
    Hemant, Jonathan - thanks for the comprehensive replies. To summarize:

    It's an 11.1 (11.1.0.7) database on 64-bit Linux. No compression is used anywhere, all blocks are 16k, and the tablespaces are ASSM BIGFILE tablespaces created with EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO (separate tablespaces for tables and for indexes). I did not rebuild the PK index as such; I dropped all the indexes and then recreated the PK (but that amounts to the same thing - a rebuild is effectively still a drop & create, isn't it?).

    The traces were taken while the PL/SQL FORALL loop was actually inserting. Each FORALL iteration inserts 100 rows "at once" (we use FETCH ... BULK COLLECT ... LIMIT 100), but the whole loading process inserts a few million records (the FORALL insert is wrapped in a loop :)). I pasted that part of the traces - the wait events for one set of 100 inserts 'before' and one set of 100 inserts 'after'. The wait events for the other inserts look exactly the same - always 'random' blocks in the 'before the index recreate' inserts, and almost ordered blocks in the 'after the index recreate' inserts. The object_ids were definitely the indexes.

    I think Hemant's explanation is correct. The point is, my index structure is (X, Y, DATE), where the X and Y values are 99% repeating in each data load, and DATE increases by one for each data load. Because I always insert ordered by the PK, when loading a new DATE I visit each index leaf block, in order, as in a full index scan. Because I insert into every leaf block in every load, I get many index block splits, which caused my leaf blocks to become physically non-contiguous after a while. The rebuild was my cure...

    You got your answer before I had time to finish building a model of your data set - but I think your description is fairly accurate.
    A few thoughts, though:

    WAITING #12: nam = 'db file sequential read' ela = file No. 10747 = 15 block #= blocks 802317 = 1 obj #= tim 64495 = 1294827909069165
    WAITING #12: nam = 'db file sequential read' ela = 4795 file number = 15 block #= blocks 33079541 = 1 obj #= tim 64560 = 1294827909074367
    WAITING #12: nam = 'db file sequential read' ela = 6822 file #= 15 block #= blocks 20913098 = 1 obj #= tim 64793 = 1294827909081436
    WAITING #12: nam = 'db file sequential read' ela = file No. 10932 = 15 block #= blocks 19369570 = 1 obj #= tim 80883 = 1294827909092607

    Unless you missed a bit when copying your trace file, the index with object id 64495 was not subject to disk reads the way the other indexes were. It would be nice to know why. There are two "obvious" possibilities - (a) it was very well buffered, or (b) it is an index where almost all the entries are null, so it is rarely changed. Because it has the lowest object id of all the indexes, it is possible that it is the primary key index (but I say that only because I tend to create the primary key of a table before I create any other indexes), and if that is correct, your time was not being wasted on the primary key index, and perhaps it tends to be very well buffered by the nature of the popular queries.

    Changes in performance when inserting millions of rows tend to be non-linear as the number of indexes grows.

    Since you insert the data in primary key order, you get the maximum caching benefit for the insertions into the PK index. And since you insert a very large number of rows - of the order of 0.5% - 1% of the current rows, in light of your comment about 'millions' - you're likely to insert two or three rows into each leaf block of the PK index (which, by the way, should compress well on the first two columns), allowing Oracle to optimise its work in several ways.

    But for the other indexes you are probably jumping around very randomly as you insert rows, and this leads to several different effects:


      you must keep N times as many blocks in the buffer cache to get similar caching benefits
      each insertion into a non-PK index block is likely to be a single-row insert - which maximises the undo and redo overhead (more undo and redo written)
      each insertion into an index block eventually requires the block to be written to disc - which means more I/O, which slows down the reads
      as you read blocks (and you have a lot of them to read) you may force Oracle to write out other index blocks that will then have to be re-read later.

    It is perfectly possible that almost all of your performance gain comes from dropping the three indexes, and only a relatively small fraction from rebuilding the primary key.

    One final thought on block splits. I think you have found a benefit from (non-Oracle) readahead when the index blocks are physically ordered, so you have a trade-off between how often you rebuild to regain this benefit and finding a window in which you can afford the resources to rebuild. If you want the best compromise: (a) don't forget the compression option - it seems appropriate; (b) consider the benefits of range partitioning by date - it seems very appropriate in your case; and (c) by varying the PCTFREE when you rebuild the index, you can tune the number of insert cycles before the effects of leaf block splits have a significant impact on the randomness of the I/O.
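    As an illustration of points (a) and (c), a hypothetical rebuild command (the index name is assumed, not from the thread):

        alter index big_table_pk rebuild compress 2 pctfree 20;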

    I have an idea - if I changed the index structure to (DATE, X, Y), then I would always be inserting into the right-most leaf block, I'd get 90-10 splits instead of 50/50 splits, and the leaf blocks would stay physically contiguous, so no rebuilds would be necessary - am I right?

    You shouldn't change the order of the index columns until you have checked how the index is used. If the most important and most frequent queries are of the form "select ... from table where colx = X and coly = Y and date_col between A and B", you need to index it that way round. (In fact, you could look at the possibility of using a range-partitioned index-organized table for the data.)
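    For example (table, column and partition names below are made up to match the discussion), the index for that query shape puts the equality columns first and the range column last, and the index-organized alternative might be sketched like this:

        -- hypothetical index: equality columns first, the date range column last
        create index t_xyd_idx on t (colx, coly, date_col) compress 2;

        -- sketch of the range-partitioned IOT alternative mentioned above
        create table t_iot (
          colx      number,
          coly      number,
          date_col  date,
          payload   varchar2(40),
          constraint t_iot_pk primary key (colx, coly, date_col)
        )
        organization index
        partition by range (date_col) (
          partition p_2011_01 values less than (to_date('2011-02-01','YYYY-MM-DD'))
        );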

    Regards
    Jonathan Lewis

  • Page numbers appearing without text in the index

    Hello

    I have an automatically generated index in FM 9, and some page numbers keep appearing without any text, just as symbols (I can also see them, correctly, below).

    It's the first time I have come across markers with a start and end range,

    but I wonder whether something is wrong and whether I can do anything about it...

    Thank you
    Elena

    Open the Marker window, then use hypertext (hold down Ctrl+Alt and click on the page numbers) in the index to access these no-text index entries. Inspect the content shown in the Marker window and modify (or delete) it accordingly.

    Each index startrange entry must have a corresponding endrange. The spelling of the terms must be identical, and FM is case-sensitive. See the FM9 help for more details: http://help.adobe.com/en_US/FrameMaker/9.0/Using/WSd817046a44e105e21e63e3d11ab7f7561e-7fae.html#WSd817046a44e105e21e63e3d11ab7f7960b-7ec4
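    For example, a page range is normally built from two index markers whose marker text matches exactly apart from the range building block (the entry name below is just an illustration):

        Marker at the start of the discussion:   Dalmatians<$startrange>
        Marker at the end of the discussion:     Dalmatians<$endrange>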

  • Highlighting index entries

    I am creating an index and would like to see my index entries highlighted in the text during the indexing process. Obviously, these are non-printing. I know I can show hidden characters and see the blue markers, but they are so small and difficult to discern with all the other hidden characters being displayed.

    I thought about setting up a temporary character style, but some of the entries already have a different character style. Can I do something with layers?

    Ideally, I would like to see a transparent yellow highlight like the one you can create with the Acrobat commenting tools.

    I see two problems with your idea:

    1. The position of the index marker. It may be at the beginning of a word, at the end, or somewhere in the middle. In fact, it doesn't even matter whether it's anywhere near the word, as long as it is on the same page!

    2. The indexed text. Suppose you have a book on cats and dogs. For the index, you might add all the cats under "cats" and the dogs under "canines" - the word you put into the index entry does not have to appear literally in your text. Another example is "Indexing", which would be indexed as "index" or possibly "index, creating a".

    Now on to the real thing!

    You can create a GREP style that highlights the text surrounding an index marker. GREP styles are amazing - even though they apply character styles, they don't interfere with any existing character styles! They blend together perfectly. It's not totally reliable - more on that below - but the general idea is:

    a. Create a new character style "Index Marker". Set it to Underline and, in the underline options, set the weight to 12 pt, the offset to -3 pt, and the colour to something really bright.

    b. In your main paragraph style for plain body text (you are using paragraph styles, aren't you? If not, you should!), select "GREP Styles".

    c. Add a new GREP style. Set its character style to "Index Marker" and its "To Text" field to ~I\w+ - that's a capital I ("eye"). Do not add quotation marks.

    d. Add another GREP style. Again set its character style to "Index Marker", and set the "To Text" field to \w+~I (again a capital I, no quotes needed).

    e. Click OK. Your index entries get highlighted.
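    To recap, the two GREP styles should end up looking something like this in the paragraph style dialog (the style name follows the example above, and dialog labels may vary slightly by version):

        Apply Style: Index Marker     To Text: ~I\w+
        Apply Style: Index Marker     To Text: \w+~I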

    Why is this not perfect? Well, I guess it goes beyond what the Adobe engineers imagined we would use GREP styles for. The index marker, tilde-capital-I, seems to mess up the text runs a little. Therefore the usual \w+, which ought to pick up as many "word characters" as possible, suddenly fails and may miss a few characters.

    It takes two GREP styles, rather than one all-purpose \w*~I\w*, because that ought to pick up every word containing an index marker but, in reality, doesn't pick up anything at all.

    There is another small disadvantage: because it is true underlining, it will also print and appear in PDF files. But that is easily solved: before you print (or export to PDF, or whatever), go to the "Index Marker" character style and turn off the underline. If you need to see the markers again, turn it back on and hey! They're ba-ack!

  • Compressing indexes

    Hello,
    Does compressing my indexes improve performance or degrade it? When exactly should it be used?

    Dear user1175505,

    From the Oracle online documentation:

    +"+
    + Using Compression of Index +.

    + Bitmap indexes are always stored as patented, compressed without the need for user intervention. B-tree indexes, however, can be stored specifically compressed way to allow huge space saving, storing more keys in each index block, which also leads to less i/o and better performance. +

    + Compression key allows you to compress an index B-tree, which reduces the overhead of repeated values storage. In the case of a non-unique index, all indexes on columns can be stored in a compressed format, while in the case of a unique index, at least one index column must be kept not compressed. +

    + In general, the an index keys have two pieces, a piece of group and a unique piece. If the key is not set to have a unique piece, Oracle provides a in the form of a rowid annexed to the pool room. Key compression is a method of breaking the pool room and store it so it can be shared by several unique pieces. The cardinality of the columns selected must be compressed determines the compression ratio that can be achieved. Thus, for example, if a unique index, which consists of five columns providing the uniqueness mainly the last two columns, it is more optimal to choose three columns to be stored compressed. If you choose to compress the four columns, the repetition will have almost disappeared, and the compression ratio will be worse. +

    * + Key compression reduces storage of an index, but it can increase the time processor required to rebuild the key during a scan of the index column values. He faces also certain additional storage overhead, because each prefix entry has an overload of four bytes associated with it. + *

    + Indexes Local versus Global index +.

    + Index B-tree on the partitioned tables can be global or local. With Oracle8i, and earlier versions, Oracle has recommended that global indexes will not be used in data warehouse environments, because a partition DDL statement (for example, ALTER TABLE...) DROP PARTITION) would invalidate the entire index, and the index rebuilding is expensive. Since the Oracle 10 g database, global index can be maintained without Oracle marking them as unusable after the DDL. This improvement makes overall more efficient index for data warehouse environments. +

    + However, index will be more common than the overall index. Global indexes must be used when there is a specific requirement that cannot be met by local indexes (for example, a unique index on a no partitioning key or a requirement of performance). +

    + Bitmap indexes on partitioned tables are always local. +
    +"+

    Ogan
