Identify the index of the row with focus in a custom ListField

I'm working with a custom ListField. From a menu on the screen, I would like to know the index of the row that currently has focus, so that I can customize its highlight.

I thought it would be ListField.getSelectedIndex(), but it seems that a row is only selected when you click it (or switch to selection mode).

There doesn't seem to be a simple API for this, unless I'm missing something?

Tom

ListField.getSelectedIndex() returns the index of the row with focus, and -1 otherwise.

Do you have a code sample you can provide?
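For reference, here is a minimal sketch of reading getSelectedIndex() from a menu item on the screen; the screen class, the menu item label and the Dialog call are only assumptions for illustration, not code from the original post:

    import net.rim.device.api.ui.MenuItem;
    import net.rim.device.api.ui.component.Dialog;
    import net.rim.device.api.ui.component.ListField;
    import net.rim.device.api.ui.container.MainScreen;

    public class MyListScreen extends MainScreen {
        private final ListField _list = new ListField();

        public MyListScreen() {
            // ListFieldCallback setup omitted for brevity.
            add(_list);
            // Hypothetical menu item that reports the row currently holding focus.
            addMenuItem(new MenuItem("Highlight row", 100, 10) {
                public void run() {
                    int index = _list.getSelectedIndex(); // focused row, or -1
                    if (index != -1) {
                        Dialog.inform("Focused row: " + index);
                    }
                }
            });
        }
    }

If getSelectedIndex() really does return -1 outside selection mode on your OS version, the focused row can also be tracked manually by overriding navigationMovement() in the custom ListField and calling setSelectedIndex() yourself.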

Tags: BlackBerry Developers

Similar Questions

  • Siebel Database Configuration Assistant unable to identify the indexes tablespace

    Hello

    I am installing Siebel CRM 8.1.1 on the AIX platform. We have a RAC-enabled database as the backend, and two tablespaces have been created, one for data and another for indexes. In the Database Configuration Wizard we have to give the names of these two tablespaces; we provide the name of the data tablespace in the Table Tablespace field and the index tablespace in the Index Tablespace field of the setup wizard. When we click the Next button, the Configuration Wizard gives the error "unable to identify the index tablespace".

    Why is this happening, as the index tablespace is correctly created in the database? Please help me with this. A quick response will be appreciated.

    Kind regards

    Abgrall

    Have you run:

    ALTER USER SIEBEL QUOTA UNLIMITED ON <tablespace_name>;

    ?

  • Want to get the field with focus

    Hello

    I added 5 ListFields to a VerticalFieldManager and added the VerticalFieldManager to the screen.

    Now the problem is that when I open the menu, I want to know which list field has focus.

    I used the following on the menu click event:

    Field field = MainScreen.this.getFieldWithFocus(); but it always returns the VerticalFieldManager.

    So, how can I get the index of the ListField...?

    Thank you

    Himanshu Lathiya

    Try the getLeafFieldWithFocus() method. It will never return a Manager field. You can compare using the instanceof operator.
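    A minimal sketch of that comparison, assuming the five ListFields are kept in an array (the helper class and its parameters are hypothetical, not from the original post):

    import net.rim.device.api.ui.Field;
    import net.rim.device.api.ui.Screen;
    import net.rim.device.api.ui.component.ListField;

    final class FocusHelper {
        // Returns the position (0..4) of the ListField that currently holds
        // focus on the given screen, or -1 if focus is not on one of them.
        static int focusedListIndex(Screen screen, ListField[] lists) {
            Field focus = screen.getLeafFieldWithFocus(); // never a Manager
            for (int i = 0; i < lists.length; i++) {
                if (focus == lists[i]) {
                    return i;
                }
            }
            return -1;
        }
    }

    Inside the focused ListField itself, getSelectedIndex() then gives the focused row.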

  • When is the right time to force the use of an index?

    I have an EMPLOYEE table. I join it with ROLE, and it has only about 200 distinct values in column EMPLOYEE.EMPLOYEE_TYPE_ID.

    select /*+ INDEX (a i_employee_type_id) */ b.SID as EMP_NAME, b.role_cd as ROLE
    from
        EMPLOYEE a,
        ROLE b
    where
        a.EMPLOYEE_TYPE_ID = b.EMPLOYEE_TYPE_ID
        and a.EFFECTIVE_END_TS >= systimestamp;

    Is it a good idea to use the index hint here, or to let it do a full table scan?

    SQL> select distinct EMPLOYEE_TYPE_ID from EMPLOYEE;

    238 rows selected.

    If you don't know that it will help, don't use it.

    Personally, I have found several SQL statements with index hints (written by programmers thinking index access is ALWAYS GOOD) that end up harming performance. Oracle does multiblock reads for a full table scan and single-block reads for index access, so depending on the percentage of the table returned it is actually faster to do a full table scan. The optimizer does a great job of determining this.

    If you think there are cardinality estimation problems, and the optimizer expects a large number of rows when there are actually few being returned, in THAT case a hint may be justified. Even so, in that case I prefer to let Oracle manage the hinting itself.

    You can do this by running the SQL Tuning Advisor. It makes the optimizer check whether its estimates are off, and it will create a profile for you. (A profile is essentially a stored set of hints that fixes the execution plan for you.) If at some point the underlying data changes significantly and the profile ends up hurting performance, you do not need to touch the code like you would with manually added hints; you can just disable or delete the profile and let the optimizer re-analyze the statement.

    Regards

    EDIT: In the case of small tables, it is preferable to just cache the whole table in the KEEP pool and let Oracle scan it if necessary.

    EDIT2: With the notable exception of index fast full scans, which are multiblock index reads. But they are only relevant when the query can be satisfied entirely by the index and has no need to visit the table.

  • Identify which tables have data and which do not

    Hello

    I want to find which tables contain data and which do not, without checking each table. Is there a way to do this?

    Like this?

    select
      table_name,
        to_number(
          extractvalue(
            xmltype(
     dbms_xmlgen.getxml('select count(*) c from '||table_name))
            ,'/ROWSET/ROW/C')) count
      from user_tables;
    

    Or maybe

    select
      table_name,
       CASE WHEN
           to_number(
              extractvalue(
                xmltype(
         dbms_xmlgen.getxml('select count(*) c from '||table_name))
                ,'/ROWSET/ROW/C'))>0 THEN 'Table Has Data'
        ELSE 'Table is Empty'
        END
      from user_tables;
    
  • Get the index of the TestSocket in a custom OI built in LabVIEW



  • What is the cheapest Toshiba laptop with PXE in the BIOS and XP support?

    I am a reseller of Toshiba.
    A client asked me this question, and I was hoping someone here could help me answer:

    > What is the cheapest Toshiba laptop which has PXE in the BIOS and XP Pro?

    The customer is downgrading to XP Pro and is looking for a low-cost laptop which will still boot from LAN. Right now they buy the L300, and I am told those do not have PXE in the BIOS. Any ideas which series of Toshiba laptops they should migrate to?

    Thank you
    -Alan

    Hello

    PXE means that you can boot from the local network.
    AFAIK all Toshiba laptops support this option.

    But most laptops are pre-installed and delivered with Win Vista.
    Some professional laptops come with a second disc that allows you to downgrade the laptop to Win XP.
    But these professional laptops are not cheap.

  • How to detect a user interaction with a VerticalFieldManager object?

    Hello

    This is a question for those who have more experience with GUI things on BB. I use Eclipse with the component pack 6 plugin and the 9800 simulator.

    I have a VerticalFieldManager (VFM) object that contains a few menu fields. I would like to detect when the user attempts to modify one of these fields, so I can load a file and update the menus. The thing is that these menus are usually not changed by the user, perhaps only once up-front, something like choosing your country and city and so on, and then you will not change it the next time around. So I want to avoid loading the file and simply use the saved settings instead. Here is what I tried and the solutions I've found, but I do NOT like them aesthetically:

    (1) I cannot load the file in response to one of the fields being modified (using setChangeListener), because by then it is already too late and the menu being shown is outdated.

    (2) I can add a checkbox to the VFM and take care of loading the file and updating the menus there. It works, but I DON'T want that. It's ugly and does not flow well!

    (3) I tried setChangeListener on the VFM itself, but it receives no changes! Here is the code for my VFM:

    private final class OkVerticalFieldManager extends VerticalFieldManager implements FieldChangeListener
    {
        // ----------------------------

        // ----------------------------

        public void fieldChanged(Field field, int context)
        {
            if ((context & FieldChangeListener.PROGRAMMATIC) == 0)
            {
                // ----------------------------------------------

                // -------------------------------------------
            }
        }
    }

    (4) I tried listening to the focus callback on the VFM, but it is called by the display right from the start, and it seems I can't control that. I mean I don't want it to be called until the user actually clicks on the object.

    So I am hoping there is a better way to detect ANY user click on or interaction with the VFM, so I can catch it and do my thing before the menus in there are opened.

    Hope this is clear enough.

    Thank you.

    A few possibilities:

    1. You can check the field with focus (using the VFM's getLeafFieldWithFocus()) in your makeMenu() and branch accordingly.

    2. You can set a focus listener on each field and build the menu items when its focusChanged() method is called with eventType equal to FOCUS_GAINED. There you can check the field's setting and prepare the relevant menu items.

    You can use the same setCookie/getCookie approach I have proposed before in both scenarios if you wish.
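    A minimal sketch of option 2, assuming the fields are ObjectChoiceFields; the field type and the loadChoicesFromFile() helper are assumptions, not taken from the original post:

    import net.rim.device.api.ui.Field;
    import net.rim.device.api.ui.FocusChangeListener;
    import net.rim.device.api.ui.component.ObjectChoiceField;

    public class CountryChoiceField extends ObjectChoiceField implements FocusChangeListener {

        public CountryChoiceField(String label, Object[] initialChoices) {
            super(label, initialChoices);
            setFocusListener(this);
        }

        public void focusChanged(Field field, int eventType) {
            if (eventType == FocusChangeListener.FOCUS_GAINED) {
                // The user has navigated onto this field: refresh its choices
                // before the pop-up is opened.
                setChoices(loadChoicesFromFile());
            }
        }

        // Hypothetical helper that would read the saved settings file.
        private Object[] loadChoicesFromFile() {
            return new Object[] { "Choice A", "Choice B" };
        }
    }

    The same idea works with option 1: override makeMenu(), call getLeafFieldWithFocus(), and only add the file-backed menu items when the focused field is one of the choice fields.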

  • Identify the hotfix version in ACS for Windows 4.2

    Hi guys,

    I need to identify the correct patch version on a customer's ACS for Windows 4.2.

    How can I do this task?

    On the page, I can't find any reference to the patch.

    My best regards,

    André Lomonaco

    From my experience it will also show you the PATCH version. Here's another thread that says to click on the Cisco logo - let me know if it works for you or not.

    https://supportforums.Cisco.com/thread/1003509

    Thank you

    Tarik Admani
    *Please rate helpful posts*

  • InDesign - how to re-enable the "Create a new index entry" button

    When I was creating topics for my index, I saved the document after doing one topic (because I had created topics yesterday, had not saved, and lost all of my work). When I then tried to create a new index entry, the button was dimmed - not active - so I was not able to add more topics to my index. I rebooted my MacBook Pro (Mac OS X Version 10.6.8) and reopened my InDesign (Version 3.0.1) document, and the Index palette still lacks an active "Create a new index entry" button to allow the action. I do not know how to fix this - does anyone have any suggestions?

    I worked in the Reference mode of the Index panel - adding index entries with a topic that I had already entered - and when I went back to Topic mode, I noticed that the "Create a new index entry" button was enabled, so I think everything is good now. Thanks for the forum.

  • Strange - inserts slow at first, then fast after index drop and recreate

    Hello

    I have a table with more than 1,250,000,000 rows on Oracle 11.1.0.7, Linux. It had 4 global indexes, not partitioned. Inserts into this table were very slow - lots of db file sequential reads, each taking 0.009 seconds on average (from tkprof) - not bad, but the overall performance was poor - so I dropped & re-created the primary key index (3 columns in this index) and permanently dropped the other 3 indexes. As a result, the total number of db file sequential reads decreased about 4 times (I was expecting that - now there is only 1 index, not 4), but not only that - the average db file sequential read time fell to just 0.0014 seconds!

    On further investigation, I found from the traces that BEFORE the drop & recreate, each db file sequential read was reading completely different ("random") blocks, and AFTER the drop & recreate, the blocks accessed by db file sequential reads are almost sequentially ordered (which lets the storage array cache kick in, and I think that's why I get 0.0014 instead of 0.009)! My question is - HOW did this HAPPEN? Why did the index rebuild help so much? Was the index fragmented? And perhaps the PCTFREE 10% that I recreated the index with helped, since no index block is split now (but splits will appear in the future)?

    Important notes - the result set that I insert is, and always has been, ordered by the columns of the table's PK index. The FILESYSTEMIO_OPTIONS parameter is set to SETALL, so there is no OS cache helping my reads (I presume), because I have direct IO.

    Here is an excerpt from the trace file (wait events for a single insert operation):

    -->> BEFORE:
    WAITING #12: nam = 'db file sequential read' ela = 35089 file #= 15 block #= blocks 20534014 = 1 obj #= tim 64560 = 1294827907110090
    WAITING #12: nam = 'db file sequential read' ela = 6434 file #= 15 block #= blocks 61512424 = 1 obj #= tim 64560 = 1294827907116799
    WAITING #12: nam = 'db file sequential read' ela = 7961 file #= 15 block #= blocks 33775666 = 1 obj #= tim 64560 = 1294827907124874
    WAITING #12: nam = 'db file sequential read' ela = 16681 file #= 15 block #= blocks 60785827 = 1 obj #= tim 64560 = 1294827907143821
    WAITING #12: nam = 'db file sequential read' ela = 2380 file #= 15 block #= blocks 60785891 = 1 obj #= tim 64560 = 1294827907147000
    WAITING #12: nam = 'db file sequential read' ela = 4219 file #= 15 block #= blocks 33775730 = 1 obj #= tim 64560 = 1294827907151553
    WAITING #12: nam = 'db file sequential read' ela = 7218 file #= 15 block #= blocks 58351090 = 1 obj #= tim 64560 = 1294827907158922
    WAITING #12: nam = 'db file sequential read' ela = 6140 file #= 15 block #= blocks 20919908 = 1 obj #= tim 64560 = 1294827907165194
    WAITING #12: nam = ela 'db file sequential read' = 542 file #= 15 block #= blocks 60637720 = 1 obj #= tim 64560 = 1294827907165975
    WAITING #12: nam = 'db file sequential read' ela = 13736 file #= 15 block #= blocks 33350753 = 1 obj #= tim 64560 = 1294827907179807
    WAITING #12: nam = 'db file sequential read' ela = 57465 file #= 15 block #= blocks 59840995 = 1 obj #= tim 64560 = 1294827907237569
    WAITING #12: nam = 'db file sequential read' ela = file No. 20077 = 15 block #= blocks 11266833 = 1 obj #= tim 64560 = 1294827907257879
    WAITING #12: nam = 'db file sequential read' ela = 10642 file #= 15 block #= blocks 34506477 = 1 obj #= tim 64560 = 1294827907268867
    WAITING #12: nam = 'db file sequential read' ela = 5393 file #= 15 block #= blocks 20919972 = 1 obj #= tim 64560 = 1294827907275227
    WAITING #12: nam = 'db file sequential read' ela = 15308 file #= 15 block #= blocks 61602921 = 1 obj #= tim 64560 = 1294827907291203
    WAITING #12: nam = 'db file sequential read' ela = 11228 file #= 15 block #= blocks 34032720 = 1 obj #= tim 64560 = 1294827907303261
    WAITING #12: nam = 'db file sequential read' ela = 7885 file #= 15 block #= blocks 60785955 = 1 obj #= tim 64560 = 1294827907311867
    WAITING #12: nam = 'db file sequential read' ela = 6652 file #= 15 block #= blocks 19778448 = 1 obj #= tim 64560 = 1294827907319158
    WAITING #12: nam = 'db file sequential read' ela = 8735 file #= 15 block #= blocks 34634855 = 1 obj #= tim 64560 = 1294827907328770
    WAITING #12: nam = 'db file sequential read' ela = 14235 file #= 15 block #= blocks 61411940 = 1 obj #= tim 64560 = 1294827907343804
    WAITING #12: nam = 'db file sequential read' ela = 7173 file #= 15 block #= blocks 33350808 = 1 obj #= tim 64560 = 1294827907351214
    WAITING #12: nam = 'db file sequential read' ela = 8033 file #= 15 block #= blocks 60493866 = 1 obj #= tim 64560 = 1294827907359424
    WAITING #12: nam = 'db file sequential read' ela = 14654 file #= 15 block #= blocks 19004731 = 1 obj #= tim 64560 = 1294827907374257
    WAITING #12: nam = 'db file sequential read' ela = 6116 file #= 15 block #= blocks 34565376 = 1 obj #= tim 64560 = 1294827907380647
    WAITING #12: nam = 'db file sequential read' ela = 6203 file #= 15 block #= blocks 20920100 = 1 obj #= tim 64560 = 1294827907387054
    WAITING #12: nam = 'db file sequential read' ela = 50627 file #= 15 block #= blocks 61602985 = 1 obj #= tim 64560 = 1294827907437838
    WAITING #12: nam = 'db file sequential read' ela = 13752 file #= 15 block #= blocks 33351193 = 1 obj #= tim 64560 = 1294827907451875
    WAITING #12: nam = 'db file sequential read' ela = 6883 file #= 15 block #= blocks 58686059 = 1 obj #= tim 64560 = 1294827907459551
    WAITING #12: nam = 'db file sequential read' ela = file No. 13284 = 15 block #= blocks 19778511 = 1 obj #= tim 64560 = 1294827907473558
    WAITING #12: nam = 'db file sequential read' ela = 16678 file #= 15 block #= blocks 34226211 = 1 obj #= tim 64560 = 1294827907493010
    WAITING #12: nam = 'db file sequential read' ela = 9565 file #= 15 block #= blocks 61123267 = 1 obj #= tim 64560 = 1294827907507419
    WAITING #12: nam = 'db file sequential read' ela = 6893 file #= 15 block #= blocks 20920164 = 1 obj #= tim 64560 = 1294827907515073
    WAITING #12: nam = 'db file sequential read' ela = 9817 file #= 15 block #= blocks 61603049 = 1 obj #= tim 64560 = 1294827907525598
    WAITING #12: nam = 'db file sequential read' ela = 4691 file #= 15 block #= blocks 33351248 = 1 obj #= tim 64560 = 1294827907530960
    WAITING #12: nam = 'db file sequential read' ela = file No. 25983 = 15 block #= blocks 58351154 = 1 obj #= tim 64560 = 1294827907557661
    WAITING #12: nam = 'db file sequential read' ela = 7402 file #= 15 block #= blocks 5096358 = 1 obj #= tim 64560 = 1294827907565927
    WAITING #12: nam = 'db file sequential read' ela = 7964 file #= 15 block #= blocks 61603113 = 1 obj #= tim 64560 = 1294827907574570
    WAITING #12: nam = 'db file sequential read' ela = 32776 file #= 15 block #= blocks 33549538 = 1 obj #= tim 64560 = 1294827907608063
    WAITING #12: nam = 'db file sequential read' ela = 5674 file #= 15 block #= blocks 60493930 = 1 obj #= tim 64560 = 1294827907614596
    WAITING #12: nam = 'db file sequential read' ela = 9525 file #= 15 block #= blocks 61512488 = 1 obj #= tim 64560 = 1294827907625007
    WAITING #12: nam = 'db file sequential read' ela = 15729 file #= 15 block #= blocks 33549602 = 1 obj #= tim 64560 = 1294827907641538
    WAITING #12: nam = 'db file sequential read' ela = file No. 11510 = 15 block #= blocks 60902458 = 1 obj #= tim 64560 = 1294827907653819
    WAITING #12: nam = 'db file sequential read' ela = 26431 files #= 15 block #= blocks 59841058 = 1 obj #= tim 64560 = 1294827907680940
    WAITING #12: nam = 'db file sequential read' ela = 9196 file #= 15 block #= blocks 33350809 = 1 obj #= tim 64560 = 1294827907690434
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 60296291 = 1 obj #= tim 64560 = 1294827907698353
    WAITING #12: nam = 'db file sequential read' ela = 429 file #= 15 block #= blocks 61603177 = 1 obj #= tim 64560 = 1294827907698953
    WAITING #12: nam = 'db file sequential read' ela = 8459 file #= 15 block #= blocks 33351194 = 1 obj #= tim 64560 = 1294827907707695
    WAITING #12: nam = 'db file sequential read' ela = 25998 file #= 15 block #= blocks 49598412 = 1 obj #= tim 64560 = 1294827907733890

    2011-01-12 11:25:07.742
    WAITING #12: nam = 'db file sequential read' ela = 7988 file #= 15 block #= blocks 11357900 = 1 obj #= tim 64560 = 1294827907742683
    WAITING #12: nam = 'db file sequential read' ela = 10066 file #= 15 block #= blocks 61512552 = 1 obj #= tim 64560 = 1294827907753540
    WAITING #12: nam = 'db file sequential read' ela = 8400 file #= 15 block #= blocks 33775858 = 1 obj #= tim 64560 = 1294827907762668
    WAITING #12: nam = 'db file sequential read' ela = 11750 file #= 15 block #= blocks 60636761 = 1 obj #= tim 64560 = 1294827907774667
    WAITING #12: nam = 'db file sequential read' ela = 16933 file #= 15 block #= blocks 20533183 = 1 obj #= tim 64560 = 1294827907791839
    WAITING #12: nam = 'db file sequential read' ela = 8895 file #= 15 block #= blocks 61603241 = 1 obj #= tim 64560 = 1294827907801047
    WAITING #12: nam = 'db file sequential read' ela = file No. 12685 = 15 block #= blocks 33775922 = 1 obj #= tim 64560 = 1294827907813913
    WAITING #12: nam = 'db file sequential read' ela = file No. 12664 = 15 block #= blocks 60493994 = 1 obj #= tim 64560 = 1294827907827379
    WAITING #12: nam = 'db file sequential read' ela = 8271 file #= 15 block #= blocks 19372356 = 1 obj #= tim 64560 = 1294827907835881
    WAITING #12: nam = 'db file sequential read' ela = file No. 10825 = 15 block #= blocks 59338524 = 1 obj #= tim 64560 = 1294827907847439
    WAITING #12: nam = 'db file sequential read' ela = 13086 file #= 15 block #= blocks 49440992 = 1 obj #= tim 64793 = 1294827907862022
    WAITING #12: nam = 'db file sequential read' ela = file No. 16491 = 15 block #= blocks 32853984 = 1 obj #= tim 64793 = 1294827907879282
    WAITING #12: nam = 'db file sequential read' ela = 9349 file #= 15 block #= blocks 60133021 = 1 obj #= tim 64793 = 1294827907888849
    WAITING #12: nam = 'db file sequential read' ela = 5680 files #= 15 block #= blocks 20370585 = 1 obj #= tim 64793 = 1294827907895281
    WAITING #12: nam = 'db file sequential read' ela = 34021 file #= 15 block #= blocks 58183834 = 1 obj #= tim 64793 = 1294827907930014
    WAITING #12: nam = 'db file sequential read' ela = 8574 file #= 15 block #= blocks 32179028 = 1 obj #= tim 64793 = 1294827907938813
    WAITING #12: nam = 'db file sequential read' ela = file No. 10862 = 15 block #= blocks 49402735 = 1 obj #= tim 64793 = 1294827907949821
    WAITING #12: nam = 'db file sequential read' ela = 4501 file #= 15 block #= blocks 11270933 = 1 obj #= tim 64793 = 1294827907954533
    WAITING #12: nam = 'db file sequential read' ela = 9936 file #= 15 block #= blocks 61007523 = 1 obj #= tim 64793 = 1294827907964616
    WAITING #12: nam = 'db file sequential read' ela = 7631 file #= 15 block #= blocks 34399970 = 1 obj #= tim 64793 = 1294827907972457
    WAITING #12: nam = 'db file sequential read' ela = 6162 file #= 15 block #= blocks 60305187 = 1 obj #= tim 64793 = 1294827907978797
    WAITING #12: nam = 'db file sequential read' ela = 8555 file #= 15 block #= blocks 20912586 = 1 obj #= tim 64793 = 1294827907987532
    WAITING #12: nam = 'db file sequential read' ela = 9499 file #= 15 block #= blocks 61007587 = 1 obj #= tim 64793 = 1294827907997296
    WAITING #12: nam = 'db file sequential read' ela = 23690 file #= 15 block #= blocks 19769014 = 1 obj #= tim 64793 = 1294827908024105
    WAITING #12: nam = 'db file sequential read' ela = 7081 file #= 15 block #= blocks 61314072 = 1 obj #= tim 64793 = 1294827908031968
    WAITING #12: nam = 'db file sequential read' ela = 31727 file #= 15 block #= blocks 34026602 = 1 obj #= tim 64793 = 1294827908063914
    WAITING #12: nam = 'db file sequential read' ela = 4932 file #= 15 block #= blocks 60905313 = 1 obj #= tim 64793 = 1294827908069052
    WAITING #12: nam = 'db file sequential read' ela = 6616 file #= 15 block #= blocks 20912650 = 1 obj #= tim 64793 = 1294827908075835
    WAITING #12: nam = 'db file sequential read' ela = 8443 file #= 15 block #= blocks 33781968 = 1 obj #= tim 64793 = 1294827908084594
    WAITING #12: nam = 'db file sequential read' ela = 22291 file #= 15 block #= blocks 60641967 = 1 obj #= tim 64793 = 1294827908107052
    WAITING #12: nam = 'db file sequential read' ela = 6610 file #= 15 block #= blocks 18991774 = 1 obj #= tim 64793 = 1294827908113879
    WAITING #12: nam = 'db file sequential read' ela = 6493 file #= 15 block #= blocks 34622382 = 1 obj #= tim 64793 = 1294827908120535
    WAITING #12: nam = 'db file sequential read' ela = 5028 file #= 15 block #= blocks 20912714 = 1 obj #= tim 64793 = 1294827908125861
    WAITING #12: nam = 'db file sequential read' ela = file No. 11834 = 15 block #= blocks 61679845 = 1 obj #= tim 64793 = 1294827908137858
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 34498166 = 1 obj #= tim 64793 = 1294827908142305
    WAITING #12: nam = 'db file sequential read' ela = 19267 file #= 15 block #= blocks 60905377 = 1 obj #= tim 64793 = 1294827908161695
    WAITING #12: nam = 'db file sequential read' ela = file No. 14108 = 15 block #= blocks 19769078 = 1 obj #= tim 64793 = 1294827908176046
    WAITING #12: nam = 'db file sequential read' ela = 4128 file #= 15 block #= blocks 33781465 = 1 obj #= tim 64793 = 1294827908180396
    WAITING #12: nam = 'db file sequential read' ela = 9986 file #= 15 block #= blocks 61007651 = 1 obj #= tim 64793 = 1294827908190535
    WAITING #12: nam = 'db file sequential read' ela = 8907 file #= 15 block #= blocks 20912778 = 1 obj #= tim 64793 = 1294827908199614
    WAITING #12: nam = 'db file sequential read' ela = 12023 file #= 15 block #= blocks 34230838 = 1 obj #= tim 64793 = 1294827908211852
    WAITING #12: nam = 'db file sequential read' ela = 29837 file #= 15 block #= blocks 60905441 = 1 obj #= tim 64793 = 1294827908241853
    WAITING #12: nam = 'db file sequential read' ela = 5989 file #= 15 block #= blocks 60133085 = 1 obj #= tim 64793 = 1294827908248065
    WAITING #12: nam = 'db file sequential read' ela = 74172 file #= 15 block #= blocks 33357369 = 1 obj #= tim 64793 = 1294827908322391
    WAITING #12: nam = 'db file sequential read' ela = 5443 file #= 15 block #= blocks 60498917 = 1 obj #= tim 64793 = 1294827908328064
    WAITING #12: nam = 'db file sequential read' ela = 4645 file #= 15 block #= blocks 20912842 = 1 obj #= tim 64793 = 1294827908332912
    WAITING #12: nam = 'db file sequential read' ela = file No. 13595 = 15 block #= blocks 61679909 = 1 obj #= tim 64793 = 1294827908346618
    WAITING #12: nam = 'db file sequential read' ela = 9120 file #= 15 block #= blocks 58356376 = 1 obj #= tim 64793 = 1294827908355975
    WAITING #12: nam = 'db file sequential read' ela = 3186 file #= 15 block #= blocks 19385867 = 1 obj #= tim 64793 = 1294827908359374
    WAITING #12: nam = 'db file sequential read' ela = 5114 file #= 15 block #= blocks 61589533 = 1 obj #= tim 64793 = 1294827908364630
    WAITING #12: nam = 'db file sequential read' ela = 42263 file #= 15 block #= blocks 33356474 = 1 obj #= tim 64793 = 1294827908407045
    WAITING #12: nam = 'db file sequential read' ela = 10683 file #= 15 block #= blocks 58183898 = 1 obj #= tim 64793 = 1294827908417994
    WAITING #12: nam = 'db file sequential read' ela = file No. 10284 = 15 block #= blocks 20529486 = 1 obj #= tim 64793 = 1294827908429134
    WAITING #12: nam = 'db file sequential read' ela = file No. 12544 = 15 block #= blocks 60498981 = 1 obj #= tim 64793 = 1294827908441945
    WAITING #12: nam = 'db file sequential read' ela = 8311 file #= 15 block #= blocks 33191548 = 1 obj #= tim 64793 = 1294827908451011
    WAITING #12: nam = 'db file sequential read' ela = 4261 file #= 15 block #= blocks 59083610 = 1 obj #= tim 64793 = 1294827908455902
    WAITING #12: nam = 'db file sequential read' ela = 4653 file #= 15 block #= blocks 18991837 = 1 obj #= tim 64793 = 1294827908461264
    WAITING #12: nam = 'db file sequential read' ela = 4905 file #= 15 block #= blocks 34685472 = 1 obj #= tim 64793 = 1294827908466897
    WAITING #12: nam = 'db file sequential read' ela = file No. 12360 = 15 block #= blocks 61775403 = 1 obj #= tim 64793 = 1294827908480080
    WAITING #12: nam = 'db file sequential read' ela = 6956 file #= 15 block #= blocks 58921225 = 1 obj #= tim 64793 = 1294827908487704
    WAITING #12: nam = 'db file sequential read' ela = 6068 file #= 15 block #= blocks 19769142 = 1 obj #= tim 64793 = 1294827908494608
    WAITING #12: nam = 'db file sequential read' ela = 5249 file #= 15 block #= blocks 33781528 = 1 obj #= tim 64793 = 1294827908500666
    WAITING #12: nam = 'db file sequential read' ela = 6013 file #= 15 block #= blocks 60905505 = 1 obj #= tim 64793 = 1294827908507366
    WAITING #12: nam = 'db file sequential read' ela = 3014 file #= 15 block #= blocks 20912970 = 1 obj #= tim 64793 = 1294827908511019
    WAITING #12: nam = 'db file sequential read' ela = 3636 file #= 15 block #= blocks 33781591 = 1 obj #= tim 64793 = 1294827908515425
    WAITING #12: nam = 'db file sequential read' ela = file No. 12226 = 15 block #= blocks 58183962 = 1 obj #= tim 64793 = 1294827908528268
    WAITING #12: nam = 'db file sequential read' ela = 7635 file #= 15 block #= blocks 60499173 = 1 obj #= tim 64793 = 1294827908536613
    WAITING #12: nam = 'db file sequential read' ela = 7364 file #= 15 block #= blocks 11270996 = 1 obj #= tim 64793 = 1294827908544203
    WAITING #12: nam = 'db file sequential read' ela = 5452 file #= 15 block #= blocks 34622446 = 1 obj #= tim 64793 = 1294827908550475
    WAITING #12: nam = 'db file sequential read' ela = 9734 file #= 15 block #= blocks 20913034 = 1 obj #= tim 64793 = 1294827908561029
    WAITING #12: nam = 'db file sequential read' ela = 14077 file #= 15 block #= blocks 61679973 = 1 obj #= tim 64793 = 1294827908575440
    WAITING #12: nam = 'db file sequential read' ela = 9694 file #= 15 block #= blocks 34550681 = 1 obj #= tim 64793 = 1294827908585311
    WAITING #12: nam = 'db file sequential read' ela = 6753 file #= 15 block #= blocks 61007715 = 1 obj #= tim 64793 = 1294827908592228
    WAITING #12: nam = 'db file sequential read' ela = 12577 file #= 15 block #= blocks 19769206 = 1 obj #= tim 64793 = 1294827908604943
    WAITING #12: nam = 'db file sequential read' ela = file No. 609 = 15 block #= blocks 61589534 = 1 obj #= tim 64793 = 1294827908605735
    WAITING #12: nam = 'db file sequential read' ela = 6267 file #= 15 block #= blocks 33356538 = 1 obj #= tim 64793 = 1294827908612148
    WAITING #12: nam = 'db file sequential read' ela = 7876 file #= 15 block #= blocks 58184026 = 1 obj #= tim 64793 = 1294827908620164
    WAITING #12: nam = 'db file sequential read' ela = file No. 14058 = 15 block #= blocks 32767835 = 1 obj #= tim 80883 = 1294827908634546
    WAITING #12: nam = 'db file sequential read' ela = 9798 file #= 15 block #= blocks 58504373 = 1 obj #= tim 80883 = 1294827908644575
    WAITING #12: nam = 'db file sequential read' ela = 11081 file #= 15 block #= blocks 11118811 = 1 obj #= tim 80883 = 1294827908655908
    WAITING #12: nam = 'db file sequential read' ela = 6249 file #= 15 block #= blocks 58087798 = 1 obj #= tim 80883 = 1294827908662451
    WAITING #12: nam = 'db file sequential read' ela = 9513 file #= 15 block #= blocks 33331129 = 1 obj #= tim 80883 = 1294827908672904
    WAITING #12: nam = 'db file sequential read' ela = 4648 file #= 15 block #= blocks 60301818 = 1 obj #= tim 80883 = 1294827908677736
    WAITING #12: nam = 'db file sequential read' ela = 6147 file #= 15 block #= blocks 20523119 = 1 obj #= tim 80883 = 1294827908684075
    WAITING #12: nam = 'db file sequential read' ela = file No. 59531 = 15 block #= blocks 61016570 = 1 obj #= tim 80883 = 1294827908743795

    2011-01-12 11:25:08.752
    WAITING #12: nam = 'db file sequential read' ela = 8787 file #= 15 block #= blocks 33770842 = 1 obj #= tim 80883 = 1294827908752846
    WAITING #12: nam = 'db file sequential read' ela = 9858 file #= 15 block #= blocks 60895354 = 1 obj #= tim 80883 = 1294827908762960
    WAITING #12: nam = 'db file sequential read' ela = 11237 file #= 15 block #= blocks 19369506 = 1 obj #= tim 80883 = 1294827908775138
    WAITING #12: nam = 'db file sequential read' ela = 5838 file #= 15 block #= blocks 34229712 = 1 obj #= tim 80883 = 1294827908782100
    WAITING #12: nam = 'db file sequential read' ela = 6518 file #= 15 block #= blocks 61221772 = 1 obj #= tim 80883 = 1294827908789403
    WAITING #12: nam = 'db file sequential read' ela = 9946 file #= 15 block #= blocks 20523183 = 1 obj #= tim 80883 = 1294827908800089
    WAITING #12: nam = 'db file sequential read' ela = 16699 file #= 15 block #= blocks 61016634 = 1 obj #= tim 80883 = 1294827908817077
    WAITING #12: nam = 'db file sequential read' ela = file No. 15215 = 15 block #= blocks 33770900 = 1 obj #= tim 80883 = 1294827908832934
    WAITING #12: nam = 'db file sequential read' ela = 8403 file #= 15 block #= blocks 60895418 = 1 obj #= tim 80883 = 1294827908842317
    WAITING #12: nam = 'db file sequential read' ela = 8927 file #= 15 block #= blocks 18950791 = 1 obj #= tim 80883 = 1294827908852190
    WAITING #12: nam = 'db file sequential read' ela = 4382 files #= 15 block #= blocks 34493493 = 1 obj #= tim 80883 = 1294827908856821
    WAITING #12: nam = 'db file sequential read' ela = 9356 file #= 15 block #= blocks 61324964 = 1 obj #= tim 80883 = 1294827908866337
    WAITING #12: nam = 'db file sequential read' ela = 10575 file #= 15 block #= blocks 20883018 = 1 obj #= tim 80883 = 1294827908877102
    WAITING #12: nam = 'db file sequential read' ela = 16601 file #= 15 block #= blocks 60502307 = 1 obj #= tim 80883 = 1294827908893926
    WAITING #12: nam = 'db file sequential read' ela = 5236 file #= 15 block #= blocks 33331193 = 1 obj #= tim 80883 = 1294827908899387
    WAITING #12: nam = 'db file sequential read' ela = 9981 file #= 15 block #= blocks 59830076 = 1 obj #= tim 80883 = 1294827908910427
    WAITING #12: nam = 'db file sequential read' ela = 8100 file #= 15 block #= blocks 19767805 = 1 obj #= tim 80883 = 1294827908918751
    WAITING #12: nam = 'db file sequential read' ela = 12492 file #= 15 block #= blocks 67133332 = 1 obj #= tim 80883 = 1294827908931732
    WAITING #12: nam = 'db file sequential read' ela = 5876 file #= 15 block #= blocks 34229775 = 1 obj #= tim 80883 = 1294827908937859
    WAITING #12: nam = 'db file sequential read' ela = 8741 file #= 15 block #= blocks 61408244 = 1 obj #= tim 80883 = 1294827908948439
    WAITING #12: nam = 'db file sequential read' ela = 8477 file #= 15 block #= blocks 20523247 = 1 obj #= tim 80883 = 1294827908957099
    WAITING #12: nam = 'db file sequential read' ela = 7947 file #= 15 block #= blocks 61016698 = 1 obj #= tim 80883 = 1294827908965210
    WAITING #12: nam = 'db file sequential read' ela = 2384 file #= 15 block #= blocks 33331257 = 1 obj #= tim 80883 = 1294827908967773
    WAITING #12: nam = 'db file sequential read' ela = 3585 file #= 15 block #= blocks 59571985 = 1 obj #= tim 80883 = 1294827908971564
    WAITING #12: nam = 'db file sequential read' ela = 7753 file #= 15 block #= blocks 5099571 = 1 obj #= tim 80883 = 1294827908979647
    WAITING #12: nam = 'db file sequential read' ela = 8205 file #= 15 block #= blocks 61408308 = 1 obj #= tim 80883 = 1294827908988200
    WAITING #12: nam = 'db file sequential read' ela = 7745 file #= 15 block #= blocks 34229335 = 1 obj #= tim 80883 = 1294827908996129
    WAITING #12: nam = 'db file sequential read' ela = file No. 10942 = 15 block #= blocks 61325028 = 1 obj #= tim 80883 = 1294827909007244
    WAITING #12: nam = 'db file sequential read' ela = 6247 file #= 15 block #= blocks 20523311 = 1 obj #= tim 80883 = 1294827909013706
    WAITING #12: nam = 'db file sequential read' ela = file No. 16188 = 15 block #= blocks 60777362 = 1 obj #= tim 80883 = 1294827909030088
    WAITING #12: nam = 'db file sequential read' ela = file No. 16642 = 15 block #= blocks 33528224 = 1 obj #= tim 80883 = 1294827909046971
    WAITING #12: nam = 'db file sequential read' ela = file No. 10118 = 15 block #= blocks 60128498 = 1 obj #= tim 80883 = 1294827909057402
    WAITING #12: nam = 'db file sequential read' ela = file No. 10747 = 15 block #= blocks 802317 = 1 obj #= tim 64495 = 1294827909069165
    WAITING #12: nam = 'db file sequential read' ela = 4795 file number = 15 block #= blocks 33079541 = 1 obj #= tim 64560 = 1294827909074367
    WAITING #12: nam = 'db file sequential read' ela = 6822 file #= 15 block #= blocks 20913098 = 1 obj #= tim 64793 = 1294827909081436
    WAITING #12: nam = 'db file sequential read' ela = file No. 10932 = 15 block #= blocks 19369570 = 1 obj #= tim 80883 = 1294827909092607



    -->> AFTER:
    WAITING #23: nam = 'db file sequential read' ela = 16367 file #= 15 block #= blocks 70434065 = 1 obj #= tim 115059 = 1295342220878947
    WAITING #23: nam = 'db file sequential read' ela = 1141 file #= 15 block #= blocks 70434066 = 1 obj #= tim 115059 = 1295342220880549
    WAITING #23: nam = 'db file sequential read' ela = 456 file #= 15 block #= blocks 70434067 = 1 obj #= tim 115059 = 1295342220881615
    WAITING #23: nam = 'db file sequential read' ela = 689 file #= 15 block #= blocks 70434068 = 1 obj #= tim 115059 = 1295342220882617
    WAITING #23: nam = 'db file sequential read' ela = 495 file #= 15 block #= blocks 70434069 = 1 obj #= tim 115059 = 1295342220883482
    WAITING #23: nam = 'db file sequential read' ela = 419 file #= 15 block #= blocks 70434070 = 1 obj #= tim 115059 = 1295342220884195
    WAITING #23: nam = 'db file sequential read' ela = 149 file #= 15 block #= blocks 70434071 = 1 obj #= tim 115059 = 1295342220884629
    WAITING #23: nam = 'db file sequential read' ela = 161 file #= 15 block #= blocks 70434072 = 1 obj #= tim 115059 = 1295342220885085
    WAITING #23: nam = 'db file sequential read' ela = 146 file #= 15 block #= blocks 70434073 = 1 obj #= tim 115059 = 1295342220885533
    WAITING #23: nam = ela 'db file sequential read' = 188 file #= 15 block #= blocks 70434074 = 1 obj #= tim 115059 = 1295342220886026
    WAITING #23: nam = 'db file sequential read' ela = 181 file #= 15 block #= blocks 70434075 = 1 obj #= tim 115059 = 1295342220886498
    WAITING #23: nam = 'db file sequential read' ela = 303 file #= 15 block #= blocks 70434076 = 1 obj #= tim 115059 = 1295342220887082
    WAITING #23: nam = 'db file sequential read' ela = file No. 550 = 15 block #= blocks 70434077 = 1 obj #= tim 115059 = 1295342220887916
    WAITING #23: nam = 'db file sequential read' ela = 163 file #= 15 block #= blocks 70434078 = 1 obj #= tim 115059 = 1295342220888402
    WAITING #23: nam = 'db file sequential read' ela = 200 file #= 15 block #= blocks 70434079 = 1 obj #= tim 115059 = 1295342220888980
    WAITING #23: nam = 'db file sequential read' ela = 134 file #= 15 block #= blocks 70434080 = 1 obj #= tim 115059 = 1295342220889409
    WAITING #23: nam = 'db file sequential read' ela = 157 file #= 15 block #= blocks 70434081 = 1 obj #= tim 115059 = 1295342220889850
    WAITING #23: nam = 'db file sequential read' ela = 5112 file #= 15 block #= blocks 70434540 = 1 obj #= tim 115059 = 1295342220895272
    WAITING #23: nam = 'db file sequential read' ela = 276 file #= 15 block #= blocks 70434082 = 1 obj #= tim 115059 = 1295342220895640

    2011-01-18 10:17:00.898
    WAITING #23: nam = 'db file sequential read' ela = 2936 file #= 15 block #= blocks 70434084 = 1 obj #= tim 115059 = 1295342220898921
    WAITING #23: nam = 'db file sequential read' ela = 1843 file number = 15 block #= blocks 70434085 = 1 obj #= tim 115059 = 1295342220901233
    WAITING #23: nam = 'db file sequential read' ela = 452 file #= 15 block #= blocks 70434086 = 1 obj #= tim 115059 = 1295342220902050
    WAITING #23: nam = 'db file sequential read' ela = 686 file #= 15 block #= blocks 70434087 = 1 obj #= tim 115059 = 1295342220903031
    WAITING #23: nam = 'db file sequential read' ela = 1582 file #= 15 block #= blocks 70434088 = 1 obj #= tim 115059 = 1295342220904933
    WAITING #23: nam = 'db file sequential read' ela = 179 file #= 15 block #= blocks 70434089 = 1 obj #= tim 115059 = 1295342220905544
    WAITING #23: nam = 'db file sequential read' ela = 426 file #= 15 block #= blocks 70434090 = 1 obj #= tim 115059 = 1295342220906303
    WAITING #23: nam = 'db file sequential read' ela = 138 file #= 15 block #= blocks 70434091 = 1 obj #= tim 115059 = 1295342220906723
    WAITING #23: nam = 'db file sequential read' ela = 3004 file #= 15 block #= blocks 70434092 = 1 obj #= tim 115059 = 1295342220910053
    WAITING #23: nam = 'db file sequential read' ela = 331 file #= 15 block #= blocks 70434093 = 1 obj #= tim 115059 = 1295342220910765
    WAITING #23: nam = 'db file sequential read' ela = 148 file #= 15 block #= blocks 70434094 = 1 obj #= tim 115059 = 1295342220911236
    WAITING #23: nam = 'db file sequential read' ela = 296 file #= 15 block #= blocks 70434095 = 1 obj #= tim 115059 = 1295342220911836
    WAITING #23: nam = 'db file sequential read' ela = 441 file #= 15 block #= blocks 70434096 = 1 obj #= tim 115059 = 1295342220912581
    WAITING #23: nam = 'db file sequential read' ela = 157 file #= 15 block #= blocks 70434097 = 1 obj #= tim 115059 = 1295342220913038
    WAITING #23: nam = 'db file sequential read' ela = 281 file #= 15 block #= blocks 70434098 = 1 obj #= tim 115059 = 1295342220913603
    WAITING #23: nam = 'db file sequential read' ela = file No. 150 = 15 block #= blocks 70434099 = 1 obj #= tim 115059 = 1295342220914048
    WAITING #23: nam = 'db file sequential read' ela = 143 file #= 15 block #= blocks 70434100 = 1 obj #= tim 115059 = 1295342220914498
    WAITING #23: nam = 'db file sequential read' ela = 384 file #= 15 block #= blocks 70434101 = 1 obj #= tim 115059 = 1295342220916907
    WAITING #23: nam = 'db file sequential read' ela = file No. 164 = 15 block #= blocks 70434102 = 1 obj #= tim 115059 = 1295342220917458
    WAITING #23: nam = 'db file sequential read' ela = 218 file #= 15 block #= blocks 70434103 = 1 obj #= tim 115059 = 1295342220917962
    WAITING #23: nam = 'db file sequential read' ela = file No. 450 = 15 block #= blocks 70434104 = 1 obj #= tim 115059 = 1295342220918698
    WAITING #23: nam = 'db file sequential read' ela = file No. 164 = 15 block #= blocks 70434105 = 1 obj #= tim 115059 = 1295342220919159
    WAITING #23: nam = 'db file sequential read' ela = 136 file #= 15 block #= blocks 70434106 = 1 obj #= tim 115059 = 1295342220919598
    WAITING #23: nam = 'db file sequential read' ela = 143 file #= 15 block #= blocks 70434107 = 1 obj #= tim 115059 = 1295342220920041
    WAITING #23: nam = 'db file sequential read' ela = 3091 file #= 15 block #= blocks 70434108 = 1 obj #= tim 115059 = 1295342220925409

    user12196647 wrote:
    Hemant, Jonathan - thanks for the comprehensive replies. To summarize:

    It's an 11.1 (11.1.0.7) database on 64-bit Linux. No compression is used for anything, all blocks are 16k, and the tablespaces are ASSM BIGFILE tablespaces created with attributes EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO (one for tables and a separate one for indexes). I did not rebuild the PK index as such, but dropped all indexes and then recreated the PK (but that's okay - a rebuild is still just a drop & create, isn't it?).

    The traces were taken while the PL/SQL FORALL loop was actually inserting. Each PL/SQL FORALL iteration inserts 100 rows "at once" (we use fetch bulk collect limit 100), but the loading process inserts a few million records (the PL/SQL FORALL loop is enclosed in an outer loop :)). I pasted that part of the traces - the wait events for one set of 100 "before" inserts and 100 "after" inserts. The wait events for other inserts look exactly the same - always "random" blocks in the "before the index recreate" inserts, and almost ordered blocks in the "after the index recreate" inserts. The object_ids were definitely indexes.

    I think Hemant's explanation is correct. The point is, my index structure is (X, Y, DATE), where the X and Y values are 99% repeating at each data load, and DATE increases by one for each data load. Because I always insert ordered by the PK, when loading a new DATE I visit each index leaf block, in order, as in a full index scan. Because I insert into each leaf block on every load, I get many index block splits, which caused my leaf blocks to become physically non-contiguous after a while. The rebuild was my cure...

    You got your answer before I had time to finish a model of your data set - but I think your description is fairly accurate.
    A few thoughts, though:

    WAIT #12: nam='db file sequential read' ela= 10747 file#=15 block#=802317 blocks=1 obj#=64495 tim=1294827909069165
    WAIT #12: nam='db file sequential read' ela= 4795 file#=15 block#=33079541 blocks=1 obj#=64560 tim=1294827909074367
    WAIT #12: nam='db file sequential read' ela= 6822 file#=15 block#=20913098 blocks=1 obj#=64793 tim=1294827909081436
    WAIT #12: nam='db file sequential read' ela= 10932 file#=15 block#=19369570 blocks=1 obj#=80883 tim=1294827909092607

    Unless you missed a bit when copying your trace file, the index with object id 64495 was not subject to disk reads the way the other indexes were. It would be nice to know why. There are two "obvious" possibilities - (a) it has been very well buffered, or (b) it is an index where almost all the entries are null, so it is rarely changed. Because it has the lowest object id of all the indexes, it is possible that it is the primary key index (I can only guess - I say this because I tend to create the primary key of a table before I create any other index), and if this is correct the time you lost was not on the primary key index, and perhaps it tends to be very well buffered by the nature of popular queries.

    Changes in performance when inserting millions of rows tend to be non-linear as the number of indexes grows.

    Since you insert data in primary key order, you draw the maximum caching benefit for insertions into the PK index. And since you insert a very large number of rows - on the order of 0.5% - 1% of the current rows, in light of your comment about "millions" - you're likely to insert two or three rows into each block of the PK index (by the way, it should compress well on the first two columns), allowing Oracle to optimize its work in several ways.

    But for the other indexes you are probably jumping around very randomly to insert rows, and this leads to several different effects:


      you have to keep N times as many blocks in the buffer to get similar caching benefits
      each insertion into the non-PK index blocks is likely to be a single-row insert - which maximises the undo and redo overhead (more undo and redo written)
      each insertion into an index block eventually requires the block to be written to disk - which means more I/O, which slows down the reads
      as you read blocks (and you have to read a lot of them) you may force Oracle to write out other index blocks that will then have to be re-read later.

    It is perfectly possible that almost all of your performance gain comes from dropping the three indexes, and only a relatively small fraction from rebuilding the primary key.

    One final thought on block splits. I think you have found a benefit from (non-Oracle) readahead when the index blocks are physically ordered, so you have a trade-off between how often you rebuild to get this benefit and finding a time when you can afford the resources to rebuild. If you want the best compromise, (a) don't forget the compression option - it seems appropriate, (b) consider the benefits of date range partitioning - it seems very appropriate in your case, and (c) by varying PCTFREE when you rebuild the index you can adjust the number of insert cycles before the effects of leaf block splits have a significant impact on the randomness of the I/O.

    I have an idea - if I changed the index structure to (DATE, X, Y) then I would always insert into the rightmost leaf blocks, I'd have 90-10 splits instead of 50/50 splits, and the leaf blocks would be physically contiguous, so no rebuilds would be necessary - am I right?

    You cannot change the index column order until you have checked how the index is used. If the most important and most frequent queries are "select ... from table where colx = X and coly = Y and date_col between A and B", you must keep the index this way round. (In fact, you could look at the possibility of using a range partitioned index organized table for the data.)

    Regards
    Jonathan Lewis

  • Change the Page numbers in the index

    Hello

    I have a catalog and I added two pages in the middle of it. Now, due to the addition of the two pages, I need to update the page numbers in the index with the new page numbers. I only need to increase each page number by two, but only starting at a certain point. So, for example, in one of the indexes I have, page numbers 1-70 may remain the same, since the two extra pages were added AFTER page 70, but every number after 70 must increase by 2.

    I have a find-and-replace script. I create a table in Excel and drop it into the document. The script then takes any word or number that is listed in column A and replaces it with what is in column B. I initially used this script to find and replace product codes in the catalog, and it worked great, because all the product codes were quite unique.

    I tried to use this same script to change the page numbers, but it totally fizzled. The problem is that the same numbers are repeated later in the table, so the script keeps replacing numbers in the table itself with the first couple of values. So, here is a small sample of my table:

    Col A     Col B

    423 425

    422 424

    421 423

    420 422

    But when I run the script, here is what it will turn into:

    Col A     Col B

    423 becomes 425 425

    422 becomes 424 424

    421 becomes 425 423 becomes 425

    420 becomes 424 422 becomes 424

    Because it alters the table, all the corresponding numbers in the document then end up changed to 425 or 424.

    Is there a way to ensure that the script does not change the information in the table that it uses as a reference?

    The script is below. I didn't write it (a very nice person on these forums did) and I don't know anything about scripting.

    the_table = app.selection[0].tables[0];
    app.findChangeTextOptions = null;
    with (app.findChangeTextOptions)
    {
    caseSensitive = true;
    wholeWord = true;
    }
    app.findTextPreferences = null;
    app.changeTextPreferences = null;
    for (row = 0; row < the_table.rows.length; row++)
    {
    if (the_table.rows[row].cells[0].contents == "")
    continue;
    app.findTextPreferences.findWhat = the_table.rows[row].cells[0].contents;
    app.changeTextPreferences.changeTo = the_table.rows[row].cells[1].contents;
    app.activeDocument.changeText();
    }

    Thank you very much!

    This should work, but make sure you make a backup first. I'm not going to say it just once: first make a backup copy. Yes, I said you should make a backup first!

    Copy the following script and paste it into an appropriate editor - the Adobe ESTK which comes with InDesign is good enough. Save it as "omgwrongnumbers.jsx" in your user Scripts folder. Select as little text as possible - the script will blindly increment (or decrement) all numbers in the range - and then double-click the script to run it.

    //DESCRIPTION:omg the page numbers are all wrong!
    // A Jongware Script 18-Aug-2010
    if (app.documents.length == 0)
    {
         alert ("Oh give me some text to play with :'(");
         exit(0);
    }
    if (app.selection.length != 1)
    {
         alert ("We can't go on like this. Select some text first.");
         exit(0);
    }
    
    myDialog = app.dialogs.add ({name:"omg the numbers are wrong!",canCancel:true});
    
    with (myDialog)
    {
         with (dialogColumns.add())
         {
              with (dialogRows.add())
                   staticTexts.add ({staticLabel:"First to change"});
              with (dialogRows.add())
                   aBox = integerEditboxes.add({editContents:"1"});
              with (dialogRows.add())
                   staticTexts.add ({staticLabel:"Last to change"});
              with (dialogRows.add())
                   bBox = integerEditboxes.add({editContents:"99999"});
              with (dialogRows.add())
                   staticTexts.add ({staticLabel:"Add or subtract this value"});
              with (dialogRows.add())
                   cBox = integerEditboxes.add({editContents:"2"});
         }
    }
    if (!myDialog.show())
    {
         myDialog.destroy();
         exit(0);
    }
    first = aBox.editValue;
    last = bBox.editValue;
    step = cBox.editValue;
    
    if (first < 1 || first > last || step == 0)
    {
         alert ("Now you're pulling my nose arentya");
         exit(0);
    }
    app.findGrepPreferences = null;
    app.findGrepPreferences.findWhat = "\\b\\d+\\b";
    list = app.selection[0].findGrep(true);
    changes = 0;
    for (i=0; i<list.length; i++)
    {
         n = Number(list[i].contents);
         if (n >= first && n <= last)
              changes++, list[i].contents = String(n+step);
    }
    alert ("Number of changes: "+changes);
    
  • Optimize job runs every night but there is still index fragmentation

    Good day

    I am fairly new to Oracle Text and have read a lot of documentation. I inherited a system that runs the sync job every 5 minutes and the optimization of the text indexes every night. It seems that all is well until I check an index with CTX_REPORT - it seems the indexes are fragmented again. We are running Oracle 10.2.0.4.1 on Solaris 10. I have rebuilt the index, which removes the fragmentation, but I thought the optimization job would take care of this every night. I have attached a log for an index on a training server (sorry, I can't post my production server data), but this is also the case on my production server (same hardware/software). This was upgraded from 8i a few years ago, so I don't know if something was missed during the upgrade or if this is normal behavior. I am now checking the validity of the sync jobs as well. Any help at this point would be appreciated. I am looking at rebuilding all the indexes at this point (yes - I've read all the advantages and disadvantages). Should I be concerned with the row fragmentation at all? I removed the token listing to save space.

    Thanks in advance for your help

    Ray


    --------------------------------------------------------------------------------


    SQL> declare
    2  x clob := null;
    3  begin
    4  ctx_report.index_stats('ACIIS_AB00_USER.CTX_REMARKS_IDX', x);
    5  insert into output values (x);
    6  commit;
    7  dbms_lob.freetemporary(x);
    8  end;
    9  /

    PL/SQL procedure successfully completed.

    SQL> set long 32000
    SQL> set heading off
    SQL> set pagesize 10000
    SQL> select * from output;

    ===========================================================================
    STATISTICS FOR "ACIIS_AB00_USER"."CTX_REMARKS_IDX"
    ===========================================================================

    indexed documents: 154
    allocated docids: 154
    $I rows: 6,786

    ---------------------------------------------------------------------------
    TOKEN STATISTICS
    ---------------------------------------------------------------------------

    unique tokens: 649
    average $I rows per token: 10.46
    tokens with most $I rows:


    token statistics by type:
    token type: 0:TEXT
    unique tokens: 649
    total rows: 6,786
    average rows: 10.46
    total size: 36,528 (35.67 KB)
    average size: 56
    average frequency: 18.21



    ---------------------------------------------------------------------------
    FRAGMENTATION STATISTICS
    ---------------------------------------------------------------------------

    total size of $I data: 36,528 (35.67 KB)

    $I rows: 6,786
    estimated $I rows if optimal: 652
    estimated row fragmentation: 90%

    garbage docids: 0
    estimated garbage size: 0

    most fragmented tokens:
    780 (0:TEXT) 100%
    555 (0:TEXT) 1


    SQL> begin
    2  ctx_ddl.sync_index('ACIIS_AB00_USER.CTX_REMARKS_IDX');
    3  end;
    4  /

    PL/SQL procedure successfully completed.

    SQL> truncate table output;

    Table truncated.

    SQL> declare
    2  x clob := null;
    3  begin
    4  ctx_report.index_stats('ACIIS_AB00_USER.CTX_REMARKS_IDX', x);
    5  insert into output values (x);
    6  commit;
    7  dbms_lob.freetemporary(x);
    8  end;
    9  /

    PL/SQL procedure successfully completed.

    SQL> select * from output;

    ===========================================================================
    STATISTICS FOR "ACIIS_AB00_USER"."CTX_REMARKS_IDX"
    ===========================================================================

    indexed documents: 154
    allocated docids: 154
    $I rows: 6,786

    ---------------------------------------------------------------------------
    TOKEN STATISTICS
    ---------------------------------------------------------------------------

    unique tokens: 649
    average $I rows per token: 10.46

    average frequency per token: 18.21


    ---------------------------------------------------------------------------
    FRAGMENTATION STATISTICS
    ---------------------------------------------------------------------------

    total size of $I data: 36,528 (35.67 KB)

    $I rows: 6,786
    estimated $I rows if optimal: 652
    estimated row fragmentation: 90%

    garbage docids: 0
    estimated garbage size: 0

    most fragmented tokens:
    780 (0:TEXT) 100%
    555 (0:TEXT) 1


    SQL > spo off

    Ray,

    First of all, you probably started the optimization job from a session where NLS_LANGUAGE/NLS_LANG is not AMERICAN, and you are facing bug 2544938, fixed in 11.2.
    For a workaround, see Note 223465.1.

    Thank you
    Edwin

  • Problem with standby, cannot identify the FAL client

    Hello!
    I have a primary and a standby DB (10.2.0.1, primary on OEL5.3 x64, standby on OEL5.3 Itanium).
    Almost everything is fine, but sometimes I have this problem:

    FAL[server]: cannot identify the client from FAL, null string provided

    ORA-07445: exception encountered: core dump [<0x3f4b370560>] [SIGSEGV] [address not mapped to object] [0x403FE6D3A60] []


    Excerpt from the primary alert log:

    Wed Mar 25 15:00:30 2010
    Thread 1 cannot allocate new log, sequence 30980
    Private strand flush not complete
    Current log# 4 seq# 30979 mem# 0: /sdd/oradata/a10/redo04a.log
    Current log# 4 seq# 30979 mem# 1: /sdc/oradata/a10/redo04b.log
    Thread 1 advanced to log sequence 30980
    Current log# 5 seq# 30980 mem# 0: /sdd/oradata/a10/redo05a.log
    Current log# 5 seq# 30980 mem# 1: /sdc/oradata/a10/redo05b.log
    Wed Mar 25 15:00:37 2010
    FAL[server]: cannot identify the client from FAL, null string provided
    Wed Mar 25 15:00:37 2010
    Errors in file /app/oracle/admin/a10/udump/a10_fal_957.trc:
    ORA-07445: exception encountered: core dump [<0x3f4b370560>] [SIGSEGV] [address not mapped to object] [0x403FE6D3A60] []
    Wed Mar 25 15:00:47 2010
    ARC3: Standby redo log file selected for thread 1 sequence 30979 for destination LOG_ARCHIVE_DEST_3


    Excerpt from a10_fal_957.trc:

    FAL redo shipping client could not establish a network connection
    Exception signal: 11 (SIGSEGV), code: 1 (address not mapped to the object), ADR: 0x403fe6d3a60 PC: [0x3f4b370560, cannot find s$]
    < 0x3f4b370560 >]


    The primary parameters:

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    fal_client                           string      a10sb
    fal_server                           string
    log_archive_config                   string
    log_archive_dest_3                   string      service=a10sb
    log_archive_dest_state_3             string      ENABLE
    log_archive_format                   string      %t_%s_%r.dbf


    Excerpt from the standby alert log:

    Media Recovery Waiting for thread 1 sequence 30979
    Wed Mar 25 15:03:03 2010
    Fetching gap sequence in thread 1, gap sequence 30979-30979
    FAL[client, MRP0]: Error 3113 fetching archived redo log from a10
    Wed Mar 25 15:03:04 2010
    Errors in file /app/oracle/admin/a10sb/bdump/a10sb_mrp0_15411.trc:
    ORA-03113: end-of-file on communication channel
    Wed Mar 25 15:03:13 2010
    RFS[6]: Successfully opened standby log 1: '/sdd/oradata/a10sb/redo01a.log'
    Wed Mar 25 15:03:34 2010
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 30979 Reading mem 0
    Mem# 0 errs 0: /sdd/oradata/a10sb/redo01a.log
    Mem# 1 errs 0: /sda/a10sb/redologs/redo01b.log
    Wed Mar 25 15:05:03 2010
    Media Recovery Waiting for thread 1 sequence 30980

    Standby parameters:
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    fal_client                           string
    fal_server                           string      a10

    What is going on?
    Why "ALF redo shipping Client not establish Network Login" does it happen?
    And how I can fix without path?

    A big thank you!

    Dmitry

    FAL_CLIENT is a TNS entry for the current instance.
    FAL_SERVER is a TNS entry for the instance from which the missing archive logs are fetched.

    Please set both on the standby side. Make sure you have them in the standby's tnsnames.ora file.

  • I created a form with default text in fields for a user to update/customize. Is there a way to style the text so I can quickly identify changes to the default text in a field?

    I created a form with default text in fields for a user to update/customize. Is there a way to style the text so I can quickly identify changes to the default text in a field?

    You can use a custom validation script in each text field that looks like this:

    event.target.textFont = event.value == event.target.defaultValue ? font.HelvI : font.Helv;

    This will make the text italic (Helvetica) when the field value is the default value, and regular otherwise. There are other properties you can use instead, such as the border color, border width, background color, text color, or text size...
