Multiple index strategies: separator?

Just a quick question:

In the BDB XML documentation I noticed that the API reference says it is [a comma-separated list of strings | http://www.oracle.com/technology/documentation/berkeley-db/xml/api_cxx/XmlIndexSpecification_addIndex.html], while the Getting Started Guide mentions [a space-separated list | http://www.oracle.com/technology/documentation/berkeley-db/xml/gsg_xml/cxx/indexdeclarations.html]. Could someone please tell me which one is correct? :)

Thank you!
Lucas

Both comma- and space-delimited index specifications will work.

Lauren
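Lauren's answer can be illustrated with a small sketch (a hypothetical normalizer, not part of the BDB XML API) showing why both delimiter styles are equivalent once the specification string is split:

```python
import re

def parse_index_spec(spec):
    """Split a Berkeley DB XML index specification string into its
    individual index descriptions, accepting commas, spaces, or a
    mix of both as separators."""
    return [part for part in re.split(r"[,\s]+", spec.strip()) if part]

# Both delimiter styles yield the same list of index descriptions.
comma_form = "node-element-presence,node-element-equality-string"
space_form = "node-element-presence node-element-equality-string"
```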

Tags: Database

Similar Questions

  • Can you have multiple crypto isakmp policies on a router?

    I have an 1841 router as a hub for several IPSec tunnels. I have a single ISAKMP policy that looks like this:

    crypto isakmp policy 1
     encr 3des
     authentication pre-share
     group 2

    crypto isakmp key * address x.x.x.x
    crypto isakmp key * address y.y.y.y
    crypto isakmp key * address z.z.z.z

    I want to start using AES as the ISAKMP encryption protocol, but I can't be on site to change the far ends of all the other tunnels at once. Can I create another crypto isakmp policy 2 and just put the pre-shared key for new connections under it while I'm migrating?

    Thank you

    Chris

    Chris

    You can have multiple isakmp policies on your router. The router works through them in order until it finds a match. You just need to add a new isakmp policy with a different sequence number, for example:

    crypto isakmp policy 2
     encr aes
     authentication pre-share
     group 2

    This will not affect your original isakmp policy.

    Not sure what you mean by putting the pre-shared key 'under' the isakmp policy. The key is not tied to any one isakmp policy - you can see that in the configuration you posted above.

    All you need to do to switch is to configure the new isakmp policy on your 1841 and then migrate the remote ends as and when you can. The ones you have changed will use AES; the ones you have not yet changed will continue to use 3DES.

    HTH

    Jon
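Jon's point that the router works through the policies in ascending sequence number until the proposals match can be sketched as a toy model (plain Python, not IOS code; the field names are made up for illustration):

```python
# Toy model of ISAKMP policy selection: the router tries policies in
# ascending sequence number and uses the first one whose parameters
# all match the peer's proposal.
POLICIES = [
    {"seq": 1, "encr": "3des", "auth": "pre-share", "group": 2},
    {"seq": 2, "encr": "aes",  "auth": "pre-share", "group": 2},
]

def negotiate(peer_proposal):
    """Return the first matching policy, or None if none match."""
    for policy in sorted(POLICIES, key=lambda p: p["seq"]):
        if all(policy[k] == peer_proposal.get(k)
               for k in ("encr", "auth", "group")):
            return policy
    return None
```

Adding policy 2 leaves the un-migrated 3DES peers matching policy 1, while migrated AES peers match policy 2.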

  • Writing to several digital lines separated by commas

    Hello people,

    I use a USB-6009 box and want to write to several digital lines, created in this style, separated by commas:

    error = DAQmxCreateDOChan(taskSelHead,"dev3/port0/line0,dev3/port0/line6","",DAQmx_Val_ChanPerLine);

    When I try to write in this channel I do

    uInt8 data[8] = {d1, 0, 0, 0, 0, 0, d2, 0}; where d1 and d2 represent the 0 or 1 values I want written to those bits

    int error = DAQmxWriteDigitalLines (taskSelectFilter, 1, 1, 10, DAQmx_Val_GroupByChannel, data, NULL, NULL);

    The result is that only line0 is updated; line6 stays 0.

    I also tried DAQmxWriteDigitalU8 with the same effect.

    Can anyone help?

    Thanks in advance,

    Michael

    Hi Michael,

    you have defined two digital output channels in your task: line 0 and line 6. So when you write your 8-element array, d1 gets mapped straight to your line 0 - but line 6 receives the next element, which is still a zero!

    Have you tried setting uInt8 data[2] = {d1, d2}?

    Best regards
    Sebastian
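Sebastian's explanation of the buffer layout can be sketched as a toy model (plain Python, not the NI-DAQmx driver): with one channel per line, one sample per channel, and grouping by channel, element i of the write buffer drives channel i, so an 8-element buffer only delivers its first two elements to a two-channel task.

```python
# Toy model: map a write buffer onto the channels of a task created
# with one channel per line; data[i] drives channels[i], any extra
# elements are ignored.
def write_digital_lines(channels, data):
    return {ch: data[i] for i, ch in enumerate(channels)}

channels = ["dev3/port0/line0", "dev3/port0/line6"]
```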

  • Multiple VPN policies to the same peer. Is it possible?

    I am trying to create multiple VPN policies to the same peer on a TZ 105.  The peer is another SonicWall.   Whenever I create the second policy, the peer starts sending invalid ID messages back in the IKE phase 1 negotiations.

    The two policies use different source subnets and different destination subnets.  One source subnet is connected to the X0 port and the other to the X2 port.   The basic idea is for devices on the subnet connected to X0 to reach a limited number of private subnets behind the remote SonicWall.  Devices connected to X2 should tunnel all public internet traffic over the VPN and access the internet through the remote SonicWall.   There are complicated reasons behind this desired configuration.

    I am new to SonicWall, so I don't know if what I'm trying to do is even possible.  If it is, I am clearly doing something wrong.  I'll fill in more details if necessary.

    No, you can't do that. You must create one policy that contains all the networks you want to allow to use this VPN.

    Thank you
    Ben D
    Reference Dell SonicWall
    #iwork4Dell

  • index range scan

    Hello

    I was reading about the differences between the index range scan, unique scan, and skip scan.

    According to the docs on how the CBO evaluates IN-list iterators, http://docs.oracle.com/cd/B10500_01/server.920/a96533/opt_ops.htm , I can see that:

    "The IN-list iterator is used when a query contains an IN clause with values. The execution plan is identical to the one that would result for a statement with an equality clause instead of IN, with the exception of one extra step. That step occurs when the IN-list iterator feeds the equality clause with the unique values from the IN-list."

    Admittedly, that doc is for Oracle9i Database. (I cannot find it in the 11g docs.)

    And Example 2-1, the initial IN-list iterator statement, shows that an INDEX RANGE SCAN is used.


    On my Oracle 11gR2 database, if I issue a statement similar to the doc's example - select * from employees where employee_id in (7076, 7009, 7902) - I see that it uses a UNIQUE SCAN.


    On Oracle Performance Tuning: Index access methods: Oracle Tuning Tip #11: Unique Index Scan, I read that:

    "For Oracle to use the Index Unique Scan, the equality operator (=) must be used in the SQL. If any operator other than equality is used, Oracle cannot perform the Unique Index Scan."

    (and I think this sentence is somewhere in the docs also).

    So, when using IN-list predicates, why did Oracle in my case use the unique scan on the primary key column's index? It wasn't an equality predicate.

    Thank you.

    It is the Internet... you find a lot of information but don't know whom to trust.

    Exactly! That is the thought you should ALWAYS have in the back of your mind when you visit ANY site (no matter the author), read a book or document, listen to ANY presentation, or read forum responses (me included).

    All sources of information can and will have errors, omissions and inaccuracies. An example that is used to illustrate one point can imply/suggest that it applies to related points as well. It's just not possible to cover everything.

    Your 9i doc post is a good example. The older docs (even the 7.3 docs are still available online) often have MUCH better explanations and examples of basic concepts. One reason is that there weren't nearly as many advanced concepts that needed explaining; they didn't exist.

    michaelrozar17 just posted a link to a 12c doc to refute my statement that the article you used was bad. No problem. Maybe that doc was published because of these lines:

    The database performs a unique scan when the following conditions apply:

    • A query predicate references all columns in a unique index key using an equality operator, such as WHERE prod_id=10.
    • A SQL statement contains a predicate of equality on a column referenced in an index created with the CREATE UNIQUE INDEX statement.

    Do the authors mean that a unique scan is ONLY performed under these conditions? We do not know. There could be several reasons that an INLIST ITERATOR was not included in this list:

    1. an IN-list is NOT meant for this use case (what michaelrozar might be suggesting)

    2. the authors were not aware that the CBO may also consider a unique scan for an INLIST predicate

    3. the authors WERE aware but forgot to include INLIST in the document

    4. the authors simply provided the most common conditions under which a unique scan would be considered

    We have no way of knowing the real reason. That does not mean the document is unreliable.

    In the other thread, I posted about the hard parse steps from BURLESON's site, and Jonathan contradicted me. If even Burleson isn't reliable, I don't know which author has sufficient credibility... of course, both Burleson and Jonathan can say anything, and it's true that I can say anything too.

    If site X is wrong, site Y is wrong, site Z is wrong... should everyone read only the documentation and no other sites?

    That is the BEST statement about the reality of finding info that I've seen posted.

    No matter who the author is, and whatever credibility they may have built up from past articles, you should ALWAYS keep those statements in mind.

    That means you need to do 'trust and verify'. You 'trusted', then you 'verified', and now you have a conflict between the WORDS and REALITY.

    One of them is correct. If your reality is correct, the documentation is wrong. Ok. If your reality is wrong, then you need to find out why.

    Except that nobody has posted ANY REALITY that shows that your reality is wrong. IMHO, the reason is that the CBO probably does a LOT of things that are not documented and are never explored, because there is never any reason to spend time exploring them other than curiosity.

    You have not presented ANY reason why you are really concerned that a unique scan is used.

    Back to your original question:

    So, when using IN-list predicates, why did Oracle in my case use the unique scan on the primary key column's index? It wasn't an equality predicate.

    1. why not use a unique scan?

    2. what would you want Oracle to use instead? A full table scan? An index range scan? An index skip scan? An index fast full scan? An index full scan?

    A full table scan?  For three key values? When there is a unique index? I hope not.

    An index range scan? Look at the 12c doc quotes provided for those other index access paths:

    How Index Range Scans Work

    In general, the process is as follows:

    1. Read the root block.
    2. Read the branch block.
    3. Repeat the following steps until all data is retrieved:
      1. Read a leaf block to obtain a rowid.

      2. Read a table block to retrieve a row.

    . . .
    To scan the index, the database moves backward or forward through the leaf blocks. For example, a scan of IDs between 20 and 40 locates the first leaf block that has the lowest key value that is 20 or greater. The scan then proceeds horizontally through the linked list of leaf nodes until it finds a value greater than 40, and then stops.

    If that '20' was the FIRST index value and the '40' was the LAST one, that reads ALL of the leaf nodes. That doesn't look good to me.

    How Index Full Scans Work

    The database reads the root block and then navigates down the side of the index (the right or left side if doing a descending full scan) until it reaches a leaf block. The database then reads across the bottom of the index, one block at a time, in sorted order. The scan uses single-block I/O rather than multiblock I/O.

    Which is about as bad as the last example, isn't it?

    How Index Fast Full Scans Work

    The database uses multiblock I/O to read the root block and all of the leaf and branch blocks. The database ignores the branch and root blocks and reads the index entries on the leaf blocks.

    That seems no better than the last one for your use case either.

    Index Skip Scans

    An index skip scan occurs when the first column of a composite index is "skipped" or not specified in the query.

    . . .

    How Index Skip Scans Work

    An index skip scan logically splits a composite index into smaller subindexes. The number of distinct values in the leading columns of the index determines the number of logical subindexes. The lower that number, the fewer logical subindexes the optimizer must create, and the more efficient the scan becomes. The scan reads each logical subindex separately and "skips" index blocks that do not meet the filter condition on the non-leading column.

    Which does not apply to your use case: you do not have a composite index, and there is nothing to skip. If Oracle were to 'skip' between the values of the IN list, it would still have to read those 'in-between' blocks in order to skip them.

    Which brings us back to using a unique scan, one at a time, for each of the values in the IN list. The index root block will be in the cache after the first value is located, so it only needs to be read once. After that, Oracle simply detects that only ONE index entry is needed. That sounds better than any of the other variants to me, if you are only dealing with a small number of values in the IN clause.
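The block-count argument above can be sketched as a toy model (illustrative arithmetic only, nothing like the real CBO costing; the block counts are assumptions):

```python
LEAF_BLOCKS = 100  # assumed leaf block count for the unique index

def range_scan_reads(first_leaf, last_leaf):
    # one root block read, plus every leaf block between the lowest
    # and highest value of the IN list
    return 1 + (last_leaf - first_leaf + 1)

def unique_scan_reads(num_values):
    # root block read once (cached afterwards), plus one leaf block
    # probe per IN-list value
    return 1 + num_values
```

For three values spread across the whole index, repeated unique scans touch far fewer blocks than one wide range scan.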

  • Logical search strategies

    Using Oracle 9.2.0.8 on AIX, with Oracle Text. Front end is .net.

    I have a very demanding client requiring precise results (high relevance). They want to provide a simple search interface for end users (a la Google), but take advantage of more advanced logic on the back end. We deal only with French and English content... mostly indexing CLOBs and varchar2s.

    The dilemma I face is that the end user is not familiar with Oracle's advanced search syntax, but still wants accurate and relevant results. The advanced search screen offers 3 text boxes: 'All words', 'Exact phrase' and 'Any words'. There is also a 'Part of word' checkbox which, when enabled, tells the stored proc to put a wildcard on each keyword.

    Based on which text box is used, the stored procedure injects the appropriate Boolean operators and wraps each term in a FUZZY (70, weighted score) operator...

    The current search implementation, according to them, sucks.

    I was tasked with improving the search results... so far I have set up the following:

    -base_letter conversion
    -stemming (index stems)
    -fuzzy matching enabled
    -skipjoins for hyphens (so hyphenated words like post-secondary are treated as one word)
    -index_themes, prove_themes (on demand)
    -substring index (for wildcard matching)

    I also have a user_datastore to combine the 3 columns (a varchar2 and two CLOBs) that were previously indexed separately and scored on the varchar2 and one CLOB column.

    I also replaced the FUZZY query operator searches with ABOUT, using the built-in knowledge base.

    With these changes, they find the results improved, but I think more can be done.

    The stored procedure strips special characters (used by Oracle Text as operators) from the query terms and injects its own operators and Boolean logic based on which text box was used to submit the query.

    Is there a better way to do this? There seems to be a delicate balance between keeping the search interface simple while still providing advanced search functionality on the back end.

    Some scenarios I played with:

    -Wrapping each query term with ACCUM, OR'ing or AND'ing each term, depending on whether the user entered terms in 'Any words' or 'All words'. The problem here is that it ignores special characters... even a hyphen that is part of a word, even if I escape it using "\-" or "{}"... Based on the query feedback, words like "post-secondary" are searched as:

    Search for ALL words, 1, 0, CRT ACCUMULATE, 1
    Search for all words, 2, 1 CRT, on, posting, 1
    CRT search for all words, 3, 1, SUBJECT, secondariness, 2

    How does Oracle's own search handle this? AskTom?

    Thanks for the tips and advice

    Stéphane

    Any chance of moving to a supported version, I suppose?

    The best way to improve precision without losing recall is to use Progressive Relaxation - see
    http://www.Oracle.com/technology/products/text/htdocs/prog_relax.html
    but unfortunately that requires 10g.

    Otherwise, you might try something like this:

    A single-word query for dog:

    (dog)*4, about(dog)*3, %dog%*2, ?dog*1

    The ',' is the ACCUM operator and the '*' multiplies the score. So we are saying that an exact match scores highest, followed by a theme match, followed by a wildcard match, followed by a fuzzy match.

    For multiple-word searches, we want phrase matches to score highest, followed by AND, followed by OR. So for a search for 'black dog'...

    (black dog)*4, (black and dog)*3, (about(black) and about(dog))*2.5, (black or dog)*2, (%black% and %dog%)*1.5, (%black% or %dog%)*1 ... etc.

    There are some combinations missing here - and it gets exponentially more complex with more terms - but you get the picture. In addition, you could be smart when dealing with phrases...

    "camping or hiking in western europe" - there are three word groups there: camping, hiking, and western europe. Looking for stopwords will help you separate the phrases, so you can search for "western europe" as a phrase, but not worry about "camping hiking" or "hiking western".

    Moreover, I do not recommend using SUBSTRING_INDEX. It is only useful when you perform double-truncated wildcard searches, and it is rarely necessary to search for %black% or %dog%. SUBSTRING_INDEX will greatly increase the size and generation time of your index.

    Published by: Roger Ford on February 12, 2010 09:28
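Roger's single-word weighting scheme lends itself to a small query-builder sketch (a hypothetical helper, not Oracle Text itself; the weights and operator order follow his example):

```python
def scored_query(term):
    """Build an ACCUM query: exact match weighted highest, then
    theme (about), then wildcard, then fuzzy."""
    return (f"({term})*4, about({term})*3, "
            f"%{term}%*2, ?{term}*1")
```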

  • How can I stop several tabs opening along with the Mozilla Firefox Start Page at launch?

    Whenever I run Firefox, several tabs open along with the Mozilla Firefox Start Page, which is very irritating. How can I stop this happening whenever I run the browser?

    See the following for a few suggestions:

    It is also possible that there is a problem with the sessionstore.js and sessionstore.bak files in the Firefox profile folder.

    Delete the sessionstore.js and sessionstore.bak files in the Firefox profile folder.

    If you see sessionstore-#.js files with a number in the left part of the name, like sessionstore-1.js, then delete those as well.

    Deleting sessionstore.js will cause App Tabs and tab groups to be lost, so you will have to create them again (make a note of them first).


  • How do I read a counter value during a two-edge separation measurement, before the counter is stopped by the second edge (6602 board)?

    I use a timer/counter on a 6602 with DAQmx. I am using two-edge separation, available via DAQmx. Counting between the two edges works properly; however, I do not know how to read the value of the counter during the counting operation (i.e. after the first edge has triggered the start of the count, but before the second edge has triggered the end of the count). I have to wait for the second edge to fire at the end of the count before I can get a counter value. I need to be able to access the current value of the count during the count operation. This was possible in Traditional DAQ. How can it be accomplished using DAQmx?

    Ah shoot - I was afraid that might be the case (for what it's worth, my X Series hardware returned intermediate values, but the hardware and the underlying driver are quite different)...

    Do you just need to take one measurement at a time, or are you buffering several two-edge separation measurements at once?  If just one measurement at a time, you can set up an edge-count task using the internal timebase as the source, with an arm start trigger (first edge) and a sample clock (second edge), to work around the problem.

    Best regards

  • I get corrupted file indexes whenever I access files directly on my system.

    I installed Windows 7 on one partition of a RAID 0, with Windows XP on a second partition as a dual boot. The install went well and all the drivers seem to work. But it seems that any time I access files via Windows Explorer, My Computer, etc., the file indexes get corrupted. The system detects the corruption and, upon reboot, it runs the disk repair utilities and deletes several index files that it says are corrupt, and the cycle repeats. I disabled file indexing on all partitions and the problem got better, but I still occasionally have to run the disk scan and it corrects more indexing errors. Does anyone know about this problem?

    Hi clonemark,

    Thanks for the reply on the community forum.

    Have you run any hard disk diagnostic tools that your hardware vendor may provide?  Initially, it sounds like there may be some bad sectors on the hard disks themselves.  If your system is under warranty, you may contact your vendor and walk through a health test with them.  If not, they may have a tool on their site to help you with this.

    Don't forget that a RAID 0 provides no redundancy, so if either disk fails you are in danger of losing everything - the full system state and/or all of the files you want to keep.  Before anything more is done, please make a good backup of all the data on this system that you consider important.

    In addition, if the hardware is not corrupted, check whether the array that you created is corrupt.  The errors you are reporting seem to point somewhere in that area - either the data table or the hardware is damaged.  If chkdsk runs and can't find any errors, then that would not be significant for this procedure; but chkdsk running and finding errors is what is worrying.

    After verifying that the hardware and the array are healthy, boot into a clean boot state.

    Step 1: Perform a clean boot - this comes from an article for Windows Vista; however, the procedure is the same.  Don't forget to follow step 7 of the article after the troubleshooting steps are done - http://support.microsoft.com/kb/929135. Note: if the computer is connected to a network, network policy settings may prevent you from following these steps. We strongly recommend that you do not use the System Configuration utility to change startup options on the computer unless a Microsoft support engineer directs you to do so. Doing so can make the computer unusable.

    1. Log on to the computer by using an account with administrator rights.
    2. Click Start, type msconfig.exe in the Start Search box, and then press ENTER to start the System Configuration utility.

    If you are prompted for an administrator password or for confirmation, type your password or click Continue.

  • On the General tab, click Selective startup, and then click to clear the Load startup items check box. (The Use original Boot.ini check box is unavailable.)
  • On the Services tab, select the Hide all Microsoft services check box, and then click Disable all.

    Note: Following this step lets Microsoft services continue to run. These services include Networking, Plug and Play, Event Logging, Error Reporting, and other services. If you disable these services, you may permanently delete all restore points. Do not do this if you want to use System Restore with your existing restore points.

  • Click OK, and then click Restart.
  • After you restart your computer, follow these steps to run chkdsk.  This should stop the repair procedure that kept running and looping that you described in your previous post.  Run chkdsk until no more errors are detected.  Then run it once more to verify that it still shows 0 errors.

    If you still experience the problem, and all the steps listed above have been verified, run a malware removal tool and a VIRUS SCAN on all partitions of your operating system.  Check that there is no rogue software on the system.

    Let us know how things progress with this question.

    Kind regards

    Debbie
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think.

  • deletion of a partitioned index

    Hi friends,

    I am using Oracle 10.2.0.4 on Solaris.

    I have several partitioned indexes with daily partitions created since 2011. I tried to drop one of the index partitions and got the error below.

    SQL > ALTER INDEX QOSDEV.PK_RATE_CISCOMEMORYPOOL
      2  DROP PARTITION "OCTOBER 5, 2012";

    Error on line 2

    ORA-14076: submitted alter index partition/subpartition operation is not valid for local partitioned indexes

    Script done on line 2.

    My question is how to drop these partitions.

    Thank you

    DBApps

    Hello

    Try-

    ALTER TABLE RATE_CISCOMEMORYPOOL DROP PARTITION "October 5, 2012";

    Anand

  • Why doesn't a multi-column function-based index use an index skip scan?

    Hi all

    I have just been hired by a new company and have been exploring its database infrastructure. Interestingly, I see several function-based indexed columns used on all the tables. I found it strange, but they said "we use Axapta; to connect Axapta with Oracle, function-based indexes must be used to improve performance. Therefore, our DBAs create several function-based indexes for each table in the database." Unfortunately, I cannot judge their business logic.

    My question: I created similar tables in my local database in order to understand the behavior of function-based indexes on several columns. In order to create the function-based indexes (substr and nls_lower), I have to declare the columns as varchar2s, because in my company our DBAs created a number of columns with the varchar2 data type. I created two exactly identical tables for my experiment. I created a multi-column function-based index on the my_first table, and a normal multi-column index on the my_sec table. The interesting thing is that an index skip scan is not performed on the multi-column function-based index (table my_first); however, it is performed on the normal multi-column index on the my_sec table. I hope I have expressed myself clearly.

    Note: I also asked about the logic of their function-based index rule. They said that when they index a column they use a ((column length) * 2 + 1) formula. For example, to create an index on the area code column, of data type VARCHAR2(3), I have to use 3*2+1 = 7: substr(nls_lower(areacode), 1, 7). The nested substr(nls_lower()) notation is used for every function-based index. I know these things seem illogical, but they told me they use this type of implementation for Axapta.
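The DBAs' stated sizing rule can be written down as a tiny helper (a sketch of the rule exactly as described above, not an endorsement of it; the function name is made up):

```python
def fbi_expression(column, declared_length):
    """Apply the (column length * 2 + 1) rule to build the
    function-based index expression for a column."""
    n = declared_length * 2 + 1
    return f"substr(nls_lower({column}), 1, {n})"
```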

    Anyway, in this thread my question relates to function-based indexes and the index skip scan, not the business logic, because I cannot change the business logic.

    Also, can you please give any hints or clues about multi-column function-based indexes?

    Thanks for your help.


    SQL > create table my_first as select '201' areacode, to_char(100 + rownum) account_num, dbms_random.string('A', 10) name from dual connect by level <= 5000;

    Table created.

    SQL > create table my_sec as select '201' areacode, to_char(100 + rownum) account_num, dbms_random.string('A', 10) name from dual connect by level <= 5000;

    Table created.

    SQL > alter table my_first modify account_num varchar2(12);

    Modified table.


    SQL > alter table my_sec modify account_num varchar2(12);

    Modified table.

    SQL > alter table my_first modify areacode varchar2(3);

    Modified table.

    SQL > alter table my_sec modify areacode varchar2(3);

    Modified table.

    SQL > create index my_first_i on my_first (substr (nls_lower (areacode), 1, 7), substr (nls_lower (account_num), 1, 15));

    The index is created.

    SQL > create index my_sec_i on my_sec (areacode, account_num);

    The index is created.

    SQL > analyze table my_first compute statistics for all indexed columns for all indexes;

    Table analyzed.

    SQL > analyze table my_sec compute statistics for all indexed columns for all indexes;

    Table analyzed.

    SQL > exec dbms_stats.gather_table_stats (USER, 'MY_FIRST');

    PL/SQL procedure successfully completed.

    SQL > exec dbms_stats.gather_table_stats (USER, 'MY_SEC');

    PL/SQL procedure successfully completed.

    SQL > desc my_first
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    AREACODE                                           VARCHAR2(3)
    ACCOUNT_NUM                                        VARCHAR2(12)
    NAME                                               VARCHAR2(4000)

    SQL > desc my_sec
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    AREACODE                                           VARCHAR2(3)
    ACCOUNT_NUM                                        VARCHAR2(12)
    NAME                                               VARCHAR2(4000)

    SQL > select * from my_sec where account_num = '4000';


    Execution plan
    ----------------------------------------------------------
    Hash value of plan: 1838048852

    ---------------------------------------------------------------------------------------
    | Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |          |     1 |    19 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| MY_SEC   |     1 |    19 |     3   (0)| 00:00:01 |
    |*  2 |   INDEX SKIP SCAN           | MY_SEC_I |     1 |       |     2   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("ACCOUNT_NUM"='4000')
           filter("ACCOUNT_NUM"='4000')


    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
              7  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    SQL > select * from my_first where substr(nls_lower(account_num), 1, 25) = '4000';


    Execution plan
    ----------------------------------------------------------
    Hash value of plan: 1110109060

    ------------------------------------------------------------------------------
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |          |     1 |    20 |     9  (12)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| MY_FIRST |     1 |    20 |     9  (12)| 00:00:01 |
    ------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter(SUBSTR(NLS_LOWER("MY_FIRST"."ACCOUNT_NUM"),1,15)='4000'
                  AND SUBSTR(NLS_LOWER("ACCOUNT_NUM"),1,25)='4000')


    Statistics
    ----------------------------------------------------------
             15  recursive calls
              0  db block gets
             26  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    SQL > select /*+ INDEX_SS(MY_FIRST) */ * from my_first where substr(nls_lower(account_num), 1, 25) = '4000';


    Execution plan
    ----------------------------------------------------------
    Hash value of plan: 2466066660

    -----------------------------------------------------------------------------------------
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |            |     1 |    20 |    17   (6)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| MY_FIRST   |     1 |    20 |    17   (6)| 00:00:01 |
    |*  2 |   INDEX FULL SCAN           | MY_FIRST_I |     1 |       |    16   (7)| 00:00:01 |
    -----------------------------------------------------------------------------------------


    Information of predicates (identified by the operation identity card):
    ---------------------------------------------------

    1 - filter (SUBSTR (NLS_LOWER ("ACCOUNT_NUM"), 1, 25) = '4000')
    2 - access (SUBSTR (NLS_LOWER ("ACCOUNT_NUM"), 1, 15) = '4000')
    Filter (substr (NLS_LOWER ("ACCOUNT_NUM"), 1, 15) = '4000')


    Statistics
    ----------------------------------------------------------
             15  recursive calls
              0  db block gets
            857  consistent gets
              0  physical reads
              0  redo size
            543  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Check MoS for a bug with FBIs and skip scans - it sounds like it could be a bug.

    On 11.2.0.4, with your sample code, the 10053 trace shows the optimizer considering an INDEX FULL SCAN at the point where it should be considering an INDEX SKIP SCAN for the "single table access path".

    Perhaps someone with a 12.1.0.1 instance handy would like to run your test and see if it's fixed in that version.
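
    For anyone who wants to look at the same evidence, a typical way to capture the 10053 trace for the statement above is as follows (a sketch; the query is the one from this thread, a hard parse is needed for the trace to be written, and the trace file location depends on your diagnostic_dest):

    ```sql
    -- Enable the optimizer (10053) trace for this session.
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

    -- Run the statement to be traced.
    SELECT /*+ INDEX_SS(my_first) */ *
    FROM   my_first
    WHERE  SUBSTR(NLS_LOWER(account_num), 1, 25) = '4000';

    -- Switch the trace off again.
    ALTER SESSION SET EVENTS '10053 trace name context off';

    -- Find where the trace file was written (11g+).
    SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
    ```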

    Regards

    Jonathan Lewis

  • Combining several PDFs

    In Adobe Reader, can you combine several PDF documents and add a hyperlinked table of contents?  For example, I have several strategy documents that my boss wants merged into a single PDF with a linked table of contents.  I can't find this option in Reader XI; can I do it in Pro?

    You will need Adobe Acrobat. Either Standard or Pro will do.

  • INITRANS for TABLE and INDEX

    Hello

    My DB version is 10.2.0.3

    By default, INITRANS for tables is 1 and for indexes it is 2. For some reason I have to increase the INITRANS of a table to 5; what value should I then set for the INITRANS of the indexes on that table?

    I read somewhere that the INITRANS of an index should be set to twice the table's INITRANS, meaning that if the table's INITRANS is 5 then the index's INITRANS must be set to 10. Is this true? If yes, what is the logic behind it?



    Thank you
    Oratest

    Edited by: oratest on Feb 4, 2013 15:26

    oratest wrote:

    My DB version is 10.2.0.3

    By default, INITRANS for tables is 1 and for indexes it is 2. For some reason I have to increase the INITRANS of a table to 5; what value should I then set for the INITRANS of the indexes on that table?

    I read somewhere that the INITRANS of an index should be set to twice the table's INITRANS, meaning that if the table's INITRANS is 5 then the index's INITRANS must be set to 10. Is this true?

    No.
    There are no useful guidelines for a generic setting. Consider these contrasting scenarios:

    (a) Five sessions insert five separate rows into a single table block - needing 5 ITL slots. There is an index on the table, but the five inserted rows happen to have such different values that their index entries go to 5 different leaf blocks: the index doesn't need initrans 5.

    (b) 80 different sessions each insert one row into a table - the nature of ASSM means that (on average) the 80 rows are inserted, 5 rows per block, into 16 different blocks - the table's initrans needs to be 5. However, in this case the indexed column is generated by a sequence and all 80 index entries have to go into the same leaf block, so you'd need initrans 80 on the index. (Except that Oracle will do index leaf block splits to work around the problem of initrans being too small on concurrent inserts.)

    You need to consider the nature of the index and the pattern of data change for each individual index separately - and then you could set initrans on the index to be NO MORE THAN the table's initrans + 1, because (a) that won't waste too much space, (b) it's not likely to allow too much contention to happen, (c) Oracle will extend the ITL list dynamically in most cases, and (d) if it's wrong for one particular index you'll notice it fairly quickly - and there will be other problems (buffer busy waits) showing up on that index anyway.
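
    If you do decide to raise the settings, the mechanics are worth knowing: an INITRANS change only affects blocks formatted after the change, so existing blocks have to be rebuilt to pick it up. A sketch (table and index names are hypothetical):

    ```sql
    -- New blocks only: raise initrans on the table and its index.
    ALTER TABLE my_table INITRANS 5;
    ALTER INDEX my_table_idx INITRANS 6;   -- table initrans + 1, per the advice above

    -- To reformat the existing blocks as well, rebuild the segments.
    ALTER TABLE my_table MOVE INITRANS 5;
    ALTER INDEX my_table_idx REBUILD INITRANS 6;
    ```

    (Remember that ALTER TABLE ... MOVE invalidates the table's indexes, so the index rebuild would be needed afterwards anyway.)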

    Regards
    Jonathan Lewis

  • Bad DDL generation for partitioned primary key indexes - partitions lost

    Hello

    In our database design, we want to partition some tables and their indexes. The partitioning type is hash; the indexes must be local, i.e. also partitioned. No problem setting the storage of the table partitions. Also no problem with the index partitions, except for the primary key.

    Our problem appears in the DDL generation: the partitions of the tables and indexes are generated fine, except for the primary key partitions - it generates the primary key as an 'alter table' phrase, as if it were not partitioned.

    Surely primary key indexes should be generated like the other indexes, shouldn't they?

    Thanks in advance,

    Bernat Fabregat

    Edited by: Berni on Nov 29, 2010 12:37

    Hello Bernat,

    For local partitioning, you need to create a separate index on the PK column (if you don't have one already). Define the partitioning for that index, and then, in the dialog for the primary key in the physical model:
    (1) on the 'General' tab, in the 'using index' clause - select 'by index name';
    (2) on the 'Using Index' tab, in the 'Existing Index' drop-down list - select the defined index.

    Global partitioning can be defined directly on the primary key in the physical model.
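
    In SQL terms, the DDL the tool should end up generating follows the standard Oracle pattern of creating the local index first and then attaching the constraint to it. A sketch with hypothetical table and column names:

    ```sql
    -- Hash-partitioned table.
    CREATE TABLE orders (
        order_id NUMBER NOT NULL,
        payload  VARCHAR2(100)
    )
    PARTITION BY HASH (order_id) PARTITIONS 4;

    -- Local (equi-partitioned) unique index on the PK column.
    CREATE UNIQUE INDEX orders_pk ON orders (order_id) LOCAL;

    -- Attach the primary key constraint to the existing local index.
    ALTER TABLE orders ADD CONSTRAINT orders_pk
        PRIMARY KEY (order_id) USING INDEX orders_pk;
    ```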

    Philippe

  • FrameMaker 8: Possible to combine identical index entries on the same page?

    Is it possible in FrameMaker to have several identical index entries on one page 'collapse' into a single entry?

    For example, if I have a document which includes index entries for "Adams, John" on pages 13 and 14, the index shows "Adams, John, 13, 14". If I make a change that causes the second entry to appear on the same page as the first (for example, by not including certain conditional text), the index shows "Adams, John, 13, 13".

    Other than removing the second index marker, is it possible to have the index generate as "Adams, John, 13"?

    Thank you!

    Were the index entries added within FM, or were they imported as text from another application (such as Word)?

    If you really want to dig into this, I would save the file as MIF and then use the freebie MIFBrowse

    http://www.grahamwideman.com/GW/tech/FrameMaker/mifbrowse.htm

    to look at each entry (if you haven't seen MIF before, it can be a little 'dense' to digest, but the best approach is to first search for a fairly unique word that should be just before the entries, then move down through the screen until you find the index entries).

    Or, if time is critical, I would just delete the entries, save, re-create them - being extremely careful that there are no differences in them - and update.

    Another, less likely, possibility is that the index page reference has been inadvertently damaged - you could try adding a new index to the book and see if it generates correctly.

    Edit: for the other issues, please post separate forum threads ("topics") for each of them; otherwise it's hard for anyone to offer advice when several problems are addressed in a single thread.  And it always helps if you specify exactly which version of FM you're using (Help > About, the "pxxx" numbers) and your platform + service pack level too - it avoids a lot of confusion.

    Sheila

    Post edited by: Sheila
