Can I use fractions to set my guides, to avoid repeating decimals?

Is it possible to use fractions to set the X and Y position of a guide? I need to create a three-panel pamphlet that is 11" high with a 1/8" buffer. In decimal, my first guide is at 0.125 and my second guide must be at 3.791666666666666. I can use this number, but I fear that it is not perfectly accurate. Yes, I know this is obsessive-compulsive overkill, but I thought I would give it a shot. It would be nice if I could tell Illustrator to place my guide at 3 19/24". Thank you.

Jon,

You can enter simple expressions based on two values, such as 91/24. Your second guide is 11/3 + 1/8 = 88/24 + 3/24 = 91/24, and since 3 × 24 + 19 = 91, that is exactly 3 19/24".

Tags: Illustrator

Similar Questions

  • What setting should I change to avoid "unable to allocate new log"?

    Hello everyone.
    I'm on 9iR2 on Windows Server 2003 Standard Edition with iSCSI-attached storage. I have redo log group 1 as well as the DBF files in the data directory on disk 1. The 2 other redo groups are on 2 separate local disks, and the archived logs are in another folder.

    I'm getting "cannot allocate new log" errors every two days, and in the Event Viewer: "Archive process error: ORACLE Instance prdps - Cannot allocate log, archival required".


    I do not know what setting I should change.

    Current configuration:
    db_writer_processes 1
    dbwr_io_slaves 0



    Here is the output from v$sysstat:
    STATISTIC#  NAME                                 CLASS  VALUE
    49          DBWR checkpoint buffers written      8      7410575
    50          DBWR transaction table writes        8      7748
    51          DBWR undo block writes               8      4600265
    52          DBWR revisited being-written buffer  8      5313
    53          DBWR make free requests              8      26383
    54          DBWR free buffers found              8      19838373
    55          DBWR lru scans                       8      21831
    56          DBWR summed scan depth               8      21265425
    57          DBWR buffers scanned                 8      21265425
    58          DBWR checkpoints                     8      1719
    59          DBWR cross instance writes           40     0
    60          DBWR fusion writes                   40     0


    Here is the alert.log:
    Fri Mar 06 00:25:52 2009
    ARC0: Completed archiving log 1 thread 1 sequence 7004
    Fri Mar 06 00:25:54 2009
    Thread 1 advanced to log sequence 7006
    Current log# 3 seq# 7006 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO03A.LOG
    Current log# 3 seq# 7006 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO03B.LOG
    Current log# 3 seq# 7006 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO03C.LOG
    Fri Mar 06 00:25:54 2009
    ARC1: Evaluating archive log 2 thread 1 sequence 7005
    ARC1: Beginning to archive log 2 thread 1 sequence 7005
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07005.ARC'
    Fri Mar 06 00:26:03 2009
    Thread 1 advanced to log sequence 7007
    Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
    Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
    Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Fri Mar 06 00:26:03 2009
    ARC0: Evaluating archive log 2 thread 1 sequence 7005
    ARC0: Unable to archive log 2 thread 1 sequence 7005
    Log actively being archived by another process
    ARC0: Evaluating archive log 3 thread 1 sequence 7006
    ARC0: Beginning to archive log 3 thread 1 sequence 7006
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07006.ARC'
    Fri Mar 06 00:26:15 2009
    ARC1: Completed archiving log 2 thread 1 sequence 7005
    ARC1: Evaluating archive log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
    Log actively being archived by another process
    Fri Mar 06 00:26:16 2009
    Thread 1 cannot allocate new log, sequence 7008
    All online logs needed archiving
    Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
    Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
    Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Thread 1 advanced to log sequence 7008
    Current log# 2 seq# 7008 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO02A.LOG
    Current log# 2 seq# 7008 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO02B.LOG
    Current log# 2 seq# 7008 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO02C.LOG
    Fri Mar 06 00:26:16 2009
    ARC1: Evaluating archive log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
    Log actively being archived by another process
    ARC1: Evaluating archive log 1 thread 1 sequence 7007
    ARC1: Beginning to archive log 1 thread 1 sequence 7007


    Should I just change
    db_writer_processes 1
    dbwr_io_slaves 2

    Thank you
    Any help appreciated.

    This message indicates that Oracle wants to reuse a redo log file, but the corresponding checkpoint is not yet complete. In that case Oracle waits until the checkpoint has finished entirely. This situation is typically encountered when transactional activity is high.

    Look at these two statistics:
    - background checkpoints started
    - background checkpoints completed

    They should not differ by more than one. If they do, your database is waiting on checkpoints: LGWR cannot continue writing to the next log until the checkpoint completes.
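    You can check those two counters with a quick query (a minimal sketch; both names are standard v$sysstat statistics):

    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('background checkpoints started',
                    'background checkpoints completed');

    If "started" runs more than one ahead of "completed", the redo logs are cycling faster than the checkpoints can finish.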

    Three reasons may explain this difference:

    - a checkpoint frequency that is too high;
    - checkpoints that start but never complete;
    - a DBWR that writes too slowly.

    The way to resolve incomplete checkpoints is to tune the checkpoints and the logs:

    (1) Give the checkpoint process more time to cycle through the logs:
    - add more redo log groups
    - increase the size of the redo logs
    (2) Reduce the frequency of checkpoints:
    - increase LOG_CHECKPOINT_INTERVAL
    - increase the size of the online redo logs
    (3) Improve checkpoint efficiency by enabling the CKPT process:
    CHECKPOINT_PROCESS = TRUE
    (4) Set LOG_CHECKPOINT_TIMEOUT = 0. This disables checkpointing based on a time interval.
    (5) Another way to resolve this error is to have DBWR write the dirty buffers to disk more quickly. The parameter associated with this task is DB_BLOCK_CHECKPOINT_BATCH, which specifies the number of blocks dedicated to the batch size when writing checkpoints. If you want to speed up checkpoints, increase this value.
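    For example, point (1) could look like the statement below. This is a hedged sketch only: the group number and size are assumptions, and the member paths simply mirror the E:/F:/G: layout visible in your alert.log.

    ALTER DATABASE ADD LOGFILE GROUP 4 (
       'E:\ORACLE\ORADATA\PRDPS\REDO04A.LOG',
       'F:\ORACLE\ORADATA\PRDPS\REDO04B.LOG',
       'G:\ORACLE\ORADATA\PRDPS\REDO04C.LOG'
    ) SIZE 100M;   -- assumed size; match or exceed your current log size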

  • How to set disk permissions to avoid ORA-15063 at ASM startup?

    Hello

    When I try to restart ASM after a server reboot, I get

    ERROR: diskgroup DATA was not mounted

    ORA-15032: not all changes made

    ORA-15017: diskgroup 'DATA' cannot be mounted

    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"

    It seems the permissions of the disks changed after the reboot. I guess the spfile is inside the DATA diskgroup, because it cannot even start ASM. When I create a pfile with the asm_diskgroups and asm_diskstring values and chmod 777 the device, I manage to start ASM.

    *.asm_diskgroups = 'INDX','DATA'

    *.asm_diskstring = '/dev/*'

    $ oracleasm querydisk -p DATA

    Disk 'DATA' is a valid ASM disk

    /dev/sdb4: LABEL="DATA" TYPE="oracleasm"

    $ ls -la /dev/sdb*

    ...

    brw-rw---- 1 root disk 8, 19 May 29 10:42 /dev/sdb3

    brwxrwxrwx 1 root disk 8, 20 May 29 15:17 /dev/sdb4

    ...

    And when I read the documentation, I see this: "the owner of the ORACLE binary must have read/write permissions on the disks" (so no execute bit is required). But the problem is that this test server should have the same settings as production, and the production server has brw-r----- on the corresponding disk. So my questions are:

    Can this be handled differently somehow? How can production have reached this state without the same settings I applied in test? Is there another way for me to achieve this in test?

    The grid user (who owns the grid infrastructure) belongs to the same group in test and in production. Test and production both run ASM version 11.2.0.3, but on different operating systems.

    production: 2.6.18 - 274.12.1.el5

    test: 2.6.32 - 504.16.2.el6.x86_64

    As this is my first ASM/grid infrastructure, I don't even know where to begin researching this other than searching support.oracle.com, where I've found nothing except the Action text for the error code (which is perhaps the only way around it).

    Any help is appreciated

    Thank you

    / Jenny

    Remember that the device permissions must be configured so that the owner of the Oracle database process has full access to the appropriate device files. It is not enough to give only ASM full permission on the device files. ASM provides volume management, but the database accesses the device files directly; ASM does not sit in the I/O path. So set the owner and group accordingly. If you have a different owner for ASM, you must assign the appropriate groups to give ASM full access, for example by configuring oracleasm.
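    Once the owner/group are corrected, you can verify from the ASM instance that the disks are visible again before mounting the diskgroup. A minimal sketch using the standard v$asm_disk view:

    -- Disks that belong to a diskgroup should show HEADER_STATUS = 'MEMBER';
    -- a disk ASM cannot open will be absent or show an error status.
    SELECT path, name, header_status, mount_status
    FROM   v$asm_disk;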

  • Avoiding TRUNC in a parameterized condition on the transaction date

    I've never noticed this before, but tonight I tested a query with trunc(mmt.transaction_date) between '1 April 2012' and '20 April 2012' in TOAD, and I noticed it was taking a long time to run.

    If I remove the TRUNC and use mmt.transaction_date >= '1 April 2012' and mmt.transaction_date < '21 April 2012', the query runs a lot faster and the explain plan is also much better.

    Then I read that applying TRUNC to the transaction date prevents the index on that column from being used when searching the table.

    But when I change the condition to add 1 to the parameter instead, it does not work, and maybe someone can help. This is probably an obvious solution that I'm just not seeing.

    I created the conditions in Discoverer as:
    Transaction Date >= :Transaction Start Date
    and
    Transaction Date <= (:Transaction Start Date + 1)

    But I get the error "one of the arguments of the function has an invalid data type".

    Or maybe I'm wasting my time trying to avoid using TRUNC on the transaction date?

    Kind regards
    Jerry

    Hello

    You should be OK with the condition:

    Transaction Date >= :Transaction Start Date AND Transaction Date < TO_DATE(:Transaction Start Date) + 1

    Rod West
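    For reference, the index-friendly pattern looks like this in plain SQL. This is a sketch with hypothetical table and column names (the real table behind "mmt" is not given in the post), and the explicit TO_DATE assumes the bind arrives as a string:

    SELECT mmt.transaction_id, mmt.transaction_date    -- hypothetical columns
    FROM   my_transactions mmt                         -- hypothetical table
    WHERE  mmt.transaction_date >= TO_DATE (:start_date, 'DD-MON-YYYY')
    AND    mmt.transaction_date <  TO_DATE (:start_date, 'DD-MON-YYYY') + 1;

    Because transaction_date is left bare on the left-hand side of both comparisons, an index on it remains usable.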

  • Using BULK COLLECT INTO with the LIMIT clause to avoid running out of TEMP tablespace?

    Hi all

    I want to know if using BULK COLLECT INTO with LIMIT will help avoid running out of TEMP tablespace.

    We use Oracle 11gR1.

    I have been assigned the task of setting up logging for all the tables used by an APEX query.

    I created procedures to execute some SQL statements to do a CTAS (Create Table As Select), and then create triggers on those tables.

    We have about three tables with more than 26 million records.

    It seemed to run very well until we reached a table with more than 15 million records, when we got an error saying we were out of TEMP tablespace.

    I googled the topic and found these tips:

    Use NOLOGGING

    Use PARALLEL

    Use BULK COLLECT INTO with LIMIT

    However, the questions those address are usually about running short of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.

    The database support is outsourced, so we try to keep contact with the DBA to a minimum. My manager asked me to find a solution without asking the administrator to extend the TEMP tablespace.

    I wrote a few BULK COLLECT INTO loops to insert about 300,000 rows at a time in the development environment. It seems to work.

    But the code only works against a 4,000,000-record table. I tried to add more data to the test table, but yet again we ran out of tablespace on DEV (this time a data tablespace, not TEMP).

    I'll give it a go against the 26-million-record table in Production this weekend. I just want to know if it is worth trying.

    Thanks for reading this.

    Ann

    First I really needed to check that you did not have huge row sizes (like several KB per row); yours are not bad at all, which is good!

    A good rule of thumb for maximizing the LIMIT clause is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch calls and FORALL sections, and therefore the context switches) and set the limit as close to that amount as possible.

    Use the routines below to check which threshold value is best suited to your system, since it depends on your memory allocation and CPU consumption. Allow some flexibility based on your PGA limits, as row lengths vary, but this method will get you a good order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       -- Prefix the output with the caller-supplied context, when given
       DBMS_OUTPUT.put_line (CASE
                                WHEN context_in IS NULL
                                   THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'session PGA memory used = '
                             || TO_CHAR (l_memory)
                            );
    END show_pga_memory;
    /

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT * FROM your_table;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source source_aat;
          l_start  PLS_INTEGER;
          l_end    PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;
          show_pga_memory (limit_in || ' - BEFORE');
          l_start := DBMS_UTILITY.get_cpu_time;

          OPEN source_cur;

          LOOP
             -- Fetch LIMIT rows at a time into the collection
             FETCH source_cur
                BULK COLLECT INTO l_source
                LIMIT limit_in;

             EXIT WHEN l_source.COUNT = 0;
          END LOOP;

          CLOSE source_cur;

          l_end := DBMS_UTILITY.get_cpu_time;
          DBMS_OUTPUT.put_line ('Elapsed CPU time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start)
                               );
          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;
    /
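    Once a suitable limit is found, the copy itself can follow the same fetch pattern with a FORALL insert. This is a minimal sketch with hypothetical table names; the per-batch COMMIT keeps undo/TEMP pressure low, at the cost of restartability if the job dies midway:

    DECLARE
       CURSOR src_cur IS
          SELECT * FROM big_table;                 -- hypothetical source

       TYPE src_aat IS TABLE OF src_cur%ROWTYPE
          INDEX BY PLS_INTEGER;

       l_rows src_aat;
    BEGIN
       OPEN src_cur;

       LOOP
          -- fetch one batch of rows at a time
          FETCH src_cur BULK COLLECT INTO l_rows LIMIT 100000;
          EXIT WHEN l_rows.COUNT = 0;

          -- bulk-bind the batch into the target table
          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO big_table_log VALUES l_rows (i);   -- hypothetical target

          COMMIT;   -- per-batch commit
       END LOOP;

       CLOSE src_cur;
    END;
    /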

  • Does Photoshop Elements 13 require an internet connection?  I just paid $70 in the hope of avoiding the need to use the internet!

    Does Photoshop Elements 13 require an internet connection?  I just paid $70 in the hope of avoiding the need to use the internet!

    I'm sorry - never mind.  I think I found my problem...

  • How do I avoid the accumulation of color / opacity where two brush strokes overlap?  In other words, I want to use more than one stroke with the Paintbrush tool, but see no additive effect where strokes overlap.  What am I missing?

    How do I avoid the accumulation of color / opacity where two brush strokes overlap?  In other words, I want to use more than one stroke with the Paintbrush tool, but see no additive effect where strokes overlap. (Lightroom 5)

    I use it all the time. Set your opacity, density and flow all to 100%.

    Benjamin

  • How to avoid NULL records from the second table when using a join

    I have the following two tables:

    Table name: Emp
    EmpNo  EmpName  Salary  Dept  DeptLocation
    1      Lar      1000    1     NULL
    2      Dai      2000    2     NULL
    3      Mar      3000    3     C
    4      Apr      4000    4     NULL

    Table name: Dept
    DeptNo  DeptName  DeptLocation
    1       HR        A
    2       Dev       B
    2       Dev       NULL
    3       Test      NULL
    4       End       NULL


    I am trying to get the following result:
    EmpNo  EmpName  Salary  DeptName  DeptLocation
    1      Lar      1000    HR        A
    2      Dai      2000    Dev       B
    3      Mar      3000    Test      C
    4      Apr      4000    End       NULL


    Rules:
    - Get all records matching Emp and Dept on DeptNo
    - If the Dept table has multiple entries for the same DeptNo, take the records whose DeptLocation is not NULL and skip the record whose DeptLocation is NULL
    - Get all records matching Emp and Dept on DeptNo where DeptLocation is NULL and the DeptNo exists only once in Dept


    Thanks in advance for your suggestions.

    Hello

    So when deptlocation is in both tables but not the same, you want to take the value from the emp table, not the dept table.

    In this case, reverse the NVL arguments:

    WITH     got_rnk          AS
    (
         SELECT     deptno, deptname, deptlocation
         ,     DENSE_RANK () OVER ( PARTITION BY  deptno
                                   ORDER BY          NVL2 ( deptlocation
                                                  , 1
                                       , 2
                                       )
                           )     AS rnk
         FROM    dept
    )
    SELECT     e.empno
    ,     e.empname
    ,     e.salary
    ,     r.deptno
    ,     r.deptname
    ,     NVL ( e.deptlocation          -- Changed
             , r.deptlocation          -- Changed
             )          AS deptlocation
    FROM     emp      e
    JOIN     got_rnk     r     ON     e.dept     = r.deptno
    WHERE     r.rnk     = 1
    ;
    

    Apart from the 2 lines marked "Changed", it's the same query I posted earlier.

  • How to avoid a duplicate connection on an SQL database?

    I have a logs database that I access from both C++ and QML.

    While everything works correctly, the QML logs page complains about a duplicate connection whenever it is opened, which in turn spams the log file.

    In C++, the database is initialized once, with db being a QSqlDatabase:

    QString databaseFileName = QDir::homePath() + QDir::separator() + "logs.db";
    db = QSqlDatabase::addDatabase("QSQLITE", "loggerDatabase");
    db.setDatabaseName(databaseFileName);  // presumably lost in the original post; open() needs it
    if (!db.open()) {
        printf("Critical: Error opening logs database\n");
    }
    

    In QML, I use a very simple way to view the contents of the db: a GroupDataModel with a DataSource.

    I have set a query on the data source and fill the data model with the result:

    DataSource {
       id: dataSource
       source: loggerService.dbPath
       onDataLoaded: {
          dataModel.clear();
          dataModel.insertList(data);
       }
    }
    

    I assume that the DataSource uses SqlDataAccess to open a connection to the db, and since db already has a connection, there is a complaint:

    QSqlDatabasePrivate::addDatabase: duplicate connection name '/accounts/1000/appdata/x/data/logs.db', old connection removed.

    How can I avoid this?

    I've given up trying through QML and just populate the GroupDataModel dynamically from C++.

    I was getting the same error, for no apparent reason.

  • How to avoid the JSR-75 write alert popup on BlackBerry?

    Hello

    This is my first post on the BlackBerry developer forums, and I'm trying to port my J2ME application to BlackBerry.

    In our application we use JSR-75, and we have the VeriSign certificate that we use for Nokia and Sony Ericsson.

    In the BlackBerry document A60_How_And_When_To_Sign_V2.pdf, the only way I found to avoid the popup is carrier signing, but I cannot find any carrier-specific certificate on my BlackBerry Bold (Airtel, India).

    (1) Please help me in this area.

    (2) Is there any way to avoid this popup?

    (3) Will it be possible for BlackBerry partners?

    I use Eclipse with the BlackBerry 4.6.0 component package.

    Thanks in advance.

    Kind regards

    Samir.

    I used the ApplicationPermissions class to set the permissions, and it works.

  • How to avoid "The report query needs a unique key to identify each row"

    Hello

    How do I avoid the error below?

    The report query needs a unique key to identify each row. The supplied key cannot be used for this query. Please change the report attributes to define a unique key column.

    I have master-detail tables but without constraints; when I created a query on the dept and emp tables, it gave me the above error.

    select d.deptno, e.ename
    from dept d, emp e
    where e.deptno = d.deptno

    Thank you and best regards,

    Ashish

    Hi Mickael,

    Are you working with an interactive report?

    You must set the link column (in the report attributes) to something other than "Link to Single Row View". You can set it to "Exclude Link Column" or "Link to Custom Target".
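    If you prefer to keep the single-row-view link, another common workaround is to expose a unique key in the query yourself, for example the emp row's ROWID (a sketch; it assumes each emp row appears at most once in the result):

    select e.rowid as emp_rid, d.deptno, e.ename
    from dept d, emp e
    where e.deptno = d.deptno

    and then point the report attribute that asks for a unique key column at EMP_RID.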

  • How to avoid the glossy look and glowing nostrils?

    I just built my first Fuse character, and when I import it into Photoshop its nostrils are incandescent - as if a light is shining through the back of his head!

    This in Fuse:

    Screen Shot 2016-01-17 at 04.27.53.png

    Becomes this in Photoshop:

    Screen Shot 2016-01-17 at 04.47.21.png

    I used a brush to fix the nostrils, but have no idea how to fix the eyes - any suggestions?

    Even better - any ideas on how to avoid the glossy look and glowing nostrils in the first place?

    Thank you very much
    Malcolm

    Hey, Malcolm.

    The best way to explain: basically, rendering 3D correctly really takes a lot of time, haha.  So that you can work with the 3D model in real time and make changes quickly, we use two different rendering methods.

    There is an "Interactive" mode which doesn't have the nice lighting/shadows but is very fast - and that's what you see when you interact with the model by default.

    Then there is a "Raytraced" mode which does much more advanced calculations to give you proper lighting/shadows.  Raytraced renders can take a while, so we can't use that all the time.

    To get the proper lights/shadows, you need to perform a ray-traced render of the document.  The best way to do this:

    • Select your 3D layer in the Layers panel.
    • Make a selection in the canvas around the area that you want to render (I recommend test-rendering an area to check the lighting/shadows before committing to rendering the whole layer).
    • Press the render button at the bottom of the Properties panel (it looks like a cube in a rectangular box, right next to the delete icon).

    There are other things you can do to get the best-looking image quality, such as adding secondary lights!  You can add more lights in the 3D panel using the small light icon at the bottom.  Having 2-3 stage lights and adjusting their colors can make a big difference in how your character sits in the scene.  Here is a small image for some comparisons:

    You can see the image with two lights has much more realistic lighting and shadows, and the raytraced one- and two-light versions are much nicer and cleaner!

    Hope that helps!

  • How to avoid an OutOfMemory exception in a memory-intensive Java application?


    We have developed a Java application whose goal is to read a file (the input file), process it and convert it into a set of output files.

    (I gave a generic description of our solution, to avoid irrelevant details).

    This program works perfectly well when the input file is 4 GB, with memory settings -Xms4096m -Xmx16384m on a machine with 32 GB of RAM.

    Now we need to run our application with an input file of 130 GB.

    We used a Linux machine with 250 GB of RAM and memory settings of -Xms40g -Xmx200g (we also tried a few other variants) to run the application, and hit an OutOfMemory exception.

    At this stage of our project it is very difficult to consider redesigning the code to fit Hadoop (or some other large-scale data processing framework), and the current hardware configuration we can afford is also 250 GB of RAM.

    Can you please suggest ways to avoid the OutOfMemory exceptions? What is the general practice when developing applications of this kind?

    Thanks in advance

    Thanks a lot for all your quick responses.

    We decided to investigate Redis; it should solve the problem described in the post. All the hashmaps can be put in the database (secondary memory) and the memory exceptions can be managed.

  • How to avoid FDM moving files out of the OpenBatch folder

    Hello world

    I have a little problem with FDM Batch Processing - I need to stop files being moved out of the OpenBatch folder when a batch is executed.

    The setup, using the Task Manager, a batch load script and an integration script, all works very well. However, the process must run every 3 hours, so I need the file "A_LedgerTransLocation_Actual_nov-2013_RR.txt" to remain in the \Inbox\Batches\OpenBatch\ folder at all times.

    How do I prevent the file from being moved?


    Best regards
    Frederik




    PS: I noticed on the OTN forum that it may be possible to script a solution such as:

    Set FSO1 = CreateObject("Scripting.FileSystemObject")
    Set File1 = FSO1.GetFile("FDM Directory\FDM Application\Inbox\Batches\Openbatch\A_LedgerTransLocation_Actual_nov-2013_RR.txt")
    Set BATCHENG.PcolFiles = BATCHENG.fFileCollectionCreate(CStr(strDelimiter), File1)

    However this is not possible, as the company's controllers need to edit the .TXT file themselves. They would not be able to edit the script as well.

    I don't think you can prevent FDM from moving the file. I'm assuming the file changes each period to use the latest period for the POV, so I think the easiest option is to copy the file (based on part of the name - the location?) to a temporary location before the FDM batch starts, and write it back afterwards.

  • Best way to avoid a full table scan for a "column IS NULL" clause

    I have a query with an IS NULL check, and because of it the query performs a full table scan (there are millions of rows in the table):

    SELECT id, x,
           LAG (id) OVER (PARTITION BY userid ORDER BY <column>, id) AS p_id
    FROM   mytable
    WHERE  x IS NULL


    What is the best way for me to avoid the full table scan? I have indexes on column X and other columns.

    Thank you

    Hi Vasif

    NULL values are indexed as long as the index entry also includes a non-null value.

    If you create an index such as:

    CREATE INDEX mytable_x_idx ON mytable (x, ' ');

    this ensures all the null values for column X are indexed, so Oracle can potentially use the index to search for null values - assuming of course the result set is small enough to justify using the index in your query.
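    To confirm the optimizer actually chooses the new index for the IS NULL predicate, a quick check with the standard explain-plan tools (same assumed names as above):

    EXPLAIN PLAN FOR
       SELECT id FROM mytable WHERE x IS NULL;

    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);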

    I have written about this previously on my blog:

    http://richardfoote.WordPress.com/2008/01/23/indexing-nulls-empty-spaces/

    Cheers

    Richard Foote
    http://richardfoote.WordPress.com/
