Seq context in multiple executions

Hi, I need to change the values of TestStand local variables directly from LabVIEW, outside the sequence context. That is, I have a VI that runs standalone; it is not called by TestStand, so I can only use the TestStand API. Through that API I can access the station globals, as you can see in the attached file. I use the batch/parallel model, so I need the context of each sequence that is running.

Can someone help me?

I found a workaround using the TestStand PostUIMessage, but it does not work properly.

Thank you


Tags: NI Software

Similar Questions

  • Multiple execution plans for an AWR sql_id

    Hi Experts,

    I see multiple execution plans for the AWR sql_id, and I have the following questions:

    1. Which plan is the optimizer currently using?
    2. How can I make sure the optimizer chooses a good plan?


    SQL> select * from table (dbms_xplan.display_awr ('fb0p0xv370vmb'));

    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------------------------------
    SQL_ID fb0p0xv370vmb
    --------------------

    Plan hash value: 417907468

    ---------------------------------------------------------------------------------------------------------------
    | Id | Operation                             | Name           | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT                      |                |       |       |         | 63353 (100)|          |
    |  1 |  FOR UPDATE                           |                |       |       |         |            |          |
    |  2 |   SORT ORDER BY                       |                | 17133 | 2978K |   3136K | 63353   (1)| 00:14:47 |
    |  3 |    HASH JOIN RIGHT SEMI               |                | 17133 | 2978K |         | 62933   (1)| 00:14:42 |
    |  4 |     COLLECTION ITERATOR PICKLER FETCH |                |       |       |         |            |          |
    |  5 |     HASH JOIN RIGHT SEMI              |                | 68530 |   11M |         | 62897   (1)| 00:14:41 |
    |  6 |      VIEW                             | VW_NSO_1       |  5000 | 35000 |         | 33087   (1)| 00:07:44 |
    |  7 |       COUNT STOPKEY                   |                |       |       |         |            |          |
    |  8 |        VIEW                           |                |  127K |  868K |         | 33087   (1)| 00:07:44 |
    |  9 |         SORT GROUP BY STOPKEY         |                |  127K | 2233K |     46M | 33087   (1)| 00:07:44 |
    | 10 |          TABLE ACCESS FULL            | ASYNCH_REQUEST | 1741K |   29M |         | 29795   (1)| 00:06:58 |
    | 11 |      TABLE ACCESS FULL                | ASYNCH_REQUEST | 1741K |  280M |         | 29801   (1)| 00:06:58 |
    ---------------------------------------------------------------------------------------------------------------

    SQL_ID fb0p0xv370vmb
    --------------------
    SELECT ASYNCH_REQUEST_ID, REQUEST_STATUS, REQUEST_TYPE, REQUEST_DATA, PRIORITY, SUBMIT_BY, SUBMIT_DATE.

    Plan hash value: 2912273206

    --------------------------------------------------------------------------------------------------------------------------
    | Id | Operation                                | Name                   | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT                         |                        |       |       |         | 45078 (100)|          |
    |  1 |  FOR UPDATE                              |                        |       |       |         |            |          |
    |  2 |   SORT ORDER BY                          |                        |  1323 |  257K |         | 45078   (1)| 00:10:32 |
    |  3 |    TABLE ACCESS BY INDEX ROWID           | ASYNCH_REQUEST         |     1 |   190 |         |     3   (0)| 00:00:01 |
    |  4 |     NESTED LOOPS                         |                        |  1323 |  257K |         | 45077   (1)| 00:10:32 |
    |  5 |      MERGE JOIN CARTESIAN                |                        |  5000 | 45000 |         | 30069   (1)| 00:07:01 |
    |  6 |       SORT UNIQUE                        |                        |       |       |         |            |          |
    |  7 |        COLLECTION ITERATOR PICKLER FETCH |                        |       |       |         |            |          |
    |  8 |       BUFFER SORT                        |                        |  5000 | 35000 |         | 30034   (1)| 00:07:01 |
    |  9 |        VIEW                              | VW_NSO_1               |  5000 | 35000 |         | 30033   (1)| 00:07:01 |
    | 10 |         SORT UNIQUE                      |                        |  5000 | 35000 |         |            |          |
    | 11 |          COUNT STOPKEY                   |                        |       |       |         |            |          |
    | 12 |           VIEW                           |                        | 81330 |  555K |         | 30033   (1)| 00:07:01 |
    | 13 |            SORT GROUP BY STOPKEY         |                        | 81330 | 1429K |   2384K | 30033   (1)| 00:07:01 |
    | 14 |             TABLE ACCESS FULL            | ASYNCH_REQUEST         | 86092 | 1513K |         | 29731   (1)| 00:06:57 |
    | 15 |      INDEX RANGE SCAN                    | ASYNCH_REQUEST_SUB_IDX |     1 |       |         |     1   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------------------------------

    Plan hash value: 3618200564

    --------------------------------------------------------------------------------------------------------------------------------
    | Id | Operation                                | Name                         | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT                         |                              |       |       |         | 59630 (100)|          |
    |  1 |  FOR UPDATE                              |                              |       |       |         |            |          |
    |  2 |   SORT ORDER BY                          |                              |  4474 |  777K |         | 59630   (1)| 00:13:55 |
    |  3 |    HASH JOIN RIGHT SEMI                  |                              |  4474 |  777K |         | 59629   (1)| 00:13:55 |
    |  4 |     VIEW                                 | VW_NSO_1                     |  5000 | 35000 |         | 30450   (1)| 00:07:07 |
    |  5 |      COUNT STOPKEY                       |                              |       |       |         |            |          |
    |  6 |       VIEW                               |                              | 79526 |  543K |         | 30450   (1)| 00:07:07 |
    |  7 |        SORT GROUP BY STOPKEY             |                              | 79526 | 1397K |   7824K | 30450   (1)| 00:07:07 |
    |  8 |         TABLE ACCESS FULL                | ASYNCH_REQUEST               |  284K | 5003K |         | 29804   (1)| 00:06:58 |
    |  9 |     TABLE ACCESS BY INDEX ROWID          | ASYNCH_REQUEST               | 71156 |   11M |         | 29141   (1)| 00:06:48 |
    | 10 |      NESTED LOOPS                        |                              | 71156 |   11M |         | 29177   (1)| 00:06:49 |
    | 11 |       SORT UNIQUE                        |                              |       |       |         |            |          |
    | 12 |        COLLECTION ITERATOR PICKLER FETCH |                              |       |       |         |            |          |
    | 13 |       INDEX RANGE SCAN                   | ASYNCH_REQUEST_EFFECTIVE_IDX |  327K |       |         |   392   (1)| 00:00:06 |
    --------------------------------------------------------------------------------------------------------------------------------

    Thank you
    -Raj

    Published by: tt0008 on August 22, 2012 20:34

    Hello

    (1) you can see which plan has been used lately by running this query:

    select begin_interval_time, plan_hash_value
    from dba_hist_sqlstat st,
            dba_hist_snapshot sn
    where st.snap_id = sn.snap_id
    and sql_id = 'fb0p0xv370vmb'
    order by begin_interval_time desc;
    

    However, there is no guarantee that the next time you run this query, the latest plan will be chosen.
    Periodically, the plan is regenerated (for example, when new statistics are gathered, or the structure of a table referenced in
    the query is changed, etc.), and you can get any of the 4 plans, or even a new one, as a function of many factors
    (statistics, bind variable values, optimizer settings, NLS settings, etc.).

    (2) this question is too large for the answer to fit into a thread; there are books written on the subject. The short answer is:
    if you know which of the 4 plans is right for you, then you can use a stored outline to lock it in (it seems that you are not on 11g, so SQL plan baselines are not an option for you).
    Or you can try to find out why the optimizer generates different plans and address the underlying issue (the most common reason is bind peeking - but of
    course, we need to know more, starting with the text of your query).
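    A minimal sketch of locking in a plan with a stored outline follows; the outline name, category, and statement text here are hypothetical (the outline must be created on your exact query text):

    ```sql
    -- Create an outline for the statement (hypothetical query text):
    CREATE OUTLINE lock_asynch_plan FOR CATEGORY app_outlines
    ON SELECT asynch_request_id FROM asynch_request WHERE request_status = :b1;

    -- Have sessions pick up outlines from that category:
    ALTER SESSION SET use_stored_outlines = app_outlines;
    ```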

    Best regards
    Nikolai

  • Fuzzy search using the CONTEXT index with multiple parameters

    Hi, my previous post was about CATSEARCH and CONTEXT searching, and I had advice from Roger and Barbara (both extremely useful ~ thanks). I am now focusing on searching with the CONTEXT index, but I ran into a problem with fuzzy search. I did some research but didn't get very far; I think it is a question for the experts once again. Sorry for the trouble, and sincere thanks for your time.

    ------------------------------
    Test data
    ----------------------------
    CREATE TABLE cust_catalog
      (id        NUMBER (16),
       forename  VARCHAR2 (80),
       surname   VARCHAR2 (80),
       birthdate DATE,
       gender    VARCHAR2 (10)
      )
    /

    INSERT ALL
    INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (2, 'Xavier', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (9, 'William John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (13, 'Chrissie', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (14, 'Yau', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (19, 'Jones', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    SELECT * FROM DUAL
    /

    EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS', ',''."+-()/');
    EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');

    EXEC CTX_DDL.CREATE_PREFERENCE ('forename_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('forename_datastore', 'COLUMNS', 'forename');

    EXEC CTX_DDL.CREATE_PREFERENCE ('surname_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('surname_datastore', 'COLUMNS', 'surname');

    EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'birthdate', 'birthdate', TRUE);

    CREATE INDEX forename_context_idx ON cust_catalog (forename) INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY gender
    PARAMETERS
      ('DATASTORE      forename_datastore
        SECTION GROUP  your_sec
        LEXER          cust_lexer
        WORDLIST       cust_wildcard_pref')
    /

    CREATE INDEX surname_context_idx ON cust_catalog (surname) INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY gender
    PARAMETERS
      ('DATASTORE      surname_datastore
        SECTION GROUP  your_sec
        LEXER          cust_lexer
        WORDLIST       cust_wildcard_pref')
    /


    -- This SQL works OK if not fuzzy
    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'John|Chris') > 0 AND CONTAINS (surname, 'Smith|Miles|Mil') > 0

    -- cleaning
    DROP TABLE cust_catalog;
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    EXEC CTX_DDL.DROP_PREFERENCE ('forename_datastore');
    EXEC CTX_DDL.DROP_PREFERENCE ('surname_datastore');
    EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    DROP INDEX forename_context_idx;
    DROP INDEX surname_context_idx;
    -------------------
    Questions here
    -------------------
    1. I have a problem when I try to implement fuzzy search:

    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'FUZZY({John|Chris}, 1, 100, WEIGHT)') > 0
    AND CONTAINS (surname, 'Smith|Miles|Mil') > 0

    2. I would also like to add a date-of-birth range to the search; I don't know the best way to do this, please comment:

    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'FUZZY({John|Chris}, 1, 100, WEIGHT)') > 0
    AND CONTAINS (surname, 'Smith|Miles|Mil') > 0
    AND birthdate BETWEEN TO_DATE('01/02/1970','DD/MM/YYYY') AND TO_DATE('11/07/1980','DD/MM/YYYY')

    3. There is a nightly job that inserts new rows into this table. Referring to Roger's advice, I need to create a trigger to update the index; does this mean dropping and re-creating forename_context_idx and surname_context_idx every time the nightly job runs?

    Published by: Emily Robertson on November 14, 2012 13:02

    For maximum efficiency, you should use one index and one CONTAINS clause. You should put all of your text columns, such as forename, surname, and gender, into your MULTI_COLUMN_DATASTORE and section group. You can use FILTER BY and SDATA to add the date column to the index. When you query a date using SDATA, the date must be in YYYY/MM/DD format.

    You can only create the index on one text column. The column you create the index on must be the column that you search, and it should be the column that gets updated, so that Oracle Text recognizes that there was an update.

    FUZZY can only be applied to a single term (word), so you will need to use concatenation and REPLACE to change something like 'fuzzy(word1|word2)' into 'fuzzy(word1) | fuzzy(word2)'.
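    The concatenate-and-REPLACE step can be checked on its own; the expression below expands a pipe-delimited list into one FUZZY call per term:

    ```sql
    -- Each '|' in the term list becomes '…, 1, 100, WEIGHT) | FUZZY (…':
    SELECT '(FUZZY (' || REPLACE ('john|chris', '|', ', 1, 100, WEIGHT) | FUZZY (')
           || ', 1, 100, WEIGHT))' AS fuzzy_expr
    FROM dual;
    -- (FUZZY (john, 1, 100, WEIGHT) | FUZZY (chris, 1, 100, WEIGHT))
    ```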

    In the following example, I used the table, lexer, and wordlist you provided. I used a MULTI_COLUMN_DATASTORE with all three text columns (forename, surname, and gender) and put all three of those columns in the section group. I added a dummy column called any_column and created the index on that column, filtering by birthdate, using the datastore, section group, lexer, and wordlist, and adding SYNC (ON COMMIT), which causes the index to synchronize whenever a row is inserted or deleted, or the any_column column that the index is created on is updated. I then inserted the data to show that the synchronization takes place. Finally, I showed a query using all of the features you requested, and the explain plan shows that it uses the one index to access everything, so it is a very efficient query.

    You will still need to optimize, rebuild, or drop and recreate the index periodically to reduce the index fragmentation caused by frequent synchronization.
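    For that periodic maintenance, CTX_DDL.OPTIMIZE_INDEX is usually enough; a sketch (run it during a quiet period):

    ```sql
    -- Full optimization of the context index to undo fragmentation from SYNC:
    EXEC CTX_DDL.OPTIMIZE_INDEX ('all_columns_context_idx', 'FULL');
    ```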

    -- script:

    -- table, lexer, and wordlist you provided:
    CREATE TABLE cust_catalog
      (id                NUMBER   (16),
       forename          VARCHAR2 (80),
       surname           VARCHAR2 (80),
       birthdate         DATE,
       gender            VARCHAR2 (10))
    /
    EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS' , ',''."+-()/');
    EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');
    
    -- revised datastore, section group, added column, and index:
    EXEC CTX_DDL.CREATE_PREFERENCE ('names_and_gender_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('names_and_gender_datastore', 'COLUMNS', 'forename, surname, gender');
    
    EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'gender', 'gender', TRUE);
    
    ALTER TABLE cust_catalog ADD (any_column  VARCHAR2(1))
    /
    CREATE INDEX all_columns_context_idx
    ON cust_catalog (any_column)
    INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY birthdate
    PARAMETERS
      ('DATASTORE      names_and_gender_datastore
        SECTION GROUP  your_sec
        LEXER          cust_lexer
        WORDLIST       cust_wildcard_pref
        SYNC           (ON COMMIT)')
    /
    -- data you provided:
    INSERT ALL
    INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (2, 'Emaily', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (9, 'Willam John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (13, 'Chrissie','Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (14, 'Chrisy', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (19, 'Jone', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    SELECT * FROM DUAL
    /
    COMMIT
    /
    -- query:
    COLUMN forename FORMAT A10
    COLUMN surname  FORMAT A10
    SET AUTOTRACE ON EXPLAIN
    SELECT * FROM cust_catalog
    WHERE CONTAINS
            (any_column,
             '(FUZZY (' || REPLACE ('john|chris', '|', ', 1, 100, WEIGHT) | FUZZY (')
              || ', 1, 100, WEIGHT) WITHIN forename) AND
              (FUZZY (' || REPLACE ('smith|Miles|Mil', '|', ', 1, 100, WEIGHT) | FUZZY (')
              || ', 1, 100, WEIGHT) WITHIN surname) AND
              (SDATA (birthdate BETWEEN ''1970/01/02'' AND ''1980/11/07''))') > 0
    /
    SET AUTOTRACE OFF
    -- cleaning
    DROP TABLE cust_catalog;
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    EXEC CTX_DDL.DROP_PREFERENCE ('names_and_gender_datastore');
    EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    

    -- execution:

    SCOTT@orcl_11gR2> -- table, lexer, and wordlist you provided:
    SCOTT@orcl_11gR2> CREATE TABLE cust_catalog
      2    (id            NUMBER   (16),
      3       forename       VARCHAR2 (80),
      4       surname        VARCHAR2 (80),
      5       birthdate       DATE,
      6       gender            VARCHAR2 (10))
      7  /
    
    Table created.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS' , ',''."+-()/');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> -- revised datastore, section group, added column, and index:
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_PREFERENCE ('names_and_gender_datastore', 'MULTI_COLUMN_DATASTORE');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.SET_ATTRIBUTE ('names_and_gender_datastore', 'COLUMNS', 'forename, surname, gender');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'gender', 'gender', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> ALTER TABLE cust_catalog ADD (any_column  VARCHAR2(1))
      2  /
    
    Table altered.
    
    SCOTT@orcl_11gR2> CREATE INDEX all_columns_context_idx
      2  ON cust_catalog (any_column)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY birthdate
      5  PARAMETERS
      6    ('DATASTORE     names_and_gender_datastore
      7        SECTION GROUP     your_sec
      8        LEXER          cust_lexer
      9        WORDLIST     cust_wildcard_pref
     10        SYNC          (ON COMMIT)')
     11  /
    
    Index created.
    
    SCOTT@orcl_11gR2> -- data you provided:
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male', null)
      3  INTO cust_catalog VALUES (2, 'Emaily', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female', null)
      4  INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male', null)
      5  INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male', null)
      6  INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female', null)
      7  INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male', null)
      8  INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male', null)
      9  INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male', null)
     10  INTO cust_catalog VALUES (9, 'Willam John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male', null)
     11  INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male', null)
     12  INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male', null)
     13  INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female', null)
     14  INTO cust_catalog VALUES (13, 'Chrissie','Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     15  INTO cust_catalog VALUES (14, 'Chrisy', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     16  INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     17  INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     18  INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     19  INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     20  INTO cust_catalog VALUES (19, 'Jone', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     21  INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     22  INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     23  INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     24  INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     25  INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     26  INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     27  SELECT * FROM DUAL
     28  /
    
    25 rows created.
    
    SCOTT@orcl_11gR2> COMMIT
      2  /
    
    Commit complete.
    
    SCOTT@orcl_11gR2> -- query:
    SCOTT@orcl_11gR2> COLUMN forename FORMAT A10
    SCOTT@orcl_11gR2> COLUMN surname  FORMAT A10
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> SELECT * FROM cust_catalog
      2  WHERE CONTAINS
      3            (any_column,
      4             '(FUZZY (' || REPLACE ('john|chris', '|', ', 1, 100, WEIGHT) | FUZZY (')
      5              || ', 1, 100, WEIGHT) WITHIN forename) AND
      6              (FUZZY (' || REPLACE ('smith|Miles|Mil', '|', ', 1, 100, WEIGHT) | FUZZY (')
      7              || ', 1, 100, WEIGHT) WITHIN surname) AND
      8              (SDATA (birthdate BETWEEN ''1970/01/02'' AND ''1980/11/07''))') > 0
      9  /
    
            ID FORENAME   SURNAME    BIRTHDATE GENDER     A
    ---------- ---------- ---------- --------- ---------- -
             1 John       Smith      10-MAR-71 Male
             8 John       Smith      07-NOV-72 Male
            10 Emma John  Mil        06-APR-79 Male
            11 Jon        Smith      19-SEP-77 Male
            13 Chrissie   Smith      21-MAY-75 Male
            14 Chrisy     Smith      21-MAY-75 Male
            15 Chrisi     Smith      21-MAY-75 Male
            19 Jone       Smith      21-MAY-75 Male
            20 Johan      Smith      21-MAY-75 Male
            21 John       Smithie    21-MAY-75 Male
            22 Chris      Smithey    21-MAY-75 Male
            23 John       Smithy     21-MAY-75 Male
            24 Chris      Smithie    21-MAY-75 Male
            25 John       Smithke    21-MAY-75 Male
    
    14 rows selected.
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1894062579
    
    -------------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                         |     1 |   127 |     4   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| CUST_CATALOG            |     1 |   127 |     4   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | ALL_COLUMNS_CONTEXT_IDX |       |       |     4   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("CTXSYS"."CONTAINS"("ANY_COLUMN",'(FUZZY (john, 1, 100, WEIGHT) | FUZZY (chris,
                  1, 100, WEIGHT) WITHIN forename) AND           (FUZZY (smith, 1, 100, WEIGHT) | FUZZY (Miles,
                  1, 100, WEIGHT) | FUZZY (Mil, 1, 100, WEIGHT) WITHIN surname) AND           (SDATA (birthdate
                  BETWEEN ''1970/01/02'' AND ''1980/11/07''))')>0)
    
    Note
    -----
       - dynamic sampling used for this statement (level=2)
    
    SCOTT@orcl_11gR2> SET AUTOTRACE OFF
    SCOTT@orcl_11gR2> -- cleaning
    SCOTT@orcl_11gR2> DROP TABLE cust_catalog;
    
    Table dropped.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('names_and_gender_datastore');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    
    PL/SQL procedure successfully completed.
    
  • limit multiple executions of a procedure

    Hi all

    I would like to limit simultaneous executions of a procedure; in other words, I want to serialize it. That is simple enough with a locking mechanism, but this procedure has a ref cursor parameter, and the lock is released as soon as the ref cursor is opened and control reaches the end of the procedure. I want to prevent users from executing this procedure again until that ref cursor is closed. Is there a way to achieve this in Oracle DB?

    Example Code:

    CREATE OR REPLACE PROCEDURE SH.SALES_BY_PRODUCT(P OUT SYS_REFCURSOR) IS
    BEGIN
    ..... --Lock here
      OPEN P FOR
        SELECT PROD_ID, SUM(AMOUNT_SOLD) AS AMOUNT FROM SH.SALES GROUP BY PROD_ID;
    ..... --Release the lock here --This approach of locking is not helping as the lock is released even before the cursor P is closed by the calling application.
    END;
    
    

    Thank you!

    You could take an application lock using the DBMS_LOCK package, which can be held across transaction boundaries (commits) until you release it or the session ends.
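    A hedged sketch of that approach, applied to your procedure (the lock name is made up, and error handling is minimal):

    ```sql
    CREATE OR REPLACE PROCEDURE SH.SALES_BY_PRODUCT (P OUT SYS_REFCURSOR) IS
      l_handle VARCHAR2 (128);
      l_status INTEGER;
    BEGIN
      -- Map a name of our choosing to a unique lock handle:
      DBMS_LOCK.ALLOCATE_UNIQUE ('SALES_BY_PRODUCT_LOCK', l_handle);
      -- Take an exclusive lock that survives commits; fail fast if already held:
      l_status := DBMS_LOCK.REQUEST (lockhandle        => l_handle,
                                     lockmode          => DBMS_LOCK.X_MODE,
                                     timeout           => 0,
                                     release_on_commit => FALSE);
      IF l_status <> 0 THEN
        RAISE_APPLICATION_ERROR (-20001, 'SALES_BY_PRODUCT is already running');
      END IF;
      OPEN P FOR
        SELECT PROD_ID, SUM (AMOUNT_SOLD) AS AMOUNT FROM SH.SALES GROUP BY PROD_ID;
      -- The lock is NOT released here; it is held until the session calls
      -- DBMS_LOCK.RELEASE (after closing the cursor) or the session ends.
    END;
    /
    ```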

    HTH

  • Multiple execution plans

    Hi all

    Is there a query I can use to see whether there are multiple execution plans for a query?
    I tried with V$SQL, where CHILD_NUMBER can be 0 or 1 at a time.

    What interests me is all the plans for a particular SQL statement.

    Also, where can I get good technical material on stored outlines and SQL profiles? I have some links but not enough detail on them.

    Thank you

    Rana.

    user582224 wrote:
    Is there a query I can use to see whether there are multiple execution plans for a query?
    I tried with V$SQL, where CHILD_NUMBER can be 0 or 1 at a time.

    What interests me is all the plans for a particular SQL statement.

    Rana,

    If you are interested in what you have right now in the shared pool, you can start with V$SQL, V$SQLAREA, V$SQLSTATS and V$SQL_PLAN.

    Note that different execution plans for the same statement may already have aged out of the shared pool, which is why this information does not necessarily reflect the number of plan versions you may have had over time.
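    For example, a quick check in the shared pool (the `&sql_id` substitution variable is a placeholder for the statement you care about):

    ```sql
    -- Statements currently having more than one plan in the shared pool
    SELECT sql_id, COUNT(DISTINCT plan_hash_value) AS plan_count
    FROM   v$sql
    GROUP  BY sql_id
    HAVING COUNT(DISTINCT plan_hash_value) > 1;

    -- Children and their plans for one particular statement
    SELECT sql_id, child_number, plan_hash_value, executions
    FROM   v$sql
    WHERE  sql_id = '&sql_id'
    ORDER  BY child_number;
    ```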

    If you are already on 10g or later, you can use the DBMS_XPLAN.DISPLAY_CURSOR function. If you specify NULL for the second parameter ("cursor_child_no"), it will show you the execution plans of all child cursors of the specified SQL_ID.
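    For example, reusing the SQL_ID from the question as a placeholder, with NULL as the child number to get every child cursor's plan:

    ```sql
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('fb0p0xv370vmb', NULL, 'TYPICAL'));
    ```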

    If you are on 10g or later and have an AWR license, you can use DBA_HIST_SQL_PLAN and DBMS_XPLAN.DISPLAY_AWR to get the stored execution plans for a particular SQL_ID. Note that AWR may not capture every different execution plan of your particular statement, since it only samples the top consumers according to certain thresholds.

    If you do not have an AWR license, you can still get a similar historical view of your plans using STATSPACK with snapshot level >= 6, which captures SQL execution plans. Again, your statement might not be sampled, depending on the load and thresholds (STATSPACK documentation: "to collect plans for all statements in the shared pool, you can temporarily specify the threshold of executions (i_executions_th) to be zero (0) for these snapshots").
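    A sketch of listing what AWR has captured for one statement over time (assumes licensed access to the DBA_HIST views; `&sql_id` is a placeholder):

    ```sql
    SELECT s.sql_id, s.plan_hash_value,
           MIN(s.snap_id)           AS first_snap,
           MAX(s.snap_id)           AS last_snap,
           SUM(s.executions_delta)  AS executions
    FROM   dba_hist_sqlstat s
    WHERE  s.sql_id = '&sql_id'
    GROUP  BY s.sql_id, s.plan_hash_value;
    ```

    Each distinct PLAN_HASH_VALUE here is a different plan the statement ran with during the retained snapshot history.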

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Multiple execution plans for the same SQL statement

    Dear experts,

    awrsqrpt.sql shows several execution plans for a single SQL statement. How is it possible that a single SQL statement has several execution plans within the AWR report?

    Here is the output of the awrsqrpt for your reference.

    WORKLOAD REPOSITORY SQL Report
    
    Snapshot Period Summary
    
    DB Name         DB Id    Instance     Inst Num Release     RAC Host
    ------------ ----------- ------------ -------- ----------- --- ------------
    TESTDB          2157605839 TESTDB1               1 10.2.0.3.0  YES testhost1
    
                  Snap Id      Snap Time      Sessions Curs/Sess
                --------- ------------------- -------- ---------
    Begin Snap:     32541 11-Oct-08 21:00:13       248     141.1
      End Snap:     32542 11-Oct-08 21:15:06       245     143.4
       Elapsed:               14.88 (mins)
       DB Time:               12.18 (mins)
    
    SQL Summary                            DB/Inst: TESTDB/TESTDB1  Snaps: 32541-32542
    
                    Elapsed
       SQL Id      Time (ms)
    ------------- ----------
    51szt7b736bmg     25,131
    Module: SQL*Plus
    UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(ACCT_DR_BAL,
    0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND TEST_ACC_NB = ACCT_ACC_NB(+)) WHERE
     TEST_BATCH_DT = (:B1 )
    
              -------------------------------------------------------------
    
    SQL ID: 51szt7b736bmg                  DB/Inst: TESTDB/TESTDB1  Snaps: 32541-32542
    -> 1st Capture and Last Capture Snap IDs
       refer to Snapshot IDs witin the snapshot range
    -> UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(AC...
    
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    --- ---------------- ---------------- ------------- ------------- --------------
    1   2960830398                 25,131             1         32542          32542
    2   3834848140                      0             0         32542          32542
              -------------------------------------------------------------
    
    
    Plan 1(PHV: 2960830398)
    -----------------------
    
    Plan Statistics                        DB/Inst: TESTDB/TESTDB1  Snaps: 32541-32542
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
       into the Total Database Time multiplied by 100
    
    Stat Name                                Statement   Per Execution % Snap
    ---------------------------------------- ---------- -------------- -------
    Elapsed Time (ms)                            25,131       25,130.7     3.4
    CPU Time (ms)                                23,270       23,270.2     3.9
    Executions                                        1            N/A     N/A
    Buffer Gets                               2,626,166    2,626,166.0    14.6
    Disk Reads                                      305          305.0     0.3
    Parse Calls                                       1            1.0     0.0
    Rows                                        371,735      371,735.0     N/A
    User I/O Wait Time (ms)                         564            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                 26            N/A     N/A
              -------------------------------------------------------------
    
    Execution Plan
    ------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT             |                 |       |       |  1110 (100)|          |
    |   1 |  UPDATE                      | TEST            |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEST            |   116K|  2740K|  1110   (2)| 00:00:14 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| ACCT            |     1 |    26 |     5   (0)| 00:00:01 |
    |   4 |    INDEX RANGE SCAN          | ACCT_DT_ACC_IDX |     1 |       |     4   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------
    
    
    
    
    Plan 2(PHV: 3834848140)
    -----------------------
    
    Plan Statistics                        DB/Inst: TESTDB/TESTDB1  Snaps: 32541-32542
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
       into the Total Database Time multiplied by 100
    
    Stat Name                                Statement   Per Execution % Snap
    ---------------------------------------- ---------- -------------- -------
    Elapsed Time (ms)                                 0            N/A     0.0
    CPU Time (ms)                                     0            N/A     0.0
    Executions                                        0            N/A     N/A
    Buffer Gets                                       0            N/A     0.0
    Disk Reads                                        0            N/A     0.0
    Parse Calls                                       0            N/A     0.0
    Rows                                              0            N/A     N/A
    User I/O Wait Time (ms)                           0            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                 26            N/A     N/A
              -------------------------------------------------------------
    
    Execution Plan
    ---------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT             |              |       |       |     2 (100)|          |
    |   1 |  UPDATE                      | TEST         |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| TEST         |     1 |    28 |     2   (0)| 00:00:01 |
    |   3 |    INDEX RANGE SCAN          | TEST_DT_IND  |     1 |       |     1   (0)| 00:00:01 |
    |   4 |   TABLE ACCESS BY INDEX ROWID| ACCT         |     1 |    26 |     4   (0)| 00:00:01 |
    |   5 |    INDEX RANGE SCAN          | INDX_ACCT_DT |     1 |       |     3   (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------
    
    
    
    Full SQL Text
    
    SQL ID       SQL Text
    ------------ -----------------------------------------------------------------
    51szt7b736bm UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL, 0) +
                  NVL(ACCT_DR_BAL, 0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND PB
                 RN_ACC_NB = ACCT_ACC_NB(+)) WHERE TEST_BATCH_DT = (:B1 )
    Your contribution is very much appreciated.

    Thank you for taking your time to answer my question.


    Concerning

    Oracle Lover3 wrote:
    How will I know (between Plan 1 and Plan 2) which execution plan was chosen for the current run?

    Since you're already on 10.2, you can identify the actual execution plan by checking the SQL_ID and SQL_CHILD_NUMBER columns in V$SESSION. These can be used to identify the plan in V$SQL_PLAN (columns SQL_ID and CHILD_NUMBER), and on 10g you can use the convenient DBMS_XPLAN.DISPLAY_CURSOR function to display the actual plan using these two parameters.
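    For example (`&sid`, `&sql_id` and `&child_number` are placeholders for the session and cursor of interest):

    ```sql
    -- Which child cursor is the session currently executing?
    SELECT sql_id, sql_child_number
    FROM   v$session
    WHERE  sid = &sid;

    -- Show exactly that child's plan
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_number));
    ```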

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • What is the best way to remove a hum across multiple clips

    I have about 40 short clips with the same buzz in the background. What is the best way to get rid of it? I know how to do it for one video, but can I do them all at once? Where in the workflow is the best time to do this? Should I use an adjustment layer?

    Thank you!

    I always do hum removal as the first operation. As Bob says, it depends on the hum, but if it's typical mains-induced hum then the DeHummer tool is the best one to use. Once you have found the right settings, you can save them as a favorite and use batch processing to remove it from all clips at once.

  • executing multiple DAC execution plans, is this possible?

    Hi guru BI

    is it possible to run several execution plans in the same container?

    Or, if we have several child containers such as HR and Finance, will the execution plan be executed in each container separately?
    What are the impacts on the DAC server?

    Kind regards
    Kamlesh

    We can create multiple execution plans in a single container.
    But they can only be scheduled one after another, not concurrently.

  • Two parallel executions, calling a DLL function

    Hello

    Since this test takes about 6 hours per UUT, I plan to use the parallel model to test 2 UUTs at the same time in parallel.

    I implemented the test code as a CVI DLL.

    However, to my surprise, it seems that the steps that call a DLL function actually run in series, not in parallel:

    With two test sockets, if one enters and executes a DLL function, the other waits for the first one to complete its operation and return. Both run the same copy of the DLL, so the DLL's global variables are actually shared between the executions.

    So if one DLL call takes 5 minutes to complete, two executions running at the same time take 10 minutes. That isn't parallel execution at all.

    What I wanted, and expected from TestStand, was to completely isolate the DLL copies of these two executions, so that the two test sockets could run the same DLL function at the same time, each executing its own copy, completely isolated from one another.

    They would have separate globals, threads, etc., and two parallel sockets would take 5 minutes to run a step, instead of 10.

    Is such a scenario possible?

    If not, how can I run my test truly in parallel when using the 2-socket test?

    (1) Yes, multiple executions in TestStand calling into the same DLL call into the same copy of that DLL in memory. Thus DLLs called in this way must be thread-safe (that is, written in a way that is safe for multiple threads running the code at the same time). This usually means avoiding the use of global variables, among other things. Instead, you can store per-thread state in local variables within your sequence and pass it into the DLL as a parameter as needed. Keep in mind that any DLLs your DLL calls must also be thread-safe, or you need to synchronize the calls into those other DLLs with locks or other synchronization primitives.
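    A minimal C sketch of that per-thread-state idea (the names `TestState` and `run_measurement` are purely illustrative, not TestStand APIs): the caller owns the state and passes it in as a parameter, so two sockets can run the same function concurrently without sharing anything.

    ```c
    #include <stdio.h>

    /* Per-socket state owned by the caller (e.g. kept in a TestStand
       local variable), instead of a file-scope DLL global. */
    typedef struct {
        int socket_index;   /* which test socket this state belongs to */
        int measurements;   /* running count for this socket only      */
    } TestState;

    /* Thread-safe: touches only the state it is handed, no globals. */
    int run_measurement(TestState *state, int reading)
    {
        state->measurements++;
        return state->socket_index * 1000 + reading;  /* dummy result */
    }

    int main(void)
    {
        TestState socket0 = {0, 0}, socket1 = {1, 0};

        /* Each socket uses its own state; no sharing, no locks needed. */
        printf("%d\n", run_measurement(&socket0, 42));  /* 42   */
        printf("%d\n", run_measurement(&socket1, 42));  /* 1042 */
        printf("%d %d\n", socket0.measurements, socket1.measurements);
        return 0;
    }
    ```

    The same pattern applies whether the state is a handle, a buffer, or a struct: as long as nothing in the DLL is written to shared storage, the two executions cannot interfere.
    
    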

    (1b) Even if your DLLs are not thread-safe, you might still be able to get some benefit from parallel execution by using the auto-schedule step type and splitting your sequence into independent sections which can be performed in any order. What this does is allow you to run Test A on one socket and Test B on the other socket in parallel, and then, once they are done, perhaps Test B will run on the first and Test A on the other. In this way, as long as each test is independent of the others, you can safely run them in parallel even if it is not possible to run the same test in parallel at the same time (that is, if you cannot run Test A on two sockets at the same time, you might still get a benefit from parallelism by running Test B on one socket while Test A runs on the other; see the online help for the auto-schedule step type for more details).

    (2) Socket executions (and really all TestStand executions) are separate threads within the same process. Since they are in the same process, global variables in the DLL are effectively shared between them. TestStand station globals are also shared between them. TestStand file globals, however, are not shared between executions (each execution gets its own copy) unless you enable that setting in the sequence file properties dialog box.

    (3) Of course, using an index as a way to distinguish data accesses is perfectly valid. Just be careful that what each thread does cannot affect data that other threads access. For example, if you have a global array with 2 elements, one for each test socket, you can safely index into the array, and that way you are not sharing data between threads even though you use a global variable; but the array should be fully created before the threads start running, or its creation must be synchronized in some way, otherwise one thread could try to access the data while the other thread is still creating it. Basically, you need to make sure that the creation/deletion, modification, and access of global data in one thread does not affect the global data the other thread uses in any way, or you must protect that creation/deletion, modification, and access with locks, mutexes, or critical sections.

    Hope this helps,

    -Doug

  • IPSec VPN in security contexts... shared interface or not?

    Hello

    At the moment I have a pair of ASA5510s configured in multiple-context mode. Everything is OK, but until now we only use the ACL functionality.

    Now I would be interested in configuring 2 contexts with IPSec VPN, one VPN per context. But I can't find any information on whether it would be possible to use a common interface for both contexts. My goal is simply to save public IPs...

    If I have to configure 100 VPNs in 100 contexts, do I need 100 public IPs?

    Thanks to anyone who can give me a tip,

    Kind regards

    Olivier

    Hello

    If you have separate IP addresses on the same subnet, you can attach these interfaces to different contexts.

    You simply configure a subinterface with a VLAN ID that is connected to the ISP gateway. You can assign this subinterface to as many contexts as you want, but the IP address on the interface must naturally be different in each context. To my knowledge the ASA will actually prevent you from configuring an IP address if it sees it on the same subinterface in a different context.
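    As a sketch of that setup (interface names, VLAN ID, file names, and addresses are illustrative assumptions): in the system context you allocate the same subinterface to both contexts, and each context then gives it its own IP on the shared subnet.

    ```
    ! System context
    interface GigabitEthernet0/0.100
     vlan 100
    !
    context ctx1
     allocate-interface GigabitEthernet0/0.100
     config-url disk0:/ctx1.cfg
    !
    context ctx2
     allocate-interface GigabitEthernet0/0.100
     config-url disk0:/ctx2.cfg

    ! Inside each context (different IP per context, same subnet)
    interface GigabitEthernet0/0.100
     nameif outside
     security-level 0
     ip address 203.0.113.11 255.255.255.0   ! ctx2 would use e.g. .12
    ```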

    -Jouni

  • What is the best way to open, close, and pass instrument handles with LabVIEW and the TestStand parallel model?

    I have a number of test systems that use a parallel model with LabVIEW. We have a number of (PXI) instruments.

    What is the preferred method to open and close instrument handles and pass them between LabVIEW and TestStand?

    Hello

    A few ways:

    1. Pass the session back as a U32 handle; there are a few TestStand VIs in the TestStand palette that you can use for the conversion.

    2. Through a LabVIEWIOControl; TestStand handles these.

    3. Do something fancy such as using LVGOOP and keeping the handle as an object property, left in memory; i.e., do not pass it back at all.

    One thing: you'll have to watch for multiple executions trying to talk to the same instrument; use some lock/unlock or synchronization to avoid this.

    Concerning

    Ray Farmer

  • Multi-context ASA SSL VPN question

    Hello

    We have a pair of firewalls on which we run multiple contexts for clients. We recently upgraded them and have been using the newly supported AnyConnect client. This all works fine, but I feel I'm missing something. If the customer does not already have the AnyConnect client, how do they get it? Normally you go to the web page and it downloads the client, but all I get is "Clientless VPN is not supported in Multiple context mode.", which is fine, but how is the customer supposed to get the client in the first place?

    Any information would be helpful.

    Chris L.

    Hi Chris,

    The AnyConnect WebLaunch feature is not supported on an ASA running in multiple-context mode.

    There is an enhancement request that was opened to allow this, along with other features, while the ASA is in multiple-context mode. Here is the link you can refer to:

    https://Tools.Cisco.com/bugsearch/bug/CSCuw19758/?reffering_site=dumpcr

    Kind regards

    Aditya

    Please rate useful posts and mark the correct answers.

  • ASA load balancing to two routers

    Hi all

    Is there any way I can balance workloads across both routers?

    I have an ASA with two attached routers. Each router has two HSRP instances running, each with its own IP address, and each router is primary for one of the HSRP instances. If there were no ASA in the path, I would set DHCP to rotate the default gateway between the two routers and, hey presto, have load balancing (of a sort). However, I can't do that, as the ASA has only a single internal IP address. The routers handle the NAT because they are on different IP ranges with different Internet service providers.

    I can't use GLBP, as the external IP changing would break VPN, RDP, and SMTP connections.

    Is there any way I can make the ASA route based on the source IP address, or any other means to split the traffic between the two routers?

    Thanks in advance,

    Scott

    You cannot route based on source IP with the firewall alone; with a router it is possible (e.g. with policy-based routing).

    You could give each subnet a static route pointing to a different router with a different metric, but in that case the topology becomes active/standby, which is not good in your case.

    However, you can use subinterfaces: have the ASA route each subinterface into a different subnet with a different security level, and let each subinterface use a different HSRP instance.

    Or there is another way: if you are not using VPN on your ASA, you can go to multiple-context mode.

    In multiple-context mode you split your firewall virtually, so if you have two VLANs in your network (two different subnets), each subnet effectively uses a different firewall.

    You would divide the internal interface into two subinterfaces, and on the outside you can use a shared interface between the contexts or separate it into two subinterfaces, and then assign these interfaces to each context.

    Each context then acts as a separate firewall, and you can use a different HSRP instance in each context.

    But in multiple-context mode you cannot use VPN on the firewall.

    The other way I also suggest you try is the transparent firewall.

    In that case your firewall works in L2 mode, so you can use the routers' HSRP IPs as if there were no firewall in the path, which I think is also useful in your case.

    In transparent mode the default gateway for your clients will be the HSRP IP, because the firewall will not have any IPs except for management.

    The users will also be in the same IP subnet as the gateway, in your case the HSRP VIP.

    And you can still control network security through the firewall normally.

    Try this way and let me know.

    See the following link for the configuration

    http://www.Cisco.com/en/us/products/HW/vpndevc/ps2030/products_configuration_example09186a008089f467.shtml

    Please rate if useful

  • ASA contexts - is a standby address necessary?

    I have two ASA 5545-X in multiple-context, active/active failover mode, and I don't understand whether I need to configure standby IP addresses on the interfaces of the contexts which act as inside and outside.

    You don't have to, but I always do.

    Having a standby IP also allows the ASA to check the health of the other ASA. Otherwise, it is limited to layer-2 checks only.
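    For example (interface name and addresses are illustrative), the standby IP is configured on the interface inside each context:

    ```
    interface GigabitEthernet0/1
     nameif inside
     security-level 100
     ip address 10.0.0.1 255.255.255.0 standby 10.0.0.2
    ```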

    It is especially useful on the management interfaces, since at any time you can connect to either ASA, active or standby.

  • Using the XML file error

    Hello

    I use an XML file as the source in a mapping. This worked well until I changed the context used to execute the mapping.

    Initially, executing in the Development context, it worked without any problem. But when I changed the context to Test, it gave me the following error:

    ODI-1227: Task Load data - LKM SQL to Oracle - failed on connection to source P6_ACTIVITIES - TEST.

    Caused by: java.sql.SQLException: the object name already exists: P6ACTI_READACTIVITIESRESPONSE in the statement [create table P6ACTI_READACTIVITIESRESPONSE (READACTIVITIESRESPONSEPK NUMERIC (10) NOT NULL, SNPSFILENAME varchar (255) NULL, SNPSFILEPATH varchar (255) NULL, SNPSLOADDATE varchar (255) NULL)]

    This object name, P6ACTI_READACTIVITIESRESPONSE, created by ODI, is a combination of the schema name defined in the JDBC URL properties (or the first five characters of the XML file name) and the root element. Therefore, I don't have any control over the name unless I change the schema name property.

    Restarting the agent solves the problem temporarily, but the error appears again when the context is changed. My question is: how do I create/drop the table, or let the agent do this, each time the XML file is accessed by ODI? I am using ODI 12.1.3.

    Thanks in advance,

    Xmen

    You should be able to look at the KM code to understand where these tables are created, but generally it will be in the work schema specified on your intermediate technology's server.

    We are not allowed to change the KMs, because it would create support problems.


    With Oracle, or internally? A large part of ODI's power lies in its open framework around KMs, and as long as you develop and test any KM customizations carefully I really can't see why support would be a problem. You could run the drop of the table outside the knowledge module; would that also be a support issue? The main difference is that a custom KM allows easier reuse.
