Optimization context


We are on ODI version 12.1.3. We want to know in which SNP table the optimization context is stored.

I would be grateful if anyone could give me the name of the table.

Thanks in advance

VK

Thank you very much, deker, for providing me with the table name and column. I joined that table with snp_map_ref on the i_map_ref column to get qualified_name, which is the optimization context.

Thanks again

Kind regards

VK

Tags: Business Intelligence

Similar Questions

  • Migration - optimization context

    Hello

    I'm migrating a repository from Dev to Prod. I have a Dev context in Dev and a Prod context in Prod. In the Dev interfaces, the optimization context is set to Dev. Now, when I migrate the objects to Prod, it still says the optimization context is Dev. Do I have to manually change each interface and set the optimization context to Prod?

    This seems very tedious. I could do a search and replace in the exported XML file, but I think this is a common enough situation that it can be solved differently - or is there a very good reason why it works this way?

    Thank you
    Matt

    Well, Matt,

    I think that when they said 'a test team imports the released versions for testing into a separate Work Repository', they forgot to say 'an Execution Work Repository'...

    Take a look at this post and tell me what you think: http://odiexperts.com/?p=574

    From experience, I cannot defend making changes anywhere other than the initial development environment. I mean, if the test team finds a problem, they should inform the development team, who will fix it and generate a new version to test. In my opinion, the test team should only report, never change.

    What do you think?

    Cezar Santos
    http://www.odiexperts.com

  • Block prefetching confusion

    Hello Experts,

    I have been reading Troubleshooting Oracle Performance by Christian Antognini for a while. In the context of join optimization, I came across a term called "Block Prefetching". The book says the following about it.

    "In order to improve the efficiency of nested loops joins, the database engine is able to take advantage of block prefetching. This optimization technique is intended to replace several single-block physical reads performed on adjacent blocks with a single multiblock physical read. This is true for both tables and indexes."

    In addition, he also mentions that looking at an execution plan cannot tell you whether the database engine will use prefetching.

    As far as I know, multiblock I/O (also known as the db file scattered read wait event) is only used for FULL TABLE SCAN and INDEX FAST FULL SCAN. Other types of reads are supposed to be single-block I/O (also known as the db file sequential read wait event). So, for example, how can an INDEX RANGE SCAN use multiblock I/O?

    I'm just trying to understand the logic behind it. Can anyone shed some light on this subject?

    Regards

    Charlie

    A nested loops join is just that - a loop: for each row in the outer/driving rowsource, probe the inner/probed rowsource.

    So, in general, we expect a nested loops join to be along the lines of:

    for each row in the outer/driving rowsource (1)

        do an indexed lookup on the inner/probed rowsource (2)

        then use the rowid(s) from the index to do a lookup on the table (3)

    end loop;

    Prefetching can help with the physical I/O needed in steps 2 and 3.

    For example, take the index lookup: in older versions, as you point out, any physical I/O request for an INDEX RANGE SCAN would be a db file sequential read of a single index block.

    The index prefetching optimization allows Oracle to anticipate the other physical index I/O calls it might have to make in future iterations of this loop and therefore, instead of reading only the one block it needs now, to read several blocks so that they are already in the buffer cache when required.
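    To see why this pays off, here is a toy sketch (not Oracle internals - the block numbers and the adjacency rule are invented purely for illustration) that counts read calls with and without collapsing runs of adjacent blocks into one multiblock read:

```python
def physical_reads(block_ids, prefetch=False):
    """Count the physical read calls needed to fetch the given blocks.

    Without prefetching, each distinct block costs one single-block read
    (like db file sequential read). With prefetching, a run of adjacent
    blocks collapses into a single multiblock read call.
    """
    blocks = sorted(set(block_ids))
    if not blocks:
        return 0
    if not prefetch:
        return len(blocks)
    reads = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:  # gap between blocks: a new read call starts
            reads += 1
    return reads


# Four index blocks, three of them adjacent:
print(physical_reads([10, 11, 12, 20]))                 # 4 single-block reads
print(physical_reads([10, 11, 12, 20], prefetch=True))  # 2 read calls
```

    The fewer read calls, the fewer waits per loop iteration - which is the whole point of the optimization described in the book.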

  • Function-based indexes and virtual columns

    I have just read the Oracle documentation on function-based indexes. In the context of optimization with a function-based index, it says that "a virtual column is useful for speeding access to data derived from expressions". Here is the link: Index-Organized Tables and indexes.

    My question is: doesn't Oracle already create a virtual column when we create a function-based index?

    Regards

    Charlie

    Hi Charlie
    Yes, the database engine creates a virtual column. But this column is hidden. Example reproduced in 11.2.0.3:
    SQL> CREATE TABLE t (n NUMBER);
    SQL> CREATE INDEX i ON t (round(n,0));
    SQL> SELECT column_name, hidden_column, virtual_column
      2  FROM user_tab_cols
      3  WHERE table_name = 'T';
    
    COLUMN_NAME                    HIDDEN VIR
    ------------------------------ ------ ---
    N                              NO     NO
    SYS_NC00002$                   YES    YES
    

    HTH

    Chris Antognini

    Troubleshooting Oracle Performance, Apress 2008/2014
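    As a side note - a minimal sketch, untested here: since 11g the same expression can also be declared as an explicit virtual column, which then appears in the dictionary under your own name instead of a hidden SYS_NC... column (table and column names below are made up):

```sql
-- Explicit virtual column (11g+); the index on it is then an ordinary index
CREATE TABLE t2 (
  n         NUMBER,
  n_rounded NUMBER GENERATED ALWAYS AS (round(n, 0)) VIRTUAL
);
CREATE INDEX i2 ON t2 (n_rounded);
```

    Either way the optimizer can use the stored expression; the explicit form is simply visible and can carry its own statistics and constraints.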

  • Sequence context becomes invalid when the run time gets longer

    I have a sequence that loops for about 70k iterations (which is pretty huge). The sequence calls a subsequence, in which the locals of the called sequence are modified. The issue is that around the 65k-th iteration, I get an error message indicating that the reference to the local that was passed in is invalid. The number of iterations at which the error occurs varies from one computer to another, but is always in the 65k range. When I then try to run another sequence (a single one), I get an "out of memory" error. I guess the reason the local reference becomes invalid is that TS execution stops when TS memory fills up with the sequence context, and therefore the local reference is no longer valid.

    I would like to know how to reduce TS memory consumption, because I have already disabled result logging for all sequences (Configure -> Station Options -> Execution), disabled logging for each iteration of the loop, and 'Optimize non-reentrant calls to this sequence' is also enabled. I even tested with just this one sequence, and I wonder if there is a maximum number of iterations for a loop.

    PC configuration

    RAM-12 GB

    LabVIEW and Teststand 2010 SP1

    I will not be able to post the code, but the looping step is a custom step which has a sequence call as its adapter.

    Hello

    I tried this:

    The main sequence calls a subsequence, which includes 3 statement steps.

    I looped the subsequence up to 75k with result logging enabled.

    TS runs successfully - no problems.

    Try skipping steps (one at a time) in the subsequence to find out which step leads to this issue.

    Also try to reproduce it using a simple sequence (one that you can post online).

    Ravi

  • RH2015 - TOC loading and search optimization

    Hello

    Environment: RH 2015 12.0.2.384, generating merged WebHelp - nearly 40 RH projects in total and 7000 topics.

    Problem: loading the table of contents and searching are very slow, and I need to find a way to optimize them. However, I don't understand how they work, so I hope someone on the forum can give me some advice.

    Current situation:

    I tested this a few weeks - the results below.

    First load of the TOC:

    • Local computer:
      • IE - 11-25 seconds
      • Chrome - 2 seconds
    • Site client-side:
      • IE - 20 seconds
      • Chrome - 10 seconds

    First search:

    • Local computer:
      • IE and Chrome - 2 seconds.
    • Site client-side:
      • IE and Chrome - 25 seconds.

    In the search settings, I have enabled syntax highlighting, showing the context, hiding the rank column, and showing the total number of search results. (No substring search.) Enabling the AND option by default does not seem to change the speed. Because we include a PDF version of each online help module, I also tried excluding the PDF files from search, but that did not make a difference either.

    (I also tried generating the same content in HTML5 format; the table of contents did load faster, but search took 2x longer...)

    Unfortunately our client-facing site is password-protected, so I can't give the link.

    Questions:

    1. As far as I can tell, a bunch of JS files in the wh* folders are responsible for the table of contents and search - am I on the right track? Does anyone know exactly how these files are generated? I opened the Chrome developer console while searching for a term, and it looks like each JS file takes only a few milliseconds to load, but there are so many files that it adds up to nearly half a minute.
    2. If I reduced the number of RH modules (for example, 20 instead of 40), would that make a difference, or would I end up with basically the same number of JS files altogether, just put in different folders?
    3. If I reduced the number of topics, would that help?
    4. Is there anything I can do to speed up content loading in IE? There is a big difference compared to Chrome, and most of our clients use IE...

    Thank you very much in advance!

    There are several factors at play here that affect performance: the number of topics, merged help, and the performance of the server.

    In the WebHelp output, you can select options for speed optimization. Try setting this option to "Local area network", even if you publish on the web. This option controls whether you get more but smaller wh* files, or fewer but bigger wh* files. Network speeds have increased enough that you can use the local area network option. And there's an added bonus: each file that needs to be loaded requires a download from the server, and a lot of downloads for many small files is much slower than fewer calls for larger files. This is because for each file, the server must read the file and send it to the client. If the server can do this with larger files, you benefit from the better internet connections of the past 10 years. Having a fast internet connection does not solve the problem of having to fetch many small files.

    Second, merged help adds a huge overhead to download times. Basically, RoboHelp loads the main project, and then it performs all the loading steps for each merged project as well. This slows down the process enormously. This is due to the scripts in merged help, but also because of the issue I described before. Reducing the number of merged projects, by merging them into fewer, larger projects, will also help.

    Third, you can reduce the content to speed up searches. For search, the number of topics is irrelevant; what matters is the amount of content. When you have fewer topics with more content, the search database will have the same size as with more topics holding less content. Of course, having fewer topics also reduces the number of table of contents entries and will speed up the loading of the table of contents. Search is affected more by the number of merged projects. Don't forget: many merged projects means a lot of overhead.

    For the client-facing site, your computer must download the files from the remote server, whereas for local help, the browser can access all the content on disk immediately. If your server does not cache, you get a lot of overhead on the server side, slowing things down. And if the server has a slow upload connection (the server's maximum upload is your maximum download), that will also slow down the process. Especially if several people try to access the content at the same time, the server must handle a large number of requests. But for any moderately modern server, the queries themselves should not be a problem.

  • CONTEXT index and searches getting slower

    Hello

    I have a situation where I don't know what to do anymore...

    What we have:
    -We have Oracle 11.2.0.0.0 and a CONTEXT index on a table, over 5 columns...
    -We have a .NET application that runs 550 SELECT queries with bind variables, 6000 times a day...
    -Some queries have more than 550 bind variables, but there are maybe 1-2 that big... otherwise below 100
    -We optimize the index in FULL mode and gather statistics like this:
    exec ctx_ddl.optimize_index('ORATEXT_ART_IDX','FULL');
    exec DBMS_stats.gather_index_stats(ownname=>'TEST',indname=>'ORATEXT_ART_IDX');
    exec DBMS_stats.gather_table_stats('TEST','S_ARTICLE',cascade=>TRUE);
    This is the code for the index:
    BEGIN
    CTX_DDL.CREATE_PREFERENCE('S_ARTICLE_LEX','BASIC_LEXER');
    CTX_DDL.SET_ATTRIBUTE('S_ARTICLE_LEX','SKIPJOINS','+&-');
    CTX_DDL.CREATE_PREFERENCE('DATASTORE_S_ARTICLE','MULTI_COLUMN_DATASTORE');
    CTX_DDL.SET_ATTRIBUTE('DATASTORE_S_ARTICLE','COLUMNS','TITLE,SUBTITLE,AUTHOR,SUMMARY,FULLTEXT');
    END;
    
    CREATE INDEX ORATEXT_ART_IDX ON S_ARTICLE(ORATEXT) INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY ID
    PARAMETERS ('LEXER S_ARTICLE_LEX STOPLIST CTXSYS.EMPTY_STOPLIST SYNC (ON COMMIT) DATASTORE DATASTORE_S_ARTICLE');
    The scenario is like this:
    Every time a new article comes in:
    -insert this ONE article into the DB and commit it so that the index synchronizes
    -execute the 550 stored queries to see whether any of them matches...
    -a query has the form: SELECT * FROM ITEMS WHERE ID = xxxx AND TYPE = xx AND (MEDIA = yy OR MEDIA = yy) AND (CONTAINS(a) OR CONTAINS(b)) AND NOT CONTAINS(c)... etc...
    -so different types are used on the current article, but still...
    -because the query always begins with one exact, specific ID (the last inserted one), we put FILTER BY ID when creating the CONTEXT index... so ID is always part of a query, but TYPE, MEDIA, CATEGORIES, and WORDS are user-defined and vary from user to user...
    -each article has 10-3000 or more words... like usual newspaper articles...

    So the query is always built dynamically from the user's criteria...

    Here is our problem... when we started, times were 9-12 s for this search... after 1 day, we are at 30-40 s...
    If we do an index optimization every 3-5 hours, then we are at 20-23 s... but this isn't a solution... because the times keep growing

    Our machine is a new i7... 8 cores, 16 GB RAM, Windows Server 2008...

    The CPU runs between 97 and 100% the whole time while doing the searches... and we saw that we have some kind of waits...
    We increased the SGA to 6 GB... but this did not help; we increased the cursors, we switched/disabled automatic memory management...

    What we did, nothing helped...

    SQLArea shows that the most common SQL is the Oracle-generated query for dynamic sampling of the context index...

    Another question would be... is there a better way to run these searches instead of using context indexes...

    Would a normal search using INSTR be a better way to do this?

    Thank you.
    Kris

    Here is an example:

    set echo on
    
    drop table searches
    /
    create table searches( search_terms varchar2(2000), search_area varchar2(30), owner_name varchar2(30) )
    /
    
    insert into searches values( 'barack obama', 'US Politics', 'John' )
    /
    insert into searches values( 'washington', 'US Politics', 'John' )
    /
    insert into searches values( 'iraq or iran', 'Middle East', 'Peter' )
    /
    insert into searches values( 'finance or financial' , 'Economics', 'Mike' )
    /
    insert into searches values( 'NEAR( (financial, US) )', 'US Economics', 'Mike' )
    /
    
    create index search_index on searches( search_terms ) indextype is ctxsys.ctxrule
    /
    
    select search_area, owner_name
    from searches
    where matches(search_terms, 'Barack Obama yesterday announced that he flying to Iraq to discuss the financial status of US interests' ) > 0
    /
    
    select search_area, owner_name
    from searches
    where matches(search_terms, 'Yesterday in Washington nothing of interest happened.' ) > 0
    /
    

    The output of the query looks like:

    SQL> select search_area, owner_name
      2  from searches
      3  where matches(search_terms, 'Barack Obama yesterday announced that he flying to Iraq to discuss the financial status of US interests' ) > 0
      4  /
    
    SEARCH_AREA                 OWNER_NAME
    ------------------------------ ------------------------------
    US Politics                 John
    Middle East                 Peter
    Economics                 Mike
    US Economics                 Mike
    
    SQL>
    SQL> select search_area, owner_name
      2  from searches
      3  where matches(search_terms, 'Yesterday in Washington nothing of interest happened.' ) > 0
      4  /
    
    SEARCH_AREA                 OWNER_NAME
    ------------------------------ ------------------------------
    US Politics                 John
    
  • difference between catsearch (ctxcat) and contains (context)

    I am referring to
    http://docs.Oracle.com/CD/B28359_01/text.111/b28303/IND.htm
    http://www.Oracle.com/technetwork/database/Enterprise-Edition/ctxcat-primer-090555.html
    Both give details on Oracle Text.
    The question that remains is when not to use catsearch.
    Yes, it is answered in the second document already, but if you can provide a more definitive answer to
    'when not to use catsearch', or 'what are the other reasons for not using catsearch', it would resolve many doubts for many people.
    Thank you very much
    (I did not mention my db version here; please answer for the most recent - 11g, most people have 11g)

    Edited by: 946207 on 26 December 2012 13:28

    I can't access the second document, so I don't know what else you may have already read. There is a problem with ctxcat and catsearch indexes: the optimizer can choose to try functional invocation at unpredictable moments, but since catsearch does not support functional invocation, it raises an error instead of returning the results. CONTEXT indexes and CONTAINS do not have this problem. As of Oracle 11g, there seems to be little or nothing that ctxcat and catsearch can do that cannot be done with CONTEXT and CONTAINS. Therefore, I always recommend not using catsearch and ctxcat.
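    For reference, a minimal sketch of the two index types and their query operators, assuming a hypothetical table docs(id, text):

```sql
-- CONTEXT index, queried with CONTAINS (full query grammar; no functional-invocation issue)
CREATE INDEX docs_ctx ON docs (text) INDEXTYPE IS CTXSYS.CONTEXT;
SELECT id FROM docs WHERE CONTAINS(text, 'oracle AND text') > 0;

-- CTXCAT index, queried with CATSEARCH (simpler grammar; errors out under functional invocation)
CREATE INDEX docs_cat ON docs (text) INDEXTYPE IS CTXSYS.CTXCAT;
SELECT id FROM docs WHERE CATSEARCH(text, 'oracle text', NULL) > 0;
```

    Note that the two operators are tied to their index types: CONTAINS only works against a CONTEXT index, CATSEARCH only against a CTXCAT index.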

  • Fuzzy search using context on multiple params

    Hi, my previous post was about catsearch and context searches; I got advice from Roger and Barbara (both extremely helpful ~ thanks). I am now focusing on using context search, but I have a problem with fuzzy search. I did some research and didn't get very far, so I think it is a question for the experts once again; sorry for the trouble and sincere thanks for your time.

    ------------------------------
    Test data
    ----------------------------
    CREATE TABLE cust_catalog
    (id NUMBER (16),
    forename VARCHAR2 (80),
    surname VARCHAR2 (80),
    birthdate DATE,
    gender VARCHAR2 (10)
    )

    INSERT ALL
    INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (2, 'Xavier', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (9, 'William John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female')
    INTO cust_catalog VALUES (13, 'Chrissie', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (14, 'Yau', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (19, 'Jones', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male')
    SELECT * FROM DUAL

    EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS', ',''."+-()/');
    EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');

    EXEC CTX_DDL.CREATE_PREFERENCE ('forename_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('forename_datastore', 'COLUMNS', 'forename');

    EXEC CTX_DDL.CREATE_PREFERENCE ('surname_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('surname_datastore', 'COLUMNS', 'surname');

    EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'birthdate', 'birthdate', TRUE);
    CREATE INDEX forename_context_idx ON cust_catalog (forename) INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY gender
    PARAMETERS
      ('DATASTORE forename_datastore
        SECTION GROUP your_sec
        LEXER cust_lexer
        WORDLIST cust_wildcard_pref')

    CREATE INDEX surname_context_idx ON cust_catalog (surname) INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY gender
    PARAMETERS
      ('DATASTORE surname_datastore
        SECTION GROUP your_sec
        LEXER cust_lexer
        WORDLIST cust_wildcard_pref')


    -- this SQL works ok when not fuzzy
    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'John|Chris') > 0 AND CONTAINS (surname, 'Smith|Miles|Mil') > 0

    -- cleanup
    DROP TABLE cust_catalog;
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    EXEC CTX_DDL.DROP_PREFERENCE ('forename_datastore');
    EXEC CTX_DDL.DROP_PREFERENCE ('surname_datastore');
    EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    DROP INDEX forename_context_idx;
    DROP INDEX surname_context_idx;
    -------------------
    Questions here
    -------------------
    1. I have a problem when I try to implement fuzzy search

    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'fuzzy({john|chris}, 1, 100, weight)') > 0
    AND CONTAINS (surname, 'smith|Miles|Mil') > 0

    2. I would also like to add a date-of-birth range to the search; I don't know the best way to do this, please comment below

    SELECT * FROM cust_catalog
    WHERE CONTAINS (forename, 'fuzzy({john|chris}, 1, 100, weight)') > 0
    AND CONTAINS (surname, 'smith|Miles|Mil') > 0
    AND birthdate BETWEEN TO_DATE('01/02/1970','DD/MM/YYYY') AND TO_DATE('11/07/1980','DD/MM/YYYY')

    3. There is a nightly job inserting new rows into this table. Following Roger's advice, I need to create a trigger to update the index - does this mean DROPping and CREATEing the forename_context_idx and surname_context_idx indexes every time the nightly job runs?

    Published by: Emily Robertson on November 14, 2012 13:02

    For maximum efficiency, you should use one index and one CONTAINS clause. You should put all of your text columns, such as forename, surname, and gender, in your MULTI_COLUMN_DATASTORE and section group. You can use FILTER BY and SDATA to add the birthdate column to the index. When you query a date using SDATA, the date must be in YYYY/MM/DD format.

    You can only create the index on one column of text. The column you create the index on must be the column that you search on, and it must be the column that gets updated, in order for Oracle Text to recognize that there was an update.

    FUZZY can only be applied to a single term (word), so you will need to use concatenation and REPLACE to change something like 'fuzzy(word1|word2)' into 'fuzzy(word1) | fuzzy(word2)'.
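    The string rewrite itself can also be done in the application layer; a quick sketch (outside SQL, just to show the transformation the REPLACE performs):

```python
def expand_fuzzy(terms: str) -> str:
    """Rewrite 'word1|word2' as 'FUZZY (word1, ...) | FUZZY (word2, ...)',
    since FUZZY only accepts a single term."""
    middle = terms.replace('|', ', 1, 100, WEIGHT) | FUZZY (')
    return 'FUZZY (' + middle + ', 1, 100, WEIGHT)'


print(expand_fuzzy('john|chris'))
# FUZZY (john, 1, 100, WEIGHT) | FUZZY (chris, 1, 100, WEIGHT)
```

    This is exactly what the nested REPLACE in the query below builds inside the CONTAINS string.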

    In the following example, I used the table, lexer, and wordlist you provided. I used a MULTI_COLUMN_DATASTORE with all three text columns (forename, surname, and gender) and put all three of those columns in the section group. I added a dummy column called any_column and created the index on that column, filtering by birthdate, using the datastore, section group, lexer, and wordlist, and adding SYNC (ON COMMIT), which causes the index to synchronize whenever a row is inserted or deleted, or the any_column that the index is created on is updated. Then I inserted the data to show that the synchronization takes place. I then showed a query using all of the features you requested; the explain plan shows that it uses one index access for everything, so it is a very efficient query.

    You will still need to optimize, rebuild, or drop and recreate the index periodically to reduce the index fragmentation caused by frequent synchronization.
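    For that periodic optimization, one option is a scheduled job; a sketch (the job name and schedule below are made-up examples):

```sql
-- Hypothetical nightly job to defragment the index; adjust name/schedule as needed
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'optimize_ctx_idx_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN CTX_DDL.OPTIMIZE_INDEX(''ALL_COLUMNS_CONTEXT_IDX'', ''FULL''); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=3',
    enabled         => TRUE);
END;
/
```

    Scheduling the FULL optimize off-hours keeps the SYNC (ON COMMIT) fragmentation from degrading query times during the day.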

    -- script:

    -- table, lexer, and wordlist you provided:
    CREATE TABLE cust_catalog
      (id                NUMBER   (16),
       forename          VARCHAR2 (80),
       surname           VARCHAR2 (80),
       birthdate         DATE,
       gender            VARCHAR2 (10))
    /
    EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS' , ',''."+-()/');
    EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');
    
    -- revised datastore, section group, added column, and index:
    EXEC CTX_DDL.CREATE_PREFERENCE ('names_and_gender_datastore', 'MULTI_COLUMN_DATASTORE');
    EXEC CTX_DDL.SET_ATTRIBUTE ('names_and_gender_datastore', 'COLUMNS', 'forename, surname, gender');
    
    EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'gender', 'gender', TRUE);
    
    ALTER TABLE cust_catalog ADD (any_column  VARCHAR2(1))
    /
    CREATE INDEX all_columns_context_idx
    ON cust_catalog (any_column)
    INDEXTYPE IS CTXSYS.CONTEXT
    FILTER BY birthdate
    PARAMETERS
      ('DATASTORE      names_and_gender_datastore
        SECTION GROUP  your_sec
        LEXER          cust_lexer
        WORDLIST       cust_wildcard_pref
        SYNC           (ON COMMIT)')
    /
    -- data you provided:
    INSERT ALL
    INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (2, 'Emaily', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (9, 'Willam John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female', null)
    INTO cust_catalog VALUES (13, 'Chrissie','Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (14, 'Chrisy', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (19, 'Jone', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
    SELECT * FROM DUAL
    /
    COMMIT
    /
    -- query:
    COLUMN forename FORMAT A10
    COLUMN surname  FORMAT A10
    SET AUTOTRACE ON EXPLAIN
    SELECT * FROM cust_catalog
    WHERE CONTAINS
            (any_column,
             '(FUZZY (' || REPLACE ('john|chris', '|', ', 1, 100, WEIGHT) | FUZZY (')
              || ', 1, 100, WEIGHT) WITHIN forename) AND
              (FUZZY (' || REPLACE ('smith|Miles|Mil', '|', ', 1, 100, WEIGHT) | FUZZY (')
              || ', 1, 100, WEIGHT) WITHIN surname) AND
              (SDATA (birthdate BETWEEN ''1970/01/02'' AND ''1980/11/07''))') > 0
    /
    SET AUTOTRACE OFF
    -- cleaning
    DROP TABLE cust_catalog;
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    EXEC CTX_DDL.DROP_PREFERENCE ('names_and_gender_datastore');
    EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    

    -- execution:

    SCOTT@orcl_11gR2> -- table, lexer, and wordlist you provided:
    SCOTT@orcl_11gR2> CREATE TABLE cust_catalog
      2    (id            NUMBER   (16),
      3       forename       VARCHAR2 (80),
      4       surname        VARCHAR2 (80),
      5       birthdate       DATE,
      6       gender            VARCHAR2 (10))
      7  /
    
    Table created.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_PREFERENCE ('cust_lexer', 'BASIC_LEXER');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.SET_ATTRIBUTE ('cust_lexer', 'SKIPJOINS' , ',''."+-()/');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.Create_Preference ('cust_wildcard_pref', 'BASIC_WORDLIST');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.set_attribute ('cust_wildcard_pref', 'prefix_index', 'YES');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> -- revised datastore, section group, added column, and index:
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_PREFERENCE ('names_and_gender_datastore', 'MULTI_COLUMN_DATASTORE');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.SET_ATTRIBUTE ('names_and_gender_datastore', 'COLUMNS', 'forename, surname, gender');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_SECTION_GROUP ('your_sec', 'BASIC_SECTION_GROUP');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'forename', 'forename', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'surname', 'surname', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.ADD_FIELD_SECTION ('your_sec', 'gender', 'gender', TRUE);
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> ALTER TABLE cust_catalog ADD (any_column  VARCHAR2(1))
      2  /
    
    Table altered.
    
    SCOTT@orcl_11gR2> CREATE INDEX all_columns_context_idx
      2  ON cust_catalog (any_column)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY birthdate
      5  PARAMETERS
      6    ('DATASTORE     names_and_gender_datastore
      7        SECTION GROUP     your_sec
      8        LEXER          cust_lexer
      9        WORDLIST     cust_wildcard_pref
     10        SYNC          (ON COMMIT)')
     11  /
    
    Index created.
    
    SCOTT@orcl_11gR2> -- data you provided:
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO cust_catalog VALUES (1, 'John', 'Smith', to_date('10/03/1971','DD/MM/YYYY'), 'Male', null)
      3  INTO cust_catalog VALUES (2, 'Emaily', 'Johnson', to_date('05/07/1974','DD/MM/YYYY'), 'Female', null)
      4  INTO cust_catalog VALUES (3, 'David', 'Miles', to_date('16/10/1978','DD/MM/YYYY'), 'Male', null)
      5  INTO cust_catalog VALUES (4, 'Chris', 'Johnny', to_date('25/02/1976','DD/MM/YYYY'), 'Male', null)
      6  INTO cust_catalog VALUES (5, 'Jenny', 'Smithy', to_date('28/11/1977','DD/MM/YYYY'), 'Female', null)
      7  INTO cust_catalog VALUES (6, 'Andy', 'Mil', to_date('16/08/1975','DD/MM/YYYY'), 'Male', null)
      8  INTO cust_catalog VALUES (7, 'Andrew', 'Smithe', to_date('15/12/1974','DD/MM/YYYY'), 'Male', null)
      9  INTO cust_catalog VALUES (8, 'John', 'Smith', to_date('07/11/1972','DD/MM/YYYY'), 'Male', null)
     10  INTO cust_catalog VALUES (9, 'Willam John', 'Henson', to_date('04/01/1971','DD/MM/YYYY'), 'Male', null)
     11  INTO cust_catalog VALUES (10, 'Emma John', 'Mil', to_date('06/04/1979','DD/MM/YYYY'), 'Male', null)
     12  INTO cust_catalog VALUES (11, 'Jon', 'Smith', to_date('19/09/1977','DD/MM/YYYY'), 'Male', null)
     13  INTO cust_catalog VALUES (12, 'Jen', 'Smith', to_date('17/06/1978','DD/MM/YYYY'), 'Female', null)
     14  INTO cust_catalog VALUES (13, 'Chrissie','Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     15  INTO cust_catalog VALUES (14, 'Chrisy', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     16  INTO cust_catalog VALUES (15, 'Chrisi', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     17  INTO cust_catalog VALUES (16, 'Johnny', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     18  INTO cust_catalog VALUES (17, 'Bobbie', 'Clarkson', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     19  INTO cust_catalog VALUES (18, 'Bob', 'Clark', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     20  INTO cust_catalog VALUES (19, 'Jone', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     21  INTO cust_catalog VALUES (20, 'Johan', 'Smith', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     22  INTO cust_catalog VALUES (21, 'John', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     23  INTO cust_catalog VALUES (22, 'Chris', 'Smithey', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     24  INTO cust_catalog VALUES (23, 'John', 'Smithy', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     25  INTO cust_catalog VALUES (24, 'Chris', 'Smithie', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     26  INTO cust_catalog VALUES (25, 'John', 'Smithke', to_date('21/05/1975','DD/MM/YYYY'), 'Male', null)
     27  SELECT * FROM DUAL
     28  /
    
    25 rows created.
    
    SCOTT@orcl_11gR2> COMMIT
      2  /
    
    Commit complete.
    
    SCOTT@orcl_11gR2> -- query:
    SCOTT@orcl_11gR2> COLUMN forename FORMAT A10
    SCOTT@orcl_11gR2> COLUMN surname  FORMAT A10
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> SELECT * FROM cust_catalog
      2  WHERE CONTAINS
      3            (any_column,
      4             '(FUZZY (' || REPLACE ('john|chris', '|', ', 1, 100, WEIGHT) | FUZZY (')
      5              || ', 1, 100, WEIGHT) WITHIN forename) AND
      6              (FUZZY (' || REPLACE ('smith|Miles|Mil', '|', ', 1, 100, WEIGHT) | FUZZY (')
      7              || ', 1, 100, WEIGHT) WITHIN surname) AND
      8              (SDATA (birthdate BETWEEN ''1970/01/02'' AND ''1980/11/07''))') > 0
      9  /
    
            ID FORENAME   SURNAME    BIRTHDATE GENDER     A
    ---------- ---------- ---------- --------- ---------- -
             1 John       Smith      10-MAR-71 Male
             8 John       Smith      07-NOV-72 Male
            10 Emma John  Mil        06-APR-79 Male
            11 Jon        Smith      19-SEP-77 Male
            13 Chrissie   Smith      21-MAY-75 Male
            14 Chrisy     Smith      21-MAY-75 Male
            15 Chrisi     Smith      21-MAY-75 Male
            19 Jone       Smith      21-MAY-75 Male
            20 Johan      Smith      21-MAY-75 Male
            21 John       Smithie    21-MAY-75 Male
            22 Chris      Smithey    21-MAY-75 Male
            23 John       Smithy     21-MAY-75 Male
            24 Chris      Smithie    21-MAY-75 Male
            25 John       Smithke    21-MAY-75 Male
    
    14 rows selected.
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1894062579
    
    -------------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                         |     1 |   127 |     4   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| CUST_CATALOG            |     1 |   127 |     4   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | ALL_COLUMNS_CONTEXT_IDX |       |       |     4   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("CTXSYS"."CONTAINS"("ANY_COLUMN",'(FUZZY (john, 1, 100, WEIGHT) | FUZZY (chris,
                  1, 100, WEIGHT) WITHIN forename) AND           (FUZZY (smith, 1, 100, WEIGHT) | FUZZY (Miles,
                  1, 100, WEIGHT) | FUZZY (Mil, 1, 100, WEIGHT) WITHIN surname) AND           (SDATA (birthdate
                  BETWEEN ''1970/01/02'' AND ''1980/11/07''))')>0)
    
    Note
    -----
       - dynamic sampling used for this statement (level=2)
    
    SCOTT@orcl_11gR2> SET AUTOTRACE OFF
    SCOTT@orcl_11gR2> -- cleaning
    SCOTT@orcl_11gR2> DROP TABLE cust_catalog;
    
    Table dropped.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('cust_lexer');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('cust_wildcard_pref');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_PREFERENCE ('names_and_gender_datastore');
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC CTX_DDL.DROP_SECTION_GROUP ('your_sec');
    
    PL/SQL procedure successfully completed.
    
  • How out-of-date statistics affect the optimizer's decisions

    Hello

    We use Oracle 11.1 RAC. The OPTIMIZER_DYNAMIC_SAMPLING parameter is set to 2. We deliberately do not gather statistics automatically, so we have tables with stale statistics. I'm not asking whether we should gather statistics for them or not. My questions are: what does the optimizer do when it encounters stale statistics? Can the optimizer choose to use dynamic sampling on these tables with stale statistics? If so, how can I tell when that happens?

    Thank you

    Richard

    Published by: rbrieck on November 30, 2011 10:50

    I think staleness only matters when deciding whether to gather statistics. The optimizer simply uses the statistics it is given in the context in which it runs, which in more recent versions can include data-access information. Missing statistics affect the optimizer; do stale ones? Only when it decides to gather.
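    As a hedged sketch of how to check both points (the schema and table names are placeholders, and it assumes access to the DBA_TAB_STATISTICS view): at level 2, dynamic sampling is applied to tables with *missing* statistics, and when it is used, DBMS_XPLAN reports it in the plan's Note section.

    ```sql
    -- Which tables does the dictionary currently flag as stale?
    -- (STALE_STATS is maintained when table monitoring is on, the default since 10g.)
    SELECT owner, table_name, stale_stats, last_analyzed
    FROM   dba_tab_statistics
    WHERE  owner = 'SCOTT'
      AND  stale_stats = 'YES';

    -- For a table with NO statistics, level 2 dynamic sampling kicks in;
    -- the Note section of the displayed plan then shows
    -- "dynamic sampling used for this statement (level=2)".
    EXPLAIN PLAN FOR SELECT * FROM some_unanalyzed_table WHERE id = 1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```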

  • set the optimizer mode for mapping

    How can I set the optimizer mode (for example, "all_rows") explicitly for a mapping?

    ~ Prabha

    Hello

    in the Design Center, select "Configure" from the context menu of a mapping. Under Table Operators, select a table and then set the extraction hint.

    Kind regards
    Carsten.

  • cache 'dist-test' does not support direct optimization

    I noticed this message in the log of my Extend proxy JVM. It is logged at INFO level.

    The cache dist-test does not support direct optimization for objects in internal format. If possible, consider using a different cache topology.

    The Extend proxy JVM runs as a storage-disabled node in the cluster.

    Any idea what is causing it?

    Dist-test is configured like this:
      <caching-scheme-mapping>
    
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>near-entitled-scheme</scheme-name>
        </cache-mapping>
    
      </caching-scheme-mapping>
    
      <caching-schemes>
    
        <near-scheme>
          <scheme-name>near-entitled-scheme</scheme-name>
          <front-scheme>
            <local-scheme>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>1000</high-units>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <distributed-scheme>
              <scheme-ref>dist-default</scheme-ref>
            </distributed-scheme>
          </back-scheme>
        </near-scheme>
    
        <distributed-scheme>
          <scheme-name>dist-default</scheme-name>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <lease-granularity>member</lease-granularity>
          <backing-map-scheme>
            <local-scheme>
              <listener>
                <class-scheme>
                  <class-name>{backing-map-listener-class-name com.oracle.coherence.common.backingmaplisteners.NullBackingMapListener}</class-name>
                  <init-params>
                    <init-param>
                      <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                      <param-value>{manager-context}</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </listener>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
    
      </caching-schemes>
    I presume it has something to do with the near scheme, because I don't see the message if I map the dist-* caches directly to the dist-default scheme.

    Cheers,
    JK.

    Hi Jonathan,

    You see the warning because a near cache caches deserialized objects. Since the proxy service uses POF, it must deserialize the POF-serialized value in order to put it into the near cache. You do not see the message when you map the caches directly to the dist-default scheme because that scheme is configured to use POF, allowing the proxy service to pass the POF-serialized value straight through to the distributed cache service.

    Thank you
    Tom

  • FORALL context switch... How does it work?

    Hi guys,

    in this AskTom thread:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:17483288166654#17514276298239

    it was said that

    FORALL i IN 1 .. z_tab.COUNT
      INSERT ...

    does this:

    (a) gather the inputs to insert (all of z_tab)
    (b) perform a PL/SQL to SQL context switch
    (c) execute the insert N times
    (d) perform a SQL to PL/SQL context switch

    My question is: does the FORALL statement loop?

    Which of these is it?

    Example 1:
    loop 1: collect data to insert
    loop 2: collect data to insert
    loop 3: collect data to insert
    end of loop
    context switch to the SQL engine
    perform the inserts one by one

    or

    Example 2:
    loop once (or not at all)
    gather all of the required data to insert
    end of loop
    context switch to the SQL engine
    perform the inserts one by one

    My guess is that example 1 is the right answer.


    Any tips, gurus?

    Kind regards
    Noob

    From a simplistic point of view, with a FORALL statement a single statement is processed by the SQL engine, with the bind data passed in as a batch (much as when the statement is executed once in SQL*Plus, for example), rather than PL/SQL sending the data one row at a time (as a FOR record-query LOOP does), so there is minimal context switching between SQL and PL/SQL.

    However, Oracle tends to do a good job of optimizing PL/SQL FOR loops where possible, so that they are effectively executed in the same way as a FORALL statement, which makes it difficult to demonstrate the claimed performance improvement of using FORALL.

    That said, I would always recommend explicitly coding the FORALL statement rather than relying on the optimizer to rewrite your code for you.
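    The steps (a) to (d) above can be sketched in a short PL/SQL block; the target table T and the source query here are made up for illustration:

    ```sql
    DECLARE
      TYPE t_ids IS TABLE OF NUMBER;   -- collection to hold the inputs
      z_tab t_ids;
    BEGIN
      -- (a) gather all the inputs in one bulk fetch
      SELECT level BULK COLLECT INTO z_tab
      FROM   dual
      CONNECT BY level <= 1000;

      -- (b) one PL/SQL -> SQL context switch: FORALL is not a loop, it binds
      --     the whole collection and hands the SQL engine ONE insert statement,
      -- (c) which the SQL engine executes z_tab.COUNT times,
      -- (d) before a single switch back to PL/SQL.
      FORALL i IN 1 .. z_tab.COUNT
        INSERT INTO t (id) VALUES (z_tab(i));

      COMMIT;
    END;
    /
    ```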

  • When I turn off the "Optimize iPhone Storage" option in iCloud Photos, does my phone automatically re-download all my photos/videos to my phone again?

    I used the "Optimize iPhone Storage" option on my old phone to save space, but I hated waiting for photos to load when I wanted to look at them. Now I've clicked "Download and Keep Originals", but I can't tell if my phone is re-downloading the original photos/videos from iCloud to my phone. Does it do that automatically, or are those pictures stuck forever in the "optimized" version on my phone?

    Afterwards, it should download the originals. Syncing is VERY slow, so it may take some time depending on how many photos you have.

  • Disable storage optimization?

    Hello

    Just upgraded to macOS Sierra.

    Is there a way to completely disable Optimize Storage? I don't need it.

    Does the "Optimize Mac Storage" checkbox under iCloud > iCloud Drive > Options control everything?
    I've unchecked it; is that enough?

    If you go to About This Mac > Storage > Manage, there are a lot of options, and I want them all OFF.

    Thanks for any *real* answers.

    Steve

    * real = not an answer like "why would you want to disable it?"

    dephilenaiguille wrote:

    Hello

    Just upgraded to macOS Sierra.

    Is there a way to completely disable Optimize Storage? I don't need it.

    Does the "Optimize Mac Storage" checkbox under iCloud > iCloud Drive > Options control everything?
    I've unchecked it; is that enough?

    If you go to About This Mac > Storage > Manage, there are a lot of options, and I want them all OFF.

    Thanks for any *real* answers.

    Steve

    * real = not an answer like "why would you want to disable it?"

    I found a description of the feature: when your local disk storage is low (it does not say how low), it moves old (inactive) files to the cloud and removes them from your drive; they remain available in the cloud, and new files stay on your system.

    So unchecking Optimize Storage should disable the feature.

    The best way to clean up a drive is to back up anything permanent, even if only as a printout, and then remove it from your drive.

    And it's something I would certainly turn off, as some "old" documents are things you need to keep but aren't necessarily looking at.
