collection of statistics on a remote table

Hello

Is it possible to gather statistics on remote non-Oracle tables, and if so, how?
In my setup, an Oracle server (11gR2) is connected to a remote Oracle server through a gateway to Oracle, and to a DB2 server through an ODBC gateway. All queries are distributed and heterogeneous, that is, they access data from both remote servers. The query optimizer uses real statistics for the remote Oracle tables, while for the DB2 tables some default values are used. The difference between the default values and the actual statistics is huge.
I found in a user's guide that column selectivity cannot be gathered for non-Oracle tables. Is it possible to gather general statistics, such as the number of rows in the tables? Would it be different if I used the DRDA gateway to access DB2?

Regards

Ruslan,
The HS_FDS_SUPPORT_STATISTICS = TRUE parameter is supported by DG4ODBC, even though it is not documented. However, as already said, what is returned when it is set depends on the ODBC driver and the non-Oracle database.
To see whether it is used by the driver, check a gateway debug trace. With the parameter set to TRUE, you should see explicit SQLStatistics ODBC calls. If these are not present, the parameter has no effect. You can also check an ODBC trace for these calls.
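For reference, the parameter goes in the gateway init file. A minimal sketch follows; the file name, SID, and DSN are examples, not taken from the original post, so adjust them to your gateway setup:

```
# initdg4odbc.ora -- gateway init file (SID name is an example)
HS_FDS_CONNECT_INFO = db2_dsn         # ODBC DSN pointing at the DB2 server (example)
HS_FDS_SUPPORT_STATISTICS = TRUE
HS_FDS_TRACE_LEVEL = DEBUG            # gateway debug trace, to look for SQLStatistics calls
```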

Kind regards
Mike

Tags: Database

Similar Questions

  • What is advised for gathering statistics on huge tables?

    We have a staging database where some tables are huge, hundreds of GB in size. Auto stats tasks run, but sometimes they miss their deadlines.

    We would like to know the best practices or tips.

    Thank you.

    Statistics gathering efficiency can be improved with:

    1. Using parallelism
    2. Incremental statistics

    Using parallelism

    Parallelism can be used in several ways for statistics gathering:

    1. Intra-object parallelism
    2. Inter-object parallelism
    3. Inter- and intra-object parallelism combined

    Intra-object parallelism

    The DBMS_STATS package has a DEGREE parameter. This parameter controls intra-object parallelism: the number of parallel server processes used to gather statistics on a single object. By default this parameter is equal to 1. You can increase it using the DBMS_STATS.SET_PARAM procedure, or, rather than picking a number yourself, you can let Oracle determine the optimal number of parallel processes by setting DEGREE to the DBMS_STATS.AUTO_DEGREE value.
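    As a sketch (the schema and table names are invented for illustration), letting Oracle pick the degree looks like this:

    ```sql
    BEGIN
      -- AUTO_DEGREE lets Oracle choose the number of parallel processes
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCOTT',      -- hypothetical schema
        tabname => 'BIG_FACT',   -- hypothetical table
        degree  => DBMS_STATS.AUTO_DEGREE);
    END;
    /
    ```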

    Inter-object parallelism

    If you are on Oracle Database version 11.2.0.2 or later, you can set the CONCURRENT statistics gathering preference. When CONCURRENT is set to TRUE, Oracle uses the Scheduler and Advanced Queuing to run several statistics gathering jobs at the same time. The number of concurrent jobs is controlled by the JOB_QUEUE_PROCESSES parameter. This parameter should be about two times the number of your CPU cores (if you have two CPUs with 8 cores each, then JOB_QUEUE_PROCESSES should be 2 (CPUs) x 8 (cores) x 2 = 32). You must set this parameter at the system level (ALTER SYSTEM SET ...).
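    A minimal sketch of enabling this, using the core counts from the example above (nothing here is from the original post; verify against your own environment):

    ```sql
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 32;  -- 2 CPUs x 8 cores x 2

    BEGIN
      -- enable concurrent statistics gathering (11.2.0.2+)
      DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');
    END;
    /
    ```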

    Incremental statistics

    This option is best suited to partitioned tables. If the INCREMENTAL preference for a partitioned table is set to TRUE, the GRANULARITY parameter of DBMS_STATS.GATHER_TABLE_STATS is set to GLOBAL, and the ESTIMATE_PERCENT parameter of DBMS_STATS.GATHER_TABLE_STATS is set to AUTO_SAMPLE_SIZE, then Oracle will scan only the partitions that have changed.
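    A sketch of that incremental setup for a hypothetical partitioned table (the names are made up):

    ```sql
    BEGIN
      DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'SALES_PART', 'INCREMENTAL', 'TRUE');
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCOTT',       -- hypothetical schema
        tabname          => 'SALES_PART',  -- hypothetical partitioned table
        granularity      => 'GLOBAL',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /
    ```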

    For more information, read the documentation and the DBMS_STATS reference.

  • Collection of statistics on partitioned and non-partitioned tables

    Hi all
    My DB is 11.1

    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE LAST_ANALYZED PARTITIONED COMPRESSION
    ------------------------------ ---------- ---------- ----------- ------------- ----------- -----------
    O_FCT_BP1                       112123170     843140    11212317 8/30/2011 3:5 NO          DISABLED
    LEON_123456                     112096060     521984    11209606 8/30/2011 4:2 NO          ENABLED
    O_FCT                           115170000     486556      115170 8/29/2011 6:3 YES
    
    SQL> SELECT COUNT(*)  FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
    
      COUNT(*)
    ----------
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent =>10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes 2 minutes each for the first two tables to gather statistics, but more than 10 minutes for the partitioned table.
    Statistics gathering accounts for a large part of the run time of the whole batch.
    Most of the batch jobs do full loads, in which case all partitions and subpartitions are affected, so we cannot gather just specified partitions.

    Does anyone have experiences on this subject? Thank you very much.

    Best regards
    Leon

    Published by: user12064076 on August 30, 2011 01:45

    Hi Leon

    Why don't you gather statistics at the partition level? If your data partitions do not change after a day (a date range partition, for example), you can simply gather at the partition level, using

    GRANULARITY => 'PARTITION' for the partition level, and
    GRANULARITY => 'SUBPARTITION' for the subpartition level

    You do not need to gather global stats every time.
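    For example, a partition-level gather for the O_FCT table might look like this (the partition name is invented; substitute your own):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => user,
        tabname     => 'O_FCT',
        partname    => 'P_20110830',   -- hypothetical partition name
        granularity => 'PARTITION',
        degree      => 4,
        cascade     => FALSE);
    END;
    /
    ```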

    Published by: user12035575 on August 30, 2011 01:50

  • How can I gather statistics for all the tables in a schema in SQL Developer? And how long will it take on average?

    Hello

    How can I gather statistics for all the tables in a schema in SQL Developer? And how long will it take on average?

    Thank you

    Jay.

    Select the connection, right-click on it, and choose the schema statistics gathering option.
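    The same thing can be done outside SQL Developer with DBMS_STATS; a minimal sketch (the schema name is an example):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');  -- hypothetical schema
    END;
    /
    ```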

  • Collection of statistics takes forever after adding new index - why?

    Hello everyone,

    I'm on Oracle 11.2.0.2. Gathering statistics for one of my tables takes extremely long (several hours), and I don't know why.

    I'm inserting about 11,000 new records into a table with 5.3 million existing records using an INSERT INTO... SELECT...

    This insert takes a little more than 3 minutes.

    Then I collect stats using:

    DBMS_STATS.gather_table_stats('SCOTT', 'S_RMP_EVALUATION_CSC_MESSAGE', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);

    It takes 2 hours.

    As the number of records increases, the time the stats gathering takes also increases (8 hours after 70,000 newly inserted records).

    I didn't have this problem until recently, when I created the I_S_RMP_EVAL_CSC_MSG_ACTIONS index, but I do not understand why it would cause a radical change like that. Especially since updating the index while inserting the records takes only a few minutes.

    I'm including the CREATE statements for the table and the indexes below.

    There are about 5.3 million records in the table. The table uses about 7.8 GB of space for the 'regular' table data and 37.6 GB for the LOB. Index space used:

    I_S_EVALUATION_CSC_MSG_LMID: 152 MB
    I_S_EVALUATION_CSC_MSG_IDLM: 144 MB
    PK_S_RMP_EVALUATION_CSC_MESSAG: 118 MB
    I_S_RMP_EVAL_CSC_MSG_ACTIONS: 5 MB

    CREATE TABLE "QQRCSBI0"."S_RMP_EVALUATION_CSC_MESSAGE" 
    ( "ID" NUMBER(22,0) NOT NULL ENABLE, 
      "XML_MESSAGE_TEXT" CLOB, 
      "CREATION_TIME" TIMESTAMP (6), 
      "LAST_UPDATE_TIME" TIMESTAMP (6), 
      "NEXT_UPDATE_SYNC_TS" TIMESTAMP (6), 
      "SW_VERSION_XML" VARCHAR2(100 BYTE), 
      "DWH_LM_TS_UTC" DATE DEFAULT NULL NOT NULL ENABLE, 
      CONSTRAINT "PK_S_RMP_EVALUATION_CSC_MESSAG" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOLOGGING 
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "STAGING"  ENABLE
    ) SEGMENT CREATION IMMEDIATE 
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "STAGING" 
    LOB ("XML_MESSAGE_TEXT") STORE AS BASICFILE (
      TABLESPACE "STAGING" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION 
      NOCACHE LOGGING 
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    ) ;
    
    
    CREATE INDEX "QQRCSBI0"."I_S_EVALUATION_CSC_MSG_IDLM" ON "QQRCSBI0"."S_RMP_EVALUATION_CSC_MESSAGE" ("ID", "DWH_LM_TS_UTC") 
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "STAGING" ;
    
    
    CREATE INDEX "QQRCSBI0"."I_S_EVALUATION_CSC_MSG_LMID" ON "QQRCSBI0"."S_RMP_EVALUATION_CSC_MESSAGE" ("DWH_LM_TS_UTC", "ID") 
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOLOGGING 
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "STAGING" ;
    
    
    CREATE BITMAP INDEX "QQRCSBI0"."I_S_RMP_EVAL_CSC_MSG_ACTIONS" ON "QQRCSBI0"."S_RMP_EVALUATION_CSC_MESSAGE" (DECODE(INSTR("XML_MESSAGE_TEXT",'<actions>'),0,0,1)) 
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "STAGING" ;
    

    What causes this extremely long statistics gathering, and what can I do to fix it (apart from dropping the index again)? Why does gathering stats take much longer than updating the data and the indexes?

    Thank you...

    It's a VIRTUAL column: Oracle does not store the value of the function in the table for each row; it simply records the defining function in the column definition in the data dictionary. The index holds the actual calculated values (with the rowid attached to each value), but Oracle does not have a mechanism to look at the index when gathering column statistics. (In fact, such a mechanism could only be created for a single-column index anyway.)
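    As an aside, the hidden virtual column behind such a function-based index can be seen in the dictionary; a sketch, run as a privileged user, using the owner and table from the post above:

    ```sql
    SELECT column_name, virtual_column, hidden_column
    FROM   dba_tab_cols
    WHERE  owner = 'QQRCSBI0'
    AND    table_name = 'S_RMP_EVALUATION_CSC_MESSAGE';
    ```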

    Regards

    Jonathan Lewis

  • Collection of statistics online

    Hi all

    Is it possible to gather statistics for a schema while it is in use? When I try to analyze the tables in a schema, it shows that the statistics for the tables are locked. So, rather than analyzing the tables one by one, can I gather statistics for the schema objects while the schema is still in use (i.e. while DML or SELECT statements are being issued against those schema objects)?

    DB version: 10.2.0.4
    Version of the OS: RHEL 5.8

    DB type: RAC


    Kind regards
    Imran Khan

    Imran khan says:
    Hi Mark,

    Why is there a question about being unable to update statistics? Someone has locked statistics gathering.

    How can we check whether statistics gathering has been locked? Do you mean at the database level, or for a particular schema or object? As far as I know, in Oracle 10g statistics are gathered automatically (if I am not mistaken).

    Kind regards
    Imran Khan

    Published by: imran khan on December 24, 2012 07:33

    Well, as SB said, just look at the relevant information in the documentation for your version.

    If you don't have local copies of the documentation for your version, you can find copies for recent versions here:

    http://www.Oracle.com/technetwork/indexes/documentation/index.html

    (or docs.oracle.com or tahiti.oracle.com)

    I recommend you look at the DBMS_STATS package and the [USER | ALL | DBA]_TAB_STATISTICS view(s).

    Those should have what you need.
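    For example, locked statistics can be spotted like this (the schema name is an example):

    ```sql
    SELECT table_name, stattype_locked
    FROM   dba_tab_statistics
    WHERE  owner = 'HR'                  -- hypothetical schema
    AND    stattype_locked IS NOT NULL;

    -- and, where appropriate, unlocked with:
    -- EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('HR');
    ```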

  • performance degrades after the collection of statistics

    Oracle 11gR2, OEL 5

    We have several very large tables (40 million rows and more), and recently we gathered stats on the tables and it degraded our performance. Queries began doing full table scans rather than using the indexes. The same queries run fine in other environments; the only difference is the gathered stats. Logically, performance should be better after gathering statistics, but instead it is worse.

    I ran a 10053 trace on the query and I see that the cardinality and the cost are much higher in the inefficient environment. As a test, I restored the old stats in that environment and everything went back to normal: the query runs quickly again. Note that the restored stats had been gathered more than a year earlier. Shouldn't we gather statistics regularly on very large tables?

    Thank you.

    Hello

    the default stats gathering behavior is to determine the number of histogram buckets (i.e. whether a histogram is needed and, if so, how accurate it must be) automatically, depending on the distribution of the column data and the column's usage in different types of predicates. This means that in many cases histogram collection is almost a random process: one run you get a histogram, the next run you don't, even if there was almost no change. This is (unfortunately) the normal behavior.

    I couldn't quite get to the bottom of your question: the optimizer estimates seem to be all correct in the second case, so it is not clear to me why that plan is so bad (there are also some other problems, like 40G rows supposedly returned by one of the nested loops, or a missing cardinality estimate for another nested loop). But in any case, histograms and bind variables do not mix, so you may be able to solve your problem simply by specifying method_opt => 'FOR ALL COLUMNS SIZE 1' to disable histograms for this table.
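    A sketch of that suggestion, plus the stats restore the poster used as a test; the schema and table names are invented:

    ```sql
    -- gather without histograms
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'SCOTT',       -- hypothetical schema
        tabname    => 'BIG_TABLE',   -- hypothetical table
        method_opt => 'FOR ALL COLUMNS SIZE 1');
    END;
    /

    -- or restore statistics from an earlier point in time:
    -- EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'BIG_TABLE', SYSTIMESTAMP - 365);
    ```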

    Best regards
    Nikolai

  • Collection of statistics with the cascade option is slow

    Hi all.

    The database is 11.2.0.3 on a linux machine.

    I issued the following command, but the session was rather slow.

    The table is about 50 GB in size and has 3 indexes.

    I specified 'degree => 8' for parallel processing.

    When gathering statistics on the table itself, parallel slaves were invoked and that part finished fairly quickly.

    However, when it moved on to gathering statistics on the indexes, only one active session was invoked, so the 'degree => 8' option appears to be ignored.

    My question is:

    Do I need to use dbms_stats.gather_index_stats instead of the 'cascade' option in order to gather statistics on the indexes with parallelism?

    exec dbms_stats.gather_table_stats(ownname=>'SDPSTGOUT',tabname=>'OUT_SDP_CONTACT_HIS',estimate_percent=>10, degree=>8 , method_opt=>'FOR ALL COLUMNS SIZE 1',Granularity=>'ALL',cascade=>TRUE);
    Thanks in advance.

    Best regards.

    Hello

    This could happen because the indexes were created as NOPARALLEL. Try redefining them with DOP = 8 and see if that helps (run a quick test to verify this before doing any expensive DDL).
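    A quick way to check and test this; the index name below is invented:

    ```sql
    -- see how the indexes were created
    SELECT index_name, degree
    FROM   dba_indexes
    WHERE  owner = 'SDPSTGOUT'
    AND    table_name = 'OUT_SDP_CONTACT_HIS';

    -- temporarily raise the DOP on an index (hypothetical index name),
    -- then set it back if queries should not run parallel by default:
    -- ALTER INDEX SDPSTGOUT.OUT_SDP_IX1 PARALLEL 8;
    -- ALTER INDEX SDPSTGOUT.OUT_SDP_IX1 NOPARALLEL;
    ```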

    Best regards
    Nikolai

  • Statistics gathering on the tables of AW

    Our design uses relational tables in a star schema, with cubes providing aggregate levels above our tables. We have cube materialized views with query rewrite.

    We diligently gather statistics for all of the tables in our star schema using DBMS_STATS. We also gather analytic workspace stats on our dimensions and cubes using DBMS_AW_STATS.

    My question has to do with gathering table statistics on the CB$ and CR$ tables that are created in the schema. I tried running dbms_stats.gather_table_stats against these tables and it works, and the updated stats appear in the database. My concerns are: (1) is it even useful to have table stats on these tables? As far as I know, they are never really queried directly for any reason; and (2) it takes a long time to gather statistics on some of these tables, much longer than gathering statistics for our largest fact table, and stats gathering will not run in parallel on these tables.

    Is it recommended to skip gathering statistics on the CR$ and CB$ tables?

    Running stats on the CB$ and CR$ MVs is at best useless and can even kill system performance, because it forces all the rows in a cube to be materialized.

  • With respect to the analysis and collection of statistics

    Hello

    Why is gathering statistics necessary in the DB?

    Thank you

    Hello Balmo-Oracle

    You can start by reading this document, for example: Managing Optimizer Statistics

    or this: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-stats-concepts-110711-1354477.pdf

    Best of all, search for it yourself on Google.

    Best regards, David

  • Collection of statistics

    It is said that, in order for the CBO to work correctly, I first need to gather statistics. Is this true? If so, doesn't ORACLE gather statistics automatically? Or do I have to run a package to gather statistics?



    Thank you


    Scott

    scottjhn wrote:
    It is said that, in order for the CBO to work correctly, I first need to gather statistics.
    Is this true?

    Yes

    If so, doesn't ORACLE gather statistics automatically? or,

    With 10g and later, statistics are gathered automatically once every 24 hours.

    Do I have to run a package to gather statistics?

    DBMS_STATS can be used to gather statistics manually.
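    In 10g, the automatic job can be checked like this (this is the 10g job name; 11g replaced it with an autotask, so the query differs there):

    ```sql
    SELECT job_name, enabled
    FROM   dba_scheduler_jobs
    WHERE  job_name = 'GATHER_STATS_JOB';  -- 10g automatic stats job
    ```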

  • ORA-22992 cannot use LOB Locators selected remote tables

    Hello.
    I have a problem.
    When I try to create a table with a CREATE TABLE ... AS SELECT using a table over a dblink, and the remote table has a CLOB field, it returns that error.
    Can it be fixed while still using the dblink? Is there another way, e.g. export/import?

    Thank you.

    PS: 10gR2.

    Hello
    You cannot read a CLOB or BLOB column from a remote database table via dblink.

    Instead, you can use the export/import option.

    Use the Data Pump functionality in 10g; then you can export and import just your table without major issues.

    Thank you

    * If the answer is correct, please mark it as such.

  • Create a db with a simple form application, based on a remote table

    Hello

    I have APEX 4.0 installed on my laptop and a remote DB (still on my local network) with EBS R12 on it.
    I created a dblink from my HR schema and updated my tnsnames.ora. Now I can connect and browse the remote db with the APPS/APPS schema.
    I can also browse the remote table from my local db user 'HR'.

    For example: select * from emp@apex_dblink; retrieves data without problem, whether from SQL Developer or the standard APEX interface with my APEX user.

    As an exercise, I now want to create a database application from scratch, based on a tabular form over the HR schema. When I browse for the table, I don't see my remote table 'emp', and when I type "emp@apex_dblink" into the search, it only says: "emp@apex_dblink is not a valid table name.".

    Some have said it is because of 'grant privilege' and 'create synonym', but I don't know whether, as with my dblink, that is what I should look at to solve this problem...
    In addition, I do not know much about administration and do not know where to go...

    Here is some information:

    DB user: hr/hr@XE
    APEX user: ulrich/ulrich-> name of the workspace: APEX_WORKSPACE

    Thanks for your time and your response.

    ARO

    Ulrich

    Apex gets its table & view definitions from your local database catalog. There are two ways to do what you want:
    (1) create copies of the tables/views you want and create your forms & reports based on those tables/views. Once you have your app working (at this point the table/view definitions are known to Apex), drop the local tables and replace them with synonyms for the remote tables.

    -- run in local Apex database
    drop table emp;
    -- I'd recommend naming your DB link after the remote DB name to keep easy track of where it's pointing
    -- create database link EBSPROD connect to HR identified by HR123 using 'EBSPROD.world';
    -- or include the schema name if you need multiple DB links to that remote DB
    -- create database link EBSPROD@hr connect to HR identified by HR123 using 'EBSPROD.world';
    -- create database link EBSPROD@scott connect to scott identified by tiger using 'EBSPROD.world';
    -- select * from emp@EBSPROD@scott;
    
    create or replace synonym EMP for EMP@apex_dblink;
    

    (2) create local views of the remote objects, and point Apex at these views.

    -- run in local Apex database
    create or replace view EMP as (select * from EMP@apex_dblink);
    

    Published by: maceyah on March 3, 2011 09:04

  • Update rows in a remote table by synonym

    Hi all

    I'm new to Apex and I have searched this forum about this, but could not find an answer.

    Here's what I'm trying to do.

    I have DB1, where Apex is installed, and I want to access/update data in the DB2 tables. Here are the steps I followed:

    1. Created a dblink in DB1 to DB2.
    2. Created a synonym in DB1 for remote table TableA in DB2: create synonym tablea_syn for tablea@dblink1;
    3. Now, through the Apex object browser, I expected to be able to update/insert/delete records in tableA@dblink1. I selected Synonyms in the object browser, but I am not able to see any of the rows that are present in tableA@dblink1. I am able to see the rows using Toad.
    Is it possible to update/delete or insert new rows into remote tables using synonyms through Apex?

    Please help me with this problem.

    Thanks in advance.

    Hi (what is your name?)

    I took a peek at your application and I have corrected and tested an update. You had a few syntax errors, and I don't blame you, because they were already there when I posted the code. I meant it as quick pseudo-code for you to change accordingly. In any case, it's working now, and here are a few of the syntax errors I found:
    1. When you want to refer to an apex item in SQL, it must begin with ":", for example :P1_EMP_NAME
    2. The keyword 'from' was missing in the insert statement
    3. Make sure you create your processes in the right logical place; your submit process was placed after the header section, which means it would run as the first thing on your page, and that's wrong. Usually this type of process should be placed in the last process section (on submit, after validation, etc.)

    Hope you feel better now,

    Thank you
    Sam

  • ORA-22992: cannot use LOB Locators selected from the remote tables...

    Oracle 10.2.0.4
    Solaris 10

    We are trying to access a table in another database via a db_link.

    The table we are trying to access has a LOB.

    We get the following error: ORA-22992: cannot use LOB locators selected from remote tables.

    Is there a way to get around this? We need the data in the BLOB field.

    Thank you.

    See on metalink:

    ORA-22992 has a workaround in 10gR2
    Doc ID: 436707.1

    Werner
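    For reference, the usual workaround shape is to copy the LOB data into a local table rather than selecting the LOB locator across the link; a sketch with invented table, column, and link names (check the note above for the exact restrictions in your version):

    ```sql
    CREATE TABLE local_copy AS
      SELECT id, blob_col              -- hypothetical columns
      FROM   remote_table@remote_db;   -- hypothetical table and dblink
    ```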
