Collecting statistics for tables using jobs in Oracle 11g (11.2.0.1.0)?

Hello

My query is regarding the collection of statistics for tables.

My Version of Oracle DB is:
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

In prior versions of Oracle DB, we used to schedule jobs to run on a daily basis to collect statistics, especially for tables with frequent and huge inserts.

I read that in 11g, statistics for all schemas in a database are gathered automatically by nightly jobs. I checked these jobs and I see that they are running on a monthly basis [query attached]. The job is enabled and is scheduled to run monthly.
Does this mean that my schema will only be analyzed on a monthly basis? Is my understanding correct?

Can I still schedule jobs to collect statistics for specific tables every week? Will this degrade performance?
We expect 100,000 records to be inserted on a daily basis.
SELECT job_name, start_date, repeat_interval,
       last_start_date, next_run_date, enabled
  FROM dba_scheduler_jobs
 WHERE job_name LIKE '%STAT%'
 ORDER BY 1;
JOB_NAME         BSLN_MAINTAIN_STATS_JOB
START_DATE       16-AUG-09 12.00.00.000000000 AM -07:00
REPEAT_INTERVAL
LAST_START_DATE  14-APR-13 12.00.00.427370000 AM -07:00
NEXT_RUN_DATE    21-APR-13 12.00.00.400000000 AM -07:00
ENABLED          TRUE

JOB_NAME         MGMT_STATS_CONFIG_JOB
START_DATE       15-AUG-09 12.24.04.694342000 AM -07:00
REPEAT_INTERVAL  freq=monthly;interval=1;bymonthday=1;byhour=01;byminute=01;bysecond=01
LAST_START_DATE  01-APR-13 01.01.01.710280000 AM -07:00
NEXT_RUN_DATE    01-MAY-13 01.01.01.700000000 AM -07:00
ENABLED          TRUE
Thank you
Somiya

Your understanding is not correct. These jobs are in dba_autotask_task, and they run every day.
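
For reference, a minimal way to check this yourself (a sketch based on the views mentioned above; in 11g the automatic optimizer statistics collection is an autotask rather than an entry in dba_scheduler_jobs, which is why your query only found the BSLN and MGMT jobs, neither of which is the optimizer statistics job):

SELECT client_name, status
  FROM dba_autotask_client
 WHERE client_name = 'auto optimizer stats collection';

SELECT *
  FROM dba_autotask_task
 WHERE client_name = 'auto optimizer stats collection';

The task runs inside the maintenance windows, which by default open every night, so stale statistics are considered daily rather than monthly.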

HTH

-------------
Sybrand Bakker
Senior Oracle DBA

Tags: Database

Similar Questions

  • How to collect statistics for a partition?

    Hi all

    I have created a partitioned table. I need to collect statistics for one partition only. Previously I used ANALYZE, but now I need to use DBMS_STATS.

    What is the best way to analyze the partition using DBMS_STATS?

    How long will it take to complete?

    How can I estimate the completion time for DBMS_STATS before starting?

    Thank you

    I have created a partitioned table. I need to collect statistics for one partition only. Previously I used ANALYZE, but now I need to use DBMS_STATS.

    What is the best way to analyze the partition using DBMS_STATS?

    Follow the documented instructions: set INCREMENTAL to TRUE and GRANULARITY to AUTO.

    See the section "Statistics on Partitioned Objects" in the Performance Tuning Guide:

    http://docs.Oracle.com/CD/B28359_01/server.111/b28274/stats.htm#i42218

    With partitioned tables, new data is usually loaded into a new partition. As new partitions are added and loaded, statistics must be gathered on the new partition and the table-level statistics need to be updated. If INCREMENTAL for a partitioned table is set to TRUE, and you gather statistics on the table with the GRANULARITY parameter set to AUTO, Oracle will gather statistics on the new partition and update the global table statistics by scanning only those partitions that have been modified, not the entire table. If INCREMENTAL for the partitioned table is set to FALSE (the default), then a full table scan is used to maintain the global statistics, which is highly resource-intensive and time-consuming for large tables.

    How long will it take to complete?

    No way to know - using an estimate of 10% takes less time than an estimate of 40%, which takes less time than using 100%.

    How can I estimate the completion time for DBMS_STATS before starting?

    By comparing the amount of data and the estimate percentage in the other partitions with the time it took to gather statistics on those partitions.
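
    A minimal sketch of that approach (schema, table, and partition names here are placeholders, not from the thread):

    BEGIN
      -- enable incremental global statistics for the partitioned table
      DBMS_STATS.SET_TABLE_PREFS('MY_SCHEMA', 'MY_PART_TAB', 'INCREMENTAL', 'TRUE');

      -- gather the freshly loaded partition; with GRANULARITY => 'AUTO' the global
      -- statistics are refreshed without rescanning the untouched partitions
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'MY_SCHEMA',
        tabname     => 'MY_PART_TAB',
        partname    => 'P_NEW',
        granularity => 'AUTO');
    END;
    /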

  • collect statistics for the tablespace

    Friends...

    OS: Linux

    DB: 11gR2

    Data size: 1 TB

    Every month I take the monthly partitioned tablespaces and merge them into a single annual one (for example tbs_2014_01, tbs_2014_02 ... tbs_2014_12 all get combined into the tbs_2014 tablespace).

    Over the weekend, a database job runs that gathers stale statistics; it picks up all the segments that have been moved between tablespaces.

    Since the weekend statistics collection takes too long, I am trying to find a smarter way to gather statistics after each tablespace move rather than waiting for the weekend job, which takes two or three days to complete.

    1. Is there a way to gather statistics at the tablespace level, i.e. collect statistics for all objects in that tablespace?

    2. How should the global statistics part of the collection be handled?

    That is, suppose I move the tbs_2014_01 tablespace and gather statistics including global stats - that could take 2 hours, but it would be hard to spend 2 hours on global stats for every tablespace, which in my opinion is not good; we should be collecting global stats only once.

    3. any other advice?

    977272 wrote:

    @sol.beach... Thanks for your comments...

    I've not been asked to collect statistics at the tablespace level, but to collect statistics after the objects finish moving between tablespaces.

    Given the size of the data, it is difficult to gather all the statistics over the weekend, so I am trying to find another method of collecting statistics so that the weekend load is smaller.

    You can collect statistics on an object-by-object basis after each object has been moved.
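
    A minimal sketch of that object-by-object approach, assuming the tables can be picked up by tablespace name (the tablespace is a placeholder; partitioned tables would have to be driven from dba_segments instead, since dba_tables.tablespace_name is null for them):

    BEGIN
      FOR t IN (SELECT owner, table_name
                  FROM dba_tables
                 WHERE tablespace_name = 'TBS_2014')
      LOOP
        DBMS_STATS.GATHER_TABLE_STATS(ownname => t.owner,
                                      tabname => t.table_name,
                                      cascade => TRUE);
      END LOOP;
    END;
    /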

  • Does the GATHER_STATS job collect statistics for 'static' tables?

    Oracle version: 10gR2

    If a table has not changed (no DML) in the last 10 days, will the default Oracle stats collection job
    DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC
    still gather statistics on this table?

    The answer is no, unless you have changed the default optimizer statistics collection settings, because approximately 10% of the data must have changed before the table is eligible for new statistics.
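
    A hedged way to check how close a table is to that threshold (the schema is a placeholder; this assumes table monitoring is enabled, which is the default in 10g, and DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO can be called first to flush pending DML counts):

    SELECT m.table_owner, m.table_name,
           m.inserts, m.updates, m.deletes, t.num_rows
      FROM dba_tab_modifications m
      JOIN dba_tables t
        ON t.owner = m.table_owner
       AND t.table_name = m.table_name
     WHERE m.table_owner = 'SCOTT';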

    See section 14.2.1 GATHER_STATS_JOB in the Performance Tuning Guide:

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14211/stats.htm#sthref1068

    HTH - Mark D Powell.

  • What is advised for collecting statistics on huge tables?

    We have a staging database where some tables are huge, hundreds of GB in size. The auto stats task runs, but sometimes it misses its deadlines.

    We would like to know the best practices or tips.

    Thank you.

    The efficiency of statistics collection can be improved with:

    1. Using parallelism
    2. Incremental statistics

    Using parallelism

    Parallelism can be used in several ways for statistics collection:

    1. Intra-object parallelism
    2. Inter-object parallelism
    3. Intra- and inter-object parallelism combined

    Intra-object parallelism

    The DBMS_STATS package has the DEGREE parameter. This setting controls intra-object parallelism: the number of parallel server processes used to gather statistics on one object. By default this parameter is equal to 1. You can increase it by using the DBMS_STATS.SET_PARAM procedure. If you do not want to pick a number yourself, you can let Oracle determine the optimal number of parallel processes used to collect the statistics by setting DEGREE to the value DBMS_STATS.AUTO_DEGREE.
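
    A minimal sketch of both variants (owner and table name are placeholders):

    -- let Oracle choose the degree of parallelism for this gather
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'BIG_TABLE', degree => DBMS_STATS.AUTO_DEGREE);

    -- or pin an explicit degree
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'BIG_TABLE', degree => 8);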

    Inter-object parallelism

    If you have Oracle Database version 11.2.0.2, you can set the CONCURRENT statistics gathering preference. When CONCURRENT is set to TRUE, Oracle uses the Scheduler and Advanced Queuing to run several statistics jobs at the same time. The number of parallel jobs is controlled by the JOB_QUEUE_PROCESSES parameter. This parameter should be equal to twice the number of your CPU cores (if you have two CPUs with 8 cores each, then JOB_QUEUE_PROCESSES should be 2 (CPUs) x 8 (cores) x 2 = 32). You must set this parameter at the system level (ALTER SYSTEM SET ...).
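
    A hedged sketch of enabling this on 11.2.0.2, using the sizing from the paragraph above (2 CPUs x 8 cores x 2 = 32):

    -- allow enough scheduler slots for the concurrent statistics jobs
    ALTER SYSTEM SET job_queue_processes = 32;

    -- turn on concurrent statistics gathering
    EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');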

    Incremental statistics

    This option is best suited to partitioned tables. If the INCREMENTAL parameter for a partitioned table is set to TRUE, the DBMS_STATS.GATHER_TABLE_STATS GRANULARITY parameter is set to GLOBAL, and the DBMS_STATS.GATHER_TABLE_STATS ESTIMATE_PERCENT parameter is set to AUTO_SAMPLE_SIZE, Oracle will scan only the partitions that have changed.

    For more information, read the DBMS_STATS documentation.

  • Collect statistics for a Table

    Hello

    I have a table 'XXX' for which I need to collect statistics. Please tell me how we can collect them and where they will be stored.



    Regs,
    Brij

    Please run this to collect statistics.

    exec DBMS_STATS.GATHER_TABLE_STATS('<schema_name>', '<table_name>');
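
    As for where they are stored: gathered statistics go into the data dictionary, so a quick way to confirm the gather worked is a query like the one below (using the table name from your post):

    SELECT table_name, num_rows, last_analyzed
      FROM user_tab_statistics
     WHERE table_name = 'XXX';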
    
  • How to collect statistics for tables and indexes

    Hi all

    Please help me with collecting statistics for tables and indexes.

    Thank you

    for tables:
    exec dbms_stats.gather_table_stats('SCOTT', 'EMPLOYEES');

    for indexes:
    exec dbms_stats.gather_index_stats('SCOTT', 'EMPLOYEES_PK');

    Visit this link for details
    http://nimishgarg.blogspot.com/2010/04/Oracle-dbmsstats-gather-statistics-of.html
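
    As a side note, a single call with cascade => TRUE gathers the dependent index statistics together with the table statistics:

    exec dbms_stats.gather_table_stats('SCOTT', 'EMPLOYEES', cascade => TRUE);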

  • MySQL -> Oracle table migration using SQL Developer

    Greetings,

    Currently, I am a (student) temporary employee who is responsible for migrating a company's MySQL database to their new Oracle (11gR2) database.

    So far, I use SQL Developer, because I learned a few minor techniques with this application.

    I have tried different methods for the migration, including those on the Oracle (MySQL migration) page.

    However, I got no positive results with most of the methods.

    - I have established a connection to both the MySQL database and the Oracle database within SQL Developer.

    Now, so far, I've manually copied (right-click on the table, Copy to Oracle) 90% of the tables from the MySQL database to the Oracle database successfully.

    However, these tables contain only about 5% of the data. The problem is in the 2 tables that actually have a decent amount of records.

    FOR INFO:

    File 1: 75 MB .csv file, ~150,000 records (individual records contain entire select queries, hence the size)

    File 2: 50 MB .csv file, ~2,200,000 records

    My previous attempts to use the same Copy to Oracle right-click option on one of the two largest tables resulted in the following error:

    Copying the table failed. Message: java.io.IOException: IO error: socket read timed out

    My first action to prevent this error was to add a second datafile to the current tablespace, giving it a 200M maxsize.

    OK, here come my real questions!

    Could this be a possible solution to keep the error mentioned earlier from returning?

    What other causes of failure are possible?

    Would using an external table be a faster and more effective method?

    (The reason I have not yet tried external tables is that I have no SYS(TEM) privileges, so anything that requires privileges I first have to discuss with my supervisor.)

    Any help is appreciated! Although my next reply may not come before Monday the 17th.

    Welcome, Brent

    Hello

    I just discovered there is a CSV import wizard in SQL Developer as well - I don't know if you have tried it?

    In the destination (Oracle) database, right-click the table and choose Import Data - this will bring you into a CSV import wizard.

    However, I think that due to the volumes of data it may have similar problems.

    sqlldr is probably your best choice - it's pretty simple once you find a decent example - this one seems to cover the basics OK - Oracle SQL Stuff (for example): SQL*Loader: step-by-step tutorial - example 1 (CSV file)
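
    A minimal SQL*Loader sketch along those lines (control file name, data file, table, and columns are placeholders, not taken from your post):

    -- big_table.ctl
    LOAD DATA
    INFILE 'big_table.csv'
    APPEND
    INTO TABLE big_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (id, query_text CHAR(4000), created_at DATE "YYYY-MM-DD HH24:MI:SS")

    -- run from the command line:
    -- sqlldr userid=scott/tiger control=big_table.ctl log=big_table.log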

    See you soon,

    Rich

  • Create aggregate tables using Job Manager (Windows client, Linux BI server)

    Hi people,

    I have OBIEE 10.1.3.4.1 running on a Linux server.
    I'm trying to run the aggregate table script created via the wizard by using Job Manager.

    Job Manager requires a DSN to run against. But given that the job scheduler runs on the Linux machine, there is no DSN to point it to?

    My question is:

    Do I need to install and configure a scheduler in a Windows environment to launch Job Manager to run the nqcmd script for the aggregate tables?
    or
    Can I get the Linux box to have its own DSN, or to see the Windows one somehow?

    If you have any information or advice, that would really help untangle me.

    Thank you.

    You don't need to change anything, just use AnalyticsWeb in your nqcmd call, and it should work.
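
    A hedged sketch of such a call, run directly on the Linux BI server (user, password, and script path are placeholders; AnalyticsWeb is the DSN normally defined in the server's odbc.ini):

    nqcmd -d AnalyticsWeb -u Administrator -p password -s /home/oracle/create_aggregates.sql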

  • Issue when inserting into a table using a sequence in Oracle 11g Release 2

    Hi all

    I am facing a problem when I insert sequence values into my table.
    When inserting, my sequence does not begin with its START WITH value.
    Example script:

    CREATE SEQUENCE xyz_seq
    START WITH 1
    INCREMENT BY 1
    NOMAXVALUE
    MINVALUE 1
    NOCYCLE
    NOCACHE;

    create table abc (a number not null);

    insert into abc values (xyz_seq.nextval);

    select * from abc;

    drop sequence xyz_seq;

    drop table abc;

    Output

    Sequence created.
    Table created.
    1 row created.

    A
    ----------
    2
    1 row selected.
    Sequence dropped.
    Table dropped.

    I can't understand why this is inserting value 2, when my sequence should start at 1.


    To work around this, I implemented different logic.
    Example:

    CREATE SEQUENCE xyz_seq
    START WITH 1
    INCREMENT BY 1
    NOMAXVALUE
    MINVALUE 1
    NOCYCLE
    NOCACHE;

    create table abc (a number not null);

    declare
      x number(1) := xyz_seq.nextval;
    begin
      insert into abc values (x);
    end;
    /

    select * from abc;

    drop sequence xyz_seq;

    drop table abc;

    Output

    Sequence created.
    Table created.
    PL/SQL procedure successfully completed.

    A
    ----------
    1
    1 row selected.
    Sequence dropped.
    Table dropped.

    However, my question remains: why does referencing sequence.nextval directly in my insert not return the START WITH value?

    Kind regards
    Rishi

    That's what the doc says:

     If you attempt to insert a sequence value into a table that uses deferred segment creation, the first value that the sequence returns will be skipped.
    

    It skips the value 1 and returns 2.
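
    A minimal way to see and avoid the effect, assuming deferred segment creation really is the cause here: create the table with an immediate segment and the first NEXTVAL is no longer skipped.

    CREATE SEQUENCE xyz_seq START WITH 1 INCREMENT BY 1 NOCACHE;

    -- force the segment to be created up front instead of on the first insert
    CREATE TABLE abc (a NUMBER NOT NULL) SEGMENT CREATION IMMEDIATE;

    INSERT INTO abc VALUES (xyz_seq.NEXTVAL);   -- inserts 1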

  • AUTHID CURRENT_USER collect statistics

    Hi all

    I use Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production. I created a procedure in SCHEMA1 that gathers table statistics using DBMS_STATS, defined with AUTHID CURRENT_USER.

    CREATE OR REPLACE PROCEDURE SCHEMA1.getstats ( p_schema IN VARCHAR2,
                                p_table  IN VARCHAR2)
    AUTHID CURRENT_USER
    AS
    BEGIN
            dbms_stats.gather_table_stats(ownname          => p_schema,
                      tabname          => p_table,
                      method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                      degree           => 8,
                      force            => true,
                      no_invalidate    => false);
    END;
    

    I'm trying to use the same procedure to collect statistics for a table in another schema. However, I get the error: ORA-20000: Unable to analyze TABLE "SCHEMA2"."DETAIL_VALUE", insufficient privileges or does not exist

    EXEC getstats ( 'SCHEMA2','DETAIL_VALUE');
    

    I granted privileges on the table to SCHEMA1, but I still get the error. My understanding is that AUTHID CURRENT_USER makes the procedure run as the invoking user rather than as the owning schema, yet I still get the error. Can someone help me with this?

    GRANT ALL ON DETAIL_VALUE TO SCHEMA1
    

    It has nothing to do with AUTHID CURRENT_USER and everything to do with privileges. A grant on the table is not enough for you to analyze it.

    SQL> conn u1/u1
    Connected.
    SQL> create table foo(bar number);                                               
    
    Table created.                                                                   
    
    SQL> grant all on foo to u2;                                                     
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');
    BEGIN dbms_stats.gather_table_stats('U1', 'FOO'); END;                           
    
    *
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "U1"."FOO", insufficient privileges or does
    not exist
    ORA-06512: at "SYS.DBMS_STATS", line 33859
    ORA-06512: at line 1                                                             
    
    SQL> conn / as sysdba
    Connected.                                                                       
    
    SQL> grant analyze any to u2;                                                    
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');                            
    
    PL/SQL procedure successfully completed.
    
  • How to collect table statistics for tables in a different schema

    Hi all

    I have a table in one schema, and I want to collect statistics for that table from a different schema.

    I granted: GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;

    And when I tried to run the command to collect statistics using

    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA1', TABNAME => 'T1');

    the call fails.

    Is there a way to collect table statistics for tables in one schema from another schema?

    Thank you
    MK.

    You must grant ANALYZE ANY to SCHEMA2.

    SY.

  • Collect table statistics on Windows

    Hello

    Could you please help me collect statistics for the tables below on Windows? I tried scheduling the collection with a dbms_scheduler job for one table, but it is not gathering statistics for the tables.

    OWNER   TABLE_NAME        NUM_ROWS    BLOCKS    SIZE

    MQRDW   DWREFERENCETRACE  2891985937  26372904  17.86 GB

    TRAY    PON_SERIAL        563722072   5135734   8.12 GB

    TRAY    PON_PSN           2358851732009064      2.67 GB

    TRAY    PON_BOM_LOG       37199475    212936    601 MB

    TRAY    UNIQUE_ITEM_LOTS  6633907     79710     160 MB

    TRAY    PON_BOM           5921377     41717     328 MB

    SQL> begin
           dbms_scheduler.create_job(
             job_name        => 'SCOTT_JOB_SCHEDULE',
             job_type        => 'EXECUTABLE',
             job_action      => 'begin dbms_stats.gather_table_stats(ownname => ''TRACE3'',
                                   tabname          => ''PON_PSN'',
                                   estimate_percent => 30,
                                   cascade          => true,
                                   method_opt       => ''FOR ALL COLUMNS SIZE 1'',
                                   degree           => 5,
                                   options          => ''GATHER STALE''); end;',
             repeat_interval => 'freq=daily; byhour=04; byminute=0; bysecond=0;',
             enabled         => TRUE,
             comments        => 'custom stats collection for risk engine');
         end;
         /

    PL/SQL procedure successfully completed.


    The total number of CPUs available on this server is 8.

    Oracle - 10.2.0.4.0

    OS - windows

    Kind regards

    Bala

    52 million rows?

    And how many of them change or are inserted every day?

    It is possible that you may never need to gather stats on it again,

    and everything will probably continue to work as expected.

    Read the Performance Tuning Guide to understand why you need to collect statistics, and when:

    https://docs.Oracle.com/CD/E11882_01/server.112/e41573/stats.htm#PFGRF94714

    After reading this guide, you'll be ready for any question you get back from that team.

    Sometimes you need to say 'no' and explain why the answer is 'no'.
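
    As a side note, the posted job uses job_type => 'EXECUTABLE', which expects an operating-system program, while the job_action is an anonymous PL/SQL block. A hedged sketch of the PLSQL_BLOCK variant that may have been intended (the job name is hypothetical, and the OPTIONS argument is dropped because in 10.2 it belongs to GATHER_SCHEMA_STATS rather than GATHER_TABLE_STATS):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'PON_PSN_STATS_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN
                              DBMS_STATS.GATHER_TABLE_STATS(
                                ownname          => ''TRACE3'',
                                tabname          => ''PON_PSN'',
                                estimate_percent => 30,
                                method_opt       => ''FOR ALL COLUMNS SIZE 1'',
                                degree           => 5,
                                cascade          => TRUE);
                            END;',
        repeat_interval => 'freq=daily;byhour=4;byminute=0;bysecond=0',
        enabled         => TRUE,
        comments        => 'custom stats collection for risk engine');
    END;
    /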

  • Collect statistics in Oracle 11g

    Hi all

    OPERATING SYSTEM: AIX
    DB: 11G

    I am trying to collect statistics for a specific schema using the command mentioned below:

    > exec dbms_stats.gather_schema_stats ('schema_name');

    The schema in question is about 140 GB in size, and the above command has been running for almost an hour.
    I need to know if this is normal, and whether there is a way to tune the above command to speed it up.

    Also, how can we check whether the above command completed successfully when it is run in the background - is there any way to do that?


    Kind regards
    Sphinx

    Hello

    The runtime for a 140 GB schema seems quite OK.

    Without any other parameters, the gather_schema_stats call will use the 'compute' option, which performs a long calculation for each table.

    I've written a procedure that checks for 'stale' tables in dba_tab_statistics and gathers statistics only for those.

    The query that picks up these tables looks like this:

    SELECT table_name, 'NONE' AS partition_name
      FROM dba_tab_statistics a
     WHERE owner = i_schema
       AND NVL(stale_stats, 'NULL') = 'YES'
       AND NVL(stattype_locked, 'NULL') = 'NULL'
       AND NOT EXISTS
           (SELECT 1
              FROM dba_tab_partitions b
             WHERE table_owner = i_schema
               AND a.table_name = b.table_name)
    UNION ALL
    SELECT table_name, partition_name
      FROM dba_tab_statistics a
     WHERE owner = i_schema
       AND NVL(stale_stats, 'NULL') = 'YES'
       AND NVL(stattype_locked, 'NULL') = 'NULL'
       AND partition_name IS NOT NULL;

    The upper part picks up stale non-partitioned tables; the lower part picks up stale partitions of partitioned tables.

    This allows the procedure to gather statistics for those tables and for individual partitions.

    It skips tables whose statistics were already gathered earlier by the default Oracle job.
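
    A related minimal sketch, if you prefer to let DBMS_STATS pick the stale objects itself instead of driving it from the query above (the schema name is a placeholder):

    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'SCHEMA_NAME',
        options => 'GATHER STALE');
    END;
    /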

    See you soon,

    FJFranken

  • Do we need to collect statistics after a reorg?

    Hello
    I did a reorg of a few tables and indexes. After the reorg, do I need to collect statistics for these objects?

    Database version: 11.2.0.2

    Kind regards
    Sarayu K.S.

    ... [url http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables006.htm#i1106606] all the statistics of the table become invalid, and new statistics should be collected after moving the table.
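
    A minimal sketch of the whole cycle, assuming a classic ALTER TABLE ... MOVE reorg (all names are placeholders): the move invalidates the table statistics and leaves its indexes UNUSABLE, so rebuild and re-gather afterwards.

    ALTER TABLE my_schema.my_table MOVE TABLESPACE new_tbs;
    ALTER INDEX my_schema.my_table_pk REBUILD;
    EXEC DBMS_STATS.GATHER_TABLE_STATS('MY_SCHEMA', 'MY_TABLE', cascade => TRUE);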

    Hi allI am trying to remove the data from several tables based on the value of retention configured in different tables, I am able to compile the procedure without any error but when I run it I get ORA-06512 and ORA-01001, I googled the error it is s