Oracle DWH or OLTP
Hello
My Oracle version is: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
I would like to check whether the connected database is a DWH (DSS) or an OLTP database.
Which settings should I check to see whether the Oracle database is DWH (DSS) or OLTP?
Thanks in advance!
None of them. There is no parameter that will tell you how the database is used. In fact, a single database can serve both of these purposes at once.
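That said, a rough indication of how a database is actually being used can be read from its workload statistics. A sketch (assuming access to v$sysstat; this is a heuristic, not a definitive classification):

```sql
-- Heavy 'physical reads' and 'table scan rows gotten' relative to
-- 'user commits' suggests DSS-style scanning; a high commit and execute
-- rate with comparatively small reads suggests OLTP.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits',
                'user rollbacks',
                'physical reads',
                'table scan rows gotten',
                'execute count');
```

Comparing these counters over an interval (for example via AWR or Statspack snapshots) gives a better picture than the cumulative values since instance startup.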
Tags: Database
Similar Questions
-
OLTP compression behavior in Oracle 11g R2
Hello world.
Could someone explain this behavior.
I have three tables: one with OLTP compression, another with basic compression, and another with no compression.
When I insert rows into the OLTP-compressed table, the rows are not compressed.
Only after doing a segment 'move' does the segment shrink.
My test cases (Oracle 11.2.0.3.8):
As you can see here, even after altering the table to COMPRESS FOR OLTP, it still occupies the same space:

create table a_normal as select rownum id, a.* from all_objects a
/
create table a_compress as select rownum id, a.* from all_objects a
/
create table a_comp_oltp as select rownum id, a.* from all_objects a
/
SQL> alter table a_compress move compress;
Table altered.
SQL> alter table a_comp_oltp compress for oltp;
Table altered.
SQL> select table_name, compression, compress_for, pct_free from dba_tables where table_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
TABLE_NAME                     COMPRESS COMPRESS_FOR   PCT_FREE
------------------------------ -------- ------------ ----------
A_NORMAL                       DISABLED                      10
A_COMP_OLTP                    ENABLED  OLTP                 10
A_COMPRESS                     ENABLED  BASIC                 0
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP       10240
A_COMPRESS         3072
A_NORMAL          10240
Now, after the 'move', the space shrinks.
After that, I inserted 4 million rows without APPEND and another 4 million with APPEND; neither batch was compressed until I did a 'move'.

SQL> alter table a_comp_oltp move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP        4096
A_COMPRESS         3072
A_NORMAL          10240
When I read the documentation, it says: "when a block reaches the PCTFREE threshold, it will be compressed; if the compression leaves the block below the PCTFREE threshold, it will be able to accept more rows and be compressed again."

SQL> @ins_a_comp_oltp
Enter value for 1: 4000000
old 3: l_rows number := &1;
new 3: l_rows number := 4000000;
Enter value for 1: 4000000
old 9: where rownum <= &1;
new 9: where rownum <= 4000000;
PL/SQL procedure successfully completed.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP      493568
A_COMPRESS         3072
A_NORMAL          10240
SQL> alter table A_COMP_OLTP move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP      188416
A_COMPRESS         3072
A_NORMAL         499712
SQL> @ins_a_comp_oltp_append
Enter value for 1: 4000000
old 3: l_rows number := &1;
new 3: l_rows number := 4000000;
Enter value for 1: 4000000
old 9: where rownum <= &1;
new 9: where rownum <= 4000000;
PL/SQL procedure successfully completed.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP      665600
A_COMPRESS         3072
A_NORMAL         499712
SQL> alter table A_COMP_OLTP move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
------------ ----------
A_COMP_OLTP      360448
A_COMPRESS         3072
A_NORMAL         499712
But here, even with the 4-million-row insert, the table is not compressed, not even with APPEND (direct path). Why?
Thank you and best regards,
Felipe.
Published by: Felipe Romeu on 10/09/2012 14:43
Hi Felipe,
The same issue happened to me some time ago. The problem was that I was connecting as the SYS user (as SYSDBA). When I used the SYSTEM user instead, the problem did not happen again.
Which Oracle database user do you use?
Kind regards
Roberto Faucz
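Regardless of the user, one way to see whether individual rows are actually stored compressed is DBMS_COMPRESSION.GET_COMPRESSION_TYPE, available in 11.2. A sketch (the return codes are assumptions from the 11.2 API, where 1 means no compression and 2 means OLTP compression):

```sql
-- Sample 1000 rows of A_COMP_OLTP and count how many are block-compressed.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'A_COMP_OLTP', ROWID) AS comp_type,
       COUNT(*) AS row_count
FROM   a_comp_oltp
WHERE  ROWNUM <= 1000
GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'A_COMP_OLTP', ROWID);
```

This checks the blocks themselves rather than relying on segment size, so it distinguishes "table marked for compression" from "rows actually compressed".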
-
ORA-01033: ORACLE initialization or shutdown in progress / TAF
Hi experts,
I have the following environment:
+ 2 Linux Red Hat 5.7 on x86/64 hosts, named dwh and stb, running Oracle 11.2.0.2
+ the database on dwh is the primary and the one on stb is the standby
The listener.ora on dwh:

ADR_BASE_LISTENER=/u00/app/oracle
LISTENER =
  (ADDRESS_LIST =
    # for external procedure calls, create a separate listener
    # See basenv_user_guide.pdf for details (chapter of listener.ksh)
    (ADDRESS = (PROTOCOL = TCP) (Host = dwh) (Port = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      # Next line is necessary for dataguard >= 10g
      (GLOBAL_DBNAME = strm_site1_DGMGRL)
      (SID_NAME = STRM)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
    )
  )
The service_names parameter and listener status on dwh:

SQL> show parameter service
NAME          TYPE   VALUE
------------- ------ ------
service_names string STRM

[oracle@dwh admin]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 14-SEP-2011 17:32:43
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=TCP)(Host=dwh)(Port=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                14-SEP-2011 12:11:15
Uptime                    0 days 5 hr. 21 min. 28 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dwh)(PORT=1521)))
Services Summary...
Service "STRMXDB" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site1" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site1_DGB" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site1_DGMGRL" has 1 instance(s).
  Instance "STRM", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
The listener.ora on stb:

[oracle@stb admin]$ cat listener.ora
ADR_BASE_LISTENER=/u00/app/oracle
LISTENER =
  (ADDRESS_LIST =
    # for external procedure calls, create a separate listener
    # See basenv_user_guide.pdf for details (chapter of listener.ksh)
    (ADDRESS = (PROTOCOL = TCP) (Host = stb) (Port = 1521))
  )
SID_LIST_LISTENER =
  (SID_DESC =
    # Next line is necessary for dataguard >= 10g
    (GLOBAL_DBNAME = strm_site2_DGMGRL)
    (SID_NAME = STRM)
    (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
  )
The service_names parameter and listener status on stb:

SQL> show parameter service
NAME          TYPE   VALUE
------------- ------ ------
service_names string STRM

[oracle@stb admin]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 14-SEP-2011 17:37:23
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=TCP)(Host=stb)(Port=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                14-SEP-2011 12:12:39
Uptime                    0 days 5 hr. 24 min. 44 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stb)(PORT=1521)))
Services Summary...
Service "strm" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site2" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site2_DGB" has 1 instance(s).
  Instance "STRM", status READY, has 1 handler(s) for this service...
Service "strm_site2_DGMGRL" has 1 instance(s).
  Instance "STRM", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
My tnsnames.ora:

# tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.2.0.2/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
STRM =
  (DESCRIPTION=
    (LOAD_BALANCE=on)
    (FAILOVER=on)
    (ADDRESS=(PROTOCOL=tcp)(HOST=dwh)(PORT=1521))
    (ADDRESS=(PROTOCOL=tcp)(HOST=stb)(PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME=strm)
      (FAILOVER_MODE=(TYPE=select)(METHOD=basic))
    )
  )
STRM_SITE1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dwh)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STRM_SITE1)
    )
  )
STRM_SITE2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stb)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STRM_SITE2)
    )
  )
My problem: connecting with scott raises an error:

C:\Documents and Settings\thai> sqlplus scott/scott@STRM
SQL*Plus: Release 11.2.0.1.0 Production on Wed Sep 14 17:49:51 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
while connecting as sysdba does not raise any problem:

sqlplus sys/****@STRM as sysdba
What did I do wrong? Please help!
Regards
hqt200475
Published by: hqt200475 on Sep 14, 2011 09:04
If I understand the second part of the question, you can use DBMS_SERVICE to create an alias on both servers.
Then, depending on the ROLE each one is in, you can just connect.
First of all, an entry should be added to the client's tnsnames.ora that uses a SERVICE_NAME instead of a SID.
ernie =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary.host)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby.host)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ernie)
    )
  )
Then, the "ernie" service must be manually created on the primary database.
BEGIN
  DBMS_SERVICE.CREATE_SERVICE('ernie','ernie');
END;
/
After creating it, the service must be started manually.
BEGIN
  DBMS_SERVICE.START_SERVICE('ernie');
END;
/
Several of the default settings can now be set for "ernie".
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE
  ('ernie',
   FAILOVER_METHOD  => 'BASIC',
   FAILOVER_TYPE    => 'SELECT',
   FAILOVER_RETRIES => 200,
   FAILOVER_DELAY   => 1);
END;
/
Finally, an AFTER STARTUP ON DATABASE trigger should be created to ensure that this service is only available when the database is the primary.
CREATE TRIGGER CHECK_ERNIE_START
AFTER STARTUP ON DATABASE
DECLARE
  V_ROLE VARCHAR(30);
BEGIN
  SELECT DATABASE_ROLE INTO V_ROLE FROM V$DATABASE;
  IF V_ROLE = 'PRIMARY' THEN
    DBMS_SERVICE.START_SERVICE('ernie');
  ELSE
    DBMS_SERVICE.STOP_SERVICE('ernie');
  END IF;
END;
/
Check the status using lsnrctl status
/home/oracle:> lsnrctl status
"Ernie" service has 1 instance (s).
"ernie" comes from the Oracle example; you can use any name you want, and then if you do a switchover or failover, it is transparent to your users.
See this for more details
http://download.Oracle.com/docs/CD/E11882_01/AppDev.112/e16760/d_serv.htm
Best regards
mseberg
-
Hello forum
I'm new to Oracle Forms and have some confusion about using Oracle Forms for web applications. Can someone please tell me why I should use Oracle Forms instead of the many web technologies such as ASP, C#, PHP, or JSP? Since Forms applications are heavy and take a long time to load, why should I use them?
Can I use Oracle Forms applications as public web applications, where anyone can use them? Given this, I have another doubt: will all users need to install OC4J to run Oracle Forms? If they do, it is going to be a very big hassle for users.
In fact, I want to know the basic concept of using Oracle Forms...
It depends on your application's needs. If you are developing web sites with only weak interaction with a database, move away from Oracle Forms toward Oracle APEX and Database 10g XE.
If you are developing an ERP application, or any other that interacts strongly with the database (Oracle DB) and needs OLTP performance, data validation, transaction processing and so on, then you should use Oracle Forms.
And as stated in the post above, Forms is based on the SQL and PL/SQL languages and its executables are interpreted at run time; it is not an .exe file, so it can be maintained more easily: you can take one form offline and the rest of your application keeps working normally. Some people may say Forms has its limits, but with PJCs and Java Beans I could not find a limitation that cannot be conquered.
Users who run an Oracle Forms web application do not need to run OC4J themselves. You need OC4J only when running your form from Forms Builder, or for testing purposes.
For a production deployment you will need an Oracle Application Server (OAS), and users only have to auto-download/install JInitiator on the first run of the application; after that first time, it just works.
Tony
-
Hi experts
My 2-node RAC system runs version 11.2.0.3 (on Linux RH 5.4/x86-64), plus a stand-alone database dg.
The output of lsnrctl status shows:
I never defined the instances/services "DBUA0632275" and "DBUA3034148".

[oracle@dwh admin]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 17-APR-2012 19:39:47
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                17-APR-2012 17:06:59
Uptime                    0 days 2 hr. 32 min. 49 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.3/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/dwh/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.16)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "DBUA0632275" has 1 instance(s).
  Instance "DBUA0632275", status BLOCKED, has 1 handler(s) for this service...
Service "DBUA3034148" has 1 instance(s).
  Instance "DBUA3034148", status BLOCKED, has 1 handler(s) for this service...
Service "dg1" has 1 instance(s).
  Instance "dg", status READY, has 1 handler(s) for this service...
Service "dg1_DGMGRL" has 1 instance(s).
  Instance "dg1", status UNKNOWN, has 1 handler(s) for this service...
Service "dgXDB" has 1 instance(s).
  Instance "dg", status READY, has 1 handler(s) for this service...
Service "rac" has 1 instance(s).
  Instance "rac1", status READY, has 1 handler(s) for this service...
Service "racXDB" has 1 instance(s).
  Instance "rac1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@dwh admin]$
My Questions
1/ Where do they come from?
2/ Why are they "BLOCKED"?
Thanks for any help!
hqt200475
Published by: hqt200475 on April 17, 2012 10:51
Hello
It is not a mystery, but it is something undocumented. A while ago I had the same curiosity, when I saw an unknown PMON process on my host.
I noticed that on initialization, DBCA (or DBUA) always creates a temporary instance named DBUA<n> for a short period; for internal reasons it also registers a service with the listener temporarily. This dummy instance is started to determine which options are available for DBCA to use. The instance is created and deleted within a few seconds, just to run two queries: "select parameter from v$option where value = 'TRUE'" and "select version from v$timezone_file".
Start dbca and check the pmon process.
You will see something like this:
$ ps -ef | grep pmon
oracle 2485 1 4 23:14 ? 00:00:00 ora_pmon_DBUA0
For more information, see the trace log that is written while DBCA runs:
$ORACLE_BASE/cfgtoollogs/dbca/trace.log_
See this example:
cat trace.log_OraDb11g_home1 . . . [main] [ 2012-04-17 23:13:58.079 BRT ] [OracleHome.initOptions:1226] Initializing Database Options with for dummy sid=DBUA0 using initfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initDBUA0.ora using pwdfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwDBUA0 . . . [main] [ 2012-04-17 23:14:07.671 BRT ] [OracleHome.initOptions:1240] executing: startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initDBUA0.ora' [main] [ 2012-04-17 23:14:19.175 BRT ] [OracleHome.initOptions:1250] executing: select parameter from v$option where value='TRUE' [main] [ 2012-04-17 23:14:19.198 BRT ] [OracleHome.initOptions:1256] Database Option Partitioning is ON [main] [ 2012-04-17 23:14:19.198 BRT ] [OracleHome.initOptions:1256] Database Option Objects is ON [main] [ 2012-04-17 23:14:19.198 BRT ] [OracleHome.initOptions:1256] Database Option Real Application Clusters is ON [main] [ 2012-04-17 23:14:19.199 BRT ] [OracleHome.initOptions:1256] Database Option Advanced replication is ON [main] [ 2012-04-17 23:14:19.199 BRT ] [OracleHome.initOptions:1256] Database Option Bit-mapped indexes is ON [main] [ 2012-04-17 23:14:19.199 BRT ] [OracleHome.initOptions:1256] Database Option Connection multiplexing is ON [main] [ 2012-04-17 23:14:19.200 BRT ] [OracleHome.initOptions:1256] Database Option Connection pooling is ON [main] [ 2012-04-17 23:14:19.200 BRT ] [OracleHome.initOptions:1256] Database Option Database queuing is ON [main] [ 2012-04-17 23:14:19.200 BRT ] [OracleHome.initOptions:1256] Database Option Incremental backup and recovery is ON [main] [ 2012-04-17 23:14:19.200 BRT ] [OracleHome.initOptions:1256] Database Option Instead-of triggers is ON [main] [ 2012-04-17 23:14:19.201 BRT ] [OracleHome.initOptions:1256] Database Option Parallel backup and recovery is ON [main] [ 2012-04-17 23:14:19.201 BRT ] [OracleHome.initOptions:1256] Database Option Parallel execution is ON [main] [ 2012-04-17 23:14:19.201 BRT ] 
[OracleHome.initOptions:1256] Database Option Parallel load is ON [main] [ 2012-04-17 23:14:19.201 BRT ] [OracleHome.initOptions:1256] Database Option Point-in-time tablespace recovery is ON [main] [ 2012-04-17 23:14:19.202 BRT ] [OracleHome.initOptions:1256] Database Option Fine-grained access control is ON [main] [ 2012-04-17 23:14:19.202 BRT ] [OracleHome.initOptions:1256] Database Option Proxy authentication/authorization is ON [main] [ 2012-04-17 23:14:19.202 BRT ] [OracleHome.initOptions:1256] Database Option Change Data Capture is ON [main] [ 2012-04-17 23:14:19.203 BRT ] [OracleHome.initOptions:1256] Database Option Plan Stability is ON [main] [ 2012-04-17 23:14:19.203 BRT ] [OracleHome.initOptions:1256] Database Option Online Index Build is ON [main] [ 2012-04-17 23:14:19.203 BRT ] [OracleHome.initOptions:1256] Database Option Coalesce Index is ON [main] [ 2012-04-17 23:14:19.204 BRT ] [OracleHome.initOptions:1256] Database Option Managed Standby is ON [main] [ 2012-04-17 23:14:19.204 BRT ] [OracleHome.initOptions:1256] Database Option Materialized view rewrite is ON [main] [ 2012-04-17 23:14:19.204 BRT ] [OracleHome.initOptions:1256] Database Option Database resource manager is ON [main] [ 2012-04-17 23:14:19.204 BRT ] [OracleHome.initOptions:1256] Database Option Spatial is ON [main] [ 2012-04-17 23:14:19.205 BRT ] [OracleHome.initOptions:1256] Database Option Export transportable tablespaces is ON [main] [ 2012-04-17 23:14:19.205 BRT ] [OracleHome.initOptions:1256] Database Option Transparent Application Failover is ON [main] [ 2012-04-17 23:14:19.206 BRT ] [OracleHome.initOptions:1256] Database Option Fast-Start Fault Recovery is ON [main] [ 2012-04-17 23:14:19.206 BRT ] [OracleHome.initOptions:1256] Database Option Sample Scan is ON [main] [ 2012-04-17 23:14:19.206 BRT ] [OracleHome.initOptions:1256] Database Option Duplexed backups is ON [main] [ 2012-04-17 23:14:19.207 BRT ] [OracleHome.initOptions:1256] Database Option Java is ON [main] [ 
2012-04-17 23:14:19.207 BRT ] [OracleHome.initOptions:1256] Database Option OLAP Window Functions is ON [main] [ 2012-04-17 23:14:19.207 BRT ] [OracleHome.initOptions:1256] Database Option Block Media Recovery is ON [main] [ 2012-04-17 23:14:19.208 BRT ] [OracleHome.initOptions:1256] Database Option Fine-grained Auditing is ON [main] [ 2012-04-17 23:14:19.208 BRT ] [OracleHome.initOptions:1256] Database Option Application Role is ON [main] [ 2012-04-17 23:14:19.208 BRT ] [OracleHome.initOptions:1256] Database Option Enterprise User Security is ON [main] [ 2012-04-17 23:14:19.209 BRT ] [OracleHome.initOptions:1256] Database Option Oracle Data Guard is ON [main] [ 2012-04-17 23:14:19.209 BRT ] [OracleHome.initOptions:1256] Database Option Oracle Label Security is ON [main] [ 2012-04-17 23:14:19.209 BRT ] [OracleHome.initOptions:1256] Database Option OLAP is ON [main] [ 2012-04-17 23:14:19.210 BRT ] [OracleHome.initOptions:1256] Database Option Basic Compression is ON [main] [ 2012-04-17 23:14:19.210 BRT ] [OracleHome.initOptions:1256] Database Option Join index is ON [main] [ 2012-04-17 23:14:19.210 BRT ] [OracleHome.initOptions:1256] Database Option Trial Recovery is ON [main] [ 2012-04-17 23:14:19.211 BRT ] [OracleHome.initOptions:1256] Database Option Data Mining is ON [main] [ 2012-04-17 23:14:19.211 BRT ] [OracleHome.initOptions:1256] Database Option Online Redefinition is ON [main] [ 2012-04-17 23:14:19.212 BRT ] [OracleHome.initOptions:1256] Database Option Streams Capture is ON [main] [ 2012-04-17 23:14:19.212 BRT ] [OracleHome.initOptions:1256] Database Option File Mapping is ON [main] [ 2012-04-17 23:14:19.213 BRT ] [OracleHome.initOptions:1256] Database Option Block Change Tracking is ON [main] [ 2012-04-17 23:14:19.213 BRT ] [OracleHome.initOptions:1256] Database Option Flashback Table is ON [main] [ 2012-04-17 23:14:19.213 BRT ] [OracleHome.initOptions:1256] Database Option Flashback Database is ON [main] [ 2012-04-17 23:14:19.214 BRT ] 
[OracleHome.initOptions:1256] Database Option Transparent Data Encryption is ON [main] [ 2012-04-17 23:14:19.214 BRT ] [OracleHome.initOptions:1256] Database Option Backup Encryption is ON [main] [ 2012-04-17 23:14:19.214 BRT ] [OracleHome.initOptions:1256] Database Option Unused Block Compression is ON [main] [ 2012-04-17 23:14:19.215 BRT ] [OracleHome.initOptions:1256] Database Option Oracle Database Vault is ON [main] [ 2012-04-17 23:14:19.215 BRT ] [OracleHome.initOptions:1256] Database Option Result Cache is ON [main] [ 2012-04-17 23:14:19.215 BRT ] [OracleHome.initOptions:1256] Database Option SQL Plan Management is ON [main] [ 2012-04-17 23:14:19.216 BRT ] [OracleHome.initOptions:1256] Database Option SecureFiles Encryption is ON [main] [ 2012-04-17 23:14:19.216 BRT ] [OracleHome.initOptions:1256] Database Option Real Application Testing is ON [main] [ 2012-04-17 23:14:19.216 BRT ] [OracleHome.initOptions:1256] Database Option Flashback Data Archive is ON [main] [ 2012-04-17 23:14:19.217 BRT ] [OracleHome.initOptions:1256] Database Option DICOM is ON [main] [ 2012-04-17 23:14:19.217 BRT ] [OracleHome.initOptions:1256] Database Option Active Data Guard is ON [main] [ 2012-04-17 23:14:19.218 BRT ] [OracleHome.initOptions:1256] Database Option Server Flash Cache is ON [main] [ 2012-04-17 23:14:19.218 BRT ] [OracleHome.initOptions:1256] Database Option Advanced Compression is ON [main] [ 2012-04-17 23:14:19.218 BRT ] [OracleHome.initOptions:1256] Database Option XStream is ON [main] [ 2012-04-17 23:14:19.219 BRT ] [OracleHome.initOptions:1256] Database Option Deferred Segment Creation is ON [main] [ 2012-04-17 23:14:19.219 BRT ] [OracleHome.initOptions:1260] executing: select version from v$timezone_file [main] [ 2012-04-17 23:14:19.227 BRT ] [OracleHome.initOptions:1266] Timezone file version is 14 [main] [ 2012-04-17 23:14:20.663 BRT ] [SQLEngine.done:2167] Done called [main] [ 2012-04-17 23:14:20.665 BRT ] [OsUtilsBase.deleteFile:1838] OsUtilsBase.deleteFile: 
/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initDBUA0.ora [main] [ 2012-04-17 23:14:20.666 BRT ] [OsUtilsBase.deleteFile:1838] OsUtilsBase.deleteFile: /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwDBUA0 [main] [ 2012-04-17 23:14:20.666 BRT ] [OracleHome.initOptions:1301] Database Options queried: 63 [main] [ 2012-04-17 23:14:35.920 BRT ] [Host.checkOPS:2556] Inside checkOPS
Kind regards
Levi Pereira
-
How to filter data using the (-) minus sign
Hello
We use Oracle EBS as our OLTP. Sales data is stored in the PO_HEADERS_ALL and PO_LINES_ALL tables, which also contain rejected/cancelled order data.
The rejected/cancelled data is stored in the same tables. We can identify rejected/cancelled rows because the quantity carries a (-) minus sign (example: Qty: -30), and the Amount column is also negative. So, how do we filter out the data where the quantity has a minus sign in OBIEE?
Kind regards.
CHRJay wrote:
If it is defined as numeric, use the CAST function to convert it to CHAR, then use LIKE to check for the minus sign in the filter condition formula. If you want to filter out all negative values, then use the filter: measure < 0.
Hope this will help you.
Thank you
Jay.
If the column were purely numeric, why cast to CHAR? The OP can do everything Robert Angel suggested. My suggestion was for the case where there are other non-numeric values in the column (which could be the reason the column is CHAR); then using the LIKE filter operand would solve the problem.
The first part of your suggestion is not necessary if the column is a numeric data type, and the second part was just a repetition of what Robert said.
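For reference, both suggestions can be sketched as plain SQL against the source table (the column names here are illustrative, not the actual EBS definitions):

```sql
-- If the quantity column is numeric, filter out negatives directly:
SELECT *
FROM   po_lines_all
WHERE  quantity >= 0;

-- If the column is CHAR (possibly holding non-numeric values),
-- test for the minus sign with LIKE instead of casting:
SELECT *
FROM   po_lines_all
WHERE  TRIM(quantity) NOT LIKE '-%';
```

In OBIEE the same two conditions would go into the filter formula of the analysis.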
-
Solution to run SQL from MySQL application logs
We have a custom MySQL application that generates logs stored in a common location on a Linux machine, in the format below:
$ cat minor20100809.log
20100809001 ^A select count(prod_id) from product_logs;
20100809002 ^A select count(order_id) from order_logs;
The ID and the SQL are separated by a Ctrl-A separator, and about 1000 log files are generated daily.
We now have an Oracle DWH into which we import all the data from MySQL, and we need to verify these counts on a daily basis.
Could you please suggest a solution in oracle for this?
At a high level, we would follow the procedure below:
(a) import these log files into a table -> which is the best way: SQL*Loader, impdp, or reading the file from PL/SQL?
(b) execute these SQL statements one by one
(c) store the counts in a daily log table
What is an "Oracle tool"? Do you have SQL*Loader on your client machine? SQL*Plus? Are you allowed to use these tools? What possible security reason can there be for you to be allowed to use certain tools on the client machine but not others?
Justin
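For step (a), an external table is one more option worth considering, since it reads the files in place. A sketch (it assumes a directory object LOG_DIR pointing at the common log location and that the separator really is Ctrl-A, i.e. character 0x01; all names are illustrative):

```sql
CREATE TABLE mysql_log_ext (
  log_id   VARCHAR2(20),
  sql_text VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY log_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY 0X'01'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('minor20100809.log')
)
REJECT LIMIT UNLIMITED;

-- Steps (b) and (c) could then be a PL/SQL loop over this table,
-- running each statement with EXECUTE IMMEDIATE ... INTO a count
-- variable and inserting (log_id, count) into a daily results table.
```

The LOCATION clause can be altered each day to point at that day's file, which avoids reloading the data at all.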
-
OPatch failed during pre-reqs check.
System:
WINDOWS 2000
Database version:
9.2.0.1.0
OPatch version:
OPatch Version: 1.0.0.0.61
Perl version:
5.005_03
The patch I am applying:
9.2.0.8.0
STEP 1:
Downloaded the latest version of OPatch from Metalink:
D:\Downloads\OPatch > opatch version
Oracle Interim Patch Installer version 1.0.0.0.61
Copyright (c) 2009 Oracle Corporation. All rights reserved...
Oracle recommends using the latest version of OPatch
and reading the OPatch documentation in the docs/OPatch
directory for its usage. For more information on the latest OPatch and
other support issues, please refer to document ID 293369.1
available on My Oracle Support (https://myoraclesupport.oracle.com)
OPatch Version: 1.0.0.0.61
OPatch returns with error code = 0
D:\Downloads\OPatch >
STEP 2:
Downloaded the latest patch from Oracle Metalink.
D:\Downloads\p8300340_92080_WINNT\8300340 > dir
Volume in drive D has no label.
Volume Serial Number is 201B-9310
Directory of D:\Downloads\p8300340_92080_WINNT\8300340
2009-04-27 16:07 < DIR >.
2009-04-27 16:07 < DIR >...
2009-04-27 14:59 < DIR > custom
2009-04-27 14:59 < DIR > etc
2009-04-27 14:59 < DIR > files
2009-04-27 14:59 < DIR > ldap_scripts
2009-04-27 15:13 < DIR > locks
2009-04-27 16:07 < DIR > OPatch
2002-04-12 23:46 2 924 patchmd.xml
2009-04-08 11:43 61 715 README.html
2007-09-04 10:43 43 README.txt
28/12/2006 12:35 1 545 remove_demo.js
4 File(s) 66 227 bytes
8 Dir(s) 62,529,622,016 bytes free
D:\Downloads\p8300340_92080_WINNT\8300340 >
STEP 3:
Changed the ORACLE_HOME setting as follows:
D:\Downloads\p8300340_92080_WINNT\8300340 > set ORACLE_HOME=D:\oracle\ora92New
STEP 4:
Checking the inventory before applying the patch:
D:\Downloads\OPatch > opatch lsinventory -detail
Oracle Interim Patch Installer version 1.0.0.0.61
Copyright (c) 2009 Oracle Corporation. All rights reserved...
Oracle recommends using the latest version of OPatch
and reading the OPatch documentation in the docs/OPatch
directory for its usage. For more information on the latest OPatch and
other support issues, please refer to document ID 293369.1
available on My Oracle Support (https://myoraclesupport.oracle.com)
Oracle home: D:\oracle\ora92New
Oracle Home Inventory: D:\Program Files\oracle\inventory
Central Inventory: D:\Program Files\oracle\inventory
   from: n/a
OUI location: D:\Program Files\Oracle\oui
OUI shared library: D:\Program Files\Oracle\oui\bin\win32\oraInstaller.dll
Java location: "D:\Program Files\Oracle\jre\1.3.1\bin\java.exe"
Log file location: D:\oracle\ora92New/.patch_storage/< patch ID >/*.log
Creating log file "D:\oracle\ora92new\.patch_storage\LsInventory__05-01-2009_15-15-24.log"
Result:
PRODUCT NAME VERSION
============ =======
Advanced Queuing (AQ) API 9.2.0.1.0
Advanced Replication 9.2.0.1.0
Agent Required Support Files 9.2.0.1.0
Configuration of Apache Oracle Java Server Pages 1.1.2.3.0
Apache configuration for the XML of Oracle 9.2.0.1.0 Developer Kit
Apache JServ 1.1.0.0.0g
Apache Web Server files 1.3.22.0.0a
Common files Wizard 9.2.0.1.0
Authentication and encryption 9.2.0.1.0
Bali hand 1.1.17.0.0
Duration of BC4J database 9.0.2.692.1
Capacity Planner 9.2.0.1.0
Change management common files 9.2.0.1.0
Character Migration 9.2.0.1.0 utility
Common files for generic connectivity using OLEDB 9.2.0.1.0
Common files 9.2.0.1.0 data management services
Database Configuration Assistant 9.2.0.1.0
Database SQL Scripts 9.2.0.1.0
Database upgrade Assistant 9.2.0.1.0
Database check 9.2.0.1.0 utility
Workspace Manager 9.2.0.1.0 database
DBJAVA necessary support files 9.2.0.1.0
Documentation Required Support Files 9.2.0.1.0
Enterprise Edition Options 9.2.0.1.0
Enterprise login Assistant 9.2.0.1.0
Enterprise Manager translated Web site files 9.2.0.1.0
Base classes of Enterprise Manager 9.2.0.1.0
Enterprise Manager Client 9.2.0.1.0
Enterprise Manager Common 9.2.0.1.0 Files
Enterprise Manager Console 9.2.0.1.0
Enterprise Manager 9.2.0.1.0 database applications
Enterprise Manager events 9.2.0.1.0
Enterprise Manager Expert Common 9.2.0.1.0 Files
Prior to the Installation of Enterprise Manager checks 9.2.0.1.0
Enterprise Manager integrates Applications 9.2.0.1.0
Enterprise Manager minimum integration 9.2.0.1.0
Common files 9.2.0.1.0 paging Enterprise Manager and WHO
Enterprise Manager in pagination 9.2.0.1.0 Server
Enterprise Manager Quick laps 9.2.0.1.0
Enterprise Manager Shared Libraries 9.2.0.1.0
Enterprise Manager translated 9.2.0.1.0 files
Enterprise Manager Web Site 9.2.0.1.0
Web Enterprise Manager 9.2.0.1.0 integration server
App Win32 Enterprise Manager translated 9.2.0.1.0 files
Enterprise Manager Win32 Application Common 9.2.0.1.0 Files
Enterprise Manager Win32 applications 9.2.0.1.0 bridge
Expert 9.2.0.1.0
Import/Export 9.2.0.1.0
Files common generic connectivity 9.2.0.1.0
Generic connectivity using ODBC 9.2.0.1.0
Using OLEDB - FS 9.2.0.1.0 generic connectivity
Using OLEDB - SQL 9.2.0.1.0 generic connectivity
Index Tuning Wizard 9.2.0.1.0
Installation of 9.2.0.1.0 common files
iSQL * Plus Extension for Windows 9.2.0.1.0
iSQL * Plus 9.2.0.1.0
JDBC 9.2.0.1.0 common files
JDBC/OCI 9.2.0.1.0 common files
JSDK 2.0.0.0.0d
Required LDAP Support 9.2.0.1.0 files
Extensions MIcrosoft SQLServer (TM) 9.2.0.1.0
9.2.0.1.0 migration utility
New database ID 9.2.0.1.0
Object Type translator 9.2.0.1.0
Oracle for Windows NT 9.2.0.1.0 Administration Assistant
Oracle Advanced Security 9.2.0.1.0
Applications 9.2.0.1.0 Oracle Extensions
Oracle C++ Call Interface 9.2.0.1.0
Oracle Caching Service for Java 2.1.0.0.0a
Oracle Call Interface (OCI) 9.2.0.1.0
Oracle Change Management Pack 9.2.0.1.0
Oracle Client Required Support Files 9.2.0.1.0
Oracle Code Editor 1.2.1.0.0A
Oracle COM Automation Feature 9.2.0.1.0
Oracle Common Schema Demos 9.2.0.1.0
Oracle Complete DSS Starter Server 9.2.0.1.0
Oracle Complete OLTP Starter Server 9.2.0.1.0
Oracle Core Required Support Files 9.2.0.1.0
Oracle Data Mining 9.2.0.1.0
Oracle Database Demos 9.2.0.1.0
Oracle Database User Interface 2.2.11.0.0
Oracle Database Utilities 9.2.0.1.0
Oracle Developer Server Forms Manager 9.2.0.1.0
Oracle Diagnostics Pack 9.2.0.1.0
Oracle Directory Manager 9.2.0.1.0
Oracle Display Policies 9.0.2.0.0
Oracle Dynamic Services Server 9.2.0.1.0
Oracle e-Business Management Extensions 9.2.0.1.0
Oracle EMD Agent Extensions 9.2.0.1.0
Oracle Enterprise Manager Products 9.2.0.1.0
Oracle Extended Windowing Toolkit 3.4.13.0.0
Oracle Forms Extensions 9.2.0.1.0
Oracle Help For Java 3.2.13.0.0
Oracle Help For Java 4.1.13.0.0
Oracle Help for the Web 1.0.7.0.0
Oracle HTTP Server Extensions 9.2.0.1.0
Oracle HTTP Server 9.2.0.1.0
Oracle Ice Browser 5.06.8.0.0
Oracle Intelligent Agent Base Component Files 9.2.0.1.0
Oracle Intelligent Agent Configuration Tool 9.2.0.1.0
Oracle Intelligent Agent Extensions 9.2.0.1.0
Oracle Intelligent Agent 9.2.0.1.0
Oracle interMedia Annotator 9.2.0.1.0
Oracle interMedia Audio 9.2.0.1.0
Oracle interMedia Client Compatibility Files 9.2.0.1.0
Oracle interMedia Client Demos 9.2.0.1.0
Oracle interMedia Client Option 9.2.0.1.0
Oracle interMedia Common Files 9.2.0.1.0
Oracle interMedia Image 9.2.0.1.0
Oracle interMedia Java Advanced Imaging 9.2.0.1.0
Oracle interMedia Java Client 9.2.0.1.0
Oracle interMedia Java Media Framework Client 9.2.0.1.0
Oracle interMedia Locator 9.2.0.1.0
Oracle interMedia Video 9.2.0.1.0
Oracle interMedia Web Client 9.2.0.1.0
Oracle interMedia 9.2.0.1.0
Oracle Internet Directory Client Common Files 9.2.0.1.0
Oracle Internet Directory Client 9.2.0.1.0
Oracle INTYPE File Assistant 9.2.0.1.0
Oracle Java Layout Engine 2.0.1.0.0
Oracle Java Server Pages 1.1.3.1.0
Oracle Java Tools 9.2.0.1.0
Oracle JDBC Development Drivers 9.2.0.1.0
Oracle JDBC Thin Driver for JDK 1.1 9.2.0.1.0
Oracle JDBC Thin Driver for JDK 1.2 9.2.0.1.0
Oracle JDBC Thin Driver for JDK 1.4 9.2.0.1.0
Oracle JDBC/OCI Driver for JDK 1.1 9.2.0.1.0
Oracle JDBC/OCI Driver for JDK 1.2 9.2.0.1.0
Oracle JDBC/OCI Driver for JDK 1.4 9.2.0.1.0
Oracle JDBC/OCI Interfaces 9.2.0.1.0
Oracle JFC Extended Windowing Toolkit 4.1.10.0.0
Oracle JVM 9.2.0.1.0
Oracle Management Pack for Oracle Applications 9.2.0.1.0
Oracle Management Server 9.2.0.1.0
Oracle Message Gateway Common Files 9.2.0.1.0
Oracle Mod PL/SQL Gateway 3.0.9.8.3b
Oracle Net Configuration Assistant 9.2.0.1.0
Oracle Net Integration 9.2.0.1.0
Oracle Net Listener 9.2.0.1.0
Oracle Net Manager 9.2.0.1.0
Oracle Net Required Support Files 9.2.0.1.0
Oracle Net Services 9.2.0.1.0
Oracle Net 9.2.0.1.0
Oracle Objects for OLE 9.2.0.4.4
Oracle ODBC Driver 9.2.0.1.0
Oracle OLAP API 9.2.0.1.0
Oracle OLAP Cube Viewer 9.2.0.1.0
Oracle OLAP CWMLite 9.2.0.1.0
Oracle OLAP Worksheet 9.2.0.1.0
Oracle OLAP 9.2.0.1.0
Oracle Partitioning 9.2.0.1.0
Oracle Perl Interpreter 5.00503.0.0.0c
Oracle Programmer 9.2.0.1.0
Oracle Provider for OLE DB 9.2.0.1.0
Oracle Services for Microsoft Transaction Server 9.2.0.1.0
Oracle SNMP Agent 9.2.0.1.0
Oracle SOAP Client 2.0.0.0.0a
Oracle SOAP for JServ 2.0.0.0.0a
Oracle SOAP Server 2.0.0.0.0a
Oracle Spatial 9.2.0.1.0
Oracle SQLJ 9.2.0.1.0
Oracle Starter Server 9.2.0.1.0
Oracle Text 9.2.0.1.0
Oracle Trace Required Support Files 9.2.0.1.0
Oracle Trace 9.2.0.1.0
Oracle Tuning Pack 9.2.0.1.0
Oracle UIX 2.0.20.0.0
Oracle Ultra Search Common Files 9.2.0.1.0
Oracle Ultra Search Middle-Tier 9.2.0.1.0
Oracle Ultra Search Server 9.2.0.1.0
Oracle Wallet Manager 9.2.0.1.0
Oracle Windows Interfaces 9.2.0.1.0
Oracle Workflow Manager 9.2.0.1.0
Oracle XML Developer's Kit 9.2.0.1.0
Oracle XML Runtime Components 9.2.0.1.0
Oracle XML SQL Utility 9.2.0.1.0
Oracle9i Database 9.2.0.1.0
Oracle9i Development Kit 9.2.0.1.0
Oracle9i Globalization Support 9.2.0.1.0
Oracle9i Syndication Server 9.2.0.1.0
Oracle9i Windows Documentation 9.2.0.1.0
Oracle9i 9.2.0.1.0
Parser Generator Required Support Files 9.2.0.1.0
Performance Manager 9.2.0.1.0
PL/SQL Embedded Gateway 9.2.0.1.0
PL/SQL Required Support Files 9.2.0.1.0
PL/SQL 9.2.0.1.0
Platform Required Support Files 9.2.0.1.0
nsqlprep Common Files 9.2.0.1.0
Precompiler Required Support Files 9.2.0.1.0
RDBMS Required Support Files 9.2.0.1.0
Recovery Manager 9.2.0.1.0
RegExp 2.0.20.0.0
Presentation Framework 9.2.0.1.0
Required Support Files 9.2.0.1.0
Secure Socket Layer 9.2.0.1.0
SQL Analyze 9.2.0.1.0
SQL*Loader 9.2.0.1.0
SQL*Plus 9.2.0.1.0
SQLJ Runtime 9.2.0.1.0
SQLJ Translator 9.2.0.1.0
SQLServer Monitoring Option 9.2.0.1.0
SSL Required Support Files 9.2.0.1.0
Sun JDK Extensions 9.2.0.1.0
Sun JDK 1.3.1.0.1a
Trace Data Viewer 9.2.0.1.0
Utilities Common Files 9.2.0.1.0
Visigenics ORB 3.4.0.0.0
XDK Required Support Files 9.2.0.1.0
XML Class Generator for C++ 9.2.0.1.0
XML Class Generator for Java 9.2.0.1.0
XML Parser for C++ 9.2.0.1.0
XML Parser for C 9.2.0.1.0
XML Parser for Java 9.2.0.1.0
XML Parser for Oracle JVM 9.2.0.1.0
XML Parser for PL/SQL 9.2.0.1.0
XML Transviewer Beans 9.2.0.1.0
XML Transx 9.2.0.1.0
XML 9.2.0.1.0
XSQL Servlet 9.2.0.1.0
There are 220 installed components.
There are no interim patches installed in this Oracle Home.
OPatch succeeded.
OPatch returns with error code = 0
D:\Downloads\OPatch >
*********************PATCH APPLY**************************
D:\Downloads\p8300340_92080_WINNT\8300340 > D:\Downloads\OPatch\opatch apply
Oracle Interim Patch Installer version 1.0.0.0.61
Copyright (c) 2009 Oracle Corporation. All rights reserved.
Oracle recommends using the latest version of OPatch
and reading the OPatch documentation in the docs/OPatch
directory for usage instructions. For information on the latest OPatch and
other support-related issues, please refer to document ID 293369.1
available on My Oracle Support (https://myoraclesupport.oracle.com)
Oracle Home: D:\oracle\ora92New
Oracle Home inventory: D:\Program Files\oracle\inventory
Central inventory: D:\Program Files\oracle\inventory
from: n/a
OUI location: D:\Program Files\Oracle\oui
OUI shared library: D:\Program Files\Oracle\oui\bin\win32\oraInstaller.dll
Java location: "D:\Program Files\Oracle\jre\1.3.1\bin\java.exe"
Log file location: D:\oracle\ora92New/.patch_storage/<patch ID>/*.log
Creating log file "D:\oracle\ora92new\.patch_storage\8300340\Apply_8300340_05-...-2009_15-14-00.log"
SKIPPING_COMPONENT = Oracle.emprod.oemagent.base_oemagent, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.P2K.oo4o, 9.2.0.7.0
SKIPPING_COMPONENT = Oracle.P2K.OleDb, 9.2.0.7.0
MISSING_COMPONENT: oracle.rsf.rdbms_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RSF.net_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RDBMS, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RDBMS.sqlplus, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.Xml.TransView, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.Cartridges.Context, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.options.intermedia.JAI, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.Cartridges.Locator, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.ISearch.is_common, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.ISearch.Server, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RSF.oracore_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.java.JavaVM.javatools, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RSF.ldap_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RSF.xdk_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.OID.client_common, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.RSF.nlsrtl_rsf, 9.2.0.8.0
SKIPPING_COMPONENT = Oracle.ntoramts, 9.2.0.8.0
This Oracle Home does not have the components/versions required by the patch.
ERROR: OPatch failed during pre-reqs check.
OPatch returns with error code = 150
D:\Downloads\p8300340_92080_WINNT\8300340 >

Database version:
* 9.2.0.1.0 *
The patch I am applying:
9.2.0.8.0

You must be on 9.2.0.8 (patch set 4547809) to apply this patch.
Check the patch readme file for more information and for the prerequisites required for this patch.
-
Hello
I have a question about the best way to implement transformation logic with OWB. In my case the source is a database and the data volume is only a little over 2 million rows. For the extract phase I do a direct load from the source into a table in the SA (staging) area, structured the same as the source. The load type is TRUNCATE/INSERT. It takes two minutes, which I think is pretty good in our environment. I then built a mapping that takes the data from the SA table, performs all the transformations and writes the result directly to the DW table (UPDATE/INSERT) - all in one map. But when there are many transformations across several expression operators, with lots of CASE WHEN clauses, joins and lookups, the result is very inefficient and - I think - cannot be fixed by tuning based on the explain plan. I also generate the surrogate id within the mapping (we do not use dimension or cube objects, just plain tables). There are situations where data produced in one expression is manipulated again in another expression, which - I think - causes a lot of nested handling in the generated package. The time to update the DW table can run to several hours... I think my mistake is trying to put too many things into a single mapping, where the order of operators can lead to inefficient code.
I was told that the best way to manage the data is to extract it from the source into an SA table constructed like the DW table. I know people who do the manipulation in separate SQL statements in a PL/SQL procedure, performing many updates against the SA table, so that the only thing left for OWB to do is transfer the data from the ready SA table to the DW table. I don't buy it. I tend to think the transformation is always potentially more efficient with OWB (if the mapping is designed the right way), because in set-based mode it generates a single SQL statement for all the manipulation in one map. Is that right? I wonder about three different things:
(1) When to split a map into separate mappings.
(2) Can I make mappings that update the source table with load type UPDATE, or are straight-through mappings that store the results in a new table still preferable?
(3) The use of tables in the SA area. When should I make the transformation multi-step (use more SA tables to store the intermediate result of each step)?
I find it difficult to explain this more clearly. I hope you get my point and can give me some good guidelines to follow.
Kind regards
Juha

Hi Juha,
It is not the order or the use of simple operators that makes a mapping perform poorly; it is the operations that temporarily store data on disk that you should avoid - or whose effect you should try to minimize.
Using temporary tablespace is normal for an Oracle DWH (and probably on other RDBMS platforms too). I think it is very difficult to completely avoid the creation of temporary segments in a DWH environment.
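To see whether a running mapping is the one spilling to disk, one option is to watch temporary segment usage while it executes. This is a sketch run as a DBA, using the standard dictionary views; the join conditions are the usual ones for these views:

```sql
-- Who is using temporary space right now (run while the mapping executes)
select s.sid, s.username, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) mb_used
from   v$tempseg_usage u
join   v$session s        on s.saddr = u.session_addr
join   dba_tablespaces t  on t.tablespace_name = u.tablespace
order  by mb_used desc;
```

Large SORT or HASH entries here usually point at the joins and aggregations worth splitting into their own mappings.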
Or maybe I should concentrate on the path I have chosen, separate the big joins into their own mappings, and reduce the number of joins in each mapping?
In my view, splitting a complex mapping into several mappings is the best approach in your case.
Kind regards
Oleg -
Hello
We use ODI to load some data from an Oracle ERP into an Oracle DWH. How should we define the indexes? Should we create them physically when creating the DWH tables, or should I let the KM define them dynamically?
Thank you

Hello,
ODI is not made for DBMS administration. It is better to manage the indexes directly in Oracle and have ODI rely on them. If you don't do this, you cannot use the incremental update IKMs, which require unique indexes.
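For example, the unique index an incremental-update IKM relies on would be created once in the database at deployment time rather than regenerated by a KM on every run. The table and column names below are purely illustrative:

```sql
-- Created once at deployment time; the IKM's update key maps onto it
create unique index customer_dim_uk
  on customer_dim (customer_id);
```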
HTH -
Is Informatica an Oracle product?
Hello
I searched the net for the relationship between Informatica and Oracle, but I have not found any results.
Why am I looking for this information?
Because two days ago I found that I can download Informatica from the Oracle site https://delivery.oracle.com, and now I am trying to work out what the relationship between Informatica and Oracle is.
Thank you
Informatica is not a product of Oracle.
But for OBIA, Oracle was using Informatica as the ETL tool to move data between the sources and the OBIA DWH (I guess you are on OBIA 7.9.6).
In the latest OBIA release (11g), Oracle replaced Informatica with ODI, its own ETL tool.
So you can download Informatica from the Oracle site as part of the old OBIA 7.9.6 pack, but it is not at all an Oracle product.
-
After exporting multiple compressed tables from 11g and importing them into 12c with Data Pump, the COMPRESS_FOR column (DBA_TABLES) changed from OLTP (11g) to ADVANCED (12c).
Does anyone know if COMPRESS_FOR = 'OLTP' is deprecated in 12c? Is there any impact from this change?
Thank you, Mike.
From the doco:
http://docs.oracle.com/database/121/REFRN/GUID-6823CD28-0681-468E-950B-966C6F71325D.htm#REFRN20286
I would say that ADVANCED compression includes OLTP compression; note that you need the Advanced Compression license to use it.
COMPRESS_FOR
VARCHAR2(30)
Default compression for this type of operation:
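A quick way to see the behaviour described above is to run the same dictionary query on both sides of the migration. This is a sketch; the table name is illustrative:

```sql
-- On 11g an OLTP-compressed table reports COMPRESS_FOR = 'OLTP';
-- after Data Pump import into 12c the same query reports 'ADVANCED'.
select table_name, compression, compress_for
from   dba_tables
where  table_name = 'MY_COMPRESSED_TAB';
```

The storage format is compatible; only the dictionary label changes.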
-
VECTOR_TRANSFORM and ORACLE 12 c InMemory
Patch level: OCW Patch Set Update 12.1.0.2.4 (20831113).
Like many people, I guess, I had just upgraded an existing DWH to 12c, set some In-Memory parameters and expected significant performance improvements.
That did not happen.
So I decided to build a sample DWH with a time dimension, a product dimension and a partitioned fact table big enough that it would not completely fit into the In-Memory area, in order to run some tests in a much simpler environment.
In my example the dimensions are loaded into memory first, with priority. Population of the fact table depends on which partitions are used, so the "real" data will not fit.
The questions I wanted to answer were:
1. Will the In-Memory query be much faster than the non-In-Memory version? (Answer: it depends. Often it is not!)
2. Will ORACLE exchange In-Memory partitions when there is no more space and
other partitions' data is requested (for example FIFO-style)? (Answer: no, and this is documented.)
3. Will I see in V$IM_SEGMENTS which partitions are in memory at all?
(No - you need to join to DBA_SEGMENTS to see which partitions are in memory.)

Now I tried to find out why I was not having more success.
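To answer point 3 above, a join like the following works. This is a sketch against the 12.1 dictionary views; partitions with no row in V$IM_SEGMENTS simply have not been populated:

```sql
-- Which SALES_FACT partitions actually made it into the column store
select ds.segment_name, ds.partition_name,
       nvl(ims.populate_status, 'NOT POPULATED') populate_status,
       round(ims.inmemory_size / 1024 / 1024) im_mb
from   dba_segments ds
left join v$im_segments ims
       on  ims.segment_name = ds.segment_name
       and nvl(ims.partition_name, '-') = nvl(ds.partition_name, '-')
where  ds.segment_name = 'SALES_FACT'
order  by ds.partition_name;
```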
And while I played around and read, I found VECTOR_TRANSFORM:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=287258785184897&id=1935305.1&_afrWindowMode=0&_adf.ctrl-state=15o1gcrocb_55
which gave a real boost to my query! (when the optimizer uses it)

To have it always, for all my normal DWH queries, I set:

alter system set "_always_vector_transformation" = true;
alter system set "_optimizer_vector_cost_adj" = 20;

But it does not work for all queries.
So I decided to share my findings with you and ask whether anyone knows why ORACLE sometimes decides not to use VECTOR_TRANSFORM here.
I have not found many working examples on the web that really show the performance difference INMEMORY makes, so I hope what I did here provides some working examples / tips etc.
Here, in short, are my tests; you can see that I am MUCH faster when both features are used.
Without VECTOR_TRANSFORM and without INMEMORY:
Elapsed: 00:00:38.56
A query that uses VECTOR_TRANSFORM and INMEMORY takes:
Elapsed: 00:00:00.75  <= factor 50 and VERY impressive!
Without VECTOR_TRANSFORM, with INMEMORY:
Elapsed: 00:00:09.32
Even if the fact table is not in memory, the query takes:
Elapsed: 00:00:02.79

Then, as my system memory is limited, I would put only the dimensions INMEMORY and
hope that VECTOR_TRANSFORM is ALWAYS used.

But even with "_always_vector_transformation" set and a VECTOR_TRANSFORM hint given,
VECTOR_TRANSFORM is sometimes not used. I set the events below to get an answer and checked the trace:

alter session set events 'trace [SQL_Transform.*] disk high';
alter session set events 'trace [SQL_Costing.*] disk high';

but that didn't help either (too much information ;-)

My setup (SOLARIS 10, Intel x64, 64 CPUs, 1 TB server memory):
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     12
inmemory_query                       string      ENABLE
inmemory_size                        big integer 1G
inmemory_trickle_repopulate_servers_ integer     50
percent
optimizer_inmemory_aware             boolean     TRUE
sga_max_size                         big integer 20G
sga_target                           big integer 20G
_always_vector_transformation        boolean     TRUE
The DWH, which you can build yourself, has the following sizes
(the fact table has 4000 "days" / partitions of data using 32 GB):

SEGMENT_NAME                  MB   COUNT(*)
-------------------------- ----- ----------
SALES_FACT                 32000       4000
DAY_DIMENSION                  0          1
PRODUCT_DIMENSION              2          1

(If you don't have 32 GB to use, it is easy to make the example smaller.)
PKs and constraints have been set up.
Statistics, with histograms on the indexed columns and on all columns of day_dimension, are gathered to give the optimizer everything it might need.

Thank you for thinking about it.
Andy

Queries:
---------

Q1:
------
select
  min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2012
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;
-- Elapsed: 00:00:00.75

Q2: when 'min' is missing, the SQL does NOT use VECTOR_TRANSFORM.
------
select /*+ VECTOR_TRANSFORM */
  -- min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2012
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;
-- Elapsed: 00:00:09.32

Q3: when 'min' is missing, the SQL does NOT use VECTOR_TRANSFORM; without INMEMORY it is slow.
------
select /*+ VECTOR_TRANSFORM NO_INMEMORY */
  -- min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2012
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;
-- Elapsed: 00:00:38.78

Q4:
------
select /*+ VECTOR_TRANSFORM NO_INMEMORY */
  min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2012
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;
-- Elapsed: 00:00:01.03

Q5:
-----
select /*+ NO_VECTOR_TRANSFORM NO_INMEMORY */
  min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2012
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;
-- Elapsed: 00:00:38.56

Q6: (fact partitions for 2014 are not in memory)
------
select
  min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f
  on d.day_id = f.day_id
inner join product_dimension p
  on p.product_id = f.product_id
where d.year = 2014
  and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;

-- Elapsed: 00:00:02.70

Q1 plan:
-----------------------------------------------------------------------------------------------------------------------------------
| ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time | Pstart. Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 259. 20720 | 176K (57) | 00:00:28 | | |
| 1. TRANSFORMATION OF THE TEMPORARY TABLE. | | | | | | |
| 2. LOAD SELECT ACE | SYS_TEMP_0FD9D6928_73F9117C | | | | | | |
| 3. GROUP BY VECTOR | | 366. 2928. 1 (0) | 00:00:01 | | |
| 4. VECTOR KEY CREATE BUFFER | : KV0000 | | | | | | |
|* 5 | INMEMORY COMPLETE ACCESS TABLE | DAY_DIMENSION | 366. 2928. 1 (0) | 00:00:01 | | |
| 6. LOAD SELECT ACE | SYS_TEMP_0FD9D6929_73F9117C | | | | | | |
| 7. GROUP BY VECTOR | | 1. 12. 1 (0) | 00:00:01 | | |
| 8. HASH GROUP BY. | 1. 12. 1 (0) | 00:00:01 | | |
| 9. VECTOR KEY CREATE BUFFER | : KV0001 | | | | | | |
| * 10 | INMEMORY COMPLETE ACCESS TABLE | PRODUCT_DIMENSION | 888. 10656. 1 (0) | 00:00:01 | | |
| 11. GROUP SORT BY NOSORT | | 259. 20720 | 176K (57) | 00:00:28 | | |
| * 12 | HASH JOIN | | 259. 20720 | 176K (57) | 00:00:28 | | |
| 13. THE CARTESIAN MERGE JOIN. | 366. 7320 | 4 (0) | 00:00:01 | | |
| 14. TABLE ACCESS FULL | SYS_TEMP_0FD9D6929_73F9117C | 1. 12. 2 (0) | 00:00:01 | | |
| 15. KIND OF BUFFER. | 366. 2928. 2 (0) | 00:00:01 | | |
| 16. TABLE ACCESS FULL | SYS_TEMP_0FD9D6928_73F9117C | 366. 2928. 2 (0) | 00:00:01 | | |
| 17. VIEW | VW_VT_4FBA27B6 | 259. 15540 | 176K (57) | 00:00:28 | | |
| 18. GROUP BY VECTOR | | 259. 3367 | 176K (57) | 00:00:28 | | |
| 19. HASH GROUP BY. | 259. 3367 | 176K (57) | 00:00:28 | | |
| 20. USE OF KEY VECTORS | : KV0000 | | | | | | |
| 21. USE OF KEY VECTORS | : KV0001 | | | | | | |
| 22. RANGE OF PARTITION SUBQUERY | | 160 M | 1983M | 77948 (2) | 00:00:13 | KEY (SQ) | KEY (SQ) |
| * 23. INMEMORY COMPLETE ACCESS TABLE | SALES_FACT | 160 M | 1983M | 77948 (2) | 00:00:13 | KEY (SQ) | KEY (SQ) |
-----------------------------------------------------------------------------------------------------------------------------------

Q2 plan:
---------------------------------------------------------------------------------------------------------------------
| ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time | Pstart. Pstop |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1308K | 41 M | 78385 (2) | 00:00:13 | | |
| 1. GROUP SORT BY NOSORT | | 1308K | 41 M | 78385 (2) | 00:00:13 | | |
|* 2 | HASH JOIN | | 1308K | 41 M | 78385 (2) | 00:00:13 | | |
| 3. JOIN FILTER PART CREATE | : BF0000 | 366. 2928. 1 (0) | 00:00:01 | | |
|* 4 | INMEMORY COMPLETE ACCESS TABLE | DAY_DIMENSION | 366. 2928. 1 (0) | 00:00:01 | | |
|* 5 | HASH JOIN | | 14 M | 341 M | 78349 (2) | 00:00:13 | | |
| 6. JOIN CREATE FILTER | : BF0001 | 888. 10656. 1 (0) | 00:00:01 | | |
|* 7 | INMEMORY COMPLETE ACCESS TABLE | PRODUCT_DIMENSION | 888. 10656. 1 (0) | 00:00:01 | | |
| 8. USE OF JOIN FILTER | : BF0001 | 160 M | 1983M | 77948 (2) | 00:00:13 | | |
| 9. RANGE OF PARTITION-JOIN FILTER | | 160 M | 1983M | 77948 (2) | 00:00:13 | : BF0000 | : BF0000 |
| * 10 | INMEMORY COMPLETE ACCESS TABLE | SALES_FACT | 160 M | 1983M | 77948 (2) | 00:00:13 | : BF0000 | : BF0000 |
---------------------------------------------------------------------------------------------------------------------

Q3 plan:
--------------------------------------------------------------------------------------------------------------------
| ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time | Pstart. Pstop |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
| 1. GROUP SORT BY NOSORT | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
|* 2 | HASH JOIN | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
| 3. JOIN FILTER PART CREATE | : BF0000 | 366. 2928. 4 (0) | 00:00:01 | | |
|* 4 | TABLE ACCESS FULL | DAY_DIMENSION | 366. 2928. 4 (0) | 00:00:01 | | |
|* 5 | HASH JOIN | | 14 M | 341 M | 85248 (2) | 00:00:14 | | |
|* 6 | TABLE ACCESS FULL | PRODUCT_DIMENSION | 888. 10656. 9 (0) | 00:00:01 | | |
| 7. RANGE OF PARTITION-JOIN FILTER | | 160 M | 1983M | 84839 (2) | 00:00:14 | : BF0000 | : BF0000 |
| 8. TABLE ACCESS FULL | SALES_FACT | 160 M | 1983M | 84839 (2) | 00:00:14 | : BF0000 | : BF0000 |
--------------------------------------------------------------------------------------------------------------------

Q4 plan:
-----------------------------------------------------------------------------------------------------------------------------------
| ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time | Pstart. Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 259. 20720 | 176K (57) | 00:00:28 | | |
| 1. TRANSFORMATION OF THE TEMPORARY TABLE. | | | | | | |
| 2. LOAD SELECT ACE | SYS_TEMP_0FD9D6936_73F9117C | | | | | | |
| 3. GROUP BY VECTOR | | 366. 2928. 1 (0) | 00:00:01 | | |
| 4. VECTOR KEY CREATE BUFFER | : KV0000 | | | | | | |
|* 5 | INMEMORY COMPLETE ACCESS TABLE | DAY_DIMENSION | 366. 2928. 1 (0) | 00:00:01 | | |
| 6. LOAD SELECT ACE | SYS_TEMP_0FD9D6937_73F9117C | | | | | | |
| 7. GROUP BY VECTOR | | 1. 12. 1 (0) | 00:00:01 | | |
| 8. HASH GROUP BY. | 1. 12. 1 (0) | 00:00:01 | | |
| 9. VECTOR KEY CREATE BUFFER | : KV0001 | | | | | | |
| * 10 | INMEMORY COMPLETE ACCESS TABLE | PRODUCT_DIMENSION | 888. 10656. 1 (0) | 00:00:01 | | |
| 11. GROUP SORT BY NOSORT | | 259. 20720 | 176K (57) | 00:00:28 | | |
| * 12 | HASH JOIN | | 259. 20720 | 176K (57) | 00:00:28 | | |
| 13. THE CARTESIAN MERGE JOIN. | 366. 7320 | 4 (0) | 00:00:01 | | |
| 14. TABLE ACCESS FULL | SYS_TEMP_0FD9D6937_73F9117C | 1. 12. 2 (0) | 00:00:01 | | |
| 15. KIND OF BUFFER. | 366. 2928. 2 (0) | 00:00:01 | | |
| 16. TABLE ACCESS FULL | SYS_TEMP_0FD9D6936_73F9117C | 366. 2928. 2 (0) | 00:00:01 | | |
| 17. VIEW | VW_VT_4FBA27B6 | 259. 15540 | 176K (57) | 00:00:28 | | |
| 18. GROUP BY VECTOR | | 259. 3367 | 176K (57) | 00:00:28 | | |
| 19. HASH GROUP BY. | 259. 3367 | 176K (57) | 00:00:28 | | |
| 20. USE OF KEY VECTORS | : KV0000 | | | | | | |
| 21. USE OF KEY VECTORS | : KV0001 | | | | | | |
| 22. RANGE OF PARTITION SUBQUERY | | 160 M | 1983M | 77948 (2) | 00:00:13 | KEY (SQ) | KEY (SQ) |
| * 23. INMEMORY COMPLETE ACCESS TABLE | SALES_FACT | 160 M | 1983M | 77948 (2) | 00:00:13 | KEY (SQ) | KEY (SQ) |
-----------------------------------------------------------------------------------------------------------------------------------

Q5 plan:
--------------------------------------------------------------------------------------------------------------------
| ID | Operation | Name | Lines | Bytes | Cost (% CPU). Time | Pstart. Pstop |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
| 1. GROUP SORT BY NOSORT | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
|* 2 | HASH JOIN | | 1308K | 41 M | 85288 (2) | 00:00:14 | | |
| 3. JOIN FILTER PART CREATE | : BF0000 | 366. 2928. 4 (0) | 00:00:01 | | |
|* 4 | TABLE ACCESS FULL | DAY_DIMENSION | 366. 2928. 4 (0) | 00:00:01 | | |
|* 5 | HASH JOIN | | 14 M | 341 M | 85248 (2) | 00:00:14 | | |
|* 6 | TABLE ACCESS FULL | PRODUCT_DIMENSION | 888. 10656. 9 (0) | 00:00:01 | | |
| 7. RANGE OF PARTITION-JOIN FILTER | | 160 M | 1983M | 84839 (2) | 00:00:14 | : BF0000 | : BF0000 |
| 8. TABLE ACCESS FULL | SALES_FACT | 160 M | 1983M | 84839 (2) | 00:00:14 | : BF0000 | : BF0000 |
--------------------------------------------------------------------------------------------------------------------

Script to build the DWH:
create table day_dimension
as
with my_days as (
  select level id, trunc(sysdate) - 3900 + level my_day
  from dual
  connect by level <= 4000
)
select
  id day_id,
  my_day current_day,
  to_number(to_char(my_day,'YYYYMM')) month_id,
  to_number(to_char(my_day,'YYYY')) year,
  to_char(my_day,'DAY') day_name,
  to_char(my_day,'MONTH') month_name,
  to_char(my_day,'DAY DDMONTH','NLS_DATE_LANGUAGE = GERMAN') german_day,
  to_char(my_day,'DAY DDMONTH','NLS_DATE_LANGUAGE = FRENCH') french_day,
  case when my_day = trunc(sysdate) then 'Y' else 'N' end current_day_flag,
  case when last_day(my_day) = my_day then 'Y' else 'N' end month_end_flag,
  months_between(last_day(my_day), last_day(sysdate)) months_back
from
  my_days;

alter table day_dimension add constraint day_pk primary key (day_id) using index;
create table product_dimension
as
select rownum product_id, object_name product_name, object_type product_type,
       object_name||'-'||object_name||'-'||object_name||'-'||object_name||'-'||object_name||'-'||object_name product_name_long
from dba_objects where rownum <= 10000;

alter table product_dimension add constraint prod_pk primary key (product_id) using index;
drop table sales_fact;

create table sales_fact
( day_id      number(15,0) not null,
  product_id  number(15,0) not null,
  sale_price  number,
  filler_text varchar2(1000)
)
segment creation deferred
partition by range (day_id) interval (1)
( partition "PMIN" values less than (1) );

alter table sales_fact add constraint day_id_fk foreign key (day_id)
  references day_dimension (day_id) disable;

alter table sales_fact add constraint product_id_fk foreign key (product_id)
  references product_dimension (product_id) disable;
insert into sales_fact (day_id, product_id, sale_price, filler_text)
select 1 day_id,
       trunc(dbms_random.value * 10000) product_id,
       round(dbms_random.value * 1000, 2) sale_price,
       'the PL/SQL interface for components of the Advisor is described at the end of this article' filler_text
from dual
connect by level <= 40000;
commit;

begin
  for i in 2..4000 loop
    insert into sales_fact (day_id, product_id, sale_price, filler_text)
    select i, product_id, sale_price, filler_text from sales_fact where day_id = 1;
    commit;
  end loop;
end;
/

-- gather statistics:
begin
  dbms_stats.gather_table_stats(
    ownname      => 'DWHADM',
    tabname      => 'DAY_DIMENSION',
    method_opt   => 'FOR ALL COLUMNS SIZE AUTO',
    block_sample => TRUE,
    cascade      => TRUE);
end;
/

begin
  for i in (select * from all_tables where table_name in ('SALES_FACT','PRODUCT_DIMENSION')) loop
    dbms_stats.gather_table_stats(
      ownname      => i.owner,
      tabname      => i.table_name,
      block_sample => TRUE,
      degree       => 4,
      cascade      => TRUE);
  end loop;
end;
/

Andy,
I built your model with a few adjustments and played around a little.
I did not put the dimensions in memory, I added an extra column to the product dimension, and I built only 254 partitions (so the whole fact table fits in memory).
I think the optimizer ignores the vector transformation because it decides that your query is not really a group-by query. Your input is:
select
  -- min(d.day_id), max(d.day_id),
  d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f on d.day_id = f.day_id
inner join product_dimension p on p.product_id = f.product_id
where d.year = 2012 and p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type
The optimizer rewrites that to:

select
  2012, 'TABLE', sum(f.sale_price) sale_price, count(*)
from day_dimension d
inner join sales_fact f on d.day_id = f.day_id
inner join product_dimension p on p.product_id = f.product_id
where d.year = 2012 and p.product_type = 'TABLE'
group by 2012
It looks to me as if it ought to eliminate the "group by constant" entirely, and it is odd that it keeps the first of the two columns (swap the order in the group by and the rewrite becomes group by 'TABLE').
I think that, even if, to some Oracle processes a query point block this exliminates that "group by" and disables VT accordingly.
My clue, incidentally was wrong, I realized it to @sel$ 1 when it should have been directed to joining 3 tables merged.
The subquery indicator (even if she had worn on the right query block) not would not have worked because it seems that creating a filter of Bloom for elimination of partition overrides subquery filtering - at least in 12.1.0.2 - and if the day_dimension is used as a table of accumulation, it is automatically able to generate a Bloom filter , and the no_px_join_filter() applies only for the join filter, not for the elimination of partition filter.
One thing I found with this request, however, is that if I disabled it partition pruning by filter Bloom (OPT_PARAM ('_bloom_pruning_enabled' 'false')) then, on the my data set, he ran a little faster. I would be interested in what he does on your game. (With statistics_level a new typical and no index of gather_plan_statistics). Possibly the most robust test would be the next set of indicators - to change the 'swap' to 'no_swap '.
/*+
    leading(@sel$9e43cb6e p@sel$2 f@sel$1 d@sel$1)
    px_join_filter(@sel$9e43cb6e f@sel$1)
    subquery_pruning(@sel$9e43cb6e f@sel$1 partition)
    use_hash(@sel$9e43cb6e f@sel$1)
    use_hash(@sel$9e43cb6e d@sel$1)
    swap_join_inputs(@sel$9e43cb6e d@sel$1)
*/
Here are the two run-time plans I got - the first is swapped, the second is not:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name              | Starts | E-Rows | Cost (%CPU)| Pstart | Pstop  | A-Rows |   A-Time   | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                   |      1 |        | 15086 (100)|        |        |      1 |00:00:39.24 |   50527 | 48906 |      |      |          |
|   1 |  SORT GROUP BY NOSORT            |                   |      1 |   639K | 15086   (1)|        |        |      1 |00:00:39.24 |   50527 | 48906 |      |      |          |
|*  2 |   HASH JOIN                      |                   |      1 |   639K | 15086   (1)|        |        |  1221K |00:00:36.49 |   50527 | 48906 | 2440K| 2440K| 1472K (0)|
|   3 |    PART JOIN FILTER CREATE       | :BF0000           |      1 |    333 |    18   (0)|        |        |    333 |00:00:00.01 |      57 |     0 |      |      |          |
|*  4 |     TABLE ACCESS FULL            | DAY_DIMENSION     |      1 |    333 |    18   (0)|        |        |    333 |00:00:00.01 |      57 |     0 |      |      |          |
|*  5 |    HASH JOIN                     |                   |      1 |   641K | 15066   (1)|        |        |  1221K |00:00:26.16 |   50470 | 48906 | 2440K| 2440K| 1184K (0)|
|   6 |     JOIN FILTER CREATE           | :BF0001           |      1 |    625 |    61   (0)|        |        |   1210 |00:00:00.01 |     214 |     0 |      |      |          |
|*  7 |      TABLE ACCESS FULL           | PRODUCT_DIMENSION |      1 |    625 |    61   (0)|        |        |   1210 |00:00:00.01 |     214 |     0 |      |      |          |
|   8 |     JOIN FILTER USE              | :BF0001           |      1 |    10M | 14981   (1)|        |        |  1389K |00:00:14.92 |   50256 | 48906 |      |      |          |
|   9 |      PARTITION RANGE JOIN-FILTER |                   |      1 |    10M | 14981   (1)|:BF0000 |:BF0000 |  1389K |00:00:09.30 |   50256 | 48906 |      |      |          |
|* 10 |       TABLE ACCESS INMEMORY FULL | SALES_FACT        |    254 |    10M | 14981   (1)|:BF0000 |:BF0000 |  1389K |00:00:03.72 |   50256 | 48906 |      |      |          |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("D"."DAY_ID"="F"."DAY_ID")
   4 - filter("D"."YEAR"=2005)
   5 - access("P"."PRODUCT_ID"="F"."PRODUCT_ID")
   7 - filter("P"."PRODUCT_TYPE"='TABLE')
  10 - inmemory(SYS_OP_BLOOM_FILTER(:BF0001,"F"."PRODUCT_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0001,"F"."PRODUCT_ID"))
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name              | Starts | E-Rows | Cost (%CPU)| Pstart  | Pstop   | A-Rows |   A-Time   | Buffers | Reads | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                   |      1 |        | 16180 (100)|         |         |      1 |00:00:07.47 |   50584 | 48956 |      |      |          |
|   1 |  SORT GROUP BY NOSORT           |                   |      1 |   639K | 16180   (1)|         |         |      1 |00:00:07.47 |   50584 | 48956 |      |      |          |
|*  2 |   HASH JOIN                     |                   |      1 |   639K | 16180   (1)|         |         |  1221K |00:00:04.88 |   50584 | 48956 |   88M| 9683K|   78M (0)|
|*  3 |    HASH JOIN                    |                   |      1 |   641K | 15066   (1)|         |         |  1221K |00:00:01.88 |   50527 | 48956 | 2440K| 2440K| 1134K (0)|
|   4 |     JOIN FILTER CREATE          | :BF0000           |      1 |    625 |    61   (0)|         |         |   1210 |00:00:00.01 |     214 |     0 |      |      |          |
|*  5 |      TABLE ACCESS FULL          | PRODUCT_DIMENSION |      1 |    625 |    61   (0)|         |         |   1210 |00:00:00.01 |     214 |     0 |      |      |          |
|   6 |     JOIN FILTER USE             | :BF0000           |      1 |    10M | 14981   (1)|         |         |  1389K |00:00:01.25 |   50313 | 48956 |      |      |          |
|   7 |      PARTITION RANGE SUBQUERY   |                   |      1 |    10M | 14981   (1)| KEY(SQ) | KEY(SQ) |  1389K |00:00:01.03 |   50313 | 48956 |      |      |          |
|*  8 |       TABLE ACCESS INMEMORY FULL| SALES_FACT        |    254 |    10M | 14981   (1)| KEY(SQ) | KEY(SQ) |  1389K |00:00:00.78 |   50256 | 48956 |      |      |          |
|*  9 |    TABLE ACCESS FULL            | DAY_DIMENSION     |      1 |    333 |    18   (0)|         |         |    333 |00:00:00.01 |      57 |     0 |      |      |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("D"."DAY_ID"="F"."DAY_ID")
   3 - access("P"."PRODUCT_ID"="F"."PRODUCT_ID")
   5 - filter("P"."PRODUCT_TYPE"='TABLE')
   8 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"F"."PRODUCT_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"F"."PRODUCT_ID"))
   9 - filter("D"."YEAR"=2005)
The massive difference in timing is almost entirely due to statistics_level = all.
Running without that parameter, the times were 1.58 and 1.23 seconds.
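If you want to repeat the pruning experiment on your own data, a minimal sketch (it assumes the table and column names from the model above; note that _bloom_pruning_enabled is an undocumented parameter, so try this only on a test system):

```sql
set timing on

-- run the test query with Bloom-filter partition pruning disabled
select /*+ opt_param('_bloom_pruning_enabled' 'false') */
       d.year, p.product_type, sum(f.sale_price) sale_price, count(*)
from   day_dimension d
inner join sales_fact f        on f.day_id = d.day_id
inner join product_dimension p on p.product_id = f.product_id
where  d.year = 2012
and    p.product_type = 'TABLE'
group by d.year, p.product_type
order by d.year, p.product_type;

-- then pull the run-time plan for the cursor just executed
select * from table(dbms_xplan.display_cursor);
```

Compare the elapsed time and plan with and without the opt_param hint; as noted above, keep statistics_level at typical so the instrumentation overhead doesn't swamp the difference.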
I think the reason there isn't much information about this on the Internet is that it is still relatively new and not many people use it; those who do use it are so happy with the improvement they get that they don't check whether they are getting all the improvement that could be expected, and the people who tend to look at these things have had plenty else to look at recently. I have spent a bit of time looking at HOW it works, but I haven't spent time looking for cases where the strategy is rejected, or at the costing methods.
Regards
Jonathan Lewis
-
Does a global index provide fast response times in OLTP?
Hi all
I read a doc
http://docs.Oracle.com/CD/E18283_01/server.112/e16541/partition.htm#insertedID5
and it says that if the application is OLTP and users need quick response times, you should use a global index.
I don't see how a global index provides quick access.
Assume there is a table that is partitioned by hash.
The table has a local index on the owner column - not the partition key column.
To query the information for owner SCOTT, Oracle will scan every index partition that may hold SCOTT's data.
On the other hand, suppose there is a global index on the owner column.
To query SCOTT's information, Oracle again probes the index for SCOTT's entries.
So, here's the question.
Why does everyone say that a global index gives good performance (response time) in OLTP?
A supplementary question:
As above, there is a table created as a copy of an objects view, partitioned by hash (object_id).
For a query with the where clause (where owner = xxx): if there is a local index, Oracle performs an INDEX RANGE SCAN on t_idx_local in every partition, from the first through the last (look at Pstart/Pstop).
But when Oracle uses the global index, the Pstart/Pstop column values are "KEY | KEY".
What does that mean?
Thank you.
The KEY in the plan means that Oracle will use dynamic partition pruning rather than static pruning.
Oracle knows it can prune partitions, but it determines the actual partitions at execution time rather than at parse time.
That is EXACTLY what you would expect with a GLOBAL index. Oracle searches the keys in the global index, and only at that point may it find that some rows are in partition 13, others in partition 27, and so on.
With a local index Oracle more commonly uses STATIC pruning. Say you have a local index with a MONTHLY partitioning key and your query wants all the data for MAY. Oracle knows at parse time that you want the MAY data and can use the data dictionary to determine which partitions can hold MAY data, since the local index is partitioned on the same keys as the table.
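A minimal sketch of the two situations described above (hypothetical table and index names, matching the questioner's t_idx_local example):

```sql
-- Hash-partitioned table; OWNER is NOT the partition key
create table t (
  object_id   number,
  owner       varchar2(30),
  object_name varchar2(128)
)
partition by hash (object_id) partitions 16;

-- Local index: one index segment per table partition. A query on
-- OWNER alone cannot prune, so Oracle probes all 16 index partitions
-- (Pstart/Pstop show 1 / 16 in the plan).
create index t_idx_local on t(owner) local;

-- Global index: a single index structure. One probe finds all the
-- SCOTT entries; the rowids then identify whichever table partitions
-- hold those rows at run time (Pstart/Pstop show KEY / KEY).
create index t_idx_global on t(owner) global;
```

This is why, for OLTP queries that do not filter on the partition key, a global index typically gives better response times: one index probe instead of one probe per partition.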
-
Hi: I am analyzing a STATSPACK report from a "volume test" on our UAT server; the workload is mostly inserts using bind variables. Our shared pool is well used. I suspect the Oracle redo logs are not configured properly on this server, since in the "Top 5 timed events" two of them are redo-related.
I need to know what other information can be dug out of the "Foreground wait events" and "Background wait events" sections that, combined with the Top 5, could help us better understand how the test server performed. The number of wait events can be overwhelming, so any useful diagnostics or analysis would be appreciated. The database is Oracle 11.2.0.4, upgraded from 11.2.0.3, on IBM AIX 64-bit, Power 6.x.
STATSPACK report
    DB Id     Instance  Inst Num  Startup Time     Release     RAC
 ----------- ---------- -------- --------------- ----------- -----
   700000XXX XXX               1 22-Apr-15 12:12 11.2.0.4.0   NO

Host Name        Platform                CPUs Cores Sockets  Memory (G)
---------------- ---------------------- ----- ----- ------- -----------
dXXXX_XXX        AIX-Based Systems (64-     2     1       0        16.0

              Snap Id     Snap Time      Sessions Curs/Sess Comment
            --------- ------------------ -------- --------- ----------
Begin Snap:      5635 22-Apr-15 13:00:02      114       4.6
  End Snap:      5636 22-Apr-15 14:00:01      128       8.8
   Elapsed:     59.98 (mins)    Av Act Sess:   0.6
   DB time:     35.98 (mins)    DB CPU:       19.43 (mins)
Cache Sizes            Begin        End
~~~~~~~~~~~       ---------- ----------
    Buffer Cache:     2,064M             Std Block Size:      8K
     Shared Pool:     3,072M                 Log Buffer: 13,632K

Load Profile         Per Second    Per Transaction    Per Exec    Per Call
~~~~~~~~~~~~      -------------- ----------------- ----------- -----------
      DB time(s):           0.6               0.0        0.00        0.00
       DB CPU(s):           0.3               0.0        0.00        0.00
       Redo size:     458,720.6           8,755.7
   Logical reads:      12,874.2             245.7
   Block changes:       1,356.4              25.9
  Physical reads:           6.6               0.1
 Physical writes:          61.8               1.2
      User calls:       2,033.7              38.8
          Parses:         286.5               5.5
     Hard parses:           0.5               0.0
W/A MB processed:           1.7               0.0
          Logons:           1.2               0.0
        Executes:         801.1              15.3
       Rollbacks:           6.1               0.1
    Transactions:          52.4

Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer Hit    %:   99.98   Optimal W/A Exec %: 100.00
            Library Hit   %:   99.77        Soft Parse %:   99.82
         Execute to Parse %:   64.24         Latch Hit %:   99.98
Parse CPU to Parse Elapsd %:   53.15    % Non-Parse CPU:    98.03

Shared Pool Statistics        Begin    End
                              ------ ------
             Memory Usage %:   10.50  12.79
    % SQL with executions>1:   69.98  78.37
  % Memory for SQL w/exec>1:   70.22  81.96
Top 5 Timed Events                                                   Avg %Total
~~~~~~~~~~~~~~~~~~                                                  wait   Call
Event                                           Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                       847          50.2
enq: TX - row lock contention                     4,480        434     97   25.8
log file sync                                   284,169        185      1   11.0
log file parallel write                         299,537        164      1    9.7
log file sequential read                            698         16     24    1.0
Host CPU  (CPUs: 2  Cores: 1  Sockets: 0)
~~~~~~~~          Load Average
                  Begin     End    User  System    Idle     WIO    WCPU
                ------- ------- ------- ------- ------- ------- -------
                   1.16    1.84   19.28   14.51   66.21    1.20   82.01

Instance CPU
~~~~~~~~~~~~                                      % Time (seconds)
                                                 -------- --------------
                  Host: Total time (s):                         7,193.8
               Host: Busy CPU time (s):                         2,430.7
                % of time Host is busy:     33.8
          Instance: Total CPU time (s):                         1,203.1
       % of Busy CPU used for Instance:     49.5
     Instance: Total Database time (s):                         2,426.4
% DB time waiting for CPU (Resource Mgr):    0.0

Memory Statistics                   Begin          End
~~~~~~~~~~~~~~~~~            ------------ ------------
            Host Mem (MB):       16,384.0     16,384.0
             SGA use (MB):        7,136.0      7,136.0
             PGA use (MB):          282.5        361.4
% Host Mem used for SGA+PGA:         45.3         45.8
Foreground Wait Events  DB/Inst: XXXXXs  Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)

                                                              Avg          %Total
                                           %Tim Total Wait   wait    Waits   Call
Event                               Waits   out   Time (s)   (ms)     /txn   Time
---------------------------- ------------ ---- ---------- ------ -------- ------
enq: TX - row lock contention       4,480    0        434     97      0.0   25.8
log file sync                     284,167    0        185      1      1.5   11.0
Disk file operations I/O            8,741    0          4      0      0.0     .2
direct path write                  13,247    0          3      0      0.1     .2
db file sequential read             6,058    0          1      0      0.0     .1
buffer busy waits                   1,800    0          1      1      0.0     .1
SQL*Net more data to client        29,161    0          1      0      0.2     .1
direct path read                    7,696    0          1      0      0.0     .0
db file scattered read                316    0          1      2      0.0     .0
latch: shared pool                    144    0          0      2      0.0     .0
CSS initialization                     30    0          0      3      0.0     .0
cursor: pin S                          10    0          0      9      0.0     .0
row cache lock                         41    0          0      2      0.0     .0
latch: row cache objects               19    0          0      3      0.0     .0
log file switch (private str            8    0          0      7      0.0     .0
library cache: mutex X                 28    0          0      2      0.0     .0
latch: cache buffers chains            54    0          0      1      0.0     .0
latch free                            290    0          0      0      0.0     .0
control file sequential read        1,568    0          0      0      0.0     .0
log file switch (checkpoint             4    0          0      6      0.0     .0
direct path sync                        8    0          0      3      0.0     .0
latch: redo allocation                 60    0          0      0      0.0     .0
SQL*Net break/reset to clien           34    0          0      1      0.0     .0
latch: enqueue hash chains             45    0          0      0      0.0     .0
latch: cache buffers lru chain          7    0          0      2      0.0     .0
latch: session allocation               5    0          0      1      0.0     .0
latch: object queue header o            6    0          0      1      0.0     .0
ASM file metadata operation            30    0          0      0      0.0     .0
latch: In memory undo latch            15    0          0      0      0.0     .0
latch: undo global data                 8    0          0      0      0.0     .0
SQL*Net message from client     6,362,536    0    278,225     44     33.7
jobq slave wait                     7,270  100      3,635    500      0.0
SQL*Net more data from clien        7,976    0         15      2      0.0
SQL*Net message to client       6,362,544    0          8      0     33.7
-------------------------------------------------------------
Background Wait Events  DB/Inst: XXXXXs  Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)

                                                              Avg          %Total
                                           %Tim Total Wait   wait    Waits   Call
Event                               Waits   out   Time (s)   (ms)     /txn   Time
---------------------------- ------------ ---- ---------- ------ -------- ------
log file parallel write           299,537    0        164      1      1.6    9.7
log file sequential read              698    0         16     24      0.0    1.0
db file parallel write              9,556    0         13      1      0.1     .8
os thread startup                     146    0         10     70      0.0     .6
control file parallel write         2,037    0          2      1      0.0     .1
Log archive I/O                        35    0          1     30      0.0     .1
LGWR wait for redo copy             2,447    0          0      0      0.0     .0
db file async I/O submit            9,556    0          0      0      0.1     .0
db file sequential read               145    0          0      2      0.0     .0
Disk file operations I/O              349    0          0      0      0.0     .0
db file scattered read                 30    0          0      4      0.0     .0
control file sequential read        5,837    0          0      0      0.0     .0
ADR block file read                    19    0          0      4      0.0     .0
ADR block file write                    5    0          0     15      0.0     .0
direct path write                      14    0          0      2      0.0     .0
direct path read                        3    0          0      7      0.0     .0
latch: shared pool                      3    0          0      6      0.0     .0
log file single write                  56    0          0      0      0.0     .0
latch: redo allocation                 53    0          0      0      0.0     .0
latch: active service list              1    0          0      3      0.0     .0
latch free                             11    0          0      0      0.0     .0
rdbms ipc message               5,314,523   57    189,182             1.7
Space Manager: slave idle wa        4,086   88     18,996   4649      0.0
DIAG idle wait                      7,185  100      7,186   1000      0.0
Streams AQ: waiting for time            2   50      4,909 ######      0.0
Streams AQ: qmn slave idle w          129    0      3,612  28002      0.0
Streams AQ: qmn coordinator           258   50      3,612  14001      0.0
SMON timer                             43    2      3,605  83839      0.0
PMON timer                          1,199   99      3,596   2999      0.0
SQL*Net message from client        17,019    0         31      2      0.1
SQL*Net message to client          12,762    0          0      0      0.1
class slave wait                       28    0          0      0      0.0
Thank you very much!
Hello
I think your CPU is overloaded by your stress tests. You have one vCPU with 2 threads (2 lCPUs), right? And the load average is greater than one. You have DB time that is not accounted for in (CPU time + wait events), which no doubt comes from time spent in the run queue.
> Oracle redo logs are not configured properly on this server, since in the "Top 5 timed events" two of them are redo-related
It is a quirk of statspack to show "log file parallel write" there: that time is background time and is already included in "log file sync". And I don't think you have a redo misconfiguration. Waiting 1 ms per commit is fine. In OLTP you should have no more than one commit per user interaction, so the user won't notice 1 ms; and in batch mode, unless you commit on every row, 1 ms per commit should not increase the total run time.
The fact that you have a lot of row lock waits (enq: TX - row lock contention), but each for very little time (97 ms on average), is probably a sign that the testers are concurrently running a load that touches the same data. Their test data set is perhaps too simple and too small. An example: when stress-testing an order entry system, if you run 1000 concurrent sessions all ordering the same product for the same customer, you can get this kind of symptom, but the test is unrealistic.
It is high activity: 2,000 user calls per second and 52 transactions per second. But you also have a low average of active sessions, so the report probably covers a period of non-uniform activity, which makes the averages meaningless.
So there is nothing to say about the wait events here. But we have no information about the 39% of DB time spent on CPU, which is where something could be improved.
Kind regards
Franck.