The transaction log for database 'VIM_VCDB' is full
We are getting the error shown in the subject.
The problem is that we have to keep extending the log file.
We have three ESX 4.1 servers and about 25 virtual machines running on them.
The statistics levels are set to:
5 minutes - 1 day - level 1
30 minutes - 1 week - level 1
2 hours - 1 month - level 1
1 day - 1 year - level 1
The database size estimator indicates it should be about 0.5 GB with our configuration.
We have already extended it four times since our migration to vCenter 4.1 in December 2010, and now it is 4.3 GB.
Shrinking the log to release unused space does not free anything.
Has anyone experienced this and has a tip other than extending it yet again?
Check the recovery model: unless it is set to 'Simple', the transaction log keeps being written to and is only truncated by log backups.
If you wish, you can compare the settings and the log size of your vCenter database; it should probably be set to 'Simple' as well.
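If it helps, a minimal T-SQL sketch of that check. The VIM_VCDB database name is from the thread, but the logical log file name below is an assumption, so verify it with sp_helpfile first:

```sql
-- Check the current recovery model of the vCenter database
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'VIM_VCDB';

-- Switch to SIMPLE so the log is truncated at checkpoints instead of growing
ALTER DATABASE VIM_VCDB SET RECOVERY SIMPLE;

-- Then shrink the log file (the logical name VIM_VCDB_log is an assumption)
USE VIM_VCDB;
DBCC SHRINKFILE (VIM_VCDB_log, 500);  -- target size in MB
```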
André
Tags: VMware
Similar Questions
-
Unable to see changes in the alert log file for the RAC database
Hello
We have a two-node cluster database. I stopped one node's instance with the srvctl utility and then started it again. When I checked that instance's alert log, I did not see these changes recorded in the instance's alert log file. I tried it on the second instance and found the same. Why is this happening? Why is the Oracle database not recording the startup and shutdown events in the alert log file?
Kind regards
Abgrall
What is your DB version?
If you are using 11g, use adrci > show alert > choose the database home to view the alert log. The events are always recorded in the alert.log; you are probably looking at a wrong/old file.
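For example, a hedged sketch of the adrci session (requires an Oracle 11g client; the diag home path below is an assumption, so list yours with "show homes" first):

```shell
adrci exec="show homes"
adrci exec="set home diag/rdbms/orcl/ORCL1; show alert -tail 100"
```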
-
On a Windows XP machine, after setting the machine up as administrator, when trying to log on as a user I get the error message "security log is full, only an administrator can log in to solve the problem". I know how to fix this by going to Event Viewer, selecting the security log and setting the log to "overwrite events as needed", but I would like to create a script that does this automatically for me.
So far my research revealed that the registry value [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Security] "MaxSize" controls this setting, and that changing the default DWORD value from 0x01000000 to 0x00000000 should make the change possible. Well, when I did this in regedit, nothing changed in the security log properties; the default setting of "overwrite events older than 7 days" remained the same.
Can someone tell me which registry value I need to change in order to make this change? Keep in mind I'm trying to include this in a script.
Thank you
Hi teddorosheff,
Your question is more complex than what is generally answered in the Microsoft Answers forums. It is better suited for the TechNet IT Pro audience. Please ask your question in the following forum.
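For what it's worth, the "overwrite events as needed" behaviour is governed by the Retention value (in seconds) under that same key rather than by MaxSize: 0 means overwrite as needed, and the 7-day default corresponds to 604800 seconds. A hedged, untested sketch for a script:

```bat
:: Set the Security event log to "overwrite events as needed" (Retention = 0)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Security" /v Retention /t REG_DWORD /d 0 /f
```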
-
upgrade from 3.2 to 4.1 - error handling for unavailable database links
Hello
I have a 3.2 -> 4.1 upgrade problem related to error handling for broken database links.
I have a button with an Exists condition on a page, containing a SQL query against linked tables. However, for the 10 minutes each day while the database link's target is down for a cold backup, the query fails. In the old APEX 3.2, I just got an error within the region containing the button, but otherwise the page was still displayed:
"Is not valid/not exists condition: ORA-02068: following a serious error of MYDBLINK ORA-01034: ORACLE not available ORA-27101: there is no shared memory realm."
However, in APEX 4.1.0.00.32 I get the following unhandled error, and clicking 'OK' brings me to the edit page when logged in as a developer.
That is, the page cannot render at all when the database link fails for this one region.
Error processing condition.
ORA-12518: TNS:listener could not hand off client connection
Technical information (only visible for developers):
is_internal_error: true
apex_error_code: APEX.CONDITION.UNHANDLED_ERROR
ora_sqlcode: -12518
ora_sqlerrm: ORA-12518: TNS:listener could not hand off client connection
Component.type: APEX_APPLICATION_PAGE_REGIONS
Component.ID: 4
Component.Name: alerts today
error_backtrace:
ORA-06512: at "SYS." WWV_DBMS_SQL', line 1041
ORA-06512: at "APEX_040100.WWV_FLOW_DYNAMIC_EXEC", line 687
ORA-06512: at "APEX_040100.WWV_FLOW_CONDITIONS", line 272
Users generally see this:
Error processing condition.
ORA-01034: ORACLE not available
ORA-02063: preceding line from MYDBLINK
Clicking 'OK' takes the user to another page; not sure how APEX decides that, but that is not a concern at the moment.
I did a search and read the http://www.inside-oracle-apex.com/apex-4-1-error-handling-improvements-part-1/ page, but the new APEX error handling is not clear to me, and I don't know if the apex_error_handling_example provided on that page would apply to this situation.
Hello
It was my fault; I forgot that the code is compiled on the fly, which already fails if the remote table/view is not accessible. Nice that you found a workaround yourself.
Regards
Patrick
-----------
My Blog: http://www.inside-oracle-apex.com
APEX Plug-Ins: http://apex.oracle.com/plugins
Twitter: http://www.twitter.com/patrickwolf
-
Hello
Does the primary database need Flashback Database enabled when a physical standby is converted to a snapshot standby? Or does this have nothing to do with the primary? I found some documents saying to enable flashback for the primary database, but I think that is not needed.
Thank you
Best regards.
I did this recently; I did not configure flashback on the primary, only on the standby. I converted the standby database and reverted the changes after the test. The primary database continued to send archives to the standby site. Once the snapshot standby is converted back to a physical standby, as mseberg mentioned, it catches up to the sync state after MRP is started.
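For reference, the snapshot-standby round trip is driven entirely from the standby in 11g; the guaranteed restore point it relies on is created there implicitly, which matches the observation that the primary needs no flashback setup. A sketch of the commands involved:

```sql
-- On the standby (from the MOUNT state, with MRP stopped):
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ... run tests against the now read-write snapshot standby ...
-- Then restart to MOUNT and convert back; MRP catches the standby up afterwards:
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```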
-
One SCN for the entire database, and different SCNs for the data files?
DB Version: 11g
I always thought that there is a unique SCN for the database as a whole.
A quote from the link below says:
When a checkpoint is completed, Oracle stores the SCN individually in the control file for each data file
http://www.dbapool.com/articles/1029200701.html
What does that mean? There is one SCN for the entire database, and there are individual SCNs for each data file?
Well, unfortunately, the article says more wrong things than right. Or if I can't call them wrong, they are rather confusing; rather than clarifying things for the reader, it makes them look more confusing.
First things first: the SCN is used for the read-consistency (CR) mechanism and is the backbone of the notion of multiversioning. The checkpoint is the mechanism by which recovery is decided. Contrary to what the article says, not every kind of checkpoint updates both the data files and the control file, and there is not just one type of checkpoint either. In addition, the article says that LAST_CHECKPOINT is set to NULL, while it is actually set to infinity, since at the moment the database is opened it is not possible to know what the last checkpoint number written to the file will be. In the case of a complete checkpoint, this number is recorded, and the data file headers are matched against the database's own control file at the next startup. If they do not match, there is an inconsistency between the stop checkpoint in the data file and the stop checkpoint recorded in the control file, leading to instance recovery.
There are several types of checkpoints. Similarly, there are several types of SCNs as well. Without going into the details of these, IMO the article simply means that when a checkpoint against a data file happens, Oracle updates the checkpoint SCN in that file's header, and this is recorded in the control file as well.
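This is easy to see from the dictionary views; a small illustration of the database-wide checkpoint SCN versus the per-file ones:

```sql
-- Database-wide checkpoint SCN
SELECT checkpoint_change# FROM v$database;

-- Per-data-file checkpoint SCNs as recorded in the control file...
SELECT file#, checkpoint_change# FROM v$datafile;

-- ...and as recorded in each data file header (compared at startup)
SELECT file#, checkpoint_change# FROM v$datafile_header;
```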
HTH
Aman... -
is this allowed? materialized view log for a remote database (via db link)
Hi guys.
Trying to do:
create materialized view log on user@xxx with sequence, rowid (col1, col2) including new values;
where xxx is a remote db link
------
had this error
ORA-00949 - illegal reference to the remote database
Googled it but do not know whether this error is an internal error or whether this feature is simply not allowed for mviews.
help please!
Rgds,
Noob
Do you mean that the materialized view log should be created in the same location as the master table?
YES. Of course, it must be there! Any update to the master table must also update the MV log created against it!
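In other words (a sketch using the poster's hypothetical names), the log is created locally on the master site, and only the fast-refresh MV on the other database goes through the link:

```sql
-- On the remote (master) database, where table T1 lives:
create materialized view log on t1
  with sequence, rowid (col1, col2)
  including new values;

-- On the local database, the MV itself may then use the db link:
create materialized view mv_t1
  refresh fast
  as select col1, col2 from t1@xxx;
```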
Hemant K Collette
-
How a "BACKUP AS COPY ... CONTROLFILE" copy can be used to OPEN the database
I am creating a "How To" document for a Junior DBA on moving a controlfile.
But it seems that I'm the Junior, because I am facing the following...
Action plan:
Move/rename a Controlfile
Version of the database: 11.2.0.3
Moving controlfiles from:
/goldengate/ORCL/ORADATA/
TO:
/goldengate/ORCL/CONTROLFILE/
Step 1: Set up environment variables
$> export ORACLE_SID=ORCL1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$> echo $ORACLE_SID
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORCL1
$> export ORACLE_BASE=/u01/app/oracle
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$> echo $ORACLE_BASE
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle
$> export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.3/dbhome_1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$ echo $ORACLE_HOME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/product/11.2.0.3/dbhome_1
Step 2: Check control_files parameter
$> echo "see THE PARAMETER control_files | sqlplus-s "virtue sysdba".
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /goldengate/ORCL/ORADATA/contr
                                                 ol01.ctl, /goldengate/ORCL/ORA
                                                 DATA/control02.ctl
Step 3: Closing the open database
$> echo "SHUTDOWN IMMEDIATE;" | sqlplus -s "/ as sysdba"
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Database closed.
Database dismounted.
ORACLE instance shut down.
Step 4: Mounting the database
$> echo "STARTUP MOUNT;" | sqlplus -s "/ as sysdba"
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size                  2235208 bytes
Variable Size             822084792 bytes
Database Buffers         3439329280 bytes
Redo Buffers               12132352 bytes
Database mounted.
Step 5: Creating a copy of the current controlfile
$> echo "AS BACKUP COPY CURRENT CONTROLFILE FORMAT ' / goldengate/ORCL/CONTROLFILE/control01.copy.ctl'; ' | RMAN target / nocatalog
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Recovery Manager: release 11.2.0.3.0 - Production Fri Oct 22 17:03:27 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID = 1420762587, is not open)
using the control file of the target instead of recovery catalog database
RMAN >
Starting backup at 22-OCT-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=/goldengate/ORCL/CONTROLFILE/control01.copy.ctl tag=TAG20151022T170329 RECID=1 STAMP=893783011
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 22-OCT-15
RMAN >
Recovery Manager complete.
Step 6: Change of parameter control_files
$> echo "ALTER SYSTEM SET control_files='/goldengate/ORCL/CONTROLFILE/control01.copy.ctl' SCOPE = SPFILE;" | sqlplus-s "virtue sysdba".
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
System altered.
Step 7: Shutting down the mounted database
$> echo "SHUTDOWN IMMEDIATE;" | sqlplus -s "/ as sysdba"
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Step 8: Mounting the database
$> echo "STARTUP MOUNT;" | sqlplus -s "/ as sysdba"
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size                  2235208 bytes
Variable Size             822084792 bytes
Database Buffers         3439329280 bytes
Redo Buffers               12132352 bytes
Database mounted.
Step 9: Check control_files parameter
$> echo "see THE PARAMETER control_files | sqlplus-s "virtue sysdba".
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /goldengate/ORCL/CONTROLFILE/c
                                                 ontrol01.copy.ctl
Step 10: Open the mounted database
$> echo "ALTER DATABASE OPEN"; | sqlplus-s "virtue sysdba".
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE OPEN
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
$> echo "ALTER DATABASE OPEN NORESETLOGS"; | sqlplus-s "virtue sysdba".
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE OPEN NORESETLOGS
*
ERROR at line 1:
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
Then...
$> sqlplus /nolog
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL*Plus: Release 11.2.0.3.0 Production on Thu Oct 22 17:14:43 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> CONNECT / AS SYSDBA
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Connected.
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ORA-00279: change 621941 generated at 10/22/2015 16:57:33 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/product/11.2.0.3/dbhome_1/dbs/arch1_11_892981851.dbf
ORA-00280: change 621941 for thread 1 is in sequence #11
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Media recovery cancelled.
SQL> ALTER DATABASE OPEN;
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE OPEN
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
My questions:
What am I misunderstanding? Is a BACKUP AS COPY not really a COPY?
Why can't I use the 'copy' of the controlfile created by RMAN?
Note:
If I just copy the controlfile to the new location while the database is shut down, everything works fine.
Thanks in advance.
Juan M
This is also covered in https://docs.oracle.com/cd/E11882_01/server.112/e25494/control.htm#ADMIN11288
"Creating Additional Copies, Renaming, and Relocating Control Files"
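The behaviour above is consistent with RMAN's output being flagged as a backup controlfile, which is why OPEN then insists on RESETLOGS. A hedged, untested sketch of the plain OS-copy approach that the poster confirms works, shown for a single controlfile using paths from the post:

```shell
# Point the spfile at the new location, then copy the file while the DB is down
echo "ALTER SYSTEM SET control_files='/goldengate/ORCL/CONTROLFILE/control01.ctl' SCOPE=SPFILE;" | sqlplus -s "/ as sysdba"
echo "SHUTDOWN IMMEDIATE;" | sqlplus -s "/ as sysdba"
cp /goldengate/ORCL/ORADATA/control01.ctl /goldengate/ORCL/CONTROLFILE/control01.ctl
echo "STARTUP;" | sqlplus -s "/ as sysdba"
```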
-
My vCenter environment is 5.0 and was initially implemented using SQL Server 2008 R2 Express as the database. We have reached the 10 GB size limit of the Express edition, so I want to upgrade the database from 2008 R2 Express to 2008 R2 Standard.
Before the upgrade, I stopped the vCenter Server service, then started the SQL Server setup program. I did an "Edition Upgrade" from Express to Standard. It finished successfully within 2 minutes, and then I went and restarted the vCenter Server service.
However, after restarting the service, vSphere is unable to communicate with the VIM_VCDB. For example, in vSphere if I click a virtual machine and click the Performance tab, it tells me that it is unable to display the web page. It seems to be unable to communicate with the VIM_VCDB. I have not changed any database names or the ODBC settings, so I am not sure what is causing the problem.
If I go to ODBC data sources administrator and test the Data Source, the test completes successfully.
I have read all the documentation I could find and tried a few different things, but found nothing that solves this problem. I ended up having to go back to the snapshot backup I took before the database edition upgrade.
Anyone have any ideas on what the issue might be, or what should I do to solve it?
Shane
The database is probably fine.
Is the vCenter Web Management Service running?
-
Loading data from SQL Server into an Oracle database
I want to create a table in an Oracle DB from a table in SQL Server. The table is huge: it has 97,456,789 records.
I created a (HS) db link in the Oracle database that points to the SQL Server. I can select from this table over the link in the Oracle DB.
select * from "dbo"."T1"@dblink;
I ran the statement below to create the table.
create table t2 nologging parallel (degree 3) as select * from "dbo"."T1"@dblink;
and it is taking a long time... but it is running...
Is there any other method to do this and fill the table in the Oracle DB faster?
Please advise. Thank you.
vhiware wrote:
create table t2 nologging parallel (degree 3) as select * from "dbo"."T1"@dblink;
and it is taking a long time... but it is running...
I doubt that parallel processing will be used, because it is specific to Oracle (which in general uses rowid ranges) and not to SQL Server.
is there any other method to do this and fill the table in the Oracle db faster?
Part of the performance overhead is pulling that data from SQL Server to Oracle over the network link between them. This can be sped up by compressing the data first, and only then transferring it over the network.
For example: use bcp to export the data on the SQL Server box to a CSV file, compress/zip the file, scp/sftp the file to the Oracle box and then unzip it there. Parallel and direct-load processing can then be done using SQL*Loader to load the CSV file into Oracle.
If it is a basic Linux/Unix system, the decompress/unzip process can be run in parallel with the SQL*Loader process by creating a pipe between the two, where the decompression process writes the uncompressed data into the pipe and SQL*Loader reads and loads the data as it becomes available through the pipe.
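The pipe idea above can be sketched in a few lines of shell (hypothetical file names; `wc -l` stands in for the SQL*Loader process, which would read the same pipe via its data= parameter):

```shell
printf 'a,1\nb,2\nc,3\n' | gzip > /tmp/t1.csv.gz   # pretend this is the compressed bcp export
rm -f /tmp/loadpipe
mkfifo /tmp/loadpipe                               # the pipe between gunzip and the loader
gunzip -c /tmp/t1.csv.gz > /tmp/loadpipe &         # decompression writes into the pipe...
wc -l < /tmp/loadpipe                              # ...while the "loader" reads from it -> 3
wait
rm -f /tmp/loadpipe /tmp/t1.csv.gz
```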
Otherwise you can roll your own PQ processing. Assume that the data is partitioned by date ranges. You can create a procedure on Oracle that looks like this:
{code}
create or replace procedure copyday( day date ) as
begin
  insert /*+ append */ into local_tab
  select * from remote_tab@remotedb
  where col_day = day;
  -- add logging info, commit, etc.
end;
{code}
You can now start 10 or more of these for different days and run them in the background using DBMS_JOB.
-
AD or LDAP as the external database - Secure ACS 5.2
I'm working on a project with Secure ACS 5.2. I'm trying to determine the appropriate external database to use: LDAP, or AD directly?
In addition, the domain I connect to has several subdomains. All users are currently in the subdomains, but will move to the root domain later. How do I set up the connection: do I have to connect to each subdomain, or can I connect just to the root?
Thank you
Hello
If you are using PEAP (MSCHAPv2) [password-based authentication], your best bet is to tie ACS to AD, because PEAP-MSCHAPv2 is a hash mechanism that is only supported when you bind to AD; it will not work if you use the LDAP integration.
Your best option is to connect ACS to the root domain, so it can use the transitive trust relationships to find the information in its subdomains.
Thank you
Tarik Admani
* Please rate helpful posts * -
Provide read permission for others on the WebLogic/Managed Server log files
Hello
We want to give read access to others under Linux for all the Oracle WebLogic logs, including the .out log file.
We set umask 022 in the startweblogic.sh file. Below is the output:
----
-rw-r--r--. 1 oracle oinstall  81586 Apr 15 22:43 access.log
-rw-r--r--. 1 oracle oinstall 700087 Apr 15 22:45 DEV_Managed.log
-rw-r-----. 1 oracle oinstall  20553 Apr 15 22:49 DEV_Managed.out
----
The only concern is that this sets other-read for access.log and DEV_Managed.log, but not for the DEV_Managed.out log file.
Please suggest what file to edit.
Thank you
Mireille
Hello
Try also changing the umask to 022 in the startNodeManager.sh file, then restart the Node Manager and then the managed server (to rotate the .out log file and create a new one using the new umask).
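A quick demonstration of the umask effect in question: with umask 022, newly created files come out world-readable (mode 644, i.e. rw-r--r--):

```shell
umask 022
rm -f /tmp/umask_demo.out
touch /tmp/umask_demo.out
stat -c '%a %A' /tmp/umask_demo.out   # 644 -rw-r--r--
rm -f /tmp/umask_demo.out
```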
Kind regards
White
-
Use the Java connector server for the database connector?
Hello
I'm running OIM 11gR2PS2 and need to use the database connector. We installed the .NET connector server to work with the AD connector.
The Oracle documentation at https://docs.oracle.com/cd/E22999_01/doc.111/e20277.pdf gives us the option to either install a Java connector server to work with the database connector, or to install the OIM database connector without using a Java connector server.
The documentation says "running a connector on the connector server
allows provisioning and reconciliation requests to be passed through the firewall in a
manner defined by the connector server."
As I already have a .NET connector server for AD, I would lean towards installing the Java connector server. That way the architecture remains consistent.
Please, share your ideas.
Thank you
Khanh
The Database Table connector can use the Java connector server, or it can be deployed directly in the OIM container. If you have jar or library conflicts due to database formats, you can use the connector server to isolate the libraries and avoid having to figure out how to make OIM work with several different libraries. It can also take some of the transformation load off your OIM server. I suggest using the connector server for the isolation as well.
-Kevin
-
Need help with a complex query for the production database
Hello again,
I need your help once again, for a query to show me how long each production stage takes, per order.
See the example data and what I expect.
Thank you all for your help.
We use Oracle Database 11 g Enterprise Edition Release 11.2.0.3.0 - 64 bit Production
Here is the example table data:

CREATE TABLE TABLE_2 (
  "ORDER_NR"  VARCHAR2 (12),
  "PRIORITY"  VARCHAR2 (2),
  "WO_STEP"   VARCHAR2 (1),
  "STEP_DATE" DATE
);
CREATE TABLE TABLE_1 (
  "ORDER_NR"    VARCHAR2 (12) PRIMARY KEY,
  "PRIORITY"    VARCHAR2 (2),
  "CREATE_DATE" DATE,
  "ACT_STEP"    VARCHAR2 (2),
  "STEP_DATE"   DATE,
  "EMPLOYEE"    VARCHAR2 (5),
  "DESCRIPTION" VARCHAR2 (20)
);
INSERT INTO TABLE_1 (ORDER_NR, PRIORITY, CREATE_DATE, ACT_STEP, STEP_DATE, EMPLOYEE, DESCRIPTION) VALUES ('1KKA1T205634', '12', TO_DATE('10-FEB-13 10:00:00','DD-MON-RR HH24:MI:SS'), 'U', TO_DATE('28-FEB-13 12:00:00','DD-MON-RR HH24:MI:SS'), 'W0010', 'CLEAN HOUSE');
INSERT INTO TABLE_1 (ORDER_NR, PRIORITY, CREATE_DATE, ACT_STEP, STEP_DATE, EMPLOYEE, DESCRIPTION) VALUES ('1KKA1Z300612', '12', TO_DATE('08-FEB-13 14:00:00','DD-MON-RR HH24:MI:SS'), 'F', TO_DATE('20-FEB-13 16:00:00','DD-MON-RR HH24:MI:SS'), 'K0052', 'REPAIR CAR');
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'A', TO_DATE('12-FEB-13 13:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', '5', TO_DATE('13-FEB-13 09:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'K', TO_DATE('13-FEB-13 10:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', '5', TO_DATE('13-FEB-13 11:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'K', TO_DATE('13-FEB-13 12:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', '5', TO_DATE('13-FEB-13 16:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'C', TO_DATE('14-FEB-13 08:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'B', TO_DATE('14-FEB-13 10:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'E', TO_DATE('18-FEB-13 13:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'F', TO_DATE('20-FEB-13 16:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'S', TO_DATE('21-FEB-13 08:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'R', TO_DATE('21-FEB-13 09:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1T205634', '12', 'U', TO_DATE('28-FEB-13 12:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'A', TO_DATE('12-FEB-13 13:52:42','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', '5', TO_DATE('13-FEB-13 09:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'K', TO_DATE('13-FEB-13 10:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', '5', TO_DATE('13-FEB-13 11:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'K', TO_DATE('13-FEB-13 12:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', '5', TO_DATE('13-FEB-13 16:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'C', TO_DATE('14-FEB-13 08:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'B', TO_DATE('14-FEB-13 10:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'E', TO_DATE('18-FEB-13 13:00:00','DD-MON-RR HH24:MI:SS'));
INSERT INTO TABLE_2 (ORDER_NR, PRIORITY, WO_STEP, STEP_DATE) VALUES ('1KKA1Z300612', '12', 'F', TO_DATE('20-FEB-13 16:00:00','DD-MON-RR HH24:MI:SS'));
COMMIT;
And here's what I expect from my query (explanation for the result below):

SYSDATE 28.Feb.13 14:00

ORDER_NR     PRIORITY CREATE_DATE     STATUS STATUS_DATE     DESCRIPTION  AGE_1  AGE_2  WAITING STEP_A STEP_B STEP_C STEP_5 STEP_K STEP_E STEP_F STEP_S STEP_R
1KKA1T205634 12       10.Feb.13 10:00 U      28.Feb.13 12:00 CLEAN HOUSE  18,083 8,833  2,125   0,833  4,125  0,083  0,750  0,208  2,125  0,666  0,042  7,125
1KKA1Z300612 12       08.Feb.13 14:00 F      20.Feb.13 16:00 REPAIR CAR   20,000 16,042 2,125   0,833  4,125  0,083  0,750  0,208  2,125  0,666
AGE_1 is the difference in days between CREATE_DATE and the STEP_DATE of STEP 'U', if that STEP exists; if STEP 'U' is not found in TABLE_2, it should show the difference in days between CREATE_DATE and SYSDATE.
AGE_2 is the difference in days between the STEP_DATE of STEP 'A' and the STEP_DATE of STEP 'R', if that STEP exists; if STEP 'R' is not in TABLE_2, it should show the difference in days between CREATE_DATE and SYSDATE.
WAITING is the difference in days between CREATE_DATE and the STEP_DATE of STEP 'A'.
The following columns show, in days, how long the ORDER_NR stayed in each STEP; if an ORDER_NR enters the same STEP more than once, the durations should be added together.
If the ORDER_NR skips a STEP, a zero should be shown in the corresponding field.
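As an illustration of the AGE_1/AGE_2 rules above, the "use the STEP_DATE if the STEP exists, otherwise fall back to SYSDATE" logic can be sketched with conditional aggregation. This is only a sketch against the sample schema from the INSERT statements (TABLE_1 keyed on ORDER_NR with a CREATE_DATE column, TABLE_2 holding one row per step), not the accepted answer:

```sql
-- Sketch only: AGE_1 uses the 'U' step date when present,
-- otherwise SYSDATE, measured from CREATE_DATE in days.
SELECT t1.order_nr,
       NVL(MAX(CASE t2.wo_step WHEN 'U' THEN t2.step_date END), SYSDATE)
         - t1.create_date AS age_1,
       NVL(MAX(CASE t2.wo_step WHEN 'R' THEN t2.step_date END), SYSDATE)
         - t1.create_date AS age_2
FROM   table_1 t1
LEFT JOIN table_2 t2 ON t2.order_nr = t1.order_nr
GROUP  BY t1.order_nr, t1.create_date;
```

Oracle date arithmetic returns day fractions directly, which matches the decimal day values in the expected output.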
I hope my explanation is good enough; my English skills are far from good.
Thank you for all your help.
Regards, Reinhard W.

Solomon Yakobson says:
Just add the amounts. In fact, you could rewrite all the CASE expressions:
with t2 as (
  select t.*,
         lead(step_date) over (partition by order_nr order by step_date) next_step_date
  from table_2 t
)
select t1.*,
       nvl( max( case t2.wo_step when 'U' then t2.step_date end ), sysdate ) - t1.create_date age_1,
       nvl( max( case t2.wo_step when 'R' then t2.step_date end ), sysdate ) - t1.create_date age_2,
       sum( case when t2.wo_step in ('B','5') then t2.next_step_date - t2.step_date end ) step_b_5,
       sum( case t2.wo_step when 'C' then t2.next_step_date - t2.step_date end ) step_c,
       sum( case t2.wo_step when 'K' then t2.next_step_date - t2.step_date end ) step_k,
       sum( case t2.wo_step when 'E' then t2.next_step_date - t2.step_date end ) step_e,
       sum( case t2.wo_step when 'F' then t2.next_step_date - t2.step_date end ) step_f,
       sum( case t2.wo_step when 'S' then t2.next_step_date - t2.step_date end ) step_s,
       sum( case t2.wo_step when 'R' then t2.next_step_date - t2.step_date end ) step_r
from table_1 t1, t2
where t2.order_nr = t1.order_nr
group by t1.order_nr, t1.priority, t1.create_date, t1.act_step, t1.step_date, t1.employee, t1.description
/

ORDER_NR     PR CREATE_DA AC STEP_DATE EMPLO DESCRIPTION      AGE_1      AGE_2   STEP_B_5     STEP_C     STEP_K     STEP_E     STEP_F     STEP_S     STEP_R
------------ -- --------- -- --------- ----- ----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1KKA1T205634 12 10-FEB-13 U  28-FEB-13 W0010 CLEAN HOUSE 18.0833333 10.9583333      4.875 .083333333 .208333333      2.125 .666666667 .041666667      7.125
1KKA1Z300612 12 08-FEB-13 F  20-FEB-13 K0052 REPAIR CAR   44.252338  44.252338      4.875 .083333333 .208333333      2.125

SQL>
SY.
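One detail worth noting: the requirement says skipped steps should show zero, but the SUM aggregates in the accepted query return NULL (shown blank) for steps an order never entered, as the second output row illustrates. A possible adjustment, sketched here under the same schema assumptions, is to wrap each SUM in NVL:

```sql
-- Sketch: same pattern as the query above, with NVL(..., 0) so that
-- skipped steps report 0 instead of NULL (shown for STEP_S and STEP_R;
-- the remaining step columns would be adjusted the same way).
WITH t2 AS (
  SELECT t.*,
         LEAD(step_date) OVER (PARTITION BY order_nr ORDER BY step_date) next_step_date
  FROM   table_2 t
)
SELECT t1.order_nr,
       NVL(SUM(CASE t2.wo_step WHEN 'S' THEN t2.next_step_date - t2.step_date END), 0) AS step_s,
       NVL(SUM(CASE t2.wo_step WHEN 'R' THEN t2.next_step_date - t2.step_date END), 0) AS step_r
FROM   table_1 t1, t2
WHERE  t2.order_nr = t1.order_nr
GROUP  BY t1.order_nr;
```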
-
Suggestion for a reporting database
I'm sorry to ask an open-ended question, but when I searched for this, I found too many comments that were not relevant.
I am running 11.2.0.2 Enterprise Edition on Solaris 10.
Our production database is struggling with too many reports, and my manager asked me to look at some database "reporting" options.
Our requirement is that the reporting database must be real-time and in sync with production.
Also, we don't want to impact production (e.g., through the use of MVs).
Options I'm considering, but open to others:
Logical standby - can we have it open for reporting while SQL apply is running? I think so, but I wonder what the disadvantages of this option are.
Physical standby - I believe I can only open the standby for queries while redo is still being applied if we buy Active Data Guard, which is unaffordable.
GoldenGate - again, cost prohibitive.
MVs would cause too much impact on the primary database.

974632 wrote:
Well, I might agree with you, but this decision was made above my level and before I joined this new team. So, the question now is which option makes the most sense for providing the reporting database.
What do you mean by "makes sense", other than that it must be fast, cheap, and reliable? In reality, you can only choose two of the three.
Apparently, they had already gone down the path of testing a logical standby, but felt it was problematic.
What were the problems?
Now that I have joined the team, they are asking me which option makes the most sense.
The answer depends on what the underlying problem is. Whether or not you were part of that investigation, you have to understand the problem.
They already use parallel queries,
Which could be contributing to the problem.
At this point, my goal is to answer their question: they've already decided to examine a reporting database option, and they want to know which option makes the best sense.
With the level of detail you have shared, there is not sufficient information to achieve this goal.
If there were a simple solution that is fast, cheap, reliable, consumes no resources, and is easy to implement, a simple Google search would find it and this debate would be unnecessary.
Maybe you are looking for
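For reference, the physical-standby trade-off raised earlier in this thread comes down to a few standard SQL*Plus commands. This is only a sketch; exact syntax varies by Oracle version and by whether the Data Guard Broker is managing the configuration:

```sql
-- Without an Active Data Guard licence: reporting and redo apply
-- are mutually exclusive. Open read-only, and the standby falls behind.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- stop redo apply
ALTER DATABASE OPEN READ ONLY;                           -- run reports here
-- ...afterwards, remount and resume apply:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- With Active Data Guard (11g+), the standby stays open read-only
-- while redo apply continues ("real-time query"):
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```

This is why the poster's requirement of "real-time and in sync" effectively forces either the Active Data Guard licence or one of the replication products already ruled out on cost.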
-
Nest Cam excluded from HomeKit?
I can only assume, because of the competitive relationship between Apple and Google, but I'm really disappointed to see that the family of nest products is excluded from Homekit. Stupid decision, because I do not know that I am not alone to already i
-
I erased my iPad 2 with the intention of passing it on or selling it. I then found a use for it and decided to keep it. Now it will not accept the old passcode that I used for years... 4 digits
-
2410-303 DVD/CD-R won't play CDs.
Hello world! I own a 2410-303 win XP with a matsu * a dvd/cd-r, which lately is not read any cd I of fire m using it and drag ' not Drop the cd software of I m using are intenso 700 MB 1 x-52 x which is a 3-4 mark I ve use. The problem seems to be in
-
Touchpad does not work and Equium automatically restarts after a shutdown
Something going wrong drasticaly yesterday. I wasn't doing anything elaborate, just work excel and I'm tired of the Tablet touch and 'connected' one infrared I smile sometimes use... it did not work too well, low battery level, I guess, so I returned
-
Digital control does not light up the SubVI table rows
Hello people of the forum NOR! I have a small question, I think you could help me: I tried to find a way that would allow my program read data, acquired from a millimeter and use a table - with coefficients - multiply the data obtained. When it was w